Is it possible to turn off sizeAttenuation for an Object3D in THREE.js?
I ask because I'm drawing a trajectory on a very large scale using arrow helper to indicate the direction of motion, and I want the heads of the arrows to stay oriented correctly (unlike using a pointcloud w/ sprite points), but also not shrink/grow as you zoom the camera.
Any other ideas? Can I achieve the same effect using a pointcloud/lines?
sizeAttenuation just rescales the sprite based on its position to give a sense of a "virtual size". Object3Ds have an actual size, so they scale with perspective.
Outside of switching to an OrthographicCamera (which might not be ideal for everything else you're doing), the only things that come to mind are
always position the arrow closer to the camera, so it doesn't change size
constantly adjust the scale of the arrow to make it larger/smaller in the scene, so it appears to be the same size on the screen (a rough sketch of this follows below).
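A minimal sketch of that second option, assuming an ArrowHelper called arrow and a PerspectiveCamera called camera; baseScale and referenceDistance are placeholder numbers you would tune for your scene:

```javascript
var baseScale = 1;            // world-space size that looks right at the reference distance
var referenceDistance = 100;  // distance at which baseScale looks right

function updateArrowScale() {
  // for a perspective camera, apparent size ~ worldSize / distance,
  // so scaling proportionally to distance keeps the on-screen size constant
  var distance = camera.position.distanceTo(arrow.position);
  var s = baseScale * (distance / referenceDistance);
  arrow.scale.set(s, s, s);
}

// call updateArrowScale() in your animation loop, before renderer.render(scene, camera)
```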
Related
I would like to make a simple javascript library that involves rotating, translating, and scaling the canvas. However, when I rotate the canvas, since the center of rotation is (0, 0), half of the content rotates off the canvas and is lost. I would like to know how to prevent that.
Translate to the center of the canvas first, rotate, and then translate back.
Note that inevitably (unless you also scale up the canvas) you will end up with some corners cut off.
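A minimal sketch of that translate-rotate-translate pattern on a 2D canvas context:

```javascript
// Rotate the canvas around its center instead of around (0, 0).
// ctx is assumed to be a CanvasRenderingContext2D, angle is in radians.
function drawRotated(ctx, canvas, angle) {
  ctx.save();
  ctx.translate(canvas.width / 2, canvas.height / 2);   // move origin to the center
  ctx.rotate(angle);                                     // rotate about the center
  ctx.translate(-canvas.width / 2, -canvas.height / 2);  // move origin back
  // ... draw your content here as usual ...
  ctx.restore();
}
```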
I'm trying to achieve a permanent head-coupled perspective without using the full headtrackr library. My head won't be moving, but it won't be directly in front of the screen.
I have a little demo that you can download and run with python -m SimpleHTTPServer 8000
The code is adapted mainly from this headtrackr example and part of the headtrackr source.
My expectations are based off this diagram:
In the third image, I imagine slightly swiveling my monitor counter-clockwise from above. This should be equivalent to reducing Z and making X less than zero. I expect my monitor to show the middle image, but instead I see something like this:
I think the "window" I'm looking through is the XY-plane, but shouldn't it stretch like the middle orange rectangle in the first diagram? Here's another window that stays fixed: http://kode80.com/2012/04/09/holotoy-perspective-in-webgl/ to see what I mean by "window."
Are off-axis perspective and head-tracking unrelated? How do I get a convincing illusion of off-axis perspective in THREE.js?
I think you can accomplish what you're aiming for with setViewOffset. Your diagram looks a little off to me. Perhaps it's because there is no cube in the off-axis projection, but I think the point is that the frustum should remain framed on a fixed point without rotating the camera, which would introduce perspective distortion.
To accomplish this with setViewOffset, I would set the fullWidth and fullHeight to some extra-large size. The view offset will then be a window into that oversized view. As the user moves, that window will be offset in the opposite direction of the viewer.
http://threejs.org/docs/#Reference/Cameras/PerspectiveCamera
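A hedged sketch of that idea; headX and headY are hypothetical head-tracking offsets in pixels, and the factor of 3 for the oversized view is arbitrary:

```javascript
var width  = window.innerWidth;
var height = window.innerHeight;
var fullWidth  = width * 3;   // oversized virtual view
var fullHeight = height * 3;

function updateViewOffset(headX, headY) {
  // window is centered when the head is centered,
  // and slides opposite to the viewer's movement
  var x = (fullWidth  - width)  / 2 - headX;
  var y = (fullHeight - height) / 2 - headY;
  camera.setViewOffset(fullWidth, fullHeight, x, y, width, height);
}
```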
3D Projection mapping can be broken into a corner-pinning texture step and a perspective adjustment step. I think they're somewhat unrelated.
In Three.js you can make a quad surface (composed of two Face3 triangles) and map a texture onto that surface, then move the corners of the quad in XY (not Z). I don't think this step introduces perspective artifacts beyond what's necessary for minor deformations of a quad texture; I'm talking about banding issues and nearest-neighbour artifacts, not 3D perspective errors. The size of these artifacts depends on how the projector is shining on the object. If the projector is nicely perpendicular to the surface, very little corner-mapping is necessary. If you're using a monitor instead of a projector, then no corner-pinning is necessary. (A rough sketch of the quad setup follows below.)
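A minimal sketch of that quad setup, using the legacy Geometry/Face3 API (removed in recent three.js releases, which use BufferGeometry instead); the texture file name is just a placeholder:

```javascript
// Textured quad built from two Face3 triangles; its corners can be nudged in XY for corner-pinning.
var geometry = new THREE.Geometry();
geometry.vertices.push(
  new THREE.Vector3(-1, -1, 0),  // bottom-left
  new THREE.Vector3( 1, -1, 0),  // bottom-right
  new THREE.Vector3( 1,  1, 0),  // top-right
  new THREE.Vector3(-1,  1, 0)   // top-left
);
geometry.faces.push(new THREE.Face3(0, 1, 2), new THREE.Face3(0, 2, 3));
geometry.faceVertexUvs[0].push(
  [new THREE.Vector2(0, 0), new THREE.Vector2(1, 0), new THREE.Vector2(1, 1)],
  [new THREE.Vector2(0, 0), new THREE.Vector2(1, 1), new THREE.Vector2(0, 1)]
);

var texture = new THREE.TextureLoader().load('content.png'); // placeholder image
var quad = new THREE.Mesh(geometry,
  new THREE.MeshBasicMaterial({ map: texture, side: THREE.DoubleSide }));
scene.add(quad);

// corner-pinning: move a single corner in XY only
geometry.vertices[2].x += 0.1;
geometry.verticesNeedUpdate = true;
```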
Next is the perspective adjustment step. You have to adjust the content of the texture based on where the user is in relation to the real-life surface. I believe you can do this with an XYZ distance from the physical viewer to the center of the screen surface, a scaling factor between pixels and real-life size, and the pixel dimensions of the surface.
In my demo, the blueish faces of my cubes point in the positive Z direction. When I rotate my monitor in the real-life world, they continue to point in the positive Z direction of their monitor world. The diagram that I posted is a little misleading because the orange box in the middle picture is locally rotated to compensate for the rotating of the real-life monitor world. That orange box's front face is no longer pointing exactly in the positive Z direction of its monitor world.
Outside of Three.js, in Processing there are some techniques for projection mapping.
This one may be the simplest, although I haven't tried it myself: http://blogs.bl0rg.net/netzstaub/2008/08/24/wiimote-headtracking-in-processing/
SurfaceMapper for Processing has support for real-life curved surfaces (not just flat rectangles), but it only works for Processing before Processing 2.0.
If anyone develops a SurfaceMapper library for Three.js that would be really cool! I'd love to design a virtual world, put cameras in the world, have each camera consider real-life viewer perspective, and then put those rendered textures on real-life displays.
You need to adjust the perspective (projection) matrix. It's built from -left, +right, -bottom, +top frustum extents. Changing these will produce the effect you are looking for.
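A hedged sketch of that idea; the numbers are arbitrary, and the method name depends on your three.js version (recent releases expose Matrix4.makePerspective(left, right, top, bottom, near, far), older ones an equivalent makeFrustum):

```javascript
// Build an asymmetric (off-axis) frustum by hand. left/right/top/bottom are
// measured at the near plane in camera space; making them asymmetric skews
// the projection without rotating the camera.
var near = 0.1, far = 1000;
var top = 0.05, bottom = -0.05;
var left = -0.12, right = 0.04;  // asymmetric: view window shifted sideways

camera.projectionMatrix.makePerspective(left, right, top, bottom, near, far);
// Note: calling camera.updateProjectionMatrix() afterwards would overwrite
// this custom matrix with the usual symmetric one.
```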
I have given up trying to orbit a camera around my scene in Three.js and have now decided to revert to doing what I used to do in XNA, just rotate everything except the camera.
The reason I have given up is because I cannot get the camera to orbit properly 360 degrees in all the axis, it starts inverting after going over the top or under the bottom. Using THREE.OrbitControls does not solve this because it merely restricts rotation in the problematic axis instead of fixing the problem.
My problem is now getting this other rotation approach working. What I have done is put all objects except the camera in another object, "rotSection", and I am now just rotating that object. This works, but rotation is always performed around the rotation object's local (0, 0, 0), which seems to always sit in one corner, whereas I would like to rotate around the centre of my world and not around the edge. I have tried to centre rotSection relative to the scene, but it still rotates around its corner and not its centre. Any idea how I can get rotation of an Object3D around a certain point?
The engines don’t move the ship at all. The ship stays where it is and
the engines move the universe around it.
Futurama
The camera in 3D technically never rotates; everything else is rotated and moved in order to bring it into the camera's local space. You don't have to do any tricks to achieve this: setting the matrices, setting up the shaders, and doing the correct transforms is the core of a 3D engine, and Three.js does this for you.
Perhaps you should look into quaternions? Specifically the axisAngle conversion to quats. THREE.OrbitControls won't do what you want.
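As a rough sketch of that idea, assuming the group from the question is called rotSection: an axis-angle quaternion can rotate any Object3D around an arbitrary world-space pivot rather than around its own origin.

```javascript
// Rotate an object around a pivot point by a given angle about a given axis.
function rotateAroundPoint(object, pivot, axis, angle) {
  var q = new THREE.Quaternion().setFromAxisAngle(axis.clone().normalize(), angle);

  // rotate the object's position about the pivot
  object.position.sub(pivot);
  object.position.applyQuaternion(q);
  object.position.add(pivot);

  // apply the same rotation to the object's orientation
  object.quaternion.premultiply(q);
}

// usage: spin the whole group a small step around the world Y axis,
// about the centre of the world at (0, 0, 0)
rotateAroundPoint(rotSection, new THREE.Vector3(0, 0, 0),
                  new THREE.Vector3(0, 1, 0), 0.01);
```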
Say I have a nice spaceship rendered in my scene. I then zoom out and the ship becomes smaller. This is all quite simple to achieve. Now at some point I want to replace the spaceship object with a simple triangle representing the ship. At this stage the icon for the ship should be rendered instead of the ship. Obviously the triangle should move based on the ship's movement and my camera movements. The triangle should not change orientation, so even if I rotate the camera or have the ship roll, the triangle should keep the same orientation. On a 2D canvas this would be very simple to do: I would just take the x and y components of the 3D object and draw my triangle at those coordinates. My problem is that I do not know how to draw on the WebGL canvas directly. Is it possible? If not, does anybody have any pointers to strategies to get this done? I would be happy if I could get nudged in the right direction :) Thanks in advance.
Update : What I eventually decided to do was to use the orthographic camera overlay approach suggested below and in a couple of other places.
Use a Sprite object for the icon. Then place the icon right between the spaceship and the camera, at a specific distance from the camera (using a Raycaster or just a calculation).
Use of an orthographic overlay as mentioned by @WestLangley is also possible. Since you would have to calculate the ship's position relative to your canvas in this case, maybe you could even create a pure HTML overlay using a DIV and place your icon in it as an IMG element.
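A hedged sketch of the orthographic overlay approach, assuming renderer, scene, camera and a mesh called ship already exist; the icon here is just a 3-segment CircleGeometry standing in for the triangle:

```javascript
var hudScene  = new THREE.Scene();
var hudCamera = new THREE.OrthographicCamera(
  -window.innerWidth / 2,  window.innerWidth / 2,
   window.innerHeight / 2, -window.innerHeight / 2, 0, 10);

// a flat triangle icon; any mesh or sprite will do
var icon = new THREE.Mesh(
  new THREE.CircleGeometry(10, 3),
  new THREE.MeshBasicMaterial({ color: 0x00ff00 }));
hudScene.add(icon);

function render() {
  // project the ship's world position into normalized device coordinates,
  // then map NDC (-1..1) to the orthographic camera's pixel space
  var p = ship.position.clone().project(camera);
  icon.position.set(p.x * window.innerWidth / 2,
                    p.y * window.innerHeight / 2, 0);

  renderer.autoClear = false;
  renderer.clear();
  renderer.render(scene, camera);        // 3D world
  renderer.clearDepth();
  renderer.render(hudScene, hudCamera);  // icon overlay drawn on top
}
```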
I've been working on a WebGL project that runs on top of the Three.js library. I am rendering several semi-transparent meshes, and I notice that depending on the angle you tilt the camera, a different object is on top.
To illustrate the problem, I made a quick demo using three semi-transparent cubes. When you rotate the image past perpendicular to the screen, the second half of the smallest cube "jumps" and is no longer visible. However, shouldn't it still be visible? I tried adjusting some of the blending equations, but that didn't seem to make a difference.
What I'm wondering is whether or not this is a bug in WebGL/Three, or something I can fix. Any insight would be much appreciated :)
Well, that's something they weren't able to solve when they invented all this hardware-accelerated graphics business, and it sounds like we'll have to deal with it for a long while.
The issue here is that graphics cards do not sort the polygons, nor the objects. The graphics card is "dumb": you tell it to draw an object and it will draw the pixels that represent it, and also, in another non-visible "image" called the z-buffer (or depth buffer), it will draw the pixels that represent the object, but instead of colour it stores the distance to the camera for each pixel. For any other objects that you draw afterwards, the graphics card will check the distance to the camera for each pixel, and if it's farther away, it won't draw it (unless you disable the check, that is).
This speeds things up a lot and gives you nice intersections between solid objects. But it doesn't play well with transparency. Say that you have two transparent objects and you want A to be drawn behind B. You'll need to tell the graphics card to draw A first and then B. This works fine as long as they're not intersecting. In order to draw two intersecting transparent objects, the graphics card would have to sort all the polygons, and as the graphics card doesn't do that, you'll have to do it yourself.
It's one of these things that you need to understand and specifically tweak for your case.
In three.js, if you set material.transparent = true we'll sort that object so it's drawn before (earlier than) other objects that are in front of it. But we can't really help you if you want to intersect them.
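For reference, a few of the per-case tweaks this kind of problem usually comes down to in three.js; none of them solve sorting in general, they just give you manual control (the mesh names below are placeholders for your own objects):

```javascript
material.transparent = true;   // puts the object in the sorted transparent pass
material.depthWrite  = false;  // stops transparent faces hiding each other via the z-buffer

// force an explicit draw order when automatic sorting guesses wrong
// (higher renderOrder is drawn later, i.e. on top)
innerCube.renderOrder = 1;
outerCube.renderOrder = 2;
```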