Setting up a 2D view in Three.js - javascript

I'm new to three.js and am trying to set up what amounts to a 2D visualization (for an assortment of layered sprites) using these 3D tools. I'd like some guidance on the PerspectiveCamera() arguments and camera.position.set() arguments. I already have a nudge in the right direction from this answer to a related question, which said to set the z coordinate equal to 0 in camera.position.set(x,y,z).
Below is the snippet of code I'm modifying from one of stemkoski's three.js examples. The parts that are hanging me up for the moment are the values for VIEW_ANGLE, x, and y. Assuming I want a flat camera view of a plane the size of the screen, how should I assign these variables? I've tried a range of values, but it's hard to tell from the visualization what is happening. Thanks in advance!
var SCREEN_WIDTH = window.innerWidth, SCREEN_HEIGHT = window.innerHeight;
var VIEW_ANGLE = ?, ASPECT = SCREEN_WIDTH / SCREEN_HEIGHT, NEAR = 0.1, FAR = 20000;
camera = new THREE.PerspectiveCamera( VIEW_ANGLE, ASPECT, NEAR, FAR);
scene.add(camera);
var x = ?, y = ?, z = 0;
camera.position.set(x,y,z);
camera.lookAt(scene.position);
UPDATE - perspective vs orthographic camera:
Thanks @GuyGood, I realize I need to make a design choice between the perspective camera and the orthographic camera. I now see that the PerspectiveCamera(), even in this 2D context, would allow for things like parallax, whereas the OrthographicCamera() would render sizes literally (no diminishing with distance) no matter which layer a 2D element is on. I'm inclined to think I'll still use the PerspectiveCamera() for effects such as small amounts of parallax between the sprite layers (so I guess my project is not purely 2D!).
It seems then that the main thing is to make all the sprite layers parallel to the viewing plane, and that camera.position.set() defines the orthogonal viewing line to the center of the field of view. This must be basic for so many folks here; it is such a new world to me!
I think I still have a hard time wrapping my head around the roles of VIEW_ANGLE, x, and y, and the distance between the camera and the near and far viewing planes in a 2D visualization. With the orthographic camera this is pretty immaterial: you just need enough depth to include all the layers you want and a viewing plane that suits the scale of your sprites. However, with the perspective camera, the depth and field of view influence the amount of parallax; are there other considerations as well?
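To make this concrete for myself, here is a minimal sketch of the layered setup I have in mind (the layer names and pan deltas are hypothetical):

// Three sprite layers, all parallel to the viewing plane; with a
// PerspectiveCamera, layers at different z shift against each other
// as the camera pans, which is the parallax I'm after.
backgroundLayer.position.z = -200; // farthest: moves least on screen
midgroundLayer.position.z = -100;
foregroundLayer.position.z = 0;    // nearest: moves most

// Pan the camera in x/y (without rotating it) to keep the layers
// parallel to the view plane while producing the parallax offset.
camera.position.x += panDeltaX;
camera.position.y += panDeltaY;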
UPDATE 2 - Angle of view and other variables:
After a bit more tooling around in pursuit of how to think about the angle of view (field of view, or FOV) for the camera and the x, y, z arguments for the camera position, I came across this helpful video summary of the role of field of view in game design (a close enough analog to answer my questions for my 2D visualization). Along with this field-of-view tutorial for photographers that I also found helpful (if maybe a touch cheesy ;), these two resources gave me a sense of how to choose a field of view for my project and what happens with very wide or very narrow fields of view (measured in degrees out of 360). The best results are a mix of what feels like a natural field of vision for a human, depending on the distance of the screen or projection from their face, and are also keenly related to the relative scale of things in the foreground versus the background: wider fields of view make the background look smaller, while narrower fields of view magnify it (similar to, though not as pronounced as, the effect of an orthographic camera). I hope you find this as helpful as I did!
UPDATE 3 - Further reading
For anyone hungry for more detail about camera specifications across a range of uses, you may find chapter 13 of Computer Graphics: Principles and Practice as useful as I have for addressing the questions above, and much more.
UPDATE 4 - Considerations for the z dimension in the Orthographic camera
As I've continued my project, I decided to use the orthographic camera so that I could increment the z dimensions of my sprites to avoid z-fighting without having them appear to recede progressively into the distance. By contrast, if I want to make it appear as though a sprite is receding into the distance, I can simply adjust its size. However, today I ran across a silly mistake that I wanted to point out to save others from the same trouble. Although the orthographic camera does not shrink objects as they become more distant, take care that there is still a far frustum plane past which objects will be culled from view. Today I accidentally incremented the z values of several of my objects past that plane and could not figure out why they were not showing up on screen. It can be easy to forget this aspect of the z coordinate while working with the orthographic camera.
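To illustrate the gotcha, a minimal sketch (the dimensions and sprite names are arbitrary):

// An orthographic camera keeps apparent sizes constant at any depth,
// but objects are still culled once they leave the near/far range.
var camera = new THREE.OrthographicCamera(
    SCREEN_WIDTH / -2, SCREEN_WIDTH / 2,   // left, right
    SCREEN_HEIGHT / 2, SCREEN_HEIGHT / -2, // top, bottom
    1, 1000                                // near, far
);
camera.position.z = 500;

// Small z offsets between sprite layers avoid z-fighting without
// changing apparent size; just don't increment them past the far plane.
spriteA.position.z = 0;
spriteB.position.z = 1;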

What is your goal? If you do not need perspective distortion, use the orthographic camera.
Also just check the documentation:
https://threejs.org/docs/#api/en/cameras/PerspectiveCamera
View angle/field of view is self-explanatory; if you don't know what it is, read up on it.
http://www.incgamers.com/wp-content/uploads/2013/05/6a0120a85dcdae970b0120a86d9495970b.png
Concerning the x, y and z values: this depends on the size of your plane and its distance to the camera. You can either set the camera position, or set the plane's position and keep the camera at (0,0,0).
Just imagine a plane in 3D space. You can calculate the position of the camera depending on the size of your plane, or just go by trial and error...
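For example, here is a minimal sketch (reusing the variable names from the question) of calculating that distance instead of guessing: the vertical field of view subtends the plane's height, so a screen-sized plane exactly fills the view at z = (height / 2) / tan(fov / 2).

var VIEW_ANGLE = 45; // degrees; any value works, the distance compensates
var halfFovRad = (VIEW_ANGLE / 2) * Math.PI / 180;
// Distance at which a plane of SCREEN_HEIGHT world units fills the view:
var z = (SCREEN_HEIGHT / 2) / Math.tan(halfFovRad);

camera = new THREE.PerspectiveCamera(VIEW_ANGLE, ASPECT, NEAR, FAR);
camera.position.set(0, 0, z); // centered on the plane, looking down -z
camera.lookAt(scene.position);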
For using the orthographic camera, see this post:
Three.js - Orthographic camera

Related

Projection math in software 3d engine

I'm working on writing a software 3d engine in Javascript, rendering to 2d canvas. I'm totally stuck on an issue related to the projection from 3d world to 2d screen coordinates.
So far, I have:
A camera Projection and View matrix.
Transformed Model vertices (v x ModelViewProj).
Take the result and divide both the x and y by the z (perspective) coordinate, making viewport coords.
Scale the resultant 2d vector by the viewport size.
I'm drawing a plane, and everything works when all 4 vertices (2 tris) are on screen. When I fly my camera over the plane, at some point the off-screen vertices transform to the top of the screen. That point seems to coincide with the perspective coordinate going above 1. I have an example of what I mean here (press forward to see it flip):
http://davidgoemans.com/phaser3d/
The code isn't minified, so web dev tools can inspect it easily, but I've also put the source here:
https://github.com/dgoemans/phaser3dtest/tree/fixcanvas
Thanks in advance!
Note: I'm using Phaser, not really to do anything at the moment, but my plan is to mix 2D and 3D. It shouldn't have any effect on the 3D math.
When projecting points which lie behind the virtual camera, the results are projected in front of it, mirrored: x/z is just the same as -x/-z.
In rendering pipelines, this problem is addressed by clipping algorithms which intersect the primitives with clipping planes. In your case, a single clipping plane lying somewhere in front of your camera is enough (rendering pipelines usually use six clipping planes to describe a complete viewing volume). You must prevent the situation where a single primitive has at least one point in front of the camera and at least one other behind it (and you must discard primitives which lie completely behind the camera, but that is rather trivial). Clipping must be done before the perspective divide, which is also why the space the projection matrix transforms into is called clip space.
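For illustration, a minimal sketch (not your engine's actual code) of clipping one edge against a single near plane in camera space, before the divide:

var NEAR = 0.1;

// Clip the segment (a, b) against the plane z = -NEAR in camera space.
// Points with z > -NEAR are on the camera's side and must be cut away.
function clipToNearPlane(a, b) {
    var aIn = a.z <= -NEAR, bIn = b.z <= -NEAR;
    if (aIn && bIn) return [a, b]; // fully in front: keep as-is
    if (!aIn && !bIn) return null; // fully behind: discard
    // One endpoint is behind: interpolate to the intersection point.
    var t = (-NEAR - a.z) / (b.z - a.z);
    var hit = {
        x: a.x + t * (b.x - a.x),
        y: a.y + t * (b.y - a.y),
        z: -NEAR
    };
    return aIn ? [a, hit] : [hit, b];
}

Only after this clipping step is it safe to divide by the depth coordinate.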

Head-coupled/Off-axis perspective in Three.js

I'm trying to achieve a permanent head-coupled perspective without using the full headtrackr library. My head won't be moving, but it won't be directly in front of the screen.
I have a little demo that you can download and run with python -m SimpleHTTPServer 8000
The code is adapted mainly from this headtrackr example and part of the headtrackr source.
My expectations are based off this diagram:
In the third image, I imagine slightly swiveling my monitor counter-clockwise from above. This should be equivalent to reducing Z and making X less than zero. I expect my monitor to show the middle image, but instead I see something like this:
I think the "window" I'm looking through is the XY-plane, but shouldn't it stretch like the middle orange rectangle in the first diagram? Here's another window that stays fixed: http://kode80.com/2012/04/09/holotoy-perspective-in-webgl/ to see what I mean by "window."
Are off-axis perspective and head-tracking unrelated? How do I get a convincing illusion of off-axis perspective in THREE.js?
I think you can accomplish what you're aiming for with setViewOffset. Your diagram looks a little off to me. Perhaps it's because there is no cube in the off-axis projection, but I think the point is that the frustum should remain framed on a fixed point without rotating the camera, which would introduce perspective distortion.
To accomplish this with setViewOffset, I would set fullWidth and fullHeight to some extra-large size. The view offset will then be a window into that oversized view. As the user moves, that window will be offset in the opposite direction of the viewer.
http://threejs.org/docs/#Reference/Cameras/PerspectiveCamera
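For illustration, a rough sketch of that idea (headX/headY are hypothetical normalized head offsets in [-1, 1], not something from your demo):

// Render an oversized virtual view and slide a screen-sized window
// through it, opposite to the viewer's movement.
var fullWidth = window.innerWidth * 3;
var fullHeight = window.innerHeight * 3;

function updateViewOffset(headX, headY) {
    // Start from the centered window, then offset against the head.
    var x = (fullWidth - window.innerWidth) / 2 - headX * window.innerWidth;
    var y = (fullHeight - window.innerHeight) / 2 - headY * window.innerHeight;
    camera.setViewOffset(fullWidth, fullHeight, x, y,
                         window.innerWidth, window.innerHeight);
}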
3D Projection mapping can be broken into a corner-pinning texture step and a perspective adjustment step. I think they're somewhat unrelated.
In Three.js you can make a quad surface (composed of two triangle Face3) and map a texture onto the surface. Then move the corners of the quad in XY (not Z). I don't think this step introduces perspective artifacts other than what's necessary for minor deformations in a quad texture. I think I'm talking about banding issues and nearest neighbor artifacts, not 3d perspective errors. The size of these artifacts depends on how the projector is shining on the object. If the projector is nicely perpendicular to the surface, very little corner-mapping is necessary. If you're using a monitor instead of a projector, then no corner-pinning is necessary.
Next is the perspective adjustment step. You have to adjust the content of the texture based on where the user is in relation to the real-life surface. I believe you can do this with an XYZ distance from the physical viewer to the center of the screen surface, a scaling factor between pixels and real-life size, and the pixel dimensions of the surface.
In my demo, the blueish faces of my cubes point in the positive Z direction. When I rotate my monitor in the real world, they continue to point in the positive Z direction of their monitor world. The diagram I posted is a little misleading, because the orange box in the middle picture is locally rotated to compensate for the rotation of the real-life monitor; that orange box's front face is no longer pointing exactly in the positive Z direction of its monitor world.
Outside of Three.js, in Processing there are some techniques for projection mapping.
This one may be the simplest, although I haven't tried it myself: http://blogs.bl0rg.net/netzstaub/2008/08/24/wiimote-headtracking-in-processing/
SurfaceMapper for Processing has support for real-life curved surfaces (not just flat rectangles), but it only works with Processing versions before 2.0.
If anyone develops a SurfaceMapper library for Three.js that would be really cool! I'd love to design a virtual world, put cameras in the world, have each camera consider real-life viewer perspective, and then put those rendered textures on real-life displays.
You need to adjust the perspective matrix. It's built with -left, +right, -bottom, +top; changing these will produce the effect you are looking for.
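For example, a rough sketch of skewing the frustum that way (this assumes a Three.js version where Matrix4.makePerspective takes left/right/top/bottom/near/far; headX/headY are hypothetical viewer offsets in near-plane units, and the signs may need flipping for your setup):

function offAxisProjection(camera, headX, headY, near, far) {
    // Half extents of the symmetric near plane for this fov/aspect.
    var halfH = near * Math.tan(camera.fov * Math.PI / 360);
    var halfW = halfH * camera.aspect;
    // Shift left/right and bottom/top instead of rotating the camera.
    camera.projectionMatrix.makePerspective(
        -halfW - headX, halfW - headX, // left, right
        halfH - headY, -halfH - headY, // top, bottom
        near, far
    );
    // Don't call camera.updateProjectionMatrix() afterwards; it would
    // rebuild the symmetric frustum and overwrite this.
}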

Three.js - How to check if an object is visible to the camera

I'm having a hard time figuring out the best way to check whether an Object3D is visible to the camera.
I have a sphere in the middle of the screen, with some cubes added at random points on its surface. What I need is a way to check which cubes are visible (on the front half of the sphere) and which are invisible (on the back half of the sphere) from the camera's point of view.
What I have found so far seems to be the right direction, but I must be missing something with the THREE.Raycaster class.
Here is a fiddle of the code that I'm using: jsfiddle. I have tried to make it as clear as possible.
This part of the fiddle might contain the buggy code:
var raycaster = new THREE.Raycaster();
var origin = camera.position, direction, intersects, rayGeometry = new THREE.Geometry(), g;

pointGroup.children.forEach(function(pointMesh) {
    direction = pointMesh.position.clone();

    // I THINK THIS CALCULATION MIGHT BE WRONG - BUT DON'T KNOW HOW TO CORRECT IT
    raycaster.set(origin, direction.sub(origin).normalize());

    // if the pointMesh's position is on the back half of the globe, the ray
    // should intersect with the globe first and hit the point as the second
    // target, because the cube is hidden behind the bigger sphere object
    intersects = raycaster.intersectObject(pointMesh);

    // this is always empty - should contain objects located on the back of the sphere...
    console.log(intersects);
});
Frustum culling is not working for this, as outlined in this Stack Overflow question: post1
Also, post2 and post3 explain the topic really well, but not quite for this situation.
Thank you for your help!
You want to look at occlusion culling techniques. Frustum culling works fine and is not what you are describing: frustum culling just checks if an object (or its bounding box) is inside the camera pyramid. You perform occlusion culling in addition to frustum culling, especially when you want to eliminate objects which are occluded by other objects inside the view frustum. But it is not an easy task.
I just worked through a similar problem where I was trying to detect when a point in world space passed out of view of the camera and behind specific objects in the scene. I created a jsfiddle for it (see below). When the red "target" passes behind any of the three "walls", a blue line is drawn from the "target" to the camera. I hope this helps.
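In the same spirit, here is a minimal sketch of the raycasting check (assuming the globe, camera and pointGroup objects from the question's fiddle; this is an illustration, not the fiddle mentioned above): test the ray against both the globe and the cube, and see which is hit first.

var raycaster = new THREE.Raycaster();

pointGroup.children.forEach(function(pointMesh) {
    var direction = pointMesh.position.clone().sub(camera.position).normalize();
    raycaster.set(camera.position, direction);

    // Intersect the globe and the cube together so distances can be compared.
    var intersects = raycaster.intersectObjects([globe, pointMesh]);

    // If the nearest hit is the globe, the cube is behind it.
    var occluded = intersects.length > 0 && intersects[0].object === globe;
    pointMesh.visible = !occluded;
});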

Rendering box2d rectangle as a 3d rectangle in three.js

When rendering box2d bodies with the canvas, there's the big problem that box2d only gives us the centre of the body and its angle (while the canvas draws shapes like rectangles from the top-left corner). By looking at some tutorials from Seth Land I've bypassed this problem, and it now works fine (I would never have been able to figure out something like this by myself):
g.ctx.save();
g.ctx.translate(b.GetPosition().x*SCALE, b.GetPosition().y*SCALE);
g.ctx.rotate(b.GetAngle());
g.ctx.translate(-(b.GetPosition().x)*SCALE, -(b.GetPosition().y)*SCALE);
g.ctx.strokeRect(b.GetPosition().x*SCALE-30,b.GetPosition().y*SCALE-5,60,10);
g.ctx.restore();
where b is the body, SCALE is the canvas-pixels to box2d-meters conversion factor, and 60 and 10 are the dimensions of the rectangle (so 30 and 5 are the half-dimensions).
The problem
Now I would like to render this with three.js. The first objection is that three.js works in 3D while box2d doesn't. No problem there: I just want to build a 3D rectangle on top of the 2D one. The two in-plane dimensions come from box2d, while the last dimension can be totally arbitrary.
My work doesn't require 3D physics; 2D is sufficient, but I would like to give these 2D objects a thickness in the third dimension. How can I do that? So far I've been trying this:
// look from above
camera.position.set(0, 1000, 0);
camera.rotation.set(-1.5, 0, 0);

geometry = new THREE.CubeGeometry(200, 60, 10, 1, 1, 1);

// get b as box2d rectangle
loop() {
    mesh.rotation.y = -b.GetAngle();
    mesh.position.x = b.GetPosition().x * SCALE;
    mesh.position.y = -b.GetPosition().y * SCALE;
}
Unfortunately, this doesn't look like the box2d debug draw at all. -1.5 is not the right value to turn the camera by a perfect 90-degree angle (does anyone know the exact value?), and, even worse, the three.js rectangle doesn't follow the box2d movements, showing almost the same problems I had with the standard canvas before using context translate and rotate.
I hope anyone have time to explain a possible solution. Thanks a lot in advance! :)
EDIT: it looks like someone did it already (requires WebGL on Chrome):
http://game.2x.io/
http://serv1.aelag.com:8082/threeBox
The second one uses just spheres, so there's no problem with the rotation mapping between box2d and three.js. The first one also includes cubes and rectangles: that's what I'm trying to do.
What type of camera are you using?
For getting a "2D" effect you need to use an orthographic camera; otherwise you will get perspective-projected stuff, and things won't match what you're seeing in 2D.

Relative position of latlon points

I'm currently involved in a project where it'd be useful to display geographical points as overlays on top of a VIDEO element (size 512x288).
The VIDEO element contains a live broadcast, and I also have a real time feed of bearing, latitude, longitude and altitude that's fed into the site as JavaScript variables.
I also have an array of POI's (Points of Interest) that's included in the site. The POI's are in the following format:
var points = [['Landmark name', 'latitude', 'longitude'], […]];
Every five seconds or so, I want to loop through the array of POI's, and check to see if any of them are within the video's current viewport - and if true, overlay them on top of the VIDEO element.
Could someone point me in the right direction as to what I should be looking at? I assume I have to map the points to a 2D plane using e.g. Mercator projection.
But I'm a bit lost when it comes to mapping the POI's relative pixel position to the video's.
Looking forward to getting some tips!
Having done this before: the most critical element is to determine the field of view of the camera accurately (at least to the hundredth of a degree) in either the vertical or horizontal direction. Then use the aspect ratio of the video (512/288 = 1.78) to determine the other angle (if needed) using the atan formula sketched below. Do not make the common mistake of multiplying the vertical field of view by the aspect ratio to get the horizontal field of view: field of view is angular, aspect ratio is linear. Think of it in terms of setting up a camera, for example in OpenGL, except your camera is in the real world. Instead of picking the field of view and camera orientation, you are going to have to measure them.
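For example, a small sketch of that atan relation (the 30-degree vertical FOV is just an illustrative value):

// Horizontal FOV from vertical FOV and aspect ratio; field of view is
// angular, so tan/atan is required rather than a linear scale.
function horizontalFov(verticalFovDeg, aspect) {
    var vRad = verticalFovDeg * Math.PI / 180;
    var hRad = 2 * Math.atan(aspect * Math.tan(vRad / 2));
    return hRad * 180 / Math.PI;
}

// e.g. a 30-degree vertical FOV on the 512x288 video:
console.log(horizontalFov(30, 512 / 288)); // ~50.9 degrees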
You will need to know the attitude of the camera (pan/tilt or pitch/roll/yaw) in order to overlay graphics properly.
You won't need a Mercator projection. I am assuming that the field of view of the camera is relatively small (i.e. around 40 degrees horizontally), so you can usually treat the projected surface as a rectangle (technically, it is a small patch of a sphere).
