Rendering box2d rectangle as a 3d rectangle in three.js - javascript

When rendering Box2D bodies with the canvas, there's the big problem that Box2D only gives us the centre of the body and its angle (while the canvas draws shapes like rectangles from the top-left corner). By looking at some tutorials from Seth Land I've bypassed this problem and it now works fine (I would never have been able to figure out something like this by myself):
g.ctx.save();
// move the origin to the body's centre, rotate around it, then move the origin back,
// so the rectangle can be drawn as if it were unrotated
g.ctx.translate(b.GetPosition().x*SCALE, b.GetPosition().y*SCALE);
g.ctx.rotate(b.GetAngle());
g.ctx.translate(-(b.GetPosition().x)*SCALE, -(b.GetPosition().y)*SCALE);
// draw from the top-left corner: the centre minus the half dimensions (30 and 5)
g.ctx.strokeRect(b.GetPosition().x*SCALE-30, b.GetPosition().y*SCALE-5, 60, 10);
g.ctx.restore();
where b is the body, SCALE is the canvas-pixels/Box2D-metres conversion factor, and 60 and 10 are the dimensions of the rectangle (so 30 and 5 are the half dimensions).
The problem
Now, I would like to render this in three.js. The first objection is that three.js works in 3D while Box2D doesn't. No problem there: I just want to build a 3D rectangle on top of the 2D one. The dimensions on the two 2D axes must come from Box2D, while the last dimension can be totally arbitrary.
My work doesn't require 3D physics; 2D will be sufficient, but I would like to give these 2D objects a thickness in the third dimension. How can I do that? So far I've been trying like this:
// look from above
camera.position.set(0,1000,0);
camera.rotation.set(-1.5,0,0);
geometry = new THREE.CubeGeometry( 200, 60, 10, 1, 1, 1 );
// get b as box2d rectangle
loop(){
    mesh.rotation.y = -b.GetAngle();
    mesh.position.x = b.GetPosition().x*SCALE;
    mesh.position.y = -b.GetPosition().y*SCALE;
}
Unfortunately, this doesn't look like the Box2D debug draw at all. -1.5 is not the right value to rotate the camera by a perfect 90 degrees (does anyone know the exact value?) and, even worse, the three.js rectangle doesn't follow the Box2D movements, showing almost the same problems I had with the standard canvas before using context translate and context rotate.
I hope anyone have time to explain a possible solution. Thanks a lot in advance! :)
EDIT: it looks like someone did it already
(requires WebGL on Chrome): http://game.2x.io/
http://serv1.aelag.com:8082/threeBox
The second one uses just spheres, so there's no problem with the rotation mapping between Box2D and three.js. The first one also includes cubes and rectangles: that's what I'm trying to do.

What type of camera are you using?
For getting a "2d" effect you need to use an orthographic camera; otherwise you will get perspective-projected stuff and things won't match what you're seeing in 2D.
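As a rough sketch of what that could look like with the names from the original question (the canvas size, the cube thickness of 40 and the sign conventions here are assumptions, so treat it as a starting point rather than a drop-in fix):
// Orthographic camera covering the same pixel area as the 2D canvas,
// looking down the Z axis so the XY plane matches the Box2D world.
var width = 800, height = 600; // assumed canvas size
camera = new THREE.OrthographicCamera(
    0, width,    // left, right
    0, -height,  // top, bottom (negative so Box2D's y-down maps to the negated mesh y below)
    1, 1000 );   // near, far
camera.position.z = 500;

geometry = new THREE.CubeGeometry( 60, 10, 40 ); // 60x10 from Box2D, 40 is the arbitrary thickness
mesh = new THREE.Mesh( geometry, new THREE.MeshNormalMaterial() );
scene.add( mesh );

// per frame: copy the body transform onto the mesh
function update() {
    mesh.position.x = b.GetPosition().x * SCALE;
    mesh.position.y = -b.GetPosition().y * SCALE;
    mesh.rotation.z = -b.GetAngle(); // rotate around Z (the viewing axis), not Y; flip the sign if it spins the wrong way
}
With an orthographic camera there is no need to tilt the camera by "almost 90 degrees" at all; you just look straight down the axis the 2D physics doesn't use.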

Related

Projection math in software 3d engine

I'm working on writing a software 3d engine in Javascript, rendering to 2d canvas. I'm totally stuck on an issue related to the projection from 3d world to 2d screen coordinates.
So far, I have (a rough sketch of these steps follows the list):
A camera Projection and View matrix.
Transformed Model vertices (v x ModelViewProj).
Take the result and divide both the x and y by the z (perspective) coordinate, making viewport coords.
Scale the resultant 2d vector by the viewport size.
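For reference, a bare-bones version of those steps (the names mvp, viewportWidth and viewportHeight are mine, not from the linked code) might look like:
// Minimal sketch of the described pipeline: transform, perspective divide, viewport scale.
// mvp is a 4x4 ModelViewProjection matrix stored row-major as a flat array of 16 numbers.
function project(v, mvp, viewportWidth, viewportHeight) {
    var x = mvp[0]*v.x  + mvp[1]*v.y  + mvp[2]*v.z  + mvp[3];
    var y = mvp[4]*v.x  + mvp[5]*v.y  + mvp[6]*v.z  + mvp[7];
    var w = mvp[12]*v.x + mvp[13]*v.y + mvp[14]*v.z + mvp[15];

    // the perspective divide -- dividing by w here; with a simple projection matrix this is
    // equivalent to dividing by the view-space z, and it is the step that misbehaves for
    // points behind the camera
    var sx = x / w;
    var sy = y / w;

    // scale normalized coordinates (-1..1) to pixel coordinates
    return {
        x: (sx * 0.5 + 0.5) * viewportWidth,
        y: (1 - (sy * 0.5 + 0.5)) * viewportHeight
    };
}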
I'm drawing a plane, and everything works when all 4 vertices (2 tris) are on screen. When I fly my camera over the plane, at some point the off-screen vertices flip to the top of the screen. That point seems to coincide with when the perspective coordinate goes above 1. I have an example of what I mean here - press forward to see it flip:
http://davidgoemans.com/phaser3d/
The code isn't minified, so web dev tools can inspect it easily, but I've also put the source here:
https://github.com/dgoemans/phaser3dtest/tree/fixcanvas
Thanks in advance!
Note: I'm using Phaser, not really to do anything at the moment, but my plan is to mix 2D and 3D. It shouldn't have any effect on the 3D math.
When projecting points which lie behind the virtual camera, the results will be projected in front of it, mirrored: x/z is just the same as -x/-z.
In rendering pipelines, this problem is addressed by clipping algorithms which intersect the primitives with clipping planes. In your case a single clipping plane, lying somewhere in front of your camera, is enough (rendering pipelines usually use 6 clipping planes to describe a complete viewing volume). You must prevent the situation where a single primitive has at least one point in front of the camera and at least another one behind it (and you must discard primitives which lie completely behind, but that is rather trivial). Clipping must be done before the perspective divide, which is also the reason why the space the projection matrix transforms into is called clip space.
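A minimal sketch of that single-plane clip for a line segment, done in view space before the divide (the nearZ value and the assumption that view space looks down -z are mine):
// Clip the segment (a, b) against the plane z = -nearZ, keeping only the part in front of the camera.
function clipToNearPlane(a, b, nearZ) {
    var da = -a.z - nearZ; // signed distance of a to the plane (positive = in front of the camera)
    var db = -b.z - nearZ;

    if (da < 0 && db < 0) return null;     // completely behind: discard
    if (da >= 0 && db >= 0) return [a, b]; // completely in front: keep unchanged

    // one endpoint in front, one behind: move the one behind onto the plane
    var t = da / (da - db);
    var hit = {
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
        z: a.z + (b.z - a.z) * t
    };
    return da >= 0 ? [a, hit] : [hit, b];
}
For triangles the idea is the same, except that clipping one vertex off a triangle can leave a quad, which then has to be split back into two triangles.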

Setting up a 2D view in Three.js

I'm new to three.js and am trying to set up what amounts to a 2D visualization (for an assortment of layered sprites) using these 3D tools. I'd like some guidance on the PerspectiveCamera() arguments and camera.position.set() arguments. I already have a nudge in the right direction from this answer to a related question, which said to set the z coordinate equal to 0 in camera.position.set(x,y,z).
Below is the snippet of code I'm modifying from one of stemkoski's three.js examples. The parts that are hanging me up for the moment are the values for VIEW_ANGLE, x, and y. Assuming I want a flat camera view on a plane the size of the screen, how should I assign these variables? I've tried a range of values but it's hard to tell from the visualization what is happening. Thanks in advance!
var SCREEN_WIDTH = window.innerWidth, SCREEN_HEIGHT = window.innerHeight;
var VIEW_ANGLE = ?, ASPECT = SCREEN_WIDTH / SCREEN_HEIGHT, NEAR = 0.1, FAR = 20000;
camera = new THREE.PerspectiveCamera( VIEW_ANGLE, ASPECT, NEAR, FAR);
scene.add(camera);
var x = ?, y = ?, z = 0;
camera.position.set(x,y,z);
camera.lookAt(scene.position);
UPDATE - perspective vs orthographic camera:
Thanks #GuyGood, I realize I need to make a design choice about the perspective camera versus the orthographic camera. I now see that the PerspectiveCamera(), even in this 2D context would allow for things like parallax, whereas OrthographicCamera() would allow for literal rendering of sizes (no diminishing with distance) no matter what layer my 2D element is on. I'm inclined to think I'll still use the PerspectiveCamera() for effects such as small amounts of parallax between the sprite layers (so I guess my project is not purely 2D!).
It seems then that the main thing is to make all the sprite layers parallel to the viewing plane, and that camera.position.set() defines the orthogonal viewing line to the center of the field of view. This must be basic for so many folks here; it is such a new world to me!
I think I still have a hard time wrapping my head around the role of VIEW_ANGLE, x, and y, and the distance between the camera and the far and near viewing planes in a 2D visualization. With the orthographic camera this is pretty immaterial - you just need enough depth to include all the layers you want and a viewing plane that suits the scale of your sprites. With the perspective camera, however, the depth and field of view influence the amount of parallax, but are there other considerations as well?
UPDATE 2 - Angle of view and other variables:
After a bit more tooling around in pursuit of how to think about Angle of View (Field of View, or FOV) for the camera and the x,y,z arguments for the camera position, I came across this helpful video summary of the role of Field of View in game design (a close enough analog to answer my questions for my 2D visualization). Along with this Field of View tutorial for photographers that I also found helpful (if maybe a touch cheesy ;), these two resources helped me get a sense of how to choose a Field of View for my project and what happens with either very wide or narrow Fields of View (which are measured in number of degrees out of 360). The best results are a mix of what feels like a natural field of vision for a human, depending on the distance of the screen or projection from their face, and is also keenly related to the relative scale of things in the foreground versus background in the visualization (wider fields of view make the background look smaller, narrower fields of view magnify the background - similar to, though not as pronounced as the effect of an orthographic camera). I hope you find this as helpful as I did!
UPDATE 3 - Further reading
For anyone zesting for more detail about camera specifications in a range of uses, you may find chapter 13 of Computer Graphics Principles and Practice as useful as I have for addressing my above questions and much more.
UPDATE 4 - Considerations for the z dimension in the Orthographic camera
As I've continued my project I decided to use the orthographic camera so that I could increment the z dimensions of my sprites in order to avoid z-fighting, yet not have them appear to recede progressively into the distance. By contrast, if I want to make it appear as though a sprite is receding into the distance, I can simply adjust its size. However, today I ran across a silly mistake that I wanted to point out to save others from the same trouble. Although the orthographic camera does not depict receding size as objects get more distant, take care that there is still a far frustum plane past which objects will be culled from view. Today I accidentally incremented the z values of several of my objects past that plane and could not figure out why they were not showing up on screen. It is easy to forget this factor of the z coordinate while working with the orthographic camera.
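To make that gotcha concrete, a small sketch (all the numbers and the sprite variable here are made up):
// Orthographic camera: object size never changes with z, but objects are still
// culled once they leave the near/far range.
var camera = new THREE.OrthographicCamera(
    -window.innerWidth / 2,  window.innerWidth / 2,   // left, right
     window.innerHeight / 2, -window.innerHeight / 2, // top, bottom
     1, 1000 );                                       // near, far
camera.position.z = 500;

// Looking down -z from z = 500, visible z values run roughly from 499 (near plane)
// down to -500 (far plane); a sprite pushed past that silently disappears.
sprite.position.z = -600; // culled: behind the far plane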
What is your goal? If you do not need perspective distortion, use the orthographic camera.
Also just check the documentation:
https://threejs.org/docs/#api/en/cameras/PerspectiveCamera
View Angle/Field of View is self-explanatory; if you don't know what it is, read up on it.
http://www.incgamers.com/wp-content/uploads/2013/05/6a0120a85dcdae970b0120a86d9495970b.png
Concerning the x, y and z values: well, this depends on the size of your plane and its distance to the camera. You can either set the camera position or the plane's position and keep the camera at (0,0,0).
Just imagine a plane in 3D space. You can calculate the position of the camera depending on the size of your plane, or just go by trial and error...
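If the perspective camera is kept (as in the update above), one way to pick the distance instead of trial and error is to solve for it from the field of view. A sketch, assuming the sprite plane is SCREEN_HEIGHT units tall and centred at the origin:
// For a camera looking straight at a plane of height SCREEN_HEIGHT, the distance at which
// the plane exactly fills the view follows from tan(fov/2) = (height/2) / distance.
var VIEW_ANGLE = 45; // degrees; pick whatever feels natural
var distance = (SCREEN_HEIGHT / 2) / Math.tan( (VIEW_ANGLE / 2) * Math.PI / 180 );

camera = new THREE.PerspectiveCamera( VIEW_ANGLE, SCREEN_WIDTH / SCREEN_HEIGHT, 0.1, 20000 );
camera.position.set( 0, 0, distance ); // x = 0, y = 0: look straight down the z axis at the centre
camera.lookAt( scene.position );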
For using the orthographic camera, see this post:
Three.js - Orthographic camera

Transparency Face-Jumping?

I've been working on a WebGL project that runs on top of the Three.js library. I am rendering several semi-transparent meshes, and I notice that depending on the angle you tilt the camera, a different object is on top.
To illustrate the problem, I made a quick demo using three semi-transparent cubes. When you rotate the image past perpendicular to the screen, the second half of the smallest cube "jumps" and is no longer visible. However, shouldn't it still be visible? I tried adjusting some of the blending equations, but that didn't seem to make a difference.
What I'm wondering is whether or not this is a bug in WebGL/Three, or something I can fix. Any insight would be much appreciated :)
Well, that's something they weren't able to solve when they invented all this hardware-accelerated graphics business, and it sounds like we'll have to deal with it for a long while.
The issue here is that graphics cards do not sort the polygons, nor the objects. The graphics card is "dumb": you tell it to draw an object and it will draw the pixels that represent it and also, in another non-visible "image" called the z-buffer (or depth buffer), it will draw the pixels that represent the object, but instead of colour it stores the distance to the camera for each pixel. For any other object that you draw afterwards, the graphics card checks the distance to the camera for each pixel, and if it's farther away, it won't draw it (unless you disable the check, that is).
This speeds things up a lot and gives you nice intersections between solid objects. But it doesn't play well with transparency. Say that you have 2 transparent objects and you want A to be drawn behind B. You'll need to tell the graphics card to draw A first and then B. This works fine as long as they're not intersecting. In order to draw 2 transparent objects intersecting, the graphics card would have to sort all the polygons, and as the graphics card doesn't do that, you'll have to do it yourself.
It's one of these things that you need to understand and specifically tweak for your case.
In three.js, if you set material.transparent = true we'll sort that object so it's drawn before (earlier than) the other objects that are in front of it. But we can't really help you if you want to intersect them.
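For completeness, the knobs involved look roughly like this (transparent, opacity and depthWrite are standard material properties; renderOrder needs a reasonably recent three.js, smallCube and bigCube stand in for your meshes, and whether any of this helps depends on whether the meshes actually intersect):
// Transparent materials get sorted back-to-front by three.js each frame.
var material = new THREE.MeshLambertMaterial({
    color: 0x2194ce,
    transparent: true,
    opacity: 0.5,
    depthWrite: false // often helps: transparent faces no longer block each other in the depth buffer
});

// If the automatic sort still picks the wrong order for your scene,
// an explicit per-object draw order can be forced:
smallCube.renderOrder = 2; // drawn last, on top
bigCube.renderOrder = 1;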

Background with three.js

Can anybody help me with three.js?
I need to draw a background, something like a THREE.Sprite, but it needs to be UNDER any 3D object that gets drawn later. I have a camera that can move only on the Z axis.
I tried to use:
cube mapping shader - PROBLEM: artefacts with shadow planes, the drawing is unstable
a THREE.Sprite that duplicates the camera movement - PROBLEM: artefacts with the shadow plane - it has edge highlighting OR it draws only the other sprites without the objects
HTML DOM background - PROBLEM: big and ugly aliasing in the models
What can I try next? Thanks!
You could maybe try drawing in several passes, i.e. making a first render of the background scene to a buffer, and then a second one over the first "buffer". Maybe using the buffer as background (painting it in 2D with an orthographic projection, and disabling depth buffer writes in that pass).
I haven't tried it myself with three.js, but that's how I'd do that with "traditional" OpenGL.
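In three.js terms, that multi-pass idea might look something like this (untested sketch; bgScene and bgCamera are hypothetical names for a separate background scene and its own camera):
// Render the background scene first, then clear only the depth buffer and render the
// main scene on top, so every 3D object ends up over the background.
renderer.autoClear = false; // we manage clearing ourselves

function render() {
    renderer.clear();                   // clear colour + depth once per frame
    renderer.render(bgScene, bgCamera); // background pass (e.g. a full-screen quad + orthographic camera)
    renderer.clearDepth();              // throw away the background's depth so it can never occlude the scene
    renderer.render(scene, camera);     // main pass
}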
If you want a "3d" background, i.e. something that will follow the rotation of your camera but not react to its movement (be infinitely far away), then the only way to do it is with a cubemap.
The other solution is an environment dome - a fully 3D object.
If you want a static background, then you should be able to do just an HTML background; I'm not sure why this would fail or what 'aliasing in models' you are talking about.

how do we create turtle geometry in three.js?

We are trying to create a simple programming environment that lets people generate 3D forms (it's inspired by the Scratch project). We'd like it to work in a "turtle geometry" fashion, where a creature (we call it the beetle, by analogy to the logo turtle) moves around the 3D space, and can leave objects along the way that take on its position and orientation.
We are currently using Three.js. While you can move and rotate objects, it's not clear how to create the effect we want, where translations and rotations "accumulate" and can be applied to new objects. We'd also like to store a stack of these matrices to push and pop.
Here's a specific example. The user of our system would create a program like this (this is pseudocode):
repeat 36 [
    move 10
    rotate 10
    draw cube
]
The idea is that the beetle would move around a circle as it executes this program, leaving a cube at each position.
Is this possible using Three.js? Do we need to switch to pure WebGL?
You might have a look at
http://u2d.com/turtle_js/index.html
It uses JavaScript syntax though.
And it does not give any feedback in case of errors. The turtle just does nothing.
The requirements sound fairly high-level. There is no reason you would need the low-level access of raw webgl. A higher-level library like three.js, or another, would do just fine.
As #Orbling pointed out, you'd need to figure out rotation for 3D; e.g.
rotateX 10
(rotate 10 degrees counterclockwise around x axis), or
turnLeft 10
(rotate 10 degrees counterclockwise around current up vector).
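To make the three.js side concrete, here is a rough sketch of the idea (the beetle object, the helper names and the geometry sizes are all mine): keep an invisible Object3D as the turtle, let its transforms accumulate, and copy its position and orientation onto each cube it leaves behind; a plain array of saved matrices works as the push/pop stack.
var beetle = new THREE.Object3D(); // the invisible "turtle"
scene.add(beetle);
var stack = []; // matrix stack for push/pop

function move(dist)  { beetle.translateZ(dist); }             // move along the beetle's own heading
function rotate(deg) { beetle.rotateY(deg * Math.PI / 180); } // turn in place

function push() {
    beetle.updateMatrix();
    stack.push(beetle.matrix.clone());
}
function pop() {
    var m = stack.pop();
    m.decompose(beetle.position, beetle.quaternion, beetle.scale);
}

function drawCube() {
    var cube = new THREE.Mesh(new THREE.CubeGeometry(5, 5, 5),
                              new THREE.MeshNormalMaterial());
    cube.position.copy(beetle.position);     // take on the beetle's position...
    cube.quaternion.copy(beetle.quaternion); // ...and orientation
    scene.add(cube);
}

// the pseudocode from the question:
for (var i = 0; i < 36; i++) { move(10); rotate(10); drawCube(); }
translateZ and rotateY are the built-in Object3D helpers in current three.js; with an older build the same effect can be had by multiplying the beetle's matrix with translation/rotation matrices yourself.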
