I am new to 3D games and Three.js. As shown in the image above, my mesh is at the browser's left boundary, and at that point its position is -1035 instead of 0.
It means mesh.position.x in Three.js is not the browser window's X pixel, so what is mesh.position.x in Three.js? Is there any calculation to convert the browser's inner width and height into the same units as mesh.position.x? How can I detect a collision with the browser boundary?
Three.js uses a 3D spatial coordinate system where the X and Z axes are horizontal and the Y axis is vertical. The origin (0, 0, 0) is just an arbitrary point in that space, and you can "look around" in the 3D world so the origin might not be on the screen. The coordinates of an object in Three.js-space are also, for all intents and purposes, arbitrary units.
You can convert between the 2D screen space and the 3D world space using projectors. Going from 3D to 2D has already been answered, and there is also a nice tutorial on the topic.
So, to find out whether a mesh is on the screen, you should project its 3D position to 2D screen space and then check whether that is outside the size of the window or canvas.
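For example, in a recent Three.js build (where Vector3.project(camera) is available; older versions used THREE.Projector instead), something along these lines should work. This is a rough sketch, not tested against your scene:

// Sketch: project a mesh's world position to 2D pixel coordinates
function toScreenPosition(mesh, camera, renderer) {
  var vector = new THREE.Vector3();
  mesh.updateMatrixWorld();
  vector.setFromMatrixPosition(mesh.matrixWorld);

  // Project into normalized device coordinates: x and y end up in [-1, 1]
  vector.project(camera);

  var canvas = renderer.domElement;
  return {
    x: (vector.x + 1) / 2 * canvas.width,
    y: (1 - vector.y) / 2 * canvas.height
  };
}

// Rough off-screen test against the canvas bounds
function isOffScreen(mesh, camera, renderer) {
  var p = toScreenPosition(mesh, camera, renderer);
  var canvas = renderer.domElement;
  return p.x < 0 || p.x > canvas.width || p.y < 0 || p.y > canvas.height;
}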
Related
I am working on rendering a 3D world in a 2D space. I found this article: https://en.m.wikipedia.org/wiki/3D_projection. In the perspective projection sub-category, it talks about "the viewer's position relative to the display surface", which is represented by e. Where is e? Is it where the viewer is looking (the center of the screen), the viewer's actual position relative to the screen (if so, how is this obtained), or something completely different?
The position of e depends on which coordinate system (space) we consider the camera to be in.
In world space e can have any coordinates; in view space or screen space it is always located at the origin.
But the thing is that in computer graphics there is no such thing as a camera (the same as the viewer, the eye, or e from your article). Transforming (rotating, translating or scaling) the camera actually means applying the same transformations to the whole scene, just with opposite values. For instance, to rotate the camera around the Y axis by alpha radians, you rotate the scene around the same axis by -alpha radians. The camera therefore always stays in the same position, which emulates a real-world camera, where the scene stays in place and the camera is the thing that moves.
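As a rough illustration of that idea (the variable names here are made up, only the principle matters): with Three.js matrices, "rotating the camera" by +alpha is implemented by applying the opposite rotation to every point in the scene.

// Sketch: the view transform is the inverse of the camera transform
var alpha = Math.PI / 6;

// Transform you would conceptually apply to the camera:
var cameraRotation = new THREE.Matrix4().makeRotationY(alpha);

// Transform the pipeline actually applies to every point in the scene:
var viewRotation = new THREE.Matrix4().makeRotationY(-alpha);

var worldPoint = new THREE.Vector3(1, 0, 0);
var pointInEyeSpace = worldPoint.clone().applyMatrix4(viewRotation);
// The camera itself never moves: it stays at the origin, looking down its own axis.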
I'm working on writing a software 3d engine in Javascript, rendering to 2d canvas. I'm totally stuck on an issue related to the projection from 3d world to 2d screen coordinates.
So far, I have:
A camera Projection and View matrix.
Transformed Model vertices (v x ModelViewProj).
Take the result and divide both the x and y by the z (perspective) coordinate, making viewport coords.
Scale the resultant 2d vector by the viewport size.
I'm drawing a plane, and everything works when all 4 vertices (2 tris) are on screen. When I fly my camera over the plane, at some point the off-screen vertices transform to the top of the screen. That point seems to coincide with when the perspective coordinate goes above 1. I have an example of what I mean here - press forward to see it flip:
http://davidgoemans.com/phaser3d/
The code isn't minified, so web dev tools can inspect it easily, but I've also put the source here:
https://github.com/dgoemans/phaser3dtest/tree/fixcanvas
Thanks in advance!
Note: I'm using Phaser, not really to do anything at the moment, but my plan is to mix 2D and 3D. It shouldn't have any effect on the 3D math.
When projecting points which lie behind the virtual camera, the results end up projected in front of it, mirrored: x/z is just the same as -x/-z.
In rendering pipelines, this problem is addressed by clipping algorithms, which intersect the primitives with clipping planes. In your case, a single clipping plane lying somewhere in front of the camera is enough (rendering pipelines usually use 6 clipping planes to describe a complete viewing volume). You must prevent the situation where a single primitive has at least one point in front of the camera and at least another one behind it (and you must discard primitives which lie completely behind, but that is rather trivial). Clipping must be done before the perspective divide, which is also why the space the projection matrix transforms into is called clip space.
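As a rough sketch (assuming your camera space has the camera looking down +z and your vertices are plain {x, y, z} objects; adjust to your engine), clipping a single edge against one near plane could look like this:

// Sketch: clip an edge against z = zNear before the perspective divide
var zNear = 0.1; // illustrative value

function clipEdgeToNearPlane(a, b) {
  var aIn = a.z > zNear;   // in front of the camera
  var bIn = b.z > zNear;

  if (aIn && bIn) return [a, b];   // fully in front: keep as-is
  if (!aIn && !bIn) return null;   // fully behind: discard

  // One endpoint behind: interpolate the intersection with the plane z = zNear
  var t = (zNear - a.z) / (b.z - a.z);
  var hit = {
    x: a.x + t * (b.x - a.x),
    y: a.y + t * (b.y - a.y),
    z: zNear
  };
  return aIn ? [a, hit] : [hit, b];
}

// Only after clipping do you divide x and y by z to get viewport coordinates.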
I have a scene with many Object3D objects containing blocks, cylinders and meshes. I also have a perspective camera and use TrackballControls. What I need is a way to set the camera position, without modifying the FOV or the camera angle, so that all objects in the scene are visible on screen. I have successfully done this with preset top, bottom, left and right views using the bounding box of the entire scene, but I need this to work from any camera angle, since the user can rotate the view into any position using the trackball controls.
If there were a way to get a 2d bounding rectangle based on the camera position, target and the list of objects, I think I could get it to work, but I am having trouble finding how to do this.
The functionality would work like this: With a scene containing many objects, the user rotates the camera with the mouse to some arbitrary position and then clicks a "Zoom All" button. Then, the camera is zoomed(moved) out or in(keeping FOV constant) to make all objects fit to the screen.
How about rotating and translating the whole scene, so that the camera lies on an axis looking at the origin?
From there, get the bounding box, calculate the needed distance for the camera, and translate the camera.
Then rotate and translate the whole scene back.
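Something along these lines might work as a starting point. This is an untested sketch: helper names such as getBoundingSphere and MathUtils.degToRad vary a bit between Three.js versions, and scene, camera and controls are assumed to be whatever you already have set up. It skips the rotate/translate-back step by working with a bounding sphere, which is view-independent:

// Sketch: "Zoom All" that keeps FOV and view direction fixed and only moves the camera
function zoomAll(scene, camera, controls) {
  var box = new THREE.Box3().setFromObject(scene);
  var center = box.getCenter(new THREE.Vector3());
  var sphere = box.getBoundingSphere(new THREE.Sphere());

  // Use the smaller of the vertical/horizontal FOV so the sphere fits both ways
  var vFov = THREE.MathUtils.degToRad(camera.fov);
  var hFov = 2 * Math.atan(Math.tan(vFov / 2) * camera.aspect);
  var fov = Math.min(vFov, hFov);

  // Distance at which a sphere of this radius just fills the view
  var distance = sphere.radius / Math.sin(fov / 2);

  // Keep the current viewing direction, just move the camera along it
  var direction = camera.position.clone().sub(controls.target).normalize();
  camera.position.copy(center).add(direction.multiplyScalar(distance));
  controls.target.copy(center);
}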
A couple of questions came up while studying interaction with Three.js:
1) Please explain what viewport coordinates are.
2) How do they differ from client coordinates?
3) How do we arrive at this formula?
var vpx = ( eltx / div.offsetWidth ) * 2 - 1;
var vpy = - ( elty / div.offsetHeight ) * 2 + 1;
// vp->viewport, eltx->client coords,div->The div where webGL renderer has been appended.
4) Why do we take the 3rd coordinate in the viewport system as 0.5 to 1 when constructing the vector?
I would be really grateful if you could explain these questions and the concept in detail, or suggest a book to read. 3D diagrams would be best for the 1st question.
Would be really, really thankful.
The 3D scene is rendered inside a canvas container. It can be any size and located anywhere in the layout of an HTML page. Very often WebGL examples and apps are made full screen, so that the canvas fills the whole screen and is effectively the same size as the HTML layout. But that's not always the case, and a WebGL scene can be embedded alongside other HTML content, much like an image.
Three.js generally doesn't care or know about how the 3D canvas relates to the coordinates and size of the whole HTML page. Inside the WebGL canvas a completely different coordinate system (totally independent of screen pixels) is used.
When you want to handle clicks inside the 3D canvas, unfortunately the browser only gives the pixel values counting from the top left corner of the HTML page (eltx and elty). So you need to first convert the HTML mouse coordinates to the WebGL coordinates (a vector usable in Three.js). In WebGL coordinates, 0,0 is the center of canvas, -1,-1 is top left, +1,+1 bottom right and so on, no matter what the size and position of the canvas is.
First you need to take the position of the canvas and subtract it from the mouse coordinates. Your formula does not take that into account; instead it assumes the WebGL container is located at the top left corner of the page (canvas position 0px 0px). That's fine, but if the container is moved, or the HTML body has CSS padding for example, it won't be accurate anymore.
Second, you take the mouse pixel position (adjusted in the previous step) and convert it to a relative position inside the canvas. That's what your formula does. If the mouse x position is 50px and your canvas is 100px wide, the formula gives (50/100) * 2 - 1 = 0, which is the screen-space center of the canvas viewport.
Now you have coordinates that make sense in the Three.js 3D scene.
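Putting both steps together, a sketch of a conversion that also accounts for the canvas position (using getBoundingClientRect) could look like this; renderer, raycaster and camera are assumed to be whatever you already have in your scene:

// Sketch: convert a mouse event to Three.js normalized device coordinates,
// accounting for where the canvas actually sits on the page.
function mouseToNDC(event, renderer) {
  var rect = renderer.domElement.getBoundingClientRect();

  // Step 1: mouse position relative to the canvas, in pixels
  var eltx = event.clientX - rect.left;
  var elty = event.clientY - rect.top;

  // Step 2: rescale to [-1, +1], with (0, 0) at the canvas center
  return {
    x: (eltx / rect.width) * 2 - 1,
    y: -(elty / rect.height) * 2 + 1
  };
}

// Typical use with a raycaster for picking:
// var ndc = mouseToNDC(event, renderer);
// raycaster.setFromCamera(new THREE.Vector2(ndc.x, ndc.y), camera);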
I'm currently involved in a project where it'd be useful to display geographical points as overlays on top of a VIDEO element (size 512x288).
The VIDEO element contains a live broadcast, and I also have a real time feed of bearing, latitude, longitude and altitude that's fed into the site as JavaScript variables.
I also have an array of POI's (Points of Interest) that's included in the site. The POI's are in the following format:
var points = [['Landmark name', 'latitude', 'longitude'], […]];
Every five seconds or so, I want to loop through the array of POI's, and check to see if any of them are within the video's current viewport - and if true, overlay them on top of the VIDEO element.
Could someone point me in the right direction as to what I should be looking at? I assume I have to map the points to a 2D plane using e.g. Mercator projection.
But I'm a bit lost when it comes to mapping the POI's relative pixel position to the video's.
Looking forward to getting some tips!
Having done this before, the most critical element is to determine the field of view of the camera accurately (at least to the hundredth of a degree) in either the vertical or horizontal direction. Then use the aspect ratio of the video (512/288 = 1.78) to determine the other angle (if needed) with an atan formula. Do not make the common mistake of multiplying the vertical field of view by the aspect ratio to get the horizontal field of view: field of view is angular, aspect ratio is linear. Think of it in terms of setting up a camera, for example in OpenGL, except your camera is in the real world. Instead of picking the field of view and camera orientation, you are going to have to measure them.
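For example (the 30-degree vertical field of view below is just an illustrative value, not something from your setup):

// Sketch: deriving the horizontal FOV from a measured vertical FOV and the aspect ratio
var aspect = 512 / 288;              // ~1.78
var vFov = 30 * Math.PI / 180;       // measured vertical field of view, in radians

// Correct: convert angle -> linear half-extent, scale, convert back to an angle
var hFov = 2 * Math.atan(Math.tan(vFov / 2) * aspect);
console.log(hFov * 180 / Math.PI);   // ~50.9 degrees

// Wrong (the common mistake): 30 * 1.78 = 53.3 degrees, off by a couple of degrees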
You will need to know the attitude of the camera (pan/tilt or pitch/roll/yaw) in order to overlay graphics properly.
You won't need a Mercator projection. I am assuming that the field of view of the camera is relatively small (i.e. around 40 deg horizontal), so you can usually assume the projected surface is a rectangle (technically, it is a small patch of a sphere).
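As a very rough sketch of the last step, ignoring camera roll and assuming you can compute the bearing and elevation angle from the camera to a POI (the bearingTo and elevationTo helpers below are placeholders for that geodetic math, and cam holds your live bearing/pitch/FOV values in radians), the mapping to video pixels could look like this:

// Sketch: place a POI over the 512x288 video using angular offsets from the view center
var width = 512, height = 288;

function poiToPixel(poi, cam) {
  // Angular offsets of the POI from the center of view, in radians
  var dBearing = normalizeAngle(bearingTo(cam, poi) - cam.bearing);
  var dPitch = elevationTo(cam, poi) - cam.pitch;

  // Outside the field of view: nothing to draw
  if (Math.abs(dBearing) > cam.hFov / 2 || Math.abs(dPitch) > cam.vFov / 2) return null;

  // Pinhole model: angle -> position on the image plane via tan()
  var x = width / 2 + (Math.tan(dBearing) / Math.tan(cam.hFov / 2)) * (width / 2);
  var y = height / 2 - (Math.tan(dPitch) / Math.tan(cam.vFov / 2)) * (height / 2);
  return { x: x, y: y };
}

function normalizeAngle(a) {
  // Wrap to (-PI, PI] so the east/west wrap-around doesn't break the comparison
  while (a > Math.PI) a -= 2 * Math.PI;
  while (a <= -Math.PI) a += 2 * Math.PI;
  return a;
}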