I'm working on writing a software 3D engine in JavaScript, rendering to a 2D canvas. I'm totally stuck on an issue related to the projection from 3D world to 2D screen coordinates.
So far, I have:
A camera Projection and View matrix.
Transformed Model vertices (v x ModelViewProj).
Take the result and divide both the x and y by the z (perspective) coordinate, making viewport coords.
Scale the resultant 2d vector by the viewport size.
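In code, those steps look roughly like this (a simplified sketch, not my actual code; multiplyMatVec and the parameter names are placeholders, and the divide here uses the w component, which with a standard projection matrix carries the camera-space depth):

// Simplified sketch of the projection steps above.
function projectVertex(v, modelViewProj, viewportWidth, viewportHeight) {
    // 1. Transform into clip space: clip = MVP * v
    var clip = multiplyMatVec(modelViewProj, [v.x, v.y, v.z, 1]);

    // 2. Perspective divide
    var ndcX = clip[0] / clip[3];
    var ndcY = clip[1] / clip[3];

    // 3. Scale from NDC (-1..1) to viewport pixels (canvas y grows downward)
    return {
        x: (ndcX * 0.5 + 0.5) * viewportWidth,
        y: (1 - (ndcY * 0.5 + 0.5)) * viewportHeight
    };
}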
I'm drawing a plane, and everything works when all 4 vertices (2 tris) are on screen. When I fly my camera over the plane, at some point the off-screen vertices get transformed to the top of the screen. That point seems to coincide with the moment the perspective coordinate goes above 1. I have an example of what I mean here - press forward to see it flip:
http://davidgoemans.com/phaser3d/
Code isn't minified, so web dev tools can inspect it easily, but I've also put the source here:
https://github.com/dgoemans/phaser3dtest/tree/fixcanvas
Thanks in advance!
Note: I'm using Phaser, not really for anything at the moment, but my plan is to mix 2D and 3D. It shouldn't have any effect on the 3D math.
When projecting points which lie behind the virtual camera, the results get projected in front of it, mirrored: x/z is just the same as -x/-z.
In rendering pipelines, this problem is addressed by clipping algorithms which intersect the primitives with clipping planes. In your case, a single clipping plane which lies somewhere in front of your camera is enough (in rendering pipelines, one usually uses 6 clipping planes to describe a complete viewing volume). You must prevent the situation where a single primitive has at least one point in front of the camera and at least one behind it (and you must discard primitives which lie completely behind, but that is rather trivial). Clipping must be done before the perspective divide, which is also the reason why the space the projection matrix transforms to is called clip space.
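For a line segment, a single-plane clip looks roughly like this (a minimal sketch in camera space before the divide; the vertex format and names are assumptions, not taken from your code):

// Clip a segment [a, b] (camera-space points {x, y, z}) against the plane
// z = -near (camera looks down -z). Returns null, the original pair, or a
// pair with one endpoint replaced by the intersection point.
function clipSegmentToNearPlane(a, b, near) {
    var da = -a.z - near;   // signed distance: > 0 means "in front of the camera"
    var db = -b.z - near;

    if (da <= 0 && db <= 0) return null;    // completely behind: discard
    if (da > 0 && db > 0) return [a, b];    // completely in front: keep as-is

    // One endpoint behind: replace it with the intersection with the plane.
    var t = da / (da - db);                 // parameter where the segment crosses the plane
    var hit = {
        x: a.x + t * (b.x - a.x),
        y: a.y + t * (b.y - a.y),
        z: a.z + t * (b.z - a.z)
    };
    return da > 0 ? [a, hit] : [hit, b];
}

For triangles you do the same per edge (e.g. Sutherland-Hodgman polygon clipping), which can turn a clipped triangle into a quad.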
Related
I would like to do the opposite of this tutorial, which aligns HTML elements to objects in 3D space by projecting the 3D coordinates back onto the screen.
I am able to use unproject and some vector math to get pretty good results for screen locations, provided I'm using positions within the camera frustum. This no longer works when I want to use the position of HTML elements that are off-screen (i.e. places on the page the user would need to scroll to), due to the perspective camera distortion.
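The working on-screen case is roughly this (a sketch using Vector3.unproject from recent three.js versions, intersecting the camera ray with the z = 0 plane as an example target):

// Unproject an NDC point, build a ray from the camera, intersect with z = 0.
function screenToWorld(screenX, screenY, camera, width, height) {
    var ndc = new THREE.Vector3(
        (screenX / width) * 2 - 1,
        -(screenY / height) * 2 + 1,
        0.5
    );
    ndc.unproject(camera);                      // world-space point on the ray

    var dir = ndc.sub(camera.position).normalize();
    var t = -camera.position.z / dir.z;         // distance to the z = 0 plane
    return camera.position.clone().add(dir.multiplyScalar(t));
}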
How can I fix this?
Given a 2D coordinate (in particular, given an HTML element.offsetTop) and a perspective camera, how do I convert it into 3D space?
Thank you!
I have been trying to figure out a way of adding thickness to lines that can receive shadows and look like solid objects using Three.js, but the best result I've managed to get so far is just thicker lines that do not look like 3D geometry.
The application is for an online 3D printing platform, so I am trying to visualise the sliced geometry that is comprised of lines, similar to how other slicing software such as Cura handles this, as shown in the image below.
Generating mesh geometry from these lines would most probably be problematic, as in some cases there are thousands of lines in a single model, so it would be too heavy.
Any suggestions on how to achieve the desired result in either three.js or another javascript library would be greatly appreciated!
So the idea is to render a primitive covering your thick line's area, and in the fragment shader decide whether the fragment is inside or outside the thick line: if it is, compute the 3D position and normal and render it; if not, discard it.
The idea is to pass the polyline geometry to OpenGL for rendering as if it would produce just thin lines, and use shaders to do the rest.
Vertex shader
just passes the vertices through to the geometry shader.
Geometry shader
takes in 2 vertices (a line) and outputs 2 triangles (a quad) covering the line's bounding box (the line enlarged by half the line thickness). This is relatively easy: simply shift the line endpoints by a vector perpendicular to the line, with length equal to half the thickness. This must be done in a plane parallel to the camera's screen plane (using basis vectors extracted from the direct camera matrix). Do not forget to pass both vertices in world and camera coordinates.
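Note that WebGL has no geometry shader stage, so in a three.js/WebGL port you would do this expansion on the CPU (or in the vertex shader with per-vertex side attributes). A minimal sketch of just the offset math, in camera space (it only widens the quad; extending the ends for caps is omitted):

// Expand a line segment (p0, p1 in camera space {x, y, z}) into the four
// corners of a camera-facing quad, by shifting the endpoints by a vector
// perpendicular to the line and parallel to the screen plane.
function expandLineToQuad(p0, p1, halfThickness) {
    // Direction of the line projected onto the screen (XY) plane.
    var dx = p1.x - p0.x;
    var dy = p1.y - p0.y;
    var len = Math.sqrt(dx * dx + dy * dy) || 1;

    // Perpendicular in the screen plane, scaled to half the thickness.
    var ox = (-dy / len) * halfThickness;
    var oy = (dx / len) * halfThickness;

    // The four corners form two triangles (a triangle strip).
    return [
        { x: p0.x - ox, y: p0.y - oy, z: p0.z },
        { x: p0.x + ox, y: p0.y + oy, z: p0.z },
        { x: p1.x - ox, y: p1.y - oy, z: p1.z },
        { x: p1.x + ox, y: p1.y + oy, z: p1.z }
    ];
}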
Fragment shader
simply tests, from the world coordinates, whether the point is inside your thick line:
so simply compute the projection P' of P onto the line and then the distance between P and P'. That is called the perpendicular distance between a point and a line. It's doable by exploiting the dot product:
t = dot(P-P0, P1-P0) / dot(P1-P0, P1-P0)
P' = P0 + t*(P1-P0)
d = |P-P'|
From that you just compute the 3D coordinate (the depth of the fragment) and the normal, and either render with some directional light or discard.
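The same inside/outside test written out in JavaScript rather than GLSL (a sketch with the vector helpers inlined):

// Perpendicular distance of point P from the segment P0-P1; a fragment is
// "inside" the thick line when d <= halfThickness.
function insideThickLine(P, P0, P1, halfThickness) {
    var d10 = { x: P1.x - P0.x, y: P1.y - P0.y, z: P1.z - P0.z };
    var dP0 = { x: P.x - P0.x, y: P.y - P0.y, z: P.z - P0.z };

    function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // Normalized parameter of the projection P' = P0 + t * (P1 - P0).
    var t = dot(dP0, d10) / dot(d10, d10);
    t = Math.max(0, Math.min(1, t));    // clamp so the end caps are handled too

    var Pp = { x: P0.x + t * d10.x, y: P0.y + t * d10.y, z: P0.z + t * d10.z };
    var d = Math.sqrt(
        (P.x - Pp.x) * (P.x - Pp.x) +
        (P.y - Pp.y) * (P.y - Pp.y) +
        (P.z - Pp.z) * (P.z - Pp.z)
    );
    return d <= halfThickness;
}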
Take a look at a full example of this technique for 2D cubic curves:
rendering thick 2D Cubics in GLSL
I'm trying to achieve a permanent head-coupled perspective without using the full headtrackr library. My head won't be moving, but it won't be directly in front of the screen.
I have a little demo that you can download and run with python -m SimpleHTTPServer 8000
The code is adapted mainly from this headtrackr example and part of the headtrackr source.
My expectations are based on this diagram:
In the third image, I imagine slightly swiveling my monitor counter-clockwise from above. This should be equivalent to reducing Z and making X less than zero. I expect my monitor to show the middle image, but instead I see something like this:
I think the "window" I'm looking through is the XY-plane, but shouldn't it stretch like the middle orange rectangle in the first diagram? Here's another example of a window that stays fixed, to show what I mean by "window": http://kode80.com/2012/04/09/holotoy-perspective-in-webgl/
Are off-axis perspective and head-tracking unrelated? How do I get a convincing illusion of off-axis perspective in THREE.js?
I think you can accomplish what you're aiming for with setViewOffset. Your diagram looks a little off to me. Perhaps it's because there is no cube in the off-axis projection, but I think the point is that the frustum should remain framed on a fixed point without rotating the camera, which would introduce perspective distortion.
To accomplish this with setViewOffset I would set the fullWidth and fullHeight to some extra-large size. The view offset will then be a window in that oversized view. As the user moves, that window will be offset in the opposite direction of the viewer.
http://threejs.org/docs/#Reference/Cameras/PerspectiveCamera
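Something like this (a sketch; the factor of 4 and the head-position-to-offset mapping are placeholders to tune, width and height are your canvas size, and the signs may need flipping for your setup):

// Render into a window of an oversized virtual view; slide the window
// opposite to the viewer's head movement.
var camera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);

var fullWidth = width * 4;      // oversized virtual view
var fullHeight = height * 4;

function updateViewOffset(headX, headY) {
    // headX/headY roughly in -1..1; the window slides the opposite way.
    var offsetX = (fullWidth - width) * 0.5 - headX * width;
    var offsetY = (fullHeight - height) * 0.5 + headY * height;
    camera.setViewOffset(fullWidth, fullHeight, offsetX, offsetY, width, height);
    camera.updateProjectionMatrix();
}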
3D Projection mapping can be broken into a corner-pinning texture step and a perspective adjustment step. I think they're somewhat unrelated.
In Three.js you can make a quad surface (composed of two triangle Face3) and map a texture onto the surface. Then move the corners of the quad in XY (not Z). I don't think this step introduces perspective artifacts beyond what's necessary for minor deformations in a quad texture; I'm talking about banding issues and nearest-neighbor artifacts, not 3D perspective errors. The size of these artifacts depends on how the projector is shining on the object. If the projector is nicely perpendicular to the surface, very little corner-pinning is necessary. If you're using a monitor instead of a projector, then no corner-pinning is necessary at all.
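For the corner-pinning step, a quad like this is enough (this uses the legacy THREE.Geometry / Face3 API mentioned above, and assumes texture already exists):

// Corner-pinnable quad: two Face3 triangles with a texture, corners movable in XY.
var geometry = new THREE.Geometry();
geometry.vertices.push(
    new THREE.Vector3(-1, -1, 0),   // bottom-left
    new THREE.Vector3( 1, -1, 0),   // bottom-right
    new THREE.Vector3( 1,  1, 0),   // top-right
    new THREE.Vector3(-1,  1, 0)    // top-left
);
geometry.faces.push(new THREE.Face3(0, 1, 2), new THREE.Face3(0, 2, 3));
geometry.faceVertexUvs[0].push(
    [new THREE.Vector2(0, 0), new THREE.Vector2(1, 0), new THREE.Vector2(1, 1)],
    [new THREE.Vector2(0, 0), new THREE.Vector2(1, 1), new THREE.Vector2(0, 1)]
);

var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

// Pin a corner: move it in XY only, then flag the geometry as dirty.
function pinCorner(index, x, y) {
    geometry.vertices[index].set(x, y, 0);
    geometry.verticesNeedUpdate = true;
}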
Next is the perspective adjustment step. You have to adjust the content of the texture based on where the user is in relation to the real-life surface. I believe you can do this with an XYZ distance from the physical viewer to the center of the screen surface, a scaling factor between pixels and real-life size, and the pixel dimensions of the surface.
In my demo, the blueish faces of my cubes point in the positive Z direction. When I rotate my monitor in the real-life world, they continue to point in the positive Z direction of their monitor world. The diagram that I posted is a little misleading because the orange box in the middle picture is locally rotated to compensate for the rotating of the real-life monitor world. That orange box's front face is no longer pointing exactly in the positive Z direction of its monitor world.
Outside of Three.js, in Processing there are some techniques for projection mapping.
This one may be the simplest, although I haven't tried it myself: http://blogs.bl0rg.net/netzstaub/2008/08/24/wiimote-headtracking-in-processing/
SurfaceMapper for Processing has support for real-life curved surfaces (not just flat rectangles), but it only works for Processing before Processing 2.0.
If anyone develops a SurfaceMapper library for Three.js that would be really cool! I'd love to design a virtual world, put cameras in the world, have each camera consider real-life viewer perspective, and then put those rendered textures on real-life displays.
You need to adjust the perspective matrix. It's built from the -left, +right, -bottom, +top frustum bounds; changing these (making them asymmetric) will produce the effect you are looking for.
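A library-agnostic sketch of that matrix (same layout as glFrustum, column-major; the eye-offset value at the bottom is made up for illustration):

// Build an off-axis (asymmetric) perspective matrix from explicit
// left/right/bottom/top bounds at the near plane. Shifting the bounds
// off-center skews the frustum without rotating the camera.
function makeFrustum(left, right, bottom, top, near, far) {
    var x = (2 * near) / (right - left);
    var y = (2 * near) / (top - bottom);
    var a = (right + left) / (right - left);
    var b = (top + bottom) / (top - bottom);
    var c = -(far + near) / (far - near);
    var d = (-2 * far * near) / (far - near);
    return [
        x, 0, 0,  0,
        0, y, 0,  0,
        a, b, c, -1,
        0, 0, d,  0
    ];
}

// Example: a viewer sitting to the right of the screen center shifts the
// frustum bounds to the left.
var eyeOffsetX = 0.2;
var proj = makeFrustum(-1 - eyeOffsetX, 1 - eyeOffsetX, -1, 1, 1, 100);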
I am attempting to use an HTML canvas element to draw each character available in a font file to a canvas. To make this question as simple as possible, pretend only one character is drawn to a canvas. From there, I want to use JavaScript to analyze the canvas and create triangular regions of the canvas that make up the entire character. The reason I need it in triangles is so that the data can later be sent to WebGL, so text can be rendered and no data will be lost by scaling the text size up or down.
I am looking for some sort of algorithm to accomplish this, or at least some knowledge to get me going in the right direction. If you believe I should use a different approach, please tell me why, but I figured this would be the best way to allow modifying the text in many ways, as well as making it possible to create 3D block text.
Here's an article on how to draw resolution-independent curves with shaders:
http://research.microsoft.com/en-us/um/people/cloop/loopblinn05.pdf
My understanding is that instead of breaking the shapes into triangles you break them into quads, with enough info stored in the vertices to draw a portion of the curve inside each quad. In other words, as the shader draws each quad there's a formula that, for each pixel, can compute whether that pixel is inside or outside the curve.
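As I understand the paper, for a quadratic curve segment the three control points carry the "texture" coordinates (0,0), (0.5,0) and (1,1), and a pixel is inside the curve when u*u - v < 0. A sketch of that per-pixel test, with the GPU's interpolation simulated in JavaScript:

// barycentric = [w0, w1, w2]: the pixel's weights inside the control triangle.
function insideQuadraticCurve(barycentric) {
    var uv = [
        { u: 0.0, v: 0.0 },
        { u: 0.5, v: 0.0 },
        { u: 1.0, v: 1.0 }
    ];
    var u = barycentric[0] * uv[0].u + barycentric[1] * uv[1].u + barycentric[2] * uv[2].u;
    var v = barycentric[0] * uv[0].v + barycentric[1] * uv[1].v + barycentric[2] * uv[2].v;
    return u * u - v < 0;   // flip the sign to fill the other side of the curve
}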
I suggest you start with the keyword polygon triangulation.
Using these methods, you can split an n-sided polygon into a set of triangles.
Note that these methods only apply to figures with straight (not rounded) edges.
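For example, here is a minimal ear-clipping sketch (one of the classic polygon-triangulation methods; O(n^2), simple counter-clockwise polygons only, no holes):

// Triangulate a simple CCW polygon given as an array of {x, y} points.
// Returns an array of index triples.
function triangulate(points) {
    var indices = points.map(function (_, i) { return i; });
    var triangles = [];

    function cross(a, b, c) {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }
    function pointInTriangle(p, a, b, c) {
        return cross(a, b, p) >= 0 && cross(b, c, p) >= 0 && cross(c, a, p) >= 0;
    }

    var guard = 0;
    while (indices.length > 3 && guard++ < 10000) {
        for (var i = 0; i < indices.length; i++) {
            var ia = indices[(i + indices.length - 1) % indices.length];
            var ib = indices[i];
            var ic = indices[(i + 1) % indices.length];
            var a = points[ia], b = points[ib], c = points[ic];

            if (cross(a, b, c) <= 0) continue;   // reflex corner, not an ear

            // An ear must not contain any other remaining vertex.
            var containsOther = indices.some(function (j) {
                return j !== ia && j !== ib && j !== ic &&
                       pointInTriangle(points[j], a, b, c);
            });
            if (containsOther) continue;

            triangles.push([ia, ib, ic]);
            indices.splice(i, 1);   // clip the ear
            break;
        }
    }
    triangles.push([indices[0], indices[1], indices[2]]);   // last remaining triangle
    return triangles;
}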
So, you are trying to convert a raster image into vector data?
When zoomed in, that will result in very jagged-looking geometry, since each pixel is treated as a square-edged part of the geometry.
Couldn't you get your hands on the original vector (Bézier curve) geometry for each glyph you are drawing?
Transforming that into triangle strips and fans would look smoother.
I'm trying to translate the mouse x, y coordinates into 3D world coordinates of a WebGL canvas. I've gotten it working partially, but am having some trouble when the world gets rotated on any axis.
I'm using the unproject method to get the starting/ending points of the ray, and then doing a line-to-plane collision test against a flat plane with normal (0, 1, 0) through the point (0, 0, 0).
You can find the code at wingsofexodus.com by viewing the source. The functions being used are RtoW (real to world, for mouse-to-world conversion), lpi (line-plane intersection testing), and unproject.
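For reference, the plane test itself boils down to roughly this (my own sketch, not the code from the site; rayOrigin/rayDir come from the unprojected near and far points):

// Ray/plane intersection for the ground plane with normal (0, 1, 0) through
// the origin: find t where rayOrigin.y + t * rayDir.y === 0.
function intersectGroundPlane(rayOrigin, rayDir) {
    if (Math.abs(rayDir.y) < 1e-8) return null;   // ray parallel to the plane
    var t = -rayOrigin.y / rayDir.y;
    if (t < 0) return null;                       // plane is behind the ray
    return {
        x: rayOrigin.x + t * rayDir.x,
        y: 0,
        z: rayOrigin.z + t * rayDir.z
    };
}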
It's been ages since I had to do any matrix/vector math and dusting off the books after so long is proving difficult.
The site may come up slowly; my internet connection for it isn't all that great. If it proves to be too troublesome, I'll copy the code here.
Any help or links that might help are appreciated.
You've got the right idea, but I see two mistakes and one needless complication:
Complication: instead of duplicating the code that computes the view matrix from rotation angles etc., save a copy of it when you compute it (at the beginning of drawScene) and use that instead of mm. Or make sure the matrix stack is in the right state and have unproject just use mvMatrix. This will avoid errors from the duplication.
You re-apply the translation in what you do with the unprojection result (in RtoW). This is a mistake, because the translation is already included in mm; you're either doubling or cancelling it. After you have unprojected the mouse coordinates, you already have world coordinates, which do not need to be further modified before doing your ray collision test.
In unproject, you are inverting the view matrix before multiplying it with the projection matrix. You can either do (pseudocode) invert(view)*invert(projection), or (cheaper) invert(projection*view), but you're currently doing invert(projection*invert(view)), which is wrong.
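In other words, sketched with the gl-matrix API (not the library the site uses, just for illustration; your own matrix helpers work the same way):

// Assumes gl-matrix (mat4, vec4) is available.
function unprojectPoint(ndcX, ndcY, ndcZ, projection, view) {
    var pv = mat4.create();
    mat4.multiply(pv, projection, view);    // projection * view
    var inv = mat4.create();
    mat4.invert(inv, pv);                   // one inversion of the product

    var p = vec4.fromValues(ndcX, ndcY, ndcZ, 1);
    vec4.transformMat4(p, p, inv);
    return [p[0] / p[3], p[1] / p[3], p[2] / p[3]];   // world coordinates
}

// Ray through the mouse: unproject at the near (z = -1) and far (z = +1)
// planes and use the two results as the start/end points for the plane test.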
These are what jumped out at me, but I haven't reviewed all of your code. The unprojection looks OK when I compare it to my own version of the same.