Mouse Coords to Game Coords WebGL - javascript

I'm trying to translate mouse x, y coordinates into the 3D world coordinates of a WebGL canvas. I've gotten it working partially, but am having some trouble when the world gets rotated on any axis.
I'm using the unproject method to get the starting/ending points of the ray and then doing a line-to-plane collision test against a flat plane with normal (0, 1, 0) through the point (0, 0, 0).
You can find the code at wingsofexodus.com by doing a view source. The functions being used are RtoW (real to world, for mouse to world conversion), lpi (line plane intersection testing), and unproject.
It's been ages since I had to do any matrix/vector math and dusting off the books after so long is proving difficult.
The site may come up slowly; my internet connection for it isn't all that great. If it proves to be too troublesome I'll copy the code here.
Any help or links that might help are appreciated.

You've got the right idea, but I see two mistakes and one needless complication:
Complication: Instead of duplicating the code that computes the view matrix from rotation angles etc., save a copy of it when you compute it (at the beginning of drawScene) and use that instead of mm. Or, make sure the matrix stack is in the right state and have unproject just use mvMatrix. This keeps the two copies from drifting apart and introducing errors.
In RtoW, you apply the translation again to the unprojection result. This is a mistake, because the translation is already included in mm; you're either doubling or cancelling it. After you have unprojected the mouse coordinates, you have world coordinates which do not need to be further modified before doing your ray collision test.
In unproject, you are inverting the view matrix before multiplying it with the projection matrix. You can either do (pseudocode) invert(view)*invert(projection), or (cheaper) invert(projection*view), but you're currently doing invert(projection*invert(view)), which is wrong.
These are what jumped out at me, but I haven't reviewed all of your code. The unprojection looks OK when I compare it to my own version of the same.
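For reference, here is a minimal sketch of that order of operations, written with gl-matrix-style helper names (mat4.multiply, mat4.invert, vec4.transformMat4) purely for illustration; it is not the asker's actual code:

// Unproject a mouse position into a world-space ray.
function unprojectRay(mouseX, mouseY, viewport, projMatrix, viewMatrix) {
  // Normalized device coordinates in [-1, 1]; mouse y is flipped.
  const ndcX = (2 * (mouseX - viewport.x)) / viewport.width - 1;
  const ndcY = 1 - (2 * (mouseY - viewport.y)) / viewport.height;

  // invert(projection * view) -- NOT invert(projection * invert(view)).
  const pv = mat4.multiply(mat4.create(), projMatrix, viewMatrix);
  const inv = mat4.invert(mat4.create(), pv);

  // Points on the near and far planes, transformed back to world space.
  const near = vec4.transformMat4(vec4.create(), [ndcX, ndcY, -1, 1], inv);
  const far = vec4.transformMat4(vec4.create(), [ndcX, ndcY, 1, 1], inv);
  for (const p of [near, far]) { p[0] /= p[3]; p[1] /= p[3]; p[2] /= p[3]; }

  // These are already world coordinates; no extra translation is needed
  // before the line/plane intersection test.
  return { start: near, end: far };
}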

Related

Programmatically build meshes - UV mapping

I am working on a system to procedurally build meshes for "mines". Right now I don't want to achieve visual perfection; I am more focused on the basics.
I've got to the point where I am able to generate the shape of the mines and, from that, generate the two meshes: one for the ground and one for the "walls" of the mine.
Now I am working on getting the UV mapping right, but the ground is proving really hard to map to UV coordinates properly and I am currently not able to get it right.
For the tessellation I am using a constrained version of the Delaunay triangulation, to which I added a sub-tessellation that simply splits each triangle at least once and keeps splitting it while its area is greater than X.
Here is a 2D rendering of the tessellation that highlights the contours, the triangles and the edges.
Here is the result of the 3D rendering (using three.js and WebGL) with my current UV mapping applied (a displacement map as well; please ignore that for now).
I am taking a naive approach to the UV mapping: each vertex of a triangle in the grid is translated to values between 0 and 1, and that's it.
I think that, in theory, this should be right. I suspected the issue might be with the order of the vertices, but if that were the case the texture would just appear rotated or odd, not both odd AND stretched like that.
Once I get the UV mapping right, the next step would be to correctly implement the
I am currently writing this in JavaScript, but any hint or solution in any language would be alright; I don't mind converting and/or re-engineering it to make it work.
My goal is to be able to procedurally build the mesh, send it to multiple clients and achieve the same visual rendering. I need to add quite a few bits and pieces after this step is implemented, so I can't rely on client-side shaders; otherwise placing tracks, carts or anything else on the ground would be impossible for the server.
Once I get these things sorted out, I will switch to Unity 3D for the client-side rendering; WebGL and three.js are currently being used just as a quick and easy way to view what's being produced without needing the whole client/server infrastructure.
Any suggestion?
Thanks!
I sorted out the issue in my code; it was pretty stupid, though: by mistake I was adding 3 UV pairs per triangle instead of 1 per vertex, causing a huge visual mess. Once that was sorted out, I was able to achieve what I needed!
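For anyone hitting the same issue, here is a minimal sketch of the idea (computeGroundUVs, bounds and the flat vertex array are assumed names and layout, not the author's actual structures): one UV pair per vertex, with the triangle index list pointing into that array.

// Map each ground vertex into [0, 1] using the mesh's bounding box.
function computeGroundUVs(vertices, bounds) {   // vertices: [{ x, y, z }, ...]
  const width = bounds.maxX - bounds.minX;
  const depth = bounds.maxZ - bounds.minZ;
  return vertices.map(v => ({
    u: (v.x - bounds.minX) / width,
    v: (v.z - bounds.minZ) / depth,
  }));
}
// Triangles index into this array, so shared vertices share a single UV pair
// and the texture is no longer stretched or scrambled.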
https://www.youtube.com/watch?v=DHF4YWYG7FM
Still a lot of work to do but starts to look decent!

Ray-Tracing inspired algorithm for sound reflection simulation

I am a civil architect by profession, with a passion for maths, physics and computers. For one of my projects, I am designing a room with 3 straight walls and a 4th curved wall. A sound source is near the left wall.
+----------
|          \
|   +       |
|          /
+----------
Having some spare time on my hands, I decided to try modeling the acoustics of this room using JavaScript and Canvas API. My goal was to calculate for every point in the room:
Net intensity of sound, by summing the sound coming directly from the source and the reflections off the walls (including the curved one). This would include attenuation due to the inverse square law and absorption by the walls.
Reverb characteristics, by keeping track of path lengths directly from the source and of reflections off the walls. If a point in the room receives a reflected signal about 0.05 seconds after the primary signal arrives, we might have an echo problem.
I assumed a canvas size of 800x600 pixels and real-world dimensions of the room of 45x44 feet (left wall = 44 ft, top/bottom walls 31 ft, curved wall radius 22 ft), with the sound source 5 ft from the left wall. I modeled each wall as a line or circle equation and wrote a function that tells me whether a point is inside the room or not. For each pixel in the canvas, I convert it to a real-world coordinate, calculate its distance from the source, and use the inverse square law to calculate sound intensity. What I ended up with was this:
However, needless to say, this only captures the primary bounce from the source. It doesn't capture any reflections and those are proving way too hard for me to calculate.
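In code, that direct-sound pass is roughly the following sketch (assumed names: SCALE_X/SCALE_Y, the exact source position, isInsideRoom and plotIntensity stand in for the author's own values and functions):

const SCALE_X = 45 / 800;                  // feet per pixel horizontally
const SCALE_Y = 44 / 600;                  // feet per pixel vertically
const source = { x: 5, y: 22 };            // 5 ft from the left wall (y assumed)

for (let py = 0; py < 600; py++) {
  for (let px = 0; px < 800; px++) {
    const world = { x: px * SCALE_X, y: py * SCALE_Y };
    if (!isInsideRoom(world)) continue;    // the author's point-in-room test
    const dx = world.x - source.x;
    const dy = world.y - source.y;
    const d2 = dx * dx + dy * dy;
    const intensity = 1 / Math.max(d2, 1e-6); // inverse square law, direct path only
    plotIntensity(px, py, intensity);      // hypothetical rendering helper
  }
}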
I'm looking for an insight into how I can do that. I have tried the following partially:
Instead of iterating points in the room grid-wise, I've tried to generate rays from the source. Calculating reflections off the straight walls is easy, but the curved wall presents some challenges. The biggest problem I'm having is this: if I start with 360 rays, the points closest to the source are packed many per pixel, but as we move outwards the points become so sparse that there may be tens of pixels between adjacent points. This also means that when I reflect a ray, it would almost certainly not land on the points created by the primary bounce, so I wouldn't be able to simply add them up. Even if I interpolate, the result would not be correct, as some points would register intensity from the primary bounce, even fewer would register intensity from secondary/tertiary bounces, and many points would register nothing. In the image below, I've tried this approach with primary bounces only.
Iterate the room grid-wise. For each point in the room, calculate direct direction to the source and reflected location of source in each wall. Use these distances to calculate net intensity at every sample point in the room. This is easy to do for the straight walls. But the math turns EXTRAORDINARILY complicated and unsolvable for me for the curved wall.
              X
              +

        + B

  +                 +
  A                 O
Assume A is the source, O is the center of the curve, B is the point in the room we're currently testing, and X is a point on the curve. For secondary bounces, ∠AXO = ∠BXO. We know A, O and B. If we can find X, then BX needs to be extended backwards a distance equal to AX, and the image of the source would be located there. The problem is that finding X is a very hard problem. And even if this can be done, it only accounts for secondary bounces. Tertiary bounces would be even harder to calculate.
I believe Option #2 is the better way to go about this. But I do not possess enough math/computer skills to tackle this problem on my own. At this point in time, I'm trying to solve this not for my project, but for my personal curiosity. I would be grateful if any of you can solve this and share your solutions. Or if you can just give me insight into this problem, I would be glad to work on this further.
I lack expertise in computing interpolation and ray-tracing (which would be required for this problem, I think).
Thanks,
Asim
So. With great pointers from lara and a deep dive into matrix math and Bresenham's line rendering algorithm, I was finally able to complete this hobby project. =) Outlined here are the steps for anyone wishing to go down this route for similar problems.
Ditch algebraic equations in favor of matrix math. Ditch lines in favor of parametric lines.
Represent walls in terms of rays and circles. Lines can be represented parametrically as [x y 1] = [x0 y0 1] + t*[dx dy 0]. Circles can be represented as |X - C|^2 = r^2.
Project rays outwards from the source. For each ray, calculate its intersection with one of the walls, and resize it to span from its starting point to the intersection point (see the sketch after these steps).
When an intersection point is calculated, calculate the normal vector of the wall at the point of intersection. For straight walls, the normal is simply [-dy dx 0]. For circles, the normal is X - C.
Use matrices to reflect the incident ray about the normal.
Repeat the process for as many bounces as needed.
Map the World Coordinate System the plan is in to a Cell Coordinate System to divide the world into a grid of cells. Each cell can be 1'-0" x 1'-0" in size (in feet). Use matrices again for transformation between the two coordinate systems.
Use the transformation matrix to convert the ray to the Cell Coordinate System.
Use Bresenham's line algorithm to determine which cells the ray passes through. For each cell, use the inverse square law to calculate the ray's intensity at that cell. Add this value to the cells grid.
Finally, use the Canvas API and another transformation matrix to convert from the Cell Coordinate System to the Screen Coordinate System and render the cells grid on screen.
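As referenced in the steps above, here is a minimal sketch of the curved-wall bounce (plain {x, y} vectors; source, dir, center and radius are assumed variable names, not the author's actual code):

const dot = (a, b) => a.x * b.x + a.y * b.y;
const sub = (a, b) => ({ x: a.x - b.x, y: a.y - b.y });
const add = (a, b) => ({ x: a.x + b.x, y: a.y + b.y });
const scale = (a, s) => ({ x: a.x * s, y: a.y * s });
const normalize = a => scale(a, 1 / Math.hypot(a.x, a.y));

// Ray: origin o, unit direction d. Circular wall: center c, radius r.
// Solve |o + t*d - c|^2 = r^2 for the positive t (the source is inside the room).
function rayCircleIntersection(o, d, c, r) {
  const oc = sub(o, c);
  const b = 2 * dot(d, oc);
  const k = dot(oc, oc) - r * r;
  const disc = b * b - 4 * k;            // coefficient a = dot(d, d) = 1 for unit d
  if (disc < 0) return null;             // the ray misses the circle
  const t = (-b + Math.sqrt(disc)) / 2;  // larger root = exit point when inside
  return t > 0 ? add(o, scale(d, t)) : null;
}

// Reflect a direction d about a unit normal n:  r = d - 2*(d.n)*n
const reflect = (d, n) => sub(d, scale(n, 2 * dot(d, n)));

// One bounce off the curved wall:
const hit = rayCircleIntersection(source, dir, center, radius);
if (hit) {
  const n = normalize(sub(hit, center)); // circle normal is X - C, as noted above
  const bounced = reflect(dir, n);       // direction of the reflected ray
}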
Here are a few screenshots of what I achieved using this:
Rays of light emanating from the source and intersecting with the walls.
Calculating multiple reflections of a single ray.
Calculating single reflections of all rays emanating from the source.
Using Bresenham's Line algorithm to identify cells crossed by the ray and plotting logarithmic intensity.
Rendering all cells for all primary rays only.
Rendering all cells for reflected rays only. The caustic surface is clearly visible here.
Rendering all cells for primary and reflected rays using color-coding.
I got to learn a lot of very interesting math during the course of this project. I respect matrices a lot more now. All during my college years, I wondered if I would ever need to use Bresenham's line algorithm in my life, since all graphics libraries have line drawing algorithms built in. For the first time, I found that I needed to use this algorithm directly, and without it this project would not have been possible.
I will make the code available on GitHub soon. Thanks to everyone who contributed to my understanding of these concepts.
Asim

Projection math in software 3d engine

I'm working on writing a software 3d engine in Javascript, rendering to 2d canvas. I'm totally stuck on an issue related to the projection from 3d world to 2d screen coordinates.
So far, I have:
A camera Projection and View matrix.
Transformed Model vertices (v x ModelViewProj).
Take the result and divide both the x and y by the z (perspective) coordinate, making viewport coords.
Scale the resultant 2d vector by the viewport size.
I'm drawing a plane, and everything works when all 4 vertices (2 tris) are on screen. When I fly my camera over the plane, at some point the off-screen vertices transform to the top of the screen. That point seems to coincide with the perspective coordinate going above 1. I have an example of what I mean here - press forward to see it flip:
http://davidgoemans.com/phaser3d/
Code isn't minified, so web dev tools can inspect easily, but I've also put the source here:
https://github.com/dgoemans/phaser3dtest/tree/fixcanvas
Thanks in advance!
Note: I'm using Phaser, not really to do anything at the moment, but my plan is to mix 2D and 3D. It shouldn't have any effect on the 3D math.
When projecting points which lie behind the virtual camera, the results will be projected in front of it, mirrored: x/z is just the same as (-x)/(-z).
In rendering pipelines, this problem is addressed by clipping algorithms which intersect the primitives with clipping planes. In your case, a single clipping plane which lies somewhere in front of your camera is enough (rendering pipelines usually use 6 clipping planes to describe a complete viewing volume). You must prevent the situation where a single primitive has at least one point in front of the camera and at least another one behind it (and you must discard primitives which lie completely behind, but that is rather trivial). Clipping must be done before the perspective divide, which is also the reason why the space the projection matrix transforms to is called clip space.
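A minimal sketch of that near-plane clip for a single edge, assuming a view space where the camera looks down -z (adapt the sign convention to your own setup); triangles need the same per-edge treatment, which can produce an extra vertex:

const NEAR = 0.1;   // distance of the clipping plane in front of the camera

// a, b: { x, y, z } endpoints in view space, before the perspective divide.
function clipEdgeToNearPlane(a, b) {
  const aIn = a.z < -NEAR;
  const bIn = b.z < -NEAR;
  if (aIn && bIn) return [a, b];         // entirely in front: keep as is
  if (!aIn && !bIn) return null;         // entirely behind: discard
  // One endpoint is behind the camera: replace it with the intersection at z = -NEAR.
  const t = (-NEAR - a.z) / (b.z - a.z);
  const hit = {
    x: a.x + t * (b.x - a.x),
    y: a.y + t * (b.y - a.y),
    z: -NEAR,
  };
  return aIn ? [a, hit] : [hit, b];
}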

3D models on a Node.js server

I'm making a 3D game, and I was told here and here that I should always perform collision detection on the server side. Now, the problem is that I don't know how! Collision between players on a flat plane is easy, because players are represented by a cylinder, but how do I do collision detection when the map itself is a model with hills and so on? Is it possible to somehow import a 3D model on a Node.js server? And then, say I do have the model imported, are there some collision detection libraries, or will I have to do the math myself? (My last resort would be converting the models to JSON (models are already converted to JSON) and then reading in all the vertices, but that would be a pain.)
The models are made in Blender and can be converted to .obj, .dae or .js (JSON) (this is what I currently use).
If there are no such modules that allow the models to be imported and collisions to be checked against them, I guess I could make it myself. In that case, please do provide some hints, or further reading on how to test whether a point is colliding with an object defined by some vertices.
EDIT: The "map" will have objects on it, like trees and houses. I was thinking, maybe even caves. It will also have hills and what-not, but a heightmap is out of the question I guess.
If you're going for a do-it-yourself solution, I'd suggest the following.
Terrain
Preferably have the terrain be a grid, with just each vertex varying in height. The result is a bunch of quads, each with the same dimensions in the (x, y) plane. Vertices have varying (z) values to make up the slopes. The grid arrangement allows you to easily determine which triangles you will have to test for intersection when performing collision checks.
If that's not an option (terrain pre-built in modeling package), you'll probably want to use an in-memory grid anyway, and store a list of triangles that are (partially) inside each cell.
Checking for collision
The easiest approach would be to consider your actors as points in space. Then you'd determine the height of the terrain at that point as follows:
Determine grid cell the point is in
Get triangles associated with cell
Get the triangle containing the point (in the (x, y) plane)
Get height of triangle/plane at point
In the case of the "pure grid" terrain, step 3 involves just a single point/plane check to determine which of the 2 triangles we need. Otherwise, you'd have to do a point-in-triangle check for each triangle (you can probably re-use point/plane checks or use BSP for further optimization).
Step 4 as a small JavaScript function:
// Height of the triangle's plane at the actor's (x, y) position.
function terrainHeightAt(point, triangle) {      // point: { x, y, z }, the actor position
  const v0 = triangle.vertices[0];
  const n = triangle.plane.normal;               // plane normal (n.z must be non-zero)
  // Signed offset of the actor from the plane, measured along the normal (a dot product).
  const distance = (point.x - v0.x) * n.x + (point.y - v0.y) * n.y + (point.z - v0.z) * n.z;
  // Intersect the vertical ray through the actor with the plane; note the minus sign.
  return point.z - distance / n.z;
}
This calculates the intersection of the ray straight up/down from the actor position with the triangle's plane. So that's the height of the terrain at that (x, y) position. Then you can simply check if the actor's position is below that height, and if so, set its z-coordinate to the terrain height.
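For example, using the terrainHeightAt sketch above (actor is a hypothetical name):

const groundZ = terrainHeightAt(actor.position, triangle);
if (actor.position.z < groundZ) {
  actor.position.z = groundZ;   // snap the actor back up to the terrain surface
}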
Objects (houses, trees, ... )
Give each object 1 or more convex collision volumes that together roughly correspond to its actual shape (see this page on UDN to see how the Unreal Engine works with collision hulls for objects).
You will have to use some spatial subdivision technique to quickly determine which of all world objects to check for collision when moving an actor. If most movement is in 2 dimensions (for example, just terrain and some houses), you could use a simple grid or a quadtree (which is like a grid with further subdivisions). A 3-dimensional option would be the octree.
The point of the spatial subdivision is the same as with the terrain organized as a grid: to associate objects with cells/volumes in space so you can determine the set of objects to check for a collision, instead of checking for collision with all objects.
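A minimal sketch of such a grid lookup (assumed names; grid is a Map keyed by "ix,iy" holding arrays of objects):

const CELL_SIZE = 10;   // world units per cell; tune to your typical object size

function cellIndexFor(position) {
  return {
    ix: Math.floor(position.x / CELL_SIZE),
    iy: Math.floor(position.y / CELL_SIZE),
  };
}

function objectsNear(grid, position) {
  const { ix, iy } = cellIndexFor(position);
  return grid.get(`${ix},${iy}`) || [];   // only these objects need collision checks
}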
Checking for collision
Get the "potential collision objects" using the spatial subdivision technique you've used; f.e. get the objects in the actor's current grid cell.
For each convex collision volume of each object:
Using the separating axis theorem, determine if the actor intersects with the collision volume. See my answer to a different post for some implementation hints (that question is about the 2D case, but the code largely applies; just read "edge" as "plane"). A small interval-overlap sketch follows after the note below.
If collision occurs, use the normal of one of the "offending planes" to move the actor to just next to that plane.
Note: In this case, model your actor's collision volume as a box or 3-sided cylinder or so.
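As mentioned above, here is a small interval-overlap building block for the separating axis test (shapes as plain arrays of { x, y, z } vertices; a sketch, not a complete SAT implementation):

const dot3 = (a, b) => a.x * b.x + a.y * b.y + a.z * b.z;

// Project every vertex of a shape onto an axis and keep the min/max interval.
function projectOntoAxis(vertices, axis) {
  let min = Infinity, max = -Infinity;
  for (const v of vertices) {
    const p = dot3(v, axis);
    if (p < min) min = p;
    if (p > max) max = p;
  }
  return { min, max };
}

function overlapsOnAxis(shapeA, shapeB, axis) {
  const a = projectOntoAxis(shapeA, axis);
  const b = projectOntoAxis(shapeB, axis);
  return a.max >= b.min && b.max >= a.min;
}

// If overlapsOnAxis() is false for any candidate axis (face normals, plus
// cross products of edge directions), the two convex shapes do not intersect.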
Also, you may want to consider building a BSP tree for each object and using axis-aligned bounding boxes for your actors instead; if your objects have more complicated collision volumes, that will be faster. But that's getting a little beyond the scope of this answer.
Final thoughts
Well, this is already a really long answer, and these are just some broad strokes. Collision is a pretty broad topic, because there are so many different approaches you can take depending on your needs.
For example, I haven't covered "trace collision", which is detecting collision when an actor moves. Instead, the above suggestion on objects checks if an actor is inside an object. This may or may not suit your needs.
I also just realized I haven't covered actor-vs-actor collision. Perhaps that's best done as colliding 2 circles in the (x, y) plane, with an additional check to see if their vertical spaces intersect.
Anyway, I really gotta wrap this up. Hopefully this will at least point you in the right direction.

How to use GradientEntry in xfl?

<LinearGradient>
<matrix>
<Matrix a="0.0262451171875" d="0.009765625" tx="218.45" ty="83"/>
</matrix>
<GradientEntry color="#E63426" ratio="0.00392156862745098"/>
<GradientEntry color="#CA271E" ratio="0.36078431372549"/>
<GradientEntry color="#B31D19" ratio="0.749019607843137"/>
<GradientEntry color="#AB1917" ratio="1"/>
</LinearGradient>
This is the relevant part of the XFL file that is needed to fill a shape with colors using GradientEntry.
The matrix values above are supposed to somehow give me the start and end coordinates
for the gradient. Does anyone know how to extract the start and end coordinates? I did a similar thing not long ago using the EaselJS Matrix2D class with its decompose function to determine scaling, rotation, skewing and translation (displacement).
What I'm trying to do is draw an XFL picture in HTML5 with canvas.
I'm a bit new to programming, so maybe my question is not so well formulated. Sorry about that!
I have been looking into this for a good while, but I haven't figured it out entirely yet.
It's a typical transformation matrix found a lot in XFL files, but what exactly it transforms is unknown to me. I do know that if you pull [0,0] through the transformation matrix and consider the local transformation space of the layer (i.e., subtract the transformation point), you get the center of the gradient.
If I transform [0,1], [1,0] or [1,1], however, the results barely differ from [0,0] because the values in the transformation matrix are always extremely small. It does seem that [1,0] at least points in the right direction, though. If I put [1000,0] in it, I get about 1/2 of the entire length of the gradient.
So just based on sight, I would say that the gradient would run from [-1000,0] to [1000,0]. But that's just an empirical estimation. If anyone's got a better estimation, or perhaps a reason why they did it this way, I'd love to know it.
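For what it's worth, the SWF format defines linear gradients on a "gradient square" running from -16384 to 16384 twips (±819.2 px) around the origin, with the matrix mapping that square into shape coordinates. If XFL follows the same convention (an assumption here, but close to the ±1000 estimate above), the endpoints could be computed like this:

// b and c default to 0 when omitted from the <Matrix> element.
const m = { a: 0.0262451171875, b: 0, c: 0, d: 0.009765625, tx: 218.45, ty: 83 };

function applyMatrix(m, x, y) {
  return { x: m.a * x + m.c * y + m.tx, y: m.b * x + m.d * y + m.ty };
}

const EXTENT = 819.2;                      // half-width of the gradient square in px
const start = applyMatrix(m, -EXTENT, 0);  // left end of the linear gradient
const end = applyMatrix(m, EXTENT, 0);     // right end
// These can then be passed to ctx.createLinearGradient(start.x, start.y, end.x, end.y).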
