Has anyone tried accelerating collision detection via GPU? I thought about passing position+radius for a simple sphere intersection, rendering all intersecting triangle indices to a texture.
Using the GPU
I'm not sure if this is a good idea at all, but the math for this doesn't seem costly in a vertex shader. It wouldn't resolve anything, just fetch the face indices to indicate which faces are relevant. It's also only intended for the terrain.
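For reference, the per-face test the shader would run is cheap. Here is a rough CPU-side sketch in JavaScript of a coarse version (it only flags a face when one of its vertices falls inside the query sphere; a full sphere/triangle test would also handle edges and the interior, and the function name is purely illustrative):

function faceTouchesSphere(vertices, face, center, radius) {
  // face = [i0, i1, i2] indexes into vertices; each vertex is [x, y, z]
  const r2 = radius * radius;
  for (const vi of face) {
    const dx = vertices[vi][0] - center[0];
    const dy = vertices[vi][1] - center[1];
    const dz = vertices[vi][2] - center[2];
    if (dx * dx + dy * dy + dz * dz <= r2) return true;
  }
  return false;
}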
Using the octree
The terrain is generated via dual contouring, and at its highest LOD several thousand nodes can be required. Storing face indices based on the dual cells, or connecting them to their octree nodes, is costly in terms of memory and CPU. It already required a lot of optimization and needs to run multi-threaded, so I'd like to avoid additional steps on this side.
It might work to use the octree and the density function on the boundaries and trilinearly interpolate the surface, but that requires passing the nodes from the worker, or the position to the worker. Either way this wouldn't completely match the polygons of a cell, but it would at least smooth out the error.
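For what it's worth, the interpolation step itself is cheap. A minimal sketch of trilinearly interpolating the density inside one cell, assuming the eight corner samples are already available (the indexing convention here is illustrative):

function trilerp(d, tx, ty, tz) {
  // d[x][y][z] holds the density at the 8 cell corners (x, y, z in {0, 1});
  // tx, ty, tz are the point's fractional coordinates inside the cell (0..1).
  const lerp = (a, b, t) => a + (b - a) * t;
  const x00 = lerp(d[0][0][0], d[1][0][0], tx);
  const x10 = lerp(d[0][1][0], d[1][1][0], tx);
  const x01 = lerp(d[0][0][1], d[1][0][1], tx);
  const x11 = lerp(d[0][1][1], d[1][1][1], tx);
  return lerp(lerp(x00, x10, ty), lerp(x01, x11, ty), tz);
}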
Using the density function
While the required octree nodes and their polygons are adaptive, their sizes vary a lot, so a collision test based on the density field function won't always fit the actual underlying geometry; the implied surface can be far below or above it.
Any suggestions?
Related
I am trying to implement a hidden surface determination algorithm in my 3D renderer. I have found very good approaches, such as the Z-buffer or Warnock's algorithm, but they are extremely resource-consuming. So I wondered: why not use opaque overlapping colours, with which I could get the same visual results? I would like to receive some feedback and opinions before going further, and in case it turns out to be a good solution, of course, to use this post as a way of sharing it with the community. The method basically comes down to: 1) ordering all polygons in the scene by their Z coordinate; 2) rendering all of them in that order, using opaque colours. The image/view/visual effect would be the same, without having to resort to a costly pixel-by-pixel computational process.
(Example: say I have two intersecting polygons (P1, P2). Given that the viewer's closest Z coordinate is 0, if P1z = 10 and P2z = 3, then the rendering order would be P2 > P1. When drawn, P2's colour will cover P1's edges and colours in the 2D XY intersection between the two polygons.) What cons do you think this could have? Do you think it would solve the problem?
PS: I do not use polygonal meshes; the 3D objects to be processed are simple convex and concave figures.
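For illustration, the proposed ordering step boils down to sorting by depth and drawing back to front; a minimal sketch (drawPolygon is a placeholder for however the renderer fills a polygon):

function renderBackToFront(polygons) {
  // Farthest polygons first, so nearer ones overdraw them with opaque colour.
  const sorted = [...polygons].sort((a, b) => b.z - a.z);
  for (const poly of sorted) {
    drawPolygon(poly); // placeholder: opaque fill of one polygon
  }
}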
Polygons can intersect (even partially), and determining what part of one polygon is in front of the other is a very expensive computation.
I'm working on a WebGL application that works similarly to video compositing programs like After Effects.
The application takes textures and stacks them like layers. This can be as simple as drawing one texture on top of the other, or using common blend modes like screen to combine several layers.
Each layer can be individually scaled/rotated/translated via an API call, and altogether the application forms a basic compositing software.
Now to my question: doing all this in a single WebGL canvas is a lot to keep track of.
// Start webgl rant
The application won't know how many layers there are ahead of time, so textures, coordinate planes, and shaders will need to be created dynamically.
Blending the layers together would require writing out the math for each type of blend mode (sketched below for one mode). Transforming vertex coordinates requires matrix math, and just doing things in WebGL in general requires a lot of code, since it is a low-level API.
// End webgl rant
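To make the blend-mode point above concrete, here is roughly what a single blend mode (screen) amounts to per colour channel, written in JavaScript rather than GLSL:

// Screen blend for one normalized channel (0..1): result = 1 - (1 - a) * (1 - b)
function screenBlend(base, layer) {
  return 1 - (1 - base) * (1 - layer);
}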
However, this could be easily solved by making a new canvas element for each layer. Each WebGL canvas will have a texture drawn onto it, and then I can scale/move/blend the layers using simple CSS code.
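As a rough sketch of that per-canvas approach (standard CSS transforms and mix-blend-mode; the property values here are only examples):

// Position, transform and blend one layer's canvas purely with CSS,
// leaving its WebGL context to do nothing but draw the texture.
function styleLayer(canvas, { x, y, scale, rotation, blendMode }) {
  canvas.style.position = 'absolute';
  canvas.style.transform =
    `translate(${x}px, ${y}px) rotate(${rotation}deg) scale(${scale})`;
  canvas.style.mixBlendMode = blendMode; // e.g. 'screen', 'multiply'
}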
On the surface, it seems like the performance hit won't be that bad, because even if I did combine everything into a single context, each layer would still need its own texture and coordinate system. So the number of textures and coordinates stays the same, just spread across multiple contexts.
However deep inside I know somehow this is horribly wrong, and my computer's going to catch fire if I even try. I just can't figure out why.
With a goal of supporting ~4 layers at a time, would using multiple canvases be a valid option? Besides worrying about browsers having a maximum number of active WebGL contexts, are there any other limitations to be aware of?
So I found out that texturing planets can be really hard. I created a 4096x4096 image and wrapped it around a high-poly sphere. Apart from the possible memory management / performance issue that comes with a 3-4 MB image, the texture looks bad / pixelated in a close-up (orbital) view.
I was thinking that I could maybe increase the resolution significantly by splitting up the picture, then creating a low, medium and high version of each section. If the camera viewport is very close to a particular section, render the high-resolution image; if it is far away, remove that image from memory and apply the low or medium version.
To be honest I am not sure what strategy to use to render high quality planets. Should I maybe avoid textures and just use height maps and color the planet with Javascript? Same thing for the clouds. Should I create a sphere with an alpha map or should I use shaders?
As you can see this is the issue I'm having, and hopefully you can enlighten me. Performance with WebGL / three.js has significantly improved over time, but since this is all done within the browser, I assume thinking about the right solution is vital in the long term.
You're going to need to implement a LOD system. LOD = "level of detail"; in 3D it generally means switching from high-polygon to low-polygon models, but more broadly it means doing anything that switches from high detail to low detail.
Because you can't make textures 1000000x100000, which is pretty much what you'd need to get the results you want, you'll need to build a sphere out of multiple sections and texture each one separately. How many sections depends on how close you want to be able to zoom in. Google Maps has millions of sections. At the same time, if you can zoom out to see the whole planet (like you can in Google Maps), you can't draw millions of sections; instead you'd switch to a single sphere. That process is called "LODing".
There is no "generic" solution. You generally need to write your own for your specific case.
In the case of something like Google Maps, what they most likely do is have several levels of detail: a single sphere when you can see the whole planet, a sphere made of say 100 pieces when slightly closer, a sphere made of 1000 pieces when closer still, a sphere made of 10000 pieces when even closer, etc. They also only show the pieces you can see. Deciding and managing which pieces to show with a generic solution would be way too slow (looking at millions of pieces every frame), but you, as the application writer, know which pieces are visible, so you can make sure only those pieces are in your scene.
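A rough sketch of that idea (the thresholds and piece counts here are made up; tune them for your scene):

// Pick how many sphere pieces to draw for the current camera altitude.
function pickPieceCount(cameraDistance, planetRadius) {
  const altitude = cameraDistance - planetRadius;
  if (altitude > planetRadius * 4) return 1;     // whole-planet sphere
  if (altitude > planetRadius)     return 100;
  if (altitude > planetRadius / 4) return 1000;
  return 10000;                                  // close-up detail
}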
Another thing people often do is fade between LODs. So when Google Maps is showing the single-mesh sphere when you're all the way zoomed out and they transition to the 100-piece or 1000-piece sphere, they crossfade between the two.
Some examples of LODing:
http://acko.net/blog/making-worlds-1-of-spheres-and-cubes/
http://vterrain.org/LOD/spherical.html
You could create a sphere with different topology.
Say you create 6 square planes, arranged in such a way that they form a box. You can tessellate these planes to give the sphere enough resolution. These planes would have UV mapping similar to cube mapping; each would hold one cubemap face.
Then you loop through all the vertices, take the position vector and normalize it. This will yield a sphere.
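A minimal sketch of that normalization step, assuming the tessellated box's vertex positions are available as a flat array of x, y, z triples (how you get that array depends on your three.js version):

// Push every vertex of the tessellated box onto a sphere of the given radius.
function spherifyVertices(positions, radius) {
  for (let i = 0; i < positions.length; i += 3) {
    const x = positions[i], y = positions[i + 1], z = positions[i + 2];
    const len = Math.sqrt(x * x + y * y + z * z) || 1;
    positions[i]     = (x / len) * radius;
    positions[i + 1] = (y / len) * radius;
    positions[i + 2] = (z / len) * radius;
  }
}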
You can convert an equirectangular panorama image into a cubemap. I think it will allow you to get more resolution and less stretching for cheap.
For starters, the 4096x4096 texture should really be 4096x2048 on the default sphere with equirectangular mapping, but the newly mapped sphere can hold 6 x 4096x4096 with no stretching, and can be drawn in 6 draw calls.
Further splitting these could yield a good basis for what gman suggests.
Say we are coding something in Javascript and we have a body, say an apple, and want to detect the collision of a rock being thrown at it: it's easy, because we can simply treat the apple as a circle.
But what if we have, for example, a "very complex" fractal? Then there is no polygon similar to it, and we also cannot break it into smaller polygons without a herculean amount of effort. Is there any way to detect perfect collision in this case, as opposed to making something that only "kind of" works, like treating the fractal as a polygon (not perfect, because collisions would be detected even in blank spaces)?
You can use a physics editor
https://www.codeandweb.com/physicseditor
It'll work with most game engines. You'll have to figure out how to make it work in JS.
Here's a tutorial from the site using TypeScript, which is closely related to JS:
http://www.gamefromscratch.com/post/2014/11/27/Adventures-in-Phaser-with-TypeScript-Physics-using-P2-Physics-Engine.aspx
If you have the coordinates of the polygons, you can compute an intersection of subject and clip polygons using Javascript Clipper.
The question doesn't provide much information about the collision objects, but usually anything can be represented as polygon(s) to a certain precision.
EDIT:
It should be fast enough for real-time rendering (depending on the complexity of the polygons). If the polygons are complex (many self-intersections and/or many points), there are several ways to speed up the intersection detection:
Reduce the point count using ClipperLib.JS.Lighten(). It removes points that have no effect on the outline (e.g. duplicate points and points on an edge).
First get the bounding rectangles of the polygons using ClipperLib.JS.BoundsOfPath() or ClipperLib.JS.BoundsOfPaths(). If the bounding rectangles do not collide, there is no need to run the intersection operation. This check is very fast, because it just takes the min/max of x and y (a sketch below combines this check with the full intersection test).
If the polygons are static (i.e. their geometry/point data doesn't change during the animation), you can lighten them, get the bounds of the paths, and add the polygons to Clipper before the animation starts. Then during each frame you only have to do minimal work to get the actual intersections.
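A minimal sketch combining the bounds pre-check with the full intersection test (this assumes jsclipper's ClipperLib API as documented; adjust the fill types and coordinate scaling to your data):

// Returns true when two integer-coordinate polygons (Clipper paths) collide.
function polygonsCollide(pathA, pathB) {
  // Cheap rejection: compare bounding rectangles first.
  const a = ClipperLib.JS.BoundsOfPath(pathA, 1);
  const b = ClipperLib.JS.BoundsOfPath(pathB, 1);
  if (a.right < b.left || b.right < a.left ||
      a.bottom < b.top || b.bottom < a.top) return false;
  // Full test: intersect the polygons and check whether anything remains.
  const clipper = new ClipperLib.Clipper();
  clipper.AddPath(pathA, ClipperLib.PolyType.ptSubject, true);
  clipper.AddPath(pathB, ClipperLib.PolyType.ptClip, true);
  const solution = new ClipperLib.Paths();
  clipper.Execute(ClipperLib.ClipType.ctIntersection, solution,
                  ClipperLib.PolyFillType.pftNonZero,
                  ClipperLib.PolyFillType.pftNonZero);
  return solution.length > 0;
}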
EDIT2:
If you are worried about the framerate, you could consider using an experimental floating-point (double) Clipper, which is 4.15x faster than the IntPoint version; when big integers are needed in the IntPoint version, the float version is 8.37x faster. The real speedup is actually a bit higher, because the IntPoint Clipper needs coordinates to be scaled up (to integers) and then scaled down (to floats), and this scaling time is not included in the above measurements. However, the float version is not fully tested and should be used with care in production environments.
The code of experimental float version: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/clipper_unminified_6.1.3.4b_fpoint.js
Demo: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/main_demo3.html
Playground: http://jsbin.com/sisefo/1/edit?html,javascript,output
EDIT3:
If you don't have polygon point coordinates for your objects and the objects are bitmaps (e.g. PNG/canvas), you first have to trace the bitmaps, e.g. using the Marching Squares algorithm. One implementation is at
https://github.com/sakri/MarchingSquaresJS.
That gives you an array of outline points, but because the array contains a huge number of unneeded points (e.g. straight lines can easily be represented by just a start and end point), you can reduce the point count using e.g. ClipperLib.JS.Lighten() or http://mourner.github.io/simplify-js/.
After these steps you have very light polygonal representations of your bitmap objects, which are fast to run through the intersection algorithm.
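For example, the simplification step with simplify-js is a one-liner (outlinePoints stands for whatever dense array of {x, y} points your tracing step returned):

// Drop points that contribute little to the outline; tolerance is in pixels.
const lightOutline = simplify(outlinePoints, 1.5, true);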
You can create bitmaps that indicate the area occupied by your objects in pixels. If there is an intersection between the bitmaps, then there is a collision.
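A minimal canvas-based sketch of that approach (both masks are assumed to be the same size; in practice you would only scan the overlap of the two objects' bounding boxes):

// maskA and maskB are ImageData objects of equal size, e.g. from
// ctx.getImageData(0, 0, w, h) after drawing each object alone on a canvas.
function masksCollide(maskA, maskB) {
  const a = maskA.data, b = maskB.data;
  for (let i = 3; i < a.length; i += 4) {   // step through the alpha channel
    if (a[i] > 0 && b[i] > 0) return true;  // both objects cover this pixel
  }
  return false;
}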
I'm making a 3D game, and I was told here and here that I should always perform collision detection on the server side. Now, the problem is that I don't know how! Collision between players on a flat plane is easy, because players are represented by a cylinder, but how do I do collision detection when the map itself is a model with hills and so on? Is it possible to somehow import a 3D model on a Node.js server? And then, say I do have the model imported, are there some collision detection libraries, or will I have to do the math myself? (My last resort would be converting the models to JSON (the models are already converted to JSON) and then reading in all the vertices, but that would be a pain.)
The models are made in Blender and can be converted to .obj, .dae or .js (JSON) (this is what I currently use).
If there are no such modules that allow models to be imported and collisions to be checked against them, I guess I could make it myself. In that case, please provide some hints or further reading on how to test whether a point is colliding with an object defined by some vertices.
EDIT: The "map" will have objects on it, like trees and houses. I was thinking, maybe even caves. It will also have hills and what-not, but a heightmap is out of the question I guess.
If you're going for a do-it-yourself solution, I'd suggest the following.
Terrain
Preferably have the terrain be a grid, with just each vertex varying in height. The result is a bunch of quads, each with the same dimensions in the (x, y) plane. Vertices have varying (z) values to make up the slopes. The grid arrangement allows you to easily determine which triangles you will have to test for intersection when performing collision checks.
If that's not an option (terrain pre-built in modeling package), you'll probably want to use an in-memory grid anyway, and store a list of triangles that are (partially) inside each cell.
Checking for collision
The easiest approach would be to treat your actors as points in space. Then you'd determine the height of the terrain at such a point as follows:
1) Determine the grid cell the point is in
2) Get the triangles associated with that cell
3) Get the triangle containing the point (in the (x, y) plane)
4) Get the height of the triangle's plane at that point
In the case of the "pure grid" terrain, step 3 involves just a single point/plane check to determine which of the 2 triangles we need. Otherwise, you'd have to do a point-in-triangle check for each triangle (you can probably re-use point/plane checks or use BSP for further optimization).
Step 4 pseudo-code:
// point = [x, y, z] is the actor position
const v0 = triangle.vertices[0];
const relativePt = [point[0] - v0[0], point[1] - v0[1], point[2] - v0[2]];
const normal = triangle.plane.normal;
// signed distance from the actor to the triangle's plane (a dot product)
const distance = relativePt[0] * normal[0] + relativePt[1] * normal[1] + relativePt[2] * normal[2];
// intersection of the vertical line through the actor with the plane
const intersection = [
  point[0],
  point[1],
  point[2] - distance / normal[2]
];
This calculates the intersection of the ray straight up/down from the actor position with the triangle's plane. So that's the height of the terrain at that (x, y) position. Then you can simply check if the actor's position is below that height, and if so, set its z-coordinate to the terrain height.
Objects (houses, trees, ... )
Give each object 1 or more convex collision volumes that together roughly correspond to its actual shape (see this page on UDN to see how the Unreal Engine works with collision hulls for objects).
You will have to use some spatial subdivision technique to quickly determine which of all world objects to check for collision when moving an actor. If most movement is in 2 dimensions (for example, just terrain and some houses), you could use a simple grid or a quadtree (which is like a grid with further subdivisions). A 3-dimensional option would be the octree.
The point of the spatial subdivision is the same as with the terrain organized as a grid: to associate objects with cells/volumes in space so you can determine a small set of objects to check for collision, instead of checking against all objects.
Checking for collision
Get the "potential collision objects" using the spatial subdivision technique you've used; f.e. get the objects in the actor's current grid cell.
For each convex collision volume of each object:
Using the separating axis theorem, determine if the actor intersects with the collision volume. See my answer to a different post for some implementation hints (that question is about the 2D case, but the code largely applies; just read "edge" as "plane").
If collision occurs, use the normal of one of the "offending planes" to move the actor to just next to that plane.
Note: In this case, model your actor's collision volume as a box or 3-sided cylinder or so.
Also, you may want to consider building a BSP tree for each object and using axis-aligned bounding boxes for your actors instead; if your objects have more complicated collision volumes, that will be faster. But that's getting a little beyond the scope of this answer.
Final thoughts
Well, this is already a really long answer, and these are just some broad strokes. Collision is a pretty broad topic, because there are so many different approaches you can take depending on your needs.
For example, I haven't covered "trace collision", which is detecting collision when an actor moves. Instead, the above suggestion on objects checks if an actor is inside an object. This may or may not suit your needs.
I also just realized I haven't covered actor-vs-actor collision. Perhaps that's best done as colliding 2 circles in the (x, y) plane, with an additional check to see if their vertical spaces intersect.
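Concretely, that actor-vs-actor check might look something like this (each actor modeled as a vertical cylinder with x, y, z, radius and height; the field names are just illustrative):

// Two cylinders collide when their circles overlap in the (x, y) plane
// and their vertical extents overlap too.
function actorsCollide(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y;
  const radii = a.radius + b.radius;
  const circlesOverlap = dx * dx + dy * dy <= radii * radii;
  const heightsOverlap = a.z <= b.z + b.height && b.z <= a.z + a.height;
  return circlesOverlap && heightsOverlap;
}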
Anyway, I really gotta wrap this up. Hopefully this will at least point you in the right direction.