I am trying to implement a field-of-view algorithm in my game. I followed the great tutorial here: sight-and-light. Here is what I have so far:
As you can see it works fine for me :)
I then tried to use this algorithm on a tiled map. It also works, but it is a little slow, so I am now trying to find a way to optimize it.
Some information that might help with optimization:
The tiled map is Orthogonal
All tiles are squares of the same size, 32 * 32
A tile marked 0 is empty; a tile marked 1 is an obstacle
I have used a connected-component labelling algorithm as a preprocessing step:
All the obstacles have been merged into several regions
I know the positions of all the vertices in each region
Something like this :
Say I have 9 connected regions (9 polygons) and 40 vertices in total.
Based on the algorithm in the link above, there will be:
ray-casts: 40 * 3 = 120 (3 ray-casts per vertex, at angles +- 0.00001)
edges: 40
edge * ray-cast intersection tests: 40 * 40 * 3 = 4800
I think there should be a way to reduce the number of ray-casts and the number of edges that I need for the intersection tests in a situation like the above, but I just can't figure out a good solution.
Any suggestions will be appreciated, thanks :)
What you are doing can be optimized a great deal.
First, there's no point in using all the vertices, and hardly any intersection tests are needed.
Take each polygon. For each vertex, figure out whether it's an inversion point: take the ray from the origin/eye through the vertex and check whether its two neighbours lie on the same side of that ray. Then figure out which direction along the polygon leads towards the origin and follow it until you find another inversion point. The segments between these points are the ones facing the origin/eye. A convex shape has only 2 inversion points; more complex shapes can have more.
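A minimal sketch of that inversion-point test in plain JavaScript (the `eye` name, the `{x, y}` point format and the cross-product side test are my assumptions, not part of the answer above):

```javascript
// A vertex is an "inversion point" (silhouette vertex) when both of its
// neighbours lie on the same side of the ray eye -> vertex.
function cross(ax, ay, bx, by) {
  return ax * by - ay * bx;
}

// Returns the indices of the inversion points of a simple polygon.
function inversionPoints(polygon, eye) {
  const result = [];
  const n = polygon.length;
  for (let i = 0; i < n; i++) {
    const v = polygon[i];
    const prev = polygon[(i + n - 1) % n];
    const next = polygon[(i + 1) % n];
    // Side of each neighbour relative to the line through eye and v
    const sPrev = cross(v.x - eye.x, v.y - eye.y, prev.x - eye.x, prev.y - eye.y);
    const sNext = cross(v.x - eye.x, v.y - eye.y, next.x - eye.x, next.y - eye.y);
    if (sPrev * sNext > 0) result.push(i); // same side: silhouette vertex
  }
  return result;
}
```

For an axis-aligned square seen from outside, this picks out exactly the two extreme corners, matching the "only 2 of them for a convex shape" claim above.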
Next, convert all the inversion points to polar coordinates and sort them by angle. You now have a set of possibly overlapping intervals (take care with the wraparound from 360 to 0). If your scene is simple, the intervals don't overlap, so you only need to follow your polygons with the light (no tests needed). If you encounter an overlap, take the inversion point and the current edge of the existing interval and check whether the inversion point is on the same side as the origin/eye. If so, intersect the ray to the inversion point with the current edge to get the far point and link it with the inversion point, which now replaces the current edge. If an interval doesn't belong to any polygon, its rays go to infinity (there is nothing to hit for any ray in that interval).
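The polar-sort step might look like this (a sketch; normalising the angle into [0, 2π) is how I'd handle the 360 -> 0 wraparound when merging adjacent intervals):

```javascript
// Sort points by polar angle around the eye, normalising atan2's
// (-π, π] range into [0, 2π) so intervals can be swept in one pass.
function byPolarAngle(points, eye) {
  return points
    .map(p => {
      let angle = Math.atan2(p.y - eye.y, p.x - eye.x);
      if (angle < 0) angle += 2 * Math.PI;
      return { point: p, angle };
    })
    .sort((a, b) => a.angle - b.angle);
}
```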
This works for all non-overlapping polygons.
So in your case you will get only 9 * 2 = 18 inversion points (your polygons are simple), and you need to do very few edge * ray intersections, because their layout is fairly sparse and the algorithm quickly discards the obvious non-intersecting cases.
If this is for some real-time application, you can optimize further by taking advantage of the fact that the inversion points mostly remain the same; when they change, they usually travel along the polygon by one edge (possibly more if the polygon is small or the move distance is large).
Related
I am a civil architect by profession, with a passion for maths, physics and computers. For one of my projects, I am designing a room with 3 straight walls and a 4th curved wall. A sound source is near the left wall.
+-------
| \
| + |
| /
+-------
Having some spare time on my hands, I decided to try modeling the acoustics of this room using JavaScript and Canvas API. My goal was to calculate for every point in the room:
Net intensity of sound by summing sound coming directly from source and reflections off the walls (including curved one). This would include attenuation due to inverse square law and absorption by walls.
Reverb characteristics by keeping track of path lengths directly from source and reflections from walls. If a point in the room received reflected signal about 0.05 seconds after the primary signal arrives, we might have an echo problem.
I assumed a canvas size of 800x600 pixels and real-world dimensions of the room of 45x44 feet (left wall = 44 ft, top/bottom walls 31 ft, curved wall radius 22 ft), with the sound source 5 ft from the left wall. I modeled each wall as a line or circle equation and wrote a function that tells me whether a point is inside the room or not. For each pixel in the canvas, I convert it to a real-world coordinate, calculate its distance from the source, and use the inverse square law to calculate the sound intensity. What I ended up with was this:
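The per-pixel direct-intensity pass described above could be sketched like this (the scale constant, the names, and the 1/(4πr²) form of the inverse square law are my assumptions):

```javascript
// Direct sound intensity at a canvas pixel: convert the pixel to world
// feet, then apply the inverse square law relative to the source.
const FEET_PER_PIXEL = 45 / 800; // 45 ft of room width mapped onto 800 px

function directIntensity(px, py, source, power) {
  const wx = px * FEET_PER_PIXEL;
  const wy = py * FEET_PER_PIXEL;
  const dx = wx - source.x;
  const dy = wy - source.y;
  const r2 = dx * dx + dy * dy; // squared distance in feet
  return r2 === 0 ? Infinity : power / (4 * Math.PI * r2);
}
```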
However, needless to say, this only captures the primary bounce from the source. It doesn't capture any reflections and those are proving way too hard for me to calculate.
I'm looking for an insight into how I can do that. I have tried the following partially:
Instead of iterating over points in the room grid-wise, I've tried to generate rays from the source. Calculating reflections off the straight walls is easy, but the curved wall presents some challenges. The biggest problem I'm having is this: if I start with 360 rays, close to the source there are way too many points per pixel, but as we move outwards the points become so diluted that there may be tens of pixels between adjacent points. This also means that when I reflect a ray, it would almost certainly not land on the points created by the primary bounce, so I wouldn't be able to simply add them up. Even if I interpolate, the result would not be correct: some points would register intensity due to the primary bounce, even fewer would register intensity due to secondary/tertiary bounces, and many points would register nothing. In the image below, I've tried this approach with primary bounces only.
Iterate the room grid-wise. For each point in the room, calculate direct direction to the source and reflected location of source in each wall. Use these distances to calculate net intensity at every sample point in the room. This is easy to do for the straight walls. But the math turns EXTRAORDINARILY complicated and unsolvable for me for the curved wall.
X
+
+B
+ +
A O
Assume A is the source, O is the center of the curve, B is the point in the room we're currently testing, and X is a point on the curve. For secondary bounces, ∠AXO = ∠BXO. We know A, O and B. If we can find X, then BX needs to be extended backwards by a distance equal to AX, and the image of the source would be located there. The problem is that finding X is a very hard problem. And even if it can be done, it only accounts for secondary bounces. Tertiary bounces would be even harder to calculate.
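One way to attack "find X" numerically rather than in closed form (a sketch under my own assumptions, not a full solution): parametrise the curve as X(θ) = O + r·(cos θ, sin θ) and look for roots of f(θ) = signedAngle(A−X, normal) + signedAngle(B−X, normal), which is zero exactly when ∠AXO = ∠BXO with A and B on opposite sides of the normal. Scan for sign changes, then bisect:

```javascript
// Find parameter angles θ on the circle (centre O, radius r) where a
// ray from A reflects to B. Scans for sign changes of f, then bisects.
function reflectionPoints(A, B, O, r, steps = 720) {
  const f = (theta) => {
    const X = { x: O.x + r * Math.cos(theta), y: O.y + r * Math.sin(theta) };
    const n = { x: O.x - X.x, y: O.y - X.y }; // normal at X, toward the centre
    const signedAngle = (u) =>
      Math.atan2(n.x * u.y - n.y * u.x, n.x * u.x + n.y * u.y);
    return (
      signedAngle({ x: A.x - X.x, y: A.y - X.y }) +
      signedAngle({ x: B.x - X.x, y: B.y - X.y })
    );
  };
  const roots = [];
  for (let i = 0; i < steps; i++) {
    let lo = (2 * Math.PI * i) / steps;
    let hi = (2 * Math.PI * (i + 1)) / steps;
    let flo = f(lo);
    if (flo === 0) { roots.push(lo); continue; }
    if (flo * f(hi) > 0) continue;
    for (let k = 0; k < 60; k++) { // bisection refine
      const mid = (lo + hi) / 2;
      const fmid = f(mid);
      if (flo * fmid <= 0) hi = mid;
      else { lo = mid; flo = fmid; }
    }
    const root = (lo + hi) / 2;
    if (Math.abs(f(root)) < 1e-6) roots.push(root); // discard atan2 branch jumps
  }
  return roots;
}
```

This only handles one bounce, as noted above; higher-order bounces would need the same root-find chained through the image sources, which multiplies the work.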
I believe Option #2 is the better way to go about this. But I do not possess enough math/computer skills to tackle this problem on my own. At this point in time, I'm trying to solve this not for my project, but for my personal curiosity. I would be grateful if any of you can solve this and share your solutions. Or if you can just give me insight into this problem, I would be glad to work on this further.
I lack expertise in computing interpolation and ray-tracing (which would be required for this problem, I think).
Thanks,
Asim
So. With great pointers from lara and a deep dive into matrix math and Bresenham's line algorithm, I was finally able to complete this hobby project. =) Outlined here are the steps for anyone wishing to follow this route for similar problems.
Ditch algebraic equations in favor of matrix math. Ditch lines in favor of parametric lines.
Represent walls in terms of rays and circles. Lines can be represented as [x y 1] = [x0 y0 1] + t*[dx dy 0] (a direction in homogeneous coordinates has 0 as its last component). Circles can be represented as (X - C)^2 = r^2.
Project rays outwards from the source. For each ray, calculate its intersection with one of the walls, and resize it to span from its starting point to the intersection point.
When an intersection point is calculated, calculate the normal vector of the wall at the point of intersection. For straight walls, calculating normal is simple ([-dy dx 1]). For circles, the normal is X - C.
Use matrices to reflect the incident ray about the normal.
Repeat the process for as many bounces as needed.
Map the World Coordinate System the plan is in to a Cell Coordinate System to divide the world into a grid of cells. Each cell can be 1'-0" x 1'-0" in size (in feet). Use matrices again for transformation between the two coordinate systems.
Use the transformation matrix to convert the ray to the Cell Coordinate System.
Use Bresenham's line algorithm to determine which cells the ray passes through. For each cell, use the inverse square law to calculate the ray intensity for that cell. Add this value to the cells grid.
Finally, use Canvas API and another transformation matrix to convert between Cell Coordinate System to Screen Coordinate System and render the cells grid on screen.
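The reflection step above can also be written directly in vector form rather than with an explicit matrix: r = d − 2(d·n̂)n̂, where n̂ is the unit normal at the hit point. A sketch (point/vector names are mine):

```javascript
// Reflect direction d about the (not necessarily unit-length) normal n.
function reflect(d, n) {
  const len = Math.hypot(n.x, n.y);
  const nx = n.x / len, ny = n.y / len; // normalise n
  const dot = d.x * nx + d.y * ny;
  return { x: d.x - 2 * dot * nx, y: d.y - 2 * dot * ny };
}
```

This works unchanged for both wall types: for straight walls n is [-dy dx], for the circle n is X - C, exactly as listed in the steps above.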
Here are a few screenshots of what I achieved using this:
Rays of light emanating from the source and intersecting with the walls.
Calculating multiple reflections of a single ray.
Calculating single reflections of all rays emanating from the source.
Using Bresenham's Line algorithm to identify cells crossed by the ray and plotting logarithmic intensity.
Rendering all cells for all primary rays only.
Rendering all cells for reflected rays only. The caustic surface is clearly visible here.
Rendering all cells for primary and reflected rays using color-coding.
I got to learn a lot of very interesting math during the course of this project. I respect matrices a lot more now. All through my college years, I wondered if I would ever need to use Bresenham's line algorithm in my life, since all graphics libraries have line-drawing algorithms built in. For the first time, I found that I needed to use this algorithm directly, and without it this project would not have been possible.
I will make the code available on GitHub soon. Thanks to everyone who contributed to my understanding of these concepts.
Asim
Say we are coding something in JavaScript and we have a body, say an apple, and want to detect the collision of a rock being thrown at it: it's easy, because we can simply treat the apple as a circle.
But what if we have, for example, a "very complex" fractal? Then there is no polygon similar to it, and we also cannot break it into smaller polygons without a herculean amount of effort. Is there any way to detect perfect collision in this case, as opposed to making something that only "kind of" works, like treating the fractal as a polygon (not perfect, because collisions would be detected even in blank spaces)?
You can use a physics editor
https://www.codeandweb.com/physicseditor
It'll work with most game engines; you'll have to figure out how to make it work in JS.
Here's a tutorial from the site using TypeScript, which is related to JS:
http://www.gamefromscratch.com/post/2014/11/27/Adventures-in-Phaser-with-TypeScript-Physics-using-P2-Physics-Engine.aspx
If you have coordinates of the polygons, you can make an intersection of subject and clip polygons using Javascript Clipper
The question doesn't provide much information about the collision objects, but usually anything can be represented as polygon(s) to a certain precision.
EDIT:
It should be fast enough for real-time rendering (depending on the complexity of the polygons). If the polygons are complex (many self-intersections and/or many points), there are several methods to speed up the intersection detection:
reduce the point count using ClipperLib.JS.Lighten(). It removes points that have no effect on the outline (e.g. duplicate points and points on an edge)
first get the bounding rectangles of the polygons using ClipperLib.JS.BoundsOfPath() or ClipperLib.JS.BoundsOfPaths(). If the bounding rectangles are not in collision, there is no need to run the intersection operation at all. This check is very fast, because it just takes the min/max of x and y.
If the polygons are static (i.e. their geometry/point data doesn't change during the animation), you can lighten them, get the bounds of the paths, and add the polygons to Clipper before the animation starts. Then, during each frame, only minimal effort is needed to get the actual intersections.
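The bounding-rectangle pre-check is simple enough to sketch without the library (Clipper's BoundsOfPath does the same min/max work); points here use Clipper's {X, Y} convention:

```javascript
// Axis-aligned bounding rectangle of a polygon path of {X, Y} points.
function bounds(path) {
  let minX = Infinity, minY = Infinity, maxX = -Infinity, maxY = -Infinity;
  for (const p of path) {
    minX = Math.min(minX, p.X); maxX = Math.max(maxX, p.X);
    minY = Math.min(minY, p.Y); maxY = Math.max(maxY, p.Y);
  }
  return { minX, minY, maxX, maxY };
}

// Two polygons can only collide if their bounding rectangles overlap,
// so a negative result here lets you skip the expensive intersection.
function boundsOverlap(a, b) {
  return a.minX <= b.maxX && b.minX <= a.maxX &&
         a.minY <= b.maxY && b.minY <= a.maxY;
}
```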
EDIT2:
If you are worried about the framerate, you could consider using an experimental floating point (double) Clipper, which is 4.15x faster than IntPoint version and when big integers are needed in IntPoint version, the float version is 8.37x faster than IntPoint version. The final speed is actually a bit higher because IntPoint Clipper needs that coordinates are first scaled up (to integers) and then scaled down (to floats) and this scaling time is not taken into account in the above measurements. However float version is not fully tested and should be used with care in production environments.
The code of experimental float version: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/clipper_unminified_6.1.3.4b_fpoint.js
Demo: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/main_demo3.html
Playground: http://jsbin.com/sisefo/1/edit?html,javascript,output
EDIT3:
If you don't have polygon point coordinates for your objects and the objects are bitmaps (e.g. png/canvas), you first have to trace the bitmaps, e.g. using the Marching Squares algorithm. One implementation is at
https://github.com/sakri/MarchingSquaresJS.
That gives you an array of outline points, but because the array contains a huge number of unneeded points (e.g. a straight line can be represented by just its start and end points), you can reduce the point count using e.g. ClipperLib.JS.Lighten() or http://mourner.github.io/simplify-js/.
After these steps you have a very light polygonal representation of your bitmap objects, which is fast to run through the intersection algorithm.
You can create bitmaps that indicate the area occupied by your objects in pixels. If there is intersection between the bitmaps, then there is a collision.
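A sketch of that bitmap idea (the {x, y, w, h, mask} object shape is my assumption): each object carries a row-major occupancy grid of 0/1 pixels plus its position, and two objects collide when any occupied pixel overlaps:

```javascript
// True when any occupied pixel of a overlaps an occupied pixel of b.
// Only the rectangle where the two bitmaps overlap is scanned.
function masksCollide(a, b) {
  const x0 = Math.max(a.x, b.x), y0 = Math.max(a.y, b.y);
  const x1 = Math.min(a.x + a.w, b.x + b.w);
  const y1 = Math.min(a.y + a.h, b.y + b.h);
  for (let y = y0; y < y1; y++) {
    for (let x = x0; x < x1; x++) {
      if (a.mask[(y - a.y) * a.w + (x - a.x)] &&
          b.mask[(y - b.y) * b.w + (x - b.x)]) return true;
    }
  }
  return false;
}
```

In a canvas setting the masks could be derived from the alpha channel of getImageData, at the cost of memory and a per-pixel scan; the polygonal approach above trades precision for speed.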
I'm currently working on a project that involves creating various kind of "rings" using Three.js.
My "rings" are generated using an algorithm I wrote myself. Basically I define 4 sections positioned at π, π/2, 3π/2 and 2π and then interpolate a ring from those sections using quadratic bezier curves. So far this works pretty well. What I would like to be able to do next, is to programmatically determine which vertices are on the "upper surface" of the ring.
How would I be able to achieve something like that?
The 4 shapes on the left are my 4 sections. The ring on the right is generated using those 4 sections
Well, there are several ways to do this; to plan an optimal one we'd have to know more about your scenario. But besides that, why not:
go by vertex index, if you have the same number of vertices for every section (like, every vertex whose index modulo the per-section vertex count is 2 or 3 is on the outside)
calculate the distance from each vertex to the ring's center and see if it's over a certain threshold (i.e. the vertices further away are visible)
figure out whether the vertex normal points in roughly the same direction as the radius/distance you calculated
Up to you... if you're not satisfied, a more thorough explanation of your scenario might help a bit more.
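A sketch of the distance-threshold suggestion from the list above (the {x, y, z} vertex format and the threshold are assumptions; with Three.js you'd pull these values out of the geometry):

```javascript
// Keep only the vertices further from the ring's centre than `threshold`.
function verticesBeyond(vertices, center, threshold) {
  return vertices.filter(v => {
    const dx = v.x - center.x, dy = v.y - center.y, dz = v.z - center.z;
    return Math.sqrt(dx * dx + dy * dy + dz * dz) > threshold;
  });
}
```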
I am creating a piece of JavaScript code where it's necessary to identify every polygon created by a number of randomly generated intersecting lines. The following screenshot will give a better explanation of what I'm talking about:
Now, I need to calculate the area of each polygon and return the largest area. The approach I'm taking is to identify every intersection (denoted with red dots) and treat them as a vertex of whichever polygon(s) they belong to. If I can somehow identify which polygon(s) each vertex/intersection belongs to, then arrange the vertices of each polygon in a clockwise direction then it would be simple to apply the shoelace theorem to find the area of each polygon.
However, I'm afraid that I'm completely lost and have tried various (failed) methods to achieve this. What is the best way to compile a list of clockwise-arranged vertices for each polygon? I'm working on acquiring which segments are associated with every given intersection, and I think this is a step in the right direction but I don't know where to go from there. Does this require some vector work?
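For reference, the shoelace step mentioned above is the easy part once the vertices are in order (clockwise or counter-clockwise both work if you take the absolute value); a sketch:

```javascript
// Shoelace formula: area of a simple polygon given ordered vertices.
function polygonArea(vertices) {
  let sum = 0;
  for (let i = 0; i < vertices.length; i++) {
    const a = vertices[i];
    const b = vertices[(i + 1) % vertices.length];
    sum += a.x * b.y - b.x * a.y;
  }
  return Math.abs(sum) / 2;
}
```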
I can think of one possibility. Here I've labeled each of the vertices.
I'm assuming that if you know the lines involved and their intersections, you can find all the line segments that intersect at a particular point. So let's start with a particular point, say K, and a directed segment, IK. Now we have four directed segments that lead from the end of that: KI, KJ, KL, and KM. We are interested only in the two that are closest to, but not on, the line KI. Let's focus on KM, although you can do the same thing with KJ.
(Note that if there are more than two lines intersecting at the point, we can still find the two that are closest to the line, generally one forming a positive angle with the initial segment, the other a negative one.)
We notice that IKM is a positive angle, so we examine the segments containing M and choose the one forming the smallest positive angle with KM, in this case MF. We do this again at F (although there are only two choices there) to get FG, then GH, and then HI, which completes one polygon: the hexagon IKMFGH.
Going back to our original segment of IK, we look at our other smallest angle, IKJ, and do a similar process to find the triangle IKJ. We have now found all the polygons containing IK.
Then, of course, you do this again for each other segment. You will need to remove duplicates, or be smarter about not continuing to analyze a path when you can see it will be a duplicate. (Each angle will be in at most one polygon, so if you see an angle that has already been recorded, you can skip it.)
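A sketch of that walk in JavaScript (the graph format, nodes as {x, y} plus an adjacency list of vertex indices, is my assumption; it also assumes every vertex has at least two neighbours, as in the line arrangement above):

```javascript
// Trace one face: from directed edge (u, v), repeatedly pick the
// neighbour of v that makes the smallest positive turn from the
// reversed edge, until the walk returns to the starting vertex.
function traceFace(nodes, adj, startU, startV) {
  const face = [startU];
  let u = startU, v = startV;
  while (v !== startU) {
    face.push(v);
    const aRev = Math.atan2(nodes[u].y - nodes[v].y, nodes[u].x - nodes[v].x);
    let best = -1, bestTurn = Infinity;
    for (const w of adj[v]) {
      if (w === u) continue;
      const aW = Math.atan2(nodes[w].y - nodes[v].y, nodes[w].x - nodes[v].x);
      let turn = aW - aRev;
      while (turn <= 0) turn += 2 * Math.PI; // smallest positive angle
      if (turn < bestTurn) { bestTurn = turn; best = w; }
    }
    u = v;
    v = best;
  }
  return face;
}
```

Running this for every directed edge and de-duplicating the resulting vertex cycles yields all the faces; each cycle can then go straight into the shoelace formula to get its area.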
This would not work if your polygons weren't convex, but if they are made from lines cut through a rectangle, I'm pretty sure they will always be convex.
I haven't actually tried to code this, but I'm pretty sure it will work.
Two methods I can think of that are probably not the most efficient but should help out:
You can figure out the set of points that make up the polygon containing an arbitrary point by drawing an imaginary line from that point to each other point; the endpoints whose lines don't intersect any lines in your image are the vertices of the convex polygon you care about. The problem with this method is that I can't think of a particularly good way to reliably enumerate all of the polygons (since you only care about the largest, perhaps random/periodic sampling will suffice?).
For each possible polygon, check whether there is any line segment lying within it (a segment that crosses 2 edges of the polygon), and if there is, remove that polygon from your set. At the end you should only be left with the ones you care about. This method is very slow, though.
If my explanations were unclear I can update with a couple pictures to help explain.
Hope this helps!
I'm making a 3D game, and I was told here and here that I should always perform collision detection on the server side. Now, the problem is that I don't know how! Collision between players on a flat plane is easy, because players are represented by a cylinder, but how do I do collision detection when the map itself is a model with hills and so on? Is it possible to somehow import a 3D model into a Node.js server? And then, say I do have the model imported, are there some collision detection libraries, or will I have to do the math myself? (My last resort would be converting the models to JSON (they already are) and then reading in all the vertices, but that would be a pain.)
The models are made in Blender and can be converted to .obj, .dae or .js (JSON) (this is what I currently use).
If there are no such modules, that allow the models to be imported and collisions to be checked against them, I guess I could make it myself. In that case, please do provide some hints, or further reading on how to test if a point is colliding with an object defined by some vertices.
EDIT: The "map" will have objects on it, like trees and houses. I was thinking, maybe even caves. It will also have hills and what-not, but a heightmap is out of the question I guess.
If you're going for a do-it-yourself solution, I'd suggest the following.
Terrain
Preferably have the terrain be a grid, with just each vertex varying in height. The result is a bunch of quads, each with the same dimensions in the (x, y) plane; the vertices have varying z values to make up the slopes. The grid arrangement lets you easily determine which triangles you have to test for intersection when performing collision checks.
If that's not an option (terrain pre-built in modeling package), you'll probably want to use an in-memory grid anyway, and store a list of triangles that are (partially) inside each cell.
Checking for collision
The easiest approach would be to treat your actors as points in space. You'd then determine the height of the terrain at such a point as follows:
Determine grid cell the point is in
Get triangles associated with cell
Get the triangle containing the point (in the (x, y) plane)
Get height of triangle/plane at point
In the case of the "pure grid" terrain, step 3 involves just a single point/plane check to determine which of the 2 triangles you need. Otherwise, you'd have to do a point-in-triangle check for each triangle (you can probably reuse point/plane checks, or use a BSP tree for further optimization).
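Step 3 (point-in-triangle in the (x, y) plane) can be done with three sign tests; a sketch, assuming {x, y} points:

```javascript
// True when p lies inside (or on the boundary of) triangle abc:
// p must be on the same side of all three directed edges.
function pointInTriangle2D(p, a, b, c) {
  const side = (p1, p2, pt) =>
    (p2.x - p1.x) * (pt.y - p1.y) - (p2.y - p1.y) * (pt.x - p1.x);
  const d1 = side(a, b, p);
  const d2 = side(b, c, p);
  const d3 = side(c, a, p);
  const hasNeg = d1 < 0 || d2 < 0 || d3 < 0;
  const hasPos = d1 > 0 || d2 > 0 || d3 > 0;
  return !(hasNeg && hasPos); // mixed signs means outside
}
```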
Step 4 pseudo-code:
point = [ x, y, z ] // actor position
relativePt = point - triangle.vertices[0]
normal = triangle.plane.normal
distance = relativePt DOT normal // (this is a dot-product)
intersection = [
point.x,
point.y,
point.z - distance / normal.z ]
This calculates the intersection of the ray straight up/down from the actor position with the triangle's plane. So that's the height of the terrain at that (x, y) position. Then you can simply check if the actor's position is below that height, and if so, set its z-coordinate to the terrain height.
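A runnable version of the pseudo-code above (it assumes a non-vertical triangle, i.e. normal.z != 0; the normal does not need to be unit length):

```javascript
// Height of the triangle's plane at the actor's (x, y) position:
// intersect the vertical line through `point` with the plane through
// triVertex0 with the given normal.
function terrainHeightAt(point, triVertex0, normal) {
  const rel = {
    x: point.x - triVertex0.x,
    y: point.y - triVertex0.y,
    z: point.z - triVertex0.z,
  };
  const distance = rel.x * normal.x + rel.y * normal.y + rel.z * normal.z;
  return point.z - distance / normal.z;
}
```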
Objects (houses, trees, ... )
Give each object 1 or more convex collision volumes that together roughly correspond to its actual shape (see this page on UDN to see how the Unreal Engine works with collision hulls for objects).
You will have to use some spatial subdivision technique to quickly determine which of all world objects to check for collision when moving an actor. If most movement is in 2 dimensions (for example, just terrain and some houses), you could use a simple grid or a quadtree (which is like a grid with further subdivisions). A 3-dimensional option would be the octree.
The point of the spatial subdivision is the same as with the terrain organized as a grid: to associate objects with cells/volumes in space, so you can determine a small set of objects to check for a collision instead of checking against all objects.
Checking for collision
Get the "potential collision objects" using the spatial subdivision technique you've chosen; e.g. get the objects in the actor's current grid cell.
For each convex collision volume of each object:
Using the separating axis theorem, determine if the actor intersects with the collision volume. See my answer to a different post for some implementation hints (that question is about the 2D case, but the code largely applies; just read "edge" as "plane").
If collision occurs, use the normal of one of the "offending planes" to move the actor to just next to that plane.
Note: In this case, model your actor's collision volume as a box or 3-sided cylinder or so.
Also, you may want to consider building a BSP tree for each object and using axis-aligned bounding boxes for your actors instead; if your objects have more complicated collision volumes, that will be faster. But that's getting a little beyond the scope of this answer.
Final thoughts
Well, this is already a really long answer, and these are just some broad strokes. Collision is a pretty broad topic, because there are so many different approaches you can take depending on your needs.
For example, I haven't covered "trace collision", which is detecting collision when an actor moves. Instead, the above suggestion on objects checks if an actor is inside an object. This may or may not suit your needs.
I also just realized I haven't covered actor-vs-actor collision. Perhaps that's best done by colliding 2 circles in the (x, y) plane, with an additional check to see whether their vertical spans intersect.
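That actor-vs-actor check is cheap enough to sketch here (the actor shape { x, y, z, radius, height } is an assumption):

```javascript
// Two cylinder-shaped actors collide when their circles overlap in the
// (x, y) plane AND their vertical spans [z, z + height] intersect.
function actorsCollide(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y;
  const r = a.radius + b.radius;
  const circlesOverlap = dx * dx + dy * dy <= r * r;
  const verticalOverlap = a.z <= b.z + b.height && b.z <= a.z + a.height;
  return circlesOverlap && verticalOverlap;
}
```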
Anyway, I really gotta wrap this up. Hopefully this will at least point you in the right direction.