Reduce 2D irregular shapes to regular primitives - JavaScript

I'm looking for an algorithm or tried & tested method for analyzing irregular polygons and reducing them to primitives (squares, rectangles, trapeziums) - a way of recursively looking over shapes to determine the best fit of regular polygons.
See image below.
The black shapes are the irregular polygons and the blue depicts the desired regular shape, which fits inside. The left example should be straightforward, but that's because it's a case of finding a rectangle that can fit in the largest shape. The polygons will be of an undetermined size (but let's say they have fewer than 32 sides). What I'm hoping for is to be able to break polygons up into several regular ones - which is where I'm a little lost.
Sadly, I've no code at this point as I'm stuck on the best way forward. The script will be done in pure JavaScript. This isn't homework :)

First of all, you need to check whether your polygon is convex or concave. If it is concave, you need to view it as several convex polygons "put together" and handle them separately (it is easy to imagine a large pair of scissors cutting the polygon into several smaller polygons). When this is done, you either have a single polygon or several polygons.
For each polygon, calculate its gravity point (centroid) G; then each (P(i), P((i + 1) mod n), G) forms a trivial shape, a triangle. These triangles will solve your problem.
If you need four-cornered shapes, then four consecutive points form a quadrilateral. But this approach might leave you with a smaller polygon with fewer corners in the middle of the main polygon, which will have to be handled separately.
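A minimal sketch of the centroid-fan idea above, assuming the polygon is convex and given as an array of {x, y} vertices in order (the vertex average is used as the "gravity point", which is guaranteed to lie inside a convex polygon):

    // Fan triangulation of a convex polygon given as ordered {x, y} vertices.
    function fanTriangulate(polygon) {
      var n = polygon.length;

      // Vertex average as the "gravity point"; inside the polygon when it is convex.
      var g = polygon.reduce(function (acc, p) {
        return { x: acc.x + p.x / n, y: acc.y + p.y / n };
      }, { x: 0, y: 0 });

      var triangles = [];
      for (var i = 0; i < n; i++) {
        triangles.push([polygon[i], polygon[(i + 1) % n], g]);
      }
      return triangles;
    }

    // Example: a unit square becomes four triangles around its centre.
    console.log(fanTriangulate([
      { x: 0, y: 0 }, { x: 1, y: 0 }, { x: 1, y: 1 }, { x: 0, y: 1 }
    ]));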

Related

Hidden and visible surface rendering in javascript by overlapping colours

I am trying to implement a hidden surface determination algorithm in my 3D renderer. I have found very good approaches, such as the Z-buffer or Warnock's algorithm. However, they are extremely resource-consuming. So I wondered: why not use opaque overlapping colours, with which I could get the same visual results? I would like to receive some feedback and opinions before going further, and in case it turns out to be a good solution, of course, to use this post as a way of sharing it with the community. The method would basically come down to: 1) ordering all polygons in the scene by their Z coordinate, and 2) rendering all of them in that order, using opaque colours. The image/view/visual effect would be the same, without having to resort to a costly pixel-by-pixel computational process.
(Example: say I have two intersecting polygons (P1, P2). Given that the viewer's closest Z coordinate is 0, if P1z=10 and P2z=3, then P2 is drawn on top of P1. When drawn, P2's colour will cover P1's edges and colours in the 2D XY intersection between the two polygons.) What cons do you think this could have? Do you think it would solve the problem?
PS: I do not use polygonal meshes; the 3D objects to be processed are simple convex and concave figures.
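For what it's worth, a minimal sketch of the ordering/drawing step described above, assuming each polygon object carries a representative z value, an array of projected 2D points, and a fill colour (all names here are illustrative, not an existing API):

    // Sketch: sort polygons far-to-near, then fill them opaquely in that order.
    function renderPainterStyle(ctx, polygons) {
      polygons
        .slice()                                          // don't mutate the caller's array
        .sort(function (a, b) { return b.z - a.z; })      // farthest first
        .forEach(function (poly) { drawPolygon(ctx, poly); });
    }

    function drawPolygon(ctx, poly) {
      ctx.beginPath();
      poly.points.forEach(function (p, i) {
        if (i === 0) ctx.moveTo(p.x, p.y); else ctx.lineTo(p.x, p.y);
      });
      ctx.closePath();
      ctx.fillStyle = poly.color;                         // opaque colour, as proposed
      ctx.fill();
    }

Note that a single z per polygon is exactly where the caveat below about intersecting polygons bites: interpenetrating shapes cannot be ordered correctly by one depth value.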
Polygons can intersect (even partially), and determining which part of one polygon is in front of the other is a very expensive computation.

How to detect collision in not easily polygon divided body

Say we are coding something in JavaScript and we have a body, say an apple, and want to detect the collision of a rock being thrown at it: that's easy because we can simply treat the apple as a circle.
But what if we have, for example, a "very complex" fractal? Then there is no polygon similar to it, and we also cannot break it into smaller polygons without a herculean amount of effort. Is there any way to detect perfect collision in this case, as opposed to making something that "kind of" works, like treating the fractal as a polygon (not perfect because collisions will be detected even in blank spaces)?
You can use a physics editor
https://www.codeandweb.com/physicseditor
It'll work with most game engines. You'll have to figure out how to make it work in JS.
Here's a tutorial from the site using TypeScript, which is related to JS:
http://www.gamefromscratch.com/post/2014/11/27/Adventures-in-Phaser-with-TypeScript-Physics-using-P2-Physics-Engine.aspx
If you have the coordinates of the polygons, you can compute the intersection of subject and clip polygons using Javascript Clipper.
The question doesn't provide much information about the collision objects, but usually anything can be represented as polygon(s) to a certain precision.
EDIT:
It should be fast enough for real-time rendering (depending on the complexity of the polygons). If the polygons are complex (many self-intersections and/or many points), there are several methods to speed up the intersection detection:
Reduce the point count using ClipperLib.JS.Lighten(). It removes points that have no effect on the outline (e.g. duplicate points and points on an edge).
First get the bounding rectangles of the polygons using ClipperLib.JS.BoundsOfPath() or ClipperLib.JS.BoundsOfPaths(). If the bounding rectangles are not in collision, there is no need to run the intersection operation at all (see the sketch after this list). This function is very fast, because it just takes the min/max of x and y.
If the polygons are static (i.e. their geometry/point data doesn't change during the animation), you can lighten them, get the bounds of the paths and add the polygons to Clipper before the animation starts. Then, during each frame, only minimal work is needed to get the actual intersections.
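A hedged sketch of combining the bounds pre-check with the intersection call, assuming Javascript Clipper is loaded, paths are arrays of integer {X, Y} points, and the bounds object exposes left/top/right/bottom fields (check your Clipper version's documentation for the exact shape):

    // Sketch: cheap bounds rejection first, full Clipper intersection only if needed.
    function polygonsCollide(subjectPath, clipPath) {
      var b1 = ClipperLib.JS.BoundsOfPath(subjectPath, 1);
      var b2 = ClipperLib.JS.BoundsOfPath(clipPath, 1);
      if (b1.right < b2.left || b2.right < b1.left ||
          b1.bottom < b2.top || b2.bottom < b1.top) {
        return false;                        // bounding boxes don't overlap
      }

      var cpr = new ClipperLib.Clipper();
      cpr.AddPath(subjectPath, ClipperLib.PolyType.ptSubject, true);
      cpr.AddPath(clipPath, ClipperLib.PolyType.ptClip, true);
      var solution = new ClipperLib.Paths();
      cpr.Execute(ClipperLib.ClipType.ctIntersection, solution,
                  ClipperLib.PolyFillType.pftNonZero, ClipperLib.PolyFillType.pftNonZero);
      return solution.length > 0;             // non-empty intersection means collision
    }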
EDIT2:
If you are worried about the framerate, you could consider using an experimental floating-point (double) Clipper, which is 4.15x faster than the IntPoint version, and when big integers are needed in the IntPoint version, the float version is 8.37x faster. The final speed difference is actually a bit higher, because the IntPoint Clipper needs coordinates to be scaled up (to integers) and then scaled down (to floats), and this scaling time is not taken into account in the above measurements. However, the float version is not fully tested and should be used with care in production environments.
The code of experimental float version: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/clipper_unminified_6.1.3.4b_fpoint.js
Demo: http://jsclipper.sourceforge.net/6.1.3.4b_fpoint/main_demo3.html
Playground: http://jsbin.com/sisefo/1/edit?html,javascript,output
EDIT3:
If you don't have polygon point coordinates for your objects and the objects are bitmaps (e.g. PNG/canvas), you first have to trace the bitmaps, e.g. using the Marching Squares algorithm. One implementation is at
https://github.com/sakri/MarchingSquaresJS.
That gives you an array of outline points, but because the array contains a huge number of unneeded points (e.g. straight lines can easily be represented by just a start and an end point), you can reduce the point count using e.g. ClipperLib.JS.Lighten() or http://mourner.github.io/simplify-js/.
After these steps you have very light polygonal representations of your bitmap objects, which are fast to run through the intersection algorithm.
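A rough sketch of the point-reduction step using simplify-js; traceBitmapOutline is a hypothetical placeholder for whatever Marching Squares tracer you use, since the exact API differs between implementations:

    // Sketch: trace the bitmap (hypothetical helper), then thin the outline with simplify-js.
    var rawOutline = traceBitmapOutline(sourceCanvas);   // placeholder for your tracer
    var lightOutline = simplify(rawOutline, 1.0, true);  // tolerance in pixels
    console.log('points before:', rawOutline.length, 'after:', lightOutline.length);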
You can create bitmaps that indicate the area occupied by your objects in pixels. If there is an intersection between the bitmaps, then there is a collision.
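One way to sketch that pixel-mask comparison, assuming each object has been drawn opaquely on its own offscreen canvas of the same size (the names here are illustrative):

    // Sketch: compare the alpha channels of two same-sized offscreen canvases.
    function bitmapsCollide(canvasA, canvasB) {
      var w = canvasA.width, h = canvasA.height;
      var a = canvasA.getContext('2d').getImageData(0, 0, w, h).data;
      var b = canvasB.getContext('2d').getImageData(0, 0, w, h).data;
      for (var i = 3; i < a.length; i += 4) {   // every 4th byte is a pixel's alpha
        if (a[i] > 0 && b[i] > 0) return true;  // both objects cover this pixel
      }
      return false;
    }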

Vectorize Cubic Bezier for 2D WebGL

I am building a 2D cad-like application in Javascript using WebGL and need to allow users to draw cubic bezier curves. My problem is that, as far as I know, WebGL doesn't have any easy way to draw anything but lines and filled triangles.
What makes it more complicated is that I want 'X' number of pixels per segment, and thus will not be able to just iterate through every 1% along the line.
I imagine that this would go something like:
Calculate the total length of the bezier curve
Divide that number by the segments per pixel
Iterate through the bezier curve by the previous number
This is an extremely high performance situation (hundreds of curves at a time), so I can't afford to use a constant number of segments for every curve.
So, my questions are:
Is there any native way to draw a cubic bezier in WebGL?
If not, can anyone help me with the calculations mentioned above, particularly the total length of a cubic bezier curve?
There's no direct way to tell WebGL to "draw a Curve". Just triangles (and lines).
So I think your general approach (Estimate length, divide for desired smoothness, walk the curve) will be good.
You could use a Vertex Shader to do some of the calculations, though. Depending on your data set, and how much it changes, it might be a win.
In WebGL, the vertex shader takes a list of points as input, and produces a same-sized list of points as output, after some transformation. The list cannot change size, so you'll need to figure out the number of subdivisions up in JS land.
The vertex shader could calculate the curve positions, if you assigned each point an attribute "t" between 0 and 1 for the parametric version of the Bezier. Might be handy.
As for Bezier length, if we describe it as (p0, p1, p2, p3) where p0 and p3 are the end points and p1 and p2 are the control points, we can quickly say that the Bezier length is at least dist(p0,p3), and at most dist(p0,p1)+dist(p1,p2)+dist(p2,p3).
You could make a fast guess based on that.
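For example, a quick sketch of a bound-based estimate: the true length lies between the chord and the control-polygon length, so averaging the two gives a cheap guess that can drive the "pixels per segment" subdivision count (the 10-pixel figure below is just an illustrative choice):

    function dist(a, b) {
      return Math.sqrt((b.x - a.x) * (b.x - a.x) + (b.y - a.y) * (b.y - a.y));
    }

    function estimateCubicBezierLength(p0, p1, p2, p3) {
      var chord = dist(p0, p3);                                // lower bound
      var poly  = dist(p0, p1) + dist(p1, p2) + dist(p2, p3);  // upper bound
      return (chord + poly) / 2;                               // cheap estimate
    }

    // Example: roughly one segment per 10 pixels of estimated curve length.
    var p0 = {x: 0, y: 0}, p1 = {x: 40, y: 120}, p2 = {x: 160, y: 120}, p3 = {x: 200, y: 0};
    var segments = Math.max(1, Math.ceil(estimateCubicBezierLength(p0, p1, p2, p3) / 10));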
A more thorough discussion for numerical solution is at https://math.stackexchange.com/questions/338463/length-of-bezier-curve-with-simpsons-rule.
There's no closed form solution.
Possibly of interest, I rendered a little Bezier animation for a blog post
I just wanted to add that clearly, we've been rasterizing Bézier curves for a long time, so in addition to the steve.hollasch.net link (http://steve.hollasch.net/cgindex/curves/cbezarclen.html) I pulled from a page linked in #davidvanbrink's answer, I figured there ought to be other resources for this. Obviously WebGL/OpenGL adds another dimension to finding the appropriate resolution, but this cannot be something that hasn't been attempted before. I think the following links might prove useful.
http://en.wikipedia.org/wiki/NURBS (Non-uniform rational B-spline)
http://antigrain.com/research/adaptive_bezier/index.html (Adaptive Subdivision of Bezier Curves: An attempt to achieve perfect result in Bezier curve approximation)
http://www.neuroproductions.be/experiments/nurbs/
http://threejs.org/examples/webgl_geometry_nurbs.html

How would I implement the shoelace theorem to find the areas of multiple convex polygons created from intersecting lines?

I am creating a piece of javascript code where it's necessary to identify every polygon created from a number of randomly generated intersecting lines. The following screenshot will give a better explanation of what I'm talking about:
Now, I need to calculate the area of each polygon and return the largest area. The approach I'm taking is to identify every intersection (denoted with red dots) and treat each one as a vertex of whichever polygon(s) it belongs to. If I can somehow identify which polygon(s) each vertex/intersection belongs to and then arrange the vertices of each polygon in a clockwise direction, it would be simple to apply the shoelace theorem to find the area of each polygon.
However, I'm afraid that I'm completely lost and have tried various (failed) methods to achieve this. What is the best way to compile a list of clockwise-arranged vertices for each polygon? I'm working on determining which segments are associated with each intersection, and I think this is a step in the right direction, but I don't know where to go from there. Does this require some vector work?
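For reference, the shoelace step itself is the easy part. A minimal version, assuming the vertices of one polygon have already been collected in a consistent clockwise or counter-clockwise order:

    // Shoelace area for one polygon given as ordered {x, y} vertices.
    function shoelaceArea(vertices) {
      var sum = 0;
      for (var i = 0; i < vertices.length; i++) {
        var a = vertices[i];
        var b = vertices[(i + 1) % vertices.length];
        sum += a.x * b.y - b.x * a.y;
      }
      return Math.abs(sum) / 2;
    }

    // Example: a 2 x 3 rectangle has area 6.
    console.log(shoelaceArea([{x: 0, y: 0}, {x: 2, y: 0}, {x: 2, y: 3}, {x: 0, y: 3}]));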
I can think of one possibility. Here I've labeled each of the vertices.
I'm assuming that if you know the lines involved and their intersections, you can find all the line segments that intersect at a particular point. So let's start with a particular point, say K, and a directed segment, IK. Now we have four directed segments that lead from the end of that: KI, KJ, KL, and KM. We are interested only in the two that are closest to, but not on, the line KI. Let's focus on KM, although you can do the same thing with KJ.
(Note that if there are more than two lines intersecting at the point, we can still find the two that are closest to the line, generally one forming a positive angle with the initial segment, the other a negative one.)
We notice that IKM is a positive angle, and then examine the segments containing M, choosing the one with the smallest positive angle with KM, in this case MF. Do this again at F (although there are only two choices here) to get FG, then GH, and then HI, which completes one polygon, the hexagon IKMFGH.
Going back to our original segment of IK, we look at our other smallest angle, IKJ, and do a similar process to find the triangle IKJ. We have now found all the polygons containing IK.
Then of course you do this again for each other segment. You will need to remove duplicates, or be smarter about not continuing to analyze a path when you can see it will be a duplicate. (Each angle will be in at most one polygon, so if you see an angle already recorded, you can skip it.)
This would not work if your polygons weren't convex, but if they are made from lines cut through a rectangle, I'm pretty sure they will always be convex.
I haven't actually tried to code this, but I'm pretty sure it will work.
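In the same untested spirit, here is a rough sketch of the angle-selection step: given the vertex we came from, the current vertex, and the current vertex's neighbouring intersection points, pick the next vertex by smallest positive angle. The exact sign convention only determines whether you walk each polygon clockwise or counter-clockwise, so treat this as illustrative:

    // Sketch: from prev -> current, pick the neighbour making the smallest positive
    // angle with the incoming segment, measured at the current vertex.
    function nextVertex(prev, current, neighbours) {
      var backAngle = Math.atan2(prev.y - current.y, prev.x - current.x);
      var best = null, bestTurn = Infinity;
      neighbours.forEach(function (n) {
        var outAngle = Math.atan2(n.y - current.y, n.x - current.x);
        var turn = (outAngle - backAngle + 2 * Math.PI) % (2 * Math.PI);
        if (turn > 1e-9 && turn < bestTurn) {   // skip the segment we arrived on
          bestTurn = turn;
          best = n;
        }
      });
      return best;
    }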
Two methods I can think of that are probably not the most efficient but should help out:
You can figure out the set of points that make up the polygon containing an arbitrary point by drawing an imaginary line from the arbitrary point to each other point; the ones whose line does not intersect any lines in your image are the vertices that make up the convex polygon you care about. The problem with this method is that I can't think of a particularly good way to reliably get all of the polygons (since you only care about the largest, perhaps random/periodic sampling will suffice?).
For each possible polygon, check whether there is any line segment that lies within it (a line segment that bisects two edges of the polygon) and, if there is, remove that polygon from your set. At the end you should be left with only the ones you care about. This method is very slow, though.
If my explanations were unclear I can update with a couple pictures to help explain.
Hope this helps!

How do I partition an oriented bounding box?

I am writing code that will build an oriented bounding box (OBB) tree for a (not necessarily convex) polygon in 2 dimensions.
So far, I'm able to find the area-minimal OBB of a polygon by finding its convex hull and using rotating calipers on the hull.
The picture below is an example of this. The yellow-filled polygon with red lines and red points depicts the original polygon. The convex hull is shown in blue with black lines and the obb is shown as purple lines.
(Edit) As requested: Interactive Version - tested only in Chrome
Now I want to extend my code to build an OBB tree, instead of just an OBB. This means I have to cut the polygon, and compute new OBBs for each half of the polygon.
The recommended way to do this seems to be to cut the polygon by cutting the OBB in half. But if I cut the OBB through the middle along either of its axes, it looks like I would have to create new vertices on the polygon (otherwise, how do I find the convex hull of each partition?).
Is there a way to avoid adding vertices like this?
If not, what is the easiest way to do it (with respect to implementation difficulty)? What is the most runtime-efficient way?
Here's an example of a concave polygon that we want to create an OBB tree for:
In order to split it into a new set of concave polygons, we can simply cut our current polygon by cutting the bounding box down the middle and adding new 'intersection' vertices as appropriate:
This can be done in O(vertices) time because we can simply iterate through all of the edges, and add an intersection vertex if the edge crosses the red splitting line.
The polygon can then be divided in terms of these intersection vertices to get a new set of smaller (but still possibly concave) polygons. There will be at least two such polygons (one per side of the red line) but there may be more. In this next picture, the new polygons have been colored for emphasis:
Recursively computing the oriented bounding boxes for these smaller polygons gives the desired result. For example, here are the boxes from recursion depth 2:
Hopefully this is clear enough to help someone who's struggling the same way I was!
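To make the edge-crossing pass concrete, here is a rough sketch of inserting the 'intersection' vertices along the splitting line (given here as a point on the line and a normal vector); regrouping the result into the separate sub-polygons is a second bookkeeping step not shown:

    // Sketch: walk the edges and insert a marked vertex wherever an edge crosses
    // the splitting line.
    function insertSplitVertices(polygon, linePoint, lineNormal) {
      function side(p) {
        return (p.x - linePoint.x) * lineNormal.x + (p.y - linePoint.y) * lineNormal.y;
      }
      var out = [];
      for (var i = 0; i < polygon.length; i++) {
        var a = polygon[i];
        var b = polygon[(i + 1) % polygon.length];
        var sa = side(a), sb = side(b);
        out.push(a);
        if ((sa > 0 && sb < 0) || (sa < 0 && sb > 0)) {
          var t = sa / (sa - sb);              // parameter where the edge crosses the line
          out.push({
            x: a.x + t * (b.x - a.x),
            y: a.y + t * (b.y - a.y),
            isSplit: true                      // mark it as an 'intersection' vertex
          });
        }
      }
      return out;
    }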
I'm not really sure this is what you need without further context, but here goes...
Subdividing a concave polygon into a set of convex polygons
In my comment above I suggested recursively subdividing the concave polygon in order to obtain a set of convex polygons instead. One (common) approach is the following:
If the polygon is convex, stop. (add the polygon to an array, I suppose)
Select an unmarked edge of the polygon. Mark the edge.
Split the polygon across the edge (actually: the infinite line coinciding with the edge).
Recursively repeat this algorithm for both result polygons (if non-empty).
Note: This is exactly how a BSP tree is built, except that in the algorithm above we're not building tree nodes and storing polygons in them. Maybe a BSP-only approach would solve your problem as well (instead of using OBBs).
Testing a polygon for convexity (step 1)
For each edge, classify each vertex as on, in front of, or behind the edge. All vertices should be on or in front of the edge. If not (at least one vertex is behind the edge), the polygon is concave. For details on the 'classifying' part, see my answer to a different question, which does this as well.
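A small sketch of an equivalent test, assuming vertices are given in order as {x, y} objects: instead of classifying every vertex against every edge, it checks that consecutive edges always turn in the same direction, which amounts to the same thing for a simple polygon:

    // Convexity test: consecutive edges of a simple polygon must all turn the same way.
    function isConvex(polygon) {
      var n = polygon.length;
      var sign = 0;
      for (var i = 0; i < n; i++) {
        var a = polygon[i], b = polygon[(i + 1) % n], c = polygon[(i + 2) % n];
        var cross = (b.x - a.x) * (c.y - b.y) - (b.y - a.y) * (c.x - b.x);
        if (cross !== 0) {
          if (sign === 0) sign = cross > 0 ? 1 : -1;
          else if ((cross > 0 ? 1 : -1) !== sign) return false;  // turn direction changed
        }
      }
      return true;
    }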
The rest
Once you have the list of convex sub-polygons, you could generate OBBs for them as you've done in your original post. You would not have an OBB tree though...
With the subdividing, you are adding vertices (a concern in your question). Depending on your application, you may not need to use the subdivided polygons, though: if you were to use a BSP tree and only needed simple collision, you'd just traverse the tree, do some point/edge classifications, and not deal with any polygon vertices.
Anyway, not really sure what to recommend further since I don't know what you want your application to do, but hopefully this is of some help.
Edit: I just realized that maybe this is what you want to do: build a BSP tree and generate OBBs for each node, from the root to the leaf nodes. So the root node's OBB would contain the entire concave polygon, and leaf nodes only convex sub-polygons. I remember the original Doom engine did something similar (except with axis-aligned bounding boxes).
