Converting SVG path to polygon in JavaScript

I am trying to convert an SVG path to an SVG polygon in JavaScript. I found this function to crawl along the path and extract its coordinates:
var length = path.getTotalLength();
var p = path.getPointAtLength(0);
var stp = p.x + "," + p.y;
// Sample the path at 1px intervals along its length.
for (var i = 1; i < length; i++) {
    p = path.getPointAtLength(i);
    stp = stp + " " + p.x + "," + p.y;
}
This works, but it returns several hundred points for a polygon that originally has only six. How would I get only the necessary points? (All paths are straight lines, no curves.)

OK, got it: the function getPathSegAtLength() returns the index of the path segment at a given length. With that it's easy.
var len = path.getTotalLength();
var p = path.getPointAtLength(0);
var seg = path.getPathSegAtLength(0);
var stp = p.x + "," + p.y;
for (var i = 1; i < len; i++) {
    p = path.getPointAtLength(i);
    // Record a point only when we cross into a new segment.
    if (path.getPathSegAtLength(i) > seg) {
        stp = stp + " " + p.x + "," + p.y;
        seg = path.getPathSegAtLength(i);
    }
}
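Once the string is built, turning it into an actual element is one more step. A minimal sketch (the polygon is appended next to the original path; SVG elements must be created in the SVG namespace):
var polygon = document.createElementNS("http://www.w3.org/2000/svg", "polygon");
polygon.setAttribute("points", stp);
path.ownerSVGElement.appendChild(polygon);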

path.getPointAtLength() is good for rough purposes where you don't need both speed and quality. If you sample every pixel you get thousands of points, yet the quality is still low, because an SVG path can have sub-pixel detail (coordinate steps like 0.1 or 0.2).
If you want more precision by calling e.g. path.getPointAtLength(0.1), you easily get tens of thousands of points on complex paths and the process takes seconds or tens of seconds. After that you have to reduce the point count (https://stackoverflow.com/a/15976155/1691517), which again takes seconds, and the quality can still be low if the wrong points are removed.
A better technique is to first convert all path segments to cubic curves, e.g. using Raphael's path2curve(), and then use an adaptive method (http://antigrain.com/research/adaptive_bezier/) to convert the cubic segments to points; that way you get both speed and quality at the same time. There is no need to reduce points afterwards, because the adaptive process itself has parameters for adjusting the quality.
I have made a function that does all of this, and I'm going to publish it once it is sufficiently optimized for speed. The quality and reliability seem to be 100% after testing with thousands of random paths, and the speed is already significantly faster than path.getPointAtLength().
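The cubic-flattening half of that pipeline is small enough to sketch here. This assumes the path has already been normalised to cubic segments (e.g. with Raphael's Raphael.path2curve()) and uses the simple chord-distance flatness test from the Antigrain article rather than the full adaptive algorithm:
function flattenCubic(p0, p1, p2, p3, tolerance, out) {
    // How far do the control points stray from the p0-p3 chord?
    var dx = p3.x - p0.x, dy = p3.y - p0.y;
    var d1 = Math.abs((p1.x - p3.x) * dy - (p1.y - p3.y) * dx);
    var d2 = Math.abs((p2.x - p3.x) * dy - (p2.y - p3.y) * dx);
    if ((d1 + d2) * (d1 + d2) <= tolerance * (dx * dx + dy * dy)) {
        out.push({x: p3.x, y: p3.y}); // flat enough: emit the endpoint
        return;
    }
    // Not flat enough: split at t = 0.5 (de Casteljau) and recurse.
    function mid(a, b) { return {x: (a.x + b.x) / 2, y: (a.y + b.y) / 2}; }
    var p01 = mid(p0, p1), p12 = mid(p1, p2), p23 = mid(p2, p3);
    var p012 = mid(p01, p12), p123 = mid(p12, p23), p0123 = mid(p012, p123);
    flattenCubic(p0, p01, p012, p0123, tolerance, out);
    flattenCubic(p0123, p123, p23, p3, tolerance, out);
}
// Usage for one "C" segment; push the segment's start point first:
// var pts = [{x: 10, y: 10}];
// flattenCubic({x: 10, y: 10}, {x: 20, y: 0}, {x: 40, y: 0}, {x: 50, y: 10}, 0.25, pts);
A smaller tolerance yields more points, and the subdivision naturally spends them where the curve bends.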

To iterate over the segments, use something like this (note the property is numberOfItems, and the list is zero-indexed):
var segList = path.normalizedPathSegList; // or .pathSegList
for (var i = 0; i < segList.numberOfItems; i++) {
    var seg = segList.getItem(i);
    // ... inspect seg here ...
}
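For the original question (straight-line paths only), you can skip length sampling entirely and read the polygon's vertices straight off the segments. A minimal sketch, assuming the normalized list contains only M and L commands (which carry absolute x/y):
var points = [];
var segList = path.normalizedPathSegList;
for (var i = 0; i < segList.numberOfItems; i++) {
    var seg = segList.getItem(i);
    // Z (closepath) has no coordinates, so skip it.
    if (seg.pathSegTypeAsLetter === "M" || seg.pathSegTypeAsLetter === "L") {
        points.push(seg.x + "," + seg.y);
    }
}
var stp = points.join(" "); // six vertices in, six points out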
If you want to reduce the number of vertices, then you can use Simplify.js as described here.

Related

In PaperJS is it possible to set the length of a straight line path explicitly?

From what I can see, it looks like the only way to change the length of a single Path is to path.scale(x) it. However, I want to make the line's length oscillate slightly around its original length, which is difficult to do with scale: if you feed it random values, the length effectively takes a random walk and can end up far from its original value.
Is it possible to set the length of a line path explicitly, something like path.length = 10? According to the documentation it doesn't seem to be. Why not? And what would be the best way to achieve my desired outcome?
You can do this by noting that paths in Paper.js are not geometric shapes, even though they may have been specified as one. They are arrays of segment points, which is why setting the length of a line directly is not possible. A line has two segment points: one for the start and one for the end of the line. By manipulating the second segment point you can adjust the length of the line. This sketch (the following code) illustrates how to do so. To adjust the length of the original line, uncomment the last line of code.
line = new Path.Line([100,100], [200, 300]);
line.strokeColor = 'black';
line.strokeWidth = 5;
// paths in paper are arrays of segments, not instances of
// circle or line or rectangle. so find the vector that
// represents the delta between the first point and the
// second point.
vector = line.segments[0].point - line.segments[1].point;
// adjustment for the line - this would vary in your case
factor = 0.9;
// I'm drawing a new line here to make it easy to see
p0 = line.segments[0].point;
p1 = p0 - vector * factor;
newline = new Path.Line(p0, p1);
newline.strokeColor = 'red';
newline.strokeWidth = 2;
// but to adjust your original line just use the following
//line.segments[1].point = p1;
You can store the original vector, if you choose to do so, in the property line.data, which is an empty object that Paper.js creates for whatever the user wishes. So line.data.vector = vector; would allow you to keep each line's original vector. And you can set the length of a line to a specific value with the following:
var v = line.data.vector.clone();
v.length = 10; // specific length
line.segments[1].point = p0 - v;
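To get the oscillation the question asks about, the stored vector can drive an onFrame handler. A minimal PaperScript sketch (the amplitude 5 and speed 2 are illustrative values):
// Remember the line's original direction and length once.
line.data.vector = line.segments[1].point - line.segments[0].point;
var baseLength = line.data.vector.length;

function onFrame(event) {
    var v = line.data.vector.clone();
    // Oscillate around the original length instead of random-walking it.
    v.length = baseLength + 5 * Math.sin(event.time * 2);
    line.segments[1].point = line.segments[0].point + v;
}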

JavaScript "pixel"-perfect collision detection for rotating sprites using math (probably linear algebra)

I'm making a 2D game in JavaScript. For it, I need to be able to "perfectly" check collision between two sprites which have x/y positions (corresponding to their centre), a rotation in radians, and of course known width/height.
After spending many weeks of work (yeah, I'm not even exaggerating), I finally came up with a working solution, which unfortunately turned out to be about 10,000x too slow and impossible to optimize in any meaningful manner. I have entirely abandoned the idea of actually drawing and reading pixels from a canvas. That's just not going to cut it, but please don't make me explain in detail why. This needs to be done with math and an "imagined" 2D world/grid, and from talking to numerous people, the basic idea became obvious. However, the practical implementation is not. Here's what I do and want to do:
What I already have done
At the start of the program, each sprite is scanned pixel by pixel in its default upright position, and a one-dimensional array is filled with data corresponding to the alpha channel of the image: solid pixels are represented by a 1 and transparent ones by a 0. See figure 3.
The idea behind that is that those 1s and 0s no longer represent "pixels" but "little math orbs positioned at exact distances from each other", which can be rotated without losing or adding data, as happens with pixels when you rotate an image by anything other than 90 degrees at a time.
I naturally do the quick "bounding box" check first to see if I should bother calculating accurately. This is done. The problem is the fine/"for-sure" check...
What I cannot figure out
Now that I need to figure out whether the sprites collide for sure, I need to construct a math expression of some sort using "linear algebra" (which I do not know) to determine if these "rectangles of data points", positioned and rotated correctly, both have a "1" in an overlapping position.
Although the theory is very simple, the practical code needed to accomplish this is simply beyond my capabilities. I've stared at the code for many hours, asking numerous people (and had massive problems explaining my problem clearly) and really put in an effort. Now I finally want to give up. I would very, very much appreciate getting this done with. I can't even give up and "cheat" by using a library, because nothing I find even comes close to solving this problem from what I can tell. They are all impossible for me to understand, and seem to have entirely different assumptions/requirements in mind. Whatever I'm doing always seems to be some special case. It's annoying.
This is the pseudo code for the relevant part of the program:
function doThisAtTheStartOfTheProgram()
{
    makeQuickVectorFromImageAlpha(sprite1);
    makeQuickVectorFromImageAlpha(sprite2);
}

function detectCollision(sprite1, sprite2)
{
    // This easy, outer check works. Please ignore it as it is unrelated to the problem.
    if (bounding_box_match)
    {
        /*
        This part is the entire problem.
        I must do a math-based check to see if they really collide.
        These are the relevant variables as I have named them:

        sprite1.x
        sprite1.y
        sprite1.rotation // in radians
        sprite1.width
        sprite1.height
        sprite1.diagonal // might not be needed, but is provided

        sprite2.x
        sprite2.y
        sprite2.rotation // in radians
        sprite2.width
        sprite2.height
        sprite2.diagonal // might not be needed, but is provided

        sprite1.vectorForCollisionDetection
        sprite2.vectorForCollisionDetection

        Can you please help me construct the math expression, or the
        series of math expressions, needed to do this check?

        To clarify, using the variables above, I need to check whether the
        two sprites (which can rotate around their centre, and have any
        position and any dimensions) are colliding. A collision happens
        when at least one "unit" (an imagined sphere) of BOTH sprites is
        on the same unit in our imagined 2D world (starting from 0,0 in
        the top-left).
        */
        if (accurate_check_goes_here)
            return true;
    }
    return false;
}
In other words, "accurate_check_goes_here" is what I wonder what it should be. It doesn't need to be a single expression, of course, and I would very much prefer seeing it done in "steps" (with comments!) so that I have a chance of understanding it, but please don't see this as "spoon feeding". I fully admit I suck at math and this is beyond my capabilities. It's just a fact. I want to move on and work on the stuff I can actually solve on my own.
To clarify: the arrays are 1D rather than 2D for performance. As it turns out, speed matters very much in JS World.
Although this is a non-profit project, entirely made for private satisfaction, I just don't have the time and energy to order and sit down with some math book and learn about that from the ground up. I take no pride in lacking the math skills which would help me a lot, but at this point, I need to get this game done or I'll go crazy. This particular problem has prevented me from getting any other work done for far too long.
I hope I have explained the problem well. However, one of the most frustrating feelings is when people send well-meaning replies that unfortunately show that the person helping has not read the question. I'm not pre-insulting you all -- I just wish that won't happen this time! Sorry if my description is poor. I really tried my best to be perfectly clear.
(The question's illustrations could not be included: Stack Overflow's reputation rules prevented the author from posting the image links.)
First you need to understand that detecting such collisions cannot be done with a single simple equation, because the shapes of the sprites matter, and those are described by an array of Width x Height = Area bits. So the worst-case complexity of the algorithm must be at least O(Area).
Here is how I would do it:
Represent the sprites in two ways:
1) a bitmap indicating where pixels are opaque,
2) a list of the coordinates of the opaque pixels. [Optional, for speedup, in case of hollow sprites.]
Choose the sprite with the shortest pixel list. Find the rigid transform (translation + rotation) that transforms the local coordinates of this sprite into the local coordinates of the other sprite (this is where linear algebra comes into play - the rotation is the difference of the angles, the translation is the vector between upper-left corners - see http://planning.cs.uiuc.edu/node99.html).
Now scan the opaque pixel list, transforming the local coordinates of the pixels to the local coordinates of the other sprite. Check if you fall on an opaque pixel by looking up the bitmap representation.
This takes at worst O(Opaque Area) coordinate transforms + pixel tests, which is optimal.
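A minimal sketch of that transform-and-lookup idea in JavaScript. The field names (bitmap as a row-major 1D array of 1/0, plus x, y, rotation, width, height) are illustrative, and for brevity it walks one sprite's whole bitmap instead of a precomputed opaque-pixel list:
function spritesCollide(a, b) {
    // Rigid transform from a's local frame to b's local frame:
    // rotate by the angle difference, then translate.
    var dr = a.rotation - b.rotation;
    var cos = Math.cos(dr), sin = Math.sin(dr);
    // a's centre offset, expressed in b's local (unrotated) frame.
    var wx = a.x - b.x, wy = a.y - b.y;
    var cb = Math.cos(-b.rotation), sb = Math.sin(-b.rotation);
    var dx = cb * wx - sb * wy;
    var dy = sb * wx + cb * wy;
    var acx = (a.width - 1) / 2, acy = (a.height - 1) / 2;
    var bcx = (b.width - 1) / 2, bcy = (b.height - 1) / 2;
    for (var y = 0; y < a.height; y++) {
        for (var x = 0; x < a.width; x++) {
            if (!a.bitmap[y * a.width + x]) continue; // transparent
            var lx = x - acx, ly = y - acy; // a-local, centre-relative
            var px = Math.round(cos * lx - sin * ly + dx + bcx);
            var py = Math.round(sin * lx + cos * ly + dy + bcy);
            if (px >= 0 && px < b.width && py >= 0 && py < b.height &&
                b.bitmap[py * b.width + px]) {
                return true; // both sprites opaque at the same spot
            }
        }
    }
    return false;
}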
If your sprites are zoomed in (big pixels), as a first approximation you can ignore the zooming. If you need more accuracy, you can sample a few points per pixel. An exact computation would involve a square/square intersection algorithm (with rotation), which is more complex and costly. See http://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm.
Here is an exact solution that will work regardless the size of the pixels (zoomed or not).
Use both a bitmap representation (1 opacity bit per pixel) and a decomposition into squares or rectangles (rectangles are optional, just an optimization; single pixels are ok).
Process all rectangles of the (source) sprite in turn. By means of rotation/translation, map the rectangles to the coordinate space of the other sprite (target). You will obtain a rotated rectangle overlaid on a grid of pixels.
Now fill this rectangle with a scanline algorithm: first split the rectangle into three parts (two triangles and one parallelogram) using horizontal lines through the rectangle's vertices. For each of the three shapes independently, find all horizontal between-pixel lines that cross it (simply done by looking at the range of Y values). For every such horizontal line, compute the two intersection points, then find all pixel corners that fall between them (the range of X values). For any pixel having a corner inside the rectangle, look up the corresponding bit in the (target) sprite bitmap.
Not too difficult to program, and no complicated data structures. The computational effort is roughly proportional to the number of target pixels covered by each source rectangle.
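The full three-way split is more code than fits here; as a simpler (and somewhat slower) stand-in that produces the same answer, you can test the target-pixel centres inside the mapped rectangle directly. The names below (corners as the four mapped corners of one source rectangle, in order, in target pixel coordinates; bitmap/w/h describing the target sprite) are illustrative:
function rectHitsBitmap(corners, bitmap, w, h) {
    // Clip the rectangle's bounding box to the target sprite.
    var cxs = corners.map(function (c) { return c.x; });
    var cys = corners.map(function (c) { return c.y; });
    var x0 = Math.max(0, Math.floor(Math.min.apply(null, cxs)));
    var x1 = Math.min(w - 1, Math.ceil(Math.max.apply(null, cxs)));
    var y0 = Math.max(0, Math.floor(Math.min.apply(null, cys)));
    var y1 = Math.min(h - 1, Math.ceil(Math.max.apply(null, cys)));
    for (var y = y0; y <= y1; y++) {
        for (var x = x0; x <= x1; x++) {
            if (bitmap[y * w + x] && pointInQuad(x + 0.5, y + 0.5, corners)) {
                return true;
            }
        }
    }
    return false;
}

// Point-in-convex-quad via same-side cross-product tests.
function pointInQuad(px, py, c) {
    var sign = 0;
    for (var i = 0; i < 4; i++) {
        var a = c[i], b = c[(i + 1) % 4];
        var cross = (b.x - a.x) * (py - a.y) - (b.y - a.y) * (px - a.x);
        if (cross !== 0) {
            var s = cross > 0 ? 1 : -1;
            if (sign === 0) sign = s;
            else if (s !== sign) return false;
        }
    }
    return true;
}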
Although you have already stated that you don't feel rendering to the canvas and checking that data is a viable solution, I'd like to present an idea which may or may not have already occurred to you and which ought to be reasonably efficient.
This solution relies on the fact that rendering a pixel at half opacity twice produces a detectably higher opacity than a single draw: with additive ("lighter") compositing the two draws sum to full opacity, while with the default source-over compositing 0.5 over 0.5 only reaches 0.75, so you test against a threshold instead. The steps follow:
Size the test canvas so that both sprites will fit on it (this will also clear the canvas, so you don't have to create a new element each time you need to test for collision).
Transform the sprite data such that any pixel that has any opacity or color is set to be black at 50% opacity.
Render the sprites at the appropriate distance and relative position to one another.
Loop through the resulting canvas data. If any pixel's opacity is above the single-sprite level (full opacity under "lighter" compositing), then a collision has been detected. Return true.
Else, return false.
Wash, rinse, repeat.
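A minimal sketch of those steps, assuming each sprite carries a pre-rendered silhouette canvas (solid black at 50% opacity, per step 2 above); the field names are illustrative:
function canvasCollide(sprite1, sprite2, testCtx) {
    var w = testCtx.canvas.width, h = testCtx.canvas.height;
    testCtx.clearRect(0, 0, w, h);
    // Additive compositing makes the two half-opacity draws sum their
    // alpha, so overlap stands out clearly above a single sprite.
    testCtx.globalCompositeOperation = "lighter";
    [sprite1, sprite2].forEach(function (s) {
        testCtx.save();
        testCtx.translate(s.x, s.y);
        testCtx.rotate(s.rotation);
        testCtx.drawImage(s.silhouette, -s.width / 2, -s.height / 2);
        testCtx.restore();
    });
    // One 50% draw leaves alpha around 127; two overlapping draws sum
    // to roughly 254, so test well above the single-sprite level.
    var data = testCtx.getImageData(0, 0, w, h).data;
    for (var i = 3; i < data.length; i += 4) {
        if (data[i] > 200) return true;
    }
    return false;
}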
This method should run reasonably fast. Now, for optimization: the bottleneck here will likely be the final opacity check (although rendering the images to the canvas could be slow, as might clearing/resizing it):
Reduce the resolution of the opacity detection in the final step by changing the increment in your loop through the pixels of the final data.
Loop from the middle up and down, rather than from top to bottom (and return as soon as you find any single collision). This way you have a higher chance of encountering a collision earlier in the loop, thus shortening it.
I don't know what your limitations are and why you can't render to canvas, since you have declined to comment on that, but hopefully this method will be of some use to you. If it isn't, perhaps it might come in handy to future users.
Please see if the following idea works for you. Here I create a linear array of points corresponding to pixels set in each of the two sprites. I then rotate/translate these points, to give me two sets of coordinates for individual pixels. Finally, I check the pixels against each other to see if any pair are within a distance of 1 - which is "collision".
You can obviously add some segmentation of your sprite (only test "boundary pixels"), test bounding boxes first, and do other things to speed this up, but it's actually pretty fast once you take out all the console.log() statements that are just there to confirm things are behaving. Note that I test dx first; if it is too large, there is no need to compute the entire distance. Also, I don't need a square root to know whether the distance is less than 1.
I am not sure whether allocating new arrays inside the pixLocs function will cause a problem with memory leaks. Something to look at if you run this function 30 times per second...
<html>
<script type="text/javascript">
var s1 = {
    pix: [0, 0, 1, 1, 0, 0, 1, 0, 0, 1, 1, 0],
    x: 1,
    y: 2,
    width: 4,
    height: 3,
    rotation: 45
};
var s2 = {
    pix: [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0],
    x: 0,
    y: 1,
    width: 4,
    height: 3,
    rotation: 90
};

pixLocs(s1);
console.log("now rotating the second sprite...");
pixLocs(s2);
console.log("collision detector says " + collision(s1, s2));

function pixLocs(s) {
    var i, x, y;
    var l1, l2, r1, r2, p1, p2;
    var ca, sa;
    s.locx = [];
    s.locy = [];
    ca = Math.cos(s.rotation * Math.PI / 180.0);
    sa = Math.sin(s.rotation * Math.PI / 180.0);
    i = 0;
    for (x = 0; x < s.width; ++x) {
        for (y = 0; y < s.height; ++y) {
            if (s.pix[i++] == 1) {
                // offset to center of sprite
                l1 = x - (s.width - 1) * 0.5;
                l2 = y - (s.height - 1) * 0.5;
                // rotate:
                r1 = ca * l1 - sa * l2;
                r2 = sa * l1 + ca * l2;
                // add position:
                p1 = r1 + s.x;
                p2 = r2 + s.y;
                console.log("rotated pixel [ " + x + "," + y + " ] is at ( " + p1 + "," + p2 + " ) ");
                s.locx.push(p1);
                s.locy.push(p2);
            }
            else console.log("no pixel at [" + x + "," + y + "]");
        }
    }
}

function collision(s1, s2) {
    var i, j;
    var dx, dy;
    for (i = 0; i < s1.locx.length; i++) {
        for (j = 0; j < s2.locx.length; j++) {
            // quick reject on x distance before the full check
            dx = Math.abs(s1.locx[i] - s2.locx[j]);
            if (dx < 1) {
                dy = Math.abs(s1.locy[i] - s2.locy[j]);
                if (dx * dx + dy * dy < 1) return 1;
            }
        }
    }
    return 0;
}
</script>
</html>

Performance concerns when storing data in large arrays with Javascript

I have a browser-based visualization app where there is a graph of data points, stored as an array of objects:
data = [
{x: 0.4612451, y: 1.0511} ,
... etc
]
This graph is being visualized with d3 and drawn on a canvas (see that question for an interesting discussion). It is interactive and the scales can change a lot, meaning the data has to be redrawn, and the array needs to be iterated through quite frequently, especially when animating zooms.
From the back of my head, and from reading other JavaScript posts, I have a vague idea that optimizing dereferences in JavaScript can lead to big performance improvements. Firefox is the only browser on which my app runs really slowly (compared to IE9, Chrome, and Safari), and it needs to be improved. Hence, I'd like a firm, authoritative answer to the following:
How much slower is this:
// data is an array of 2000 objects with {x, y} attributes
var n = data.length;
for (var i = 0; i < n; i++) {
    var d = data[i];
    // Draw a circle at scaled values on canvas
    var cx = xs(d.x);
    var cy = ys(d.y);
    canvas.moveTo(cx, cy);
    canvas.arc(cx, cy, 2.5, 0, twopi);
}
compared to this:
// data_x and data_y are length-2000 arrays preprocessed once from data
var n = data_x.length;
for (var i = 0; i < n; i++) {
    // Draw a circle at scaled values on canvas
    var cx = xs(data_x[i]);
    var cy = ys(data_y[i]);
    canvas.moveTo(cx, cy);
    canvas.arc(cx, cy, 2.5, 0, twopi);
}
xs and ys are d3 scale objects; they are functions that compute the scaled positions. As I mentioned above, this code may need to run at up to 60 frames per second, and it can lag badly on Firefox. As far as I can see, the only difference is array dereferencing versus object property access. Which one runs faster, and is the difference significant?
It's pretty unlikely that any of these loop optimizations will make any difference. 2000 times through a loop like this is not much at all.
I tend to suspect the possibility of a slow implementation of canvas.arc() in Firefox. You could test this by substituting a canvas.lineTo() call which I know is fast in Firefox since I use it in my PolyGonzo maps. The "All 3199 Counties" view on the test map on that page draws 3357 polygons (some counties have more than one polygon) with a total of 33,557 points, and it loops through a similar canvas loop for every one of those points.
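A quick way to run that test is to swap the arc() call for a cheap primitive and compare frame rates. A sketch against the question's first loop, where the 2x2 rect stands in for the radius-2.5 dot:
var n = data.length;
for (var i = 0; i < n; i++) {
    var d = data[i];
    // rect() instead of arc(): if Firefox speeds up dramatically here,
    // arc() was the bottleneck, not the array/object dereferencing.
    canvas.rect(xs(d.x) - 1, ys(d.y) - 1, 2, 2);
}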
Thanks to the suggestion to use jsPerf, I implemented a quick test. I would be grateful for anyone else to add their results here.
http://jsperf.com/canvas-dots-testing (results as of 3/27/13)
I have observed the following so far:
Whether arrays or objects are better seems to depend on the browser and OS. For example, Chrome was the same speed with both on Linux, but objects were faster on Windows. For many configurations they are almost identical.
Firefox is simply the tortoise of the bunch, which also helps confirm Michael Geary's hypothesis that its canvas.arc() is just super slow.

Simplifying SVG path strings by reducing number of nodes

I am generating a large SVG path string that represents a line chart.
Beneath the chart I have a slider for selecting a time range slice. Behind the slider is a mini preview of the whole line chart.
I am currently scaling down the path to generate the preview; however, in doing so I end up with tens of nodes per pixel, far more detail than is necessary, which gives the browser more rendering work than it needs to do.
There is plenty of info available on compressing SVG strings (gzipping etc.), though little on algorithms that actually simplify the path by reducing its number of nodes.
I am using Raphaeljs and am looking for a javascript based solution. Any ideas?
Simplify.js is probably what you're looking for.
Given your line chart consists of straight line segments only (which by definition it should), you can use it like this:
var tolerance = 3;
var pathSegArray = [];
for (var i = 0; i < path.pathSegList.numberOfItems; i++) {
    pathSegArray.push(path.pathSegList.getItem(i));
}
// simplify() comes from Simplify.js; it keeps only the significant vertices.
var newPathArray = simplify(pathSegArray, tolerance);
var newD = "M";
for (i = 0; i < newPathArray.length; i++) {
    newD += newPathArray[i].x + " " + newPathArray[i].y + " ";
}
path.setAttribute("d", newD);
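If the simplified preview clips important peaks, Simplify.js also accepts a third highestQuality flag that skips its fast radial-distance pre-pass, trading some speed for fidelity:
var newPathArray = simplify(pathSegArray, tolerance, true); // highest quality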

How to subdivide a shape into sections of a given size

I'm currently trying to build a kind of pie chart / Voronoi diagram hybrid (in canvas/JavaScript). I don't know if it's even possible. I'm very new to this, and I haven't tried any approaches yet.
Assume I have a circle, and a set of numbers 2, 3, 5, 7, 11.
I want to subdivide the circle into sections equivalent to the numbers (much like a pie chart) but forming a lattice / honeycomb like shape.
Is this even possible? Is it ridiculously difficult, especially for someone who's only done some basic pie chart rendering?
This is my view on this after a quick look.
A general solution, assuming there are to be n polygons with k vertices/edges each, will depend on the solution of n equations, where each equation has no more than 2nk variables (but exactly 2k with non-zero coefficients). The variables in each polygon's equation are drawn from the same x_1, x_2, x_3, ..., x_nk and y_1, y_2, y_3, ..., y_nk. Exactly k of the x_i and exactly k of the y_i have non-zero coefficients in each polygon's equation. Each x_i and y_i is bounded depending on the parent shape. For the sake of simplicity, we'll assume the shape is a circle, giving the boundary condition: (x_i)^2 + (y_i)^2 <= r^2
Note: I say no more than 2nk because I am unsure of the lower bound, but I know it cannot be more than 2nk. This is a result of polygons being required to share vertices.
The equations are the collection of definite, but variable-bounded, integrals representing the area of each polygon, with the area equal for the ith polygon:
A_i = pi*r^2/S_i
where r is the radius of the parent circle and S_i is the number assigned to the polygon, as in your diagram.
The k separate pairs (x_j, y_j) with non-zero coefficients in a polygon's equation yield the vertices of that polygon.
This may prove to be considerably difficult.
Is the boundary fixed from the beginning, or can you deform it a bit?
If I had to solve this, I would sort the areas from large to small. Then, starting with the largest area, I would first generate a random convex polygon (vertices along a circle) with the required size. The next area would share an edge with the first area, but would be otherwise also random and convex. Each polygon after that would choose an existing edge from already-present polygons, and would also share any 'convex' edges that start from there (where 'convex edge' is one that, if used for the new polygon, would result in the new polygon still being convex).
By evaluating different prospective polygon positions for 'total boundary approaches desired boundary', you can probably generate a cheap approximation to your initial goal. This is quite similar to what word-clouds do: place things incrementally from largest to smallest while trying to fill in a more-or-less enclosed space.
Given a set of voronio centres (i.e. a list of the coordinates of the centre for each one), we can calculate the area closest to each centre:
area[i] = areaClosestTo(i,positions)
Assume these are a bit wrong, because we haven't got the centres in the right place. So we can calculate the error in our current set by comparing the areas to the ideal areas:
var areaIndexSq = 0;
var desiredAreasMagSq = 0;
for (var i = 0; i < areas.length; ++i) {
    var contrib = (areas[i] - desiredAreas[i]);
    areaIndexSq += contrib * contrib;
    desiredAreasMagSq += desiredAreas[i] * desiredAreas[i];
}
var areaIndex = Math.sqrt(areaIndexSq / desiredAreasMagSq);
This is the vector norm of the difference between the areas and the desiredAreas, relative to the magnitude of desiredAreas. Think of it as a measure of how good a least-squares fit is.
We also want some kind of honeycomb pattern, so we can call that honeycombness(positions), and get an overall measure of the quality of the thing (this is just a starter, the weighting or form of this can be whatever floats your boat):
var overallMeasure = areaIndex + honeycombnessIndex;
Then we have a mechanism to know how bad a guess is, and we can combine this with a mechanism for modifying the positions; the simplest is just to add a random amount to the x and y coords of each centre. Alternatively you can try moving each point towards neighbour areas which have an area too high, and away from those with an area too low.
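A minimal sketch of that guess-score-adjust loop, where areaClosestTo() and honeycombness() are the placeholders named above, not real library calls:
function score(positions, desiredAreas) {
    var areaIndexSq = 0, desiredAreasMagSq = 0;
    for (var i = 0; i < positions.length; ++i) {
        var contrib = areaClosestTo(i, positions) - desiredAreas[i];
        areaIndexSq += contrib * contrib;
        desiredAreasMagSq += desiredAreas[i] * desiredAreas[i];
    }
    return Math.sqrt(areaIndexSq / desiredAreasMagSq) + honeycombness(positions);
}

// Keep jiggling the centres; keep a trial only if it scores better.
function improve(positions, desiredAreas, iterations, step) {
    var best = positions, bestScore = score(best, desiredAreas);
    for (var it = 0; it < iterations; it++) {
        var trial = best.map(function (p) {
            return {x: p.x + (Math.random() - 0.5) * step,
                    y: p.y + (Math.random() - 0.5) * step};
        });
        var trialScore = score(trial, desiredAreas);
        if (trialScore < bestScore) { best = trial; bestScore = trialScore; }
    }
    return best;
}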
This is not a straight solve, but it requires minimal maths apart from calculating the area closest to each point, and it's approachable. The difficult part may be recognising local minima and dealing with them.
Incidentally, it should be fairly easy to get the start points for the process; the centroids of the pie slices shouldn't be too far from the truth.
A definite plus is that you could use the intermediate calculations to animate a transition from pie to voronoi.
