OpenLayers supports Tissot's ellipses natively by passing a sphere to the circular() method.
Unfortunately, Leaflet's L.circle() does not support such a feature.
How do I draw tissot's ellipses with Leaflet?
EDIT 3:
New proposal using leaflet-geodesy, which seems a perfect fit for your need. It is not affected by Turf's bug (see Edit 2 below).
The API is quite simple:
LGeo.circle([51.441767, 5.470247], 500000).addTo(map);
(center position in [latitude, longitude] degrees, radius in meters)
Demo: http://jsfiddle.net/ve2huzxw/61/
Quick comparison with nathansnider's solution: http://fiddle.jshell.net/58ud0ttk/2/
(shows that both codes produce the same resulting area, the only difference being in the number of segments used for approximating the area)
EDIT: a nice page that compares Leaflet-geodesy with the standard L.Circle: https://www.mapbox.com/mapbox.js/example/v1.0.0/leaflet-geodesy/
EDIT 2:
Unfortunately Turf uses the JSTS Topology Suite to build the buffer. It looks like this operation in JSTS does not handle a non-planar geometry like the Earth's surface.
The bug is reported here and as of today the main Turf library does not have a full workaround.
So the below answer (edit 1) produces WRONG results.
See nathansnider's answer for a workaround for building a buffer around a point.
EDIT 1:
You can easily build the described polygon by using Turf. It offers the turf.buffer method, which creates a polygon at a specified distance around a given feature (which can be a simple point).
So you can simply do for example:
var pt = {
    "type": "Feature",
    "properties": {},
    "geometry": {
        "type": "Point",
        "coordinates": [5.470247, 51.441767]
    }
};
var buffered = turf.buffer(pt, 500, 'kilometers');
L.geoJson(pt).addTo(map);
L.geoJson(buffered).addTo(map);
Demo: http://jsfiddle.net/ve2huzxw/41/
Original answer:
Unfortunately it seems that there is currently no Leaflet plugin to do so.
It is also unclear what the Tissot indicatrix should represent:
1. A true ellipse that represents the deformation of an infinitely small circle (i.e. distortion at a single point), or
2. A circular-like shape that represents the deformation of a finite-size circle on the Earth's surface, like in the OpenLayers demo you link to?
In that demo, the shape in EPSG:4326 is not an ellipse: its vertical extent on the higher-latitude half is shorter than on the other half.
If you are looking for that 2nd option, then you would have to manually build a polygon that represents the intersection of a sphere with the Earth's surface. If my understanding is correct, this is what the OL demo does. If that is an option for you, maybe you can generate your polygons there and import them as GeoJSON features into Leaflet? :-)
Because turf.buffer appears to have a bug at the moment, here is a different approach with turf.js, this time using turf.destination rotated over a range of bearings to create a true circle:
//creates a true circle from a center point using turf.destination
//arguments are the same as in turf.destination, except using number of steps
//around the circle instead of fixed bearing. Returns a GeoJSON Polygon as output.
function tissotish(center, radius, steps, units) {
    var step_angle = 360 / steps;
    var bearing = -180;
    var tissot = {
        "type": "Polygon",
        "coordinates": [[]]
    };
    for (var i = 0; i < steps + 1; ++i) {
        var target = turf.destination(center, radius, bearing, units);
        var coords = target.geometry.coordinates;
        tissot.coordinates[0].push(coords);
        bearing = bearing + step_angle;
    }
    return tissot;
}
This doesn't produce a true Tissot's indicatrix (which is supposed to measure distortion at a single point), but it does produce a true circle, which from your other comments seems to be what you're looking for anyway. Here it is at work in an example fiddle:
http://fiddle.jshell.net/nathansnider/58ud0ttk/
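For reference, a minimal usage sketch (not taken from the fiddle; the center coordinates and steps value are arbitrary, and map is assumed to be an existing Leaflet map):
// Hypothetical usage of tissotish(): build a 500 km circle around a point
// and add it to the map as GeoJSON. 64 steps is an arbitrary choice.
var center = {
    "type": "Feature",
    "properties": {},
    "geometry": { "type": "Point", "coordinates": [5.470247, 51.441767] }
};
var circleGeometry = tissotish(center, 500, 64, 'kilometers');
L.geoJson({ "type": "Feature", "properties": {}, "geometry": circleGeometry }).addTo(map);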
For comparison, I added the same measurement lines here as I applied to ghybs's answer above. (When you click on the lines, they do say that the horizontal distance is greater than 500 km at higher latitudes, but that is because of the way I created them: they end up extending slightly outside the circle, and I was too lazy to crop them.)
Related
I have a large dataset of geographical points (around 22,000 points, but there could be more in the future) and I need to compute their Voronoï diagram. I first project my points from (lat,lng) to (x,y) (using latLngToLayerPoint() from Leaflet) and then compute the diagram based on a Javascript implementation of Fortune's algorithm. I recover each cell of the diagram, or more precisely va and vb, which are respectively:
"A Voronoi.Vertex object with an x and a y property defining the start
point (relative to the Voronoi site on the left) of this Voronoi.Edge
object."
and
"A Voronoi.Vertex object with an x and a y property defining the end
point (relative to Voronoi site on the left) of this Voronoi.Edge
object."
(cf. Documentation)
Finally, I project these points back to display the diagram using Leaflet. I know that, in order to compute the diagram, each point needs to be unique, so I get rid of duplicates before computing it. But the thing is, I end up with a pretty bad result (non-noded intersections, complex polygons):
Close-up
I have holes in the diagram and I'm not sure why. The points are house addresses, so some of them, even if they are not equal, are really (really) close. And I wonder if the issue comes from the projection (if (lat1,lng1) and (lat2,lng2) are almost equal, will (x1,y1) and (x2,y2) be equal?). I strongly suspect that is where the issue comes from, but I don't know how to work around it (establish a threshold?).
Edit: To be clear, I delete the duplicates after the projection, so it's not about the precision of the projection but more about what happens if two points are one pixel apart.
So I found the solution to my problem. I'm posting it in case anyone needs to compute a Voronoï diagram on a map using Leaflet and Turf and is having trouble implementing Fortune's algorithm (until turf-voronoi works).
Other sources on how to compute a Voronoï diagram on a map can be found (but using d3); I think d3 also uses this Javascript implementation of Fortune's algorithm.
The problem was not caused by the size of the dataset or the proximity of the points, but by how I recovered the cells.
So you first need to project your points from (lat,lng) to (x,y) (using latLngToLayerPoint()), then compute the diagram: voronoi.compute(sites, bbox), where the sites are your points looking like [ {x: 200, y: 200}, {x: 50, y: 250}, {x: 400, y: 100} /* , ... */ ] (note that your sites need to be unique). If you want the frame of the screen at your current zoom to be your bbox, just use:
var xl = 0,
    xr = $(document).width(),
    yt = 0,
    yb = $(document).height();
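Putting the compute step together, here is a rough sketch (assuming latlngs is your deduplicated array of L.LatLng points and Voronoi is the constructor from the Javascript implementation of Fortune's algorithm linked above):
// Sketch only: project the points, then compute the diagram.
var sites = latlngs.map(function (ll) {
    var p = map.latLngToLayerPoint(ll); // (lat,lng) -> (x,y)
    return { x: p.x, y: p.y };
});
var bbox = { xl: xl, xr: xr, yt: yt, yb: yb }; // screen frame from above
var voronoi = new Voronoi();
var diagram = voronoi.compute(sites, bbox);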
Once you have computed the diagram, just recover the cells (be careful: if you want the right polygons you need the edges to be counterclockwise ordered (or clockwise ordered, but you need them to be ordered); thankfully the algorithm provides the half-edges of a given cell counterclockwise ordered). To recover the vertices of each cell you can use either getStartpoint() or getEndpoint(), without forgetting to project them back from (x,y) to (lat,lng) (using layerPointToLatLng()).
diagram.cells.forEach(function (c) {
    var edges = [];
    var size = c.halfedges.length;
    for (var i = 0; i < size; i++) {
        var pt = c.halfedges[i].getEndpoint();
        edges.push(map.layerPointToLatLng(L.point(pt.x, pt.y)));
    }
    voronoi_cells.push(L.polygon(edges));
});
Finally, you have to use a FeatureCollection to display the diagram.
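A minimal sketch of that display step, assuming voronoi_cells is the array of L.polygon built above, map is your Leaflet map, and the style options are arbitrary:
// Sketch only: convert each L.polygon to a GeoJSON Feature and display
// the whole diagram as a single FeatureCollection layer.
var featureCollection = {
    "type": "FeatureCollection",
    "features": voronoi_cells.map(function (cell) {
        return cell.toGeoJSON();
    })
};
L.geoJson(featureCollection, {
    style: { weight: 1, fillOpacity: 0.1 }
}).addTo(map);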
I highly recommend you don't implement a Voronoi tessellation algorithm yourself, and use https://github.com/Turfjs/turf-voronoi instead.
I'm letting the user click on two points on a sphere and I would then like to draw a line between the two points along the surface of the sphere (basically on the great circle). I've been able to get the coordinates of the two selected points and draw a QuadraticBezierCurve3 between the points, but I need to be using CubicBezierCurve3. The problem is that I have no clue how to find the two control points.
Part of the issue is everything I find is for circular arcs and only deals with [x,y] coordinates (whereas I'm working with [x,y,z]). I found this other question which I used to get a somewhat-working solution using QuadraticBezierCurve3. I've found numerous other pages with math/code like this, this, and this, but I really just don't know what to apply. Something else I came across mentioned the tangents (to the selected points), their intersection, and their midpoints. But again, I'm unsure of how to do that in 3D space (since the tangent can go in more than one direction, i.e. a plane).
An example of my code: http://jsfiddle.net/GhB82/
To draw the line, I'm using:
function drawLine(point) {
    var middle = [(pointA['x'] + pointB['x']) / 2,
                  (pointA['y'] + pointB['y']) / 2,
                  (pointA['z'] + pointB['z']) / 2];
    var curve = new THREE.QuadraticBezierCurve3(
        new THREE.Vector3(pointA['x'], pointA['y'], pointA['z']),
        new THREE.Vector3(middle[0], middle[1], middle[2]),
        new THREE.Vector3(pointB['x'], pointB['y'], pointB['z'])
    );
    var path = new THREE.CurvePath();
    path.add(curve);
    var curveMaterial = new THREE.LineBasicMaterial({
        color: 0xFF0000
    });
    curvedLine = new THREE.Line(path.createPointsGeometry(20), curveMaterial);
    scene.add(curvedLine);
}
Where pointA and pointB are arrays containing the [x,y,z] coordinates of the selected points on the sphere. I need to change the QuadraticBezierCurve3 to CubicBezierCurve3, but again, I'm really at a loss on finding those control points.
I have a description of how to fit cubic curves to circular arcs over at http://pomax.github.io/bezierinfo/#circles_cubic; the 3D case is essentially the same in that you need to find the (great) circular cross-section your two points form on the sphere, and then build the cubic Bezier section along that circle.
Downside: unless your arc is less than or equal to roughly a quarter circle, one curve is not going to be enough; you'll need two or more. You can't actually model true circular curves with Bezier curves, so using cubic instead of quadratic just means you can approximate a longer arc segment before it starts to look horribly off.
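For the sub-quarter-circle case, here is a rough three.js sketch of how those control points could be computed, using the standard 4/3 * tan(theta/4) handle length for a circular arc; greatCircleCurve and sphereRadius are made-up names, and the sphere is assumed to be centered at the origin:
// Sketch only: approximate the great-circle arc between pointA and pointB
// (objects with x, y, z, lying on a sphere centered at the origin) with a
// single cubic Bezier segment. Good for arcs up to roughly a quarter circle.
function greatCircleCurve(pointA, pointB, sphereRadius) {
    var a = new THREE.Vector3(pointA.x, pointA.y, pointA.z).normalize();
    var b = new THREE.Vector3(pointB.x, pointB.y, pointB.z).normalize();

    var cosTheta = Math.min(1, Math.max(-1, a.dot(b)));
    var theta = Math.acos(cosTheta);                       // arc angle
    var k = (4 / 3) * Math.tan(theta / 4) * sphereRadius;  // control-handle length

    // Unit tangents at each endpoint, lying in the plane of the great circle.
    var tA = b.clone().sub(a.clone().multiplyScalar(cosTheta)).normalize();
    var tB = a.clone().sub(b.clone().multiplyScalar(cosTheta)).normalize();

    var p0 = a.clone().multiplyScalar(sphereRadius);
    var p3 = b.clone().multiplyScalar(sphereRadius);
    var p1 = p0.clone().add(tA.multiplyScalar(k));
    var p2 = p3.clone().add(tB.multiplyScalar(k));

    return new THREE.CubicBezierCurve3(p0, p1, p2, p3);
}
The returned curve could then be fed into the same CurvePath/Line code shown in the question.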
So, on a completely different note: if you have an arc command available, it's much better to use that than to roll your own (and if three.js doesn't support one, it's definitely worth filing a feature request, I'd think).
We've adapted Mike Bostock's original D3 + Leaflet example:
http://bost.ocks.org/mike/leaflet/
so that it does not redraw all paths on each zoom in Leaflet.
Our code is here: https://github.com/madeincluj/Leaflet.D3/blob/master/js/leaflet.d3.js
Specifically, the projection from geographical coordinates to pixels happens here:
https://github.com/madeincluj/Leaflet.D3/blob/master/js/leaflet.d3.js#L30-L35
We draw the SVG paths on the first load, then simply scale/translate the SVG to match the map.
This works very well, except for one issue: D3's path resampling, which looks great at the first zoom level, but looks progressively more broken once you start zooming in.
Is there a way to disable the resampling?
As to why we're doing this: We want to draw a lot of shapes (thousands) and redrawing them all on each zoom is impractical.
Edit
After some digging, seems that resampling happens here:
function d3_geo_pathProjectStream(project) {
    var resample = d3_geo_resample(function(x, y) {
        return project([x * d3_degrees, y * d3_degrees]);
    });
    return function(stream) {
        return d3_geo_projectionRadians(resample(stream));
    };
}
Is there a way to skip the resampling step?
Edit 2
What a red herring! We had switched back and forth between sending a raw function to d3.geo.path().projection and a d3.geo.transform object, to no avail.
But in fact the problem is with Leaflet's latLngToLayerPoint, which (obviously!) rounds point.x & point.y to integers. This means that the more zoomed out you are when you initialize the SVG rendering, the more precision you will lose.
The solution is to use a custom function like this:
function latLngToPoint(latlng) {
    return map.project(latlng)._subtract(map.getPixelOrigin());
}

var t = d3.geo.transform({
    point: function(x, y) {
        var point = latLngToPoint(new L.LatLng(y, x));
        return this.stream.point(point.x, point.y);
    }
});
this.path = d3.geo.path().projection(t);
It's similar to Leaflet's own latLngToLayerPoint, but without the rounding. (Note that map.getPixelOrigin() is rounded as well, so you'll probably need to rewrite it.)
You learn something every day, don't you.
Coincidentally, I updated the tutorial recently to use the new d3.geo.transform feature, which makes it easy to implement a custom geometric transform. In this case the transform uses Leaflet’s built-in projection without any of D3’s advanced cartographic features, thus disabling adaptive resampling.
The new implementation looks like this:
var transform = d3.geo.transform({point: projectPoint}),
    path = d3.geo.path().projection(transform);

function projectPoint(x, y) {
    var point = map.latLngToLayerPoint(new L.LatLng(y, x));
    this.stream.point(point.x, point.y);
}
As before, you can continue to pass a raw projection function to d3.geo.path, but you’ll get adaptive resampling and antimeridian cutting automatically. So to disable those features, you need to define a custom projection, and d3.geo.transform is an easy way to do this for simple point-based transformations.
I'm currently trying to build a kind of pie chart / Voronoi diagram hybrid (in canvas/JavaScript). I don't know if it's even possible. I'm very new to this, and I haven't tried any approaches yet.
Assume I have a circle, and a set of numbers 2, 3, 5, 7, 11.
I want to subdivide the circle into sections equivalent to the numbers (much like a pie chart) but forming a lattice / honeycomb like shape.
Is this even possible? Is it ridiculously difficult, especially for someone who's only done some basic pie chart rendering?
This is my view on this after a quick look.
A general solution, assuming there are to be n polygons with k vertices/edges, will depend on the solution to n equations, where each equation has no more than 2nk variables (but exactly 2k with non-zero coefficients). The variables in each polygon's equation are the same x_1, x_2, x_3, ..., x_nk and y_1, y_2, y_3, ..., y_nk variables. Exactly four of x_1, x_2, x_3, ..., x_nk have non-zero coefficients and exactly four of y_1, y_2, y_3, ..., y_nk have non-zero coefficients for each polygon's equation. x_i and y_i are bounded differently depending on the parent shape. For the sake of simplicity, we'll assume the shape is a circle. The boundary condition is: (x_i)^2 + (y_i)^2 <= r^2
Note: I say no more than 2nk because I am unsure of the lower bound, but I know it cannot be more than 2nk. This is a result of the requirement that polygons share vertices.
The equations are the collection of definite, but variable-bounded, integrals representing the area of each polygon, with the area of the ith polygon equal to:
A_i = pi*r^2/S_i
where r is the radius of the parent circle and S_i is the number assigned to the polygon, as in your diagram.
The four separate pairs (x_j, y_j) with non-zero coefficients in a polygon's equation yield the vertices of that polygon.
This may prove to be considerably difficult.
Is the boundary fixed from the beginning, or can you deform it a bit?
If I had to solve this, I would sort the areas from large to small. Then, starting with the largest area, I would first generate a random convex polygon (vertices along a circle) with the required size. The next area would share an edge with the first area, but would be otherwise also random and convex. Each polygon after that would choose an existing edge from already-present polygons, and would also share any 'convex' edges that start from there (where 'convex edge' is one that, if used for the new polygon, would result in the new polygon still being convex).
By evaluating different prospective polygon positions for 'total boundary approaches desired boundary', you can probably generate a cheap approximation to your initial goal. This is quite similar to what word-clouds do: place things incrementally from largest to smallest while trying to fill in a more-or-less enclosed space.
Given a set of Voronoi centres (i.e. a list of the coordinates of the centre for each one), we can calculate the area closest to each centre:
area[i] = areaClosestTo(i,positions)
Assume these are a bit wrong, because we haven't got the centres in the right place. So we can calculate the error in our current set by comparing the areas to the ideal areas:
var areaIndexSq = 0;
var desiredAreasMagSq = 0;
for (var i = 0; i < areas.length; ++i) {
    var contrib = (areas[i] - desiredAreas[i]);
    areaIndexSq += contrib * contrib;
    desiredAreasMagSq += desiredAreas[i] * desiredAreas[i];
}
var areaIndex = Math.sqrt(areaIndexSq/desiredAreasMagSq);
This is the vector norm of the difference vector between the areas and the desiredAreas. Think of it like a measure of how good a least squares fit line is.
We also want some kind of honeycomb pattern, so we can call that honeycombness(positions), and get an overall measure of the quality of the thing (this is just a starter, the weighting or form of this can be whatever floats your boat):
var overallMeasure = areaIndex + honeycombnessIndex;
Then we have a mechanism to know how bad a guess is, and we can combine this with a mechanism for modifying the positions; the simplest is just to add a random amount to the x and y coords of each centre. Alternatively you can try moving each point towards neighbour areas which have an area too high, and away from those with an area too low.
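A rough sketch of that perturb-and-accept loop (computeAreas and honeycombness are assumed helpers, not provided here; the step size and iteration count are arbitrary):
// Sketch only: jitter the centres, keep the new layout only if it scores better.
function improve(positions, desiredAreas, iterations, stepSize) {
    var best = positions;
    var bestScore = score(best, desiredAreas);
    for (var it = 0; it < iterations; ++it) {
        var candidate = best.map(function (p) {
            return {
                x: p.x + (Math.random() - 0.5) * stepSize,
                y: p.y + (Math.random() - 0.5) * stepSize
            };
        });
        var candidateScore = score(candidate, desiredAreas);
        if (candidateScore < bestScore) { // keep only improvements
            best = candidate;
            bestScore = candidateScore;
        }
    }
    return best;
}

function score(positions, desiredAreas) {
    var areas = computeAreas(positions); // assumed: area closest to each centre
    var areaIndexSq = 0, desiredAreasMagSq = 0;
    for (var i = 0; i < areas.length; ++i) {
        var contrib = areas[i] - desiredAreas[i];
        areaIndexSq += contrib * contrib;
        desiredAreasMagSq += desiredAreas[i] * desiredAreas[i];
    }
    return Math.sqrt(areaIndexSq / desiredAreasMagSq) + honeycombness(positions);
}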
This is not a straight solve, but it requires minimal maths apart from calculating the area closest to each point, and it's approachable. The difficult part may be recognising local minima and dealing with them.
Incidentally, it should be fairly easy to get the start points for the process; the centroids of the pie slices shouldn't be too far from the truth.
A definite plus is that you could use the intermediate calculations to animate a transition from pie to voronoi.
I'm trying to wrap my head around using the Separating Axis Theorem in JavaScript to detect two squares colliding (one rotated, one not). As hard as I try, I can't figure out what this would look like in JavaScript, nor can I find any JavaScript examples. Please help; an explanation with plain numbers or JavaScript code would be most useful.
Update: After researching lots of geometry and math theories I've decided to roll out a simplified SAT implementation in a GitHub repo. You can find a working copy of SAT in JavaScript here: https://github.com/ashblue/canvas-sat
Transforming polygons
First you have to transform all points of your convex polygons (squares in this case) so they are all in the same space, by applying each polygon's rotation angle to its points.
For future support of scaling, translation, etc. I recommend doing this through matrix transforms. You'll have to code your own Matrix class or find some library that has this functionality already (I'm sure there are plenty of options).
Then you'll use code in the vein of:
var transform = new Matrix();
transform.appendRotation(alpha);
points = transform.transformPoints(points);
Where points is an array of Point objects or so.
Collision algorithm overview
So that's all before you get to any collision stuff. Regarding the collision algorithm, it's standard practice to try and separate 2 convex polygons (squares in your case) using the following steps:
For each polygon edge (edges of both polygon 0 and polygon 1):
Classify both polygons as "in front", "spanning" or "behind" the edge.
If both polygons are on different sides (1 "in front" and 1 "behind"), there is no collision, and you can stop the algorithm (early exit).
If you get here, no edge was able to separate the polygons: the polygons intersect/collide.
Note that conceptually, the "separating axis" is the axis perpendicular to the edge we're classifying the polygons with.
Classifying polygons with regards to an edge
In order to do this, we'll classify a polygon's points/vertices with regards to the edge. If all points are on one side, the polygon's on that side. Otherwise, the polygon's spanning the edge (partially on one side, partially on the other side).
To classify points, we first need to get the edge's normal:
// this code assumes p0 and p1 are instances of some Vector3D class
var p0 = edge[0]; // first point of edge
var p1 = edge[1]; // second point of edge
var v = p1.subtract(p0);
var normal = new Vector3D(0, 0, 1).crossProduct(v);
normal.normalize();
The above code uses the cross-product of the edge direction and the z-vector to get the normal. Of course, you should pre-calculate this for each edge instead.
Note: The normal represents the separating axis from the SAT.
Next, we can classify an arbitrary point by first making it relative to the edge (subtracting an edge point), and using the dot-product with the normal:
// point is the point to classify as "in front" or "behind" the edge
var relativePoint = point.subtract(p0);
var distance = relativePoint.dotProduct(normal);
var inFront = distance >= 0;
Now, inFront is true if the point is in front or on the edge, and false otherwise.
Note that, when you loop over a polygon's points to classify the polygon, you can also exit early if you have at least 1 point in front and 1 behind, since then it's already determined that the polygon is spanning the edge (and not in front or behind).
So as you can see, you still have quite a bit of coding to do. Find some js library with Matrix and Vector3D classes or so and use that to implement the above. Represent your collision shapes (polygons) as sequences of Point and Edge instances.
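As a rough, library-free sketch of how the pieces fit together (plain {x, y} points instead of Point/Vector3D classes; all helper names are mine):
// Sketch only: SAT test for two convex polygons given as arrays of {x, y} points.
function polygonsCollide(polyA, polyB) {
    var edges = collectEdges(polyA).concat(collectEdges(polyB));
    for (var i = 0; i < edges.length; i++) {
        var e = edges[i];
        // 2D equivalent of the cross-product trick: normal of (dx, dy) is (-dy, dx).
        var normal = { x: -(e.p1.y - e.p0.y), y: e.p1.x - e.p0.x };
        var a = classify(polyA, e.p0, normal);
        var b = classify(polyB, e.p0, normal);
        // Separating axis found: the polygons are on different sides of this edge.
        if ((a === 'front' && b === 'behind') || (a === 'behind' && b === 'front')) {
            return false;
        }
    }
    return true; // no separating edge found: the polygons intersect
}

function collectEdges(points) {
    return points.map(function (p, i) {
        return { p0: p, p1: points[(i + 1) % points.length] };
    });
}

// Classify a polygon as 'front', 'behind' or 'spanning' relative to an edge.
function classify(points, edgePoint, normal) {
    var front = false, behind = false;
    for (var i = 0; i < points.length; i++) {
        var d = (points[i].x - edgePoint.x) * normal.x +
                (points[i].y - edgePoint.y) * normal.y;
        if (d >= 0) { front = true; } else { behind = true; }
        if (front && behind) { return 'spanning'; } // early exit, as noted above
    }
    return front ? 'front' : 'behind';
}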
Hopefully, this will get you started.