I have a large dataset of geographical points (around 22,000 points, but there could be more in the future) and I need to compute their Voronoi diagram. I first project my points from (lat, lng) to (x, y) (using latLngToLayerPoint() from Leaflet) and then compute the diagram with a JavaScript implementation of Fortune's algorithm. I recover each cell of the diagram, or more precisely va and vb, which are respectively:
"A Voronoi.Vertex object with an x and a y property defining the start
point (relative to the Voronoi site on the left) of this Voronoi.Edge
object."
and
"A Voronoi.Vertex object with an x and a y property defining the end
point (relative to Voronoi site on the left) of this Voronoi.Edge
object."
(cf. Documentation)
Finally, I project these points back to display the diagram using Leaflet. I know that, in order to compute the diagram, each point needs to be unique, so I remove duplicates before computing the diagram. But I end up with a pretty bad result (non-noded intersections, complex polygons):
Close-up
I have holes in the diagram and I'm not sure why. The points are house addresses, so some of them, even if they are not equal, are really (really) close. And I wonder if the issue comes from the projection (if (lat1, lng1) and (lat2, lng2) are almost equal, will (x1, y1) and (x2, y2) be equal?). I strongly suspect that is where the issue comes from, but I don't know how to work around it (establish a threshold?).
Edit: To be precise, I delete the duplicates after the projection, so it's not about the precision of the projection but more about what happens if two points are one pixel apart.
So I found the solution to my problem. I'm posting it in case anyone needs to compute a Voronoi diagram on a map using Leaflet and Turf and is having trouble implementing Fortune's algorithm (until turf-voronoi works).
Other examples of how to compute a Voronoi diagram on a map can be found, but they use d3 (I think d3 also uses this JavaScript implementation of Fortune's algorithm).
The problem was not caused by the size of the dataset or the proximity of the points, but by how I recovered the cells.
So you first need to project your points from (lat, lng) to (x, y) (using latLngToLayerPoint()), then compute the diagram with voronoi.compute(sites, bbox), where the sites are your points looking like this: [ {x: 200, y: 200}, {x: 50, y: 250}, {x: 400, y: 100} /* , ... */ ] (note that your sites need to be unique). If you want the frame of the screen at your current zoom to be your bbox, just use:
var xl = 0,
    xr = $(document).width(),
    yt = 0,
    yb = $(document).height();
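Putting it together, a minimal sketch (this assumes the library exposes a Voronoi constructor whose compute() takes the sites array and a bbox object with xl, xr, yt, yb, as in its documentation):
// Sketch only: `sites` is the array of unique {x, y} objects described above.
var voronoi = new Voronoi();
var bbox = { xl: xl, xr: xr, yt: yt, yb: yb };
var diagram = voronoi.compute(sites, bbox);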
Once you have computed the diagram, just recover the cells (be careful: if you want the right polygons you need the edges to be counterclockwise ordered (or clockwise ordered, but they need to be ordered); thankfully the algorithm provides the half edges of a given Voronoi cell counterclockwise ordered). To recover the vertices of each cell you can use either getStartpoint() or getEndpoint(), without forgetting to project them back from (x, y) to (lat, lng) (using layerPointToLatLng()):
var voronoi_cells = []; // will hold one L.polygon per Voronoi cell
diagram.cells.forEach(function (c) {
    var edges = [];
    var size = c.halfedges.length;
    for (var i = 0; i < size; i++) {
        // end point of this half edge, in layer (x, y) coordinates
        var pt = c.halfedges[i].getEndpoint();
        // project back to (lat, lng) before building the polygon
        edges.push(map.layerPointToLatLng(L.point(pt.x, pt.y)));
    }
    voronoi_cells.push(L.polygon(edges));
});
Finally, you have to use a FeatureCollection to display the diagram:
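(The original snippet isn't shown here; as a sketch of the idea, one way is to group the voronoi_cells polygons built above into a single Leaflet layer, and call .toGeoJSON() on it if you need an actual GeoJSON FeatureCollection.)
// Sketch only: groups the polygons created above into one layer and adds it to the map.
var diagramLayer = L.featureGroup(voronoi_cells).addTo(map);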
I highly recommend you don't implement a Voronoi tessellation algorithm yourself, and use https://github.com/Turfjs/turf-voronoi instead.
Conceptual Question
I am building a flight simulator in Three.js. I intend to rip CSV data for latitude, longitude, and elevation from Google Earth and transfer it into ArcGIS to create a Digital Elevation Model (DEM). I then want to create the terrain based on the DEM. I already have a splat map texture shader I wrote, and things are looking good.
However, I will need to add models and more specifically text and landing zones for the towns. This will require accurate XYZ coordinates.
I figure this is an interesting problem. I have seen one similar question on Stack Overflow before, but it was not quite to the same depth I'm looking for.
1) How to create a coordinate system that maps actual XYZ (latitude, longitude, elevation) data to a PlaneBufferGeometry?
My assumption is that if I take a hypothetical 100,000 x 100,000 map sample, then I will need to create a Plane that has a matching vertex count and maps 1:1.
new THREE.PlaneBufferGeometry( 100000, 100000, 100000, 100000 );
Then comes the trickier part: mapping lat/long coordinates to this. Perhaps just a multiplier, like * 100 or so per degree of latitude/longitude?
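For illustration only, here is one hedged reading of that multiplier idea; the origin and scale factor below are made-up placeholders, not values from the question, and degrees are treated as a uniform grid (which only roughly holds away from the poles):
// Hypothetical linear mapping from (lat, lng) degrees to plane units.
// ORIGIN and UNITS_PER_DEGREE are invented for this sketch.
const ORIGIN = { lat: 44.0, lng: -121.0 };
const UNITS_PER_DEGREE = 100; // the "* 100 or so" multiplier
function latLngToPlane(lat, lng) {
    return {
        x: (lng - ORIGIN.lng) * UNITS_PER_DEGREE,
        y: (lat - ORIGIN.lat) * UNITS_PER_DEGREE
    };
}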
2) How to create the most efficient data structure for this. It will contain a lot of data.
I am thinking the most efficient data structure would be a flat array of Z integers.
let vertArray = new Array(10000000000);
for (let i = 0; i < vertArray.length; i++) {
    vertArray[i] = planeBufferGeometry.vertices[i].z; // pseudocode: a real BufferGeometry exposes a position attribute, not .vertices
}
Each block of 100,000 entries in the array would represent a Y coordinate, while each index within a block would be an X coordinate. The Z value itself would be stored in the array.
So hypothetically if I wanted to get X: 3, Y: 4, Z: ? it would be...
const xCoord = 3,
yCoord = 4,
index = (yCoord * 100000) + xCoord,
zCoord = vertArray[index];
This is the smallest-overhead approach I can think of: defining the array length ahead of time, keeping the array one-dimensional, and filling it with only integers. Any better ideas? Perhaps creating an array would be unneeded and I could write an equation that pulls vertex data directly from the rendered mesh?
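A hedged sketch of that last idea, assuming the elevations have been written into the z component of the plane's position attribute (the attribute access is standard three.js BufferGeometry API; the row length is derived from the hypothetical segment counts above):
// Reads a height directly from the mesh's geometry instead of keeping a copy.
function getHeight(planeBufferGeometry, xCoord, yCoord) {
    // a PlaneBufferGeometry with N widthSegments has N + 1 vertices per row
    const vertsPerRow = planeBufferGeometry.parameters.widthSegments + 1;
    const index = yCoord * vertsPerRow + xCoord;
    return planeBufferGeometry.attributes.position.getZ(index);
}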
3) Are there ways to decrease the impact of large data stored in browser memory?
I have yet to implement this, but the idea of a 10-million-length array in the browser is quite a lot in my mind. I would prefer to load the entire thing rather than doing some sort of AJAX call when the helicopter gets near the edge of a sub-plane. Think "zones" in MMORPGs.
OpenLayers supports Tissot's ellipses natively by passing a sphere to the circular() method.
Unfortunately, Leaflet's L.circle() does not support such a feature.
How do I draw Tissot's ellipses with Leaflet?
EDIT 3:
New proposition using leaflet-geodesy which seems a perfect fit for your need. It is exempt from Turf's bug (see Edit 2 below).
The API is quite simple:
LGeo.circle([51.441767, 5.470247], 500000).addTo(map);
(center position in [latitude, longitude] degrees, radius in meters)
Demo: http://jsfiddle.net/ve2huzxw/61/
Quick comparison with nathansnider's solution: http://fiddle.jshell.net/58ud0ttk/2/
(shows that both codes produce the same resulting area, the only difference being in the number of segments used for approximating the area)
EDIT: a nice page that compares Leaflet-geodesy with the standard L.Circle: https://www.mapbox.com/mapbox.js/example/v1.0.0/leaflet-geodesy/
EDIT 2:
Unfortunately Turf uses the JSTS Topology Suite to build the buffer. It looks like this operation in JSTS does not handle a non-planar geometry like the Earth's surface.
The bug is reported here and as of today the main Turf library does not have a full workaround.
So the below answer (edit 1) produces WRONG results.
See nathansnider's answer for a workaround for building a buffer around a point.
EDIT 1:
You can easily build the described polygon using Turf. It offers the turf.buffer method, which creates a polygon at a specified distance around a given feature (which could be a simple point).
So you can simply do for example:
var pt = {
    "type": "Feature",
    "properties": {},
    "geometry": {
        "type": "Point",
        "coordinates": [5.470247, 51.441767]
    }
};
var buffered = turf.buffer(pt, 500, 'kilometers');
L.geoJson(pt).addTo(map);
L.geoJson(buffered).addTo(map);
Demo: http://jsfiddle.net/ve2huzxw/41/
Original answer:
Unfortunately it seems that there is currently no Leaflet plugin to do so.
It is also unclear what the Tissot indicatrix should represent:
A true ellipse that represents the deformation of an infinitely small circle (i.e. distortion at a single point), or
A circular-like shape that represents the deformation of a finite-size circle when on the Earth surface, like the OpenLayers demo you link to?
In that demo, the shape in EPSG:4326 is not an ellipse, the length in the vertical axis decreases at higher latitude compared to the other half of the shape.
If you are looking for that 2nd option, then you would have to manually build a polygon that represents the intersection of a sphere and the Earth's surface. If my understanding is correct, this is what the OL demo does. If that is an option for you, maybe you can generate your polygons there and import them as GeoJSON features into Leaflet? :-)
Because turf.buffer appears to have a bug at the moment, here is a different approach with turf.js, this time using turf.destination rotated over a range of bearings to create a true circle:
//creates a true circle from a center point using turf.destination
//arguments are the same as in turf.destination, except using number of steps
//around the circle instead of fixed bearing. Returns a GeoJSON Polygon as output.
function tissotish(center, radius, steps, units) {
    var step_angle = 360 / steps;
    var bearing = -180;
    var tissot = {
        "type": "Polygon",
        "coordinates": [[]]
    };
    for (var i = 0; i < steps + 1; ++i) {
        var target = turf.destination(center, radius, bearing, units);
        var coords = target.geometry.coordinates;
        tissot.coordinates[0].push(coords);
        bearing = bearing + step_angle;
    }
    return tissot;
}
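For example, a usage sketch (the centre point mirrors the GeoJSON point used earlier; the radius, step count and units are just example values):
// Build a 500 km true circle around a point and show it on the map.
var center = {
    "type": "Feature",
    "properties": {},
    "geometry": { "type": "Point", "coordinates": [5.470247, 51.441767] }
};
var circle = tissotish(center, 500, 64, 'kilometers');
L.geoJson(circle).addTo(map);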
This doesn't produce a true Tissot's indicatrix (which is supposed to measure distortion at a single point), but it does produce a true circle, which from your other comments seems to be what you're looking for anyway. Here it is at work in an example fiddle:
http://fiddle.jshell.net/nathansnider/58ud0ttk/
For comparison, I added the same measurement lines here as I applied to ghybs's answer above. (When you click on the lines, they do say that the horizontal distance is greater than 500km at higher latitudes, but that is because, the way I created them, they end up extending slightly outside the circle, and I was too lazy to crop them.)
I'm trying to smooth the data I'm getting from the deviceOrientation API to make a Google Cardboard application in the browser.
I'm piping the accelerometer data straight into the Three.js camera rotation, but we're getting a lot of noise on the signal, which is causing the view to judder.
Someone suggested a Kalman filter as the best way to approach smoothing signal-processing noise, and I found this simple JavaScript library on GitHub:
https://github.com/itamarwe/kalman
However, it's really light on documentation.
I understand that I need to create a Kalman model by providing a Vector and 3 Matrices as arguments and then update the model, again with a vector and matrices as arguments over a time frame.
I also understand that a Kalman filter equation has several distinct parts: the current estimated position, the Kalman gain value, the current reading from the orientation API and the previous estimated position.
I can see that a point in 3D space can be described as a Vector so any of the position values, such as an estimated position, or the current reading can be a Vector.
What I don't understand is how these parts could be translated into Matrices to form the arguments for the Javascript library.
Well, I wrote the abhorrently documented library a couple of years ago. If there's interest I'm definitely willing to upgrade it, improve the documentation and write tests.
Let me briefly explain what all the different matrices and vectors are and how they should be derived:
x - this is the vector that you try to estimate. In your case, it's probably the 3 angular accelerations.
P - is the covariance matrix of the estimation, meaning the uncertainty of the estimation. It is also estimated in each step of the Kalman filter along with x.
F - describes how x develops according to the model. Generally, the model is x[k] = F x[k-1] + w[k]. In your case, F might be the identity matrix, if you expect the angular acceleration to be relatively smooth, or the zero matrix, if you expect the angular acceleration to be completely unpredictable. In any case, w would represent how much you expect the acceleration to change from step to step.
w - describes the process noise, meaning, how much does the model diverge from the "perfect" model. It is defined as a zero mean multivariate normal distribution with covariance matrix Q.
All the variables above define your model, meaning what you are trying to estimate. In the next part, we talk about the model of the observation - what you measure in order to estimate your model.
z - this is what you measure. In your case, since you are using the accelerometers, you are measuring what you are also estimating. It will be the angular accelerations.
H - describes the relation between your model and the observation. z[k]=H[k]x[k]+v[k]. In your case, it is the identity matrix.
v - is the measurement noise and is assumed to be zero mean Gaussian white noise with covariance R[k]. Here you need to measure how noisy are the accelerometers, and calculate the noise covariance matrix.
To summarize, the steps to use the Kalman filter:
Determine x[0] and P[0] - the initial state of your model, and the initial estimation of how accurately you know x[0].
Determine F based on your model and how it develops from step to step.
Determine Q based on the stochastic nature of your model.
Determine H based on the relation between what you measure and what you want to estimate (between the model and the measurement).
Determine R based on the measurement noise. How noisy is your measurement.
Then, with every new observation, you can update the model state estimation using the Kalman filter, and have an optimal estimation of the state of the model(x[k]), and of the accuracy of that estimation(P[k]).
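The snippet below appears to set this up with the library's KalmanModel and KalmanObservation classes and sylvester-style $V/$M vector and matrix constructors, feeding it device orientation readings on a timer: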
var acc = {
    x: 0,
    y: 0,
    z: 0
};
var count = 0;
if (window.DeviceOrientationEvent) {
    window.addEventListener('deviceorientation', getDeviceRotation, false);
} else {
    $(".accelerometer").html("NOT SUPPORTED");
}
var x_0 = $V([acc.x, acc.y, acc.z]); //vector. Initial accelerometer values
//P prior knowledge of state
var P_0 = $M([
    [1,0,0],
    [0,1,0],
    [0,0,1]
]); //identity matrix. Initial covariance. Set to 1
var F_k = $M([
    [1,0,0],
    [0,1,0],
    [0,0,1]
]); //identity matrix. How change to model is applied. Set to 1
var Q_k = $M([
    [0,0,0],
    [0,0,0],
    [0,0,0]
]); //zero matrix. Noise in system is zero
var KM = new KalmanModel(x_0, P_0, F_k, Q_k);
var z_k = $V([acc.x, acc.y, acc.z]); //Updated accelerometer values
var H_k = $M([
    [1,0,0],
    [0,1,0],
    [0,0,1]
]); //identity matrix. Describes relationship between model and observation
var R_k = $M([
    [2,0,0],
    [0,2,0],
    [0,0,2]
]); //2x scalar matrix. Describes noise from sensor. Set to 2 to begin
var KO = new KalmanObservation(z_k, H_k, R_k);
//each 1/10th second take new reading from accelerometer to update
var getNewPos = window.setInterval(function () {
    KO.z_k = $V([acc.x, acc.y, acc.z]); //vector to be new reading from x, y, z
    KM.update(KO);
    $(".kalman-result").html(" x:" + KM.x_k.elements[0] + ", y:" + KM.x_k.elements[1] + ", z:" + KM.x_k.elements[2]);
    $(".difference").html(" x:" + (acc.x - KM.x_k.elements[0]) + ", y:" + (acc.y - KM.x_k.elements[1]) + ", z:" + (acc.z - KM.x_k.elements[2]));
}, 100);
//read event data from device
function getDeviceRotation(evt) {
    // gamma is the left-to-right tilt in degrees, where right is positive
    // beta is the front-to-back tilt in degrees, where front is positive
    // alpha is the compass direction the device is facing in degrees
    acc.x = evt.alpha;
    acc.y = evt.beta;
    acc.z = evt.gamma;
    $(".accelerometer").html(" x:" + acc.x + ", y:" + acc.y + ", z:" + acc.z);
}
Here is a demo page showing my results
http://cardboard-hand.herokuapp.com/kalman.html
I've set the sensor noise to a scalar matrix of 2 for now to see if the Kalman filter is doing its thing, but we have noticed the sensor has greater variance in the x axis when the phone is lying flat on the table. We think this might be an issue with gimbal lock. We haven't tested it, but it's possible the variance changes in each axis depending on the orientation of the device.
I'm currently trying to build a kind of pie chart / Voronoi diagram hybrid (in canvas/JavaScript). I don't know if it's even possible. I'm very new to this, and I haven't tried any approaches yet.
Assume I have a circle, and a set of numbers 2, 3, 5, 7, 11.
I want to subdivide the circle into sections equivalent to the numbers (much like a pie chart) but forming a lattice / honeycomb like shape.
Is this even possible? Is it ridiculously difficult, especially for someone who's only done some basic pie chart rendering?
This is my view on this after a quick look.
A general solution, assuming there are to be n polygons with k vertices/edges, will depend on the solution to n equations, where each equation has no more than 2nk variables (but exactly 2k of them non-zero). The variables in each polygon's equation are the same x_1, x_2, x_3, ..., x_nk and y_1, y_2, y_3, ..., y_nk variables. Exactly four of x_1, x_2, x_3, ..., x_nk have non-zero coefficients and exactly four of y_1, y_2, y_3, ..., y_nk have non-zero coefficients for each polygon's equation. x_i and y_i are bounded differently depending on the parent shape. For the sake of simplicity, we'll assume the shape is a circle. The boundary condition is: (x_i)^2 + (y_i)^2 <= r^2
Note: I say no more than 2nk because I am unsure of the lower bound, but I know that it cannot be more than 2nk. This is a result of polygons, as a requirement, sharing vertices.
The equations are the collection of definite, but variable-bounded, integrals representing the area of each polygon, with the area of the ith polygon equal to:
A_i = pi*r^2/S_i
where r is the radius of the parent circle and S_i is the number assigned to the polygon, as in your diagram.
The four separate pairs of (x_j, y_j) with non-zero coefficients in a polygon's equation will yield the vertices of that polygon.
This may prove to be considerably difficult.
Is the boundary fixed from the beginning, or can you deform it a bit?
If I had to solve this, I would sort the areas from large to small. Then, starting with the largest area, I would first generate a random convex polygon (vertices along a circle) with the required size. The next area would share an edge with the first area, but would be otherwise also random and convex. Each polygon after that would choose an existing edge from already-present polygons, and would also share any 'convex' edges that start from there (where 'convex edge' is one that, if used for the new polygon, would result in the new polygon still being convex).
By evaluating different prospective polygon positions for 'total boundary approaches desired boundary', you can probably generate a cheap approximation to your initial goal. This is quite similar to what word-clouds do: place things incrementally from largest to smallest while trying to fill in a more-or-less enclosed space.
Given a set of Voronoi centres (i.e. a list of the coordinates of the centre of each one), we can calculate the area closest to each centre:
area[i] = areaClosestTo(i,positions)
Assume these are a bit wrong, because we haven't got the centres in the right place. So we can calculate the error in our current set by comparing the areas to the ideal areas:
var areaIndexSq = 0;
var desiredAreasMagSq = 0;
for (var i = 0; i < areas.length; ++i) {
    var contrib = (areas[i] - desiredAreas[i]);
    areaIndexSq += contrib * contrib;
    desiredAreasMagSq += desiredAreas[i] * desiredAreas[i];
}
var areaIndex = Math.sqrt(areaIndexSq / desiredAreasMagSq);
This is the vector norm of the difference vector between the areas and the desiredAreas. Think of it like a measure of how good a least squares fit line is.
We also want some kind of honeycomb pattern, so we can call that honeycombness(positions), and get an overall measure of the quality of the thing (this is just a starter, the weighting or form of this can be whatever floats your boat):
var overallMeasure = areaIndex + honeycombnessIndex;
Then we have a mechanism to know how bad a guess is, and we can combine this with a mechanism for modifying the positions; the simplest is just to add a random amount to the x and y coords of each centre. Alternatively you can try moving each point towards neighbour areas which have an area too high, and away from those with an area too low.
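For illustration, a minimal sketch of that simplest variant (random perturbation, keeping a change only when the overall measure improves). areaClosestTo and honeycombness are the placeholder functions named above, and areaIndex here stands for the error computation shown earlier:
// Hill-climb: jiggle the centres, keep the change when the overall measure improves.
function overallMeasure(positions, desiredAreas) {
    var areas = positions.map(function (_, i) { return areaClosestTo(i, positions); });
    return areaIndex(areas, desiredAreas) + honeycombness(positions);
}
function improve(positions, desiredAreas, iterations, step) {
    var best = overallMeasure(positions, desiredAreas);
    for (var it = 0; it < iterations; ++it) {
        var candidate = positions.map(function (p) {
            return { x: p.x + (Math.random() - 0.5) * step,
                     y: p.y + (Math.random() - 0.5) * step };
        });
        var m = overallMeasure(candidate, desiredAreas);
        if (m < best) { best = m; positions = candidate; }
    }
    return positions;
}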
This is not a straight solve, but it requires minimal maths apart from calculating the area closest to each point, and it's approachable. The difficult part may be recognising local minima and dealing with them.
Incidentally, it should be fairly easy to get the start points for the process; the centroids of the pie slices shouldn't be too far from the truth.
A definite plus is that you could use the intermediate calculations to animate a transition from pie to voronoi.
I'm trying to wrap my head around using the Separating Axis Theorem in JavaScript to detect two squares colliding (one rotated, one not). As hard as I try, I can't figure out what this would look like in JavaScript, nor can I find any JavaScript examples. Please help, an explanation with plain numbers or JavaScript code would be most useful.
Update: After researching lots of geometry and math theories I've decided to roll out a simplified SAT implementation in a GitHub repo. You can find a working copy of SAT in JavaScript here: https://github.com/ashblue/canvas-sat
Transforming polygons
First you have to transform all points of your convex polygons (squares in this case) so they are all in the same space, by applying the rotation angle.
For future support of scaling, translation, etc. I recommend doing this through matrix transforms. You'll have to code your own Matrix class or find some library that has this functionality already (I'm sure there are plenty of options).
Then you'll use code in the vein of:
var transform = new Matrix();
transform.appendRotation(alpha);
points = transform.transformPoints(points);
Where points is an array of Point objects or so.
Collision algorithm overview
So that's all before you get to any collision stuff. Regarding the collision algorithm, it's standard practice to try and separate 2 convex polygons (squares in your case) using the following steps:
For each polygon edge (edges of both polygon 0 and polygon 1):
Classify both polygons as "in front", "spanning" or "behind" the edge.
If both polygons are on different sides (1 "in front" and 1 "behind"), there is no collision, and you can stop the algorithm (early exit).
If you get here, no edge was able to separate the polygons: the polygons intersect/collide (a compact sketch of this loop follows the note below).
Note that conceptually, the "separating axis" is the axis perpendicular to the edge we're classifying the polygons with.
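Here is that sketch. getEdges, edgeNormal and classifyPolygon are hypothetical helpers (the last two correspond to the normal computation and point classification detailed below); polygons are assumed to be arrays of points:
function polygonsCollide(poly0, poly1) {
    var edges = getEdges(poly0).concat(getEdges(poly1)); // hypothetical: pairs of points
    for (var i = 0; i < edges.length; i++) {
        var p0 = edges[i][0];
        var normal = edgeNormal(edges[i]); // perpendicular to the edge, see below
        var side0 = classifyPolygon(poly0, p0, normal);
        var side1 = classifyPolygon(poly1, p0, normal);
        if ((side0 === "in front" && side1 === "behind") ||
            (side0 === "behind" && side1 === "in front")) {
            return false; // separating axis found: no collision (early exit)
        }
    }
    return true; // no edge separates the polygons: they intersect/collide
}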
Classifying polygons with regards to an edge
In order to do this, we'll classify a polygon's points/vertices with regards to the edge. If all points are on one side, the polygon's on that side. Otherwise, the polygon's spanning the edge (partially on one side, partially on the other side).
To classify points, we first need to get the edge's normal:
// this code assumes p0 and p1 are instances of some Vector3D class
var p0 = edge[0]; // first point of edge
var p1 = edge[1]; // second point of edge
var v = p1.subtract(p0);
var normal = new Vector3D(0, 0, 1).crossProduct(v);
normal.normalize();
The above code uses the cross product of the edge direction and the z vector to get the normal. Of course, you should pre-calculate this for each edge instead.
Note: The normal represents the separating axis from the SAT.
Next, we can classify an arbitrary point by first making it relative to the edge (subtracting an edge point), and using the dot-product with the normal:
// point is the point to classify as "in front" or "behind" the edge
var point = point.subtract(p0);
var distance = point.dotProduct(normal);
var inFront = distance >= 0;
Now, inFront is true if the point is in front or on the edge, and false otherwise.
Note that, when you loop over a polygon's points to classify the polygon, you can also exit early if you have at least 1 point in front and 1 behind, since then it's already determined that the polygon is spanning the edge (and not in front or behind).
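For instance, a hedged sketch of that classification with the early exit, using the same assumed Vector3D-style subtract()/dotProduct() API as above (classifyPolygon is an invented name):
function classifyPolygon(points, p0, normal) {
    var hasFront = false, hasBehind = false;
    for (var i = 0; i < points.length; i++) {
        var distance = points[i].subtract(p0).dotProduct(normal);
        if (distance >= 0) hasFront = true; else hasBehind = true;
        if (hasFront && hasBehind) return "spanning"; // early exit
    }
    return hasFront ? "in front" : "behind";
}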
So as you can see, you still have quite a bit of coding to do. Find some js library with Matrix and Vector3D classes or so and use that to implement the above. Represent your collision shapes (polygons) as sequences of Point and Edge instances.
Hopefully, this will get you started.