How accurate and reliable is GeolocationCoordinates.speed API? - javascript

I was wondering how accurate and reliable the speed property of a single GeolocationCoordinates sample is.
I want to take a few samples of the current speed of an object (with GPS), and I am not sure whether I should calculate the velocity myself (from the distance traveled between two time samples, using latitude, longitude and altitude) or just use the speed property of a single GeolocationCoordinates sample.
How is the speed property measured?
Which approach should I take?

The GeolocationCoordinates.speed read-only property is a double
representing the velocity of the device in meters per second.
source; this might also help.
So I guess you don't have to calculate it manually.
TIP: For better accuracy, make sure the device is not in battery saver mode, as that can affect the result significantly.
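If you do use the property directly, a minimal sketch might look like this (coords.speed is nullable, so guard for that; enableHighAccuracy asks for GPS-grade fixes where possible):

// Minimal sketch: read coords.speed straight from the Geolocation API.
const watchId = navigator.geolocation.watchPosition(
  (position) => {
    const speed = position.coords.speed; // meters per second, or null
    if (speed !== null) {
      console.log("Current speed: " + speed.toFixed(2) + " m/s");
    } else {
      console.log("No speed estimate for this fix");
    }
  },
  (err) => console.error(err.message),
  { enableHighAccuracy: true }
);
// Later: navigator.geolocation.clearWatch(watchId);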

Related

WebAudioAPI: PannerNode: What value is represented by `orientation` and `forward`

I'm using a PannerNode from the WebAudioAPI. Among other things it contains orientation X/Y/Z. Also, the Listener contains a forward X/Y/Z.
Both orientation and forward are represented by values in the range (-3.4028235e38, 3.4028235e38).
source
Question
What do these values represent?
I thought orientation and forward were direction vectors that would be 1 in length. Instead they have the weird maximum value of +/-3.4028235e38.
edit:
What I've done
I've checked Mozilla's MDN and W3C's information. However, the following questions remain:
Perhaps orientation and forward mark a point in the coordinate system?
If that's the case, what is the coordinate's anchor? ((0,0) or position - i.e. are the coordinates relative to the position?)
If orientation is a "coordinate point", why would you need maxDistance? I'd say that value would be inferred from the coordinates of orientation
You can also look at the WebAudio spec on spatialization. There are diagrams there that show what the forward, up, and orientation vectors mean. They are, in fact, direction vectors. The magnitude doesn't matter. The meaning of these is in the section on "Azimuth and Elevation", but it might be a bit hard to extract out the parts you're interested in.
maxDistance is used to clamp the attenuation after some point. This is based on the distance between the listener (AudioListener positionX/Y/Z) and the source, (PannerNode positionX/Y/Z). This is described in "Distance Effects" along with the DistanceModelType.
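To illustrate the "direction vector, magnitude ignored" point, here is a sketch using the AudioParam-based properties (older implementations exposed setOrientation()/setPosition() helpers instead):

// Sketch: orientation and forward are direction vectors; only the direction
// matters, so (0, 0, -1) and (0, 0, -42) point the same way.
const ctx = new AudioContext();
const panner = ctx.createPanner();

// Point the source along the negative z axis.
panner.orientationX.value = 0;
panner.orientationY.value = 0;
panner.orientationZ.value = -1;

// The listener's forward and up vectors work the same way.
ctx.listener.forwardX.value = 0;
ctx.listener.forwardY.value = 0;
ctx.listener.forwardZ.value = -1;
ctx.listener.upX.value = 0;
ctx.listener.upY.value = 1;
ctx.listener.upZ.value = 0;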

Find the closest coordinate from a set of coordinates

I have a set of about 1000 geographical coordinates (lat, long).
Given one coordinate, I want to find the closest one from that set. My approach was to measure the distance to each point, but at hundreds of requests per second all that math can be a little rough on the server.
What is the best optimized solution for this?
Thanks
You will want to use the 'Nearest Neighbor Algorithm'.
You can use the sphere-knn library, or look at something like PostGIS.
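A rough sketch of how sphere-knn is wired up, as I read its README (double-check the exact signature against the library's own docs):

// Build the lookup function once from your ~1000 points, then query it
// per request; points need latitude/longitude properties.
const sphereKnn = require("sphere-knn");

const lookup = sphereKnn([
  { latitude: 52.5200, longitude: 13.4050 },
  { latitude: 48.8566, longitude: 2.3522 },
  // ...the rest of your set
]);

// Closest single point to the query coordinate.
const [closest] = lookup(50.11, 8.68, 1);
console.log(closest);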
Why not select the potential closest points from the set first (e.g. set a threshold, say 0.1, and filter the set so that you keep only points within ±0.1 on both axes of your target point)? Then do the actual calculations on this smaller set.
If none are within the first range, just enlarge it (0.2) and repeat (0.3, 0.4, ...) until you've got a match. Obviously you would tune the threshold so it best matches your likely results.
(I'm assuming the time-consuming bit is the actual distance calculation, so the idea is to limit the number of calculations.)
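A sketch of that expanding-box idea (names are illustrative; note that a point just outside the box can occasionally be nearer than one inside, so treat this as an approximation):

// Cheap coordinate comparisons first, real distance math only on survivors.
function closestPoint(target, points) {
  const toRad = Math.PI / 180;
  const sqDist = (p) => {
    const dLat = p.lat - target.lat;
    const dLng = (p.lng - target.lng) * Math.cos(target.lat * toRad);
    return dLat * dLat + dLng * dLng; // good enough for ranking nearby points
  };
  for (let box = 0.1; box <= 180; box += 0.1) {
    const candidates = points.filter(
      (p) => Math.abs(p.lat - target.lat) <= box &&
             Math.abs(p.lng - target.lng) <= box
    );
    if (candidates.length > 0) {
      return candidates.reduce((a, b) => (sqDist(a) < sqDist(b) ? a : b));
    }
  }
  return null;
}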
An Algorithmic Response
Your approach is already O(n) per query. That's algorithmically fast, and fairly simple to implement.
If that is not enough, you should consider taking a look at R-trees. The idea behind R-trees is roughly paraphrased as follows:
You already have a set of n elements. You can preprocess this data to form rough 'squares' of regions each containing a set of points, with an established boundary.
Now say a new element comes in. Instead of comparing across every coordinate, you identify which 'square' it belongs in by just comparing whether the point is smaller than the boundaries, and then measure the distance with only the points inside that square.
You can see at once the benefits:
You are no longer comparing against all coordinates, but instead only the boundaries (strictly less than the number of all elements) and then against the number of coordinates within your chosen boundary (also less than the number of all elements).
The worst case of such a lookup is still O(n) time, but on average it is closer to O(log n).
The main improvement is mostly in the pre-processing step (which is 'free' in that it's a one-time cost) and in the reduced number of comparisons needed.
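To make the pruning concrete, here is a hypothetical sketch of the simplest form of that preprocessing: a fixed grid of "squares" (a real R-tree adapts its regions to the data, but the comparison-saving idea is the same):

// One-time preprocessing: bucket points into grid cells.
const CELL = 0.1; // cell size in degrees; tune to your data density
const cellKey = (lat, lng) =>
  Math.floor(lat / CELL) + ":" + Math.floor(lng / CELL);

function buildIndex(points) {
  const grid = new Map();
  for (const p of points) {
    const k = cellKey(p.lat, p.lng);
    if (!grid.has(k)) grid.set(k, []);
    grid.get(k).push(p);
  }
  return grid;
}

// Per query: only the points in the matching cell are candidates.
// (A fuller version would also check the eight neighboring cells.)
function candidatesFor(grid, lat, lng) {
  return grid.get(cellKey(lat, lng)) || [];
}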
A Systemic Response
Just buy another server, and distribute the requests and the elements using a load balancer such as HAProxy.
Servers are fairly cheap, especially if they are critical to your business, and if you want to be fast, it's an easy way to scale.

How does google's position.coordinates.speed translate into mph?

I've been messing around with the Google Maps API. It returns a position object, with the relevant bit being the following:
{
"speed": 1.41837963
}
I have a number of readings from driving around the block, ranging from -1 and 0 (understandable, since I was just sitting there) to about 19. That seems about right considering how I was driving, but how does this translate into approximate mph?
I'm assuming you're not using Google Maps to obtain the user's position, but instead the W3C Geolocation standard. Speed is given in meters/second, so to convert to mph just multiply the returned value by 2.23694.
If your code contains something like navigator.geolocation.getCurrentPosition() or navigator.geolocation.watchPosition(), then speed is given in m/s. If, however, the above methods don't look familiar, post your code, or the endpoint you're requesting and I'd be more than happy to try to get you sorted out.
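A minimal sketch of that conversion in a watchPosition callback (coords.speed can be null when no estimate is available):

const MPS_TO_MPH = 2.23694; // 1 m/s = 2.23694 mph

navigator.geolocation.watchPosition((position) => {
  const speed = position.coords.speed;
  if (speed !== null && speed >= 0) {
    console.log((speed * MPS_TO_MPH).toFixed(1) + " mph");
  }
});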
I was unable to find any documentation on the Google Maps API position object in relation to speed, but you can calculate the speed yourself given two sets of GPS coords.
Here is a link on how to do this in Java, but the ideas will be the same.
http://www.ridgesolutions.ie/index.php/2013/11/14/algorithm-to-calculate-speed-from-two-gps-latitude-and-longitude-points-and-time-difference/
That way you are doing the calculations yourself and not relying on someone else to do it for you.
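For completeness, a sketch of that manual calculation in JavaScript, using the haversine formula between two fixes (field names are illustrative; timestamps are assumed to be in milliseconds):

function speedBetween(a, b) {
  const R = 6371000; // mean Earth radius in meters (approximate)
  const rad = (d) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLng = rad(b.lng - a.lng);
  const h = Math.sin(dLat / 2) ** 2 +
            Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) *
            Math.sin(dLng / 2) ** 2;
  const meters = 2 * R * Math.asin(Math.sqrt(h));
  const seconds = (b.timestamp - a.timestamp) / 1000;
  return meters / seconds; // m/s; multiply by 2.23694 for mph
}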

Looking for an algorithm to cluster 3d points, around 2d points

I'm trying to cluster photos (GPS + timestamp) around known GPS locations.
3d points = 2d + time stamp.
For example:
I walk along the road and take photos of lampposts; some are interesting, so I take 10 photos, and others are not, so I don't take any.
I'd like to cluster my photos around the lampposts, allowing me to see which lamppost was being photographed.
I've been looking at something like k-means clustering and wanted something more intelligent than just snapping the photos to the nearest lamppost.
(I'm going to write the code in JavaScript for a client-side app handling about (2000, 500) points at a time.)
K-means clustering is indeed a popular and easy-to-implement algorithm, but it has a couple of problems.
You need to feed it the number of clusters N as an input variable. Since I assume you don't know how many "things" you want to photograph, finding the right N is hard. Using iterative k-means or similar variations only shifts the problem to finding a proper evaluation function for multi-cluster partitions, which is in no way easier than finding N itself.
It can only detect linearly separable shapes. Let's say you are walking around Versailles, and you take a lot of pictures of the external walls. Then you move inside, and take pictures of the inner garden. The two shapes you obtain are a torus with a disk inside it, but k-means can't distinguish them.
Personally, I'd go with some sort of density-based clustering: you'll still have to feed the algorithm some parameters, but since we assume the space is Euclidean, finding them shouldn't take too much effort. Plus, it gives you the ability to distinguish noise points from cluster points and to treat them differently.
Furthermore, it can distinguish most shapes, and you don't need to give the number of clusters beforehand.
Density based clustering, such as DBSCAN, definitely is the way to go.
The two parameters of DBSCAN should be quite obvious to set:
epsilon: this is the radius for clustering, so e.g. you could use 10 meters, assuming that there are no lampposts closer than 10 meters. (You should be using Geodetic distance, not Euclidean!)
minPts: essentially the minimum size of a cluster. You can use 1 or 2, even.
distance: this parameter is implicit, but probably more important. You can use a combination of space and time here, e.g. 10 meters spatially and 1 year in the time domain. See Generalized DBSCAN for the more flexible version, which makes it obvious how to use multiple domains; a rough sketch of such a combined distance follows.
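For illustration, a toy DBSCAN sketch with the combined space/time neighborhood described above. All names are hypothetical, and the O(n^2) neighbor scan is only acceptable for small photo sets; a real implementation would use a spatial index:

function haversineMeters(a, b) {
  const R = 6371000; // approximate Earth radius in meters
  const rad = (d) => (d * Math.PI) / 180;
  const h = Math.sin(rad(b.lat - a.lat) / 2) ** 2 +
            Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) *
            Math.sin(rad(b.lng - a.lng) / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function dbscan(photos, epsMeters, epsMillis, minPts) {
  const labels = new Array(photos.length); // undefined = unvisited
  const neighbors = (i) => {
    const out = [];
    for (let j = 0; j < photos.length; j++) {
      if (haversineMeters(photos[i], photos[j]) <= epsMeters &&
          Math.abs(photos[i].timestamp - photos[j].timestamp) <= epsMillis) {
        out.push(j);
      }
    }
    return out;
  };
  let cluster = 0;
  for (let i = 0; i < photos.length; i++) {
    if (labels[i] !== undefined) continue;
    const seeds = neighbors(i);
    if (seeds.length < minPts) { labels[i] = -1; continue; } // noise
    labels[i] = ++cluster;
    for (let s = 0; s < seeds.length; s++) {
      const j = seeds[s];
      if (labels[j] === -1) labels[j] = cluster; // noise becomes border point
      if (labels[j] !== undefined) continue;
      labels[j] = cluster;
      const more = neighbors(j);
      if (more.length >= minPts) seeds.push(...more); // expand the cluster
    }
  }
  return labels; // -1 = noise, 1..k = cluster ids
}

// e.g. dbscan(photos, 10 /* meters */, 365 * 24 * 3600 * 1000 /* ~1 year */, 2)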
You can use a Delaunay triangulation to look for nearest points. It gives you a nearest-neighbor graph in which neighboring points are joined by Delaunay edges. Or you can cluster by color, as in a photo mosaic, which uses an antipole tree. Here is a similar answer: Algorithm to find for all points in set A the nearest neighbor in set B

javascript 'deviceorientation' event - what sensors does it measure?

If I have a simple web page and script that looks like this:
<body>
<div id="alpha">a</div>
<div id="beta">b</div>
<div id="gamma">g</div>
</body>
<script>
window.addEventListener('deviceorientation', function(event) {
var alpha = event.alpha;
var beta = event.beta;
var gamma = event.gamma;
document.getElementById("alpha").innerHTML = alpha;
document.getElementById("beta").innerHTML = beta;
document.getElementById("gamma").innerHTML = gamma;
}, false);
</script>
I can open it up in mobile Firefox for Android and it will output 3 numbers that look like the following:
89.256125
3.109375
0.28125
When I rotate the device, the numbers change based on the axis of rotation. I noticed the values for "alpha" are really noisy - they bounce around non-stop even when the phone is at rest on my desk, while the other two remain steady. I understand that alpha is my heading. I'm curious, then: is it getting the "alpha" value from the compass (which has noise issues) and the other two from the gyroscope?
Another issue is that when I change the pitch, the heading changes too, even if I don't actually change the heading. I'm curious why this is and how it can be corrected.
Also, since the gyroscope measures angular velocity, I presume this event listener is integrating it automatically - is the integration algorithm as good as any? Does it use the accelerometer to correct the drift?
In this Google tech talk video, from 15:00 to 19:00, the speaker talks about correcting the drift inherent in the gyroscope by using the accelerometer, as well as calibrating the orientation with respect to gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k
How would I go about doing this?
Thanks for any insights anyone may have.
All the orientation values are also very noisy for me. Shaky hand, Euler angles, magnetic interference, a manufacturing bug... who knows?
I applied a small exponential smoothing. That is, I replaced the fluctuating event.alpha with a smoothed value, conveniently also called alpha:
alpha = event.alpha + s * (alpha - event.alpha), with 0 <= s <= 1
In other words, each time a new observation is received, the smoothed value is updated with a correction proportional to the error.
If s=0, the smoothed is exactly the observed value and there is no smoothing.
If s=1, alpha remains constant, which is too much smoothing.
Otherwise alpha is somewhere in between the observed and the smoothed value. In fact, it is a (weighted) average between the last observation and history. It thus follows changes in values with a certain damping effect.
If s is small, then the process is near the last observation and adapts quickly to recent changes (and also to random fluctuations). The damping is small.
If s is near 1, the process is more viscous. It reacts lazily to random fluctuations (and also to changes in the central tendency). The damping is large.
Do not try s outside the 0..1 range, as this makes the update unstable and alpha soon starts to diverge with larger and larger fluctuations.
I used s=0.25, after testing that there was no significant difference for s between 0.1 and 0.3.
Important: when using this method, do not forget to initialize alpha outside the addEventListener function:
var alpha = guesstimate; // here, 0
Note that this simple adaptive smoothing works in many other cases, and is really simple programming.
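Putting it together, a sketch of the smoothing inside the event listener (note this naive update ignores the 0/360 wrap-around of alpha, which a fuller version would handle):

var s = 0.25; // damping factor, 0 <= s <= 1
var alpha = 0; // guesstimate; initialized outside the listener

window.addEventListener('deviceorientation', function(event) {
  // new smoothed value = observation + s * (previous smoothed - observation)
  alpha = event.alpha + s * (alpha - event.alpha);
  document.getElementById("alpha").innerHTML = alpha.toFixed(1);
}, false);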
The device orientation is obtained by sensor fusion. Strictly speaking, none of the sensors measures it. The orientation is the result of merging the accelerometer, gyro and magnetometer data in a smart way.
I noticed the values for "alpha" are really noisy - they bounce around
non-stop even if the phone is at rest on my desk, while the other two
remain steady.
This is a common problem with Euler angles; try to avoid them if you can.
By the way, the Sensor Fusion on Android Devices: A Revolution in Motion Processing video you link to explains it at 38:25.
Also, since the gyroscope measures angular velocity, I presume this
event listener is integrating it automatically - is the integration
algorithm as good as any? Does it use the accelerometer to correct the
drift?
Yes, the gyro drift is corrected with the help of the accelerometer (and magnetometer, if any) readings. This is called sensor fusion.
In this google tech talk video, from 15:00 to 19:00, the speaker talks
about correcting the drift inherent in the gyroscope by using the
accelerometer, as well as calibrating the orientation with respect to
gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k How would I go
about doing this?
If you have orientation then somebody already did all this for you. You don't have to do anything.
Use a direction cosine matrix or a Kalman filter. You can compute the orientation from the accelerometer, the gyroscope, or a combination of both. The drift can be estimated with a bit of machine learning. I think motion fusion is part of Texas Instruments' calibration package; I could be wrong, but it's not hard to check. Multiply 3 rotation matrices and you're grand: http://www.itu.dk/stud/speciale/segmentering/Matlab6p5/help/toolbox/aeroblks/euleranglestodirectioncosinematrix.html
