javascript 'deviceorientation' event - what sensors does it measure?

If I have a simple web page and script that looks like this:
<body>
<div id="alpha">a</div>
<div id="beta">b</div>
<div id="gamma">g</div>
</body>
<script>
window.addEventListener('deviceorientation', function(event) {
var alpha = event.alpha;
var beta = event.beta;
var gamma = event.gamma;
document.getElementById("alpha").innerHTML = alpha;
document.getElementById("beta").innerHTML = beta;
document.getElementById("gamma").innerHTML = gamma;
}, false);
</script>
I can open it up in mobile Firefox for Android and it will output 3 numbers that look like the following:
89.256125
3.109375
0.28125
When I rotate the device, the numbers change based on the axis of rotation. I noticed the values for "alpha" are really noisy - they bounce around non-stop even when the phone is at rest on my desk, while the other two remain steady. I understand that alpha is my heading. I'm curious then: is it getting the "alpha" value from the compass (which has noise issues) and the other two from the gyroscope?
Another issue is when I change the pitch, for some reason the heading changes too, even if I don't actually change the heading. I'm just curious why this is and how it can be corrected?
Also, since the gyroscope measures angular velocity, I presume this event listener is integrating it automatically - is the integration algorithm as good as any? Does it use the accelerometer to correct the drift?
In this google tech talk video, from 15:00 to 19:00, the speaker talks about correcting the drift inherent in the gyroscope by using the accelerometer, as well as calibrating the orientation with respect to gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k
How would I go about doing this?
Thanks for any insights anyone may have.

All the orientation values are also very noisy for me. Shaky hand, Euler angles, magnetic interference, manufacturing bug, ... who knows?
I made a small exponential smoothing. That is, I replaced the fluctuating event.alpha by a smoothed value, which was conveniently called alpha:
alpha = event.alpha + s*(alpha - event.alpha), with 0 <= s <= 1;
In other words, each time a new observation is received, the smoothed value is updated with a correction proportional to the error.
If s=0, the smoothed is exactly the observed value and there is no smoothing.
If s=1, alpha remains constant, which is too efficient a smoothing.
Otherwise alpha is somewhere in between the observed and the smoothed value. In fact, it is a (weighted) average between the last observation and the history. It thus follows changes in values with a certain damping effect.
If s is small, then the process is near the last observation and adapts quickly to recent changes (and also to random fluctuations). The damping is small.
If s is near 1, the process is more viscous. It reacts lazily to random fluctuation (and also to change in the central tendency). The damping is large.
Do not try s outside the 0..1 range, as this leads to a negative feedback loop and alpha soon starts to diverge with larger and larger fluctuations.
I used s=0.25, after testing that there was no significant difference for s between 0.1 and 0.3.
Important: When using this method, do not forget to initialize alpha, outside the addEventListener function:
var alpha = 0;   // initial guesstimate (here simply 0)
Note that this simple adaptive smoothing works in many other cases, and is really simple programming.
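For concreteness, here is a minimal sketch of the smoothed listener, assuming the same markup as in the question and using s = 0.25 with zero initial guesses (both arbitrary choices):
var s = 0.25;                          // 0 = no smoothing, values near 1 = heavy damping
var alpha = 0, beta = 0, gamma = 0;    // initialize the smoothed values outside the listener

window.addEventListener('deviceorientation', function(event) {
  // smoothed = observation + s * (previous smoothed - observation)
  alpha = event.alpha + s * (alpha - event.alpha);
  beta  = event.beta  + s * (beta  - event.beta);
  gamma = event.gamma + s * (gamma - event.gamma);
  document.getElementById("alpha").innerHTML = alpha.toFixed(2);
  document.getElementById("beta").innerHTML  = beta.toFixed(2);
  document.getElementById("gamma").innerHTML = gamma.toFixed(2);
}, false);
One caveat: alpha wraps around at 0/360 degrees, so this naive form will briefly swing through the wrong direction when the heading crosses north.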

The device orientation is obtained by sensor fusion. Strictly speaking, none of the sensors measures it. The orientation is the result of merging the accelerometer, gyro and magnetometer data in a smart way.
I noticed the values for "alpha" are really noisy - they bounce around
non-stop even if the phone is at rest on my desk, while the other two
remain steady.
This is a common problem with Euler angles; try to avoid them if you can.
By the way, the Sensor Fusion on Android Devices: A Revolution in Motion Processing video you link to explains it at 38:25.
Also, since the gyroscope measures angular velocity, I presume this
event listener is integrating it automatically - is the integration
algorithm as good as any? Does it use the accelerometer to correct the
drift?
Yes, the gyro drift is corrected with the help of the accelerometer (and magnetometer, if any) readings. This is called sensor fusion.
In this google tech talk video, from 15:00 to 19:00, the speaker talks
about correcting the drift inherent in the gyroscope by using the
accelerometer, as well as calibrating the orientation with respect to
gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k How would I go
about doing this?
If you have orientation then somebody already did all this for you. You don't have to do anything.

Use a direction cosine matrix or a Kalman filter. You can compute the orientation from the accelerometer, the gyroscope, or a combination of both, and the drift can be estimated with a bit of machine learning. I think motion fusion is part of the Texas Instruments calibration package, but I could be wrong; it's not hard to check. Multiply the three rotation matrices and you're done: http://www.itu.dk/stud/speciale/segmentering/Matlab6p5/help/toolbox/aeroblks/euleranglestodirectioncosinematrix.html
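For what it's worth, here is a rough sketch of my own (not code from the TI package or the linked page) of building a direction cosine matrix from the deviceorientation angles, using the intrinsic Z-X'-Y'' rotation order from the W3C DeviceOrientation spec:
function degToRad(d) { return d * Math.PI / 180; }

// 3x3 matrix product a * b
function multiply(a, b) {
  var r = [[0, 0, 0], [0, 0, 0], [0, 0, 0]];
  for (var i = 0; i < 3; i++)
    for (var j = 0; j < 3; j++)
      for (var k = 0; k < 3; k++)
        r[i][j] += a[i][k] * b[k][j];
  return r;
}

function dcmFromEuler(alphaDeg, betaDeg, gammaDeg) {
  var a = degToRad(alphaDeg), b = degToRad(betaDeg), g = degToRad(gammaDeg);
  var Rz = [[Math.cos(a), -Math.sin(a), 0], [Math.sin(a), Math.cos(a), 0], [0, 0, 1]];
  var Rx = [[1, 0, 0], [0, Math.cos(b), -Math.sin(b)], [0, Math.sin(b), Math.cos(b)]];
  var Ry = [[Math.cos(g), 0, Math.sin(g)], [0, 1, 0], [-Math.sin(g), 0, Math.cos(g)]];
  return multiply(multiply(Rz, Rx), Ry);   // "multiply 3 rotational matrices"
}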

Related

How do I detect if a device has a gyroscope in a web browser?

I am using THREE.js and creating a web-app where the user can rotate the device and the scene will move accordingly. Something similar to this.
I am having a problem differentiating between devices that have a gyroscope and those that don't.
Detecting devices that don't have orientation sensors at all is easy. All the alpha, beta, gamma values of DeviceOrientationEvent are null. But, if a mobile device doesn't have a gyro, it still gives alpha, beta, gamma values in DeviceOrientationEvent. The problem is these values are very noisy, and cause a lot of shaking in the scene. So, I want to disable the device orientation for these devices. But, so far I haven't been able to find how to make out if the data is coming from a gyro or accelerometer (that's my guess on where the data is coming from).
I don't know if it helps, but a good example of how this is handled can be seen here. (Press the axis-like icon at the bottom; you'll have to view it on both a device with a gyroscope and one without to see the difference.) What they are doing for devices without a gyroscope is only updating the pitch and the roll. The yaw isn't updated when you rotate the phone.
So, it is definitely possible, but I haven't yet found out how even after searching a lot. It would be great if anyone could help.
Thanks a lot.
EDIT:
On devices that just have an accelerometer, like MOTO E, all values are null - DeviceOrientationEvent and rotationRate - with the only exception of accelerationIncludingGravity. But, the device I was testing earlier, that didn't have a gyro but still gave alpha, beta, gamma values for DeviceOrientationEvent, seems to have 2 accelerometers according to the "sensors" details on GSM Arena. That is how I suspect it was able to give DeviceOrientationEvent data, albeit noisy. Looks like 2 accelerometers aren't enough to give rotation rate ;)
If you want to check whether a gyroscope is present or not, check the parameters that only a gyroscope can measure. For example, rotation rate is something only a gyroscope measures.
Have a look at some example code which tells whether a gyroscope is present or not:
var gyroPresent = false;
window.addEventListener("devicemotion", function(event) {
  // rotationRate itself can be null on devices without a gyroscope
  if (event.rotationRate &&
      (event.rotationRate.alpha || event.rotationRate.beta || event.rotationRate.gamma)) {
    gyroPresent = true;
  }
});
Hope this helps!
EDIT:
Just a small note: Here, the DeviceMotionEvent is used because the rotationRate (and acceleration etc.) can be accessed from this event only. The OP had tried only the DeviceOrientationEvent, so this is worth a mention.
Modern solution:
if (window.DeviceOrientationEvent) {
  // the DeviceOrientationEvent API is supported
  // (note: this alone does not prove the device actually has a gyroscope)
}

How can I detect the direction/distance of movement on iOS with javascript? [duplicate]

I was looking into implementing an Inertial Navigation System for an Android phone, which I realise is hard given the accelerometer accuracy, and constant fluctuation of readings.
To start with, I set the phone on a flat surface and sampled 1000 accelerometer readings in the X and Y directions (parallel to the table, so no gravity acting in these directions). I then averaged these readings and used this value to calibrate the phone (subtracting this value from each subsequent reading).
I then tested the system by again placing it on the table and sampling 5000 accelerometer readings in the X and Y directions. I would expect, given the calibration, that these accelerations should add up to 0 (roughly) in each direction. However, this is not the case, and the total acceleration over 5000 iterations is nowhere near 0 (averaging around 10 on each axis).
I realise without seeing my code this might be difficult to answer but in a more general sense...
Is this simply an example of how inaccurate the accelerometer readings are on a mobile phone (HTC Desire S), or is it more likely that I've made some errors in my coding?
You get position by integrating the linear acceleration twice but the error is horrible. It is useless in practice.
Here is an explanation why (Google Tech Talk) at 23:20. I highly recommend this video.
It is not the accelerometer noise that causes the problem but the gyro white noise, see subsection 6.2.3 Propagation of Errors. (By the way, you will need the gyroscopes too.)
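To get a feel for why, here is a back-of-the-envelope sketch (my own numbers, not from the talk) of how a small constant accelerometer bias blows up once it is integrated twice into position:
var bias = 0.1;                              // assumed calibration error, in m/s^2
for (var t = 10; t <= 60; t += 10) {
  var positionError = 0.5 * bias * t * t;    // x = 1/2 * a * t^2
  console.log(t + ' s: ' + positionError.toFixed(1) + ' m of drift');
}
// After one minute, that assumed 0.1 m/s^2 bias alone accounts for ~180 m of position
// error, and real devices also suffer from noise and gyro errors on top of it.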
As for indoor positioning, I have found these useful:
RSSI-Based Indoor Localization and Tracking Using Sigma-Point Kalman Smoothers
Pedestrian Tracking with Shoe-Mounted Inertial Sensors
Enhancing the Performance of Pedometers Using a Single Accelerometer
I have no idea how these methods would perform in real-life applications or how to turn them into a nice Android app.
A similar question is this.
UPDATE:
Apparently there is a newer version of Oliver J. Woodman's "An introduction to inertial navigation", namely his PhD thesis:
Pedestrian Localisation for Indoor Environments
I am just thinking out loud, and I haven't played with an android accelerometer API yet, so bear with me.
First of all, traditionally, to get navigation from accelerometers you would need a 6-axis accelerometer. You need accelerations in X, Y, and Z, but also rotations Xr, Yr, and Zr. Without the rotation data, you don't have enough data to establish a vector unless you assume the device never changes its attitude, which would be pretty limiting. No one reads the TOS anyway.
Oh, and you know that INS drifts with the rotation of the earth, right? So there's that too. One hour later and you're mysteriously climbing on a 15° slope into space. That's assuming you had an INS capable of maintaining location that long, which a phone can't do yet.
A better way to utilize accelerometers (even with a 3-axis accelerometer) for navigation would be to tie into GPS to calibrate the INS whenever possible. Where GPS falls short, INS complements it nicely. GPS can suddenly shoot you off 3 blocks away because you got too close to a tree. INS isn't great, but at least it knows you weren't hit by a meteor.
What you could do is log the phone's accelerometer data, and a lot of it. Like weeks' worth. Compare it with good (I mean really good) GPS data and use data mining to establish correlations between trends in the accelerometer data and known GPS data. (Pro tip: you'll want to check the GPS almanac for days with good geometry and a lot of satellites. Some days you may only have 4 satellites, and that's not enough.)
What you might be able to do is find that when a person is walking with their phone in their pocket, the accelerometer data logs a very specific pattern. Based on the data mining, you establish a profile for that device, with that user, and what sort of velocity that pattern represents when it had GPS data to go along with it. You should be able to detect turns, climbing stairs, sitting down (calibration to zero-velocity time!) and various other tasks. How the phone is being held would need to be treated as a separate data input entirely.
I smell a neural network being used to do the data mining: something blind to what the inputs mean, in other words. The algorithm would only look for trends in the patterns, not really paying attention to the actual measurements of the INS. All it would know is that historically, when this pattern occurs, the device is traveling at 2.72 m/s X, 0.17 m/s Y, 0.01 m/s Z, so the device must be doing that now, and it would move the piece forward accordingly. It's important that it's completely blind, because a phone in your pocket might be oriented in one of 4 different orientations, and 8 if you switch pockets, and there are many ways to hold your phone as well. We're talking a lot of data here.
You'll obviously still have a lot of drift, but I think you'd have better luck this way because the device will know when you stopped walking, and the positional drift will not perpetuate. It knows that you're standing still based on historical data. Traditional INS systems don't have this feature. The drift perpetuates to all future measurements and compounds exponentially. Ungodly accuracy, or a secondary navigation source to check against at regular intervals, is absolutely vital with traditional INS.
Each device, and each person, would have to have their own profile. It's a lot of data and a lot of calculations. Everyone walks at different speeds, with different steps, and puts their phone in different pockets, etc. Surely implementing this in the real world would require the number-crunching to be handled server-side.
If you did use GPS for the initial baseline, part of the problem there is that GPS tends to have its own migrations over time, but they are non-perpetuating errors. Sit a receiver in one location and log the data. If there are no WAAS corrections, you can easily get location fixes drifting in random directions 100 feet around you. With WAAS, maybe down to 6 feet. You might actually have better luck with a sub-meter RTK system on a backpack to at least get the ANN's algorithm down.
You will still have angular drift with the INS using my method. This is a problem. But if you went so far as to build an ANN to pore over weeks' worth of GPS and INS data among n users, and actually got it working to this point, you obviously don't mind big data so far. Keep going down that path and use more data to help resolve the angular drift: people are creatures of habit. We pretty much do the same things, like walk on sidewalks, through doors, and up stairs, and don't do crazy things like walk across freeways, through walls, or off balconies.
So let's say you are taking a page from Big Brother and start storing data on where people are going. You can start mapping where people would be expected to walk. It's a pretty sure bet that if the user starts walking up stairs, she's at the same base of stairs that the person before her walked up. After 1000 iterations and some least-squares adjustments, your database pretty much knows where those stairs are with great accuracy. Now you can correct angular drift and location as the person starts walking. When she hits those stairs, or turns down that hall, or travels down a sidewalk, any drift can be corrected. Your database would contain sectors that are weighted by the likelihood that a person would walk there, or that this user has walked there in the past. Spatial databases are optimized for this using divide and conquer to only allocate sectors that are meaningful. It would be sort of like those MIT projects where the laser-equipped robot starts off with a black image, and paints the maze in memory by taking every turn, illuminating where all the walls are.
Areas of high traffic would get higher weights, and areas where no one has ever been get 0 weight. Higher-traffic areas have higher resolution. You would essentially end up with a map of everywhere anyone has been and use it as a prediction model.
I wouldn't be surprised if you could determine what seat a person took in a theater using this method. Given enough users going to the theater, and enough resolution, you would have data mapping each row of the theater and how wide each row is. The more people visit a location, the higher the fidelity with which you could predict where a person is located.
Also, I highly recommend you get a (free) subscription to GPS World magazine if you're interested in the current research into this sort of stuff. Every month I geek out with it.
I'm not sure how great your offset is, because you forgot to include units. ("Around 10 on each axis" doesn't say much. :P) That said, it's still likely due to inaccuracy in the hardware.
The accelerometer is fine for things like determining the phone's orientation relative to gravity, or detecting gestures (shaking or bumping the phone, etc.)
However, trying to do dead reckoning using the accelerometer is going to subject you to a lot of compound error. The accelerometer would need to be insanely accurate otherwise, and this isn't a common use case, so I doubt hardware manufacturers are optimizing for it.
The Android accelerometer is digital: it samples acceleration into a fixed number of "buckets". Let's say there are 256 buckets and the accelerometer is capable of sensing from -2g to +2g. This means your output is quantized in terms of these buckets and will jump around within some set of discrete values.
To calibrate an Android accelerometer, you need to sample a lot more than 1000 points and find the "mode" around which the accelerometer is fluctuating. Then find how many digital steps the output fluctuates by and use that for your filtering.
I recommend Kalman filtering once you get the mode and +/- fluctuation.
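A rough browser-side sketch of that calibration idea (using the devicemotion event rather than the Android SDK; the 5000-sample count and the 0.01 m/s^2 bucket width are arbitrary choices):
var samples = [];
function onMotion(event) {
  if (!event.accelerationIncludingGravity) return;
  samples.push(event.accelerationIncludingGravity.x);
  if (samples.length < 5000) return;
  window.removeEventListener('devicemotion', onMotion);

  // Quantize into buckets and count how often each bucket occurs.
  var counts = {};
  samples.forEach(function(v) {
    var bucket = (Math.round(v * 100) / 100).toFixed(2);
    counts[bucket] = (counts[bucket] || 0) + 1;
  });

  // The mode is the bucket with the highest count; the spread hints at the noise band.
  var mode = Object.keys(counts).reduce(function(a, b) {
    return counts[a] >= counts[b] ? a : b;
  });
  var spread = Math.max.apply(null, samples) - Math.min.apply(null, samples);
  console.log('bias estimate (mode):', mode, 'fluctuation band:', spread);
}
window.addEventListener('devicemotion', onMotion);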
I realise this is quite old, but the issue at hand is not addressed in ANY of the answers given.
What you are seeing is the linear acceleration of the device including the effect of gravity. If you lay the phone on a flat surface the sensor will report the acceleration due to gravity which is approximately 9.80665 m/s2, hence giving the 10 you are seeing. The sensors are inaccurate, but they are not THAT inaccurate! See here for some useful links and information about the sensor you may be after.
You are making the assumption that the accelerometer readings in the X and Y directions, which in this case is entirely hardware noise, would form a normal distribution around your average. Apparently that is not the case.
One thing you can try is to plot these values on a graph and see whether any pattern emerges. If not then the noise is statistically random and cannot be calibrated against--at least for your particular phone hardware.

HTML5 Canvas Collision Detection "globalCompositeOperation" performance

Morning,
Over the past few months I have been tinkering with the HTML5 Canvas API and have had quite a lot of fun doing so.
I've gradually created a number of small games purely for teaching myself the dos and don'ts of game development. I am at a point where I am able to carry out basic collision detection, i.e. collisions between circles and platforms (fairly simple for most out there, but it felt like quite an achievement when I first got it working, and even better once I understood what was actually going on). I know pixel collision detection is not for every game, purely because in many scenarios you can achieve good enough results using the methods mentioned above, and pixel-level detection is obviously quite expensive on resources.
But I just had a brainwave (it is more than likely somebody else has thought of this and I am way behind the field, but I've googled it and found nothing)... so here goes...
Would it be possible to use/harness the "globalCompositeOperation" feature of canvas? My initial thought was to set its method to "xor" and then check all the pixels on the canvas for transparency; if such a pixel is found, there must be a collision. Right? Obviously at this point you need to work out which objects the pixel in question belongs to and how to react, but you would have to do that with other techniques too.
That said, is the canvas already doing this kind of collision detection behind the scenes in order to work out when shapes are overlapping? Would it be possible to build upon that?
Any ideas?
Gary
The canvas doesn't do this automatically (probably because the API is still in its infancy). easeljs takes this approach for mouse enter/leave events, and it is extremely inefficient. I am using an algorithmic approach to determine bounds, and I then use that to see if the mouse is inside or outside of the shape. In theory, to implement hit detection this way, all you have to do is take all the points of both shapes and see if any of them is ever inside the other shape. If you want to see some of my code, just let me know.
However, I will say that, although your way is very inefficient, it is globally applicable to any shape.
I made a demo on codepen which does the collision detection using an off screen canvas with globalCompositeOperation set to xor as you mentioned. The code is short and simple, and should have OK performance with smallish "collision canvases".
http://codepen.io/sakri/pen/nIiBq
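For reference, here is a stripped-down sketch of the same idea (not the codepen code itself). It uses 'source-in', which leaves only the overlapping pixels on the off-screen canvas, so any remaining opaque pixel means a collision; drawA and drawB stand for whatever sprite-drawing routines you already have:
function pixelsOverlap(drawA, drawB, size) {
  var canvas = document.createElement('canvas');
  canvas.width = canvas.height = size;            // keep this small, e.g. the sprite size
  var ctx = canvas.getContext('2d');

  drawA(ctx);                                     // first sprite, normal compositing
  ctx.globalCompositeOperation = 'source-in';     // keep new pixels only where old ones exist
  drawB(ctx);                                     // only the intersection survives

  var data = ctx.getImageData(0, 0, size, size).data;
  for (var i = 3; i < data.length; i += 4) {      // walk the alpha channel
    if (data[i] > 0) return true;                 // an opaque pixel means the sprites overlap
  }
  return false;
}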
If you use xor mode on a fullscreen canvas, the second step is to getImageData of the whole screen, which is a high-cost step, and the next step is to find out which objects were involved in the collision.
No need to benchmark: it will be too slow.
I'd rather suggest you use the 'classical' bounding box test first, then a test on the inner BBoxes of the objects, and only afterwards go for pixels, locally.
By inner bounding box, I mean a rectangle that is guaranteed to lie entirely inside your object (the reddish part in the illustration that accompanied the original answer).
So use this mixed strategy:
- do a test on the bounding boxes of your objects.
- if there's a collision between 2 BBoxes, perform an inner bounding box test: we are sure there's a collision if the sprites' inner BBoxes overlap.
- then you keep the pixel-perfect test only for the really problematic cases, and you only need to draw both sprites on a temporary canvas that has the size of the bigger sprite. You'll be able to perform a much, much faster getImageData. At this step, you know which objects are involved in the collision.
Notice that you can draw the sprites with a scale, on a smaller canvas, to get faster getImageData at the cost of a lower resolution.
Be sure to disable smoothing, and I think an 8x8 canvas should already be enough (it depends on average sprite speed, in fact; if your sprites are slow, increase the resolution).
That way the data is only 8 x 8 x 4 = 256 bytes big and you can keep a good frame rate.
Note also that, when deciding how to compute the inner BBox, you can allow a given number of empty pixels into that inner BBox, trading accuracy for speed.
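A small sketch of that mixed strategy (the {x, y, w, h} rectangle format and the pixelPerfectTest placeholder are assumptions, not code from this page):
function boxesOverlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

function collides(spriteA, spriteB) {
  if (!boxesOverlap(spriteA.bbox, spriteB.bbox)) return false;        // cheap reject
  if (boxesOverlap(spriteA.innerBox, spriteB.innerBox)) return true;  // certain hit
  // pixelPerfectTest is a placeholder for the small temporary-canvas test described above
  return pixelPerfectTest(spriteA, spriteB);   // only ambiguous cases pay for getImageData
}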

Most efficient way to implement mouse smoothing

I'm finishing up a drawing application that uses OpenGL ES 2.0 (WebGL) and JS. Things work pretty well unless I draw with very quick movements. See the image below:
This loop was drawn with a smooth motion, but because JS was only able to get mouse readings at specific locations, the result is faceted. This happens to a certain degree in Photoshop if you have mouse smoothing turned off, though obviously much less because PS has the ability to poll at a much higher rate.
So, I would like to implement some mouse smoothing, but I'm concerned about making sure it's very efficient so that it doesn't bog down the actual pixel drawing operations. I was originally thinking about using the mouse locations that JS is able to grab to generate splines and interpolate between readings to give a smoother result. I'm not sure if this is the best approach, though. If it is, how do I make sure I sample the correct locations on the intermediate spline? Most of the spline equations I've found don't have uniformly-distributed values for t = [0, 1].
Any help/guidance/advice would be very appreciated. Thanks!
Catmull-Rom might be a good one to try, if you haven't already.
http://www.mvps.org/directx/articles/catmull/
I'd pick a minimum segment length and divide up segments that are over that into 1+segmentLength/minSegmentLength sub-segments.
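In case it helps, a small sketch of Catmull-Rom interpolation between recorded mouse points (each point is assumed to be an {x, y} object; p1 and p2 are the segment being smoothed, p0 and p3 their neighbours):
function catmullRom(p0, p1, p2, p3, t) {
  var t2 = t * t, t3 = t2 * t;
  function axis(a0, a1, a2, a3) {
    return 0.5 * ((2 * a1) +
                  (-a0 + a2) * t +
                  (2 * a0 - 5 * a1 + 4 * a2 - a3) * t2 +
                  (-a0 + 3 * a1 - 3 * a2 + a3) * t3);
  }
  return { x: axis(p0.x, p1.x, p2.x, p3.x),
           y: axis(p0.y, p1.y, p2.y, p3.y) };
}

// Divide the p1-p2 segment into sub-segments based on its length, as suggested above.
function subdivide(p0, p1, p2, p3, minSegmentLength) {
  var dx = p2.x - p1.x, dy = p2.y - p1.y;
  var steps = 1 + Math.floor(Math.sqrt(dx * dx + dy * dy) / minSegmentLength);
  var points = [];
  for (var i = 1; i <= steps; i++) {
    points.push(catmullRom(p0, p1, p2, p3, i / steps));
  }
  return points;
}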

JavaScript HTML5 canvas collision detection

I'm working on creating an air hockey-like game using HTML5 canvas and JavaScript. I've gotten pretty far, but detecting the collision of the mallet and the ball has me stumped. I've tried using the distance between the two circles and distance squared (to conserve CPU by bypassing square root). I can't figure out why the collision is not being detected.
Here's what I have: http://austin.99k.org/z_Archive/Air_Hockey/
Please take a look and help me figure it out. The source files are somewhat commented.
Your hit function is wrong. You should simply compute the distance between the two points (which you do correctly), and compare that to the minimum distance between the mallet and ball.
For example,
return distance_squared < radii_squared
You're actually (effectively) doing:
return -COLLIDEDISTANCE < radii_squared - distance_squared && radii_squared - distance_squared < COLLIDEDISTANCE
Which requires any hit to be within 2 units of the edge, but the numbers I saw running through hit() implied you're at a scale factor that makes a single unit less than one pixel.
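For comparison, a minimal circle-vs-circle hit test along those lines (assuming the mallet and ball objects expose x, y and radius; the names are illustrative, not taken from the linked source):
function hit(mallet, ball) {
  var dx = mallet.x - ball.x;
  var dy = mallet.y - ball.y;
  var distanceSquared = dx * dx + dy * dy;
  var radiusSum = mallet.radius + ball.radius;
  return distanceSquared < radiusSum * radiusSum;  // overlap when centers are closer than the two radii combined
}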
