WebAudioAPI: PannerNode: What value is represented by `orientation` and `forward` - javascript

I'm using a PannerNode from WebAudioAPI. Among others it contains orientation X/Y/Z. Also, the Listener contains a forward X/Y/Z.
Both orientation and forward are listed with a value range of (-3.4028235e38, 3.4028235e38).
Question
What do these values represent?
I thought orientation and forward were direction vectors that would be 1 in length. Instead they have the weird maximum value of ±3.4028235e38.
edit:
What I've done
I've checked Mozilla's MDN and W3C's information. However, the following questions remain:
Perhaps orientation and forward mark a point in the coordinate system?
If that's the case, what is the coordinate system's anchor? ((0,0,0) or position, i.e. are the coordinates relative to the position?)
If orientation is a "coordinate point", why would you need maxDistance? I'd say that value is already implied by the coordinates of orientation.

You can also look at the WebAudio spec on spatialization. There are diagrams there that show what the forward, up, and orientation vectors mean. They are, in fact, direction vectors. The magnitude doesn't matter. The meaning of these is in the section on "Azimuth and Elevation", but it might be a bit hard to extract out the parts you're interested in.
maxDistance is used to clamp the attenuation after some point. This is based on the distance between the listener (AudioListener positionX/Y/Z) and the source (PannerNode positionX/Y/Z). This is described in "Distance Effects" along with the DistanceModelType.
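As a quick sketch, here is how you might set those direction vectors through the AudioParam-based API (this assumes a browser that exposes orientationX/Y/Z on PannerNode and forwardX/Y/Z on the listener; check support before relying on it):

const ctx = new AudioContext();
const panner = new PannerNode(ctx, { panningModel: 'HRTF' });

// Point the source "east". Only the direction matters:
// (1, 0, 0) and (42, 0, 0) describe the same orientation.
panner.orientationX.value = 1;
panner.orientationY.value = 0;
panner.orientationZ.value = 0;

// The listener's forward vector; (0, 0, -1) is the default, "into the screen".
ctx.listener.forwardX.value = 0;
ctx.listener.forwardY.value = 0;
ctx.listener.forwardZ.value = -1;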

Related

How to make boids pursue a movable character in a 2d game

I was wondering what would be the simplest way to make my group of boids pursue my character in a top down 2d game with a path finding algorithm?
I'm thinking I'd have to first find a path to the player with the path finding, get the desired vector and then subtract that from the velocity to get the steering value.
I found "Pursuit and Evasion steering behaviors" on Craig Reynolds' website but I'm not sure if I'm heading in the right direction.
Note that there is a little more detail in a 1999 GDC paper. There are also many open source implementations, such as this (in retrospect, surprisingly complicated) one in OpenSteer.
An agent’s seek behavior simply assumes a desired velocity toward a static target, and defines a “steering force” which is the error/difference between that and its current velocity.
So a pursue behavior requires only to estimate a current target location. A rough approximation is usually good enough, since it is discarded after this animation frame / simulation step. Typically steering behaviors use constant velocity predictions, simply multiplying the current velocity by some time interval. A helpful estimate for that interval is the distance to the quarry agent, divided by our current speed.
How this interacts with (a) the pursuers being a flock, and (b) a simulation based on path finding, would depend on specific details of your game. Probably you want to blend a small pursuit “force” with the flocking behavior so it does not interfere with the nature of the flock.
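To make that concrete, here is a minimal sketch of predict-then-seek pursuit for a single boid. The field names (x, y, vx, vy) and the maxSpeed parameter are my assumptions, not Reynolds' code:

function pursue(boid, quarry, maxSpeed) {
  // Estimate how far ahead to predict: distance to quarry / our speed.
  const dist = Math.hypot(quarry.x - boid.x, quarry.y - boid.y);
  const speed = Math.hypot(boid.vx, boid.vy) || maxSpeed;
  const t = dist / speed;

  // Constant-velocity prediction of the quarry's future position.
  const target = { x: quarry.x + quarry.vx * t, y: quarry.y + quarry.vy * t };

  // Seek the predicted point: desired velocity at max speed...
  const dx = target.x - boid.x;
  const dy = target.y - boid.y;
  const len = Math.hypot(dx, dy) || 1;

  // ...and the steering force is desired velocity minus current velocity.
  return {
    x: (dx / len) * maxSpeed - boid.vx,
    y: (dy / len) * maxSpeed - boid.vy
  };
}

You would then scale this force down before adding it to the flocking forces, so the pursuit nudges the flock rather than overriding it.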

Find the closest coordinate from a set of coordinates

I have about 1000 set of geographical coordinates (lat, long).
Given one coordinate, I want to find the closest one from that set. My approach was to measure the distance to every point, but at hundreds of requests per second all that math can be rough on the server.
What is the best optimized solution for this?
Thanks
You will want to use a nearest-neighbor search algorithm.
You can use this library sphere-knn, or look at something like PostGIS.
Why not select the potentially closest points from the set first (e.g. set a threshold, say 0.1, and filter the set so that you keep only points within ±0.1 of your target point on both axes)? Then do the actual calculations on this reduced set.
If none are within the first range, just enlarge it (0.2) and repeat (0.3, 0.4...) until you've got a match. Obviously you would tune the threshold so it best matches your likely results.
(I'm assuming the time-consuming bit is the actual distance calculation, so the idea is to limit the number of calculations.)
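Here is a sketch of that expanding-box prefilter. The lat/lng field names are assumptions, and squared Euclidean distance stands in for the real geodesic math (good enough to rank nearby points, though note a point just outside the box can occasionally be closer than one inside):

function nearest(target, points) {
  for (let r = 0.1; r <= 180; r += 0.1) {
    // Cheap filter: keep only points within ±r of the target on both axes.
    const candidates = points.filter(p =>
      Math.abs(p.lat - target.lat) <= r &&
      Math.abs(p.lng - target.lng) <= r);
    if (candidates.length === 0) continue; // enlarge the box and retry

    // Do the actual distance calculation only on the survivors.
    let best = null, bestD = Infinity;
    for (const p of candidates) {
      const d = (p.lat - target.lat) ** 2 + (p.lng - target.lng) ** 2;
      if (d < bestD) { bestD = d; best = p; }
    }
    return best;
  }
  return null; // empty input set
}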
An Algorithmic Response
Your approach is already O(n) in time. It's algorithmically very fast, and fairly simple to implement.
If that is not enough, you should consider taking a look at R-trees. The idea behind R-trees is roughly paraphrased as follows:
You already have a set of n elements. You can preprocess this data to form rough 'squares' of regions each containing a set of points, with an established boundary.
Now say a new element comes in. Instead of comparing against every stored coordinate, you identify which 'square' it belongs to by comparing the point against the boundaries alone, and then measure the distance against only the points inside that square.
You can see at once the benefits:
You are no longer comparing against all coordinates, but instead only the boundaries (strictly less than the number of all elements) and then against the number of coordinates within your chosen boundary (also less than the number of all elements).
The worst case of such a search is still O(n) time, but the average case is closer to O(log n).
The main improvement is mostly in the pre-processing step (which is 'free' in that it's a one-time cost) and in the reduced number of comparisons needed.
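An R-tree proper is a fair amount of code, so as a rough stand-in for the same preprocess-then-search idea, here is a uniform-grid sketch (my simplification, not a real R-tree; a fuller version would also probe neighboring cells):

// One-time O(n) preprocessing: bucket points into fixed-size cells.
function buildGrid(points, cellSize) {
  const grid = new Map();
  for (const p of points) {
    const key = Math.floor(p.x / cellSize) + ':' + Math.floor(p.y / cellSize);
    if (!grid.has(key)) grid.set(key, []);
    grid.get(key).push(p);
  }
  return grid;
}

// Per query: find the cell by boundary comparison alone, then measure
// distances against only the points sharing that cell.
function candidatesFor(grid, cellSize, q) {
  const key = Math.floor(q.x / cellSize) + ':' + Math.floor(q.y / cellSize);
  return grid.get(key) || [];
}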
A Systemic Response
Just buy another server, and distribute the requests and the elements using a load balancer such as HAProxy.
Servers are fairly cheap, especially relative to something critical to your business, and if you want to be fast, it's an easy way to scale.

Looking for an algorithm to cluster 3d points, around 2d points

I'm trying to cluster photos (GPS + timestamp) around known GPS locations.
3d points = 2d + time stamp.
For example:
I walk along the road and take photos of lampposts, some are interesting and so I take 10 photos and others are not so I don't take any.
I'd like to cluster my photos around the lampposts, allowing me to see which lamppost was being photographed.
I've been looking at something like k-means clustering, but I wanted something more intelligent than just snapping the photos to the nearest lamppost.
(I'm going to write the code in javascript for a client-side app handling about (2000, 500) points at a time.)
KMeans Clustering is indeed a popular and easy to implement algorithm, but it has a couple of problems.
You need to feed it the number of clusters N as an input variable. Now, since I assume you don't know how many "things" you want to photograph, finding the right N is a problem in itself. Using iterative KMeans or similar variations only shifts the problem to finding a proper evaluation function for multi-cluster partitions, which is in no way easier than finding N itself.
It can only detect linearly separable shapes. Let's say you are walking around Versailles, and you take a lot of pictures of the external walls. Then you move inside and take pictures of the inner garden. The two shapes you obtain are a torus with a disk inside it, but KMeans can't distinguish them.
Personally, I'd go with some sort of density-based clustering: you'll still have to feed the algorithm some parameters, but, since we assume the space is Euclidean, finding them shouldn't take too much effort. Plus, it gives you the ability to distinguish noise points from cluster points, and lets you treat them differently.
Furthermore, it can distinguish most shapes, and you don't need to give the number of clusters beforehand.
Density based clustering, such as DBSCAN, definitely is the way to go.
The two parameters of DBSCAN should be quite obvious to set:
epsilon: this is the radius for clustering, so e.g. you could use 10 meters, assuming that there are no lampposts closer than 10 meters. (You should be using Geodetic distance, not Euclidean!)
minPts: essentially the minimum size of a cluster. You can use 1 or 2, even.
distance: this parameter is implicit, but probably more important. You can use a combination of space and time here. E.g. 10 meters spatially, and 1 year in the time domain. See Generalized DBSCAN for the more flexible version, which makes it obvious how to use multiple domains.
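For reference, here is a compact, unoptimized DBSCAN sketch. It uses plain Euclidean distance on {x, y} points for brevity, where this answer would use a geodetic-plus-time metric; the O(n²) neighbor queries are fine at ~2000 points:

function dbscan(points, eps, minPts) {
  const labels = new Array(points.length); // undefined = unvisited, -1 = noise
  let cluster = 0;

  const neighbors = i => {
    const out = [];
    for (let j = 0; j < points.length; j++) {
      if (Math.hypot(points[i].x - points[j].x,
                     points[i].y - points[j].y) <= eps) out.push(j);
    }
    return out;
  };

  for (let i = 0; i < points.length; i++) {
    if (labels[i] !== undefined) continue;
    const seeds = neighbors(i);
    if (seeds.length < minPts) { labels[i] = -1; continue; } // noise, for now

    labels[i] = cluster;
    for (let k = 0; k < seeds.length; k++) { // expand the cluster
      const j = seeds[k];
      if (labels[j] === -1) labels[j] = cluster; // noise becomes a border point
      if (labels[j] !== undefined) continue;
      labels[j] = cluster;
      const more = neighbors(j);
      if (more.length >= minPts) seeds.push(...more); // j is a core point too
    }
    cluster++;
  }
  return labels;
}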
You can use a Delaunay triangulation to look for the nearest points. It gives you a nearest-neighbor graph in which neighboring points are connected by Delaunay edges. Or you can cluster by color like in a photo mosaic; that uses an antipole tree. Here is a similar answer: Algorithm to find for all points in set A the nearest neighbor in set B

javascript 'deviceorientation' event - what sensors does it measure?

If I have a simple web page and script that looks like this:
<body>
<div id="alpha">a</div>
<div id="beta">b</div>
<div id="gamma">g</div>
</body>
<script>
window.addEventListener('deviceorientation', function(event) {
var alpha = event.alpha;
var beta = event.beta;
var gamma = event.gamma;
document.getElementById("alpha").innerHTML = alpha;
document.getElementById("beta").innerHTML = beta;
document.getElementById("gamma").innerHTML = gamma;
}, false);
</script>
I can open it up in mobile Firefox for Android and it will output 3 numbers that look like the following:
89.256125
3.109375
0.28125
When I rotate the device, the numbers change based on the axis of rotation. I noticed the values for "alpha" are really noisy - they bounce around non-stop even if the phone is at rest on my desk, while the other two remain steady. I understand that alpha is my heading. I'm curious then, is it getting the "alpha" value from the compass (which has noise issues) and the other two from the gyroscope?
Another issue is when I change the pitch, for some reason the heading changes too, even if I don't actually change the heading. I'm just curious why this is and how it can be corrected?
Also, since the gyroscope measures angular velocity, I presume this event listener is integrating it automatically - is the integration algorithm as good as any? Does it use the accelerometer to correct the drift?
In this google tech talk video, from 15:00 to 19:00, the speaker talks about correcting the drift inherent in the gyroscope by using the accelerometer, as well as calibrating the orientation with respect to gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k
How would I go about doing this?
Thanks for any insights anyone may have.
All the orientation values are also very noisy for me. Shaky hand, Euler angles, magnetic interference, manufacturing bug, ... who knows?
I applied a small exponential smoothing. That is, I replaced the fluctuating event.alpha by a smoothed value, which was conveniently called alpha:
alpha = event.alpha + s*(alpha - event.alpha), with 0 <= s <= 1;
In other words, each time a new observation is received, the smoothed value is updated with a correction proportional to the error.
If s=0, the smoothed is exactly the observed value and there is no smoothing.
If s=1, alpha remains constant, which is rather too effective a smoothing.
Otherwise alpha is somewhere in between the observed and the smoothed value. In fact, it is a (weighted) average between the last observation and history. It thus follows changes in values with a certain damping effect.
If s is small, then the process is near the last observation and adapts quickly to recent changes (and also to random fluctuations). The damping is small.
If s is near 1, the process is more viscous. It reacts lazily to random fluctuation (and also to change in the central tendency). The damping is large.
Do not try s outside the 0..1 range, as this makes the update loop unstable and alpha soon starts to diverge with larger and larger fluctuations.
I used s=0.25, after testing that there was no significant difference for s between 0.1 and 0.3.
Important: when using this method, do not forget to initialize alpha outside the addEventListener callback:
var alpha = 0; // your initial guesstimate
Note that this simple adaptive smoothing works in many other cases, and is really simple programming.
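Putting it together, a sketch of the handler with the smoothing wired in (s = 0.25 and the initial guess of 0 are the values discussed above):

var s = 0.25;
var alpha = 0; // initialized outside the listener, as noted above

window.addEventListener('deviceorientation', function(event) {
  // New smoothed value = observation corrected toward history.
  alpha = event.alpha + s * (alpha - event.alpha);
  document.getElementById("alpha").innerHTML = alpha;
}, false);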
The device orientation is obtained by sensor fusion. Strictly speaking, none of the sensors measures it. The orientation is the result of merging the accelerometer, gyro and magnetometer data in a smart way.
I noticed the values for "alpha" are really noisy - they bounce around non-stop even if the phone is at rest on my desk, while the other two remain steady.
This is a common problem with Euler angles; try to avoid them if you can.
By the way, the Sensor Fusion on Android Devices: A Revolution in Motion Processing video you link to explains it at 38:25.
Also, since the gyroscope measures angular velocity, I presume this event listener is integrating it automatically - is the integration algorithm as good as any? Does it use the accelerometer to correct the drift?
Yes, the gyro drift is corrected with the help of the accelerometer (and magnetometer, if any) readings. This is called sensor fusion.
In this google tech talk video, from 15:00 to 19:00, the speaker talks about correcting the drift inherent in the gyroscope by using the accelerometer, as well as calibrating the orientation with respect to gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k How would I go about doing this?
If you have orientation then somebody already did all this for you. You don't have to do anything.
Use a direction cosine matrix or a Kalman filter. You can feed it the accelerometer, the gyroscope, or a combination of both. The drift can be estimated with a bit of machine learning. I think motion fusion is part of Texas Instruments' calibration package; I could be wrong, but it's not hard to check. Multiply 3 rotation matrices and you're grand: http://www.itu.dk/stud/speciale/segmentering/Matlab6p5/help/toolbox/aeroblks/euleranglestodirectioncosinematrix.html

What is the most processing efficient way to store mouse movement data in JavaScript?

I'm trying to record exactly where the mouse moves on a web page (to the pixel). I have the following code, but there are gaps in the resulting data.
var mouse = []; // one "x,y" string per mousemove event
$("html").mousemove(function(e){
    mouse.push(e.pageX + "," + e.pageY);
});
But, when I look at the data that is recorded, this is an example of what I see.
76,2 //start x,y
78,5 //moved right two pixels, down three pixels
78,8 //moved down three pixels
This would preferably look more like:
76,2 //start x,y
77,3 //missing
78,4 //missing
78,5 //moved right two pixels, down three pixels
78,6 //missing
78,7 //missing
78,8 //moved down three pixels
Is there a better way to store pixel by pixel mouse movement data? Are my goals too unrealistic for a web page?
You can only save that information as fast as it's given to you. The mousemove event fires at a rate determined by the browser, usually topping out at 60 Hz. Since you can certainly move your pointer faster than 60 px/second, you won't be able to fill in the blanks unless you do some kind of interpolation.
That sounds like a good idea to me: imagine the hassle (and performance drag) of having 1920 mousemove events firing at once when you jump the mouse to the other side of the screen. And I don't even think the mouse itself polls fast enough; gaming mice don't go beyond 1000 Hz.
See a live framerate test here: http://jsbin.com/ucevar/
On the interpolation, see this question, which implements Bresenham's line algorithm; you can use it to find the missing points. This is a hard problem: the PenUltimate app for the iPad implements some amazing interpolation that makes line drawings look completely natural and fluid, but there is nothing about it on the web.
As for storing the data, just push an array of [x, y] instead of a string. Also note that a slow event handler will further reduce the sampling rate, since events are dropped when the handler falls behind.
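A hypothetical sketch combining both suggestions: store [x, y] pairs, and run Bresenham's line algorithm between consecutive samples to synthesize the skipped pixels (all names here are mine):

var mouse = [];
var last = null;

// Push every integer point on the line from (x0, y0) to (x1, y1), inclusive.
function bresenham(x0, y0, x1, y1, out) {
  var dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
  var dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
  var err = dx + dy;
  while (true) {
    out.push([x0, y0]);
    if (x0 === x1 && y0 === y1) break;
    var e2 = 2 * err;
    if (e2 >= dy) { err += dy; x0 += sx; }
    if (e2 <= dx) { err += dx; y0 += sy; }
  }
}

$("html").mousemove(function(e) {
  if (last) {
    bresenham(last[0], last[1], e.pageX, e.pageY, mouse);
    mouse.pop(); // drop the endpoint; the next segment will re-add it
  }
  last = [e.pageX, e.pageY];
});
// Push `last` once tracking ends so the final point isn't lost.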
The mouse doesn't exist at every pixel when you move it. During each update cycle it actually jumps from point to point; to the eye it looks like it hits every point in between, when in fact it just skips around willy-nilly.
I'd recommend just storing the points where the mouse move event was registered. Each interval between two points creates a line, which can be used for whatever it is you need it for.
And, as far as processing efficiency goes...
Processing efficiency is going to depend on a number of factors. What browser is being used, how much memory the computer has, how well the code is optimized for the data-structure, etc.
Rather than prematurely optimize, write the program and then benchmark the slow parts to find out where your bottlenecks are.
1. I'd probably create a custom Point object with a bunch of functions on the prototype and see how it performs.
2. If that bogs down too much, I'd switch to using object literals with x and y set.
3. If that bogs down, I'd switch to using two arrays, one for x and one for y, and make sure to always set x and y values together.
4. If that bogs down, I'd try something new.
5. goto 4
Is there a better way to store pixel by pixel mouse movement data?
What are your criteria for "better"?
Are my goals too unrealistic for a web page?
If your goal is to store a new point each time the cursor enters a new pixel, yes. Also note that browser pixels don't necessarily map 1:1 to screen pixels, especially in the case of CRT monitors where they almost certainly don't.
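If it helps, you can inspect that ratio from script; on a high-DPI display or a zoomed page it will not be 1:

// 1 CSS pixel maps to devicePixelRatio device pixels.
console.log(window.devicePixelRatio);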
