JavaScript HTML5 canvas collision detection

I'm working on creating an air hockey-like game using HTML5 canvas and JavaScript. I've gotten pretty far, but detecting the collision of the mallet and the ball has me stumped. I've tried using both the distance between the two circle centers and the squared distance (to conserve CPU by bypassing the square root). I can't figure out why the collision is not being detected.
Here's what I have: http://austin.99k.org/z_Archive/Air_Hockey/
Please take a look and help me figure it out. The source files are somewhat commented.

Your hit function is wrong. You should simply compute the distance between the two points (which you do correctly), and compare it to the minimum separation between the mallet and the ball (the sum of their radii).
For example,
return distance_squared < radii_squared
You're actually (effectively) doing:
return -COLLIDEDISTANCE < radii_squared - distance_squared && radii_squared - distance_squared < COLLIDEDISTANCE
That condition only registers a hit when the distance is within COLLIDEDISTANCE (2) units of the edge, and the numbers I saw running through hit() implied you're at a scale factor that makes a single unit less than one pixel.
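For illustration (a minimal sketch, not the asker's actual code), a squared-distance hit test could look like this; the ball, mallet, x, y, and radius names are assumptions about the object shapes:

// Returns true when the two circles overlap or touch.
// Works entirely with squared distances, so no Math.sqrt is needed.
function hit(ball, mallet) {
  var dx = ball.x - mallet.x;
  var dy = ball.y - mallet.y;
  var distanceSquared = dx * dx + dy * dy;
  var minDistance = ball.radius + mallet.radius;
  return distanceSquared <= minDistance * minDistance;
}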

Finding the correct orientation for the normal

I am trying to implement a collision system for my three.js racing game. I am following this guide to implement the system, which calculates the final linear and angular velocities following a collision between two cars.
https://www.myphysicslab.com/engine2D/collision-en.html#resting_contact
However, I have trouble when it comes to finding the correct direction for the normal. According to the link: "Let the vector n be normal (perpendicular) to the edge of body B that is being impacted, and pointing outward from body B." I am using the following method for finding this normal.
How do I calculate the normal vector of a line segment?
Finding the numerical value of the normal is easy, but I have trouble making my program use the correct direction. For instance, I want the blue normal and not the red one.
Here is a picture that explains what I mean more clearly, I hope.
No formula can guess what side of the surface you are interested in, so it is up to you to provide this hint.
You can select the right orientation by using one of Rap x Rbp or Rbp x Rap, but it is up to you to choose depending on the orientation conventions used in your model. (With the little information provided, I can't tell you more.)
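For what it's worth, one common way to provide that hint is to flip the candidate normal whenever it points back toward the impacted body's center. This is a sketch under assumptions (p1 and p2 are the edge endpoints, centerB is body B's center, all hypothetical {x, y} objects), not necessarily equivalent to the Rap x Rbp choice above:

// Normal of the segment (p1, p2), oriented to point away from body B's center.
function outwardNormal(p1, p2, centerB) {
  // Perpendicular to the edge direction (two candidates: this one and its negation).
  var nx = -(p2.y - p1.y);
  var ny = p2.x - p1.x;
  // Vector from B's center to the edge midpoint.
  var mx = (p1.x + p2.x) / 2 - centerB.x;
  var my = (p1.y + p2.y) / 2 - centerB.y;
  // If the candidate points back toward the center, flip it.
  if (nx * mx + ny * my < 0) {
    nx = -nx;
    ny = -ny;
  }
  var len = Math.sqrt(nx * nx + ny * ny);
  return { x: nx / len, y: ny / len };
}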

javascript 'deviceorientation' event - what sensors does it measure?

If I have a simple web page and script that looks like this:
<body>
  <div id="alpha">a</div>
  <div id="beta">b</div>
  <div id="gamma">g</div>
</body>
<script>
window.addEventListener('deviceorientation', function(event) {
  var alpha = event.alpha;
  var beta = event.beta;
  var gamma = event.gamma;
  document.getElementById("alpha").innerHTML = alpha;
  document.getElementById("beta").innerHTML = beta;
  document.getElementById("gamma").innerHTML = gamma;
}, false);
</script>
I can open it up in mobile Firefox for Android and it will output 3 numbers that look like the following:
89.256125
3.109375
0.28125
When I rotate the device, the numbers change based on the axis of rotation. I noticed the values for "alpha" are really noisy - they bounce around non-stop even if the phone is at rest on my desk, while the other two remain steady. I understand that alpha is my heading. I'm curious then: is it getting the "alpha" value from the compass (which has noise issues) and the other two from the gyroscope?
Another issue is that when I change the pitch, the heading changes too for some reason, even if I don't actually change the heading. I'm just curious why this is and how it can be corrected.
Also, since the gyroscope measures angular velocity, I presume this event listener is integrating it automatically - is the integration algorithm as good as any? Does it use the accelerometer to correct the drift?
In this Google tech talk video, from 15:00 to 19:00, the speaker talks about correcting the drift inherent in the gyroscope by using the accelerometer, as well as calibrating the orientation with respect to gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k
How would I go about doing this?
Thanks for any insights anyone may have.
All the orientation values are also very noisy for me. Shaky hand, Euler angles, magnetic interference, manufacturing bug, ... who knows?
I applied a small exponential smoothing. That is, I replaced the fluctuating event.alpha by a smoothed value, conveniently called alpha:
alpha = event.alpha + s*(alpha - event.alpha), with 0 <= s <= 1;
In other words, each time a new observation is received, the smoothed value is updated with a correction proportional to the error.
If s=0, the smoothed value is exactly the observed value and there is no smoothing.
If s=1, alpha remains constant, which is indeed too much smoothing.
Otherwise alpha is somewhere in between the observed and the smoothed value. In fact, it is a (weighted) average of the last observation and the history. It thus follows changes in values with a certain damping effect.
If s is small, the process stays near the last observation and adapts quickly to recent changes (and also to random fluctuations). The damping is small.
If s is near 1, the process is more viscous. It reacts lazily to random fluctuations (and also to changes in the central tendency). The damping is large.
Do not try s outside the 0..1 range, as this leads to an unstable feedback loop and alpha soon starts to diverge with larger and larger fluctuations.
I used s=0.25, after testing that there was no significant difference for s between 0.1 and 0.3.
Important: when using this method, do not forget to initialize alpha outside the addEventListener function:
var alpha = 0; // initial guesstimate
Note that this simple adaptive smoothing works in many other cases, and is really simple programming.
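Putting the pieces together, a minimal sketch of the smoothed handler (the s=0.25 value and the "alpha" element come from the snippets above; beta and gamma would be treated the same way):

var s = 0.25;  // smoothing factor, 0 <= s <= 1
var alpha = 0; // initial guesstimate, refined as events arrive

window.addEventListener('deviceorientation', function(event) {
  // Exponential smoothing: correct the running value by a fraction of the error.
  alpha = event.alpha + s * (alpha - event.alpha);
  document.getElementById("alpha").innerHTML = alpha;
}, false);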
The device orientation is obtained by sensor fusion. Strictly speaking, none of the sensors measures it. The orientation is the result of merging the accelerometer, gyro and magnetometer data in a smart way.
I noticed the values for "alpha" are really noisy - they bounce around non-stop even if the phone is at rest on my desk, while the other two remain steady.
This is a common problem with Euler angles; try to avoid them if you can.
By the way, the Sensor Fusion on Android Devices: A Revolution in Motion Processing video you link to explains it at 38:25.
Also, since the gyroscope measures angular velocity, I presume this event listener is integrating it automatically - is the integration algorithm as good as any? Does it use the accelerometer to correct the drift?
Yes, the gyro drift is corrected with the help of the accelerometer (and magnetometer, if any) readings. This is called sensor fusion.
In this google tech talk video, from 15:00 to 19:00, the speaker talks about correcting the drift inherent in the gyroscope by using the accelerometer, as well as calibrating the orientation with respect to gravity: http://www.youtube.com/watch?v=C7JQ7Rpwn2k How would I go about doing this?
If you have orientation then somebody already did all this for you. You don't have to do anything.
Use a direction cosine matrix or a Kalman filter. You can use the accelerometer to plot it, or the gyroscope, or a combination of both. The drift can be calculated with a bit of machine learning. I think motion fusion is part of the Texas Instruments calibration package; I could be wrong, but it's not hard to check. Multiply 3 rotation matrices and it'll be grand: http://www.itu.dk/stud/speciale/segmentering/Matlab6p5/help/toolbox/aeroblks/euleranglestodirectioncosinematrix.html

HTML5 Canvas Collision Detection "globalCompositeOperation" performance

Morning,
Over the past few months I have been tinkering with the HTML5 Canvas API and have had quite a lot of fun doing so.
I've gradually created a number of small games, purely for teaching myself the do's and don'ts of game development. I am at a point where I am able to carry out basic collision detection, i.e. collisions between circles and platforms (fairly simple for most out there, but it felt like quite an achievement when I first got it working, and even better when I understood what was actually going on). I know pixel collision detection is not for every game, purely because in many scenarios you can achieve good-enough results using the methods discussed above, and this method is obviously quite expensive on resources.
But I just had a brainwave (it is more than likely somebody else has thought of this and I am way behind the field, but I've googled it and found nothing)... so here goes...
Would it be possible to use/harness the "globalCompositeOperation" feature of canvas? My initial thoughts were to set its method to "xor" and then check all the pixels on the canvas for transparency; if such a pixel is found, there must be a collision. Right? Obviously at this point you need to work out which objects the pixel in question is occupied by, and how to react, but you would have to do that with other techniques too.
That said, is the canvas already doing this collision detection behind the scenes in order to work out when shapes are overlapping? Would it be possible to extend upon this?
Any ideas?
Gary
The canvas doesn't do this automatically (probably because it is still in its infancy). easeljs takes this approach for mouse enter/leave events, and it is extremely inefficient. I am using an algorithmic approach to determining bounds. I then use that to see if the mouse is inside or outside of the shape. In theory, to implement hit detection this way, all you have to do is take all the points of both shapes and see if they are ever in the other shape. If you want to see some of my code, just let me know.
However, I will say that, although your way is very inefficient, it is globally applicable to any shape.
I made a demo on codepen which does the collision detection using an off screen canvas with globalCompositeOperation set to xor as you mentioned. The code is short and simple, and should have OK performance with smallish "collision canvases".
http://codepen.io/sakri/pen/nIiBq
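The codepen itself isn't reproduced here, but the idea can be sketched roughly as follows. This is an assumed implementation, not the demo's actual code: drawA and drawB are hypothetical callbacks that render your two shapes, and it uses 'source-in' rather than 'xor' so that only the overlap region survives, which makes the pixel scan trivial (with 'xor' you would instead compare opaque-pixel counts before and after):

// Small off-screen canvas reserved for collision tests.
var hitCanvas = document.createElement('canvas');
hitCanvas.width = 64;
hitCanvas.height = 64;
var hitCtx = hitCanvas.getContext('2d');

function pixelCollision(drawA, drawB) {
  hitCtx.clearRect(0, 0, hitCanvas.width, hitCanvas.height);
  hitCtx.globalCompositeOperation = 'source-over';
  drawA(hitCtx);
  // 'source-in' keeps only the pixels where the new drawing overlaps the old one.
  hitCtx.globalCompositeOperation = 'source-in';
  drawB(hitCtx);
  var data = hitCtx.getImageData(0, 0, hitCanvas.width, hitCanvas.height).data;
  // Any remaining non-transparent pixel means the shapes overlapped.
  for (var i = 3; i < data.length; i += 4) {
    if (data[i] > 0) return true;
  }
  return false;
}

Keeping the hit canvas small (64x64 here) is what keeps the getImageData call affordable.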
If you are using xor mode fullscreen, the second step is to getImageData of the screen, which is a high-cost step, and the next step is to find out which objects were involved in the collision.
No need to benchmark: it will be too slow.
I'd suggest you rather use the 'classical' bounding box test, then a test on the inner bounding boxes of the objects, and only after that go for pixels, locally.
By inner bounding box, I mean a rectangle that is guaranteed to lie completely inside your object (the reddish part in the original example image).
So use this mixed strategy (a sketch of the first two steps follows below):
- Do a test on the bounding boxes of your objects.
- If there's a collision between 2 bounding boxes, perform an inner bounding box test: we are sure there's a collision if the sprites' inner boxes overlap.
- Then keep the pixel-perfect test only for the really problematic cases, and you only need to draw both sprites on a temporary canvas the size of the bigger sprite. You'll be able to perform a much, much faster getImageData. At this step, you know which objects are involved in the collision.
Notice that you can draw the sprites at a scale, on a smaller canvas, to get a faster getImageData at the cost of a lower resolution.
Be sure to disable smoothing; I think an 8x8 canvas should already be enough (it depends on average sprite speed, in fact - if your sprites are slow, increase the resolution).
That way the data is 8 x 8 x 4 = 256 bytes big and you can keep a good frame rate.
Note also that, when deciding how you compute the inner bounding box, you can allow a given number of empty pixels to get into it, trading accuracy for speed.
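As a rough sketch of the first two steps (the field names x, y, width, height, innerX, innerY, innerW, innerH are assumptions about how your sprites are stored):

// Axis-aligned rectangle overlap test.
function rectsOverlap(ax, ay, aw, ah, bx, by, bw, bh) {
  return ax < bx + bw && bx < ax + aw && ay < by + bh && by < ay + ah;
}

function quickCollisionTest(a, b) {
  // 1. Outer bounding boxes don't touch: definitely no collision.
  if (!rectsOverlap(a.x, a.y, a.width, a.height, b.x, b.y, b.width, b.height)) {
    return 'none';
  }
  // 2. Inner boxes overlap: definitely a collision, no pixel test needed.
  if (rectsOverlap(a.innerX, a.innerY, a.innerW, a.innerH,
                   b.innerX, b.innerY, b.innerW, b.innerH)) {
    return 'hit';
  }
  // 3. Ambiguous: fall back to the pixel-perfect test on a small canvas.
  return 'maybe';
}

Only a 'maybe' result needs the expensive temporary-canvas pixel test described above.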

What is the most processing efficient way to store mouse movement data in JavaScript?

I'm trying to record exactly where the mouse moves on a web page (to the pixel). I have the following code, but there are gaps in the resulting data.
var mouse = [];
$("html").mousemove(function(e) {
  mouse.push(e.pageX + "," + e.pageY);
});
But, when I look at the data that is recorded, this is an example of what I see.
76,2 //start x,y
78,5 //moved right two pixels, down three pixels
78,8 //moved down three pixels
This would preferably look more like:
76,2 //start x,y
77,3 //missing
78,4 //missing
78,5 //moved right two pixels, down three pixels
78,6 //missing
78,7 //missing
78,8 //moved down three pixels
Is there a better way to store pixel by pixel mouse movement data? Are my goals too unrealistic for a web page?
You can only save that information as fast as it's given to you. The mousemove event fires at a rate that is determined by the browser, usually topping out at 60 Hz. Since you can certainly move your pointer faster than 60 pixels per second, you won't be able to fill in the blanks unless you do some kind of interpolation.
That sounds like a good idea to me - imagine the hassle (and performance drag) of having 1920 mousemove events firing at once when you jump the mouse to the other side of the screen. And I don't even think the mouse itself polls fast enough; gaming mice don't go beyond 1000 Hz.
See a live framerate test here: http://jsbin.com/ucevar/
On the interpolation, see this question that implements Bresenham's line algorithm, which you can use to find the missing points. This is a hard problem - the PenUltimate app for the iPad implements some amazing interpolation that makes line drawings look completely natural and fluid, but there is nothing about it on the web.
As for storing the data, just push an array of [x,y] instead of a string. A slow event handler function will also slow down the refresh rate, since events will be dropped when left behind.
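A minimal sketch of that interpolation idea, using the classic integer-only form of Bresenham's algorithm to fill in the points between two consecutive mousemove samples:

// Returns every integer point on the line from (x0,y0) to (x1,y1), inclusive.
function bresenham(x0, y0, x1, y1) {
  var points = [];
  var dx = Math.abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
  var dy = -Math.abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
  var err = dx + dy;
  while (true) {
    points.push([x0, y0]);
    if (x0 === x1 && y0 === y1) break;
    var e2 = 2 * err;
    if (e2 >= dy) { err += dy; x0 += sx; }
    if (e2 <= dx) { err += dx; y0 += sy; }
  }
  return points;
}

On each mousemove you would append bresenham(prevX, prevY, e.pageX, e.pageY) to the log (dropping the duplicated first point), where prevX/prevY hold the previous sample.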
The mouse doesn't exist at every pixel when you move it. During the update cycle, it actually jumps from point to point; to the eye this looks smooth, as if it hit every point in between, when in fact it just skips around.
I'd recommend just storing the points where the mouse move event was registered. Each interval between two points creates a line, which can be used for whatever it is you need it for.
And, as far as processing efficiency goes...
Processing efficiency is going to depend on a number of factors. What browser is being used, how much memory the computer has, how well the code is optimized for the data-structure, etc.
Rather than prematurely optimize, write the program and then benchmark the slow parts to find out where your bottlenecks are.
1. I'd probably create a custom Point object with a bunch of functions on the prototype and see how it performs.
2. If that bogs down too much, I'd switch to using object literals with x and y set.
3. If that bogs down, I'd switch to using two arrays, one for x and one for y, and make sure to always set x and y values together.
4. If that bogs down, I'd try something new.
5. goto 4
Is there a better way to store pixel by pixel mouse movement data?
What are your criteria for "better"?
Are my goals too unrealistic for a web page?
If your goal is to store a new point each time the cursor enters a new pixel, yes. Also note that browser pixels don't necessarily map 1:1 to screen pixels, especially in the case of CRT monitors where they almost certainly don't.

Rotating HTML5 canvas slow?

I'm experimenting with rotation on canvas; I have it set up so each object has its own rotation. Without rotation I can get around 400 objects on screen on a very low-end computer, and nearly 2000 on a normally stocked PC. When I factor in a rotation other than 0, the performance drops by at least a third!
Why is just changing the rotation slowing it down so much? Is this one of canvas's weird hiccups?
I have a global rotation variable and at the beginning of drawing each object I:
ctx.rotate(globRot);
For individual objects, cache the rotations. Some of my findings:
Realtime rotation demo
Cached rotations demo (note: move up using the arrow keys to find the zombies)
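A sketch of what "cache the rotations" can mean in practice (the 32-step resolution and the sprite image are assumptions, not taken from the demos): pre-render each rotation once into an off-screen canvas, then blit the nearest pre-rendered frame instead of calling ctx.rotate() every frame.

// Pre-render `steps` rotated copies of an image into off-screen canvases.
function buildRotationCache(image, steps) {
  var cache = [];
  // Each frame is sized to the image's diagonal so no rotation gets clipped.
  var size = Math.ceil(Math.sqrt(image.width * image.width +
                                 image.height * image.height));
  for (var i = 0; i < steps; i++) {
    var c = document.createElement('canvas');
    c.width = size;
    c.height = size;
    var cctx = c.getContext('2d');
    cctx.translate(size / 2, size / 2);
    cctx.rotate(i * 2 * Math.PI / steps);
    cctx.drawImage(image, -image.width / 2, -image.height / 2);
    cache.push(c);
  }
  return cache;
}

// At draw time, pick the nearest cached frame instead of rotating live.
function drawCached(ctx, cache, x, y, angle) {
  var steps = cache.length;
  var i = Math.round(angle / (2 * Math.PI) * steps) % steps;
  if (i < 0) i += steps;
  var frame = cache[i];
  ctx.drawImage(frame, x - frame.width / 2, y - frame.height / 2);
}

For example, buildRotationCache(zombieImage, 32) trades memory for speed: drawing becomes a plain drawImage call, at the cost of rotation being quantized to 32 angles.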
I guess a lot of time might be spent actually creating and multiplying the matrix for the transformation. If you can (find a way to) cache the transformation when it's not changing, that might help. Maybe.
