Morning,
Over the past few months I have been tinkering with the HTML5 Canvas API and have had quite a lot of fun doing so.
I've gradually created a number of small games, purely to teach myself the do's and don'ts of game development. I'm at a point where I can carry out basic collision detection, e.g. collisions between circles and platforms (fairly simple for most out there, but it felt like quite an achievement when I first got it working, and even better once I understood what was actually going on). I know pixel collision detection is not for every game, purely because in many scenarios you can achieve good enough results with the methods above, and checking pixels is obviously quite expensive on resources.
But I just had a brainwave (It is more than likely somebody else has thought of this and I am way down the field but I've googled it and found nothing)....so here goes....
Would it be possible to use/harness the "globalCompositeOperation" feature of canvas? My initial thought was to set it to "xor" and then check all the pixels on the canvas for transparency; if such a pixel is found, there must be a collision. Right? Obviously at that point you still need to work out which objects occupy the pixel in question and how to react, but you would have to do that with other techniques too.
That said, is the canvas already doing this kind of collision detection behind the scenes in order to work out when shapes are overlapping? Would it be possible to build on that?
Any ideas?
Gary
The canvas doesn't do this automatically (probably because it's still in its infancy). EaselJS takes this approach for mouse enter/leave events, and it is extremely inefficient. I am using an algorithmic approach to determine bounds, and I then use that to see whether the mouse is inside or outside the shape. In theory, to implement hit detection this way, all you have to do is take all the points of each shape and see whether any of them are ever inside the other shape. If you want to see some of my code, just let me know.
However, I will say that, although your way is very inefficient, it is globally applicable to any shape.
I made a demo on codepen which does the collision detection using an off screen canvas with globalCompositeOperation set to xor as you mentioned. The code is short and simple, and should have OK performance with smallish "collision canvases".
http://codepen.io/sakri/pen/nIiBq
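For reference, here's a rough sketch of that kind of off-screen pixel test (not the exact code from the pen; I've used 'source-in' rather than 'xor' here, because it leaves only the overlapping pixels on the canvas, so one scan of the alpha channel is enough; drawA and drawB are placeholders for whatever draws your two sprites):

    // Rough sketch: pixel-level overlap test on a small off-screen canvas.
    // 'source-in' keeps only the pixels where the second drawing overlaps the first,
    // so any remaining non-transparent pixel means a collision.
    function pixelsOverlap(drawA, drawB, size) {
      var c = document.createElement('canvas');
      c.width = c.height = size || 32;            // keep it small: getImageData cost scales with area
      var ctx = c.getContext('2d');

      drawA(ctx);                                 // sprite A, normal compositing
      ctx.globalCompositeOperation = 'source-in';
      drawB(ctx);                                 // only pixels covered by both A and B survive

      var data = ctx.getImageData(0, 0, c.width, c.height).data;
      for (var i = 3; i < data.length; i += 4) {  // walk the alpha channel
        if (data[i] > 0) return true;             // any opaque pixel => collision
      }
      return false;
    }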
If you are using xor mode full-screen, the second step is a getImageData of the whole screen, which is a high-cost step, and the next step is to find out which objects were involved in the collision.
No need to benchmark: it will be too slow.
I'd suggest you rather use the 'classical' bounding box test first, then a test on the inner bounding boxes of the objects, and only after that go for pixels, locally.
By inner bounding box, I mean a rectangle that you're sure lies completely inside your object, like the reddish part in this example:
So use this mixed strategy:
- do a test on the bounding boxes of your objects.
- if there's a collision between two bounding boxes, perform an inner bounding box test: we are sure there's a collision if the sprites' inner bounding boxes overlap.
- then you keep the pixel-perfect test only for the really problematic cases, and you only need to draw both sprites on a temporary canvas that has the size of the bigger sprite. You'll be able to perform a much, much faster getImageData. At this step, you know which objects are involved in the collision.
Notice that you can draw the sprites with a scale, on a smaller canvas, to get a faster getImageData at the cost of a lower resolution.
Be sure to disable smoothing, and I think an 8x8 canvas should already be enough (it depends on average sprite speed, in fact; if your sprites are slow, increase the resolution).
That way the data is 8 x 8 x 4 = 256 bytes and you can keep a good frame rate.
Note also that, when deciding how you compute the inner bounding box, you can allow a given number of empty pixels into it, trading accuracy for speed.
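To make the strategy concrete, here's a rough sketch of the three phases. The sprite fields (x, y, width, height, an innerBox given in sprite-local coordinates, and an image) are assumptions for the example; adapt them to however your objects actually look:

    // Phase 1 + 2 helper: axis-aligned rectangle overlap.
    function boxesOverlap(ax, ay, aw, ah, bx, by, bw, bh) {
      return ax < bx + bw && bx < ax + aw && ay < by + bh && by < ay + ah;
    }

    // Tiny reusable canvas for the pixel phase (8x8 as discussed above).
    var testCanvas = document.createElement('canvas');
    testCanvas.width = testCanvas.height = 8;
    var testCtx = testCanvas.getContext('2d');
    testCtx.imageSmoothingEnabled = false;        // disable smoothing, as suggested above

    function spritesCollide(a, b) {
      // 1) cheap bounding-box rejection
      if (!boxesOverlap(a.x, a.y, a.width, a.height, b.x, b.y, b.width, b.height)) return false;
      // 2) inner bounding boxes: if these overlap, we are certain there is a collision
      if (boxesOverlap(a.x + a.innerBox.x, a.y + a.innerBox.y, a.innerBox.width, a.innerBox.height,
                       b.x + b.innerBox.x, b.y + b.innerBox.y, b.innerBox.width, b.innerBox.height)) return true;
      // 3) ambiguous case: draw both sprites, scaled down onto the tiny canvas,
      //    keeping only the overlap region of their bounding boxes
      var ox = Math.max(a.x, b.x), oy = Math.max(a.y, b.y);
      var ow = Math.min(a.x + a.width, b.x + b.width) - ox;
      var oh = Math.min(a.y + a.height, b.y + b.height) - oy;
      var s = testCanvas.width;
      testCtx.clearRect(0, 0, s, s);
      testCtx.globalCompositeOperation = 'source-over';
      testCtx.save();
      testCtx.scale(s / ow, s / oh);              // map the overlap rectangle onto the 8x8 canvas
      testCtx.drawImage(a.image, a.x - ox, a.y - oy);
      testCtx.globalCompositeOperation = 'source-in';   // keep only pixels covered by both sprites
      testCtx.drawImage(b.image, b.x - ox, b.y - oy);
      testCtx.restore();
      var data = testCtx.getImageData(0, 0, s, s).data; // 8 x 8 x 4 = 256 bytes, as above
      for (var i = 3; i < data.length; i += 4) if (data[i] > 0) return true;
      return false;
    }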
Related
I'm making a small online multiplayer game with JavaScript and the native Canvas API (WebGL). I want to find the color of pixels on the screen as a form of collision detection. I figure it would save resources not to have to process every shape every frame, but rather to simply check whether the color of a pixel at a certain position indicates that it is contacting a shape. (I hope that makes sense.)
I ran some tests and I have an average frame delay of about 4-5 milliseconds without collision detection, and then when I make a single call to my canvas context's .getImageData() method, suddenly that frame delay shoots up to 19-20 milliseconds...
As far as I can find online getImageData() is the only means of checking the color of a given pixel, but I have to think there's some other way that doesn't introduce such a huge amount of lag.
I tried running getImageData() on a small section of the screen vs. larger sections, and a 1x1 pixel request introduces about 10ms of latency, whereas a 600x600 pixel request is about 15ms... So the issue isn't the amount/size of the request; the request itself is just extremely slow, so there's no potential for optimization there. I NEED another way.
Also, caching the image data is not an option. I need to poll these pixels every single frame; I can't cache the data (the player and the object it needs to collide with are all constantly moving, and they're being controlled over the internet, so there's also no way of predicting where they'll be at any given time... I NEED to poll every frame, no exceptions).
To be clear, I'm not asking how to write collisions or how to make pixel-perfect collision detection systems... I'm asking ONLY how to get the color of a pixel on the canvas without having to use .getImageData(), because .getImageData() is far too slow for my use case.
Standard collision detection may end up being a better option. Pixel-perfect checking works well for perfect collision detection with complex objects but can get expensive with lots of pixels.
What I would probably recommend instead is to use standard collision detection with simple shapes. If done right, it should have good performance, even for a more complex game.
If you really do want to use pixel-perfect collision, you'll want to check the rectangular bounding boxes of the two objects first to ensure they aren't far away from each other. If their bounding boxes intersect, you could then use bitmasks of your sprites to quickly check each pixel for overlap. This question has a bit more info. It's not JS, but the concept would be the same.
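If you go the bitmask route, the idea is to read each sprite's alpha channel once at load time (so the expensive getImageData call is not per-frame) and then compare the two masks only over the rectangle where the bounding boxes intersect. A rough sketch, with made-up sprite fields (x, y, mask) and whole-pixel positions assumed:

    // Build a per-pixel mask from a sprite image's alpha channel, once, at load time.
    function buildMask(image) {
      var c = document.createElement('canvas');
      c.width = image.width; c.height = image.height;
      var ctx = c.getContext('2d');
      ctx.drawImage(image, 0, 0);
      var data = ctx.getImageData(0, 0, c.width, c.height).data;
      var mask = new Uint8Array(c.width * c.height);
      for (var i = 0; i < mask.length; i++) mask[i] = data[i * 4 + 3] > 0 ? 1 : 0;
      return { width: c.width, height: c.height, data: mask };
    }

    // Pixel-perfect test, only called after the bounding boxes already intersect.
    // a and b are assumed to look like { x, y, mask } with mask from buildMask().
    function masksCollide(a, b) {
      var left   = Math.max(a.x, b.x),
          right  = Math.min(a.x + a.mask.width,  b.x + b.mask.width),
          top    = Math.max(a.y, b.y),
          bottom = Math.min(a.y + a.mask.height, b.y + b.mask.height);
      for (var y = top; y < bottom; y++) {
        for (var x = left; x < right; x++) {
          if (a.mask.data[(y - a.y) * a.mask.width + (x - a.x)] &&
              b.mask.data[(y - b.y) * b.mask.width + (x - b.x)]) return true;
        }
      }
      return false;
    }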
I'm trying to make a little platform game with pure HTML5 and JavaScript. No frameworks.
So in order to make my character jump on top of enemies and floors/walls etc., it needs some proper collision detection algorithms.
Since I'm not usually into doing this, I really have no clue how to approach the problem.
Should I re-check, in every frame (it runs at 30 FPS), all obstacles in the canvas and see whether they collide with my player, or is there a better and faster way to do so?
I even thought of making dynamic maps, so that the width, height, and x and y coordinates of each obstacle are stored in an object. Would that make it faster to check whether it's colliding with the player?
1. Should I re-check in every frame (it runs at 30 FPS)?
Who says it runs at 30 FPS? I found no such thing in the HTML5 specification. The closest you'll get to having any say about the frame rate at all is to programmatically call setInterval or the newer, more preferred, requestAnimationFrame function.
However, back to the story. You should always look for collisions as often as you can. Usually, when writing games on other platforms where one has a greater ability to measure CPU load, this is one of those things you might find favorable to scale back a bit if the CPU has a hard time keeping up. In JavaScript, though, you're out of luck trying to implement advanced solutions like that one.
I don't think there's a shortcut here. The computer has no way of knowing what collided, how, when and where, if you don't make that computation yourself. And yes, this is usually, if not always, done just before each new frame is painted.
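In practice that just means the collision pass is one step of your frame callback. A bare-bones sketch (update, checkCollisions and render are placeholders for your own functions):

    // Minimal game loop: the collision pass runs every frame, right before painting.
    var lastTime = performance.now();
    function frame(now) {
      var dt = (now - lastTime) / 1000;   // seconds since the previous frame
      lastTime = now;
      update(dt);                         // move the player, enemies, projectiles, ...
      checkCollisions();                  // resolve collisions just before drawing
      render();                           // repaint the canvas
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);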
2. A dynamic map?
If by "map" you mean an array-like object or multidimensional array that maps coordinates to objects, then the short answer has to be no. But please do have an array of all objects on the scene. The width, height and coordinates of the object should be stored in variables in the object. Leaking these things would quickly become a burden; rendering the code complex and introduce bugs (please see separation of concerns and cohesion).
Do note that I just said "array of all objects on the scene" =) There is a subtle but most important point in this quote:
Whenever you walk through objects to determine their position and whether they have collided with someone or not. Also have a look at your viewport boundaries and determine whether the object are still "on the scene" or not. For instance, if you have a space craft simulator of some kind and a star just passed the player's viewport from one side to the other and then of the screen, and there is no way for the star to return and become visible again, then there is no reason for the star to be left behind in the system any more. He should be deleted and removed. He should definitely not be stored in an array and become part of a future collision detection with the player's avatar! Such things could dramatically slow down your game.
Bonus: Collision quick tips
Divide the screen into parts. There is no reason to look for a collision between two objects if one of them is on the left side of the screen and the other one is on the right side. You could also split the screen into more logical units than just left and right.
Always strive to have a cheap computation made first. We kind of already did that in the last tip. But even if you now know that two objects just might be in collision with each other, first draw a logical square around each of them. For instance, say you have two 2D airplanes; there is no reason to first check whether some part of their wings collide. Draw a square around each airplane, effectively capturing its largest width and its largest height. If these two squares do not overlap, then, just like in the last tip, you know they cannot be in collision with each other. But if your first-phase cheap computation hinted that they might be, pass those two airplanes on to another, more expensive computation to really look into the matter a bit more.
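Here's a rough sketch of those two cheap phases together: a coarse grid so far-apart objects never get compared, followed by an axis-aligned bounding-box test before anything more expensive. The x/y/width/height fields are assumptions about how your objects are stored:

    // Cheap test: axis-aligned bounding boxes.
    function overlaps(a, b) {
      return a.x < b.x + b.width && b.x < a.x + a.width &&
             a.y < b.y + b.height && b.y < a.y + a.height;
    }

    // Broad phase: bucket objects into grid cells so only nearby objects are compared.
    function findCandidatePairs(objects, cellSize) {
      var grid = {};    // cell key -> objects whose bounding box touches that cell
      var pairs = [];
      objects.forEach(function (obj) {
        var x0 = Math.floor(obj.x / cellSize), x1 = Math.floor((obj.x + obj.width) / cellSize);
        var y0 = Math.floor(obj.y / cellSize), y1 = Math.floor((obj.y + obj.height) / cellSize);
        for (var gx = x0; gx <= x1; gx++) {
          for (var gy = y0; gy <= y1; gy++) {
            var key = gx + ',' + gy;
            var cell = grid[key] || (grid[key] = []);
            cell.forEach(function (other) {
              // a pair sharing several cells can show up more than once; dedupe if that matters
              if (overlaps(obj, other)) pairs.push([obj, other]);
            });
            cell.push(obj);
          }
        }
      });
      return pairs;   // hand these pairs to your expensive, precise test
    }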
I am still working on something where I wanted to make lots of divs and have them act under physics. I will share some things that weren't obvious to me at first.
Detect collisions in your data first. I was reading the x and y of boxes on screen, then checking against other divs. After a week it occurred to me how stupid this was: first I would assign a new value to a div, then read it back from the div. Accessing divs is expensive. Think of the DOM as a rendering stage.
Use web workers where it's possible and easy.
Use canvas if possible.
And if possible, make elements carry a list of the elements they should be checked against for collisions (this would only be helpful in certain cases).
I learned that interactive collisions are way more expensive, because you have to check for changes in the environment, whereas with non-interactive animation you can simulate what is going to happen in the future ahead of time, so your animation is more fluid and more CPU is available.
I made something at a very, very early stage just for fun: http://www.lastnoob.com/
For a game, I'm trying to calculate light and shadows. For this, I break down my canvas into square areas and calculate, if a light ray would be blocked on the way from the player to each square position. I've managed now to reach a reasonably good performance for those calculations.
The results are then visualized by covering non-visible areas with dark squares (Canvas.fillRect(...)), but this step becomes too expensive when I want a nice resolution, i.e. ~10,000 squares. I've tried first rendering them into an off-screen canvas (a buffer), then drawing the buffer onto my visible canvas, but I couldn't see any remarkable performance improvement.
Is there something I missed, or are there other methods to speed up drawing?
Update:
The affected code can be found here: https://github.com/otruffer/Ape_On_Tape/blob/master/src/client/js/visibility.js (Code is a bit too big to post here)
The actual drawing takes place in drawCloudAt(...) and flushBuffer() in the lower part of this file.
Doing real-time light calculation in software is always a performance killer. Did you consider using a real 3D engine instead, which does the light calculation on the video card? Yes, JavaScript can do that now - this newer feature is called WebGL.
When you just need a faster way to apply your lightmap, you could manipulate the RGB values of your canvas directly instead of using fillRect. You can use getImageData to get an array of raw 8-bit RGBA values for your canvas, manipulate that array, and put it back with putImageData.
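For example, instead of issuing one fillRect per dark square, you could darken the pixels yourself in a single pass. A rough sketch, assuming 'visible' is a 2D array of booleans (one entry per square) and squareSize is the square size in pixels:

    // Apply the lightmap by editing pixel values directly: one getImageData,
    // one loop, one putImageData, instead of thousands of fillRect calls.
    function applyLightmap(ctx, visible, squareSize, width, height) {
      var img = ctx.getImageData(0, 0, width, height);
      var data = img.data;
      for (var y = 0; y < height; y++) {
        var row = visible[Math.floor(y / squareSize)];
        for (var x = 0; x < width; x++) {
          if (!row[Math.floor(x / squareSize)]) {
            var i = (y * width + x) * 4;
            data[i]     *= 0.3;   // darken R
            data[i + 1] *= 0.3;   // darken G
            data[i + 2] *= 0.3;   // darken B
          }
        }
      }
      ctx.putImageData(img, 0, 0);
    }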
This is more of a generic question to be honest, just wondering if anyone has done any sort of research on the subject.
Basically, I am adding event support to a small game engine I am creating for my own personal use. I would like pixel-perfect hover events for 2D objects and am just thinking about the best way of doing it. Realistically, it would be faster for me personally just to draw my objects onto a transparent canvas and check whether the mouse x/y is over a transparent pixel or not, since I don't have to build a set of points defining the outline of an object. This would also allow me to have holes in my objects, and it would still correctly know whether I hovered over them or not.
What I am wondering is, compared with the methods shown here: How can I determine whether a 2D Point is within a Polygon?
How much slower would my method be than the methods shown there?
I'm currently still learning, so it's not easy for me to implement all of this and just test it myself, since it would probably take me ages to get it working correctly and test the speeds.
Side note: I would still have a basic bounding box to save it from redrawing and testing every single time.
Checking if a point is in a polygon will 99.999999% of the time be vastly, vastly faster.
To be slower the polygon would need to be extremely complex.
To do the other method you need to use getImageData, and getting image data on the canvas is very slow.
Point-in-polygon algorithms do properly account for holes. Make sure you have one that obeys the non-zero winding number rule, because that is what canvas uses (as opposed to the even-odd rule), and you may want compatibility with paths constructed in the canvas (either now or later).
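For reference, here's a rough sketch of a point-in-polygon test that uses the non-zero winding rule; the polygon is assumed to be an array of {x, y} vertices:

    // Winding-number point-in-polygon test (non-zero rule, like canvas fills).
    function pointInPolygon(px, py, polygon) {
      var winding = 0;
      for (var i = 0, n = polygon.length; i < n; i++) {
        var a = polygon[i], b = polygon[(i + 1) % n];
        if (a.y <= py) {
          // upward edge crossing the horizontal line through the point, point to its left
          if (b.y > py && isLeft(a, b, px, py) > 0) winding++;
        } else {
          // downward edge crossing, point to its right
          if (b.y <= py && isLeft(a, b, px, py) < 0) winding--;
        }
      }
      return winding !== 0;   // non-zero winding number => inside
    }

    function isLeft(a, b, px, py) {
      // > 0 if (px, py) is left of the directed edge a -> b, < 0 if right, 0 if on the line
      return (b.x - a.x) * (py - a.y) - (px - a.x) * (b.y - a.y);
    }

And if your shapes already exist as canvas paths, ctx.isPointInPath(x, y) applies the same non-zero rule by default, without reading any pixels.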
I'm investigating the possibility of producing a game using only HTML's canvas as the display medium. To take an example of a task I need to do: I need to construct the game environment from a number of isometric tiles. Of course, working in 2D means they by necessity come in rectangular packages, so there's a large overlap between tiles.
I'm old enough that the natural solution to this problem is to call BitBltMasked. Oh wait, no, an HTML canvas doesn't have something as simple and as pleasing as BitBlt. It seems that the only way to dump pixel data in to a canvas is either with drawImage() which has no useful drawing modes that ignore the alpha channel or to use ImageData objects that have the image data in an array.. to which every. access. is. bounds. checked. and. therefore. dog. slow.
OK, that's more of a rant than a question (things the W3C like tend to provoke that from me), but what I really want to know is how to draw fast to a canvas? I'm finding it very difficult to ditch the feeling that doing 100s of drawImages() a second where every draw respects the alpha channel is inherently sinful and likely to make my application perform like arse in many browsers. On the other hand, the only way to implement BitBlt proper relies heavily on a browser using a hotspot-like execution technique to make it run fast.
Is there any way to draw fast across every possible implementation, or do I just have to forget about performance?
This is a really interesting problem, and there's a few interesting things you can do to solve it.
First, you should know that drawImage can accept a Canvas, not just an image. The "sub-Canvas"es don't even need to be in the DOM. This means that you can do some compositing on one canvas, then draw it to another. This opens a whole world of optimization opportunities, especially in the context of isometric tiles.
Let's say you have an area that's 50 tiles long by 50 tiles wide (I'll say meters for the sake of my own sanity). You might divide the area into 10x10m chunks, with each chunk represented by its own Canvas. To draw the full scene, you'd simply draw each of the chunks' Canvas objects to the main canvas that's shown to the user. If only four chunks are visible (a 20x20m area), you would only perform four drawImage operations.
Of course, each of those individual chunks will need to render its own Canvas. On game ticks where nothing happens in the chunk, you simply don't do anything: the Canvas will remain unchanged and will be drawn as you'd expect. When something does change, you can do one of a few things depending on your game:
If your tiles extend into the third dimension (i.e., you have a Z-axis), you can draw each "layer" of the chunk into its own Canvas and only update the layers that need to be updated. For example, if each chunk contains ten layers of depth, you'd have ten Canvas objects. If something on layer 6 was updated, you would only need to re-paint layer 6's Canvas (probably one drawImage per square meter, which would be 100), then perform one drawImage operation per layer in the chunk (ten) to re-draw the chunk's Canvas. Decreasing or increasing the chunk size may increase or decrease performance depending on the number of updates you make to the environment in your game. Further optimizations can be made to eliminate drawImage calls for obscured tiles and the like.
If you don't have a third dimension, you can simply perform one drawImage per square meter of a chunk. If two chunks are updated, that's only 200 drawImage calls per tick (plus one call per chunk visible on the screen). If your game involves very few updates, decreasing the chunk size will decrease the number of calls even further.
You can perform updates to the chunks in their own game loop. If you're using requestAnimationFrame (as you should be), you only need to paint the chunk Canvas objects to the screen. Independently, you can perform game logic in a setTimeout loop or the like. Then, each chunk could be updated in its own tick between frames without affecting performance. This can also be done in a web worker using getImageData and putImageData to send the rendered chunk back to the main thread whenever it needs to be updated, though making this work seamlessly will take a good deal of effort.
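In code, the chunking idea boils down to "render once per chunk canvas, blit many times". A rough sketch (CHUNK_PIXELS, drawTile and the chunk/tile layout are assumptions for the example, not anything canvas gives you for free):

    var CHUNK_PIXELS = 320;   // e.g. 10 tiles of 32px

    // Each chunk owns an off-screen canvas and a dirty flag.
    function makeChunk(worldX, worldY) {
      var canvas = document.createElement('canvas');
      canvas.width = canvas.height = CHUNK_PIXELS;
      return { canvas: canvas, ctx: canvas.getContext('2d'),
               worldX: worldX, worldY: worldY, dirty: true, tiles: [] };
    }

    // Re-render a chunk only when something in it changed.
    function renderChunk(chunk) {
      if (!chunk.dirty) return;                    // untouched chunks cost nothing
      chunk.ctx.clearRect(0, 0, CHUNK_PIXELS, CHUNK_PIXELS);
      chunk.tiles.forEach(function (tile) {
        drawTile(chunk.ctx, tile);                 // your own per-tile drawing
      });
      chunk.dirty = false;
    }

    // The visible scene is just one drawImage per visible chunk.
    function drawScene(mainCtx, visibleChunks, cameraX, cameraY) {
      visibleChunks.forEach(function (chunk) {
        renderChunk(chunk);                        // usually a no-op
        mainCtx.drawImage(chunk.canvas, chunk.worldX - cameraX, chunk.worldY - cameraY);
      });
    }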
The other option that you have is to use a library like pixi.js to render the scene using WebGL. Even for 2D, it will increase performance by decreasing the amount of work that the CPU needs to do and shifting that over to the GPU. I'd highly recommend checking it out.
I know that GameJS has blit operations, and I certainly assume other HTML5 game libraries do as well (gameQuery, LimeJS, etc.). I don't know if these packages have addressed the specific array-bounds-checking concern that you had, but in practice their samples seem to work plenty fast on all platforms.
You should not make assumptions about what speedups make sense. For example, the GameJS developer reports that he was going to implement dirty-rectangle tracking, but it turned out that modern browsers do this automatically (link).
For this reason and others, I suggest getting something working before thinking about speed. Also, make use of drawing libraries, as the authors have presumably spent some time optimizing performance.
I have no personal knowledge about this, but you could look into the appMobi "direct canvas" HTML element, which is allegedly a much faster version of the normal canvas (link). I'm confused about whether this works in all browsers, just WebKit browsers, or just appMobi's own special browser.
Again, you should not make assumptions about what speedups make sense without a very deep knowledge of web browser internal processes. That webpage about "direct canvas" mentions a bunch of things that slow down canvas-drawing: "Reflowing text, mapping hot spots, creating indexes for reference links, on and on." Alpha-blending and array-bounds-checking are not mentioned as prominent causes of slowness!
Unfortunately, there's no way around the alpha composition overhead. Clipping may be one solution, but I doubt there would be much, if any, performance gain. Not to mention how complicated such a route would be to implement on irregular shapes.
When you have to draw the entire display, you're going to have to deal with the performance hit. Although afterwards, you have a whole screen's worth of pre-calculated alpha imagery and you can draw this image data at an offset in one drawImage call. Then, you would only have to individually draw the new tiles that are scrolled into view.
But still, the browser is having to redraw each pixel at a different location in the canvas, which is quite expensive. It would be nice if there were a method for just scrolling pixels, but no luck there either.
One idea that comes to mind is that you could implement multiple canvases, translating each individual canvas instead of redrawing the pixels. This would allow the browser to decide how to redraw those pixels, in a more native way, at least in theory anyway. Then you could render the newly visible tiles on a new (or reused/cached) canvas element, positioning it to match up with the last screen render.
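A rough sketch of that idea, assuming the layer canvases are absolutely positioned inside a container with overflow: hidden (I haven't benchmarked it, so treat it as a starting point rather than a proven win):

    // Scroll a pre-rendered canvas layer with a CSS transform instead of redrawing
    // its pixels; the browser's compositor moves the existing pixels.
    function scrollLayer(layerCanvas, offsetX, offsetY) {
      layerCanvas.style.transform = 'translate(' + (-offsetX) + 'px, ' + (-offsetY) + 'px)';
    }

    // When scrolling exposes tiles that have never been drawn, render just those
    // tiles onto the layer (or onto a fresh cached canvas positioned next to it),
    // rather than repainting the whole screen.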
But that's just my two blits... I mean bits... duh, I mean cents :]