Rotating HTML5 canvas slow? - javascript

I'm experimenting with rotation on canvas; each object now has its own rotation. Without rotation I can get around 400 objects on screen on a very low-end computer and nearly 2000 on a normally stocked PC. When I factor in a rotation greater than 0, performance drops by at least a third!
Why does just changing the rotation slow it down so much? Is this one of canvas's weird hiccups?
I have a global rotation variable and at the beginning of drawing each object I:
ctx.rotate(globRot);
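In full, the per-object draw looks something like this (a sketch; the `obj` fields and the function name are illustrative, not the actual code):

```javascript
// Sketch of a typical per-object rotated draw. Each rotate() call
// rebuilds the context's transform matrix for that object.
function drawObject(ctx, obj, globRot) {
  ctx.save();
  ctx.translate(obj.x, obj.y);               // move origin to the object
  ctx.rotate(globRot);                       // the expensive part
  ctx.drawImage(obj.sprite, -obj.w / 2, -obj.h / 2);
  ctx.restore();                             // undo the transform
}
```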

For individual objects, cache the rotations. Some of my findings:
Realtime rotation demo
Cached rotations demo (note: move up using the arrow keys to find the zombies)

I guess a lot of time might be spent actually creating and multiplying the matrix for the transformation. If you can (find a way to) cache the transformation when it's not changing, that might help. Maybe.
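If the objects are sprites, one common way to cache is to pre-render a fixed number of rotated copies once, then pick the nearest frame at draw time, so the per-frame path never calls rotate(). A sketch under that assumption (all names here are mine, not from the question):

```javascript
const STEPS = 32; // number of pre-rendered rotation angles

// Map any angle (radians) to the nearest cached frame index. Pure helper.
function frameIndex(angle) {
  const tau = Math.PI * 2;
  const norm = ((angle % tau) + tau) % tau; // wrap into [0, tau)
  return Math.round(norm / tau * STEPS) % STEPS;
}

// Build the cache once (browser only): one offscreen canvas per step.
function buildCache(sprite) {
  const frames = [];
  for (let i = 0; i < STEPS; i++) {
    const c = document.createElement('canvas');
    c.width = sprite.width;
    c.height = sprite.height;
    const fctx = c.getContext('2d');
    fctx.translate(c.width / 2, c.height / 2);
    fctx.rotate(i / STEPS * Math.PI * 2);
    fctx.drawImage(sprite, -sprite.width / 2, -sprite.height / 2);
    frames.push(c);
  }
  return frames; // at draw time: ctx.drawImage(frames[frameIndex(a)], x, y)
}
```

The trade-off is memory for speed, plus a small loss of angular precision (here 360/32 ≈ 11 degrees per step).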

Related

Is there any way to find the color of a pixel in JavaScript WITHOUT using getImageData()?

I'm making a small online multiplayer game with JavaScript and the native Canvas API (WebGL). I want to find the color of pixels on the screen as a form of collision detection, I figure it'd save resources to not have to process every shape every frame, but rather to simply check if the color of a pixel at a certain position is such that it is contacting a shape. (I hope that makes sense)
I ran some tests and I have an average frame delay of about 4-5 milliseconds without collision detection, and then when I make a single call to my canvas context's .getImageData() method, suddenly that frame delay shoots up to 19-20 milliseconds...
As far as I can find online getImageData() is the only means of checking the color of a given pixel, but I have to think there's some other way that doesn't introduce such a huge amount of lag.
I tried running getImageData() on a small section of the screen vs. larger sections, and a 1x1 pixel request introduces 10ms latency, whereas a 600x600 pixel request takes about 15ms... So the issue isn't the amount/size of the request; the request itself is extremely slow, so there's no potential for optimization here. I NEED another way.
Also, caching the image data is also not an option. I need to poll these pixels every single frame, I can't cache it (because the player and the object it needs to collide with are all constantly moving, and they're being controlled over the internet so there's also no way of predicting where they'll be at any given time... I NEED to poll every frame with no exceptions)
To be clear, I'm not asking how to write collisions or how to make pixel-perfect collision detection systems... I'm asking ONLY how to get the color of a pixel on the canvas without having to use .getImageData(), because .getImageData() is far too slow for my use case.
Standard collision detection may end up being a better option. Pixel-perfect checking gives exact collision detection for complex objects but can get expensive with lots of pixels.
What I would probably recommend instead is to use standard collision detection with simple shapes. If done right, it should have good performance, even for a more complex game.
If you really do want to use pixel-perfect collision, you'll want to check the rectangular bounding boxes of the two objects first to ensure they aren't far away from each other. If their bounding boxes intersect, you could then use bitmasks of your sprites to quickly check each pixel for overlap. This question has a bit more info. It's not JS, but the concept would be the same.
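A rough sketch of that two-phase idea. The row-bitmask representation and field names are assumptions for illustration; real sprites would generate the masks from their alpha data:

```javascript
// Phase 1: cheap axis-aligned bounding box rejection.
function aabbOverlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// Phase 2: bitmask overlap. `mask` is one integer per row, bit i set when
// the pixel at column i is opaque (fine for sprites up to ~30px wide).
function maskCollision(a, b) {
  if (!aabbOverlap(a, b)) return false;      // most pairs stop here
  const top = Math.max(a.y, b.y);
  const bottom = Math.min(a.y + a.h, b.y + b.h);
  for (let y = top; y < bottom; y++) {
    // Shift each row mask into world space, then AND them together.
    const rowA = a.mask[y - a.y] << a.x;
    const rowB = b.mask[y - b.y] << b.x;
    if (rowA & rowB) return true;            // some opaque pixel overlaps
  }
  return false;
}
```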

How can I improve performance for multiple images drawn in webgl?

I am programming a simple WebGL application which draws multiple images (textures) over each other. Depending on the scroll position, the scale and opacity of the images are changed to create a 3D multi-layer parallax effect. You can see the effect here:
http://gsstest.gluecksschmiede.io/ct2/
I am currently working on improving the performance, because the effect does not perform well on older devices (low fps). I lack the in-depth knowledge of WebGL (and WebGL debugging) to see what the reason for the bad performance is, so I need help. This question is only concerned with desktop devices.
I've tried / I am currently:
always working with the same program and shader pair
the images are 2000x1067 and already compressed. I need PNG because of the transparency. I could compress them a little more, but not much. The resolution has to stay that way.
already using requestAnimationFrame and non-blocking scroll listeners
The webgl functions I am using to draw the image can be read in this file:
http://gsstest.gluecksschmiede.io/ct2/js/setup.js
Shader code can be found here (just right click -> show sourcecode):
http://gsstest.gluecksschmiede.io/ct2/
Basically, I've used this tutorial/code and did just a few changes:
https://webglfundamentals.org/webgl/lessons/webgl-2d-drawimage.html
I'm then using this setup code to draw the images depending on current scroll position as seen in this file (see the "update" method):
http://gsstest.gluecksschmiede.io/ct2/js/para.js
In my application, about 15 images of 2000x1067 size are drawn on top of each other every frame. I expected this to perform way better than it actually does. I don't know what is causing the bottleneck.
How you can help me:
Provide hints or ideas what code / image compression / whatever changes could improve rendering performance
Provide help on how to debug the performance. Is there a more clever way than just printing out times with console.log and performance.now?
Provide ideas on how I could gracefully degrade or provide a fallback that performs better on older devices.
This is just a guess but ...
drawing 15 fullscreen images is going to be slow on many systems. It's just too many pixels. It's not the size of the images, it's the size at which they are drawn. For example, on my MacBook the resolution of the screen is 2560x1600.
You're drawing 15 images. Those images are drawn into a canvas. That canvas is then drawn into the browser's window and the browser's window is then drawn on the desktop. So that's at least 17 draws or
2560 * 1600 * 17 = 70meg pixels
To get a smooth framerate we generally want to run at 60 frames a second. 60 frames a second means
60 frames a second * 70 meg pixels = 4.2gig pixels a second.
My GPU is rated for 8gig pixels a second so it looks like we might get 60fps here
Let's compare to a 2015 MacBook Air with Intel HD Graphics 6000. Its screen resolution is 1440x900, which if we calculate things out comes to 1.3 gigapixels at 60 frames a second. Its GPU is rated for 1.2 gigapixels a second, so we're not going to hit 60fps on a 2015 MacBook Air.
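The arithmetic above boils down to a simple budget check, roughly:

```javascript
// Rough fillrate budget, following the arithmetic above: every pixel of
// every fullscreen draw is paid for, every frame.
function pixelsPerSecond(width, height, drawsPerFrame, fps) {
  return width * height * drawsPerFrame * fps;
}

// 2560x1600 screen, 17 fullscreen draws, 60fps:
const budget = pixelsPerSecond(2560, 1600, 17, 60); // ~4.18 gigapixels/s
```

Compare that number against the GPU's rated fillrate (scaled down for real-world overhead) to predict whether 60fps is even reachable.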
Note that like everything, the specified max fillrate for a GPU is one of those theoretical max things, you'll probably never see it hit the top rate because of other overheads. In other words, if you look up the fillrate of a GPU multiply by 85% or something (just a guess) to get the fillrate you're more likely to see in reality.
You can test this easily: just make the browser window smaller. If you make the browser window 1/4 the size of the screen and it runs smoothly, then your issue was fillrate (assuming you are resizing the canvas's drawing buffer to match its display size). This is because once you do that, fewer pixels are being drawn (75% fewer) but all the other work stays the same (all the JavaScript, WebGL, etc.).
Assuming that shows your issue is fillrate then things you can do
Don't draw all 15 layers.
If some layers fade out to 100% transparent then don't draw those layers. If you can design the site so that only 4 to 7 layers are ever visible at once you'll go a long way to staying under your fillrate limit
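A minimal sketch of that culling step, assuming each layer carries a computed `opacity` (the layer structure is hypothetical):

```javascript
// Skip layers that are fully (or nearly) transparent before drawing.
// Opacity below epsilon contributes nothing visible but still costs
// a full screen's worth of fillrate.
function visibleLayers(layers, epsilon = 1 / 255) {
  return layers.filter(layer => layer.opacity > epsilon);
}
```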
Don't draw transparent areas
You said 15 layers, but it appears some of those layers are mostly transparent. You could break those apart into, say, 9+ pieces (like a picture frame) and not draw the middle piece. Whether it's 9 pieces or 50 pieces, it's probably better than 80% of the pixels being 100% transparent.
Many game engines, if you give them an image, will auto-generate a mesh that only uses the parts of the texture that are > 0% opaque. For example, I made this frame in Photoshop.
Then loading it into unity you can see Unity made a mesh that covers only the non 100% transparent parts
This is something you'd do offline either by writing a tool or doing it by hand or using some 3D mesh editor like blender to generate meshes that fit your images so you're not wasting time trying to render pixels that are 100% transparent.
Try discarding transparent pixels
This you'd have to test. In your fragment shader you can put something like
if (color.a <= alphaThreshold) {
discard; // don't draw this pixel
}
Where alphaThreshold is 0.0 or greater. Whether this saves time may depend on the GPU, since using discard is slower than not using it. The reason is that if you don't use discard, the GPU can do certain checks early. In your case, though, I think it might be a win. Note that option #2 above, using a mesh for each plane that only covers the non-transparent parts, is by far better than this option.
Pass more textures to a single shader
This one is overly complicated but you could make a drawMultiImages function that takes multiple textures and multiple texture matrices and draws N textures at once. They'd all have the same destination rectangle but by adjusting the source rectangle for each texture you'd get the same effect.
N would probably be 8 or less, since there's a limit on the number of textures you can use in one draw call depending on the GPU. 8 is the minimum limit IIRC, meaning some GPUs will support more than 8, but if you want things to run everywhere you need to handle the minimum case.
GPUs like most processors can read faster than they can write so reading multiple textures and mixing them in the shader would be faster than doing each texture individually.
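A sketch of what the JavaScript side of such a drawMultiImages might look like. Every name here (u_tex, u_texMatrix, the quad layout) is an assumption, not the author's actual API; the matching shader would loop over the samplers and blend them with per-layer opacity:

```javascript
const MAX_BATCH = 8; // WebGL guarantees at least 8 fragment texture units

// Bind up to MAX_BATCH textures, point the shader's sampler and matrix
// arrays at them, then issue a single draw call for the shared quad.
function drawMultiImages(gl, program, textures, texMatrices) {
  const count = Math.min(textures.length, MAX_BATCH);
  for (let i = 0; i < count; i++) {
    gl.activeTexture(gl.TEXTURE0 + i);
    gl.bindTexture(gl.TEXTURE_2D, textures[i]);
    gl.uniform1i(gl.getUniformLocation(program, `u_tex[${i}]`), i);
    gl.uniformMatrix3fv(
        gl.getUniformLocation(program, `u_texMatrix[${i}]`),
        false, texMatrices[i]);
  }
  gl.drawArrays(gl.TRIANGLES, 0, 6); // one quad; the shader mixes all layers
}
```

In production you'd cache the uniform locations once at setup rather than looking them up per draw.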
Finally it's not clear why you're using WebGL for this example.
Option 4 would be fastest, but I wouldn't recommend it. Seems like too much work to me for such a simple effect. Still, I just want to point out that, at least at a glance, you could just use N <div>s and set their CSS transform and opacity and get the same effect. You'd still have the same issues: 15 fullscreen layers is too many, and you should hide <div>s whose opacity is 0% (the browser might do that for you, but best not to assume). You could also use the 2D canvas API and you should see similar perf. Of course if you're doing some kind of special effect (I didn't look at the code) then feel free to use WebGL; just at a glance it wasn't clear.
A couple of things:
Performance profiling shows that your problem isn't WebGL but a memory leak.
Image compression is worthless in WebGL, as WebGL doesn't care about PNG, JPG, or WebP. Textures are always just arrays of bytes on the GPU, so each of your layers is 2000 * 1067 * 4 bytes = 8.536 megabytes.
Never construct data in your animation loop (you currently do); learn how to use the math library you're using.

Rendering on HTML5 canvas consumes too much time, buffering barely helps

For a game, I'm trying to calculate light and shadows. For this, I break my canvas down into square areas and calculate whether a light ray would be blocked on the way from the player to each square position. I've managed to reach reasonably good performance for those calculations.
The results are then visualized by covering non-visible areas with dark squares (Canvas.fillRect(...)), but this step becomes too expensive when I want a nice resolution, i.e. ~10,000 squares for the calculation. I've tried first rendering them into an off-screen canvas (= buffer), then drawing the buffer onto my visible canvas, but I could not see any remarkable performance improvement.
Is there something I missed, or are there other methods to speed up drawing?
Update:
The affected code can be found here: https://github.com/otruffer/Ape_On_Tape/blob/master/src/client/js/visibility.js (Code is a bit too big to post here)
The actual drawing takes place in drawCloudAt(...) and flushBuffer() in the lower part of this file.
Doing real-time light calculation in software is always a performance killer. Did you consider using a real 3d engine instead which does the light calculation on the video card? Yes, Javascript can do that now - this new feature is called WebGL.
If you just need a faster way to apply your lightmap, you could manipulate the RGB values of your canvas directly instead of using fillRect. You can use getImageData to get an array of raw 8-bit RGBA values for your canvas, manipulate this array, and put it back with putImageData.
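A sketch of that approach; the `visible(x, y)` lookup is a placeholder for the game's own visibility grid:

```javascript
// Darken non-visible pixels directly in the RGBA byte array instead of
// issuing one fillRect per square. `data` is the Uint8ClampedArray from
// getImageData; `visible(x, y)` says whether that pixel is lit.
function applyLightmap(data, width, height, visible, darkness = 0.6) {
  for (let y = 0; y < height; y++) {
    for (let x = 0; x < width; x++) {
      if (visible(x, y)) continue;
      const i = (y * width + x) * 4;   // 4 bytes per pixel: R, G, B, A
      data[i]     *= 1 - darkness;     // scale RGB down, leave alpha alone
      data[i + 1] *= 1 - darkness;
      data[i + 2] *= 1 - darkness;
    }
  }
  return data;
}
```

After calling this on `imageData.data`, a single `ctx.putImageData(imageData, 0, 0)` writes the darkened frame back in one call.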

HTML5 Canvas Collision Detection "globalCompositeOperation" performance

Morning,
Over the past few months I have been tinkering with the HTML5 Canvas API and have had quite a lot of fun doing so.
I've gradually created a number of small games, purely to teach myself the dos and don'ts of game development. I am at a point where I am able to carry out basic collision detection, i.e. collisions between circles and platforms (fairly simple for most out there, but it felt like quite an achievement when I first got it working, and even better once I understood what was actually going on). I know pixel collision detection is not for every game, purely because in many scenarios you can achieve good-enough results using the methods discussed above, and this method is obviously quite expensive on resources.
But I just had a brainwave (It is more than likely somebody else has thought of this and I am way down the field but I've googled it and found nothing)....so here goes....
Would it be possible to use/harness the "globalCompositeOperation" feature of canvas? My initial thought was to set its method to "xor" and then check all the pixels on the canvas for transparency; if a transparent pixel is found, there must be a collision. Right? Obviously at this point you need to work out which objects occupy the pixel in question and how to react, but you would have to do that with other techniques too.
That said, is the canvas already doing this collision detection behind the scenes in order to work out when shapes are overlapping? Would it be possible to build on that?
Any ideas?
Gary
The canvas doesn't do this automatically (probably b/c it is still in its infancy). easeljs takes this approach for mouse enter/leave events, and it is extremely inefficient. I am using an algorithmic approach to determining bounds. I then use that to see if the mouse is inside or outside of the shape. In theory, to implement hit detection this way, all you have to do is take all the points of both shapes, and see if they are ever in the other shape. If you want to see some of my code, just let me know
However, I will say that, although your way is very inefficient, it is globally applicable to any shape.
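For the algorithmic approach mentioned above, a standard ray-casting point-in-polygon test is one way to check whether a point of one shape lies inside the other (a generic sketch, not the answerer's actual code):

```javascript
// Ray casting: shoot a horizontal ray from the point and count how many
// polygon edges it crosses; an odd count means the point is inside.
function pointInPolygon(pt, poly) {
  let inside = false;
  for (let i = 0, j = poly.length - 1; i < poly.length; j = i++) {
    const a = poly[i], b = poly[j];
    const crosses = (a.y > pt.y) !== (b.y > pt.y) &&
        pt.x < (b.x - a.x) * (pt.y - a.y) / (b.y - a.y) + a.x;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

Running this for each vertex of shape A against shape B (and vice versa) gives the "are any points of one shape inside the other" check described above.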
I made a demo on codepen which does the collision detection using an off screen canvas with globalCompositeOperation set to xor as you mentioned. The code is short and simple, and should have OK performance with smallish "collision canvases".
http://codepen.io/sakri/pen/nIiBq
If you use xor mode fullscreen, the second step is to getImageData the screen, which is a high-cost step, and the next step is to find out which objects were involved in the collision.
No need to benchmark: it will be too slow.
I'd suggest you rather use the 'classical' bounding-box test, then a test on the inner bounding boxes of the objects, and only after that go for pixels, locally.
By inner bounding box, I mean a rectangle which you're sure is completely inside your object, the reddish part in this example:
So use this mixed strategy:
- Do a test on the bounding boxes of your objects.
- If there's a collision between 2 BBoxes, perform an inner bounding box test: we are sure there's a collision if the sprites' inner bboxes overlap.
- Then keep the pixel-perfect test only for the really problematic cases; you only need to draw both sprites on a temporary canvas the size of the bigger sprite. You'll be able to perform a much, much faster getImageData. At this step, you know which objects are involved in the collision.
Notice that you can draw the sprites with a scale, on a smaller canvas, to get faster getImageData at the cost of a lower resolution.
Be sure to disable smoothing; I think an 8x8 canvas should already be enough (it depends on average sprite speed, in fact: if your sprites are slow, increase the resolution).
That way the data is 8 X 8 X 4 = 256 bytes big and you can keep a good frame-rate.
Note also that, when deciding how you compute the inner BBox, you can allow a given number of empty pixels into that inner BBox, trading accuracy for speed.
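The mixed strategy can be sketched like this (the `bbox`/`inner` field names are assumptions; only a 'maybe' result needs the expensive pixel-perfect getImageData step):

```javascript
// Axis-aligned rectangle overlap test, shared by both stages.
function overlap(a, b) {
  return a.x < b.x + b.w && b.x < a.x + a.w &&
         a.y < b.y + b.h && b.y < a.y + a.h;
}

// Returns 'none', 'hit', or 'maybe'. Only 'maybe' falls through to the
// pixel-perfect test on a small temporary canvas.
function classify(a, b) {
  if (!overlap(a.bbox, b.bbox)) return 'none';  // cheap rejection
  if (overlap(a.inner, b.inner)) return 'hit';  // certain collision
  return 'maybe';                               // ambiguous border zone
}
```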

Most efficient way to implement mouse smoothing

I'm finishing up a drawing application that uses OpenGL ES 2.0 (WebGL) and JS. Things work pretty well unless I draw with very quick movements. See the image below:
This loop was drawn with a smooth motion, but because JS was only able to get mouse readings at specific locations, the result is faceted. This happens to a certain degree in Photoshop if you have mouse smoothing turned off, though obviously much less because PS has the ability to poll at a much higher rate.
So, I would like to implement some mouse smoothing, but I'm concerned about making sure it's very efficient so that it doesn't bog down the actual pixel drawing operations. I was originally thinking about using the mouse locations that JS is able to grab to generate splines and interpolate between readings to give a smoother result. I'm not sure if this is the best approach, though. If it is, how do I make sure I sample the correct locations on the intermediate spline? Most of the spline equations I've found don't have uniformly-distributed values for t = [0, 1].
Any help/guidance/advice would be very appreciated. Thanks!
Catmull-Rom might be a good one to try, if you haven't already.
http://www.mvps.org/directx/articles/catmull/
I'd pick a minimum segment length and divide up segments that are over that into 1+segmentLength/minSegmentLength sub-segments.
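A per-axis Catmull-Rom sketch following the article linked above (apply it to x and y separately; p0 and p3 are the neighboring samples of the segment p1..p2):

```javascript
// Catmull-Rom interpolation between p1 and p2 for t in [0, 1].
function catmullRom(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  return 0.5 * ((2 * p1) +
                (-p0 + p2) * t +
                (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2 +
                (-p0 + 3 * p1 - 3 * p2 + p3) * t3);
}

// Sub-segment count per the answer: 1 + segmentLength / minSegmentLength.
function subSegments(segmentLength, minSegmentLength) {
  return 1 + Math.floor(segmentLength / minSegmentLength);
}
```

Evenly spaced t values are not evenly spaced along the curve's arc length, but with segments subdivided this finely the error is usually too small to see in a stroke.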
