Multiple canvases? [duplicate] - javascript

If we use multiple <canvas> elements on a single HTML page, does it hamper the performance of the application being developed? And does the code get very bulky and require more time to load the page?

Sometimes multiple canvases result in better performance. It's best to test if you can afford the time.
Say you are making a program that has items on the screen and allows the user to draw a selection box.
With one canvas, you'd have to redraw all of the elements over and over just to update the selection box the user sees, since everything is on the same canvas.
Or, you can have two canvases, one with the objects and then another one in front for things like "tools" (like the selection box graphics). Here two canvases may be more efficient.
Other times you may want to have a background that changes very rarely and foreground objects that change all the time. Instead of redrawing all of them at 60 frames per second, you make a background canvas and a foreground canvas, and only have the foreground's objects redraw at the fast speed. Here two canvases ought to be more efficient than one, but it may be even better to "cache" that background canvas as an image and draw that image first each frame.
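For illustration, a minimal sketch of that layered setup (the element ids and the drawBackground/drawMovingObjects functions are placeholders, not real APIs):

    // Two stacked canvases: a background drawn once and a foreground
    // cleared and redrawn every frame. The foreground <canvas> is assumed
    // to be absolutely positioned on top of the background one via CSS.
    var bg = document.getElementById('background');   // hypothetical ids
    var fg = document.getElementById('foreground');
    var bgCtx = bg.getContext('2d');
    var fgCtx = fg.getContext('2d');

    drawBackground(bgCtx); // placeholder: drawn once, never touched again

    function frame() {
      fgCtx.clearRect(0, 0, fg.width, fg.height); // only the foreground is cleared
      drawMovingObjects(fgCtx);                   // placeholder: redrawn every frame
      requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);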

I've used dozens of canvases on the same page to display different graphs using a JavaScript graphing library. The graphs are quite fast; it's gathering the data for them that's a bit slow in our case.
If you want, you can wait to do all of your drawing until the rest of the page has loaded by kicking it off from the onload handler.
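For example, a tiny sketch of deferring the drawing until the page has loaded (renderAllGraphs is a placeholder for whatever drawing you need to kick off):

    // Don't block page load with canvas work; start drawing afterwards.
    window.addEventListener('load', function () {
      renderAllGraphs(); // placeholder for your own drawing code
    });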

Also, HTML5Rocks describes this as a best practice.

According to Mark Pilgrim, it's a good idea to use multiple canvases.
See This Link
Using multiple canvases can simplify things on your end by isolating which regions of the screen to update and by isolating input events. If your page is well suited to dividing up regions of the screen, I say go for it.

A single canvas instance runs smoothly, and more of them do not noticeably affect rendering on the page. The data is what slows a canvas down. To improve page load time, you can simply call your canvas rendering methods after the page has loaded.

Related

What is a high performance way of splitting up Sprite Sheets/ Chip Sets in JS?

I'm working on a 2d world editor for html/js, and trying to find the best way to split up a chip set (as seen below) into multiple little squares (chips).
Currently, I'm using a method similar to the usual CSS sprite-sheet technique: using background-position to move the background of many little <div> elements until each displays one square/chip of the chip set.
It is working fine, with no big performance issues, but it seems like an overall clunky way to do it.
Other ways I've thought of doing it would be to slice the chip set into many temporary images and make an <img> element for each, or to use <canvas> instead of <div>s or <img>s.
Anyways, I'm looking for advice on the subject:
What is a high performance way of splitting up Sprite Sheets/ Chip Sets in JS?
example of a chip set
The way you're doing it right now should be pretty fast. You could essentially reimplement that at the canvas level by storing just the one image and then using drawImage to draw only sections of it. To the best of my knowledge, this is the most common way and is very fast. At least that was my experience when I wrote a game engine using canvas.
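For instance, a minimal sketch of that single-sheet drawImage approach (the 32x32 chip size, canvas id, and file name are illustrative assumptions):

    // Draw one 32x32 chip from a single chip-set image using drawImage's
    // source-rectangle form; no per-chip images or elements are created.
    var CHIP = 32; // assumed chip size in pixels
    var ctx = document.getElementById('editor').getContext('2d'); // hypothetical canvas

    function drawChip(sheet, col, row, destX, destY) {
      ctx.drawImage(
        sheet,                                 // the full chip-set image
        col * CHIP, row * CHIP, CHIP, CHIP,    // source rectangle inside the sheet
        destX, destY, CHIP, CHIP               // destination rectangle on the canvas
      );
    }

    var sheet = new Image();
    sheet.onload = function () {
      drawChip(sheet, 2, 1, 64, 0); // e.g. the chip at column 2, row 1
    };
    sheet.src = 'chipset.png'; // placeholder path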
Using the canvas is most likely faster and more memory efficient since you don't have the additional DOM overhead but you'd have to measure it for your specific use case and verify that.
Creating a bunch of individual images, as you suggested, would be very slow since it'd require copying portions of the image and creating an additional DOM element for each image.
In short: using the canvas will always be faster (for a comparable implementation) since you're almost directly interfacing with the GPU. Drawing from one image rather than copying it into multiple sub images will always be faster since you won't have duplicate memory sitting around and as long as you're drawing from the same sheet, the GPU doesn't have to switch out the texture.

How to avoid redrawing every object at each frame in paper.js

How to avoid redrawing everything at each frame in paper.js?
I suppose I have to detach the frame event from the view (view.detach('frame');) and then call draw manually every time I want to update something?
This would be very useful for making drawing applications.
EDIT
Here is an example of what I want to avoid (click to toggle the copies' visibility):
the framerate drops drastically when I show the many other shapes (since everything is redrawn at each frame), but the copies could be drawn only on click and then left untouched (the framerate would always stay high).
Just in case:
Symbols are not a solution here; this is maybe a better example of what I want to achieve. The trails fade away because the canvas is not cleared at each frame, just darkened.
I found some info about this here; it seems redraw optimizations are not implemented yet.
OK, I implemented the persistence request here, but I haven't submitted a pull request yet.
You can check two examples: the tail effect and the performance benchmark (click to toggle visibility of modified clones, press 'space' to toggle persistence).
You can find the example code here, under drawings.
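For reference, a rough sketch of the manual-redraw approach the question describes, assuming a plain-JavaScript paper.js setup (paper.setup already called) and a hypothetical template item to copy:

    // No onFrame handler is installed, so nothing is redrawn automatically;
    // a redraw is requested only when a copy is added on click.
    paper.view.onFrame = null;

    var tool = new paper.Tool();
    tool.onMouseDown = function (event) {
      var copy = template.clone();   // 'template' is a hypothetical existing item
      copy.position = event.point;
      paper.view.update();           // request a single redraw of the view
    };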

Is it better to have one big canvas or up to 100 small dynamically generated canvases?

I am working on a mobile web dice simulator. The initial prototype is here: http://dicewalla.com
I currently have one large canvas where I draw all the dice. I am planning to re-write the code in way that is more MVC and easier for me to update. I think it would be easier for me to generate a small canvas for each die object than to draw all of the dice on the big canvas and keep updating that big canvas.
My question is if there is a bad performance hit in having the browser create lots of little canvases vs one big one. It's hard to test locally, I was hoping someone here might know what the best practice is.
Multiple canvases usually allow for better performance as you're able to selectively re-render.
If you have only one canvas and want to update one die, you'll typically have to redraw the entire canvas. On the other hand, multiple canvases allow you to update only the dice that need to be redrawn. That's an increase in efficiency.
Further, you shouldn't see any noticeable difference in loading 1 canvas versus 100.
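As a rough sketch of that per-die setup (the container id and drawDie routine are placeholders):

    // One small canvas per die; rerolling a die redraws only its own canvas.
    var container = document.getElementById('dice'); // hypothetical container
    var dice = [];

    function addDie(value) {
      var canvas = document.createElement('canvas');
      canvas.width = canvas.height = 64;
      container.appendChild(canvas);
      dice.push({ canvas: canvas, value: value });
      drawDie(canvas.getContext('2d'), value); // placeholder drawing routine
    }

    function rerollDie(index) {
      var die = dice[index];
      die.value = 1 + Math.floor(Math.random() * 6);
      drawDie(die.canvas.getContext('2d'), die.value); // only this die is redrawn
    }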
In terms of performance, as was mentioned earlier, there should be little difference between 1 and 100 canvas elements if you're not updating the graphics on a regular basis (i.e. static graphics / no animation).
Most of the references around the net regarding multiple canvases tend to deal with cases where you have multiple layers and need to handle drawing on top of other things with transparency.
That being said, what you're doing with dicewalla doesn't look like it will gain anything from having multiple canvases.
Also you can selectively redraw the regions of a single canvas to get better performance if updating the entire canvas is a bottleneck. This gives you the performance benefits of having multiple canvases without having to deal with managing and creating those elements.
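A minimal sketch of that selective-redraw idea on a single canvas (the canvas id, die size, and dicePositions/drawDie helpers are illustrative):

    // Repaint only the rectangle occupied by one die on the big canvas.
    var DIE_SIZE = 64;
    var ctx = document.getElementById('board').getContext('2d'); // hypothetical canvas

    function redrawDie(index, value) {
      var pos = dicePositions[index];                   // {x, y} of this die
      ctx.clearRect(pos.x, pos.y, DIE_SIZE, DIE_SIZE);  // clear just that region
      drawDie(ctx, pos.x, pos.y, value);                // and repaint only it
    }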

What's the fastest way to draw to an HTML 5 canvas?

I'm investigating the possibility of producing a game using only HTML's canvas as the display media. To take an example task I need to do, I need to construct the game environment from a number of isometric tiles. Of course, working in 2D means they by necessity come in rectangular packages so there's a large overlap between tiles.
I'm old enough that the natural solution to this problem is to call BitBltMasked. Oh wait, no, an HTML canvas doesn't have something as simple and as pleasing as BitBlt. It seems that the only way to dump pixel data in to a canvas is either with drawImage() which has no useful drawing modes that ignore the alpha channel or to use ImageData objects that have the image data in an array.. to which every. access. is. bounds. checked. and. therefore. dog. slow.
OK, that's more of a rant than a question (things the W3C like tend to provoke that from me), but what I really want to know is how to draw fast to a canvas? I'm finding it very difficult to ditch the feeling that doing 100s of drawImages() a second where every draw respects the alpha channel is inherently sinful and likely to make my application perform like arse in many browsers. On the other hand, the only way to implement BitBlt proper relies heavily on a browser using a hotspot-like execution technique to make it run fast.
Is there any way to draw fast across every possible implementation, or do I just have to forget about performance?
This is a really interesting problem, and there's a few interesting things you can do to solve it.
First, you should know that drawImage can accept a Canvas, not just an image. The "sub-Canvas"es don't even need to be in the DOM. This means that you can do some compositing on one canvas, then draw it to another. This opens a whole world of optimization opportunities, especially in the context of isometric tiles.
Let's say you have an area that's 50 tiles long by 50 tiles wide (I'll say meters for the sake of my own sanity). You might divide the area into 10x10m chunks. Each chunk is represented by its own Canvas. To draw the full scene, you'd simply draw each of the chunks' Canvas objects to the main canvas that's shown to the user. If only four chunks are visible (a 20x20m area), you would only perform four drawImage operations.
Of course, each of those individual chunks will need to render its own Canvas. On game ticks where nothing happens in the chunk, you simply don't do anything: the Canvas will remain unchanged and will be drawn as you'd expect. When something does change, you can do one of a few things depending on your game:
If your tiles extend into the third dimension (i.e.: you have a Z-axis), you can draw each "layer" of the chunk into its own Canvas and only update the layers that need to be updated. For example, if each chunk contains ten layers of depth, you'd have ten Canvas objects. If something on layer 6 was updated, you would only need to re-paint layer 6's Canvas (probably one drawImage per square meter, which would be 100), then perform one drawImage operation per layer in the chunk (ten) to re-draw the chunk's Canvas. Decreasing or increasing the chunk size may increase or decrease performance depending on the number of updates you make to the environment in your game. Further optimizations can be made to eliminate drawImage calls for obscured tiles and the like.
If you don't have a third dimension, you can simply perform one drawImage per square meter of a chunk. If two chunks are updated, that's only 200 drawImage calls per tick (plus one call per chunk visible on the screen). If your game involves very few updates, decreasing the chunk size will decrease the number of calls even further.
You can perform updates to the chunks in their own game loop. If you're using requestAnimationFrame (as you should be), you only need to paint the chunk Canvas objects to the screen. Independently, you can perform game logic in a setTimeout loop or the like. Then, each chunk could be updated in its own tick between frames without affecting performance. This can also be done in a web worker using getImageData and putImageData to send the rendered chunk back to the main thread whenever it needs to be updated, though making this work seamlessly will take a good deal of effort.
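As a stripped-down sketch of the chunking idea above (the chunk structure, CHUNK_PX size, and drawTile helper are assumptions for illustration):

    // Each chunk renders into its own offscreen canvas; the main canvas only
    // composites the chunk canvases, and only dirty chunks re-render tiles.
    var CHUNK_PX = 320; // e.g. 10 tiles of 32px

    function renderChunk(chunk) {
      var ctx = chunk.canvas.getContext('2d');
      ctx.clearRect(0, 0, CHUNK_PX, CHUNK_PX);
      chunk.tiles.forEach(function (tile) { drawTile(ctx, tile); }); // placeholder
      chunk.dirty = false;
    }

    function drawFrame(mainCtx, visibleChunks) {
      visibleChunks.forEach(function (chunk) {
        if (chunk.dirty) renderChunk(chunk);               // re-render only what changed
        mainCtx.drawImage(chunk.canvas, chunk.x, chunk.y); // one drawImage per chunk
      });
      requestAnimationFrame(function () { drawFrame(mainCtx, visibleChunks); });
    }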
The other option that you have is to use a library like pixi.js to render the scene using WebGL. Even for 2D, it will increase performance by decreasing the amount of work that the CPU needs to do and shifting that over to the GPU. I'd highly recommend checking it out.
I know that GameJS has blit operations, and I certainly assume any other html5 game libraries do as well (gameQuery, LimeJS, etc etc). I don't know if these packages have addressed the specific array-bounds-checking concern that you had, but in practice their samples seem to work plenty fast on all platforms.
You should not make assumptions about what speedups make sense. For example, the GameJS developer reports that he was going to implement dirty-rectangle tracking but it turned out that modern browsers do this automatically (link).
For this reason and others, I suggest getting something working before thinking about the speed. Also, make use of drawing libraries, as the authors have presumably spent some time optimizing performance.
I have no personal knowledge about this, but you can look into the appMobi "direct canvas" HTML element, which is allegedly a much faster version of the normal canvas (link). I'm confused about whether this works in all browsers, just WebKit browsers, or just appMobi's own special browser.
Again, you should not make assumptions about what speedups make sense without a very deep knowledge of web browser internal processes. That webpage about "direct canvas" mentions a bunch of things that slow down canvas-drawing: "Reflowing text, mapping hot spots, creating indexes for reference links, on and on." Alpha-blending and array-bounds-checking are not mentioned as prominent causes of slowness!
Unfortunately, there's no way around the alpha composition overhead. Clipping may be one solution, but I doubt there would be much, if any, performance gain. Not to mention how complicated such a route would be to implement on irregular shapes.
When you have to draw the entire display, you're going to have to deal with the performance hit. Although afterwards, you have a whole screen's worth of pre-calculated alpha imagery and you can draw this image data at an offset in one drawImage call. Then, you would only have to individually draw the new tiles that are scrolled into view.
But still, the browser is having to redraw each pixel at a different location in the canvas. Which is quite expensive. It would be nice if there was a method for just scrolling pixels, but no luck there either.
One idea that comes to mind is that you could implement multiple canvases, translating each individual canvas instead of redrawing the pixels. This would allow the browser to decide how to redraw those pixels, in a more native way, at least in theory anyway. Then you could render the newly visible tiles on a new, or used/cached, canvas element. Positioning it to match up with the last screen render.
But that's just my two blits... I mean bits... duh, I mean cents :]
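For what it's worth, a rough sketch of that translate-instead-of-redraw idea (the element id is hypothetical, and the layer is assumed to sit inside a clipping container):

    // Move an already-rendered canvas layer with a CSS transform instead of
    // repainting its pixels; only newly exposed tiles still need drawing.
    var layer = document.getElementById('tile-layer'); // hypothetical element

    function scrollLayer(offsetX, offsetY) {
      layer.style.transform = 'translate(' + offsetX + 'px,' + offsetY + 'px)';
      // ...then draw only the newly visible tiles onto this (or a spare) canvas.
    }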

Can I resize images using JavaScript (not scale, real resize)

I need to dynamically load and put on screen a huge number of images, something like 1000-3000. Usually these pictures are of different sizes, and I'm getting their URLs from the user. So some of these pictures can be 1024x800 or 10x40 pixels.
I wrote a good JS script showing them nicely on one page (a la Google Image Search style), but they are still very heavy on RAM (a hundred 500K images on one page is not good), so I thought about the option of really resizing the images. Like making an image that's 1000x800 pixels into something like 100x80, and then forgetting (freeing the RAM of) the original one.
Can this be done using JavaScript (without server side processing)?
I would suggest a different approach: Use pagination.
Display, say, 15 images. Then the user clicks on 'next page' and the next page is shown.
Or, even better, you can script that when the user reaches the end of the page the next page is automatically loaded.
If that's not what you want to do, and you instead want something like a collage of images, then maybe you can look into CSS3 transforms. I think they should be fast.
What you want to do is to take some pressure from the client so that it can handle all the images. Letting it resize all the images (JavaScript is client side) will do exactly the opposite because actually resizing an image is usually way more expensive than just displaying it (and not possible with browser JS anyway).
There is usually a better solution than displaying that many items at once. One would be dynamic loading, e.g. when a user scrolls down the page (like the new Facebook profiles do), or pagination. I can't imagine that all 1k - 3k images will be visible all at once.
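A minimal sketch of that load-as-you-scroll idea (loadNextBatch is a placeholder for code that appends the next batch of thumbnails):

    // Load more images only when the user nears the bottom of the page.
    window.addEventListener('scroll', function () {
      var nearBottom =
        window.innerHeight + window.scrollY >= document.body.offsetHeight - 200;
      if (nearBottom) loadNextBatch(); // placeholder for appending more thumbnails
    });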
There is no native JS way of doing this. You may be able to hack something using Flash but you really should resize the images on the server because:
You will save on bandwidth transferring those large 500K images to the client.
The client will be able to cache those images.
You'll get a faster loading page.
You'll be able to fit a lot more thumbnail images in memory and therefore will require less pagination.
I'm (pretty) sure it can be done in browsers that support canvas. If this is a path you would like to take you should start here.
I see two possible problems with the canvas approach:
It will probably take a really long time (relatively speaking) to resize many images. Because of this, you're probably going to have to look into utilizing web workers.
Will the browser actually free up any memory if you remove the image from the DOM and/or delete/null all references to those images? I don't know.
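For completeness, a rough sketch of the canvas approach (the sizes are illustrative; toDataURL will throw on cross-origin images unless CORS is set up, and whether the browser actually frees the original image's memory is exactly the open question above):

    // Downscale an image by drawing it onto a small canvas, then keep only
    // the thumbnail and drop all references to the full-size image.
    function makeThumbnail(url, maxWidth, callback) {
      var img = new Image();
      img.onload = function () {
        var scale = maxWidth / img.width;
        var canvas = document.createElement('canvas');
        canvas.width = maxWidth;
        canvas.height = Math.round(img.height * scale);
        canvas.getContext('2d').drawImage(img, 0, 0, canvas.width, canvas.height);
        callback(canvas.toDataURL('image/jpeg')); // small thumbnail data URL
        img = null; // drop the reference so the full-size image can be collected
      };
      img.src = url;
    }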
(The original answer included example screenshots of a canvas-resized image here.)
