HTML5 Blending modes - layers in paint program - javascript

I'm creating an HTML5 paint application and I'm currently working on blending layers. I'm wondering which of these approaches would be the best (fastest and most GIMP/Photoshop-like) to build into such a program. My layers (canvases) are stacked.
1. Changing the blend mode with a CSS3 property (probably very fast; blending happens directly on the graphics card).
2. Having hidden canvases (layers) and one canvas that shows the flattened image to the user. We draw on the hidden canvases, and some mechanism takes each hidden canvas and draws it onto the user-visible canvas (probably slower, but each context.drawImage(...) call is optimized and computed on the graphics card).
3. Making the hidden canvases (layers) truly virtual. There are no hidden canvas elements at all; instead, a structure in memory imitates a canvas by recording the user actions performed on that virtual layer. When repainting is required, the user-visible canvas is reconstructed by taking the operations from each virtual layer and actually painting them. Operations must be correctly ordered in such a virtual layer structure (this is probably slow, but may be faster than the 2nd approach, since we don't waste time drawing anything on a layer, just store how we will draw on the real one).
4. Blending with WebGL (probably fast).
5. Computing each pixel manually and showing the result to the user (super slow?).
I'm using the second approach, but I'm not really sure it's the best (especially with more layers). Do you know of any other approach? Perhaps you could suggest a better way to implement blending operations so they work as in GIMP/Adobe Photoshop.

My way to implement this was to have multiple canvases, each storing its own blend mode. These are just virtual canvases (not drawable by the user). Then a timer running 33 times per second (driven by requestAnimationFrame) grabs all these canvases, flattens them, and draws them to the viewport. For each layer I set its blend mode on the viewport canvas before drawing. It works perfectly, since drawImage(...) is computed on the GPU and is really fast.
I think the code is fairly easy to understand and to apply in your own solution:
Process(lm: dl.LayerManager) {
    var canvasViewPort = lm.GetWorkspaceViewPort();
    var virtualLayers = lm.GetAllNonTemporary();

    canvasViewPort.Save();
    canvasViewPort.Clear();

    canvasViewPort.SetBlendMode(virtualLayers[0].GetBlendMode());
    canvasViewPort.ApplyBlendMode();
    canvasViewPort.DrawImage(virtualLayers[0],
        0, 0, canvasViewPort.Width(), canvasViewPort.Height(),
        0, 0, canvasViewPort.Width(), canvasViewPort.Height());

    for (var i = 1; i < virtualLayers.length; ++i) {
        var layer = virtualLayers[i];
        var shift = layer.GetShift(); // was `other.GetShift()` -- `other` is undefined here
        canvasViewPort.SetBlendMode(layer.GetBlendMode());
        canvasViewPort.ApplyBlendMode();
        canvasViewPort.DrawImage(layer,
            0, 0, layer.Width(), layer.Height(),
            shift.x, shift.y, layer.Width(), layer.Height());
    }

    canvasViewPort.Restore();
}
Don't be afraid of this code. It maintains an underlying canvas, a pure CanvasRenderingContext2D.
One optimization I will perform: don't redraw canvasViewPort until something changes on one of the virtual canvases.
The next optimization: take the selected layer and cache all the layers before it and all the layers after it.
So it will look like this:
CurrentLayer: 3 (which might be affected by some tool)
[1]:L1
[2]:L2
[3]:L3
[4]:L4
[5]:L5
[6]:L6
Draw to temporaries with the appropriate blend modes: [1, 2] to tmp1 and [4, 5, 6] to tmp2.
When something changes on the third layer, I just need to redraw tmp1 + [3] + tmp2.
So it's only 2 iterations. Super fast.
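A sketch of that layer-caching split in plain JavaScript (the strings here are just placeholders for whatever layer objects the engine holds; the function name is illustrative):

```javascript
// Split the layer stack around the active layer, so the groups on either
// side can be flattened once into tmp1/tmp2 and reused until the
// selection changes.
function partitionLayers(layers, currentIndex) {
    return {
        before:  layers.slice(0, currentIndex),   // flatten once into tmp1
        current: layers[currentIndex],            // redrawn on every change
        after:   layers.slice(currentIndex + 1)   // flatten once into tmp2
    };
}

// With 6 layers and the 3rd layer (index 2) selected:
var groups = partitionLayers(["L1", "L2", "L3", "L4", "L5", "L6"], 2);
// groups.before  -> ["L1", "L2"]        (tmp1)
// groups.current -> "L3"
// groups.after   -> ["L4", "L5", "L6"]  (tmp2)
```

From here, a repaint is tmp1, then the current layer, then tmp2, each drawn with its blend mode.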

Related

Why aren't I getting performance improvements with my canvas?

I'm trying to make a simple maze navigation game using an HTML5 canvas, where the player character stays in the center of the screen and the maze moves around them.
The maze is represented using a 2D array of tile objects, so my first approach to drawing was something like:
for (var row = frameStart; row <= frameEnd; row++) {
    for (var col = frameStart; col <= frameEnd; col++) {
        maze[row][col].draw();
    }
}
...where frameStart and frameEnd are computed on each call to the paint method, in order to avoid drawing parts of the maze that aren't visible. Anyway, this was a bit too slow for me, so I decided to save the whole maze into an image using
var img = new Image();
img.src = canvas.toDataURL();
So now, I'm drawing the maze just one time, saving it into an image, and from then on just drawing a part of that saved image every frame using ctx.drawImage(img, ...) instead of looping through lots of tile elements and drawing each individually.
However, I found that this method is not noticeably faster, and I'm at a loss as to how I can increase performance any further, and am left wondering why my image idea didn't help. It currently takes longer than I'd like to render each frame. I currently have the game locked into rendering every single frame, so I can easily see how long that takes based on how fast the player character moves.
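For reference, the frameStart/frameEnd computation the question describes might look something like this (a sketch with assumed names: playerTile is the tile the player stands on, halfView is half the number of visible tiles, and the maze is treated as square):

```javascript
// Compute the inclusive range of tile indices to draw, clamped to the
// maze bounds, so off-screen tiles are skipped entirely.
function visibleRange(playerTile, halfView, mazeSize) {
    var start = Math.max(0, playerTile - halfView);
    var end   = Math.min(mazeSize - 1, playerTile + halfView);
    return { start: start, end: end };
}

// Player at tile 75 in a 150-tile maze, 16 tiles visible on each side:
var r = visibleRange(75, 16, 150); // { start: 59, end: 91 }
// Near the edge, the range is clamped to the maze bounds:
var e = visibleRange(2, 16, 150);  // { start: 0, end: 18 }
```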
Improve rendering
Looking at your game I found it ran just fine on my machine.
Use requestAnimationFrame
But looking at your code, you have implemented the rendering in a bad way.
You are placing renders back to back with
... cut from bottom of function reactToUserInput
    if (needsRedraw) {
        // Redraw canvas with interpolation
        drawMaze(true, oldLocation, 0);
        return; // drawMaze will call this function when it's done
    }
}
drawMaze();
setTimeout(reactToUserInput, 0);
in the function reactToUserInput.
And in drawMaze():
// Either continue interpolation, recall user input function, or stop entirely
if (interpolate) {
    if (interpOffset.mag == 0) {
        reactToUserInput();
    } else {
        setTimeout(function () {
            drawMaze(true, oldUserLocation, recurseCount + 1);
        }, 0);
    }
    return;
}
On some hardware setups this can cause avoidable slowdowns.
Use requestAnimationFrame to render a frame every 1/60th of a second. This will also throttle your game on fast machines, since I cannot see any time-controlled movement. It will also help machines with low GPU RAM and GPU power better manage the GPU state, as right now you are forcing state changes when they are not needed.
Any rendering done at over 60fps simply will not be seen by the user, so avoid needless rendering.
Why is preRendering not helping
Then I ran a profile on your game, and the results show that the drawCell function is indeed the bottleneck. The call to ctx.drawImage in drawCell accounts for 32% of your overall processing (but this value is misleading, as you are constantly rendering).
The reason your rendering is not improved by prerendering is that you are asking too much of the GPU. The maze as I saw it is 150 tiles by 150 tiles, with each cell being 65 by 65 pixels. That makes the whole maze 9750 by 9750 pixels, consuming a total of 380.25MB of GPU memory that you are sharing with the page, the OS, and any other running process. Only the very top-end machines will happily handle that amount of RAM; the rest will be frantically paging it in from system RAM, causing the slowdown (compounded by the constant rendering).
Rule of thumb for image sizes on the canvas: never use images larger than 4 times the longest resolution of the device. Devices are tuned to the resolution of their display and will happily deal with images near that resolution. Go beyond that and you exceed the capabilities of the hardware.
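The memory figure above is just width * height * 4 bytes per RGBA pixel; a quick helper to sanity-check such sizes (decimal megabytes, matching the figure above):

```javascript
// Uncompressed RGBA texture memory for an image, in (decimal) megabytes.
function imageMemoryMB(width, height) {
    return (width * height * 4) / 1e6; // 4 bytes per pixel: R, G, B, A
}

imageMemoryMB(9750, 9750); // 380.25 MB -- the whole prerendered maze
imageMemoryMB(1920, 1080); // ~8.3 MB   -- a full-HD-sized canvas
```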
How to fix and get a good frame rate.
Looking at the game, there is no reason why it should not run at 60fps on all devices.
Use requestAnimationFrame to time the rendering and user input (nobody can toggle a key at over 60Hz, and your game does not demand instant reaction).
Reduce pixel memory usage. Your tiles are 65 by 65, which means that in the GPU the tile image occupies 128 by 128 pixels of memory. Search "rendering powers of two" to find out why.
Change the cell resolution to 64 by 64.
Rather than prerender the whole scene, create an offscreen canvas that is the play-area size plus 2 cells. So if the number of cells to draw is 32 by 32, create a canvas that is 34 by 34 cells. On the first render, draw all 34 by 34 cells to that canvas. Then draw that canvas so it follows the player. When the player gets to a point where there are no cells left along the edge they are moving toward, copy the canvas onto itself to make room for new cells in the direction of travel, then render only the missing row or column of cells.
// playfield is the offscreen canvas with .ctx as its context
playfield.ctx = playfield.getContext("2d");

// to move one cell up in the playfield
playfield.ctx.drawImage(playfield,
    0, cellSize, playfield.width, playfield.height - cellSize,
    0, 0,        playfield.width, playfield.height - cellSize
);
// then draw the missing bottom row of cells only
// then just draw the playfield to the onscreen canvas
ctx.drawImage(playfield, mazeLeft, mazeTop);
It will take a bit of a rewrite, but it will get your game running very smoothly, and get you back to concentrating on gameplay rather than performance.

How to create merged shapes based upon blurred originals

I'm using easeljs and attempting to generate a simple water simulation based on this physics liquid demo. The issue I'm struggling with is the final step where the author states they "get hard edges". It is this step that merges the particles into an amorphous blob that gives the effect of cohesion and flow.
In case the link is no longer available: in summary, I've followed the simulation "steps" and created a prototype for particle liquid with the following:
1. Created a particle physics simulation
2. Added a blur filter
3. Applied a threshold to get "hard edges"
So I wrote some code that uses a threshold check to color red (0xFF0000) any shapes/particles that meet the criterion. In this case the criterion is any that have a color greater than RGB (0, 0, 200). If not, they are colored blue (0x0000FF).
var blurFilter = new createjs.BlurFilter(emitter.size, emitter.size*3, 1);
var thresholdFilter = new createjs.ThresholdFilter(0, 0, 200, 0xFF0000, 0x0000FF);
Note that only blue and red appear because of the previously mentioned threshold filter. For reference, the colors generated are RGB (0, 0, 0-255). The method r() simply generates a random number up to the passed-in value.
graphics.beginFill(createjs.Graphics.getRGB(0,0,r(255)));
I'm using this idea of applying a threshold criteria so that I can later set some boundaries around the particle adhesion. My thought is that larger shapes would have greater "gravity".
You can see from the fountain of particles in the attached animated GIF that I've completed steps #1-2 above, but it is step #3 that I'm not certain how to apply. I haven't been able to identify a single easeljs filter I could apply that would transform the shapes or merge them in any way.
I was considering that I might be able to do a getBounds() and draw a new shape, but the shapes wouldn't truly be merged at that point, nor would they exhibit the properties of liquid despite being larger and appearing combined.
bounds = blurFilter.getBounds(); // emitter.size+bounds.x, etc.
The issue really becomes how to define the shapes that are blurred in the image. Apart from squinting my eyes and using my imagination I haven't been able to come to a solution.
I also looked around for a solution to apply gravity between shapes so they could, perhaps, draw together and combine but maybe it's simply that easeljs is not the right tool for this.
Thanks for any thoughts on how I might approach this.
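For what it's worth, the threshold step (step 3) can also be done by hand on raw pixel data rather than through easeljs's ThresholdFilter; a sketch operating on the RGBA array you would get from ctx.getImageData (function name and cutoff are illustrative):

```javascript
// Recolor every pixel: blue channel above the cutoff becomes solid red,
// anything else that is visible at all becomes solid blue. Applied after
// a blur, this turns soft particle blobs into hard-edged shapes.
function thresholdBlue(data, cutoff) {
    for (var i = 0; i < data.length; i += 4) {
        if (data[i + 3] === 0) continue;    // fully transparent: skip
        var isHot = data[i + 2] > cutoff;   // test the blue channel
        data[i]     = isHot ? 0xFF : 0x00;  // R
        data[i + 1] = 0x00;                 // G
        data[i + 2] = isHot ? 0x00 : 0xFF;  // B
        data[i + 3] = 0xFF;                 // hard edge: fully opaque
    }
    return data;
}

// In the browser this would be used as:
//   var img = ctx.getImageData(0, 0, w, h);
//   thresholdBlue(img.data, 200);
//   ctx.putImageData(img, 0, 0);
```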

HTML5 Canvas - Scaling relative to the center of an object without translating context

I'm working on a canvas game that has several particle generators. The particles gradually scale down after being created. To scale the particles down from their center points I am using the context.translate() method:
context.save();
context.translate(particle.x, particle.y);
context.translate(-particle.width/2, -particle.height/2);
context.drawImage(particle.image, 0, 0, particle.width, particle.height);
context.restore();
I've read several sources that claim the save(), translate() and restore() methods are quite computationally expensive. Is there an alternative method I can use?
My game is targeted at mobile browsers so I am trying to optimize for performance as much as possible.
Thanks in advance!
Yes, just use setTransform() at the end instead of using save/restore:
//context.save();
context.translate(particle.x, particle.y);
context.translate(-particle.width/2, -particle.height/2);
context.drawImage(particle.image, 0, 0, particle.width, particle.height);
//context.restore();
context.setTransform(1,0,0,1,0,0); // reset matrix
This assumes there are no other accumulated transforms in use (in which case you could refactor the code to set absolute transforms where needed).
The arguments given are the values of an identity matrix, i.e. a reset matrix state.
This is much faster than the save/restore approach, which stores and restores not only the transform state but also style settings, shadow settings, the clipping area, and so on.
You could also combine the two translate calls into a single call, and use multiplication instead of division (which is much faster at the CPU level):
context.translate(particle.x-particle.width*0.5, particle.y-particle.height*0.5);
or simply use the x/y coordinate directly with the particle "shader" without translating at all.
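Taking both suggestions together, the per-particle draw collapses into a single absolute transform plus drawImage; a sketch with the offset math pulled into a helper (the particle fields x/y/width/height are assumed from the question's code):

```javascript
// Compute the translation part of the matrix that centers the particle
// at (x, y): the top-left corner is the center minus half the size.
function centeredOffset(p) {
    return {
        e: p.x - p.width  * 0.5,
        f: p.y - p.height * 0.5
    };
}

// In the render loop (browser side):
//   var o = centeredOffset(particle);
//   context.setTransform(1, 0, 0, 1, o.e, o.f); // translation only
//   context.drawImage(particle.image, 0, 0, particle.width, particle.height);
//   // ...after all particles: context.setTransform(1, 0, 0, 1, 0, 0);

centeredOffset({ x: 100, y: 80, width: 20, height: 10 }); // { e: 90, f: 75 }
```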

render a tile map using javascript

I'm looking for a logical understanding, with sample implementation ideas, of taking a tilemap such as this:
http://thorsummoner.github.io/old-html-tabletop-test/pallete/tilesets/fullmap/scbw_tiles.png
And rendering it in a logical way, such as this:
http://thorsummoner.github.io/old-html-tabletop-test/
I see all of the tiles are there, but I don't understand how they are placed in a way that forms shapes.
My understanding of rendering tiles so far is simple and very manual: loop through the map array, and where there are numbers (1, 2, 3, whatever), render the specified tile.
var mapArray = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0]
];

function drawMap() {
    background = new createjs.Container();
    for (var y = 0; y < mapArray.length; y++) {
        for (var x = 0; x < mapArray[y].length; x++) {
            if (parseInt(mapArray[y][x]) == 0) {
                var tile = new createjs.Bitmap('images/tile.png');
            }
            if (parseInt(mapArray[y][x]) == 1) {
                var tile = new createjs.Bitmap('images/tile2.png');
            }
            tile.x = x * 28;
            tile.y = y * 28;
            background.addChild(tile);
        }
    }
    stage.addChild(background);
}
Gets me: (screenshot of the rendered map omitted)
But this means I have to manually figure out where each tile goes in the array so that logical shapes are made (rock formations, grass patches, etc.).
Clearly, the guy who made the github code above used a different method. Any guidance on understanding the logic (with simple pseudocode) would be very helpful.
There isn't any logic there.
If you inspect the page's source, you'll see that the last script tag, in the body, has a huge array of tile coordinates.
There is no magic in that example which demonstrates an "intelligent" system for figuring out how to form shapes.
Now, that said, there are such things... ...but they're not remotely simple.
What is more simple, and more manageable, is a map-editor.
Tile Editors
out of the box:
There are lots of ways of doing this... There are free or cheap programs that will allow you to paint tiles and then spit out XML, JSON, CSV, or whatever the given program supports/exports.
Tiled ( http://mapeditor.org ) is one such example.
There are others, but Tiled is the first I could think of, is free, and is actually quite decent.
pros:
The immediate upside is that you get an app that lets you load image tiles, and paint them into maps.
These apps might even support adding collision layers and entity layers (put an enemy at [2,1], a power-up at [3,5], and a "hurt-player" trigger over the lava).
cons:
...the downside is that you need to know exactly how these files are formatted, so that you can read them into your game engine.
Now, the outputs of these systems are relatively standardized... so that you can plug that map data into different game engines (what's the point, otherwise?), and while game engines don't all use tile files that are exactly the same, most good tile editors allow export into several formats (some will let you define your own format).
...so that said, the alternative (or really, the same solution, just hand-crafted), would be to create your own tile-editor.
DIY
You could create it in Canvas, just as easily as creating the engine to paint the tiles.
The key difference is that you have your map of tiles (like the tilemap .png from StarCr... erm... the "found-art" from the example, there).
Instead of looping through an array, finding the coordinates of each tile and painting them at the world coordinates that match that index, you would choose a tile from the map (like choosing a colour in MS Paint), and then wherever you click (or drag), figure out which array point that relates to, and set that index to be equal to that tile.
pros:
The sky is the limit; you can make whatever you want, make it fit any file-format you want to use, and make it handle any crazy stuff you want to throw at it...
cons:
...this of course, means you have to make it, yourself, and define the file-format you want to use, and write the logic to handle all of those zany ideas...
basic implementation
While I'd normally try to make this tidy and JS-paradigm friendly, that would result in a LOT of code here.
So I'll try to note where it should probably be broken up into separate modules.
// assuming images are already loaded properly
// and have fired onload events, which you've listened for
// so that there are no surprises, when your engine tries to
// paint something that isn't there, yet
// this should all be wrapped in a module that deals with
// loading tile-maps, selecting the tile to "paint" with,
// and generating the data-format for the tile, for you to put into the array
// (or accepting plug-in data-formatters, to do so)
var selected_tile = null,
    selected_tile_map = get_tile_map(), // this would be an image with your tiles

    tile_width  = 64, // in image-pixels, not canvas/screen-pixels
    tile_height = 64, // in image-pixels, not canvas/screen-pixels

    num_tiles_x = selected_tile_map.width  / tile_width,
    num_tiles_y = selected_tile_map.height / tile_height,

    select_tile_num_from_map = function (map_px_X, map_px_Y) {
        // there are *lots* of ways to do this, but keeping it simple
        var tile_y   = Math.floor(map_px_Y / tile_height), // 4 = floor(280/64)
            tile_x   = Math.floor(map_px_X / tile_width ),
            tile_num = tile_y * num_tiles_x + tile_x;
            // 23 = 4 down * 5 per row + 3 over
        return tile_num;
    };
// won't go into event-handling and coordinate-normalization
selected_tile_map.onclick = function (evt) {
    // these are the coordinates of the click,
    // as they relate to the actual image at full scale
    var map_x, map_y; // ...derived from evt (normalization omitted)
    selected_tile = select_tile_num_from_map(map_x, map_y);
};
Now you have a simple system for figuring out which tile was clicked.
Again, there are lots of ways of building this, and you can make it more OO,
and make a proper "tile" data-structure, that you expect to read and use throughout your engine.
Right now, I'm just returning the zero-based number of the tile, reading left to right, top to bottom.
If there are 5 tiles per row, and someone picks the first tile of the second row, that's tile #5.
Then, for "painting", you just need to listen to a canvas click, figure out what the X and Y were,
figure out where in the world that is, and what array spot that's equal to.
From there, you just dump in the value of selected_tile, and that's about it.
// this might be one long array, like I did with the tile-map and the number of the tile
// or it might be an array of arrays: each inner-array would be a "row",
// and the outer array would keep track of how many rows down you are,
// from the top of the world
var world_map = [],
    selected_coordinate = 0,

    world_tile_width  = 64, // these might be in *canvas* pixels, or "world" pixels
    world_tile_height = 64, // this is so you can scale the size of tiles,
                            // or zoom in and out of the map, etc

    world_width  = 320,
    world_height = 320,
    num_world_tiles_x = world_width  / world_tile_width,
    num_world_tiles_y = world_height / world_tile_height,

    get_map_coordinates_from_click = function (world_px_x, world_px_y) {
        // divide by the tile *size* (not the tile count) to get grid coordinates
        var coord_x = Math.floor(world_px_x / world_tile_width),
            coord_y = Math.floor(world_px_y / world_tile_height),
            array_coord = coord_y * num_world_tiles_x + coord_x;
        return array_coord;
    },

    set_map_tile = function (index, tile) {
        world_map[index] = tile;
    };

canvas.onclick = function (evt) {
    // convert screen x/y to canvas, and canvas to world
    var world_px_x, world_px_y; // ...derived from evt (normalization omitted)
    selected_coordinate = get_map_coordinates_from_click(world_px_x, world_px_y);
    set_map_tile(selected_coordinate, selected_tile);
};
As you can see, the procedure for doing one is pretty much the same as the procedure for doing the other (because it is -- given an x and y in one coordinate-set, convert it to another scale/set).
The procedure for drawing the tiles, then, is nearly the exact opposite.
Given the world-index and tile-number, work in reverse to find the world-x/y and tilemap-x/y.
You can see that part in your example code, as well.
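That reverse mapping can be sketched with the same arithmetic as the click handlers above, just inverted (names mirror the snippets above; the sizes are illustrative):

```javascript
// index in the flat world array -> world-space pixel position to paint at
function world_xy_from_index(index, num_world_tiles_x, tile_w, tile_h) {
    return {
        x: (index % num_world_tiles_x) * tile_w,
        y: Math.floor(index / num_world_tiles_x) * tile_h
    };
}

// tile number -> pixel position inside the tilemap image to paint from
function tilemap_xy_from_num(tile_num, num_tiles_x, tile_w, tile_h) {
    return {
        x: (tile_num % num_tiles_x) * tile_w,
        y: Math.floor(tile_num / num_tiles_x) * tile_h
    };
}

// With 5 tiles per row and 64px tiles, tile #23 sits 3 over, 4 down:
tilemap_xy_from_num(23, 5, 64, 64); // { x: 192, y: 256 }
```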
This tile-painting is the traditional way of making 2d maps, whether we're talking about StarCraft, Zelda, or Mario Bros.
Not all of them had the luxury of having a "paint with tiles" editor (some were by hand in text-files, or even spreadsheets, to get the spacing right), but if you load up StarCraft or even WarCraft III (which is 3D), and go into their editors, a tile-painter is exactly what you get, and is exactly how Blizzard made those maps.
additions
With the basic premise out of the way, you now have other "maps" which are also required:
you'd need a collision-map to know which of those tiles you could/couldn't walk on, an entity-map, to show where there are doors, or power-ups or minerals, or enemy-spawns, or event-triggers for cutscenes...
Not all of these need to operate in the same coordinate-space as the world map, but it might help.
Also, you might want a more intelligent "world".
The ability to use multiple tile-maps in one level, for instance...
And a drop-down in a tile-editor to swap tile-maps.
...a way to save out both tile-information (not just X/Y, but also other info about a tile), and to save out the finished "map" array, filled with tiles.
Even just copying JSON, and pasting it into its own file...
Procedural Generation
The other way of doing this, the way you suggested earlier ("knowing how to connect rocks, grass, etc") is called Procedural Generation.
This is a LOT harder and a LOT more involved.
Games like Diablo use this, so that you're in a different randomly-generated environment, every time you play. Warframe is an FPS which uses procedural generation to do the same thing.
premise:
Basically, you start with tiles, and instead of just a tile being an image, a tile has to be an object that has an image and a position, but ALSO has a list of things that are likely to be around it.
When you put down a patch of grass, that grass will then have a likelihood of generating more grass beside it.
The grass might say that there's a 10% chance of water, a 20% chance of rocks, a 30% chance of dirt, and a 40% chance of more grass, in any of the four directions around it.
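That neighbour-weighting idea can be sketched as a weighted random choice (illustrative names; the random number is passed in so the choice can be controlled):

```javascript
// Pick the next tile type from a weight table such as
// { water: 0.1, rock: 0.2, dirt: 0.3, grass: 0.4 }.
// `rand` is a number in [0, 1), e.g. Math.random().
function pickNeighbour(weights, rand) {
    var total = 0, type;
    for (type in weights) { total += weights[type]; }
    var threshold = rand * total, running = 0;
    for (type in weights) {
        running += weights[type];
        if (threshold < running) { return type; }
    }
}

var grassNeighbours = { water: 0.1, rock: 0.2, dirt: 0.3, grass: 0.4 };
pickNeighbour(grassNeighbours, 0.05); // "water" (falls in the first 10%)
pickNeighbour(grassNeighbours, 0.95); // "grass" (falls in the last 40%)
```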
Of course, it's really not that simple (or it could be, if you're wrong).
While that's the idea, the tricky part of procedural generation is actually in making sure everything works without breaking.
constraints
You couldn't, for example, have the cliff wall in that example appear on the inside of the high ground. It can only appear where there's high ground above and to the right, and low ground below and to the left (and the StarCraft editor did this automatically as you painted). Ramps can only connect tiles that make sense. You can't wall off doors, or wrap the world in a river/lake that prevents you from moving (or worse, prevents you from finishing a level).
pros
Really great for longevity, if you can get all of your pathfinding and constraints to work -- not only for pseudo-randomly generating the terrain and layout, but also enemy-placement, loot-placement, et cetera.
People are still playing Diablo II, nearly 14 years later.
cons
Really difficult to get right, when you're a one-man team (who doesn't happen to be a mathematician/data-scientist in their spare time).
Really bad for guaranteeing that maps are fun/balanced/competitive...
StarCraft could never have used 100% random-generation for fair gameplay.
Procedural-generation can be used as a "seed".
You can hit the "randomize" button, see what you get, and then tweak and fix from there, but there'll be so much fixing for "balance", or so many game-rules written to constrain the propagation, that you'll end up spending more time fixing the generator than just painting a map, yourself.
There are some tutorials out there, and learning genetic-algorithms, pathfinding, et cetera, are all great skills to have... ...buuuut, for purposes of learning to make 2D top-down tile-games, are way-overkill, and rather, are something to look into after you get a game/engine or two under your belt.

How do 2D drawing frameworks such as Pixi.js make canvas drawing faster?

I found a bunnymark for Javascript canvas here.
Now, of course, I understand their default renderer uses WebGL, but I am only interested in the native 2D context performance for now. I disabled WebGL in Firefox, and after spawning 16,500 bunnies the counter showed 25 FPS. I decided to write my own very simple rendering loop to see how much overhead Pixi added. To my surprise, I only got 20 FPS.
My roughly equivalent JSFiddle.
So I decided to take a look at their source here, and it doesn't appear that the magic is in their rendering code:
do
{
    transform = displayObject.worldTransform;
    ...
    if (displayObject instanceof PIXI.Sprite)
    {
        var frame = displayObject.texture.frame;
        if (frame)
        {
            context.globalAlpha = displayObject.worldAlpha;
            context.setTransform(transform[0], transform[3], transform[1],
                                 transform[4], transform[2], transform[5]);
            context.drawImage(displayObject.texture.baseTexture.source,
                frame.x,
                frame.y,
                frame.width,
                frame.height,
                (displayObject.anchor.x) * -frame.width,
                (displayObject.anchor.y) * -frame.height,
                frame.width,
                frame.height);
        }
    }
Curiously, it seems they are using a linked list for their rendering loop, and profiles of both apps show that while my version allocates the same amount of CPU time per frame, their implementation shows CPU usage in spikes.
My knowledge ends here, unfortunately, and I am curious whether anyone can shed some light on what's going on.
In my opinion, it boils down to how "compilable" (cacheable) the code is. Chrome and Firefox use two different JavaScript engines, which, as we know, optimize and cache code differently.
Canvas operations
Using a transform versus direct coordinates should not have an impact, as setting a transform merely updates the matrix, which is used in any case with whatever is in it.
The type of the position values can affect performance, though (float versus integer values), but as both your fiddle and PIXI seem to use floats only, this is not the key here.
So here I don't think canvas is the cause of the difference.
Variable and property caching
(In the first version of this answer I unintentionally got too focused on the prototypal aspect. The essence I was trying to get at was mainly object traversal, so the following text is re-worded a bit.)
PIXI uses object properties like the fiddle does, but these custom objects in PIXI are smaller, so traversing the object tree takes less time than traversing a larger object such as a canvas or an image (a property such as width would also be at the end of such an object).
It's a well-known classic optimization trick to cache variables for this very reason (traversal time). The effect is smaller today, as engines have become smarter; V8 in Chrome in particular seems able to predict/cache this better internally, while in Firefox it still seems to have some impact not to cache these variables in code.
Does it matter performance-wise? For short operations, very little, but drawing 16,500 bunnies onto a canvas is demanding, and it does gain a benefit from doing this (in FF), so micro-optimizations actually do count in situations such as this.
Demos
I prototyped the "renderer" to get even closer to PIXI, as well as caching the object properties. This gave a performance boost in Firefox:
http://jsfiddle.net/AbdiasSoftware/2Dbys/8/
I used a slow computer (to scale up the impact), which ran your fiddle at about 5 FPS. After caching the values it ran at 6-7 FPS, which is more than a 20% increase on this computer, showing it does have an effect. On a computer with a larger CPU instruction cache and so forth the effect may be smaller, but it's there, as this is related to the FF engine itself (disclaimer: I am not claiming this to be a scientific test, only a pointer :-) ).
/// cache object properties
var lastTime = 0,
    w  = canvas.width,
    h  = canvas.height,
    iw = image.width,
    ih = image.height;
This next version caches these variables as properties on an object (itself) to show that this also improves performance compared to using large global objects directly; the result is about the same as above:
http://jsfiddle.net/AbdiasSoftware/2Dbys/9/
var RENDER = function () {
    this.width  = canvas.width;
    this.height = canvas.height;
    this.imageWidth  = image.width;
    this.imageHeight = image.height;
};
In conclusion
I am certain, based on these results and previous experience, that PIXI can run the code faster because it uses small custom objects rather than reading properties directly from large objects (elements) such as canvas and image.
The FF engine does not yet seem to be as "smart" as the V8 engine with regard to traversing object trees and branches, so caching variables has an impact in FF that shows up when the demand is high (such as when drawing 16,500 bunnies per "frame").
One difference I noticed between your version and Pixi's is this:
You render the image at certain coordinates by passing x/y straight to the drawImage function:
drawImage(img, x, y, ...);
...whereas Pixi translates the entire canvas context, and then draws the image at 0/0 (of the already-shifted context):
setTransform(1, 0, 0, 1, x, y);
drawImage(img, 0, 0, ...);
They also pass more arguments to drawImage: the arguments that control the "destination rectangle" (dx, dy, dw, dh).
I suspected this is where the speed difference hides. However, changing your test to use the same "technique" doesn't really make things better.
But there's something else...
I capped the bunnies at 5000, disabled WebGL, and Pixi actually performed worse than the custom fiddle version.
I get ~27 FPS on Pixi:
and ~32-35 FPS on Fiddle:
This is all on Chrome 33.0.1712.4 dev, Mac OS X.
I'd suspect that this is some canvas compositing issue. Canvas is transparent by default, so the page background needs to be combined with the canvas contents...
I found this in their source...
// update the background color
if (this.view.style.backgroundColor != stage.backgroundColorString &&
!this.transparent) {
this.view.style.backgroundColor = stage.backgroundColorString;
}
Maybe they set the canvas to be opaque for this demo? (The fiddle doesn't really work for me; it seems like most of the bunnies jump out with an extremely large dt most of the time.)
I don't think it's an object-property access-timing / compilability thing: the point is valid, but I don't think it can explain this much of a difference.
