I'm having the following problem, and I believe I must be doing something stupid, but I can't find it. (This is in Firefox 8)
I'm taking a large sprite file, making a new small canvas to hold just one tile of that sprite, and then using the most basic overload of drawImage to draw this isolated tile in hundreds of places in my "screen" canvas.
I'm doing this instead of simply using the last overload of drawImage, that takes only a piece of the larger sprite file. By avoiding this clipping, in Chrome, I get a performance increase of about 10%, to my surprise. However, in Firefox, frame rate drops from 300 to 17 FPS.
So, what I'm seeing essentially is that drawing "from image to canvas" is about 20 times faster than drawing "from canvas to canvas" in Firefox.
Is this correct? I haven't found any information on this particular case, but this is what I see on my tests.
Also, this is the code I'm using. Am I doing anything incredibly stupid?
function Test5() {
    var imgSprite = $('imgSprite');
    var tileCanvas = document.createElement("canvas");
    tileCanvas.width = 64; tileCanvas.height = 31;
    var tileCtx = tileCanvas.getContext("2d");
    tileCtx.drawImage(imgSprite, 0, 0, 64, 31, 0, 0, 64, 31);
    //--------------------------------------
    var ctx = getContext('mainScreen');
    ctx.fillStyle = '#fff';

    time(function() { // time will run this function many times and time it
        ctx.fillRect(0, 0, 1200, 900);
        var x = 0, y = 0, row = 0;
        for (var i = 1; i < 1000; i++) {
            ctx.drawImage(tileCanvas, x, y); // <-- this is the line I care about
            // some simple code to calculate new x/y
        }
    }, 1000, "Test5", 'Drawing from an individual tile canvas, instead of a section of big sprite');
}
If, instead of
ctx.drawImage(tileCanvas, x, y);
I do:
ctx.drawImage(imgSprite, 0, 0, 64, 31, x, y, 64, 31);
that's 20 times faster.
Am I missing something here?
EDIT: After asking this I made a little page for myself to test several different things in different platforms and see what was the best way to do things in each.
http://www.crystalgears.com/isoengine/jstests3-canvas.html
I'm sorry that code is horrifically ugly, it was a quick hack, not meant to be seen by others, or even myself in a few days.
It's hard to be sure without profiling (esp. with your particular images and maybe your particular graphics card+driver), but you may be hitting https://bugzilla.mozilla.org/show_bug.cgi?id=705559
Certainly that bug should cause drawImage with <canvas> argument to be somewhat slower; though 20x slower is very surprising (hence the "may").
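If you want to confirm the 20x gap on your own machine, a tiny timing harness makes the comparison explicit. This is a minimal sketch (the helper name `timeDraw` is made up, not part of the question's `time()` utility); in the browser you would pass it a closure wrapping each `drawImage` variant.

```javascript
// Minimal timing harness: run a per-frame drawing callback `frames` times
// and report total milliseconds and effective frames-per-second.
function timeDraw(drawFrame, frames) {
    var start = Date.now();
    for (var i = 0; i < frames; i++) {
        drawFrame(i);
    }
    var elapsed = Date.now() - start;
    // guard against a 0ms measurement on very fast runs
    return { totalMs: elapsed, fps: frames / ((elapsed / 1000) || 0.001) };
}

// In the browser, compare the two paths like so:
// timeDraw(function () { ctx.drawImage(tileCanvas, x, y); }, 1000);
// timeDraw(function () { ctx.drawImage(imgSprite, 0, 0, 64, 31, x, y, 64, 31); }, 1000);
```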
I'm using HTML5 Canvas elements in a project I'm working on, and while rendering a bunch of stuff I ran into a really strange artifact that I'm wondering if anyone has ever seen before. Basically, in this particular scenario (the only case I've seen that produces this behavior), I happened to be panning/zooming in my application and noticed a very bizarre rendering effect on part of the canvas. Upon further inspection, I was able to reproduce the effect in a very simplistic example:
In this case, I have a path (whose coordinates don't change) and all that is changing from the first screenshot to the second is the applied transformation matrix (by a very small amount).
You can access the JSFiddle I used to generate these screenshots at https://jsfiddle.net/ynsv66g8/ and here is the relevant rendering code:
// Clear context
context.setTransform(1, 0, 0, 1, 0, 0);
context.fillStyle = "#000000";
context.fillRect(0, 0, 500, 500);

if (showArtifact) { // This transformation causes the artifact
    context.transform(0.42494658722537976, 0, 0, -0.42494658722537976, 243.95776291868646, 373.14630356628857);
} else { // This transformation does not
    context.transform(0.4175650109545749, 0, 0, -0.4175650109545749, 243.70987992043368, 371.06797574381795);
}

// Draw path
context.lineWidth = 3.488963446543301;
context.strokeStyle = 'red';
context.beginPath();
var p = path[0];
context.moveTo(p[0], p[1]);
for (var i = 1; i < path.length; ++i) {
    p = path[i];
    context.lineTo(p[0], p[1]);
}
context.closePath();
context.stroke();
It looks to be related to the call to context.closePath(), because if you remove that call and replace it with:
p = path[0];
context.lineTo(p[0], p[1]);
(thus manually "closing" the path) everything works properly (granted, this really doesn't fix my particular issue because I rely on multiple closed paths to apply filling rules).
Another interesting change that can be made that makes the issue go away is to reverse the path array (i.e., by adding a call to path.reverse() right after its definition).
Together, all of this seems to be adding up to some kind of browser rendering bug related to the characteristics of the path, especially since, on my Mac, the issue occurs in both Chrome (v61.0.3163.91) and Firefox (v55.0.3) but not Safari (v11). I've done some extensive searching to try to find this issue (or something similar), but thus far have come up empty.
Any insight into what might be causing this (or the proper way to go about reporting this issue if the consensus is that it is caused by some browser bug) would be greatly appreciated.
It seems to be a line-mitering problem; that is, the renderer fails to correctly join lines when the width is too big relative to the segment size/orientation.
This seems not to be influenced by path closing (I can reproduce the artifact with the open path); note that manually closing the path is not the same as a closePath(), because no line join is performed in the former case.
As far as I can tell, it seems solved by setting lineJoin to 'round' or reducing the line width... anyway, it seems like a renderer bug to me... just my two cents :)
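The workaround above can be sketched as a small helper that sets the join style before stroking. This is illustrative only (`applyStrokeStyle` is a made-up name, not an API); `lineJoin` and `miterLimit` are standard 2D context properties.

```javascript
// Configure stroke settings that avoid the spiked-miter artifact:
// 'round' joins sidestep mitering entirely, and a low miterLimit
// caps miter length if the join style is ever switched back.
function applyStrokeStyle(ctx, width) {
    ctx.lineWidth = width;
    ctx.lineJoin = 'round';
    ctx.miterLimit = 2;
    return ctx;
}

// Usage, before the beginPath()/stroke() sequence:
// applyStrokeStyle(context, 3.488963446543301);
```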
I'm looking for a logical understanding with sample implementation ideas on taking a tilemap such as this:
http://thorsummoner.github.io/old-html-tabletop-test/pallete/tilesets/fullmap/scbw_tiles.png
And rendering in a logical way such as this:
http://thorsummoner.github.io/old-html-tabletop-test/
I see all of the tiles are there, but I don't understand how they are placed in a way that forms shapes.
My understanding of rendering tiles so far is simple, and very manual. Loop through map array, where there are numbers (1, 2, 3, whatever), render that specified tile.
var mapArray = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0]
];

function drawMap() {
    background = new createjs.Container();
    for (var y = 0; y < mapArray.length; y++) {
        for (var x = 0; x < mapArray[y].length; x++) {
            var tile;
            if (parseInt(mapArray[y][x]) == 0) {
                tile = new createjs.Bitmap('images/tile.png');
            }
            if (parseInt(mapArray[y][x]) == 1) {
                tile = new createjs.Bitmap('images/tile2.png');
            }
            tile.x = x * 28;
            tile.y = y * 28;
            background.addChild(tile);
        }
    }
    stage.addChild(background);
}
Gets me:
But this means I have to manually figure out where each tile goes in the array so that logical shapes are made (rock formations, grass patches, etc)
Clearly, the guy who made the github code above used a different method. Any guidance on understanding the logic (even with simple pseudocode) would be very helpful.
There isn't any logic there.
If you inspect the page's source, you'll see that the last script tag, in the body, has a huge array of tile coordinates.
There is no magic in that example which demonstrates an "intelligent" system for figuring out how to form shapes.
Now, that said, there are such things... ...but they're not remotely simple.
What is more simple, and more manageable, is a map-editor.
Tile Editors
out of the box:
There are lots of ways of doing this... There are free or cheap programs which will allow you to paint tiles, and will then spit out XML or JSON or CSV or whatever the given program supports/exports.
Tiled ( http://mapeditor.org ) is one such example.
There are others, but Tiled is the first I could think of, is free, and is actually quite decent.
pros:
The immediate upside is that you get an app that lets you load image tiles, and paint them into maps.
These apps might even support adding collision-layers and entity-layers (put an enemy at [2,1], a power-up at [3,5] and a "hurt-player" trigger, over the lava).
cons:
...the downside is that you need to know exactly how these files are formatted, so that you can read them into your game engines.
Now, the outputs of these systems are relatively-standardized... so that you can plug that map data into different game engines (what's the point, otherwise?), and while game-engines don't all use tile files that are exactly the same, most good tile-editors allow for export into several formats (some will let you define your own format).
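As a concrete example of reading one of those exports: Tiled's JSON format stores each tile layer as a flat array of global tile IDs (1-based, with 0 meaning "empty"), plus the layer's width and height in tiles. A rough sketch of reshaping that flat array into rows (check the field names against your actual export; this assumes the standard `width`/`height`/`data` layout):

```javascript
// Reshape a Tiled-style layer (flat `data` array, `width` x `height`
// in tiles) into an array of row arrays, ready for a y/x render loop.
function layerToRows(layer) {
    var rows = [];
    for (var y = 0; y < layer.height; y++) {
        rows.push(layer.data.slice(y * layer.width, (y + 1) * layer.width));
    }
    return rows;
}

// e.g. a 2x2 layer: layerToRows({ width: 2, height: 2, data: [1, 0, 0, 2] })
// gives [[1, 0], [0, 2]]
```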
...so that said, the alternative (or really, the same solution, just hand-crafted), would be to create your own tile-editor.
DIY
You could create it in Canvas, just as easily as creating the engine to paint the tiles.
The key difference is that you have your map of tiles (like the tilemap .png from StarCr... erm... the "found-art" from the example, there).
Instead of looping through an array, finding the coordinates of the tile and painting them at the world-coordinates which match that index, what you would do is choose a tile from the map (like choosing a colour in MS Paint), and then wherever you click (or drag), figure out which array point that relates to, and set that index to be equal to that tile.
pros:
The sky is the limit; you can make whatever you want, make it fit any file-format you want to use, and make it handle any crazy stuff you want to throw at it...
cons:
...this of course, means you have to make it, yourself, and define the file-format you want to use, and write the logic to handle all of those zany ideas...
basic implementation
While I'd normally try to make this tidy, and JS-paradigm friendly, that would result in a LOT of code, here.
So I'll try to denote where it should probably be broken up into separate modules.
// assuming images are already loaded properly
// and have fired onload events, which you've listened for,
// so that there are no surprises when your engine tries to
// paint something that isn't there yet

// this should all be wrapped in a module that deals with
// loading tile-maps, selecting the tile to "paint" with,
// and generating the data-format for the tile, for you to put into the array
// (or accepting plug-in data-formatters, to do so)
var selected_tile = null,
    selected_tile_map = get_tile_map(), // this would be an image with your tiles

    tile_width  = 64, // in image-pixels, not canvas/screen-pixels
    tile_height = 64, // in image-pixels, not canvas/screen-pixels

    num_tiles_x = selected_tile_map.width  / tile_width,
    num_tiles_y = selected_tile_map.height / tile_height,

    select_tile_num_from_map = function (map_px_X, map_px_Y) {
        // there are *lots* of ways to do this, but keeping it simple
        var tile_y   = Math.floor(map_px_Y / tile_height), // 4 = floor(280/64)
            tile_x   = Math.floor(map_px_X / tile_width ),
            tile_num = tile_y * num_tiles_x + tile_x;
            // 23 = 4 down * 5 per row + 3 over
        return tile_num;
    };

// won't go into event-handling and coordinate-normalization
selected_tile_map.onclick = function (evt) {
    // these are the coordinates of the click,
    // as they relate to the actual image at full scale
    var map_x, map_y; // normalized from evt; details omitted
    selected_tile = select_tile_num_from_map(map_x, map_y);
};
Now you have a simple system for figuring out which tile was clicked.
Again, there are lots of ways of building this, and you can make it more OO,
and make a proper "tile" data-structure, that you expect to read and use throughout your engine.
Right now, I'm just returning the zero-based number of the tile, reading left to right, top to bottom.
If there are 5 tiles per row, and someone picks the first tile of the second row, that's tile #5.
Then, for "painting", you just need to listen to a canvas click, figure out what the X and Y were,
figure out where in the world that is, and what array spot that's equal to.
From there, you just dump in the value of selected_tile, and that's about it.
// this might be one long array, like I did with the tile-map and the number of the tile
// or it might be an array of arrays: each inner-array would be a "row",
// and the outer array would keep track of how many rows down you are,
// from the top of the world
var world_map = [],
    selected_coordinate = 0,

    world_tile_width  = 64, // these might be in *canvas* pixels, or "world" pixels
    world_tile_height = 64, // this is so you can scale the size of tiles,
                            // or zoom in and out of the map, etc

    world_width  = 320,
    world_height = 320,

    num_world_tiles_x = world_width  / world_tile_width,
    num_world_tiles_y = world_height / world_tile_height,

    get_map_coordinates_from_click = function (world_px_x, world_px_y) {
        // divide by the tile *size* (not the tile count) to get grid coordinates
        var coord_x = Math.floor(world_px_x / world_tile_width),
            coord_y = Math.floor(world_px_y / world_tile_height),
            array_coord = coord_y * num_world_tiles_x + coord_x;
        return array_coord;
    },

    set_map_tile = function (index, tile) {
        world_map[index] = tile;
    };

canvas.onclick = function (evt) {
    // convert screen x/y to canvas, and canvas to world
    var world_px_x, world_px_y; // normalized from evt; details omitted
    selected_coordinate = get_map_coordinates_from_click(world_px_x, world_px_y);
    set_map_tile(selected_coordinate, selected_tile);
};
As you can see, the procedure for doing one is pretty much the same as the procedure for doing the other (because it is -- given an x and y in one coordinate-set, convert it to another scale/set).
The procedure for drawing the tiles, then, is nearly the exact opposite.
Given the world-index and tile-number, work in reverse to find the world-x/y and tilemap-x/y.
You can see that part in your example code, as well.
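That reverse mapping can be sketched as two small helpers. The names mirror the variables above, but this is illustrative, not drop-in code:

```javascript
// Given a tile number, recover its top-left pixel inside the tile-map image.
function tileSourceXY(tile_num, num_tiles_x, tile_width, tile_height) {
    return {
        x: (tile_num % num_tiles_x) * tile_width,
        y: Math.floor(tile_num / num_tiles_x) * tile_height
    };
}

// Given a flat world-array index, recover the destination pixel on the canvas.
function worldDestXY(index, num_world_tiles_x, world_tile_width, world_tile_height) {
    return {
        x: (index % num_world_tiles_x) * world_tile_width,
        y: Math.floor(index / num_world_tiles_x) * world_tile_height
    };
}

// The draw loop is then one drawImage per non-empty index:
// var src = tileSourceXY(world_map[i], num_tiles_x, tile_width, tile_height),
//     dst = worldDestXY(i, num_world_tiles_x, world_tile_width, world_tile_height);
// ctx.drawImage(tile_map_image, src.x, src.y, tile_width, tile_height,
//               dst.x, dst.y, world_tile_width, world_tile_height);
```

Note that tile #23 in a 5-wide tile-map comes back as column 3, row 4, matching the "23 = 4 down * 5 per row + 3 over" comment above.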
This tile-painting is the traditional way of making 2d maps, whether we're talking about StarCraft, Zelda, or Mario Bros.
Not all of them had the luxury of having a "paint with tiles" editor (some were by hand in text-files, or even spreadsheets, to get the spacing right), but if you load up StarCraft or even WarCraft III (which is 3D), and go into their editors, a tile-painter is exactly what you get, and is exactly how Blizzard made those maps.
additions
With the basic premise out of the way, you now have other "maps" which are also required:
you'd need a collision-map to know which of those tiles you could/couldn't walk on, an entity-map, to show where there are doors, or power-ups or minerals, or enemy-spawns, or event-triggers for cutscenes...
Not all of these need to operate in the same coordinate-space as the world map, but it might help.
Also, you might want a more intelligent "world".
The ability to use multiple tile-maps in one level, for instance...
And a drop-down in a tile-editor to swap tile-maps.
...a way to save out both tile-information (not just X/Y, but also other info about a tile), and to save out the finished "map" array, filled with tiles.
Even just copying JSON, and pasting it into its own file...
Procedural Generation
The other way of doing this, the way you suggested earlier ("knowing how to connect rocks, grass, etc") is called Procedural Generation.
This is a LOT harder and a LOT more involved.
Games like Diablo use this, so that you're in a different randomly-generated environment, every time you play. Warframe is an FPS which uses procedural generation to do the same thing.
premise:
Basically, you start with tiles, and instead of just a tile being an image, a tile has to be an object that has an image and a position, but ALSO has a list of things that are likely to be around it.
When you put down a patch of grass, that grass will then have a likelihood of generating more grass beside it.
The grass might say that there's a 10% chance of water, a 20% chance of rocks, a 30% chance of dirt, and a 40% chance of more grass, in any of the four directions around it.
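Those neighbour probabilities boil down to a weighted random pick. A minimal sketch (the weights table and the optional `rand` argument are illustrative; a real generator would thread a seeded PRNG through instead of Math.random, so maps are reproducible):

```javascript
// Pick a tile type from a table of probabilities that sum to 1.
// `rand` is an optional number in [0, 1) for deterministic testing;
// omit it to use Math.random().
function pickNeighbour(weights, rand) {
    var r = (rand === undefined ? Math.random() : rand),
        acc = 0;
    for (var tile in weights) {
        acc += weights[tile];
        if (r < acc) { return tile; }
    }
    return null; // weights summed to less than r
}

// e.g. the grass tile's table from above:
// pickNeighbour({ water: 0.1, rocks: 0.2, dirt: 0.3, grass: 0.4 })
```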
Of course, it's really not that simple (or it could be, if you're wrong).
While that's the idea, the tricky part of procedural generation is actually in making sure everything works without breaking.
constraints
You couldn't, for example have the cliff wall, in that example, appear on the inside of the high-ground. It can only appear where there's high ground above and to the right, and low-ground below and to the left (and the StarCraft editor did this automatically, as you painted). Ramps can only connect tiles that make sense. You can't wall off doors, or wrap the world in a river/lake that prevents you from moving (or worse, prevents you from finishing a level).
pros
Really great for longevity, if you can get all of your pathfinding and constraints to work -- not only for pseudo-randomly generating the terrain and layout, but also enemy-placement, loot-placement, et cetera.
People are still playing Diablo II, nearly 14 years later.
cons
Really difficult to get right, when you're a one-man team (who doesn't happen to be a mathematician/data-scientist in their spare time).
Really bad for guaranteeing that maps are fun/balanced/competitive...
StarCraft could never have used 100% random-generation for fair gameplay.
Procedural-generation can be used as a "seed".
You can hit the "randomize" button, see what you get, and then tweak and fix from there, but there'll be so much fixing for "balance", or so many game-rules written to constrain the propagation, that you'll end up spending more time fixing the generator than just painting a map, yourself.
There are some tutorials out there, and learning genetic-algorithms, pathfinding, et cetera, are all great skills to have... ...buuuut, for purposes of learning to make 2D top-down tile-games, are way-overkill, and rather, are something to look into after you get a game/engine or two under your belt.
I'm trying to put together a tool that checks whether a given character is displayed in the specified style font or a system default font. My ultimate goal is to be able to check, at least in modern (read: IE8+) browsers, whether a ligature is supported in a given font.
I've got two canvases displaying the same ligature (in this case, st). I turn those canvases into data and compare them to see if the characters match.
Arial (like most fonts) does not have an st ligature, so it falls back to the default serif font. Here's where it gets weird: although they're displaying the same font, the two canvases don't have the same data.
Why? Because their positions on the canvas aren't exactly the same. I'm guessing it has something to do with the different relative heights of the fonts (one is slightly taller than the other, though which varies font to font). The difference appears to be one of a pixel or two, and it varies font by font.
How might one go about solving this? My only current idea is finding a way to measure the height of the font and adjusting its position accordingly, but I have no idea how to do that, unfortunately. Are there other approaches I might take to make the two images identical?
You can see the code below. Both canvases are successfully initialized and appended to the body of the document so I can see what's going on visually (though that's not necessary in the actual script I'm working on). I've dropped the initialization and context, as those are all working just fine.
function checkLig() {
    var lig = 'fb06'; // the Unicode code point (U+FB06) for the st ligature

    canvas0.width = 250;
    canvas0.height = 50;
    context0.fillStyle = 'rgb(0, 0, 0)';
    context0.textBaseline = 'top';
    context0.font = 'normal normal normal 40px Arial';
    context0.fillText(String.fromCharCode(parseInt(lig, 16)), 0, 0);
    var print0 = context0.getImageData(0, 0, 250, 50).data;

    canvas1.width = 250;
    canvas1.height = 50;
    context1.fillStyle = 'rgb(0, 0, 0)';
    context1.textBaseline = 'top';
    context1.font = 'normal normal normal 40px serif';
    context1.fillText(String.fromCharCode(parseInt(lig, 16)), 0, 0);
    var print1 = context1.getImageData(0, 0, 250, 50).data;

    var i = 0, same = true, len = 250 * 50 * 4;
    while (i < len && same === true) {
        if (print0[i] !== print1[i]) {
            same = false;
        } else {
            i++;
        }
    }
    return same;
}
So, if I understand the question correctly, the problem is that one canvas is specifying Arial (but falls back to serif) and the other is serif, and when you do the pixel matching it's not a match because one of them has a slight offset?
One suggestion is to grab a reference pixel from each canvas and compare the positions of the two reference pixels to get an offset. And then factor that offset into your comparison loop.
For example, in this case you could get your reference pixel by starting to check pixels from the upper left corner of one canvas and scan downwards in that column and when you reach the bottom, go back to the top of the next column and scan down.
As soon as you hit a pixel that is not the color of your background, record the position and use that as your reference pixel. That should be the edge of your font.
Do the same with your next canvas and then compare the positions of the two reference pixels to get an offset. Take that offset into consideration in your comparison loop.
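The column-by-column scan can be sketched like this (assuming a transparent canvas background, so "not the background color" means a non-zero alpha channel; adapt the test if you fill the background first):

```javascript
// Scan each column top-to-bottom, left-to-right, and return the first
// non-transparent pixel as the reference point, or null if none exists.
// `data` is the flat RGBA array from getImageData().data.
function findReferencePixel(data, width, height) {
    for (var x = 0; x < width; x++) {
        for (var y = 0; y < height; y++) {
            var alpha = data[(y * width + x) * 4 + 3];
            if (alpha !== 0) { return { x: x, y: y }; }
        }
    }
    return null;
}

// The offset between the two canvases is then simply:
// var a = findReferencePixel(print0, 250, 50),
//     b = findReferencePixel(print1, 250, 50),
//     offset = { x: b.x - a.x, y: b.y - a.y };
```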
Hope I have the right idea of the problem, and I hope that helps!
Summary
When repeatedly drawing anything (apparently with a low alpha value) on a canvas, regardless of if it's with drawImage() or a fill function, the resulting colors are significantly inaccurate in all browsers that I've tested. Here's a sample of the results I'm getting with a particular blend operation:
Problem Demonstration
For an example and some code to play with, check out this jsFiddle I worked up:
http://jsfiddle.net/jMjFh/2/
The top set of data is a result of a test that you can customize by editing iters and rgba at the top of the JS code. It takes whatever color your specify, RGBA all in range [0, 255], and stamps it on a completely clean, transparent canvas iters number of times. At the same time, it keeps a running calculation of what the standard Porter-Duff source-over-dest blend function would produce for the same procedure (ie. what the browsers should be running and coming up with).
The bottom set of data presents a boundary case that I found where blending [127, 0, 0, 2] on top of [127, 0, 0, 63] produces an inaccurate result. Interestingly, blending [127, 0, 0, 2] on top of [127, 0, 0, 62] and all [127, 0, 0, x] colors where x <= 62 produces the expected result. Note that this boundary case is only valid in Firefox and IE on Windows. In every other browser on every other operating system I've tested, the results are far worse.
In addition, if you run the test in Firefox on Windows and Chrome on Windows, you'll notice a significant difference in the blending results. Unfortunately we're not talking about one or two values off--it's far worse.
Background Information
I am aware of the fact that the HTML5 canvas spec points out that doing a drawImage() or putImageData() and then a getImageData() call can show slightly varying results due to rounding errors. In that case, and based on everything I've run into thus far, we're talking about something minuscule like the red channel being 122 instead of 121.
As a bit of background material, I asked this question a while ago where #NathanOstgard did some excellent research and drew the conclusion that alpha premultiplication was to blame. While that certainly could be at work here, I think the real underlying issue is something a bit larger.
The Question
Does anyone have any clue how I could possibly end up with the (wildly inaccurate) color values I'm seeing? Furthermore, and more importantly, does anyone have any ideas how to circumvent the issue and produce consistent results in all browsers? Manually blending the pixels is not an option due to performance reasons.
Thanks!
Edit: I just added a stack trace of sorts that outputs each intermediate color value and the expected value for that stage in the process.
Edit: After researching this a bit, I think I've made some slight progress.
Reasoning for Chrome14 on Win7
In Chrome14 on Win7 the resulting color after the blend operation is gray, which is many values off from the expected and desired reddish result. This led me to believe that Chrome doesn't have an issue with rounding and precision because the difference is so large. Instead, it seems like Chrome stores all pixel values with premultiplied alpha.
This idea can be demonstrated by drawing a single color to the canvas. In Chrome, if you apply the color [127, 0, 0, 2] once to the canvas, you get [127, 0, 0, 2] when you read it back. However, if you apply [127, 0, 0, 2] to the canvas twice, Chrome gives you [85, 0, 0, 3] if you check the resulting color. This actually makes some sense when you consider that the premultiplied equivalent of [127, 0, 0, 2] is [1, 0, 0, 2].
Apparently, when Chrome14 on Win7 performs the blending operation, it references the premultiplied color value [1, 0, 0, 2] for the dest component and the non-premultiplied color value [127, 0, 0, 2] for the source component. When these two colors are blended together, we end up with [85, 0, 0, 3] using the Porter-Duff source-over-dest approach, as expected.
So, it seems like Chrome14 on Win7 inconsistently references the pixel values when you work with them. The information is stored in a premultiplied state internally, is presented back to you in non-premultiplied form, and is manipulated using values of both forms.
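The precision loss described above can be sketched numerically. Note the rounding modes here are guesses (and likely browser-dependent); they're chosen so the round trip matches the observed [127, 0, 0, 2] readback and the premultiplied [1, 0, 0, 2] internal value:

```javascript
// Store a channel premultiplied by alpha (both in 0-255).
function premultiply(channel, alpha) {
    return Math.round(channel * (alpha / 255));
}

// Read the channel back out of its premultiplied form.
function unpremultiply(stored, alpha) {
    return alpha === 0 ? 0 : Math.floor(stored * 255 / alpha);
}

// 127 at alpha 2 stores as round(127 * 2/255) = 1, i.e. [1, 0, 0, 2]
// internally; reading it back gives floor(1 * 255 / 2) = 127. At such a
// low alpha, almost the entire red channel's precision has been thrown
// away, which is why repeated low-alpha blends drift so badly.
```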
I'm thinking it might be possible to circumvent this by doing some additional calls to getImageData() and/or putImageData(), but the performance implications don't seem that great. Furthermore, this doesn't seem to be the same issue that Firefox7 on Win7 is exhibiting, so some more research will have to be done on that side of things.
Edit: Below is one possible approach that seems likely to work.
Thoughts on a Solution
I haven't gotten back around to working on this yet, but one WebGL approach that I came up with recently was to run all drawing operations through a simple pixel shader that performs the Porter-Duff source-over-dest blend itself. It seems as though the issue here only occurs when interpolating values for blending purposes. So, if the shader reads the two pixel values, calculates the result, and writes (not blends) the value into the destination, it should mitigate the problem. At the very least, it shouldn't compound. There will definitely still be minor rounding inaccuracies, though.
I know in my initial question I mentioned that manually blending the pixels wouldn't be viable. For a 2D canvas implementation of an application that needs real-time feedback, I don't see a way to fix this. All of the blending would be done on the CPU and would prevent other JS code from executing. With WebGL, however, your pixel shader runs on the GPU so I'm pretty sure there shouldn't be any performance hit.
The tricky part is being able to feed the dest canvas in as a texture to the shader because you also need to render it on a canvas to be viewable. Ultimately, you want to avoid having to generate a WebGL texture object from your viewable canvas every time it needs to be updated. Maintaining two copies of the data, with one as an in-memory texture and one as a viewable canvas (that's updated by stamping the texture on as needed), should solve this.
Oh geez. I think we just stepped into the twilight zone here. I'm afraid I don't have an answer but I can provide some more information at least. Your hunch is right that at least something is larger here.
I'd prefer an example that is drastically more simple (and thus less prone to any sort of error; I fear you might have a problem with 0-255 somewhere instead of 0-1, but didn't look thoroughly) than yours, so I whipped up something tiny. What I found wasn't pleasant:
<canvas id="canvas1" width="128" height="128"></canvas>
<canvas id="canvas2" width="127" height="127"></canvas>
+
var can = document.getElementById('canvas1');
var ctx = can.getContext('2d');
var can2 = document.getElementById('canvas2');
var ctx2 = can2.getContext('2d');

var alpha = 2 / 255.0;
ctx.fillStyle  = 'rgba(127, 0, 0, ' + alpha + ')';
ctx2.fillStyle = 'rgba(127, 0, 0, ' + alpha + ')'; // this is grey
//ctx2.fillStyle = 'rgba(171, 0, 0, ' + alpha + ')'; // this works

for (var i = 0; i < 32; i++) {
    ctx.fillRect(0, 0, 500, 500);
    ctx2.fillRect(0, 0, 500, 500);
}
Yields this:
See for yourself here:
http://jsfiddle.net/jjAMX/
It seems that if your surface area is too small (not the fillRect area, but the canvas area itself) then the drawing operation will be a wash.
It seems to work fine in IE9 and FF7. I submitted it as a bug to Chromium. (We'll see what they say, but their typical M.O. so far is to ignore most bug reports unless they are security issues).
I'd urge you to check your example - I think you might have a 0-255 to 0-1 issue somewhere. The code I have here seems to work just dandy, save for maybe the expected reddish color, but it is a consistent resultant color across all browsers with the exception of this Chrome issue.
In the end Chrome is probably doing something to optimize canvases with a surface area smaller than 128x128 that is fouling things up.
Setting globalCompositeOperation to "lighter" (the operation where overlapping pixel values are added together; it is not the default operation for most browsers) yields the same color and results in IE9, Chrome 14, and the latest Firefox Aurora:
Expected: [126.99999999999999, 0, 0, 63.51372549019608]
Result: [127, 0, 0, 64]
Expected: [126.99999999999999, 0, 0, 63.51372549019608]
Result: [127, 0, 0, 64]
http://jsfiddle.net/jMjFh/11/
edit: you may learn what the different composite operations mean here:
https://developer.mozilla.org/en/Canvas_tutorial/Compositing#globalCompositeOperation
for example, "copy" was useful to me when I needed to do fade animation in certain parts of a canvas
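When only part of a frame needs a non-default mode like "lighter" or "copy", it's worth restoring the previous mode afterwards so the rest of the frame composites normally. A small sketch (the helper name is made up; `globalCompositeOperation` itself is the standard 2D context property):

```javascript
// Run one drawing callback under a given compositing mode, then
// restore whatever mode was active before (usually 'source-over').
function withCompositeOperation(ctx, op, draw) {
    var previous = ctx.globalCompositeOperation;
    ctx.globalCompositeOperation = op;
    draw(ctx);
    ctx.globalCompositeOperation = previous;
}

// e.g. additive blending for just one sprite:
// withCompositeOperation(ctx, 'lighter', function (c) {
//     c.drawImage(glowSprite, x, y);
// });
```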
I've been playing around with canvas a lot lately. Now I am trying to build a little UI-library, here is a demo to a simple list (Note: Use your arrow keys, Chrome/Firefox only)
As you can tell, the performance is kinda bad - this is because I delete and redraw every item on every frame:
this.drawItems = function () {
    this.clear();
    if (this.current_scroll_pos != this.scroll_pos) {
        setTimeout(function (me) { me.anim(); }, 20, this);
    }
    for (var i in this.list_items) {
        var pos = this.current_scroll_pos + i * 35;
        if (pos > -35 && pos < this.height) {
            if (i == this.selected) {
                this.ctx.fillStyle = '#000';
                this.ctx.fillText(this.list_items[i].title, 5, pos);
                this.ctx.fillStyle = '#999';
            } else {
                this.ctx.fillText(this.list_items[i].title, 5, pos);
            }
        }
    }
}
I know there must be better ways to do this, like via save() and transform() but I can't wrap my head around the whole idea - I can only save the whole canvas, transform it a bit and restore the whole canvas. The information and real-life examples on this specific topic are also pretty rare, maybe someone here can push me in the right direction.
One thing you could try to speed up drawing is:
Create another canvas element (c2)
Render your text to c2
Draw c2 in the initial canvas with the transform you want, simply using drawImage
drawImage takes a canvas as well as image elements.
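The caching idea can be sketched as follows. To keep the wiring explicit, `createCanvas` stands in for `function () { return document.createElement('canvas'); }`; the helper name itself is illustrative:

```javascript
// Render static content (e.g. the list's text) once into an offscreen
// canvas, then reuse that canvas as a drawImage source every frame.
function makeTextCache(createCanvas, width, height, render) {
    var c2 = createCanvas();
    c2.width = width;
    c2.height = height;
    render(c2.getContext('2d')); // draw the text once, up front
    return c2; // pass this straight to ctx.drawImage(c2, x, y)
}

// Per frame, instead of one fillText per visible item:
// ctx.drawImage(textCache, 0, this.current_scroll_pos);
```

This turns the per-frame cost into a single blit, and scrolling becomes just a change in the destination y.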
Ok, I think I got it. HTML5 canvas uses a technique called "immediate mode" for drawing, which means the screen is meant to be constantly redrawn. What sounds odd (and slow) at first is actually a big advantage for stuff like GPU acceleration; it's also a pretty common technique found in OpenGL or SDL. A bit more information here:
http://www.8bitrocket.com/2010/5/15/HTML-5-Canvas-Creating-Gaudy-Text-Animations-Just-Like-Flash-sort-of/
So the redrawing of every label in every frame is totally OK, I guess.