I'm using HTML5 Canvas elements in a project I'm working on, and while rendering a bunch of stuff I ran into a really strange artifact that I'm wondering if anyone has seen before. In this particular scenario (the only case I've seen that produces this behavior), I happened to be panning/zooming in my application and noticed a very bizarre rendering effect on part of the canvas. Upon further inspection, I was able to reproduce the effect in a very simple example:
In this case, I have a path (whose coordinates don't change) and all that is changing from the first screenshot to the second is the applied transformation matrix (by a very small amount).
You can access the JSFiddle I used to generate these screenshots at https://jsfiddle.net/ynsv66g8/ and here is the relevant rendering code:
// Clear context
context.setTransform(1, 0, 0, 1, 0, 0);
context.fillStyle = "#000000";
context.fillRect(0, 0, 500, 500);
if (showArtifact) { // This transformation causes the artifact
context.transform(0.42494658722537976, 0, 0, -0.42494658722537976, 243.95776291868646, 373.14630356628857);
} else { // This transformation does not
context.transform(0.4175650109545749, 0, 0, -0.4175650109545749, 243.70987992043368, 371.06797574381795);
}
// Draw path
context.lineWidth = 3.488963446543301;
context.strokeStyle = 'red';
context.beginPath();
var p = path[0];
context.moveTo(p[0], p[1]);
for (var i = 1; i < path.length; ++i) {
p = path[i];
context.lineTo(p[0], p[1]);
}
context.closePath();
context.stroke();
It looks to be related to the call to context.closePath(), because if you remove that call and replace it with:
p = path[0];
context.lineTo(p[0], p[1]);
(thus manually "closing" the path) everything works properly (granted, this really doesn't fix my particular issue because I rely on multiple closed paths to apply filling rules).
Another interesting change that makes the issue go away is to reverse the path array (i.e., by adding a call to path.reverse() right after its definition).
Together, all of this seems to add up to some kind of browser rendering bug related to the characteristics of the path, especially since, on my Mac, the issue occurs in both Chrome (v61.0.3163.91) and Firefox (v55.0.3) but not Safari (v11). I've done some extensive searching to try to find this issue (or something similar), but thus far have come up empty.
Any insight into what might be causing this (or the proper way to go about reporting this issue if the consensus is that it is caused by some browser bug) would be greatly appreciated.
It seems to be a line mitering problem; that is, the renderer fails to correctly join lines when the line width is too large relative to the segment size/orientation.
It does not seem to be influenced by path closing (I can reproduce the artifact with the open path); note that manually closing the path is not the same as calling closePath(), because no line join is performed in the former case.
As far as I can tell, the issue is solved by setting lineJoin to 'round' or reducing the line width ... anyway, it looks like a renderer bug to me ... just my two cents :)
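A minimal sketch of those workarounds (the miterLimit variant is an extra assumption on my part, since miter joins are what produce the spike):
// Workaround sketch: round joins avoid the runaway miter spike entirely
context.lineJoin = 'round';
// ...or keep miter joins but cap how far a join may extend
// (the spec default for miterLimit is 10; joins past the limit fall back to bevel)
context.miterLimit = 4;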
Related
Good evening. First, I'm not sure I should post this here, since I have literally no idea what's wrong (the only clue is that it happened after I moved all my variables to the top of my code while trying to "clean up" a little).
But here is the problem: I've created a canvas game, and after moving my variables (I think) I started getting major frame drops. The game weighs less than 20 KB, the images are super tiny and simple, and I have a for loop in the update loop, but it never seemed to be a problem before (it's not infinite). In short, I do not know what's wrong here.
Here is a bit of code, since links to code "must be accompanied by code" (don't know what's up with that):
for (var i = 0; i < boxes.length; i++) {
    ctx.rect(boxes[i].x, boxes[i].y, boxes[i].width, boxes[i].height);
    var col = coli(player, boxes[i]);
}
I've tried different things, like disabling functions (anim and colision), friction, and gravity, but nothing seems to have any effect, and I don't know that much about the DOM except how to look at my own variables, so I haven't found anything with Firebug.
Really hope someone has an idea.
You need to add ctx.beginPath() before using ctx.rect, moveTo, lineTo, arc, or any function that requires ctx.stroke() or ctx.fill() to become visible.
beginPath tells the canvas 2D context that you wish to start a new shape. If you do not do this, you end up adding more and more shapes each update.
From your fiddle
function update() {
    ctx.clearRect(0, 0, width, height);
    ctx.fillStyle = pat;
    //===============================================================
    ctx.beginPath(); // <=== this will fix your problem. It removes all
                     // the previous rects
    //===============================================================
    for (var i = 0; i < boxes.length; i++) {
        ctx.rect(boxes[i].x, boxes[i].y, boxes[i].width, boxes[i].height);
        // or use ctx.fillRect(...
        var col = coli(player, boxes[i]);
    }
    ctx.fill();
}
I have tried following the suggestions given as answers to this question, but I still can't figure out how the "rendering flow" of a WebGL program really works.
I am simply trying to draw two triangles on a canvas, and it works in a rather non-deterministic way: sometimes both triangles are rendered, sometimes only the second one (second as in the last one drawn) is rendered.
(It appears to depend on rendering time: strangely enough, the longer it takes, the better the odds of ending up with both triangles rendered.) EDIT: not true; I tried refreshing over and over, and the two triangles sometimes show up on very rapid renders (~55 ms), sometimes on longer-running ones (~120 ms). What does seem to be a recurring pattern is that the very first time the page is rendered, both triangles show, and on subsequent repeated refreshes the red one either shows for good or shows for a very brief lapse of time, then flickers away.
Apparently I'm missing something here, so let me explain my program's flow in pseudo-code (I can include the real code if need be) to see if I'm doing something wrong:
var canvas = new Canvas(/*...*/);
var redTriangle = new Shape(/* vertex positions & colors */);
var blueTriangle = new Shape(/* vertex positions & colors */);
canvas.add(redTriangle, blueTriangle);
canvas.init(); //compiles and links shaders, calls gl.enableVertexAttribArray()
//for vertex attributes "position" and "color"
for(shape in canvas) {
for(bufferType in [PositionBuffer, ColorBuffer]) {
shape.bindBuffer(bufferType); //calls gl.bindBuffer() and gl.bufferData()
//This is equivalent to the initBuffers()
//function in the tutorial
}
}
for(shape in canvas) {
shape.draw();
//calls:
//-gl.bindBuffer() and gl.vertexAttribPointer() for each buffer (position & color),
//-setMatrixUniforms()
//-drawArrays()
//This is equivalent to the drawScene() function in the tutorial
}
Despite the fact that I've wrapped the instructions inside object methods in an attempt to make my use of WebGL slightly more OO, it seems to me I have fully complied with the instructions in this lesson (comparing the lesson's source with my own code), hence I cannot figure out what I'm doing wrong.
I've even tried using only one for(shape in canvas) loop, like so:
for(shape in canvas) {
for(bufferType in [PositionBuffer, ColorBuffer]) {
shape.bindBuffer(bufferType); //calls gl.bindBuffer() and gl.bufferData()
//This is equivalent to the initBuffers()
//function in the tutorial
}
shape.draw();
//calls:
//-gl.bindBuffer() and gl.vertexAttribPointer() for each buffer (position & color),
//-setMatrixUniforms()
//-drawArrays()
//This is equivalent to the drawScene() function in the tutorial
}
but it doesn't seem to have any effect.
Any clues?
I'm guessing the issue is that, by default, WebGL canvases are cleared every time they are composited.
Try changing your WebGL context creation to
var gl = someCanvas.getContext("webgl", { preserveDrawingBuffer: true });
I'm just guessing your app is doing things asynchronously, which means each triangle is drawn in response to some event? So, if both events happen to come in quickly enough (between a single composite), you get both triangles. If they come on different composites, you'll only see the second one.
preserveDrawingBuffer: true says "don't clear after each composite". Clearing is the default because it allows certain optimizations on certain devices, specifically iOS, and the majority of WebGL apps clear at the beginning of each draw operation anyway. The few apps that don't clear can set preserveDrawingBuffer: true.
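For completeness, the usual alternative is to issue all draw calls inside a single per-frame callback, so the whole scene lands between two composites; a rough sketch (drawShape is a hypothetical stand-in for your Shape drawing logic):
function render() {
    // clear once, then draw everything before returning to the browser
    gl.clearColor(0, 0, 0, 1);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    drawShape(redTriangle);  // hypothetical helpers wrapping bindBuffer /
    drawShape(blueTriangle); // vertexAttribPointer / drawArrays
    requestAnimationFrame(render);
}
requestAnimationFrame(render);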
In your particular case, line 21 of angulargl-canvas.js reads
options = {alpha: false, premultipliedAlpha: false};
try changing it to
options = {alpha: false, premultipliedAlpha: false, preserveDrawingBuffer: true};
I found a bunnymark for JavaScript canvas here.
Now of course, I understand their default renderer uses WebGL, but I am only interested in native 2D context performance for now. I disabled WebGL in Firefox and, after spawning 16,500 bunnies, the counter showed an FPS of 25. I decided to write my own very simple rendering loop to see how much overhead Pixi adds. To my surprise, I only got an FPS of 20.
My roughly equivalent JSFiddle.
So I decided to take a look at their source here, and it doesn't appear that the magic is in their rendering code:
do
{
transform = displayObject.worldTransform;
...
if(displayObject instanceof PIXI.Sprite)
{
var frame = displayObject.texture.frame;
if(frame)
{
context.globalAlpha = displayObject.worldAlpha;
context.setTransform(transform[0], transform[3], transform[1], transform[4], transform[2], transform[5]);
context.drawImage(displayObject.texture.baseTexture.source,
frame.x,
frame.y,
frame.width,
frame.height,
(displayObject.anchor.x) * -frame.width,
(displayObject.anchor.y) * -frame.height,
frame.width,
frame.height);
}
}
Curiously, it seems they are using a linked list for their rendering loop, and profiling both apps shows that while my version allocates the same amount of CPU time per frame, their implementation shows CPU usage in spikes.
My knowledge ends here, unfortunately, and I am curious whether anyone can shed some light on what's going on.
I think it boils down to how "compilable" (cacheable) the code is. Chrome and Firefox use two different JavaScript engines, which, as we know, optimize and cache code differently.
Canvas operations
Using transform versus direct coordinates should not have an impact, as setting a transform merely updates the matrix, which is used in any case with whatever is in it.
The type of the position values can affect performance, though (float versus integer values), but as both your fiddle and PIXI seem to use floats only, this is not the key here.
So I don't think the canvas itself is the cause of the difference.
Variable and property caching
(I unintentionally got too focused on the prototypal aspect in the first version of this answer. The essence I was trying to get at was mainly object traversal, so the following text is re-worded a bit.)
PIXI uses object properties just as the fiddle does, but these custom objects in PIXI are smaller, so traversing the object tree takes less time than traversing a larger object such as canvas or image (where a property such as width would also be near the end of the object).
It's a well-known classic optimization trick to cache variables for this very reason (traversal time). The effect is smaller today, as engines have become smarter, especially V8 in Chrome, which seems able to predict/cache this better internally, while in Firefox it still seems to have some impact when these variables are not cached in code.
Does it matter performance-wise? For short operations, very little, but drawing 16,500 bunnies onto a canvas is demanding and does gain a benefit from doing this (in FF), so any micro-optimization does actually count in situations such as this.
Demos
I prototyped the "renderer" to get even closer to PIXI, as well as caching the object properties. This gave a performance boost in Firefox:
http://jsfiddle.net/AbdiasSoftware/2Dbys/8/
I used a slow computer (to scale the impact), which ran your fiddle at about 5 FPS. After caching the values it ran at 6-7 FPS, which is more than a 20% increase on this computer, showing that it does have an effect. On a computer with a larger CPU instruction cache and so forth the effect may be smaller, but it's there, as this is related to the FF engine itself (disclaimer: I am not claiming this to be a scientific test, only a pointer :-) ).
/// cache object properties
var lastTime = 0,
w = canvas.width,
h = canvas.height,
iw = image.width,
ih = image.height;
This next version caches the variables as properties on an object (itself) to show that this also improves performance compared to using the large global objects directly; the result is about the same as above:
http://jsfiddle.net/AbdiasSoftware/2Dbys/9/
var RENDER = function () {
this.width = canvas.width;
this.height = canvas.height;
this.imageWidth = image.width;
this.imageHeight = image.height;
}
In conclusion
I am certain, based on the results and previous experience, that PIXI can run the code faster because it uses custom small-sized objects rather than getting the properties directly from large objects (elements) such as canvas and image.
The FF engine does not yet seem to be as "smart" as the V8 engine in regard to traversing object trees and branches, so caching variables does have an impact in FF, which shows when the demand is high (such as when drawing 16,500 bunnies per "frame").
One difference I noticed between your version and Pixi's is this:
You render the image at certain coordinates by passing x/y straight to the drawImage function:
drawImage(img, x, y, ...);
..whereas Pixi translates the entire canvas context, and then draws the image at 0/0 (of the already-shifted context):
setTransform(1, 0, 0, 1, x, y);
drawImage(img, 0, 0, ...);
They also pass more arguments to drawImage: the arguments that control the "destination rectangle" (dx, dy, dw, dh).
I suspected this is where the speed difference hides. However, changing your test to use the same "technique" doesn't really make things better.
But there's something else...
I capped the bunny count at 5000, disabled WebGL, and Pixi actually performs worse than the custom fiddle version.
I get ~27 FPS on Pixi:
and ~32-35 FPS on Fiddle:
This is all on Chrome 33.0.1712.4 dev, Mac OS X.
I'd suspect that this is some canvas compositing issue. The canvas is transparent by default, so the page background needs to be combined with the canvas contents...
I found this in their source...
// update the background color
if (this.view.style.backgroundColor != stage.backgroundColorString &&
!this.transparent) {
this.view.style.backgroundColor = stage.backgroundColorString;
}
Maybe they set the canvas to be opaque for this demo? (The fiddle doesn't really work for me; most of the bunnies seem to jump out with an extremely large dt most of the time.)
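If that is the factor, the fiddle could opt out of compositing against the page background the same way; a one-line sketch, assuming the browser supports the alpha context attribute:
// An opaque 2D context never has to be blended with the page background
var ctx = canvas.getContext('2d', { alpha: false });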
I don't think it's an object property access timing / compilability thing: the point is valid, but I don't think it can explain that much of a difference.
Summary
When repeatedly drawing anything (apparently with a low alpha value) on a canvas, regardless of whether it's with drawImage() or a fill function, the resulting colors are significantly inaccurate in all browsers I've tested. Here's a sample of the results I'm getting with a particular blend operation:
Problem Demonstration
For an example and some code to play with, check out this jsFiddle I worked up:
http://jsfiddle.net/jMjFh/2/
The top set of data is the result of a test that you can customize by editing iters and rgba at the top of the JS code. It takes whatever color you specify, RGBA all in the range [0, 255], and stamps it on a completely clean, transparent canvas iters number of times. At the same time, it keeps a running calculation of what the standard Porter-Duff source-over-dest blend function would produce for the same procedure (i.e., what the browsers should be running and coming up with).
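For reference, a minimal sketch of that source-over-dest calculation (my own restatement, non-premultiplied, with alpha normalized to [0, 1]; not the fiddle's exact code):
// Porter-Duff source-over-dest on {r, g, b, a} objects, a in [0, 1]
function sourceOver(src, dst) {
    var a = src.a + dst.a * (1 - src.a);
    if (a === 0) return { r: 0, g: 0, b: 0, a: 0 };
    return {
        r: (src.r * src.a + dst.r * dst.a * (1 - src.a)) / a,
        g: (src.g * src.a + dst.g * dst.a * (1 - src.a)) / a,
        b: (src.b * src.a + dst.b * dst.a * (1 - src.a)) / a,
        a: a
    };
}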
The bottom set of data presents a boundary case I found where blending [127, 0, 0, 2] on top of [127, 0, 0, 63] produces an inaccurate result. Interestingly, blending [127, 0, 0, 2] on top of [127, 0, 0, 62], and all [127, 0, 0, x] colors where x <= 62, produces the expected result. Note that this boundary case only holds in Firefox and IE on Windows; in every other browser on every other operating system I've tested, the results are far worse.
In addition, if you run the test in Firefox on Windows and in Chrome on Windows, you'll notice a significant difference in the blending results. Unfortunately, we're not talking about one or two values off; it's far worse.
Background Information
I am aware that the HTML5 canvas spec points out that doing a drawImage() or putImageData() call and then a getImageData() call can show slightly varying results due to rounding errors. In that case, and based on everything I've run into thus far, we're talking about something minuscule, like the red channel being 122 instead of 121.
As a bit of background, I asked this question a while ago, where @NathanOstgard did some excellent research and drew the conclusion that alpha premultiplication was to blame. While that could certainly be at work here, I think the real underlying issue is something a bit larger.
The Question
Does anyone have any clue how I could possibly end up with the (wildly inaccurate) color values I'm seeing? Furthermore, and more importantly, does anyone have any ideas how to circumvent the issue and produce consistent results in all browsers? Manually blending the pixels is not an option due to performance reasons.
Thanks!
Edit: I just added a stack trace of sorts that outputs each intermediate color value and the expected value for that stage in the process.
Edit: After researching this a bit, I think I've made some slight progress.
Reasoning for Chrome14 on Win7
In Chrome14 on Win7 the resulting color after the blend operation is gray, which is many values off from the expected and desired reddish result. This led me to believe that Chrome doesn't have an issue with rounding and precision because the difference is so large. Instead, it seems like Chrome stores all pixel values with premultiplied alpha.
This idea can be demonstrated by drawing a single color to the canvas. In Chrome, if you apply the color [127, 0, 0, 2] once to the canvas, you get [127, 0, 0, 2] when you read it back. However, if you apply [127, 0, 0, 2] to the canvas twice, Chrome gives you [85, 0, 0, 3] if you check the resulting color. This actually makes some sense when you consider that the premultiplied equivalent of [127, 0, 0, 2] is [1, 0, 0, 2].
Apparently, when Chrome14 on Win7 performs the blending operation, it references the premultiplied color value [1, 0, 0, 2] for the dest component and the non-premultiplied color value [127, 0, 0, 2] for the source component. When these two colors are blended together, we end up with [85, 0, 0, 3] using the Porter-Duff source-over-dest approach, as expected.
So, it seems like Chrome14 on Win7 references pixel values inconsistently when you work with them. The information is stored internally in a premultiplied state, presented back to you in non-premultiplied form, and manipulated using values of both forms.
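The whole round trip can be observed with a couple of getImageData() calls; a sketch (the values in the comments are those described above for Chrome14 on Win7, and other browsers may differ):
// Probe sketch: stamp the same low-alpha color twice, reading back each time
var probe = document.createElement('canvas').getContext('2d');
probe.fillStyle = 'rgba(127, 0, 0, ' + (2 / 255) + ')';
probe.fillRect(0, 0, 1, 1);
console.log(probe.getImageData(0, 0, 1, 1).data); // [127, 0, 0, 2]
probe.fillRect(0, 0, 1, 1);
console.log(probe.getImageData(0, 0, 1, 1).data); // [85, 0, 0, 3]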
I'm thinking it might be possible to circumvent this by doing some additional calls to getImageData() and/or putImageData(), but the performance implications don't seem that great. Furthermore, this doesn't seem to be the same issue that Firefox7 on Win7 is exhibiting, so some more research will have to be done on that side of things.
Edit: Below is one possible approach that seems likely to work.
Thoughts on a Solution
I haven't gotten back around to working on this yet, but one WebGL approach that I came up with recently was to run all drawing operations through a simple pixel shader that performs the Porter-Duff source-over-dest blend itself. It seems as though the issue here only occurs when interpolating values for blending purposes. So, if the shader reads the two pixel values, calculates the result, and writes (not blends) the value into the destination, it should mitigate the problem. At the very least, it shouldn't compound. There will definitely still be minor rounding inaccuracies, though.
I know in my initial question I mentioned that manually blending the pixels wouldn't be viable. For a 2D canvas implementation of an application that needs real-time feedback, I don't see a way to fix this. All of the blending would be done on the CPU and would prevent other JS code from executing. With WebGL, however, your pixel shader runs on the GPU so I'm pretty sure there shouldn't be any performance hit.
The tricky part is being able to feed the dest canvas in as a texture to the shader because you also need to render it on a canvas to be viewable. Ultimately, you want to avoid having to generate a WebGL texture object from your viewable canvas every time it needs to be updated. Maintaining two copies of the data, with one as an in-memory texture and one as a viewable canvas (that's updated by stamping the texture on as needed), should solve this.
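For what it's worth, a bare sketch of what such a shader could look like (GLSL in the usual JS source string; it writes the same source-over result as the reference math above instead of relying on built-in blending, and every name is hypothetical):
// Hypothetical fragment shader: sample src and dst textures and compute
// Porter-Duff source-over directly, with GL blending disabled
var fragmentSrc = [
    'precision mediump float;',
    'uniform sampler2D u_src;',
    'uniform sampler2D u_dst;',
    'varying vec2 v_uv;',
    'void main() {',
    '    vec4 s = texture2D(u_src, v_uv);',
    '    vec4 d = texture2D(u_dst, v_uv);',
    '    float a = s.a + d.a * (1.0 - s.a);',
    '    vec3 rgb = a > 0.0 ? (s.rgb * s.a + d.rgb * d.a * (1.0 - s.a)) / a : vec3(0.0);',
    '    gl_FragColor = vec4(rgb, a);',
    '}'
].join('\n');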
Oh geez. I think we just stepped into the twilight zone here. I'm afraid I don't have an answer, but I can at least provide some more information. Your hunch is right that something larger is at work here.
I'd prefer an example that is drastically simpler (and thus less prone to any sort of error; I fear you might have a 0-255 problem somewhere instead of 0-1, but I didn't look thoroughly) than yours, so I whipped up something tiny. What I found wasn't pleasant:
<canvas id="canvas1" width="128" height="128"></canvas>
<canvas id="canvas2" width="127" height="127"></canvas>
+
var can = document.getElementById('canvas1');
var ctx = can.getContext('2d');
var can2 = document.getElementById('canvas2');
var ctx2 = can2.getContext('2d');
var alpha = 2/255.0;
ctx.fillStyle = 'rgba(127, 0, 0, ' + alpha + ')';
ctx2.fillStyle = 'rgba(127, 0, 0, ' + alpha + ')'; // this is grey
//ctx2.fillStyle = 'rgba(171, 0, 0, ' + alpha + ')'; // this works
for (var i = 0; i < 32; i++) {
ctx.fillRect(0,0,500,500);
ctx2.fillRect(0,0,500,500);
}
Yields this:
See for yourself here:
http://jsfiddle.net/jjAMX/
It seems that if your surface area is too small (not the fillRect area, but the canvas area itself), then the drawing operation will be a wash.
It seems to work fine in IE9 and FF7. I submitted it as a bug to Chromium. (We'll see what they say, but their typical M.O. so far is to ignore most bug reports unless they are security issues.)
I'd urge you to check your example - I think you might have a 0-255 to 0-1 issue somewhere. The code I have here seems to work just dandy, save for maybe the expected reddish color, but it is a consistent resulting color across all browsers, with the exception of this Chrome issue.
In the end, Chrome is probably doing something to optimize canvases with a surface area smaller than 128x128 that is fouling things up.
Setting globalCompositeOperation to "lighter" (the operation where overlapping pixel values are added together; it is not the default operation) yields the same colors and results in IE9, Chrome 14, and the latest Firefox Aurora:
Expected: [126.99999999999999, 0, 0, 63.51372549019608]
Result: [127, 0, 0, 64]
Expected: [126.99999999999999, 0, 0, 63.51372549019608]
Result: [127, 0, 0, 64]
http://jsfiddle.net/jMjFh/11/
Edit: you can learn what the different composite operations mean here:
https://developer.mozilla.org/en/Canvas_tutorial/Compositing#globalCompositeOperation
For example, "copy" was useful to me when I needed to do a fade animation in certain parts of a canvas.
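A tiny sketch of the "lighter" behavior described above:
// With 'lighter', overlapping pixel values are summed (premultiplied)
ctx.globalCompositeOperation = 'lighter';
ctx.fillStyle = 'rgba(127, 0, 0, 0.25)';
ctx.fillRect(0, 0, 50, 50);
ctx.fillRect(0, 0, 50, 50); // two stamps add up to roughly [127, 0, 0, 128]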
I've been playing around with canvas a lot lately. Now I am trying to build a little UI library; here is a demo of a simple list (Note: use your arrow keys; Chrome/Firefox only).
As you can tell, the performance is kinda bad - this is because I delete and redraw every item on every frame:
this.drawItems = function(){
this.clear();
if(this.current_scroll_pos != this.scroll_pos){
setTimeout(function(me) { me.anim(); }, 20, this);
}
for (var i in this.list_items){
var pos = this.current_scroll_pos + i*35;
if(pos > -35 && pos < this.height){
if(i == this.selected){
this.ctx.fillStyle = '#000';
this.ctx.fillText (this.list_items[i].title, 5, pos);
this.ctx.fillStyle = '#999';
} else {
this.ctx.fillText (this.list_items[i].title, 5, pos);
}
}
}
}
I know there must be better ways to do this, like via save() and transform(), but I can't wrap my head around the whole idea - I can only save the whole canvas, transform it a bit, and restore the whole canvas. Information and real-life examples on this specific topic are also pretty rare; maybe someone here can push me in the right direction.
One thing you could try to speed up drawing:
1. Create another canvas element (c2)
2. Render your text to c2
3. Draw c2 onto the initial canvas with the transform you want, simply using drawImage
drawImage accepts a canvas as well as image elements.
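A rough sketch of that approach against the list code above (the sizes and the cache variable are assumptions on my part):
// One-time: pre-render each label to its own small offscreen canvas
var cache = [];
for (var i = 0; i < this.list_items.length; i++) {
    var c2 = document.createElement('canvas');
    c2.width = 200;  // assumed label width
    c2.height = 35;  // matches the 35px row height in drawItems
    c2.getContext('2d').fillText(this.list_items[i].title, 5, 20);
    cache.push(c2);
}

// Per frame: just blit the cached canvases at the scrolled positions
for (var i = 0; i < cache.length; i++) {
    this.ctx.drawImage(cache[i], 5, this.current_scroll_pos + i * 35);
}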
OK, I think I got it. HTML5 canvas uses a technique called "immediate mode" for drawing, which means the screen is meant to be constantly redrawn. What sounds odd (and slow) at first is actually a big advantage for things like GPU acceleration; it is also a pretty common technique in OpenGL or SDL. A bit more information here:
http://www.8bitrocket.com/2010/5/15/HTML-5-Canvas-Creating-Gaudy-Text-Animations-Just-Like-Flash-sort-of/
So the redrawing of every label in every frame is totally OK, I guess.
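A redraw loop like the following sketch is therefore the normal shape of canvas code (timed with setTimeout as in the snippet above, or with requestAnimationFrame where available; list is a hypothetical stand-in for the UI object):
function frame() {
    ctx.clearRect(0, 0, canvas.width, canvas.height); // wipe the surface
    list.drawItems();                                 // repaint every visible label
    requestAnimationFrame(frame);                     // schedule the next repaint
}
requestAnimationFrame(frame);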