Why does my canvas go blank after converting to image? - javascript

I am trying to convert the canvas element on this page to a PNG using the following snippet (e.g. by entering it in the JavaScript console):
(function convertCanvasToImage(canvas) {
    var image = new Image();
    image.src = canvas.toDataURL("image/png");
    return image;
})($$('canvas')[0]);
Unfortunately, the png I get is completely blank. Notice also that the original canvas goes blank after resizing the page.
Why does the canvas go blank? How can I convert this canvas to a png?

Kevin Reid's preserveDrawingBuffer suggestion is the correct one, but there is (usually) a better option. The tl;dr is the code at the end.
It can be expensive to put together the final pixels of a rendered webpage, and coordinating that with rendering WebGL content even more so. The usual flow is:
1. JavaScript issues drawing commands to the WebGL context
2. JavaScript returns, handing control back to the browser's main event loop
3. The WebGL context turns its drawing buffer (or its contents) over to the compositor for integration into the web page currently being rendered on screen
4. The page, with the WebGL content, is displayed on screen
Note that this is different from most OpenGL applications. In those, rendered content is usually displayed directly, rather than being composited with a bunch of other stuff on a page, some of which may actually be on top of and blended with the WebGL content.
The WebGL spec was changed to treat the drawing buffer as essentially empty after Step 3. The code you're running in devtools runs after Step 4, which is why you get an empty buffer. This change to the spec allowed big performance improvements on platforms where blanking after Step 3 is basically what happens in hardware anyway (like many mobile GPUs). If you want to work around this so that you can sometimes copy the WebGL content after Step 3, the browser would have to always make a copy of the drawing buffer before Step 3, which will make your framerate drop precipitously on some platforms.
You can do exactly that and force the browser to make the copy and keep the image content accessible by setting preserveDrawingBuffer to true. From the spec:
This default behavior can be changed by setting the preserveDrawingBuffer attribute of the WebGLContextAttributes object. If this flag is true, the contents of the drawing buffer shall be preserved until the author either clears or overwrites them. If this flag is false, attempting to perform operations using this context as a source image after the rendering function has returned can lead to undefined behavior. This includes readPixels or toDataURL calls, or using this context as the source image of another context's texImage2D or drawImage call.
In the example you provided, the fix is just a change to the context-creation line:
gl = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});
Just keep in mind that it will force that slower path in some browsers and performance will suffer, depending on what and how you are rendering. You should be fine in most desktop browsers, where the copy doesn't actually have to be made, and those do make up the vast majority of WebGL capable browsers...but only for now.
However, there is another option (as somewhat confusingly mentioned in the next paragraph in the spec).
Essentially, you make the copy yourself before Step 2: after all your draw calls have finished, but before you return control to the browser from your code. At that point the WebGL drawing buffer is still intact and accessible, and you should have no trouble reading the pixels. You use the same toDataURL or readPixels calls you would use otherwise; it's just the timing that's important.
Here you get the best of both worlds. You get a copy of the drawing buffer, but you don't pay for it in every frame, even those in which you didn't need a copy (which may be most of them), like you do with preserveDrawingBuffer set to true.
In the example you provided, just add your code to the bottom of drawScene and you should see the copy of the canvas right below:
function drawScene() {
    ...
    var webglImage = (function convertCanvasToImage(canvas) {
        var image = new Image();
        image.src = canvas.toDataURL('image/png');
        return image;
    })(document.querySelectorAll('canvas')[0]);
    window.document.body.appendChild(webglImage);
}
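For reference, here is a minimal sketch (not from the original answer; the captureRequested flag and requestCapture() helper are made-up names) of the on-demand pattern described above, where toDataURL only runs in the frames where a capture was actually requested:
// Hedged sketch: capture the WebGL canvas on demand, inside the same
// callback that issues the draw calls, so the drawing buffer is still intact.
var captureRequested = false;

function requestCapture() {
    captureRequested = true; // e.g. wired to a "Save image" button
}

function renderLoop() {
    drawScene(); // all WebGL draw calls happen here

    if (captureRequested) {
        captureRequested = false;
        var image = new Image();
        image.src = document.querySelector('canvas').toDataURL('image/png');
        document.body.appendChild(image);
    }

    requestAnimationFrame(renderLoop);
}

requestAnimationFrame(renderLoop);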

Here are some things to try. I don't know whether either of these is necessary to make this work, but they might make a difference.
Add preserveDrawingBuffer: true to the getContext attributes.
Try doing this with a later tutorial which does animation; i.e. draws on the canvas repeatedly rather than just once.

toDataURL() reads data from the drawing buffer.
You don't need to use preserveDrawingBuffer: true.
Just call render() again right before reading the data, and finally:
renderer.domElement.toDataURL();
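As a rough sketch (assuming a typical three.js setup where renderer, scene and camera already exist; the saveAsPng name is made up for illustration), rendering immediately before reading keeps the drawing buffer populated:
// Hedged sketch for a three.js setup: render right before reading so the
// drawing buffer still holds the frame when toDataURL runs.
function saveAsPng() {
    renderer.render(scene, camera); // repopulate the drawing buffer
    var dataUrl = renderer.domElement.toDataURL('image/png');

    var link = document.createElement('a'); // trigger a download
    link.href = dataUrl;
    link.download = 'capture.png';
    link.click();
}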

Related

How can I efficiently store hand-drawn lines (canvas)?

Background
I'm making a simple online drawing app so that I can practice my JS and canvas skills. It's going really well, with the ability to draw freely, and also the ability to draw straight lines.
How my drawing app works
It registers mouse and touch events, which handle all of the drawing and saving.
Here's some example code:
var tools = {
    "pencil": {
        "started": false,
        ... (some data used by the pencil tool),
        "start": function(e) {
            if (currentTool !== "pencil") return;
            // Make sure that the selected tool on the toolbar is the pencil
            // code for mousedown/touchstart
            tools.pencil.started = true;
        },
        "move": function(e) {
            if (!tools.pencil.started) return;
            // code for mousemove/touchmove
        },
        "end": function(e) {
            if (!tools.pencil.started) return;
            // code for mouseup/touchend
            tools.pencil.started = false;
        }
    },
    "other tools": {
        ...
    }
};
// And here would be a function which adds the mouse and touch events
Here is my pencil tool:
var tools = {
    pencil: {
        started: false,
        start: function(e) {
            if (currentTool !== "pencil") return; // Make sure the pencil tool is selected
            e.preventDefault();
            e = e.clientX ? e : e.touches[0]; // Allow touch
            context.beginPath(); // Begin drawing
            context.moveTo(e.clientX - 50, e.clientY); // Set the position of the pencil
            tools.pencil.started = true; // Mark the pencil tool as started
        },
        move: function(e) {
            if (tools.pencil.started) { // Make sure the pencil is started
                e.preventDefault();
                e = e.clientX ? e : e.touches[0];
                context.lineTo(e.clientX - 50, e.clientY); // Draw a line
                context.stroke(); // Make the line visible
            }
        },
        end: function(e) {
            if (tools.pencil.started) {
                e.preventDefault();
                //tools.pencil.move(e); // Finish drawing the line
                tools.pencil.started = false; // Mark the pencil tool as not started
            }
        }
    }
};
Ignore the -50 parts (they're just to adjust with the sidebar). This works, but doesn't save to localStorage.
Problem
TL;DR: I need to save everything on the canvas into some kind of storage (I'm currently using localStorage, but anything would work, although I'd prefer a client-side-only solution). I can't figure out how to store it efficiently, where 'efficiently' means both fast and accurate (accurate as in it stores the whole line). Straight lines are easy to store, but I haven't figured out hand-drawn strokes yet.
Explanation:
When the user resizes the window, the canvas resizes to the window size (I made this happen; it's not a bug). I was able to make it so that, on resize, the drawings are first saved onto a temporary canvas and then, after the main canvas has resized, drawn back onto it. But there's a problem. Here's an example to make it clear:
You open the drawing app and fill the screen with drawings.
You open DevTools and some of the drawings get covered.
When you close DevTools, those drawings are gone.
The reason is that because the canvas got smaller, the drawings that fell off it were lost, and when it came back to the original size they weren't visible (because they're gone). I decided to save everything to localStorage so that I can retain them (and also for some more features that I may add). The lines work, since each one only needs to say, "I am a line, I start at x,y and end at x,y", and that's it for a line of any size. But hand-drawn strokes can go anywhere, so they need to say, "I am a pixel, I am at x,y", many times over for even remotely complex images.
Attempted solution
Whenever the user moves the mouse, it saves to a variable which is then written to localStorage. The problems with this are that if you move fast, the line isn't complete (it has holes in it), and localStorage ends up storing a lot of text.
Question
How can I efficiently store all of the user's hand-drawn (pencil tool) images (client-side operations preferred)?
My suggestion is to consider the target use of this app and work backwards from it.
If you're doing something like an MS Paint-style drawing app, storing the canvas as a bitmap may be better. To deal with cropping issues, I would suggest using a predefined canvas size or a setup stage (with a maximum size to prevent issues when saving to localStorage).
If you're worried about image size being restrictive due to memory cap, you can also utilize compression.
For example:
canvas.toDataURL('image/jpeg', 0.5);
👆 will compress the image and return a string that will be localStorage friendly
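As a rough sketch (STORAGE_KEY and the function names are made up for illustration), that compressed data URL can be written to localStorage and painted back onto the canvas on load:
// Hedged sketch: persist the canvas as a compressed JPEG data URL and restore it later.
var STORAGE_KEY = 'drawing';

function saveCanvas(canvas) {
    localStorage.setItem(STORAGE_KEY, canvas.toDataURL('image/jpeg', 0.5));
}

function restoreCanvas(canvas) {
    var dataUrl = localStorage.getItem(STORAGE_KEY);
    if (!dataUrl) return;
    var img = new Image();
    img.onload = function() {
        canvas.getContext('2d').drawImage(img, 0, 0);
    };
    img.src = dataUrl;
}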
If, on the other ✋, you are building something more like an excalidraw-style app, where you want to keep the ability to edit any drawn object, then you'll need to store each object individually. Whether you use canvas or SVG doesn't really matter imho, but SVG can make it a bit easier in the sense that the definition of how to store each element is already established, so you don't need to re-invent the wheel, so to speak, just implement it. Using SVG doesn't necessarily preclude you from using canvas, as there are a couple of SVG-in-canvas implementations, but if you're doing this as a learning project (especially for canvas), it's likely not the route you want to take. A sketch of the object-based approach follows below.
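For the object-based route, a minimal sketch (not from the original answers; the strokes array and function names are illustrative, and context is the 2d context from the question) would record each pencil stroke as a list of points and persist the whole list, which survives resizes and is far smaller than per-pixel storage:
// Hedged sketch: store each hand-drawn stroke as an array of points instead of pixels.
var strokes = [];          // all finished strokes
var currentStroke = null;  // the stroke being drawn right now

function startStroke(x, y) {
    currentStroke = { color: context.strokeStyle, points: [{ x: x, y: y }] };
}

function extendStroke(x, y) {
    if (currentStroke) currentStroke.points.push({ x: x, y: y });
}

function endStroke() {
    if (!currentStroke) return;
    strokes.push(currentStroke);
    localStorage.setItem('strokes', JSON.stringify(strokes));
    currentStroke = null;
}

// Redraw everything (e.g. after a window resize) from the stored data.
function redraw() {
    strokes = JSON.parse(localStorage.getItem('strokes') || '[]');
    strokes.forEach(function(stroke) {
        context.strokeStyle = stroke.color;
        context.beginPath();
        stroke.points.forEach(function(p, i) {
            if (i === 0) context.moveTo(p.x, p.y);
            else context.lineTo(p.x, p.y);
        });
        context.stroke();
    });
}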

Weird compression of html image

I want to use a base64-encoded png that I retrieve from a server in WebGL.
To do this, I load the encoded png into an html Image object.
For my application, I need the png data to be absolutely lossless, but the pixel values retrieved by the shader are different in different browsers...
(if I load the Image into a canvas and use getImageData, the retrieved pixel values are different across browsers as well).
There must be some weird filtering/compression of pixel values happening, but I can't figure out how and why. Anyone familiar with this problem?
Loading the image from the server:
var htmlImage = new Image();
htmlImage.src = BASE64_STRING_FROM_SERVER
Loading the image into the shader:
ctx.texImage2D(ctx.TEXTURE_2D, 0, ctx.RGB, ctx.RGB, ctx.UNSIGNED_BYTE, htmlImage);
Trying to read the pixel values using a canvas (different values across browsers):
var canvas = document.createElement('canvas');
canvas.width = htmlImage.width;
canvas.height = htmlImage.height;
canvas.getContext('2d').drawImage(htmlImage, 0, 0, htmlImage.width, htmlImage.height);
// This data is different in, for example, the latest version of Chrome and Firefox
var pixelData = canvas.getContext('2d').getImageData(0, 0, htmlImage.width, htmlImage.height).data;
As #Sergiu points out, by default the browser may apply color correction, gamma correction, color profiles or anything else to images.
In WebGL, though, you can turn this off. Before uploading the image to the texture, call gl.pixelStorei with gl.UNPACK_COLORSPACE_CONVERSION_WEBGL and pass it gl.NONE, as in
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
This will tell the browser not to apply color spaces, gamma, etc. This was important for WebGL because lots of 3D applications use textures to pass things other than images. Examples include normal maps, height maps, ambient occlusion maps, glow maps, specular maps, and many other kinds of data.
The default setting is
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);
Note this likely only works when taking data directly from an image, not when passing the image through a 2d canvas.
Note that if you're getting the data from WebGL canvas by drawing it into a 2D canvas then all bets are off. If nothing else a canvas 2D uses premultiplied alpha so copying data into and out of a 2D canvas is always lossy if alpha < 255. Use gl.readPixels if you want the data back unaffected by whatever 2D canvas does.
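A minimal sketch of that (assuming gl is your WebGL context and the read happens while the drawing buffer is still valid, i.e. right after rendering or with preserveDrawingBuffer enabled):
// Hedged sketch: read raw RGBA bytes straight from the WebGL drawing buffer,
// avoiding the 2D canvas and its premultiplied alpha.
var width = gl.drawingBufferWidth;
var height = gl.drawingBufferHeight;
var pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// pixels now holds the framebuffer rows (bottom-up), 4 bytes per pixel.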
Note that one potential problem with this method is speed. The browser probably assumes when you download an image that it will eventually be displayed. It has no way of knowing in advance that it's going to be used in a texture. So, you create an image tag, set the src attribute, the browser downloads the image, decompresses it, prepares it for display, then emits the load event, you then upload that image to a texture with UNPACK_COLORSPACE_CONVERSION_WEBGL = NONE. The browser at this point might have to re-decompress it if it didn't keep around a version that doesn't have color space conversion already applied. It's not likely a noticeable speed issue but it's also not zero.
To get around this the browsers added the ImageBitmap api. This API solves a few problems.
It can be used in a web worker because it's not a DOM element like Image is
You can pass it a sub-rectangle so you don't have to load the entire image just to ultimately use some part of it
You can tell it whether or not to apply color space correction, which avoids the issue mentioned above.
Unfortunately as of 2018/12 it's only fully supported by Chrome. Firefox has partial support. Safari has none.
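Where it is supported, usage would look roughly like this sketch (loadTexture is a made-up name; the key part is the colorSpaceConversion: 'none' option):
// Hedged sketch: decode the image with no color space conversion applied,
// then upload it to a texture.
async function loadTexture(gl, url) {
    const response = await fetch(url);
    const blob = await response.blob();
    const bitmap = await createImageBitmap(blob, { colorSpaceConversion: 'none' });

    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
    return texture;
}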

Putting renderer.clear() at the end of render() makes my scene black

My code is like this:
function render() {
    renderer.render( scene, camera );
    renderer.clear();
}
I wonder why it makes my scene black. Is that because the color buffer is cleared before being actually rendered?
In this way it works well:
function render() {
    renderer.clear();
    renderer.render( scene, camera );
}
But how can I make sure the color buffer has been rendered before I call clear()?
I'm curious about the difference between clearing at the end and at the beginning.
The difference between clearing the renderer's framebuffer before and after rendering lies in the way WebGL content is presented to the web page. The thing is, WebGL is always at least double-buffered (in the WebGL Insights book the folks from Mozilla say that in Firefox WebGL canvases are actually triple-buffered). That means that within a requestAnimationFrame callback (the render function in your case) all WebGL calls affect only the so-called back buffer; the other buffer (called the front buffer) is unaffected. Then, when the callback ends, the browser swaps the buffers: the back buffer becomes the front buffer, and the front one becomes the back buffer. The new front buffer is then used by the browser to draw the web page, while the new back buffer is drawn to by WebGL the next time the rAF callback is called. It's important to note that by default the browser doesn't change the contents of the buffers upon swapping them (the preserveDrawingBuffer context option changes that, though).
Going back to your question, the difference is that when you render the scene and then clear the buffer, the framebuffer does briefly contain the frame you just rendered (you won't see it on the screen; it only exists in the buffer's memory), but then you make all of that irrelevant by clearing the buffer. After that, the browser presents the cleared buffer to the page as it is: a black rectangle (or some other color, depending on the renderer's options). However, if you clear first and then render the scene, you get correct results: the clear removes the leftovers of the previous frame, and the render then puts new content into the buffer. Then the browser presents that to the page.
So, to sum up: we usually clear the framebuffer first to remove any traces of previous frames, so that we start with a "clean slate", and then render onto it.
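A minimal sketch of the usual ordering inside an animation loop (assuming a standard three.js WebGLRenderer; note that three.js clears for you by default unless you set renderer.autoClear = false):
// Hedged sketch: clear first, then render, inside the rAF callback.
renderer.autoClear = false; // we clear manually to make the ordering explicit

function animate() {
    requestAnimationFrame(animate);
    renderer.clear();               // wipe leftovers of the previous frame
    renderer.render(scene, camera); // draw the new frame into the back buffer
    // control returns to the browser here, which then presents the buffer
}

animate();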

Capture/save/export an image with CSS filter effects applied

I'm tooling around to make a simple picture editor that uses CSS3 filter effects (saturation, sepia, contrast, etc.)
Making the picture editor is the easy part; however, saving or exporting the image with the filters applied seems incredibly difficult.
I originally had high hopes that it would be possible with #niklasvh's html2canvas. Unfortunately, it doesn't capture most CSS3 properties, let alone filter effects.
If anybody has a solution or sadly, definitive knowledge that this just isn't possible, it would be greatly appreciated! Thanks!
You're not, as far as I'm aware, able to apply CSS to graphics inside the HTML5 canvas element (as they're not part of the DOM).
However, that's OK! We can still do basic filter effects relatively easy and save them out as an image with just a few lines of JavaScript.
I found a good article that goes over applying a sepia-like effect to the canvas and saving it as an image. Rather than copying it, I'll highlight the larger takeaways from the article.
Modifying the canvas image:
var canvas = document.getElementById('canvasElementId'),
    context = canvas.getContext('2d');
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
for (var i = 0; i < imageData.data.length; i += 4) {
    ...
}
To get access to the relevant canvas API, you'll need to get the 2d context of the canvas using the JavaScript above. MDN has some great documentation on the API that is available to you through the context object, but the important part to note here is the getImageData() call. Basically, it grabs all the pixel values in the area that you defined (in the case above, we're grabbing the whole image). Then, with this data in hand, we can iterate through all the pixels and change them as needed; a short sketch of such a loop follows below. In the sepia article it's obviously making some interesting adjustments, but you can also do grayscale, blurring, or any other changes as necessary, and there's an awesome canvas filters library on GitHub for just that.
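As an illustration of what goes inside that loop (not from the linked article; this is a simple grayscale filter rather than sepia), the per-pixel manipulation could look like this sketch:
// Hedged sketch: a simple grayscale filter applied to the pixel data,
// then written back to the canvas.
var canvas = document.getElementById('canvasElementId'),
    context = canvas.getContext('2d'),
    imageData = context.getImageData(0, 0, canvas.width, canvas.height),
    data = imageData.data;

for (var i = 0; i < data.length; i += 4) {
    // Luminance-weighted average of the red, green and blue channels.
    var gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    data[i] = data[i + 1] = data[i + 2] = gray;
    // data[i + 3] (alpha) is left untouched.
}

context.putImageData(imageData, 0, 0);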
How to save the canvas image:
var canvas = document.getElementById('canvasElementId'),
    image = document.createElement("img");
image.src = canvas.toDataURL('image/jpeg');
document.body.appendChild(image);
The above script selects your canvas (assuming you've already done your drawing) and creates an image element. It then uses toDataURL() to generate a URL that you can use as the source of the image element. In the example above, the image element is appended to the document body. You can find more info on MDN's canvas page.
I ran into this too and made a program that does it; it finally works.
The steps are:
1. Upload the image (JPG/PNG).
2. Convert it to a canvas.
3. Customize it with CSS filters.
4. Render it with CamanJS to save it as an image.
5. Done.
You can also reset an effect by setting its filter value back to the default.
Good luck!
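One way to sketch the "bake the filters into the canvas, then export" idea without a library is the 2D context's filter property, which accepts CSS filter syntax (this is not part of the answer above, it isn't supported in every browser, and applyFiltersAndExport is a made-up name):
// Hedged sketch: draw the image through the 2D context's CSS-style filter,
// then export the result as a data URL.
function applyFiltersAndExport(img) {
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;

    var ctx = canvas.getContext('2d');
    ctx.filter = 'sepia(0.6) contrast(1.2) saturate(1.1)'; // same syntax as CSS
    ctx.drawImage(img, 0, 0);

    return canvas.toDataURL('image/png');
}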

javascript memory leak with HTML5 getImageData

I've been working on some visual effects for a JavaScript game I'm creating, and I noticed that a piece of code I'm using to modulate the color of my sprites makes my browser's memory usage go up and up, seemingly without limit.
You can see the code and the memory leak here: http://timzook.tk/javascript/test.html
This memory leak only happens in my updateimage() function when I call getImageData from my canvas context every frame (via setInterval) in order to make a new ImageData object to recolor. I would have thought that JavaScript's garbage collector would destroy the old one, but if not, I have no idea how to destroy it manually. Any help figuring out why this happens or how to fix it would be appreciated.
My question is very similar to this one: What is leaking memory with this use of getImageData, javascript, HTML5 canvas. However, I NEED my code to run every frame in the function called by setInterval, so his solution of moving it outside the setInterval function isn't an option for me, and I can't leave a comment asking whether he found some other way to solve it.
Note to people testing it out: Since this example uses getImageData, it can't be tested out locally just by throwing it in a .html file, a web server is required. Also it obviously uses HTML5 elements so some browsers won't work with it.
Edit: *SOLVED* Thanks, the solution below fixed it. I didn't realize that you could use a canvas element the same way as you could use an image in drawImage(), I restructured my code so it now uses significantly less memory. I uploaded this changed code to the page linked above, if anyone wants to see it.
You aren't getting a memory leak from your calls to getImageData(). The source of your problem is this line:
TempImg.src = ImgCanvas.toDataURL("image/png");
Effectively, every time that line of code is executed the browser "downloads" another image and stores it in memory. So, what you actually end up with is a cache that is growing very quickly. You can easily verify this by opening the site in Chrome and checking the resources tab of the developer tools (ctrl+shift+i).
You can work around this by making a TempImgCanvas and storing your image data on that canvas instead of keeping an image object updated after every call to your updateimage() loop.
I have to step away for a while, but I can work up an example in a few hours when I get back, if you would like.
Edit: I restructured things a bit and eliminated your caching issue. You just have to make two changes. The first is replacing your updateimage() function with this one:
function updateimage() {
    var TempImgData = ImgContext.getImageData(0, 0, ImgCanvas.width, ImgCanvas.height);
    var NewData = TempImgData.data;
    var OrigData = ImgData.data;

    // Change image color
    var len = 4 * ImgData.width * ImgData.height - 1;
    for (var i = 0; i <= len; i += 4) {
        NewData[i + 0] = OrigData[i + 0] * color.r;
        NewData[i + 1] = OrigData[i + 1] * color.g;
        NewData[i + 2] = OrigData[i + 2] * color.b;
        NewData[i + 3] = OrigData[i + 3];
    }

    // Put changed image onto the canvas
    ImgContext.putImageData(TempImgData, 0, 0);
}
The second is updating the last line in draw() to read as follows:
drawImg(ImgCanvas, Positions[i].x, Positions[i].y, Positions[i].x+Positions[i].y);
Using this updated code, we simply refer to the original base (white) image data and calculate new values based on that every time we go through the updateimage() function. When you call getImageData() you receive a copy of the image data on the canvas, so if you edit the canvas or the data, the other one remains untouched. You were already grabbing the original base image data to begin with, so it made sense to just use that instead of having to re-acquire it every time we updated.
Also, you'll notice that I slightly modified your loop that changes the image color. By grabbing a handle to the actual data array you want to access and modify, you save yourself from having to resolve the array location from the parent object every time through the loop. This technique actually gives a nice performance boost. Your performance was fine to start with, but it can't hurt to be more efficient since the change is pretty straightforward.
