When I try to get image data from my PNG, everything works fine on every browser I test with. For most users it also works fine. But on some computers, this code
imageData = ctx.getImageData(0, 0, img.width, img.height).data;
returns pixels where some of the color components are off by 1 (higher or lower).
This only happens in Firefox and IE; Chrome returns the correct result even on these computers.
I found out that this could be related to the color profile on the user's computer.
Is there any way to get the original data without the browser applying color correction?
I'm using an offscreen canvas and these images have no alpha channel, so there shouldn't be any problem.
I load the image from a data URI, if that makes any difference.
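For reference, a minimal sketch of the setup described above (the variable names and data URI are illustrative):
var img = new Image();
img.onload = function () {
  // Offscreen canvas: created in memory, never attached to the DOM
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  // On affected machines, Firefox/IE return components shifted by 1 here
  var imageData = ctx.getImageData(0, 0, img.width, img.height).data;
};
img.src = 'data:image/png;base64,...';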
Related
I want to use a base64-encoded PNG that I retrieve from a server in WebGL.
To do this, I load the encoded PNG into an HTML Image object.
For my application, I need the PNG data to be absolutely lossless, but the pixel values retrieved by the shader differ between browsers...
(if I load the Image into a canvas and use getImageData, the retrieved pixel values are different across browsers as well).
There must be some weird filtering/compression of pixel values happening, but I can't figure out how and why. Is anyone familiar with this problem?
Loading the image from the server:
var htmlImage = new Image();
htmlImage.src = BASE64_STRING_FROM_SERVER; // texture upload happens after the load event fires
Loading the image into the shader:
ctx.texImage2D(ctx.TEXTURE_2D, 0, ctx.RGB, ctx.RGB, ctx.UNSIGNED_BYTE,
htmlImage);
Trying to read the pixel values using a canvas (different values across browsers):
var canvas = document.createElement('canvas');
canvas.width = htmlImage.width;
canvas.height = htmlImage.height;
canvas.getContext('2d').drawImage(htmlImage, 0, 0, htmlImage.width,
htmlImage.height);
// This data is different in, for example, the latest version of Chrome and Firefox
var pixelData = canvas.getContext('2d').getImageData(0, 0,
htmlImage.width, htmlImage.height).data;
As @Sergiu points out, by default the browser may apply color correction, gamma correction, color profiles or anything else to images.
In WebGL, though, you can turn this off. Before uploading the image to the texture, call gl.pixelStorei with gl.UNPACK_COLORSPACE_CONVERSION_WEBGL and pass it gl.NONE, as in
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
This will tell the browser not to apply color spaces, gamma, etc. This was important for WebGL because lots of 3D applications use textures to pass things other than images. Examples include normal maps, height maps, ambient occlusion maps, glow maps, specular maps, and many other kinds of data.
The default setting is
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);
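Putting the two together, the order matters: set the flag before the upload. A minimal sketch, reusing the context and htmlImage from the question:
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE); // disable conversion first
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, htmlImage); // then upload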
Note this likely only works when taking data directly from an image, not when passing the image through a 2d canvas.
Note that if you're getting the data from a WebGL canvas by drawing it into a 2D canvas, then all bets are off. If nothing else, a 2D canvas uses premultiplied alpha, so copying data into and out of a 2D canvas is always lossy if alpha < 255. Use gl.readPixels if you want the data back unaffected by whatever the 2D canvas does.
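A sketch of reading the pixels straight from the WebGL context (w and h stand in for the drawing buffer size):
var pixels = new Uint8Array(w * h * 4); // 4 bytes per RGBA pixel
gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// Note: rows come back bottom-to-top relative to 2D canvas coordinates.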
Note that one potential problem with this method is speed. The browser probably assumes when you download an image that it will eventually be displayed. It has no way of knowing in advance that it's going to be used in a texture. So, you create an image tag, set the src attribute, the browser downloads the image, decompresses it, prepares it for display, then emits the load event, you then upload that image to a texture with UNPACK_COLORSPACE_CONVERSION_WEBGL = NONE. The browser at this point might have to re-decompress it if it didn't keep around a version that doesn't have color space conversion already applied. It's not likely a noticeable speed issue but it's also not zero.
To get around this, browsers added the ImageBitmap API. This API solves a few problems.
It can be used in a web worker, because it's not a DOM element like Image is.
You can pass it a sub-rectangle, so you don't have to load the entire image just to ultimately use some part of it.
You can tell it whether or not to apply color space conversion before it starts, avoiding the issue mentioned above.
Unfortunately as of 2018/12 it's only fully supported by Chrome. Firefox has partial support. Safari has none.
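Where it is supported, a minimal sketch might look like this (assuming blob holds the PNG bytes, e.g. from response.blob()):
createImageBitmap(blob, {
  premultiplyAlpha: 'none',     // keep alpha unmultiplied
  colorSpaceConversion: 'none'  // skip color profiles/gamma entirely
}).then(function (bitmap) {
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
});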
Recently I have run across inconsistencies between browsers with the document.getElementById(canvasId).getContext("2d").getImageData(x, y, 1, 1).data command. I have an image, and a section of it is colored rgb(246,247,247) (I set the color in Photoshop). I call getImageData at a clicked point, look at the color, and if the color falls inside a range (which I have defined in an array) I plot a point on the area.
I run this in IE and it works just as expected: the color comes out as rgb(246,247,247). The problem comes in when I run the exact same code with the exact same image in Chrome or Firefox: both report the color as rgb(246,247,246). Why is the browser saying the colors are different than they actually are? Is there another way to reliably get the color of a pixel in a canvas?
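A hypothetical reconstruction of that check, with plotPoint standing in for the plotting code:
canvas.addEventListener('click', function (e) {
  var ctx = document.getElementById(canvasId).getContext('2d');
  var p = ctx.getImageData(e.offsetX, e.offsetY, 1, 1).data; // [r, g, b, a]
  // Compare against a small range rather than exact values, since some
  // browsers return components shifted by 1 after color management.
  if (Math.abs(p[0] - 246) <= 1 && Math.abs(p[1] - 247) <= 1 && Math.abs(p[2] - 247) <= 1) {
    plotPoint(e.offsetX, e.offsetY); // hypothetical plotting helper
  }
});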
Thanks in advance!
For the record, web browsers all render colors differently. The best way to ensure colors are always the same is to use web-safe colors and export images as PNG-8 with no gamma (gAMA) chunk.
I am currently combining two high resolution images into an html5 canvas element.
The first image is a JPEG and contains all color data, and the second image is an alpha mask PNG file.
I then combine the two to produce a single RGBA canvas.
The app is dealing with 2048x2048 resolution images that also need to maintain their alpha channel. By using this method as opposed to simply using PNGs, I have reduced the average file size from around 2-3 MB to 50-130 KB, plus a 10 KB PNG alpha mask.
The method I use is as follows:
context.drawImage(alpha_mask_png, 0, 0, w, h);  // draw the alpha mask first
context.globalCompositeOperation = 'source-in'; // keep new pixels only where the mask is opaque
context.drawImage(main_image_jpeg, 0, 0, w, h); // draw the color data through the mask
Unfortunately this operation takes around 100-120 ms, and it is only carried out once for each pair of images as they are loaded. While this wouldn't normally be an issue, in this case an animation is being rendered to another, visible canvas (for which these high-res images are the source art), and it suffers a very noticeable 100 ms judder (most perceptible in Firefox) whenever new source art is streamed in, loaded, and combined.
What I am looking for is a way to reduce this.
Here is what I have tried so far:
Implemented WebP for Chrome, removing the need to combine the JPEG and PNG alpha mask altogether. This is perfect, but Chrome-only; I need a solution mostly for Firefox (IE 10/11 seems to perform the operation far quicker).
I have tried loading the images in a web worker and decoding them both, followed by combining them, all in pure JavaScript. This works but is far too slow to be of use.
I have also tried WebP polyfills. Weppy is very fast and, when run in a web worker, does not affect the main loop. However it does not support alpha transparency, so it is of no use, which is a real shame as this method is very close. libwebpjs works okay within a web worker but, like my manual decoding of the JPEG/PNG, is far too slow.
EDIT: To further clarify, I have tried transferring the data from my web workers using transferable objects and have even tried turning the result into a Blob and creating an object URL which can then be loaded by the main thread. Although there is no lag in the main thread anymore, the images simply take far too long to decode.
This leaves me with WebGL. I have literally no understanding of how WebGL works, other than realising that I would need to load both the JPEG and PNG as separate textures and then combine them with a shader. But I really wouldn't know where to begin.
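For reference, the core of such a shader is small. A hedged sketch of the fragment shader only (the sampler and varying names are illustrative, and all the usual WebGL program/texture setup is omitted):
// Illustrative fragment shader source: sample color from the JPEG texture
// and alpha from the mask texture, then combine them into one RGBA output.
var fragmentShaderSource = [
  'precision mediump float;',
  'uniform sampler2D u_color;  // the JPEG color data',
  'uniform sampler2D u_alpha;  // the PNG alpha mask',
  'varying vec2 v_texCoord;',
  'void main() {',
  '  vec3 rgb = texture2D(u_color, v_texCoord).rgb;',
  '  float a = texture2D(u_alpha, v_texCoord).r; // mask read from one channel',
  '  gl_FragColor = vec4(rgb, a);',
  '}'
].join('\n');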
I have spent some time playing with the code from the following link:
Blend two canvases onto one with WebGL
But to no avail. And to be honest I am concerned that loading the images as textures might actually take longer than my original method anyway.
So to summarise, I am really looking for a way to speed up this operation on high-resolution images (1920x1080 to 2048x2048), be it with WebGL or indeed any other method.
Any help or suggestions would be greatly appreciated.
I came across this gifx.js - http://evanw.github.io/webgl-filter/ - which seems to work well except for one thing: the save button always generates a blank image. I have tried playing around with the source but do not understand why the data URI that is generated is always the same (and blank) rather than that of the canvas. This is the script that I've been trying to tinker with - http://evanw.github.io/webgl-filter/script.js. If anyone has any ideas, I would appreciate the help!
The WebGL context needs to be created with { preserveDrawingBuffer: true } in the options, otherwise the buffer being rendered is cleared once it is drawn to the screen, and toDataURL will give you a blank image.
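A minimal sketch of the fix, assuming the script's context creation line can be reached:
var gl = canvas.getContext('webgl', { preserveDrawingBuffer: true });
// ...issue the draw calls as before...
var dataURL = canvas.toDataURL('image/png'); // no longer blank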
See also:
How do you save an image from a Three.js canvas?
I am trying to convert the canvas element on this page to a PNG using the following snippet (e.g. entered in the JavaScript console):
(function convertCanvasToImage(canvas) {
  var image = new Image();
  image.src = canvas.toDataURL("image/png");
  return image;
})($$('canvas')[0]);
Unfortunately, the png I get is completely blank. Notice also that the original canvas goes blank after resizing the page.
Why does the canvas go blank? How can I convert this canvas to a png?
Kevin Reid's preserveDrawingBuffer suggestion is the correct one, but there is (usually) a better option. The tl;dr is the code at the end.
It can be expensive to put together the final pixels of a rendered webpage, and coordinating that with rendering WebGL content even more so. The usual flow is:
1. JavaScript issues drawing commands to the WebGL context
2. JavaScript returns, returning control to the main browser event loop
3. The WebGL context turns its drawing buffer (or its contents) over to the compositor for integration into the web page currently being rendered on screen
4. The page, with the WebGL content, is displayed on screen
Note that this is different from most OpenGL applications. In those, rendered content is usually displayed directly, rather than being composited with a bunch of other stuff on a page, some of which may actually be on top of and blended with the WebGL content.
The WebGL spec was changed to treat the drawing buffer as essentially empty after step 3. The code you're running in devtools comes after step 4, which is why you get an empty buffer. This change to the spec allowed big performance improvements on platforms where blanking after step 3 is basically what actually happens in hardware (like in many mobile GPUs). If you want to work around this and sometimes make copies of the WebGL content after step 3, the browser would have to always make a copy of the drawing buffer before step 3, which is going to make your framerate drop precipitously on some platforms.
You can do exactly that and force the browser to make the copy and keep the image content accessible by setting preserveDrawingBuffer to true. From the spec:
This default behavior can be changed by setting the preserveDrawingBuffer attribute of the WebGLContextAttributes object. If this flag is true, the contents of the drawing buffer shall be preserved until the author either clears or overwrites them. If this flag is false, attempting to perform operations using this context as a source image after the rendering function has returned can lead to undefined behavior. This includes readPixels or toDataURL calls, or using this context as the source image of another context's texImage2D or drawImage call.
In the example you provided, the fix is just changing the context creation line:
gl = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});
Just keep in mind that it will force that slower path in some browsers and performance will suffer, depending on what and how you are rendering. You should be fine in most desktop browsers, where the copy doesn't actually have to be made, and those do make up the vast majority of WebGL capable browsers...but only for now.
However, there is another option (as somewhat confusingly mentioned in the next paragraph in the spec).
Essentially, you make the copy yourself before step 2: after all your draw calls have finished, but before you return control to the browser from your code. At that point the WebGL drawing buffer is still intact and accessible, and you should have no trouble reading the pixels. You use the same toDataURL or readPixels calls you would use otherwise; it's just the timing that's important.
Here you get the best of both worlds. You get a copy of the drawing buffer, but you don't pay for it in every frame, even those in which you didn't need a copy (which may be most of them), like you do with preserveDrawingBuffer set to true.
In the example you provided, just add your code to the bottom of drawScene and you should see the copy of the canvas right below:
function drawScene() {
  ...
  var webglImage = (function convertCanvasToImage(canvas) {
    var image = new Image();
    image.src = canvas.toDataURL('image/png');
    return image;
  })(document.querySelectorAll('canvas')[0]);

  window.document.body.appendChild(webglImage);
}
Here are some things to try. I don't know whether either of these should be necessary to make this work, but they might make a difference.
Add preserveDrawingBuffer: true to the getContext attributes.
Try doing this with a later tutorial which does animation; i.e. draws on the canvas repeatedly rather than just once.
toDataURL() reads data from the drawing buffer.
You don't need preserveDrawingBuffer: true; instead, call render() again immediately before reading the data.
Finally:
renderer.domElement.toDataURL();
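A minimal sketch of that ordering, assuming the usual Three.js renderer/scene/camera setup:
renderer.render(scene, camera); // repaint the drawing buffer first
var dataURL = renderer.domElement.toDataURL('image/png'); // then read it back immediately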