Weird compression of html image - javascript

I want to use a base64-encoded png that I retrieve from a server in WebGL.
To do this, I load the encoded png into an html Image object.
For my application, I need the png data to be absolutely lossless, but the pixel values retrieved by the shader differ between browsers...
(if I load the Image into a canvas and use getImageData, the retrieved pixel values are different across browsers as well).
There must be some weird filtering/compression of pixel values happening, but I can't figure out how and why. Anyone familiar with this problem?
Loading the image from the server:
var htmlImage = new Image();
htmlImage.src = BASE64_STRING_FROM_SERVER;
Loading the image into the shader:
ctx.texImage2D(ctx.TEXTURE_2D, 0, ctx.RGB, ctx.RGB, ctx.UNSIGNED_BYTE,
htmlImage);
Trying to read the pixel values using a canvas (different values across browsers):
var canvas = document.createElement('canvas');
canvas.width = htmlImage.width;
canvas.height = htmlImage.height;
canvas.getContext('2d').drawImage(htmlImage, 0, 0, htmlImage.width,
htmlImage.height);
// This data is different in, for example, the latest version of Chrome and Firefox
var pixelData = canvas.getContext('2d').getImageData(0, 0,
htmlImage.width, htmlImage.height).data;

As Sergiu points out, by default the browser may apply color correction, gamma correction, color profiles or anything else to images.
In WebGL, though, you can turn this off. Before uploading the image to the texture, call gl.pixelStorei with gl.UNPACK_COLORSPACE_CONVERSION_WEBGL and pass it gl.NONE, as in
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
This will tell the browser not to apply color spaces, gamma, etc. This was important for WebGL because lots of 3D applications use textures to pass things other than images. Examples include normal maps, height maps, ambient occlusion maps, glow maps, specular maps, and many other kinds of data.
The default setting is
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);
Note this likely only works when taking data directly from an image, not when passing the image through a 2d canvas.
Note that if you're getting the data from a WebGL canvas by drawing it into a 2D canvas, then all bets are off. If nothing else, a 2D canvas uses premultiplied alpha, so copying data into and out of a 2D canvas is always lossy if alpha < 255. Use gl.readPixels if you want the data back unaffected by whatever a 2D canvas does.
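If you need the raw pixels back from the WebGL context itself, a minimal sketch might look like this (`gl`, `width` and `height` are assumed to come from your own setup; note that readPixels returns rows bottom-up, so a flip helper is included):

```javascript
// Sketch: reading back pixels with gl.readPixels. Must run before control
// returns to the browser, or with preserveDrawingBuffer: true on the context.
function readWebGLPixels(gl, width, height) {
  var pixels = new Uint8Array(width * height * 4);
  gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  return flipRows(pixels, width, height); // readPixels rows are bottom-up
}

// Pure helper: flip an RGBA buffer vertically so row 0 is the top row.
function flipRows(pixels, width, height) {
  var rowBytes = width * 4;
  var flipped = new Uint8Array(pixels.length);
  for (var y = 0; y < height; y++) {
    var src = y * rowBytes;
    var dst = (height - 1 - y) * rowBytes;
    flipped.set(pixels.subarray(src, src + rowBytes), dst);
  }
  return flipped;
}
```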
Note that one potential problem with this method is speed. The browser probably assumes when you download an image that it will eventually be displayed. It has no way of knowing in advance that it's going to be used in a texture. So, you create an image tag, set the src attribute, the browser downloads the image, decompresses it, prepares it for display, then emits the load event, you then upload that image to a texture with UNPACK_COLORSPACE_CONVERSION_WEBGL = NONE. The browser at this point might have to re-decompress it if it didn't keep around a version that doesn't have color space conversion already applied. It's not likely a noticeable speed issue but it's also not zero.
To get around this the browsers added the ImageBitmap api. This API solves a few problems.
It can be used in a web worker because it's not a DOM element like Image is
You can pass it a sub-rectangle, so you don't have to fetch the entire image just to ultimately use part of it
You can tell it whether or not to apply color space correction before it starts, avoiding the issue mentioned above.
Unfortunately as of 2018/12 it's only fully supported by Chrome. Firefox has partial support. Safari has none.
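Where it is available, that decode step might look like this sketch (the option names come from the ImageBitmapOptions dictionary; `loadLosslessBitmap` and the feature-detection helper are hypothetical conveniences, not a standard API):

```javascript
// Sketch: decode a blob with createImageBitmap, skipping color space
// conversion, so the pixels reach texImage2D unmodified.
function loadLosslessBitmap(url) {
  return fetch(url)
    .then(function (res) { return res.blob(); })
    .then(function (blob) {
      return createImageBitmap(blob, {
        colorSpaceConversion: 'none', // skip color profiles / gamma
        premultiplyAlpha: 'none'      // keep raw alpha values
      });
    });
}

// Pure helper: feature-detect before relying on the API, since support varies.
function supportsImageBitmap(globalObj) {
  return typeof globalObj.createImageBitmap === 'function';
}
```

The resulting ImageBitmap can be passed to texImage2D in place of the Image element.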

Related

Change DPI of canvas element in HTML5

I'm looking to create a chrome application that allows the user to design a few different things using the canvas element. After they're done, I'd like to use a mask to crop out everything outside a specified area and then send the image to a printer. However, I need to be able to control the printed size precisely.
When I was making a similar app in python, it was a simple matter of setting the DPI to a ratio of the resolution to the dimensions I needed. I'm trying to find if there's a solution as simple as this one for HTML5.
I've found a couple posts asking for similar things, but they're unanswered and don't have a lot of activity.
Any help would be appreciated!
Canvas is a bitmap (raster) format. You cannot simply increase the DPI without horrible artifacts appearing.
You can however record all the drawing, strokes/fills etc, and then at print time create a larger canvas for print and scale up all the drawing commands to fit the higher resolution canvas. As long as you don't use bitmapped images or direct pixel manipulation the results will be good. Though large canvas formats can be problematic on some devices and browsers. A better way is to convert it all to a vector format like SVG and print that. Or skip the canvas altogether and draw to SVG.
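A minimal sketch of that record-and-replay idea (CommandRecorder is a hypothetical helper, not a real API; the replay logic is pure, so only the final context calls touch the browser):

```javascript
// Sketch: store drawing commands instead of pixels, then replay them onto a
// larger canvas at print time with a single scale transform.
function CommandRecorder() {
  this.commands = [];
}
CommandRecorder.prototype.record = function (method, args) {
  this.commands.push({ method: method, args: args });
};
// Replay every recorded command onto `ctx`, scaled by `factor`.
// Works because stroke/fill coordinates scale cleanly; bitmap draws don't.
CommandRecorder.prototype.replay = function (ctx, factor) {
  ctx.save();
  ctx.scale(factor, factor); // one scale call covers all coordinates
  this.commands.forEach(function (cmd) {
    ctx[cmd.method].apply(ctx, cmd.args);
  });
  ctx.restore();
};
```

At print time you would size the print canvas to inches × printer DPI and replay with factor = printDPI / screenDPI.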

How can I speed up this slow canvas DrawImage operation? WebGL? Webworkers?

I am currently combining two high resolution images into an html5 canvas element.
The first image is a JPEG and contains all color data, and the second image is an alpha mask PNG file.
I then combine the two to produce a single RGBA canvas.
The app is dealing with 2048x2048 resolution images that need to also maintain their alpha channel. By using this method as opposed to simply using PNGs, I have reduced the average filesize from around 2-3mb to 50-130kb, plus a 10kb png alpha mask.
The method I use is as follows:
context.drawImage(alpha_mask_png, 0, 0, w, h);
context.globalCompositeOperation = 'source-in';
context.drawImage(main_image_jpeg, 0, 0, w, h);
Unfortunately this operation takes around 100-120ms, and is only carried out once for each pair of images as they are loaded. While this wouldn't normally be an issue, in this case an animation is being rendered to another main visible canvas (of which these high res images are the source art), which suffers from a very noticeable 100ms judder (mostly perceptible in Firefox) whenever new source art is streamed in, loaded, and combined.
What I am looking for is a way to reduce this.
Here is what I have tried so far:
Implemented WebP for Google chrome, removing the need to combine the JPEG and PNG alpha mask altogether. Perfect in Chrome only but need a solution mostly for Firefox (IE 10/11 seems to perform the operation far quicker)
I have tried loading the images in a webworker and decoding them both, followed by combining them. All in pure javascript. This works but is far too slow to be of use.
I have also tried using WebP polyfills. Weppy is very fast and, when run in a web worker, does not affect the main loop. However, it does not support alpha transparency, so it is of no use, which is a real shame as this method is very close. libwebpjs works okay within a web worker but, like my manual decoding of the JPEG/PNG, is far too slow.
EDIT: To further clarify: I have tried transferring the data from my web workers using transferable objects and have even tried turning the result into a blob and creating an objectURL which can then be loaded by the main thread. Although there is no lag in the main thread any more, the images simply take far too long to decode.
This leaves me with WebGL. I have literally no understanding of how WebGL works, other than realising that I would need to load both the JPEG and PNG as separate textures and then combine them with a shader. But I really wouldn't know where to begin.
I have spent some time playing with the code from the following link:
Blend two canvases onto one with WebGL
But to no avail. And to be honest I am concerned that loading the images as textures might actually take longer than my original method anyway.
So to summarise, I am really looking for a way of speeding up the operation on high resolution images. (1920x1080 - 2048x2048) be it with the use of WebGL or indeed any other method.
Any help or suggestions would be greatly appreciated.

Get correct pixels from canvas

When I try to get image data from my png everything works fine on every browser.
For most users it also works fine.
But, on some computers this code
imageData = ctx.getImageData(0, 0, img.width, img.height).data;
returns pixels where some of the colors are lower or higher by 1.
This only happens on Firefox and IE.
Chrome returns correct result even on these computers.
I found out that this could be related to the color profile on the user's computer.
Is there any way to get original data without browser applying color correction?
I'm using offscreen canvas and these images have no alpha channel, so there shouldn't be any problem.
I load image from datauri, if it makes any difference.

Capture/save/export an image with CSS filter effects applied

I'm tooling around to make a simple picture editor that uses CSS3 filter effects (saturation, sepia, contrast, etc.)
Making the picture editor is the easy part; however, saving or exporting the image with the filters applied seems incredibly difficult.
I originally had high hopes it would be possible with niklasvh's html2canvas. Unfortunately, it doesn't capture most CSS3 properties, let alone filter effects.
If anybody has a solution or sadly, definitive knowledge that this just isn't possible, it would be greatly appreciated! Thanks!
As far as I'm aware, you can't apply CSS to graphics inside the HTML5 canvas element (they're not part of the DOM).
However, that's OK! We can still do basic filter effects relatively easily and save them out as an image with just a few lines of JavaScript.
I found a good article that goes over applying a sepia-like effect to the canvas and saving it as an image. Rather than copying it, I'll highlight the larger takeaways from the article.
Modifying the canvas image:
var canvas = document.getElementById('canvasElementId'),
context = canvas.getContext('2d');
var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
for (var i = 0; i < imageData.data.length; i+=4) {
...
}
To get access to the canvas pixel API, you'll need the 2d context of the canvas, as in the JavaScript above. MDN has some great documentation on the API available to you through the context object, but the important part to note here is the getImageData() call. Basically, it will grab all the pixel values in the area that you defined (in the case above, we're grabbing the whole image). Then, with this data in hand, we can iterate through all the pixels and change them as needed. The sepia article is obviously making some interesting adjustments, but you can also do grayscale, blurring, or any other changes as necessary, and there's an awesome canvas filters library on GitHub for just that.
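As a concrete instance of that loop, here is a sketch of a grayscale filter (the luminance weights are the common Rec. 601 values; toGrayscale is a hypothetical helper, not part of the article):

```javascript
// Sketch: a grayscale filter in the same getImageData loop shape described
// above. Pure on the pixel array, so it's easy to test in isolation.
function toGrayscale(data) {
  for (var i = 0; i < data.length; i += 4) {
    // Luminance-weighted average of R, G, B; alpha (i + 3) is untouched.
    var gray = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
    data[i] = data[i + 1] = data[i + 2] = Math.round(gray);
  }
  return data;
}

// In the browser you'd wire it up like:
//   var imageData = context.getImageData(0, 0, canvas.width, canvas.height);
//   toGrayscale(imageData.data);
//   context.putImageData(imageData, 0, 0);
```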
How to save the canvas image:
var canvas = document.getElementById('canvasElementId'),
image = document.createElement("img");
image.src = canvas.toDataURL('image/jpeg');
document.body.appendChild(image);
The above script will select your canvas (assuming you've already done your drawings) and create an image element. It then uses toDataURL() to generate a URL that you can use to set the source of an image element. In the example above, the image element is appended to the document body. You can view more info on MDN's canvas page.
I made a program that does this; it finally works. The steps are:
1. Upload the image (JPG/PNG).
2. Draw it onto a canvas.
3. Adjust it with CSS filters.
4. Render with CamanJS to save it as an image.
You can also reset an effect by setting its filter value back to the default. Good luck!

Why does my canvas go blank after converting to image?

I am trying to convert the canvas element on this page to a png using the following snippet (e.g. enter in JavaScript console):
(function convertCanvasToImage(canvas) {
var image = new Image();
image.src = canvas.toDataURL("image/png");
return image;
})($$('canvas')[0]);
Unfortunately, the png I get is completely blank. Notice also that the original canvas goes blank after resizing the page.
Why does the canvas go blank? How can I convert this canvas to a png?
Kevin Reid's preserveDrawingBuffer suggestion is the correct one, but there is (usually) a better option. The tl;dr is the code at the end.
It can be expensive to put together the final pixels of a rendered webpage, and coordinating that with rendering WebGL content even more so. The usual flow is:
JavaScript issues drawing commands to WebGL context
JavaScript returns, returning control to the main browser event loop
WebGL context turns drawing buffer (or its contents) over to the compositor for integration into web page currently being rendered on screen
Page, with WebGL content, displayed on screen
Note that this is different from most OpenGL applications. In those, rendered content is usually displayed directly, rather than being composited with a bunch of other stuff on a page, some of which may actually be on top of and blended with the WebGL content.
The WebGL spec was changed to treat the drawing buffer as essentially empty after Step 3. The code you're running in devtools is coming after Step 4, which is why you get an empty buffer. This change to the spec allowed big performance improvements on platforms where blanking after Step 3 is basically what actually happens in hardware (like in many mobile GPUs). If you want to work around this to sometimes make copies of the WebGL content after Step 3, the browser would have to always make a copy of the drawing buffer before Step 3, which is going to make your framerate drop precipitously on some platforms.
You can do exactly that and force the browser to make the copy and keep the image content accessible by setting preserveDrawingBuffer to true. From the spec:
This default behavior can be changed by setting the preserveDrawingBuffer attribute of the WebGLContextAttributes object. If this flag is true, the contents of the drawing buffer shall be preserved until the author either clears or overwrites them. If this flag is false, attempting to perform operations using this context as a source image after the rendering function has returned can lead to undefined behavior. This includes readPixels or toDataURL calls, or using this context as the source image of another context's texImage2D or drawImage call.
In the example you provided, the code is just changing the context creation line:
gl = canvas.getContext("experimental-webgl", {preserveDrawingBuffer: true});
Just keep in mind that it will force that slower path in some browsers and performance will suffer, depending on what and how you are rendering. You should be fine in most desktop browsers, where the copy doesn't actually have to be made, and those do make up the vast majority of WebGL capable browsers...but only for now.
However, there is another option (as somewhat confusingly mentioned in the next paragraph in the spec).
Essentially, you make the copy yourself before Step 2: after all your draw calls have finished but before you return control to the browser from your code. This is when the WebGL drawing buffer is still intact and accessible, and you should have no trouble reading the pixels then. You use the same toDataURL or readPixels calls you would use otherwise; it's just the timing that's important.
Here you get the best of both worlds. You get a copy of the drawing buffer, but you don't pay for it in every frame, even those in which you didn't need a copy (which may be most of them), like you do with preserveDrawingBuffer set to true.
In the example you provided, just add your code to the bottom of drawScene and you should see the copy of the canvas right below:
function drawScene() {
...
var webglImage = (function convertCanvasToImage(canvas) {
var image = new Image();
image.src = canvas.toDataURL('image/png');
return image;
})(document.querySelectorAll('canvas')[0]);
window.document.body.appendChild(webglImage);
}
Here are some things to try. I don't know whether either of these should be necessary to make this work, but they might make a difference.
Add preserveDrawingBuffer: true to the getContext attributes.
Try doing this with a later tutorial which does animation; i.e. draws on the canvas repeatedly rather than just once.
toDataURL() reads from the drawing buffer, so you don't need preserveDrawingBuffer: true. You just need to call render() again immediately before reading the data. Finally:
renderer.render(scene, camera);
renderer.domElement.toDataURL();
