I have dynamically loaded a JPEG image into a 2D html canvas and I would like to access the raw YCbCr pixel values.
From what I understand, JPEG encodes the pixel data as YCbCr, but Chrome appears to convert it to RGB when it is accessed with getImageData().
I have tried the standard RGB to YCbCr conversion calculations, but the conversion appears to be lossy, as the values don't map perfectly 1-to-1.
Is it possible to access the raw YCbCr pixel values in JavaScript?
Yes and no.
When the browser loads an image through the native methods (i.e. the Image element), YCbCr data (in the case of JPEGs) is automatically converted to RGB space, and ICC and gamma correction are applied by browsers that support them.
This means that by the time the image is loaded and ready for use, it is already in RGB space (which is correct, as the monitor is RGB). The canvas plays no part in this process and will always hold pixels as RGBA. All browsers supporting canvas will therefore return the data as RGBA when you use getImageData().
The only way to access raw YCbCr is to do a low-level parse of the raw file yourself. You can do this in any language that can iterate bytes, including JavaScript. For JS I would recommend using typed arrays and DataViews to parse the data, but be warned: this is tedious and error-prone (it is a whole project on its own), though fully possible if you absolutely need the raw data.
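To give a sense of what that parsing involves, here is a minimal sketch (an illustration, not a decoder) that walks the JPEG segment markers with a DataView. Actually recovering YCbCr samples additionally requires Huffman decoding, dequantization and an inverse DCT:

function listJpegMarkers(arrayBuffer) {
  var view = new DataView(arrayBuffer);
  var offset = 2; // skip the SOI marker (0xFFD8)
  var markers = [];
  while (offset < view.byteLength - 1) {
    if (view.getUint8(offset) !== 0xFF) break; // not at a marker; bail out
    var marker = view.getUint8(offset + 1);
    if (marker === 0xDA) break;                // SOS: entropy-coded data follows
    var length = view.getUint16(offset + 2);   // big-endian, includes the length bytes
    markers.push({ marker: marker.toString(16), offset: offset });
    offset += 2 + length;
  }
  return markers;
}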
Another way is to convert the RGB values back to YCbCr, but chroma subsampling and the aforementioned color and gamma corrections will likely affect the result, so it will not be identical to the original raw data (which is in compressed form in any case). You can also re-export the canvas as a JPEG using the toDataURL() method on the canvas element.
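For reference, a minimal sketch of the conversion usually cited for JPEG/JFIF (ITU-R BT.601 full-range coefficients; whether this matches a given decoder's exact math is an assumption):

function rgbToYCbCr(r, g, b) {
  // ITU-R BT.601 full-range coefficients, as used by JFIF
  var y  =  0.299    * r + 0.587    * g + 0.114    * b;
  var cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128;
  var cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128;
  return [Math.round(y), Math.round(cb), Math.round(cr)];
}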
I know this is a very old question, but I wanted to provide the update that Chrome now rasterizes some JPEG and WebP images directly from YUV planes. You must have GPU rasterization enabled (and Metal disabled), and you might sometimes need to enable the feature from chrome://flags.
You can run a trace while decoding an image, include the blink category, and see whether any events for YUV decoding pop up. If so, you can experiment with trying to access the YUV planes directly from JavaScript, although I'm admittedly doubtful that will work out of the box (for plumbing/security reasons).
Related
I want to use a base64-encoded png that I retrieve from a server in WebGL.
To do this, I load the encoded png into an html Image object.
For my application, I need the png data to be absolutely lossless, but the pixel values retrieved by the shader differ across browsers...
(if I load the Image into a canvas and use getImageData, the retrieved pixel values are different across browsers as well).
There must be some weird filtering/compression of pixel values happening, but I can't figure out how and why. Anyone familiar with this problem?
Loading the image from the server:
var htmlImage = new Image();
htmlImage.src = BASE64_STRING_FROM_SERVER;
Loading the image into the shader:
ctx.texImage2D(ctx.TEXTURE_2D, 0, ctx.RGB, ctx.RGB, ctx.UNSIGNED_BYTE,
htmlImage);
Trying to read the pixel values using a canvas (different values across browsers):
var canvas = document.createElement('canvas');
canvas.width = htmlImage.width;
canvas.height = htmlImage.height;
canvas.getContext('2d').drawImage(htmlImage, 0, 0, htmlImage.width,
htmlImage.height);
// This data is different in, for example, the latest version of Chrome and Firefox
var pixelData = canvas.getContext('2d').getImageData(0, 0,
htmlImage.width, htmlImage.height).data;
As @Sergiu points out, by default the browser may apply color correction, gamma correction, color profiles or anything else to images.
In WebGL, though, you can turn this off. Before uploading the image to the texture, call gl.pixelStorei with gl.UNPACK_COLORSPACE_CONVERSION_WEBGL and pass it gl.NONE, as in
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.NONE);
This will tell the browser not to apply color spaces, gamma, etc. This was important for WebGL because lots of 3D applications use textures to pass things other than images. Examples include normal maps, height maps, ambient occlusion maps, glow maps, specular maps, and many other kinds of data.
The default setting is
gl.pixelStorei(gl.UNPACK_COLORSPACE_CONVERSION_WEBGL, gl.BROWSER_DEFAULT_WEBGL);
Note this likely only works when taking data directly from an image, not when passing the image through a 2d canvas.
Note that if you're getting the data from a WebGL canvas by drawing it into a 2D canvas, then all bets are off. If nothing else, a 2D canvas uses premultiplied alpha, so copying data into and out of a 2D canvas is always lossy if alpha < 255. Use gl.readPixels if you want the data back unaffected by whatever the 2D canvas does.
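For completeness, a minimal readPixels sketch, assuming a WebGL context gl whose drawing buffer was just rendered to (read back immediately after rendering, or create the context with preserveDrawingBuffer: true):

var pixels = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
              gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// pixels now holds the raw RGBA bytes, untouched by 2D-canvas premultiplication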
Note that one potential problem with this method is speed. The browser probably assumes that a downloaded image will eventually be displayed; it has no way of knowing in advance that it's going to be used in a texture. So you create an image tag, set the src attribute, the browser downloads the image, decompresses it, prepares it for display, and then emits the load event; you then upload that image to a texture with UNPACK_COLORSPACE_CONVERSION_WEBGL set to NONE. At that point the browser might have to re-decompress the image if it didn't keep around a version without color space conversion applied. It's unlikely to be a noticeable speed issue, but it's also not zero.
To get around this, browsers added the ImageBitmap API. It solves a few problems:
It can be used in a web worker, because it's not a DOM element like Image is.
You can pass it a sub-rectangle, so you don't have to fetch the entire image just to ultimately use some part of it.
You can tell it whether or not to apply color space conversion before decoding starts, avoiding the issue mentioned above (see the sketch below).
Unfortunately as of 2018/12 it's only fully supported by Chrome. Firefox has partial support. Safari has none.
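A minimal sketch of that third point, assuming a WebGL context gl is already set up and a hypothetical image URL:

fetch('/path/to/image.png')
  .then(function(response) { return response.blob(); })
  .then(function(blob) {
    // Ask the browser to decode without applying color space conversion
    return createImageBitmap(blob, { colorSpaceConversion: 'none' });
  })
  .then(function(bitmap) {
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, bitmap);
  });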
I'm using a lazyload mechanism that only loads the relevant images once they're in the users viewport.
For this I've defined a data-src attribute that links to the original image, and a base64-encoded placeholder image as the src attribute to keep the HTML valid.
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAAAXNSR0IArs4c6QAAAARnQU1BAACxjwv8YQUAAAAJcEhZcwAADsQAAA7EAZUrDhsAAAANSURBVBhXYzh8+PB/AAffA0nNPuCLAAAAAElFTkSuQmCC" data-src="/path/to/image.png" alt="some text">
I noticed that Chrome caches the base64 string, but the string is quite long and bloats my HTML (I have a lot of images on a page).
So my question is whether it's better to use a small base64-encoded image or a 1x1 px placeholder image?
Note:
For SEO purposes the element must be an img. Also my HTML must be valid, so a src attribute is required.
You can use this shorter (but valid!) image in the src attribute (a 1x1 pixel GIF):
data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=
Note that if you gzip your HTML (which you should), the length of the string won't be that important because repetitive strings compress well.
Depending on your needs you might want to use a color for the 1x1 pixel (results in shorter gif files). One way to do this is using Photoshop or a similar tool to create the 1x1 pixel GIF in the right color, and then using a tool like ImageOptim to find the best compression. There's various online tools to convert the resulting file to a data URL.
I'd use the placeholder in your situation.
Using a base64-encoded image partly defeats the purpose of lazy loading, since you're still sending some image data to the browser. If anything this could be detrimental to performance, since that image data is downloaded as part of the original HTML request rather than via a separate request, as the browser would make for an image tag with a URL.
Ideally, if it's just a 'loading' placeholder or something similar, I'd create it in CSS and then replace it with the loaded image once the user scrolls down far enough to trigger the loading of that particular image; a sketch of that swap follows.
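A minimal sketch of the swap itself, reusing the data-src markup from the question (IntersectionObserver is just one possible trigger mechanism; a scroll handler works too):

// Swap data-src into src once an image enters the viewport
var observer = new IntersectionObserver(function(entries) {
  entries.forEach(function(entry) {
    if (entry.isIntersecting) {
      var img = entry.target;
      img.src = img.getAttribute('data-src');
      observer.unobserve(img);
    }
  });
});
document.querySelectorAll('img[data-src]').forEach(function(img) {
  observer.observe(img);
});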
I noticed that chrome caches the base64 string but the string is quite long and bloats my HTML (I have a lot of images on a page).
If that is the case, consider placing a 'real' src attribute pointing to always the same placeholder. You do need an extra HTTP request, but:
it will almost certainly be pipelined and take little time.
it will trigger the image caching mechanism, which base64 does not, so the image will actually be decoded only once. Not a great issue given today's CPUs and GPUs, but anyway.
it will also be cached as a resource, and with the correct headers, it will stay a long time, giving zero load time in all subsequent page hits from the same client.
If the number of images on a page is significant, you might easily be better off with a "real" image.
I'd go as far as to venture that it will be more compatible with browsers, spiders and what not -- base64 encoding is widely supported, but plain images are even more so.
Even compared with the smallest images you can get in base64, 26 bytes become this
src="data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs="
while you can go from
src="/img/p.png"
all the way to
src="p.png"
which looks quite unbloaty - if such a word even exists.
Test
I ran a very basic test
<html>
<body>
<?php
switch($_GET['t']) {
case 'base64':
$src = 'data:image/gif;base64,R0lGODlhAQABAAD/ACwAAAAAAQABAAACADs=';
break;
case 'gif':
$src = 'p.gif';
break;
}
print str_repeat("<img src=\"{$src}\"/>", $_GET['n']);
?>
</body>
</html>
and I got:
images  mode    DOMContentLoaded  Load     Result
200     base64  202ms             310ms    base64 is best
200     gif     348ms             437ms
1000    base64  559ms             622ms    base64 is best
1000    gif     513ms             632ms
2000    base64  986ms             1033ms   gif is best
2000    gif     811ms             947ms
So, at least on my machine, it would seem I'm giving you bad advice, since you see no advantage in page load time until you have almost two thousand images.
However:
this heavily depends on server and network setup, and even more on the actual DOM layout.
I only ran one test for each set, which is bad statistics, using Firebug, which is bad methodology. If you want solid data, run several dozen page loads in each mode using a web performance monitoring tool and a clone of your real page.
(what about using PNG instead of gif?)
I've experienced good results with inline SVG for responsive image placeholders as described here.
Basically, you put something like
data:image/svg+xml,%3csvg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 500 500'%3e%3c/svg%3e
in your <img>'s src attribute. Take care to keep the viewBox aspect ratio in line with your real image's dimensions. This way your layout won't jump around, causing unnecessary repaints.
I know converting any image format to SVG is not an easy task, and it is not something I am pursuing for complex images. Let me explain why I am asking this question. As a web designer, there are elements in a site that I will commonly use an image for. These include simple geometric shapes, fleurs-de-lis, some simple logos, etc. The cost of these HTTP requests is relatively small (we are talking KBs here).
However, I am interested in whether these requests could be made even smaller by using SVG or canvas elements instead of an image format for simple images. Has there been any research or testing comparing SVG with ordinary images? Is it possible to make HTTP requests even smaller by using canvas for simple elements on my page? If so, that would be great news; I could even start creating libraries of canvas images to re-use and share with others.
For simple shapes gzipped SVG will be pretty small and in modern browsers it's quite usable.
However, for page loading performance number of requests is a very big factor, so you'll get significant performance boost if you use CSS sprite sheets regardless of the image format.
If by canvas you mean storing shapes as JavaScript drawing commands (or a library that issues them based on some JSON) then it's unlikely to be big bandwidth saving compared to gzipped SVG (SVG has quite efficient format for defining paths, and gzip removes overhead of XML quite well).
You will have to wait for JS to load and either insert several canvases all over the document or burn CPU on compressing generated images and building data: URLs, so I don't think it'd be faster overall than using SVG or optimized PNG.
I think this will depend on the case.
Sprites can help, but what if your sprite contains only some simple lines yet must be 500px × 500px? Then a canvas (or even SVG) will indeed be a lot smaller. On top of that, external JavaScript files are usually cached (even better than images), so in that regard they do help slim down the weight.
However, if you expect IE to also 'work', you might need to provide some HTML5/canvas shims, and yes, that would increase the weight.
Edit:
Look at this jsfiddle example (which I whipped up for this question) where a full-size 'granite' background is created with a radial gradient.
As you can see (in the source), this is far fewer bytes than a separate image (and will always be pixel-perfect).
On the other hand, it also (depending on the complexity) takes some calculation time that regular images wouldn't need (after the necessary bytes have been downloaded to the client).
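To illustrate the general idea (this is a minimal sketch, not the linked fiddle itself), filling a full canvas with a radial gradient instead of downloading a background image looks like this:

var canvas = document.createElement('canvas');
canvas.width = 500;
canvas.height = 500;
var ctx = canvas.getContext('2d');
// Radial gradient from a lighter center to a darker edge
var gradient = ctx.createRadialGradient(250, 250, 50, 250, 250, 350);
gradient.addColorStop(0, '#9a9a9a');
gradient.addColorStop(1, '#4d4d4d');
ctx.fillStyle = gradient;
ctx.fillRect(0, 0, 500, 500);
document.body.appendChild(canvas);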
Edit2:
I found a link where they did a little test.
In our sprite example, the raw SVG file was 2445 bytes. The PNG version was only 1064 bytes, and the double-sized PNG for double-pixel ratio devices was 1932 bytes. On first appearance, the vector file loses on all accounts, but for larger images, the raster version more quickly escalates in size.
SVG files are also human-readable due to being in XML format. They generally comprise a very limited range of characters, which means they can be heavily Gzip-compressed when sent over HTTP. This means that the actual download size is many times smaller than the raw file; easily beyond 30%, probably a lot more. Raster image formats such as PNG and JPG are already compressed to their fullest extent.
However, note that all this still depends on how complex the image is. As an extreme counter-example, try to describe a full-color photo of a forest (leaves...) in SVG versus simply using a JPG.
The size always depends on the specific images; sorry, but you have to test it on your own. There is no way to compare this in general.
This is a LONG shot, but maybe someone out here is a super-genius.
I want to combine 4 jpgs in a grid
1 2
3 4
I want to do this client side, without SVG (because IE blows).
I'm looking at manipulating the code inside the jpg files to produce a single, new image. I will get them all as same-sized jpgs, and I can output them to another format (like bitmap) if necessary. I'm also willing to accept solutions using any special IE "features" like VML. IE8 is the target audience.
I realize I can do this with SVG. I realize I can do this server-side. I realize I can css them next to each other. I need a single image from the 4 originals, as a string is fine (even preferable) because I can base64 encode it and throw it in an image element.
Thanks!
I am pretty sure that your only option (on the client) is Flash-based canvas libraries like FX Canvas. They give you the full complement of standard canvas methods (such as toDataURL, which you need the most)... Normally I would not advocate using Flash, but using a library that emulates canvas will make it a breeze to migrate your code when your client decides to upgrade their browser set.
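For illustration, a minimal sketch of the combining step itself, written against the standard canvas API (which is what such a library emulates). The image variables are hypothetical, and all four images are assumed to be the same size and already loaded:

function combineToGrid(img1, img2, img3, img4) {
  var w = img1.width, h = img1.height;
  var canvas = document.createElement('canvas');
  canvas.width = w * 2;
  canvas.height = h * 2;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img1, 0, 0); // top-left
  ctx.drawImage(img2, w, 0); // top-right
  ctx.drawImage(img3, 0, h); // bottom-left
  ctx.drawImage(img4, w, h); // bottom-right
  return canvas.toDataURL('image/png'); // base64 string usable as an img src
}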
I need to scale images in array form in a Web Worker. If I was outside a web worker I could use a canvas and drawImage to copy certain parts of an image or scale it.
It looks like I can't use a canvas in a web worker, so what can I do? Is there any pure JavaScript library that can help me?
Thanks a lot in advance.
Scaling can be done in various ways, but they all boil down to either removing or creating pixels in the image. Since images are essentially matrices (representable as arrays) of pixel values, you can look at scaling up an image as enlarging that array and filling in the blanks, and scaling down an image as shrinking the array by leaving values out.
That being said, it is typically not that difficult to write your own scale function in JavaScript that works on arrays. Since I understand that you already have the images in the form of a JavaScript array, you can pass that array in a message to the Web Worker, scale it with your scale function, and send the scaled array back to the main thread.
In terms of representation I would advise you to use Uint8ClampedArray, which was designed for RGBA-encoded images (color with alpha channel) and is more efficient than a normal JavaScript array. You can also easily send Uint8ClampedArray objects in messages to your Web Worker, so that won't be a problem. Another benefit is that Uint8ClampedArray is used in the ImageData datatype (replacing the older CanvasPixelArray) of the Canvas API. This means that it is quite easy to draw your scaled image back on a canvas (if that was what you wanted), simply by getting the current ImageData of the canvas' 2D context using ctx.getImageData() and changing its data attribute to your scaled Uint8ClampedArray object.
By the way, if you don't have your images as arrays yet you can use the same method. First draw the image on the canvas and then use the data attribute of the current ImageData object to retrieve the image in a Uint8ClampedArray.
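A minimal sketch of that round trip, assuming an already-loaded image img and a hypothetical worker script scaler.js that scales the pixels and posts them back:

// Extract the pixels on the main thread
var canvas = document.createElement('canvas');
canvas.width = img.width;
canvas.height = img.height;
var ctx = canvas.getContext('2d');
ctx.drawImage(img, 0, 0);
var pixels = ctx.getImageData(0, 0, img.width, img.height).data;

// Hand them to the worker for scaling
var worker = new Worker('scaler.js'); // hypothetical worker script
worker.postMessage({ pixels: pixels, width: img.width, height: img.height });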
Regarding methods to upscale an image, there are basically two components you need to implement. The first is to distribute the known pixels (i.e. the pixels from the image you are scaling) over the larger new array that you have created. An obvious way is to distribute all the pixels evenly over the new space. For example, if you are making an image twice as wide, you simply skip a position after each pixel, leaving blanks in between.
The second component is then to fill in those blanks, which can be slightly less straightforward. However, there are several methods that are fairly easy. (On the other hand, if you have some knowledge of Computer Vision or Image Processing, you might want to look at some more advanced methods.) An easy and somewhat obvious method is to interpolate each unknown pixel position using its nearest neighbor (i.e. the closest pixel value that is known) by duplicating the known pixel's color. This typically results in the effect of bigger pixels (larger blocks of the same color) when you scale the image up too much. Instead of duplicating the color of the closest pixel, you can also take the average of several known pixels that are nearby, possibly combined with weights where closer pixels count more towards the average than pixels that are farther away. Other methods include blurring the image using Gaussians. If you want to find out which method is best for your application, look at some pages about image interpolation. Of course, remember that scaling up always means filling in stuff that isn't really there, which will always look bad if you do it too much.
As far as scaling down is concerned, one typically just removes pixels by transferring only a selection of them from the current array to the smaller array. For example, if you wanted to make an image half as wide, you roughly iterate through the current array in steps of 2 (this depends a bit on the dimensions of the image, even or odd, and the representation you are using). There are methods that do this better by removing the pixels that will be missed the least, but I don't know enough about them.
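To make this concrete, here is a minimal sketch of nearest-neighbor resampling on an RGBA Uint8ClampedArray; it covers both scaling up and scaling down, with the quality caveats described above:

function nearestNeighborResize(src, srcW, srcH, dstW, dstH) {
  var dst = new Uint8ClampedArray(dstW * dstH * 4);
  for (var y = 0; y < dstH; y++) {
    var srcY = Math.floor(y * srcH / dstH); // nearest source row
    for (var x = 0; x < dstW; x++) {
      var srcX = Math.floor(x * srcW / dstW); // nearest source column
      var s = (srcY * srcW + srcX) * 4;
      var d = (y * dstW + x) * 4;
      dst[d] = src[s];         // R
      dst[d + 1] = src[s + 1]; // G
      dst[d + 2] = src[s + 2]; // B
      dst[d + 3] = src[s + 3]; // A
    }
  }
  return dst;
}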
By the way, all of this is practically unrelated to web workers. You would do it in exactly the same way if you wanted to scale images in JavaScript on the main thread, or in any other language for that matter. Web Workers are, however, a very nice way to run these calculations on a separate thread instead of on the UI thread, which means the website itself does not appear unresponsive. However, like you said, everything that involves the canvas element needs to be done on the main thread, but scaling arrays can be done anywhere.
Also, I'm sure there are JavaScript libraries that can do this for you and depending on their methods you can also load them in your Web Worker using importScripts. But I would say that in this case it might just be easier and a lot more fun to try to write it yourself and make it tailor-made for your purpose.
And depending on how advanced your programming skills are and the speed at which you need to scale, you can always try to do this on the GPU instead of the CPU using WebGL, though that seems like slight overkill in this case. You could also chop your image into several pieces and scale the separate parts on several Web Workers, making it multi-threaded, although combining the parts afterwards is certainly not trivial. Perhaps multi-threading makes more sense when you have a lot of images that need to be scaled on the client side.
It all really depends on your application, the images and your own skills and desires.
Anyway, I hope that roughly answers your question.
I feel some specifics on mslatour's answer are needed, since I just spent 6 hours trying to figure out how to "…simply… change its data attribute to your scaled Uint8ClampedArray object". To do this:
① Send your array back from the web-worker. Use the form:
self.postMessage(bufferToReturn, [bufferToReturn]);
to pass your buffer to and from the web worker without making a copy of it, if you don't want one. (It's faster this way.) (There is some MDN documentation, but I can't link to it as I'm out of rep. Sorry.) Anyway, you can also put the first bufferToReturn inside arrays or objects, like this:
self.postMessage({buffer:bufferToReturn, width:500, height:500}, [bufferToReturn]);
You use something like
webWorker.addEventListener('message', function(event) {your code here})
to listen for a posted message. (In this case, the events being posted come from the web worker and the listener is in your normal JS code. It works the same the other way around; just switch the 'self' and 'webWorker' variables.)
② In your browser-side Javascript (as opposed to worker-side), you can use imageData.data.set() to "simply" change the data attribute and put it back in the canvas.
var imageData = context2d.createImageData(width, height);
imageData.data.set(new Uint8ClampedArray(bufferToReturn));
context2d.putImageData(imageData, x_offset, y_offset);
I would like to thank hacks.mozilla.org for alerting me to the existence of the data.set() method.
p.s. I don't know of any libraries to help with this… yet. Sorry.
I have yet to test it out myself, but there is a pure JS library that might be of use here:
https://github.com/taisel/JS-Image-Resizer