How can I upload large images to the GPU using WebGL without freezing up the browser (think of high-res skyboxes or texture atlases)?
My first thought was to look for a way to make texImage2D do its thing asynchronously (uploading images to the GPU is IO-ish, right?), but I couldn't find one.
I then tried using texSubImage2D to upload small chunks that fit in a 16 ms time window (I'm aiming for 60 fps). But texSubImage2D only takes an offset AND width/height parameters if you pass an ArrayBufferView; when passing an Image object you can only specify the offset, and it will (I'm guessing) upload the whole image. I imagine painting the image to a canvas first (to get it as a buffer) is just as slow as uploading the whole thing to the GPU.
Here's a minimal example of what I mean: http://jsfiddle.net/2v63f/3/.
It takes ~130 ms to upload this image to the GPU.
Exact same code as on jsfiddle:
var canvas = document.getElementById('can');
var gl = canvas.getContext('webgl');
var image = new Image();
image.crossOrigin = "anonymous";
//image.src = 'http://i.imgur.com/9Tq28Qj.jpg?1';
image.src = 'http://i.imgur.com/G0qL97y.jpg';
image.addEventListener('load', function () {
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    var now = performance.now();
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    console.log(performance.now() - now);
});
The referenced image appears to be 3840x1920.
For a high-res skybox you can usually drop the alpha channel, and then decide whether some other pixel format can provide a justifiable trade-off between quality and data size.
The specified RGBA encoding means this will require a 29,491,200-byte data transfer after the image is decoded. I attempted to test with RGB, discarding the alpha, but saw a similar 111 ms transfer time.
Assuming you can pre-process the images before use, and care only about the measured data-transfer time to the GPU, you can perform some form of lossy encoding or compression on the data to decrease the bandwidth you need to transfer.
One of the more trivial encodings is to halve your data size by sending the data to the chip as RGB 565. This decreases the data size to 14,745,600 bytes at the cost of color range.
var buff = new Uint16Array(image.width*image.height);
//encode image to buffer
//on texture load-time, perform the following call
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, image.width, image.height, 0, gl.RGB, gl.UNSIGNED_SHORT_5_6_5, buff);
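The "encode image to buffer" step above is left out, so here is a minimal sketch of the 565 packing. The helper name `packRGB565` is my own, and the pixel data is assumed to come from drawing the image to a 2D canvas and calling getImageData:

```javascript
// Pack 8-bit RGBA pixels into 16-bit RGB565 values.
// `rgba` is a Uint8ClampedArray/Uint8Array of length width*height*4,
// e.g. ctx.getImageData(0, 0, w, h).data after drawing the image.
function packRGB565(rgba) {
    var out = new Uint16Array(rgba.length / 4);
    for (var i = 0, j = 0; i < rgba.length; i += 4, j++) {
        var r = rgba[i] >> 3;     // keep the top 5 bits of red
        var g = rgba[i + 1] >> 2; // keep the top 6 bits of green
        var b = rgba[i + 2] >> 3; // keep the top 5 bits of blue
        out[j] = (r << 11) | (g << 5) | b; // alpha is discarded
    }
    return out;
}
```

Note that reading the pixels back through a canvas has its own cost, so this only pays off if the packing happens ahead of time (or off the main thread), not in the per-frame path.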
Assuming you can confirm support for the S3TC extension (https://developer.mozilla.org/en-US/docs/Web/WebGL/Using_Extensions#WEBGL_compressed_texture_s3tc), you could also store and download the texture in DXT1 format, decreasing the transfer requirement to 3,686,400 bytes.
It appears any form of S3TC will result in the same RGB565 color-range reduction.
var ext = (
    gl.getExtension("WEBGL_compressed_texture_s3tc") ||
    gl.getExtension("MOZ_WEBGL_compressed_texture_s3tc") ||
    gl.getExtension("WEBKIT_WEBGL_compressed_texture_s3tc")
);
gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT, image.width, image.height, 0, buff);
Most block compression formats trade decreased image quality up close for richer imagery at a distance.
Should you need high resolution up close, a lower-resolution or compressed texture could be loaded initially to serve as a poor LOD for distant views, and a smaller subset of the higher resolution could be loaded when approaching the texture in question.
Within my trivial tests at http://jsfiddle.net/zy8hkby3/2/ , the texture-payload-size reductions produce sharply decreasing data-transfer times, but this is most likely GPU-dependent.
I'm a server-side dev learning the ropes of client side manipulation, starting with pure JS.
Currently I'm using pure JS to resize the dimensions of images uploaded via the browser.
I'm running into a situation where downsizing a 1018 x 1529 .jpg file to a 400 x 601 .jpeg is producing a file with a bigger size (in bytes). It goes from 70013 bytes to 74823 bytes.
My expectation is that there ought to be a size reduction, not inflation. What is going on, and is there any way to patch this kind of a situation?
Note: one point that especially perplexes me is that each image's compression starts without any prior knowledge of the target's previous compressions. Thus, any quality level below 100 should further degrade the image, and accordingly always decrease the file size. But strangely that doesn't happen.
If required, my relevant JS code is:
var max_img_width = 400;
var wranges = [max_img_width, Math.round(0.8*max_img_width), Math.round(0.6*max_img_width),Math.round(0.4*max_img_width),Math.round(0.2*max_img_width)];
function prep_image(img_src, text, img_name, target_action, callback) {
    var img = document.createElement('img');
    var fr = new FileReader();
    fr.onload = function() {
        var dataURL = fr.result;
        img.onload = function() {
            var img_width = this.width;
            var img_height = this.height;
            var img_to_send = resize_and_compress(this, img_width, img_height, "image/jpeg");
            callback(text, img_name, target_action, img_to_send);
        };
        img.src = dataURL;
    };
    fr.readAsDataURL(img_src);
}
function resize_and_compress(source_img, img_width, img_height, mime_type) {
    var new_width;
    switch (true) {
        case img_width < wranges[4]:
            new_width = wranges[4];
            break;
        case img_width < wranges[3]:
            new_width = wranges[4];
            break;
        case img_width < wranges[2]:
            new_width = wranges[3];
            break;
        case img_width < wranges[1]:
            new_width = wranges[2];
            break;
        case img_width < wranges[0]:
            new_width = wranges[1];
            break;
        default:
            new_width = wranges[0];
            break;
    }
    var wpercent = (new_width / img_width);
    var new_height = Math.round(img_height * wpercent);
    var canvas = document.createElement('canvas'); // supported
    canvas.width = new_width;
    canvas.height = new_height;
    var ctx = canvas.getContext("2d");
    ctx.drawImage(source_img, 0, 0, new_width, new_height);
    return dataURItoBlob(canvas.toDataURL(mime_type), mime_type);
}
// converting an image data URI to a Blob object
function dataURItoBlob(dataURI, mime_type) {
    var byteString = atob(dataURI.split(',')[1]);
    var ab = new ArrayBuffer(byteString.length);
    var ia = new Uint8Array(ab); // supported
    for (var i = 0; i < byteString.length; i++) {
        ia[i] = byteString.charCodeAt(i);
    }
    return new Blob([ab], { type: mime_type });
}
(The test image and a link to its original location were attached to the original question.)
Note that for several other images I tried, the code did behave as expected. It doesn't always screw up the results, but now I can't be sure that it'll always work. Let's stick to pure JS solutions for the scope of this question.
Why canvas is not the best option to shrink an image's file size.
I won't go into too much detail or in-depth explanations, but I will try to explain the basics of what you encountered.
Here are a few concepts you need to understand (at least partially).
What is a lossy image format (like JPEG)
What happens when you draw an image to a canvas
What happens when you export a canvas image to an image format
Lossy Image Format.
Image formats can be divided into three categories:
raw Image formats
lossless image formats (tiff, png, gif, bmp, webp ...)
lossy image formats (jpeg, ...)
Lossless image formats generally just compress the data, e.g. in a table mapping pixel colors to the pixel positions where each color is used.
On the other hand, lossy image formats discard information and produce approximations of the data (artifacts) from the raw image, in order to create a perceptively similar rendering using less data.
Approximation (artifacts) works because the decompression algorithm knows it will have to spread the color information over a given area, and thus it doesn't have to keep every pixel's information.
But once the algorithm has processed the raw image and produced the new one, there is no way to recover the lost data.
Drawing an image to the canvas.
When you draw an image on a canvas, the browser converts the image information to a raw bitmap.
It stores no information about the image format that was passed to it, and in the case of a lossy image, every pixel contained in an artifact becomes a first-class citizen, just like every other pixel.
Exporting a canvas image
The canvas 2D API has three methods to export its raw data:
getImageData, which returns the raw pixels' RGBA values
toDataURL, which synchronously applies the compression algorithm corresponding to the MIME type you pass as an argument
toBlob, similar to toDataURL, but asynchronous
The case we are interested in is the one of toDataURL and toBlob along with the "image/jpeg" MIME.
Remember that when calling these methods, the browser only sees the current raw pixel data it has on the canvas. So it will apply the JPEG algorithm once again, removing some data and producing new approximations (artifacts) from this raw image.
So yes, there is a 0-1 quality parameter available for lossy compression in these methods, and one could think of trying to recover the loss level used to generate the original image. But even then, since we actually produced new image data in the drawing-to-canvas step, the algorithm might not be able to produce a good spreading scheme for these artifacts.
Another thing to take into consideration, mostly for toDataURL, is that browsers have to be as fast as possible when doing these operations, so they will generally prefer speed over compression quality.
Alright, the canvas is not good for it. What then?
It's not so easy for jpeg images... jpegtran claims it can do lossless scaling of jpeg images, so I guess a JS port should be possible too, but I don't know of any...
Special note about lossless formats
Note that your resizing algorithm can also produce bigger png files, here is an example case, but I'll let the reader guess why this happens:
var ctx = c.getContext('2d');
c.width = 501;
for (var i = 0; i < 500; i += 10) {
    ctx.moveTo(i + .5, 0);
    ctx.lineTo(i + .5, 150);
}
ctx.stroke();
c.toBlob(b => console.log('original', b.size));
c2.width = 500;
c2.height = (500 / 501) * c.height;
c2.getContext('2d').drawImage(c, 0, 0, c2.width, c2.height);
c2.toBlob(b => console.log('resized', b.size));
<canvas id="c"></canvas>
<canvas id="c2"></canvas>
This is a recommendation and not really a fix (or a solution).
If you've run into this problem, make sure you compare the file sizes of the two images once you've completed the resize operation. If the new file is larger, then simply fallback to the source image.
When you load a WebGL texture directly from a DOM image, how do you tell whether the image has an alpha channel? Is there any way except guessing based on the filename (e.g. "contains .PNG may be RGBA otherwise RGB")? There is a width and height on the DOM image, but nothing I can see that says what format it is. i.e.:
const img = await loadDOMImage(url);
const format = gl.RGBA; //Does this always need to be RGBA? I'm wasting space in most cases where it's only RGB
const internalFormat = gl.RGBA;
const type = gl.UNSIGNED_BYTE; //This is guaranteed to be correct, right? No HDR formats supported by the DOM?
gl.texImage2D(gl.TEXTURE_2D, 0, internalFormat, format, type, img);
My load function looks like this FWIW:
async loadDOMImage(url) {
    return new Promise(
        (resolve, reject) => {
            const img = new Image();
            img.crossOrigin = 'anonymous';
            img.addEventListener('load', function() {
                resolve(img);
            }, false);
            img.addEventListener('error', function(err) {
                reject(err);
            }, false);
            img.src = url;
        }
    );
}
how do you tell if the image has an alpha channel or not?
You can't. You can only guess.
you could see that the URL ends in .png and assume it has alpha, but you might be wrong
you could draw image into a 2D canvas then call getImageData, read all the alpha pixels and see if any of them are not 255
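A rough sketch of that second approach (both helper names here are my own invention): draw the image to a 2D canvas, read the pixels back, and scan the alpha bytes.

```javascript
// Scan RGBA pixel data for any alpha byte below 255.
function anyAlphaBelow255(rgba) {
    for (var i = 3; i < rgba.length; i += 4) {
        if (rgba[i] !== 255) return true;
    }
    return false;
}

// Draw the image to a 2D canvas and check its decoded pixels.
function hasTransparentPixels(img) {
    var c = document.createElement('canvas');
    c.width = img.width;
    c.height = img.height;
    var ctx = c.getContext('2d');
    ctx.drawImage(img, 0, 0);
    var data = ctx.getImageData(0, 0, c.width, c.height).data;
    return anyAlphaBelow255(data);
}
```

Keep in mind this costs a full decode and readback of the image, and getImageData only works if the image is same-origin or CORS-enabled.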
const format = gl.RGBA;
Does this always need to be RGBA? I'm wasting space in most cases where its only RGB
It's unlikely to waste space. Most GPUs work best with RGBA values so even if you choose RGB it's unlikely to save space.
const type = gl.UNSIGNED_BYTE;
This is guaranteed to be correct, right?
texImage2D takes the image you pass in and converts it to type and format. It then passes that converted data to the GPU.
No HDR formats supported by the DOM?
That is undefined and browser specific. I know of no HDR image formats supported by any browsers. What image formats a browser supports is up to the browser. For example Firefox and Chrome support animated webp but Safari does not.
A common question from developers is whether they should turn on blending / transparency for a particular texture. Some mistakenly believe that if the texture has alpha they should, and otherwise they should not. This is false. Whether blending / transparency should be used is entirely separate from the format of the image and needs to be stored separately.
The same is true with other things related to images. What format to upload the image into the GPU is application and usage specific and has no relation to the format of the image file itself.
I'd like to dynamically downsize some images on my canvas using createjs, and then store the smaller images to be displayed when zooming out of the canvas for performance reasons. Right now, I'm using the following code:
var bitmap = new createjs.Bitmap('somefile.png');
// wait for bitmap to load (using preload.js etc.)
var oc = document.createElement('canvas');
var octx = oc.getContext('2d');
oc.width = bitmap.image.width*0.5;
oc.height = bitmap.image.height*0.5;
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var dataUrl = oc.toDataURL('image/png'); // very expensive
var smallBitmap = new createjs.Bitmap(dataUrl);
This works, but:
The toDataURL operation is very expensive when converting to image/png and too slow to use in practice (and I can't convert to the faster image/jpeg due to the insufficient quality of the output for all settings I tried)
Surely there must be a way to downsize the image without having to resort to separate canvas code and then manually converting to draw onto the createjs Bitmap object?
I've also tried:
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var smallBitmap = new createjs.Bitmap(oc);
But although very fast, this doesn't seem to actually work (and in any case I'm having to create a separate canvas element every time to facilitate this.)
I'm wondering if there is a way that I can use drawImage to draw a downsampled version of the bitmap into a createjs Bitmap instance directly without having to go via a separate canvas object or do a conversion to string?
If I understand correctly, internally this is how the createjs cache property works (i.e. uses drawImage internally to write into the DisplayObject) but I'm unable to figure out how use it myself.
You have tagged this post with createjs and easeljs, but your examples show plain Canvas context usage for scaling.
You can use the scale parameter on Bitmap.cache() to get the result you want, then reuse the cacheCanvas as necessary.
// This will create a half-size cache (50%)
// But scale it back up for you when it displays on the stage
var bmp = new createjs.Bitmap(img);
bmp.cache(0, 0, img.width, img.height, 0.5);
// Pull out the generated cache and use it in a new Bitmap
// This will display at the new scaled size.
var bmp2 = new createjs.Bitmap(bmp.cacheCanvas);
// Un-cache the first one to reset it if you want
bmp.uncache();
Here is a fiddle to see it in action: http://jsfiddle.net/lannymcnie/ofdsyn7g/
Note that caching just uses another canvas with a drawImage call to scale it down. I would definitely stay away from toDataURL, as it is not performant at all.
I'm using a canvas to resize client-side an image before uploading it on the server.
maxWidth = 500;
maxHeight = 500;
// manage resizing
if (image.width >= image.height) {
    var ratio = 1 / (image.width / maxWidth);
} else {
    var ratio = 1 / (image.height / maxHeight);
}
...
canvas.width = image.width * ratio;
canvas.height = image.height * ratio;
var resCtx = canvas.getContext('2d');
resCtx.drawImage(image, 0, 0, image.width * ratio, image.height * ratio);
This works: the resized image is saved on the server. But there are two points I would like to improve:
image weight in KB is important to me; I want a very lightweight image. Even resized, the canvas image is still too heavy. I can see that the image saved on the server has a resolution of 96 DPI and 32-bit color depth, even though the original image has 72 DPI and 24-bit color depth. Why? Can I set the canvas image's resolution?
the resized image does not look very nice, because the resizing process introduces distortion... I've tried the custom algorithm by GameAlchemist in this post:
HTML5 Canvas Resize (Downscale) Image High Quality?
and got a very nice result, but then the resized image was heavier in KB than the original one! Is there a good algorithm to get quality resized images while keeping them lightweight?
DPI does not matter at all with images. A 1k x 1k image is just the same at 892478427 DPI as at 1 DPI. DPI is arbitrary in this context, so ignore that part (it is only used as information for DTP programs so they know the relative size compared to the document size). Images are measured only in pixels.
Canvas is RGBA based (32 bits: 24-bit color + an 8-bit alpha channel), so its optimal form for exporting images is this format (i.e. PNG files). You can however export 24-bit images without the alpha channel by requesting JPEG as the export format.
Compression is basically signal processing. As in all forms of (lossy) compression, you try to remove frequencies in the signal in order to reduce the size. The more high frequencies there are, the harder it is to compress the image (or sound, or any other signal-based data).
In images high frequency manifests as small details (thin lines and so forth) and as noise. To reduce the size of a compressed image you need to remove high frequencies. You can do this by using interpolation as a low-pass filter. A blur is a form of low-pass filter as well and they work in the same way in principle.
So in order to reduce image size you can apply a slight blur on the image before compressing it.
The JPEG format supports a quality setting which can reduce the size as well, although this uses a different approach than blurring:
// 0.5 = quality, lower = lower quality. Range is [0.0, 1.0]
var dataUri = canvas.toDataURL('image/jpeg', 0.5);
To blur an image you can use a simple and fast method of scaling it to half the size and then back up (with smoothing enabled). This will use interpolation as a low-pass filter by averaging the pixels.
Or you can use a "real" blur algorithm such as Gaussian blur as well as box blur, but only with a small amount.
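A sketch of that half-size round trip (the function name is my own, and it mutates the source canvas for brevity):

```javascript
// Low-pass filter a canvas by scaling it down to half size and back up,
// letting the browser's bilinear interpolation average the pixels,
// then export it as a reduced-quality JPEG.
function blurAndExportJpeg(srcCanvas, quality) {
    var tmp = document.createElement('canvas');
    tmp.width = Math.max(1, Math.floor(srcCanvas.width / 2));
    tmp.height = Math.max(1, Math.floor(srcCanvas.height / 2));
    var tctx = tmp.getContext('2d');
    tctx.imageSmoothingEnabled = true;
    tctx.drawImage(srcCanvas, 0, 0, tmp.width, tmp.height);      // downscale
    var ctx = srcCanvas.getContext('2d');
    ctx.imageSmoothingEnabled = true;
    ctx.drawImage(tmp, 0, 0, srcCanvas.width, srcCanvas.height); // upscale back
    return srcCanvas.toDataURL('image/jpeg', quality);
}
```

A single down/up pass gives only a slight blur; repeating it, or going to quarter size, strengthens the low-pass effect at the cost of visible softness.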
I am using canvas.toDataURL() and getting the image in base64 format. But before uploading it to the server, can I reduce the image's memory size, say to 10KB? How can I do this using JavaScript or jQuery? The code I am using is:
var context = canvas.getContext("2d");
context.drawImage(imageObj, c.x, c.y, c.w, c.h, 0, 0, canvas.width, canvas.height);
var vData = canvas.toDataURL();
If you want to compress the string you could attempt one of the compression algorithms mentioned here, but the data URL is already fairly compressed, so this shouldn't make much of a difference.
Another option is to use the second parameter of toDataURL, specifying the quality of the JPEG, which can often be safely decreased without a visible effect on the image quality.
If the requested type is image/jpeg or image/webp, then the second argument, if it is between 0.0 and 1.0, is treated as indicating image quality; if the second argument is anything else, the default value for image quality is used. Other arguments are ignored.
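There is no way to dictate an exact output size, but you can step the quality down until the result fits a byte budget. A sketch (both function names are my own; sizes are approximate because base64 inflates the payload by about a third):

```javascript
// Rough decoded byte size of a base64 data URL's payload.
function approxBytes(dataUrl) {
    var b64 = dataUrl.split(',')[1] || '';
    return Math.floor(b64.length * 3 / 4);
}

// Re-encode the canvas as JPEG at decreasing quality levels
// until the payload fits under maxBytes (or quality bottoms out).
function shrinkToBudget(canvas, maxBytes) {
    var quality = 0.9;
    var dataUrl = canvas.toDataURL('image/jpeg', quality);
    while (approxBytes(dataUrl) > maxBytes && quality > 0.1) {
        quality -= 0.1;
        dataUrl = canvas.toDataURL('image/jpeg', quality);
    }
    return dataUrl;
}
```

For a 10KB target you would call shrinkToBudget(canvas, 10 * 1024); if even the lowest quality overshoots, the only remaining lever is reducing the canvas dimensions before exporting.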