WebGL - reading pixel data from render buffer - javascript

Is there a way to get the raw pixel data from a WebGL render buffer or frame buffer that is off screen?
I'm using WebGL to do some image processing, e.g. blurring an image, adjusting color, etc.
I'm using frame buffers to render to textures at the full image size, then using that texture to display in the viewport at a smaller size. Can I get the pixel data of a buffer or texture so I can work with it in a normal canvas 2d context? Or am I stuck with changing the viewport to the full image size and grabbing the data with canvas.toDataURL()?
Thanks.

This is a very old question, but I recently looked for the same thing in three.js. There is no direct way to read from a frame buffer, but it is effectively done by the render-to-texture (RTT) process. I checked the framework source code and worked out the following:
// Render into the render target (old three.js API: render(scene, camera, renderTarget, forceClear))
renderer.render( rttScene, rttCamera, rttTexture, true );
// ...
var width = rttTexture.width;
var height = rttTexture.height;
var pixels = new Uint8Array(4 * width * height); // be careful - allocate memory only once
// ...
var gl = renderer.context;
// __webglFramebuffer is a three.js internal, so this may break between versions
var framebuffer = rttTexture.__webglFramebuffer;
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
gl.viewport(0, 0, width, height);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);

readPixels() should do what you want. Read more in the WebGL spec at http://www.khronos.org/registry/webgl/specs/latest/
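A minimal sketch of that flow in plain WebGL, without three.js (assuming gl, width, and height are already set up; fb and tex are illustrative names):
// Create a texture to render into.
var tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
// Attach it to an offscreen framebuffer.
var fb = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, tex, 0);
// ... render your scene here ...
// Read the result back while the framebuffer is still bound.
var pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
gl.bindFramebuffer(gl.FRAMEBUFFER, null);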

Yes, you can read raw pixel data. Set preserveDrawingBuffer to true when getting the WebGL context, then use readPixels:
var context = canvasElement.getContext("webgl", {preserveDrawingBuffer: true});
var pixels = new Uint8Array(4 * width * height);
context.readPixels(x, y, width, height, context.RGBA, context.UNSIGNED_BYTE, pixels);
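To then work with those pixels in a normal canvas 2d context, as the question asks, one approach is to copy them into an ImageData (a sketch; note WebGL rows are bottom-up, so the result appears vertically flipped unless you reorder the rows first):
var out = document.createElement('canvas');
out.width = width;
out.height = height;
var ctx2d = out.getContext('2d');
var imageData = ctx2d.createImageData(width, height);
imageData.data.set(pixels); // copy the RGBA bytes from readPixels
ctx2d.putImageData(imageData, 0, 0);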

Related

THREE.js read pixels from GPUComputationRenderer texture

I have been playing with GPUComputationRenderer on a modified version of this three.js example which modifies the velocity of interacting boids using GPU shaders to hold, read and manipulate boid position and velocity data.
I have got to a stage where I can put GPU computed data (predicted collision times) into the texture buffer using the shader. But now I want to read some of that texture data inside the main javascript animation script (to find the earliest collision).
Here is the relevant code in the render function (which is called on each animation pass):
//... GPU calculations as per original THREE.js example
gpuCompute.compute(); //... gpuCompute is the gpu computation renderer.
birdUniforms.texturePosition.value = gpuCompute.getCurrentRenderTarget( positionVariable ).texture;
birdUniforms.textureVelocity.value = gpuCompute.getCurrentRenderTarget( velocityVariable ).texture;
var xTexture = birdUniforms.texturePosition.value;//... my variable, OK.
//... From http://zhangwenli.com/blog/2015/06/20/read-from-shader-texture-with-threejs/
//... but note that this reads from the main THREE.js renderer NOT from the gpuCompute renderer.
//var pixelBuffer = new Uint8Array(canvas.width * canvas.height * 4);
//var gl = renderer.getContext();
//gl.readPixels(0, 0, canvas.width, canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixelBuffer);
var pixelBuffer = new Uint8Array( WIDTH * WIDTH * 4); //... OK.
//var gl = gpuCompute.getContext();//... no getContext function!!!
//... from Nick Whaley here: http://stackoverflow.com/questions/13475209/three-js-get-data-from-three-webglrendertarget
//WebGLRenderer.readRenderTargetPixels ( renderTarget, x, y, width, height, buffer )
gpuCompute.readRenderTargetPixels ( xTexture, 0, 0, WIDTH, WIDTH, pixelBuffer ); //... readRenderTargetPixels is not a function!
As shown in the code, I was "wanting" the gpuCompute renderer object to provide functions such as .getContext() or .readRenderTargetPixels(), but they do not exist on gpuCompute.
EDIT:
Then I tried adding the following code:-
//... the WebGLRenderer code is included in THREE.js build
myWebglRenderer = new THREE.WebGLRenderer();
var myRenderTarget = gpuCompute.getCurrentRenderTarget( positionVariable );
myWebglRenderer.readRenderTargetPixels(
    myRenderTarget, 0, 0, WIDTH, WIDTH, pixelBuffer);
This executes OK but pixelBuffer remains entirely full of zeroes instead of the desired position coordinate values.
Please can anybody suggest how I might read the texture data into a pixel buffer? (preferably in THREE.js/plain javascript because I am ignorant of WebGL).
This answer is out of date. See the link at the bottom.
The short answer is it won't be easy. In WebGL 1.0 there is no easy way to read pixels from floating-point textures, which is what GPUComputationRenderer uses.
If you really want to read back the data, you'll need to render the GPUComputationRenderer floating-point texture into an 8-bit RGBA texture, doing some kind of encoding from 32-bit floats to 8-bit values. You can then read that back in JavaScript and decode the values.
See WebGL Read pixels from floating point render target
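For reference, one well-known packing scheme looks like this (a sketch only; it handles values in [0, 1), and packFloat/unpackFloat are illustrative helpers, not part of GPUComputationRenderer):
// Fragment shader (GLSL): pack a float in [0, 1) into four 8-bit channels
vec4 packFloat(float v) {
    vec4 enc = fract(v * vec4(1.0, 255.0, 65025.0, 16581375.0));
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}
// JavaScript: decode four bytes read back with readPixels into the float
function unpackFloat(pixels, i) {
    return pixels[i] / 255 +
           pixels[i + 1] / 65025 +
           pixels[i + 2] / 16581375 +
           pixels[i + 3] / 4228250625;
}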
Sorry for the long delay; I've not logged in to SO for a long time.
In the example of water with tennis balls,
https://threejs.org/examples/?q=water#webgl_gpgpu_water
the height of the water at the ball positions is read back from the GPU.
An integer 4-component (RGBA) texture is used to carry 1-component float values.
The texture has 4x1 pixels: the first holds the height, the next two hold the normal of the water surface, and the last pixel is unused.
This texture is computed and read back for each of the tennis balls, and the ball physics is performed on the CPU.
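The readback itself uses WebGLRenderer.readRenderTargetPixels on that tiny unsigned-byte render target, roughly like this (waterReadTarget is an illustrative name):
var readBuffer = new Uint8Array(4 * 1 * 4); // 4x1 pixels, RGBA
renderer.readRenderTargetPixels(waterReadTarget, 0, 0, 4, 1, readBuffer);
// readBuffer now holds the packed height/normal bytes, ready to decode in JS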

Resizing images using createjs / easeljs

I'd like to dynamically downsize some images on my canvas using createjs, and then store the smaller images to be displayed when zooming out of the canvas for performance reasons. Right now, I'm using the following code:
var bitmap = createjs.Bitmap('somefile.png');
// wait for bitmap to load (using preload.js etc.)
var oc = document.createElement('canvas');
var octx = oc.getContext('2d');
oc.width = bitmap.image.width*0.5;
oc.height = bitmap.image.height*0.5;
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var dataUrl = oc.toDataURL('image/png'); // very expensive
var smallBitmap = new createjs.Bitmap(dataUrl);
This works, but:
The toDataURL operation is very expensive when converting to image/png and too slow to use in practice (and I can't convert to the faster image/jpeg because the output quality is insufficient at every setting I tried).
Surely there must be a way to downsize the image without resorting to separate canvas code and then manually converting the result to draw onto the createjs Bitmap object?
I've also tried:
octx.drawImage(bitmap.image, 0, 0, oc.width, oc.height);
var smallBitmap = new createjs.Bitmap(oc);
But although very fast, this doesn't seem to actually work (and in any case I'm having to create a separate canvas element every time to facilitate this.)
I'm wondering if there is a way that I can use drawImage to draw a downsampled version of the bitmap into a createjs Bitmap instance directly without having to go via a separate canvas object or do a conversion to string?
If I understand correctly, internally this is how the createjs cache property works (i.e. uses drawImage internally to write into the DisplayObject) but I'm unable to figure out how use it myself.
You have tagged this post with createjs and easeljs, but your examples show plain Canvas context usage for scaling.
You can use the scale parameter on Bitmap.cache() to get the result you want, then reuse the cacheCanvas as necessary.
// This will create a half-size cache (50%)
// But scale it back up for you when it displays on the stage
var bmp = new createjs.Bitmap(img);
bmp.cache(0, 0, img.width, img.height, 0.5);
// Pull out the generated cache and use it in a new Bitmap
// This will display at the new scaled size.
var bmp2 = new createjs.Bitmap(bmp.cacheCanvas);
// Un-cache the first one to reset it if you want
bmp.uncache();
Here is a fiddle to see it in action: http://jsfiddle.net/lannymcnie/ofdsyn7g/
Note that caching just uses another canvas with a drawImage to scale it down. I would definitely stay away from toDataURL, as it is not performant at all.

Given a canvas object, how can I get a scaled down "version" of it as an image/Base64-encoded byte array?

I am using html2canvas to take "screenshots" of the current browser window. I would like to save the screenshot as Base64 encoded data which can later be used to create an html img. For example, I might want to save the String that canvas.toDataURL() returns in the browser's local storage, and retrieve it later.
The following code works fine:
html2canvas(document.body, {
    onrendered: function(canvas) {
        localStorage.setItem('screenshot', JSON.stringify(canvas.toDataURL()));
    }
});
I can later retrieve it and create an image from it, for example using Angular:
<img data-ng-src="{{imgData}}"/>
What I want to do though, is have a scaled down version of the screenshot. I can simply scale the resulting image using CSS, but I want to save storage space. In other words, I want the String encoding to represent an image that is, for example, 200 pixels wide instead of (say) 1000 pixels wide. I do not care about the quality of the image.
This is the closest I've come:
saveScreenshot = function(name, scaleFactor) {
    html2canvas(document.body, {
        onrendered: function(canvas) {
            var ctx = canvas.getContext('2d');
            var img = new Image();
            img.onload = function() {
                ctx.clearRect(0, 0, canvas.width, canvas.height);
                ctx.scale(scaleFactor, scaleFactor);
                ctx.drawImage(img, 0, 0);
                //canvas.width *= scaleFactor;
                //canvas.height *= scaleFactor;
                localStorage.setItem('screenshot', JSON.stringify(canvas.toDataURL()));
            };
            img.src = canvas.toDataURL();
        }
    });
};
This almost works - the "picture" is scaled correctly, but the resulting image is still the same size as the original, with the rest of the space apparently filled in with transparent pixels.
If I uncomment the 2 lines above which set the canvas' height and width, then the image is the right size, but it's all blank.
I feel like I'm close - what am I missing?
.toDataURL will always capture the entire canvas, so to get a smaller image you must create a second smaller canvas:
var scaledCanvas=document.createElement('canvas');
var scaledContext=scaledCanvas.getContext('2d');
scaledCanvas.width=canvas.width*scaleFactor;
scaledCanvas.height=canvas.height*scaleFactor;
Then scale the second canvas and draw the main canvas onto the second canvas
scaledContext.scale(scaleFactor,scaleFactor);
scaledContext.drawImage(canvas,0,0);
And finally export that second canvas as a dataURL.
var dataURL = scaledCanvas.toDataURL();
Warning: untested code above...but it should put you on the right track!
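Putting those steps together inside the html2canvas callback from the question gives something like this (also untested; scaleFactor as in the question):
html2canvas(document.body, {
    onrendered: function(canvas) {
        var scaledCanvas = document.createElement('canvas');
        var scaledContext = scaledCanvas.getContext('2d');
        scaledCanvas.width = canvas.width * scaleFactor;
        scaledCanvas.height = canvas.height * scaleFactor;
        // Scale the context, then draw the full-size canvas into the small one.
        scaledContext.scale(scaleFactor, scaleFactor);
        scaledContext.drawImage(canvas, 0, 0);
        localStorage.setItem('screenshot', JSON.stringify(scaledCanvas.toDataURL()));
    }
});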

Uploading large images to the GPU in WebGL

How can I upload large images to the GPU using WebGL without freezing up the browser (think of high-res skyboxes or texture atlases)?
My first thought was to look for a way to make texImage2D do its thing asynchronously (uploading images to the GPU is IO-ish, right?), but I could not find one.
I then tried using texSubImage2D to upload small chunks that fit in a 16 ms time window (I'm aiming for 60 fps). But texSubImage2D takes an offset AND width/height parameters only if you pass in an ArrayBufferView; when passing in Image objects you can only specify the offset, and it will (I'm guessing) upload the whole image. I imagine painting the image to a canvas first (to get it as a buffer) is just as slow as uploading the whole thing to the GPU. (A sketch of what I mean by the chunked approach follows my code below.)
Here's a minimal example of what I mean: http://jsfiddle.net/2v63f/3/.
It takes ~130 ms to upload this image to the GPU.
Exact same code as on jsfiddle:
var canvas = document.getElementById('can');
var gl = canvas.getContext('webgl');
var image = new Image();
image.crossOrigin = "anonymous";
//image.src = 'http://i.imgur.com/9Tq28Qj.jpg?1';
image.src = 'http://i.imgur.com/G0qL97y.jpg';
image.addEventListener('load', function () {
    var texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    var now = performance.now();
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    console.log(performance.now() - now);
});
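For reference, the chunked texSubImage2D approach would look roughly like this, assuming the pixels were already available as a Uint8Array (which is exactly the conversion I'd like to avoid):
// Allocate the full-size texture once, with no data.
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
var y = 0, rowsPerChunk = 256;
function uploadChunk() {
    var rows = Math.min(rowsPerChunk, height - y);
    var strip = pixels.subarray(y * width * 4, (y + rows) * width * 4);
    // Upload one horizontal strip per frame so no single frame blocks too long.
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, y, width, rows, gl.RGBA, gl.UNSIGNED_BYTE, strip);
    y += rows;
    if (y < height) requestAnimationFrame(uploadChunk);
}
requestAnimationFrame(uploadChunk);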
The referenced image appears to be 3840x1920.
For a high-res skybox you can usually discard the need for an alpha channel, and then decide whether some other pixel format can provide a justifiable trade-off between quality and data size.
The specified RGBA encoding means this will require a 29,491,200-byte data transfer post-image-decode. I attempted to test with RGB, discarding the alpha, but saw a similar 111 ms transfer time.
Assuming you can pre-process the images prior to use, and care only about the data-transfer time to the GPU, you can perform some form of lossy encoding or compression on the data to decrease the bandwidth you need to transfer.
One of the more trivial encodings is to halve your data size by sending the data to the chip as RGB 565. This decreases your data size to 14,745,600 bytes at a cost of color range.
var buff = new Uint16Array(image.width*image.height);
//encode image to buffer
//on texture load-time, perform the following call
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, image.width, image.height, 0, gl.RGB, gl.UNSIGNED_SHORT_5_6_5, buff);
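A sketch of that encoding step (assuming the source pixels are available as 8-bit RGBA, e.g. via a 2D canvas readback; encodeRGB565 is an illustrative helper, not a built-in):
function encodeRGB565(srcPixels, width, height) {
    var out = new Uint16Array(width * height);
    for (var i = 0, j = 0; i < out.length; i++, j += 4) {
        var r = srcPixels[j]     >> 3; // keep top 5 bits of red
        var g = srcPixels[j + 1] >> 2; // keep top 6 bits of green
        var b = srcPixels[j + 2] >> 3; // keep top 5 bits of blue
        out[i] = (r << 11) | (g << 5) | b;
    }
    return out;
}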
Assuming you can confirm support for the S3TC extension (https://developer.mozilla.org/en-US/docs/Web/WebGL/Using_Extensions#WEBGL_compressed_texture_s3tc), you could also store and download the texture in DXT1 format and decrease the memory transfer requirement down to 3,686,400 bytes.
It appears any form of S3TC will result in the same RGB565 color-range reduction.
var ext = (
    gl.getExtension("WEBGL_compressed_texture_s3tc") ||
    gl.getExtension("MOZ_WEBGL_compressed_texture_s3tc") ||
    gl.getExtension("WEBKIT_WEBGL_compressed_texture_s3tc")
);
// Note: buff must here contain DXT1-compressed blocks (e.g. from a
// pre-compressed file), not the raw or 565-packed pixels used above.
gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT, image.width, image.height, 0, buff);
Most block compressions offer a decreased image quality up close, but allow for richer imagery at a distance.
Should you need high resolution up close, a lower-resolution or compressed texture could be loaded initially as a poor LOD for distant views, and a smaller subset of the higher resolution could be loaded when approaching the texture in question.
In my trivial tests at http://jsfiddle.net/zy8hkby3/2/ the texture-payload-size reductions produce correspondingly large decreases in data-transfer time, but this is most likely GPU dependent.

Changing Pattern Size in Html5 Canvas

My problem: I've got an Image object that is being used to create a pattern object. The problem is that I changed the width and height properties of the Image before creating the pattern, but that doesn't actually resample the image (which is expected). The result is that I now have a pattern of a different size (the original image size) than the image itself. If I want to draw a line with that pattern as fillStyle, it doesn't fit (because I need a pattern at the new size).
My question: Is there an easy way to adjust the pattern's width and height?
My tries, and why I don't like them:
1) Render the original image to a canvas at the new size and create the pattern from that. I didn't use this because the pattern isn't available immediately: creating the canvas and rendering to it is too slow, and I want the pattern right away.
2) Calculate the ratio between the new image size and the original, change the lineWidth of the context so the pattern's height fits exactly, and scale the line back down to a nice size. I didn't use this because I render in real time and it is far too slow for use in web apps.
Using a canvas (your option 1) is the most flexible way.
It's not slower to draw one canvas onto another than to use an image directly. They both use the same element basis (you're blitting a bitmap just as you do with an image).
(Update: drawing with a pattern as style does go through an extra step, a local transformation matrix for the pattern, in more recent browsers.)
Create a new canvas in the size of the pattern and simply draw the image into it:
patternCtx.drawImage(img, 0, 0, patternWidth, patternHeight);
Then use the canvas of patternCtx as the basis for the pattern (internally, the pattern caches this image the first time it's drawn; from there, if possible, it just doubles out what it has until the whole canvas is filled).
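In other words, something along these lines (patternCanvas, patternCtx, and the size variables are illustrative names):
var patternCanvas = document.createElement('canvas');
patternCanvas.width = patternWidth;
patternCanvas.height = patternHeight;
var patternCtx = patternCanvas.getContext('2d');
patternCtx.drawImage(img, 0, 0, patternWidth, patternHeight);
// Use the resized canvas, not the original image, as the pattern source.
ctx.fillStyle = ctx.createPattern(patternCanvas, 'repeat');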
The second option is to pre-scale the images to all the sizes you need, load them all in, and then choose the image whose size is the one you need.
The third is to draw the image as a pattern yourself. This is not as efficient as the built-in method, though using the internal doubling approach mentioned above you can get a usable result.
Example of manual patterning:
var ctx = canvas.getContext('2d');
var img = new Image();
img.onload = function() {
    fillPattern(this, 64, 64);
    change.onchange = change.oninput = function() {
        fillPattern(img, this.value, this.value);
    };
};
img.src = "//i.stack.imgur.com/tkBVh.png";

// Fills canvas with image as pattern at size w,h
function fillPattern(img, w, h) {
    // draw once
    ctx.drawImage(img, 0, 0, w, h);
    while (w < canvas.width) {
        ctx.drawImage(canvas, w, 0);
        w <<= 1; // shift left 1 = *2 but slightly faster
    }
    while (h < canvas.height) {
        ctx.drawImage(canvas, 0, h);
        h <<= 1;
    }
}
<input id=change type=range min=8 max=120 value=64><br>
<canvas id=canvas width=500 height=400></canvas>
(or with a video as pattern).
