In short
I would like to read a single pixel value from a WebGL 2 depth texture in JavaScript. Is this at all possible?
The scenario
I am rendering a scene in WebGL 2. The renderer is given a depth texture to which it writes the depth buffer. This depth texture is used in post processing shaders and the like, so it is available to us.
However, I need to read my single pixel value in JavaScript, not from within a shader. If this had been a normal RGB texture, I would do
function readPixel(x, y, texture, outputBuffer) {
  const frameBuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, outputBuffer);
  gl.deleteFramebuffer(frameBuffer); // avoid leaking a framebuffer per call
}
This will write the pixel at x, y into outputBuffer.
However, is it at all possible to do the same with a depth texture? If I simply pass a depth texture to the function above, the output buffer contains only zeros, and I get the WebGL warning GL_INVALID_FRAMEBUFFER_OPERATION: Framebuffer is incomplete. Checking the framebuffer status reveals FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
Naturally, the depth texture is not an RGBA texture, but are there other parameters we can pass to readPixels to get the depth value, or is it impossible?
Motivation
I am aware that this question has been asked a number of times on StackOverflow and elsewhere in some form or another, but there is always some variation that makes it hard for me to get a straight yes-or-no answer to the question as I ask it here. In addition, many questions and sources are very old or WebGL 1 only, with some mentioning that webgl_depth_texture makes a difference, etc.
If the answer is no, I'd welcome any suggestions for how else to easily obtain this depth pixel. As this operation is not done for every frame, I value simplicity over performance. The use case is picking, and classical ray intersection is not feasible. (I also know that I can encode a scalar depth value into and out of an RGB pixel, but I need to be able to access the pixel from within the js code in the first place.)
I'd welcome any insights.
There is no possibility: WebGL 2.0 is based on OpenGL ES 3.0.
The OpenGL ES 3.2 Specification - 4.3.2 Reading Pixels clearly states:
[...] The second is an implementation-chosen format from among those defined
in table 3.2, excluding formats DEPTH_COMPONENT and DEPTH_STENCIL [...]
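Since readPixels cannot read DEPTH_COMPONENT, the usual workaround is a small fragment shader that samples the depth texture and encodes the depth into an RGBA8 color target, which readPixels can read. Below is a sketch of the encode/decode arithmetic in plain JavaScript, mirroring the classic GLSL pack/unpack trick; the function names are illustrative, and on the GPU the encode half would live in GLSL (with each channel then quantized to 8 bits):

```javascript
// Encode a depth value in [0, 1) into four channel values in [0, 1),
// one per RGBA channel, by spreading the fraction across 4 bytes.
function encodeDepth(depth) {
  const enc = [1, 255, 65025, 16581375].map(s => (depth * s) % 1);
  // Remove each channel's contribution from the previous one.
  return [
    enc[0] - enc[1] / 255,
    enc[1] - enc[2] / 255,
    enc[2] - enc[3] / 255,
    enc[3],
  ];
}

// Reassemble the depth from the four channel values read back.
function decodeDepth(rgba) {
  return rgba[0] + rgba[1] / 255 + rgba[2] / 65025 + rgba[3] / 16581375;
}
```

With this encoding in a fragment shader, the readPixel function from the question works unchanged on the RGBA8 target the shader rendered into.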
Related
I'm trying to translate some TypeScript code into a vertex shader to use with WebGL. My goal is to draw the bitangent lines of two circles. I have a function to calculate the tangent points here, https://jsfiddle.net/Zanchi/4xnp1n8x/2/ on line 27. Essentially, it returns a tuple of points with x and y values.
// First circle bottom tangent point
const t1 = {
  x: x1 + r1 * cos(PI / 2 - alpha),
  y: y1 + r1 * sin(PI / 2 - alpha)
}; // ... and so on
I know I can do the calculation in JS and pass the values to the shader via an attribute, but I'd like to leverage the GPU to do the point calculations instead.
Is it possible to set multiple vertices in a single vertex shader call, or use multiple values calculated in the first call of the shader in subsequent calls?
Is it possible to set multiple vertices in a single vertex shader call
No
or use multiple values calculated in the first call of the shader in subsequent calls?
No
A vertex shader outputs 1 vertex per iteration/call. You set the number of iterations when you call gl.drawArrays (gl.drawElements is more complicated).
I'm not sure you gain much by not just putting the values in an attribute. It might be fun to generate them in the vertex shader but it's probably not performant.
In WebGL1 there is no easy way to use a vertex shader to generate data. First off, you'd need some kind of count or something that changes for each iteration, and nothing changes if you don't supply at least one attribute. You could supply one attribute with just a count [0, 1, 2, 3, ...] and use that count to generate vertices. This is what vertexshaderart.com does, but it's all for fun, not for perf.
In WebGL2 there is the gl_VertexID built in variable which means you get a count for free, no need to supply an attribute. In WebGL2 you can also use transform feedback to write the output of a vertex shader to a buffer. In that way you can generate some vertices once into a buffer and then use the generated vertices from that buffer (and therefore probably get better performance than generating them every time).
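To make the two approaches concrete, here is a small sketch; the variable names and the six-vertex hexagon are illustrative, not from the original question:

```javascript
// WebGL1: build a "count" attribute [0, 1, 2, ...] to upload with
// gl.bufferData; it is the only per-vertex input the shader needs.
const numVerts = 6; // illustrative: six vertices, e.g. one hexagon
const counts = new Float32Array(numVerts);
for (let i = 0; i < numVerts; i++) counts[i] = i;

// WebGL2: no attribute needed; a GLSL 3.00 vertex shader can derive
// positions directly from the built-in gl_VertexID, e.g.:
const vertexShaderSource = `#version 300 es
void main() {
  // place vertex gl_VertexID on a unit circle (6.2831853 ~ 2*PI)
  float angle = float(gl_VertexID) * 6.2831853 / 6.0;
  gl_Position = vec4(cos(angle), sin(angle), 0.0, 1.0);
}`;
```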
I have been playing with GPUComputationRenderer on a modified version of this three.js example which modifies the velocity of interacting boids using GPU shaders to hold, read and manipulate boid position and velocity data.
I have got to a stage where I can put GPU computed data (predicted collision times) into the texture buffer using the shader. But now I want to read some of that texture data inside the main javascript animation script (to find the earliest collision).
Here is the relevant code in the render function (which is called on each animation pass)
//... GPU calculations as per original THREE.js example
gpuCompute.compute(); //... gpuCompute is the gpu computation renderer.
birdUniforms.texturePosition.value = gpuCompute.getCurrentRenderTarget( positionVariable ).texture;
birdUniforms.textureVelocity.value = gpuCompute.getCurrentRenderTarget( velocityVariable ).texture;
var xTexture = birdUniforms.texturePosition.value;//... my variable, OK.
//... From http://zhangwenli.com/blog/2015/06/20/read-from-shader-texture-with-threejs/
//... but note that this reads from the main THREE.js renderer NOT from the gpuCompute renderer.
//var pixelBuffer = new Uint8Array(canvas.width * canvas.height * 4);
//var gl = renderer.getContext();
//gl.readPixels(0, 0, canvas.width, canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixelBuffer);
var pixelBuffer = new Uint8Array( WIDTH * WIDTH * 4); //... OK.
//var gl = gpuCompute.getContext();//... no getContext function!!!
//... from Nick Whaley here: http://stackoverflow.com/questions/13475209/three-js-get-data-from-three-webglrendertarget
//WebGLRenderer.readRenderTargetPixels ( renderTarget, x, y, width, height, buffer )
gpuCompute.readRenderTargetPixels ( xTexture, 0, 0, WIDTH, WIDTH, pixelBuffer ); //... readRenderTargetPixels is not a function!
As shown in the code, I was "wanting" the gpuCompute renderer object to provide functions such as .getContext() or .readRenderTargetPixels(), but they do not exist on gpuCompute.
EDIT:
Then I tried adding the following code:-
//... the WebGLRenderer code is included in THREE.js build
myWebglRenderer = new THREE.WebGLRenderer();
var myRenderTarget = gpuCompute.getCurrentRenderTarget( positionVariable );
myWebglRenderer.readRenderTargetPixels (
myRenderTarget, 0, 0, WIDTH, WIDTH, pixelBuffer );
This executes OK but pixelBuffer remains entirely full of zeroes instead of the desired position coordinate values.
Please can anybody suggest how I might read the texture data into a pixel buffer? (preferably in THREE.js/plain javascript because I am ignorant of WebGL).
This answer is out of date. See link at bottom
The short answer is it won't be easy. In WebGL 1.0 there is no easy way to read pixels from floating point textures, which is what GPUComputationRenderer uses.
If you really want to read back the data, you'll need to render the GPUComputationRenderer floating point texture into an 8-bit RGBA texture, doing some kind of encoding from 32-bit floats to 8-bit channels. You can then read that back in JavaScript and look at the values.
See WebGL Read pixels from floating point render target
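For completeness, one possible encoding (an assumption for illustration, not what GPUComputationRenderer itself does) is to have the shader write the raw bytes of each 32-bit float into the RGBA8 target; after readPixels, the CPU side reassembles the floats. The CPU half of that round trip, sketched in plain JavaScript:

```javascript
// Illustrative helpers: split a 32-bit float into its 4 raw bytes and
// back. On the GPU the split would be done in GLSL; this shows the
// JavaScript side that runs after readPixels.
function floatToBytes(value) {
  const view = new DataView(new ArrayBuffer(4));
  view.setFloat32(0, value); // big-endian byte order
  return [view.getUint8(0), view.getUint8(1), view.getUint8(2), view.getUint8(3)];
}

function bytesToFloat(bytes) {
  const view = new DataView(new ArrayBuffer(4));
  bytes.forEach((b, i) => view.setUint8(i, b));
  return view.getFloat32(0);
}
```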
Sorry for the long delay; I've not logged in to SO for a long time.
In the example of water with tennis balls,
https://threejs.org/examples/?q=water#webgl_gpgpu_water
The height of the water at the balls' positions is read back from the GPU.
A 4-component integer texture is used in place of a 1-component float texture.
The texture is 4x1 pixels: the first holds the height, the next two hold the normal of the water surface, and the last pixel is unused.
This texture is computed and read back for each of the tennis balls, and the ball physics is performed on the CPU.
I'm trying to learn how to take advantage of gpu possibilities for threejs and webgl stuff so im just analysing code to get some patterns, methods how things are done and I need some code explanation.
I found this example: One million particles, which seems to be the easiest one involving calculations made in shaders and spit back out.
So from what I have figured out:
- Data for velocity and position of particles are kept in textures passed to shaders to perform calculations there, and get them back for update
Particles are created on the plane at positions derived from the index and the texture size, no more than the texture can hold?
for (var i = 0; i < 1000000; i++) {
  particles.vertices.push(new THREE.Vector3(
    (i % texSize) / texSize,
    Math.floor(i / texSize) / texSize,
    0
  ));
}
I don't see any particle position updates. How is the data from the shaders retrieved and used to update each particle?
pick()
only passes the mouse position, to calculate the direction of particle movement?
Why are there 2 buffers, and 8 shaders (4 pairs of vertex and fragment shaders)? Is a single pair for calculating velocity and position not enough?
How does the shader update the texture? I only see reads from it, not writes.
Thanks in advance for any explanations!
How the heck have they done that:
In this post, I'll explain how these results get computed almost entirely on the GPU via WebGL/Three.js - it might look a bit sloppy as I'm using the integrated graphics of an Intel i7 4770k:
Introduction:
The simple idea that keeps everything intra-GPU: each particle's state is represented by one texture pixel's color value. One million particles result in two 1024x1024-pixel textures, one holding the current positions and another holding the velocities of those particles.
Nobody ever forbade abusing a texture's RGBA color values for completely different data in the 0...255 universe. You basically have 32 bits (R + G + B + A) per texture pixel for whatever you want to store in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
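The pixel-per-particle mapping that the question's vertex loop performs can be sketched as follows (a texSize of 1024 is assumed for one million particles):

```javascript
// Map particle index i to the normalized (u, v) coordinate of "its"
// pixel in a texSize x texSize state texture: particles fill the
// texture row by row.
const texSize = 1024;
function particleUV(i) {
  return {
    u: (i % texSize) / texSize,
    v: Math.floor(i / texSize) / texSize,
  };
}
```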
They basically used multiple shaders in sequential order. From the source code, one can identify these steps in their processing pipeline:
Randomize particles (ignored in this answer) ('randShader')
Determine each particle's velocity by its distance to the mouse location ('velShader')
Based on velocity, move each particle accordingly ('posShader')
Display the screen ('dispShader')
Step 2: Determining Velocity per particle:
They invoke a draw call on 1 million points whose output is saved as a texture. In the vertex shader, each vertex gets a 2-component varying named "vUv", which determines the x and y pixel position to read inside the textures used in the process.
The next step is its fragment shader, as only the fragment shader can write output (as RGBA values into the framebuffer, which gets converted to a texture buffer afterwards - all happening inside GPU memory only). You can see in the id="velFrag" fragment shader that it gets an input variable declared uniform vec3 targetPos;. Uniforms are set cheaply each frame from the CPU, because they are shared among all instances and don't involve large memory transfers. (This one contains the mouse coordinate, probably in the -1.0 to +1.0 range - they probably also update the mouse coords only once every few frames to lower CPU usage.)
What's going on here? Well, that shader calculates the distance of the particle to the mouse coordinate and, depending on that, alters the particle's velocity - the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying/overshooting the mouse position, depending on the gray value.
Step 3: Updating positions per particle:
So far each particle has a velocity and a previous position. Those two values get processed into a new position, again output as a texture - this time into the positionTexture. Until the whole frame is rendered (into the default framebuffer) and then marked as the new texture, the old positionTexture remains unchanged and can be read with ease:
In id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values).
Step 4: Prime time (=output)
To output the results, they probably took again a million points/vertices and gave the positionTexture as an input. The vertex shader then sets the position of each point by reading the texture's RGB value at location x, y (passed as vertex attributes).
// From <script type="x-shader/x-vertex" id="dispVert">
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition,1.0);
In the display fragment shader, they only need to set a color (note the low alpha, which means about 20 particles must stack up to fully light a pixel).
// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
I hope this made it clear how this little demo works :-) I am not the author of that demo, though. Just noticed this answer actually became a super duper detailed one - skim the highlighted keywords to get the short version.
I am running a data processing application that is pretty much:
var f = function(a, b) { /* any function of type int -> int -> int */ };
var g = function(a) { /* any function of type int -> int */ };

function my_computation(state) {
  var data = state[2];
  var res = 0;
  for (var i = 0, l = data.length; i < l; ++i)
    res = f(res, g(data[i]));
  state[3] = res;
  return res;
}
This pattern is pretty much that of a foldl. That computation is not fast enough on CPU. Is it possible to somehow run that computation on the GPU, on the browser?
From your comment:
I don't know much about vertex shaders but to my knowledge it worked in isolated pixels, and for the folding you'd kinda need an accumulation pattern. No?
If you want to use WebGL for computation over an array, you most likely will want to do it in a fragment shader, not a vertex shader. If you use input geometry that covers the entire viewport, a fragment shader is then simply a program that computes an image pixel-by-pixel. It can use as inputs numeric parameters and arbitrary textures. Furthermore, you can render output to a texture.
This is how you do inputs: you stash the input data in a texture, and have the fragment shader do lookups in the texture. It's perfectly normal to do multiple offset lookups in a texture; for example, this is how a blur effect works.
You're right to be concerned about accumulation. There is no native way to do a fold over all pixels. However, if you can express your algorithm in a "map-reduce" fashion, where the reduce operation combines two outputs and doesn't care about whether they are the input from a previous reduce step, then you can do it like so:
Load your input data into a 1-pixel high by N-pixel wide texture. (Not sure whether using square textures might give better upper limits, but this is simpler to describe.)
Run your "map" (g, non-accumulating computation) shader program producing an intermediate-outputs texture.
Run a shader which performs the "reduce" operation (f) on each pair of adjacent pixels (or similar) of the intermediate texture, producing another texture half as wide.
Do the same thing again on that output.
This will get you your single answer in only O(log n) JavaScript operations.
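The steps above can be sketched CPU-side, with plain arrays standing in for textures; each iteration of the while loop corresponds to one half-width draw on the GPU. Note this matches the original foldl only when f is associative (and, with identity 0 as the seed, commutative with it), which is exactly the "map-reduce" restriction described above. The function name is illustrative:

```javascript
// Simulate the GPU pass structure: "map" g over the input, then
// repeatedly "reduce" adjacent pairs with f until one value remains.
function gpuStyleFold(data, g, f, identity) {
  let tex = data.map(g);                    // the "map" pass
  while (tex.length > 1) {                  // O(log n) "reduce" passes
    const next = [];
    for (let i = 0; i < tex.length; i += 2) {
      // carry a lone trailing element through unchanged
      next.push(i + 1 < tex.length ? f(tex[i], tex[i + 1]) : tex[i]);
    }
    tex = next;
  }
  return tex.length ? tex[0] : identity;
}
```

For example, a sum of squares runs as gpuStyleFold([1, 2, 3, 4, 5], x => x * x, (a, b) => a + b, 0), reducing [1, 4, 9, 16, 25] to [5, 25, 25], then [30, 25], then [55].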
I would say yes. I've often thought about this myself. Your data would be attached as a vertex attribute buffer and a custom shader would execute your fold code, 'rendering' the results to an off-screen buffer. You would then read the result buffer back into CPU memory.
Given that you want to run it on the browser, you are limited by what WebGL/extensions support, specifically on CPU access to GPU data.
You can take a look at the shader code for filters/edge detection in the code base below, which shows how you can do this in a fragment shader.
https://github.com/prabindh/sgxperf/blob/master/sgxperf_strings.cpp
After this, you can access the data using readPixels. NOTE - the fragment shader can only output fixed-point data.
When I write Processing.js in the JavaScript flavor, I get a performance warning that I didn't get when I used Processing.js to parse Processing code. I've created a fairly simple sketch with 3D support to get into it, and the console is flooded with this warning:
PERFORMANCE WARNING: Attribute 0 is disabled. This has signficant performance penalty
What does that mean? And even more importantly: how to fix it?
That's the sketch. (watch/edit on codepen.io)
var can = document.createElement("canvas");
var sketch = function(p) {
  p.setup = function() {
    p.size(800, 600, p.OPENGL);
    p.fill(170);
  };
  p.draw = function() {
    p.pushMatrix();
    p.translate(p.width/2, p.height/2);
    p.box(50);
    p.popMatrix();
  };
};
document.body.appendChild(can);
new Processing(can, sketch);
This is an issue in Processing.js
For a detailed explanation: OpenGL and OpenGL ES have attributes. Every attribute can either fetch values from a buffer or provide a constant value. Except that in OpenGL, attribute 0 is special: it cannot provide a constant value; it MUST fetch values from a buffer. WebGL, though, is based on OpenGL ES 2.0, which doesn't have this limitation.
So, when WebGL runs on top of OpenGL and the user does not use attribute 0 (it's set to provide a constant value), WebGL has to create a temporary buffer, fill it with the constant value, and hand it to OpenGL. This is slow, hence the warning.
The issue in Processing is that they have a single shader that handles multiple use cases. It has attributes for normals, positions, colors, and texture coordinates. Depending on what you ask Processing to draw, it might not use all of these attributes. For example, it commonly might not use normals. Normals are only needed in Processing to support lights, so if there are no lights there are no normals (I'm guessing). In that case they turn normals off. Unfortunately, if normals happen to land on attribute 0, then in order for WebGL to render it has to create a temp buffer, fill it with a constant value, and then render.
The way around this is to always use attribute 0. In the case of Processing they will always use position data. So before linking their shaders they should call bindAttribLocation
// "aVertex" is the name of the attribute used for position data in
// Processing.js
gl.bindAttribLocation(program, 0, "aVertex");
This will make the attribute 'aVertex' always use attrib 0 and since for every use case they always use 'aVertex' they'll never get that warning.
Ideally you should always bind your locations. That way you don't have to query them after linking.