WebGL2 3D Texture Image coming upside-down, how to flip the output?

Originally I was working with 2D Textures and everything was fine because we are allowed to use UNPACK_FLIP_Y_WEBGL. However, in WebGL2 we are now able to utilize 3D Textures. The problem is that my output is still upside-down and now the UNPACK_FLIP_Y_WEBGL is flagged as illegal for 3D Textures.
Does anyone know how to fix this?
Below is some sample code similar to the setup I have for loading the 3D texture (if I were loading a 2D texture, this would work as long as pixelStorei is set to UNPACK_FLIP_Y_WEBGL):
gl.pixelStorei(gl.UNPACK_ALIGNMENT, 1);
gl.texImage3D(
    gl.TEXTURE_3D,    // target
    0,                // mip level
    gl.RGB8,          // sized (internal) format
    size,             // width
    size,             // height
    size,             // depth
    0,                // border
    gl.RGB,           // base format
    gl.UNSIGNED_BYTE, // type
    data              // Uint8Array color look-up table
);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_BASE_LEVEL, 0);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAX_LEVEL, 0);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_3D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.getParameter(gl.MAX_VERTEX_UNIFORM_VECTORS);
gl.getParameter(gl.MAX_FRAGMENT_UNIFORM_VECTORS);

It turns out that WebGL samples textures from a bottom-left origin, meaning we need to flip the Y value. So in the vertex shader we can just add some math to fix this. I don't know why UNPACK_FLIP_Y_WEBGL was created to do this and then not supported for 3D textures, when the fix, and what that UNPACK does, is super simple: you just need to do 1 - y on the vertex tile position value.
#version 300 es
in vec2 tile_position;
in vec4 vertex_position;
out vec2 vertex_tile_position;

void main() {
    // Flip the t (Y) coordinate to compensate for the bottom-left origin.
    vertex_tile_position = vec2(tile_position.s, 1.0 - tile_position.t);
    gl_Position = vertex_position;
}
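If you'd rather not touch the shader, another option (a hedged sketch, not part of the answer above; flipY3D is a hypothetical helper) is to flip each slice's rows in JavaScript before the texImage3D upload, which is essentially what UNPACK_FLIP_Y_WEBGL does for 2D uploads:

// Flip the rows of every depth slice of a size*size*size texture
// (channels per texel, e.g. 3 to match the gl.RGB upload above).
function flipY3D(data, size, channels) {
    const out = new Uint8Array(data.length);
    const rowBytes = size * channels;
    const sliceBytes = size * rowBytes;
    for (let z = 0; z < size; ++z) {
        for (let y = 0; y < size; ++y) {
            const src = z * sliceBytes + y * rowBytes;
            const dst = z * sliceBytes + (size - 1 - y) * rowBytes;
            out.set(data.subarray(src, src + rowBytes), dst);
        }
    }
    return out;
}

// e.g. pass flipY3D(data, size, 3) to gl.texImage3D instead of data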

Related

Loading a PNG image with WebGL is not perfect

When using canvas.getContext('2d') to load a PNG file that has a transparent part, it looks exactly the same as the PNG file itself. But when loading it with canvas.getContext('webgl'), the transparent part displays as white. If you then add discard in the shader, it gets better, but still not as good as the PNG file. How can I fix this issue?
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, this.img);
precision mediump float;

uniform sampler2D u_image;
varying vec2 v_texCoord;

void main() {
    vec4 color = texture2D(u_image, v_texCoord);
    if (color.a < 0.5) {
        discard;
    }
    gl_FragColor = color;
}
It sounds like you may need to activate blending.
gl.enable(gl.BLEND);
And then set the blending function to work with premultiplied alpha (the default):
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
Transparency is actually kind of complicated
There is
#1 what the canvas itself needs
The default canvas wants premultiplied alpha. In other words it wants you to provide RGBA values where RGB has been multiplied by A.
You can set the canvas so it does not expect premultiplied alpha when creating the WebGL context by passing in premultipliedAlpha: false, as in
const gl = someCanvas.getContext('webgl', {premultipliedAlpha: false});
Note: IIRC this doesn't work on iOS.
#2 what format you load the images
The default for loading images in WebGL is unpremultiplied alpha. In other words if the image has a pixel that is
255, 128, 64, 128 RGBA
It will be loaded exactly like that (*)
You can tell WebGL to premultiply for you when loading an image by setting
gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true);
Before calling gl.texImage2D.
Now that same pixel above will end up being
128, 64, 32, 128 RGBA
Each of RGB has been multiplied by A (A above is 128, where 128 represents 128/255 or 0.5019607843137255)
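The same arithmetic written out in JavaScript (premultiply here is a hypothetical helper for illustration, not a WebGL call):

// Premultiply an RGBA pixel stored as 0-255 bytes.
function premultiply([r, g, b, a]) {
    const f = a / 255;
    return [Math.round(r * f), Math.round(g * f), Math.round(b * f), a];
}

premultiply([255, 128, 64, 128]); // -> [128, 64, 32, 128]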
#3 what you write out in your shaders
If you loaded un-premultiplied data you might choose to premultiply in your shader
gl_FragColor = vec4(someColor.rgb * someColor.a, someColor.a);
#4 how you blend
If you want to blend what you are drawing into what has already been drawn then you need to turn on blending
gl.enable(gl.BLEND);
But you also need to set how the blending happens. There are multiple functions that affect blending. The most common one to use is gl.blendFunc which sets how the src pixel (the one generated by your shader) and the dst pixel (the one being drawn on top of in the canvas) are affected before being combined. The 2 most common settings are
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA); // unpremultiplied alpha
and
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA); // premultiplied alpha
The first argument is how to multiply the src pixel. Above we are either multiplying by the alpha of the src (SRC_ALPHA) or by 1 (ONE). The second argument is how to multiply the dst pixel. ONE_MINUS_SRC_ALPHA is exactly what it says (1 - alpha)
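As a sketch of that arithmetic (blendUnpremultiplied is an illustrative helper, not an API; channels are 0..1 floats), gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) combines a src and dst pixel like this:

// result = src * SRC_ALPHA + dst * (1 - SRC_ALPHA), per channel
function blendUnpremultiplied(src, dst) {
    const a = src[3];
    return [
        src[0] * a + dst[0] * (1 - a), // R
        src[1] * a + dst[1] * (1 - a), // G
        src[2] * a + dst[2] * (1 - a), // B
        src[3] * a + dst[3] * (1 - a), // A (plain blendFunc uses the same factors for alpha)
    ];
}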
How you put all these together is up to you.
This article and This one somewhat cover these issues
(*) Images may have color conversion applied.

THREE.js read pixels from GPUComputationRenderer texture

I have been playing with GPUComputationRenderer on a modified version of this three.js example which modifies the velocity of interacting boids using GPU shaders to hold, read and manipulate boid position and velocity data.
I have got to a stage where I can put GPU computed data (predicted collision times) into the texture buffer using the shader. But now I want to read some of that texture data inside the main javascript animation script (to find the earliest collision).
Here is the relevant code in the render function (which is called on each animation pass)
//... GPU calculations as per original THREE.js example
gpuCompute.compute(); //... gpuCompute is the gpu computation renderer.
birdUniforms.texturePosition.value = gpuCompute.getCurrentRenderTarget( positionVariable ).texture;
birdUniforms.textureVelocity.value = gpuCompute.getCurrentRenderTarget( velocityVariable ).texture;
var xTexture = birdUniforms.texturePosition.value;//... my variable, OK.
//... From http://zhangwenli.com/blog/2015/06/20/read-from-shader-texture-with-threejs/
//... but note that this reads from the main THREE.js renderer NOT from the gpuCompute renderer.
//var pixelBuffer = new Uint8Array(canvas.width * canvas.height * 4);
//var gl = renderer.getContext();
//gl.readPixels(0, 0, canvas.width, canvas.height, gl.RGBA, gl.UNSIGNED_BYTE, pixelBuffer);
var pixelBuffer = new Uint8Array( WIDTH * WIDTH * 4); //... OK.
//var gl = gpuCompute.getContext();//... no getContext function!!!
//... from Nick Whaley here: http://stackoverflow.com/questions/13475209/three-js-get-data-from-three-webglrendertarget
//WebGLRenderer.readRenderTargetPixels ( renderTarget, x, y, width, height, buffer )
gpuCompute.readRenderTargetPixels ( xTexture, 0, 0, WIDTH, WIDTH, pixelBuffer ); //... readRenderTargetPixels is not a function!
As shown in the code I was "wanting" the gpuCompute renderer object to provide functions such as .getContext() or readRenderTargetPixels() but they do not exist for gpuCompute.
EDIT:
Then I tried adding the following code:
//... the WebGLRenderer code is included in THREE.js build
myWebglRenderer = new THREE.WebGLRenderer();
var myRenderTarget = gpuCompute.getCurrentRenderTarget( positionVariable );
myWebglRenderer.readRenderTargetPixels (
myRenderTarget, 0, 0, WIDTH, WIDTH, pixelBuffer );
This executes OK but pixelBuffer remains entirely full of zeroes instead of the desired position coordinate values.
Please can anybody suggest how I might read the texture data into a pixel buffer? (preferably in THREE.js/plain javascript because I am ignorant of WebGL).
This answer is out of date. See link at bottom
The short answer is it won't be easy. In WebGL 1.0 there is no easy way to read pixels from floating point textures, which is what GPUComputationRenderer uses.
If you really want to read back the data you'll need to render the GPUComputationRenderer floating point texture into an 8-bit RGBA texture, doing some kind of encoding from 32-bit floats to 8-bit values. You can then read that back in JavaScript and decode the values.
See WebGL Read pixels from floating point render target
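One common encoding (a sketch only; packFloat and unpackFloat are illustrative names, and the scheme assumes values in [0, 1)) packs the float across the four 8-bit channels in the fragment shader:

// GLSL: pack a float in [0, 1) into 8-bit RGBA (approximate).
vec4 packFloat(float v) {
    vec4 enc = vec4(1.0, 255.0, 65025.0, 16581375.0) * v;
    enc = fract(enc);
    enc -= enc.yzww * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 0.0);
    return enc;
}

and then JavaScript decodes the bytes that gl.readPixels returns:

// r, g, b, a are 0-255 bytes read back from the 8-bit target.
function unpackFloat(r, g, b, a) {
    return (r + g / 255 + b / 65025 + a / 16581375) / 255;
}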
Sorry for the long delay. I've not logged in to SO for a long time.
In the example of water with tennis balls,
https://threejs.org/examples/?q=water#webgl_gpgpu_water
The height of the water at the balls' positions is read back from the GPU.
A 4-component integer (RGBA byte) texture is used to encode 1-component float values.
The texture has 4x1 pixels, where the first one is the height and the next two are the normal of the water surface (the last pixel is not used).
This texture is computed and read back for each one of the tennis balls, and the ball physics is then performed on the CPU.
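In three.js terms that readback looks roughly like this (a sketch; readWaterLevelRenderTarget is an assumed name for the small 4x1 target in the example):

// Read the 4x1 RGBA byte target back to the CPU and decode afterwards.
const pixels = new Uint8Array(4 * 1 * 4);
renderer.readRenderTargetPixels(readWaterLevelRenderTarget, 0, 0, 4, 1, pixels);
// pixels[0..3] encode the height, pixels[4..11] encode the normal.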

WebGL/GLSL - How does a ShaderToy work?

I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.
From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc). Once that's done, the user then has to provide at least one vertex shader program, and one fragment shader program before an OpenGL program compiles.
However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.
How does that work?
My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...
Can anyone explain how Shadertoy works?
ShaderToy is a tool for writing pixel shaders.
What are pixel shaders?
If you render a full-screen quad, meaning that each of its four points is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that now each fragment corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a full-screen quad.
So the attributes are always the same, and so is the vertex shader:
positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0] ]
And that quad is rendered as TRIANGLE_STRIP.
Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
Vertex shader:
attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;
void main() {
    gl_Position = vec4(aPos, 0.0, 1.0);
    vUV = aUV;
}
And the fragment shader would then look something like this:
precision mediump float;

uniform vec2 uScreenResolution;
varying vec2 vUV;

void main() {
    // vUV is equal to gl_FragCoord.xy / uScreenResolution
    // do some pixel-shader-related work
    gl_FragColor = vec4(someColor, 1.0); // someColor is assumed to be a vec3
}
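For completeness, a minimal sketch of the JavaScript side that drives those two shaders (gl and program are assumed to be an existing WebGL1 context and linked program):

// Upload the fullscreen quad once, then draw it every frame.
function initQuad(gl, program) {
    const positions = new Float32Array([-1, 1, 1, 1, -1, -1, 1, -1]);
    const uvs = new Float32Array([0, 0, 1, 0, 0, 1, 1, 1]);
    for (const [name, data] of [['aPos', positions], ['aUV', uvs]]) {
        const buf = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, buf);
        gl.bufferData(gl.ARRAY_BUFFER, data, gl.STATIC_DRAW);
        const loc = gl.getAttribLocation(program, name);
        gl.enableVertexAttribArray(loc);
        gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
    }
}

// per frame:
// gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);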
ShaderToy supplies you with a few uniforms by default, such as iResolution (aka uScreenResolution), iGlobalTime, and iMouse, which you can use in your pixel shader.
For coding geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing. That is quite a complex area of programming, but in short:
You represent your geometry through some mathematical formulas, and later in the pixel shader, when you wish to check whether some pixel is part of your geometry, you use that formula to retrieve that information. Googling a bit should give you plenty of resources on what ray tracers are and how exactly they are built, and this might help:
How to do ray tracing in modern OpenGL?
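To make that concrete, a tiny sketch (an illustration, not from the linked question): a sphere is described by a formula instead of vertices, and the pixel shader evaluates it per pixel:

// GLSL: signed distance from point p to a sphere of radius r at the origin;
// negative means inside, zero on the surface, positive outside.
float sdSphere(vec3 p, float r) {
    return length(p) - r;
}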
Hope this helps.
ShaderToy displays simple GLSL that is programmed to handle all the lighting, geometry, etc. It's not vertex geometry; most of the 3D work is raycasting, or you can do 2D shaders, etc.
Any color and spatial math can be programmed in the GLSL language. Combinations of advanced algorithms make isosurfaces and shapes, and then project textures onto those isosurfaces. Raycasting sends imaginary lines from the viewer into the distance and intercepts anything in the way; there are many raycasting techniques for 3D.
Visit www.iquilezles.org for an idea of the different tools that are used in shadertoy/GLSL graphics.
Shadertoy is called "TOY" for a reason. It's basically a puzzle: given only a function that is told the current pixel position as input, write a function that generates an image.
The website sets up WebGL to draw a single quad (rectangle) and then lets you provide a function that is passed the current pixel being rendered as fragCoord. You then use that to compute some colors.
For example if you did this
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec4 red = vec4(1, 0, 0, 1);
    vec4 green = vec4(0, 1, 0, 1);
    fragColor = mix(red, green, sin(fragCoord.x / 10.0) * 0.5 + 0.5);
}
you'd get red and green stripes like this
https://www.shadertoy.com/view/3l2cRz
Shadertoy provides a few other inputs. The most common is the resolution being rendered to as iResolution. If you divide the fragCoord by iResolution then you get values that go from 0 to 1 across and 0 to 1 down the canvas so you can easily make your function resolution independent.
Doing that we can draw an ellipse in the center like this
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // uv goes from 0 to 1 across and up the canvas
    vec2 uv = fragCoord / iResolution.xy;
    vec4 red = vec4(1, 0, 0, 1);
    vec4 green = vec4(0, 1, 0, 1);
    float distanceFromCenter = distance(uv, vec2(0.5));
    fragColor = mix(red, green, step(0.25, distanceFromCenter));
}
which produces a red ellipse centered on a green background.
The second most common input is iTime in seconds, which you can use to animate parameters in your function over time.
So, given those inputs, if you apply enough math you can make an incredible image; for example, this shadertoy shader generates an image of a dolphin.
It's amazing that someone figured out the math needed to generate that image given only the inputs mentioned above.
Many of the most amazing shadertoy shaders use a technique called ray marching and some math called "signed distance fields", which you can read about here
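A minimal sketch of that technique (illustrative only; sceneSDF here is just a unit sphere):

// GLSL: march along the ray until the signed distance field reports a hit.
float sceneSDF(vec3 p) {
    return length(p) - 1.0; // unit sphere at the origin
}

float march(vec3 ro, vec3 rd) {
    float t = 0.0;
    for (int i = 0; i < 64; i++) {
        float d = sceneSDF(ro + rd * t);
        if (d < 0.001) return t; // close enough: hit
        t += d;                  // safe step: nothing is nearer than d
        if (t > 100.0) break;    // ray escaped the scene
    }
    return -1.0; // miss
}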
But, I think it's important to point out that while there are many cool things to learn from shadertoy shaders, many of them apply only to this "puzzle" of "how do I make a beautiful image with only one function whose input is a pixel position and whose output is a single color". They don't answer "how should I write shaders for a performant app".
Compare the dolphin above to this speedboat game
https://www.youtube.com/watch?v=7v9gZK9HqqI
The dolphin shadertoy shader runs at about 2 frames a second when fullscreen on my NVidia GeForce GT 750, whereas the speedboat game runs at 60fps. The reason the game runs fast is that it uses more traditional techniques, drawing shapes with projected triangles. Even an NVidia 1060 GTX can only run that dolphin shader at about 10 frames a second when fullscreen.
It's just basically pushing GLSL pixel shader source code directly onto the graphics card. The real magic happens in the incredibly clever algorithms that people use to create amazing effects, like ray marching, ray casting, and ray tracing. It's best to have a look at some other live GLSL sandboxes like http://glsl.heroku.com/ and http://webglplayground.net/.
It's basically creating a window, typically two triangles which represent the screen; then the shader works on each pixel just like a ray tracer.
I've been looking at these for a while now, and the algorithms people use are mind-blowing. You'll need some serious math chops and to look up "demo coding" source code to be able to wrap your head around them. Many on shadertoy just blow your mind!
So to summarise: you just need to learn GLSL shader coding and algorithms. No easy solution.
Traditionally in computer graphics, geometry is created using vertices and rendered using some form of materials (e.g. textures with lighting). In GLSL, the vertex shader processes the vertices and the fragment (pixel) shader processes the materials.
But that is not the only way to define shapes. Just as a texture could be procedurally defined (instead of looking up its texels), a shape could be procedurally defined (instead of looking up its geometry).
So, similar to ray tracing, these fragment shaders are able to create shapes without having their geometry defined by vertices.
There are still more ways to define shapes, e.g. volume data (voxels), surface curves, and so on. A computer graphics text should cover some of them.

WebGL: vec2 multiplied by 1 breaks texture coordinates

I have got a texture bound to a framebuffer object so I can render to this texture. Because the canvas/viewport may be smaller than the texture, I multiply the texture coordinates by the current scale (canvas size/texture size) to get the correct position.
The line in the fragment shader to get the correct color from the texture looks as follows:
vec4 rtt = texture2D(rtt_sampler, v_texCoord * u_scale);
This works perfectly fine, but if u_scale is exactly 1.0 everything breaks. I don't get the correct color but always the same one, regardless of the coordinates (it may be 0.5, but I haven't tested that yet). Do you have any idea what could be the cause of this?

WebGL render to texture - how do I write data to the alpha channel?

To preface this, I'm trying to replicate the water rendering algorithm described in this article: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html. Part of this algorithm requires rendering an alpha mask into a framebuffer in order to use it later for texture sampling from the originally rendered scene. In short, the algorithm looks like this:
Render scene geometry into a texture S, skipping refractive meshes and replacing it with an alpha mask
Render refractive meshes by sampling texture S with perturbation IF it's inside the alpha mask, otherwise just directly sample texture S
Unfortunately, I'm still learning WebGL and don't really know enough to know how to approach this. Additionally, that article uses HLSL, and the conversion is nontrivial for me. Obviously, attempting to do this in the fragment shader won't work:
void main( void ) {
gl_FragColor = vec4( 0.0 );
}
because it will just blend with the previously rendered geometry and the alpha value will still be 1.0.
Here is a brief synopsis of what I have:
function animate(){
    // ... snip ...
    renderer.render( scene, camera, rtTexture, true );
    renderer.render( screenScene, screenCamera );
}
// water fragment shader
void main( void ){
    // black out the alpha channel
    gl_FragColor = vec4(0.0);
}
// screen fragment shader
varying vec2 vUv;
uniform sampler2D screenTex;
void main( void ) {
    gl_FragColor = texture2D( screenTex, vUv );
    // just trying to see what the alpha mask would look like
    if( gl_FragColor.a < 0.1 ){
        gl_FragColor.b = 1.0;
    }
}
The entire code can be found at http://scottrabin.github.com/Terrain/
Obviously, attempting to do this in the fragment shader won't work:
because it will just blend with the previously rendered geometry and the alpha value will still be 1.0.
That's up to you. Just use the proper blend modes:
glBlendFuncSeparate(..., ..., GL_ONE, GL_ZERO);
glBlendFuncSeparate sets up separate blending for the RGB and Alpha portions of the color. In this case, it writes the source alpha directly to the destination.
Note that if you're drawing something opaque, you don't need blend modes. The output alpha will be written as is, just like the color.
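In WebGL the equivalent call is gl.blendFuncSeparate (a sketch; the RGB factors below are just placeholders, since the answer deliberately leaves them up to you):

gl.enable(gl.BLEND);
// RGB factors: whatever your color blending needs (placeholders here);
// alpha factors ONE/ZERO write the shader's alpha straight to the target.
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ZERO);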
