I am attempting to use WebGL2 for some GPGPU computations. A key component of this is setting the value of a texel to the bitwise-OR of itself and a new value computed in the fragment shader. Is there a way to apply this bitwise operation to each fragment instead of overwriting the value completely?
Here is the relevant code:
precision mediump float;

varying float id; // integer passed through the vertex shader, value is in range [1, 32].

void main() {
    if (id == 1.0) {
        gl_FragColor = vec4(0.0039, 0.0, 0.0, 0.0);
    } else if (id == 2.0) {
        gl_FragColor = vec4(0.0078, 0.0, 0.0, 0.0);
    } else if (id == 3.0) {
        ...
    }
}
To clarify the desired behaviour, let's say the value of a texel that we are writing to is 0b01. I perform some computation in the fragment shader and write the value 0b10. I would like the result to be 0b11.
I know that reading from and writing to a single texture are mutually exclusive, so I am wondering if there is a way to configure WebGL2 to always perform a bitwise-OR when writing to a texture.
There is no bitwise-OR write mode. Typically you read from one texture, perform the bitwise-OR in the shader, and write the result to a second texture.
BTW, WebGL2 has signed and unsigned integer textures.
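For example, here is a minimal sketch of that ping-pong pass, assuming a WebGL2 R32UI render target and GLSL ES 3.00 (srcTex and newBits are illustrative names, not from the question):

#version 300 es
precision highp float;
precision highp usampler2D;

uniform usampler2D srcTex; // previous contents of the destination texture
uniform uint newBits;      // illustrative stand-in for the value computed per fragment

out uvec4 outColor;

void main() {
    // read the old value at this fragment's texel
    uint prev = texelFetch(srcTex, ivec2(gl_FragCoord.xy), 0).r;
    // OR in the new bits; the framebuffer must have a different
    // R32UI texture attached than the one bound to srcTex
    outColor = uvec4(prev | newBits, 0u, 0u, 0u);
}

After each pass, swap the roles of the source texture and the render target.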
Related
I'm a beginner at shaders and WebGL and I've taken some shortcuts to develop what I currently have, so please bear with me.
Is there a way to update attribute buffer data within the GPU only? Basically what I want to do is send three buffers t0, t1, t2 to the GPU, representing points and their positions at times 0, 1, and 2 respectively. Then I wish to compute their new position tn from the properties of t0, t1, and t2: the velocity of the points, turning angle, and so on.
My current implementation updates the positions in JavaScript and then copies the buffers into WebGL at every draw. But why? This seems terribly inefficient to me, and I don't see why I couldn't do everything in the shader to skip moving data from CPU->GPU all the time. Is this possible somehow?
This is the current vertex shader, which sets the color of the point depending on the turn direction and the angle it's turning at (tn is updated in JS at the moment by debugging functions):
export const VsSource = `
#define M_PI 3.1415926535897932384626433832795

attribute vec4 t0_pos;
attribute vec4 t1_pos;
attribute vec4 t2_pos;
attribute vec4 r_texture;

varying vec4 color;

void main() {
    float dist = distance(t1_pos, t2_pos);
    vec4 v = normalize(t1_pos - t0_pos);
    vec4 u = normalize(t2_pos - t1_pos);
    float angle = acos(dot(u, v));
    float intensity = angle / M_PI * 25.0;

    float turnDir = (t0_pos.y - t1_pos.y) * (t2_pos.x - t1_pos.x)
                  + (t1_pos.x - t0_pos.x) * (t2_pos.y - t1_pos.y);
    if (turnDir > 0.000000001) {
        color = vec4(1.0, 0.0, 0.0, intensity);
    } else if (turnDir < -0.000000001) {
        color = vec4(0.0, 0.0, 1.0, intensity);
    } else {
        color = vec4(1.0, 1.0, 1.0, 0.03);
    }

    gl_Position = t2_pos;
    gl_PointSize = 50.0;
}
`;
What I want to do is update the position gl_Position (tn) depending on these properties, and then somehow shuffle/copy the buffers tn->t2, t2->t1, t1->t0 to prepare for another cycle, all within the vertex shader (not only for efficiency, but also for some other reasons that are unrelated to the question but related to the project I'm working on).
Note: your question should probably be closed as a duplicate, since how to write output from a vertex shader is already covered, but just to add some notes relevant to your question...
In WebGL1 it is not possible to update buffers on the GPU. You can instead store your data in a texture and update that texture. Still, you can not update a texture from itself:
pos = pos + vel // won't work
But you can update another texture
newPos = pos + vel // will work
Then, the next time, pass the texture called newPos as pos and vice versa.
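A minimal sketch of such an update pass, assuming one texel per point and the newPos texture attached to the framebuffer (pos, vel, and vUv are illustrative names):

precision mediump float;

uniform sampler2D pos; // current positions, one texel per point
uniform sampler2D vel; // velocities, same layout

varying vec2 vUv; // this point's texel coordinate

void main() {
    // the result lands in the texture attached to the framebuffer ("newPos")
    gl_FragColor = texture2D(pos, vUv) + texture2D(vel, vUv);
}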
In WebGL2 you can use transform feedback to write the output of a vertex shader (the varyings) to a buffer. It has the same restriction: you can not write back to a buffer you are reading from.
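A minimal sketch of that setup, assuming the vertex shader declares a varying called newPosition and that readBuffer/writeBuffer are two position buffers you swap each frame (the names are illustrative):

// must be called before linking the program
gl.transformFeedbackVaryings(program, ['newPosition'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);

// read positions from one buffer, capture the varying into the other
const tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, writeBuffer);

gl.enable(gl.RASTERIZER_DISCARD); // update-only pass, draw nothing
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, numPoints);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

// then swap readBuffer and writeBuffer for the next frame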
There is an example of writing to a texture, and also an example of writing to a buffer using transform feedback, in this answer
Also an example of putting vertex data in a texture here
There is an example of a particle system using textures to update the positions in this Q&A
Put succinctly, I've hit a problem where calls to getUniformLocation for different uniforms all seem to return the same value.
It's hard to validate this since the returned GLint is opaque to the JavaScript console. However, when looking at someone else's code sample I noticed you can pass standard JavaScript numbers in its place, hence this trial code:
const infoA = gl.getActiveUniform(program, gl.getUniformLocation(program, 'uSampler'));
const infoB = gl.getActiveUniform(program, gl.getUniformLocation(program, 'uRotationMatrix'));
const infoC = gl.getActiveUniform(program, 0);
const infoD = gl.getActiveUniform(program, 1);
infoA and infoB both equal the WebGLActiveInfo object for the 'uRotationMatrix' uniform, as does infoC, but infoD returns the info data for 'uSampler'.
The closest I can find to similar questions is about optimization removing unused uniforms, resulting in getUniformLocation always returning -1. I don't believe that is the case here, since both uniforms are used, and using the webgl-inspector Chrome extension by Ben Vanik https://github.com/benvanik/WebGL-Inspector I see both uniforms listed in the Program panel with idx values 0 and 1. However, I did note that providing an invalid uniform name produced no error and resulted in a 'default' return value of an info object for 'uRotationMatrix' (infoE); using just -1 resulted in an error (infoF).
const infoE = gl.getActiveUniform(program, gl.getUniformLocation(program, 'INVALID_NAME'));
const infoF = gl.getActiveUniform(program, -1); // null
Interestingly, the results in Safari are reversed; that is, the majority of calls return the info object for 'uSampler', while only explicitly using a JavaScript number returns the info object for 'uRotationMatrix'.
The shaders are below and pretty simple. Both they and the program I linked them into returned success when the relevant parameters were inspected, i.e.
gl.getShaderParameter(shader, gl.COMPILE_STATUS);
gl.getProgramParameter(program, gl.LINK_STATUS);
Vertex.
precision mediump float;

attribute vec2 aPosition;
attribute vec2 aTexCoord;

uniform mat4 uRotationMatrix;

varying vec2 fragTexCoord;

void main() {
    fragTexCoord = aTexCoord;
    gl_Position = uRotationMatrix * vec4(aPosition, 0.0, 1.0);
}
Fragment.
precision mediump float;

varying vec2 fragTexCoord;

uniform sampler2D uSampler;

void main() {
    vec4 sample = texture2D(uSampler, fragTexCoord);
    gl_FragColor = vec4(sample.rgb, 1.0);
}
Does anyone have any pointers for where I should be looking to track down the problem?
Edit:
In reference to making sure the parameter and return value types are compatible, here is the MDN documentation for the pertinent functions:
"location: A GLuint specifying the index of the uniform attribute to get. This value is returned by getUniformLocation()." Link: getActiveUniform
and
"Return value: A WebGLUniformLocation value indicating the location of the named variable, ... The WebGLUniformLocation type is compatible with the GLint type when specifying the index or location of a uniform attribute." Link: getUniformLocation
gl.getUniformLocation does not return -1 for non-existent uniforms. It returns null.
This code makes no sense:
const infoA = gl.getActiveUniform(program, gl.getUniformLocation(program, 'uSampler'));
const infoB = gl.getActiveUniform(program, gl.getUniformLocation(program, 'uRotationMatrix'));
gl.getActiveUniform requires an integer. gl.getUniformLocation returns a WebGLUniformLocation object, not an integer, and it cannot be converted into an integer. At best it's getting converted into NaN, and NaN is getting converted into 0.
gl.getActiveUniform does not take uniform locations; it takes a number from 0 to N - 1, where N is returned by gl.getProgramParameter(prg, gl.ACTIVE_UNIFORMS). Its purpose is to allow you to query the uniforms without first knowing their names.
const numUniforms = gl.getProgramParameter(prg, gl.ACTIVE_UNIFORMS);
for (let i = 0; i < numUniforms; ++i) {
    // get the name, type, and size of a uniform
    const info = gl.getActiveUniform(prg, i);
    // get the location of that uniform
    const loc = gl.getUniformLocation(prg, info.name);
}
Note that the reason WebGL chose to have gl.getUniformLocation return a WebGLUniformLocation object instead of an int is that it's a common error to guess those ints or to assume they are consecutive. OpenGL makes no such guarantees.
Two programs with the same uniforms might have different locations for each uniform.
Those locations are not necessarily 0 and 1; they could be anything, say 5323 and 23424. It's up to the driver, and various drivers return different numbers.
Similarly for uniform arrays like uniform float foo[2], if the location of foo[0] is 37 that does not mean the location of foo[1] is 38.
For all these reasons, WebGL chose to wrap the location. That way many of those mistakes can be avoided and/or checked for. You can't do math on a WebGLUniformLocation, so the guessed-location error disappears (your guess might work locally, but you're making a webpage that has to run on other GPUs), and the erroneous uniform-array math is avoided. Similarly, you can't use a WebGLUniformLocation from one program with a different program, so the error of assuming two programs with the same uniforms have the same int locations is also avoided.
While we're on the topic of gl.getActiveUniform you should be aware it can return info for things that aren't uniforms. Example: https://jsfiddle.net/greggman/n6mzz6jv/
I am writing a WebGL program with texturing.
As long as the image isn't loaded, the texture2D function returns vec4(0.0, 0.0, 0.0, 1.0), so all objects are black.
So I would like to check whether my sampler2D is available.
I have already tried:
<script id="shader-fs" type="x-shader/x-fragment">
    precision mediump float;

    varying vec2 vTextureCoord;

    uniform sampler2D uSampler;

    void main(void) {
        vec4 color = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
        if (color.r == 0.0 && color.g == 0.0 && color.b == 0.0)
            color = vec4(1.0, 1.0, 1.0, 1.0);
        gl_FragColor = color;
    }
</script>
But of course this doesn't make sense, because the texture itself could be black.
Can anybody help me? How can I check in the fragment shader whether my texture image is already loaded?
You can't really check that in WebGL.
Solutions:
1. Don't render until the texture is loaded.
2. Use a 1x1 pixel texture to start, and fill it in with the image once it's loaded. See this answer.
3. Pass in more info to the shader, like uniform bool textureLoaded.
Me, I always pick #2 because it means the app runs immediately and the textures get filled in as they download.
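A minimal sketch of option #2 (the placeholder color and image URL are illustrative):

// start with a single blue pixel so rendering can begin immediately
const tex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, tex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0, gl.RGBA,
              gl.UNSIGNED_BYTE, new Uint8Array([0, 0, 255, 255]));

// swap in the real image once it arrives
const img = new Image();
img.onload = () => {
    gl.bindTexture(gl.TEXTURE_2D, tex);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
    gl.generateMipmap(gl.TEXTURE_2D); // assumes a power-of-2 image
};
img.src = 'texture.png';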
I'd provide a new uniform that stores whether the texture is loaded or not.
Or you can write two shaders, with and without the texture, and select the proper one before rendering.
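For the two-shader route, the per-draw selection could be as simple as this sketch (the program names are illustrative):

// pick the appropriate program before each draw
const program = textureLoaded ? texturedProgram : untexturedProgram;
gl.useProgram(program);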
I want to be able to append multiple light sources to each node of my scene graph, but I have no clue how to do it!
From the tutorials on learningwebgl.com I learned to either use directional or position lighting, but I couldn't find a good explanation of how to implement multiple light sources.
So, the objective is to have the option to append an arbitrary number of light sources to each node, each of which can be either directional or positional. If possible and advisable, this should be realized with only one shader program (unless multiple programs are the only way to do it), because I automatically create the program for each node depending on its specific needs (unless there is already a program on the stack with equal settings).
Based on the tutorials on learningwebgl.com, my fragment shader source for a node object using lighting, without a preset binding to one of the lighting types, could look like this...
precision highp float;

uniform bool uUsePositionLighting;
uniform bool uUseDirectionalLighting;
uniform vec3 uLightPosition;
uniform vec3 uLightDirection;
uniform vec3 uAmbientColor;
uniform vec3 uDirectionalColor;
uniform float uAlpha;

varying vec4 vPosition;
varying vec3 vTransformedNormal;
varying vec3 vColor;

void main (void) {
    float directionalLightWeighting = 0.0;
    if (uUseDirectionalLighting) {
        directionalLightWeighting = max(dot(vTransformedNormal, uLightDirection), 0.0);
    } else if (uUsePositionLighting) {
        vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);
        directionalLightWeighting = max(dot(vTransformedNormal, lightDirection), 0.0);
    }
    vec3 lightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
    gl_FragColor = vec4(vColor * lightWeighting, uAlpha);
}
...so, that's basically my poor state of knowledge concerning this subject.
I also ask myself, how adding more light sources would affect the lighting-colors:
I mean, do uAmbientColor and uDirectionalColor have to sum up to 1.0? In this case (and particularly when using more than one light source) it surely would be good to precalculate these values before passing them to the shader, wouldn't it?
Put your lights into an array and loop over them for each fragment. Start with a fixed-size array of light sources; unbounded arrays are not supported until OpenGL 4.3 and are more complicated to work with.
Something along the lines of:
uniform vec3 uLightPosition[16];
uniform vec3 uLightColor[16];
uniform vec3 uLightDirection[16];
uniform bool uLightIsDirectional[16];

....

void main(void) {
    vec3 reflectedLightColor = vec3(0.0);

    // accumulate incoming light from all light sources
    for (int i = 0; i < 16; i++) {
        if (uLightIsDirectional[i]) {
            reflectedLightColor += max(dot(vTransformedNormal, uLightDirection[i]), 0.0) * uLightColor[i];
        } else {
            vec3 lightDirection = normalize(uLightPosition[i] - vPosition.xyz);
            reflectedLightColor += max(dot(vTransformedNormal, lightDirection), 0.0) * uLightColor[i];
        }
    }

    gl_FragColor = vec4(uAmbientColor + reflectedLightColor * vColor, uAlpha);
}
Then you can enable/disable light sources by setting uLightColor to (0,0,0) for the entries you don't use.
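A sketch of the JavaScript side under those assumptions (the chosen color is illustrative):

// all 16 light colors in one flat array; zero entries leave lights disabled
const lightColors = new Float32Array(16 * 3);
lightColors.set([1.0, 0.9, 0.8], 0 * 3); // enable light 0 with a warm white
gl.uniform3fv(gl.getUniformLocation(program, 'uLightColor'), lightColors);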
Ambient and directional don't have to sum to 1. In fact, a light source can have a strength much greater than 1.0, but then you will need tonemapping to get back to a range of values that can be displayed on a screen. I would suggest playing around to get a feel for what is happening (e.g. what happens when a light source has negative colors, or colors above 1.0?).
uAmbientColor is just a (poor) way to simulate light that has bounced several times in the scene. Without it, things in shadow become completely black, which looks unrealistic.
Reflectance should typically be between 0 and 1 (in this example, the parts returned by the max computations); otherwise a material would reflect more light than it receives.
@ErikMan's answer is great, but may involve a lot of extra work on the part of the GPU, since you're checking every light per fragment, which isn't strictly necessary.
Rather than an array, I'd suggest building a clip-space quadtree. (You can do this in a compute shader if it's supported by your target platform / GL version.)
A node might have a structure such as (pseudocode as my JS is rusty):
typedef struct
{
    uint LightMask; /// bitmask - each light has a bit indicating whether it is active for this node. uint allows for 32 lights.
    bool IsLeaf;
} Node;

const uint maxLevels = 4;
const uint maxLeafCount = pow(4, maxLevels);
const uint maxNodeCount = (4 * maxLeafCount - 1) / 3;

/// linear quadtree - node offset = 4 * parentIndex + childIndex;
Node tree[maxNodeCount];
When building the tree, just check each light's clip-space bounding box against the implicit node bounds. (Root goes from (-1,-1) to (+1,+1). Each child is half that size on each dimension. So, you don't really need to store node bounds.)
If the light touches the node, set a bit in Node.LightMask corresponding to the light. If the light completely contains the node, stop recursing. If it intersects the node, subdivide and continue.
In your fragment shader, find which leaf node contains your fragment, and apply all lights whose bit is set in the leaf node's mask.
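As a hypothetical sketch under the constants above (maxLevels = 4, so a 16x16 leaf grid), assuming the leaf level is stored in Morton/Z-order and WebGL2-style integer ops are available:

int leafIndex(vec2 clipPos) {
    // map clip space [-1, +1] onto the 16x16 grid of leaf cells
    ivec2 cell = ivec2(clamp((clipPos * 0.5 + 0.5) * 16.0, 0.0, 15.0));
    // interleave the cell bits (Morton order) to get the index within the level
    int morton = 0;
    for (int i = 0; i < 4; ++i) {
        morton |= ((cell.x >> i) & 1) << (2 * i);
        morton |= ((cell.y >> i) & 1) << (2 * i + 1);
    }
    // the leaf level starts after the 1 + 4 + 16 + 64 = 85 inner nodes
    return 85 + morton;
}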
You could also store your tree in a mipmap pyramid if you expect it to be dense.
Keep your tiles to a size that is a multiple of 32, preferably square.
vec2 minNodeSize = vec2(2.0 / 32.0);
Now, if you have a small number of lights, this may be overkill. You would probably have to have a lot of lights to see any real performance benefit. Also, a normal loop may help reduce data divergence in your shader, and makes it easier to eliminate branching.
This is one way to implement a simple tiled renderer, and opens the door to having hundreds of lights.
To preface this, I'm trying to replicate the water rendering algorithm describe in this article http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html. Part of this algorithm requires rendering an alpha mask into a framebuffer in order to use later for a texture sampling from the originally rendered scene. In short, the algorithm looks like this:
Render scene geometry into a texture S, skipping refractive meshes and replacing them with an alpha mask
Render refractive meshes by sampling texture S with perturbation if the fragment is inside the alpha mask; otherwise just directly sample texture S
Unfortunately, I'm still learning WebGL and don't really know enough to know how to approach this. Additionally, that article uses HLSL, and the conversion is nontrivial for me. Obviously, attempting to do this in the fragment shader won't work:
void main( void ) {
    gl_FragColor = vec4( 0.0 );
}
because it will just blend with the previously rendered geometry and the alpha value will still be 1.0.
Here is a brief synopsis of what I have:
function animate(){
    ... snip ...
    renderer.render( scene, camera, rtTexture, true );
    renderer.render( screenScene, screenCamera );
}
// water fragment shader
void main( void ){
    // black out the alpha channel
    gl_FragColor = vec4(0.0);
}
// screen fragment shader
varying vec2 vUv;
uniform sampler2D screenTex;

void main( void ) {
    gl_FragColor = texture2D( screenTex, vUv );
    // just trying to see what the alpha mask would look like
    if( gl_FragColor.a < 0.1 ){
        gl_FragColor.b = 1.0;
    }
}
The entire code can be found at http://scottrabin.github.com/Terrain/
Obviously, attempting to do this in the fragment shader won't work:
because it will just blend with the previously rendered geometry and the alpha value will still be 1.0.
That's up to you. Just use the proper blend modes:
glBlendFuncSeparate(..., ..., GL_ONE, GL_ZERO);
glBlendFuncSeparate sets up separate blending for the RGB and Alpha portions of the color. In this case, it writes the source alpha directly to the destination.
Note that if you're drawing something opaque, you don't need blend modes. The output alpha will be written as is, just like the color.
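In WebGL the same call looks like the sketch below; the RGB factors shown are ordinary alpha blending, but pick whatever the color pass needs:

gl.enable(gl.BLEND);
gl.blendFuncSeparate(
    gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, // RGB: blend as usual
    gl.ONE, gl.ZERO);                     // alpha: write source alpha as-is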