I'm a beginner at shaders and WebGL and I've taken some shortcuts to develop what I currently have, so please bear with me.
Is there a way to update attribute buffer data within the GPU only? Basically what I want to do is send three buffers t0, t1, t2 into the GPU, representing the points' positions at times 0, 1, and 2 respectively. Then I want to compute a new position tn from the properties of t2, t1, and t0, based on the velocity of the points, their turning angle, and so on.
My current implementation updates the positions in JavaScript and then copies the buffers into WebGL at every draw. This seems terribly inefficient to me, and I don't see why I couldn't do everything in the shader and skip moving data from CPU to GPU all the time. Is this possible somehow?
This is the current vertex shader, which sets the point's color depending on the turn direction and the angle it's turning at (tn is currently updated in JS by debugging functions):
export const VsSource = `
#define M_PI 3.1415926535897932384626433832795

attribute vec4 t0_pos;
attribute vec4 t1_pos;
attribute vec4 t2_pos;
varying vec4 color;
attribute vec4 r_texture;

void main() {
    float dist = distance(t1_pos, t2_pos);
    vec4 v = normalize(t1_pos - t0_pos);
    vec4 u = normalize(t2_pos - t1_pos);
    float angle = acos(dot(u, v));
    float intensinty = angle / M_PI * 25.0;

    float turnDirr = (t0_pos.y - t1_pos.y) * (t2_pos.x - t1_pos.x) + (t1_pos.x - t0_pos.x) * (t2_pos.y - t1_pos.y);

    if (turnDirr > 0.000000001) {
        color = vec4(1.0, 0.0, 0.0, intensinty);
    } else if (turnDirr < -0.000000001) {
        color = vec4(0.0, 0.0, 1.0, intensinty);
    } else {
        color = vec4(1.0, 1.0, 1.0, 0.03);
    }

    gl_Position = t2_pos;
    gl_PointSize = 50.0;
}
`;
What I want to do is to update the position gl_Position (tn) depending on these properties, and then somehow shuffle/copy the buffers tn->t2, t2->t1, t1->t0 to prepare for another cycle, but all within the vertex shader (not only for the efficiency, but also for some other reasons which are unrelated to the question but related to the project I'm working on).
Note: your question should probably be closed as a duplicate, since how to write output from a vertex shader is already covered, but just to add some notes relevant to your question...
In WebGL1 it is not possible to update buffers on the GPU. You can instead store your data in a texture and update that texture. Still, you cannot update a texture from itself:
pos = pos + vel // won't work
But you can write to another texture:
newPos = pos + vel // will work
Then on the next frame, pass the texture called newPos as pos and vice versa.
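A rough sketch of that ping-pong, assuming an updateProgram whose fragment shader computes newPos = pos + vel; createFloatTexture, createFramebufferFor and drawFullscreenQuad are hypothetical helpers, and reading the result back in your vertex shader needs vertex texture fetch plus the float-texture extensions (e.g. OES_texture_float in WebGL1):
var posA = createFloatTexture(gl, texWidth, texHeight);       // hypothetical helper: RGBA float texture
var posB = createFloatTexture(gl, texWidth, texHeight);
var src = { tex: posA, fb: createFramebufferFor(gl, posA) };  // hypothetical helper: FBO with the texture attached
var dst = { tex: posB, fb: createFramebufferFor(gl, posB) };

function updatePositions() {
    gl.bindFramebuffer(gl.FRAMEBUFFER, dst.fb);   // write the new positions into the other texture
    gl.viewport(0, 0, texWidth, texHeight);
    gl.useProgram(updateProgram);                 // fragment shader does: newPos = pos + vel
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, src.tex);       // read the current positions
    gl.uniform1i(posTextureLocation, 0);
    drawFullscreenQuad(gl);                       // hypothetical helper: two triangles covering the target
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    var tmp = src; src = dst; dst = tmp;          // swap, so next frame reads what was just written
}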
In WebGL2 you can use transform feedback to write the output of a vertex shader (the varyings) to a buffer. It has the same restriction: you cannot write back to a buffer you are reading from.
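A rough WebGL2 sketch of that, assuming updateProgram was linked after calling gl.transformFeedbackVaryings(updateProgram, ['newPos'], gl.SEPARATE_ATTRIBS), tf comes from gl.createTransformFeedback(), and posBufferA/posBufferB are two buffers you swap each frame:
gl.useProgram(updateProgram);
gl.enable(gl.RASTERIZER_DISCARD);                    // skip the fragment stage while updating

gl.bindBuffer(gl.ARRAY_BUFFER, posBufferA);          // read the current positions as an attribute
gl.enableVertexAttribArray(posAttribLocation);
gl.vertexAttribPointer(posAttribLocation, 4, gl.FLOAT, false, 0, 0);

gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, posBufferB);  // capture the varying 'newPos' here

gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, numPoints);
gl.endTransformFeedback();

gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);
gl.disable(gl.RASTERIZER_DISCARD);

// Next frame: swap posBufferA and posBufferB and render from the freshly written buffer.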
There is an example of writing to a texture and also an example of writing to a buffer using transform feedback in this answer.
There is also an example of putting vertex data in a texture here.
There is an example of a particle system using textures to update the positions in this Q&A.
Rather standard GLSL problem. Unfortunately, although I'm familiar with the math behind it, I'm not that certain about the implementation at the WebGL level. And shaders are rather tricky to debug.
I'm trying to get the angle between the view vector and the normal of an object, at the GLSL shader level. I'm using three.js but making my own shaders through it.
Here's the relevant part of the vertex shader:
void main() {
    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);

    vec3 nmalInWorld = normalize(normalMatrix * normal);
    vec3 camInWorld = cameraPosition;
    vec4 posInWorld = modelViewMatrix * vec4(position, 1.0);
    posInWorld /= posInWorld[3];

    angle = -dot(normalize(posInWorld - vec4(cameraPosition, 1.0)), normalize(vec4(nmalInWorld, 1.0)));
    if (angle > 0.7) {
        angle = 1.0;
    } else {
        angle = 0.0;
    }
I've tried several permutations of this arrangement, so I apologize in advance if it's needlessly complicated. I do get something rather similar to what I want - it is certainly always the same with respect to the camera perspective, but it's not centered for a reason I don't understand. The fragment shader just directly maps angle to a color channel.
As you can see, the white point isn't centered in the middle of the sphere, which is what I intended. I don't really see why. Perhaps it's missing a perspective transform? I tried that with no success. I don't believe the view vector should be relevant in this situation; the sphere should stay colored the same regardless of the target of the camera.
posInWorld, vec4(cameraPosition, 1.0) and vec4(nmalInWorld, 1.0) are homogeneous coordinates.
The dot product of two Cartesian unit vectors is equal to the cosine of the angle between them.
The "view" vector is the normalized Cartesian vector from the position of the vertex to the position of the camera. When a vertex position is transformed by modelViewMatrix, the result is a position in view space. The view-space position of the camera is (0, 0, 0), because the origin of view space is the position of the camera:
vec4 posInView = modelViewMatrix * vec4(position, 1.0);
posInView /= posInView[3];
vec3 VinView = normalize(-posInView.xyz); // (0, 0, 0) - posInView
The normalMatrix transforms a vector from model space to view space:
vec3 NinView = normalize(normalMatrix * normal);
VinView and NinView are both vectors in view space. The former points from the vertex to the camera; the latter is the normal vector of the surface at the vertex.
The cosine of the angle between the two vectors can be computed with the dot product:
float NdotV = dot(NinView, VinView);
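Put together, a minimal vertex shader sketch of this (assuming angle is the varying your fragment shader already reads, and the usual three.js built-in uniforms and attributes):
varying float angle;

void main() {
    vec4 posInView = modelViewMatrix * vec4(position, 1.0);
    posInView /= posInView.w;

    vec3 VinView = normalize(-posInView.xyz);          // from the vertex towards the camera (view-space origin)
    vec3 NinView = normalize(normalMatrix * normal);   // surface normal in view space

    angle = dot(NinView, VinView);                     // cosine of the angle between them

    gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
}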
I want to be able to append multiple light sources to each node of my scene graph, but I have no clue how to do it!
From the tutorials on learningwebgl.com I learned to either use directional or position lighting, but I couldn't find a good explanation of how to implement multiple light sources.
So the objective is to have the option to attach an arbitrary number of light sources to each node, each of which can be either directional or positional, and, if possible and advisable, this should be realized with only one shader program (unless that's the only way to do it anyway), because I automatically create the program for each node depending on its specific needs (unless there is already a program on the stack with equal settings).
Based on the tutorials on learningwebgl.com, my fragment shader source for a node object that uses lighting, without a preset binding to one of the lighting types, could look like this...
precision highp float;

uniform bool uUsePositionLighting;
uniform bool uUseDirectionalLighting;
uniform vec3 uLightPosition;
uniform vec3 uLightDirection;
uniform vec3 uAmbientColor;
uniform vec3 uDirectionalColor;
uniform float uAlpha;

varying vec4 vPosition;
varying vec3 vTransformedNormal;
varying vec3 vColor;

void main (void) {
    float directionalLightWeighting = 0.0;
    if (uUseDirectionalLighting) {
        directionalLightWeighting = max(dot(vTransformedNormal, uLightDirection), 0.0);
    } else if (uUsePositionLighting) {
        vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);
        directionalLightWeighting = max(dot(normalize(vTransformedNormal), lightDirection), 0.0);
    }
    vec3 lightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
    gl_FragColor = vec4(vColor * lightWeighting, uAlpha);
}
...so, that's basically my poor state of knowledge concerning this subject.
I also wonder how adding more light sources would affect the lighting colors:
I mean, do uAmbientColor and uDirectionalColor have to sum to 1.0? In that case (and particularly when using more than one light source) it surely would be good to precalculate these values before passing them to the shader, wouldn't it?
Put your lights into an array and loop over them for each fragment. Start with a fixed-size array of light sources; unbounded arrays are not supported until OpenGL 4.3 and are more complicated to work with.
Something along the lines of:
uniform vec3 uLightPosition[16];
uniform vec3 uLightColor[16];
uniform vec3 uLightDirection[16];
uniform bool uLightIsDirectional[16];

....

void main(void) {
    vec3 reflectedLightColor = vec3(0.0);

    // Calculate incoming light for all light sources
    for (int i = 0; i < 16; i++) {
        vec3 lightDirection = normalize(uLightPosition[i] - vPosition.xyz);
        if (uLightIsDirectional[i]) {
            reflectedLightColor += max(dot(vTransformedNormal, uLightDirection[i]), 0.0) * uLightColor[i];
        } else {
            reflectedLightColor += max(dot(normalize(vTransformedNormal), lightDirection), 0.0) * uLightColor[i];
        }
    }

    gl_FragColor = vec4(uAmbientColor + reflectedLightColor * vColor, uAlpha);
}
Then you can enable/disable the light sources by setting uLightColor to (0,0,0) for the entries you don't use.
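On the JavaScript side, filling those arrays could look roughly like this (a sketch; the arrays are packed flat, the bools are uploaded as ints, and an array uniform's location can be looked up by its base name):
var lightPositions = new Float32Array(16 * 3);
var lightColors = new Float32Array(16 * 3);
var lightDirections = new Float32Array(16 * 3);
var lightIsDirectional = new Int32Array(16);

// ... fill in the entries for the lights you use; unused lights keep uLightColor = (0,0,0)

gl.useProgram(program);
gl.uniform3fv(gl.getUniformLocation(program, "uLightPosition"), lightPositions);
gl.uniform3fv(gl.getUniformLocation(program, "uLightColor"), lightColors);
gl.uniform3fv(gl.getUniformLocation(program, "uLightDirection"), lightDirections);
gl.uniform1iv(gl.getUniformLocation(program, "uLightIsDirectional"), lightIsDirectional);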
Ambient and directional don't have to sum to 1; in fact a light source can be much stronger than 1.0, but then you will need tonemapping to get back to a range of values that can be displayed on a screen. I would suggest playing around to get a feel for what is happening (e.g. what happens when a light source has negative colors, or colors above 1.0?).
uAmbientColor is just a (poor) way to simulate light that has bounced around the scene several times. Otherwise, things in shadow become completely black, which looks unrealistic.
Reflectance should typically be between 0 and 1 (in this example, the parts returned by the max computations), otherwise the light gets stronger each time it reflects off the material.
@ErikMan's answer is great, but it may involve a lot of extra work on the GPU's part, since you're checking every light for every fragment, which isn't strictly necessary.
Rather than an array, I'd suggest building a clip-space quadtree. (You can do this in a compute shader if it's supported by your target platform / GL version.)
A node might have a structure such as (pseudocode as my JS is rusty):
typedef struct
{
    uint LightMask; /// bitmask - each light has a bit indicating whether it is active for this node. uint will allow for 32 lights.
    bool IsLeaf;
} Node;

const uint maxLevels = 4;
const uint maxLeafCount = pow(4, maxLevels);
const uint maxNodeCount = (4 * maxLeafCount - 1) / 3;

/// linear quadtree - node offset = 4 * parentIndex + childIndex;
Node tree[maxNodeCount];
When building the tree, just check each light's clip-space bounding box against the implicit node bounds. (Root goes from (-1,-1) to (+1,+1). Each child is half that size on each dimension. So, you don't really need to store node bounds.)
If the light touches the node, set a bit in Node.LightMask corresponding to the light. If the light completely contains the node, stop recursing. If it intersects the node, subdivide and continue.
In your fragment shader, find which leaf node contains your fragment, and apply all lights whose bit is set in the leaf node's mask.
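A rough JS sketch of that build step, assuming each light already has a clip-space AABB (lightBox with minX/minY/maxX/maxY here is made up for illustration), maxLevels is the constant from the pseudocode above, and children are indexed as 4 * parentIndex + childIndex:
function insertLight(tree, lightBox, lightBit, nodeIndex, nodeMin, nodeSize, level) {
    var nodeMaxX = nodeMin.x + nodeSize;
    var nodeMaxY = nodeMin.y + nodeSize;

    var overlaps = lightBox.minX < nodeMaxX && lightBox.maxX > nodeMin.x &&
                   lightBox.minY < nodeMaxY && lightBox.maxY > nodeMin.y;
    if (!overlaps) return;

    tree[nodeIndex].LightMask |= lightBit;   // mark this light as active for the node

    var contains = lightBox.minX <= nodeMin.x && lightBox.maxX >= nodeMaxX &&
                   lightBox.minY <= nodeMin.y && lightBox.maxY >= nodeMaxY;
    if (contains || level === maxLevels) return;   // fully covered, or reached a leaf: stop recursing

    var half = nodeSize / 2;
    for (var child = 1; child <= 4; child++) {
        var childMin = {
            x: nodeMin.x + ((child - 1) & 1) * half,
            y: nodeMin.y + ((child - 1) >> 1) * half
        };
        insertLight(tree, lightBox, lightBit, 4 * nodeIndex + child, childMin, half, level + 1);
    }
}

// Root call: the root node covers clip space (-1,-1)..(+1,+1).
insertLight(tree, lightBox, 1 << lightIndex, 0, { x: -1, y: -1 }, 2, 0);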
You could also store your tree in a mipmap pyramid if you expect it to be dense.
Keep your tiles to a size that is a multiple of 32, preferably square.
vec2 minNodeSize = vec2(2.0 / 32.0);
Now, if you have a small number of lights, this may be overkill. You would probably have to have a lot of lights to see any real performance benefit. Also, a normal loop may help reduce data divergence in your shader, and makes it easier to eliminate branching.
This is one way to implement a simple tiled renderer, and opens the door to having hundreds of lights.
I'm using threejs to render around 2 000 000 points by using PointClouds. I would like to make each point move. To do that, I have the beginning position and the end position.
So, I'm looking for the best way to do it. And three ideas came up :
Do a custom shader where I modify the position of each vertex following this function: f(t) = beginning*(1-t) + end*t, with t varying from 0 to 1.
But I can't send such big arrays to the shader (2 million vec3s for the end positions). Is it possible to send each vertex its specific end vec3? So I would have uniform vec3 end instead of uniform vec3 end[2097152].
Precalculate the positions and send the result to the shader.
Precalculate the positions using web workers and then send the result to the shader.
Does anyone have any idea on how I could do my animation ?
I'm really new to all the shader stuff and I don't fully understand web workers yet, so I might be wrong with what I say. Don't hesitate to make me aware of it :P
Thanks !
edit: dunno why, but stackoverflow doesn't want me to say Hi at the beginning of the message. Hi !
Edit V2: executing the moving code on the main thread:
var moving = setInterval(function()
{
    var time = t;
    var iTime = 1 - time;
    for (var i = 0; i < 2097152; i++)
    {
        scene.children[1].geometry.attributes.position.array[i * 3]     = before[i].x * iTime + after[i].x * time;
        scene.children[1].geometry.attributes.position.array[i * 3 + 1] = before[i].y * iTime + after[i].y * time;
        scene.children[1].geometry.attributes.position.array[i * 3 + 2] = before[i].z * iTime + after[i].z * time;
    }
    scene.children[1].geometry.attributes.position.needsUpdate = true;
    t += 1 / 60;
    if (t > 1)
    {
        clearInterval(moving);
    }
}, 5);
Edit V3:
As WestLangley explained to me, I need to use attributes, so my code looks like this:
Vertex shader:
uniform float size;
uniform float t;

attribute vec3 endPosition;

void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position * (1.0 - t) + endPosition * t, 1.0 );
    gl_PointSize = size / length( mvPosition.xyz );
    gl_Position = projectionMatrix * mvPosition;
}
And I add the attributes to the BufferGeometry:
geometry.addAttribute('position' , new THREE.BufferAttribute( positionArray, 3 ));
geometry.addAttribute('endPosition' , new THREE.BufferAttribute( endPositionArray, 3 ));
It's moving, but it's just getting smaller and smaller. It seems that endPosition*t is always 0 in the vertex shader.
Is my function wrong, or did I miss something in the declaration of endPosition?
I'm quite sure endPosition shouldn't be a plain vec3 but something buffer-related, but I haven't found it yet.
Thanks for your help :)
Edit V4:
OK, I found where my problem was: I needed to declare, on the ShaderMaterial, the attribute I wanted to add:
var attributes = {
    endPosition: { type: 'v3', value: null }
};
Since position already existed and was functioning, I didn't pay attention to that.
Thanks everyone ! Have a nice day.
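For reference, the working setup boils down to roughly this (a sketch assuming the older three.js API used in this thread, where ShaderMaterial still takes an attributes map and points are drawn with THREE.PointCloud; the uniform values are placeholders):
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positionArray, 3));
geometry.addAttribute('endPosition', new THREE.BufferAttribute(endPositionArray, 3));

var material = new THREE.ShaderMaterial({
    uniforms: {
        t:    { type: 'f', value: 0.0 },
        size: { type: 'f', value: 10.0 }
    },
    attributes: {
        endPosition: { type: 'v3', value: null }
    },
    vertexShader: vertexShaderSource,       // the shader from Edit V3 above
    fragmentShader: fragmentShaderSource
});

scene.add(new THREE.PointCloud(geometry, material));

// In the render loop, only the uniform changes; the GPU interpolates the positions.
material.uniforms.t.value = Math.min(material.uniforms.t.value + 1 / 60, 1.0);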
I've created ~2500 meshes and use an algorithm that defines the color of each mesh. The algorithm goes through all the meshes and adds a value depending on the distance to each "red-start" point. That value then decides what the color should be.
This is the result:
It is lagging and the corners aren't smooth. I want to recreate the same color result in some other way but can't figure out how. How can you do it with THREE.Shape and a fragment shader?
Final Goal Description:
Use a single mesh (THREE.Shape), for an increase in FPS, that defines the area to be colored.
Be able to insert X points that act as positions where the RED color is at its maximum, and the further away from a point you get, the color should go from RED -> GREEN.
You should be able to move the points.
Parts of the mesh that are in between two or more points should take a color depending on the distance to each point.
EDIT:
Here is my jsfiddle of how far I've gotten.
http://jsfiddle.net/zDh4y/9/
EDIT SOLUTION:
http://jsfiddle.net/zDh4y/13/
I have solved it ^^
Much smoother, faster and easier!
The main issue with my algorithm was that the distance was in millimeters when it should have been in meters:
dist = dist / (T * T * T);
Check it out here:
http://jsfiddle.net/zDh4y/13/
Edit: now it's pretty and on WebGL: http://glsl.heroku.com/e#16831.0 (thanks to gman)
It blends the point color into the base color of the quad based on the distance between the point and the current fragment.
uniform vec2 pointA;
uniform vec2 pointB;
uniform vec2 pointC;
uniform vec4 pointColor;
uniform vec4 baseColor;

varying vec2 texCoord;

float blendF(float val) {
    return pow(val, 1.2) * 5.0;
}

vec4 addPoint(vec4 base, vec2 pointTexCord, vec4 pointColor) {
    return mix(pointColor, base, blendF(distance(pointTexCord, texCoord)));
}

void main(void)
{
    vec4 accumulator = addPoint(baseColor, pointA, pointColor);
    accumulator = addPoint(accumulator, pointB, pointColor);
    accumulator = addPoint(accumulator, pointC, pointColor);
    gl_FragColor = accumulator;
}
It would work with any kind of geometry and be as fast as it gets.
Here's a RenderMonkey file with this shader. You can tinker with it without worrying about the OpenGL/WebGL plumbing - only the shaders.
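If you drive it from three.js instead, the uniforms could be declared along these lines (a sketch using the old-style typed uniform declarations; the coordinates and colors are placeholders, and the vertex shader just has to forward the mesh's uv as the varying texCoord):
var uniforms = {
    pointA:     { type: 'v2', value: new THREE.Vector2(0.2, 0.3) },
    pointB:     { type: 'v2', value: new THREE.Vector2(0.7, 0.4) },
    pointC:     { type: 'v2', value: new THREE.Vector2(0.5, 0.8) },
    pointColor: { type: 'v4', value: new THREE.Vector4(1, 0, 0, 1) },
    baseColor:  { type: 'v4', value: new THREE.Vector4(0, 1, 0, 1) }
};

var material = new THREE.ShaderMaterial({
    uniforms: uniforms,
    vertexShader: vertexShaderSource,       // passes uv along as the varying texCoord
    fragmentShader: fragmentShaderSource    // the fragment shader above
});

// Moving a point is then just a uniform update per frame:
uniforms.pointA.value.set(0.3, 0.35);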
To preface this, I'm trying to replicate the water rendering algorithm described in this article: http://http.developer.nvidia.com/GPUGems2/gpugems2_chapter19.html. Part of the algorithm requires rendering an alpha mask into a framebuffer in order to use it later for texture sampling from the originally rendered scene. In short, the algorithm looks like this:
Render the scene geometry into a texture S, skipping refractive meshes and replacing them with an alpha mask
Render refractive meshes by sampling texture S with a perturbation if the sample is inside the alpha mask, otherwise just sample texture S directly
Unfortunately, I'm still learning WebGL and don't really know enough to know how to approach this. Additionally, that article uses HLSL, and the conversion is nontrivial for me. Obviously, attempting to do this in the fragment shader won't work:
void main( void ) {
    gl_FragColor = vec4( 0.0 );
}
because it will just blend with the previously rendered geometry and the alpha value will still be 1.0.
Here is a brief synopsis of what I have:
function animate(){
    ... snip ...
    renderer.render( scene, camera, rtTexture, true );
    renderer.render( screenScene, screenCamera );
}
// water fragment shader
void main( void ){
    // black out the alpha channel
    gl_FragColor = vec4(0.0);
}
// screen fragment shader
varying vec2 vUv;
uniform sampler2D screenTex;

void main( void ) {
    gl_FragColor = texture2D( screenTex, vUv );
    // just trying to see what the alpha mask would look like
    if( gl_FragColor.a < 0.1 ){
        gl_FragColor.b = 1.0;
    }
}
The entire code can be found at http://scottrabin.github.com/Terrain/
Obviously, attempting to do this in the fragment shader won't work:
because it will just blend with the previously rendered geometry and the alpha value will still be 1.0.
That's up to you. Just use the proper blend modes:
glBlendFuncSeparate(..., ..., GL_ONE, GL_ZERO);
glBlendFuncSeparate sets up separate blending for the RGB and Alpha portions of the color. In this case, it writes the source alpha directly to the destination.
Note that if you're drawing something opaque, you don't need blend modes. The output alpha will be written as is, just like the color.
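glBlendFuncSeparate is the desktop GL name; in WebGL the equivalent call looks like this (a sketch that keeps ordinary alpha blending for the RGB factors, which is an assumption, while writing the source alpha straight into the destination). With three.js you would reach the same state through a material's THREE.CustomBlending settings rather than touching gl directly.
gl.enable(gl.BLEND);
// RGB: standard over-blending (assumed); alpha: write the shader's alpha as-is.
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ZERO);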