I'm a beginner at shaders and WebGL and I've taken some shortcuts to develop what I currently have, so please bear with me.
Is there a way to update attribute buffer data within the GPU only? Basically, what I want to do is send three buffers t0, t1, t2 into the GPU, representing the points' positions at times 0, 1, and 2 respectively. Then I want to compute their new position tn from the properties of t0, t1, and t2: the velocity of the points, the turning angle, and so on.
My current implementation updates the positions in JavaScript and then copies the buffers into WebGL on every draw. This seems terribly inefficient to me, and I don't see why I couldn't do everything in the shader and skip moving data from CPU to GPU all the time. Is this possible somehow?
This is the current vertex shader, which sets the color of a point depending on the direction and angle it's turning (tn is currently updated in JS by debugging functions):
export const VsSource = `
#define M_PI 3.1415926535897932384626433832795
attribute vec4 t0_pos;
attribute vec4 t1_pos;
attribute vec4 t2_pos;
varying vec4 color;
attribute vec4 r_texture;
void main() {
    float dist = distance(t1_pos, t2_pos);
    vec4 v = normalize(t1_pos - t0_pos);
    vec4 u = normalize(t2_pos - t1_pos);
    float angle = acos(dot(u, v));
    float intensinty = angle / M_PI * 25.0;
    float turnDirr = (t0_pos.y - t1_pos.y) * (t2_pos.x - t1_pos.x) + (t1_pos.x - t0_pos.x) * (t2_pos.y - t1_pos.y);
    if (turnDirr > 0.000000001) {
        color = vec4(1.0, 0.0, 0.0, intensinty);
    } else if (turnDirr < -0.000000001) {
        color = vec4(0.0, 0.0, 1.0, intensinty);
    } else {
        color = vec4(1.0, 1.0, 1.0, 0.03);
    }
    gl_Position = t2_pos;
    gl_PointSize = 50.0;
}
`;
What I want to do is update the position gl_Position (tn) based on these properties, and then somehow shuffle/copy the buffers tn->t2, t2->t1, t1->t0 to prepare for the next cycle, all within the vertex shader (not only for efficiency, but also for some other reasons that are unrelated to the question but related to the project I'm working on).
Note: your question should probably be closed as a duplicate, since how to write output from a vertex shader is already covered, but just to add some notes relevant to your question...
In WebGL1 it is not possible to update buffers on the GPU. You can instead store your data in a texture and update a texture. Still, you cannot update a texture from itself
pos = pos + vel // won't work
But you can update another texture
newPos = pos + vel // will work
Then the next time, pass the texture called newPos as pos, and vice versa.
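Here is a rough sketch of that ping-pong in JavaScript (names like updateProgram are assumptions, and createPositionTexture and drawFullscreenQuad are hypothetical helpers, not real APIs): positions live in two textures, each frame you render into the one you are not reading from, then swap. In WebGL1, rendering into a float texture also needs the OES_texture_float extension and float render-target support.
let posTexA = createPositionTexture(gl);   // hypothetical helper: float texture holding positions
let posTexB = createPositionTexture(gl);
const fb = gl.createFramebuffer();

function updatePositions() {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, posTexB, 0);   // write newPos here
  gl.useProgram(updateProgram);                         // fragment shader computes pos + vel
  gl.bindTexture(gl.TEXTURE_2D, posTexA);               // read pos from here
  drawFullscreenQuad(gl);                               // hypothetical helper
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  [posTexA, posTexB] = [posTexB, posTexA];              // swap for the next frame
}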
In WebGL2 you can use transform feedback to write the output of a vertex shader (the varyings) to a buffer. It has the same issue: you cannot write back to the buffer you are reading from.
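A minimal sketch of that WebGL2 path (names like updateProgram, bufferA, bufferB, positionLoc, tf, and numPoints are assumptions, and the program also needs a trivial fragment shader attached in order to link):
const updateVS = `#version 300 es
in vec4 position;
in vec4 velocity;
out vec4 newPosition;              // captured by transform feedback
void main() {
  newPosition = position + velocity;
}`;

// Declare which varyings to capture *before* linking the program.
gl.transformFeedbackVaryings(updateProgram, ['newPosition'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(updateProgram);

// Each update: read attributes from bufferA, capture the output into bufferB.
gl.useProgram(updateProgram);
gl.bindBuffer(gl.ARRAY_BUFFER, bufferA);
gl.vertexAttribPointer(positionLoc, 4, gl.FLOAT, false, 0, 0);   // velocity attribute setup omitted
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);             // tf = gl.createTransformFeedback()
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, bufferB);
gl.enable(gl.RASTERIZER_DISCARD);                                // only the buffer is wanted, no pixels
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, numPoints);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, null);
[bufferA, bufferB] = [bufferB, bufferA];                         // ping-pong for the next frame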
There is an example of writing to a texture and also an example of writing to a buffer using transformfeedback in this answer
Also an example of putting vertex data in a texture here
There is an example of a particle system using textures to update the positions in this Q&A
I'm using three.js to render around 2,000,000 points using PointClouds. I would like to make each point move. To do that, I have a beginning position and an end position for each point.
So, I'm looking for the best way to do it, and three ideas came up:
1. Do a custom shader where I modify the position of each vertex with the function f(t) = beginning*(1-t) + end*t, with t varying from 0 to 1. But I can't send such big arrays to the shader (2 million vec3s for the end positions). Is it possible to send each vertex its specific end vec3, so I would have uniform vec3 end instead of uniform vec3 end[2097152]?
2. Precalculate the positions and send the result to the shader.
3. Precalculate the positions using web workers and then send the result to the shader.
Does anyone have any idea how I could do my animation?
I'm really new to all this shader stuff and I don't fully understand web workers yet, so I might be wrong in what I say. Don't hesitate to make me aware of it :P
Thanks !
edit: dunno why, but stackoverflow doesn't want me to say Hi at the beginning of the message. Hi !
editV2: Executing the moving code on the main thread:
var moving = setInterval(function()
{
    var time = t;        // t is the interpolation parameter, assumed initialized to 0 elsewhere
    var iTime = 1 - time;
    for (var i = 0; i < 2097152; i++)
    {
        scene.children[1].geometry.attributes.position.array[i*3]   = before[i].x*iTime + after[i].x*time;
        scene.children[1].geometry.attributes.position.array[i*3+1] = before[i].y*iTime + after[i].y*time;
        scene.children[1].geometry.attributes.position.array[i*3+2] = before[i].z*iTime + after[i].z*time;
    }
    scene.children[1].geometry.attributes.position.needsUpdate = true;
    t += 1/60;
    if (t > 1)
    {
        clearInterval(moving);
    }
}, 5);
editV3 :
As WestLangley explained to me, I need to use attributes, so my code looks like this:
Vertex shader:
uniform float size;
uniform float t;
attribute vec3 endPosition;
void main() {
    vec4 mvPosition = modelViewMatrix * vec4( position*(1.0-t) + endPosition*t, 1.0 );
    gl_PointSize = size / length( mvPosition.xyz );
    gl_Position = projectionMatrix * mvPosition;
}
And I add the attributes with the BufferGeometry:
geometry.addAttribute('position' , new THREE.BufferAttribute( positionArray, 3 ));
geometry.addAttribute('endPosition' , new THREE.BufferAttribute( endPositionArray, 3 ));
It's moving, but it's just getting smaller and smaller. It seems that endPosition*t is equal to 0 in the vertex shader.
Is my function wrong, or did I miss something in the declaration of endPosition?
I'm quite sure endPosition must not be a vec3 but something buffer-related, but I haven't found it yet.
Thanks for your help :)
EditV4:
OK, I found where my problem was: I needed to specify, for the ShaderMaterial, the attribute I wanted to add:
var attributes = {
    endPosition: { type: 'v3', value: null }
};
As position already existed and was functioning, I didn't pay attention to that.
Thanks everyone ! Have a nice day.
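For completeness, a rough sketch of how the pieces fit together (not my exact code; element IDs and uniform values are placeholders, and this targets the older three.js API where ShaderMaterial still takes an attributes map — newer versions pick up custom BufferGeometry attributes automatically):
var material = new THREE.ShaderMaterial({
    uniforms: {
        size: { type: 'f', value: 10.0 },
        t:    { type: 'f', value: 0.0 }
    },
    attributes: {
        endPosition: { type: 'v3', value: null }
    },
    vertexShader: document.getElementById('vertexShader').textContent,
    fragmentShader: document.getElementById('fragmentShader').textContent
});
var points = new THREE.PointCloud(geometry, material);
scene.add(points);

// Animate by updating only the uniform; the interpolation runs on the GPU.
function animate(timeMs) {
    material.uniforms.t.value = Math.min(timeMs / 1000, 1.0);
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
requestAnimationFrame(animate);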
I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.
From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc). Once that's done, the user then has to provide at least one vertex shader program, and one fragment shader program before an OpenGL program compiles.
However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.
How does that work?
My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...
Can anyone explain how Shadertoy works?
ShaderToy is a tool for writing pixel shaders.
What are pixel shaders?
If you render a full-screen quad, meaning that each of its four vertices is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that each fragment now corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a full-screen quad.
So the attributes are always the same, and so is the vertex shader:
positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0] ]
And that quad is rendered as TRIANGLE_STRIP.
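For reference, a minimal sketch of setting up and drawing that quad in WebGL (buffer handling simplified; the attribute location aPosLoc is assumed to have been looked up beforehand):
const positions = new Float32Array([-1, 1,   1, 1,   -1, -1,   1, -1]);
gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.enableVertexAttribArray(aPosLoc);
gl.vertexAttribPointer(aPosLoc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);   // two triangles covering the screen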
Also, instead of setting UVs explicitly, some people prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
Vertex shader:
attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;
void main() {
    gl_Position = vec4(aPos, 0.0, 1.0);
    vUV = aUV;
}
And fragment shader would then look something like this:
uniform vec2 uScreenResolution;
varying vec2 vUV;
void main() {
    // vUV is equal to gl_FragCoord / uScreenResolution
    // do some pixel-shader-related work
    gl_FragColor = vec4(someColor, 1.0); // gl_FragColor is a vec4
}
ShaderToy supplies you with a few uniforms by default — iResolution (aka uScreenResolution), iGlobalTime, iMouse, ... — which you can use in your pixel shader.
For coding geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing. That is quite a complex area of programming, but in short:
You describe your geometry through some mathematical formulas, and later, in the pixel shader, when you wish to check whether some pixel is part of your geometry, you use that formula to retrieve the answer. Googling a bit should give you plenty of resources on what ray tracers are and how they are built, and this might help:
How to do ray tracing in modern OpenGL?
Hope this helps.
ShaderToy displays simple GLSL that is programmed to handle all the lighting, geometry, etc. It's not vertex geometry; most of the 3D stuff is raycast, or you can do 2D shaders, etc.
Any color and spatial maths can be programmed in GLSL. Combinations of advanced algorithms make isosurfaces and shapes, project textures onto those isosurfaces, and do raycasting: sending imaginary lines from the viewer into the distance and intercepting anything in the way. There are many raycasting techniques for 3D.
Visit www.iquilezles.org for an idea of the different tools that are used in Shadertoy/GLSL graphics.
Shadertoy is called a "TOY" for a reason. It's basically a puzzle: given only a function that is told, as input, the current pixel position, write a function that generates an image.
The website sets up WebGL to draw a single quad (rectangle) and then lets you provide a function that is passed the current pixel being rendered as fragCoord. You then use that to compute some colors.
For example if you did this
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec4 red = vec4(1, 0, 0, 1);
    vec4 green = vec4(0, 1, 0, 1);
    fragColor = mix(red, green, sin(fragCoord.x / 10.0) * 0.5 + 0.5);
}
you'd get red and green stripes like this
https://www.shadertoy.com/view/3l2cRz
Shadertoy provides a few other inputs. The most common is the resolution being rendered to, as iResolution. If you divide the fragCoord by iResolution then you get values that go from 0 to 1 across and 0 to 1 up the canvas, so you can easily make your function resolution-independent.
Doing that we can draw an ellipse in the center like this
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // uv goes from 0 to 1 across and up the canvas
    vec2 uv = fragCoord / iResolution.xy;
    vec4 red = vec4(1, 0, 0, 1);
    vec4 green = vec4(0, 1, 0, 1);
    float distanceFromCenter = distance(uv, vec2(0.5));
    fragColor = mix(red, green, step(0.25, distanceFromCenter));
}
which produces a red ellipse centered on a green background.
The second most common input is iTime, in seconds, which you can use to animate parameters in your function over time.
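For example (a small variation on the ellipse example above, not from the original answer), pulsing the radius with iTime:
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    float radius = 0.25 + 0.1 * sin(iTime);   // pulse the radius over time
    float d = distance(uv, vec2(0.5));
    fragColor = mix(vec4(1, 0, 0, 1), vec4(0, 1, 0, 1), step(radius, d));
}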
So, given those inputs, if you apply enough math you can make an incredible image; for example, this shadertoy shader generates this image
Which is amazing: someone figured out the math needed to generate that image given only the inputs mentioned above.
Many of the most amazing shadertoy shaders use a technique called ray marching and some math called "signed distance fields" which you can read about here
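To give a flavor of the idea (a bare-bones sketch, not taken from any particular shader): march a ray from the camera and repeatedly step forward by the signed distance to the nearest surface, here a single sphere.
// signed distance from point p to a sphere of radius r at the origin
float sdSphere(vec3 p, float r) { return length(p) - r; }

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
    vec3 ro = vec3(0, 0, -3);                  // ray origin (camera)
    vec3 rd = normalize(vec3(uv, 1.0));        // ray direction
    float t = 0.0;
    for (int i = 0; i < 64; i++) {
        float d = sdSphere(ro + rd * t, 1.0);  // distance to the surface
        if (d < 0.001) break;                  // close enough: we hit it
        t += d;                                // safe to step this far
    }
    fragColor = vec4(vec3(1.0 - t * 0.2), 1.0); // crude shading by distance
}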
But I think it's important to point out that while there are many cool things to learn from shadertoy shaders, many of them apply only to this "puzzle" of "how do I make a beautiful image with only one function whose input is a pixel position and whose output is a single color?". They don't answer "how should I write shaders for a performant app?".
Compare the dolphin above to this speedboat game
https://www.youtube.com/watch?v=7v9gZK9HqqI
The dolphin shadertoy shader runs at about 2 frames a second when fullscreen on my NVidia GeForce GT 750, whereas the speedboat game runs at 60fps. The reason the game runs fast is that it uses more traditional techniques, drawing shapes with projected triangles. Even an NVidia 1060 GTX can only run that dolphin shader at about 10 frames a second when fullscreen.
It's just basically pushing GLSL pixel shader source code directly onto the graphics card. The real magic happens in the incredibly clever algorithms that people use to create amazing effects, like ray marching, ray casting, and ray tracing. It's best to have a look at some other live GLSL sandboxes like http://glsl.heroku.com/ and http://webglplayground.net/.
It basically creates a window, typically two triangles which represent the screen, and then the shader works on each pixel just like a ray tracer.
I've been looking at these for a while now, and the algorithms people use are mind-blowing. You'll need some serious math chops and should look up "demo coding" source code to be able to wrap your head around them. Many on Shadertoy just blow your mind!
So to summarise, you just need to learn GLSL shader coding and algorithms. No easy solution.
Traditionally in computer graphics, geometry is created using vertices and rendered using some form of materials (e.g. textures with lighting). In GLSL, the vertex shader processes the vertices and the fragment (pixel) shader processes the materials.
But that is not the only way to define shapes. Just as a texture could be procedurally defined (instead of looking up its texels), a shape could be procedurally defined (instead of looking up its geometry).
So, similar to ray tracing, these fragment shaders are able to create shapes without having their geometry defined by vertices.
There are still more ways to define shapes, e.g. volume data (voxels), surface curves, and so on. A computer graphics text should cover some of them.
I would like to cut an object (a box) in WebGL (fragment shaders / vertex shaders) without using Boolean operations (union, difference, etc.).
I want to use shaders to hide some part of the object (so it is therefore not really a "real cut", since it simply hides the object).
First, make sure that the vertex shader passes the position in world space (or rather, whichever coordinate space you wish the clipping to be fixed relative to) through to the fragment shader. Example (written from memory, not tested):
varying vec3 positionForClip;
...
void main(void) {
    ...
    vec4 worldPos = modelMatrix * vertexPosition;
    positionForClip = worldPos.xyz / worldPos.w; // don't need homogeneous coordinates, so do the divide early
    gl_Position = viewMatrix * worldPos;         // viewMatrix is assumed to include the projection here
}
And in your fragment shader, you can then discard based on an arbitrary plane, or any other kind of test you want:
varying vec3 positionForClip;
uniform vec3 planeNormal;
uniform float planeDistance;
...
void main(void) {
    if (dot(positionForClip, planeNormal) > planeDistance) {
        // or if (positionForClip.x > 10.0), or whatever
        discard;
    }
    ...
    gl_FragColor = ...;
}
Note that using discard may cause a performance reduction as the GPU cannot optimize based on knowing that all fragments will be written.
Disclaimer: I haven't researched this myself, and only just wrote down a possible way to do it based on the 'obvious solution'. There may be better ways I haven't heard of.
Regarding your question about multiple objects: There are many different ways to handle this — it's all custom code in the end. But you certainly can use a different shader for different objects in your scene, as long as they're in different vertex arrays.
gl.useProgram(programWhichCuts);
gl.drawArrays(...);   // draw the objects that should be clipped

gl.useProgram(programWhichDoesNotCut);
gl.drawArrays(...);   // draw the objects that should not be clipped
If you're new to using multiple programs, it's pretty much just like using one program except that you do all the setup (compile, attach, link) once. The main thing to watch out for is each program has its own uniforms, so you have to initialize your uniforms for each program separately.
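For instance (an illustrative sketch; the uniform names just match the clipping shader above): uniform locations and values belong to a program, so look them up and set them per program after switching.
gl.useProgram(programWhichCuts);
gl.uniform3fv(gl.getUniformLocation(programWhichCuts, 'planeNormal'), [0, 1, 0]);
gl.uniform1f(gl.getUniformLocation(programWhichCuts, 'planeDistance'), 10.0);
// ...draw the objects that should be clipped...

gl.useProgram(programWhichDoesNotCut);
// this program never sees planeNormal/planeDistance; set only its own uniforms
// ...draw the remaining objects...
In practice you would cache the getUniformLocation results at setup time rather than looking them up every frame.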
I'm looking to simply display an image on the canvas at x and y coordinates using WebGL, but have no clue how to do it. Do I need to include shaders and all that stuff? I've seen code to display images, but it's very bulky. I do not wish to use a framework. If possible, could you comment and explain what the important sections do? I will be using WebGL for a 2D tile-based game.
Thank you for your time.
Yes, you need a vertex and a fragment shader, but they can be relatively simple. I'd recommend starting from the Mozilla example, as suggested by Ido, and after you've got it running, remove the 3D aspect. In particular, you don't need the uMVPMatrix and uPmatrix uniforms, and your coordinate array can be 2D. For the vertex shader, that means:
attribute vec2 aVertexPosition;   // 2D positions are enough here
attribute vec2 aTextureCoord;

varying highp vec2 vTextureCoord;

void main(void) {
    gl_Position = vec4(aVertexPosition, 0.0, 1.0);
    vTextureCoord = aTextureCoord;
}
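The matching fragment shader then just samples the texture (a minimal sketch; uSampler is the texture sampler uniform):
varying highp vec2 vTextureCoord;
uniform sampler2D uSampler;

void main(void) {
    gl_FragColor = texture2D(uSampler, vTextureCoord);
}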