I'm trying to translate some TypeScript code into a vertex shader to use with WebGL. My goal is to draw the bitangent lines of two circles. I have a function to calculate the tangent points here, https://jsfiddle.net/Zanchi/4xnp1n8x/2/ on line 27. Essentially, it returns a tuple of points with x and y values.
// First circle bottom tangent point
const t1 = {
  x: x1 + r1 * cos(PI/2 - alpha),
  y: y1 + r1 * sin(PI/2 - alpha)
}; // ... and so on
I know I can do the calculation in JS and pass the values to the shader via an attribute, but I'd like to leverage the GPU to do the point calculations instead.
Is it possible to set multiple vertices in a single vertex shader call, or use multiple values calculated in the first call of the shader in subsequent calls?
Is it possible to set multiple vertices in a single vertex shader call
No
or use multiple values calculated in the first call of the shader in subsequent calls?
No
A vertex shader outputs 1 vertex per iteration/call. You set the number of iterations when you call gl.drawArrays (gl.drawElements is more complicated)
I'm not sure you gain much by not just putting the values in an attribute. It might be fun to generate them in the vertex shader but it's probably not performant.
In WebGL1 there is no easy way to use a vertex shader to generate data. First off, you'd need some kind of count that changes for each iteration, and nothing changes between iterations if you don't supply at least one attribute. You could supply one attribute with just a count [0, 1, 2, 3, ...] and use that count to generate vertices. This is what vertexshaderart.com does, but it's all for fun, not for performance.
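For example, a minimal sketch of supplying such a count attribute from JavaScript (gl, program, and the attribute name 'count' are assumptions for illustration, not code from the question):

// Fill a buffer with 0, 1, 2, ... numVerts-1 and bind it as a 1-float attribute.
const numVerts = 8;                              // e.g. 4 endpoints per pair of bitangent lines
const counts = new Float32Array(numVerts);
for (let i = 0; i < numVerts; ++i) counts[i] = i;

const countBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, countBuffer);
gl.bufferData(gl.ARRAY_BUFFER, counts, gl.STATIC_DRAW);

const countLoc = gl.getAttribLocation(program, 'count');
gl.enableVertexAttribArray(countLoc);
gl.vertexAttribPointer(countLoc, 1, gl.FLOAT, false, 0, 0);

gl.drawArrays(gl.LINES, 0, numVerts);            // the vertex shader runs numVerts times

The vertex shader could then branch on count to decide which tangent point to compute from uniforms holding the circle centers and radii.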
In WebGL2 there is the gl_VertexID built in variable which means you get a count for free, no need to supply an attribute. In WebGL2 you can also use transform feedback to write the output of a vertex shader to a buffer. In that way you can generate some vertices once into a buffer and then use the generated vertices from that buffer (and therefore probably get better performance than generating them every time).
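As a rough sketch of the gl_VertexID route (the uniforms and the placeholder angle are illustrative only, not the actual tangent math from the fiddle):

#version 300 es
// No attributes needed: gl_VertexID supplies the per-iteration count for free.
uniform vec2 u_center1, u_center2;   // circle centers (assumed uniforms)
uniform float u_r1, u_r2;            // circle radii (assumed uniforms)

void main() {
  int i = gl_VertexID;               // 0..3 for the two endpoints of each bitangent line
  float a = 1.5707963;               // placeholder angle; real code would derive PI/2 - alpha from the circles
  vec2 p = (i < 2) ? u_center1 + u_r1 * vec2(cos(a), sin(a))
                   : u_center2 + u_r2 * vec2(cos(a), sin(a));
  gl_Position = vec4(p, 0.0, 1.0);
}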
Related
I'm looking for a way to skip instances while rendering in the shader.
I have one million instances, and to improve performance I want to skip instances based on their bounds, given the current viewbox.
Is there any place where I can write code for this condition?
JS is slower than the GPU, so I'm looking for a way to express this condition in GLSL.
you can't "Skip" an instance. You can move all of its vertices off the screen
attribute float visible;
...
gl_Position = mix(vec4(0, 0, -2, 1), gl_Position, visible);
or something like that. If visible is 1.0 then you get the same as you've always gotten. If visible is 0.0 then all of its vertices are outside of clip space.
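On the JavaScript side that per-instance flag would be an instanced attribute; a rough sketch using WebGL2-style instancing (in WebGL1 you'd need the ANGLE_instanced_arrays extension; names like visBuffer are made up):

// One float per instance: 1.0 = draw normally, 0.0 = push off screen.
const visibility = new Float32Array(numInstances).fill(1.0);
// ... set visibility[i] = 0.0 for instances you want hidden this frame ...

const visBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, visBuffer);
gl.bufferData(gl.ARRAY_BUFFER, visibility, gl.DYNAMIC_DRAW);

const visLoc = gl.getAttribLocation(program, 'visible');
gl.enableVertexAttribArray(visLoc);
gl.vertexAttribPointer(visLoc, 1, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(visLoc, 1);   // advance once per instance, not per vertex

gl.drawArraysInstanced(gl.TRIANGLES, 0, vertsPerInstance, numInstances);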
You could also pass in the bounds in some form (center + radius, AABB) and effectively do the same thing: compute whether that center + radius is in the view frustum and, if not, set gl_Position to something outside clip space. Of course, if those vertices were going to be outside of clip space already then this isn't helping.
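A hedged sketch of that second approach in the vertex shader, assuming each instance supplies a bounding sphere as an instanced attribute (the attribute and uniform names are illustrative):

attribute vec3 position;             // per-vertex position of the instance mesh
attribute vec4 boundsCenterRadius;   // per-instance: xyz = world-space center, w = radius
uniform mat4 model;                  // however you place the instance
uniform mat4 viewProjection;

void main() {
  gl_Position = viewProjection * model * vec4(position, 1.0);

  // Rough conservative test: project the bounding-sphere center and compare against
  // clip bounds padded by the radius (treats the radius as if it were in clip units).
  vec4 c = viewProjection * vec4(boundsCenterRadius.xyz, 1.0);
  if (any(greaterThan(abs(c.xyz), vec3(c.w + boundsCenterRadius.w)))) {
    gl_Position = vec4(0.0, 0.0, -2.0, 1.0);   // push every vertex of this instance out of clip space
  }
}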
Still, a million instances is probably too many for most GPUs
I am learning WebGL by doing a simple drawing: a horizontal line and a vertical line, alternating every 10 frames (i.e. 10 frames display a horizontal line, then the next 10 frames display a vertical line). I got that going by keeping a counter in JS code and giving the vertex shader the proper coords on every frame. Is there a way to let the WebGL program handle this counter instead of JS? Is it possible to pass 4 points (of the 2 lines) to the WebGL program once, and make it handle the counting with some kind of variable that persists through every main iteration?
I hope I can demonstrate better with the code below. The counter variable is what I am hoping for:
attribute vec3 coordinates;
int counter = 0;

void main(void) {
  counter = counter + 1;
  if (counter < 10) {
    gl_Position = vec4(coordinates[0], coordinates[1], coordinates[2], 1.0);
  } else {
    gl_Position = vec4(coordinates[3], coordinates[4], coordinates[5], 1.0);
  }
  if (counter >= 20) {
    counter = 0;
  }
}
If that is not possible, please tell me how to handle this problem. Is passing the right vertices from the JS code the way to go?
Thank you very much for your attention. Any help would be appreciated.
You need to pass the counter from JS as a uniform.
Your code will not work, since counter is just a variable inside the vertex shader and its value is not shared between invocations or kept between draws. Even if it were shared, keep in mind that the vertex shader is called once per vertex, in no guaranteed order.
Best practice is to keep shaders for rendering only and put that kind of logic in the application.
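A minimal sketch of that approach (gl, program, frameLoc, and the attribute/uniform names are illustrative, not from the question's code):

// Vertex shader: pick one of two lines based on a frame counter passed in as a uniform.
const vs = `
  attribute vec3 lineA;      // a point of the horizontal line
  attribute vec3 lineB;      // the corresponding point of the vertical line
  uniform float frame;       // counter maintained in JS
  void main(void) {
    vec3 p = mod(frame, 20.0) < 10.0 ? lineA : lineB;
    gl_Position = vec4(p, 1.0);
  }`;

// JS: bump the counter once per frame and hand it to the shader.
let frame = 0;
function render() {
  gl.uniform1f(frameLoc, frame++);   // frameLoc = gl.getUniformLocation(program, 'frame')
  gl.drawArrays(gl.LINES, 0, 2);
  requestAnimationFrame(render);
}
requestAnimationFrame(render);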
In WebGL1 there is no counter. You'd have to pass one in. You can do this by filling a buffer with increasing numbers and passing that in as an attribute to your vertex shader.
In WebGL2 there are built-in counters: gl_VertexID as well as gl_InstanceID.
Whether or not you should be using counters depends on your use case. The normal way to draw a lot of points is to pass the points in as data via attributes.
The normal way to draw consecutive points right next to each other is to pass in vertices that generate a triangle that covers the points you want rendered.
Using the counters, either your own in WebGL1 or the built in ones in WebGL2 is fairly uncommon.
GLSL shaders have no state between one iteration and the next so your counter example won't work.
If you're new to WebGL might I suggest these articles
I am using gl.TRIANGLES to draw a function where I pass vertices and an index buffer to identify the vertices of each triangle. I would also like the shaders to distinguish three types of triangles: -1, 0, and 1. I realize I can set up a triplet of numbers for each triangle and pass this buffer to an attribute variable of the vertex shader, then on to the fragment shader via a varying variable. However, that seems like a lot of unnecessary data, as I only need a single number per triangle.
Is there a way to set up a buffer for gl.TRIANGLES so that for each triangle a single number gets passed to the vertex shader? I can then use a varying variable to pass it to the fragment shader. Or will I have to set up duplicate triplets like (0,0,0) or (-1,-1,-1) for each triangle?
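For reference, the duplicated-triplet fallback described above is straightforward to set up; a sketch with illustrative names, storing one copy of the triangle's type per vertex:

// One float per vertex, repeated 3 times per triangle.
const triTypes = [-1, 0, 1];                        // example per-triangle values
const perVertexType = new Float32Array(triTypes.length * 3);
triTypes.forEach((t, tri) => perVertexType.set([t, t, t], tri * 3));

const typeBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, typeBuffer);
gl.bufferData(gl.ARRAY_BUFFER, perVertexType, gl.STATIC_DRAW);

const typeLoc = gl.getAttribLocation(program, 'triType');
gl.enableVertexAttribArray(typeLoc);
gl.vertexAttribPointer(typeLoc, 1, gl.FLOAT, false, 0, 0);

Note that with gl.drawElements, a vertex shared by two triangles of different types would have to be duplicated anyway, since the attribute value is tied to the vertex, not to the triangle.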
I'm trying to learn how to take advantage of GPU possibilities for three.js and WebGL stuff, so I'm just analysing code to pick up patterns and methods for how things are done, and I need some code explanation.
I found this example: One million particles, which seems to be the easiest one involving calculations made in shaders and spit back out.
So from what I have figured out:
- Data for the velocity and position of the particles is kept in textures passed to the shaders, which perform the calculations there and hand the results back for the update
- Particles are created randomly on the plane, no more than the texture size?
for (var i = 0; i < 1000000; i++) {
  particles.vertices.push(new THREE.Vector3((i % texSize) / texSize,
                                            Math.floor(i / texSize) / texSize, 0));
}
I don't see any particle position updates. How is the data retrieved from the shaders, and how does it update each particle?
Does pick() only pass the mouse position, to calculate the direction of the particles' movement?
Why are there 2 buffers, and 8 shaders (4 pairs of fragment and vertex shaders)? Isn't the one pair for calculating velocity and position enough?
How does the shader update the texture? I only see it reading from the texture, not writing to it.
Thanks in advance for any explanations!
How the heck have they done that:
In this post, I'll explain how these results get computed almost entirely on the GPU via WebGL/Three.js - it might look a bit sloppy here as I'm using the integrated graphics of an Intel i7 4770k:
Introduction:
The simple idea for keeping everything intra-GPU: each particle's state is represented by one texture pixel color value. One million particles result in 1024x1024 pixel textures, one to hold the current positions and another one to hold the velocities of those particles.
Nobody ever forbade abusing the RGB color values of a texture for completely different data than colors in the 0...255 universe. You basically have 32 bits (R + G + B + alpha) per texture pixel for whatever you want to store in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
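In three.js such a state texture can be created directly from a typed array; a rough sketch (not the demo's exact code), assuming float textures are available:

// One RGBA texel per particle: here r/g hold x/y, b/a are free for other state.
const texSize = 1024;                                    // 1024 * 1024 = ~1M particles
const state = new Float32Array(texSize * texSize * 4);
// ... fill state[i * 4 + 0] and state[i * 4 + 1] with each particle's initial x/y ...

const posTexture = new THREE.DataTexture(state, texSize, texSize,
                                         THREE.RGBAFormat, THREE.FloatType);
posTexture.needsUpdate = true;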
They basically used multiple shaders in a sequential order. From the source code, one can identify these steps of their processing pipeline:
1. Randomize the particles (ignored in this answer) ('randShader')
2. Determine each particle's velocity by its distance to the mouse location ('velShader')
3. Based on that velocity, move each particle accordingly ('posShader')
4. Display the result on screen ('dispShader')
Step 2: Determining Velocity per particle:
They issue a draw call on one million points, whose output is saved to a texture. The vertex shader passes along a varying named "vUv", whose two components give the x and y pixel position inside the textures used in the process.
The next step is its fragment shader, as only this shader can write output (as RGB values into the framebuffer, which is then used as a texture afterwards - all of this happens inside GPU memory only). You can see in the id="velFrag" fragment shader that it gets an input variable declared as uniform vec3 targetPos;. Uniforms like this are cheap to set from the CPU each frame, because they are shared among all invocations and don't involve large memory transfers. (It contains the mouse coordinate, probably in the -1.0 to +1.0 range; they probably also update the mouse coords only once every few frames, to lower CPU usage.)
What's going on here? That shader calculates the distance of each particle to the mouse coordinate and, depending on that, alters the particle's velocity; the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying/overshooting the mouse position, depending on the gray value.
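As a hedged sketch of what such a velocity pass can look like (illustrative GLSL, not the demo's literal velFrag source, and assuming a float render target so signed velocities can be stored directly):

precision mediump float;

uniform sampler2D posTex;      // previous positions
uniform sampler2D velTex;      // previous velocities
uniform vec3 targetPos;        // mouse coordinate
varying vec2 vUv;              // this particle's texel coordinate

void main() {
  vec2 pos = texture2D(posTex, vUv).rg;
  vec2 vel = texture2D(velTex, vUv).rg;
  // Accelerate toward the mouse and damp slightly, so particles overshoot but settle.
  vel = vel * 0.99 + (targetPos.xy - pos) * 0.001;
  gl_FragColor = vec4(vel, 0.0, 1.0);   // the new velocity, written as a color
}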
Step 3: Updating positions per particle:
So far each particle has a velocity and a previous position. Those two values get processed into a new position, again output as a texture - this time into the positionTexture. Until the whole pass has been rendered (into a framebuffer) and marked as the new texture, the old positionTexture remains unchanged and can be read with ease:
In id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values).
Step 4: Prime time (=output)
To output the results, they probably take another million points/vertices and give them the positionTexture as an input. The vertex shader then sets the position of each point by reading the texture's RGB value at location x,y (passed in as vertex attributes).
// From <script type="x-shader/x-vertex" id="dispVert">
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition,1.0);
In the display fragment shader, they only need to set a color (note the low alpha, which means about 20 particles have to stack up to fully light a pixel).
// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
I hope this made it clear how this little demo works :-) I am not the author of that demo, though. This answer turned out to be a super detailed one - skim the bold keywords to get the short version.
I am running a data processing application that is pretty much:
var f = function(a, b){ /* any function of type int -> int -> int */ };
var g = function(a){ /* any function of type int -> int */ };

function my_computation(state) {
  var data = state[2];
  for (var i = 0, l = data.length, res = 0; i < l; ++i)
    res = f(res, g(data[i]));
  state[3] = res;
  return res;
}
This pattern is pretty much that of a foldl. That computation is not fast enough on the CPU. Is it possible to somehow run it on the GPU, in the browser?
From your comment:
I don't know much about vertex shaders but to my knowledge it worked in isolated pixels, and for the folding you'd kinda need an accumulation pattern. No?
If you want to use WebGL for computation over an array, you most likely will want to do it in a fragment shader, not a vertex shader. If you use input geometry that covers the entire viewport, a fragment shader is then simply a program that computes an image pixel-by-pixel. It can use as inputs numeric parameters and arbitrary textures. Furthermore, you can render output to a texture.
This is how you do inputs: you stash the input data in a texture, and have the fragment shader do lookups in the texture. It's perfectly normal to do multiple offset lookups in a texture; for example, this is how a blur effect works.
You're right to be concerned about accumulation. There is no native way to do a fold over all pixels. However, if you can express your algorithm in a "map-reduce" fashion, where the reduce operation combines two outputs and doesn't care about whether they are the input from a previous reduce step, then you can do it like so:
1. Load your input data into a 1-pixel-high by N-pixel-wide texture. (Not sure whether using square textures might give better upper limits, but this is simpler to describe.)
2. Run your "map" (g, the non-accumulating computation) shader program, producing an intermediate-outputs texture.
3. Run a shader which performs the "reduce" operation (f) on each pair of adjacent pixels (or similar) of the intermediate texture, producing another texture half as wide.
4. Do the same thing again on that output, repeating until only one pixel remains.
This will get you your single answer in only O(log n) JavaScript operations.
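A hedged sketch of one such reduce pass as a fragment shader (assuming the values sit one per texel in a 1-pixel-high texture and vUv comes from a full-screen quad; names are illustrative):

precision mediump float;

// Each output pixel combines two adjacent input pixels: out[i] = f(in[2i], in[2i+1]).
uniform sampler2D inputTex;    // intermediate outputs from the previous pass
uniform float inputWidth;      // width of inputTex in pixels
varying vec2 vUv;              // 0..1 across the half-width output

float f(float a, float b) { return a + b; }   // example reduce op: sum

void main() {
  float outIndex = floor(vUv.x * inputWidth * 0.5);
  float a = texture2D(inputTex, vec2((outIndex * 2.0 + 0.5) / inputWidth, 0.5)).r;
  float b = texture2D(inputTex, vec2((outIndex * 2.0 + 1.5) / inputWidth, 0.5)).r;
  gl_FragColor = vec4(f(a, b), 0.0, 0.0, 1.0);
}

The JavaScript side just ping-pongs between two textures/framebuffers, halving the width each pass until one pixel is left.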
I would say yes. I've often thought about this myself. Your data would be attached as a vertex attribute buffer and a custom shader would execute your fold code, 'rendering' the results to an off-screen buffer. You would then read the result buffer back into CPU memory.
Given that you want to run it on the browser, you are limited by what WebGL/extensions support, specifically on CPU access to GPU data.
You can take a look at the shader code for filters/edge detection in the code base below, which shows how you can do this in a fragment shader.
https://github.com/prabindh/sgxperf/blob/master/sgxperf_strings.cpp
After this, you can access the data using readPixels. NOTE - the fragment shader can only output fixed-point data.
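For example, a minimal readback sketch (w and h stand for your output texture's size; the RGBA/UNSIGNED_BYTE combination is the one WebGL1 guarantees):

// Read a w x h block of the currently bound framebuffer back into CPU memory.
const pixels = new Uint8Array(w * h * 4);
gl.readPixels(0, 0, w, h, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// pixels[0..3] are the R, G, B, A bytes of the bottom-left pixel; decode your
// fixed-point result from those channels.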