Wish to pass triangle identifiers to shaders - javascript

I am using gl.TRIANGLES to draw a function, where I pass vertices and an index buffer that identifies the vertices of each triangle. I would also like the shaders to distinguish three types of triangles: -1, 0, and 1. I realize I can set up a triplet of identical numbers for each triangle, pass this buffer to an attribute variable of the vertex shader, and then forward it to the fragment shader via a varying variable. However, that seems like a lot of unnecessary data, since I only need a single number per triangle.
Is there a way to set up a buffer for gl.TRIANGLES so that, for each triangle, a single number gets passed to the vertex shader? I can then use a varying variable to pass it to the fragment shader. Or will I have to set up duplicate triplets like (0,0,0) or (-1,-1,-1) for each triangle?
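For reference, the duplicated-per-vertex approach described above would look roughly like this in the vertex shader (a minimal sketch; aPosition, aTriangleType and vTriangleType are illustrative names, not from the original code):
attribute vec4 aPosition;
attribute float aTriangleType;   // -1.0, 0.0 or 1.0, repeated for all 3 vertices of a triangle
varying float vTriangleType;
void main() {
  gl_Position = aPosition;
  vTriangleType = aTriangleType; // identical at all 3 vertices, so interpolation leaves it unchanged
}
The fragment shader can then branch or pick a color based on vTriangleType.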

Related

Can I read single pixel value from WebGL depth texture in JavaScript?

In short
I would like to read a single pixel value from a WebGL 2 depth texture in JavaScript. Is this at all possible?
The scenario
I am rendering a scene in WebGL 2. The renderer is given a depth texture to which it writes the depth buffer. This depth texture is used in post processing shaders and the like, so it is available to us.
However, I need to read my single pixel value in JavaScript, not from within a shader. If this had been a normal RGB texture, I would do
function readPixel(x, y, texture, outputBuffer) {
  // attach the texture to a temporary framebuffer and read the pixel from it
  const frameBuffer = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, frameBuffer);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture, 0);
  gl.readPixels(x, y, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, outputBuffer);
}
This will write the pixel at x, y into outputBuffer.
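For example, a call might look like this (assuming gl is the WebGL context and someColorTexture is a regular RGBA texture):
const pixel = new Uint8Array(4);          // one RGBA pixel
readPixel(10, 20, someColorTexture, pixel);
// pixel now holds [r, g, b, a] of the texel at (10, 20)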
However, is it at all possible to do the same with a depth texture? If I just pass a depth texture to my function above, the output buffer only has zeros, and I receive a WebGL warning GL_INVALID_FRAMEBUFFER_OPERATION: Framebuffer is incomplete. Checking the framebuffer state reveals FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
Naturally, the depth texture is not an RGBA texture, but is there some other values we can give it to get our depth value, or is it impossible?
Motivation
I am aware that this question has been asked a number of times on StackOverflow and elsewhere in some form or another, but there is always some variation making it confusing for me to get a straight-up yes or no answer to the question in the form I ask it here. In addition, many questions and sources are very old, WebGL 1 only, with some mentions of webgl_depth_texture making a difference, and so on.
If the answer is no, I'd welcome any suggestions for how else to easily obtain this depth pixel. As this operation is not done for every frame, I value simplicity over performance. The use case is picking, and classical ray intersection is not feasible. (I also know that I can encode a scalar depth value into and out of an RGB pixel, but I need to be able to access the pixel from within the js code in the first place.)
I'd welcome any insights.
No, there is no possibility. WebGL 2.0 is based on OpenGL ES 3.0.
In the OpenGL ES 3.2 Specification, section 4.3.2 Reading Pixels, it is clearly specified:
[...] The second is an implementation-chosen format from among those defined
in table 3.2, excluding formats DEPTH_COMPONENT and DEPTH_STENCIL [...]
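If you need the depth value in JavaScript anyway, one workaround along the lines the question already mentions is a small extra pass that samples the depth texture and writes the depth into a normal color attachment, which readPixels can read. A rough WebGL 2 sketch of that fragment shader (names are illustrative, not a definitive implementation):
#version 300 es
precision highp float;
uniform sampler2D uDepthTex;  // the depth texture, bound as a regular sampler
in vec2 vUV;
out vec4 outColor;
void main() {
  float depth = texture(uDepthTex, vUV).r;  // the depth value is in the red channel
  outColor = vec4(depth, 0.0, 0.0, 1.0);    // note: an 8-bit target quantizes this to 256 steps
}
Render that pass into a color framebuffer (a float color buffer, if EXT_color_buffer_float is available, keeps more precision) and read the result back with gl.readPixels as in the function above.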

Is it possible to set condition for instance in WebGL?

I'm looking for a way to skip instances while rendering in a shader.
I have one million instances, and for performance I want to skip instances based on their bounds relative to the current view box.
Is there a place where I can write code for this condition? JS is slower than the GPU, so I'm looking for a way to express the condition in GLSL.
you can't "Skip" an instance. You can move all of its vertices off the screen
attribute float visible;
...
gl_Position = mix(vec4(0, 0, -2, 1), gl_Position, visible);
or something like that. If visible is 1.0 then you get the same as you've always gotten. If visible is 0.0 then all of its vertices are outside of clip space.
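In WebGL 2 (or WebGL 1 with the ANGLE_instanced_arrays extension), visible would be a per-instance attribute, set up roughly like this (a sketch; buffer and variable names are mine):
const visibleData = new Float32Array(numInstances);      // 1.0 = draw, 0.0 = hide
const visibleBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, visibleBuffer);
gl.bufferData(gl.ARRAY_BUFFER, visibleData, gl.DYNAMIC_DRAW);
const loc = gl.getAttribLocation(program, 'visible');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 1, gl.FLOAT, false, 0, 0);
gl.vertexAttribDivisor(loc, 1);                          // advance once per instance, not per vertex
In WebGL 1 the last call would be ext.vertexAttribDivisorANGLE(loc, 1) from ANGLE_instanced_arrays.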
You could also pass in the bounds in some form (center + radius, aabb) and effectively do the same thing. Compute whether that center + radius is in the view frustum; if not, set gl_Position to something outside clip space. Of course, if the vertices were going to be outside of clip space already, then this isn't helping.
Still, a million instances is probably too many for most GPUs.

WebGL - Set multiple vertices

I'm trying to translate some TypeScript code into a vertex shader to use with WebGL. My goal is to draw the bitangent lines of two circles. I have a function to calculate the tangent points here, https://jsfiddle.net/Zanchi/4xnp1n8x/2/ on line 27. Essentially, it returns a tuple of points with x and y values.
// First circle bottom tangent point
const t1 = {
  x: x1 + r1 * cos(PI/2 - alpha),
  y: y1 + r1 * sin(PI/2 - alpha)
}; // ... and so on
I know I can do the calculation in JS and pass the values to the shader via an attribute, but I'd like to leverage the GPU to do the point calculations instead.
Is it possible to set multiple vertices in a single vertex shader call, or use multiple values calculated in the first call of the shader in subsequent calls?
Is it possible to set multiple vertices in a single vertex shader call
No
or use multiple values calculated in the first call of the shader in subsequent calls?
No
A vertex shader outputs 1 vertex per iteration/call. You set the number of iterations when you call gl.drawArrays (gl.drawElements is more complicated)
I'm not sure you gain much by not just putting the values in an attribute. It might be fun to generate them in the vertex shader but it's probably not performant.
In WebGL1 there is no easy way to use a vertex shader to generate data. First off you'd need some kind of count or something that changes for each iteration and there is nothing that changes if you don't supply at least one attribute. You could supply one attribute with just a count [0, 1, 2, 3, ...] and use that count to generate vertices. This is what vertexshaderart.com does but it's all for fun, not for perf.
In WebGL2 there is the built-in variable gl_VertexID, which means you get a count for free with no need to supply an attribute. In WebGL2 you can also use transform feedback to write the output of a vertex shader to a buffer. That way you can generate vertices once into a buffer and then use the generated vertices from that buffer (and therefore probably get better performance than generating them every time).
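As a rough illustration of the gl_VertexID route (my sketch, with a hypothetical uCount uniform for the total point count), a WebGL2 vertex shader can generate a circle of points without any attributes at all:
#version 300 es
uniform int uCount;   // total number of points, set from JavaScript
void main() {
  float angle = float(gl_VertexID) / float(uCount) * 6.283185307;  // 2 * PI
  gl_Position = vec4(0.5 * cos(angle), 0.5 * sin(angle), 0.0, 1.0);
  gl_PointSize = 2.0;
}
This would be drawn with gl.drawArrays(gl.POINTS, 0, count) and no buffers bound.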

Reading data from shader

I'm trying to learn how to take advantage of GPU capabilities for three.js and WebGL, so I'm analysing code to pick up patterns and methods for how things are done, and I need some code explanation.
I found this example: One million particles, which seems to be the easiest one involving calculations made in shaders and spit back out.
So from what I have figured out:
- Data for the velocity and position of particles are kept in textures passed to the shaders, which perform the calculations there and output the updated values
- Particles are created randomly on the plane, no more than the texture size?
for (var i = 0; i < 1000000; i++) {
  particles.vertices.push(new THREE.Vector3((i % texSize) / texSize,
                                            Math.floor(i / texSize) / texSize,
                                            0));
}
I don't see any particle position updates. How is the data from the shaders retrieved, and how does it update each particle?
Does pick() only pass the mouse position to calculate the direction of the particles' movement?
Why are there 2 buffers, and 8 shaders (4 pairs of vertex and fragment shaders)? Isn't the one for calculating velocity and position enough?
How does the shader update the texture? I only see reading from it, not writing to it.
Thanks in advance for any explanations!
How the heck have they done that:
In this post, I'll explain how these results get computed almost entirely on the GPU via WebGL/Three.js - it might look a bit sloppy as I'm using the integrated graphics of an Intel i7 4770k:
Introduction:
The simple idea to keep everything on the GPU: each particle's state is represented by one texture pixel's color value. One million particles result in 1024x1024 pixel textures, one to hold the current positions and another to hold the velocities of those particles.
Nothing forbids abusing the RGB color values of a texture for completely different data in the 0...255 range. You basically have 32 bits (R + G + B + alpha) per texture pixel for whatever you want to keep in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
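For example, uploading one million initial particle positions into a 1024x1024 float texture could look roughly like this (a sketch of the general idea, not the demo's exact code; posTexture is an assumed name, and WebGL 1 needs the OES_texture_float extension for this):
const texSize = 1024;                                   // 1024 * 1024 = ~1M particles
const data = new Float32Array(texSize * texSize * 4);   // 4 channels (RGBA) per particle
for (let i = 0; i < texSize * texSize; i++) {
  data[i * 4 + 0] = Math.random();   // x position in R
  data[i * 4 + 1] = Math.random();   // y position in G
  // B and A stay free for additional per-particle data
}
gl.bindTexture(gl.TEXTURE_2D, posTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);  // no filtering of raw data
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, texSize, texSize, 0, gl.RGBA, gl.FLOAT, data);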
They basically used multiple shaders in a sequential order. From the source code, one can identify these steps of their processing pipeline:
Randomize particles (ignored in this answer) ('randShader')
Determine each particles velocity by its distance to mouse location ('velShader')
Based on velocity, move each particle accordingly ('posShader')
Display the screen ('dispShader')
.
Step 2: Determining Velocity per particle:
They issue a draw call on 1 million points whose output will be saved as a texture. In the vertex shader each point gets a varying named "vUv", which basically determines the x and y pixel position inside the textures used in the process.
The next step is its fragment shader, as only this shader can output values (as RGB values into the framebuffer, which gets used as a texture afterwards - all happening inside GPU memory only). You can see in the id="velFrag" fragment shader that it gets an input variable declared as uniform vec3 targetPos;. It contains the mouse coordinate, probably in the -1.0 to +1.0 range. Such uniforms are set cheaply from the CPU each frame, because they are shared among all instances and don't involve large memory transfers (they probably also update the mouse coordinates only once every few frames, to lower CPU usage).
What's going on here? Well, that shader calculates the distance from the particle to the mouse coordinate and, depending on that, alters the particle's velocity - the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying past/overshooting the mouse position, depending on the gray value.
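A stripped-down idea of what such a velocity fragment shader could look like (my sketch, not the demo's exact code; posTex, velTex and vUv are assumed names):
precision highp float;
uniform sampler2D posTex;   // current positions, one pixel per particle
uniform sampler2D velTex;   // current velocities, one pixel per particle
uniform vec3 targetPos;     // mouse position
varying vec2 vUv;           // which particle's pixel we are processing
void main() {
  vec3 pos = texture2D(posTex, vUv).rgb;
  vec3 vel = texture2D(velTex, vUv).rgb;
  // accelerate towards the mouse, but keep most of the old velocity (momentum)
  vel = vel * 0.95 + normalize(targetPos - pos) * 0.001;
  // a real implementation would also need to encode negative velocities into the texture's value range
  gl_FragColor = vec4(vel, 1.0);
}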
.
Step 3: Updating positions per particle:
So far each particle has a velocity and a previous position. Those two values get processed into a new position, again output as a texture - this time into the positionTexture. Until the whole frame has been rendered (into the default framebuffer) and then marked as the new texture, the old positionTexture remains unchanged and can be read with ease:
In id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values).
.
Step 4: Prime time (=output)
To output the results, they probably again took a million points/vertices and gave the draw call the positionTexture as an input. Then the vertex shader sets the position of each point by reading the texture's RGB value at location x, y (passed as vertex attributes).
// From <script type="x-shader/x-vertex" id="dispVert">
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition,1.0);
In the display fragment shader, they only need to set a color (note the low alpha, which lets about 20 particles stack up before a pixel is fully lit).
// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
.
I hope this made it clear how this little demo works :-) I am not the author of that demo, though. I just noticed this answer became a super detailed one - skim the highlighted keywords to get the short version.

WebGL/GLSL - How does a ShaderToy work?

I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.
From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc). Once that's done, the user then has to provide at least one vertex shader program, and one fragment shader program before an OpenGL program compiles.
However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.
How does that work?
My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...
Can anyone explain how Shadertoy works?
ShaderToy is a tool for writing pixel shaders.
What are pixel shaders?
If you render a full screen quad, meaning that each of four points is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that now each fragment corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a fullscreen quad.
So attributes are always the same and so is a vertex shader:
positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0] ]
And that quad is rendered as TRIANGLE_STRIP.
Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
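On the JavaScript side, that quad never changes; setting it up and drawing it looks roughly like this (a sketch, using the aPos attribute of the vertex shader below):
const positions = new Float32Array([-1, 1,  1, 1,  -1, -1,  1, -1]);
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, 'aPos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);   // two triangles covering the whole viewport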
Vertex shader:
attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;
void main() {
  gl_Position = vec4(aPos, 0.0, 1.0);
  vUV = aUV;
}
And the fragment shader would then look something like this:
uniform vec2 uScreenResolution;
varying vec2 vUV;
void main() {
  // vUV is equal to gl_FragCoord / uScreenResolution
  // do some pixel shader related work
  gl_FragColor = vec4(someColor, 1.0);   // someColor being a vec3 computed above
}
ShaderToy supplies you with a few uniforms by default - iResolution (akin to uScreenResolution), iGlobalTime, iMouse, ... - which you can use in your pixel shader.
For coding geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing. That is quite a complex area of programming, but in short:
You represent your geometry through mathematical formulas, and later in the pixel shader, when you wish to check whether some pixel is part of your geometry, you use those formulas to retrieve that information. Googling a bit should give you plenty of resources on what ray tracers are and how exactly they are built, and this might help:
How to do ray tracing in modern OpenGL?
Hope this helps.
ShaderToy runs simple GLSL that is programmed to handle all the lighting, geometry, etc. It's not vertex geometry; most of the 3D stuff is done by raycasting, or you can write 2D shaders, etc.
Any color and spatial maths can be programmed in the GLSL language. Combinations of advanced algorithms make isosurfaces and shapes, then project textures onto those isosurfaces; raycasting sends imaginary lines from the viewer into the distance and intercepts anything in the way. There are many raycasting techniques for 3D.
Visit www.iquilezles.org for an idea of the different tools that are used in shadertoy/GLSL graphics.
Shadertoy is called a "toy" for a reason. It's basically a puzzle: given only a function that is told the current pixel position as input, write a function that generates an image.
The website sets up WebGL to draw a single quad (rectangle) and then lets you provide a function that is passed the current pixel being rendered as fragCoord. You then use that to compute some colors.
For example if you did this
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
  vec4 red = vec4(1, 0, 0, 1);
  vec4 green = vec4(0, 1, 0, 1);
  fragColor = mix(red, green, sin(fragCoord.x / 10.0) * 0.5 + 0.5);
}
you'd get red and green stripes like this
https://www.shadertoy.com/view/3l2cRz
Shadertoy provides a few other inputs. The most common is the resolution being rendered to as iResolution. If you divide the fragCoord by iResolution then you get values that go from 0 to 1 across and 0 to 1 down the canvas so you can easily make your function resolution independent.
Doing that we can draw an ellipse in the center like this
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
  // uv goes from 0 to 1 across and up the canvas
  vec2 uv = fragCoord / iResolution.xy;
  vec4 red = vec4(1, 0, 0, 1);
  vec4 green = vec4(0, 1, 0, 1);
  float distanceFromCenter = distance(uv, vec2(0.5));
  fragColor = mix(red, green, step(0.25, distanceFromCenter));
}
which produces a red ellipse in the center of a green background.
The second most common input is iTime, in seconds, which you can use to animate parameters in your function over time.
So, given those inputs, if you apply enough math you can make an incredible image; for example, this shadertoy shader generates an entire animated dolphin scene.
It's amazing that someone figured out the math needed to generate that image given only the inputs mentioned above.
Many of the most amazing shadertoy shaders use a technique called ray marching and some math called "signed distance fields" which you can read about here
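To give a flavor of the technique, the core of a ray marcher is usually just a distance function describing the scene and a loop that steps a ray forward by that distance. A bare-bones Shadertoy-style sketch (a single unlit sphere, not taken from any particular shader):
float sceneSDF(vec3 p) {
  return length(p) - 0.5;                  // signed distance to a sphere of radius 0.5 at the origin
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
  vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
  vec3 ro = vec3(0.0, 0.0, -2.0);          // ray origin (camera)
  vec3 rd = normalize(vec3(uv, 1.0));      // ray direction through this pixel
  float t = 0.0;
  for (int i = 0; i < 64; i++) {
    float d = sceneSDF(ro + rd * t);       // how far can we safely step?
    if (d < 0.001 || t > 10.0) break;      // hit something, or gave up
    t += d;
  }
  vec3 col = (t < 10.0) ? vec3(1.0 - t * 0.25) : vec3(0.0);
  fragColor = vec4(col, 1.0);
}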
But, I think it's important to point out that while there are many cool things to learn from shadertoy shaders, many of them apply only to this "puzzle" of "how do I make a beautiful image with only one function whose input is a pixel position and whose output is a single color". They don't answer "how should I write shaders for a performant app".
Compare the dolphin above to this speedboat game:
https://www.youtube.com/watch?v=7v9gZK9HqqI
The dolphin shadertoy shader runs at about 2 frames a second when fullscreen on my NVidia GeForce GT 750, whereas the speedboat game runs at 60fps. The reason the game runs fast is that it uses more traditional techniques, drawing shapes with projected triangles. Even an NVidia 1060 GTX can only run that dolphin shader at about 10 frames a second when fullscreen.
It basically just pushes GLSL pixel shader source code directly onto the graphics card. The real magic happens in the incredibly clever algorithms that people use to create amazing effects, like ray marching, ray casting and ray tracing. It's best to have a look at some other live GLSL sandboxes like http://glsl.heroku.com/ and http://webglplayground.net/.
It basically creates a window, typically two triangles which represent the screen, and then the shader works on each pixel just like a ray tracer.
I've been looking at these for a while now, and the algorithms people use are mind-blowing; you'll need some serious math chops and should look up "demo coding" source code to be able to wrap your head around them. Many on Shadertoy just blow your mind!
So to summarise, you just need to learn GLSL shader coding and algorithms. No easy solution.
Traditionally in computer graphics, geometry is created using vertices and rendered using some form of materials (e.g. textures with lighting). In GLSL, the vertex shader processes the vertices and the fragment (pixel) shader processes the materials.
But that is not the only way to define shapes. Just as a texture could be procedurally defined (instead of looking up its texels), a shape could be procedurally defined (instead of looking up its geometry).
So, similar to ray tracing, these fragment shaders are able to create shapes without having their geometry defined by vertices.
There's still more ways to define shapes. E.g. volume data (voxels), surface curves, and so on. A computer graphics text should cover some of them.
