I would like to cut an object (a box) in WebGL (fragment shaders / vertex shaders) without using Boolean operations (union, difference, etc..).
I want to use shaders to hide some part of the object (so it is therefore not really a "real cuts" since it simply hides the object).
EDIT
First, make sure that the vertex shader passes through to the fragment shader the position in world space (or rather, whichever coordinate space you wish the clipping to be fixed relative to). Example (written from memory, not tested):
varying vec3 positionForClip;
...
void main(void) {
...
vec4 worldPos = modelMatrix * vertexPosition;
positionForClip = worldPos.xyz / worldPos.w; // don't need homogeneous coordinates, so do the divide early
gl_Position = viewMatrix * worldPos;
}
And in your fragment shader, you can then discard based on an arbitrary plane, or any other kind of test you want:
varying vec3 positionForClip;
uniform vec3 planeNormal;
uniform float planeDistance;
...
void main(void) {
if (dot(positionForClip, planeNormal) > planeDistance) {
// or if (positionForClip.x > 10.0), or whatever
discard;
}
...
gl_FragColor = ...;
}
Note that using discard may cause a performance reduction as the GPU cannot optimize based on knowing that all fragments will be written.
Disclaimer: I haven't researched this myself, and only just wrote down a possible way to do it based on the 'obvious solution'. There may be better ways I haven't heard of.
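For reference, here is a rough, untested sketch of how the clip-plane uniforms could be fed from the JavaScript side; the program name and plane values are only illustrative, and the plane is expressed in the same space as positionForClip (world space here):
// Illustrative values: a unit normal and a distance from the origin.
var planeNormal = [0.0, 1.0, 0.0];   // discard everything "above" a horizontal plane
var planeDistance = 2.0;

gl.useProgram(programWhichCuts);     // the program containing the shaders above
gl.uniform3fv(gl.getUniformLocation(programWhichCuts, "planeNormal"), planeNormal);
gl.uniform1f(gl.getUniformLocation(programWhichCuts, "planeDistance"), planeDistance);
// ...then bind the object's buffers and draw as usual.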
Regarding your question about multiple objects: There are many different ways to handle this — it's all custom code in the end. But you certainly can use a different shader for different objects in your scene, as long as they're in different vertex arrays.
gl.useProgram(programWhichCuts);
gl.drawArrays(gl.TRIANGLES, 0, cutObjectVertexCount);      // mode and count are placeholders
gl.useProgram(programWhichDoesNotCut);
gl.drawArrays(gl.TRIANGLES, 0, otherObjectVertexCount);    // mode and count are placeholders
If you're new to using multiple programs, it's pretty much just like using one program except that you do all the setup (compile, attach, link) once. The main thing to watch out for is each program has its own uniforms, so you have to initialize your uniforms for each program separately.
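For example (a hedged sketch, assuming both programs declare a modelMatrix uniform as in the vertex shader above and that a corresponding JavaScript matrix array exists), looking up and setting the "same" uniform has to happen once per program, because each program has its own locations and values:
gl.useProgram(programWhichCuts);
gl.uniformMatrix4fv(gl.getUniformLocation(programWhichCuts, "modelMatrix"), false, modelMatrix);

gl.useProgram(programWhichDoesNotCut);
gl.uniformMatrix4fv(gl.getUniformLocation(programWhichDoesNotCut, "modelMatrix"), false, modelMatrix);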
Related
I am mainly developing in C++ and decided to compile one of my projects to WebAssembly and build a website upon it. Since I have written a 3D engine in C++ before, I decided to use WebGL on the website. I have translated my shader template class. I have reduced the problem to a two-dimensional one.
Task
First, I will describe what I am trying to do.
I am going to render a 2D FEM grid which consists of FEM elements, each of which can be any type of polygon. The nodes of those polygons contain values which I am trying to display. I already wrote code to break the polygons down into triangles. Initially I am just trying to render two-dimensional triangles with their node values interpolated.
Shader Code
I wrote a template shader class in WebGL which handles the construction and compilation of the shaders; it has been copied over from my C++ 3D engine. Since the code itself is somewhat long, I am not going to post it here but will instead show a list of the executed WebGL commands.
The shaders I am currently using to debug my problem are the following:
Vertex-Shader:
precision mediump float;
attribute vec2 position;
varying vec2 fragPos;
void main(){
gl_Position = vec4(position,0,1.0);
fragPos = position;
}
---------------------------------------------------------------
Fragment-Shader:
precision mediump float;
varying vec2 fragPos;
uniform sampler2D elementValues;
uniform int elementValuesDimension;
void main(){
gl_FragColor = vec4(0.8,0.2,0.0,1.0);
}
As you can see, I am not going for any interpolation in this debug case, but am just trying to output some red-ish color from my fragment shader.
Executed Commands
I went ahead and used webgl-debug.js to show all the operations done. For this case, you can see the vertices and indices matching a simple quad spanning [0,0.6]x[0,0.6] which should be well within the clip space.
This can be broken into a few parts:
Enable uint32 as indexing
gl.getExtension(OES_element_index_uint)
Create two buffers for vertex coordinates and indices
gl.createBuffer()
gl.createBuffer()
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, [object WebGLBuffer])
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, 0,1,2,0,2,3, gl.STATIC_DRAW)
gl.bindBuffer(gl.ARRAY_BUFFER, [object WebGLBuffer])
gl.bufferData(gl.ARRAY_BUFFER, 0,0,0,0.6000000238418579,0.6000000238418579,0.6000000238418579,0.6000000238418579,0, gl.STATIC_DRAW)
Create and compile the shaders
gl.createShader(gl.VERTEX_SHADER)
gl.shaderSource(vertex_shader_src);
gl.compileShader([object WebGLShader]);
gl.getShaderParameter([object WebGLShader], gl.COMPILE_STATUS);
gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragment_shader_src);
gl.compileShader([object WebGLShader]);
gl.getShaderParameter([object WebGLShader], gl.COMPILE_STATUS);
gl.createProgram()
gl.attachShader([object WebGLProgram], [object WebGLShader])
gl.attachShader([object WebGLProgram], [object WebGLShader])
gl.linkProgram([object WebGLProgram])
gl.validateProgram([object WebGLProgram])
Get Locations of the uniforms (which are unused) as well as the location of the attribute
gl.getUniformLocation([object WebGLProgram], elementValues)
gl.getUniformLocation([object WebGLProgram], elementValuesDimension)
gl.getAttribLocation([object WebGLProgram], position)
Verify that linking has worked
gl.getProgramParameter([object WebGLProgram], gl.LINK_STATUS)
Main Rendering Loop. Initially clear the displayed stuff
gl.clearColor(0.5, 0.5, 0.5, 1)
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)
gl.disable(gl.CULL_FACE)
gl.viewport(0, 0, 700, 600)
Activate the shader and bind the buffers
gl.useProgram([object WebGLProgram])
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, [object WebGLBuffer])
gl.bindBuffer(gl.ARRAY_BUFFER, [object WebGLBuffer])
Connect the vertices to the first attribute
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0)
Render Call.
gl.drawElements(gl.TRIANGLES, 4, gl.UNSIGNED_INT, 0)
Sadly, when the commands above are executed, I am unable to see anything except a gray screen.
I am very happy for any help!
Greetings
Finn
Thanks to teddybeard, and a lot more time spent on this problem than I was hoping, two issues have shown up:
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, 0,1,2,0,2,3, gl.STATIC_DRAW) contains 6 indices, but the drawElements command only asked for 4 of them. This is obviously a bug, but it should still have rendered at least one of the two triangles.
Secondly, I forgot to enable the vertex attribute array for the position attribute; gl.enableVertexAttribArray(1); solved the problem.
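For completeness, a hedged sketch of what the corrected calls might look like (the program and buffer variable names are mine, not taken from the log above):
var positionLoc = gl.getAttribLocation(program, "position");

gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.enableVertexAttribArray(positionLoc);                       // was missing before
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_INT, 0);          // all 6 indices, not 4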
I have been trying to learn THREEJS shader materials. So far I understand how uniforms, vertexShader, and fragmentShader play a role in projecting and coloring the vertices and fragments in the world of glsl and webgl. I have been trying to find some good examples where ShaderMaterial of THREEJS is extended using the THREE.ShaderLib.
Suppose I want to extend a standard threejs material (THREE.ShaderLib['standard']) to write the envmap texture, is that possible? Or is it absolutely necessary that I write everything from scratch?
Shaders are just strings, it's up to you what you do with them and how you obtain them. With that being said, three.js has numerous tools to help you build them better.
At the highest level, there is an abstraction in the form of THREE.Material, where you describe abstract properties and three.js configures the shader under the hood.
//no shaders involved
var redPlastic = new THREE.MeshStandardMaterial({
color: 'red',
roughness: 0.1 // low value, i.e. not very rough
})
ShaderMaterial expects you to write raw GLSL, but still includes some things that you would otherwise have to do manually, so "writing from scratch" is not entirely correct. With RawShaderMaterial you would write everything from scratch. With THREE.ShaderMaterial:
varying vec2 vUv;
void main(){
vUv = uv; // <- magic! where does uv come from?
vec4 worldPosition = modelMatrix * vec4( position, 1.); // <- more magic! where do modelMatrix and position come from?
gl_Position = projectionMatrix * viewMatrix * worldPosition; // more!
}
At runtime, when three.js is compiled and included on a page, the THREE namespace has a THREE.ShaderChunk dictionary. These are various named snippets of GLSL that all the materials are built out of.
You can copy these snippets from their source files, and paste them in your own shader file, or string.
You can write it with a string template:
`${THREE.ShaderChunk.some_chunk}
void main(){
...
${anotherChunk}
gl_Position = ...
}
`
But if you want to extend the built-in materials, there is an (IMHO buggy and wonky :)) feature of the material called onBeforeCompile. With this, you pass a callback to any of the built-in materials and get the shader object before compilation. Here you can inject your own GLSL, swap out chunks, or do anything else you can think of doing to a string.
In order to use this, one needs to be familiar with the structure of the shaders, which are built from:
https://github.com/mrdoob/three.js/tree/dev/src/renderers/shaders/ShaderChunk
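To make that concrete, here is a rough, untested sketch of an onBeforeCompile patch; the chunk name, uniform, and tint are my own choices for illustration, and the exact chunk names depend on your three.js version:
var material = new THREE.MeshStandardMaterial({ color: 'red' });

material.onBeforeCompile = function (shader) {
  // shader.vertexShader / shader.fragmentShader are plain GLSL strings here
  shader.uniforms.uTint = { value: new THREE.Color('blue') };
  shader.fragmentShader = 'uniform vec3 uTint;\n' + shader.fragmentShader.replace(
    '#include <dithering_fragment>',
    '#include <dithering_fragment>\n  gl_FragColor.rgb = mix(gl_FragColor.rgb, uTint, 0.5);'
  );
};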
I've recently started dabbling in shader programming using babylon.js. I'm trying to write a fragment shader that supports repeating textures. I used a pretty trivial way to do it, as you can tell from the link.
http://www.babylonjs.com/cyos/#CARU2#1
vec2 xy = vUV;
vec2 phase = fract(xy / vec2(1.0/vScale,1.0/vScale));
vec3 color = texture2D(textureSampler, phase).rgb;
The problem is that this creates a strange pixelating effect on the seams of the repeating texture, as shown by the following image.
How can I fix this? It must be something wrong with my fragment shader, because using the standard material doesn't yield this problem.
If anyone could help I would be eternally grateful.
Wow okay I'm an idiot, I was scaling the image in a really stupid and inefficient way. I don't think I actually understood what the line:
vec2 phase = fract(xy / vec2(1.0/vScale,1.0/vScale));
was doing. I'm still not 100% sure about it, but the logical way is simply to multiply the uv vector by the repeating factor.
vec2 phase = vec2(xy.x*vScale,xy.y*vScale);
You can see the result here:
http://www.babylonjs.com/cyos/#CARU2#2
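For reference, a minimal version of the fragment shader with the fix applied might look like this (a sketch, assuming vScale is a uniform float and the texture's wrap mode is set to repeat, which is what actually makes the tiling work):
precision highp float;
varying vec2 vUV;
uniform sampler2D textureSampler;
uniform float vScale;

void main(void) {
    vec2 phase = vUV * vScale;   // scaled UVs; the repeat wrap mode handles the tiling
    gl_FragColor = vec4(texture2D(textureSampler, phase).rgb, 1.0);
}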
I want to be able to append multiple light sources to each node of my scene graph, but I have no clue how to do it!
From the tutorials on learningwebgl.com I learned to either use directional or position lighting, but I couldn't find a good explanation of how to implement multiple light sources.
So, the objective is to have the option to append an arbitrary number of light sources to each node, where each light can be either directional or positional. If possible and advisable, this should be realized using only one shader program (if that isn't the only possibility anyway), because I automatically create the program for each node depending on its specific needs (unless there is already a program on the stack with equal settings).
Based on the tutorials on learningwebgl.com, my fragment-shader source for a node object using lighting without preset binding to one of the lighting-types could look like this...
precision highp float;
uniform bool uUsePositionLighting;
uniform bool uUseDirectionalLighting;
uniform vec3 uLightPosition;
uniform vec3 uLightDirection;
uniform vec3 uAmbientColor;
uniform vec3 uDirectionalColor;
uniform float uAlpha;
varying vec4 vPosition;
varying vec3 vTransformedNormal;
varying vec3 vColor;
void main (void) {
float directionalLightWeighting;
if (uUseDirectionalLighting) {
directionalLightWeighting = max(dot(vTransformedNormal, uLightDirection), 0.0);
} else if (uUsePositionLighting) {
vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);
directionalLightWeighting = max(dot(normalize(vTransformedNormal), lightDirection), 0.0);
}
vec3 lightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
gl_FragColor = vec4(vColor * lightWeighting, uAlpha);
}
...so, that's basically my poor state of knowledge concerning this subject.
I also ask myself, how adding more light sources would affect the lighting-colors:
I mean, do uAmbientColor and uDirectionalColor have to sum up to 1.0? In this case (and particularly when using more than one light source) it surely would be good to precalculate these values before passing them to the shader, wouldn't it?
Put your lights into an array and loop over them for each fragment. Start with a fixed array of light sources; unbounded arrays are not supported until OpenGL 4.3 and are more complicated to work with.
Something along the lines of:
uniform vec3 uLightPosition[16];
uniform vec3 uLightColor[16];
uniform vec3 uLightDirection[16];
uniform bool uLightIsDirectional[16];
....
void main(void) {
vec3 reflectedLightColor = vec3(0.0); // must be initialized before accumulating
// Calculate incoming light for all light sources
for(int i = 0; i < 16; i++) {
vec3 lightDirection = normalize(uLightPosition[i] - vPosition.xyz);
if (uLightIsDirectional[i]) {
reflectedLightColor += max(dot(vTransformedNormal, uLightDirection[i]), 0.0) * uLightColor[i];
}
else {
reflectedLightColor += max(dot(normalize(vTransformedNormal), lightDirection), 0.0) * uLightColor[i];
}
}
gl_FragColor = vec4(uAmbientColor + reflectedLightColor * vColor, uAlpha);
}
Then you can enable/disable the light sources by setting uLightColor to (0,0,0) for the entries you don't use.
Ambient and directional don't have to sum to 1; in fact, a light source can have a strength much greater than 1.0, but then you will need to do tonemapping to get back to a range of values that can be displayed on a screen. I would suggest playing around to get a feel for what is happening (e.g. what happens when a light source has negative colors, or colors above 1.0?).
uAmbientColor is just a (poor) way to simulate light that has bounced several times in the scene. Otherwise things in shadow become completely black, which looks unrealistic.
Reflectance should typically be between 0 and 1 (in this example it is the part returned by the 'max' computations); otherwise a surface would reflect more light than it receives.
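On the JavaScript side, the uniform arrays could be uploaded with something like the following (a rough sketch; the program and lights variables are assumptions, not part of the shader above):
var MAX_LIGHTS = 16;
var positions = new Float32Array(MAX_LIGHTS * 3);
var colors = new Float32Array(MAX_LIGHTS * 3);      // unused slots stay (0,0,0), i.e. "off"
var directions = new Float32Array(MAX_LIGHTS * 3);
var isDirectional = new Int32Array(MAX_LIGHTS);

lights.forEach(function (light, i) {
  positions.set(light.position, i * 3);
  colors.set(light.color, i * 3);
  directions.set(light.direction, i * 3);
  isDirectional[i] = light.directional ? 1 : 0;
});

gl.useProgram(program);
gl.uniform3fv(gl.getUniformLocation(program, "uLightPosition"), positions);
gl.uniform3fv(gl.getUniformLocation(program, "uLightColor"), colors);
gl.uniform3fv(gl.getUniformLocation(program, "uLightDirection"), directions);
gl.uniform1iv(gl.getUniformLocation(program, "uLightIsDirectional"), isDirectional);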
#ErikMan's answer is great, but may involve a lot of extra work on the part of the GPU since you're checking every light per fragment, which isn't strictly necessary.
Rather than an array, I'd suggest building a clip-space quadtree. (You can do this in a compute shader if it's supported by your target platform / GL version.)
A node might have a structure such as (pseudocode as my JS is rusty):
typedef struct
{
uint LightMask; /// bitmask - each light has a bit indicating whether it is active for this node. uint will allow for 32 lights.
bool IsLeaf;
} Node;
const uint maxLevels = 4;
const uint maxLeafCount = pow(4,maxLevels);
const uint maxNodeCount = (4 * maxLeafCount - 1) / 3;
/// linear quadtree - node offset = 4 * parentIndex + childIndex;
Node tree[maxNodeCount];
When building the tree, just check each light's clip-space bounding box against the implicit node bounds. (Root goes from (-1,-1) to (+1,+1). Each child is half that size on each dimension. So, you don't really need to store node bounds.)
If the light touches the node, set a bit in Node.LightMask corresponding to the light. If the light completely contains the node, stop recursing. If it intersects the node, subdivide and continue.
In your fragment shader, find which leaf node contains your fragment, and apply all lights whose bit is set in the leaf node's mask.
You could also store your tree in a mipmap pyramid if you expect it to be dense.
Keep your tiles to a size that is a multiple of 32, preferably square.
vec2 minNodeSize = vec2(2.0 / 32.0);
Now, if you have a small number of lights, this may be overkill. You would probably have to have a lot of lights to see any real performance benefit. Also, a normal loop may help reduce data divergence in your shader, and makes it easier to eliminate branching.
This is one way to implement a simple tiled renderer, and opens the door to having hundreds of lights.
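To make the tile-assignment step concrete, here is a hedged CPU-side sketch that uses a flat 16x16 grid over clip space instead of the full quadtree described above (all names are mine, for illustration only):
var TILES = 16;
var tileMasks = new Uint32Array(TILES * TILES);   // one light bitmask per tile

// lightBounds: array of clip-space boxes {minX, minY, maxX, maxY}, one per light
function assignLights(lightBounds) {
  tileMasks.fill(0);
  lightBounds.forEach(function (b, i) {
    // map the clip-space box [-1,1] to tile indices [0, TILES-1]
    var x0 = Math.max(0, Math.floor((b.minX * 0.5 + 0.5) * TILES));
    var x1 = Math.min(TILES - 1, Math.floor((b.maxX * 0.5 + 0.5) * TILES));
    var y0 = Math.max(0, Math.floor((b.minY * 0.5 + 0.5) * TILES));
    var y1 = Math.min(TILES - 1, Math.floor((b.maxY * 0.5 + 0.5) * TILES));
    for (var y = y0; y <= y1; y++) {
      for (var x = x0; x <= x1; x++) {
        tileMasks[y * TILES + x] |= (1 << i);   // bit i means "light i touches this tile"
      }
    }
  });
}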
I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.
From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc). Once that's done, the user then has to provide at least one vertex shader program, and one fragment shader program before an OpenGL program compiles.
However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.
How does that work?
My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...
Can anyone explain how Shadertoy works?
ShaderToy is a tool for writing pixel shaders.
What are pixel shaders?
If you render a fullscreen quad, meaning that each of its four points is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that now each fragment corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a fullscreen quad.
So attributes are always the same and so is a vertex shader:
positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0] ]
And that quad is rendered as TRIANGLE_STRIP.
Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
Vertex shader:
attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;
void main() {
gl_Position = vec4(aPos, 0.0, 1.0);
vUV = aUV;
}
And fragment shader would then look something like this:
uniform vec2 uScreenResolution;
varying vec2 vUV;
void main() {
// vUV is equal to gl_FragCoord/uScreenResolution
// do some pixel shader related work
gl_FragColor = vec4(someColor, 1.0); // gl_FragColor is a vec4
}
ShaderToy supplies you with a few uniforms by default, such as iResolution (aka uScreenResolution), iGlobalTime, and iMouse, which you can use in your pixel shader.
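Putting that together, the host-side setup such a site might use could look roughly like this (a sketch only; program and canvas are assumed to exist, and the UVs are left to gl_FragCoord/uScreenResolution rather than an aUV attribute):
// one quad covering the whole viewport, drawn as a triangle strip
var positions = new Float32Array([-1, 1,   1, 1,   -1, -1,   1, -1]);
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

var aPosLoc = gl.getAttribLocation(program, "aPos");
gl.enableVertexAttribArray(aPosLoc);
gl.vertexAttribPointer(aPosLoc, 2, gl.FLOAT, false, 0, 0);

gl.useProgram(program);
gl.uniform2f(gl.getUniformLocation(program, "uScreenResolution"), canvas.width, canvas.height);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);   // the whole "scene" is this single quad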
For coding geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing. That is quite a complex area of programming, but in short:
You represent your geometry through some mathematical formulas, and later, in the pixel shader, when you wish to check if some pixel is part of your geometry, you use those formulas to retrieve that information. Googling a bit should give you plenty of resources on what ray tracers are and how exactly they are built, and this might help:
How to do ray tracing in modern OpenGL?
Hope this helps.
ShaderToy displays simple GLSL that is programmed to handle all the lighting, geometry, etc. There is no vertex geometry; most of the 3D stuff is raycast, or you can do 2D shaders, etc.
Any color and spatial maths can be programmed in the GLSL language. Combinations of advanced algorithms make isosurfaces and shapes, and textures are then projected onto those isosurfaces. Raycasting sends imaginary lines from the viewer into the distance and intercepts anything in the way; there are many raycasting techniques for 3D.
Visit www.iquilezles.org for an idea of the different tools that are used in shadertoy/glsl graphics.
Shadertoy is called a "TOY" for a reason. It's basically a puzzle: given only a function that is told the current pixel position as input, write a function that generates an image.
The website sets up WebGL to draw a single quad (rectangle) and then lets you provide a function that is passed the current pixel being rendered as fragCoord. You then use that to compute some colors.
For example if you did this
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec4 red = vec4(1, 0, 0, 1);
vec4 green = vec4(0, 1, 0, 1);
fragColor = mix(red, green, sin(fragCoord.x / 10.0) * 0.5 + 0.5);
}
you'd get red and green stripes like this
https://www.shadertoy.com/view/3l2cRz
Shadertoy provides a few other inputs. The most common is the resolution being rendered to as iResolution. If you divide the fragCoord by iResolution then you get values that go from 0 to 1 across and 0 to 1 down the canvas so you can easily make your function resolution independent.
Doing that we can draw an ellipse in the center like this
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
// uv goes from 0 to 1 across and up the canvas
vec2 uv = fragCoord / iResolution.xy;
vec4 red = vec4(1, 0, 0, 1);
vec4 green = vec4(0, 1, 0, 1);
float distanceFromCenter = distance(uv, vec2(0.5));
fragColor = mix(red, green, step(0.25, distanceFromCenter));
}
which produces
The second most common input is iTime, in seconds, which you can use to animate parameters in your function over time.
So, given those inputs, if you apply enough math you can make an incredible image; for example, this shadertoy shader generates this image.
It is amazing that someone figured out the math needed to generate that image given only the inputs mentioned above.
Many of the most amazing shadertoy shaders use a technique called ray marching and some math called "signed distance fields" which you can read about here
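As a tiny taste of the idea (my own toy example, not taken from that article), a signed distance field is just a function that returns how far a point is from a shape, negative inside and positive outside:
float sdCircle(vec2 p, vec2 center, float radius) {
    return distance(p, center) - radius;   // negative inside, positive outside
}

void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    vec2 uv = fragCoord / iResolution.xy;
    float d = sdCircle(uv, vec2(0.5), 0.25);
    // green inside the circle, red outside; ray marchers use d far more cleverly
    fragColor = mix(vec4(0, 1, 0, 1), vec4(1, 0, 0, 1), step(0.0, d));
}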
But, I think it's important to point out that while there are many cool things to learn from shadertoy shaders, many of them apply only to this "puzzle" of "how do I make a beautiful image with only one function whose input is a pixel position and whose output is a single color". They don't answer "how should I write shaders for a performant app".
Compare the dolphin above to this speedboat game
https://www.youtube.com/watch?v=7v9gZK9HqqI
The dolphin shadertoy shader runs at about 2 frames a second when fullscreen on my NVidia GeForce GT 750, whereas the speedboat game runs at 60fps. The reason the game runs fast is that it uses more traditional techniques, drawing shapes with projected triangles. Even an NVidia 1060 GTX can only run that dolphin shader at about 10 frames a second when fullscreen.
It's just basically pushing GLSL pixel shader source code directly onto the graphics card. The real magic happens in the incredibly clever algorithms that people use to create amazing effects, like ray marching, ray casting, and ray tracing. It's best to have a look at some other live GLSL sandboxes like http://glsl.heroku.com/ and http://webglplayground.net/.
It's basically creating a window, typically two triangles which represent the screen; then the shader works on each pixel just like a ray tracer.
I've been looking at these for a while now, and the algorithms people use are mind blowing; you'll need some serious math chops and to look up "demo coding" source code to be able to wrap your head around them. Many on shadertoy just blow your mind!
So to summarise, you just need to learn GLSL shader coding and algorithms. No easy solution.
Traditionally in computer graphics, geometry is created using vertices and rendered using some form of materials (e.g. textures with lighting). In GLSL, the vertex shader processes the vertices and the fragment (pixel) shader processes the materials.
But that is not the only way to define shapes. Just as a texture could be procedurally defined (instead of looking up its texels), a shape could be procedurally defined (instead of looking up its geometry).
So, similar to ray tracing, these fragment shaders are able to create shapes without having their geometry defined by vertices.
There's still more ways to define shapes. E.g. volume data (voxels), surface curves, and so on. A computer graphics text should cover some of them.