I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.
From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc.). Once that's done, the user then has to provide at least one vertex shader and one fragment shader before an OpenGL program can be compiled and linked.
However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.
How does that work?
My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...
Can anyone explain how Shadertoy works?
ShaderToy is a tool for writing pixel shaders.
What are pixel shaders?
If you render a full-screen quad, meaning that each of four points is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that each fragment now corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a full-screen quad.
So the attributes are always the same, and so is the vertex shader:
positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0] ]
And that quad is rendered as TRIANGLE_STRIP.
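For reference, a minimal WebGL setup for such a quad might look like this (a sketch only; the gl, program and aPosLocation variables are assumed to exist, UVs are uploaded the same way, and error handling is omitted):
// four corner positions of a fullscreen quad, matching the order above
var quad = new Float32Array([
  -1,  1,   // top left
   1,  1,   // top right
  -1, -1,   // bottom left
   1, -1    // bottom right
]);

var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, quad, gl.STATIC_DRAW);

// aPosLocation = gl.getAttribLocation(program, 'aPos'), from the vertex shader below
gl.enableVertexAttribArray(aPosLocation);
gl.vertexAttribPointer(aPosLocation, 2, gl.FLOAT, false, 0, 0);

// two triangles covering the whole viewport
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);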
Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
Vertex shader:
attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;
void main() {
gl_Position = vec4(aPos, 0.0, 1.0);
vUV = aUV;
}
And the fragment shader would then look something like this:
precision mediump float;

uniform vec2 uScreenResolution;
varying vec2 vUV;
void main() {
// vUV is equal to gl_FragCoord/uScreenResolution
// do some pixel shader related work
gl_FragColor = vec4(someColor, 1.0); // gl_FragColor is a vec4
}
ShaderToy supplies a few uniforms by default, such as iResolution (akin to uScreenResolution above), iGlobalTime, iMouse, and so on, which you can use in your pixel shader.
To code geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing (often in the form of ray marching). That is quite a complex area of programming, but in short:
You describe your geometry with mathematical formulas, and later, in the pixel shader, when you want to check whether a pixel belongs to that geometry, you evaluate those formulas. Googling a bit should give you plenty of resources on what ray tracers are and how exactly they are built, and this might help:
How to do ray tracing in modern OpenGL?
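For a concrete taste of how geometry can live entirely inside a fragment shader, here is a minimal Shadertoy-style sketch of my own (not taken from any particular example) that ray-marches a single sphere defined purely by a distance formula:
float sphereDist(vec3 p) {
    // signed distance to a sphere of radius 1, centred 3 units in front of the camera
    return length(p - vec3(0.0, 0.0, 3.0)) - 1.0;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    // centre the coordinates and keep the aspect ratio
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
    vec3 rayOrigin = vec3(0.0);
    vec3 rayDir = normalize(vec3(uv, 1.0));

    float t = 0.0;
    bool hit = false;
    for (int i = 0; i < 64; i++) {
        float d = sphereDist(rayOrigin + rayDir * t);
        if (d < 0.001) { hit = true; break; } // close enough: the ray hit the surface
        t += d;                               // safe to step forward by the distance
        if (t > 20.0) break;                  // the ray left the scene
    }
    fragColor = hit ? vec4(1.0, 0.5, 0.2, 1.0) : vec4(0.0, 0.0, 0.0, 1.0);
}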
Hope this helps.
ShaderToy runs simple GLSL that is programmed to handle all the lighting, geometry, etc. itself; there is no vertex geometry. Most of the 3D work is done by ray casting, although you can also write purely 2D shaders, etc.
Any colour and spatial maths can be programmed in GLSL. Combinations of advanced algorithms build isosurfaces and shapes, project textures onto those isosurfaces, and do ray casting: sending imaginary rays from the viewer into the distance and testing what they intersect along the way. There are many ray-casting techniques for 3D.
Visit www.iquilezles.org for an idea of the different techniques that are used in shadertoy/GLSL graphics.
Shadertoy is called "TOY" for a reason. It's basically a puzzle: given only a function that receives the current pixel position as input, write that function so it generates an image.
The website sets up WebGL to draw a single quad (rectangle) and then lets you provide a function that is passed the current pixel being rendered as fragCoord. You then use that to compute some colors.
For example if you did this
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec4 red = vec4(1, 0, 0, 1);
vec4 green = vec4(0, 1, 0, 1);
fragColor = mix(red, green, sin(fragCoord.x / 10.0) * 0.5 + 0.5);
}
you'd get red and green stripes like this
https://www.shadertoy.com/view/3l2cRz
Shadertoy provides a few other inputs. The most common is the resolution being rendered to as iResolution. If you divide the fragCoord by iResolution then you get values that go from 0 to 1 across and 0 to 1 down the canvas so you can easily make your function resolution independent.
Doing that we can draw an ellipse in the center like this
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
// uv goes from 0 to 1 across and up the canvas
vec2 uv = fragCoord / iResolution.xy;
vec4 red = vec4(1, 0, 0, 1);
vec4 green = vec4(0, 1, 0, 1);
float distanceFromCenter = distance(uv, vec2(0.5));
fragColor = mix(red, green, step(0.25, distanceFromCenter));
}
which produces
The second most common input is iTime, in seconds, which you can use to animate parameters in your function over time.
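For example, this small variation of the ellipse shader above (my own sketch, not one of the linked examples) uses iTime to slide the shape back and forth:
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
  vec2 uv = fragCoord / iResolution.xy;
  vec4 red = vec4(1, 0, 0, 1);
  vec4 green = vec4(0, 1, 0, 1);
  // move the center left and right over time
  vec2 center = vec2(0.5 + 0.25 * sin(iTime), 0.5);
  float distanceFromCenter = distance(uv, center);
  fragColor = mix(red, green, step(0.25, distanceFromCenter));
}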
So, given those inputs, if you apply enough math you can make an incredible image. For example, this shadertoy shader generates this image.
It's amazing that someone figured out the math needed to generate that image given only the inputs mentioned above.
Many of the most amazing shadertoy shaders use a technique called ray marching and some math called "signed distance fields" which you can read about here
But, I think it's important to point out that while there are many cool things to learn from shadertoy shaders, many of them apply only to this "puzzle" of "how do I make a beautiful image with only one function whose input is a pixel position and whose output is a single color". They don't answer "how should I write shaders for a performant app".
Compare the dolphin above to this speedboat game
https://www.youtube.com/watch?v=7v9gZK9HqqI
The dolphin shadertoy shader runs at about 2 frames a second when fullscreen on my NVidia GeForce GT 750, whereas the speedboat game runs at 60fps. The reason the game runs fast is that it uses more traditional techniques, drawing shapes with projected triangles. Even an NVidia 1060 GTX can only run that dolphin shader at about 10 frames a second when fullscreen.
It's just basically pushing GLSL pixel shader source code directly onto the graphics card. The real magic happens in the incredibly clever algorithms that people use to create amazing effects, like ray marching, ray casting, and ray tracing. It's best to have a look at some other live GLSL sandboxes like: http://glsl.heroku.com/ and http://webglplayground.net/.
It's basically creating a window, typically two triangles which represent the screen, and then the shader works on each pixel just like a ray tracer.
I've been looking at these for a while now, and the algorithms people use are mind-blowing. You'll need some serious math chops and to look up "demo coding" source code to be able to wrap your head around them. Many on Shadertoy just blow your mind!
So to summarise, you just need to learn GLSL shader coding and algorithms. No easy solution.
Traditionally in computer graphics, geometry is created using vertices and rendered using some form of materials (e.g. textures with lighting). In GLSL, the vertex shader processes the vertices and the fragment (pixel) shader processes the materials.
But that is not the only way to define shapes. Just as a texture could be procedurally defined (instead of looking up its texels), a shape could be procedurally defined (instead of looking up its geometry).
So, similar to ray tracing, these fragment shaders are able to create shapes without having their geometry defined by vertices.
There are still more ways to define shapes, e.g. volume data (voxels), surface curves, and so on. A computer graphics text should cover some of them.
Related
I have been trying to learn THREEJS shader materials. So far I understand how uniforms, vertexShader, and fragmentShader play a role in projecting and coloring the vertices and fragments in the world of glsl and webgl. I have been trying to find some good examples where ShaderMaterial of THREEJS is extended using the THREE.ShaderLib.
Suppose I want to extend a standard threejs material (THREE.ShaderLib['standard']) to write the envmap texture, is that possible? Or is it absolutely necessary that I write everything from scratch?
Shaders are just strings, it's up to you what you do with them and how you obtain them. With that being said, three.js has numerous tools to help you build them better.
At the highest level, there is an abstraction in the form of THREE.Material, where you describe abstract properties and three.js configures the shader under the hood.
//no shaders involved
var redPlastic = new THREE.MeshStandardMaterial({
color: 'red',
roughness: 0.1 // not very rough, i.e. fairly shiny
})
ShaderMaterial expects you to write raw GLSL, but still includes some things that you would otherwise have to do manually. So "writing from scratch" is not entirely correct. With RawShaderMaterial you would write everything from scratch. THREE.ShaderMaterial:
varying vec2 vUv;
void main(){
vUv = uv; // <- magic! where does uv come from?
vec4 worldPosition = modelMatrix * vec4( position, 1.); // <- more magic! where do modelMatrix and position come from?
gl_Position = projectionMatrix * viewMatrix * worldPosition; // more!
}
At runtime, when three.js is included on a page, the THREE namespace has a THREE.ShaderChunk dictionary. It contains various named snippets of GLSL that all the materials are built out of.
You can copy these snippets from their source files, and paste them in your own shader file, or string.
You can write it with a string template:
`${THREE.ShaderChunk.some_chunk}
void main(){
...
${anotherChunk}
gl_Position = ...
}
`
But if you want to extend the built-in materials, there is an (IMHO buggy and wonky :)) feature of the material called onBeforeCompile. With this, you pass a callback to any of the built-in materials and get the shader object before compilation. Here you can inject your own GLSL, swap out chunks, or do anything else you can think of doing to a string.
In order to use this, one needs to be familiar with the structure of the shaders, which are built from the chunks in:
https://github.com/mrdoob/three.js/tree/dev/src/renderers/shaders/ShaderChunk
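As a rough sketch of the pattern (assuming a reasonably recent three.js; the chunk being patched and the tint uniform are only illustrative):
var material = new THREE.MeshStandardMaterial({ color: 0xffffff });

material.onBeforeCompile = function (shader) {
  // add our own uniform to the generated shader
  shader.uniforms.uTint = { value: new THREE.Color('hotpink') };

  // splice extra GLSL into the built-in fragment shader string
  shader.fragmentShader = 'uniform vec3 uTint;\n' + shader.fragmentShader;
  shader.fragmentShader = shader.fragmentShader.replace(
    '#include <color_fragment>',
    '#include <color_fragment>\n diffuseColor.rgb *= uTint;'
  );
};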
I want to be able to append multiple light sources to each node of my scene graph, but I have no clue how to do it!
From the tutorials on learningwebgl.com I learned to either use directional or position lighting, but I couldn't find a good explanation of how to implement multiple light sources.
So the objective is to have the option to append an arbitrary number of light sources to each node, each of which can be either directional or positional, and, if possible and advisable, to realize this with only one shader program (if that isn't the only way to do it anyway), because I automatically create the program for each node depending on its specific needs (unless there is already a program on the stack with equal settings).
Based on the tutorials on learningwebgl.com, my fragment-shader source for a node object using lighting without preset binding to one of the lighting-types could look like this...
precision highp float;
uniform bool uUsePositionLighting;
uniform bool uUseDirectionalLighting;
uniform vec3 uLightPosition;
uniform vec3 uLightDirection;
uniform vec3 uAmbientColor;
uniform vec3 uDirectionalColor;
uniform float uAlpha;
varying vec4 vPosition;
varying vec3 vTransformedNormal;
varying vec3 vColor;
void main (void) {
float directionalLightWeighting;
if (uUseDirectionalLighting) {
directionalLightWeighting = max(dot(vTransformedNormal, uLightDirection), 0.0);
} else if (uUsePositionLighting) {
vec3 lightDirection = normalize(uLightPosition - vPosition.xyz);
directionalLightWeighting = max(dot(normalize(vTransformedNormal), lightDirection), 0.0);
}
vec3 lightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
gl_FragColor = vec4(vColor * lightWeighting, uAlpha);
}
...so, that's basically my poor state of knowledge concerning this subject.
I also wonder how adding more light sources would affect the lighting colors:
I mean, do uAmbientColor and uDirectionalColor have to sum to 1.0? In that case (and particularly when using more than one light source) it would surely be good to precalculate these values before passing them to the shader, wouldn't it?
Put your lights into an array and loop over them for each fragment. Start with a fixed-size array of light sources; unbounded arrays are not supported until OpenGL 4.3 and are more complicated to work with.
Something along the lines of:
uniform vec3 uLightPosition[16];
uniform vec3 uLightColor[16];
uniform vec3 uLightDirection[16];
uniform bool uLightIsDirectional[16];
....
void main(void) {
vec3 reflectedLightColor = vec3(0.0);
// Calculate incoming light for all light sources
for (int i = 0; i < 16; i++) {
vec3 lightDirection = normalize(uLightPosition[i] - vPosition.xyz);
if (uLightIsDirectional[i]) {
reflectedLightColor += max(dot(vTransformedNormal, uLightDirection[i]), 0.0) * uLightColor[i];
}
else {
reflectedLightColor += max(dot(normalize(vTransformedNormal), lightDirection), 0.0) * uLightColor[i];
}
}
gl_FragColor = vec4(uAmbientColor + reflectedLightColor * vColor, uAlpha);
}
Then you can enable/disable the light sources by setting uLightColor to (0,0,0) for the entries you don't use.
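On the JavaScript side, a sketch of feeding those arrays (assuming WebGL1 and that gl and program already exist; the variable names are illustrative):
// 16 lights x 3 components; all zeros means every light starts "off"
var lightColors = new Float32Array(16 * 3);
lightColors.set([1.0, 1.0, 1.0], 0);   // turn light 0 on as a white light

var lightColorLoc = gl.getUniformLocation(program, 'uLightColor[0]');
gl.uniform3fv(lightColorLoc, lightColors);

// uLightPosition and uLightDirection are uploaded the same way with gl.uniform3fv,
// and uLightIsDirectional with gl.uniform1iv.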
Ambient and directional don't have to sum to 1. In fact, a light source can be much stronger than 1.0, but then you will need to do tone mapping to get back to a range of values that can be displayed on a screen. I would suggest playing around to get a feel for what is happening (e.g. what happens when a light source has negative colors, or colors above 1.0?).
uAmbientColor is just a (poor) way to simulate light that has bounced several times in the scene; otherwise things in shadow become completely black, which looks unrealistic.
Reflectance should typically be between 0 and 1 (in this example it is the part returned by the 'max' computations); otherwise a material would reflect more light than it receives.
@ErikMan's answer is great, but may involve a lot of extra work on the part of the GPU since you're checking every light per fragment, which isn't strictly necessary.
Rather than an array, I'd suggest building a clip-space quadtree. (You can do this in a compute shader if it's supported by your target platform / GL version.)
A node might have a structure such as (pseudocode as my JS is rusty):
typedef struct
{
uint LightMask; /// bitmask - each light has a bit indicating whether it is active for this node. uint will allow for 32 lights.
bool IsLeaf;
} Node;
const uint maxLevels = 4;
const uint maxLeafCount = pow(4, maxLevels);
const uint maxNodeCount = (4 * maxLeafCount - 1) / 3;
/// linear quadtree - node offset = 4 * parentIndex + childIndex;
Node tree[maxNodeCount];
When building the tree, just check each light's clip-space bounding box against the implicit node bounds. (Root goes from (-1,-1) to (+1,+1). Each child is half that size on each dimension. So, you don't really need to store node bounds.)
If the light touches the node, set a bit in Node.LightMask corresponding to the light. If the light completely contains the node, stop recursing. If it intersects the node, subdivide and continue.
In your fragment shader, find which leaf node contains your fragment, and apply all lights whose bit is set in the leaf node's mask.
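A sketch of that per-fragment lookup (assuming WebGL2 / GLSL ES 3.0 for the integer and bitwise operations; the uniform names and the flat 8x8 leaf layout are illustrative, and in practice you would likely store the masks in an integer texture rather than a uniform array):
uniform uint uLeafMask[64];         // one 32-bit light mask per leaf (8x8 leaves here)
uniform vec2 uResolution;
uniform vec3 uLightDirection[32];
uniform vec3 uLightColor[32];

vec3 accumulateLights(vec3 normal) {
    // find the leaf tile this fragment falls into
    ivec2 tile = ivec2(floor(gl_FragCoord.xy / uResolution * 8.0));
    uint mask = uLeafMask[tile.y * 8 + tile.x];

    vec3 result = vec3(0.0);
    for (int i = 0; i < 32; i++) {
        if ((mask & (1u << uint(i))) != 0u) {
            // light i touches this tile; shade it just like in the array-based answer above
            result += max(dot(normal, uLightDirection[i]), 0.0) * uLightColor[i];
        }
    }
    return result;
}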
You could also store your tree in a mipmap pyramid if you expect it to be dense.
Keep your tiles to a size that is a multiple of 32, preferably square.
vec2 minNodeSize = vec2(2.0 / 32.0);
Now, if you have a small number of lights, this may be overkill. You would probably have to have a lot of lights to see any real performance benefit. Also, a normal loop may help reduce data divergence in your shader, and makes it easier to eliminate branching.
This is one way to implement a simple tiled renderer, and opens the door to having hundreds of lights.
I'm trying to learn how to take advantage of GPU possibilities for three.js and WebGL, so I'm just analysing code to pick up patterns and methods for how things are done, and I need some of the code explained.
I found this example: One million particles, which seems to be the easiest one involving calculations made in shaders and spit back out.
So from what I have figured out:
- Data for the velocity and position of each particle is kept in textures that are passed to the shaders, which do the calculations there and hand the results back for the update.
- Particles are created randomly on the plane, no more than the texture size?
for (var i = 0; i < 1000000; i++) {
particles.vertices.push(new THREE.Vector3((i % texSize) / texSize, Math.floor(i / texSize) / texSize, 0));
}
I don't see any particle position updates. How is the data from the shaders retrieved, and how does it update each particle?
Does pick() only pass the mouse position, which is used to calculate the direction of the particles' movement?
Why are there 2 buffers and 8 shaders (4 pairs of fragment and vertex shaders)? Isn't the one for calculating velocity and position enough?
How does the shader update the texture? I only see reads from it, not writes to it.
Thanks in advance for any explanations!
How the heck have they done that:
In this post, I'll explain how these results get computed almost entirely on the GPU via WebGL/Three.js; it might look a bit sloppy as I'm using the integrated graphics of an Intel i7 4770K:
Introduction:
The simple idea to keep everything on the GPU: each particle's state is represented by one texture pixel's color value. One million particles fit into 1024x1024-pixel textures, one to hold the current positions and another to hold the velocities of those particles.
Nothing forbids you from abusing the RGB color values of a texture for completely different data in the 0...255 range. You basically have 32 bits (R + G + B + alpha) per texture pixel for whatever you want to store in GPU memory. (One might even use multiple texture pixels per particle/object if more data needs to be stored.)
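As an illustration of the idea (my own sketch, not the demo's code; it uses a float texture via three.js DataTexture, whereas the demo packs values into ordinary 8-bit channels):
// one RGBA texel per particle; 1024 x 1024 texels ~= one million particles
var texSize = 1024;
var data = new Float32Array(texSize * texSize * 4);
for (var i = 0; i < texSize * texSize; i++) {
  data[i * 4 + 0] = Math.random(); // x position, 0..1
  data[i * 4 + 1] = Math.random(); // y position, 0..1
  data[i * 4 + 2] = 0.0;           // unused
  data[i * 4 + 3] = 1.0;           // free slot for extra per-particle data
}
var posTexture = new THREE.DataTexture(data, texSize, texSize, THREE.RGBAFormat, THREE.FloatType);
posTexture.needsUpdate = true;

(Float textures need the OES_texture_float extension in WebGL1.)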
They basically used multiple shaders in a sequential order. From the source code, one can identify these steps of their processing pipeline:
Randomize particles (ignored in this answer) ('randShader')
Determine each particles velocity by its distance to mouse location ('velShader')
Based on velocity, move each particle accordingly ('posShader')
Display the result on screen ('dispShader')
Step 2: Determining Velocity per particle:
They issue a draw call on 1 million points whose output will be saved as a texture. In the vertex shader, each vertex gets a varying vec2 named "vUv", which basically determines the x and y pixel position inside the textures used in the process.
The next step is its fragment shader, as only this shader can produce output (RGB values written into the framebuffer, which is then used as a texture afterwards; all of this happens inside GPU memory only). You can see that the id="velFrag" fragment shader gets an input variable declared as uniform vec3 targetPos;. Such uniforms are set cheaply from the CPU each frame, because they are shared among all instances and don't involve large memory transfers. (It contains the mouse coordinate, probably in the -1.0 to +1.0 range; they probably also update the mouse coordinates only once every few frames, to lower CPU usage.)
What's going on here? Well, that shader calculates the distance of each particle to the mouse coordinate and, depending on that, alters the particle's velocity; the velocity also holds information about the particle's flight direction. Note: this velocity step also makes particles gain momentum and keep flying past/overshooting the mouse position, depending on the stored gray value.
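The gist of that velocity pass, as a sketch (the uniform and texture names are illustrative, not the demo's actual source, and positions and the mouse are assumed to be in the same coordinate space):
precision highp float;
uniform sampler2D posTex;    // current particle positions
uniform sampler2D velTex;    // current particle velocities
uniform vec3 targetPos;      // mouse position fed in from the CPU
varying vec2 vUv;            // this particle's texel coordinate

void main() {
    vec2 pos = texture2D(posTex, vUv).rg;
    vec2 vel = texture2D(velTex, vUv).rg - 0.5;   // decode 0..1 into a signed range
    vec2 toMouse = targetPos.xy - pos;            // pull each particle toward the mouse
    vec2 newVel = vel * 0.99 + toMouse * 0.001;   // damping plus attraction
    gl_FragColor = vec4(newVel + 0.5, 0.0, 1.0);  // re-encode back into 0..1
}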
Step 3: Updating positions per particle:
So far each particle has a velocity and a previous position. Those two values get processed into a new position, again output as a texture, this time into the positionTexture. Until the whole frame has been rendered (into the default framebuffer) and marked as the new texture, the old positionTexture remains unchanged and can be read with ease:
In the id="posFrag" fragment shader, they read from both textures (posTexture and velTexture) and process this data into a new position. They output the x and y position coordinates into the colors of that texture (as red and green values).
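A sketch of what that position pass boils down to (again, the names are illustrative rather than the demo's actual source):
precision highp float;
uniform sampler2D posTex;    // previous positions
uniform sampler2D velTex;    // velocities computed in the previous pass
varying vec2 vUv;

void main() {
    vec2 pos = texture2D(posTex, vUv).rg;             // old position in red/green
    vec2 vel = texture2D(velTex, vUv).rg - 0.5;       // decode the signed velocity
    gl_FragColor = vec4(pos + vel * 0.01, 0.0, 1.0);  // write the new position
}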
Step 4: Prime time (=output)
To output the results, they probably again drew a million points/vertices and gave the positionTexture as an input. The vertex shader then sets the position of each point by reading the texture's RGB value at location x, y (passed in as vertex attributes).
// From <script type="x-shader/x-vertex" id="dispVert">
vec3 mvPosition = texture2D(posTex, vec2(x, y)).rgb;
gl_PointSize = 1.0;
gl_Position = projectionMatrix * modelViewMatrix * vec4(mvPosition,1.0);
In the display fragment shader, they only need to set a color (note the low alpha, which means roughly 20 particles have to stack up to fully light a pixel).
// From <script type="x-shader/x-fragment" id="dispFrag">
gl_FragColor = vec4(vec3(0.5, 1.0, 0.1), 0.05);
I hope this made it clear how this little demo works :-) I am not the author of that demo, though. I just noticed this answer became a super detailed one; skim the highlighted keywords to get the short version.
Is there a way to create a Three.js 3D line series with width and thickness?
Even though the Three.js line object supports linewidth, this attribute is not yet supported in all browsers on all platforms in WebGL.
Here's where you set linewidth in Three.js:
var material = new THREE.LineBasicMaterial({
color: 0xff0000,
linewidth: 5
});
The Three.js ribbon object - which had width - has recently been dropped.
The Three.js tube object generates 3D extrusions but - being Bezier-based - the lines do not pass through the control points.
Can anybody think of a method of drawing a line series (polylines, plotlines) in Three.js that has some sort of user definable 'bulk' such as width, thickness or radius?
This question may be a restating of this question:
Extruding a graph in three.js.
Given that I do not think that there is a readily available method, I would be happy to participate in an effort to create a simple function that responds to this question.
But a response that points to an existing workable method would be cool...
As WestLangley suggests, one possible solution includes the polyline being of constant pixel width - as is currently available with the Three.js canvas renderer.
A comparison of the two renderers is shown here:
Canvas and WebGL Lines Compared via GitHub Pages
Canvas and WebGL Lines Compared via jsFiddle
A solution where you could specify linewidth and similar results occurred on both renderers would be very cool.
There are, however, other ways of thinking of 3D lines where lines have actual physical constructs. They cast shadows, they respond to events. These also need to be looked into.
Here are links to GitHub Pages with two demos of lines made up of multiple meshes:
Sphere and Cylinder Polylines
An 'expensive' solution: each joint is made up of a full sphere.
Cubes Polylines
My guess is that building either of these as smooth single meshes will be a complex problem to solve. So in the meantime, here is a link to a partial visualization of 3D lines that are wide and have height:
3D Box Line on jsFiddle
The goal is to have code 'with a low level of complexity - in other words - for dummies'. Thus a 3D line should be as easy and as familiar to add as a sphere or cube: geometry + material = mesh > scene. And the geometry should be quite economical in terms of creating vertices and faces.
The lines should have width and height. Up is always in the Y direction. The demo shows this. What the demo does not show is corners being mitred nicely...
I cooked up a possible solution which I believe meets most of your requirements:
http://codepen.io/garciahurtado/pen/AGEsf?editors=001
The concept is fairly simple: render any arbitrary geometry in "wireframe mode", then apply a full screen GLSL shader to it to add thickness to the wireframe lines.
The shader is inspired by the blur shaders in the ThreeJS distro, which essentially copy the image a bunch of times along the horizontal and vertical axis. I automated that process and made the number of copies a user defined parameter, while ensuring that the copies were offset by 1 pixel.
I used a 3D cube mesh in my demo (with an ortho camera), but it should be trivial to convert it to a poly line.
The real meat and potatoes of this thing is in the custom shader (fragment shader portion):
uniform sampler2D tDiffuse;
uniform int edgeWidth;
uniform int diagOffset;
uniform float totalWidth;
uniform float totalHeight;
const int MAX_LINE_WIDTH = 30; // Needed due to weird limitations in GLSL around for loops
varying vec2 vUv;
void main() {
int offset = int( floor(float(edgeWidth) / float(2) + 0.5) );
vec4 color = vec4( 0.0, 0.0, 0.0, 0.0);
// Horizontal copies of the wireframe first
for (int i = 0; i < MAX_LINE_WIDTH; i++) {
float uvFactor = (float(1) / totalWidth);
float newUvX = vUv.x + float(i - offset) * uvFactor;
float newUvY = vUv.y + (float(i - offset) * float(diagOffset) ) * uvFactor; // only modifies vUv.y if diagOffset > 0
color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));
// GLSL does not allow loop comparisons against dynamic variables. Workaround below
if(i == edgeWidth) break;
}
// Now we create the vertical copies
for (int i = 0; i < MAX_LINE_WIDTH; i++) {
float uvFactor = (float(1) / totalHeight);
float newUvX = vUv.x + (float(i - offset) * float(-diagOffset) ) * uvFactor; // only modifies vUv.x if diagOffset > 0
float newUvY = vUv.y + float(i - offset) * uvFactor;
color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));
if(i == edgeWidth) break;
}
gl_FragColor = color;
}
Pros:
No need for additional geometry beyond the line vertices
Line thickness is user definable
A full screen shader should be relatively gentle on the GPU
Can be implemented fully within the WebGL canvas
Cons:
Line thickness is close to pixel perfect on horizontal and vertical edges, but slightly off on diagonal edges. This is due to the algorithm used and is a limitation of the solution. Having said that, for low line thickness and complex geometries, this is barely noticeable with the naked eye.
The joints between lines will show gaps for large enough line thickness. You can play with the Codepen demo to see what I mean. I started to implement a solution to this by adding a second "diagonal pass", but it got a little hairy and I think this would only be an issue for higher line thicknesses (+8 pixels) or extreme line angles. If you are interested in this solution, you can look at the original source to see where I was going with it.
Since this uses a full screen filter, you can only use the WebGL context for displaying objects of this thickness. Showing various line widths would require additional rendering passes.
As a potential solution, you could take your 3D points, then use the THREE.Vector3.project method to figure out screen-space coordinates, then simply use a canvas and its lineTo and moveTo operations. The canvas 2D context does support variable line thickness.
var w = renderer.domElement.clientWidth;
var h = renderer.domElement.clientHeight;
vector.project(camera);
context2d.lineWidth = 3;
var x = (vector.x+1)*(w/2);
var y = h - (vector.y+1)*(h/2);
context2d.lineTo(x,y);
Also, I don't think you can use the same canvas for that, so it would have to be a layer (another canvas) above your GL rendering context canvas.
If you have infrequent camera changes, it is also possible to construct the line out of polygons and update its vertex positions based on the camera transform. For an orthographic camera this would work best, as only rotations would require vertex position manipulation.
Lastly, you could disable canvas clearing and draw your lines several times with offset inside a circle or a box. After that you can re-enable clearing. This would require several extra draw operations, but it's probably the most scalable approach.
The reason lines don't work as you'd expect out of the box is due to how ANGLE works; it's used in Chrome and Firefox to my knowledge, and it emulates OpenGL via DirectX. The ANGLE developers state that the WebGL spec only requires support for line thickness up to 1, so they do not see it as a bug and don't intend to "fix" it. Line thickness should work on non-Windows OSes though, where ANGLE is not used.
I would like to cut an object (a box) in WebGL (fragment shaders / vertex shaders) without using Boolean operations (union, difference, etc..).
I want to use shaders to hide some part of the object (so it is therefore not a "real" cut, since it simply hides part of the object).
First, make sure that the vertex shader passes through to the fragment shader the position in world space (or rather, whichever coordinate space you wish the clipping to be fixed relative to). Example (written from memory, not tested):
varying vec3 positionForClip;
...
void main(void) {
...
vec4 worldPos = modelMatrix * vertexPosition;
positionForClip = worldPos.xyz / worldPos.w; // don't need homogeneous coordinates, so do the divide early
gl_Position = projectionMatrix * viewMatrix * worldPos;
}
And in your fragment shader, you can then discard based on an arbitrary plane, or any other kind of test you want:
varying vec3 positionForClip;
uniform vec3 planeNormal;
uniform float planeDistance;
...
void main(void) {
if (dot(positionForClip, planeNormal) > planeDistance) {
// or if (positionForClip.x > 10.0), or whatever
discard;
}
...
gl_FragColor = ...;
}
Note that using discard may cause a performance reduction as the GPU cannot optimize based on knowing that all fragments will be written.
Disclaimer: I haven't researched this myself, and only just wrote down a possible way to do it based on the 'obvious solution'. There may be better ways I haven't heard of.
Regarding your question about multiple objects: There are many different ways to handle this — it's all custom code in the end. But you certainly can use a different shader for different objects in your scene, as long as they're in different vertex arrays.
gl.useProgram(programWhichCuts);
gl.drawArrays();
gl.useProgram(programWhichDoesNotCut);
gl.drawArrays();
If you're new to using multiple programs, it's pretty much just like using one program except that you do all the setup (compile, attach, link) once. The main thing to watch out for is each program has its own uniforms, so you have to initialize your uniforms for each program separately.
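As a sketch of what that looks like in practice (the uniform names match the clipping shader above, while the vertex counts and the second program's setup are hypothetical):
gl.useProgram(programWhichCuts);
// each program has its own uniform state, so set this program's uniforms after switching to it
gl.uniform3f(gl.getUniformLocation(programWhichCuts, 'planeNormal'), 0.0, 1.0, 0.0); // clip against a horizontal plane
gl.uniform1f(gl.getUniformLocation(programWhichCuts, 'planeDistance'), 10.0);
gl.drawArrays(gl.TRIANGLES, 0, cutObjectVertexCount);

gl.useProgram(programWhichDoesNotCut);
// ...set this program's own uniforms here, then draw as usual
gl.drawArrays(gl.TRIANGLES, 0, otherObjectVertexCount);

In real code you would look the uniform locations up once after linking and cache them, rather than querying them every frame.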