WebGL not displaying content - javascript

I mainly develop in C++ and decided to compile one of my projects to WebAssembly and build a website on top of it. Since I have written a 3D engine in C++ before, I decided to use WebGL on the website and translated my shader template class. I have reduced the problem to a two-dimensional one.
Task
First, I will describe what I am trying to do.
I want to render a 2D FEM grid which consists of FEM elements, each of which can be any type of polygon. The nodes of those polygons carry values which I am trying to display. I have already written code to break the polygons down into triangles. Initially I am just trying to render two-dimensional triangles with their node values interpolated.
Shader Code
I wrote a template shader class for WebGL which handles the construction and compilation of the shaders; it was ported from my C++ 3D engine. Since the code itself is somewhat long, I am not going to post it here but will instead show the list of executed WebGL commands below.
The shaders I am currently using to debug my problem are the following:
Vertex-Shader:
precision mediump float;
attribute vec2 position;
varying vec2 fragPos;
void main(){
    gl_Position = vec4(position, 0, 1.0);
    fragPos = position;
}
---------------------------------------------------------------
Fragment-Shader:
precision mediump float;
varying vec2 fragPos;
uniform sampler2D elementValues;
uniform int elementValuesDimension;
void main(){
    gl_FragColor = vec4(0.8, 0.2, 0.0, 1.0);
}
As you can see, I am not doing any interpolation in this debug case but just trying to output a red-ish color from my fragment shader.
Executed Commands
I went ahead and used webgl-debug.js to log all the operations performed. In this case, you can see the vertices and indices describing a simple quad spanning [0,0.6]x[0,0.6], which should be well within clip space.
This can be broken into a few parts:
Enable uint32 indexing
gl.getExtension(OES_element_index_uint)
Create two buffers for vertex coordinates and indices
gl.createBuffer()
gl.createBuffer()
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, [object WebGLBuffer])
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, 0,1,2,0,2,3, gl.STATIC_DRAW)
gl.bindBuffer(gl.ARRAY_BUFFER, [object WebGLBuffer])
gl.bufferData(gl.ARRAY_BUFFER, 0,0,0,0.6000000238418579,0.6000000238418579,0.6000000238418579,0.6000000238418579,0, gl.STATIC_DRAW)
Create and compile the shaders
gl.createShader(gl.VERTEX_SHADER)
gl.shaderSource(vertex_shader_src);
gl.compileShader([object WebGLShader]);
gl.getShaderParameter([object WebGLShader], gl.COMPILE_STATUS);
gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragment_shader_src);
gl.compileShader([object WebGLShader]);
gl.getShaderParameter([object WebGLShader], gl.COMPILE_STATUS);
gl.createProgram()
gl.attachShader([object WebGLProgram], [object WebGLShader])
gl.attachShader([object WebGLProgram], [object WebGLShader])
gl.linkProgram([object WebGLProgram])
gl.validateProgram([object WebGLProgram])
Get Locations of the uniforms (which are unused) as well as the location of the attribute
gl.getUniformLocation([object WebGLProgram], elementValues)
gl.getUniformLocation([object WebGLProgram], elementValuesDimension)
gl.getAttribLocation([object WebGLProgram], position)
Verify that linking has worked
gl.getProgramParameter([object WebGLProgram], gl.LINK_STATUS)
Main Rendering Loop. Initially clear the displayed stuff
gl.clearColor(0.5, 0.5, 0.5, 1)
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)
gl.disable(gl.CULL_FACE)
gl.viewport(0, 0, 700, 600)
Activate the shader and bind the buffers
gl.useProgram([object WebGLProgram])
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, [object WebGLBuffer])
gl.bindBuffer(gl.ARRAY_BUFFER, [object WebGLBuffer])
Connect the vertices to the first attribute
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0)
Render Call.
gl.drawElements(gl.TRIANGLES, 4, gl.UNSIGNED_INT, 0)
Sadly, when the commands above are executed, I am unable to see anything except a gray screen.
I am very happy for any help!
Greetings
Finn

Thanks to teddybeard and a lot more time spent on this problem than I was hoping, two issues have shown up:
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, 0,1,2,0,2,3, gl.STATIC_DRAW) uploads 6 indices, but the drawElements call only asked for 4 of them. This is obviously a bug, but it should still have rendered at least one of the two triangles.
Secondly, I forgot to enable the vertex attribute array for the position attribute; calling gl.enableVertexAttribArray(1); solved the problem.
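For reference, a minimal sketch of the corrected setup (positionLocation, vertexBuffer, and indexBuffer are placeholders standing in for the location returned by gl.getAttribLocation and the buffers created earlier):
// Enable the attribute array and draw all 6 indices of the quad.
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.enableVertexAttribArray(positionLocation);            // this was missing
gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_INT, 0);    // 6 indices, not 4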

Related

Passing arrays of the wrong length to gl.uniform3fv

It seems that when using WebGL's uniform3fv you MUST pass an array of length exactly 3 if your shader uniform is a vec3, OR an array whose length is a multiple of 3 if your shader uniform is an array of vec3's.
Doing this:
var data = new Float32Array([ 1, 2, 3, 4 ]);
gl.uniform3fv( uniformLocation, data );
when your uniform is declared as:
uniform vec3 some_uniform;
will result in some_uniform getting (0,0,0) value.
I searched the web, SO, MDN, and various forums (including the WebGL specification) and I can't find a requirement for (or mention of) this limitation.
My question is: is this required by WebGL specification (and if yes, can you please point me to it) or is it just some undocumented behaviour you are supposed to know about?
If it is required, we'll change our code to support it as a requirement; if it is an undocumented quirk, we'll change our code to support the quirk with an option to disable/remove that support once the quirk is gone.
From the WebGL spec, section 5.14.10:
If the array passed to any of the vector forms (those ending in v) has an invalid length, an INVALID_VALUE error will be generated. The length is invalid if it is too short for or is not an integer multiple of the assigned type.
Did you check your JavaScript console? When I tried it I clearly saw the INVALID_VALUE error in the console.
The code below
const gl = document.querySelector('canvas').getContext('webgl');
const vs = `
attribute vec4 position;
void main() {
gl_Position = position;
}
`;
const fs = `
precision mediump float;
uniform vec3 color;
void main() {
gl_FragColor = vec4(color, 1);
}
`;
const program = twgl.createProgram(gl, [vs, fs]);
const loc = gl.getUniformLocation(program, 'color');
gl.useProgram(program);
gl.uniform3fv(loc, [1, 2, 3, 4]); // generates INVALID_VALUE
<script src="https://twgljs.org/dist/4.x/twgl.min.js"></script>
<canvas></canvas>
prints an INVALID_VALUE error in the JavaScript console.
In other words, your uniform was not set to 0,0,0. Instead, your gl.uniform3fv call failed to execute because it generated an error, so the uniform was left at whatever value it already had. Uniforms default to 0, so that's where the 0s came from.
If you want to catch this kind of error during debugging, consider using this helper library. Generally I find just looking at the JavaScript console is enough for me to figure out where the issue is, though.
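For completeness, a sketch of a call that does not generate the error, reusing the loc from the snippet above:
gl.uniform3fv(loc, [1, 2, 3]);                     // exactly 3 values for a single vec3
gl.uniform3fv(loc, new Float32Array([1, 2, 3]));   // a length-3 typed array works as well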

How to create custom shaders using THREE.ShaderLib

I have been trying to learn three.js shader materials. So far I understand how uniforms, vertexShader, and fragmentShader play a role in projecting and coloring the vertices and fragments in the world of GLSL and WebGL. I have been trying to find some good examples where the ShaderMaterial of three.js is extended using THREE.ShaderLib.
Suppose I want to extend a standard threejs material (THREE.ShaderLib['standard']) to write the envmap texture, is that possible? Or is it absolutely necessary that I write everything from scratch?
Shaders are just strings; it's up to you what you do with them and how you obtain them. That said, three.js has numerous tools to help you build them.
At the highest level, there is an abstraction in the form of THREE.Material, where you describe abstract properties and three.js configures the shader under the hood.
// no shaders involved
var redPlastic = new THREE.MeshStandardMaterial({
    color: 'red',
    roughness: 0.1 // not very rough
})
ShaderMaterial expects you to write raw GLSL, but it still includes some things that you would otherwise have to do manually, so "writing from scratch" is not entirely accurate. With RawShaderMaterial you really would write everything from scratch. With THREE.ShaderMaterial:
varying vec2 vUv;
void main(){
    vUv = uv; // <- magic! where does uv come from?
    vec4 worldPosition = modelMatrix * vec4( position, 1.); // <- more magic! where do modelMatrix and position come from?
    gl_Position = projectionMatrix * viewMatrix * worldPosition; // more!
}
At runtime, when three is compiled and included on a page, the THREE namespace has THREE.ShaderChunk dictionary. These are various named snippets of GLSL that all the materials are built out of.
You can copy these snippets from their source files, and paste them in your own shader file, or string.
You can write it with a string template:
`${THREE.ShaderChunk.some_chunk}
void main(){
    ...
    ${anotherChunk}
    gl_Position = ...
}
`
But if you want to extend the built-in materials, there is a (IMHO buggy and wonky :)) feature of the material called onBeforeCompile. With this, you pass a callback to any of the built-in materials and get the shader object before compilation. Here you can inject your own GLSL, swap out chunks, or do anything else you can think of doing to a string.
In order to use this one needs to be familiar with the structure of the shaders built from:
https://github.com/mrdoob/three.js/tree/dev/src/renderers/shaders/ShaderChunk
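As a rough, untested sketch of the onBeforeCompile approach (the chunk name and the uMyValue uniform are purely illustrative placeholders, not something three.js prescribes):
var mat = new THREE.MeshStandardMaterial({ color: 'red' });
mat.onBeforeCompile = function (shader) {
    // register a custom uniform so three.js uploads it for us
    shader.uniforms.uMyValue = { value: 0.5 };
    // declare it and inject a line of GLSL after one of the named chunks
    shader.fragmentShader = 'uniform float uMyValue;\n' + shader.fragmentShader;
    shader.fragmentShader = shader.fragmentShader.replace(
        '#include <dithering_fragment>',
        '#include <dithering_fragment>\n    gl_FragColor.rgb *= uMyValue;'
    );
};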

WebGL/GLSL - How does a ShaderToy work?

I've been knocking around Shadertoy - https://www.shadertoy.com/ - recently, in an effort to learn more about OpenGL and GLSL in particular.
From what I understand so far, the OpenGL user first has to prepare all the geometry to be used and configure the OpenGL server (number of lights allowed, texture storage, etc). Once that's done, the user then has to provide at least one vertex shader program, and one fragment shader program before an OpenGL program compiles.
However, when I look at the code samples on Shadertoy, I only ever see one shader program, and most of the geometry used appears to be written directly into the GLSL code.
How does that work?
My guess is that a vertex shader is already prepared upfront, and that the editable/sample shader is only a fragment shader. But then that doesn't explain the geometry in some of the more complex examples...
Can anyone explain how Shadertoy works?
ShaderToy is a tool for writing pixel shaders.
What are pixel shaders?
If you render a full-screen quad, meaning that each of its four points is placed in one of the four corners of the viewport, then the fragment shader for that quad is called a pixel shader, because you could say that each fragment now corresponds to exactly one pixel of the screen. So a pixel shader is a fragment shader for a full-screen quad.
So attributes are always the same and so is a vertex shader:
positions = [ [-1,1], [1,1], [-1,-1], [1,-1] ]
uv = [ [0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0] ]
And that quad is rendered as TRIANGLE_STRIP.
Also, instead of setting UVs explicitly, some prefer to use the fragment shader's built-in variable gl_FragCoord, which is then divided by, for example, a uniform vec2 uScreenResolution.
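For context, the JavaScript side that draws this quad is just as small. A rough sketch, assuming program is the already-compiled pixel-shader program (UV setup omitted, since gl_FragCoord can be used instead):
var positions = new Float32Array([-1, 1, 1, 1, -1, -1, 1, -1]);
var buffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

var aPosLoc = gl.getAttribLocation(program, 'aPos');
gl.enableVertexAttribArray(aPosLoc);
gl.vertexAttribPointer(aPosLoc, 2, gl.FLOAT, false, 0, 0);

gl.useProgram(program);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4); // the full-screen quad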
Vertex shader:
attribute vec2 aPos;
attribute vec2 aUV;
varying vec2 vUV;
void main() {
gl_Position = vec4(aPos, 0.0, 1.0);
vUV = aUV;
}
And fragment shader would then look something like this:
uniform vec2 uScreenResolution;
varying vec2 vUV;
void main() {
// vUV is equal to gl_FragCoord/uScreenResolution
// do some pixel shader related work
gl_FragColor = vec3(someColor);
}
ShaderToy supplies a few uniforms by default, such as iResolution (akin to uScreenResolution), iGlobalTime, and iMouse, which you can use in your pixel shader.
For coding geometry directly into the fragment shader (aka pixel shader), developers use something called ray tracing. That is a quite complex area of programming, but in short:
You describe your geometry through mathematical formulas, and later in the pixel shader, when you want to check whether some pixel is part of your geometry, you use those formulas to retrieve that information. Googling a bit should give you plenty of resources on what ray tracers are and how they are built, and this might help:
How to do ray tracing in modern OpenGL?
Hope this helps.
ShaderToy displays simple GLSL that is programmed to handle all the lighting, geometry, etc. It's not vertex geometry; most of the 3D stuff is ray casting, or you can do 2D shaders, etc.
Any color and spatial maths can be programmed in the GLSL language. Combinations of advanced algorithms create isosurfaces and shapes, and then project textures onto those isosurfaces. Ray casting sends imaginary lines from the viewer into the distance and intersects anything in the way; there are many ray-casting techniques for 3D.
Visit www.iquilezles.org for an idea of the different tools that are used in Shadertoy/GLSL graphics.
Shadertoy is called a "TOY" for a reason. It's basically a puzzle: given only a function whose input is the current pixel position, write a function that generates an image.
The website sets up WebGL to draw a single quad (rectangle) and then lets you provide a function that is passed the current pixel being rendered as fragCoord. You then use that to compute some colors.
For example if you did this
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec4 red = vec4(1, 0, 0, 1);
    vec4 green = vec4(0, 1, 0, 1);
    fragColor = mix(red, green, sin(fragCoord.x / 10.0) * 0.5 + 0.5);
}
you'd get red and green stripes like this
https://www.shadertoy.com/view/3l2cRz
Shadertoy provides a few other inputs. The most common is the resolution being rendered to as iResolution. If you divide the fragCoord by iResolution then you get values that go from 0 to 1 across and 0 to 1 down the canvas so you can easily make your function resolution independent.
Doing that we can draw an ellipse in the center like this
void mainImage(out vec4 fragColor, in vec2 fragCoord)
{
    // uv goes from 0 to 1 across and up the canvas
    vec2 uv = fragCoord / iResolution.xy;
    vec4 red = vec4(1, 0, 0, 1);
    vec4 green = vec4(0, 1, 0, 1);
    float distanceFromCenter = distance(uv, vec2(0.5));
    fragColor = mix(red, green, step(0.25, distanceFromCenter));
}
which produces a red ellipse in the center on a green background.
The second most common input is iTime, in seconds, which you can use to animate parameters in your function over time.
So, given those inputs, if you apply enough math you can make an incredible image; for example, one well-known Shadertoy shader generates an animated dolphin this way.
It is amazing that someone figured out the math needed to generate that image given only the inputs mentioned above.
Many of the most amazing shadertoy shaders use a technique called ray marching and some math called "signed distance fields" which you can read about here
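As a very rough, untested sketch of the idea in Shadertoy's mainImage style (the sphere, camera position, and step count are arbitrary choices, not taken from any particular shader):
// Signed distance to the scene: here, a single sphere of radius 0.5 at the origin.
float sceneSDF(vec3 p) {
    return length(p) - 0.5;
}

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = (fragCoord - 0.5 * iResolution.xy) / iResolution.y;
    vec3 ro = vec3(0.0, 0.0, 2.0);            // ray origin (camera)
    vec3 rd = normalize(vec3(uv, -1.0));      // ray direction through this pixel

    float t = 0.0;
    bool hit = false;
    for (int i = 0; i < 64; i++) {            // march along the ray
        float d = sceneSDF(ro + rd * t);
        if (d < 0.001) { hit = true; break; } // close enough: the ray hit the surface
        t += d;                               // the SDF guarantees this step is safe
    }

    fragColor = hit ? vec4(1.0, 0.5, 0.2, 1.0)   // hit: orange
                    : vec4(0.0, 0.0, 0.0, 1.0);  // miss: black
}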
But I think it's important to point out that while there are many cool things to learn from Shadertoy shaders, many of them apply only to this "puzzle" of "how do I make a beautiful image with only one function whose input is a pixel position and whose output is a single color". They don't answer "how should I write shaders for a performant app?".
Compare the dolphin above to this speedboat game:
https://www.youtube.com/watch?v=7v9gZK9HqqI
The dolphin Shadertoy shader runs at about 2 frames a second when fullscreen on my NVidia GeForce GT 750, whereas the speedboat game runs at 60fps. The reason the game runs fast is that it uses more traditional techniques, drawing shapes with projected triangles. Even an NVidia 1060 GTX can only run that dolphin shader at about 10 frames a second when fullscreen.
It's basically just pushing GLSL pixel shader source code directly onto the graphics card. The real magic happens in the incredibly clever algorithms that people use to create amazing effects, like ray marching, ray casting, and ray tracing. It's best to have a look at some other live GLSL sandboxes like http://glsl.heroku.com/ and http://webglplayground.net/.
It basically creates a window, typically two triangles which represent the screen, and then the shader works on each pixel just like a ray tracer.
I've been looking at these for a while now, and the algorithms people use are mind-blowing; you'll need some serious math chops and to look up "demo coding" source code to be able to wrap your head around them. Many on Shadertoy will just blow your mind!
So to summarise, you just need to learn GLSL shader coding and algorithms. No easy solution.
Traditionally in computer graphics, geometry is created using vertices and rendered using some form of materials (e.g. textures with lighting). In GLSL, the vertex shader processes the vertices and the fragment (pixel) shader processes the materials.
But that is not the only way to define shapes. Just as a texture could be procedurally defined (instead of looking up its texels), a shape could be procedurally defined (instead of looking up its geometry).
So, similar to ray tracing, these fragment shaders are able to create shapes without having their geometry defined by vertices.
There's still more ways to define shapes. E.g. volume data (voxels), surface curves, and so on. A computer graphics text should cover some of them.

How to cut an object using WebGL shaders?

I would like to cut an object (a box) in WebGL (fragment shaders / vertex shaders) without using Boolean operations (union, difference, etc..).
I want to use shaders to hide some parts of the object (so it is not really a "real cut", since it simply hides parts of the object).
First, make sure that the vertex shader passes through to the fragment shader the position in world space (or rather, whichever coordinate space you wish the clipping to be fixed relative to). Example (written from memory, not tested):
varying vec3 positionForClip;
...
void main(void) {
    ...
    vec4 worldPos = modelMatrix * vertexPosition;
    positionForClip = worldPos.xyz / worldPos.w; // don't need homogeneous coordinates, so do the divide early
    gl_Position = viewMatrix * worldPos;
}
And in your fragment shader, you can then discard based on an arbitrary plane, or any other kind of test you want:
varying vec3 positionForClip;
uniform vec3 planeNormal;
uniform float planeDistance;
...
void main(void) {
    if (dot(positionForClip, planeNormal) > planeDistance) {
        // or if (positionForClip.x > 10.0), or whatever
        discard;
    }
    ...
    gl_FragColor = ...;
}
Note that using discard may cause a performance reduction as the GPU cannot optimize based on knowing that all fragments will be written.
Disclaimer: I haven't researched this myself, and only just wrote down a possible way to do it based on the 'obvious solution'. There may be better ways I haven't heard of.
Regarding your question about multiple objects: There are many different ways to handle this — it's all custom code in the end. But you certainly can use a different shader for different objects in your scene, as long as they're in different vertex arrays.
gl.useProgram(programWhichCuts);
gl.drawArrays();
gl.useProgram(programWhichDoesNotCut);
gl.drawArrays();
If you're new to using multiple programs, it's pretty much just like using one program except that you do all the setup (compile, attach, link) once. The main thing to watch out for is each program has its own uniforms, so you have to initialize your uniforms for each program separately.
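As a rough sketch of driving the cutting program's uniforms each frame (the uniform names match the fragment shader above; the plane values and vertexCount are placeholders, and you would normally cache the uniform locations after linking rather than look them up every frame):
gl.useProgram(programWhichCuts);
gl.uniform3fv(gl.getUniformLocation(programWhichCuts, 'planeNormal'), [0.0, 1.0, 0.0]);
gl.uniform1f(gl.getUniformLocation(programWhichCuts, 'planeDistance'), 10.0);
// ...bind this object's buffers and attributes as usual, then draw
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);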

WebGL Weird Performance Drop in Chrome

I have been trying to learn WebGL, so I made a simple model-loading script. I thought it was working fine and dandy until I pulled up Chrome's devtools timeline and saw that I was getting ~20 FPS when rendering 100 models. The weird part was that the CPU was idle most of the time on each frame.
My first thought was that I must be GPU-bound, which seemed unlikely because my shaders were short and not very complex. I tested with 10 models and again with only one, and that did increase performance, but I was still getting less than 40 FPS. I pulled up chrome://tracing and saw that my GPU wasn't doing much for each frame either. The majority of each frame in CrGpuMain was filled with "Processing Swap", and CrBrowserMain spent nearly all of its time on glFinish (called by "CompositingIOSurfaceMac::DrawIOSurface").
Processing Swap appears to be the major problem, not glFinish.
So my question is, what is causing the browser to spend so much time on glFinish? Any ideas on how to fix this problem?
Each model has 40608 elements and 20805 vertices.
Update - corrected numbers for the simplified mesh:
3140 elements
6091 vertices
The models are pseudo-instanced so data is only passed to the GPU once per frame. Is the large number of vertices the problem?
The demo can be found here.
Rendering Code:
Thing.prototype.renderInstances = function(){
    if (this.loaded){
        var instance;
        gl.bindBuffer(gl.ARRAY_BUFFER, this.vertexbuffer);
        gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, 3, gl.FLOAT, false, 0, 0);
        gl.bindBuffer(gl.ARRAY_BUFFER, this.normalbuffer);
        gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, 3, gl.FLOAT, true, 0, 0);
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.indexbuffer);
        for(var i in this.instances){
            instance = this.instances[i];
            setMatrixUniforms(instance.matrix);
            gl.drawElements(gl.TRIANGLES, this.numItems, gl.UNSIGNED_SHORT, 0);
        }
    }
};
Vertex Shader Code:
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
uniform mat4 uMMatrix;
uniform mat4 uVMatrix;
uniform mat4 uPMatrix;
uniform vec3 uLightDirection;
varying float vLight;
void main(void) {
    mat4 MVMatrix = uVMatrix*uMMatrix;
    // Toji's manual transpose
    mat3 normalMatrix = mat3(MVMatrix[0][0], MVMatrix[1][0], MVMatrix[2][0],
                             MVMatrix[0][1], MVMatrix[1][1], MVMatrix[2][1],
                             MVMatrix[0][2], MVMatrix[1][2], MVMatrix[2][2]);
    gl_Position = uPMatrix*uVMatrix*uMMatrix*vec4(aVertexPosition, 1.0);
    vec3 lightDirection = uLightDirection*normalMatrix;
    vec3 normal = normalize(aVertexNormal*normalMatrix);
    vLight = max(dot(aVertexNormal, uLightDirection), 0.0);
}
