I have been trying to learn WebGL so I have made a simple model loading script. I thought it was working all fine and dandy until I pulled up Chrome's devtools timeline and saw that I was getting ~20 FPS when rendering 100 models. The weird part was, the CPU was idle most of the time on each frame.
My first thought was that I must be GPU bound, which seemed unlikely because my shaders were short and not very complex. I tested with 10 models and again with only one; that did improve performance, but I was still getting less than 40 FPS. I pulled up chrome://tracing and saw that my GPU wasn't doing much each frame either. The majority of each frame in CrGpuMain was filled with "Processing Swap", and CrBrowserMain spent nearly all of its time in glFinish (called by "CompositingIOSurfaceMac::DrawIOSurface").
Processing Swap appears to be the major problem, not glFinish.
So my question is, what is causing the browser to spend so much time on glFinish? Any ideas on how to fix this problem?
Each model has 40608 elements and 20805 vertices.
Update - Corrected numbers of simplified mesh:
3140 elements
6091 vertices
The models are pseudo-instanced so data is only passed to the GPU once per frame. Is the large number of vertices the problem?
The demo can be found here.
Rendering Code:
Thing.prototype.renderInstances = function(){
    if (this.loaded){
        var instance;
        gl.bindBuffer(gl.ARRAY_BUFFER, this.vertexbuffer);
        gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, 3, gl.FLOAT, false, 0, 0);
        gl.bindBuffer(gl.ARRAY_BUFFER, this.normalbuffer);
        gl.vertexAttribPointer(shaderProgram.vertexNormalAttribute, 3, gl.FLOAT, true, 0, 0);
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.indexbuffer);
        for(var i in this.instances){
            instance = this.instances[i];
            setMatrixUniforms(instance.matrix);
            gl.drawElements(gl.TRIANGLES, this.numItems, gl.UNSIGNED_SHORT, 0);
        }
    }
};
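For reference, true instancing via the ANGLE_instanced_arrays extension would collapse the per-instance loop into a single draw call per mesh. Below is a minimal sketch, assuming each instance carries a 16-element column-major matrix; the packing helper and variable names are made up for illustration, and the commented-out portion needs a live WebGL context:

```javascript
// Pack each instance's 4x4 model matrix into one Float32Array so all
// per-instance data can be uploaded once per frame. A mat4 attribute
// occupies four consecutive attribute locations (one vec4 per column).
function packInstanceMatrices(instances) {
  var data = new Float32Array(instances.length * 16);
  for (var i = 0; i < instances.length; i++) {
    data.set(instances[i].matrix, i * 16); // matrix: column-major, length 16
  }
  return data;
}

// Hypothetical draw path (requires a live WebGL 1 context):
// var ext = gl.getExtension("ANGLE_instanced_arrays");
// gl.bindBuffer(gl.ARRAY_BUFFER, instanceBuffer);
// gl.bufferData(gl.ARRAY_BUFFER, packInstanceMatrices(this.instances), gl.DYNAMIC_DRAW);
// for (var col = 0; col < 4; col++) {
//   var loc = matrixAttribLocation + col;
//   gl.enableVertexAttribArray(loc);
//   gl.vertexAttribPointer(loc, 4, gl.FLOAT, false, 64, col * 16);
//   ext.vertexAttribDivisorANGLE(loc, 1); // advance once per instance, not per vertex
// }
// ext.drawElementsInstancedANGLE(gl.TRIANGLES, this.numItems, gl.UNSIGNED_SHORT, 0,
//                                this.instances.length);
```

This removes the per-instance setMatrixUniforms and drawElements calls, which is usually where pseudo-instancing spends its CPU time.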
Vertex Shader Code:
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;

uniform mat4 uMMatrix;
uniform mat4 uVMatrix;
uniform mat4 uPMatrix;
uniform vec3 uLightDirection;

varying float vLight;

void main(void) {
    mat4 MVMatrix = uVMatrix*uMMatrix;
    //Toji's manual transpose
    mat3 normalMatrix = mat3(
        MVMatrix[0][0], MVMatrix[1][0], MVMatrix[2][0],
        MVMatrix[0][1], MVMatrix[1][1], MVMatrix[2][1],
        MVMatrix[0][2], MVMatrix[1][2], MVMatrix[2][2]);
    gl_Position = uPMatrix*MVMatrix*vec4(aVertexPosition, 1.0);
    vec3 lightDirection = uLightDirection*normalMatrix;
    vec3 normal = normalize(aVertexNormal*normalMatrix);
    // use the transformed vectors (the originals were computed but then unused)
    vLight = max(dot(normal, lightDirection), 0.0);
}
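Since GLSL ES 1.00 has no transpose() built-in, an alternative to the in-shader manual transpose is to compute the 3x3 normal matrix on the CPU once per instance and upload it as a uniform. A sketch of the transpose step in plain JS, assuming column-major 4x4 arrays as WebGL uniforms use (the helper name is made up):

```javascript
// Extract the upper-left 3x3 of a column-major 4x4 modelview matrix and
// transpose it. This matches the shader's manual-transpose trick, and is a
// valid normal matrix as long as the modelview has no non-uniform scale.
function normalMatrixFromMV(mv) {
  return new Float32Array([
    mv[0], mv[4], mv[8],   // result column 0 = row 0 of mv's upper-left 3x3
    mv[1], mv[5], mv[9],   // result column 1 = row 1
    mv[2], mv[6], mv[10]   // result column 2 = row 2
  ]);
}
// Upload with: gl.uniformMatrix3fv(normalMatrixLoc, false, normalMatrixFromMV(mvMatrix));
```

This trades one mat3 construction per vertex for one per draw call, which is usually a win on vertex-heavy meshes.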
I mainly develop in C++ and decided to compile one of my projects to WebAssembly and build a website on top of it. Since I have written a 3D engine in C++ before, I decided to use WebGL on the website. I translated my shader template class, and I have reduced the problem to a two-dimensional one.
Task
First, I will describe what I am trying to do.
I am going to render a 2D FEM grid consisting of FEM elements, each of which can be any type of polygon. The nodes of those polygons carry values which I am trying to display. I have already written code to break the polygons down into triangles. Initially I am just trying to render two-dimensional triangles with their node values interpolated.
Shader Code
I wrote a template shader class in WebGL which handles the construction and compilation of the shaders; it was ported from my C++ 3D engine. Since the code itself is somewhat long, I am not going to post it here, but I will show a list of the executed OpenGL commands below.
The shaders I am currently using to debug my problem are the following:
Vertex-Shader:
precision mediump float;
attribute vec2 position;
varying vec2 fragPos;
void main(){
gl_Position = vec4(position,0,1.0);
fragPos = position;
}
---------------------------------------------------------------
Fragment-Shader:
precision mediump float;
varying vec2 fragPos;
uniform sampler2D elementValues;
uniform int elementValuesDimension;
void main(){
gl_FragColor = vec4(0.8,0.2,0.0,1.0);
}
As you can see, I am not attempting any interpolation in this debug case; I am just trying to output a red-ish color from my fragment shader.
Executed Commands
I went ahead and used webgl-debug.js to show all the operations done. For this case, you can see the vertices and indices matching a simple quad spanning [0,0.6]x[0,0.6] which should be well within the clip space.
This can be broken into a few parts:
Enable uint32 as indexing
gl.getExtension(OES_element_index_uint)
Create two buffers for vertex coordinates and indices
gl.createBuffer()
gl.createBuffer()
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, [object WebGLBuffer])
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, 0,1,2,0,2,3, gl.STATIC_DRAW)
gl.bindBuffer(gl.ARRAY_BUFFER, [object WebGLBuffer])
gl.bufferData(gl.ARRAY_BUFFER, 0,0,0,0.6000000238418579,0.6000000238418579,0.6000000238418579,0.6000000238418579,0, gl.STATIC_DRAW)
Create and compile the shaders
gl.createShader(gl.VERTEX_SHADER)
gl.shaderSource(vertex_shader_src);
gl.compileShader([object WebGLShader]);
gl.getShaderParameter([object WebGLShader], gl.COMPILE_STATUS);
gl.createShader(gl.FRAGMENT_SHADER);
gl.shaderSource(fragment_shader_src);
gl.compileShader([object WebGLShader]);
gl.getShaderParameter([object WebGLShader], gl.COMPILE_STATUS);
gl.createProgram()
gl.attachShader([object WebGLProgram], [object WebGLShader])
gl.attachShader([object WebGLProgram], [object WebGLShader])
gl.linkProgram([object WebGLProgram])
gl.validateProgram([object WebGLProgram])
Get Locations of the uniforms (which are unused) as well as the location of the attribute
gl.getUniformLocation([object WebGLProgram], elementValues)
gl.getUniformLocation([object WebGLProgram], elementValuesDimension)
gl.getAttribLocation([object WebGLProgram], position)
Verify that linking has worked
gl.getProgramParameter([object WebGLProgram], gl.LINK_STATUS)
Main Rendering Loop. Initially clear the displayed stuff
gl.clearColor(0.5, 0.5, 0.5, 1)
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT)
gl.disable(gl.CULL_FACE)
gl.viewport(0, 0, 700, 600)
Activate the shader and bind the buffers
gl.useProgram([object WebGLProgram])
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, [object WebGLBuffer])
gl.bindBuffer(gl.ARRAY_BUFFER, [object WebGLBuffer])
Connect the vertices to the first attribute
gl.vertexAttribPointer(0, 2, gl.FLOAT, false, 0, 0)
Render Call.
gl.drawElements(gl.TRIANGLES, 4, gl.UNSIGNED_INT, 0)
Sadly, when the commands above are executed, I am unable to see anything except a gray screen.
I would be very happy about any help!
Greetings
Finn
Thanks to teddybeard and a lot more time spent on this problem than I was hoping, two issues have shown up:
First, gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, 0,1,2,0,2,3, gl.STATIC_DRAW) contains 6 indices, but the drawElements command only took 4 values. This is obviously a bug, but it should still have rendered at least one of the two triangles.
Secondly, I forgot to enable the vertex attribute array; gl.enableVertexAttribArray(1); solved the problem.
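For completeness, a minimal sketch of the corrected draw path described above, assuming the same quad data (the commented lines need a live gl context and the compiled program):

```javascript
// Two triangles covering the quad: 6 indices, so drawElements needs count 6.
var quadIndices = new Uint32Array([0, 1, 2, 0, 2, 3]);

// With a live context:
// var loc = gl.getAttribLocation(program, "position");
// gl.enableVertexAttribArray(loc);           // the call that was missing
// gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);
// gl.drawElements(gl.TRIANGLES, quadIndices.length, gl.UNSIGNED_INT, 0); // 6, not 4
```

Passing quadIndices.length instead of a hard-coded count avoids the mismatch in the first place.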
I am able to render an image to a texture, which is rendered in the top left of the screen. Now I want to move that texture slowly to the right along the X axis and add another texture in the top-left corner, and I want to do this continuously.
So far I can render a single texture, but I am unable to get any further. Can anybody help me?
I am putting my code which I have done so far.
Code:
Vertex Shader
attribute vec2 a_position;
uniform vec2 u_resolution;
uniform mat3 u_matrix;
varying vec2 v_texCoord;
void main() {
gl_Position = vec4(u_matrix * vec3(a_position, 1), 1);
v_texCoord = a_position;
}
Fragment Shader
precision mediump float;
// our texture
uniform sampler2D u_image;
// the texCoords passed in from the vertex shader.
varying vec2 v_texCoord;
void main() {
gl_FragColor = texture2D(u_image, v_texCoord);
}
Javascript
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // bitwise OR, not ||
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, image.width, image.height, 0, gl.RGBA, gl.UNSIGNED_BYTE, dataTypedArray);
//Draw the rectangle.
gl.drawArrays(gl.TRIANGLES, 0, 6);
Where dataTypedArray is a Uint8Array of pixels.
Next I want to move this rendered texture and add more textures at the previous locations. Any suggestions?
I think the easiest way to accomplish this is basically to draw 2 quads with 2 separate draw calls. In the first draw call, draw the first quad at desired x location, then, switch to the second texture and draw the second quad at whatever the desired location is. You do not have to alter the shader in any way.
In pseudo-js:
// allocate the two textures
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT); // bitwise OR, not ||
gl.bindTexture(gl.TEXTURE_2D, firstTexture);
// use whatever method you want uniform/attribute/watever to upload the quad position
gl.drawArrays(gl.TRIANGLES, 0, 6);
gl.bindTexture(gl.TEXTURE_2D, secondTexture);
// upload the second quad position
gl.drawArrays(gl.TRIANGLES, 0, 6);
This is typically accomplished by translating the texture coordinates themselves. The vertex shader will often include a texture matrix:
attribute vec2 vertexTexCoord; // input texture coord (s,t)
uniform mat4 TextureMatrix; // for transforming (s,t)
varying vec2 texCoord; // output texture coord
...
void main() {
gl_Position = ModelViewProjection*vec4(vertexPosition,1.0);
texCoord = (TextureMatrix*vec4(vertexTexCoord,0.0,1.0)).st;
...
}
The texture matrix can translate, rotate, scale, ... the texture. The client JS code constructs the appropriate transform and loads it into the shader's texture matrix. If you wish to animate the translation, you can recompute the texture matrix at each frame:
... compute s-translation ds based on frame time ...
Texture = new Matrix4x4;
Texture.translate(ds, 0, 0).scale(4, 2, 1);
...
gl.uniformMatrix4fv(program.TextureMatrix, false, Texture.array);
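The Matrix4x4 above is pseudocode; a minimal plain-JS stand-in for just the translate/scale part might look like this (the function names are made up for illustration):

```javascript
// Build a column-major 4x4 equivalent to Texture.translate(ds, 0, 0).scale(sx, sy, 1)
// above: points are scaled first, then translated along s.
function makeTextureMatrix(ds, sx, sy) {
  return new Float32Array([
    sx, 0,  0, 0,
    0,  sy, 0, 0,
    0,  0,  1, 0,
    ds, 0,  0, 1
  ]);
}

// Apply the matrix to an (s,t) coordinate the same way the shader's
// (TextureMatrix * vec4(st, 0, 1)).st does.
function transformST(m, s, t) {
  return [m[0] * s + m[4] * t + m[12],
          m[1] * s + m[5] * t + m[13]];
}
// Per frame: gl.uniformMatrix4fv(program.TextureMatrix, false, makeTextureMatrix(ds, 4, 2));
```

Recomputing ds each frame and re-uploading the matrix gives the continuous sliding motion.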
If you wish to involve multiple textures, then you will have to use multiple texture units and access them as needed in the fragment shader:
varying vec2 texCoord;
uniform sampler2D texUnit0;
uniform sampler2D texUnit1;
...
void main() {
... choose which texture you want to access based on texCoord.st
gl_FragColor = texture2D(desiredTexUnit, texCoord);
...
}
Take a look at this textured donut to see the effect of scaling texture coordinates with a single texture. You'll need to perform multi-texturing to get the effect you desire.
I am writing a WebGL program with texturing.
As long as the image isn't loaded, the texture2D function returns vec4(0.0, 0.0, 0.0, 1.0), so all objects are black.
So I would like to check whether my sampler2D is available.
I have already tried:
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
uniform sampler2D uSampler;
void main(void) {
vec4 color = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
if(color.r == 0.0 && color.g == 0.0 && color.b == 0.0)
color = vec4(1.0, 1.0, 1.0, 1.0);
gl_FragColor = color;
}
</script>
But of course this doesn't make sense, because the texture itself could be black.
Can anybody help me? How can I check in the fragment shader whether my texture image has already loaded?
You can't really check that in WebGL.
Solutions:
Don't render until the texture is loaded
Use a 1x1 pixel texture to start, and fill it in with the image once it's loaded. See this answer
Pass in more info to the shader like uniform bool textureLoaded.
Me, I always pick #2 because it means the app runs immediately and the textures get filled in as they download.
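Option 2 from the list can be sketched like this (assumes an existing gl context and a created texture; the image URL is a placeholder):

```javascript
// Start with a single opaque blue pixel so rendering can begin immediately.
var placeholderPixel = new Uint8Array([0, 0, 255, 255]); // one RGBA texel
// gl.bindTexture(gl.TEXTURE_2D, texture);
// gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 1, 1, 0,
//               gl.RGBA, gl.UNSIGNED_BYTE, placeholderPixel);

// Swap in the real image when it arrives; the next frame samples it.
// var img = new Image();
// img.onload = function () {
//   gl.bindTexture(gl.TEXTURE_2D, texture);
//   gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, img);
// };
// img.src = "myTexture.png"; // placeholder URL
```

No shader change is needed, which is part of why this approach is attractive.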
I'd provide a new uniform which stores whether the texture is loaded or not.
Or you can write two shaders, with and without the texture, and select the proper one before rendering.
I am working with a WebGL website.
I have a canvas with a texture at 256x256 that I use to render to WebGL.
On this canvas I have rendered several images packed together with 1px spacing between them, using regular canvas rendering.
I use a 1x1 rectangle (scaled with the world matrix) to render the images in batches. I.e.: I set up the entire render state, then change the UV as a uniform to the shader. It's a spritesheet of icons.
The shader I use to render it is
precision highp float;

attribute vec3 vertexPosition;
attribute vec2 textureCoordinate;

uniform mat4 worldMatrix;
uniform mat4 projectionMatrix;
uniform vec4 actualUV;
uniform float cacheSize;

varying vec2 fragCoord;

vec2 scaleVec;

void main(void) {
    scaleVec = vec2(cacheSize, cacheSize);
    gl_Position = projectionMatrix * worldMatrix * vec4(vertexPosition, 1.0);
    fragCoord = textureCoordinate * actualUV.zw;
    fragCoord = fragCoord + actualUV.xy;
    fragCoord = fragCoord * scaleVec;
}
The values I use are
actualUV = {x: 50, y: 50, z: 19, w: 19}; // for example
cacheSize = 256;
Which should render 19x19 pixels at 50,50 on the texture into a rectangle on the screen 19x19 size. And it does, almost.
The image is slightly off: it's blurry, and when I set MAG_FILTER to NEAREST I get a sharper image, but it is sometimes off by one pixel, or worse, half a pixel, causing some (very minor but noticeable) stretching. If I add a slight offset to correct this, other images rendered the same way are off in the other direction. I cannot seem to figure it out. It seems like an issue with the floating point calculation not being exact, but I cannot figure out where.
Try re-adjusting your coordinate system so that the UVs passed are within the [0, 1] range, and get rid of your scaling factor. This can also be a premultiplied-alpha problem; try using gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA); together with gl.pixelStorei(gl.UNPACK_PREMULTIPLY_ALPHA_WEBGL, true); instead. If you are using this for tiles, then you need to pad the borders with actual pixels, and pad more if you need anisotropic filtering.
Also, if all of the above fails, try using a quad with dimensions equal to the image (the portion of the spritesheet) instead of a 1x1 quad.
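One common source of the half-pixel error described in the question is sampling at texel edges instead of texel centers. A small helper that computes center-inset UVs for a pixel rectangle in the atlas, as a sketch assuming a square atlas like the 256x256 one above:

```javascript
// Map a pixel rect {x, y, w, h} in an atlas of size cacheSize to UVs that
// sample exactly from the first to the last texel center of the region.
// With NEAREST filtering this avoids straddling a texel boundary.
function pixelRectToUV(rect, cacheSize) {
  return {
    u0: (rect.x + 0.5) / cacheSize,
    v0: (rect.y + 0.5) / cacheSize,
    u1: (rect.x + rect.w - 0.5) / cacheSize,
    v1: (rect.y + rect.h - 0.5) / cacheSize
  };
}
```

For the 19x19 sprite at (50, 50) this yields UVs from texel center 50.5 to texel center 68.5, instead of the edge values 50 and 69.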
I'm looking to simply display an image on the canvas at x and y coordinates using WebGL, but I have no clue how to do it. Do I need to include shaders and all that? I've seen code to display images, but it is very bulky. I do not wish to use a framework. If possible, could you comment and explain what the important sections do? I will be using WebGL for a 2D tile-based game.
Thank you for your time.
Yes, you need a vertex and a fragment shader, but they can be relatively simple. I'd recommend starting from the Mozilla example, as suggested by Ido, and after you get it running, removing the 3D aspect. In particular, you don't need the uMVPMatrix and uPmatrix, and your coordinate array can be 2D. For the vertex shader, that means:
attribute vec2 aVertexPosition; // 2D position, as described above
attribute vec2 aTextureCoord;

varying highp vec2 vTextureCoord;

void main(void) {
    gl_Position = vec4(aVertexPosition, 0.0, 1.0);
    vTextureCoord = aTextureCoord;
}
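To place the quad at pixel coordinates, convert x,y to clip space before filling the vertex array; a minimal helper, as a sketch independent of any framework:

```javascript
// Convert canvas pixel coordinates (origin top-left, y pointing down) to
// WebGL clip-space coordinates (origin center, y pointing up, range -1..1).
function pixelToClip(x, y, canvasWidth, canvasHeight) {
  return [
    (x / canvasWidth) * 2 - 1,
    (y / canvasHeight) * -2 + 1
  ];
}
```

Compute the quad's four corners with this and write them into the position buffer; the shader above then needs no matrices at all.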