Why do we need to merge geometries ourselves? - javascript

I'm finally understanding the performance impact of merged geometries. I've noticed the GPU has draw calls for all shared geometries combined with materials (or perhaps it's just the material count). My question is: why are developers forced to figure out how to merge geometries when it's clear that three.js will just divide everything out of your merged geometry anyway?
I'm probably missing something.

It is difficult to make general statements about WebGLRenderer performance, because each use case is different.
But typically, performance will improve if the number of draw calls is reduced. If you have multiple meshes, and they all share the same material, then by merging the mesh geometries into a single geometry, you reduce the number of draw calls to one.
Remember, this approach makes sense only if your multiple meshes are static relative to each other, because once merged, they cannot be transformed independently.
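As a rough sketch of the merge itself (using THREE.Geometry#merge as it existed around r68; mesh1, mesh2 and sharedMaterial are placeholders for your own objects, and newer releases provide BufferGeometryUtils.mergeGeometries instead):

// Bake each mesh's world transform into one combined geometry,
// so a single mesh (and a single draw call) can replace them.
var merged = new THREE.Geometry();

mesh1.updateMatrix();
merged.merge( mesh1.geometry, mesh1.matrix );

mesh2.updateMatrix();
merged.merge( mesh2.geometry, mesh2.matrix );

scene.add( new THREE.Mesh( merged, sharedMaterial ) ); // one draw call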
And yes, it is true that if your mesh has multiple materials, then the renderer will break the geometry up into smaller geometries -- one for each material -- prior to rendering.
Here is a useful tip: In the console, type renderer.info and inspect the object that is returned. This will give you insight into what is going on behind the scenes.
You can also display renderer.info live on your screen by using this utility.
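For example (field names as in current three.js; builds of the r68 era expose similar counters):

// Log per-frame render stats after a render() call.
console.log( renderer.info.render.calls );      // draw calls issued this frame
console.log( renderer.info.memory.geometries ); // geometries held in memory
console.log( renderer.info.memory.textures );   // textures held in memory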
three.js r.68

Related

What is the performance hit of using multiple WebGL contexts

I'm working on a WebGL application that works similarly to video compositing programs like After Effects.
The application takes textures and stacks them like layers. This can be as simple as drawing one texture on top of another, or using common blend modes like screen to combine several layers.
Each layer can be individually scaled/rotated/translated via an API call, and altogether the application forms a basic compositing software.
Now to my question: doing all this in a single WebGL canvas is a lot to keep track of.
// Start webgl rant
The application won't know how many layers there are ahead of time, and so textures, coordinate planes, and shaders will need to be created dynamically.
Blending the layers together would require writing out the math for each type of blend mode (see the sketch after this rant). Transforming vertex coordinates requires matrix math, and just doing things in WebGL in general requires lots of code, it being a low-level API.
// End webgl rant
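(For reference, the per-pixel math behind a blend mode like screen is short; a hypothetical fragment-shader string, with made-up uniform names, is sketched below.)

// screen(a, b) = 1 - (1 - a) * (1 - b), applied per colour channel.
var screenBlendFS = [
  'uniform sampler2D uBase;',
  'uniform sampler2D uLayer;',
  'varying vec2 vUv;',
  'void main() {',
  '  vec4 a = texture2D( uBase, vUv );',
  '  vec4 b = texture2D( uLayer, vUv );',
  '  gl_FragColor = vec4( 1.0 - ( 1.0 - a.rgb ) * ( 1.0 - b.rgb ), max( a.a, b.a ) );',
  '}'
].join( '\n' );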
However, this could be easily solved by making a new canvas element for each layer. Each WebGL canvas will have a texture drawn onto it, and then I can scale/move/blend the layers using simple CSS code.
On the surface, it seems like the performance hit won't be that bad, because even if I did combine everything into a single context, each layer would still need its own texture and coordinate system. So the number of textures and coordinates stays the same, just spread across multiple contexts.
However, deep inside I know somehow this is horribly wrong, and my computer's going to catch fire if I even try. I just can't figure out why.
With a goal of being able to support ~4 layers at a time, would using multiple canvases be a valid option? Besides worrying about browsers having a max number of active WebGL contexts, are there any other limitations to be aware of?

three.js geometry groups incorrect sorting

I have a Three.js (r87) scene with modified Spine skeletons, each with two materials and a BufferGeometry with two groups to render both parts of the dynamic mesh.
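(For reference, a two-group setup like this looks roughly as follows; recent three.js API shown, r87 spelled setAttribute as addAttribute, and the counts are placeholders.)

// One BufferGeometry with two draw groups, one material per group.
var geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );
geometry.addGroup( 0, normalCount, 0 );             // group 0 -> materials[ 0 ]
geometry.addGroup( normalCount, additiveCount, 1 ); // group 1 -> materials[ 1 ]
var mesh = new THREE.Mesh( geometry, [ normalMaterial, additiveMaterial ] );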
However, when moving the camera with sorting enabled, the two draw calls sometimes reverse in order creating incorrect rendering due to them both having transparent parts.
After looking at the source code for the WebGLRenderer, it looks like each draw call for the object is entered into the render list at the same depth, so the two draw calls fight.
Spine creates these meshes dynamically for each frame of the animation, layering them from back to front (the additive parts are closer to the camera than the normal parts) which makes them very difficult to split into multiple mesh objects to allow them to be sorted. They are currently one geometry buffer.
Disabling sorting for the entire scene is not really an option. What can I do to preserve the ordering of the draw groups for these objects?

How many WebGL draw calls does Three.js make for a given number of geometries/materials/meshes

How would I calculate the number of drawArrays/drawElements calls THREE.renderer.render(scene, camera) would make?
I'm assuming one call per geometry, with attributes for the materials/mesh properties. But that could easily be wrong or incomplete.
Also, would using BufferGeometry etc. make a difference?
rstats is what I am using; it's pretty cool.
You can get almost all the important information:
framerate, memory, render info, WebGL draw calls...
It really helps with three.js performance.
It is great work from Jaume Sanchez.
Here is the tutorial:
https://spite.github.io/rstats/
Per WestLangley's comment, I'd search THREE.WebGLRenderer for renderer.info and look at the logic there.
The draw calls themselves differ a bit. If you have one geometry, let's say cubeGeometry, one material cubeMaterialRed, and five cube nodes [c1, c2, ..., c5], you'll have five draw calls. I believe three.js would then bind the geometry once, and do five consecutive draw calls with the same shader and mostly the same uniforms (only the position/rotation/scale matrix should change, while the color red could be bound once).
If you have five different materials [cMatRed, cMatBlue, ..., cMatPink], you'll have five draw calls with the same geometry, but the uniform for the color of the material will be set differently for each one.
If you have five different geometries, in five different nodes, the geometry will have to be bound before every draw call.
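A sketch of the first scenario (BoxGeometry in current three.js; releases of that era called it CubeGeometry):

// Five nodes sharing one geometry and one material: still five draw calls,
// one per scene-graph node that holds a mesh.
var cubeGeometry = new THREE.BoxGeometry( 1, 1, 1 );
var cubeMaterialRed = new THREE.MeshBasicMaterial( { color: 0xff0000 } );
for ( var i = 0; i < 5; i ++ ) {
  var cube = new THREE.Mesh( cubeGeometry, cubeMaterialRed );
  cube.position.x = i * 2; // only the matrix uniforms differ per call
  scene.add( cube );
}
// After rendering, renderer.info.render.calls reports 5.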
This can be reduced by using instancing when supported. In the second example, it would bind the geometry to be instanced once, set up attributes containing the position/rotation/scale and colors, and issue one draw call with these properties, rendering 5 geometries in the locations and colors provided by the attributes.
Three.js currently does not do any batching or optimization along those lines, so instancing would be done manually.
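(Current three.js has since added an InstancedMesh helper that wraps this manual setup; a sketch, reusing the cubeGeometry from above with a white base material so the per-instance colours show unmodified:)

// One draw call rendering five cubes with per-instance transforms and colours.
var mesh = new THREE.InstancedMesh( cubeGeometry, new THREE.MeshBasicMaterial(), 5 );
var matrix = new THREE.Matrix4();
for ( var i = 0; i < 5; i ++ ) {
  matrix.setPosition( i * 2, 0, 0 );                                 // per-instance transform
  mesh.setMatrixAt( i, matrix );
  mesh.setColorAt( i, new THREE.Color().setHSL( i / 5, 1.0, 0.5 ) ); // per-instance colour
}
scene.add( mesh );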
This all means that it's pretty much a 1:1 relation to the number of nodes that contain geometry. Object3D->Object3D->Mesh would result in three nodes, but one draw call.

Three.js Mesh or Geometry content

I'm new to Three.js; is there a way to get separate objects (elements/shells) from a Mesh or Geometry object?
If there's no native way to do that, how could I implement a method for finding faces that are not connected to the rest of the ensemble and then detaching them so they form their own Mesh object?
Background: I'm loading a 3d model and would like to be able to unify this model using ThreeBSP, I need to separate the objects before applying the boolean operations.
Thank you
Dig into the Geometry object. It has an array of faces. I don't think there is a native way to check to see which ones are contiguous.
Shooting from the hip, "contiguous" in this case means faces that share points with something that shares points with something that shares points, etc. So pick a face. Store its defining points, find any faces that also use those points, store their points, find all the faces that share any of the expanded points, etc. Check out the "flood fill" algorithm for some direction on how to use recursion, as well as how to do the bookkeeping needed to keep duplicates from leaving you searching in a loop forever.
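A sketch of that bookkeeping over the legacy THREE.Geometry face list (the function name is made up; faces expose a/b/c vertex indices, as they did in this era):

// Collect the indices of every face connected to faces[ startIndex ].
function collectShell( geometry, startIndex ) {
  var faces = geometry.faces;
  var facesByVertex = {}; // vertex index -> indices of faces that use it
  faces.forEach( function ( face, i ) {
    [ face.a, face.b, face.c ].forEach( function ( v ) {
      ( facesByVertex[ v ] = facesByVertex[ v ] || [] ).push( i );
    } );
  } );
  var visited = {};
  visited[ startIndex ] = true;
  var stack = [ startIndex ];
  while ( stack.length > 0 ) {
    var face = faces[ stack.pop() ];
    [ face.a, face.b, face.c ].forEach( function ( v ) {
      facesByVertex[ v ].forEach( function ( neighbor ) {
        if ( ! visited[ neighbor ] ) { // the bookkeeping that prevents endless loops
          visited[ neighbor ] = true;
          stack.push( neighbor );
        }
      } );
    } );
  }
  return Object.keys( visited ).map( Number ); // one connected shell
}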
Good Luck

WebGL triangles in a cube

In this tutorial, the author displays a cube by defining its 6 faces (6*4 vertices) and then telling WebGL about the triangles in each face.
Isn't this wasteful? Wouldn't it be better to define just 8 vertices and tell WebGL how to connect them to get triangles? Are colors shared by multiple vertices a problem?
To make my concern evident: if the author defines triangles with an indices array, why does he need so many vertices? He could specify all triangles with just 8 vertices in the vertex array.
Author of the example here. The issue is, as you suspected, to do with the colouring of the cube.
The way to understand this kind of code most easily is to think of WebGL's "vertices" as being not just simple points in space, but instead bundles of attributes. A particular vertex might be the bundle <(1, -1, 1), red>. A different vertex that was at the same point in space but had a different colour (eg. <(1, -1, 1), green>) would be a different vertex entirely as far as WebGL is concerned.
So while a cube has only 8 vertices in the mathematical sense of points in space, if you want to have a different colour per face, each of those points must be occupied by three different vertices, one per colour -- which makes 8x3=24 vertices in the WebGL sense.
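In buffer terms, the duplication looks like this (interleaved position plus colour, illustrative values only):

// The corner (1, -1, 1) stored three times, once per adjacent face colour.
var vertices = new Float32Array( [
  //  x,  y,  z,   r, g, b
      1, -1,  1,   1, 0, 0, // as part of the red face
      1, -1,  1,   0, 1, 0, // same point, green face
      1, -1,  1,   0, 0, 1  // same point, blue face
] );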
It's not hugely efficient in terms of memory, but memory's cheap compared to the CPU power that a more normalised representation would require for efficient processing.
Hope that clarifies things.
You can use Vertex Buffer Objects (VBOs). See this example. They create a list of vertices and a list of indices "pointing" to the vertices (no duplication of vertices).
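A minimal sketch of that indexed approach (raw WebGL; assumes a gl context and a bound, filled vertex buffer already set up elsewhere):

// 8 shared vertices plus an index list; two faces (four triangles) shown.
var indices = new Uint16Array( [
  0, 1, 2,  0, 2, 3, // front face
  4, 5, 6,  4, 6, 7  // back face
] );
gl.bindBuffer( gl.ELEMENT_ARRAY_BUFFER, gl.createBuffer() );
gl.bufferData( gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW );
gl.drawElements( gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0 );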
