I have a Three.js (r87) scene with modified Spine skeletons, each with two materials and a BufferGeometry with two groups to render both parts of the dynamic mesh.
However, when moving the camera with sorting enabled, the two draw calls sometimes reverse in order, producing incorrect rendering because both parts have transparency.
After looking at the WebGLRenderer source, it appears that every draw call for the object is entered into the render list at the same depth, so the two draw calls fight.
Spine creates these meshes dynamically for each frame of the animation, layering them from back to front (the additive parts are closer to the camera than the normal parts), which makes them very difficult to split into multiple mesh objects that could be sorted. They are currently one geometry buffer.
Disabling sorting for the entire scene is not really an option. What can I do to preserve the ordering of the draw groups for these objects?
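For reference, a rough sketch (r87-era API; the buffers, counts, and atlasTexture below are hypothetical placeholders) of the kind of setup described: one BufferGeometry whose two groups are drawn by two transparent materials:

    var geometry = new THREE.BufferGeometry();
    // positions, uvs and indices stand in for the Spine-generated buffers
    geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));
    geometry.addAttribute('uv', new THREE.BufferAttribute(uvs, 2));
    geometry.setIndex(new THREE.BufferAttribute(indices, 1));

    // group 0: normally blended part, group 1: additive part (closer to the camera)
    geometry.addGroup(0, normalIndexCount, 0);
    geometry.addGroup(normalIndexCount, additiveIndexCount, 1);

    var materials = [
        new THREE.MeshBasicMaterial({ map: atlasTexture, transparent: true }),
        new THREE.MeshBasicMaterial({ map: atlasTexture, transparent: true, blending: THREE.AdditiveBlending })
    ];

    // both groups are submitted as separate draw calls of this one mesh
    scene.add(new THREE.Mesh(geometry, materials));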
Related
How would I calculate the number of drawArrays/drawElements calls THREE.renderer.render(scene, camera) would make?
I'm assuming one call per geometry, with attributes for the materials/mesh properties. But that could easily be wrong or incomplete.
Also, would using BufferGeometry, etc., make a difference?
rstats is what I am using; it's pretty cool.
You can get almost all the important information: framerate, memory, render info, WebGL draw calls...
It really helps with three.js performance.
It's great work from Jaume Sanchez. Here is the tutorial:
https://spite.github.io/rstats/
Per WestLangley's comment, I'd search THREE.WebGLRenderer for renderer.info and look at the logic there.
The draw calls themselves differ a bit. If you have one geometry, let's say cubeGeometry, one material cubeMaterialRed, and five cube nodes [c1, c2, ..., c5], you'll have five draw calls. I believe three.js would then bind the geometry once and do five consecutive draw calls with the same shader and mostly the same uniforms (only the position/rotation/scale matrix should change, while the red color could be bound once).
If you have five different materials [cMatRed, cMatBlue, ..., cMatPink], you'll have five draw calls with the same geometry, but the uniform for the color of the material will be set differently for each one.
If you have five different geometries, in five different nodes, the geometry will have to be bound before every draw call.
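For the first case, a rough sketch (BoxGeometry/MeshBasicMaterial here are just stand-ins; scene, camera and renderer are assumed to already exist) that you can check against renderer.info:

    var cubeGeometry = new THREE.BoxGeometry(1, 1, 1);
    var cubeMaterialRed = new THREE.MeshBasicMaterial({ color: 0xff0000 });

    // five nodes sharing one geometry and one material
    for (var i = 0; i < 5; i++) {
        var cube = new THREE.Mesh(cubeGeometry, cubeMaterialRed);
        cube.position.x = i * 2;
        scene.add(cube);
    }

    renderer.render(scene, camera);
    console.log(renderer.info.render.calls); // should report 5 draw calls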
This can be reduced by using instancing where supported. In the second example, the geometry to be instanced would be bound once, and attributes containing the positions/rotations/scales and colors would be set up; one draw call would then be issued with these properties, rendering five geometries at the locations and colors provided by the attributes.
Three.js currently does not do any batching or optimization along those lines, so instancing would have to be done manually.
This all means it's pretty much a 1:1 relation to the number of nodes that contain geometry. Object3D -> Object3D -> Mesh would result in three nodes, but one draw call.
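If you wanted to try manual instancing, here is a hedged sketch along the lines of the official instancing example, using InstancedBufferGeometry with a custom RawShaderMaterial (the offset attribute, the box geometry and the instance count are just for illustration):

    var boxBuffer = new THREE.BufferGeometry().fromGeometry(new THREE.BoxGeometry(1, 1, 1));

    var instancedGeometry = new THREE.InstancedBufferGeometry();
    instancedGeometry.attributes.position = boxBuffer.attributes.position;
    instancedGeometry.maxInstancedCount = 5;

    // one offset per instance, spread along x
    var offsets = new Float32Array(5 * 3);
    for (var i = 0; i < 5; i++) offsets[i * 3] = i * 2;
    instancedGeometry.addAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));

    var material = new THREE.RawShaderMaterial({
        vertexShader: [
            'precision highp float;',
            'uniform mat4 modelViewMatrix;',
            'uniform mat4 projectionMatrix;',
            'attribute vec3 position;',
            'attribute vec3 offset;',
            'void main() {',
            '    gl_Position = projectionMatrix * modelViewMatrix * vec4( position + offset, 1.0 );',
            '}'
        ].join('\n'),
        fragmentShader: 'precision highp float; void main() { gl_FragColor = vec4( 1.0, 0.0, 0.0, 1.0 ); }'
    });

    var instancedMesh = new THREE.Mesh(instancedGeometry, material);
    instancedMesh.frustumCulled = false; // the per-instance offsets are not reflected in the bounding sphere
    scene.add(instancedMesh); // all five boxes are drawn with a single (instanced) draw call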
I'm working with the most recent r72 of three.js, and I'm trying to accomplish some effects that I used to do with PointCloud, which has now been replaced by the Points object.
I've set up a codepen http://codepen.io/shomanishikawa/pen/NGaWdW - both orbs are made by creating a geometry and pushing Vector3s to it.
I'd like individual points in the Points object to move at different velocities/directions, which was previously accomplished easily by setting something like
particle.velocity.y = 20
Which now seems to have no effect.
The orb on the left uses a BufferGeometry with a ShaderMaterial
The orb on the right uses a Geometry with a PointsMaterial
How can I set velocities on individual points so I can have them orbit around the radius instead of just rotating the parent Points object?
I'd like to accomplish this with a regular Geometry if possible, since I'd like finer control over moving the vertices individually. I've provided examples of both just in case it's only feasible with one of them.
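For what it's worth, a rough sketch (r72-era API; the point count and velocities are made up) of storing one velocity per vertex in a regular Geometry and applying it each frame:

    var geometry = new THREE.Geometry();
    var velocities = [];

    for (var i = 0; i < 500; i++) {
        geometry.vertices.push(new THREE.Vector3(
            (Math.random() - 0.5) * 100,
            (Math.random() - 0.5) * 100,
            (Math.random() - 0.5) * 100
        ));
        // one velocity vector per point, replacing the old particle.velocity idea
        velocities.push(new THREE.Vector3(0, 20 * Math.random(), 0));
    }

    var points = new THREE.Points(geometry, new THREE.PointsMaterial({ size: 2 }));
    scene.add(points);

    function animatePoints(delta) {
        for (var i = 0; i < geometry.vertices.length; i++) {
            geometry.vertices[i].y += velocities[i].y * delta;
        }
        geometry.verticesNeedUpdate = true; // tell three.js to re-upload the positions
    }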
I'm finally understanding the performance impact of merged geometries. I've noticed the GPU has draw calls for all shared geometries combined with materials (or perhaps it's just the material count). My question is: why are developers forced to figure out how to merge geometries when it's clear that three.js will just divide everything out of your merged geometry anyway?
I'm probably missing something.
It is difficult to make general statements about WebGLRenderer performance, because each use case is different.
But typically, performance will improve if the number of draw calls is reduced. If you have multiple meshes, and they all share the same material, then by merging the mesh geometries into a single geometry, you reduce the number of draw calls to one.
Remember, this approach makes sense only if your multiple meshes are static relative to each other, because once merged, they cannot be transformed independently.
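As an illustration, a hedged sketch (r68-era Geometry API; the counts and material are made up) of merging many static boxes that share one material into a single mesh:

    var mergedGeometry = new THREE.Geometry();
    var boxGeometry = new THREE.BoxGeometry(1, 1, 1);

    for (var i = 0; i < 100; i++) {
        // bake each box's transform into the merged geometry
        var matrix = new THREE.Matrix4().makeTranslation(i * 2, 0, 0);
        mergedGeometry.merge(boxGeometry, matrix);
    }

    var material = new THREE.MeshLambertMaterial({ color: 0x8888ff });
    scene.add(new THREE.Mesh(mergedGeometry, material)); // one draw call instead of 100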
And yes, it is true that if your mesh has multiple materials, the renderer will break the geometry up into smaller geometries -- one for each material -- prior to rendering.
Here is a useful tip: in the console, type renderer.info and inspect the object returned. This will give you insight into what is going on behind the scenes.
You can also display renderer.info live on your screen by using this utility.
three.js r.68
I'm making a game with three.js in which a mesh will be rendered in greater detail as the player gets closer (i.e. different geometry and material). This seems like it would be a common requirement in open-world-type games, and I was wondering what the standard procedure is in three.js.
It looks like changing the geometry of a mesh is quite computationally costly, so should I just store a different mesh for each distance, or is there an existing data structure for this purpose?
You want to use level-of-detail (LOD) modeling.
There is a demonstration of it in the following three.js example: http://threejs.org/examples/webgl_lod.html
In the example, press the 'W/S' keys to see the effect.
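A minimal sketch of the THREE.LOD usage in that example (the geometries, material, and distances here are placeholders):

    var lod = new THREE.LOD();
    lod.addLevel(new THREE.Mesh(highDetailGeometry, material), 0);    // used when the camera is close
    lod.addLevel(new THREE.Mesh(mediumDetailGeometry, material), 50);
    lod.addLevel(new THREE.Mesh(lowDetailGeometry, material), 200);   // used when far away
    scene.add(lod);

    // in the render loop (r62-era), the levels are switched by updating each LOD against the camera
    scene.traverse(function (object) {
        if (object instanceof THREE.LOD) object.update(camera);
    });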
three.js r.62
I am trying to learn some WebGL (from this tutorial http://learningwebgl.com/blog/?page_id=1217). I followed the guide, and now I am trying to implement my own demo. I want to create a graphics object that contains buffers and data for each individual object to appear in the scene. Currently, I have a position vertex buffer, a texture coordinate buffer, and a normals buffer. In the tutorial, he uses another buffer, an index buffer, but only for cubes. What is the index buffer actually for? Should I implement it, and is it useful for anything other than cubes?
The vertices of your objects are defined by positions in a 3D (Euclidean) coordinate system. So you can take every two consecutive vertices and connect them with a line once your 3D coordinate system is projected onto a 2D raster (the screen or some target image) by the rasterization process. You'll get a so-called wireframe.
The problem with a wireframe is that it's ambiguous. If you look at a wireframe cube from particular angles, you cannot tell exactly how the cube is rotated. That's because you need visibility algorithms to determine which part of the cube is closer to the observer's position (the position of the camera).
But lines by themselves cannot define a surface, which is what you need to determine which side of the cube is closer to the observer than the others. The best way to define surfaces in computer graphics is with polygons, specifically triangles (they have many advantages for computer graphics).
So you now have a cube defined by triangles (a so-called triangle mesh).
But how do you define which vertices form a triangle? With the index buffer. It contains indices into the vertex buffer (the list of your vertices) and tells the rasterizing algorithm which three vertices form a triangle. There are many ways to interpret the indices in an index buffer to reduce repetition of the same vertices (one vertex may be part of many triangles); you can find some in an article about graphics primitives.
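For instance, here is a small hedged WebGL sketch: a quad needs only four unique vertices when an index buffer spells out which triples of them form its two triangles (gl is assumed to be an existing WebGL context):

    // four unique corner positions (x, y, z)
    var vertices = new Float32Array([
        -1, -1, 0,   // 0: bottom-left
         1, -1, 0,   // 1: bottom-right
         1,  1, 0,   // 2: top-right
        -1,  1, 0    // 3: top-left
    ]);

    // two triangles described purely by indices into the vertex buffer
    var indices = new Uint16Array([
        0, 1, 2,   // first triangle
        0, 2, 3    // second triangle reuses vertices 0 and 2
    ]);

    var indexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);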
Technically you don't need an index buffer. There are two ways to render geometry: with glDrawArrays and with glDrawElements. glDrawArrays doesn't use the index buffer. You just write the vertices one after the other into the buffers and then tell GL what to do with the elements. If you use GL_TRIANGLES as the mode in the call, you have to put triples of data (vertices, normals, ...) into the buffers, so when a vertex is used multiple times you have to add it multiple times to the buffers.
glDrawElements, on the contrary, can be used to store a vertex once and then use it multiple times. There is one catch, though: the set of parameters for a single index is fixed, so when you have a vertex where you need two different normals (or another attribute like texture coordinates or colors), you have to store it once for each set of properties.
For spheres, glDrawElements makes a lot of sense, as the parameters match there; but for a cube the normals are different: the front face needs a different normal than the top face, while the position of the two vertices is the same. You still have to put the position into the buffer twice. In that case glDrawArrays can make sense.
Which of the calls needs less data depends on the geometry, but glDrawElements is more flexible (you can always simulate glDrawArrays with an index buffer that contains the numbers 0, 1, 2, 3, 4, ...).
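As a hedged sketch of the two call styles for that same quad (assuming the position buffer and attribute setup from the tutorial are already in place):

    // glDrawArrays style: no index buffer, shared corners are duplicated,
    // so the position buffer holds 6 vertices (two independent triangles)
    gl.drawArrays(gl.TRIANGLES, 0, 6);

    // glDrawElements style: 4 unique vertices plus the 6-entry index buffer
    // currently bound to ELEMENT_ARRAY_BUFFER
    gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);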