instancing vs bufferGeometry vs interleavedBuffer - javascript

I need to draw thousands of points and lines which have position, size, and color attributes, and their positions are dynamic (they change interactively while dragging).
I was using BufferGeometry until now, but I have just found two more things:
instancing
interleaved buffer
I want to know what these are and how they work. What are their advantages and disadvantages? Are they better for my case, or is a simple BufferGeometry best for me?
Can you give me a full comparison between these three?

Interleaving means that instead of creating multiple VBOs to contain your data, you create one and mix your data. Instead of having one buffer with v1,v1,v1,v2,v2,v2... and another with c1,c1,c1,c2,c2,c2..., you have one with v1,v1,v1,c1,c1,c1,v2,v2,v2,c2,c2,c2... with different pointers.
I'm not sure what the upside of this is, and I'm hoping that someone with more experience can answer this better. I'm also not sure what happens if you want to mix types, say less precision for texture coordinates, or whether that would even be good practice.
On the downside, if you have to loop over this and update, for example, the positions but not the colors, that loop may be slightly more complicated than if the data were simply lined up.
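In three.js specifically, interleaving is exposed through InterleavedBuffer and InterleavedBufferAttribute. A minimal sketch (setAttribute assumes a recent three.js release; older builds use addAttribute instead):

```javascript
// One shared array holding position and color per vertex:
// layout x, y, z, r, g, b -> stride of 6 floats.
const data = new Float32Array([
//   x,    y,   z,   r,   g,   b
     0.0,  1.0, 0.0, 1.0, 0.0, 0.0,
    -1.0, -1.0, 0.0, 0.0, 1.0, 0.0,
     1.0, -1.0, 0.0, 0.0, 0.0, 1.0,
]);

const interleaved = new THREE.InterleavedBuffer(data, 6); // 6 floats per vertex

const geometry = new THREE.BufferGeometry();
// Both attributes view the same buffer, at different offsets.
geometry.setAttribute('position', new THREE.InterleavedBufferAttribute(interleaved, 3, 0));
geometry.setAttribute('color', new THREE.InterleavedBufferAttribute(interleaved, 3, 3));
```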
Instancing is when you use one set of attributes across many geometry instances.
One example would be, say, a cube: v1,v1,v1,v2,v2,v2...v24,v24,v24, 24 vertices describing a cube with sharp edges in one attribute. You can have another attribute with 24 normals, and another with indices. If you wanted to position this somewhere, you would use a uniform and apply it to the position attribute.
If you want to make 16683 cubes, each with an individual position, you can issue a draw call per cube with the same cube (attributes) bound, changing the position uniform each time.
Alternatively, you can make another, instanced attribute, pos1,pos1,pos1...pos16683,pos16683,pos16683, with 16683 positions for that many instances of the cube. When you issue an instanced draw call with these attributes bound, you can draw all 16683 instances of the cube within that one call. Instead of using a position uniform, you would read this extra attribute.
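In three.js this maps to InstancedBufferGeometry plus InstancedBufferAttribute and a shader that reads the per-instance attribute. A sketch under stated assumptions, not a drop-in implementation: the 'offset' attribute name and random placement are illustrative, scene is assumed to exist, and setAttribute is addAttribute in older builds:

```javascript
const COUNT = 16683;

// The shared cube (BoxBufferGeometry in older three.js releases).
const box = new THREE.BoxGeometry(1, 1, 1);

const geometry = new THREE.InstancedBufferGeometry();
geometry.index = box.index;
geometry.attributes.position = box.attributes.position;
geometry.attributes.normal = box.attributes.normal;

// One position per instance instead of a per-draw-call uniform.
const offsets = new Float32Array(COUNT * 3);
for (let i = 0; i < COUNT; i++) {
  offsets.set(
    [(Math.random() - 0.5) * 100,
     (Math.random() - 0.5) * 100,
     (Math.random() - 0.5) * 100],
    i * 3
  );
}
geometry.setAttribute('offset', new THREE.InstancedBufferAttribute(offsets, 3));

// The vertex shader reads the per-instance attribute instead of a uniform.
const material = new THREE.ShaderMaterial({
  vertexShader: `
    attribute vec3 offset;
    void main() {
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position + offset, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }
  `,
});

const mesh = new THREE.Mesh(geometry, material);
mesh.frustumCulled = false; // the culler doesn't know about per-instance offsets
scene.add(mesh); // all 16683 cubes render in a single draw call
```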
In the case of your points, this does not make sense, since they are mapped 1:1 to the attribute. Meaning, you assign the position of each point inside that attribute, and there is no further need to transform it with some kind of uniform. With instancing, you could instead turn each point into something more complex, say a cube.
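For the dragging-points case specifically, a plain BufferGeometry with a flagged attribute update is usually all that's needed. A minimal sketch, assuming a recent three.js (older builds use addAttribute and THREE.VertexColors), and noting that a per-point 'size' attribute needs a custom ShaderMaterial, since PointsMaterial only has a single uniform size:

```javascript
const COUNT = 10000;
const positions = new Float32Array(COUNT * 3);
const colors = new Float32Array(COUNT * 3);
const sizes = new Float32Array(COUNT);

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.BufferAttribute(positions, 3));
geometry.setAttribute('color', new THREE.BufferAttribute(colors, 3));
geometry.setAttribute('size', new THREE.BufferAttribute(sizes, 1));

const points = new THREE.Points(
  geometry,
  new THREE.PointsMaterial({ vertexColors: true }) // simplification; ignores 'size'
);

// Called from the drag handler: write the new position, then flag the
// attribute so three.js re-uploads the buffer before the next render.
function movePoint(i, x, y, z) {
  positions.set([x, y, z], i * 3);
  geometry.attributes.position.needsUpdate = true;
}
```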

Related

Hidden and visible surface rendering in javascript by overlapping colours

I am trying to implement a hidden surface determination algorithm in my 3D renderer. I have found very good approaches, such as the Z-buffer or Warnock's algorithm, but they are extremely resource-consuming. Thus I wondered: why not use opaque overlapping colours, with which I could get the same visual results? I would like to receive some feedback and opinions before going further, and in case it turns out to be a good solution, of course, to use this post as a way of sharing it with the community. The method basically comes down to: 1) ordering all polygons in the scene by their Z coordinate, and 2) rendering all of them in that order, using opaque colours. The image/view/visual effect would be the same, without having to resort to a costly pixel-by-pixel computation.
(Example: say I have two intersecting polygons, P1 and P2. Given that the viewer's closest Z coordinate is 0, if P1z=10 and P2z=3, then the rendering order would be P1 first, then P2, back to front. When drawn, P2's colour will cover P1's edges and colours in the 2D XY intersection between the two polygons.) What cons do you think this could have? Do you think it would suffice for the problem?
PS: I do not use polygonal meshes; the 3D objects to be processed are simple convex and concave figures.
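For reference, the ordering-and-overpainting step described here is essentially the painter's algorithm. A minimal sketch (the polygon list, its z field, and the drawPolygon helper are hypothetical):

```javascript
// Back-to-front rendering: sort farthest-first so nearer polygons
// are painted over farther ones with opaque fills.
function paintersRender(ctx, polygons) {
  const ordered = [...polygons].sort((a, b) => b.z - a.z); // viewer near z = 0
  for (const poly of ordered) {
    drawPolygon(ctx, poly); // opaque fill covers whatever is behind it
  }
}
```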
Polygons can intersect (even partially), and determining which part of one polygon is in front of the other is a very expensive computation. This is the classic failure case of the painter's algorithm: a pure depth sort has no correct order for intersecting or cyclically overlapping polygons.

three.js How to hack OutlinePass to outline all selected objects?

I would like to modify the OutlinePass to outline all of the selected objects in the scene, including those contained within the bounds of another in screen space (I also hope I just used that term correctly).
I am using three.js OutlinePass to indicate objects currently selected in the scene. With ray picking I append to the selected objects array, then update outlinePass.selectedObjects with said array.
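In code, that wiring is roughly the following (a sketch; the composer, raycasting, and render loop setup are omitted, and OutlinePass lives in the three.js examples, so the import path varies by build):

```javascript
// OutlinePass comes from the three.js examples (THREE.OutlinePass in
// script-tag builds, or an import from examples/jsm/postprocessing).
const outlinePass = new THREE.OutlinePass(
  new THREE.Vector2(window.innerWidth, window.innerHeight),
  scene,
  camera
);
composer.addPass(outlinePass);

const selectedObjects = [];
function onPick(pickedMesh) {
  selectedObjects.push(pickedMesh);
  outlinePass.selectedObjects = selectedObjects; // the pass outlines these
}
```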
The objects I select are PlaneBufferGeometry with MeshBasicMaterial. Each subsequently drawn plane has an increasing renderOrder as well as a slightly larger offset along the z axis (which in my case points upwards), so that they can be picked correctly.
If I select two disjoint planes, the outline works correctly.
If I select two intersecting planes, the outline works okay: it only draws around the combined outline of the two intersecting shapes. This effect is actually nice, but it would collide with fixing the next point anyway.
If I select two planes, where one is contained within the other (contained in terms of view, looking from where the camera is), then only the outer shape is outlined. Yes, that's probably a feature of OutlinePass, not a bug.
The current outline behaviour matches what the pass is designed to do (I linked an easy-to-understand list of the steps it performs).
I've spent at least 1-2 hours following the OutlinePass source, but I'm not familiar with vertex shaders or depth masks, and while I would like to learn about them in the future, I can't do that right now. I believe the OutlinePass could be modified to achieve what I need.
The OutlinePass currently overrides the scene's materials to prepare for the edge detection pass. I'm hoping that behavior could be changed, by modifying one of the shaders it uses or its depth parameters, so that different objects become distinguishable to the edge detection pass, and each selected object gets its own outline rather than only their "encompassing shapes", so to speak.
JSFiddle here; look for the UNCOMMENT line at the bottom of the JS to see the issue I described. Once you uncomment that line, I would like both of these planes to be outlined.
I'm aware there are other ways to show object outlines (like enlarging an object copy behind it), but I'm interested in using the OutlinePass :). Thank you!

How many webgl.draw calls does Three.js make for a given number of geometries/materials/meshes

How would I calculate the number of drawArrays/drawElements calls THREE.renderer.render(scene, camera) would make?
I'm assuming one call per geometry, with attributes for the materials/mesh properties. But that could easily be wrong or incomplete.
Also, would using the BufferGeometry etc make a difference?
rstats is what I am using; it's pretty cool. You can get almost all the important information: framerate, memory, render info, WebGL draw calls... It really helps with three.js performance. It's great work from Jaume Sanchez; here is the tutorial:
https://spite.github.io/rstats/
Per WestLangley's comment, I'd search THREE.WebGLRenderer for renderer.info and look at the logic there.
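renderer.info can also simply be read after a frame; treat the exact field layout as version-dependent, but in most three.js releases it looks like this:

```javascript
renderer.render(scene, camera);
// Per-frame draw call count; field names vary slightly across releases.
console.log(renderer.info.render.calls);
```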
The draw calls themselves differ a bit. If you have one geometry, let's say cubeGeometry, one material cubeMaterialRed, and five cube nodes [c1, c2, ..., c5], you'll have five draw calls. I believe three.js would then bind the geometry once and do five consecutive draw calls with the same shader and mostly the same uniforms (only the position/rotation/scale matrix should change, while the color red could be bound once).
If you have five different materials [cMatRed, cMatBlue, ..., cMatPink], you'll have five draw calls with the same geometry, but the uniform for the material's color will be set differently for each one.
If you have five different geometries, in five different nodes, the geometry will have to be bound before every draw call.
This can be reduced by using instancing where supported. In the second example, it would bind the geometry to be instanced once and set up attributes containing the positions/rotations/scales and colors; one draw call would be issued with these properties, rendering 5 geometries at the locations and in the colors provided by the attributes.
Three.js currently does not do any batching and optimization along those lines so instancing would be done manually.
This all means that there's pretty much a 1:1 relation between draw calls and the number of nodes that contain geometry. Object3D->Object3D->Mesh would result in three nodes, but one draw call.

Three.js Mesh or Geometry content

I'm new to Three.js; is there a way to get separate objects (elements/shells) out of a Mesh or Geometry object?
If there's no native way to do that, how could I implement a method for finding faces that are not connected to the rest and then detaching them so they form their own Mesh object?
Background: I'm loading a 3d model and would like to be able to unify this model using ThreeBSP, I need to separate the objects before applying the boolean operations.
Thank you
Dig into the Geometry object. It has an array of faces. I don't think there is a native way to check which ones are contiguous.
Shooting from the hip, "contiguous" in this case means faces that share points with something that shares points with something that shares points, etc. So pick a face, store its defining points, find any faces that also use those points, store their points, find all the faces that share any of the expanded points, and so on (see the sketch below). Check out the "flood fill" algorithm for some direction on how to use recursion, as well as how to do the bookkeeping needed to avoid duplicates keeping you searching in a loop forever.
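A hedged sketch of that flood fill over a legacy THREE.Geometry, where each Face3 stores vertex indices a/b/c (the function name and the shell representation are my own):

```javascript
// Group the faces of a legacy THREE.Geometry into connected shells.
// Faces sharing any vertex index belong to the same shell.
function findShells(geometry) {
  const faces = geometry.faces;
  const visited = new Array(faces.length).fill(false);

  // Map each vertex index to the faces using it, for fast neighbor lookup.
  const vertexToFaces = new Map();
  faces.forEach((f, i) => {
    for (const v of [f.a, f.b, f.c]) {
      if (!vertexToFaces.has(v)) vertexToFaces.set(v, []);
      vertexToFaces.get(v).push(i);
    }
  });

  const shells = [];
  for (let seed = 0; seed < faces.length; seed++) {
    if (visited[seed]) continue;
    const shell = [];
    const stack = [seed]; // iterative flood fill avoids recursion limits
    visited[seed] = true;
    while (stack.length) {
      const fi = stack.pop();
      shell.push(fi);
      const f = faces[fi];
      for (const v of [f.a, f.b, f.c]) {
        for (const ni of vertexToFaces.get(v)) {
          if (!visited[ni]) { visited[ni] = true; stack.push(ni); }
        }
      }
    }
    shells.push(shell); // one array of face indices per disconnected piece
  }
  return shells;
}
```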
Good Luck

Index buffers in WebGL?

I am trying to learn some WebGL (from this tutorial http://learningwebgl.com/blog/?page_id=1217). I followed the guide, and now I am trying to implement my own demo. I want to create a graphics object that contains buffers and data for each individual object to appear in the scene. Currently, I have a position vertex buffer, a texture coordinate buffer, and a normals buffer. In the tutorial, he uses another buffer, an index buffer, but only for cubes. What is the index buffer actually for? Should I implement it, and is it useful for anything other than cubes?
Vertices of your objects are defined by positions in a 3D coordinate system (a Euclidean coordinate system). So you can take every two consecutive vertices and connect them with a line right after your 3D coordinate system is projected to a 2D raster (the screen or some target image) by the rasterization process. You'll get a so-called wireframe.
The problem with a wireframe is that it's not definite. If you look at a wireframe cube from particular angles, you cannot say how the cube is exactly rotated. That's because you need visibility algorithms to determine which part of the cube is closer to the observer's position (the position of the camera).
But lines themselves cannot define a surface, which is necessary to determine which side of the cube is closer to the observer than the others. The best way to define surfaces in computer graphics is polygons, specifically triangles (they have lots of advantages for computer graphics).
So you now have a cube defined by triangles (a so-called triangle mesh).
But how do you define which vertices form a triangle? With the index buffer. It contains indices into the vertex buffer (the list of your vertices) and tells the rasterization algorithm which three vertices form a triangle. There are lots of ways to interpret the indices in an index buffer to reduce repetition of the same vertices (one vertex might be part of many triangles); you may find some in articles about graphics primitives.
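Here's the idea in raw WebGL: four vertices describe a quad, and six indices reuse them to form two triangles (gl is assumed to be an existing WebGLRenderingContext; the position buffer upload and attribute setup are omitted):

```javascript
// Four unique vertices of a quad...
const positions = new Float32Array([
  -1, -1,
   1, -1,
   1,  1,
  -1,  1,
]);

// ...reused by two triangles via indices, instead of storing six vertices.
const indices = new Uint16Array([
  0, 1, 2, // first triangle
  0, 2, 3, // second triangle
]);

const indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

// With the index buffer bound, drawElements walks the indices:
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
```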
Technically you don't need an index buffer. There are two ways to render geometry: with glDrawArrays and with glDrawElements. glDrawArrays doesn't use the index buffer; you just write the vertices one after the other into the buffers and then tell GL what to do with the elements. If you use GL_TRIANGLES as the mode in the call, you have to put triples of data (vertices, normals, ...) into the buffers, so when a vertex is used multiple times you have to add it multiple times.
glDrawElements, on the contrary, can be used to store a vertex once and then use it multiple times. There is one catch though: the set of parameters for a single index is fixed, so when you have a vertex where you need two different normals (or another attribute like texture coordinates or colors), you have to store it once for each set of properties.
For spheres glDrawElements makes a lot of sense, as there the parameters match; but for a cube the normals are different: the front face needs a different normal than the top face, even though the position of the two vertices is the same. You still have to put the position into the buffer twice. For that case glDrawArrays can make sense.
It depends on the data which of the two calls needs less storage, but glDrawElements is more flexible (you can always simulate glDrawArrays with an index buffer containing the numbers 0, 1, 2, 3, 4, ...).
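For contrast, the same quad as in the earlier sketch rendered with glDrawArrays, duplicating the shared vertices instead of indexing them:

```javascript
// No index buffer: the two shared corners appear twice (6 vertices, not 4).
const positions = new Float32Array([
  -1, -1,   1, -1,   1,  1, // first triangle
  -1, -1,   1,  1,  -1,  1, // second triangle
]);
// ...buffer upload and attribute setup as before...
gl.drawArrays(gl.TRIANGLES, 0, 6);
```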
