Index buffers in WebGL? - javascript

I am trying to learn some WebGL (from this tutorial http://learningwebgl.com/blog/?page_id=1217). I followed the guide, and now I am trying to implement my own demo. I want to create a graphics object that contains buffers and data for each individual object to appear in the scene. Currently, I have a position vertex buffer, a texture coordinate buffer, and a normals buffer. In the tutorial, he uses another buffer, an index buffer, but only for cubes. What is the index buffer actually for? Should I implement it, and is it useful for anything other than cubes?

The vertices of your objects are defined as positions in a 3D (Euclidean) coordinate system. Once that 3D scene has been projected onto a 2D raster (the screen or some target image) by the rasterization process, you can take every two consecutive vertices and connect them with a line. The result is a so-called wireframe.
The problem with a wireframe is that it is ambiguous. Looking at a wireframe cube from certain angles, you cannot tell exactly how the cube is rotated. To resolve that, you need visibility algorithms that determine which part of the cube is closer to the observer's position (the position of the camera).
But lines by themselves cannot define a surface, and a surface is exactly what you need to decide which side of the cube is closer to the observer than the others. The usual way to define surfaces in computer graphics is with polygons, specifically triangles (they have many advantages for computer graphics).
So your cube is now defined by triangles (a so-called triangle mesh).
But how do you define which vertices form a triangle? With the index buffer. It contains indices into the vertex buffer (the list of your vertices) and tells the rasterizer which three vertices form each triangle. There are several ways to interpret the indices in an index buffer so as to reduce repetition of the same vertices (one vertex may be part of many triangles); you can find some of them in articles about graphics primitives.
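As a minimal sketch (the values here are just an illustration, not taken from the tutorial), a quad built from two triangles can share its four vertices through an index buffer:

```javascript
// A quad built from two triangles: only 4 vertices are stored, and the
// index buffer tells the rasterizer which three of them form each triangle.
const vertices = new Float32Array([
  -1, -1, 0,   // 0: bottom-left
   1, -1, 0,   // 1: bottom-right
   1,  1, 0,   // 2: top-right
  -1,  1, 0,   // 3: top-left
]);

const indices = new Uint16Array([
  0, 1, 2,   // first triangle
  0, 2, 3,   // second triangle (reuses vertices 0 and 2)
]);
```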

Technically you don't need an index buffer. There are two ways to render geometry: with glDrawArrays and with glDrawElements.
glDrawArrays doesn't use the index buffer. You just write the vertices one after the other into the buffers and then tell GL what to do with the elements. If you use GL_TRIANGLES as the mode in the call, you have to put triples of data (vertices, normals, ...) into the buffers, so when a vertex is used multiple times you have to add it multiple times to the buffers.
glDrawElements, on the contrary, can be used to store a vertex once and then use it multiple times. There is one catch though: the set of attributes for a single index is fixed, so when you have a vertex that needs two different normals (or another attribute such as texture coordinates or colors), you have to store it once per set of attributes.
For spheres glDrawElements makes a lot of sense, as there the attributes match up, but for a cube the normals are different: the front face needs a different normal than the top face even though the position of the two vertices is the same, so you still have to put the position into the buffer twice. For that case glDrawArrays can make sense.
It depends on the data which of the two calls needs less memory, but glDrawElements is more flexible (you can always simulate glDrawArrays with an index buffer that simply contains 0, 1, 2, 3, 4, ...).
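A hedged sketch of the two paths follows; it assumes `gl` is a WebGL context and that the shaders, attribute pointers, and the buffers `positionBuffer` and `indexBuffer` already exist (those names, and the data variables, are placeholders):

```javascript
// Path 1: drawArrays, no index buffer. Every triangle's 3 vertices are
// written out in full, even when they repeat.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, expandedVertices, gl.STATIC_DRAW);
gl.drawArrays(gl.TRIANGLES, 0, expandedVertexCount);

// Path 2: drawElements. Vertices are stored once and reused via indices.
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, sharedVertices, gl.STATIC_DRAW);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
```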

Related

How to apply texture pixels to cloud of points vertices?

I'm trying to move the pixels of an image held in a texture onto a set of vertices, so I can use them as a point cloud in WebGL.
One way to do this would be to render the texture to a framebuffer, then use gl.readPixels() to copy the data into a JavaScript array, then move the array back to the GPU with gl.bufferData().
Treating the data as vertices, the point cloud could be rendered with gl.drawArrays() using the gl.POINTS primitive.
But this requires the data to travel from the GPU to the CPU and back again, which could become costly, especially for video.
Is there a way to move data directly from a texture to a vertex list, without leaving the GPU?
Any references, suggestions, or code examples greatly appreciated!
Thanks to Blindman67 for pointing out that you can access textures in the vertex shader; I didn't know this. You can simply use texture2D(). Working examples are https://www.khronos.org/registry/webgl/conformance-suites/2.0.0/conformance/rendering/vertex-texture-fetch.html for WebGL and http://webglsamples.org/WebGL2Samples/#texture_vertex for WebGL2. A good search term is "WebGL vertex texture fetch". Don't waste time (as I did) following old links and trying to get calls like texture2DLodEXT() to work.
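For reference, here is a minimal sketch of the vertex-shader side (WebGL1 GLSL embedded in a JS string); the `u_positions` sampler and `a_texCoord` attribute are made-up names, and this assumes the device reports at least one vertex texture image unit:

```javascript
// Sketch: a WebGL1 vertex shader that fetches each point's position from a
// texture (u_positions) instead of a position attribute. a_texCoord tells
// each point which texel to read; both names are hypothetical.
const vertexShaderSource = `
  attribute vec2 a_texCoord;
  uniform sampler2D u_positions;

  void main() {
    vec4 texel = texture2D(u_positions, a_texCoord); // vertex texture fetch
    gl_Position = vec4(texel.xyz, 1.0);
    gl_PointSize = 2.0;
  }
`;
```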

How do I access depth data in three.js

I've seen shaders that create an outline around edges dynamically based on how much difference there is between the depth (distance from camera to surface) of a pixel at an edge and the depth of a pixel adjacent to it (less depth can mean a thinner outline or none at all).
And I'm interested in using such a shader on my three.js renders, but I think I need to figure out how to access depth data for each pixel.
Three.js documentation mentions a depth setting:
depth - whether the drawing buffer has a depth buffer of at least 16 bits. Default is true.
But I'm not sure what it means by the drawing buffer having a depth buffer. The image buffers I'm familiar with are pixel buffers, with no depth information. Where would I access this depth buffer at?
There's an example on the three.js website that renders the scene to a THREE.WebGLRenderTarget with its depthBuffer enabled and a DepthTexture attached. This gives you access to depth data.
The idea is as follows:
Render the main scene to a WebGLRenderTarget. This target will contain RGB and depth data that can be accessed via its .texture and .depthTexture attributes, respectively.
Take these 2 textures, and apply them to a plane with custom shaders.
In the plane's custom shaders, you can access the texture data to perform whatever calculations you want to play with colors and depth.
Render the second scene (that contains only the plane) to canvas.
Here's the link to the source code of that example. Notice that you can comment out the code on line 73 to allow the color data to display.
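A rough sketch of those steps (not the example's exact code): `postMaterial`, `postScene`, and `postCamera` are assumed to be a full-screen plane with a custom ShaderMaterial and its own scene/camera, and the `tDiffuse`/`tDepth` uniform names are illustrative.

```javascript
// Render target that keeps both a color texture and a depth texture.
const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
target.depthTexture = new THREE.DepthTexture(window.innerWidth, window.innerHeight);

function renderFrame() {
  // 1. Render the main scene into the target (fills .texture and .depthTexture).
  renderer.setRenderTarget(target);
  renderer.render(scene, camera);

  // 2. Feed both textures to the plane's custom shader...
  postMaterial.uniforms.tDiffuse.value = target.texture;
  postMaterial.uniforms.tDepth.value = target.depthTexture;

  // 3. ...and render the second scene (just the plane) to the canvas.
  renderer.setRenderTarget(null);
  renderer.render(postScene, postCamera);
}
```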
Three.js already has a MeshToonMaterial; there's no need to create a new one.
https://github.com/mrdoob/three.js/blob/master/examples/webgl_materials_variations_toon.html
html, body, iframe {margin:0;width:100%;height:100%;border:0}
<iframe src="https://threejs.org/examples/webgl_materials_variations_toon.html"></iframe>

instancing vs bufferGeometry vs interleavedBuffer

I need to draw thousands of points and lines which have position, size, and color attributes, and their positions are dynamic (they change interactively while dragging).
I have been using BufferGeometry until now, but I have found two more things:
instancing
interleaved buffer
I want to know what these are and how they work. What are their advantages and disadvantages? Are they better for my case, or is plain BufferGeometry best for me?
Can you give me a full comparison between these three?
Interleaving means that instead of creating multiple VBOs to contain your data, you create one and mix your data in it. Instead of having one buffer with v1,v1,v1,v2,v2,v2... and another with c1,c1,c1,c2,c2,c2..., you have a single buffer with v1,v1,v1,c1,c1,c1,v2,v2,v2,c2,c2,c2... and set up the attribute pointers with different strides and offsets.
I'm not sure what the upside of this is and am hoping that someone with more experience can answer this better. I'm also not sure what happens if you want to mix types, say less precision for texture coordinates, or whether that would even be good practice.
On the downside, if you have to loop over this and update, for example, the positions but not the colors, that loop may be slightly more complicated than if the data were simply lined up in separate buffers.
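A minimal sketch in three.js terms, since the question mentions BufferGeometry (note that older three.js versions spell `setAttribute` as `addAttribute`):

```javascript
// One shared Float32Array holding x,y,z followed by r,g,b for each vertex.
const data = new Float32Array([
  // x,   y,   z,    r, g, b
   0.0, 0.0, 0.0,    1, 0, 0,
   1.0, 0.0, 0.0,    0, 1, 0,
   0.0, 1.0, 0.0,    0, 0, 1,
]);

const stride = 6; // floats per vertex
const interleaved = new THREE.InterleavedBuffer(data, stride);

const geometry = new THREE.BufferGeometry();
// Both attributes view the same buffer, just with different offsets.
geometry.setAttribute('position', new THREE.InterleavedBufferAttribute(interleaved, 3, 0));
geometry.setAttribute('color',    new THREE.InterleavedBufferAttribute(interleaved, 3, 3));
```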
Instancing is when you use one set of attributes to draw many instances of the same geometry.
One example would be a cube: v1,v1,v1,v2,v2,v2...v24,v24,v24, i.e. 24 vertices describing a cube with sharp edges, in one attribute. You can have another attribute with 24 normals, and another one with indices. If you wanted to position this somewhere, you would use a uniform and apply it to the position attribute.
If you want to draw 16683 cubes, each with an individual position, you can issue one draw call per cube with the same cube (attributes) bound, but with the position uniform changed each time.
Alternatively, you can create another, per-instance attribute, pos1,pos1,pos1.....pos16683,pos16683,pos16683, with 16683 positions for that many instances of the cube. When you issue an instanced draw call with these attributes bound, you can draw all 16683 instances of the cube within that one call. Instead of using a position uniform, you read the position from this extra attribute.
In the case of your points this does not make sense, since they map 1:1 to the attribute: you assign the position of each point directly inside that attribute, and there is no need to transform it with some kind of uniform. With instancing, though, you can turn each point into something more complex, say a cube, as in the sketch below.
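A hedged three.js sketch of that idea; the `instanceOffset` attribute name and the shader line are illustrative, older versions use `addAttribute`, and you would pair this with a ShaderMaterial or RawShaderMaterial that reads the attribute:

```javascript
// Base cube shared by every instance. (BoxGeometry is a BufferGeometry in
// recent three.js releases; older ones call it BoxBufferGeometry.)
const box = new THREE.BoxGeometry(1, 1, 1);

const instancedGeometry = new THREE.InstancedBufferGeometry();
instancedGeometry.index = box.index;
instancedGeometry.attributes.position = box.attributes.position;
instancedGeometry.attributes.normal = box.attributes.normal;

// One offset (x,y,z) per instance, not per vertex.
const count = 16683;
const offsets = new Float32Array(count * 3);
for (let i = 0; i < count; i++) {
  offsets[i * 3 + 0] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 1] = (Math.random() - 0.5) * 100;
  offsets[i * 3 + 2] = (Math.random() - 0.5) * 100;
}
instancedGeometry.setAttribute('instanceOffset', new THREE.InstancedBufferAttribute(offsets, 3));

// The custom vertex shader then reads the per-instance value instead of a uniform:
//   attribute vec3 instanceOffset;
//   gl_Position = projectionMatrix * modelViewMatrix * vec4(position + instanceOffset, 1.0);
```

If you'd rather not write the shader yourself, newer three.js versions also provide THREE.InstancedMesh, which manages a per-instance transformation matrix for you.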

GPU accelerated collision

Has anyone tried accelerating collision detection on the GPU? I thought about passing position+radius for a simple sphere intersection and rendering all intersecting triangle indices to a texture.
Using the GPU
I'm not sure if this is a good idea at all, but the math for this doesn't seem costly in the vertex shader. It wouldn't resolve anything, just fetch the face indices to indicate which faces are relevant. It's also only intended for the terrain.
Using the octree
The terrain is generated via dual contouring, and at its highest LOD several thousand nodes can be required. Storing face indices per dual cell, or connecting them to their octree nodes, is costly in terms of memory and CPU. It already required a lot of optimization and needs to run multi-threaded; I'd like to avoid additional steps on this side.
It might work to use the octree and the density function on the boundaries and trilinearly interpolate the surface, but that requires passing the nodes from the worker, or the position to the worker. Either way this wouldn't completely match the polygons of a cell, but it would at least smooth out the error.
Using the density function
While the required octree nodes and their polygons are adaptive, their sizes vary a lot, so a collision based on the density field function won't always fit the actual underlying geometry; the surface can end up far below or above the geometry.
Any suggestions?

WebGL triangles in a cube

In this tutorial the author displays a cube by defining its 6 faces (6 × 4 vertices) and then telling WebGL about the triangles in each face.
Isn't this wasteful? Wouldn't it be better to define just 8 vertices and tell WebGL how to connect them to get triangles? Are colors shared by multiple vertices a problem?
To make my concern evident: if the author defines triangles with an indices array, why does he need so many vertices? He could specify all the triangles with just 8 vertices in the vertex array.
Author of the example here. The issue is, as you suspected, to do with the colouring of the cube.
The way to understand this kind of code most easily is to think of WebGL's "vertices" as being not just simple points in space, but bundles of attributes. A particular vertex might be the bundle <(1, -1, 1), red>. A different vertex at the same point in space but with a different colour (e.g. <(1, -1, 1), green>) would be a different vertex entirely as far as WebGL is concerned.
So while a cube has only 8 vertices in the mathematical sense of points in space, if you want to have a different colour per face, each of those points must be occupied by three different vertices, one per colour -- which makes 8x3=24 vertices in the WebGL sense.
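As a data-only sketch of what that implies (the values are chosen just for illustration), the corner at (1, -1, 1) shows up three times in the buffers, once per face colour:

```javascript
// Illustrative data only: the same position appears three times because
// position + colour together define a WebGL vertex.
const positions = [
   1, -1, 1,   // belongs to the front face  -> paired with red
   1, -1, 1,   // belongs to the right face  -> paired with green
   1, -1, 1,   // belongs to the bottom face -> paired with blue
   // ... remaining 21 vertices of the 24-vertex cube
];
const colors = [
   1, 0, 0,    // red
   0, 1, 0,    // green
   0, 0, 1,    // blue
   // ...
];
```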
It's not hugely efficient in terms of memory, but memory's cheap compared to the CPU power that a more normalised representation would require for efficient processing.
Hope that clarifies things.
You can use Vertex Buffer Objects (VBOs). See this example. They create a list of vertices and a list of indices "pointing" into the vertices (no duplication of vertices).
