Transforming vertex normals in three.js - javascript

I'm having difficulties with vertex normals in THREE.js. (For reference, I'm using revision 58.) For various reasons I'd like to first calculate the face vertex normals when I set up my geometry, and then be free to transform it, merge it, and so on.
While I realize the normals depend on the vertices, which are transformed when you apply a matrix, I thought geometry.applyMatrix was able to transform them as well. However, while the following works fine:
geometry.applyMatrix(new THREE.Matrix4().makeScale(1, -1, 1));
geometry.computeFaceNormals();
geometry.computeVertexNormals();
...the following order of operations yields reversed vertex normals:
geometry.computeFaceNormals();
geometry.computeVertexNormals();
geometry.applyMatrix(new THREE.Matrix4().makeScale(1, -1, 1));
So I'm simply wondering, is this working as intended? Do I need to first do all the transformations on the geometry before I calculate the vertex normals?

three.js does not support reflections in the object matrix. By setting a negative scale factor, you are reflecting the geometry of the object.
You are free, however, to apply such a matrix to your geometry directly, which is of course what you are doing.
But this will have a number of undesirable consequences, one of which is that the geometry's faces will no longer have counterclockwise winding order, but clockwise. It will also result in reversed face normals as calculated by geometry.computeFaceNormals().
I would advise against doing this unless you are familiar with the inner workings of the library.
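If the goal is to transform and merge geometries freely, one option consistent with the order the question reports as working is to defer the normal computation until all transforms and merges are done. A rough sketch against the r58-era API (otherGeometry is just an illustrative name):
// Transform and merge first (the reflection flips the winding order)...
geometry.applyMatrix(new THREE.Matrix4().makeScale(1, -1, 1));
THREE.GeometryUtils.merge(geometry, otherGeometry);
// ...then compute the normals once, from the final vertex positions,
// matching the order reported to work in the question.
geometry.computeFaceNormals();
geometry.computeVertexNormals();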
three.js r.58

Related

passing multiple secondary geometries into vertex shaders using threejs

Let's say I have a geometry whose vertices I am using to create Points or an InstancedMesh. But then I want to change this underlying geometry to something else, say from a cone to a sphere or some other shape with the same number of vertices. I would like to animate between these without using MorphTargets, so I guess I need to use a custom vertex shader, which is fine; however, I'm a bit stuck on how to pass the additional BufferGeometrys into the vertex shader.
I can't really think how I might do this with uniforms. Has anyone got any ideas? As far as I understand, I can only use int/float/bool/vec/ivec/mat, but I need multiple vertex buffers. Is it just an array of some kind?
I guess I'm trying to find a way of having multiple "full" geometries which I can interrogate within the vertex shader, but I can't figure out how to access/pass these additional buffers into WebGL from three.js.
The default position of the vertices is defined in a BufferAttribute called position. This is passed to the vertex shader as attribute vec3 position.
You could create a new BufferAttribute in your geometry called position2 and declare it in your vertex shader as attribute vec3 position2.
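A rough sketch of that idea using the BufferGeometry API (names like position2 and mixFactor are illustrative, not built-in three.js names; coneGeometry and sphereGeometry stand in for any two geometries with exactly the same vertex count):
// Copy the second geometry's positions into a custom attribute.
var geometry = coneGeometry.clone();
geometry.setAttribute('position2', sphereGeometry.getAttribute('position').clone());

var material = new THREE.ShaderMaterial({
  uniforms: { mixFactor: { value: 0.0 } },  // 0 = first shape, 1 = second shape
  vertexShader: `
    attribute vec3 position2;               // the second geometry's positions
    uniform float mixFactor;
    void main() {
      vec3 p = mix(position, position2, mixFactor);
      gl_PointSize = 2.0;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(p, 1.0);
    }
  `,
  fragmentShader: `
    void main() { gl_FragColor = vec4(1.0); }
  `
});

var points = new THREE.Points(geometry, material);
// Animate by updating material.uniforms.mixFactor.value each frame.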

Three.js tile which has multiple textures using plane geometry

So I am trying to build a tile-based 3D world.
I have successfully managed to do this using plane geometry, height values, etc. But now I have come to a point where I may have to change everything.
The problem is that I want a tile to have multiple textures (using a shader, because I want to blend them). I was able to do this globally (so each tile would have the same textures, using some UV mapping).
However, I fail to understand how I would be able to specify which tile has which textures (and I have about 100 textures), since the plane geometry has only one shader material. I am also not sure whether it is a good idea to send 100 textures to a shader.
So my questions basically boil down to this:
Is there a decent/performant way to link the tiles/vertices to the textures, so I can keep the plane geometry?
- If yes: how?
- If no: should I create each tile separately (a 1x1 plane) and somehow merge them together (performance? vertex blending?) so they act as a single plane (the merged plane then consisting of many 1x1 planes), and use the shader per tile (per 1x1 plane)?
How are these things generally done?
Edit:
Some extra information because it seems that my question is not really clear:
What I want is for a tile (2 faces) to have multiple "materialIndexes", so to speak. Currently I need 1 tile to have 3 textures, so I can blend them in a shader with a specific algorithm.
For example, I want to have a heart shape (red heart on a black background) as a texture, and then, based on the colors, blend/change the other 2 textures so I get, say, a wooden heart on a blue background. Or I should be able to blend 4 textures evenly on the square, each taking up 1/4 of it. But the point here is not what has to be done with the textures, but how I can specify 3, 4, or more such textures for my faces/tiles.
I think you have to take a look at what is called a multi-material object.
THREE.SceneUtils.createMultiMaterialObject( geometry, materials );
If you google for those you will find several examples, like this answer on a similar question.
Or take a look at THREE.MeshFaceMaterial. You can use it to assign multiple materials for the same geometry.
var mesh = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
Where materials is an array of materials, and the faces use a materialIndex parameter to be assigned the right material.
Similar questions here and here
EDIT:
Here is a fiddle to demonstrate that it is possible. I used the code from one of the links.
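A small sketch of the materialIndex approach from the answer above (older three.js API from around the era of this question; grassTexture and rockTexture are placeholder textures):
var materials = [
  new THREE.MeshBasicMaterial({ map: grassTexture }),  // materialIndex 0
  new THREE.MeshBasicMaterial({ map: rockTexture })    // materialIndex 1
];

var geometry = new THREE.PlaneGeometry(10, 10, 10, 10); // a 10 x 10 grid of tiles

for (var i = 0; i < geometry.faces.length; i++) {
  // Pick a material per face; here simply: first half grass, second half rock.
  geometry.faces[i].materialIndex = (i < geometry.faces.length / 2) ? 0 : 1;
}

var mesh = new THREE.Mesh(geometry, new THREE.MeshFaceMaterial(materials));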
If your textures are all the same size, you can do the following:
Create a texture 10x the size of an individual texture (you can do this automatically: create a canvas, draw each texture file onto the canvas at the correct coordinates, and get the texture from the canvas). This allows for 100 (10x10) textures.
Assign the tiles base UV coordinates in the range 0 - 0.1 (since a single texture occupies 1/10th of the global texture).
Add to those UV values the offset of the individual texture (for the second texture, an offset of 0.1 in u and 0 in v; for the 3rd texture, 0.2 and 0; for the 11th, 0 and 0.1).
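A possible sketch of building such an atlas at runtime (tileImages is an assumed array of up to 100 equally sized, already loaded images; TILE is their pixel size):
var TILE = 64;                              // size of one texture in pixels (assumption)
var canvas = document.createElement('canvas');
canvas.width = canvas.height = TILE * 10;   // room for 10 x 10 = 100 textures
var ctx = canvas.getContext('2d');

for (var i = 0; i < tileImages.length; i++) {
  var col = i % 10, row = Math.floor(i / 10);
  ctx.drawImage(tileImages[i], col * TILE, row * TILE, TILE, TILE);
}

var atlas = new THREE.Texture(canvas);
atlas.needsUpdate = true;

// A tile that should show texture i keeps its base UVs in the 0 - 0.1 range
// and gets shifted by the offset of that texture's cell in the atlas:
function uvOffsetFor(i) {
  return { u: (i % 10) * 0.1, v: Math.floor(i / 10) * 0.1 };
}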

three.js - Does it have indexed coloring for vertices?

I've started using three.js, and I know there is coloring of vertices in three.js, but I'm investigating whether there is some way in three.js, or generally in WebGL, to have indexed colors for vertices. For example, I would restrict the coloring to a scale running from blue, over yellow, to red between a minimum and maximum value, based on some values I give to the vertices, and the gradient between two vertices must use that scale of indexed colors. A practical example would be Finite Element Method visualisation.
So, do you know how one might hack this?
Store the indices with the vertex, and pass them on to the fragment shader. In the fragment shader, use the interpolated index to do a lookup in a 1D(*) texture containing the color gradient.
(*) Note that WebGL doesn't actually support true 1D textures; the common approach is to use an Nx1 2D texture.
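A sketch of that approach with a three.js ShaderMaterial (lutTexture is assumed to be an N x 1 DataTexture holding the blue-to-yellow-to-red gradient, and scalarValues a Float32Array with one normalised value per vertex):
geometry.setAttribute('scalar', new THREE.BufferAttribute(scalarValues, 1));

var material = new THREE.ShaderMaterial({
  uniforms: { lut: { value: lutTexture } },
  vertexShader: `
    attribute float scalar;          // e.g. a FEM result value, normalised to [0, 1]
    varying float vScalar;
    void main() {
      vScalar = scalar;              // interpolated across each triangle
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D lut;
    varying float vScalar;
    void main() {
      gl_FragColor = texture2D(lut, vec2(vScalar, 0.5));  // look up the N x 1 gradient
    }
  `
});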

Index buffers in WebGL?

I am trying to learn some WebGL (from this tutorial http://learningwebgl.com/blog/?page_id=1217). I followed the guide, and now I am trying to implement my own demo. I want to create a graphics object that contains buffers and data for each individual object to appear in the scene. Currently, I have a position vertex buffer, a texture coordinate buffer, and a normals buffer. In the tutorial, he uses another buffer, an index buffer, but only for cubes. What is the index buffer actually for? Should I implement it, and is it useful for anything other than cubes?
The vertices of your objects are defined by positions in a 3D (Euclidean) coordinate system. So you can take every two consecutive vertices and connect them with a line once the 3D scene has been projected onto a 2D raster (the screen or some target image) by the rasterization process. You get a so-called wireframe.
The problem with a wireframe is that it is ambiguous. If you look at a wireframe cube from particular angles, you cannot tell exactly how the cube is rotated. That's because you need visibility algorithms to determine which part of the cube is closer to the observer's position (the position of the camera).
But lines by themselves cannot define a surface, which is what you need to determine which side of the cube is closer to the observer than the others. The best way to define surfaces in computer graphics is with polygons, specifically triangles (they have lots of advantages for computer graphics).
So your cube is now defined by triangles (a so-called triangle mesh).
But how do you define which vertices form a triangle? With the index buffer. It contains indices into the vertex buffer (the list of your vertices) and tells the rasterizing algorithm which three vertices form a triangle. There are many ways to interpret the indices in an index buffer so as to reduce repetition of the same vertices (one vertex may be part of many triangles); you can find some in articles about graphics primitives.
Technically you don't need an index buffer. There are two ways to render geometry: with glDrawArrays and with glDrawElements.
glDrawArrays doesn't use the index buffer. You just write the vertices one after the other into the buffers and then tell GL what to do with the elements. If you use GL_TRIANGLES as the mode in the call, you have to put triples of data (vertices, normals, ...) into the buffers, so when a vertex is used multiple times you have to add it multiple times to the buffers.
glDrawElements, on the contrary, can be used to store a vertex once and then use it multiple times. There is one catch though: the set of parameters for a single index is fixed, so when you have a vertex where you need two different normals (or another attribute like texture coordinates or colors), you have to store it once for each set of properties.
For spheres glDrawElements makes a lot of sense, as there the parameters match, but for a cube the normals are different: the front face needs a different normal than the top face, but the position of the two vertices is the same. You still have to put the position into the buffer twice. For that case glDrawArrays can make sense.
It depends on the data which of the calls needs less data, but glDrawElements is more flexible (you can always simulate glDrawArrays with an index buffer that contains the numbers 0, 1, 2, 3, 4, ...).
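A minimal raw-WebGL sketch of the two paths, drawing the same quad (gl is an existing WebGLRenderingContext and posLoc the location of the position attribute in an already bound shader program):
// glDrawElements path: 4 unique vertices plus an index buffer describing 2 triangles.
var positions = new Float32Array([
  -1, -1,   1, -1,   1, 1,   -1, 1
]);
var indices = new Uint16Array([0, 1, 2,  0, 2, 3]);

var posBuf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, posBuf);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, 0, 0);

var idxBuf = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, idxBuf);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

// glDrawArrays path: no index buffer, so the two shared vertices are written
// out again; positions would then hold all 6 vertices (0,1,2, 0,2,3) in full:
// gl.drawArrays(gl.TRIANGLES, 0, 6);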

WebGL triangles in a cube

In this tutorial the author displays a cube by defining its 6 faces (6 x 4 vertices) and then telling WebGL about the triangles in each face.
Isn't this wasteful? Wouldn't it be better to define just 8 vertices and tell WebGL how to connect them to get the triangles? Are colors shared by multiple vertices a problem?
To make my concern evident: if the author defines the triangles with an index array, why does he need so many vertices? He could specify all the triangles with just 8 vertices in the vertex array.
Author of the example here. The issue is, as you suspected, to do with the colouring of the cube.
The way to understand this kind of code most easily is to think of WebGL's "vertices" as being not just simple points in space, but bundles of attributes. A particular vertex might be the bundle <(1, -1, 1), red>. A different vertex that was at the same point in space but had a different colour (e.g. <(1, -1, 1), green>) would be a different vertex entirely as far as WebGL is concerned.
So while a cube has only 8 vertices in the mathematical sense of points in space, if you want a different colour per face, each of those points must be occupied by three different vertices, one per colour, which makes 8 x 3 = 24 vertices in the WebGL sense.
It's not hugely efficient in terms of memory, but memory's cheap compared to the CPU power that a more normalised representation would require for efficient processing.
Hope that clarifies things.
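To make the "bundle of attributes" idea concrete, here is a sketch of how a single corner of the cube ends up in the buffers three times, once per adjacent face colour (the colours are just examples, not the tutorial's exact values):
var positions = new Float32Array([
  1, -1, 1,   // the corner as part of a red face
  1, -1, 1,   // the same point again, for a green face
  1, -1, 1    // and once more, for a blue face
  // ... the remaining 21 of the 24 WebGL vertices
]);
var colors = new Float32Array([
  1, 0, 0,    // red
  0, 1, 0,    // green
  0, 0, 1     // blue
  // ...
]);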
You can use Vertex Buffer Objects (VBOs). See this example. They create a list of vertices and a list of indices "pointing" to the vertices (no duplication of vertices).
