I know that we can load JSON models in WebGL, but I don't know how to animate them if we have a rigged model loaded. Is there any way of doing this without three.js?
You can animate a rigged model using THREE.js (though you seem not to want its built-in functionality).
What THREE.js does in the background is pass all the bone transforms (an array of matrices) to the vertex shader, along with per-vertex bone indices (up to 4) and bone weights. In the vertex shader, it blends between those matrices based on the vertex weights and transforms the vertex. So in theory you can pass those values to the vertex shader yourself and animate things, or just use THREE.js's animation routines.
It can use two methods to store all this data. One method uses an "image texture" that stores all those matrices and does some fancy footwork to turn the image back into matrices in the vertex shader. The other method passes a uniform matrix array (for newer graphics cards this is the preferred method).
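The per-vertex blend described above can be sketched on the CPU in plain JavaScript (no three.js; matrix layout and function names here are just for illustration):

```javascript
// Sketch: the per-vertex blend a skinning vertex shader performs.
// Each bone matrix is a flat, column-major 4x4; each vertex carries
// up to 4 bone indices and matching weights (summing to 1).
function transformPoint(m, p) {
  // m: 16-element column-major 4x4 matrix, p: [x, y, z] (w assumed 1)
  const [x, y, z] = p;
  return [
    m[0] * x + m[4] * y + m[8]  * z + m[12],
    m[1] * x + m[5] * y + m[9]  * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

function skinVertex(position, boneIndices, boneWeights, boneMatrices) {
  const out = [0, 0, 0];
  for (let i = 0; i < 4; i++) {
    const w = boneWeights[i];
    if (w === 0) continue; // unused bone slot
    // Transform by this bone's matrix, then accumulate weighted result.
    const p = transformPoint(boneMatrices[boneIndices[i]], position);
    out[0] += p[0] * w;
    out[1] += p[1] * w;
    out[2] += p[2] * w;
  }
  return out;
}
```

In the real vertex shader this same sum runs per vertex on the GPU, with the matrices arriving as a uniform array (or decoded from a texture).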
Related
Let's say I have a geometry whose vertices I am using to create Points or an InstancedMesh. But then I want to change this underlying geometry to something else, say from a cone to a sphere or something else with the same number of vertices. I would like to animate between these without using MorphTargets, so I guess I need to use a custom vertex shader, which is fine; however, I'm a bit stuck as to how to pass the additional BufferGeometrys into the vertex shader.
I can't really think how I might do this with uniforms. Has anyone got any ideas? In my understanding I can only use int/float/bool/vec/ivec/mat, but I need multiple vertex buffers. Is it just an array of some kind?
I guess I'm trying to find a way of having multiple "full" geometries which I can interrogate within the vertex shader, but I can't figure out how to access/pass these additional buffers into WebGL from three.js.
The default position of the vertices is defined in a BufferAttribute called position. This is passed to the vertex shader as attribute vec3 position.
You could create a new BufferAttribute in your geometry called position2 and you can define it in your vertex shader as attribute vec3 position2.
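In the shader you would then blend the two attributes with something like mix(position, position2, u_mix). A plain-JS sketch of that per-vertex blend (the uniform name u_mix is a placeholder):

```javascript
// Sketch: the linear blend the custom vertex shader would perform
// between the "position" and "position2" attributes, done on the CPU.
// Both buffers are flat [x0, y0, z0, x1, y1, z1, ...] arrays of equal
// length; mixAmount is 0 (all position) to 1 (all position2).
function blendPositions(position, position2, mixAmount) {
  const out = new Float32Array(position.length);
  for (let i = 0; i < position.length; i++) {
    // Equivalent of GLSL: mix(position, position2, mixAmount)
    out[i] = position[i] * (1 - mixAmount) + position2[i] * mixAmount;
  }
  return out;
}
```

Animating mixAmount from 0 to 1 over time then morphs the mesh from the first geometry to the second.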
Raycasting selection is working fine for my project on static meshes, however for animated meshes the ray selection doesn't seem to see the movement of the mesh and only responds to the mesh's non-animated (original) position.
For the animated model, the raycaster only picks up the pose from the first frame; a small red dot is drawn where the ray detects the model.
The bone matrices are computed by the CPU, but the new vertex positions are computed by the GPU.
So the CPU has access to the first pose only.
That's why raycasting does not work (properly) for skinned meshes.
My idea is to update the model position when updating the animation, or use GPU calculations to get the location, but I don't know how to do it. I'm looking forward to your suggestions. Thank you.
Static model
Animated model
jsfiddle view
Currently, raycasting in three.js supports morph targets (for THREE.Geometry only) by replicating the vertex shader computations on the CPU.
So yes, in theory, you could add the same functionality to support raycasting for skinned meshes for both THREE.Geometry and THREE.BufferGeometry. However, a more efficient approach would be to use "GPU picking".
You can find an example of GPU picking in this three.js example. In the example, the objects are not animated, but the concept is the same.
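The core trick in GPU picking is to render each object in a flat color that encodes its ID, read back the pixel under the cursor with gl.readPixels, and decode it. A sketch of the encode/decode step (plain JS):

```javascript
// Sketch: encode an object ID into an RGB triplet (one byte per
// channel) for a GPU-picking render pass, and decode the pixel read
// back afterwards. Supports IDs up to 2^24 - 1.
function idToColor(id) {
  return [
    (id >> 16) & 0xff, // red byte
    (id >> 8) & 0xff,  // green byte
    id & 0xff,         // blue byte
  ];
}

function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}
```

Because the picking pass renders the skinned mesh with the same vertex shader as the visible pass, the pose the GPU computes is exactly what gets picked, which is why this works for animated meshes.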
three.js r.98
As far as I know, it is called projective texture mapping. Are there any library methods to project primitive 2D shapes (lines mostly) to a texture?
This three.js example looks close to what I need. I tried replacing the decal material (decalMaterial) with THREE.LineBasicMaterial, but I get squares instead of lines.
Drawing a line onto a 3D model's texture is possible. But do you want that, or just a decal on your model? There are other ways of applying a decal that might be better, but I'll assume you want to paint onto the texture. I've done it in Unity, but not WebGL, and I'm not familiar with WebGL's limitations. It was a program that lets you paint onto 3D models like in Substance Painter, ZBrush, etc. I recommend doing this in a non-destructive manner at runtime: paint into a separate render texture, then combine the two textures when rendering your final model.
To do this you are going to need to render your model into texture space. So in your vertex shader, output the model's UV as the position.
I've been writing HLSL and Cg a lot recently, so my GLSL is super rusty. Treat this more like pseudocode.
//Vertex Shader
in vec4 position;
in vec2 uv;
out vec4 fworldPos;
out vec2 fuv;
uniform mat4 transform;
void main()
{
//We use the uv position of the model so we can draw into
//its texture space.
gl_Position = vec4(uv.x*2.0-1.0,uv.y*2.0-1.0,0.0,1.0);
//Give the fragment shader the uv
fuv = uv;
//We will need to do per pixel collision detection with
//the line or object so we need the world position.
    fworldPos = transform*position;
}
You then need to do collision detection with your brush in the fragment shader, testing whether that world position is contained within your brush. Again, this is if you want to paint on it. In the project I did, we used raycasts onto the model so the brush would follow along the surface, which makes the above easy. This will give you a very rough-looking brush, and you'll need to add a lot of parameters to adjust how the brush looks: falloff strength, falloff radius, etc. Basically all the parameters you see in other 3D painting software.
You can do this with planes, boxes, spheres, lines, whatever. You just need to test whether the world position is contained within your brush. You will need different shaders for different types of brushes, though.
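As a minimal sketch, the containment test for a spherical brush might look like this (plain JS; the linear falloff is one arbitrary choice among the falloff parameters mentioned above):

```javascript
// Sketch: per-fragment test of whether a world position falls inside
// a spherical brush, returning a paint strength in [0, 1] with a
// simple linear falloff from the brush center out to its radius.
function brushStrength(worldPos, brushCenter, radius) {
  const dx = worldPos[0] - brushCenter[0];
  const dy = worldPos[1] - brushCenter[1];
  const dz = worldPos[2] - brushCenter[2];
  const dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
  if (dist >= radius) return 0; // outside the brush: no paint
  return 1 - dist / radius;     // linear falloff toward the edge
}
```

In the fragment shader the same test runs on fworldPos, and the returned strength scales how much brush color is blended into the render texture.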
I've started using three.js, and I know there is vertex coloring in three.js, but I'm investigating whether there is some way in three.js, or generally in WebGL, to have indexed colors for vertices. For example, I would restrict coloring to a scale from blue, over yellow, to red, mapped from a minimum to a maximum value based on some values I give to the vertices, and the gradient between two vertices must use that scale of indexed colors. A practical example would be Finite Element Method visualisation.
So, do you know how one might hack this?
Store the indices with the vertices and pass them on to the fragment shader. In the fragment shader, use the interpolated index to do a lookup in a 1D(*) texture containing the color gradient.
(*) Note that WebGL doesn't actually support true 1d textures, the common approach is to use an Nx1 2D texture.
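Building that Nx1 lookup table on the CPU might look like this (plain JS sketch; blue to yellow to red is hard-coded as two linear segments, which is just one possible choice):

```javascript
// Sketch: build an N-entry blue -> yellow -> red gradient as flat RGB
// byte data (suitable for uploading as an Nx1 texture), and sample it
// with a normalized value in [0, 1] as the fragment shader would.
function buildGradient(n) {
  const data = new Uint8Array(n * 3);
  for (let i = 0; i < n; i++) {
    const t = i / (n - 1);
    let r, g, b;
    if (t < 0.5) {
      const s = t * 2;         // blue (0,0,255) -> yellow (255,255,0)
      r = 255 * s; g = 255 * s; b = 255 * (1 - s);
    } else {
      const s = (t - 0.5) * 2; // yellow (255,255,0) -> red (255,0,0)
      r = 255; g = 255 * (1 - s); b = 0;
    }
    data[i * 3] = Math.round(r);
    data[i * 3 + 1] = Math.round(g);
    data[i * 3 + 2] = Math.round(b);
  }
  return data;
}

function sampleGradient(data, t) {
  const n = data.length / 3;
  const i = Math.min(n - 1, Math.max(0, Math.round(t * (n - 1))));
  return [data[i * 3], data[i * 3 + 1], data[i * 3 + 2]];
}
```

Because the index is interpolated across the triangle before the lookup, the gradient between two vertices automatically follows this fixed scale rather than blending the endpoint colors directly.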
I am trying to learn some WebGL (from this tutorial http://learningwebgl.com/blog/?page_id=1217). I followed the guide, and now I am trying to implement my own demo. I want to create a graphics object that contains buffers and data for each individual object to appear in the scene. Currently, I have a position vertex buffer, a texture coordinate buffer, and a normals buffer. In the tutorial, he uses another buffer, an index buffer, but only for cubes. What is the index buffer actually for? Should I implement it, and is it useful for anything other than cubes?
Your objects' vertices are defined by positions in a 3D (Euclidean) coordinate system. So you could take every two consecutive vertices and connect them with a line once the 3D coordinate system is projected to a 2D raster (the screen or some target image) by the rasterization process. You'd get a so-called wireframe.
The problem with a wireframe is that it's ambiguous. If you look at a wireframe cube at particular angles, you cannot say how the cube is actually rotated. That's because you need visibility algorithms to determine which part of the cube is closer to the observer's position (the position of the camera).
But lines themselves cannot define a surface, which is necessary to determine which side of the cube is closer to the observer than the others. The best way to define surfaces in computer graphics is with polygons, specifically triangles (they have lots of advantages for computer graphics).
So the cube is now defined by triangles (a so-called triangle mesh).
But how do you define which vertices form a triangle? With the index buffer. It contains indices into the vertex buffer (the list of your vertices) and tells the rasterization algorithm which three vertices form a triangle. There are many ways to interpret the indices in an index buffer to reduce repetition of the same vertices (one vertex may be part of many triangles); you can find some in the article about graphics primitives.
Technically you don't need an index buffer. There are two ways to render geometry: with
glDrawArrays and glDrawElements. glDrawArrays doesn't use the index buffer. You just write the vertices one after the other into the buffers and then tell GL what to do with the elements. If you use GL_TRIANGLES as the mode in the call, you have to put triples of data (vertices, normals, ...) into the buffers, so when a vertex is used multiple times you have to add it multiple times to the buffers.
glDrawElements, on the contrary, can be used to store a vertex once and then use it multiple times. There is one catch though: the set of attributes for a single index is fixed, so when you have a vertex where you need two different normals (or another attribute like texture coordinates or colors), you have to store it once for each set of attributes.
For spheres glDrawElements makes a lot of sense, as there the attributes match; but for a cube the normals differ: the front face needs a different normal than the top face, while the positions of the two vertices are the same. You still have to put the position into the buffer twice. For that case glDrawArrays can make sense.
It depends on the data which of the two calls needs less memory, but glDrawElements is more flexible (you can always simulate glDrawArrays with an index buffer that contains the numbers 0, 1, 2, 3, 4, ...).
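As a concrete sketch (plain JS): a quad drawn as two triangles needs 6 vertex entries in glDrawArrays style, but only 4 shared vertices plus a 6-entry index buffer in glDrawElements style. Expanding the indices reproduces the non-indexed layout:

```javascript
// Sketch: a unit quad as indexed geometry, 4 shared vertices and
// 6 indices forming two triangles (glDrawElements-style data).
const positions = [
  0, 0, 0, // vertex 0
  1, 0, 0, // vertex 1
  1, 1, 0, // vertex 2
  0, 1, 0, // vertex 3
];
const indices = [0, 1, 2, 0, 2, 3]; // two triangles sharing an edge

// Expanding the indices into a flat vertex stream yields the
// non-indexed (glDrawArrays-style) data: 6 vertices instead of 4,
// with vertices 0 and 2 duplicated.
function expandIndexed(positions, indices) {
  const out = [];
  for (const i of indices) {
    out.push(positions[i * 3], positions[i * 3 + 1], positions[i * 3 + 2]);
  }
  return out;
}
```

With more attributes per vertex (normals, UVs, colors) the savings from sharing vertices grow, which is why indexed drawing usually wins for smooth meshes.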