Threejs Transform Matrix ordering - javascript

I would like to know how three.js orders multiple matrices.
For instance,
......
var mesh = new THREE.Mesh( geometry, material );
mesh.position.set( 0, 20, 0 ); // T, translation matrix
mesh.rotation.set( 0, Math.PI, 0 ); // R, rotation matrix
mesh.scale.set( 1, 1, 10 ); // S, scale matrix
So, how does three.js combine the three matrices? Will it follow the order in which I set the values (in my example that is TRS, so the final matrix would be T*R*S), or does it use a fixed order (for instance, always SRT, so the final matrix would be S*R*T)?

You can set the properties as you have done, so long as the mesh property 'matrixAutoUpdate' is true and you only want the typical order: scale, then rotation, then translation.
If you need something more complicated, such as a specific series of translations and rotations, then you should set matrixAutoUpdate to false and calculate the matrix yourself.
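For reference, when matrixAutoUpdate is true the mesh's local matrix is rebuilt by Object3D.updateMatrix(), which calls matrix.compose( position, quaternion, scale ); that is equivalent to T * R * S, i.e. scale is applied to the vertices first, then rotation, then translation. A minimal sketch of the equivalent manual composition, reusing the values from the question (the variable names T, R, S and trs are just illustrative):
var T = new THREE.Matrix4().makeTranslation( 0, 20, 0 );
var R = new THREE.Matrix4().makeRotationFromEuler( new THREE.Euler( 0, Math.PI, 0 ) );
var S = new THREE.Matrix4().makeScale( 1, 1, 10 );
// Same result as mesh.updateMatrix() with the position/rotation/scale set above:
var trs = new THREE.Matrix4().multiplyMatrices( T, R ).multiply( S ); // T * R * S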
Important note: when using the matrix multiply function, you must do it in 'reverse' order. For example, if you want to position a limb, then logically you would:
1) do local rotation (such as arm up/down)
2) move limb with respect to its offset (put it at its joint)
3) rotate according to body rotation
4) move to body offset
The actual operations would be done in reverse:
arm_mesh.matrixAutoUpdate = false;
var mat4 = new THREE.Matrix4(); // scratch matrix, reused for every step
var arm_matrix = arm_mesh.matrix;
arm_matrix.identity(); // reset
arm_matrix.multiply(mat4.makeTranslation(body_pos.x, body_pos.y, body_pos.z)); // step 4: move to body offset
arm_matrix.multiply(mat4.makeRotationFromQuaternion(body_rotation)); // step 3: rotate according to body rotation
arm_matrix.multiply(mat4.makeTranslation(arm_offset.x, arm_offset.y, arm_offset.z)); // step 2: move limb to its joint offset
arm_matrix.multiply(mat4.makeRotationFromEuler(new THREE.Euler(arm_angle, 0, 0))); // step 1: local rotation (arm up/down)
Of course this does not utilize or account for parent/child relationships.
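For comparison, the same hierarchy can usually be expressed with parent/child objects and matrixAutoUpdate left on; this is just a sketch under the same assumed variables (body_pos as a Vector3, body_rotation as a Quaternion, arm_offset and arm_angle), not part of the original answer:
var body = new THREE.Object3D();
body.position.copy( body_pos );
body.quaternion.copy( body_rotation );
arm_mesh.position.copy( arm_offset ); // joint offset relative to the body
arm_mesh.rotation.set( arm_angle, 0, 0 ); // local rotation (arm up/down)
body.add( arm_mesh ); // three.js multiplies parent and child matrices for you
scene.add( body );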


Three.js rope / cable effect - animating thick lines

With Three.js I want to create the effect of an object swinging from a cable or rope. It doesn't require real physics, as the "swinging" object simply follows a fixed animation. The easiest solution is using THREE.Line; however, the problem is that THREE.Line can only be 1px thick and looks kind of awful.
In the three.js examples there is a "fat lines" example:
https://threejs.org/examples/?q=lines#webgl_lines_fat
However, once I have created the line using LineGeometry(), I cannot figure out how to animate it.
The only solution I have found so far is to delete the line and create a new one every single frame, which works but seems like a really uneconomical, poorly optimized way to do it.
Does anyone know of a better way to animate a LineGeometry without having to delete and replace it each frame? Or is there another method within three.js that would allow me to create thicker animated lines?
Thanks!!
I actually have a small project where I animate a bulb swinging along some rope. You can access it here; the functions I'm talking about below are in helperFns.js.
What I basically do is create the attached object separately:
let geometry = new THREE.SphereGeometry( 1, 32, 32 );
var material = new THREE.MeshStandardMaterial( { color: 0x000000, emissive: 0xffffff, emissiveIntensity: lightIntensity } );
bulb = new THREE.Mesh( geometry, material );
light = new THREE.PointLight( 0xF5DCAF, lightIntensity, Infinity, 2 );
light.power = lightIntensity * 20000;
light.position.set( 0, length * Math.sin( theta ), z0 - length * Math.cos( theta ) );
light.add( bulb );
light.castShadow = true;
hemiLight = new THREE.HemisphereLight( 0xddeeff, 0x0f0e0d, 0.1 );
scene.add( hemiLight );
scene.add( light );
I then add a spline connected to it:
// Create the wire linking the bulb to the roof
var curveObject = drawSpline(light.position,{x:0,y:0,z:z0},0xffffff);
scene.add(curveObject)
Where drawSpline is the following function:
// Build a spline representing the wire between the roof and the bulb. The new middle point is computed as the middle point shifted orthogonally from the line by shiftRatio
function drawSpline( beginning, end, clr ) {
    // Compute the y sign to know which way to bend the wire
    let ySign = Math.sign( ( end.y + beginning.y ) / 2 );
    // Compute the bending strength and multiply by Math.abs(beginning.y) to ensure it decreases as the bulb gets closer to the theta = 0 position, and also to ensure
    // that the shift is null if theta is null (no discontinuity in the wire movement)
    let appliedRatio = -shiftRatio * Math.abs( beginning.y );
    // Compute the middle point position vector and the direction vector from the roof to the bulb
    let midVector = new THREE.Vector3( 0, ( end.y + beginning.y ) / 2, ( end.z + beginning.z ) / 2 );
    let positionVector = new THREE.Vector3( 0, end.y - beginning.y, end.z - beginning.z );
    // Compute the vector orthogonal to the direction vector (opposite sense to the bending shift)
    let orthogVector = new THREE.Vector3( 0, positionVector.z, -positionVector.y ).normalize();
    // Compute the curve passing through the three points
    var curve = new THREE.CatmullRomCurve3( [
        new THREE.Vector3( beginning.x, beginning.y, beginning.z ),
        midVector.clone().addScaledVector( orthogVector, ySign * appliedRatio ),
        new THREE.Vector3( end.x, end.y, end.z ),
    ] );
    // Build the curve line object
    var points = curve.getPoints( 20 );
    var geometry = new THREE.BufferGeometry().setFromPoints( points );
    var material = new THREE.LineBasicMaterial( { color: clr } );
    // Create the final object to add to the scene
    var curveObject = new THREE.Line( geometry, material );
    return curveObject;
}
It creates a CatmullRomCurve3 interpolating the 3 points (one fixed end at (0, 0, 0), one middle point to apply the bend, and the bulb position). You can actually start with a straight line, and then try to compute some curve.
To do so, you want to get the vector orthogonal to the line and shift the line (on the correct side) along this vector.
And finally, at each animate() call, I redraw the spline for the new position of the bulb:
scene.children[2] = drawSpline(light.position,{x:0,y:0,z:z0},0xffffff)
Let me know if there is a point you do not get, but this should help with your problem.
Just wanted to post a more detailed version of West Langley's great reply. To animate a THREE.Line2 you need to use the commands:
line.geometry.attributes.instanceStart.setXYZ( index, x, y, z );
line.geometry.attributes.instanceEnd.setXYZ( index, x, y, z );
What confused me was the index value. Rather than thinking about a Line2 as a set of vertex points (the method used for creating the line), you need to think of a Line2 as being made of separate individual lines between 2 sets of points... so each line has a Start point and an End point.
A "W" is therefore NOT defined by 5 vertices but by 4 lines. So you can "split" a Line2 by giving a line a Start point different from the previous line's End point. The index runs over the individual lines that make up your object. In my case I have two lines forming a V shape... so I set my index to 1 to affect the end of line 0 and the start of line 1, as in West's example:
var index = 1;
line.geometry.attributes.instanceEnd.setXYZ( index - 1, x, y, z );
line.geometry.attributes.instanceStart.setXYZ( index, x, y, z );
And then you just need to update the line using :
line.geometry.attributes.instanceStart.data.needsUpdate = true;
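Putting it together, a minimal sketch of an animation loop that moves the shared point of the two segments each frame (assuming line, renderer, scene and camera already exist; the swinging motion here is just illustrative):
function animate( time ) {
    requestAnimationFrame( animate );
    // new position of the moving point (illustrative)
    var x = Math.sin( time * 0.001 ) * 5;
    var y = -5;
    var z = 0;
    // segment 0 ends at the moving point, segment 1 starts there
    line.geometry.attributes.instanceEnd.setXYZ( 0, x, y, z );
    line.geometry.attributes.instanceStart.setXYZ( 1, x, y, z );
    // instanceStart and instanceEnd share one interleaved buffer, so a single flag updates both
    line.geometry.attributes.instanceStart.data.needsUpdate = true;
    renderer.render( scene, camera );
}
requestAnimationFrame( animate );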
Thanks again to West for this really useful answer. I'd never have guessed this as you cannot see these variables when you look at the Line2 object properties. Very useful info. I hope it helps someone else at some point.

Why does a `Geometry` have a coordinate system and is it different to a `Mesh`?

I have been creating a simple Three.js application and so far I have centered text in the scene that shows "Hello World". I have been copying the code examples to try and understand what is happening, and so far I have it working, but I am failing to completely understand why.
My confusion comes from reading all the Three.js tutorials describing that a Geometry object is responsible for creating the shape of the object in the scene. Therefore I did not think it would make sense to have a position on something that only describes the shape of the mesh.
/* Create the scene Text */
let loader = new THREE.FontLoader();
loader.load( 'fonts/helvetiker_regular.typeface.json', function (font) {
    /* Create the geometry */
    let geometry_text = new THREE.TextGeometry( "Hello World", {
        font: font,
        size: 5,
        height: 1,
    });
    /* Create a bounding box in order to calculate the center position of the created text */
    geometry_text.computeBoundingBox();
    let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
    geometry_text.translate(-0.5 * x_mid, 0, 0); // Center the text by offsetting half the width
    /* Currently using basic material because I do not have a light, Phong will be black */
    let material_text = new THREE.MeshBasicMaterial({
        color: new THREE.Color( 0x006699 )
    });
    let textMesh = new THREE.Mesh(geometry_text, material_text);
    textMesh.position.set(0, 0, -20);
    //debugger;
    scene.add(textMesh);
    console.log('added mesh');
} );
Here is the code that I use to add the shape, and my confusion comes from the following steps.
/* Create a bounding box in order to calculate the center position of the created text */
geometry_text.computeBoundingBox();
let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
geometry_text.translate(-0.5 * x_mid, 0, 0); // Center the text by offsetting half the width
First, we translate the geometry to the left to center the text inside the scene.
let textMesh = new THREE.Mesh(geometry_text, material_text);
textMesh.position.set(0, 0, -20);
Secondly, we set the position of the mesh.
My confusion comes from the fact that both of these operations are needed for the mesh to move backwards and be centered.
However, I do not understand why this operation should be done on the geometry. In fact, what confuses me more is why textMesh.position.set(0, 0, -20); does not override my previously performed translation and simply move the mesh to (0, 0, -20), removing my previous translation. It seems that both are required.
AFAIK, in a scene graph it is recommended to transform (translate, rotate, scale) the whole mesh rather than to prepare a pre-transformed geometry and use it to build an "untransformed" mesh, since the mesh in the second case is not transform-friendly. Basically, "cumulative" transforms become effectively illegal, giving wrong, unexpected results, even for a simple movement.
But sometimes it is useful to create a transformed geometry and use it for some algorithms/computations or in meshes.
You are getting somewhat "expected" results in your "combined transform" case only because it is a particular case (for example, it can work only if the object position is (0, 0, 0), etc.).
mesh.position.set doesn't modify the geometry: it only sets a property of the mesh, which is used to compute the final, rendered triangles. This computation involves both the geometry and the object's matrix, which is composed from the object's position, quaternion (3D rotation) and scale. A geometry can be modified by "matrix" operations, but the mesh never performs such operations on it dynamically.
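To make the distinction concrete, here is a small sketch (illustrative, not from the original post) that reuses the geometry_text, textMesh and x_mid variables from the question and shows that the baked-in geometry offset and the mesh transform simply combine when rendering:
// geometry_text.translate( -0.5 * x_mid, 0, 0 ) shifted every vertex once, permanently.
// textMesh.position.set( 0, 0, -20 ) left the vertices alone and only changed the mesh matrix,
// which is recomputed from position/quaternion/scale whenever matrixAutoUpdate is true.
var center = new THREE.Vector3();
geometry_text.computeBoundingBox();
geometry_text.boundingBox.getCenter( center ); // already reflects the baked-in translate
textMesh.updateMatrixWorld( true );
center.applyMatrix4( textMesh.matrixWorld ); // the mesh transform is then applied on top
console.log( center ); // x is ~0 (geometry translate) and z is ~-20 (mesh position): both offsets apply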

Does the order of indices matter?

this._vertices = new Float32Array([
-0.5, 0, 0, // left
0, 0.5, 0, // top
0.5, 0, 0 // right
]);
this._indicies = new Uint16Array([0, 1, 2]);
As you can see I have 3 points for a triangle. The problem with this is that my triangle doesn't end up getting rendered unless I change the indices to
this._indicies = new Uint16Array([0, 2, 1]);
Do you know why that is? Why does the order of the indices matter? And how do I know the correct order to put the indices in?
P.S. It works when setting the draw type to LINE_LOOP, but it doesn't work with triangles.
If culling is on (gl.enable(gl.CULL_FACE)) then, with the default settings, triangles are culled if their vertices are clockwise in clip space (i.e., after the vertex shader), because counter-clockwise is treated as the front face. You can choose which winding counts as the front face with gl.frontFace(...) and whether front or back faces get culled with gl.cullFace(...).
   0               0
  / \             / \
 /   \           /   \
2-----1         1-----2
clockwise       counter-clockwise
The order of vertices does indeed matter if face culling is enabled. What is the front and what is the back of a triangle depends on the configured winding order. If counter-clockwise winding is set as the front face (the default), then faces whose vertices appear in counter-clockwise order on the screen are considered the "front" side.
If culling is enabled, then only triangles that you are looking at from the selected side are drawn.
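If you are using three.js rather than raw WebGL, the usual fixes are either to use counter-clockwise winding in the index buffer or to render both sides of the face via the material. A minimal sketch, assuming vertices is the Float32Array from the question (note that setAttribute is called addAttribute in older three.js releases):
var geometry = new THREE.BufferGeometry();
geometry.setAttribute( 'position', new THREE.BufferAttribute( vertices, 3 ) );
geometry.setIndex( new THREE.BufferAttribute( new Uint16Array( [ 0, 2, 1 ] ), 1 ) ); // counter-clockwise = front face
// Or keep [0, 1, 2] and render both sides so culling never hides the triangle:
var material = new THREE.MeshBasicMaterial( { color: 0xff0000, side: THREE.DoubleSide } );
var mesh = new THREE.Mesh( geometry, material );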

Multi-stop Gradient in THREE.js for Lines

This shows an example of how to create a two-color gradient along a THREE.js line:
Color Gradient for Three.js line
How do you implement a multi-stop color gradient along a line? It looks like attributes will only interpolate across two values (I tried passing in three, it only worked with the first two values).
This is the do-it-yourself color gradient approach:
Create a line geometry and add some vertices:
var lineGeometry = new THREE.Geometry();
lineGeometry.vertices.push(
new THREE.Vector3( -10, 0, 0 ),
new THREE.Vector3( -10, 10, 0 )
);
Use some helper functions for convenience:
var steps = 0.2;
var phase = 1.5;
var coloredLine = getColoredBufferLine( steps, phase, lineGeometry );
scene.add( coloredLine );
jsfiddle: http://jsfiddle.net/jfd58hbm/
Explanation:
getColoredBufferLine creates a new buffer geometry from the geometry, which is just for convenience. It then iterates over the vertices, assigning each vertex a color. The color is calculated with another helper: color.set( makeColorGradient( i, frequency, phase ) );.
Basically, frequency defines how many color changes you want the line to go through,
and phase is a shift of the color spectrum (i.e. what color the line starts with).
I have added a dat.GUI so you can play around with the parameters. If you want to change the color repetition or type, you can alter the makeColorGradient function to your needs. This page offers a good explanation of how the gradients are generated and is what my example is based on: http://krazydad.com/tutorials/makecolors.php.
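Neither helper is shown above, so here is a rough sketch of what they could look like (the function bodies are my assumption, based on the linked krazydad tutorial and the legacy Geometry/vertexColors API, and simplified to use the passed geometry directly instead of building a new buffer geometry):
// Assign one color per vertex and return a line that interpolates them (multi-stop gradient).
function getColoredBufferLine( steps, phase, geometry ) {
    var colors = [];
    for ( var i = 0; i < geometry.vertices.length; i ++ ) {
        var color = new THREE.Color();
        color.set( makeColorGradient( i, steps, phase ) );
        colors[ i ] = color;
    }
    geometry.colors = colors;
    var material = new THREE.LineBasicMaterial( { vertexColors: THREE.VertexColors } );
    return new THREE.Line( geometry, material );
}
// Sine-wave rainbow gradient, one phase-shifted sine per channel (see the krazydad page).
function makeColorGradient( i, frequency, phase ) {
    var center = 128, width = 127;
    var r = Math.round( Math.sin( frequency * i + phase ) * width + center );
    var g = Math.round( Math.sin( frequency * i + phase + 2 * Math.PI / 3 ) * width + center );
    var b = Math.round( Math.sin( frequency * i + phase + 4 * Math.PI / 3 ) * width + center );
    return ( r << 16 ) + ( g << 8 ) + b;
}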

Three.js - Accurate ray casting for collision detection

I'm working with Three.js, version 68. I'm using the same method for collision detection as this guy is using here, which is great most of the time (A big "thank you" goes out to the author!): http://stemkoski.github.io/Three.js/Collision-Detection.html
Here is a link to the source if you want to download it from github. Just look for Collision-Detection.html: https://github.com/stemkoski/stemkoski.github.com
Here is the code that is important to the collision detection:
var MovingCube;
var collidableMeshList = [];
var wall = new THREE.Mesh(wallGeometry, wallMaterial);
wall.position.set(100, 50, -100);
scene.add(wall);
collidableMeshList.push(wall);
var wall = new THREE.Mesh(wallGeometry, wireMaterial);
wall.position.set(100, 50, -100);
scene.add(wall);
var wall2 = new THREE.Mesh(wallGeometry, wallMaterial);
wall2.position.set(-150, 50, 0);
wall2.rotation.y = 3.14159 / 2;
scene.add(wall2);
collidableMeshList.push(wall2);
var wall2 = new THREE.Mesh(wallGeometry, wireMaterial);
wall2.position.set(-150, 50, 0);
wall2.rotation.y = 3.14159 / 2;
scene.add(wall2);
var cubeGeometry = new THREE.CubeGeometry(50,50,50,1,1,1);
var wireMaterial = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe:true } );
MovingCube = new THREE.Mesh( cubeGeometry, wireMaterial );
MovingCube.position.set(0, 25.1, 0);
// collision detection:
// determines if any of the rays from the cube's origin to each vertex
// intersects any face of a mesh in the array of target meshes
// for increased collision accuracy, add more vertices to the cube;
// for example, new THREE.CubeGeometry( 64, 64, 64, 8, 8, 8, wireMaterial )
// HOWEVER: when the origin of the ray is within the target mesh, collisions do not occur
var originPoint = MovingCube.position.clone();
for (var vertexIndex = 0; vertexIndex < MovingCube.geometry.vertices.length; vertexIndex++)
{
    var localVertex = MovingCube.geometry.vertices[vertexIndex].clone();
    var globalVertex = localVertex.applyMatrix4( MovingCube.matrix );
    var directionVector = globalVertex.sub( MovingCube.position );
    var ray = new THREE.Raycaster( originPoint, directionVector.clone().normalize() );
    var collisionResults = ray.intersectObjects( collidableMeshList );
    if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() )
        appendText(" Hit ");
}
This works great most of the time, but there are times when I can move the cube partially into the wall, and it won't register a collision. For example, look at this image:
It should say "Hit" in the top-left corner where there are just a bunch of dots, and it's not.
NOTE: I also tried his suggestion and did the following, but it didn't seem to help much:
THREE.BoxGeometry( 64, 64, 64, 8, 8, 8, wireMaterial ) // BoxGeometry is used in version 68 instead of CubeGeometry
Does anyone know how this method could be more accurate? Another question: Does anyone know what the following if statement is for, i.e. why does the object's distance have to be less than the length of the direction vector?:
if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() )
To answer your last question first: that line detects whether the collision happened inside your MovingCube. Your raycasting code casts a ray from the MovingCube's position towards each of its vertices. Anything that the ray intersects with is returned, along with the distance from the MovingCube's position at which the intersected object was found (collisionResults[0].distance). That distance is compared with the distance from the MovingCube's position to the relevant vertex. If the distance to the collision is less than the distance to the vertex, the collision happened inside the cube.
Raycasting is a poor method of collision detection because it only detects collisions in the exact directions rays are cast. It also has some additional edge cases. For example, if the ray is cast from inside another object, the other object might not be considered to be colliding. As another example, raycasting in Three.js uses bounding spheres (or, if unavailable, bounding boxes) to calculate ray intersection, so rays can "intersect" with objects even if they wouldn't hit them visually.
If you're only dealing with spheres or upright cuboids, the collision check is straightforward math. (That's why Three.js uses bounding spheres and bounding boxes - and most applications that need to do collision checking use secondary collision-only geometries that are less complicated than the rendered ones.) Spheres are colliding if the distance between their centers is less than the sum of their radii. Boxes are colliding if their edges overlap (e.g. if the left edge of box 1 is to the left of the right edge of box 2, and the boxes are within a vertical distance equal to the sum of their half-heights and a horizontal distance equal to the sum of their half-lengths).
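For example, in three.js the bounding-volume checks described above can be done with the Box3 and Sphere helpers; a minimal sketch, reusing wall and MovingCube from the question (method names follow recent releases; r68 used Box3.isIntersectionBox instead of intersectsBox, and the radii below are illustrative guesses):
// Axis-aligned bounding boxes: cheap and fine for upright cuboids like these walls.
var cubeBox = new THREE.Box3().setFromObject( MovingCube );
var wallBox = new THREE.Box3().setFromObject( wall );
if ( cubeBox.intersectsBox( wallBox ) )
    appendText(" Hit ");
// Bounding spheres by hand: colliding when the distance between centers is less than the sum of the radii.
var cubeRadius = 25 * Math.sqrt( 3 ); // half-diagonal of the 50-unit cube
var wallRadius = 100;
if ( MovingCube.position.distanceTo( wall.position ) < cubeRadius + wallRadius )
    appendText(" Hit ");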
For certain applications you can also use voxels, e.g. divide the world into cubical units, do box math, and say that two objects are colliding if they overlap with the same cube-unit.
For more complex applications, you'll probably want to use a library like Ammo.js, Cannon.js, or Physi.js.
The reason raycasting is appealing is because it's workable with more complex geometries without using a library. As you've discovered, however, it's less than perfect. :-)
I wrote a book called Game Development with Three.js which goes into this topic in some depth. (I won't link to it here because I'm not here to promote it, but you can Google it if you're interested.) The book comes with sample code that shows how to do basic collision detection, including full code for a 3D capture-the-flag game.
