In an effort to learn a little bit more about custom geometry in three.js, I tried adapting Paul Bourke's capsule geometry example.
With my custom capsule geometry, I am currently having two issues:
The middle face normals are not oriented properly.
There is a hard seam along the side. (EDIT: fixed by explicitly computing the face normals; updated code is in the gist.)
And maybe one bonus question that has been lingering on my mind:
What might a general strategy be to add vertex loops in that middle segment?
I'm really happy with the geometry in general, but would anyone be able to give me some direction on how to address these issues? I feel like the normal issue in the middle segment must be due to the orientation of the faces, and here is the related face construction snippet:
for(let i = 0; i <= N/2; i++){
    for(let j = 0; j < N; j++){
        let vec = new THREE.Vector4(
            i * ( N + 1 ) + j,
            i * ( N + 1 ) + ( j + 1 ),
            ( i + 1 ) * ( N + 1 ) + ( j + 1 ),
            ( i + 1 ) * ( N + 1 ) + j
        );
        let face_1 = new THREE.Face3( vec.x, vec.y, vec.z );
        let face_2 = new THREE.Face3( vec.x, vec.z, vec.w );
        geometry.faces.push( face_1 );
        geometry.faces.push( face_2 );
    }
}
CapsuleGeometry.js
The shading/normal seam is there because you have probably explicitly defined a hard edge there.
When you run your loops to generate the vertices, you probably duplicate the starting position. If you start at 0 and go all the way to 2PI, then 0 == 2PI. When you weave the triangles, you probably tell the last one to use the 2PI vertex instead of the 0 vertex, and even though they are in the same position, as far as the triangles are concerned they point to different vertices and are thus not connected.
for(let i = 0; i <= N/4; i++){ //change to i < N
    for(let j = 0; j <= N; j++){
If you tell the last triangle in the loop to point back to the beginning vertex, you will make a continuous surface that geometry.computeVertexNormals() can smooth out.
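A minimal sketch of that welding, assuming each ring stores N unique vertices (no duplicated seam column), so only the column index needs to wrap:

// Inside the face loop: wrap the column index so the last quad reuses column 0
// instead of pointing at a duplicated seam vertex.
let jNext = ( j + 1 ) % N;
geometry.faces.push( new THREE.Face3( i * N + j, i * N + jNext, ( i + 1 ) * N + jNext ) );
geometry.faces.push( new THREE.Face3( i * N + j, ( i + 1 ) * N + jNext, ( i + 1 ) * N + j ) );
// ...after all faces are built:
geometry.computeVertexNormals(); // shared seam vertices now shade smoothly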
You can also just compute these normals directly. In this case, all the normals can be obtained from the vertex positions of the original sphere before expanding it.
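A rough sketch of that idea (the Y-aligned axis and the halfH half-height variable are assumptions about the layout, not the gist's exact code): every cap vertex's normal points away from the center of its hemisphere, and every side vertex's normal points straight out from the axis.

// Assumption: the capsule axis is Y and the cylindrical section spans -halfH..+halfH.
function capsuleVertexNormal( v, halfH ) {
    const n = v.clone();
    if ( v.y > halfH ) {
        n.y -= halfH;   // top hemisphere: normal = vertex - top cap center
    } else if ( v.y < -halfH ) {
        n.y += halfH;   // bottom hemisphere: normal = vertex - bottom cap center
    } else {
        n.y = 0;        // cylinder wall: normal is purely radial
    }
    return n.normalize();
}

// e.g. assign per-vertex normals to each legacy Face3:
geometry.faces.forEach( face => {
    face.vertexNormals = [ face.a, face.b, face.c ].map(
        idx => capsuleVertexNormal( geometry.vertices[ idx ], halfH )
    );
} );
geometry.normalsNeedUpdate = true;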
I currently have a city based on the example of Mr Doob's tutorial: "How to do a procedural city in 100 lines". In the tutorial you can see that he creates 100 building meshes which then get merged into one city mesh for performance reasons. Then a single material is made and applied to the city mesh, giving every building a texture.
What I want to stop is the clamping and stretching of the building texture, in order to create a more realistic "the windows are the same height on different buildings" look.
What I think would be the solution is to manipulate the face vertex UVs using the scaling values of the geometry.
With the following code I can scale the texture 2x.
let faceVertexUvs = buildingMesh.geometry.faceVertexUvs[0];
for (let k = 0; k < faceVertexUvs.length; k++) {
    const uvs = faceVertexUvs[k];
    if ( k == 4 || k == 5 ) {
        // Make the roof blank
        uvs[0].set(0, 0);
        uvs[1].set(0, 0);
        uvs[2].set(0, 0);
    }
    else if ( k % 2 == 0 ) {
        uvs[0].set(0, 0.5);
        uvs[1].set(0, 0);
        uvs[2].set(0.5, 0.5);
    }
    else {
        uvs[0].set(0, 0);
        uvs[1].set(0.5, 0);
        uvs[2].set(0.5, 0.5);
    }
}
However, I would like to scale only vertically and leave the horizontal scaling alone, but I don't completely understand the relation between the two triangles.
Perhaps you should modify the .repeat property of the city model's texture.
Documentation link: https://threejs.org/docs/#api/textures/Texture.repeat
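A minimal sketch of that suggestion (the divisors are placeholder values, not taken from the tutorial): enable repeat wrapping and derive the repeat counts from the building scale so the windows keep a constant size.

// Assumed sketch: make the window texture tile instead of stretch.
texture.wrapS = THREE.RepeatWrapping;
texture.wrapT = THREE.RepeatWrapping;
// Repeat more often on taller/wider buildings.
texture.repeat.set( buildingMesh.scale.x / 100, buildingMesh.scale.y / 200 );
texture.needsUpdate = true;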
After using a debug image and guessing some values I came to the following code to scale a texture vertically, horizontally or both.
let texture_scale_y = buildingMesh.scale.y / 200;
let texture_scale_x = buildingMesh.scale.x / 100;
let faceVertexUvs = buildingMesh.geometry.faceVertexUvs[0];
for (let k = 0; k < faceVertexUvs.length; k++) {
    const uvs = faceVertexUvs[k];
    if ( k == 4 || k == 5 ) {
        // Make the roof blank
        uvs[0].set(0, 0);
        uvs[1].set(0, 0);
        uvs[2].set(0, 0);
    }
    else if ( k % 2 == 0 ) {
        uvs[0].set(0, texture_scale_y);               // 0 1
        uvs[1].set(0, 0);                             // 0 0
        uvs[2].set(texture_scale_x, texture_scale_y); // 1 1
    }
    else {
        uvs[0].set(0, 0);                             // 0 0
        uvs[1].set(texture_scale_x, 0);               // 1 0
        uvs[2].set(texture_scale_x, texture_scale_y); // 1 1
    }
}
This solves my problem, but I would still like to know the explanation. I can see that X is always first and Y second, but that might be a bad conclusion to draw.
I couldn't color my vertices, so I can't tell which vertex is which.
I have a file with sparse elevations. It is based off of GPS data. I have been using this data to populate a PlaneBufferGeometry vertex array with elevations.
var vertices = new Float32Array( (grid.NCOL * grid.NROW) * 4 );
for (var i = 0, q = vertices.length / 3; i < q; i++) {
    vertices[ i*3 + 0 ] = parseInt(i % (grid.NCOL+1) * 4);
    vertices[ i*3 + 1 ] = parseInt(i / (grid.NCOL+1) * 4);
    // vertices[ i*3 + 2 ] = null; // makes no difference
}
for (var i = 0, l = grid.NODES.length; i < l; i++) {
    var nodeNumber = grid.NODES[i][0];
    var elevation = grid.NODES[i][1];
    vertices[ nodeNumber*3 + 2 ] = elevation;
}
My problem is that there are nodes whose elevation values are unknown (the vertex array is sparse), and these should be represented by holes/cutouts in the plane. What I end up with is the null elevations being interpreted as 0, not as holes. I have started down the path of using a RawShaderMaterial, but I am still not sure that making null values transparent is the correct method.
The picture below shows my issues. The circled area is a high wall that should not be there, because it is being triangulated down to the "null/0" floor. The red-lined area is where we should have a hole.
EDIT:
Maybe this picture will help too. It is from the bottom. The null elevations being set to zero block the view of the plane and cause the edge of the plane to be triangulated down to 0 elevation:
Here is what our desktop application displays. Notice how the edges of the plane are not triangulated down to zero but are instead left sharp.
PlaneBufferGeometry takes a Float32Array, and this array's values default to 0. Using an undefined setter allowed me to keep the sparse array out of the float32 type. Attempts to set any value to null or NaN did not work.
RTFM:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/null
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Float32Array
final result as expected:
Your use case seems more appropriate for a point cloud with THREE.Points. potree.org/demo/potree_1.3/showcase/ca13.html – WestLangley
This example helped:
http://threejs.org/examples/#webgl_buffergeometry_points
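For reference, a minimal sketch of that point-cloud suggestion (grid.NODES, grid.NCOL, and the *4 spacing follow the question's data; the material values are assumptions): only nodes with known elevations are emitted, so missing data simply produces no geometry.

// Assumed sketch: one point per known node; unknown nodes are simply omitted.
var positions = new Float32Array( grid.NODES.length * 3 );
for (var i = 0; i < grid.NODES.length; i++) {
    var nodeNumber = grid.NODES[i][0];
    var elevation = grid.NODES[i][1];
    positions[ i*3 + 0 ] = (nodeNumber % (grid.NCOL + 1)) * 4;            // assumed column spacing
    positions[ i*3 + 1 ] = Math.floor(nodeNumber / (grid.NCOL + 1)) * 4;  // assumed row spacing
    positions[ i*3 + 2 ] = elevation;
}
var pointGeometry = new THREE.BufferGeometry();
pointGeometry.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) ); // setAttribute in newer releases
var points = new THREE.Points( pointGeometry, new THREE.PointsMaterial( { size: 2 } ) );
scene.add( points );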
I'm not sure if the answer is supposed to be blindingly obvious but it eludes me. I'm doing the 3D Graphics class on Udacity that uses three.js. I'm at a point where I'm required to generate a 3d mesh.
I've got the vertices all generating correctly, but I'm stuck at generating faces for them. I can't think of an obvious way to auto generate faces that don't overlap. I've searched and searched around the web but I can't find any information about it. I'm not sure if it's something stupidly obvious or just not very documented. Here's the code:
function PolygonGeometry(sides) {
    var geo = new THREE.Geometry();

    // generate vertices
    for ( var pt = 0; pt < sides; pt++ )
    {
        // Add 90 degrees so we start at +Y axis, rotate counterclockwise around
        var angle = (Math.PI/2) + (pt / sides) * 2 * Math.PI;
        var x = Math.cos( angle );
        var y = Math.sin( angle );

        // YOUR CODE HERE
        // Save the vertex location - fill in the code
        geo.vertices.push( new THREE.Vector3(x, y, 0) );
    }

    // YOUR CODE HERE
    // Write the code to generate minimum number of faces for the polygon.

    // Return the geometry object
    return geo;
}
I know the basic formula for the minimum number of faces is n-2. But I just can't think of a way to do this without faces overlapping. I don't want anyone to do my work for me, I want to figure it out myself as much as I can. But is there anyone who can point me in the right direction or give me a formula or something?
You can automate your triangulation
For big polygons it can be quite a job to manually add the faces. You can automate the process of adding faces to the Mesh using the triangulateShape method in THREE.Shape.Utils like this:
var vertices = [your vertices array];
var holes = [];
var triangles, mesh;
var geometry = new THREE.Geometry();
var material = new THREE.MeshBasicMaterial();

geometry.vertices = vertices;
triangles = THREE.Shape.Utils.triangulateShape( vertices, holes );

for( var i = 0; i < triangles.length; i++ ){
    geometry.faces.push( new THREE.Face3( triangles[i][0], triangles[i][1], triangles[i][2] ));
}

mesh = new THREE.Mesh( geometry, material );
Where vertices is your array of vertices and with holes you can define holes in your polygon.
Note: Be careful, if your polygon is self-intersecting it will throw an error. Make sure your vertices array represents a valid (non-intersecting) polygon.
Assuming you are generating your vertices in a convex fashion and in a counterclockwise manner, then if you have 3 sides (a triangle) you connect vertex 0 with 1 with 2. If you have 4 sides (a quad) you connect vertex 0 with 1 with 2 (first triangle) and then vertex 0 with 2 with 3 (second triangle). If you have 5 sides (a pentagon) you connect vertex 0 with 1 with 2 (first triangle), then vertex 0 with 2 with 3 (second triangle), and then vertex 0 with 3 with 4. I think you get the pattern.
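A minimal sketch of that fan pattern, using the variable names from the question's snippet (geo, sides):

// Triangle fan: every face shares vertex 0, giving the minimum of sides - 2 faces.
for ( var f = 0; f < sides - 2; f++ ) {
    geo.faces.push( new THREE.Face3( 0, f + 1, f + 2 ) );
}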
I am using three.js.
I have two mesh geometries in my scene.
If these geometries are intersected (or would intersect if translated) I want to detect this as a collision.
How do I go about performing collision detection with three.js? If three.js does not have collision detection facilities, are there other libraries I might use in conjunction with three.js?
In three.js, the utilities CollisionUtils.js and Collisions.js no longer seem to be supported, and mrdoob (the creator of three.js) himself recommends updating to the most recent version of three.js and using the Ray class for this purpose instead. What follows is one way to go about it.
The idea is this: let's say that we want to check if a given mesh, called "Player", intersects any meshes contained in an array called "collidableMeshList". What we can do is create a set of rays which start at the coordinates of the Player mesh (Player.position), and extend towards each vertex in the geometry of the Player mesh. Each Ray has a method called "intersectObjects" which returns an array of objects that the Ray intersected with, and the distance to each of these objects (as measured from the origin of the Ray). If the distance to an intersection is less than the distance between the Player's position and the geometry's vertex, then the collision occurred on the interior of the player's mesh -- what we would probably call an "actual" collision.
I have posted a working example at:
http://stemkoski.github.io/Three.js/Collision-Detection.html
You can move the red wireframe cube with the arrow keys and rotate it with W/A/S/D. When it intersects one of the blue cubes, the word "Hit" will appear at the top of the screen once for every intersection as described above. The important part of the code is below.
for (var vertexIndex = 0; vertexIndex < Player.geometry.vertices.length; vertexIndex++)
{
    var localVertex = Player.geometry.vertices[vertexIndex].clone();
    var globalVertex = Player.matrix.multiplyVector3(localVertex);
    var directionVector = globalVertex.subSelf( Player.position );

    var ray = new THREE.Ray( Player.position, directionVector.clone().normalize() );
    var collisionResults = ray.intersectObjects( collidableMeshList );
    if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() )
    {
        // a collision occurred... do something...
    }
}
There are two potential problems with this particular approach.
(1) When the origin of the ray is within a mesh M, no collision results between the ray and M will be returned.
(2) It is possible for an object that is small (in relation to the Player mesh) to "slip" between the various rays and thus no collision will be registered. Two possible approaches to reduce the chances of this problem are to write code so that the small objects create the rays and do the collision detection effort from their perspective, or include more vertices on the mesh (e.g. using CubeGeometry(100, 100, 100, 20, 20, 20) rather than CubeGeometry(100, 100, 100, 1, 1, 1).) The latter approach will probably cause a performance hit, so I recommend using it sparingly.
I hope that others will contribute to this question with their solutions to this question. I struggled with it for quite a while myself before developing the solution described here.
An updated version of Lee's answer that works with the latest version of three.js:
for (var vertexIndex = 0; vertexIndex < Player.geometry.attributes.position.count; vertexIndex++)
{
    // iterate by vertex index (count), not by raw array entries
    var localVertex = new THREE.Vector3().fromBufferAttribute(Player.geometry.attributes.position, vertexIndex);
    var globalVertex = localVertex.applyMatrix4(Player.matrix);
    var directionVector = globalVertex.sub( Player.position );

    var ray = new THREE.Raycaster( Player.position, directionVector.clone().normalize() );
    var collisionResults = ray.intersectObjects( collidableMeshList );
    if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() )
    {
        // a collision occurred... do something...
    }
}
This really is far too broad of a topic to cover in an SO question, but for the sake of greasing the site's SEO a bit, here are a couple of simple starting points:
If you want really simple collision detection and not a full-on physics engine, then check out (link removed because the website no longer exists).
If, on the other hand, you DO want some collision response, not just "did A and B bump?", take a look at (link removed because the website no longer exists), which is a super easy-to-use Ammo.js wrapper built around three.js.
This only works on BoxGeometry and BoxBufferGeometry.
Create the following function:
function checkTouching(a, d) {
    // Axis-aligned bounds of mesh a: bottom, top, right, left, front, back
    let b1 = a.position.y - a.geometry.parameters.height / 2;
    let t1 = a.position.y + a.geometry.parameters.height / 2;
    let r1 = a.position.x + a.geometry.parameters.width / 2;
    let l1 = a.position.x - a.geometry.parameters.width / 2;
    let f1 = a.position.z - a.geometry.parameters.depth / 2;
    let B1 = a.position.z + a.geometry.parameters.depth / 2;
    // Axis-aligned bounds of mesh d
    let b2 = d.position.y - d.geometry.parameters.height / 2;
    let t2 = d.position.y + d.geometry.parameters.height / 2;
    let r2 = d.position.x + d.geometry.parameters.width / 2;
    let l2 = d.position.x - d.geometry.parameters.width / 2;
    let f2 = d.position.z - d.geometry.parameters.depth / 2;
    let B2 = d.position.z + d.geometry.parameters.depth / 2;
    // No overlap if the boxes are separated along any axis
    if (t1 < b2 || r1 < l2 || b1 > t2 || l1 > r2 || f1 > B2 || B1 < f2) {
        return false;
    }
    return true;
}
Use it in conditional statements like this:
if (checkTouching(cube1, cube2)) {
    alert("collision!")
}
I have an example using this at https://3d-collion-test.glitch.me/
Note: if you rotate (or scale) one (or both) of the cubes/prisms, it will detect collisions as though they haven't been rotated (or scaled).
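A common workaround for the position/scale part (a sketch, not part of the answer above, reusing cube1/cube2 from the example) is to let three.js compute world-space bounding boxes with THREE.Box3; these respect the full transform, though the resulting boxes are still axis-aligned, so a rotated object is only approximated by its enclosing box.

// Assumed sketch: world-space AABB test using Box3.
const box1 = new THREE.Box3().setFromObject(cube1);
const box2 = new THREE.Box3().setFromObject(cube2);
if (box1.intersectsBox(box2)) {
    alert("collision!");
}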
Since my other answer is limited, I made something else that is more accurate: it only returns true when there is a collision and false when there isn't (though occasionally it returns false when there actually is one).
Anyway, first make the following function:
function rt(a, b) {
    // Cast rays from the center of mesh a towards each of its vertices;
    // if a ray hits b closer than the vertex, a's surface is inside b.
    let d = [b];                // meshes to test against
    let e = a.position.clone(); // ray origin (a's center)
    let f = a.geometry.vertices.length;
    let g = a.position;
    let h = a.matrix;
    let i = a.geometry.vertices;
    for (var vertexIndex = f - 1; vertexIndex >= 0; vertexIndex--) {
        let localVertex = i[vertexIndex].clone();
        let globalVertex = localVertex.applyMatrix4(h);
        let directionVector = globalVertex.sub(g);
        let ray = new THREE.Raycaster(e, directionVector.clone().normalize());
        let collisionResults = ray.intersectObjects(d);
        if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() ) {
            return true;
        }
    }
    return false;
}
The function above is essentially the same as Lee Stemkoski's answer to this question (to whom credit is due), but I made changes so it runs faster and you don't need to create an array of meshes. OK, step 2: create this function:
function ft(a, b) {
    return rt(a, b) || rt(b, a) || (a.position.z == b.position.z && a.position.x == b.position.x && a.position.y == b.position.y);
}
It returns true if the center of mesh A isn't in mesh B AND the center of mesh B isn't in A, OR their positions are equal AND they are actually touching. This DOES still work if you scale one (or both) of the meshes.
I have an example at: https://3d-collsion-test-r.glitch.me/
It seems like this has already been solved, but I have an easier solution if you are not too comfortable using raycasting and creating your own physics environment.
CANNON.js and AMMO.js are both physics libraries built on top of THREE.js. They create a secondary physics environment, and you tie your object positions to that scene to emulate a physics environment. The documentation is simple enough to follow for CANNON, and it is what I use, but it hasn't been updated since it was released 4 years ago. The repo has since been forked, and a community keeps it updated as cannon-es. I will leave a code snippet here so you can see how it works.
/**
 * Floor
 */
const floorShape = new CANNON.Plane()
const floorBody = new CANNON.Body()
floorBody.mass = 0
floorBody.addShape(floorShape)
floorBody.quaternion.setFromAxisAngle(
    new CANNON.Vec3(-1, 0, 0),
    Math.PI / 2
)
world.addBody(floorBody)

const floor = new THREE.Mesh(
    new THREE.PlaneGeometry(10, 10),
    new THREE.MeshStandardMaterial({
        color: '#777777',
        metalness: 0.3,
        roughness: 0.4,
        envMap: environmentMapTexture
    })
)
floor.receiveShadow = true
floor.rotation.x = - Math.PI * 0.5
scene.add(floor)

// THREE mesh
const mesh = new THREE.Mesh(
    sphereGeometry,
    sphereMaterial
)
mesh.scale.set(1, 1, 1)
mesh.castShadow = true
mesh.position.copy({ x: 0, y: 3, z: 0 })
scene.add(mesh)

// Cannon
const shape = new CANNON.Sphere(1)
const body = new CANNON.Body({
    mass: 1,
    shape,
    material: concretePlasticMaterial
})
body.position.copy({ x: 0, y: 3, z: 0 })
world.addBody(body)
This makes a floor and a ball, but also creates the same thing in the CANNON.js environment.
const tick = () =>
{
    const elapsedTime = clock.getElapsedTime()
    const deltaTime = elapsedTime - oldElapsedTime
    oldElapsedTime = elapsedTime

    // Update Physics World
    mesh.position.copy(body.position)
    world.step(1 / 60, deltaTime, 3)

    // Render
    renderer.render(scene, camera)

    // Call tick again on the next frame
    window.requestAnimationFrame(tick)
}
After this, you just update the positions of your THREE.js objects in the animate function based on the positions in your physics scene.
Please check out the documentation, as it might seem more complicated than it really is. Using a physics library is going to be the easiest way to simulate collisions. Also check out Physi.js; I have never used it, but it is supposed to be a friendlier library that doesn't require you to make a secondary environment.
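To actually react to a collision with this setup, one option (a sketch based on cannon.js's event API, not code from the snippet above) is to listen for the 'collide' event that a CANNON.Body emits when it contacts another body:

// Assumed sketch: react when the sphere body hits the floor body from the example above.
body.addEventListener('collide', (event) => {
    // event.body is the other CANNON.Body involved in the contact
    if (event.body === floorBody) {
        console.log('sphere hit the floor')
    }
})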
In my three.js version, I only have geometry.attributes.position.array and not geometry.vertices. To convert it to vertices, I use the following TS function:
export const getVerticesForObject = (obj: THREE.Mesh): THREE.Vector3[] => {
  const bufferVertices = obj.geometry.attributes.position.array;
  const vertices: THREE.Vector3[] = [];
  for (let i = 0; i < bufferVertices.length; i += 3) {
    vertices.push(
      new THREE.Vector3(
        bufferVertices[i] + obj.position.x,
        bufferVertices[i + 1] + obj.position.y,
        bufferVertices[i + 2] + obj.position.z
      )
    );
  }
  return vertices;
};
I pass in the object's position for each dimension because the bufferVertices by default are relative to the object's center, and for my purposes I wanted them to be global.
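If the object is also rotated or scaled, a more general option (a sketch, not part of the answer above; vertexIndex is a placeholder index) is to apply the full world matrix instead of just adding the position:

// Assumed sketch: transform a local-space buffer vertex into world space.
obj.updateMatrixWorld(true);
const worldVertex = new THREE.Vector3()
  .fromBufferAttribute(obj.geometry.attributes.position, vertexIndex)
  .applyMatrix4(obj.matrixWorld); // applies position, rotation, and scale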
I also wrote up a little function to detect collisions based on vertices. It optionally samples vertices for very involved objects, or checks for proximity of all vertices to the vertices of the other object:
const COLLISION_DISTANCE = 0.025;
const SAMPLE_SIZE = 50;

export const detectCollision = ({
  collider,
  collidables,
  method,
}: DetectCollisionParams): GameObject | undefined => {
  const { geometry, position } = collider.obj;
  if (!geometry.boundingSphere) return;
  const colliderCenter = new THREE.Vector3(position.x, position.y, position.z);
  const colliderSampleVertices =
    method === "sample"
      ? _.sampleSize(getVerticesForObject(collider.obj), SAMPLE_SIZE)
      : getVerticesForObject(collider.obj);
  for (const collidable of collidables) {
    // First, detect if it's within the bounding sphere
    const { geometry: colGeometry, position: colPosition } = collidable.obj;
    if (!colGeometry.boundingSphere) continue;
    const colCenter = new THREE.Vector3(
      colPosition.x,
      colPosition.y,
      colPosition.z
    );
    const bothRadiuses =
      geometry.boundingSphere.radius + colGeometry.boundingSphere.radius;
    const distance = colliderCenter.distanceTo(colCenter);
    if (distance > bothRadiuses) continue;
    // Then, detect if there are vertices close enough to count as overlapping
    const colSampleVertices =
      method === "sample"
        ? _.sampleSize(getVerticesForObject(collidable.obj), SAMPLE_SIZE)
        : getVerticesForObject(collidable.obj);
    for (const v1 of colliderSampleVertices) {
      for (const v2 of colSampleVertices) {
        if (v1.distanceTo(v2) < COLLISION_DISTANCE) {
          return collidable;
        }
      }
    }
  }
};
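For context, a hypothetical usage sketch (DetectCollisionParams and GameObject are the answer's own types; player and obstacles are placeholder objects wrapping meshes in an obj field):

// Assumed shape: each GameObject wraps a THREE.Mesh in an `obj` field,
// and each mesh's geometry has had computeBoundingSphere() called on it.
const hit = detectCollision({
  collider: player,        // the GameObject being moved
  collidables: obstacles,  // GameObject[] to test against
  method: "sample",        // any other value checks every vertex
});
if (hit) {
  console.log("collided with", hit);
}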
You could try cannon.js. It makes collision easy, and it's my favorite collision detection library. There is also ammo.js.