normalizing mesh in three.js - javascript

I'm loading a mesh from an obj file, then trying to normalize it. However, I'm getting strange results. Here is the code for loading and centering the mesh:
var manager = new THREE.LoadingManager();
var loader = new THREE.OBJLoader(manager);
loader.load('http://jamesdedge.com/threejs/bunny.obj', function(object) {
    object.traverse(function(child) {
        if (child instanceof THREE.Mesh)
        {
            var geometry = child.geometry;
            var verts = geometry.vertices;
            var ctr = new THREE.Vector3(0.0, 0.0, 0.0);
            for (i = 0; i < verts.length; ++i)
                ctr.add(verts[i]);
            ctr.divideScalar(verts.length);
            for (i = 0; i < verts.length; ++i)
                verts[i].sub(ctr);
        }
    });
    scene.add(object);
});
This code should just center the mesh on the average of the vertex positions, but it seems to be causing a strange effect. You can see it on my website here: http://jamesdedge.com/threejs/tjs_demo.html
I don't see what is causing this; the ctr variable gives me a valid vector, and subtracting a vector from all the vertices should only reposition the mesh.

Many of the vertices in this model are duplicates (i.e. the same JavaScript object is contained in the vertex array multiple times), so ctr gets subtracted from those vertices multiple times.
One way to address this issue is to merge the vertices at the start of your traversal function, i.e.
var geometry = child.geometry;
geometry.mergeVertices();
This will also make your code run a bit faster, as it has to process far fewer vertices.
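Putting the two pieces together, a minimal sketch of the corrected traversal might look like this (same variable names as in the question; it assumes the legacy THREE.Geometry API that OBJLoader produced at the time):
object.traverse(function(child) {
    if (child instanceof THREE.Mesh) {
        var geometry = child.geometry;
        geometry.mergeVertices();              // collapse duplicate vertex entries first
        var verts = geometry.vertices;
        var ctr = new THREE.Vector3();
        for (var i = 0; i < verts.length; ++i) ctr.add(verts[i]);
        ctr.divideScalar(verts.length);
        for (var i = 0; i < verts.length; ++i) verts[i].sub(ctr);
        geometry.verticesNeedUpdate = true;    // flag the moved vertices for re-upload
    }
});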

You can use GeometryUtils.center() to conveniently center your whole mesh in the middle of the scene at (0,0,0). In any case, take a look at the source code of that function:
https://github.com/mrdoob/three.js/blob/master/src/extras/GeometryUtils.js
It should give you a good idea of how to solve your problem.
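As a hedged example of what that helper looks like in use — in builds of that era it was called as THREE.GeometryUtils.center(geometry), while newer versions expose geometry.center() directly; either recenters the geometry around its bounding box rather than the vertex average:
object.traverse(function(child) {
    if (child instanceof THREE.Mesh) {
        THREE.GeometryUtils.center(child.geometry); // or child.geometry.center() in newer builds
    }
});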

Related

three.js Memory Leak when using `elementsNeedUpdate`

I am creating a Geometry in three.js and populating it with vertices to build a 2D terrain. I am pushing all of the Vector3s and Face3s to the geometry as soon as my terrain is created, and then modifying each vertex and face every frame.
Because I am modifying the face vertices every frame, I need to tell three.js to update the faces. I am doing this using geometry.elementsNeedUpdate = true. This works; however, I have noticed that it causes substantial memory usage (my app uses an extra ~50 MB of RAM every second).
The following code demonstrates what I'm trying to do:
function pushEverything(geom) {
    for (var i = 0; i < 10000; i++) {
        geom.vertices.push(new THREE.Vector3(...));
        geom.faces.push(new THREE.Face3(...));
        geom.faces.push(new THREE.Face3(...));
    }
}

function rebuild(geom) {
    for (var face of geom.faces) {
        face.a = ...
        face.b = ...
        face.c = ...
    }
    geom.elementsNeedUpdate = true
}

var renderer = new THREE.WebGLRenderer({
    canvas: document.getElementById("my-canvas")
});
var geom = new THREE.Geometry();
var camera = new THREE.PerspectiveCamera(...);

pushEverything(geom);

while (true) {
    // Perform some terrain modifications
    rebuild(geom);
    renderer.render(geom, camera);
    sleep(1000 / 30);
}
I have already followed the advice of this question, which suggested using geometry.vertices[x].copy(...) instead of geometry.vertices[x] = new Vector3(...).
My question is: why is my memory usage so high when using geometry.elementsNeedUpdate = true? Is there an alternative method to updating a Geometry's faces?
I am using three.js 0.87.1 from NPM.
I have found and solved the issue. It was not a memory leak on three.js' part, but a memory leak on my part.
I was creating a Geometry and allowing myself to clone it, perform modifications on the clone, and then merge it back into the original. What I didn't realise is that I should have called geometry.dispose() on the cloned geometry when I was done with it. So I was effectively cloning the geometry every frame without ever disposing of the clones, which explains the huge memory usage.
I have fixed my issue by converting the Geometry to a BufferGeometry, and calling geometry.dispose() on the geometry when I am done with it. I now have expected memory usage.
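For illustration, the per-frame pattern described above might look roughly like this (the helper names are hypothetical, not the poster's actual code):
var workingCopy = terrainGeometry.clone();      // temporary working clone
applyTerrainChanges(workingCopy);               // hypothetical modification step
mergeBackIntoOriginal(terrainGeometry, workingCopy);
workingCopy.dispose();                          // without this call, every frame leaks a geometry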

Rotating looped meshes results unexpectedly

I have a three.js project where I'm adding in 100 meshes, that are divided into 10 scenes:
//add 100 meshes to 10 scenes based on an array containing 100 elements
for (var i = 0; i < data.length; i++) {
    mesh = new THREE.Mesh(geometry, material);

    //random positions so they don't spawn on same spot
    mesh.position.x = THREE.Math.randInt(-500, 500);
    mesh.position.z = THREE.Math.randInt(-500, 500);
They're added in by a loop, and all these meshes are assigned to 10 scenes:
// Assign 10 meshes per scene.
var sceneIndex = Math.floor(i/10);
scenes[sceneIndex].add(mesh);
I also wrote a functionality where I can rotate a mesh around the center of the scene.
But I don't know how to apply the rotation functionality to all meshes while still keeping them divided into their corresponding scenes. This probably sounds way too vague so I have a fiddle that holds all of the relevant code.
If you comment these two lines back in, you'll see that the meshes all move to scenes[0] and they all rotate fine the way I wanted, but I still need them divided into their individual scenes.
spinningRig.add(mesh);
scenes[0].add(spinningRig);
What should the code look like? What is the logic behind it?
The logic is fairly simple. The simplest approach would be to have a separate spinningRig for each scene -- essentially a grouping of the meshes for that scene.
When you create each scene, you'll also create a spinningRig and add + assign it to that scene:
// Setup 10 scenes
for (var i = 0; i < 10; i++) {
    scenes.push(new THREE.Scene());

    // Add the spinningRig to the scene
    var spinningRig = new THREE.Object3D();
    scenes[i].add(spinningRig);

    // Track the spinningRig on the scene, for convenience.
    scenes[i].userData.spinningRig = spinningRig;
}
Then instead of adding the meshes directly to the scene, add them to the spinningRig for the scene:
var sceneIndex = Math.floor(i/10);
scenes[sceneIndex].userData.spinningRig.add(mesh);
And finally, rotate the spinningRig assigned to the currentScene:
currentScene.userData.spinningRig.rotation.y -= 0.025;
See jsFiddle: https://jsfiddle.net/712777ee/4/
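For completeness, a rough sketch of where that rotation might live in a render loop (assuming the renderer, camera and currentScene variables from the fiddle):
function animate() {
    requestAnimationFrame(animate);
    // rotate only the rig belonging to the scene currently being shown
    currentScene.userData.spinningRig.rotation.y -= 0.025;
    renderer.render(currentScene, camera);
}
animate();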

Three.js - What is PlaneBufferGeometry

What is PlaneBufferGeometry exactly and how it is different from PlaneGeometry? (r69)
PlaneBufferGeometry is a low-memory alternative to PlaneGeometry. The object itself differs in a lot of ways; for instance, the vertices of a PlaneBufferGeometry are located in PlaneBufferGeometry.attributes.position instead of PlaneGeometry.vertices.
You can take a quick look in the browser console to figure out more differences, but as far as I understand, since the vertices are usually spaced at a uniform distance (X and Y) from each other, only the heights (Z) need to be given to position a vertex.
The main differences are between Geometry and BufferGeometry.
Geometry is a "user-friendly", object-oriented data structure, whereas BufferGeometry is a data structure that maps more directly to how the data is used in the shader program. BufferGeometry is faster and requires less memory, but Geometry is in some ways more flexible, and certain operations can be done with greater ease.
I have very little experience with Geometry, as I have found that BufferGeometry does the job in most cases. It is useful to learn, and work with, the actual data structures that are used by the shaders.
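As a rough illustration of that difference (assuming two plane geometries named planeGeometry and planeBufferGeometry), the same first vertex can be read like this:
// Geometry: an array of THREE.Vector3 objects
var v0 = planeGeometry.vertices[0];                     // v0.x, v0.y, v0.z
// BufferGeometry: one flat typed array, three numbers per vertex
var pos = planeBufferGeometry.getAttribute("position");
var x = pos.array[0], y = pos.array[1], z = pos.array[2];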
In the case of a PlaneBufferGeometry, you can access the vertex positions like this:
let pos = geometry.getAttribute("position");
let pa = pos.array;
Then set z values like this:
var hVerts = geometry.parameters.heightSegments + 1; // constructor args live in .parameters
var wVerts = geometry.parameters.widthSegments + 1;
for (let j = 0; j < hVerts; j++) {
    for (let i = 0; i < wVerts; i++) {
        // +0 is x, +1 is y, +2 is z
        pa[3 * (j * wVerts + i) + 2] = Math.random();
    }
}
pos.needsUpdate = true;
geometry.computeVertexNormals();
Randomness is just an example. You could also (as another example) plot a function of x and y, if you let x = pa[3*(j*wVerts+i)]; and let y = pa[3*(j*wVerts+i)+1]; in the inner loop. For a small performance benefit in the PlaneBufferGeometry case, let y = (0.5 - j/(hVerts-1)) * geometry.parameters.height in the outer loop instead.
geometry.computeVertexNormals(); is recommended if your material uses normals and you haven't calculated more accurate normals analytically. If you don't supply or compute normals, the material will use the default plane normals which all point straight out of the original plane.
Note that the number of vertices along a dimension is one more than the number of segments along the same dimension.
Note also that (counterintuitively) the y values are flipped with respect to the j indices: vertices.push( x, - y, 0 ); (source)
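Putting the pieces together, an end-to-end sketch might look like this (the sizes, material and scene variable are arbitrary placeholders; it assumes an r69-era API where the constructor arguments live in geometry.parameters):
var geometry = new THREE.PlaneBufferGeometry(10, 10, 32, 32);
var pos = geometry.getAttribute("position");
var pa = pos.array;
var wVerts = geometry.parameters.widthSegments + 1;
var hVerts = geometry.parameters.heightSegments + 1;
for (var j = 0; j < hVerts; j++) {
    for (var i = 0; i < wVerts; i++) {
        pa[3 * (j * wVerts + i) + 2] = Math.random();   // displace z
    }
}
pos.needsUpdate = true;
geometry.computeVertexNormals();
scene.add(new THREE.Mesh(geometry, new THREE.MeshPhongMaterial()));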

Babylon.js Mesh Picking & Ignoring Some Meshes

I am currently working on a small project using the new Babylon.js framework. One of the issues I have run into is that I basically have two meshes. One of the meshes is supposed to be the background, and the other is supposed to follow the cursor to mark where on the other mesh you are targeting. The problem is that when I move the targeting mesh to the position of the cursor, it blocks the background mesh when I use scene.pick, resulting in the targeting mesh having its position set on itself.
Is there any way to ignore the targeting mesh when using scene.pick so that I only pick the background mesh or is there some other method I could use? If not, what would be the steps to implement this sort of feature to essentially raycast only through certain meshes?
If you need code samples or any other forms of description, let me know. Thanks!
Ok, it's easy.
So, we have two meshes. One is called "ground", the second is called "cursor". If you want to pick only on the ground, you have two solutions:
First:
var ground = new BABYLON.Mesh("ground",scene);
ground.isPickable = true ;
var cursor = new BABYLON.Mesh("cursor", scene);
cursor.isPickable = false;
...
var p = scene.pick(event.clientX, event.clientY); // it returns only "isPickable" meshes
...
Second:
var ground = new BABYLON.Mesh("ground",scene);
var cursor = new BABYLON.Mesh("cursor", scene);
...
var p = scene.pick(event.clientX, event.clientY, function(mesh) {
    return mesh.name == "ground"; // so only ground will be pickable
});
...
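A small usage sketch tying the second option together (it assumes the canvas, scene, ground and cursor variables from above already exist):
canvas.addEventListener("pointermove", function(evt) {
    // only the ground can be hit, so the cursor mesh never blocks the ray
    var pickInfo = scene.pick(evt.clientX, evt.clientY, function(mesh) {
        return mesh === ground;
    });
    if (pickInfo.hit) {
        cursor.position.copyFrom(pickInfo.pickedPoint);
    }
});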
regards.

How to drag and drop Object3D elements with three.js?

I took this example for dragging and dropping objects, and it works perfectly. But now I want to group some elements so I can drag and drop them together. I replaced the Cubes from the example with some Spheres and grouped them together in an Object3D().
var data = [[0,10],[50,20],[100,7],[150,18],[200,15],[250,3],[300,10],[350,25]];
var group = new THREE.Object3D();
for (var i = 0; i < data.length; i++) {
    var mesh = new THREE.Mesh(new THREE.SphereGeometry(data[i][1], 20, 20), sphereMaterial);
    mesh.position.y = data[i][0];
    mesh.updateMatrix();
    group.add(mesh);
}
objects.push(group);
scene.add(group);
But I can't select this group of objects in my scene :( What am I doing wrong? Isn't it possible to select a group? Or what should it look like?
It's hard to tell without the code, but I think you might be using .intersectObjects(objects); you should try using .intersectObjects(objects, true) instead.
Here is the documentation :
.intersectObjects( objects, recursive )
objects — The objects to check for intersection with the ray.
recursive — If set, it also checks all descendants of the objects. Otherwise it only checks intersection with the objects.
Checks all intersections between the ray and the objects, with or without the descendants.
three.js Raycaster
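For reference, a minimal picking sketch with the recursive flag enabled (it assumes a reasonably recent three.js with Raycaster.setFromCamera, plus the existing camera, the objects array and a normalized mouse vector):
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouse, camera);
// true = also test each object's descendants, i.e. the spheres inside the group
var intersects = raycaster.intersectObjects(objects, true);
if (intersects.length > 0) {
    var picked = intersects[0].object;   // the individual sphere that was hit
    var group = picked.parent;           // its Object3D group, useful for dragging
}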
