three.js Box3 of an imported sphere - javascript

I have a short question:
I know how to calculate the bounding boxes of my (imported) 3D objects, e.g.
var box = new THREE.Box3().setFromObject(obj);
With this I can compute boxes for my objects and merge them together if I want.
The problem is that I now have these 2 objects: https://imgur.com/gallery/NbPwcmB
The solution seems quite simple: I need to compute the left and right sphere box and put them together. But both of these objects are imported with STLLoader. I'm not sure exactly how STLLoader works (it seems to me like it's all one huge mesh), so I'm not even sure if this is possible.
So my questions:
1. How can I compute a box with the shape of a sphere for my sphere object?
2. Is this even possible for my STL object? (I will try when I get the answer to question 1.)
Edit: Question 1 should somehow work with .computeBoundingSphere.
Is there a way to make this visible?

How can I compute a box with the shape of a sphere for my sphere object?
Well, in three.js you have the choice between two bounding volumes. THREE.Box3 represents an axis-aligned bounding box (AABB) whereas THREE.Sphere represents a bounding sphere. If you need a box with the shape of a sphere, use THREE.Sphere.
Is this even possible for my STL object?
The method setFromObject() only exists for THREE.Box3. However, you can compute the bounding sphere via THREE.BufferGeometry.computeBoundingSphere(). Note that this sphere is defined in local space. You can use THREE.Sphere.applyMatrix4() to transform it into world space by passing in the world matrix of the 3D object.
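A minimal sketch of that, assuming obj is the mesh returned by STLLoader (variable name taken from the question):
obj.geometry.computeBoundingSphere();
obj.updateMatrixWorld( true ); // make sure the world matrix is up to date
const boundingSphere = obj.geometry.boundingSphere.clone(); // defined in local space
boundingSphere.applyMatrix4( obj.matrixWorld ); // now in world space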
Is there a way to make this visible?
There is no helper class for bounding spheres. But you can easily create a helper mesh based on THREE.SphereBufferGeometry. Something like:
const geometry = new THREE.SphereBufferGeometry( boundingSphere.radius );
const material = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe: true } );
const mesh = new THREE.Mesh( geometry, material );
mesh.position.copy( boundingSphere.center );
scene.add( mesh );
three.js R109

Related

Three.js: how to use a plane to cut objects in 2 parts?

I have a complex object, e.g. a box, and I would like to cut it dynamically. This jsFiddle is a very simple example: jsFiddle
Very simple plane
var plane = new THREE.Mesh( geometry, material3 );
plane.rotation.x =1.3; // -Math.PI / 2;
gui.add(plane.rotation, "x", 0.1, Math.PI / 2).name("angle");
gui.add(plane.position, "y", -1, 1).name("h");
scene.add( plane );
I would like to remove the upper part from my object, just like cutting off a piece of an apple with a knife.
The plane is the knife: in my example, you can play with 2 controls to move the plane up and down or change the angle.
Can you help me to hide the removed part from the object?
You've got two options:
You could use clipping, as WestLangley mentioned above (see the sketch below).
Clipping does not modify the vertex geometry; it's only visual.
It is non-destructive, so it's good for animating or making constant updates.
Clipping is usually done with a few planes rather than complex geometry.
You could use a boolean operation with Constructive Solid Geometry (CSG).
A boolean operation does affect the vertex geometry, which can then be exported.
The operation is "destructive", so you can't make updates once it's done.
Booleans can be performed with complex geometries, as long as they're "manifold".
Boolean operations require both geometries to be manifold in order to work. This means both meshes have to be closed, without open faces. You cannot use infinitely thin planes, so the example in your jsFiddle wouldn't work. You would need to give each side a little bit of thickness, for example by using a box with a width of 0.0001 instead of a plane.
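For the clipping route, here is a rough sketch with a local clipping plane; the plane's normal and constant are placeholder values, and the box geometry and material stand in for your own object:
const clipPlane = new THREE.Plane( new THREE.Vector3( 0, -1, 0 ), 0.5 ); // clips away everything above y = 0.5
const clippedMaterial = new THREE.MeshBasicMaterial( {
    color: 0x00ff00,
    clippingPlanes: [ clipPlane ]
} );
const clippedMesh = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ), clippedMaterial );
scene.add( clippedMesh );
renderer.localClippingEnabled = true; // local clipping is disabled by default
You could then wire the dat.GUI controls from the fiddle to clipPlane.constant and clipPlane.normal instead of the helper plane's position and rotation.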

Why does a `Geometry` have a coordinate system and is it different to a `Mesh`?

I have been creating a simple Three.js application and so far I have a centered text in the scene that shows "Hello World". I have been copying the code examples to try and understand what is happening, and so far I have it working, but I am failing to completely understand why.
My confusion comes from reading all the Three.js tutorials describing that a Geometry object is responsible for creating the shape of the object in the scene. Therefore I did not think it would make sense to have a position on something that only describes the shape of the mesh.
/* Create the scene Text */
let loader = new THREE.FontLoader();
loader.load( 'fonts/helvetiker_regular.typeface.json', function ( font ) {
    /* Create the geometry */
    let geometry_text = new THREE.TextGeometry( "Hello World", {
        font: font,
        size: 5,
        height: 1,
    } );
    /* Create a bounding box in order to calculate the center position of the created text */
    geometry_text.computeBoundingBox();
    let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
    geometry_text.translate( -0.5 * x_mid, 0, 0 ); // Center the text by offsetting half the width
    /* Currently using basic material because I do not have a light, Phong will be black */
    let material_text = new THREE.MeshBasicMaterial( {
        color: new THREE.Color( 0x006699 )
    } );
    let textMesh = new THREE.Mesh( geometry_text, material_text );
    textMesh.position.set( 0, 0, -20 );
    //debugger;
    scene.add( textMesh );
    console.log( 'added mesh' );
} );
Here is the code that I use to add the shape, and my confusion comes from the following steps.
/* Create a bounding box in order to calculate the center position of the created text */
geometry_text.computeBoundingBox();
let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
geometry_text.translate(-0.5 * x_mid, 0, 0); // Center the text by offsetting half the width
First, we translate the geometry to the left to center the text inside the scene.
let textMesh = new THREE.Mesh(geometry_text, material_text);
textMesh.position.set(0, 0, -20);
Secondly, we set the position of the mesh.
My confusion comes from the fact that we need both of these operations in order to move the mesh backwards and have it centered.
However, I do not understand why one of these operations should be done on the geometry. In fact, what confuses me more is why textMesh.position.set(0, 0, -20); does not override my previously performed translation and simply move the mesh to (0, 0, -20), removing my previous translation. It seems that both are required.
AFAIK, in a scene graph it is recommended to transform (translate, rotate, scale) the whole mesh rather than to prepare a transformed geometry and build an "untransformed" mesh from it, since the mesh in the second case is not transform-friendly: cumulative transforms give wrong, unexpected results, even for a simple movement.
But sometimes it is useful to create a transformed geometry and use it for some algorithms/computations or in meshes.
You are getting somewhat "expected" results in your "combined transform" case because it is just a particular case (for example, it only works out like this while the object position is (0, 0, 0), etc.).
mesh.position.set does not modify the geometry: position is only a property of the mesh and is used to compute the final mesh triangles. This computation involves the geometry and the object's matrix, which is composed from the object's position, quaternion (3D rotation) and scale. The object's geometry can be modified by "matrix" operations, but no such operations are performed automatically by the mesh.
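To make that concrete, a small sketch (reusing geometry_text and material_text from the question) that gets the same visual result purely through the mesh transform; this only matches the original because the text geometry starts near x = 0:
geometry_text.computeBoundingBox();
let width = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;

let textMesh = new THREE.Mesh( geometry_text, material_text ); // geometry left untouched
textMesh.position.set( -0.5 * width, 0, -20 ); // centering and depth both live in the mesh's matrix
scene.add( textMesh );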

three.js "exploding" an object in predefined pieces

This is sort of a follow-up question to my last problem. I want to explode (technical-drawing style) an imported .obj geometry which consists of n .obj parts.
For example, a table as the object: each part is one .obj, four legs and a table top.
5 files; loading them is no problem. Now each object's position is (0, 0, 0), which means I need to get their world positions, which I did with this function:
function absPos( myMesh ) {
    myMesh.geometry.computeBoundingBox();
    var boundingBox = myMesh.geometry.boundingBox;
    var position = new THREE.Vector3();
    position.subVectors( boundingBox.max, boundingBox.min );
    position.multiplyScalar( 0.5 );
    position.add( boundingBox.min );
    position.applyMatrix4( myMesh.matrixWorld );
    var abspos = { x: position.x, y: position.y, z: position.z };
    return abspos;
}
Now I assumed that setting the objects' positions to their world positions would just move them to another world position but keep their relation to each other, but that is not the case. So I cannot just take the objects' world positions and use them to translate the objects without messing up the world positions?
As each object's position is (0, 0, 0), they don't have a relation to each other, and I cannot just multiply the values to "explode" the geometry.
Is there any chance to achieve this without messing with the world positions?
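For what it's worth, a rough sketch of one possible approach, assuming parts is an array of the five loaded meshes and absPos() is the function above: offset each part along the direction from the assembly's overall center to that part's own center.
var center = new THREE.Vector3();
parts.forEach( function ( part ) {
    var p = absPos( part );
    center.add( new THREE.Vector3( p.x, p.y, p.z ) );
} );
center.divideScalar( parts.length ); // rough center of the whole assembly

var explodeFactor = 0.5; // how far apart the parts move
parts.forEach( function ( part ) {
    var p = absPos( part );
    var dir = new THREE.Vector3( p.x, p.y, p.z ).sub( center );
    part.position.copy( dir.multiplyScalar( explodeFactor ) ); // push each part away from the assembly center
} );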

What is more cost effective updating a mesh or removing and adding from the scene?

I have a small web app that I've designed for viewing bathymetric data of the seafloor in Three.js. Basically I am using a loader to bring in JSON models of my extruded bathymetry into my scene and allowing the user to rotate the model or click next to load a new part of the seafloor.
All of my models have the same 2D footprint, so they are identical in two dimensions; only elevations and texture change from model to model.
My question is this: What is the most cost effective way to update my model?
1. Using scene.remove(mesh), then calling my loader again to load a new model, and then adding it to the scene with scene.add(mesh).
2. Updating the existing mesh by calling my loader to bring in material and geometry and then setting mesh.geometry = geometry, mesh.material = material, and then mesh.geometry.needsUpdate.
I've heard that updating is pretty intensive from a computational point of view, but all of the articles that I've read on this state that the two methods are almost the same. Is this information correct? Is there a better way to approach my code in this instance?
An alternative that I've considered is skipping the step where I create the model (in Blender) and instead using a displacement map to update the y coordinates of my vertices. Then to update I could push new vertices on an existing plane geometry before replacing the material. Would this be a sound approach? At the very least I think the displacement map would be a smaller file to load than a .JSON file. I could even optimize the display by loading a GUI element to divide the mesh into more or fewer divisions for high or low quality render...
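For reference, a rough sketch of that displacement idea under the old Geometry API, assuming plane is a THREE.Mesh with a THREE.PlaneGeometry and heights is a flat array with one elevation per vertex:
var geometry = plane.geometry; // the plane that gets reused between tiles
for ( var i = 0; i < geometry.vertices.length; i ++ ) {
    geometry.vertices[ i ].z = heights[ i ]; // PlaneGeometry lies in the XY plane, so elevation is z here
}
geometry.verticesNeedUpdate = true; // re-upload the vertex positions on the next render
geometry.computeVertexNormals(); // recompute normals so lighting matches the new surface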
I don't know off the top of my head what exactly happens under the hood, but from what I remember these two are essentially the same thing.
You aren't really updating the existing mesh. A mesh extends Object3D, so it just sits there, wiring together some geometry and some materials.
mesh.geometry = geometry did not "update the mesh", or rather it did, but with new geometry (which may be the thing you are actually referring to as the mesh).
In other words, you always keep your container, but when you replace the geometry by doing = geometry you set it up for all sorts of GL calls in the next THREE.WebGLRenderer.render() call.
Where that new geometry gets attached, be it an existing mesh or a new one, shouldn't matter at all. The geometry is the thing that will trigger the low-level WebGL calls like gl.bufferData().
// upload two geometries to the gpu on first render()
var meshA = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );
var meshB = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );

// upload one geometry to the gpu on first render()
var bg = new THREE.BoxGeometry();
var meshA = new THREE.Mesh( bg );
var meshB = new THREE.Mesh( bg );
for ( var i = 0; i < someBigNumber; i ++ ) {
    var meshTemp = new THREE.Mesh( bg );
}
// doesn't matter that you have X meshes, you only have one geometry

// 1 mesh, two geometries / "computations"
var meshA = new THREE.Mesh( new THREE.BoxGeometry() ); // first computation - compute box geometry
scene.add( meshA );
renderer.render( scene, camera ); // upload box to the gpu
meshA.geometry = new THREE.SphereGeometry();
renderer.render( scene, camera ); // upload sphere to the gpu
THREE.Mesh seems to be the most confusing concept in three.js.
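One caveat if you go the mesh.geometry = geometry route: the old geometry's GPU buffers are not freed automatically, so it is worth disposing of it. A small sketch, with newGeometry standing in for whatever your loader returns:
var oldGeometry = mesh.geometry;
mesh.geometry = newGeometry; // the next render() call uploads the new buffers
oldGeometry.dispose(); // frees the GPU resources of the replaced geometry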

Collision detection with boundingSphere

For each mesh's geometry, three.js provides two very handy properties - boundingBox and boundingSphere - whose classes (THREE.Box3 and THREE.Sphere) have isIntersectionBox and intersectsSphere methods.
With all this I thought I could use them for simple collision detection, but when I tried, it appeared that collisions happen all the time, because (I tried boundingSphere) boundingSphere.center is always at (0, 0, 0). So if I want to check collisions between 2 meshes, I should, for each object, clone the boundingSphere object, then get its world coordinates, and only then use intersectsSphere.
something like this:
var bs = component.object.geometry.boundingSphere.clone();
bs.center.setFromMatrixPosition(component.object.matrixWorld);
...
if (_bs.intersectsSphere(bs)){
Is this how it is supposed to be used, or am I missing something? Is there a more convenient way of doing collision detection based on boundingBox/boundingSphere?
If you want to do collision detection with bounding volumes you need them in the world coordinate system. The boundingBox and boundingSphere properties of the geometry are in the local coordinate system of the object.
You can do it like you did: clone the volumes and move them to the correct position in the world coordinate system. That is a good solution.
Otherwise you can also build a new sphere or box from your mesh and do the collision check with those. Let's say you have a THREE.Mesh called mesh, then you can do:
var sphere = new THREE.Sphere().setFromPoints( mesh.geometry.vertices );
var box = new THREE.Box3().setFromObject( mesh );
A little tip. During development it can be nice to see the bounding boxes in your scene, for this you can use the THREE.BoundingBoxHelper:
var helper = new THREE.BoundingBoxHelper( mesh );
scene.add( helper );
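For completeness, a minimal sketch of a world-space check between two meshes (meshA and meshB are placeholders here), using intersectsBox, which is the newer name of isIntersectionBox:
var boxA = new THREE.Box3().setFromObject( meshA ); // already in world coordinates
var boxB = new THREE.Box3().setFromObject( meshB );

if ( boxA.intersectsBox( boxB ) ) {
    // the axis-aligned bounding boxes overlap
}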
