I've got a line "walking around my scene" (some kind of a 3D snake) randomly and the next thing I wish to achieve is set a box around its head.
The line bufferGeometry is set by
var positions1 = new Float32Array( MAX_POINTS * 3 ); // 3 vertices per point
var positions2 = new Float32Array( MAX_POINTS * 3 ); // 3 vertices per point
buffGeometry1.addAttribute( 'position', new THREE.BufferAttribute( positions1, 3 ) );
buffGeometry2.addAttribute( 'position', new THREE.BufferAttribute( positions2, 3 ) );
I chose to place a cube (a BoxGeometry object) around it and used the following lines to try to achieve that:
var positioning = buffGeometry1.getAttribute('position');
cube.position.x = positioning[0];//(line1.geometry.attributes.position.array[drawCount]);
cube.position.y = positioning[1];//(line1.geometry.attributes.position.array[drawCount + 1]);
cube.position.z = positioning[2];
As I debug, I see that the values I read from positioning are undefined, so I guess something went wrong there.
Thanks.
Try:
console.log(buffGeometry1.getAttribute('position'))
My THREE.BufferGeometry shows that the vertices are stored in positioning.array, so you should access them with:
positioning.array[0]
positioning.array[1]
positioning.array[2]
If you add, say, a point to your scene using a BufferGeometry, you can also access its coordinates like this:
/// X coordinate /////
console.log(point1.geometry.attributes.position.array[0]);
/// Y coordinate /////
console.log(point1.geometry.attributes.position.array[1]);
/// Z coordinate /////
console.log(point1.geometry.attributes.position.array[2]);
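For the original box-around-the-head goal, a minimal sketch of positioning the cube could look like the following. The index headIndex is an assumption: it stands for whichever vertex index currently marks the snake's head (for example drawCount - 1 from your draw-range logic).

// Hypothetical head index; adjust to however you track the head vertex.
var headIndex = drawCount - 1;
var positioning = buffGeometry1.getAttribute( 'position' );

// Option 1: read the raw typed array (3 floats per vertex)
cube.position.set(
    positioning.array[ headIndex * 3 ],
    positioning.array[ headIndex * 3 + 1 ],
    positioning.array[ headIndex * 3 + 2 ]
);

// Option 2: let three.js do the indexing for you
cube.position.fromBufferAttribute( positioning, headIndex );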
I'm attempting to create a map of 2D SVG tiles in three.js. I have used SVGLoader() like so (keep in mind that some brackets belong to parent scopes that aren't shown; that is not the issue):
loader = new SVGLoader();
loader.load(
    // resource URL
    filePath,
    // called when the resource is loaded
    function ( data ) {
        console.log("SVG file successfully loaded");
        const paths = data.paths;
        for ( let i = 0; i < paths.length; i ++ ) {
            const path = paths[ i ];
            const material = new THREE.MeshBasicMaterial( {
                color: path.color,
                side: THREE.DoubleSide,
                depthWrite: false
            } );
            const shapes = SVGLoader.createShapes( path );
            console.log(`Shapes length = ${shapes.length}`);
            try {
                for ( let j = 0; j < shapes.length; j ++ ) {
                    const shape = shapes[ j ];
                    const geometry = new THREE.ShapeGeometry( shape );
                    const testGeometry = new THREE.PlaneGeometry(2, 2);
                    try {
                        const mesh = new THREE.Mesh( geometry, material );
                        group.add( mesh );
                    } catch (e) { console.log(e) }
                }
            } catch (e) { console.log(e) }
        }
    },
    // called when loading is in progress
    function ( xhr ) {
        console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
    },
    // called when loading has errors
    function ( error ) {
        console.log( 'An error happened' );
    }
);
return group;
}
Ignore the fact that I surrounded a lot of it with try{}catch(){}.
I have also created grid lines and added them to my axes helper in the application, which lets me see where each coordinate is in relation to the X and Y axes.
This is how the svg appears on screen:
[Screenshot: application output]
I can't seem to figure out how to correlate the scale of the SVG with the individual grid lines. I have a feeling I'm going to have to dive deeper into the SVG loading script above and then scale each shape mesh specifically. I call the SVG group itself in the following code.
try {
    //SVG returns a group, TGA returns a texture to be added to a material
    var object1 = LOADER.textureLoader("TGA", './Art/tile1.tga', pGeometry);
    var object2 = LOADER.textureLoader("SVG", '/Art/bitmap.svg');
    const testMaterial = new THREE.MeshBasicMaterial({
        color: 0xffffff,
        map: object1,
        side: THREE.DoubleSide
    });
    //const useMesh = new THREE.Mesh(pGeometry, testMaterial);
    //testing scaling the tile
    try {
        const worldScale = new THREE.Vector3();
        object2.getWorldScale(worldScale);
        console.log(`World ScaleX: ${worldScale.x} World ScaleY: ${worldScale.y} World ScaleZ: ${worldScale.z}`);
        //object2.scale.set(2,2,0);
    } catch (error) { console.log(error) }
    scene.add(object2);
}
Keep in mind that the SVG is object2 in this case. Some of the ideas I have had to tackle this problem are looking into what a world scale is, Matrix4 transformations, and the scale methods of either the Object3D parent properties or the BufferGeometry parent properties of this particular SVG group object. I am also fully aware that three.js is designed for 3D graphics; however, I would like to master 2D graphics programming in this library before I get into the 3D aspect of things. I also suspect that the scale of the SVG group is distinctly different from the scale of the scene and its X, Y, and Z axes.
If this question has already been answered a link to the corresponding answer would be of great help to me.
Thank you for the time you take to answer this question.
I messed with the dimensions of the SVG file itself in the editor I used to paint it, and I got it to scale. Not exactly a solution in code, but I guess the code is just closely tied to the data the SVG file provides and can't be altered too much.
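For reference, the fitting could also be done in code by measuring the loaded group and scaling it uniformly. This is only a sketch: it assumes the SVG group is object2 and that targetSize is the width (in world units) of one of your grid cells.

// Hypothetical target width of one grid cell, in world units
const targetSize = 2;

const box = new THREE.Box3().setFromObject( object2 );
const size = new THREE.Vector3();
box.getSize( size );

// Uniform scale so the widest dimension of the SVG matches targetSize
const scale = targetSize / Math.max( size.x, size.y );
object2.scale.setScalar( scale );

// SVG coordinates are y-down, so flipping Y is often needed as well
object2.scale.y *= -1;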
Since r125, THREE.Geometry has been deprecated. We are now updating our code base and running into errors that we don't know how to fix.
We create a sphere and use a raycaster on the sphere to get the intersection point.
worldSphere = new THREE.SphereGeometry(
    worldSize,
    worldXSegments,
    worldYSegments
);
...
const intersect = raycaster.intersectObjects([worldGlobe])[0];
...
if (intersect) {
    let a = worldSphere.vertices[intersect.face.a];
    let b = worldSphere.vertices[intersect.face.b];
    let c = worldSphere.vertices[intersect.face.c];
}
Normally, variable a would contain a value for each axis, namely a.x, a.y and a.z; the same goes for the other variables. However, this code does not work anymore.
We already know that worldSphere is of type THREE.BufferGeometry and that the vertices are stored in a position attribute, but we cannot seem to get it working.
What is the best way to fix our issue?
It should be:
const positionAttribute = worldGlobe.geometry.getAttribute( 'position' );
const a = new THREE.Vector3();
const b = new THREE.Vector3();
const c = new THREE.Vector3();
// in your raycasting routine
a.fromBufferAttribute( positionAttribute, intersect.face.a );
b.fromBufferAttribute( positionAttribute, intersect.face.b );
c.fromBufferAttribute( positionAttribute, intersect.face.c );
BTW: If you only raycast against a single object, use intersectObject() and not intersectObjects().
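Note that the vertices read this way are in the geometry's local space. If you need them in world coordinates, a short sketch (assuming worldGlobe is the intersected mesh) would be:

// Convert the local-space vertices to world space
worldGlobe.localToWorld( a );
worldGlobe.localToWorld( b );
worldGlobe.localToWorld( c );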
The inputs are three Euler angles x, y, z in radians.
I would like to convert these to a vector location (X, Y, Z) with the center as the origin.
If it's possible to use https://threejs.org/docs/#api/en/math/Euler.toVector3 to get the vector, I would like to know how. An alternative mathematical (sin/cos) solution would also be appreciated.
In the snippet below, axesHelper represents the angle, and the cube should be placed at the location derived from the Euler. Use dat.GUI to live-edit the rotations.
//add Axis to represent Euler
const axesHelper = new THREE.AxesHelper( 5 );
scene.add( axesHelper );
//add cube to represent Vector
const geometry = new THREE.BoxGeometry( 0.1, 0.1, 0.1 );
const material = new THREE.MeshBasicMaterial( {color: 0x00ff00} );
const cube = new THREE.Mesh( geometry, material );
scene.add( cube );
render()
const gui = new GUI();
const angles = {
    degX: 0,
    degY: 0,
    degZ: 0
};
gui.add( angles, 'degX', 0, 360, 1 ).onChange( function () {
    axesHelper.rotation.x = THREE.MathUtils.degToRad( angles.degX );
    render();
    updateEULtoAngle();
} );
gui.add( angles, 'degY', 0, 360, 1 ).onChange( function () {
    axesHelper.rotation.y = THREE.MathUtils.degToRad( angles.degY );
    render();
    updateEULtoAngle();
} );
gui.add( angles, 'degZ', 0, 360, 1 ).onChange( function () {
    axesHelper.rotation.z = THREE.MathUtils.degToRad( angles.degZ );
    render();
    updateEULtoAngle();
} );
console.log(THREE.MathUtils.radToDeg( axesHelper.rotation.x))
console.log(THREE.MathUtils.radToDeg( axesHelper.rotation.y))
console.log(THREE.MathUtils.radToDeg( axesHelper.rotation.z))
function updateEULtoAngle() {
    let eul = new THREE.Euler(
        THREE.MathUtils.degToRad( angles.degX ),
        THREE.MathUtils.degToRad( angles.degY ),
        THREE.MathUtils.degToRad( angles.degZ )
    );
    let vec = new THREE.Vector3();
    eul.toVector3( vec );
    console.log( eul, vec );
    cube.position.copy( vec );
}
[Fake visual representation: the cube following the axes helper's Y axis]
Related (but with an axis-matching problem): How to convert Euler angles to directional vector?
Euler.toVector3() does not do what you are looking for. It just copies the x, y and z angles into the respective vector components.
I think you should have a look at THREE.Spherical, which is an implementation of spherical coordinates. You can express a point in 3D space with two angles (phi and theta) and a radius. It's then possible to use these data to set up an instance of Vector3 via Vector3.setFromSpherical() or Vector3.setFromSphericalCoords().
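A minimal sketch of that approach is below. Mapping degX to phi and degY to theta, and using a radius of 1, are assumptions; adapt them to whichever angles drive your cube.

// Assumed distance of the cube from the origin
const radius = 1;

// Assumed mapping of the GUI angles to spherical coordinates:
// phi = polar angle from +Y, theta = azimuthal angle around Y
const phi = THREE.MathUtils.degToRad( angles.degX );
const theta = THREE.MathUtils.degToRad( angles.degY );

cube.position.setFromSphericalCoords( radius, phi, theta );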
I'm not quite sure what's going on, but
let vec = new THREE.Vector3(0, 0, 1).applyEuler(eul)
worked for me.
also check this:
https://github.com/mrdoob/three.js/issues/1606
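Applied to the snippet above, the update function could be rewritten roughly like this. The choice of (0, 0, 1) as the unit vector to rotate (and the implicit radius of 1) is an assumption about which direction should point at the cube.

function updateEULtoAngle() {
    const eul = new THREE.Euler(
        THREE.MathUtils.degToRad( angles.degX ),
        THREE.MathUtils.degToRad( angles.degY ),
        THREE.MathUtils.degToRad( angles.degZ )
    );

    // Rotate a unit vector by the Euler angles instead of calling eul.toVector3()
    const vec = new THREE.Vector3( 0, 0, 1 ).applyEuler( eul );

    cube.position.copy( vec );
}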
All I want to do is load an OBJ file and translate its coordinates to the world origin (0, 0, 0) so that orbit controls work perfectly (no pivot points, please).
I'd like to load random OBJ objects with different geometries/center points and have them translated automatically to the scene origin. In other words, a hard-coded translate solution for a specific model won't work.
This has got to be one of the most common scenarios for three.js (a basic 3D object viewer), so I'm surprised I can't find a definitive solution on SO.
Unfortunately there are a lot of older answers with deprecated functions, so I would really appreciate a new answer even if there are similar solutions out there.
Things I've tried
The code below fits the object nicely to the camera but doesn't solve the translation/orbiting problem.
// fit camera to object
var bBox = new THREE.Box3().setFromObject(scene);
var height = bBox.size().y;
var dist = height / (2 * Math.tan(camera.fov * Math.PI / 360));
var pos = scene.position;
// fudge factor so the object doesn't take up the whole view
camera.position.set(pos.x, pos.y, dist * 0.5);
camera.lookAt(pos);
Apparently geometry.center() is good for translating an object's coordinates back to the origin; THREE.GeometryUtils.center has been replaced by geometry.center(), but I keep getting errors when I try to use it.
When loading OBJs, Geometry has now been replaced by BufferGeometry. I can't seem to convert the BufferGeometry into a Geometry in order to use the center() function. Do I have to place this in the object traverse > child loop like so? This seems unnecessarily complicated.
geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
My code is just a very simple OBJLoader.
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
    object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = material;
        }
    } );
    scene.add(object);
});
(BTW first real question on SO so forgive any formatting / noob issues)
Why not object.geometry.center()?
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
    object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = material;
            child.geometry.center();
        }
    } );
    scene.add(object);
});
OK, I figured this out using some very useful functions from Meshviewer Master, an older three.js object viewer.
https://github.com/ideesculture/meshviewer
All credit to Gautier Michelin for this code
https://github.com/gautiermichelin
After loading the OBJ, you need to do 3 things:
1. Create a Bounding Box based on the OBJ
boundingbox = new THREE.BoundingBoxHelper(object, 0xff0000);
boundingbox.update();
sceneRadiusForCamera = Math.max(
    boundingbox.box.max.y - boundingbox.box.min.y,
    boundingbox.box.max.z - boundingbox.box.min.z,
    boundingbox.box.max.x - boundingbox.box.min.x
) / 2 * (1 + Math.sqrt(5)); // golden number to beautify display
2. Setup the Camera based on this bounding box / scene radius
function showFront() {
    if (objectCopy !== undefined) objectCopy.rotation.z = 0;
    controls.reset();
    camera.position.z = 0;
    camera.position.y = 0;
    camera.position.x = sceneRadiusForCamera;
    camera.lookAt(scene.position);
}
(the mesh viewer code also contains functions for viewing left, top, etc)
3. Reposition the OBJ to the scene origin
Like any centering exercise, the object is then offset by half its size along each axis:
function resetObjectPosition() {
    boundingbox.update();
    size.x = boundingbox.box.max.x - boundingbox.box.min.x;
    size.y = boundingbox.box.max.y - boundingbox.box.min.y;
    size.z = boundingbox.box.max.z - boundingbox.box.min.z;
    // Repositioning object
    objectCopy.position.x = -boundingbox.box.min.x - size.x / 2;
    objectCopy.position.y = -boundingbox.box.min.y - size.y / 2;
    objectCopy.position.z = -boundingbox.box.min.z - size.z / 2;
    boundingbox.update();
    if (objectCopy !== undefined) objectCopy.rotation.z = 0;
}
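In current three.js releases, THREE.BoundingBoxHelper no longer exists; the same centering can be done with THREE.Box3. This is a sketch of the equivalent, not the original meshviewer code, and it assumes object is the loaded OBJ group.

// Measure the loaded object and move its bounding-box center to the world origin
const box = new THREE.Box3().setFromObject( object );
const center = new THREE.Vector3();
box.getCenter( center );
object.position.sub( center );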
From my understanding of your question, you want objects added to the scene to sit at the origin of the camera view. I believe the common way of achieving an object-viewer solution is to add camera controls to the camera in your scene, most commonly THREE.OrbitControls, and to specify the camera's target as the object you want to focus on. This keeps the focused object in the center, and the camera rotation and movement are then based on that object.
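A sketch of that idea, assuming an OrbitControls instance named controls and the loaded OBJ group in object:

// Point OrbitControls at the object's bounding-box center
const center = new THREE.Box3().setFromObject( object ).getCenter( new THREE.Vector3() );
controls.target.copy( center );
controls.update();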
I don't understand how normals are computed in three.js.
Here is my problem :
I create a simple plane
var plane = new THREE.PlaneGeometry(10, 100, 10, 10);
var material = new THREE.MeshBasicMaterial();
material.setValues({side: THREE.DoubleSide, color: 0xaabbcc});
var mesh = new THREE.Mesh(plane, material);
mesh.rotateY(Math.PI / 2);
scene.add(mesh);
When I read the normal of this plane, I get (0, 0, 1).
But the plane is parallel to the z-axis, so the value is wrong.
I tried adding
mesh.geometry.computeFaceNormals();
mesh.geometry.computeVertexNormals();
but I still get the same result.
Did I miss anything?
How can I get correct normal values from three.js?
Thanks.
Geometry normals are in object space. To transform them to world space, first make sure the object matrix is updated.
object.updateMatrixWorld();
(The renderer does this for you in each render loop, so you may be able to skip this step.)
Then, compute the normal matrix:
var normalMatrix = new THREE.Matrix3().getNormalMatrix( object.matrixWorld );
Now transform the normal to world space like so:
var newNormal = normal.clone().applyMatrix3( normalMatrix ).normalize();
three.js r.66
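Putting the steps above together for this particular plane, a sketch in the same r.66-era API (where face normals live on geometry.faces) might look like this:

mesh.updateMatrixWorld();

var normalMatrix = new THREE.Matrix3().getNormalMatrix( mesh.matrixWorld );

// Object-space normal of the plane's first face, i.e. (0, 0, 1)
var normal = mesh.geometry.faces[ 0 ].normal;

// World-space normal; for a plane rotated PI/2 around Y this is approximately (1, 0, 0)
var worldNormal = normal.clone().applyMatrix3( normalMatrix ).normalize();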