I am trying to set up a scene in react-three-fiber that uses a raycaster to detect whether an object intersects it.
My Scene: scene
I have been looking at examples like this example and this other example, of raycasters that use simple three.js objects but don't use separate JSX components or ".gltf" meshes, or they're not in JSX at all. So I'm not sure how to add my group of meshes to a raycaster.intersectObject().
It seems that all you do is set up your camera, scene, and raycaster separately in different variables, but my camera and scene are part of the Canvas component.
Question: How do I add raycasting support to my scene? The goal is to hide the text that sits on the opposite side of the sphere.
Thanks!
This is the approach I used. Note that I used useState instead of useRef, since I had problems with the latter.
import { useEffect, useState } from "react";
import { Vector3 } from "three";
import { Box } from "@react-three/drei"; // assumed import for the Box helper

const Template = function basicRayCaster(...args) {
  // useState as a callback ref: the effect re-runs once both nodes have mounted
  const [ray, setRay] = useState(null);
  const [box, setBox] = useState(null);

  useEffect(() => {
    if (!ray || !box) return;
    console.log(ray.ray.direction);
    const intersect = ray.intersectObject(box);
    console.log(intersect);
  }, [box, ray]);

  return (
    <>
      <Box ref={setBox}></Box>
      <raycaster
        ref={setRay}
        ray={[new Vector3(-3, 0, 0), new Vector3(1, 0, 0)]}
      ></raycaster>
    </>
  );
};
Try CycleRaycast from the drei library; it is react-three-fiber-friendly:
"This component allows you to cycle through all objects underneath the cursor with optional visual feedback. This can be useful for non-trivial selection, CAD data, housing, everything that has layers. It does this by changing the raycaster's filter function and then refreshing the raycaster."
You can retrieve the raycaster from useThree.
Example: modifying the raycaster's threshold for points on mount:
const { raycaster } = useThree();

useEffect(() => {
  if (raycaster.params.Points) {
    raycaster.params.Points.threshold = 0.1;
  }
}, []);
Once retrieved, you can modify its properties or use it directly, as in the sketch below.
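For example, a sketch of a manual intersection test with that same default raycaster inside useFrame (this assumes react-three-fiber has already synced the ray with the pointer, which it does for its own event system once the pointer moves):
import { useFrame } from "@react-three/fiber";

function HoverLogger() {
  useFrame(({ raycaster, scene }) => {
    // The default raycaster carries the current pointer ray
    const hits = raycaster.intersectObjects(scene.children, true);
    if (hits.length > 0) console.log("closest hit:", hits[0].object.name);
  });
  return null; // nothing to render; this component only logs
}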
Related
I'm attempting to create a map of 2D SVG tiles in three.js. I have used SVGLoader() like so (keep in mind some brackets are for parent scopes that aren't shown; that is not the issue):
loader = new SVGLoader();
loader.load(
  // resource URL
  filePath,
  // called when the resource is loaded
  function (data) {
    console.log("SVG file successfully loaded");
    const paths = data.paths;
    for (let i = 0; i < paths.length; i++) {
      const path = paths[i];
      const material = new THREE.MeshBasicMaterial({
        color: path.color,
        side: THREE.DoubleSide,
        depthWrite: false
      });
      const shapes = SVGLoader.createShapes(path);
      console.log(`Shapes length = ${shapes.length}`);
      try {
        for (let j = 0; j < shapes.length; j++) {
          const shape = shapes[j];
          const geometry = new THREE.ShapeGeometry(shape);
          const testGeometry = new THREE.PlaneGeometry(2, 2); // unused test geometry
          try {
            const mesh = new THREE.Mesh(geometry, material);
            group.add(mesh);
          } catch (e) { console.log(e); }
        }
      } catch (e) { console.log(e); }
    }
  },
  // called when loading is in progress
  function (xhr) {
    console.log((xhr.loaded / xhr.total * 100) + '% loaded');
  },
  // called when loading has errors
  function (error) {
    console.log('An error happened');
  }
);
return group;
}
Ignore the fact that I wrapped a lot of it in try{}catch(){}.
I have also created grid lines and added them to my axes helper in the application, which lets me see where each coordinate is in relation to the X and Y axes.
This is how the SVG appears on screen:
(screenshot: Application Output)
I can't seem to figure out how to correlate the scale of the SVG with the individual grid lines. I have a feeling I'm going to have to dive deeper into the SVG loading script above and scale each shape mesh specifically. I call the SVG group itself in the following code.
try {
  // SVG returns a group; TGA returns a texture to be added to a material
  var object1 = LOADER.textureLoader("TGA", './Art/tile1.tga', pGeometry);
  var object2 = LOADER.textureLoader("SVG", '/Art/bitmap.svg');

  const testMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    map: object1,
    side: THREE.DoubleSide
  });
  //const useMesh = new THREE.Mesh(pGeometry, testMaterial);

  // testing scaling the tile
  try {
    const worldScale = new THREE.Vector3();
    object2.getWorldScale(worldScale);
    console.log(`World ScaleX: ${worldScale.x} World ScaleY: ${worldScale.y} World ScaleZ: ${worldScale.z}`);
    //object2.scale.set(2, 2, 0);
  } catch (error) { console.log(error); }

  scene.add(object2);
}
Keep in mind that the SVG is object2 in this case. Some ideas I've had for tackling this problem: looking into what a world scale is, Matrix4 transformations, and the scale methods of either the Object3D parent properties or the BufferGeometry parent properties of this particular SVG group object. I am also fully aware that three.js is designed for 3D graphics; however, I would like to master 2D graphics programming in this library before I get into the 3D aspect of things. I also suspect that the scale of the SVG group is distinctly different from the scale of the scene and its X, Y, and Z axes.
If this question has already been answered, a link to the corresponding answer would be of great help to me.
Thank you for the time you take to answer this question.
I messed with the dimensions of the SVG file itself in the editor I used to paint it, and I got it to scale. Not exactly a solution in the code, but I guess the code is just closely tied to the data that the SVG file provides and can't be altered too much.
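For anyone who wants the code-side equivalent: a minimal sketch, assuming the loaded SVG group is object2 and targetWidth (a name introduced here for illustration) is how wide it should be in world units. It measures the group's bounding box and scales it uniformly to match the grid:
// Normalize the SVG group's size to a target world-space width
const box = new THREE.Box3().setFromObject(object2);
const size = box.getSize(new THREE.Vector3());

const targetWidth = 2; // hypothetical: the width of one grid cell in world units
object2.scale.setScalar(targetWidth / size.x);

// SVG coordinates are y-down, so a vertical flip is often needed as well:
object2.scale.y *= -1;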
I have a scene including an Object3D representing a globe and multiple mesh elements representing points on this globe. I use OrbitControls to allow interaction. Additionally, I attach HTMLElements to the points on the globe. Since a globe is basically a sphere, points might not be visible to the camera when placed on the back.
How can I detect whether such a point is visible to the camera or hidden by the object? I want to hide the HTMLElement in line with the mesh's visibility. The HTMLElement's position is updated on render, so I assume this check should happen on render as well:
private render() {
  this.renderer.render(this.scene, this.camera);

  this.points.forEach(({ label, mesh }) => {
    const screen = this.toScreenPosition(mesh);
    label.style.transform = `translate3d(${screen.x - 15}px, ${screen.y}px, 0)`;
  });

  this.requestId = window.requestAnimationFrame(this.render.bind(this));
}
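(toScreenPosition is not shown in the question; a typical implementation, sketched here under the assumption that the renderer's canvas fills its DOM element, projects the mesh's world position into normalized device coordinates and maps those to pixels:)
private toScreenPosition(mesh: THREE.Object3D): { x: number; y: number } {
  const v = new THREE.Vector3();
  mesh.getWorldPosition(v);
  v.project(this.camera); // world space -> normalized device coordinates in [-1, 1]

  const { clientWidth, clientHeight } = this.renderer.domElement;
  return {
    x: ((v.x + 1) / 2) * clientWidth,
    y: ((1 - v.y) / 2) * clientHeight,
  };
}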
Working code within render:
this.points.forEach(({ label, mesh }) => {
  const screen = this.toScreenPosition(mesh);
  label.style.transform = `translate3d(${screen.x - 15}px, ${screen.y}px, 0)`;

  // Cast a ray from the camera towards this point's mesh
  const direction = new Vector3();
  direction.copy(mesh.position).sub(this.camera.position).normalize();
  this.raycaster.set(this.camera.position, direction);

  // If the nearest hit is the mesh itself, nothing occludes it (i.e. it is visible)
  const intersections = this.raycaster.intersectObject(this.scene, true);
  const intersected = intersections.length > 0 ? intersections[0].object.uuid === mesh.uuid : false;

  if (intersected && label.style.opacity === "0") {
    label.style.opacity = "1";
  } else if (!intersected && label.style.opacity === "1") {
    label.style.opacity = "0";
  }
});
I recommend a simple algorithm with two steps:
First, check if the given point is in the view frustum at all. The code for implementing this feature is shared in: three.js - check if object is still in view of the camera. (A minimal sketch of this test follows below.)
If the test passes, you have to verify whether the point is occluded by a 3D object. A typical way of checking this is a line-of-sight test: set up a raycaster with your camera's position and the direction that points from the camera to the given point, then test whether any 3D objects in your scene intersect this ray. If there is no intersection, the point is not occluded; otherwise it is, and you can hide the respective label.
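A sketch of the frustum test (the method is named setFromProjectionMatrix in recent three.js releases; older versions call it setFromMatrix):
// Step 1: is the point inside the camera's view frustum?
const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();

camera.updateMatrixWorld();
projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(projScreenMatrix);

const inView = frustum.containsPoint(mesh.position);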
I have a solid MorphBlendMesh that is overlaid with a LineSegments object using EdgesGeometry/LineBasicMaterial, in order to create a wireframe look without the "diagonals" that result from the triangle approach in newer versions of three.js. The problem is that I cannot find a way to get the LineSegments to animate along with the mesh, presumably because it isn't a mesh; it's simply an Object3D.
Is there a way to animate a LineSegments object with AnimationMixer? Or replicate this same look with a mesh setup that works well with AnimationMixer?
For reference, my question is essentially an expansion of this question -- same idea, but it MUST be capable of animation with AnimationMixer.
You can attach an arbitrary property to the object the mixer animates; this property will hold the vertex positions.
// Start and end vertex positions: `lines` is the animated LineSegments,
// `line` holds the target pose
const a: any = ((lines.geometry as THREE.BufferGeometry).attributes.position as BufferAttribute).array;
const p: any = (line.geometry as any).attributes.position.array;

// Attach an arbitrary ".value" property for the mixer to animate
(lines as any).value = [...a];

const keyFrame2 = new THREE.NumberKeyframeTrack(
  '.value',
  [0, 1],
  [...a, ...p],
  THREE.InterpolateSmooth
);

this.lineGeometriesToUpdate.push(lines as THREE.LineSegments);
const clip2 = new THREE.AnimationClip('lines', 1, [keyFrame2]);
const mixer2 = new THREE.AnimationMixer(lines);
const ca2 = mixer2.clipAction(clip2);
ca2.play(); // the action has to be started (assumed; not shown in the original)
this.mixer.push(mixer2);
Then, in your animation loop, you use this property to update the geometry:
this.lineGeometriesToUpdate.forEach(l => {
  const geom = l.geometry as THREE.BufferGeometry;
  const values = (l as any).value; // the interpolated vertex positions
  geom.setAttribute('position', new THREE.BufferAttribute(new Float32Array(values), 3));
  (geom.attributes.position as BufferAttribute).needsUpdate = true;
});
this.renderScene();
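One detail the snippets leave implicit: each mixer still has to be advanced every frame, before the geometry refresh above runs. A sketch, assuming this.clock is a THREE.Clock and this.mixer is the array the mixers were pushed into:
const delta = this.clock.getDelta();      // seconds since the last frame
this.mixer.forEach(m => m.update(delta)); // advances every '.value' track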
All I want to do is load an OBJ file and translate its coordinates to the world origin (0, 0, 0) so that orbit controls work perfectly (no pivot points, please).
I'd like to load random OBJ objects with different geometries/center points and have them translated automatically to the scene origin. In other words, a 'hard-coded' translate solution for a specific model won't work.
This has got to be one of the most common scenarios for three.js (a basic 3D object viewer), so I'm surprised I can't find a definitive solution on SO.
Unfortunately there are a lot of older answers with deprecated functions, so I would really appreciate a new answer even if there are similar solutions out there.
Things I've tried
The code below fits the object nicely to the camera, but doesn't solve the translation/orbiting problem.
// fit camera to object
var bBox = new THREE.Box3().setFromObject(scene);
var height = bBox.getSize(new THREE.Vector3()).y; // Box3.size() was removed; getSize() is the current API
var dist = height / (2 * Math.tan(camera.fov * Math.PI / 360));
var pos = scene.position;

// fudge factor so the object doesn't take up the whole view
camera.position.set(pos.x, pos.y, dist * 0.5);
camera.lookAt(pos);
Apparently geometry.center() is good for translating an object's coordinates back to the origin; THREE.GeometryUtils.center has been replaced by geometry.center(), and I keep getting errors when trying to use it.
When loading OBJs, geometry has now been replaced by BufferGeometry. I can't seem to cast the BufferGeometry into a Geometry in order to use the center() function. Do I have to place this in the object traverse > child loop like so? This seems unnecessarily complicated.
geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
My code is just a very simple OBJLoader.
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
  object.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      child.material = material;
    }
  });
  scene.add(object);
});
(BTW first real question on SO so forgive any formatting / noob issues)
Why not object.geometry.center()?
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
  object.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      child.material = material;
      child.geometry.center(); // recenter each mesh's geometry on the origin
    }
  });
  scene.add(object);
});
OK, figured this out, using some very useful functions from Meshviewer Master, an older three.js object viewer.
https://github.com/ideesculture/meshviewer
All credit to Gautier Michelin for this code
https://github.com/gautiermichelin
After loading the OBJ, you need to do three things:
1. Create a Bounding Box based on the OBJ
boundingbox = new THREE.BoundingBoxHelper(object, 0xff0000); // deprecated in later releases; THREE.BoxHelper is the modern replacement
boundingbox.update();

sceneRadiusForCamera = Math.max(
  boundingbox.box.max.y - boundingbox.box.min.y,
  boundingbox.box.max.z - boundingbox.box.min.z,
  boundingbox.box.max.x - boundingbox.box.min.x
) / 2 * (1 + Math.sqrt(5)); // golden ratio to beautify the display
2. Setup the Camera based on this bounding box / scene radius
function showFront() {
  if (objectCopy !== undefined) objectCopy.rotation.z = 0;
  controls.reset();
  camera.position.z = 0;
  camera.position.y = 0;
  camera.position.x = sceneRadiusForCamera;
  camera.lookAt(scene.position);
}
(the mesh viewer code also contains functions for viewing left, top, etc)
3. Reposition the OBJ to the scene origin
Like any centering exercise, the object is shifted on each axis by the negative of its bounding-box minimum plus half its size:
function resetObjectPosition() {
  boundingbox.update();
  size.x = boundingbox.box.max.x - boundingbox.box.min.x;
  size.y = boundingbox.box.max.y - boundingbox.box.min.y;
  size.z = boundingbox.box.max.z - boundingbox.box.min.z;

  // Reposition the object so its bounding box is centered on the origin
  objectCopy.position.x = -boundingbox.box.min.x - size.x / 2;
  objectCopy.position.y = -boundingbox.box.min.y - size.y / 2;
  objectCopy.position.z = -boundingbox.box.min.z - size.z / 2;

  boundingbox.update();
  if (objectCopy !== undefined) objectCopy.rotation.z = 0;
}
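For current three.js versions (where BoundingBoxHelper no longer exists), the same centering can be done with Box3 directly; a minimal sketch:
// Center any loaded object on the world origin
const box = new THREE.Box3().setFromObject(object);
const center = box.getCenter(new THREE.Vector3());
object.position.sub(center); // shift so the bounding-box center sits at (0, 0, 0)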
From my understanding of your question, you want the objects added to the scene to sit at the origin of the camera view. I believe the common way of achieving an object-viewer solution is to add camera controls to your camera (usually THREE.OrbitControls) and specify the camera's target as the object you want to focus on. This puts that object in the center, and the camera's rotation and movement will be based on it.
I am trying to use a Raycaster (for selection) that works fine with a PerspectiveCamera but doesn't work with a CombinedCamera.
First, it seems that CombinedCamera is not supported by the Raycaster, so within those lines of the three.js Raycaster code I added this:
if ( camera instanceof THREE.CombinedCamera ) {
  if ( camera.inPerspectiveMode ) {
    camera = camera.cameraP;
  } else if ( camera.inOrthographicMode ) {
    camera = camera.cameraO;
  }
}
if ( camera instanceof THREE.PerspectiveCamera ) {
  ...
So it now refers to the nested camera. However, that doesn't do the trick because, I believe, the nested cameras' position/quaternion/rotation are not updated.
How can I achieve this and make the Raycaster work with both the orthographic and perspective modes of a CombinedCamera?
The raycaster needs the camera's world matrix data for the raycasting to work. Make the following modification to the CombinedCamera code:

// Add to the .toPerspective() method:
this.matrixWorldInverse = this.cameraP.matrixWorldInverse;
this.matrixWorld = this.cameraP.matrixWorld;

// and to the .toOrthographic() method add:
this.matrixWorldInverse = this.cameraO.matrixWorldInverse;
this.matrixWorld = this.cameraO.matrixWorld;
This applies to three.js r73.
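With that patch in place, a sketch of the picking call itself (assuming mouse holds pointer coordinates in normalized device space, and using the inPerspectiveMode flag from above):
// Re-sync the wrapper's matrices with the active nested camera, then pick
camera.updateMatrixWorld();
if (camera.inPerspectiveMode) camera.toPerspective();
else camera.toOrthographic();

raycaster.setFromCamera(mouse, camera);
const hits = raycaster.intersectObjects(scene.children, true);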