I have a scene containing an Object3D representing a globe and multiple mesh elements representing points on this globe. I use OrbitControls to allow interaction, and I attach HTMLElements to the points on the globe. Since a globe is basically a sphere, points may not be visible to the camera when they are placed on the back.
How can I detect whether such a point is visible to the camera or hidden by the object? I want to hide the HTMLElement in relation to the mesh's visibility. The HTMLElement's position is updated on render, so I assume this check should happen on render as well:
private render() {
  this.renderer.render(this.scene, this.camera);
  this.points.forEach(({ label, mesh }) => {
    const screen = this.toScreenPosition(mesh);
    label.style.transform = `translate3d(${screen.x - 15}px, ${screen.y}px, 0)`;
  });
  this.requestId = window.requestAnimationFrame(this.render.bind(this));
}
Working code within render:
this.points.forEach(({ label, mesh }) => {
  const screen = this.toScreenPosition(mesh);
  label.style.transform = `translate3d(${screen.x - 15}px, ${screen.y}px, 0)`;

  // Line-of-sight test: cast a ray from the camera towards the point's mesh.
  const direction = new Vector3();
  direction.copy(mesh.position).sub(this.camera.position).normalize();
  this.raycaster.set(this.camera.position, direction);
  const intersections = this.raycaster.intersectObject(this.scene, true);

  // The point is visible only if its own mesh is the first thing the ray hits.
  const intersected = intersections.length > 0 ? intersections[0].object.uuid === mesh.uuid : false;
  if (intersected && label.style.opacity === "0") {
    label.style.opacity = "1";
  } else if (!intersected && label.style.opacity === "1") {
    label.style.opacity = "0";
  }
});
I recommend a simple algorithm with two steps:
First, check if the given point is in the view frustum at all. The code for implementing this feature is shared in: three.js - check if object is still in view of the camera.
If the test passes, you then have to verify whether the point is occluded by a 3D object. A typical way of checking this is a line-of-sight test: set up a raycaster with your camera's position and the direction that points from the camera to the given point, then test whether 3D objects in your scene intersect this ray. If the point's own mesh is the first intersection (or there is no intersection at all), the point is not occluded; otherwise it is, and you can hide the respective label.
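A minimal sketch of step 1, assuming a perspective camera named camera and a point mesh named mesh (both names are placeholders):

// Step 1 (sketch): frustum containment test before running the more expensive raycast.
const frustum = new THREE.Frustum();
const projScreenMatrix = new THREE.Matrix4();

projScreenMatrix.multiplyMatrices(camera.projectionMatrix, camera.matrixWorldInverse);
frustum.setFromProjectionMatrix(projScreenMatrix); // setFromMatrix() in older three.js revisions

const inView = frustum.containsPoint(mesh.getWorldPosition(new THREE.Vector3()));
// Only run the line-of-sight raycast from step 2 when inView is true.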
Related
I am trying to set up a scene in react-three-fiber that uses a raycaster to detect if an object intersects it.
My scene: (screenshot omitted)
I have been looking at examples like this example and this other example, which use raycasters with plain three.js objects but don't use separate JSX components for ".gltf" meshes, or aren't written in JSX at all. So I'm not sure how to pass my group of meshes to raycaster.intersectObject().
It seems that you normally set up your camera, scene, and raycaster separately in different variables, but my camera and scene are part of the Canvas component.
Question: How do I add raycasting support to my scene? This would let me obscure the text that is on the opposite side of the sphere.
Thanks!
This is the approach I used. Note that I used useState instead of useRef, since I had problems with the latter:
import { useEffect, useState } from "react";
import { Vector3 } from "three";
import { Box } from "@react-three/drei"; // assumed source of the <Box> helper

const Template = function basicRayCaster(...args) {
  const [ray, setRay] = useState(null);
  const [box, setBox] = useState(null);

  useEffect(() => {
    if (!ray || !box) return;
    console.log(ray.ray.direction);
    const intersect = ray.intersectObject(box);
    console.log(intersect);
  }, [box, ray]);

  return (
    <>
      <Box ref={setBox}></Box>
      <raycaster
        ref={setRay}
        ray={[new Vector3(-3, 0, 0), new Vector3(1, 0, 0)]}
      ></raycaster>
    </>
  );
};
Try CycleRaycast from the drei library. It is react-three-fiber-friendly:
"This component allows you to cycle through all objects underneath the cursor with optional visual feedback. This can be useful for non-trivial selection, CAD data, housing, everything that has layers. It does this by changing the raycasters filter function and then refreshing the raycaster."
You can retrieve the raycaster from useThree.
Example of modifying the raycaster threshold for points on mount:
const { raycaster } = useThree();

useEffect(() => {
  if (raycaster.params.Points) {
    raycaster.params.Points.threshold = 0.1;
  }
}, []);
Once that is done, you are able to modify its properties or use it directly.
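For example, inside a useFrame callback you can cast from the camera through the pointer on every frame (a sketch; `pointer` is called `mouse` in older react-three-fiber versions):

import { useFrame, useThree } from "@react-three/fiber";

// Sketch: log whatever the pointer is currently over, once per frame.
function HoverLogger() {
  const { raycaster, camera, scene, pointer } = useThree();
  useFrame(() => {
    raycaster.setFromCamera(pointer, camera);               // pointer is already in NDC
    const hits = raycaster.intersectObjects(scene.children, true);
    if (hits.length > 0) console.log(hits[0].object.name);
  });
  return null;
}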
I wanted to know how to make an A-Frame component, for any entity, that defines whether the entity is seen by the camera, like a boolean attribute:
"isSeen" = true || false
I tried with trigonometry (knowing the rotation of the camera and the entities' positions), but I failed.
How about frustums: checking if a point (x, y, z) is within the camera's field of view.
The code is quite simple. To use it within A-Frame, you could create a component which checks on each render loop whether the point is seen:
AFRAME.registerComponent('foo', {
  tick: function () {
    if (this.el.sceneEl.camera) {
      var cam = this.el.sceneEl.camera;
      var frustum = new THREE.Frustum();
      // In newer three.js revisions this method is named setFromProjectionMatrix().
      frustum.setFromMatrix(new THREE.Matrix4().multiplyMatrices(cam.projectionMatrix,
        cam.matrixWorldInverse));
      // Your 3D point to check
      var pos = new THREE.Vector3(x, y, z);
      if (frustum.containsPoint(pos)) {
        // Do something with the position...
      }
    }
  }
});
Check it out in my fiddle
All I want to do is load an OBJ file and translate its coordinates to the world origin (0,0,0) so that orbit controls work perfectly (no pivot points, please).
I'd like to load random OBJ objects with different geometries/center points and have them translated automatically to the scene origin. In other words, a 'hard coded' translate solution for a specific model won't work
This has got to be one of the most common scenarios for Three.js (a basic 3D object viewer), so I'm surprised I can't find a definitive solution on SO.
Unfortunately there are a lot of older answers with deprecated functions, so I would really appreciate a new answer even if there are similar solutions out there.
Things I've tried
The code below fits the object nicely to the camera, but doesn't solve the translation/orbiting problem.
// fit camera to object
var bBox = new THREE.Box3().setFromObject(scene);
var height = bBox.size().y; // note: Box3.size() is getSize( new THREE.Vector3() ) in newer three.js releases
var dist = height / (2 * Math.tan(camera.fov * Math.PI / 360));
var pos = scene.position;
// fudge factor so the object doesn't take up the whole view
camera.position.set(pos.x, pos.y, dist * 0.5);
camera.lookAt(pos);
Apparently geometry.center() is good for translating an object's coordinates back to the origin (THREE.GeometryUtils.center has been replaced by geometry.center()), but I keep getting errors when trying to use it.
When loading OBJs, geometry has now been replaced by BufferGeometry. I can't seem to convert the BufferGeometry into a Geometry in order to use the center() function. Do I have to place this in the object traverse > child loop like so? This seems unnecessarily complicated.
geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
My code is just a very simple OBJLoader.
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
  object.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      child.material = material;
    }
  });
  scene.add(object);
});
(BTW first real question on SO so forgive any formatting / noob issues)
Why not object.geometry.center()?
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
  object.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      child.material = material;
      child.geometry.center();
    }
  });
  scene.add(object);
});
OK, figured this out, using some very useful functions from Meshviewer Master, an older Three.js object viewer.
https://github.com/ideesculture/meshviewer
All credit to Gautier Michelin for this code
https://github.com/gautiermichelin
After loading the OBJ, you need to do 3 things:
1. Create a Bounding Box based on the OBJ
boundingbox = new THREE.BoundingBoxHelper(object, 0xff0000);
boundingbox.update();
sceneRadiusForCamera = Math.max(
boundingbox.box.max.y - boundingbox.box.min.y,
boundingbox.box.max.z - boundingbox.box.min.z,
boundingbox.box.max.x - boundingbox.box.min.x
)/2 * (1 + Math.sqrt(5)) ; // golden number to beautify display
2. Set up the camera based on this bounding box / scene radius
function showFront() {
  if (objectCopy !== undefined) objectCopy.rotation.z = 0;
  controls.reset();
  camera.position.z = 0;
  camera.position.y = 0;
  camera.position.x = sceneRadiusForCamera;
  camera.lookAt(scene.position);
}
(the mesh viewer code also contains functions for viewing left, top, etc)
3. Reposition the OBJ to the scene origin
Like any centering exercise, the object is then offset by its bounding-box minimum plus half its size along each axis:
function resetObjectPosition() {
  boundingbox.update();
  size.x = boundingbox.box.max.x - boundingbox.box.min.x;
  size.y = boundingbox.box.max.y - boundingbox.box.min.y;
  size.z = boundingbox.box.max.z - boundingbox.box.min.z;

  // Repositioning object
  objectCopy.position.x = -boundingbox.box.min.x - size.x/2;
  objectCopy.position.y = -boundingbox.box.min.y - size.y/2;
  objectCopy.position.z = -boundingbox.box.min.z - size.z/2;
  boundingbox.update();
  if (objectCopy !== undefined) objectCopy.rotation.z = 0;
}
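Wiring the three steps together inside the OBJLoader callback might look like this (a sketch; boundingbox, objectCopy, size and sceneRadiusForCamera are assumed to be variables in scope, as in the meshviewer code):

objLoader.load('BasketballNet_Skull.obj', function (object) {
  objectCopy = object;   // the helpers above operate on this reference
  scene.add(object);

  boundingbox = new THREE.BoundingBoxHelper(object, 0xff0000);
  boundingbox.update();
  // ...compute sceneRadiusForCamera here, as in step 1...

  resetObjectPosition(); // step 3: move the OBJ to the scene origin
  showFront();           // step 2: place the camera using sceneRadiusForCamera
});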
From my understanding of your question, you want the objects that are added to the scene to be at the origin of the camera view. The common way of achieving an object-viewer setup is to add camera controls to the camera in your scene, usually THREE.OrbitControls, and to specify the object you want to focus on as the target of the controls. This keeps the focused object in the center, and camera rotation and movement are based on that object.
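A minimal sketch of that idea, assuming OrbitControls from the three.js examples and that `object` is the loaded OBJ:

// Sketch: orbit around the object's bounding-box center instead of moving the object.
var controls = new THREE.OrbitControls(camera, renderer.domElement);
var center = new THREE.Box3().setFromObject(object).getCenter(new THREE.Vector3());

controls.target.copy(center); // OrbitControls rotates and zooms around this point
controls.update();            // apply the new target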
I am trying to use a Raycaster (for selection) that works fine with a PerspectiveCamera but doesn't work with a CombinedCamera.
First, it seems that CombinedCamera is not supported by the Raycaster, so among those lines of three.js I added this:
if ( camera instanceof THREE.CombinedCamera ) {
  if ( camera.inPerspectiveMode ) {
    camera = camera.cameraP;
  } else if ( camera.inOrthographicMode ) {
    camera = camera.cameraO;
  }
}
if ( camera instanceof THREE.PerspectiveCamera ) {
  ...
So it now refers to the nested camera; however, that doesn't do the trick because, I believe, the nested cameras' position/quaternion/rotation are not updated?
How can I achieve this and make Raycaster work with both Ortho and Perspective modes of a CombinedCamera ?
Raycasting needs the camera's world matrix data to work. Make the following modification to the CombinedCamera code:
// Add to the .toPerspective() method:
this.matrixWorldInverse = this.cameraP.matrixWorldInverse;
this.matrixWorld = this.cameraP.matrixWorld;

// and to the .toOrthographic() method add:
this.matrixWorldInverse = this.cameraO.matrixWorldInverse;
this.matrixWorld = this.cameraO.matrixWorld;
r73.
Example JSfiddle
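An alternative that avoids patching three.js is to keep the active nested camera's transform in sync yourself and hand that camera to the raycaster directly (a sketch; it relies on the cameraP/cameraO and inPerspectiveMode properties already used above):

// Sketch: raycast against whichever nested camera is currently active.
var activeCamera = combinedCamera.inPerspectiveMode ? combinedCamera.cameraP : combinedCamera.cameraO;

activeCamera.position.copy(combinedCamera.position);
activeCamera.quaternion.copy(combinedCamera.quaternion);
activeCamera.updateMatrixWorld(true);         // refresh the world matrix used by the raycaster

raycaster.setFromCamera(mouse, activeCamera); // mouse in normalized device coordinates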
I can get my cone to point at each target sphere in turn (red, green, yellow, blue) using the THREE.js "lookAt" function.
// Initialisation
c_geometry = new THREE.CylinderGeometry(3, 40, 120, 40, 10, false);
c_geometry.applyMatrix( new THREE.Matrix4().makeRotationX( Math.PI / 2 ) );
c_material = new THREE.MeshNormalMaterial()
myCone = new THREE.Mesh(c_geometry, c_material);
scene.add(myCone);
// Application (within the Animation loop)
myCone.lookAt(target.position);
But now I want the cone to pan smoothly and slowly from the old target to the new target. I guess that I can do it by computing intermediate points on the circular arc which is centred at the cone centre Cxyz and which passes from the previous target position Pxyz to the new target position Nxyz.
Please can someone point me to suitable: (a) utilities or (b) trigonometry algorithms or (c) code examples for calculating the xyz coordinates of the intermediate points on such an arc? (I will supply the angular increment between points based on desired sweep rate and time interval between frames).
You want to smoothly transition from one orientation to another.
In your case, you would pre-calculate the target quaternions:
myCone.lookAt( s1.position );
q1 = new THREE.Quaternion().copy( myCone.quaternion );
myCone.lookAt( s2.position );
q2 = new THREE.Quaternion().copy( myCone.quaternion );
Then, in your render loop:
myCone.quaternion.slerpQuaternions( q1, q2, time ); // 0 < time < 1
three.js r.141
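In the render loop, `time` can be advanced with a clock so the pan takes a fixed duration (a sketch; the one-second duration is an assumption):

// Sketch: drive the slerp parameter from elapsed time and clamp it to [0, 1].
var clock = new THREE.Clock();
var panDuration = 1.0; // seconds (assumed)
var t = 0;

function animate() {
  requestAnimationFrame(animate);
  t = Math.min(t + clock.getDelta() / panDuration, 1);
  myCone.quaternion.slerpQuaternions(q1, q2, t);
  renderer.render(scene, camera);
}
animate();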
For those of you looking to lerp position and lookAt together, you can create an initial lookAt target from the current look direction and lerp that towards the final target:
import { Object3D, Vector3 } from "three";

function MoveWhileLookingAt(object: Object3D, destination: Vector3, lookAt: Vector3) {
  const fromPosition = object.position.clone();
  const fromLookAt = new Vector3(
    0,
    .1, // To avoid initial camera flip on certain starting points (like top down view)
    -object.position.distanceTo(lookAt) // THREE.Camera looks down negative Z. Remove the minus if working with a regular object.
  );
  object.localToWorld(fromLookAt); // convert the initial look-at point to world space
  const tempTarget = fromLookAt.clone();

  function LookAtLerp(alpha: number) {
    // This goes in your render loop
    object.position.lerpVectors(fromPosition, destination, alpha);
    tempTarget.lerpVectors(fromLookAt, lookAt, alpha);
    object.lookAt(tempTarget);
  }

  return LookAtLerp; // call the returned function each frame with alpha in [0, 1]
}
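Usage might look like this (a sketch; it assumes the function returns its inner LookAtLerp, as in the version above, and that alpha is advanced each frame):

// Sketch: call the returned LookAtLerp from the render loop with a growing alpha.
const step = MoveWhileLookingAt(camera, new Vector3(10, 5, 10), new Vector3(0, 0, 0));
let alpha = 0;

function animate() {
  requestAnimationFrame(animate);
  alpha = Math.min(alpha + 0.01, 1); // a clock-based alpha would give a fixed duration instead
  step(alpha);                       // lerps position and look-at target together
  renderer.render(scene, camera);
}
animate();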