I have set up a simple scene with my camera inside a sphere geometry:
var mat = new THREE.MeshBasicMaterial({ map: THREE.ImageUtils.loadTexture('0.jpg'), overdraw: true, color: 0xffffff, wireframe: false });
var sphereGeo = new THREE.SphereGeometry(1000, 50, 50);
var sphere = new THREE.Mesh(sphereGeo, mat);
sphere.scale.x = -1;
sphere.doubleSided = false;
scene.add(sphere);
I set up functionality to look around inside that sphere, and my goal is to cast a ray on mouse down, hit the sphere, and get the coordinates where that hit occurred. I'm casting a ray, but the intersects array is still empty.
var vector = new THREE.Vector3();
vector.set( ( event.clientX / window.innerWidth ) * 2 - 1, - ( event.clientY / window.innerHeight ) * 2 + 1, 0.5 );
vector.unproject( camera );
raycaster.ray.set( camera.position, vector.sub( camera.position ).normalize());
var intersects = raycaster.intersectObjects(scene.children, true);
Everything works with a test cube also put inside my sphere.
My question is: does it matter whether you hit the object from the inside or not? Because that is the only explanation that comes to my mind.
Thanks in advance.
sphere.doubleSided was changed to sphere.material.side = THREE.DoubleSide some years ago.
It does matter if you hit the object from the inside. Usually a ray will pass through an "inverted" surface due to backface culling, which happens at the pipeline level.
Inverted/flipped surfaces are usually ignored in both rendering and raycasting.
In your case, however, I'd go ahead and try changing sphere.doubleSided = false; to sphere.doubleSided = true;. This should make the raycast return the intersection point with your sphere (note: this shouldn't work with a negative scale).
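In a current three.js revision, the same fix (per the note above) is a one-liner:
// Modern equivalent: the raycaster respects material.side, so
// DoubleSide lets hits from inside the sphere register as well.
sphere.material.side = THREE.DoubleSide;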
You can also enter the "dirty vertices" mode, and flip the normals manually:
mesh.geometry.dynamic = true;
mesh.geometry.__dirtyVertices = true;
mesh.geometry.__dirtyNormals = true;
mesh.flipSided = true;

// recompute the normals first, then flip them manually
// (flipping first would be undone, since the compute calls
// rebuild the normals from the face winding)
mesh.geometry.computeFaceNormals();
mesh.geometry.computeVertexNormals();

// flip every face normal in the mesh by multiplying it by -1
for (var i = 0; i < mesh.geometry.faces.length; i++) {
    mesh.geometry.faces[i].normal.multiplyScalar(-1);
}
I also suggest you set scale back to 1.0 instead of -1.0.
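For reference, a minimal panorama-sphere setup at scale 1.0 might look like this (a sketch assuming a newer revision with THREE.TextureLoader; older revisions keep THREE.ImageUtils.loadTexture):
// Keep scale at 1.0 and render the inside faces instead:
// THREE.BackSide draws (and raycasts) the inward-facing side.
var mat = new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load('0.jpg'),
    side: THREE.BackSide
});
var sphere = new THREE.Mesh(new THREE.SphereGeometry(1000, 50, 50), mat);
scene.add(sphere);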
Let me know if it worked!
Related
I am working in Autodesk Forge, which includes three.js r71, and I want to use a raycaster to detect clicks on different elements within a point cloud.
Sample code for how to do this with three.js r71 would be appreciated.
Right now, I register an extension with the Forge API and run the code below within it. It creates a point cloud and positions the points at predetermined locations (saved within the cameraInfo array).
let geometry = new THREE.Geometry();
this.cameraInfo.forEach(function(e) {
    geometry.vertices.push(e.position);
});
const material = new THREE.PointCloudMaterial({ size: 150, color: 0xff0000, sizeAttenuation: true });
this.points = new THREE.PointCloud(geometry, material);
this.scene.add(this.points);
/* Set up event listeners */
document.addEventListener('mousemove', event => {
    // console.log('mouse move!');
    let mouse = {
        x: ( event.clientX / window.innerWidth ) * 2 - 1,
        y: - ( event.clientY / window.innerHeight ) * 2 + 1
    };
    let raycaster = new THREE.Raycaster();
    raycaster.params.PointCloud.threshold = 15;
    let vector = new THREE.Vector3(mouse.x, mouse.y, 0.5).unproject(this.camera);
    raycaster.ray.set(this.camera.position, vector.sub(this.camera.position).normalize());
    this.scene.updateMatrixWorld();
    let intersects = raycaster.intersectObject(this.points);
    if (intersects.length > 0) {
        const hitIndex = intersects[0].index;
        const hitPoint = this.points.geometry.vertices[hitIndex];
        console.log(hitIndex);
        console.log(hitPoint);
    }
}, false);
The output seems to be illogical. At certain camera positions, it constantly tells me that it is intersecting an item in the point cloud (regardless of where the mouse is). And at certain camera positions, it won't detect an intersection at all.
TL;DR: it doesn't actually detect an intersection between my point cloud and the mouse.
I've simplified the code a bit, using some of the viewer APIs (using a couple of sample points in the point cloud):
const viewer = NOP_VIEWER;
const geometry = new THREE.Geometry();
for (let i = -100; i <= 100; i += 10) {
    geometry.vertices.push(new THREE.Vector3(i, i, i));
}
const material = new THREE.PointCloudMaterial({ size: 50, color: 0xff0000, sizeAttenuation: true });
const points = new THREE.PointCloud(geometry, material);
viewer.impl.scene.add(points);
const raycaster = new THREE.Raycaster();
raycaster.params.PointCloud.threshold = 50;
document.addEventListener('mousemove', function(event) {
    const ray = viewer.impl.viewportToRay(viewer.impl.clientToViewport(event.clientX, event.clientY));
    raycaster.ray.set(ray.origin, ray.direction);
    let intersects = raycaster.intersectObject(viewer.impl.scene, true);
    if (intersects.length > 0) {
        console.log(intersects[0]);
    }
});
I believe you'll need to tweak the raycaster.params.PointCloud.threshold value. The ray casting logic in three.js doesn't actually intersect the point "boxes" that you see rendered on the screen. It only computes distance between the ray and the point (in the world coordinate system), and only outputs an intersection when the distance is under the threshold value. In my example I tried setting the threshold to 50, and the intersection results were somewhat better.
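Roughly speaking, that per-point test boils down to something like this (a conceptual sketch of the r71-era logic, not the actual three.js source):
// For each point p (world space) in the cloud, the raycaster measures
// the shortest distance between the ray and the point:
var distance = raycaster.ray.distanceToPoint(p);
if (distance < raycaster.params.PointCloud.threshold) {
    // p is reported as an intersection (sorted by distance along the ray)
}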
As a side note, if you don't necessarily need point clouds inside the scene, consider overlaying HTML elements over the 3D view instead. We're using this approach in the https://forge-digital-twin.autodesk.io demo to show rich annotations attached to specific positions in 3D space. With this approach, you don't have to worry about custom intersections; the browser handles everything for you.
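The gist of the overlay approach is to project each anchor point into screen space every frame and position an absolutely-positioned HTML element there. A minimal sketch (updateAnnotation, annotationEl, and worldPos are hypothetical names):
// Project a world-space anchor into CSS pixels and move the element.
function updateAnnotation(annotationEl, worldPos, camera) {
    var p = worldPos.clone().project(camera); // NDC, each axis in [-1, 1]
    var x = ( p.x + 1) / 2 * window.innerWidth;
    var y = (-p.y + 1) / 2 * window.innerHeight;
    annotationEl.style.transform = 'translate(' + x + 'px, ' + y + 'px)';
}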
What I'm trying to achieve is a rotation of the geometry around a pivot point, making that the new definition of the geometry. I do not want to keep editing the rotationZ; I want the current rotationZ to become the new rotationZ 0.
This way, when I create a new rotation task, it will start from the newly given pivot point and the newly given angle in radians.
Here is what I've tried, but then the rotation point moves:
// Add cube to do calculations
var box = new THREE.Box3().setFromObject(o);
var size = box.getSize();
var offsetZ = size.z / 2;
o.geometry.translate(0, -offsetZ, 0);
// Do rotation
o.rotateZ(CalcUtils.degreeToRad(degree));
o.geometry.translate(0, offsetZ, 0);
I also tried to add a Group, rotate that group, and then remove the group. But I need to keep the rotation without all the extra objects. The code I created:
var box = new THREE.Box3().setFromObject(o);
var size = box.getSize();
var geometry = new THREE.BoxGeometry(20, 20, 20);
var material = new THREE.MeshBasicMaterial({ color: 0xcc0000 });
var cube = new THREE.Mesh(geometry, material);
cube.position.x = o.position.x;
cube.position.y = 0; // Height / 2
cube.position.z = -size.z / 2;
o.position.x = 0;
o.position.y = 0;
o.position.z = size.z / 2;
cube.add(o);
scene.add(cube);
// Do rotation
cube.rotateY(CalcUtils.degreeToRad(degree));
// Remove cube, and go back to single object
var position = o.getWorldPosition();
scene.add(o);
scene.remove(cube);
console.log(o);
o.position.x = position.x;
o.position.y = position.y;
o.position.z = position.z;
So my question: how do I save the current rotation as the new rotation 0? How do I make the rotation final?
EDIT
I added an image of what I want to do. The object is green. I have a 0 point of the world (black). I have a 0 point of the object (red). And I have rotation point (blue).
How can I rotate the object around the blue point?
I wouldn't recommend updating the vertices, because you'll run into trouble with the normals (unless you keep them up-to-date, too). Basically, it's a lot of hassle to perform an action for which the transformation matrices were intended.
You came pretty close by translating, rotating, and un-translating, so you were on the right track. There are some built-in methods which can help make this super easy.
// obj - your object (THREE.Object3D or derived)
// point - the point of rotation (THREE.Vector3)
// axis - the axis of rotation (normalized THREE.Vector3)
// theta - radian value of rotation
// pointIsWorld - boolean indicating the point is in world coordinates (default = false)
function rotateAboutPoint(obj, point, axis, theta, pointIsWorld) {
    pointIsWorld = (pointIsWorld === undefined) ? false : pointIsWorld;

    if (pointIsWorld) {
        obj.parent.localToWorld(obj.position); // compensate for world coordinate
    }

    obj.position.sub(point);                  // remove the offset
    obj.position.applyAxisAngle(axis, theta); // rotate the POSITION
    obj.position.add(point);                  // re-add the offset

    if (pointIsWorld) {
        obj.parent.worldToLocal(obj.position); // undo world coordinates compensation
    }

    obj.rotateOnAxis(axis, theta); // rotate the OBJECT
}
After this method completes, the rotation/position IS persisted. The next time you call the method, it will transform the object from its current state to wherever your inputs define next.
Also note the compensation for using world coordinates. This allows you to use a point in either world coordinates or local space by converting the object's position vector into the correct coordinate system. It's probably best to use it this way any time your point and object are in different coordinate systems, though your observations may differ.
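For example, to rotate an object 45 degrees around a world-space pivot about the world Y axis (mesh and the pivot coordinates here are hypothetical):
// Rotate `mesh` by 45 degrees around the "blue point" pivot.
var pivot = new THREE.Vector3(5, 0, 2); // the pivot, in world space
var axis = new THREE.Vector3(0, 1, 0);  // must be normalized
rotateAboutPoint(mesh, pivot, axis, Math.PI / 4, true); // true: pivot is in world space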
As a simple solution for anyone trying to quickly change the pivot point of an object, I would recommend creating a group, adding the mesh to it, and rotating the group.
Full example:
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 });
const cube = new THREE.Mesh(geometry, material);
scene.add(cube)
Right now, this will just rotate around its center:
cube.rotation.z = Math.PI / 4
Create a new group and add the cube:
const group = new THREE.Group();
group.add(cube)
scene.add(group)
At this point we are back where we started. Now move the mesh:
cube.position.set(0.5,0.5,0)
Then move the group:
group.position.set(-0.5, -0.5, 0)
Now use your group to rotate the object:
group.rotation.z = Math.PI / 4
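To make that rotation final (per the question), newer three.js revisions (r105+) provide Object3D.attach, which re-parents an object while preserving its world transform; a sketch assuming such a revision:
// Re-parent the cube back to the scene, keeping its world transform;
// the accumulated rotation now lives on the cube itself and the
// temporary pivot group can be discarded.
scene.attach(cube);
scene.remove(group);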
I'm making a game where you can add objects to a world without using a grid. Now I want to make a footpath. When you click on "Create footpath", you can add a point to the world at the raycaster position. After you add the first point, you can add a second one. Once these two points are placed, a line/footpath becomes visible from the first point to the second.
I can do this really simply with THREE.Line. See the code:
var lineGeometry = new THREE.Geometry();
lineGeometry.vertices.push( new THREE.Vector3(x1,0,z1), new THREE.Vector3(x2,0,z2) );
lineGeometry.computeLineDistances();
var lineMaterial = new THREE.LineBasicMaterial( { color: 0xFF0000 } );
var line = new THREE.Line( lineGeometry, lineMaterial );
scene.add(line);
But I can't add a texture to a simple line. So now I want to do the same thing with a mesh. I have the position of the first point and the raycaster position of the second point. I also have the length between the two points to use as the length of the footpath. But I don't know how to get the rotation that is needed.
Note: I saw something about lookAt. Is this maybe a good idea, and how can I use it with a mesh?
Can anyone help me to get the correct rotation for the footpath object?
I use this code for the footpath mesh:
var loader = new THREE.TextureLoader();
loader.load('images/floor.jpg', function(texture) {
    var geometry = new THREE.BoxGeometry(10, 0, 2);
    var material = new THREE.MeshBasicMaterial({ map: texture, overdraw: 0.5 });
    var footpath = new THREE.Mesh(geometry, material);
    footpath.position.copy(point2);
    var direction = // What can I do here?
    footpath.rotation.y = direction;
    scene.add(footpath);
});
I want to get the correct rotation for direction.
[UPDATE]
The code of WestLangley helps a lot, but it does not work in all directions. I used this code for the length:
var length = footpaths[i].position.z - point2.position.z;
What can I do so that the length works in all directions?
You want to align a box between two 3D points. You can do that like so:
var geometry = new THREE.BoxGeometry( width, height, length ); // align length with z-axis
geometry.translate( 0, 0, length / 2 ); // so one end is at the origin
...
var footpath = new THREE.Mesh( geometry, material );
footpath.position.copy( point1 );
footpath.lookAt( point2 );
three.js r.84
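Regarding the [UPDATE]: a plain z-difference is signed and only valid along one axis. The direction-independent length between the two points is their Euclidean distance:
// Works in all directions, not just along z:
var length = footpaths[i].position.distanceTo(point2.position);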
I'm trying to create a clickable shape in three.js from a bunch of points that are generated by mouse click.
This code is kind of working:
mouse.x = ( ( event.clientX - renderer.domElement.offsetLeft ) / player.width ) * 2 - 1;
mouse.y = - ( ( event.clientY - renderer.domElement.offsetTop ) / player.height ) * 2 + 1;
raycaster.setFromCamera( mouse, camera );
var objects = [];
objects.push(selectedHotspot);
var intersects = raycaster.intersectObjects( objects, true );
if ( intersects.length > 0 ) {
    var point = new THREE.Mesh( new THREE.SphereGeometry(1, 1, 1), new THREE.MeshBasicMaterial( { color: 0x00ffff } ) );
    point.position.copy(intersects[0].point);
    scene.add(point);
    points.push(intersects[0].point);
}
var geometry = new THREE.Geometry();
points.forEach( function( point ) {
    geometry.vertices.push( point );
});
geometry.vertices.push( points[0] );
geometry.faces.push( new THREE.Face3(0, 1, 2) );
// material
var material = new THREE.MeshBasicMaterial( { color: 0xffffff } );
// mesh
var line = new THREE.Mesh( geometry, material );
scene.add( line );
hotspots.push( line );
The points get added, and I can draw lines between them; I just can't fill in the center so the mouse can detect it!
You can create a mesh from points using THREE.ConvexGeometry.
var geometry = new THREE.ConvexGeometry( vertices_array );
See, for example, https://threejs.org/examples/webgl_geometry_convex.html
This is just the convex hull of your points, but it should be sufficient for your use case.
You must include the three.js file examples/jsm/geometries/ConvexGeometry.js explicitly in your source code.
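With the module build that note refers to, the include and usage might look like this (a sketch; points is your array of THREE.Vector3 and the material is arbitrary):
import * as THREE from 'three';
import { ConvexGeometry } from 'three/examples/jsm/geometries/ConvexGeometry.js';

// Build the convex hull of the clicked points and make it raycastable.
const geometry = new ConvexGeometry( points ); // points: THREE.Vector3[]
const mesh = new THREE.Mesh( geometry, new THREE.MeshBasicMaterial( { color: 0xffffff } ) );
scene.add( mesh );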
three.js r.147
There are different ways to create a mesh out of a point cloud - it all depends on what your specific needs are. I'll try to give you a high-level overview of a few approaches.
Perhaps a bounding box is enough? Calculate the bounding box of the point cloud and raycast against the BBox.
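A minimal sketch of this bounding-box approach (assuming a recent three.js where Box3.getSize and getCenter take a target vector):
// Fit an invisible box mesh around the point cloud and raycast that.
var bbox = new THREE.Box3().setFromObject(points);
var size = bbox.getSize(new THREE.Vector3());
var center = bbox.getCenter(new THREE.Vector3());
var proxy = new THREE.Mesh(
    new THREE.BoxGeometry(size.x, size.y, size.z),
    new THREE.MeshBasicMaterial({ visible: false }) // never rendered, still raycastable
);
proxy.position.copy(center);
scene.add(proxy);
// raycaster.intersectObject(proxy) now stands in for the whole cloud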
If the BBox happens to contain large volumes that have no points in them, then you may need a tighter-fitting mesh around these points. Given the ray being cast, project all points onto a plane normal to the ray, then construct the 2D convex hull of the points on this plane using the Gift wrapping algorithm. There are most likely existing libraries implementing this algorithm. Use the polygon constructed by this algorithm for the raycast test.
I'm pretty new to 3D and to three.js, and I can't figure out how to get a PlaneGeometry to show individually illuminated polygons, i.e. receive shadows or show reflections. What I basically do is take a PlaneGeometry and apply some noise to the z value of every vertex. Then I have a simple directional light in my scene which is supposed to make the emerging noise pattern on the plane visible. I tried different things like plane.castShadow = true or renderer.shadowMapEnabled = true, without success. Am I just missing a simple option, or is this way more complicated than I think?
Here are the relevant pieces of my code:
renderer.setSize(width, height);
renderer.setClearColor(0x111111, 1);
...
var directionalLight = new THREE.DirectionalLight(0xffffff, 0.9);
directionalLight.position.set(10, 2, 20);
directionalLight.castShadow = true;
directionalLight.shadowCameraVisible = true;
scene.add(directionalLight);

var geometry = new THREE.PlaneGeometry(20, 20, segments, segments);
var index = 0;
for (var i = 0; i < segments + 1; i++) {
    for (var j = 0; j < segments + 1; j++) {
        zOffset = simplex.noise2D(i * xNoiseScale, j * yNoiseScale) * 5;
        geometry.vertices[index].z = zOffset;
        index++;
    }
}

var material = new THREE.MeshLambertMaterial({
    side: THREE.DoubleSide,
    color: 0xf50066
});
var plane = new THREE.Mesh(geometry, material);
plane.rotation.x = -Math.PI / 2.35;
plane.castShadow = true;
plane.receiveShadow = true;
scene.add(plane);
This is the output I get. Obviously the plane is aware of the light, because the bottom side is darker than the upper side, but there is no sign of individual polygons receiving individual lighting, and no 3D structure is visible. Interestingly, when I put in a different geometry like a BoxGeometry, individual polygons are illuminated individually (see the second image). Any ideas?
OK, I figured it out thanks to this post. The trick is to use THREE.FlatShading on the material. It is important to note that after every update of the vertices, two things need to be done. Before rendering, geometry.normalsNeedUpdate must be set to true, so the renderer incorporates the newly oriented vertices. Also, geometry.computeFaceNormals() needs to be called before rendering, because when you alter the vertices the normals are no longer correct.
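Put together, the material setup and the per-update steps look roughly like this (a sketch for the pre-BufferGeometry revisions used in the question; names match the code above):
// Flat shading lights each face individually instead of interpolating
// vertex normals across the face.
var material = new THREE.MeshLambertMaterial({
    color: 0xf50066,
    side: THREE.DoubleSide,
    shading: THREE.FlatShading
});

// After every change to geometry.vertices, before the next render:
geometry.computeFaceNormals();     // normals are stale after moving vertices
geometry.normalsNeedUpdate = true; // tell the renderer to re-upload them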