I have this piece of code (see below) that I used to draw a cube with three.js:
// revolutions per second
var angularSpeed = 0.0;
var lastTime = 0;
function animate(){
// update
var time = (new Date()).getTime();
var timeDiff = time - lastTime;
var angleChange = angularSpeed * timeDiff * 2 * Math.PI / 1000;
cube.rotation.y += angleChange;
lastTime = time;
// render
renderer.render(scene, camera);
// request new frame
requestAnimationFrame(animate);
}
// renderer
var container = document.getElementById("container");
var renderer = new THREE.WebGLRenderer();
renderer.setSize(container.offsetWidth, container.offsetHeight);
container.appendChild(renderer.domElement);
// camera
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.z = 700;
// scene
var scene = new THREE.Scene();
// cube Length, Height, Width
var cube = new THREE.Mesh(new THREE.CubeGeometry(400, 200, 200), new THREE.MeshBasicMaterial({
wireframe: true,
color: '#ff0000'
}));
cube.rotation.x = Math.PI * 0.1;
scene.add(cube);
// start animation
animate();
Does anyone know whether it is possible to allow the user to change the size of the cube by dragging its edges with the mouse?
Check this jsfiddle. I reused the structure of draggableCubes, plus a few changes:
to drag the vertices I created vertexHelpers (little spheres);
to avoid the maths, the trick is to use an invisible plane to drag your objects/vertices on, kept perpendicular to the camera (a setup sketch follows the snippet below). To see it in action, just set plane.visible = true.
Now when we drag a vertexHelper, its distance to the center of the cube changes, so we just have to scale the cube by the same ratio.
Within the mouseMove listener it becomes:
if (SELECTED) {
    var intersects = raycaster.intersectObject(plane);
    // we get the mouse 3D coordinates in intersects[0].point
    var previousDistance = SELECTED.position.clone().sub(cube.position).length();
    var increaseRatio = intersects[0].point.clone().sub(cube.position).length() / previousDistance;
    cube.scale.set(
        cube.scale.x * increaseRatio,
        cube.scale.y * increaseRatio,
        cube.scale.z * increaseRatio
    );
    // then update the vertexHelpers positions (copy the new vertex positions)
}
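For reference, the invisible plane and the vertex helpers can be set up roughly like this (a sketch; the names follow the draggableCubes example and the sizes are placeholders):
// invisible plane used as the drag surface; set visible: true to debug it
var plane = new THREE.Mesh(
    new THREE.PlaneGeometry(2000, 2000, 8, 8),
    new THREE.MeshBasicMaterial({ visible: false })
);
scene.add(plane);
// one small sphere per cube vertex, used as a drag handle
var vertexHelpers = [];
cube.geometry.vertices.forEach(function (v) {
    var helper = new THREE.Mesh(
        new THREE.SphereGeometry(10, 8, 8),
        new THREE.MeshBasicMaterial({ color: 0x0000ff })
    );
    helper.position.copy(v).add(cube.position);
    vertexHelpers.push(helper);
    scene.add(helper);
});
// when a helper becomes SELECTED (in the mousedown handler),
// move the plane onto it and face it towards the camera
function orientDragPlane(selectedHelper) {
    plane.position.copy(selectedHelper.position);
    plane.lookAt(camera.position);
}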
EDIT:
In your question you specifically ask to resize the cube by dragging its edges. I did not reproduce that in the example and did not think of it intuitively, but it can be done the same way.
However, since lineWidth is not implemented in ANGLE (the layer used on Windows to translate WebGL), it is not easy to pick 1px-wide lines. I remember a three.js example I could not find again, where a geometry is associated with each line so it looks outlined. Basically you could do it by creating cylinders as custom 'edgeHelpers' (I am explicitly not talking about THREE.EdgesHelper), and they have to be resized each time the cube is.
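A hedged sketch of such an edgeHelper, built as a thin cylinder stretched between two corner positions of the cube (the radius and segment counts are arbitrary):
// build a thin cylinder between two world-space corners so the edge
// becomes a pickable mesh instead of a 1px-wide line
function createEdgeHelper(start, end) {
    var direction = new THREE.Vector3().subVectors(end, start);
    var geometry = new THREE.CylinderGeometry(4, 4, direction.length(), 8, 1);
    var edgeHelper = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0x00ff00 }));
    // the cylinder is built along the Y axis, so rotate it onto the edge direction
    edgeHelper.quaternion.setFromUnitVectors(new THREE.Vector3(0, 1, 0), direction.clone().normalize());
    // place it halfway between the two corners
    edgeHelper.position.copy(start).add(end).multiplyScalar(0.5);
    return edgeHelper;
}
As noted above, these helpers have to be rebuilt (or repositioned and rescaled) whenever the cube's scale changes.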
In your setup code, add an event listener for mousemove:
// Event Handlers
renderer.domElement.addEventListener('mousemove', onDocumentMouseMove, false);
Then in the event handler code, you can check which object is selected and adjust the scale parameter.
function onDocumentMouseMove(event) {
...
    var intersects = raycaster.intersectObjects(objects);
    if (intersects.length > 0) {
        var selected = intersects[0].object;
        // Do things to the scale parameter(s)... just for demo purposes
        selected.scale.x += intersects[0].point.clone().sub(offset).x / 1000;
        selected.scale.y += intersects[0].point.clone().sub(offset).y / 1000;
        return;
    }
...
}
Since I am typing pseudo-code here, it is all too easy to make an error, so I have left a test version here for you to try: http://www.numpty.co.uk/cubedrag.html
As you can see, the size of the selected object changes in horrible ways as the mouse is dragged. You have me interested, so I will look into making it proportional to the movement if I get more time; a rough starting point is sketched below.
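One hedged way to make it proportional (borrowing the ratio idea from the answer above; the names and the reset-on-mouseup detail are assumptions) is to compare each hit point's distance from the object's centre with the previous one:
var previousPoint = null; // reset to null on mouseup

function scaleProportionally(object, hitPoint) {
    if (previousPoint) {
        var previousDistance = previousPoint.clone().sub(object.position).length();
        var currentDistance = hitPoint.clone().sub(object.position).length();
        // ratio > 1 when dragging away from the centre, < 1 when dragging towards it
        object.scale.multiplyScalar(currentDistance / previousDistance);
    }
    previousPoint = hitPoint.clone();
}
Called from the mousemove handler as scaleProportionally(intersects[0].object, intersects[0].point), this scales the object relative to how far the pointer moves away from or towards its centre.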
Related
I'm working on building up quite a complex static render in the browser with three.js and have gotten stuck early in the process trying to produce correct shadows with a single THREE.DirectionalLight representing the sun in my scene. All the geometry (contained in another .js file) has shadows enabled. The green sphere is for debugging purposes and is translated to (50, 0, 50), the center of the plane, to mark the camera's target and the location of DirectionalLight.target. The directional light position and main camera position are set correctly.
My theory is that the shadows aren't working because the orthographic camera acting as the shadow camera is pointing in the wrong direction. I failed yesterday to figure out and fix the behaviour of the directional light helper (the white line to the origin) and the shadow camera helper (right).
I assume the correct orientation, and the one I'm aiming for, has the directional light helper and shadow camera helper aligned to the center of the plane. Despite much research yesterday, my shadow camera doesn't seem to automatically pick up the light position / light target vector. Why are they still anchored to the origin?
Does anyone have any suggestions about how to fix the DirectionalLight.target in my scene? Why are the DirectionalLightHelper and CameraHelper inconsistent?
// Set up
const canvas = document.getElementById('canvus');
const scene = new THREE.Scene();
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFShadowMap;
//Camera
const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 500);
camera.position.set(200, 100, 100);
camera.lookAt(50, 0, 50);
// Lighting
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
directionalLight.position.set(100, 200, 200);
directionalLight.target.position.set(50, 0, 50);
directionalLight.castShadow = true;
directionalLight.shadow.bias = 0.0001;
directionalLight.shadow.mapSize.width = 1024; // default
directionalLight.shadow.mapSize.height = 1024; // default
const view_n = 50;
directionalLight.shadow.camera = new THREE.OrthographicCamera(
-view_n,
view_n,
view_n,
-view_n,
60,
150
);
scene.add(directionalLight, directionalLight.target);
//helpers
const lighthelper = new THREE.DirectionalLightHelper(directionalLight, 10);
const camerahelper = new THREE.CameraHelper(directionalLight.shadow.camera);
scene.add(lighthelper);
scene.add(camerahelper);
//Main Render
createBasicGeometry(scene); // from geometry.js
createGroundPlane(scene); // from geometry.js
renderer.render(scene, camera);
Update 2020-1-5
I had initially tried setting the camera and also found examples of people setting a new ortho shadow camera directly. As I'm motivated to overcome the issue, and for thoroughness, I updated my code to reflect the suggestion and unfortunately the problem persists. I've re-checked that all mesh geometry is set to both object.receiveShadow = true and object.castShadow = true with the MeshPhongMaterial. It's completely confounding why directionalLight.target.position.set(50, 0, 50) is not updating as expected. What is the cause of this behaviour?
// Updated Lighting
const view_n = 50;
directionalLight.castShadow = true;
directionalLight.shadow.bias = 0.0001;
directionalLight.shadow.camera.right = view_n;
directionalLight.shadow.camera.left = -view_n;
directionalLight.shadow.camera.top = view_n;
directionalLight.shadow.camera.bottom = -view_n;
directionalLight.shadow.camera.near = 60;
directionalLight.shadow.camera.far = 150;
directionalLight.shadow.mapSize.width = 1024; // default
directionalLight.shadow.mapSize.height = 1024; // default
scene.add(directionalLight, directionalLight.target);
When I dump the directionalLight, I get the target position I expected, though it is not aligned correctly in the scene, while the camera position gives another strange result.
console.log(directionalLight.target.position);
//Vector3 {x: 50, y: 0, z: 50, isVector3: true}
console.log(directionalLight.shadow.camera.position);
directionalLight.shadow.camera = new THREE.OrthographicCamera(
-view_n,
view_n,
view_n,
-view_n,
60,
150
);
Please do not overwrite the camera reference of LightShadow.camera. Configure the directional light like so:
dirLight.castShadow = true;
dirLight.shadow.camera.top = view_n;
dirLight.shadow.camera.bottom = - view_n;
dirLight.shadow.camera.left = - view_n;
dirLight.shadow.camera.right = view_n;
dirLight.shadow.camera.near = 60;
dirLight.shadow.camera.far = 150;
Besides, shadow casting only works if all shadow-casting objects (like your boxes) have castShadow set to true, and all shadow-receiving objects (like your floor) have receiveShadow set to true.
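For example (a sketch; boxMesh and groundMesh stand in for whatever createBasicGeometry and createGroundPlane produce, and the traverse variant is just one convenient way to apply it everywhere):
// per mesh
boxMesh.castShadow = true;       // hypothetical mesh that should throw a shadow
groundMesh.receiveShadow = true; // hypothetical mesh that shadows fall onto

// or applied to every mesh in the scene in one pass
scene.traverse(function (object) {
    if (object.isMesh) {
        object.castShadow = true;
        object.receiveShadow = true;
    }
});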
Perhaps this problem cropped up because I'm rendering statically without an animation loop, but it was solved by calling updateMatrixWorld on the directional light's target. (Unfortunately I wasn't able to update the shadow's CameraHelper, but at least the shadows are now working as expected.)
directionalLight.target.updateMatrixWorld();
scene.add(directionalLight);
scene.add(directionalLight.target);
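If other pieces also look stale in a one-off render, the same kind of explicit update may help for the shadow camera and the helpers (a sketch; untested against the geometry.js setup):
// make sure the light target and the shadow camera are current
directionalLight.target.updateMatrixWorld();
directionalLight.shadow.camera.updateProjectionMatrix();

// helpers cache their geometry, so refresh them before the single render call
lighthelper.update();
camerahelper.update();

renderer.render(scene, camera);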
I've asked in another forum, but I'd like to state my problem more clearly here.
What's my intention?
Currently I am using three.js within a WebView on my Android device and have created a scene which contains a simple box (which should be used as a bounding box) and a camera. My camera needs to interact with my Android device, meaning that I set its position dynamically by moving the device. These vectors come from a SLAM algorithm named Direct Sparse Odometry (DSO), which reconstructs the camera position; I can also read these values from JavaScript by using the WebViewInterface provided by Android. My goal is to "walk around" the box dynamically without calling the camera.lookAt() method every time the values change, because if I move away from the box the view should no longer stay centered on it (like an AR application); the point of view, i.e. the position and rotation of the camera relative to the object, should be derived dynamically. My goal is to place an object over a real-world object with three.js and later scan it with DSO by walking around the box to detect feature points. The whole visualisation should be created with three.js.
What is DSO?
DSO is a library that tracks the real environment by detecting points in camera frames, which are provided by Android's Camera2 API. It sends me a 4x4 transformation matrix with the current pose, which I try to apply to three.js's camera position. Due to the complexity of this algorithm, let's pretend it gives me proper values (in meters; I also tried multiplying the values by 10 or 100 to get results larger than 0.XX).
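For reference, the way I would eventually expect to push such a pose onto the camera is roughly the following (just a sketch; it assumes the 16 values arrive column-major, which may not match DSO's convention, and applyPose is a made-up helper name):
// poseElements: 16 numbers from the tracker, assumed column-major here
function applyPose(camera, poseElements) {
    var poseMatrix = new THREE.Matrix4().fromArray(poseElements);
    // stop three.js from rebuilding the matrix from position/quaternion/scale
    camera.matrixAutoUpdate = false;
    camera.matrix.copy(poseMatrix);
    camera.matrixWorldNeedsUpdate = true;
    // alternatively, keep auto-updates and decompose the matrix instead:
    // poseMatrix.decompose(camera.position, camera.quaternion, camera.scale);
}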
What's my problem?
The box does not seem to have an absolute position, even though its values are fixed. Every time I move, the box seems to drift in the opposite direction. After many adjustments to the DSO values, I am fairly sure the problem lies on the three.js side.
I've also tried applying the matrices of the scene/camera and/or making the box a child object (because of object inheritance), but the box still does not seem to have an absolute position inside the scene. I am also not able to rotate the object in a way that looks realistic.
Enclosed you'll find my code, but please note that I am using dynamic dummy values as a replacement for the DSO values.
<body>
<canvas id="mCanvas">
</canvas>
</body>
<script>
// Var Init
var renderer, scene, camera, box, transformControl, orbitControl, geometry, material, poseMatrix;
var mPoints = [];
//Box coordinate
var xBCordinate, yBCordinate, zBCordinate, isScaled, posVec, startPosVec, lookPos, helper;
var process = false;
var scanActive = false;
var pointArr = [];
init();
animate();
function init() {
// renderer
renderer = new THREE.WebGLRenderer({canvas: document.getElementById("mCanvas"),
alpha: true});
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
renderer.setClearColor(0xffffff, 0);
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// scene
scene = new THREE.Scene();
// camera
camera = new THREE.PerspectiveCamera(
45,
window.innerWidth / window.innerHeight,
0.1,
1000
);
camera.up.set(0, 0, 1); // Definition of coordinationsystem
// set initial scale position of camera
camera.position.x = 0;
camera.position.y = -0.5;
camera.position.z = 0.15;
scene.add(camera);
// set position to look at
camera.lookAt(0,2.5,-0.2);
// apply values
camera.updateMatrix();
// light
var light = new THREE.HemisphereLight( 0xeeeeee, 0x888888, 1 );
light.position.set( 0, -0.75, 2.5 );
scene.add(light);
placeBox();
}
function placeBox()
{
geometry = new THREE.BoxGeometry(0.5, 1, 0.5); //3,5,3
material = new THREE.MeshLambertMaterial({color: 0xfece46});
box = new THREE.Mesh(geometry, material);
box.position.set(0, 2.5, -0.2);
box.updateMatrix();
scene.add(box);
}
function animate() {
requestAnimationFrame(animate);
if(process == false){
setCurrentPose();
}
renderer.render(scene, camera);
}
function setCurrentPose(){
process = true;
// this is where I receive the position data via Android
// but lets try using random numbers between 0.01 - 0.99 (which are the results interval of dso)
moveRotateCamera();
}
function moveRotateCamera(){
// Create Vector to work with
posVec = new THREE.Vector3();
posVec.x = getRandomFloat(0.01, 0.99);
posVec.y = getRandomFloat(0.01, 0.99);
posVec.z = getRandomFloat(0.01, 0.99);
camera.position.x = posVec.x;
camera.position.y = (posVec.y) - 0.50; // minus initial scale position
camera.position.z = (posVec.z) + 0.15;
// camera.updateMatrix(); <- seem to change nothing such as UpdateWorldMatrix() etc.
// camera rotation tried to calculate with quaternions (result NaN) and/or euler by using former and current point.
process = false;
}
function getRandomFloat(min, max) {
return Math.random() * (max - min) + min;
}
// My attempts in trying to calculate the rotation
/*
function setQuaternionRotation(poseMatrix){
// TODO: delete if not needed!
// adapted from http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/index.htm, 2.12.2019, 2.34pm
mQuaternion = new THREE.Quaternion();
// Calculate Angle w
mQuaternion.w = ((Math.sqrt(Math.max(0, (1.0 + poseMatrix.elements[0] + poseMatrix.elements[5] + poseMatrix.elements[10])))/2.0));
//Sign x,y,z values of quaternion
mQuaternion.x = ((Math.sqrt(Math.max(0, (1.0 + poseMatrix.elements[0] - poseMatrix.elements[5] - poseMatrix.elements[10])))/2.0));
mQuaternion.y = ((Math.sqrt(Math.max(0, (1.0 - poseMatrix.elements[0] + poseMatrix.elements[5] - poseMatrix.elements[10])))/2.0));
mQuaternion.y = ((Math.sqrt(Math.max(0, (1.0 - poseMatrix.elements[0] - poseMatrix.elements[5] + poseMatrix.elements[10])))/2.0));
//Sign element values
mQuaternion.x = (Math.sign(mQuaternion.x * (poseMatrix.elements[6] - poseMatrix.elements[9])));
mQuaternion.y = (Math.sign(mQuaternion.y * (poseMatrix.elements[8] - poseMatrix.elements[2])));
mQuaternion.z = (Math.sign(mQuaternion.z * (poseMatrix.elements[1] - poseMatrix.elements[4])));
// debug
console.log("QuaternionVal: "+mQuaternion.x+ ", " +mQuaternion.y+", "+mQuaternion.z+", "+mQuaternion.w);
camera.applyQuaternion(mQuaternion);
camera.quaternion.normalize();
// debug
console.log("newCamRotation: "+camera.rotation.x +", "+camera.rotation.y+", "+ camera.rotation.z);
// camera.updateMatrix(true);
}
*/
</script>
Link to my Fiddle
Do you have any suggestions?
Thank you very much in advance!
Best regards,
FWIW, I think part of the issue is that the box is not centered on the camera's rotation. I tweaked your fiddle by centering the box at the origin and using spherical coordinates to move the camera around. This keeps the camera at a uniform distance from the box, and with the box centered on the rotation it no longer appears to drift around the viewport...
<body>
<canvas id="mCanvas">
</canvas>
</body>
<script src="https://threejs.org/build/three.js"></script>
<script>
// Var Init
var renderer, scene, camera, box, transformControl, orbitControl, geometry, material, poseMatrix;
var mPoints = [];
//Box coordinate
var xBCordinate, yBCordinate, zBCordinate, isScaled, posVec, startPosVec, lookPos, helper;
var process = false;
var scanActive = false;
var pointArr = [];
var cameraSpherical;
init();
animate();
function init() {
// renderer
renderer = new THREE.WebGLRenderer({canvas: document.getElementById("mCanvas"),
alpha: true});
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
renderer.setClearColor(0xffffff, 0);
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// scene
scene = new THREE.Scene();
// camera
camera = new THREE.PerspectiveCamera(
45,
window.innerWidth / window.innerHeight,
0.1,
1000
);
camera.up.set(0, 0, 1); // Definition of coordinationsystem
// set initial scale position of camera
camera.position.x = 0;
camera.position.y = -0.5;
camera.position.z = 0.15;
scene.add(camera);
cameraSpherical = new THREE.Spherical().setFromVector3( camera.position );
// set position to look at
camera.lookAt(0,2.5,-0.2);
// apply values
camera.updateMatrix();
// light
var light = new THREE.HemisphereLight( 0xeeeeee, 0x888888, 1 );
light.position.set( 0, -0.75, 2.5 );
scene.add(light);
placeBox();
}
function placeBox()
{
geometry = new THREE.BoxGeometry(0.5, 1, 0.5); //3,5,3
material = new THREE.MeshLambertMaterial({color: 0xfece46});
box = new THREE.Mesh(geometry, material);
box.position.set(0, 0, 0);
box.updateMatrix();
scene.add(box);
}
function animate() {
requestAnimationFrame(animate);
if(process == false){
setCurrentPose();
}
renderer.render(scene, camera);
}
function setCurrentPose(){
process = true;
// this is where I receive the position data via Android
// but lets try using random numbers between 0.01 - 0.99 (which are the results interval of dso)
moveRotateCamera();
}
function moveRotateCamera(){
// Create Vector to work with
/* posVec = new THREE.Vector3();
posVec.x = getRandomFloat(0.01, 0.05);
posVec.y = getRandomFloat(0.01, 0.05);
posVec.z = getRandomFloat(0.01, 0.02);
camera.position.x += posVec.x;
camera.position.y += posVec.y; // minus initial scale position
camera.position.z += posVec.z;
*/
cameraSpherical.radius = 5;
cameraSpherical.phi += getRandomFloat(0.001, 0.015);
cameraSpherical.theta += getRandomFloat(0.001, 0.015);
let xyz = new THREE.Vector3().setFromSpherical( cameraSpherical );
camera.position.x = xyz.x;
camera.position.y = xyz.y;
camera.position.z = xyz.z;
camera.lookAt(0,0,0);
camera.updateMatrix();
// camera.updateMatrix(); <- seem to change nothing such as UpdateWorldMatrix() etc.
// camera rotation tried to calculate with quaternions (result NaN) and/or euler by using former and current point.
process = false;
}
function getRandomFloat(min, max) {
return Math.random() * (max - min) + min;
}
// My attempts in trying to calculate the rotation
/*
function setQuaternionRotation(poseMatrix){
// TODO: delete if not needed!
// adapted from http://www.euclideanspace.com/maths/geometry/rotations/conversions/matrixToQuaternion/index.htm, 2.12.2019, 2.34pm
mQuaternion = new THREE.Quaternion();
// Calculate Angle w
mQuaternion.w = ((Math.sqrt(Math.max(0, (1.0 + poseMatrix.elements[0] + poseMatrix.elements[5] + poseMatrix.elements[10])))/2.0));
//Sign x,y,z values of quaternion
mQuaternion.x = ((Math.sqrt(Math.max(0, (1.0 + poseMatrix.elements[0] - poseMatrix.elements[5] - poseMatrix.elements[10])))/2.0));
mQuaternion.y = ((Math.sqrt(Math.max(0, (1.0 - poseMatrix.elements[0] + poseMatrix.elements[5] - poseMatrix.elements[10])))/2.0));
mQuaternion.y = ((Math.sqrt(Math.max(0, (1.0 - poseMatrix.elements[0] - poseMatrix.elements[5] + poseMatrix.elements[10])))/2.0));
//Sign element values
mQuaternion.x = (Math.sign(mQuaternion.x * (poseMatrix.elements[6] - poseMatrix.elements[9])));
mQuaternion.y = (Math.sign(mQuaternion.y * (poseMatrix.elements[8] - poseMatrix.elements[2])));
mQuaternion.z = (Math.sign(mQuaternion.z * (poseMatrix.elements[1] - poseMatrix.elements[4])));
// debug
console.log("QuaternionVal: "+mQuaternion.x+ ", " +mQuaternion.y+", "+mQuaternion.z+", "+mQuaternion.w);
camera.applyQuaternion(mQuaternion);
camera.quaternion.normalize();
// debug
console.log("newCamRotation: "+camera.rotation.x +", "+camera.rotation.y+", "+ camera.rotation.z);
// camera.updateMatrix(true);
}
*/
</script>
Not sure whether this helps you in the direction you're going, but hope it sheds some light.
I have created an example project and read a great tutorial on three.js, but one thing is not described well: creating the perspective camera.
An object is created and added to the scene at the default location of (0, 0, 0), and the camera is moved back 10 units so the object is easily visible. I have specified a near and a far plane of 0.1 and 1000, but I am not sure which axis these are measured along; whichever axis it is, none of the default object's coordinates are >= 0.1, yet the near and far planes imply that visible objects must lie between those two planes.
Could someone please explain why my object is visible in the scene and which axis the near and far planes are measured along, or provide a link that describes this well? I cannot find one that explains it clearly.
Code below if interested.
import * as THREE from 'three';
import 'bootstrap';
import css from '../css/custom_css.css';
let scene = new THREE.Scene();
/* Constants */
let WIDTH = window.innerWidth;
let HEIGHT = window.innerHeight;
/* Camera */
let field_of_view = 75;
let aspect_ratio = WIDTH/HEIGHT;
let near_plane = 0.1;
let far_plane = 1000;
let camera = new THREE.PerspectiveCamera(field_of_view, aspect_ratio, near_plane, far_plane);
// every object is initially created at ( 0, 0, 0 )
// we'll move the camera back a bit so that we can view the scene
camera.position.set( 0, 0, 10 );
/* Renderer */
let renderer = new THREE.WebGLRenderer({antialias:true, canvas: my_canvas });
renderer.setSize(WIDTH, HEIGHT);
renderer.setClearColor(0xE8E2DD, 1);
// Create the shape
let geometry = new THREE.BoxGeometry(1, 1, 1);
// Create a material, colour or image texture
let material = new THREE.MeshBasicMaterial( {
color: 0xFF0000,
wireframe: true
});
// Cube
let cube = new THREE.Mesh(geometry, material);
scene.add(cube);
// Game Logic
let update = function(){
cube.rotation.x += 0.01;
cube.rotation.y += 0.005;
};
// Draw Scene
let render = function(){
renderer.render(scene, camera);
};
// Run game loop, update, render, repeat
let gameLoop = function(){
requestAnimationFrame(gameLoop);
update();
render();
};
gameLoop();
The near and far planes for a perspective camera are relative to the camera itself, they are not on one of the global scene axes.
In your case, the near plane is a plane 0.1 units away from the camera along the camera's central axis or "look" vector, and the far plane is a plane 1000 units away. Things rendered must be between these two planes inside the camera's "view frustum".
So in your example, since the camera is looking at the object and the object is 10 units away, it's within the view frustum and is therefore visible.
See this youtube video for a more visual representation: https://www.youtube.com/watch?v=KyTaxN2XUyQ
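If it helps, you can verify this directly from the code above by comparing the camera-to-cube distance with the two plane distances (a minimal sketch; since the camera looks straight at the cube, the straight-line distance and the distance along the look vector coincide here):
// cube sits at (0, 0, 0), camera at (0, 0, 10)
const distance = camera.position.distanceTo(cube.position); // 10
console.log(distance >= near_plane && distance <= far_plane); // true, so the cube lies inside the frustum's depth range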
I am working in autodesk forge which includes Threejs r71 and I want to use a raycaster to detect clicks on different elements within a pointcloud.
Sample code for how to do this with three.js r71 would be appreciated.
Right now, I register an extension with the Forge API and run the code below within it. It creates a point cloud and positions the points at predetermined locations (saved in the cameraInfo array).
let geometry = new THREE.Geometry();
this.cameraInfo.forEach( function(e) {
geometry.vertices.push(e.position);
}
)
const material = new THREE.PointCloudMaterial( { size: 150, color: 0Xff0000, sizeAttenuation: true } );
this.points = new THREE.PointCloud( geometry, material );
this.scene.add(this.points);
/* Set up event listeners */
document.addEventListener('mousemove', event => {
// console.log('mouse move!');
let mouse = {
x: ( event.clientX / window.innerWidth ) * 2 - 1,
y: - ( event.clientY / window.innerHeight ) * 2 + 1
};
let raycaster = new THREE.Raycaster();
raycaster.params.PointCloud.threshold = 15;
let vector = new THREE.Vector3(mouse.x, mouse.y, 0.5).unproject(this.camera);
raycaster.ray.set(this.camera.position, vector.sub(this.camera.position).normalize());
this.scene.updateMatrixWorld();
let intersects = raycaster.intersectObject(this.points);
if (intersects.length > 0) {
const hitIndex = intersects[0].index;
const hitPoint = this.points.geometry.vertices[ hitIndex ];
console.log(hitIndex);
console.log(hitPoint);
}
}, false);
The output seems to be illogical. At certain camera positions, it will constantly tell me that it is intersecting an item in the pointcloud (regardless of where the mouse is). And at certain camera positions, it won't detect an intersection at all.
TL;DR: it doesn't actually detect an intersection between my point cloud and the mouse.
I've simplified the code a bit, using some of the viewer APIs (using a couple of sample points in the point cloud):
const viewer = NOP_VIEWER;
const geometry = new THREE.Geometry();
for (let i = -100; i <= 100; i += 10) {
geometry.vertices.push(new THREE.Vector3(i, i, i));
}
const material = new THREE.PointCloudMaterial({ size: 50, color: 0Xff0000, sizeAttenuation: true });
const points = new THREE.PointCloud(geometry, material);
viewer.impl.scene.add(points);
const raycaster = new THREE.Raycaster();
raycaster.params.PointCloud.threshold = 50;
document.addEventListener('mousemove', function(event) {
const ray = viewer.impl.viewportToRay(viewer.impl.clientToViewport(event.clientX, event.clientY));
raycaster.ray.set(ray.origin, ray.direction);
let intersects = raycaster.intersectObject(viewer.impl.scene, true);
if (intersects.length > 0) {
console.log(intersects[0]);
}
});
I believe you'll need to tweak the raycaster.params.PointCloud.threshold value. The ray casting logic in three.js doesn't actually intersect the point "boxes" that you see rendered on the screen. It only computes distance between the ray and the point (in the world coordinate system), and only outputs an intersection when the distance is under the threshold value. In my example I tried setting the threshold to 50, and the intersection results were somewhat better.
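Since that threshold is in world units while the rendered point size is roughly screen-based, one possible refinement (an assumption on my part, not something the viewer requires) is to scale the threshold with the distance between the ray origin and the cloud on each pick:
// inside the mousemove handler, after computing `ray`:
// widen the pick threshold as the camera moves away from the cloud,
// so picking roughly follows the on-screen point size (0.05 is a guess)
const distance = ray.origin.distanceTo(points.position);
raycaster.params.PointCloud.threshold = Math.max(10, distance * 0.05);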
As a side note, if you don't necessarily need point clouds inside the scene, consider overlaying HTML elements over the 3D view instead. We're using the approach in the https://forge-digital-twin.autodesk.io demo (source) to show rich annotations attached to specific positions in the 3D space. With this approach, you don't have to worry about custom intersections - the browser handles everything for you.
I'm working on a solar system in three.js and am curious whether there is an easy way to make the labels for my planets all show up at the same size regardless of how far they are from the camera. I can't seem to find a solution. I figure you could calculate the distance from each label to the camera and derive some sort of scaling factor from that, but it seems like there should be an easier way to accomplish this.
Thanks!
Updated with the answer from prisoner849. Works great!
I figure you could calculate the distance from each label to the camera then come up with some sort of scaling factor based on that.
And it's very simple. Say a THREE.Sprite() object (the label) is a child of a THREE.Mesh() object (the planet); then in your animation loop you need to do:
var scaleVector = new THREE.Vector3();
var scaleFactor = 4;
var sprite = planet.children[0];
var scale = scaleVector.subVectors(planet.position, camera.position).length() / scaleFactor;
sprite.scale.set(scale, scale, 1);
I've made a very simple example of the Solar System, using this technique.
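Applied to several planets, the same idea fits naturally into the render loop (a sketch; planets is assumed to be an array of meshes that each carry their label sprite as children[0]):
var scaleVector = new THREE.Vector3();
var scaleFactor = 4;

function updateLabelScales(planets, camera) {
    planets.forEach(function (planet) {
        var sprite = planet.children[0]; // the label sprite
        var scale = scaleVector.subVectors(planet.position, camera.position).length() / scaleFactor;
        sprite.scale.set(scale, scale, 1);
    });
}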
For the benefit of future visitors, the transform controls example does exactly this:
https://threejs.org/examples/misc_controls_transform.html
Here's how it's done in the example code:
var factor;
if ( this.camera.isOrthographicCamera ) {
factor = ( this.camera.top - this.camera.bottom ) / this.camera.zoom;
} else {
factor = this.worldPosition.distanceTo( this.cameraPosition ) * Math.min( 1.9 * Math.tan( Math.PI * this.camera.fov / 360 ) / this.camera.zoom, 7 );
}
handle.scale.set( 1, 1, 1 ).multiplyScalar( factor * this.size / 7 );
Finally I found the answer to your question:
First, create a DOM Element:
<div class="element">Not Earth</div>
Then set CSS styles for it. The element is positioned absolutely so that it sits on top of the canvas, and the text is white because the canvas is black:
.element { position: absolute; top: 0; left: 0; color: white; }
After that, create a moveDom() function and run it every time you render the scene inside requestAnimationFrame().
geometry is the geometry of the mesh;
cube is the mesh you want to label.
var moveDom = function(){
vector = geometry.vertices[0].clone();
vector.applyMatrix4(cube.matrix);
vector.project(camera);
vector.x = (vector.x * innerWidth/2) + innerWidth/2;
vector.y = -(vector.y * innerHeight/2) + innerHeight/2;
//Get the DOM element and apply transforms on it
document.querySelectorAll(".element")[0].style.webkitTransform = "translate("+vector.x+"px,"+vector.y+"px)";
document.querySelectorAll(".element")[0].style.transform = "translate("+vector.x+"px,"+vector.y+"px)";
};
You can create a for loop to set labels for all the meshes in your scene.
Because this trick only sets the 2D position of the DOM element, the size of the label stays the same even when you zoom (the label is not part of the three.js scene).
Full test case: https://jsfiddle.net/0L1rpayz/1/
var renderer, scene, camera, cube, vector, geometry;
var ww = window.innerWidth,
wh = window.innerHeight;
function init(){
renderer = new THREE.WebGLRenderer({canvas : document.getElementById('scene')});
renderer.setSize(ww,wh);
scene = new THREE.Scene();
camera = new THREE.PerspectiveCamera(50,ww/wh, 0.1, 10000 );
camera.position.set(0,0,500);
scene.add(camera);
light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set( 0, 0, 500 );
scene.add(light);
//Vector use to get position of vertice
vector = new THREE.Vector3();
//Generate Not Earth
geometry = new THREE.BoxGeometry(50,50,50);
var material = new THREE.MeshLambertMaterial({color: 0x00ff00});
cube = new THREE.Mesh(geometry, material);
scene.add(cube);
//Render my scene
render();
}
var moveDom = function(){
vector = geometry.vertices[0].clone();
vector.applyMatrix4(cube.matrix);
vector.project(camera);
vector.x = (vector.x * ww/2) + ww/2;
vector.y = -(vector.y * wh/2) + wh/2;
//Get the DOM element and apply transforms on it
document.querySelectorAll(".element")[0].style.webkitTransform = "translate("+vector.x+"px,"+vector.y+"px)";
document.querySelectorAll(".element")[0].style.transform = "translate("+vector.x+"px,"+vector.y+"px)";
};
var counter = 0;
var render = function (a) {
requestAnimationFrame(render);
counter++;
//Move my cubes
cube.position.x = Math.cos((counter+1*150)/200)*(ww/6+1*80);
cube.position.y = Math.sin((counter+1*150)/200)*(70+1*80);
cube.rotation.x += .001*1+.002;
cube.rotation.y += .001*1+.02;
//Move my dom elements
moveDom();
renderer.render(scene, camera);
};
init();
body,html, canvas{width:100%;height:100%;padding:0;margin:0;overflow: hidden;}
.element{color:white;position:absolute;top:0;left:0}
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r79/three.min.js"></script>
<!-- My scene -->
<canvas id="scene"></canvas>
<div class="element">
<h1>Not Earth</h1>
</div>
If you downvote this, please tell me why. I will try my best to improve my posts.
If you are using a SpriteMaterial to present your text, you can try setting its sizeAttenuation attribute to false.
var spriteMaterial = new THREE.SpriteMaterial( { map: spriteMap, color: 0xffffff, sizeAttenuation:false } );
See more information here:
https://threejs.org/docs/index.html#api/en/materials/SpriteMaterial.sizeAttenuation
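For example (a sketch; loading spriteMap from 'label.png' and the parent planet mesh are placeholders):
// a label sprite that keeps its on-screen size regardless of camera distance
const spriteMap = new THREE.TextureLoader().load('label.png'); // placeholder texture containing the label text
const spriteMaterial = new THREE.SpriteMaterial({ map: spriteMap, color: 0xffffff, sizeAttenuation: false });
const labelSprite = new THREE.Sprite(spriteMaterial);
labelSprite.scale.set(0.1, 0.05, 1); // without attenuation the scale no longer shrinks with distance, so keep it small
planet.add(labelSprite); // hypothetical planet mesh
Note that size attenuation only applies to a perspective camera.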