I have a mountain-range terrain with a fly-through camera positioned close to the heightmap. The camera is controlled with standard keyboard controls that don't allow movement on the y-axis; it stays at its fixed height. Since the terrain is uneven and randomly generated, how can I get the camera to follow the terrain heightmap? Below is my attempt to add a raycaster so the camera climbs over the terrain; all of this code runs in the project's animate function. I noticed that when the camera tries to climb the terrain, it goes straight through if the slope is too steep. It also doesn't follow the terrain back down: the camera stays at the highest position it reached.
var raycaster = new THREE.Raycaster(camera.position.clone(), new THREE.Vector3(0, -1, 0));
raycaster.ray.origin.copy(camera.position);
var intersections = raycaster.intersectObjects(terrain);
if (intersections.length > 0) {
    var distance = intersections[0].distance;
    if (distance > 0 && distance < 10) {
        camera.position.y = intersections[0].point.y + 20;
    }
}
In this snippet:
if (distance > 0 && distance < 10) {
    camera.position.y = intersections[0].point.y + 20;
}
...intersections with a distance greater than 10 are ignored, yet the camera is moved to 20 units above the terrain. I suspect that's why it can't follow the terrain as it drops: once the camera sits high above a dip, the downward ray still hits the ground, but the hit is more than 10 units away, so the height is never updated. I'm less sure about the "too steep" issue, but it could be similar. Try keeping the raycaster at a height above the maximum terrain altitude, rather than at the camera's location.
If the issues persist, you may want to include a live demo or an export of your terrain.
Copy the camera.position to a vector, raise that vector's y, and set the raycaster from it:
var castFrom = new THREE.Vector3();
var castDirection = new THREE.Vector3(0, -1, 0);
var raycaster = new THREE.Raycaster();
castFrom.copy(camera.position);
castFrom.y += 1000;
raycaster.set(castFrom, castDirection);
var intersections = raycaster.intersectObjects(terrain);
if (intersections.length > 0) {
    camera.position.y = intersections[0].point.y + 20;
}
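If assigning the height directly makes the camera snap when the terrain rises or falls sharply, the update can instead ease the camera toward its target height every frame. A minimal sketch, not from the original code; the clearance of 20 units and the smoothing factor are assumptions:

```javascript
// Ease the camera toward (ground height + clearance) instead of snapping.
// `smoothing` is in (0, 1]: 1 snaps immediately, smaller values ease in.
function followHeight(currentY, groundY, clearance, smoothing) {
    var targetY = groundY + clearance;
    return currentY + (targetY - currentY) * smoothing;
}

// In the animate loop (sketch):
// if (intersections.length > 0) {
//     camera.position.y = followHeight(camera.position.y, intersections[0].point.y, 20, 0.1);
// }
```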
I have a scene where I want to combine perspective objects (i.e. objects that appear smaller when they are far away) with orthographic objects (i.e. objects that appear the same size irrespective of distance). The perspective objects are part of the rendered "world", while the orthographic objects are adornments, like labels or icons. Unlike a HUD, I want the orthographic objects to be rendered "within" the world, which means they can be covered by world objects (imagine a plane passing in front of a label).
My solution is to use one renderer but two scenes, one with a PerspectiveCamera and one with an OrthographicCamera. I render them in sequence without clearing the z-buffer (the renderer's autoClear property is set to false). The problem I am facing is that I need to synchronize the placement of the objects in each scene, so that an object in one scene is assigned a z-position that is behind objects in the other scene that are in front of it, but in front of objects that are behind it.
To do that, I am designating my perspective scene as the "leading" scene, i.e. all coordinates of all objects (perspective and orthographic) are assigned based on this scene. The perspective objects use these coordinates directly and are rendered within that scene with the perspective camera. The coordinates of the orthographic objects are transformed to coordinates in the orthographic scene and then rendered in that scene with the orthographic camera. I do the transformation by projecting the coordinates in the perspective scene to the perspective camera's view plane and then unprojecting them back into the orthographic scene with the orthographic camera:
position.project(perspectiveCamera).unproject(orthogographicCamera);
Alas, this is not working as intended. The orthographic objects are always rendered in front of the perspective objects, even when they should be between them. Consider this example, in which the blue circle should be displayed behind the red square but in front of the green square (which it isn't):
var pScene = new THREE.Scene();
var oScene = new THREE.Scene();
var pCam = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 1, 1000);
pCam.position.set(0, 40, 50);
pCam.lookAt(new THREE.Vector3(0, 0, -50));
var oCam = new THREE.OrthographicCamera(window.innerWidth / -2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / -2, 1, 500);
oCam.position.copy(pCam.position);
pScene.add(pCam);
pScene.add(new THREE.AmbientLight(0xFFFFFF));
oScene.add(oCam);
oScene.add(new THREE.AmbientLight(0xFFFFFF));
var frontPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x990000 }));
frontPlane.position.z = -50;
pScene.add(frontPlane);
var backPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x009900 }));
backPlane.position.z = -100;
pScene.add(backPlane);
var circle = new THREE.Mesh(new THREE.CircleGeometry(60, 20), new THREE.MeshBasicMaterial( { color: 0x000099 }));
circle.position.z = -75;
//Transform position from perspective camera to orthogonal camera -> doesn't work, the circle is displayed in front
circle.position.project(pCam).unproject(oCam);
oScene.add(circle);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
renderer.autoClear = false;
renderer.render(oScene, oCam);
renderer.render(pScene, pCam);
You can try out the code here.
In the perspective world the (world) z-position of the circle is -75, which is between the squares (-50 and -100). But it is actually displayed in front of both squares. If you manually set the circle's z-position (in the orthographic scene) to -500, it is displayed between the squares, so with the right positioning, what I'm trying to do should be possible in principle.
I know that I cannot render a scene the same way with orthographic and perspective cameras. My intention is to reposition all orthographic objects before each render so that they appear to be at the right position.
What do I have to do to calculate the orthographic coordinates from the perspective coordinates so that the objects are rendered with the right depth values?
UPDATE:
I have added an answer with my current solution in case someone has a similar problem. However, this solution does not provide the same quality as the orthographic camera would, so I would still be happy if someone could explain why the orthographic camera does not work as expected and/or provide a solution to the problem.
You are very close to the result you expected. You have only forgotten to update the camera matrices, which must be recalculated for project and unproject to work properly:
pCam.updateMatrixWorld ( false );
oCam.updateMatrixWorld ( false );
circle.position.project(pCam).unproject(oCam);
Explanation:
In a rendering, each mesh of the scene is usually transformed by the model matrix, the view matrix, and the projection matrix.
Projection matrix:
The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport. It transforms from view space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
View matrix:
The view matrix describes the direction and position from which the scene is looked at. It transforms from world space to view (eye) space. In the coordinate system of the viewport, the X-axis points to the right, the Y-axis up, and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
Model matrix:
The model matrix defines the location, orientation, and relative size of a mesh in the scene. It transforms the vertex positions of the mesh to world space.
Whether a fragment is drawn "behind" or "in front of" another fragment depends on the fragment's depth value. While for an orthographic projection the Z coordinate of view space is mapped linearly to the depth value, for a perspective projection it is not linear.
In general, the depth value is calculated as follows:
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
float depth = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0;
Orthographic Projection
At Orthographic Projection the coordinates in the eye space are linearly mapped to normalized device coordinates.
Orthographic Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2/(r-l)        0              0              0
0              2/(t-b)        0              0
0              0              -2/(f-n)       0
-(r+l)/(r-l)   -(t+b)/(t-b)   -(f+n)/(f-n)   1
At Orthographic Projection, the Z component is calculated by the linear function:
z_ndc = z_eye * -2/(f-n) - (f+n)/(f-n)
Perspective Projection
At Perspective Projection the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye-space coordinates within the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)     0             0              0
0             2*n/(t-b)     0              0
(r+l)/(r-l)   (t+b)/(t-b)   -(f+n)/(f-n)   -1
0             0             -2*f*n/(f-n)   0
At Perspective Projection, the Z component is calculated by the rational function:
z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye
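The two mappings can be compared directly in plain JavaScript. This sketch is not part of the original answer; it simply implements the formulas above, plus the generic NDC-to-depth mapping:

```javascript
// Orthographic: eye-space Z maps linearly to NDC Z.
function orthoNdcZ(zEye, n, f) {
    return zEye * -2 / (f - n) - (f + n) / (f - n);
}

// Perspective: eye-space Z maps to NDC Z through a rational function.
function perspNdcZ(zEye, n, f) {
    return (-zEye * (f + n) / (f - n) - 2 * f * n / (f - n)) / -zEye;
}

// Window depth for a depth range [nearRange, farRange] (usually [0, 1]).
function windowDepth(zNdc, nearRange, farRange) {
    return ((farRange - nearRange) * zNdc + nearRange + farRange) / 2;
}
```

For example, with near = 1 and far = 1000, the perspective NDC Z of a point at eye-space Z = -75 is already above 0.9, which illustrates how non-linear the perspective mapping is compared to the orthographic one.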
See a detailed description at the answer to the Stack Overflow question How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
In your case this means that you have to choose the Z coordinate of the circle in the orthographic projection such that its depth value lies between the depth values of the objects in the perspective projection.
Since the depth value is nothing other than depth = z_ndc * 0.5 + 0.5 in both cases, it is also possible to do the calculations with normalized device coordinates instead of depth values.
The normalized device coordinates can easily be calculated with the project function of the THREE.PerspectiveCamera. project converts from world space to view space, and from view space to normalized device coordinates.
To find a Z coordinate which lies in between in the orthographic projection, the normalized device Z coordinate has to be transformed to a view-space Z coordinate. This can be done with the unproject function of the THREE.OrthographicCamera. unproject converts from normalized device coordinates to view space, and from view space to world space.
See further OpenGL - Mouse coordinates to Space coordinates.
See the example:
var renderer, pScene, oScene, pCam, oCam, frontPlane, backPlane, circle;
var init = function () {
    pScene = new THREE.Scene();
    oScene = new THREE.Scene();
    pCam = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 1, 1000);
    pCam.position.set(0, 40, 50);
    pCam.lookAt(new THREE.Vector3(0, 0, -50));
    oCam = new THREE.OrthographicCamera(window.innerWidth / -2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / -2, 1, 500);
    oCam.position.copy(pCam.position);
    pScene.add(pCam);
    pScene.add(new THREE.AmbientLight(0xFFFFFF));
    oScene.add(oCam);
    oScene.add(new THREE.AmbientLight(0xFFFFFF));
    frontPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial({ color: 0x990000 }));
    frontPlane.position.z = -50;
    pScene.add(frontPlane);
    backPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial({ color: 0x009900 }));
    backPlane.position.z = -100;
    pScene.add(backPlane);
    circle = new THREE.Mesh(new THREE.CircleGeometry(20, 20), new THREE.MeshBasicMaterial({ color: 0x000099 }));
    circle.position.z = -75;
    // Transform the position from the perspective camera to the orthographic
    // camera; this works because the camera matrices are updated first.
    pCam.updateMatrixWorld(false);
    oCam.updateMatrixWorld(false);
    circle.position.project(pCam).unproject(oCam);
    oScene.add(circle);
    renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
};
var render = function () {
    renderer.autoClear = false;
    renderer.render(oScene, oCam);
    renderer.render(pScene, pCam);
};
var animate = function () {
    requestAnimationFrame(animate);
    //controls.update();
    render();
};
init();
animate();
html,body {
height: 100%;
width: 100%;
margin: 0;
overflow: hidden;
}
<script src="https://threejs.org/build/three.min.js"></script>
I have found a solution that involves only the perspective camera and scales the adornments according to their distance to the camera. It is similar to the answer posted to a similar question, but not quite the same. My specific issue is that I don't only need the adornments to keep the same size independent of their distance to the camera; I also need to control their exact size on screen.
To scale them to the right size, and not just to some arbitrary size that does not change, I use the on-screen-size function found in this answer: I calculate the screen positions of both ends of a vector of known world length and check the length of its projection on screen. From the difference in length I can calculate the exact scaling factor:
var widthVector = new THREE.Vector3( 100, 0, 0 );
widthVector.applyEuler(pCam.rotation);
var baseX = getScreenPosition(circle, pCam).x;
circle.position.add(widthVector);
var referenceX = getScreenPosition(circle, pCam).x;
circle.position.sub(widthVector);
var scale = 100 / (referenceX - baseX);
circle.scale.set(scale, scale, scale);
The problem with this solution is that in most cases the calculation is precise enough to give an exact size, but every now and then a rounding error makes the adornment render incorrectly.
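The getScreenPosition helper is referenced but not shown. The usual approach is to project the object's world position with the camera and then map the resulting normalized device coordinates to pixels; the following mapping is a sketch of that second step (an assumption, not the author's exact code):

```javascript
// Map the NDC output of THREE.Vector3.project() to pixel coordinates.
function ndcToScreen(ndc, width, height) {
    return {
        x: (ndc.x + 1) / 2 * width,  // NDC x in [-1, 1] -> [0, width]
        y: (1 - ndc.y) / 2 * height  // NDC y points up, screen y points down
    };
}

// Typical usage with three.js (sketch):
// var ndc = object.position.clone().project(camera);
// var screen = ndcToScreen(ndc, window.innerWidth, window.innerHeight);
```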
I'm confused about converting screen position to world position in three.js. I've looked at several answers on Stack Overflow but can't make sense of the formulas and/or extract them from the raycasting or mouse-picking examples (neither of which I'm doing). I simply want to position a plane in world coordinates according to the width of the scene/window: for example, at the bottom right corner of the window, minus the width and height of the plane, so it's on screen and visible. Something like this:
//geometry and material of the plane
var geometry = new THREE.PlaneGeometry(50, 50);
var material = new THREE.MeshPhongMaterial( { color: 0xff0000 } );
//create plane
plane = new THREE.Mesh( geometry, material);
position = new THREE.Vector3();
position.x = ( window.innerWidth / 2 ) - plane.geometry.parameters.width;
position.y = - ( window.innerHeight / 2 ) + plane.geometry.parameters.height;
position.z = 0;
plane.position.set( position.x, position.y, position.z );
It's in the right direction, but too far off; I guess the difference between the camera's and the plane's z-positions is the issue. If my camera is at z = 500 and the plane is at z = 0, how do I adjust the x and y accordingly?
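One common way to make that adjustment (a sketch under stated assumptions, not an excerpt from the post): for a perspective camera, the visible height at distance d is 2 * d * tan(fov / 2), where fov is the vertical field of view in degrees, and the visible width is that value times the aspect ratio:

```javascript
// World-space extent visible at `distance` units in front of a
// perspective camera with vertical field of view `fovDeg` (degrees).
function visibleHeightAtDistance(fovDeg, distance) {
    return 2 * distance * Math.tan((fovDeg * Math.PI / 180) / 2);
}

function visibleWidthAtDistance(fovDeg, aspect, distance) {
    return visibleHeightAtDistance(fovDeg, distance) * aspect;
}

// Sketch: place the 50x50 plane fully visible near the bottom right corner,
// with the camera at z = 500 and the plane at z = 0:
// var halfW = visibleWidthAtDistance(camera.fov, camera.aspect, 500) / 2;
// var halfH = visibleHeightAtDistance(camera.fov, 500) / 2;
// plane.position.set(halfW - 50, -halfH + 50, 0);
```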
Hello, I have created a simple renderer for my 3D objects (PHP-generated).
I am successfully rendering all the objects, but I have some big issues with textures.
This is my texture (512x512):
I'd like to use it on my object, but this is what happens:
I can't figure out how to display a non-stretched, nicely looking grid in a 1:1 ratio.
I think I need to calculate the repeats somehow. Any ideas?
This is how I'm setting up the texture:
var texture = THREE.ImageUtils.loadTexture(basePath + '/images/textures/dt.jpg', new THREE.UVMapping());
texture.wrapT = THREE.RepeatWrapping;
texture.wrapS = THREE.RepeatWrapping;
texture.repeat.set(1,1);
stairmaterials[0] = new THREE.MeshBasicMaterial(
{
side: THREE.DoubleSide,
map: texture
});
I tried changing the repeat values to achieve a non-stretched 1:1 ratio, but I wasn't successful at all; it only got worse and worse.
I also use the following algorithm to calculate the vertex UVs:
geom.computeBoundingBox();
var max = geom.boundingBox.max;
var min = geom.boundingBox.min;
// Note: offset and range are Vector2s, so the terrain's z extent is stored
// in their y components.
var offset = new THREE.Vector2(0 - min.x, 0 - min.z);
var range = new THREE.Vector2(max.x - min.x, max.z - min.z);
geom.faceVertexUvs[0] = [];
var faces = geom.faces;
for (var i = 0; i < faces.length; i++) {
    var v1 = geom.vertices[faces[i].a];
    var v2 = geom.vertices[faces[i].b];
    var v3 = geom.vertices[faces[i].c];
    geom.faceVertexUvs[0].push([
        new THREE.Vector2((v1.x + offset.x) / range.x, (v1.z + offset.y) / range.y),
        new THREE.Vector2((v2.x + offset.x) / range.x, (v2.z + offset.y) / range.y),
        new THREE.Vector2((v3.x + offset.x) / range.x, (v3.z + offset.y) / range.y)
    ]);
}
geom.uvsNeedUpdate = true;
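A possible way to calculate the repeats for a non-stretched 1:1 grid (a sketch, not from the original post): the UVs above normalize x and z to [0, 1], so the texture is stretched over the whole extent; dividing the extent by the desired size of one tile in world units gives a uniform repeat. The tileWorldSize name is an assumption:

```javascript
// Repeats needed so one texture tile covers `tileWorldSize` world units,
// regardless of the mesh's total extent.
function computeRepeat(rangeX, rangeZ, tileWorldSize) {
    return { x: rangeX / tileWorldSize, y: rangeZ / tileWorldSize };
}

// Usage (sketch), with `range` from the bounding-box code above:
// var r = computeRepeat(range.x, range.y, 10);
// texture.repeat.set(r.x, r.y);
```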
Is your mesh imported or generated?
You are iterating through each and every triangle and doing a sort of planar scale / projection. If all of these boxes are a single mesh, this will work to an extent: you scale each and every box by the same ratio. They also exist in the same model space, so yes, it will be scaled properly and aligned.
If they are not the same mesh, it can fail.
The vertices you are looping through exist in model space. Each one of these boxes could exist at its own scale, at its own arbitrary local position. So you'd have to transform them to world space using the object's world matrix (.matrixWorld). When you apply this matrix, you bring all of those vertices into the same space. Try doing this without the bounding box, or just grab the bounding box from one object, and you will see a uniform scale and proper alignment.
You won't be able to use this algorithm to make the map show correctly in 3D though, because you are doing a 2D projection. You can, however, check each triangle's orientation and then choose a different projection direction.
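The world-space transform suggested above can be sketched without three.js. The matrix layout below matches THREE.Matrix4.elements, which is column-major with the translation in elements 12-14; in three.js itself you would simply call vertex.clone().applyMatrix4(object.matrixWorld):

```javascript
// Apply a column-major 4x4 matrix (the THREE.Matrix4.elements layout)
// to a point {x, y, z}. Sufficient for affine matrices such as an
// object's world matrix (no perspective divide).
function applyMatrix4(v, e) {
    return {
        x: e[0] * v.x + e[4] * v.y + e[8]  * v.z + e[12],
        y: e[1] * v.x + e[5] * v.y + e[9]  * v.z + e[13],
        z: e[2] * v.x + e[6] * v.y + e[10] * v.z + e[14]
    };
}
```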
I'm trying to understand quaternions for three.js, but for all the tutorials, I haven't been able to translate them into the application I need. This is the problem:
Given a sphere centered at (0,0,0), I want to position an object on the sphere's surface that acts as the focal point for the camera. This point is to be moved and rotated on the surface with keyboard input.
Setting the focal point into a chosen orbit is easy, of course, but maintaining the right rotation perpendicular to the surface escapes me. I know quaternions are necessary for smooth movement and arbitrary-axis rotation, but I don't know where to start.
The second part is rotating the camera offset together with the focal point. The snippet I found for this no longer has the desired effect, as the cameraOffset does not inherit the rotation:
var cameraOffset = relativeCameraOffset.clone().applyMatrix4( focalPoint.matrixWorld );
camera.position.copy( focalPoint.position.clone().add(cameraOffset) );
camera.lookAt( focalPoint.position );
Update 1: I tried it with a fixed camera at the pole and rotating the planet instead. But unless I'm missing something important, this fails as well, because the directions get completely skewed when heading towards the equator (left becomes forward). The code in the update loop is:
acceleration.set(0,0,0);
if (keyboard.pressed("w")) acceleration.x = 1 * accelerationSpeed;
if (keyboard.pressed("s")) acceleration.x = -1 * accelerationSpeed;
if (keyboard.pressed("a")) acceleration.z = 1 * accelerationSpeed;
if (keyboard.pressed("d")) acceleration.z = -1 * accelerationSpeed;
if (keyboard.pressed("q")) acceleration.y = 1 * accelerationSpeed;
if (keyboard.pressed("e")) acceleration.y = -1 * accelerationSpeed;
velocity.add(acceleration);
velocity.multiplyScalar(dropOff);
velocity.max(minV);
velocity.min(maxV);
planet.mesh.rotation.x += velocity.x;
planet.mesh.rotation.y += velocity.y;
planet.mesh.rotation.z += velocity.z;
So I'm still open to suggestions.
I finally found the solution in a mixture of matrices and quaternions:
//Setup
var ux = new THREE.Vector3(1,0,0);
var uy = new THREE.Vector3(0,1,0);
var uz = new THREE.Vector3(0,0,1);
var direction = ux.clone();
var m4 = new THREE.Matrix4();
var dq = new THREE.Quaternion(); // base direction quaternion
var dqq; // final direction quaternion
var dq2 = new THREE.Quaternion();
dq2.setFromAxisAngle(uz, Math.PI / 2); // perpendicular direction rotation
//Update
if (velocity.length() < 0.1) return;
if (velocity.x) { focalPoint.translateY( velocity.x ); }
if (velocity.y) { focalPoint.translateX( velocity.y ); }
//create new direction from focalPoint quat, but perpendicular
dqq = dq.clone().multiply(focalPoint.quaternion).multiply(dq2);
velocity.multiplyScalar(dropOff);
//forward direction vector
direction = ux.clone().applyQuaternion(dqq).normalize();
//use Matrix4.lookAt to align focalPoint with the direction
m4.lookAt(focalPoint.position, planet.mesh.position, direction);
focalPoint.quaternion.setFromRotationMatrix(m4);
var cameraOffset = relativeCameraOffset.clone();
cameraOffset.z = cameraDistance;
cameraOffset.applyQuaternion(focalPoint.quaternion);
camera.position.copy(focalPoint.position).add(cameraOffset);
//use direction for camera rotation as well
camera.up = direction;
camera.lookAt( focalPoint.position );
This is the hard core of it: it pans (and with some extension rotates) around the planet without the poles being an issue.
I'm not sure I understand your problem.
But in case it helps, I draw a boat on a sphere with the code below.
var geometry = new THREE.ShapeGeometry(shape);
var translation = new THREE.Matrix4().makeTranslation(boat.position.x, boat.position.y, boat.position.z);
var rotationZ = new THREE.Matrix4().makeRotationZ(-THREE.Math.degToRad(boat.cap));
var rotationX = new THREE.Matrix4().makeRotationX(-THREE.Math.degToRad(boat.latitude));
var rotationY = new THREE.Matrix4().makeRotationY(Math.PI / 2 + THREE.Math.degToRad(boat.longitude));
var rotationXY = rotationY.multiply(rotationX);
geometry.applyMatrix(rotationZ);
geometry.applyMatrix(rotationXY);
geometry.applyMatrix(translation);
First, I apply a rotation about Z to define the boat's cap (heading).
Then, I apply rotations about Y and X to set the boat perpendicular to the surface of the sphere.
Finally, I apply a translation to put the boat on the surface of the sphere.
The order of the rotations is important.
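That the rotation order matters can be checked with plain math: rotating a point 90 degrees about Y and then about Z lands somewhere different from applying the same two rotations in the opposite order. A small sketch, not from the original answer:

```javascript
// Right-handed rotations about the Y and Z axes, angle in radians.
function rotateY(v, a) {
    return { x:  v.x * Math.cos(a) + v.z * Math.sin(a),
             y:  v.y,
             z: -v.x * Math.sin(a) + v.z * Math.cos(a) };
}

function rotateZ(v, a) {
    return { x: v.x * Math.cos(a) - v.y * Math.sin(a),
             y: v.x * Math.sin(a) + v.y * Math.cos(a),
             z: v.z };
}

// (1, 0, 0) rotated Y-then-Z ends near (0, 0, -1);
// rotated Z-then-Y it ends near (0, 1, 0).
```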