I have objects (very far away) in a 3D scene, using a perspective camera, and a 2D HUD set up using an orthographic camera:
this.scene = new THREE.Scene();
this.hud = new THREE.Scene();
this.camera = new THREE.PerspectiveCamera( 30, aspect, front, back );
this.camera.position.set(0,0,0);
this.hudCamera = new THREE.OrthographicCamera (-this.windowHalfX,this.windowHalfX, this.windowHalfY, -this.windowHalfY, 1, 10);
this.hudCamera.position.set(0,0,10);
Here is my render loop:
updateFrame : function () {
    this.renderer.clear();
    // draw the 3D scene first...
    this.renderer.render( this.scene, this.camera );
    // ...then clear only the depth buffer, so the HUD is drawn on top of it
    this.renderer.clearDepth();
    this.renderer.render( this.hud, this.hudCamera );
},
How can I find the position of the objects in the HUD, using their position in the 3D scene?
In order to find the 2D HUD position of a 3D object (using three.js version r71), you may do the following, which I have modified from this post:
findHUDPosition : function (obj) {
    var vector = new THREE.Vector3();
    obj.updateMatrixWorld();
    vector.setFromMatrixPosition(obj.matrixWorld);
    vector.project(this.camera);
    vector.x = ( vector.x * this.windowHalfX );
    vector.y = ( vector.y * this.windowHalfY );
    return {
        x: vector.x,
        y: vector.y,
        z: vector.z
    };
}
The parameter obj is the object whose position in the HUD you are trying to find.
vector.project(this.camera); projects the object's position onto this.camera's near plane, conceptually by drawing a line from the object to the position of this.camera and intersecting it with the near plane.
The new values of vector's components are that intersection point, expressed in normalized device coordinates (each ranging from -1 to 1), so we have to do a quick conversion to pixel coordinates, to scale up to the size of our canvas.
vector.x = ( vector.x * this.windowHalfX );
vector.y = ( vector.y * this.windowHalfY );
The above conversion is for a setup where the HUD's coordinate system has an origin (0,0) at the center of the screen, and has a maximum value of half the canvas' resolution. For example, if your canvas is 1024 x 768 pixels, the position of the upper right corner would be (512, 384).
For a typical screen coordinate system, the bottom right corner would be (1024, 768), and the middle of the screen would be (512, 384). To have this setup, you can use the following conversion, as seen in this post.
vector.x = ( vector.x * widthHalf ) + widthHalf;
vector.y = - ( vector.y * heightHalf ) + heightHalf;
Notice that the z coordinate doesn't matter now, since we are in 2D.
The last thing you'll want to do is make sure that the object you're showing in 2D is actually visible to the perspective camera. This is as simple as checking whether the object falls within the frustum of this.camera (source for the following code):
checkFrustum : function (obj) {
    var frustum = new THREE.Frustum();
    var projScreenMatrix = new THREE.Matrix4();
    this.camera.updateMatrix();
    this.camera.updateMatrixWorld();
    projScreenMatrix.multiplyMatrices( this.camera.projectionMatrix, this.camera.matrixWorldInverse );
    frustum.setFromMatrix( projScreenMatrix );
    return frustum.containsPoint( obj.position );
}
If this is not done, you can have an object which is behind the camera registered as visible in the 2D scene (this poses problems for object tracking). It's also good practice to update obj's matrix and matrixWorld.
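As a usage sketch combining the two helpers above (the "target" name and the this.marker HUD sprite are hypothetical, not from the original post), a tracking update could look like this:
// Hypothetical usage: "target" and this.marker are placeholders
var tracked = this.scene.getObjectByName( "target" );
if ( this.checkFrustum( tracked ) ) {
    var pos = this.findHUDPosition( tracked );
    this.marker.position.set( pos.x, pos.y, 5 ); // z = 5 lies inside hudCamera's near/far range
    this.marker.visible = true;
} else {
    this.marker.visible = false; // off screen or behind the camera
}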
Related
I referenced this SO post which explains how to convert a point in 3D space to an actual pixel position on the display. This is the function, copied for convenience.
function toScreenPosition(obj, camera)
{
    var vector = new THREE.Vector3();

    var widthHalf = 0.5 * renderer.context.canvas.width;
    var heightHalf = 0.5 * renderer.context.canvas.height;

    obj.updateMatrixWorld();
    vector.setFromMatrixPosition(obj.matrixWorld);
    vector.project(camera);

    vector.x = ( vector.x * widthHalf ) + widthHalf;
    vector.y = - ( vector.y * heightHalf ) + heightHalf;

    return {
        x: vector.x,
        y: vector.y
    };
}
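For instance (a hypothetical snippet, not part of the linked answer), the returned pixel coordinates can drive an absolutely positioned HTML element:
// "label" is an assumed absolutely positioned DOM element
var pos = toScreenPosition( mesh, camera );
label.style.left = pos.x + 'px';
label.style.top = pos.y + 'px';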
I noticed that vector.z isn't actually 0 after it's been projected using vector.project(camera). For my particular setup, vector.z is ~0.8 (not that it means much for a general question).
So, how should I interpret vector.z? What does it actually represent?
vector.z represents the depth of the point. For a pixel, or position on the screen, the depth doesn't matter, as it doesn't affect the x or y position on the screen, so that component of the vector is not part of the solution, and thus ignored.
As to why it isn't 0: project() multiplies the point by the camera's projection matrix, which maps the range between the camera's near and far planes to normalized device coordinates from -1 (at the near plane) to 1 (at the far plane). A point inside the frustum therefore keeps a non-zero z, and because the mapping is non-linear, values bunch up toward 1, which is why a point well in front of the camera can already read ~0.8.
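Here is a minimal sketch (my own numbers, chosen to mirror the setup above) showing that vector.z lands between -1 and 1 rather than at 0:
var camera = new THREE.PerspectiveCamera( 45, 1, 1, 1000 ); // near = 1, far = 1000
camera.position.set( 0, 0, 10 );
camera.updateMatrixWorld();
var vector = new THREE.Vector3( 0, 0, 0 ); // a point 10 units in front of the camera
vector.project( camera );                  // now in normalized device coordinates
console.log( vector.z );                   // ~0.8: between the near (-1) and far (1) planes
vector.unproject( camera );                // round-trips back to (0, 0, 0)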
When it comes to 3D animation, there are a lot of terms and concepts that I'm not familiar with (maybe a secondary question to append to this one: what are some good books to get familiar with the concepts?). I don't know what a "UV" is (in the context of 3D rendering), and I'm not familiar with what tools exist for mapping pixels on an image to points on a mesh.
I have the following image being produced by a 360-degree camera (it's actually the output of an HTML video element):
I want the center of this image to be the "top" of the sphere, and any radius of the circle in this image to be an arc along the sphere from top to bottom.
Here's my starting point (copying lines of code directly from the Three.JS documentation):
var video = document.getElementById( "texture-video" );
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
var material = new THREE.MeshBasicMaterial( { map: texture } );
var geometry = new THREE.SphereGeometry(0.5, 100, 100);
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
camera.position.z = 1;

function animate()
{
    mesh.rotation.y += 0.01;
    requestAnimationFrame( animate );
    renderer.render( scene, camera );
}
animate();
This produces the following:
There are a few problems:
The texture is rotated 90 degrees
The ground is distorted, although this may be fixed if the rotation is fixed?
Update: Upon further investigation of the sphere being produced, it's not actually rotated 90 degrees. Instead, the top of the image is the top of the sphere and the bottom of the image is the bottom of the sphere. This causes the left and right edges of the image to become the distorted "sideways ground" I saw.
This is on the outside of the sphere. I want to project this to the inside of the sphere (and place the camera inside the sphere)
Currently if I place the camera inside the sphere I get solid black. I don't think it's a lighting issue because the Three.JS docs said that a MeshBasicMaterial didn't need lighting. I think the issue may be that the normals of all of the sphere faces point outward and I need to reverse them. I'm not sure how one would do this - but I'm pretty sure it's possible since I think this is how skyboxes work.
Doing some research I'm pretty sure I need to modify the "UV"s to fix this, I just don't know how or really what that even means...
Working Example
I forked @manthrax's CodeSandbox.io solution and updated it with my own:
https://codesandbox.io/s/4w1njkrv9
The Solution
So after spending a day researching UV mapping to understand what it meant and how it worked, I was able to sit down and scratch out some trig to map points on a sphere to points on my stereographic image. It basically came down to the following:
Use arccosine of the Y coordinate to determine the magnitude of a polar coordinate on the stereographic image
Use the arctangent of the X and Z coordinates to determine the angle of the polar coordinate on the stereographic image
Use x = Rcos(theta), y = Rsin(theta) to compute the rectangular coordinates on the stereographic image
If time permits I may draw a quick image in Illustrator or something to explain the math, but it's standard trigonometry; a small worked example follows below.
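As a worked example of the three steps (my own numbers, not from the original post), take the point on the unit sphere at the equator, facing +Z:
var x = 0, y = 0, z = 1;                  // equator of the unit sphere, facing +Z
var radius = Math.acos( y ) / Math.PI;    // acos(0) / PI = 0.5: halfway from center to rim
var angle = Math.atan2( x, z );           // atan2(0, 1) = 0
var u = radius * Math.cos( angle ) + 0.5; // 1.0: right edge of the circular image
var v = radius * Math.sin( angle ) + 0.5; // 0.5: vertical center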
I went a step further after this, because the camera I was using only has a 240 degree vertical viewing angle - which caused the image to get slightly distorted (especially near the ground). By subtracting the vertical viewing angle from 360 and dividing by two, you get an angle from the vertical within which no mapping should occur. Because the sphere is oriented along the Y axis, this angle maps to a particular Y coordinate - above which there's data, and below which there isn't.
Calculate this "minimum Y value"
For all points on the sphere:
If the point is above the minimum Y value, scale it linearly so that the first such value is counted as "0" and the top of the sphere is still counted as "1" for mapping purposes
If the point is below the minimum Y value, return nothing (see the sketch right after this list)
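Here is a sketch of that cutoff computation, using the numbers from this post:
var vFov = 240;                                   // camera's vertical viewing angle
var deadAngle = ( 360 - vFov ) / 2;               // 60 degrees, measured from the vertical
var minY = Math.cos( deadAngle * Math.PI / 180 ); // ~0.5 on the unit sphere
// (the code below calls this maxY, because the sphere ends up flipped 180 degrees)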
Weird Caveats
For some reason the code I wrote flipped the image upside down. I don't know if I messed up on my trigonometry or if I messed up on my understanding of UV maps. Whatever the case, this was trivially fixed by flipping the sphere 180 degrees after mapping
As well, I don't know how to "return nothing" in the UV map, so instead I mapped all points below the minimum Y value to the corner of the image (which was black)
With a 240-degree viewing angle the space at the bottom of the sphere with no image data was sufficiently large (on my monitor) that I could see the black circle when looking directly ahead. I didn't like the visual appearance of this, so I plugged in 270 for the vertical FOV. This leads to minor distortion around the ground, but not as bad as when using 360.
The Code
Here's the code I wrote for updating the UV maps:
// Enter the vertical FOV for the camera here
var vFov = 270; // = 240;

var material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
var geometry = new THREE.SphereGeometry(0.5, 200, 200);

function updateUVs()
{
    var maxY = Math.cos(Math.PI * (360 - vFov) / 180 / 2);
    var faceVertexUvs = geometry.faceVertexUvs[0];

    // The sphere consists of many FACES
    for ( var i = 0; i < faceVertexUvs.length; i++ )
    {
        // For each face...
        var uvs = faceVertexUvs[i];
        var face = geometry.faces[i];

        // A face is a triangle (three vertices)
        for ( var j = 0; j < 3; j ++ )
        {
            // For each vertex...
            // x, y, and z refer to the point on the sphere in 3d space where this vertex resides
            var x = face.vertexNormals[j].x;
            var y = face.vertexNormals[j].y;
            var z = face.vertexNormals[j].z;

            // Because our stereograph goes from 0 to 1 but our vertical field of view cuts off our Y early
            var scaledY = (((y + 1) / (maxY + 1)) * 2) - 1;

            // uvs[j].x, uvs[j].y refer to a point on the 2d texture
            if (y < maxY)
            {
                var radius = Math.acos(1 - ((scaledY / 2) + 0.5)) / Math.PI;
                var angle = Math.atan2(x, z);
                uvs[j].x = (radius * Math.cos(angle)) + 0.5;
                uvs[j].y = (radius * Math.sin(angle)) + 0.5;
            } else {
                uvs[j].x = 0;
                uvs[j].y = 0;
            }
        }
    }

    // For whatever reason my UV mapping turned everything upside down
    // Rather than fix my math, I just replaced "minY" with "maxY" and
    // rotated the sphere 180 degrees
    geometry.rotateZ(Math.PI);
    geometry.uvsNeedUpdate = true;
}
updateUVs();

var mesh = new THREE.Mesh( geometry, material );
The Results
Now if you add this mesh to a scene everything looks perfect:
One Thing I Still Don't Understand
Right around the "hole" at the bottom of the sphere there's a multi-colored ring. It almost looks like a mirror of the sky. I don't know why this exists or how it got there. Could anyone shed light on this in the comments?
Here is as close as I could get it in about 10 minutes of fiddling with a polar unwrapping of the UVs.
You can modify the polarUnwrap function to try to get a better mapping...
https://codesandbox.io/s/8nx75lkn28
You can replace the TextureLoader().loadTexture() with
//assuming you have created a HTML video element with id="video"
var video = document.getElementById( 'video' );
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
to get your video fed in there...
More info here:
https://threejs.org/docs/#api/textures/VideoTexture
Also this may be useful to you:
https://community.theta360.guide/t/displaying-thetas-dual-fisheye-video-with-three-js/1160
I think it would be quite difficult to modify the UVs so that the stereographically projected image will fit. The UVs of a sphere are set up to fit textures with an equirectangular projection.
To transform the image from stereographic to equirectangular, you might want to use panorama tools like PTGui or Hugin. Or you can use Photoshop (apply Filter > Distort > Polar Coordinates > Polar to Rectangular).
Equirectangular projection of the image (done with Photoshop), resized to a 2:1 aspect ratio (not necessary for the texture):
If you want the texture to be on the inside of the sphere (i.e. the normals flipped), you can set the material to THREE.BackSide:
var material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
Maybe you have to flip the texture horizontally then: How to flip a Three.js texture horizontally
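One common way to do that flip (a sketch of the same idea as in the linked question) is a negative repeat with wrapping:
texture.wrapS = THREE.RepeatWrapping; // required for a negative repeat on the U axis
texture.repeat.x = -1;                // mirrors the texture horizontally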
I have a scene where I want to combine perspective objects (i.e. objects that appear smaller when they are far away) with orthographic objects (i.e. objects that appear the same size irrespective of distance). The perspective objects are part of the rendered "world", while the orthographic objects are adornments, like labels or icons. Unlike a HUD, I want the orthographic objects to be rendered "within" the world, which means that they can be covered by world objects (imagine a plane passing in front of a label).
My solution is to use one renderer but two scenes, one with a PerspectiveCamera and one with an OrthographicCamera. I render them in sequence without clearing the z buffer (the renderer's autoClear property is set to false). The problem I am facing is that I need to synchronize the placement of the objects in each scene, so that an object in one scene is assigned a z-position that is behind objects in the other scene that are in front of it, but in front of objects that are behind it.
To do that, I am designating my perspective scene as the "leading" scene, i.e. all coordinates of all objects (perspective and orthographic) are assigned based on this scene. The perspective objects use these coordinates directly and are rendered within that scene with the perspective camera. The coordinates of the orthographic objects are transformed to coordinates in the orthographic scene and then rendered in that scene with the orthographic camera. I do the transformation by projecting the coordinates in the perspective scene to the perspective camera's view plane and then unprojecting them back into the orthographic scene with the orthographic camera:
position.project(perspectiveCamera).unproject(orthogographicCamera);
Alas, this is not working as intended. The orthographic objects are always rendered in front of the perspective objects, even if they should be between them. Consider this example, in which the blue circle should be displayed behind the red square, but in front of the green square (which it isn't):
var pScene = new THREE.Scene();
var oScene = new THREE.Scene();
var pCam = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 1, 1000);
pCam.position.set(0, 40, 50);
pCam.lookAt(new THREE.Vector3(0, 0, -50));
var oCam = new THREE.OrthographicCamera(window.innerWidth / -2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / -2, 1, 500);
oCam.position.copy(pCam.position);
pScene.add(pCam);
pScene.add(new THREE.AmbientLight(0xFFFFFF));
oScene.add(oCam);
oScene.add(new THREE.AmbientLight(0xFFFFFF));
var frontPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x990000 }));
frontPlane.position.z = -50;
pScene.add(frontPlane);
var backPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x009900 }));
backPlane.position.z = -100;
pScene.add(backPlane);
var circle = new THREE.Mesh(new THREE.CircleGeometry(60, 20), new THREE.MeshBasicMaterial( { color: 0x000099 }));
circle.position.z = -75;
//Transform position from perspective camera to orthogonal camera -> doesn't work, the circle is displayed in front
circle.position.project(pCam).unproject(oCam);
oScene.add(circle);
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
renderer.autoClear = false;
renderer.render(oScene, oCam);
renderer.render(pScene, pCam);
You can try out the code here.
In the perspective world the (world) z-position of the circle is -75, which is between the squares (-50 and -100). But it is actually displayed in front of both squares. If you manually set the circle's z-position (in the orthographic scene) to -500, it is displayed between the squares, so with the right positioning, what I'm trying should be possible in principle.
I know that I cannot render a scene identically with orthographic and perspective cameras. My intention is to reposition all orthographic objects before each rendering so that they appear to be at the right position.
What do I have to do to calculate the orthographic coordinates from the perspective coordinates so that the objects are rendered with the right depth values?
UPDATE:
I have added an answer with my current solution to the problem in case someone has a similar problem. However, this solution does not provide the same quality as the orthographic camera, so I would still be happy if someone could explain why the orthographic camera does not work as expected and/or provide a solution to the problem.
You are very close to the result you expected. You have forgotten to update the camera matrices, which have to be calculated so that the project and unproject operations can work properly:
pCam.updateMatrixWorld ( false );
oCam.updateMatrixWorld ( false );
circle.position.project(pCam).unproject(oCam);
Explanation:
In a rendering, each mesh of the scene usually is transformed by the model matrix, the view matrix and the projection matrix.
Projection matrix:
The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport. It transforms from view space to clip space, and the coordinates in clip space are transformed to normalized device coordinates (NDC) in the range (-1, -1, -1) to (1, 1, 1) by dividing by the w component of the clip coordinates.
View matrix:
The view matrix describes the direction and position from which the scene is looked at; it transforms from world space to view (eye) space. In the coordinate system of the viewport, the X-axis points to the right, the Y-axis up, and the Z-axis out of the view (note that in a right-handed system the Z-axis is the cross product of the X-axis and the Y-axis).
Model matrix:
The model matrix defines the location, orientation and relative size of a mesh in the scene. It transforms the vertex positions of the mesh to world space.
Whether a fragment is drawn "behind" or "in front of" another fragment depends on the fragment's depth value. While for orthographic projection the Z coordinate of view space is linearly mapped to the depth value, in perspective projection it is not linear.
In general, the depth value is calculated as follows:
float ndc_depth = clip_space_pos.z / clip_space_pos.w;
float depth = (((farZ-nearZ) * ndc_depth) + nearZ + farZ) / 2.0;
Orthographic Projection
At Orthographic Projection the coordinates in the eye space are linearly mapped to normalized device coordinates.
Orthographic Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
 2/(r-l)       0             0             0
 0             2/(t-b)       0             0
 0             0            -2/(f-n)       0
-(r+l)/(r-l)  -(t+b)/(t-b)  -(f+n)/(f-n)   1
At Orthographic Projection, the Z component is calculated by the linear function:
z_ndc = z_eye * -2/(f-n) - (f+n)/(f-n)
Perspective Projection
At Perspective Projection, the projection matrix describes the mapping from 3D points in the world, as they are seen from a pinhole camera, to 2D points of the viewport. The eye space coordinates in the camera frustum (a truncated pyramid) are mapped to a cube (the normalized device coordinates).
Perspective Projection Matrix:
r = right, l = left, b = bottom, t = top, n = near, f = far
2*n/(r-l)     0             0              0
0             2*n/(t-b)     0              0
(r+l)/(r-l)   (t+b)/(t-b)  -(f+n)/(f-n)   -1
0             0            -2*f*n/(f-n)    0
At Perspective Projection, the Z component is calculated by the rational function:
z_ndc = ( -z_eye * (f+n)/(f-n) - 2*f*n/(f-n) ) / -z_eye
See a detailed description at the answer to the Stack Overflow question How to render depth linearly in modern OpenGL with gl_FragCoord.z in fragment shader?
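To make the difference concrete, here is a small sketch of the two mappings (illustrative numbers of my own: n = 1, f = 500, a point at z_eye = -75):
// NDC depth for orthographic projection: linear in z_eye
function zNdcOrtho( zEye, n, f ) {
    return zEye * -2 / ( f - n ) - ( f + n ) / ( f - n );
}
// NDC depth for perspective projection: rational in z_eye
function zNdcPersp( zEye, n, f ) {
    return ( -zEye * ( f + n ) / ( f - n ) - 2 * f * n / ( f - n ) ) / -zEye;
}
console.log( zNdcOrtho( -75, 1, 500 ) ); // ~ -0.70
console.log( zNdcPersp( -75, 1, 500 ) ); // ~ 0.98, already close to the far plane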
In your case this means that you have to choose the Z coordinate of the circle in the orthographic projection in such a way that its depth value lies in between the depth values of the objects in the perspective projection.
Since the depth value is nothing other than depth = z_ndc * 0.5 + 0.5 in both cases, it is also possible to do the calculations with normalized device coordinates instead of depth values.
The normalized device coordinates can easily be calculated with the project function of the THREE.PerspectiveCamera: project converts from world space to view space, and from view space to normalized device coordinates.
To find a Z coordinate which lies in between in the orthographic projection, the in-between normalized device Z coordinate has to be transformed back to a view space Z coordinate. This can be done with the unproject function of the THREE.OrthographicCamera: unproject converts from normalized device coordinates to view space, and from view space to world space.
See further OpenGL - Mouse coordinates to Space coordinates.
See the example:
var renderer, pScene, oScene, pCam, oCam, frontPlane, backPlane, circle;

var init = function () {
    pScene = new THREE.Scene();
    oScene = new THREE.Scene();

    pCam = new THREE.PerspectiveCamera(40, window.innerWidth / window.innerHeight, 1, 1000);
    pCam.position.set(0, 40, 50);
    pCam.lookAt(new THREE.Vector3(0, 0, -50));

    oCam = new THREE.OrthographicCamera(window.innerWidth / -2, window.innerWidth / 2, window.innerHeight / 2, window.innerHeight / -2, 1, 500);
    oCam.position.copy(pCam.position);

    pScene.add(pCam);
    pScene.add(new THREE.AmbientLight(0xFFFFFF));
    oScene.add(oCam);
    oScene.add(new THREE.AmbientLight(0xFFFFFF));

    frontPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x990000 }));
    frontPlane.position.z = -50;
    pScene.add(frontPlane);

    backPlane = new THREE.Mesh(new THREE.PlaneGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x009900 }));
    backPlane.position.z = -100;
    pScene.add(backPlane);

    circle = new THREE.Mesh(new THREE.CircleGeometry(20, 20), new THREE.MeshBasicMaterial( { color: 0x000099 }));
    circle.position.z = -75;

    // Transform the position from the perspective camera to the orthographic camera.
    // The camera matrices have to be up to date for project/unproject to work.
    pCam.updateMatrixWorld ( false );
    oCam.updateMatrixWorld ( false );
    circle.position.project(pCam).unproject(oCam);
    oScene.add(circle);

    renderer = new THREE.WebGLRenderer();
    renderer.setSize(window.innerWidth, window.innerHeight);
    document.body.appendChild(renderer.domElement);
};

var render = function () {
    renderer.autoClear = false;
    renderer.render(oScene, oCam);
    renderer.render(pScene, pCam);
};

var animate = function () {
    requestAnimationFrame(animate);
    //controls.update();
    render();
};

init();
animate();
html, body {
    height: 100%;
    width: 100%;
    margin: 0;
    overflow: hidden;
}
<script src="https://threejs.org/build/three.min.js"></script>
I have found a solution that involves only the perspective camera and scales the adornments according to their distance to the camera. It is similar to the answer posted to a similar question, but not quite the same. My specific issue is that I not only need the adornments to be the same size independent of their distance to the camera, I also need to control their exact size on screen.
To scale them to the right size, not just to some unchanging size, I use the function for calculating on-screen position found in this answer: I project both ends of a vector of known world length to the screen and measure the length of that projection. From the difference in length I can calculate the exact scaling factor:
var widthVector = new THREE.Vector3( 100, 0, 0 );
widthVector.applyEuler(pCam.rotation);
var baseX = getScreenPosition(circle, pCam).x;
circle.position.add(widthVector);
var referenceX = getScreenPosition(circle, pCam).x;
circle.position.sub(widthVector);
var scale = 100 / (referenceX - baseX);
circle.scale.set(scale, scale, scale);
The problem with this solution is that, while in most cases the calculation is precise enough to provide an exact size, every now and then a rounding error makes the adornment render incorrectly.
I'm confused about converting screen position to world position in three.js. I've looked at several answers on Stack Overflow but can't make sense of the formulas and/or extract them from the raycasting or mouse-picking examples (neither of which I'm doing). I simply want to position a plane in world coordinates according to the width of the scene/window: for example, at the bottom right corner of the window, minus the width and height of the plane, so it's on screen and visible. Something like this:
// geometry and material of the plane
var geometry = new THREE.PlaneGeometry(50, 50);
var material = new THREE.MeshPhongMaterial( { color: 0xff0000 } );

// create the plane
plane = new THREE.Mesh( geometry, material );

position = new THREE.Vector3();
position.x = ( window.innerWidth / 2 ) - plane.geometry.parameters.width;
position.y = - ( window.innerHeight / 2 ) - plane.geometry.parameters.height;
position.z = 0;
plane.position.set( position.x, position.y, position.z );
It's in the right direction but too far; I guess the difference between the camera's and the plane's z positions is the issue. If my camera is at z = 500 and the plane is at z = 0, how do I adjust the x and y accordingly?
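In case it helps, here is a sketch of the standard frustum math (my own illustration, not from an answer): at distance d in front of a perspective camera, the visible height in world units is 2 * d * tan(fov / 2), which gives the conversion between pixels and world units:
var d = camera.position.z - plane.position.z;     // 500 - 0 in this setup
var vFOV = camera.fov * Math.PI / 180;            // vertical fov in radians
var visibleHeight = 2 * Math.tan( vFOV / 2 ) * d; // world units spanned vertically at z = 0
var visibleWidth = visibleHeight * camera.aspect; // world units spanned horizontally
// bottom-right corner, inset by the plane's 50 x 50 world-unit size:
plane.position.x = ( visibleWidth / 2 ) - 50;
plane.position.y = -( visibleHeight / 2 ) + 50;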
I have a big problem with rotation in three.js:
I want to rotate my 3D cube in one of my games.
# init
geometry = new THREE.CubeGeometry grid, grid, grid
material = new THREE.MeshLambertMaterial {color:0xFFFFFF * Math.random(), shading:THREE.FlatShading, overdraw:true, transparent: true, opacity:0.8}
for i in [1...@shape.length]
    othergeo = new THREE.Mesh new THREE.CubeGeometry(grid, grid, grid)
    othergeo.position.x = grid * @shape[i][0]
    othergeo.position.y = grid * @shape[i][1]
    THREE.GeometryUtils.merge geometry, othergeo
@mesh = new THREE.Mesh geometry, material

# rotate
@mesh.rotation.y += y * Math.PI / 180
@mesh.rotation.x += x * Math.PI / 180
@mesh.rotation.z += z * Math.PI / 180
and (x, y, z) may be (1, 0, 0)
then the cube can rotate, but the problem is that the cube rotates around its own axes, so after it has rotated it doesn't rotate as expected anymore.
I found the page How to rotate a Three.js Vector3 around an axis?, but that just rotates a Vector3 point around a world axis.
and I have tried to use matrixRotationWorld like this:
@mesh.matrixRotationWorld.x += x * Math.PI / 180
@mesh.matrixRotationWorld.y += y * Math.PI / 180
@mesh.matrixRotationWorld.z += z * Math.PI / 180
but it doesn't work. I don't know whether I used it the wrong way or whether there are other ways.
So, how can I make the 3D cube rotate around the world's axes?
Since release r59, three.js provides these three functions for rotating an object around its own axes:
object.rotateX(angle);
object.rotateY(angle);
object.rotateZ(angle);
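For example, to spin a mesh one degree per frame around its own Y axis:
mesh.rotateY( Math.PI / 180 ); // call once per animation frame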
Here are the two functions I use. They are based on matrix rotations and can rotate around arbitrary axes. To rotate using the world's axes you would want to use the second function, rotateAroundWorldAxis().
// Rotate an object around an arbitrary axis in object space
var rotObjectMatrix;
function rotateAroundObjectAxis(object, axis, radians) {
    rotObjectMatrix = new THREE.Matrix4();
    rotObjectMatrix.makeRotationAxis(axis.normalize(), radians);

    // old code for Three.JS pre r54:
    // object.matrix.multiplySelf(rotObjectMatrix); // post-multiply
    // new code for Three.JS r55+:
    object.matrix.multiply(rotObjectMatrix);

    // old code for Three.js pre r49:
    // object.rotation.getRotationFromMatrix(object.matrix, object.scale);
    // old code for Three.js r50-r58:
    // object.rotation.setEulerFromRotationMatrix(object.matrix);
    // new code for Three.js r59+:
    object.rotation.setFromRotationMatrix(object.matrix);
}

var rotWorldMatrix;
// Rotate an object around an arbitrary axis in world space
function rotateAroundWorldAxis(object, axis, radians) {
    rotWorldMatrix = new THREE.Matrix4();
    rotWorldMatrix.makeRotationAxis(axis.normalize(), radians);

    // old code for Three.JS pre r54:
    // rotWorldMatrix.multiply(object.matrix);
    // new code for Three.JS r55+:
    rotWorldMatrix.multiply(object.matrix); // pre-multiply

    object.matrix = rotWorldMatrix;

    // old code for Three.js pre r49:
    // object.rotation.getRotationFromMatrix(object.matrix, object.scale);
    // old code for Three.js pre r59:
    // object.rotation.setEulerFromRotationMatrix(object.matrix);
    // code for r59+:
    object.rotation.setFromRotationMatrix(object.matrix);
}
So you should call these functions within your anim function (requestAnimationFrame callback). The following rotates the mesh by one degree (Math.PI / 180 radians) per call around the world's x-axis:
var xAxis = new THREE.Vector3(1,0,0);
rotateAroundWorldAxis(mesh, xAxis, Math.PI / 180);
I needed the rotateAroundWorldAxis function but the above code doesn't work with the newest release (r52). It looks like getRotationFromMatrix() was replaced by setEulerFromRotationMatrix()
function rotateAroundWorldAxis( object, axis, radians ) {
    var rotationMatrix = new THREE.Matrix4();
    rotationMatrix.makeRotationAxis( axis.normalize(), radians );
    rotationMatrix.multiplySelf( object.matrix ); // pre-multiply
    object.matrix = rotationMatrix;
    object.rotation.setEulerFromRotationMatrix( object.matrix );
}
with r55 you have to change
rotationMatrix.multiplySelf( object.matrix );
to
rotationMatrix.multiply( object.matrix );
In Three.js R59, object.rotation.setEulerFromRotationMatrix(object.matrix); has been changed to object.rotation.setFromRotationMatrix(object.matrix);
3js is changing so rapidly :D
Just in case...in r52 the method is called setEulerFromRotationMatrix instead of getRotationFromMatrix
Somewhere around r59 this gets easier (rotate around x):
bb.GraphicsEngine.prototype.calcRotation = function ( obj, rotationX )
{
    // rotates the object's position vector about the world X axis
    var euler = new THREE.Euler( rotationX, 0, 0, 'XYZ' );
    obj.position.applyEuler( euler );
}
In Three.js R66, this is what I use (CoffeeScript version):
THREE.Object3D.prototype.rotateAroundWorldAxis = (axis, radians) ->
    rotWorldMatrix = new THREE.Matrix4()
    rotWorldMatrix.makeRotationAxis axis.normalize(), radians
    rotWorldMatrix.multiply this.matrix
    this.matrix = rotWorldMatrix
    this.rotation.setFromRotationMatrix this.matrix
I solved it this way:
I created the 'ObjectControls' module for ThreeJS, which allows you to rotate a single OBJECT (or a Group), and not the SCENE.
Include the library:
<script src="ObjectControls.js"></script>
Usage:
var controls = new ObjectControls(camera, renderer.domElement, yourMesh);
You can find a live demo here: https://albertopiras.github.io/threeJS-object-controls/
Here is the repo: https://github.com/albertopiras/threeJS-object-controls.