In a three.js project, I'm using a modified version of PointerLockControls.js for the camera control. I want to modify the rotation functionality so that there is no absolute "up" axis by which the camera rotates, but rather moving the mouse up or down will pitch indefinitely, same for left and right for yawing (and keys for rolling).
I can't seem to get the yawing component working, as it seems to rotate around the same axis regardless of pitch (i.e. moving the mouse left or right while facing straight up just rotates the camera around that same fixed axis).
Any help in the right direction would be great!
I had the same problem recently, so I had a look at some of the THREE.*Controls files, similar to you.
Using those as a basis, I made this:
https://github.com/squarefeet/THREE.ObjectControls
The important bits are the following (see here for context):
var updateTarget = function( dt ) {
    var velX = positionVector.x * dt,
        velY = positionVector.y * dt,
        velZ = positionVector.z * dt;

    rotationQuaternion.set(
        rotationVector.x * dt,
        rotationVector.y * dt,
        rotationVector.z * dt,
        1
    ).normalize();

    targetObject.quaternion.multiply( rotationQuaternion );

    targetObject.translateX( velX );
    targetObject.translateY( velY );
    targetObject.translateZ( velZ );
};
The rotationVector is probably of most interest to you, so here's what it's doing:
It's using a THREE.Vector3 to describe the rotation, the rotationVector variable in this example.
Each component of the rotationVector (x, y, z) corresponds to pitch, yaw, and roll respectively.
Set a quaternion's x, y, and z values to those of the rotation vector, making sure the w component is always 1 (to learn what the w component does, see here; it's a great answer).
Normalizing this quaternion will get us a quaternion of length 1, which is very handy when we come to the next step...
targetObject in this case is an instance of THREE.Object3D (a THREE.Mesh, which inherits from THREE.Object3D), so it has a quaternion we can play with.
So now, it's just a matter of multiplying your targetObject's quaternion by your shiny new rotationQuaternion.
Since our object is now rotated to where we want it, we can move it along its new local axes using translateX/Y/Z.
The important thing to note here is that quaternions don't behave like Euler angles: rather than adding two rotations together to get a new angle, you multiply the quaternions.
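If it helps to see that wired up to input, here's a minimal sketch of one way to feed the rotation vector from mouse movement and a roll key. The event wiring, the applyRotation name, and the sensitivity values are my own assumptions, not part of the repo above:
// Sketch (assumed wiring, not the original controls): accumulate pitch/yaw
// deltas from the mouse, roll from the keyboard, and apply them each frame.
var rotationDelta = new THREE.Vector3();   // x = pitch, y = yaw, z = roll
var rotationQuaternion = new THREE.Quaternion();
var mouseSensitivity = 0.002;              // assumed tuning value
var rollDirection = 0;                     // -1 / 0 / +1, set by key handlers

document.addEventListener( 'mousemove', function ( event ) {
    rotationDelta.x -= event.movementY * mouseSensitivity; // pitch
    rotationDelta.y -= event.movementX * mouseSensitivity; // yaw
} );

// Call once per frame from the render loop.
function applyRotation( camera, dt ) {
    rotationDelta.z = rollDirection * dt;  // roll from held keys, scaled by frame time

    // Small-angle quaternion built the same way as rotationQuaternion above.
    rotationQuaternion.set( rotationDelta.x, rotationDelta.y, rotationDelta.z, 1 ).normalize();

    // Post-multiplying applies the delta in the camera's local frame, so yaw is
    // always around the camera's current up axis, never a fixed world axis.
    camera.quaternion.multiply( rotationQuaternion );

    rotationDelta.set( 0, 0, 0 );          // mouse deltas are consumed each frame
}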
Anyway, I hope that helps you somewhat!
Related
I am trying to model a Rubik's Cube for a personal project, using Zdog for lightweight 3d graphics. Zdog uses a {x,y,z} vector to represent rotation - I believe this is essentially a Tait-Bryan angle.
To animate a rotation of the top, right, front, etc. side, I attach the 9 blocks to an anchor in the center of the cube and rotate it 90 degrees in the desired direction. This works great, but when the animation is done I need to "save" the translation and rotation on the 9 blocks. Translation is relatively simple, but I'm stuck on rotation. I basically need a function like this:
function updateRotation(xyz, axis, angle) {
    // xyz is a {x, y, z} vector
    // axis is "x", "y", or "z"
    // angle is the angle of rotation in radians
}
that would apply the axis/angle rotation in world coordinates to the xyz vector in object coordinates. Originally I just had xyz[axis] += angle, but this only works when no other axis has any rotation. I then thought I could use a lookup table, and I think that's possible as I only use quarter turns, but constructing the table turns out to be harder than I thought.
I am starting to suspect I need to translate the xyz vector to some other representation (matrix? quaternion?) and apply the rotation there, but I'm not sure where to start. The frustrating thing is that my blocks are in the right position at the end of the animation - I'm just not sure how to apply the parent transform so that I can detach them from the parent without losing the rotation.
As far as I can tell, this can't be done with Euler angles alone (at least not in any easy way). My solution was to convert to quaternions, rotate, then convert back to Euler:
function updateRotation(obj, axis, rotation) {
    const {x, y, z} = obj.rotate;
    const q = new Quaternion()
        .rotateZ(z)
        .rotateY(y)
        .rotateX(x);
    const q2 = new Quaternion();
    if (axis === 'x') {
        q2.rotateX(rotation);
    } else if (axis === 'y') {
        q2.rotateY(rotation);
    } else if (axis === 'z') {
        q2.rotateZ(rotation);
    }
    q.multiply(q2, null);
    const e = new Euler().fromQuaternion(q);
    obj.rotate.x = e.x;
    obj.rotate.y = e.y;
    obj.rotate.z = e.z;
    obj.normalizeRotate();
}
This uses the Euler and Quaternion classes from math.gl.
(It turned out Zdog actually uses ZYX Euler angles as far as I could tell, hence the order of rotations when creating the first quaternion.)
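For completeness, a hypothetical call for a single quarter turn might look like this; the block variable is an assumption about your setup, standing in for one of the nine blocks after the slice animation finishes:
// Hypothetical usage: bake a quarter turn around the world Y axis into a block
// before detaching it from the rotating anchor.
updateRotation(block, 'y', Math.PI / 2);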
So, I'm putting together a 3D project that will eventually make its way onto a kiosk in the office of a client; it basically shows all of their branches as points on a globe. One of the features requested is that end users be able to swipe on the screen to orbit around the globe; in addition, they should be able to rotate the camera on its Z axis via a rotation gesture. The problem is that I'm using camera.lookAt in the animation loop, which relies on the up vector being updated correctly whenever I rotate the camera so that it doesn't "snap" back into place along the previous up vector when a user swipes, and for the life of me, I cannot get it to cooperate.
Currently, what I'm doing to update the up vector (based off of another stackoverflow thread with a similar issue) is this:
// Current full 360 degree angle of rotation, calculated earlier
let radian = THREE.Math.degToRad( full );

// Create new vector at radian angle to camera's current position
let v1 = new THREE.Vector3(
    _this.object.position.x + Math.cos( radian ),
    _this.object.position.y + Math.sin( radian ),
    _this.object.position.z
).sub( _this.object.position ).normalize();

// _this.target = (0, 0, 0)
let v2 = _this.target.clone().sub( _this.object.position ).normalize();

// Cross vectors to get the proper up
let v3 = new THREE.Vector3().crossVectors( v1, v2 ).normalize();

_this.object.up.copy( v3 );
And this works... up to the point where the camera seemingly inverts once I get near the side of the globe opposite the camera's starting position (0, 0, 1.75), and then negates my rotations (as far as I can tell), which causes the same "snap" to a different rotation as before.
Once I rotate the camera, I want it to maintain the rotation when using lookAt, regardless of the lookAt inverting everything.
I'm creating a script that rotates a THREE.js camera around based on a mobile phone's gyroscope input. It's currently working pretty well, except that every time I rotate my phone past a quadrant boundary, the camera turns 180 degrees instead of continuing as intended. This is the code I currently use:
private onDeviceOrientation = ( event ) => {
    if( event.alpha !== null && event.beta !== null && event.gamma !== null ) {
        let rotation = [
            event.beta,
            event.alpha,
            event.gamma
        ];
        this.orientation = new THREE.Vector3(rotation[0], rotation[1], rotation[2]);
        this.viewer.navigation.setTarget(this.calcPosition());
    }
};
private calcPosition = () => {
    const camPosition = this.viewer.navigation.getPosition(),
        radians = Math.PI / 180,
        aAngle = radians * - this.orientation.y,
        bAngle = radians * + this.orientation.z,
        distance = this.calcDistance();

    let medianX = Math.cos(bAngle) * Math.sin(aAngle);
    let medianY = Math.cos(bAngle) * Math.cos(aAngle);

    let nX = camPosition.x + (medianX * distance),
        nY = camPosition.y + (medianY * distance),
        nZ = camPosition.z + Math.sin(bAngle) * distance;

    return new THREE.Vector3(nX, nY, nZ);
};
window.addEventListener('deviceorientation', this.onDeviceOrientation, false);
So after doing some research I found that I need to use a quaternion to prevent the switching when going into a new quadrant. I have no experience with quaternions, so I was wondering what the best way would be to combine the two Vector3's in the code above into a single quaternion.
[Edit]
I calculate the distance using this method:
private calcDistance = (): number => {
    const camPosition = this.viewer.navigation.getPosition();
    const curTarget = this.viewer.navigation.getTarget();

    let nX = camPosition.x - curTarget.x,
        nY = camPosition.y - curTarget.y,
        nZ = camPosition.z - curTarget.z;

    return Math.sqrt((nX * nX) + (nY * nY) + (nZ * nZ));
};
And I follow the MDN conventions when working with the gyroscope.
[Edit #2]
Turns out I had my angles all wrong; I managed to fix it by calculating the final position like this:
let nX = camPosition.x - (Math.cos(zAngle) * Math.sin(yAngle)) * distance,
    nY = camPosition.y + (Math.cos(zAngle) * Math.cos(yAngle)) * distance,
    nZ = camPosition.z - (Math.cos(xAngle) * Math.sin(zAngle)) * distance;
Here is the closest I can give you to an answer:
First of all, you don't need a quaternion. (If you really find yourself needing to convert between Euler angles and quaternions, it is possible as long as you have all the axis conventions down pat.) The Euler angle orientation information you obtain from the device is sufficient to represent any rotation without ambiguity; if you were calculating angular velocities, I'd agree that you want to avoid Euler angles since there are some orientations in which the rates of change of the Euler angles go to infinity. But you're not, so you don't need it.
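(For reference, if you ever do need that conversion, three.js can do it for you. The 'YXZ' order and the negated gamma below follow the mapping three.js' own device-orientation controls use; treat it as an assumption to verify against your axis conventions.)
// Assumed mapping from deviceorientation angles (degrees) to a three.js quaternion.
const euler = new THREE.Euler(
    THREE.Math.degToRad(event.beta),    // x: front/back tilt
    THREE.Math.degToRad(event.alpha),   // y: compass heading
    -THREE.Math.degToRad(event.gamma),  // z: left/right tilt, sign flipped
    'YXZ'                               // intrinsic rotation order
);
const quaternion = new THREE.Quaternion().setFromEuler(euler);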
I'm going to try to summarize the underlying problem you're trying to solve, and then tell you why it might not be solvable. 🙁
You are given the full orientation of the device with a camera, as yaw, pitch, and roll. Assuming yaw is like panning the camera horizontally, and pitch is like tilting the camera vertically, then roll is a degree of freedom that doesn't affect the direction the camera is pointing, but it does affect the orientation of the images the camera sees. So you are given three coordinates, where two have to do with the direction the camera is pointing, and one does not.
You are trying to output this information to the camera controller, but you are only allowed to specify the target location, which is the point in space that the camera is looking at. This is to be specified via three Cartesian coordinates, which you can calculate from the direction the camera is pointing (two degrees of freedom) and the distance to the target object (one degree of freedom).
So you have three inputs and three outputs, but only two of those have anything to do with each other. The target location has no way to represent the roll direction of the camera, and the orientation of the camera has no way to represent the distance to some target object.
Since you don't have a real target object, you can just pick an arbitrary fixed distance (1, for example) and use it. You certainly don't have anything from which to calculate it... if I follow your code, you are defining distance in terms of the target location, which is itself defined in terms of the distance from the previous step. This is extra work for no benefit at best (the distance drifts around some initial value), and numerically unstable at worst (the distance drifts toward zero and you lose precision or get infinities). Just use a fixed value for distance and make it simple.
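As a concrete illustration of the fixed-distance idea, the target could be computed along these lines; the yaw/pitch-to-axis mapping here just mirrors the question's own calcPosition and is an assumption, not a prescription:
// Sketch: compute the look-at target from yaw/pitch with a constant distance.
const FIXED_DISTANCE = 1;  // any constant works; only the direction matters

function targetFromOrientation(camPosition, yaw, pitch) {
    // yaw/pitch in radians, e.g. derived from alpha/beta as in the question
    return new THREE.Vector3(
        camPosition.x + Math.cos(pitch) * Math.sin(yaw) * FIXED_DISTANCE,
        camPosition.y + Math.cos(pitch) * Math.cos(yaw) * FIXED_DISTANCE,
        camPosition.z + Math.sin(pitch) * FIXED_DISTANCE
    );
}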
So now you probably have a system that points a camera in a direction, but you cannot tell it what the roll angle is. That means your camera controller is apparently just going to choose it for you based on the yaw and pitch angles. Let's say it always picks zero degrees (that would be the least crazy thing it could do). This will cause discontinuities when the roll angle and yaw angle line up (when the pitch is at ±90°): Imagine pointing a physical camera at the northern horizon and yawing around westward, past the western horizon, and settling on the southern horizon. The whole time, the roll angle of the camera is 0°, so there's no problem. But now imagine pointing it at the northern horizon, and pitching upward, past the zenith, and continuing to pitch backward until you are facing the southern horizon. Now the camera is upside down; the roll angle is 180°. But if the camera controller doesn't change the roll angle from 0°, then it will do a nonphysical "flip" right when you pass the zenith. The problem is that there really is no way to synthesize a roll angle based purely on position and not have this happen. We've just demonstrated that there are two ways to move your camera from pointing north to pointing south, where the roll angle is completely different at the end.
So you're stuck, I guess. Well, maybe not. Can you rotate the image from the camera based on the roll angle of the device orientation? That is, add the roll back into the displayed image? If so, you may have a solution. Let's say the roll angle of the camera controller is always at zero. Then you just rotate the image by the desired roll angle (something derived from beta I guess?) and you're done. If the camera controller has some other convention for choosing the roll angle, you will need to figure that out, undo it, and add the roll angle back on.
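If the rendered output is an ordinary canvas element, adding the roll back could be as simple as a CSS transform; the canvas variable and the sign of the angle are assumptions you'd need to check against your setup:
// Sketch: counter-rotate the displayed image by the device's roll angle (degrees).
function applyRoll(canvas, rollDegrees) {
    canvas.style.transform = 'rotate(' + (-rollDegrees) + 'deg)';
}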
Without the actual system in front of me I probably can't help you debug your way to a solution. So I think this is where my journey with this question must end. Good luck!
Summary:
You don't need a quaternion
Pick a fixed distance to your simulated target
Add the roll angle by rotating the image before displaying it
Good luck!
I have a mesh which is a circle geometry. I would like to animate it like in this example from two.js, a 2D library:
https://two.js.org/examples/physics.html
For now I'm looking at this example and putting the camera on top of the shape, but I'm sure there's a simpler way for my needs: https://threejs.org/examples/#webgl_gpgpu_water
Does anyone know how I can do that?
You simply need to shift the vertex positions by some sin() or cos() value based on the X and Y coordinates and an incrementing phase (time) to animate.
Your vertex shader could include something like this, where phase increments with time (typically from a clock).
pos.x = position.x + sin( (phase * frequency) + position.y ) * amplitude;
pos.y = position.y + sin( (phase * frequency) + position.x ) * amplitude;
The basic concept is here, but you'll have to adapt the components yourself by testing the result. You should probably adjust frequency and amplitude, and add some more factors to introduce asymmetry and randomness.
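Here's a fuller sketch of that idea as a three.js ShaderMaterial; the uniform names and values are placeholders to tune, not a definitive implementation:
// Sketch: displace a circle geometry's vertices with a travelling sine wave.
const material = new THREE.ShaderMaterial({
    uniforms: {
        phase:     { value: 0.0 },
        frequency: { value: 3.0 },   // placeholder, adjust to taste
        amplitude: { value: 0.2 }    // placeholder, adjust to taste
    },
    vertexShader: `
        uniform float phase;
        uniform float frequency;
        uniform float amplitude;
        void main() {
            vec3 pos = position;
            pos.x += sin( (phase * frequency) + position.y ) * amplitude;
            pos.y += sin( (phase * frequency) + position.x ) * amplitude;
            gl_Position = projectionMatrix * modelViewMatrix * vec4( pos, 1.0 );
        }
    `,
    fragmentShader: `
        void main() {
            gl_FragColor = vec4( 1.0, 1.0, 1.0, 1.0 );
        }
    `
});

const mesh = new THREE.Mesh( new THREE.CircleGeometry( 1, 64 ), material );

// In the render loop, drive the phase from a clock:
// material.uniforms.phase.value = clock.getElapsedTime();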
I'm writing a server for a game my friends and I are making. I want to store, in a variable, the direction a certain player is facing in 3D space. I was considering making it an object with two angles in radians, i.e. a vertical angle and a horizontal angle. But my friend told me to store it the way Three.js stores it because it would make his life easier. Could anybody help me out here?
You should brush up on the Math for Game Developers series: https://www.youtube.com/watch?v=sKCF8A3XGxQ&list=PLW3Zl3wyJwWOpdhYedlD-yCB7WQoHf-My&index=1
Specifically, using vectors. You should store the orientation / facing angle of your characters or entities as a Vector3, or a 3 dimensional vector. In THREE.js, that's new THREE.Vector3( x, y, z )
To get the direction of object A to object B, relative to A you would do:
var direction = posB.clone().sub( posA )
This clones position B so we don't modify it by the subtraction, and then immediately subtracts position A from it.
However, you'll notice that the vector now has some length. This is often undesirable in calculations, for example if you wanted to multiply this direction by something else, say a thrust force. In this case, we need to normalize the vector:
direction.normalize()
Now you can do fun stuff like:
posA.add( direction.clone().multiplyScalar( 10.0 ) );
This will move posA 10 units of space in the direction of posB.
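Putting those pieces together, here's a small self-contained sketch; the positions are placeholder values:
// Sketch: store a facing direction as a normalized THREE.Vector3 and use it
// to step an entity toward a target.
const posA = new THREE.Vector3( 0, 0, 0 );      // entity position (placeholder)
const posB = new THREE.Vector3( 30, 0, 40 );    // target position (placeholder)

// Unit vector pointing from A toward B; this is the "facing" you would store.
const direction = posB.clone().sub( posA ).normalize();

// Move posA 10 units of space toward posB.
posA.add( direction.clone().multiplyScalar( 10.0 ) );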