Force camera.lookAt to maintain current Z rotation - javascript

So, I'm putting together a 3D project that will eventually make its way onto a kiosk in a client's office; it basically shows all of their branches as points on a globe. One of the requested features is that end users be able to swipe on the screen to orbit around the globe; in addition, they should be able to rotate the camera on its Z axis via a rotation gesture. The problem is that I'm using camera.lookAt in the animation loop, which relies on the up vector being updated correctly whenever I rotate the camera, otherwise the view "snaps" back into place along the previous up vector when a user swipes, and for the life of me, I cannot get it to cooperate.
Currently, what I'm doing to update the up vector (based on another Stack Overflow thread with a similar issue) is this:
//Current full 360-degree angle of rotation, calculated earlier
let radian = THREE.Math.degToRad(full);
//Create a new vector at the radian angle relative to the camera's current position
let v1 = new THREE.Vector3(
    _this.object.position.x + Math.cos(radian),
    _this.object.position.y + Math.sin(radian),
    _this.object.position.z
).sub(_this.object.position).normalize();
//_this.target = (0, 0, 0)
let v2 = _this.target.clone().sub(_this.object.position).normalize();
//Cross the vectors to get the proper up
let v3 = new THREE.Vector3().crossVectors(v1, v2).normalize();
_this.object.up.copy(v3);
And this works... up to the point where the camera nears the side of the globe opposite its starting position (0, 0, 1.75). There the camera seemingly inverts and negates my rotations (as far as I can tell), which causes the same kind of "snap", just to a different rotation than before.
Once I rotate the camera, I want it to maintain that rotation when using lookAt, regardless of lookAt inverting everything.
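For what it's worth, here is a minimal sketch of an alternative approach (not from the thread above): keep a stable up vector, let lookAt produce a roll-free orientation, then re-apply the accumulated roll about the camera's local view axis each frame. rollAngle is an assumed variable holding the signed Z rotation accumulated from the rotation gesture.
camera.up.set(0, 1, 0);    // stable reference up for lookAt
camera.lookAt(target);     // orients the camera with zero roll
camera.rotateZ(rollAngle); // rotate about the local view axis to restore the roll
This sidesteps the up-vector bookkeeping entirely, at the cost of the usual lookAt singularity when the view direction is parallel to the up vector.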

Related

Calculate camera target for orbit controls from given rotation and position of camera

I have the following issue when using orbit controls: I set the camera target to (0, 0, 0). Then, after an AJAX call, I set the camera position and rotation manually. After the orbit controls update call, the rotation of the camera gets reset.
I have found that this happens because when the orbit controls update is called, the rotation of the camera is calculated from the camera position and the target of the orbit controls.
Is there some way to solve this?
I would save the distance between the camera and the target before applying the transformation to the camera. Afterwards, I would place the target in front of the camera at that distance.
var distance = new THREE.Vector3().subVectors(camera.position, controls.target).length();
// apply transformation - matrix, euler rotation, or quaternion?
var normal = new THREE.Vector3(0,0,-1).applyQuaternion(camera.quaternion);
// instead of quaternion, you could also use .applyEuler(camera.rotation);
// or if you used matrix, extract quaternion from matrix
controls.target = camera.position.clone().add(normal.setLength(distance));
EDIT: Could you additionally explain why, in the calculation of normal, you use the vector (0, 0, -1)?
In computer graphics, the local coordinate system of a camera is usually set up so that the positive x-axis points to the right and the positive y-axis points up; hence, the direction you are looking in is the negative z-axis. For the calculation above, I needed the direction the camera is looking in, transformed from local space, (0,0,-1), to world space via (0,0,-1).applyQuaternion(camera.quaternion).
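To make the order of operations concrete, here is a hedged usage sketch; newPosition and newQuaternion are hypothetical values from the AJAX response.
// save the distance before applying the transformation
var distance = new THREE.Vector3().subVectors(camera.position, controls.target).length();
camera.position.copy(newPosition);     // hypothetical new position
camera.quaternion.copy(newQuaternion); // hypothetical new rotation
var normal = new THREE.Vector3(0, 0, -1).applyQuaternion(camera.quaternion);
controls.target.copy(camera.position).add(normal.setLength(distance));
controls.update(); // the update now preserves the camera's new rotation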

Calculating a quaternion from two combined angles

I'm creating a script that rotates a THREE.js camera around based on a mobile phone's gyroscope input. It's currently working pretty well, except that every time I rotate my phone past a quadrant boundary, the camera turns 180 degrees instead of continuing as intended. This is the code that I currently use:
private onDeviceOrientation = ( event ) => {
    if ( event.alpha !== null && event.beta !== null && event.gamma !== null ) {
        let rotation = [
            event.beta,
            event.alpha,
            event.gamma
        ];
        this.orientation = new THREE.Vector3(rotation[0], rotation[1], rotation[2]);
        this.viewer.navigation.setTarget(this.calcPosition());
    }
};
private calcPosition = () => {
    const camPosition = this.viewer.navigation.getPosition(),
        radians = Math.PI / 180,
        aAngle = radians * -this.orientation.y,
        bAngle = radians * +this.orientation.z,
        distance = this.calcDistance();
    let medianX = Math.cos(bAngle) * Math.sin(aAngle);
    let medianY = Math.cos(bAngle) * Math.cos(aAngle);
    let nX = camPosition.x + (medianX * distance),
        nY = camPosition.y + (medianY * distance),
        nZ = camPosition.z + Math.sin(bAngle) * distance;
    return new THREE.Vector3(nX, nY, nZ);
};
window.addEventListener('deviceorientation', this.onDeviceOrientation, false);
So after doing some research I found that I need to use a Quaternion to prevent the switching when entering a new quadrant. I have no experience with quaternions, so I was wondering what the best way would be to combine the two Vector3s in the code above into a single Quaternion.
[Edit]
I calculate the distance using this method:
private calcDistance = (): number => {
    const camPosition = this.viewer.navigation.getPosition();
    const curTarget = this.viewer.navigation.getTarget();
    let nX = camPosition.x - curTarget.x,
        nY = camPosition.y - curTarget.y,
        nZ = camPosition.z - curTarget.z;
    // Euclidean distance between camera and target
    return Math.sqrt((nX * nX) + (nY * nY) + (nZ * nZ));
};
And I follow the MDN conventions when working with the gyroscope.
[Edit #2]
Turns out I had my angles all wrong; I managed to fix it by calculating the final position like this:
let nX = camPosition.x - (Math.cos(zAngle) * Math.sin(yAngle)) * distance,
nY = camPosition.y + (Math.cos(zAngle) * Math.cos(yAngle)) * distance,
nZ = camPosition.z - (Math.cos(xAngle) * Math.sin(zAngle)) * distance;
Here is the closest I can give you to an answer:
First of all, you don't need a quaternion. (If you really find yourself needing to convert between Euler angles and quaternions, it is possible as long as you have all the axis conventions down pat.) The Euler angle orientation information you obtain from the device is sufficient to represent any rotation without ambiguity; if you were calculating angular velocities, I'd agree that you want to avoid Euler angles since there are some orientations in which the rates of change of the Euler angles go to infinity. But you're not, so you don't need it.
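For reference, a hedged sketch of that conversion, using the same conventions as three.js's DeviceOrientationControls ('YXZ' order with gamma negated); verify them against your own axis conventions before relying on this.
var degToRad = Math.PI / 180;
var euler = new THREE.Euler(
    event.beta * degToRad,   // x: front-to-back tilt
    event.alpha * degToRad,  // y: compass direction
    -event.gamma * degToRad, // z: left-to-right tilt, negated
    'YXZ'                    // rotation order used by DeviceOrientationControls
);
var quaternion = new THREE.Quaternion().setFromEuler(euler);
(DeviceOrientationControls additionally post-multiplies a -90° rotation about X so the camera looks out of the back of the device rather than the top.)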
I'm going to try to summarize the underlying problem you're trying to solve, and then tell you why it might not be solvable. 🙁
You are given the full orientation of the device with a camera, as yaw, pitch, and roll. Assuming yaw is like panning the camera horizontally, and pitch is like tilting the camera vertically, then roll is a degree of freedom that doesn't affect the direction the camera is pointing, but does affect the orientation of the images the camera sees. So you are given three coordinates, where two have to do with the direction the camera is pointing, and one does not.
You are trying to output this information to the camera controller, but you are only allowed to specify the target location, which is the point in space that the camera is looking at. This is to be specified via three Cartesian coordinates, which you can calculate from the direction the camera is pointing (two degrees of freedom) and the distance to the target object (one degree of freedom).
So you have three inputs and three outputs, but only two of those have anything to do with each other. The target location has no way to represent the roll direction of the camera, and the orientation of the camera has no way to represent the distance to some target object.
Since you don't have a real target object, you can just pick an arbitrary fixed distance (1, for example) and use it. You certainly don't have anything from which to calculate it... if I follow your code, you are defining distance in terms of the target location, which is itself defined in terms of the distance from the previous step. This is extra work for no benefit at best (the distance drifts around some initial value), and numerically unstable at worst (the distance drifts toward zero and you lose precision or get infinities). Just use a fixed value for distance and make it simple.
So now you probably have a system that points a camera in a direction, but you cannot tell it what the roll angle is. That means your camera controller is apparently just going to choose it for you based on the yaw and pitch angles. Let's say it always picks zero degrees (that would be the least crazy thing it could do). This will cause discontinuities when the roll angle and yaw angle line up (when the pitch is at ±90°): Imagine pointing a physical camera at the northern horizon and yawing around westward, past the western horizon, and settling on the southern horizon. The whole time, the roll angle of the camera is 0°, so there's no problem. But now imagine pointing it at the northern horizon, and pitching upward, past the zenith, and continuing to pitch backward until you are facing the southern horizon. Now the camera is upside down; the roll angle is 180°. But if the camera controller doesn't change the roll angle from 0°, then it will do a nonphysical "flip" right when you pass the zenith. The problem is that there really is no way to synthesize a roll angle based purely on position and not have this happen. We've just demonstrated that there are two ways to move your camera from pointing north to pointing south, where the roll angle is completely different at the end.
So you're stuck, I guess. Well, maybe not. Can you rotate the image from the camera based on the roll angle of the device orientation? That is, add the roll back into the displayed image? If so, you may have a solution. Let's say the roll angle of the camera controller is always at zero. Then you just rotate the image by the desired roll angle (something derived from beta I guess?) and you're done. If the camera controller has some other convention for choosing the roll angle, you will need to figure that out, undo it, and add the roll angle back on.
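Concretely, if the viewer exposes nothing but a target, one hedged option is to rotate the rendered canvas itself; viewerCanvas and rollDegrees are hypothetical names.
// Hypothetical: counter the controller's fixed roll by rotating the displayed image
viewerCanvas.style.transform = 'rotate(' + rollDegrees + 'deg)';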
Without the actual system in front of me I probably can't help you debug your way to a solution. So I think this is where my journey with this question must end. Good luck!
Summary:
You don't need a quaternion
Pick a fixed distance to your simulated target
Add the roll angle by rotating the image before displaying it
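In plain three.js terms (not the viewer API from the question), the summary might look roughly like this; yaw, pitch, and roll are assumed to be radians derived from alpha/beta/gamma per your conventions.
var distance = 1; // arbitrary fixed distance to the simulated target
var dir = new THREE.Vector3(
    Math.cos(pitch) * Math.sin(yaw), // unit direction from yaw and pitch, y-up
    Math.sin(pitch),
    Math.cos(pitch) * Math.cos(yaw)
);
var target = camera.position.clone().add(dir.multiplyScalar(distance));
camera.up.set(0, 1, 0);
camera.lookAt(target);
camera.rotateZ(roll); // add the roll back after pointing the camera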
Good luck!

updating an object's geometry when camera is moving causes glitches - three.js

I have a problem updating a vertex of a line in three.js
So, I want to have a line in my scene whose start is always at (0, 0, 0) and whose end is always at a specific position on the user's screen (in x, y coordinates).
What I do to achieve that (and I almost succeed) is to have an invisible plane that always faces the camera and is always positioned a little bit in front of it. The reason I do that is that I want the line to seem like it is "going towards" the user's screen. So I cast a ray from the desired screen position (in x, y) and check at which point it intersects the plane; that is my 3D point in the three.js scene. Then I update one of the two vertices of the line.
The problem
What I do works fine: the line's end is where I want it to be. But something in the updating of the camera and the vertex is not synchronized, which causes some noticeable glitches. When I move the camera, the line does not update itself quickly and smoothly, and as a result I see the line in another position before I see it in the calculated, desirable one.
Please take a look at this jsfiddle I created to emulate the problem.
What can I do to avoid these glitches?
Thanks
Code I use in the render function:
var cameToCenterScaled = camera.position.clone();
cameToCenterScaled.setLength(cameToCenterScaled.length()*0.9);
plane.position.set(cameToCenterScaled.x, cameToCenterScaled.y, cameToCenterScaled.z);
plane.lookAt(camera.position);
// define in pixels where in screen we want the line to end
var notePos = findNotePoint(120,30);
linemesh.geometry.vertices[1].set(notePos.x, notePos.y, notePos.z);
linemesh.geometry.verticesNeedUpdate = true;
When you raycast, you set the raycaster from the camera, so you have to make sure the camera's matrices are updated.
simply add
camera.updateMatrixWorld();
before you call
raycaster.setFromCamera( new THREE.Vector2( x_, y_ ) , camera );
and the line will behave as you described
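Putting it together, a hedged sketch of what findNotePoint might look like with the fix; the pixel-to-NDC conversion is an assumption about how the fiddle maps screen coordinates.
function findNotePoint(px, py) {
    camera.updateMatrixWorld(); // refresh camera matrices before raycasting
    // convert pixel coordinates to normalized device coordinates (-1..1)
    var ndc = new THREE.Vector2(
        (px / window.innerWidth) * 2 - 1,
        -(py / window.innerHeight) * 2 + 1
    );
    raycaster.setFromCamera(ndc, camera);
    var hits = raycaster.intersectObject(plane);
    return hits.length ? hits[0].point : new THREE.Vector3(); // fall back to origin on a miss
}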

How do I 'wrap' a plane over a sphere with three.js?

I am relatively new to three.js and am trying to position and manipulate a plane object to give the effect of it lying over the surface of a sphere object (or any object, for that matter), so that the plane takes the form of the object's surface. The intention is to be able to move the plane across the surface later on.
I position the plane in front of the sphere and step through the plane's vertices, casting a ray from each towards the sphere to detect the intersection with it. I then try to change the z position of those vertices, but it does not achieve the desired result. Can anyone give me some guidance on how to get this working, or indeed suggest another method?
This is how I attempt to change the vertices (with an offset of 1 so the plane is visible 'on' the sphere surface):
planeMesh.geometry.vertices[vertexIndex].z = collisionResults[0].distance - 1;
Making sure to set the following before rendering:
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;
I have a fiddle that shows where I am. In it I cast my rays in z, but I do not get intersections (collisions) with the sphere, and I cannot change the plane in the manner I wish.
http://jsfiddle.net/stokewoggle/vuezL/
You can rotate the camera around the scene with the left and right arrows (in Chrome, anyway) to see the shape of the plane. I have made the sphere see-through, as I find it useful for seeing the plane better.
EDIT: Updated fiddle and corrected description mistake.
Sorry for the delay, but it took me a couple of days to figure this one out. The reason the collisions were not working was that (as we had suspected) the planeMesh vertices are in local space, which is essentially the same as starting in the center of the sphere, not where you're expecting. At first I thought a quick fix would be to apply the worldMatrix, like stemkoski did in the three.js collision example on his GitHub that I linked to, but that didn't end up working either, because the plane itself is defined only in x and y coordinates (up and down, left and right); no z (depth) information exists locally when you create a flat 2D planeMesh.
What ended up working is manually setting the z component of each vertex of the plane. You had originally wanted the plane to be at z = 201, so I just moved that code inside the loop that goes through each vertex and manually set each vertex to z = 201. Now all the ray start positions are correct (globally), and a ray direction of (0, 0, -1) results in correct collisions.
var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
localVertex.z = 201;
One more thing: in order to make the plane-wrap absolutely perfect in shape, instead of using (0, 0, -1) as each ray direction, I manually calculated each ray direction by subtracting each vertex from the sphere's center position and normalizing the resulting vector. Now the collisionResult intersection point will be even better.
var directionVector = new THREE.Vector3();
directionVector.subVectors(sphereMesh.position, localVertex);
directionVector.normalize();
var ray = new THREE.Raycaster(localVertex, directionVector);
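Read together, the fragments above amount to a per-vertex loop roughly like this (a hedged reconstruction using the old Geometry API; the exact z bookkeeping depends on the fiddle's scene setup):
for (var vertexIndex = 0; vertexIndex < planeMesh.geometry.vertices.length; vertexIndex++) {
    var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
    localVertex.z = 201; // start every ray at the plane's intended global depth

    var directionVector = new THREE.Vector3();
    directionVector.subVectors(sphereMesh.position, localVertex);
    directionVector.normalize();

    var ray = new THREE.Raycaster(localVertex, directionVector);
    var collisionResults = ray.intersectObject(sphereMesh);
    if (collisionResults.length > 0) {
        // offset of 1 keeps the vertex visibly 'on' the sphere, as in the question
        planeMesh.geometry.vertices[vertexIndex].z = collisionResults[0].distance - 1;
    }
}
planeMesh.geometry.verticesNeedUpdate = true;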
Here is a working example:
http://jsfiddle.net/FLyaY/1/
As you can see, the planeMesh fits snugly on the sphere, kind of like a patch or a band-aid. :)
Hope this helps. Thanks for posting the question on three.js's github page - I wouldn't have seen it here. At first I thought it was a bug in THREE.Raycaster but in the end it was just user (mine) error. I learned a lot about collision code from working on this problem and I will be using it later down the line in my own 3D game projects. You can check out one of my games at: https://github.com/erichlof/SpacePong3D
Best of luck to you!
-Erich
Your ray start position is not good, probably because the vertex coordinates are local to the plane. You start the raycast from inside the sphere, so it never hits anything.
I changed the ray start position like this as a test and get 726 collisions:
var rayStart = new THREE.Vector3(0, 0, 500);
var ray = new THREE.Raycaster(rayStart, new THREE.Vector3(0, 0, -1));
Forked jsfiddle: http://jsfiddle.net/H5YSL/
I think you need to transform the vertex coordinates to world coordinates to get the position correctly. That should be easy to figure out from docs and examples.
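A hedged sketch of that transformation, assuming the old Geometry API:
planeMesh.updateMatrixWorld(); // make sure the world matrix is up to date
var worldVertex = planeMesh.geometry.vertices[vertexIndex].clone()
    .applyMatrix4(planeMesh.matrixWorld); // local space -> world space
// then raycast from worldVertex instead of the raw local vertex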

Three.js lookat seems to be flipped

I have a demo of what I mean here: Test Site or (Backup)
For some reason, even though the mouse vector is correct, my object is always rotated by 90 degrees toward the positive Y axis. The only call where this could be going wrong, as far as I can tell, is ship.mesh.lookAt(mouse);, which I call every time the screen is animated.
Can anyone tell me what to do to fix this and why it is doing it?
object.lookAt( position ) orients the object so that the object's local positive z-axis points toward the desired position.
Your "ship's" front points in the direction of the local positive y-axis.
EDIT:
To re-orient your geometry, apply a matrix right after the geometry is created, like so:
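// Rotate the geometry 90 degrees about X so its former +Y front points along local +Z, which lookAt treats as forward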
geometry.applyMatrix( new THREE.Matrix4().makeRotationX( Math.PI / 2 ) );
