Device orientation using Quaternion - javascript

I've written a JS SDK that listens to mobile device rotation, providing 3 inputs:
α: an angle between 0 and 360 degrees
β: an angle between -180 and 180 degrees
γ: an angle between -90 and 90 degrees
Documentation for device rotation
I tried using Euler angles to determine the device orientation, but I encountered the gimbal lock effect, which made the calculation explode when the device was pointing up. That led me to use quaternions, which do not suffer from gimbal lock.
I've found this JS library that converts α, β and γ to a quaternion; for the following values:
α : 81.7324
β : 74.8036
γ : -84.3221
I get this Quaternion for ZXY order:
w: 0.7120695154301472
x: 0.6893688637611577
y: -0.10864439143062626
z: 0.07696733776346154
Code:
var rad = Math.PI / 180;
window.addEventListener("deviceorientation", function (ev) {
  // Update the rotation object
  var q = Quaternion.fromEuler(ev.alpha * rad, ev.beta * rad, ev.gamma * rad, 'ZXY');
  // Set the CSS style on the element you want to rotate
  elm.style.transform = "matrix3d(" + q.conjugate().toMatrix4() + ")";
}, true);
Visualizing the device orientation using a 4D CSS matrix derived from the quaternion reflected the correct device orientation (DEMO, use mobile):
Incorrect visualization with Euler angles and the developer tools (DEMO, use mobile):
I would like to write a method that takes α, β and γ and outputs whether the device is in one of the following orientations:
portrait
portrait upside down
landscape left
landscape right
display up
display down
Each orientation is defined as a range of ±45° around the relevant axis.
What approach should I take?

Given that you've already managed to convert the Euler angles into a unit quaternion, here's a simple way to determine the orientation of the device:
Take a world-space vector pointing straight up (i.e. along the +z axis) and use the quaternion (or its conjugate) to rotate it into device coordinates. (Note that you could also do this using the Euler angles directly, or using a rotation matrix, or using any other representation of the device rotation that you can apply to transform a vector.)
Take the transformed vector, and find the component with the largest absolute value. This will tell you which axis of your device is pointing closest to vertical, and the sign of the component value tells you whether it's pointing up or down.
In particular:
if the device x axis is the most vertical, the device is in a landscape orientation;
if the device y axis is the most vertical, the device is in a portrait orientation;
if the device z axis is the most vertical, the device has the screen pointing up or down.
Here's a simple JS demo that should work at least on Chrome — or it would, except that the device orientation API doesn't seem to work in Stack Snippets at all. :( For a live demo, try this CodePen instead.
const orientations = [
  ['landscape left', 'landscape right'], // device x axis points up/down
  ['portrait', 'portrait upside down'],  // device y axis points up/down
  ['display up', 'display down'],        // device z axis points up/down
];
const rad = Math.PI / 180;

function onOrientationChange (ev) {
  const q = Quaternion.fromEuler(ev.alpha * rad, ev.beta * rad, ev.gamma * rad, 'ZXY');
  // transform an upward-pointing vector to device coordinates
  const vec = q.conjugate().rotateVector([0, 0, 1]);
  // find the axis with the largest absolute value
  const [value, axis] = vec.reduce((acc, cur, idx) => (Math.abs(cur) < Math.abs(acc[0]) ? acc : [cur, idx]), [0, 0]);
  const orientation = orientations[axis][1 * (value < 0)];

  document.querySelector('#angles').textContent = `alpha = ${ev.alpha.toFixed(1)}°, beta = ${ev.beta.toFixed(1)}°, gamma = ${ev.gamma.toFixed(1)}°`;
  document.querySelector('#vec').textContent = `vec = ${vec.map(a => a.toFixed(3))}, dominant axis = ${axis}, value = ${value.toFixed(3)}`;
  document.querySelector('#orientation').textContent = `orientation = ${orientation}`;
}

onOrientationChange({ alpha: 0, beta: 0, gamma: 0 });
window.addEventListener("deviceorientation", onOrientationChange, true);
<script src="https://cdn.jsdelivr.net/npm/quaternion@1.1.0/quaternion.min.js"></script>
<div id="angles"></div>
<div id="vec"></div>
<div id="orientation"></div>
Note that there's apparently some inconsistency between browsers in the signs and ranges of the Euler angles supplied by the device orientation API, potentially causing the wrong signs to be computed on other browsers. You might need to do some browser sniffing to fix this, or use a wrapper library like gyronorm.js.
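If you go the wrapper route, a minimal gyronorm.js setup (based on its documented API; treat the details as an assumption rather than tested code) could feed normalized angles into the handler above:
var gn = new GyroNorm();
gn.init().then(function () {
  gn.start(function (data) {
    // data.do.alpha / beta / gamma are the device-orientation angles,
    // normalized across browsers by the library
    onOrientationChange({ alpha: data.do.alpha, beta: data.do.beta, gamma: data.do.gamma });
  });
});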

Related

Way to detect angle (0, 90, -90, 180) of the phone in JS

I am using JavaScript. I need to know when the user changes the orientation of the phone, and I also need to know to which orientation (0, 90, -90 or 180).
I tried to use orientationchange from the docs.
window.addEventListener("orientationchange", function(event) {
console.log("the orientation of the device is now " + event.target.screen.orientation.angle);
});
Unfortunately, this doesn't work on iOS Safari; there's no orientation object in event.target.screen.
I tried using window.orientation (it seems to work, but per the docs it's deprecated and must be avoided).
I tried a resize listener:
window.onresize = () => {
  if (window.innerWidth >= window.innerHeight) {
    // landscape
  } else {
    // portrait
  }
};
This has two problems: 1) sometimes it doesn't detect correctly on some iPhones, or even on the same iPhone at different times; 2) I have no idea how to get the angle (0, 90, -90, 180). I don't know much about screens, innerHeight, innerWidth or the like.
Is there a proper solution?
Have you tried beta (for the x-axis) or gamma (for the y-axis)?
e.g.
function handleOrientation(event) {
  var absolute = event.absolute;
  var alpha = event.alpha; // z-axis
  var beta = event.beta;   // x-axis
  var gamma = event.gamma; // y-axis
  // Do stuff with the new orientation data
}
You can use the deviceorientation event on the window.
window.addEventListener("deviceorientation", function(e){
const {absolute, alpha, beta, gamma} = e;
});
According to MDN:
The DeviceOrientationEvent.alpha value represents the motion of the device around the z axis, represented in degrees with values ranging from 0 to 360.
The DeviceOrientationEvent.beta value represents the motion of the device around the x axis, represented in degrees with values ranging from -180 to 180. This represents a front to back motion of the device.
The DeviceOrientationEvent.gamma value represents the motion of the device around the y axis, represented in degrees with values ranging from -90 to 90. This represents a left to right motion of the device.
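For the specific 0/90/-90/180 output the asker wants, here is a minimal sketch (my assumption, not part of the MDN docs): prefer the modern screen.orientation API and fall back to estimating the angle from beta/gamma; the sign mapping in the fallback is device-dependent and may need flipping.
function getOrientationAngle(e) {
  if (screen.orientation && typeof screen.orientation.angle === "number") {
    return screen.orientation.angle; // 0, 90, 180 or 270 on supporting browsers
  }
  // fallback estimate from deviceorientation angles
  if (Math.abs(e.gamma) > 45) {
    return e.gamma > 0 ? 90 : -90; // landscape; sign convention assumed
  }
  return e.beta >= 0 ? 0 : 180;    // portrait or upside down
}

window.addEventListener("deviceorientation", function (e) {
  console.log("approximate angle:", getOrientationAngle(e));
});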

How to rotate (any degree) an ImageData object and receive a new "rotated" ImageData.data array

I have an imageData object containing elevation data encoded in RGBA. I need to compute the elevation profile at a specified heading. For a one-pixel-wide line with a north-up heading it is easy, as it is just a column of the imageData array. For a specified degree it is also easy, as it involves simple trigonometry to compute the x,y positions in the array; however, my requirement is more complex. In fact I need to compute the elevation profile (max values) for a stripe wider than one pixel, so I need to rotate the whole image data.
In short, I need to:
1) Crop the image to work on smaller data (for example the centered half of the original image) - performance is important.
2) Rotate the cropped image about its center by the specified degree, so the required elevation data ends up in a top-up orientation in the array.
3) Scan the elevation profile data row by row and compute the max elevation value for each row.
As this operation must be computed several times per second, I need an optimized solution. I tried transforming/rotating the canvas context; however, the imageData object is not changed, so I guess I would need to save the rotated context image to another image object, but I am not sure how.
Any hint is more than welcome.
I decided to do the math myself, as I only need to rotate part of the image. This function rotates a single point (p) clockwise around (r) by angle (in radians):
function rotate(p, r, angle) // angle in radians, clockwise
{
  var new_p = {};
  // standard 2D rotation about r (y axis points down in screen coordinates)
  new_p.x = Math.round(r.x + ((p.x - r.x) * Math.cos(angle) - (p.y - r.y) * Math.sin(angle)));
  new_p.y = Math.round(r.y + ((p.x - r.x) * Math.sin(angle) + (p.y - r.y) * Math.cos(angle)));
  return new_p;
}
Test: rotate (0,0) around (400,400) by 90°:
var test_p = rotate({ x:0, y:0 },{x:400,y:400},Math.PI/2)
result
{x: 800, y: 0}
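Building on that, here is a sketch (my own extrapolation, nearest-neighbour sampling, no interpolation) of producing a whole rotated ImageData by inverse-mapping each destination pixel through rotate():
// Rotate an ImageData clockwise about its center by `angle` radians and
// return a new ImageData; pixels mapped from outside stay transparent.
function rotateImageData(src, angle) {
  var dst = new ImageData(src.width, src.height);
  var r = { x: src.width / 2, y: src.height / 2 };
  for (var y = 0; y < src.height; y++) {
    for (var x = 0; x < src.width; x++) {
      // inverse map: which source pixel lands on this destination pixel?
      var p = rotate({ x: x, y: y }, r, -angle);
      if (p.x >= 0 && p.x < src.width && p.y >= 0 && p.y < src.height) {
        var si = (p.y * src.width + p.x) * 4;
        var di = (y * src.width + x) * 4;
        dst.data[di]     = src.data[si];
        dst.data[di + 1] = src.data[si + 1];
        dst.data[di + 2] = src.data[si + 2];
        dst.data[di + 3] = src.data[si + 3];
      }
    }
  }
  return dst;
}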

How to render raycasted wall textures on HTML-Canvas

I am trying to build a raycasting engine. I have successfully rendered a scene column by column using ctx.fillRect() as follows.
canvas-raycasting.png
demo
code
The code I wrote for the above render:
var scene = [];  // contains the distance of the wall from the player for a particular ray
var points = []; // contains the point at which each ray hits the wall
/*
I have a Raycaster class which does all the math required for ray casting
and returns an object containing two arrays:
1 > scene  : array of numbers representing the distance of the wall from the player.
2 > points : objects of type { x, y } representing the point where the ray hits the wall.
*/
var data = raycaster.cast(walls);
/*
raycaster : instance of the Raycaster class,
walls     : array of boundaries, containing objects of type { x1, y1, x2, y2 } where
            (x1,y1) is the start point and
            (x2,y2) is the end point.
*/
scene = data.scene;
var scene_width = 800;
var scene_height = 400;
var w = scene_width / scene.length;
for (var i = 0; i < scene.length; ++i) {
  var c = scene[i] == Infinity ? 500 : scene[i];
  var s = map(c, 0, 500, 255, 0);           // how dark/bright the wall should be
  var h = map(c, 0, 500, scene_height, 10); // relative height of the wall (farther = smaller)
  ctx.beginPath();
  ctx.fillStyle = 'rgb(' + s + ',' + s + ',' + s + ')';
  ctx.fillRect(i * w, 200 - h * 0.5, w + 1, h);
  ctx.closePath();
}
Now I am trying to build a web-based FPS (first-person shooter) and am stuck on rendering wall textures on the canvas.
The ctx.drawImage() method takes arguments as follows:
void ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
ctx.drawImage_arguments
but the ctx.drawImage() method draws the image as a flat rectangle, with none of the 3D effect of Wolfenstein 3D.
I have no idea how to do it.
Should I use ctx.transform()? If yes, how? If no, what should I do?
I am looking for the maths used to produce a pseudo-3D effect using 2D raycasting.
Some pseudo-3D games are Wolfenstein 3D and Doom.
I am trying to build something like this
THANK YOU : )
The way that you're mapping (or not, as the case may be) texture coordinates isn't working as intended.
I am looking for the Maths used to produce pseudo 3d effect using 2D raycasting
The Wikipedia Entry for Texture Mapping has a nice section with the specific maths of how Doom performs texture mapping. From the entry:
The Doom engine restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct.
A "fast affine mapping" is just a simple 2D interpolation of texture coordinates, and would be an appropriate operation for what you're attempting. A limitation of the Doom engine was also that
Doom renders vertical and horizontal spans with affine texture mapping, and is therefore unable to draw ramped floors or slanted walls.
It doesn't appear that your logic contains any code for transforming coordinates between the various coordinate spaces. At the very least, you'll need to apply transforms between a given raytraced coordinate and texture coordinate space. This typically involves matrix math and is very common; it can also be referred to as projection, as in projecting points from one space/surface to another. With affine transformations you can avoid matrices in favor of linear interpolation.
The coordinate equation for this adapted to your variables (see above) might look like the following:
u = (1 - a) * wallStart + a * wallEnd, where 0 ≤ a ≤ 1
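In code, that affine mapping is a one-line interpolation (a sketch; wallStart and wallEnd stand for the texture coordinates at the two ends of the wall segment):
// affine (linear) interpolation of a texture coordinate along a wall,
// where a is the fractional position of the ray hit in [0, 1]
function affineTexCoord(a, wallStart, wallEnd) {
  return (1 - a) * wallStart + a * wallEnd;
}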
Alternatively, you could use a Weak Perspective projection, since you have much of the data already computed. From Wikipedia again:
To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:
B_x = A_x * B_z / A_z
where B_x is the screen x coordinate, A_x is the model x coordinate, B_z is the focal length (the axial distance from the camera center to the image plane), and A_z is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above diagram and equation.
In your case, A_x is the location of the wall in world space, B_z is the focal length (which will be 1), and A_z is the distance you calculated using the ray trace. The result is the x or y coordinate representing a translation to view space.
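A rough sketch of that projection in JS (names are illustrative; the focal length is 1 as above):
// Weak-perspective projection: the on-screen size of a wall of constant
// world height shrinks in proportion to its ray-traced distance.
function projectedWallHeight(worldHeight, distance, focalLength) {
  return worldHeight * focalLength / distance; // Bx = Ax * Bz / Az
}

// e.g. a 64-unit wall at distance 128 with focal length 1 spans 0.5
// screen-space units (scale to pixels as needed)
console.log(projectedWallHeight(64, 128, 1)); // 0.5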
The main draw routine for W3D documents the techniques used to raytrace and transform coordinates for rendering the game. The code is quite readable even if you're not familiar with C/ASM, and it's a great way to learn more about your topics of interest. For further reading, I would suggest searching your engine of choice for things like "matrix transformation of coordinates for texture mapping", or searching the GameDev SE site for similar.
A specific area of that file to zero-in on would be this section starting ln 267:
> ========================
> =
> = TransformTile
> =
> = Takes paramaters:
> = tx,ty : tile the object is centered in
> =
> = globals:
> = viewx,viewy : point of view
> = viewcos,viewsin : sin/cos of viewangle
> = scale : conversion from global value to screen value
> =
> = sets:
> = screenx,transx,transy,screenheight: projected edge location and size
> =
> = Returns true if the tile is withing getting distance
> =
A great book on "teh Maths" is this one - I would highly recommend it for anyone seeking to create or improve upon these skills.
Update:
Essentially, you'll be mapping pixels (points) from the image onto points on your rectangular wall-tile, as reported by the ray trace.
Pseudo(ish)-code:
var image = getImage(someImage); // get the image however you want; make sure it finishes loading before drawing
var iWidth = image.width, iHeight = image.height;
var sX = 0, sY = 0; // top-left corner of the image. Adjust when using e.g. sprite sheets
for (var i = 0; i < scene.length; ++i) {
  var c = scene[i] == Infinity ? 500 : scene[i];
  var s = map(c, 0, 500, 255, 0);           // how dark/bright the wall should be
  var h = map(c, 0, 500, scene_height, 10); // relative height of the wall (farther = smaller)
  var wX = i * w, wY = 200 - h * 0.5;
  var wWidth = w + 1, wHeight = h;
  // ... render the rectangle shape
  /* we are using the same image, but we are scaling it to the size of the
     rectangle and placing it at the same location as the wall. */
  var u, v, uW, vH; // texture x- and y- values and sizes. Compute these.
  ctx.drawImage(image, sX, sY, iWidth, iHeight, u, v, uW, vH);
}
Since I'm not familiar with the code performing your raytrace, its coordinate system, etc., you may need to further adjust the values for wX, wY, wWidth, and wHeight (e.g., translate points from center to top-left corner).
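One common way (an assumption on my part, not something from the question's code) to fill in the u/v placeholders for a Wolfenstein-style renderer is to draw a single one-texel-wide column of the texture per screen slice, chosen by where the ray hit the wall. A sketch reusing the points array and loop variables above, with tileSize as an assumed constant for the world size of one wall tile:
// inside the loop: sample one texture column per screen slice
var hit = points[i];
// pick the coordinate that varies along this wall (x for horizontal
// walls, y for vertical ones); x is used here purely for illustration
var frac = (hit.x % tileSize) / tileSize; // fractional hit position, [0, 1)
var texX = Math.floor(frac * iWidth);
// draw a 1-texel-wide strip, stretched to the wall slice's rectangle
ctx.drawImage(image, texX, 0, 1, iHeight, wX, wY, wWidth, wHeight);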

Calculating a quaternion from two combined angles

I'm creating a script that rotates a THREE.js camera around based on a mobile phone's gyroscope input. It's currently working pretty well, except that every time I rotate my phone past a quadrant boundary, the camera turns 180 degrees instead of continuing as intended. This is the code I currently use:
private onDeviceOrientation = (event) => {
  if (event.alpha !== null && event.beta !== null && event.gamma !== null) {
    let rotation = [
      event.beta,
      event.alpha,
      event.gamma
    ];
    this.orientation = new THREE.Vector3(rotation[0], rotation[1], rotation[2]);
    this.viewer.navigation.setTarget(this.calcPosition());
  }
};
private calcPosition = () => {
  const camPosition = this.viewer.navigation.getPosition(),
        radians = Math.PI / 180,
        aAngle = radians * -this.orientation.y,
        bAngle = radians * +this.orientation.z,
        distance = this.calcDistance();
  let medianX = Math.cos(bAngle) * Math.sin(aAngle);
  let medianY = Math.cos(bAngle) * Math.cos(aAngle);
  let nX = camPosition.x + (medianX * distance),
      nY = camPosition.y + (medianY * distance),
      nZ = camPosition.z + Math.sin(bAngle) * distance;
  return new THREE.Vector3(nX, nY, nZ);
};
window.addEventListener('deviceorientation', this.onDeviceOrientation, false);
So after doing some research I found that I need to use a quaternion to prevent the flipping when entering a new quadrant. I have no experience with quaternions, so I was wondering what the best way would be to combine the two Vector3s in the code above into a single quaternion.
[Edit]
I calculate the distance using this method:
private calcDistance = (): number => {
  const camPosition = this.viewer.navigation.getPosition();
  const curTarget = this.viewer.navigation.getTarget();
  let nX = camPosition.x - curTarget.x,
      nY = camPosition.y - curTarget.y,
      nZ = camPosition.z - curTarget.z;
  return Math.sqrt((nX * nX) + (nY * nY) + (nZ * nZ));
};
And I follow the MDN conventions when working with the gyroscope.
[Edit #2]
Turns out I had my angles all wrong; I managed to fix it by calculating the final position like this:
let nX = camPosition.x - (Math.cos(zAngle) * Math.sin(yAngle)) * distance,
    nY = camPosition.y + (Math.cos(zAngle) * Math.cos(yAngle)) * distance,
    nZ = camPosition.z - (Math.cos(xAngle) * Math.sin(zAngle)) * distance;
Here is the closest I can give you to an answer:
First of all, you don't need a quaternion. (If you really find yourself needing to convert between Euler angles and quaternions, it is possible as long as you have all the axis conventions down pat.) The Euler angle orientation information you obtain from the device is sufficient to represent any rotation without ambiguity; if you were calculating angular velocities, I'd agree that you want to avoid Euler angles since there are some orientations in which the rates of change of the Euler angles go to infinity. But you're not, so you don't need it.
I'm going to try to summarize the underlying problem you're trying to solve, and then tell you why it might not be solvable. 🙁
You are given the full orientation of the device with a camera, as yaw, pitch, and roll. Assuming yaw is like panning the camera horizontally, and pitch is like tilting the camera vertically, then roll is a degree of freedom that doesn't affect the direction the camera is pointing, but does affect the orientation of the images the camera sees. So you are given three coordinates, where two have to do with the direction the camera is pointing, and one does not.
You are trying to pass this information to the camera controller, but you are only allowed to specify the target location, which is the point in space at which the camera is looking. This is specified via three Cartesian coordinates, which you can calculate from the direction the camera is pointing (two degrees of freedom) and the distance to the target object (one degree of freedom).
So you have three inputs and three outputs, but only two of those have anything to do with each other. The target location has no way to represent the roll direction of the camera, and the orientation of the camera has no way to represent the distance to some target object.
Since you don't have a real target object, you can just pick an arbitrary fixed distance (1, for example) and use it. You certainly don't have anything from which to calculate it... if I follow your code, you are defining distance in terms of the target location, which is itself defined in terms of the distance from the previous step. This is extra work for no benefit at best (the distance drifts around some initial value), and numerically unstable at worst (the distance drifts toward zero and you lose precision or get infinities). Just use a fixed value for distance and make it simple.
So now you probably have a system that points a camera in a direction, but you cannot tell it what the roll angle is. That means your camera controller is apparently just going to choose it for you based on the yaw and pitch angles. Let's say it always picks zero degrees (that would be the least crazy thing it could do). This will cause discontinuities when the roll angle and yaw angle line up (when the pitch is at ±90°): Imagine pointing a physical camera at the northern horizon and yawing around westward, past the western horizon, and settling on the southern horizon. The whole time, the roll angle of the camera is 0°, so there's no problem. But now imagine pointing it at the northern horizon, and pitching upward, past the zenith, and continuing to pitch backward until you are facing the southern horizon. Now the camera is upside down; the roll angle is 180°. But if the camera controller doesn't change the roll angle from 0°, then it will do a nonphysical "flip" right when you pass the zenith. The problem is that there really is no way to synthesize a roll angle based purely on position and not have this happen. We've just demonstrated that there are two ways to move your camera from pointing north to pointing south, where the roll angle is completely different at the end.
So you're stuck, I guess. Well, maybe not. Can you rotate the image from the camera based on the roll angle of the device orientation? That is, add the roll back into the displayed image? If so, you may have a solution. Let's say the roll angle of the camera controller is always at zero. Then you just rotate the image by the desired roll angle (something derived from beta I guess?) and you're done. If the camera controller has some other convention for choosing the roll angle, you will need to figure that out, undo it, and add the roll angle back on.
Without the actual system in front of me I probably can't help you debug your way to a solution. So I think this is where my journey with this question must end. Good luck!
Summary:
You don't need a quaternion
Pick a fixed distance to your simulated target
Add the roll angle by rotating the image before displaying it
Good luck!
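As a minimal sketch of the last point (the element name and the choice of angle are assumptions; which Euler angle carries the roll depends on your conventions), keep the controller's roll at zero and counter-rotate the rendered output:
// apply the device roll to the displayed canvas instead of the camera
function applyRoll(rollDegrees) {
  const canvas = document.querySelector('canvas'); // the viewer's render target
  canvas.style.transform = 'rotate(' + (-rollDegrees) + 'deg)';
}

window.addEventListener('deviceorientation', (e) => applyRoll(e.gamma));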

JavaScript - Angular Velocity by Vector - 2D

I am making a game in JavaScript, with the Canvas API.
I perform circle-to-segment collision, and I need to calculate the angular velocity for the circle from its velocity vector. I use this formula:
ball.av = ball.v.length() / ball.r
// ball.av = angular velocity
// ball.v = velocity vector, contains x and y values
// .length() = returns the velocity magnitude (Pythagoras), e.g.: return Math.sqrt(ball.v.x * ball.v.x + ball.v.y * ball.v.y)
// ball.r = radius
Now, since a square root can't be negative, this won't work when the ball is supposed to rotate anticlockwise. So I need a signed version of the velocity magnitude that can also be negative. How do I calculate that?
I've heard that the wedge product works for this, and I've read many articles about it, but I still don't understand how to implement it in my code. Please help!
In the general case, if the ball is rolling on a surface then the angular velocity would be the cross product of the velocity with the surface normal over the radius.
ball.av = CrossProduct(surfaceNormal, ball.v) / radius;
But if you are always on a flat surface along the x direction then this simplifies to this:
ball.av = -ball.v.x / ball.r;
Here is a cross product implementation if you don't have one:
float CrossProduct(const Vector2D & v1, const Vector2D & v2) const
{
    return (v1.X * v2.Y) - (v1.Y * v2.X);
}
NOTE: if the ball rolls backwards, just add a '-' sign to your calculations or swap the parameters in the CrossProduct call, but I think they are right as written.
A surface normal is a perpendicular, normalized (unit) vector out of a surface. In your case the surface normal is the normalized vector from the contact point to the centre of the circle.
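Since the question is in JavaScript, the same cross (wedge) product might look like this (assuming {x, y} vector objects as in the question):
// 2D cross ("wedge") product: the signed z-component of the 3D cross
// product of two in-plane vectors; the sign encodes rotation direction
function crossProduct(v1, v2) {
  return v1.x * v2.y - v1.y * v2.x;
}

// signed angular velocity, replacing the unsigned length-based formula
ball.av = crossProduct(surfaceNormal, ball.v) / ball.r;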
As a side note, to remove the component of gravity going into the surface while a ball is rolling, do this:
vec gravity;
gravity = gravity - surfaceNormal * dot(surfaceNormal, gravity);
You can then apply the resulting gravity as the ball rolls down a surface.
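In JS, with the same assumed {x, y} vectors, that rejection might be written as:
// remove the component of gravity that points into the surface
function dot(a, b) { return a.x * b.x + a.y * b.y; }

function tangentialGravity(gravity, surfaceNormal) {
  var d = dot(surfaceNormal, gravity);
  return {
    x: gravity.x - surfaceNormal.x * d,
    y: gravity.y - surfaceNormal.y * d
  };
}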
