This codepen has a small rotation function that accepts theta and phi values. The formula for the rotation can be found all over the internet and is fairly simple:
function rotate(P, C, theta, phi) {
    // Rotates point P about center C, in place.
    // The combined matrix is Rz(theta) * Rx(phi): a tilt by phi about
    // the x-axis followed by a spin by theta about the z-axis.
    let ct = Math.cos(theta);
    let st = Math.sin(theta);
    let cp = Math.cos(phi);
    let sp = Math.sin(phi);
    // Translate so that C is the origin.
    let x = P.x - C.x;
    let y = P.y - C.y;
    let z = P.z - C.z;
    // Apply the rotation and translate back.
    P.x = ct * x - st * cp * y + st * sp * z + C.x;
    P.y = st * x + ct * cp * y - ct * sp * z + C.y;
    P.z = sp * y + cp * z + C.z;
}
This rotation works as a demonstration, but it's easy to lose track of which face of the cube was initially the bottom face.
OpenSCAD, on the other hand, has a very nice rotation where the z-axis of the object always appears as a vertical line:
The above function would behave the same if you only changed the theta value, but as soon as you change the phi value the cube starts to fall over.
How would I write such a rotation in JavaScript?
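One way to get the OpenSCAD-like behavior (a sketch, assuming the same P and C point objects as above): keep theta as the spin about the z-axis and phi as the tilt about the x-axis, but apply the spin first and the tilt second, i.e. P' = Rx(phi) * Rz(theta) * P, the reverse of the order baked into the formula above. The object's z-axis then always projects to a vertical line on screen:
function rotateTurntable(P, C, theta, phi) {
    let ct = Math.cos(theta), st = Math.sin(theta);
    let cp = Math.cos(phi), sp = Math.sin(phi);
    let x = P.x - C.x, y = P.y - C.y, z = P.z - C.z;
    // Spin about the z-axis by theta first...
    let x1 = ct * x - st * y;
    let y1 = st * x + ct * y;
    // ...then tilt about the x-axis by phi and translate back.
    P.x = x1 + C.x;
    P.y = cp * y1 - sp * z + C.y;
    P.z = sp * y1 + cp * z + C.z;
}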
I am trying to create a function that returns the coordinates of the 3 points of an arrow head (an isosceles triangle) that I want to draw at the end of a line.
The challenge is the orientation (angle) of the line, which can vary between 0 and 360 degrees.
I have the following values:
//start coordinates of the line
var x0 = 100;
var y0 = 100;
//end coordinates of the line
var x1 = 200;
var y1 = 200;
//height of the triangle
var h = 10;
//width of the base of the triangle
var w = 30;
This is my function so far; it returns the coordinates of the two base points of the triangle:
var drawHead = function(x0, y0, x1, y1, h, w){
    var L = Math.sqrt(Math.pow((x0 - x1), 2) + Math.pow((y0 - y1), 2));
    //first base point coordinates
    var base_x0 = x1 + (w / 2) * (y1 - y0) / L;
    var base_y0 = y1 + (w / 2) * (x0 - x1) / L;
    //second base point coordinates
    var base_x1 = x1 - (w / 2) * (y1 - y0) / L;
    var base_y1 = y1 - (w / 2) * (x0 - x1) / L;
    //now I have to find the last point coordinates ie the top of the arrow head
}
How can I determine the coordinates of the top of the triangle considering the angle of the line?
The head of the arrow lies along the same line as the body of the arrow, so the slope of the segment between (x1, y1) and (head_x, head_y) is the same as the slope of the segment between (x0, y0) and (x1, y1). Let dx = head_x - x1, dy = head_y - y1, and slope = (y1 - y0) / (x1 - x0). Then dy / dx = slope. We also know that dx^2 + dy^2 = h^2, so we can solve for dx in terms of slope and h, and then dy = dx * slope. Once you have dx and dy, just add them to x1 and y1 to get the head point. Some pseudocode:
if x1 == x0:                 # avoid division by 0
    dx = 0
    if y1 < y0:
        dy = -h              # make sure arrow head points the right way
    else:
        dy = h
else:
    if x1 < x0:              # make sure arrow head points the right way
        h = -h
    slope = (y1 - y0) / (x1 - x0)
    dx = h / sqrt(1 + slope^2)
    dy = dx * slope
head_x = x1 + dx
head_y = y1 + dy
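For reference, here is a runnable JavaScript version of the pseudocode above (a sketch; the function name headPoint is mine):
function headPoint(x0, y0, x1, y1, h) {
    var dx, dy;
    if (x1 === x0) {              // vertical line: avoid division by zero
        dx = 0;
        dy = (y1 < y0) ? -h : h;  // make sure the head points the right way
    } else {
        if (x1 < x0) h = -h;      // make sure the head points the right way
        var slope = (y1 - y0) / (x1 - x0);
        dx = h / Math.sqrt(1 + slope * slope);
        dy = dx * slope;
    }
    return { x: x1 + dx, y: y1 + dy };
}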
I see it like this:
A = (x0, y0), B = (x1, y1) are the known line endpoints
dir = B - A; dir /= |dir|; is the unit vector in the direction of the line (|.| denotes vector length):
dir.x=B.x-A.x;
dir.y=B.y-A.y;
dir/=sqrt((dir.x*dir.x)+(dir.y*dir.y));
so you can use it and its 90-degree rotation as basis vectors. Let q be the 90-degree rotated vector; in 2D it is easy to obtain:
q.x=+dir.y
q.y=-dir.x
so now you can compute the points you want:
C=B-(h*dir)+(w*q/2.0);
D=B-(h*dir)-(w*q/2.0);
It is just a translation by h and w/2 along the basis vectors.
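Putting this together in JavaScript, a sketch that completes the drawHead function from the question (the returned property names are mine; the line is assumed to have non-zero length):
var drawHead = function (x0, y0, x1, y1, h, w) {
    var L = Math.hypot(x1 - x0, y1 - y0);
    // Unit direction vector of the line, and its 90-degree rotation q.
    var dirX = (x1 - x0) / L, dirY = (y1 - y0) / L;
    var qX = dirY, qY = -dirX;
    return {
        tip:   { x: x1, y: y1 },  // B is the tip of the head
        // C and D: the base corners, h behind the tip, w/2 to each side
        base1: { x: x1 - h * dirX + (w / 2) * qX, y: y1 - h * dirY + (w / 2) * qY },
        base2: { x: x1 - h * dirX - (w / 2) * qX, y: y1 - h * dirY - (w / 2) * qY }
    };
};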
So I have an object rotating around an origin point. Once I rotate and then change the origin point, my object seems to jump positions. After the jump it rotates fine. I need help finding the pattern: why it's jumping, and what I need to do to stop it.
Here's the rotation code:
adjustMapTransform = function (_x, _y) {
    var x = _x + (map.width / 2);
    var y = _y + (map.height / 2);
    //apply scale here
    var originPoint = {
        x: originXInt,
        y: originYInt
    };
    var mapOrigin = {
        x: map.x + (map.width / 2),
        y: map.y + (map.height / 2)
    };
    //at scale 1
    var difference = {
        x: mapOrigin.x - originPoint.x,
        y: mapOrigin.y - originPoint.y
    };
    x += (difference.x * scale) - difference.x;
    y += (difference.y * scale) - difference.y;
    var viewportMapCentre = {
        x: originXInt,
        y: originYInt
    }
    var rotatedPoint = {};
    var angle = (rotation) * Math.PI / 180.0;
    var s = Math.sin(angle);
    var c = Math.cos(angle);
    // translate point back to origin:
    x -= viewportMapCentre.x;
    y -= viewportMapCentre.y;
    // rotate point
    var xnew = x * c - y * s;
    var ynew = x * s + y * c;
    // translate point back:
    x = xnew + viewportMapCentre.x - (map.width / 2);
    y = ynew + viewportMapCentre.y - (map.height / 2);
    var coords = {
        x: x,
        y: y
    };
    return coords;
}
Also, here is a JSFiddle project that you can play around with to get a better idea of what's happening.
EDITED LINK - Got rid of the originY bug and scaling bug
https://jsfiddle.net/fionoble/6k8sfkdL/13/
Thanks!
The direction of rotation is a consequence of the signs you pick for the elements in your rotation matrix [this is the Rodrigues formula for rotation in two dimensions]. So to rotate in the opposite direction, simply flip the sign of the sine terms.
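For instance (a minimal sketch with hypothetical names; the clockwise flag just flips the sign of the sine terms):
function rotateAbout(x, y, cx, cy, angleDeg, clockwise) {
    var a = angleDeg * Math.PI / 180.0;
    var s = Math.sin(a) * (clockwise ? -1 : 1);
    var c = Math.cos(a);
    var dx = x - cx, dy = y - cy;
    return { x: dx * c - dy * s + cx, y: dx * s + dy * c + cy };
}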
Also you might try looking at different potential representations of your data.
If you use the symmetric representation of the line between your points you can avoid shifting and instead simply transform your coordinates.
Take your origin [with respect to your rotation], c_0, to be the constant offset in the symmetric form.
You have for a point p relative to c_0:
var A = (p.x - c_0.x);
var B = (p.y - c_0.y);
//This is the symmetric form.
(p.x - c_0.x)/A = (p.y - c_0.y)/B
which will be true under a change of coordinates and for any point on the line (which also takes care of scaling/dilation).
Then, after the change of coordinates for rotation, you have [noting that this rotation has the opposite sense, not the same as yours]:
//This is the symmetric form of the line incident on your rotated point
//and on the center of its rotation
((p.x - c_0.x) * c + (p.y - c_0.y) * s)/A = ((p.x - c_0.x) * s - (p.y - c_0.y) * c)/B
so, multiplying out we get
(pn.x - c_0.x) * B * c + (pn.y - c_0.y) * B * s = (pn.x - c_0.x) * A * s - (pn.y - c_0.y) * A * c
rearrangement gives
(pn.x - c_0.x) * (B * c - A * s) = - (pn.y - c_0.y) * (B * s + A * c)
pn.y = -(pn.x - c_0.x) * (B * c - A * s) / (B * s + A * c) + c_0.y;
for any scaling.
I think this is the best place for this question.
I am trying to get the heading and pitch of any clicked point on an embedded Google Street View.
The only pieces of information I know and can get are:
The field of view (degrees)
The center point's heading and pitch (in degrees) and x and y pixel position
The x and y pixel position of the mouse click
I've included here a screenshot with simplified measurements as an example:
I initially thought you could just divide the field of view by the pixel width to get degrees per pixel, but it's more complicated than that: I think it has to do with projecting onto the inside of a sphere, with the camera at the centre of the sphere?
Bonus if you can tell me how to do the reverse too...
Clarification:
The goal is not to move the view to the clicked point, but to give information about a clicked point. The degrees-per-pixel method doesn't work because the viewport is not linear.
The values I have here are just examples; the field of view can be bigger or smaller (in the range [0.something, 180]), and the center is not fixed: horizontally it could be any value in the range [0, 360], and vertically [-90, 90]. The point [0, 0] is simply the heading (horizontal degrees) and pitch (vertical degrees) of the photographer when the photo was taken, and doesn't really represent anything.
TL;DR: JavaScript code for a proof of concept included at the end of this answer.
The heading and pitch parameters h0 and p0 of the panorama image correspond to a direction. By using the focal length f of the camera to scale this direction vector, one gets the 3D coordinates (x0, y0, z0) of the viewport center at (u0, v0):
x0 = f * cos( p0 ) * sin( h0 )
y0 = f * cos( p0 ) * cos( h0 )
z0 = f * sin( p0 )
The goal is now to find the 3D coordinates of the point at some given pixel coordinates (u, v) in the image. First, map these pixel coordinates to pixel offsets (du, dv) (to the right and to the top) from the viewport center:
du = u - u0 = u - w / 2
dv = v0 - v = h / 2 - v
Then a local orthonormal 2D basis of the viewport in 3D has to be found. The unit vector (ux, uy, uz) supports the x-axis (to the right along the direction of increasing headings) and the vector (vx, vy, vz) supports the y-axis (to the top along the direction of increasing pitches) of the image. Once these two vectors are determined, the 3D coordinates of the point on the viewport matching the (du, dv) pixel offset in the viewport are simply:
x = x0 + du * ux + dv * vx
y = y0 + du * uy + dv * vy
z = z0 + du * uz + dv * vz
And the heading and pitch parameters h and p for this point are then:
R = sqrt( x * x + y * y + z * z )
h = atan2( x, y )
p = asin( z / R )
Finally, to get the two unit vectors (ux, uy, uz) and (vx, vy, vz), compute the derivatives of the spherical coordinates with respect to the heading and pitch parameters at (p0, h0); one gets:
vx = -sin( p0 ) * sin ( h0 )
vy = -sin( p0 ) * cos ( h0 )
vz = cos( p0 )
ux = sgn( cos ( p0 ) ) * cos( h0 )
uy = -sgn( cos ( p0 ) ) * sin( h0 )
uz = 0
where sgn( a ) is +1 if a >= 0 else -1.
Complements:
The focal length is derived from the horizontal field of view and the width of the image:
f = (w / 2) / Math.tan(fov / 2)
The reverse mapping from heading and pitch parameters to pixel coordinates can be done similarly:
Find the 3D coordinates (x, y, z) of the direction of the ray corresponding to the specified heading and pitch parameters,
Find the 3D coordinates (x0, y0, z0) of the direction of the ray corresponding to the viewport center (an associated image plane is located at (x0, y0, z0) with an (x0, y0, z0) normal),
Intersect the ray for the specified heading and pitch parameters with the image plane, this gives the 3D offset from the viewport center,
Project this 3D offset on the local basis, getting the 2D offsets du and dv
Map du and dv to absolute pixel coordinates.
In practice, this approach seems to work similarly well on both square and rectangular viewports.
Proof of concept code (call the onLoad() function on a web page containing a sized canvas element with a "panorama" id)
'use strict';
var viewer;
function onClick(e) {
    viewer.click(e);
}
function onLoad() {
    var element = document.getElementById("panorama");
    viewer = new PanoramaViewer(element);
    viewer.update();
}
function PanoramaViewer(element) {
    this.element = element;
    this.width = element.width;
    this.height = element.height;
    this.pitch = 0;
    this.heading = 0;
    element.addEventListener("click", onClick, false);
}
PanoramaViewer.FOV = 90;
PanoramaViewer.prototype.makeUrl = function() {
    var fov = PanoramaViewer.FOV;
    return "https://maps.googleapis.com/maps/api/streetview?location=40.457375,-80.009353&size=" + this.width + "x" + this.height + "&fov=" + fov + "&heading=" + this.heading + "&pitch=" + this.pitch;
}
PanoramaViewer.prototype.update = function() {
    var element = this.element;
    element.style.backgroundImage = "url(" + this.makeUrl() + ")";
    var width = this.width;
    var height = this.height;
    var context = element.getContext('2d');
    context.strokeStyle = '#FFFF00';
    context.beginPath();
    context.moveTo(0, height / 2);
    context.lineTo(width, height / 2);
    context.stroke();
    context.beginPath();
    context.moveTo(width / 2, 0);
    context.lineTo(width / 2, height);
    context.stroke();
}
function sgn(x) {
    return x >= 0 ? 1 : -1;
}
PanoramaViewer.prototype.unmap = function(heading, pitch) {
    var PI = Math.PI;
    var cos = Math.cos;
    var sin = Math.sin;
    var tan = Math.tan;
    var fov = PanoramaViewer.FOV * PI / 180.0;
    var width = this.width;
    var height = this.height;
    var f = 0.5 * width / tan(0.5 * fov);
    var h = heading * PI / 180.0;
    var p = pitch * PI / 180.0;
    var x = f * cos(p) * sin(h);
    var y = f * cos(p) * cos(h);
    var z = f * sin(p);
    var h0 = this.heading * PI / 180.0;
    var p0 = this.pitch * PI / 180.0;
    var x0 = f * cos(p0) * sin(h0);
    var y0 = f * cos(p0) * cos(h0);
    var z0 = f * sin(p0);
    //
    // Intersect the ray O, v = (x, y, z)
    // with the plane at M0 of normal n = (x0, y0, z0)
    //
    // n . (O + t v - M0) = 0
    // t n . v = n . M0 = f^2
    //
    var t = f * f / (x0 * x + y0 * y + z0 * z);
    var ux = sgn(cos(p0)) * cos(h0);
    var uy = -sgn(cos(p0)) * sin(h0);
    var uz = 0;
    var vx = -sin(p0) * sin(h0);
    var vy = -sin(p0) * cos(h0);
    var vz = cos(p0);
    var x1 = t * x;
    var y1 = t * y;
    var z1 = t * z;
    var dx10 = x1 - x0;
    var dy10 = y1 - y0;
    var dz10 = z1 - z0;
    // Project on the local basis (u, v) at M0
    var du = ux * dx10 + uy * dy10 + uz * dz10;
    var dv = vx * dx10 + vy * dy10 + vz * dz10;
    return {
        u: du + width / 2,
        v: height / 2 - dv,
    };
}
PanoramaViewer.prototype.map = function(u, v) {
    var PI = Math.PI;
    var cos = Math.cos;
    var sin = Math.sin;
    var tan = Math.tan;
    var sqrt = Math.sqrt;
    var atan2 = Math.atan2;
    var asin = Math.asin;
    var fov = PanoramaViewer.FOV * PI / 180.0;
    var width = this.width;
    var height = this.height;
    var h0 = this.heading * PI / 180.0;
    var p0 = this.pitch * PI / 180.0;
    var f = 0.5 * width / tan(0.5 * fov);
    var x0 = f * cos(p0) * sin(h0);
    var y0 = f * cos(p0) * cos(h0);
    var z0 = f * sin(p0);
    var du = u - width / 2;
    var dv = height / 2 - v;
    var ux = sgn(cos(p0)) * cos(h0);
    var uy = -sgn(cos(p0)) * sin(h0);
    var uz = 0;
    var vx = -sin(p0) * sin(h0);
    var vy = -sin(p0) * cos(h0);
    var vz = cos(p0);
    var x = x0 + du * ux + dv * vx;
    var y = y0 + du * uy + dv * vy;
    var z = z0 + du * uz + dv * vz;
    var R = sqrt(x * x + y * y + z * z);
    var h = atan2(x, y);
    var p = asin(z / R);
    return {
        heading: h * 180.0 / PI,
        pitch: p * 180.0 / PI
    };
}
PanoramaViewer.prototype.click = function(e) {
    var rect = e.target.getBoundingClientRect();
    var u = e.clientX - rect.left;
    var v = e.clientY - rect.top;
    var uvCoords = this.unmap(this.heading, this.pitch);
    console.log("current viewport center");
    console.log(" heading: " + this.heading);
    console.log(" pitch: " + this.pitch);
    console.log(" u: " + uvCoords.u);
    console.log(" v: " + uvCoords.v);
    var hpCoords = this.map(u, v);
    uvCoords = this.unmap(hpCoords.heading, hpCoords.pitch);
    console.log("click at (" + u + "," + v + ")");
    console.log(" heading: " + hpCoords.heading);
    console.log(" pitch: " + hpCoords.pitch);
    console.log(" u: " + uvCoords.u);
    console.log(" v: " + uvCoords.v);
    this.heading = hpCoords.heading;
    this.pitch = hpCoords.pitch;
    this.update();
}
This answer is imprecise; have a look at the more recent answer by user3146587.
I'm not very good at mathematical explanations, so I've coded an example and tried to explain the steps in the code. As soon as you click on a point in the image, that point becomes the new center of the image. Even though you explicitly did not ask for this, it is perfect for illustrating the effect. The new image is drawn with the previously calculated angle.
Example: JSFiddle
The important part is that I use the radian to calculate the radius of the "sphere of view". The radian in this case is the width of the image (in your example, 100):
radius = radian / FOV
With the radian, the radius, and the relative mouse position, I can calculate the angle change from the center to the mouse position:
Center (50, 50)
MousePosition (75, 25)
RelativeMousePosition (25, -25)
When the relative mouse position is 25, the radian used for the calculation of the horizontal angle is 50:
radius = 50 / FOV // we've calculated the radius before, it stays the same
See this image for the further process:
I can calculate the new heading and pitch by adding/subtracting the calculated angle to/from the current angle (depending on whether the click is left/right of or above/below the center). See the linked JSFiddle for the correct behavior of this.
Doing the reverse is simple: just do the listed steps in the opposite direction (the radius stays the same).
As I've already mentioned, I'm not very good at mathematical explanations, but don't hesitate to ask questions in the comments.
Here is an attempt to give a mathematical derivation of the answer to your question.
Note: Unfortunately, this derivation only works in 1D and the conversion from a pair of angular deviations to heading and pitch is wrong.
Notations:
f: focal length of the camera
h: height in pixels of the viewport
w: width in pixels of the viewport
dy: vertical deviation in pixels from the center of the viewport
dx: horizontal deviation in pixels from the center of the viewport
fov_y: vertical field of view
fov_x: horizontal field of view
dtheta_y: relative vertical angle from the center of the viewport
dtheta_x: relative horizontal angle from the center of the viewport
Given dy, the vertical offset of the pixel from the center of the viewport (this pixel corresponds to the green ray on the figure), we are trying to find dtheta_y (the red angle), the relative vertical angle from the center of the viewport (the pitch of the center of the viewport is known to be theta_y0).
From the figure, we have:
tan( fov_y / 2 ) = ( h / 2 ) / f
tan( dtheta_y ) = dy / f
so:
tan( dtheta_y ) = dy / ( ( h / 2 ) / tan( fov_y / 2 ) )
                = 2 * dy * tan( fov_y / 2 ) / h
and finally:
dtheta_y = atan( 2 * dy * tan( fov_y / 2 ) / h )
This is the relative pitch angle for the pixel at dy from the center of the viewport, simply add to it the pitch angle at the center of the viewport to get the absolute pitch angle (i.e. theta_y = theta_y0 + dtheta_y).
Similarly:
dtheta_x = atan( 2 * dx * tan( fov_x / 2 ) / w )
This is the relative heading angle for the pixel at dx from the center of the viewport.
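As a minimal sketch of these two relations (hypothetical function name; angles in radians, offsets and sizes in pixels; per the note above, combining the two relative angles into a heading/pitch pair is not this simple):
function relativeAngles(dx, dy, w, h, fovX, fovY) {
    return {
        dthetaX: Math.atan(2 * dx * Math.tan(fovX / 2) / w),  // relative heading
        dthetaY: Math.atan(2 * dy * Math.tan(fovY / 2) / h)   // relative pitch
    };
}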
Complements:
Both relations can be inverted to get the mapping from relative heading / pitch angle to relative pixel coordinates, for instance:
dy = h * tan( dtheta_y ) / ( 2 * tan( fov_y / 2 ) )
The vertical and horizontal fields of view fov_y and fov_x are linked by the relation:
w / h = tan( fov_x / 2 ) / tan( fov_y / 2 )
so:
fov_x = 2 * atan( w * tan( fov_y / 2 ) / h )
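In code, this relation could look like (a sketch, name assumed; angles in radians):
function fovXFromFovY(fovY, w, h) {
    return 2 * Math.atan(w * Math.tan(fovY / 2) / h);
}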
The vertical and horizontal deviations from the viewport center dy and dx can be mapped to absolute pixel coordinates:
x = w / 2 + dx
y = h / 2 - dy
Proof of concept fiddle
Martin Matysiak wrote a JS library that implements the inverse of this (placing a marker at a specific heading/pitch). I mention this because the various JSFiddle links in other answers are 404ing, the original requester added a comment asking for this, and this SO page comes up near the top for related searches.
The blog post discussing it is at https://martinmatysiak.de/blog/view/panomarker.
The library itself is at https://github.com/marmat/google-maps-api-addons.
There's documentation and demos at http://marmat.github.io/google-maps-api-addons/ (look at http://marmat.github.io/google-maps-api-addons/panomarker/examples/basic.html and http://marmat.github.io/google-maps-api-addons/panomarker/examples/fancy.html for the PanoMarker examples).
Like the masochist I am, I'm trying to learn all the matrix math behind creating modelview and perspective matrices so that I can write my own functions for generating them without the use of JS libraries.
I understand the concept of the matrices, but not how to actually generate them. I've been looking very closely at the glMatrix library, and I have the following questions:
1) What is going on in the following mat4.perspective method?
/**
 * Generates a perspective projection matrix with the given bounds
 *
 * @param {mat4} out mat4 frustum matrix will be written into
 * @param {number} fovy Vertical field of view in radians
 * @param {number} aspect Aspect ratio. typically viewport width/height
 * @param {number} near Near bound of the frustum
 * @param {number} far Far bound of the frustum
 * @returns {mat4} out
 */
mat4.perspective = function (out, fovy, aspect, near, far) {
    var f = 1.0 / Math.tan(fovy / 2),
        nf = 1 / (near - far);
    out[0] = f / aspect;
    out[1] = 0;
    out[2] = 0;
    out[3] = 0;
    out[4] = 0;
    out[5] = f;
    out[6] = 0;
    out[7] = 0;
    out[8] = 0;
    out[9] = 0;
    out[10] = (far + near) * nf;
    out[11] = -1;
    out[12] = 0;
    out[13] = 0;
    out[14] = (2 * far * near) * nf;
    out[15] = 0;
    return out;
};
Specifically, I get what Math.tan(fovy / 2) is calculating, but why take the inverse of it? Likewise, why take the inverse of the difference between the near boundary and the far boundary? Also, why is out[11] set to -1 and what is the value stored in out[14] for?
2) The following mat4.lookAt method in the library is also confusing me:
/**
 * Generates a look-at matrix with the given eye position, focal point,
 * and up axis
 *
 * @param {mat4} out mat4 frustum matrix will be written into
 * @param {vec3} eye Position of the viewer
 * @param {vec3} center Point the viewer is looking at
 * @param {vec3} up vec3 pointing up
 * @returns {mat4} out
 */
mat4.lookAt = function (out, eye, center, up) {
    var x0, x1, x2, y0, y1, y2, z0, z1, z2, len,
        eyex = eye[0],
        eyey = eye[1],
        eyez = eye[2],
        upx = up[0],
        upy = up[1],
        upz = up[2],
        centerx = center[0],
        centery = center[1],
        centerz = center[2];
    if (Math.abs(eyex - centerx) < GLMAT_EPSILON &&
        Math.abs(eyey - centery) < GLMAT_EPSILON &&
        Math.abs(eyez - centerz) < GLMAT_EPSILON) {
        return mat4.identity(out);
    }
    z0 = eyex - centerx;
    z1 = eyey - centery;
    z2 = eyez - centerz;
    len = 1 / Math.sqrt(z0 * z0 + z1 * z1 + z2 * z2);
    z0 *= len;
    z1 *= len;
    z2 *= len;
    x0 = upy * z2 - upz * z1;
    x1 = upz * z0 - upx * z2;
    x2 = upx * z1 - upy * z0;
    len = Math.sqrt(x0 * x0 + x1 * x1 + x2 * x2);
    if (!len) {
        x0 = 0;
        x1 = 0;
        x2 = 0;
    } else {
        len = 1 / len;
        x0 *= len;
        x1 *= len;
        x2 *= len;
    }
    y0 = z1 * x2 - z2 * x1;
    y1 = z2 * x0 - z0 * x2;
    y2 = z0 * x1 - z1 * x0;
    len = Math.sqrt(y0 * y0 + y1 * y1 + y2 * y2);
    if (!len) {
        y0 = 0;
        y1 = 0;
        y2 = 0;
    } else {
        len = 1 / len;
        y0 *= len;
        y1 *= len;
        y2 *= len;
    }
    out[0] = x0;
    out[1] = y0;
    out[2] = z0;
    out[3] = 0;
    out[4] = x1;
    out[5] = y1;
    out[6] = z1;
    out[7] = 0;
    out[8] = x2;
    out[9] = y2;
    out[10] = z2;
    out[11] = 0;
    out[12] = -(x0 * eyex + x1 * eyey + x2 * eyez);
    out[13] = -(y0 * eyex + y1 * eyey + y2 * eyez);
    out[14] = -(z0 * eyex + z1 * eyey + z2 * eyez);
    out[15] = 1;
    return out;
};
Similar to the mat4.perspective method, why is the inverse of the length of the vector being calculated? Also, why is that value then multiplied by the z0, z1, and z2 values? The same thing is being done for the x0-x2 and y0-y2 variables. Why? Lastly, what is the meaning of the values set for out[12]-out[14]?
3) Lastly, I have a few questions about the mat4.translate method. Specifically, I bought the book Professional WebGL Programming: Developing 3D Graphics for the Web, and it says that the following 4x4 matrix is used to translate a vertex:
1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
However, when I look at the following mat4.translate method in the glMatrix library, I see that out[12]-out[15] are set via some complex equations. Why are these values set at all?
/**
 * Translate a mat4 by the given vector
 *
 * @param {mat4} out the receiving matrix
 * @param {mat4} a the matrix to translate
 * @param {vec3} v vector to translate by
 * @returns {mat4} out
 */
mat4.translate = function (out, a, v) {
    var x = v[0], y = v[1], z = v[2],
        a00, a01, a02, a03,
        a10, a11, a12, a13,
        a20, a21, a22, a23;
    if (a === out) {
        out[12] = a[0] * x + a[4] * y + a[8] * z + a[12];
        out[13] = a[1] * x + a[5] * y + a[9] * z + a[13];
        out[14] = a[2] * x + a[6] * y + a[10] * z + a[14];
        out[15] = a[3] * x + a[7] * y + a[11] * z + a[15];
    } else {
        a00 = a[0]; a01 = a[1]; a02 = a[2]; a03 = a[3];
        a10 = a[4]; a11 = a[5]; a12 = a[6]; a13 = a[7];
        a20 = a[8]; a21 = a[9]; a22 = a[10]; a23 = a[11];
        out[0] = a00; out[1] = a01; out[2] = a02; out[3] = a03;
        out[4] = a10; out[5] = a11; out[6] = a12; out[7] = a13;
        out[8] = a20; out[9] = a21; out[10] = a22; out[11] = a23;
        out[12] = a00 * x + a10 * y + a20 * z + a[12];
        out[13] = a01 * x + a11 * y + a21 * z + a[13];
        out[14] = a02 * x + a12 * y + a22 * z + a[14];
        out[15] = a03 * x + a13 * y + a23 * z + a[15];
    }
    return out;
};
Thank you all for your time, and sorry for all the questions. I come from a JS background, not an OpenGL/3D programming background, so it's hard for me to understand the math behind all the matrices.
If there are any great resources out there that explain the math used for these equations/methods, then that would be great too. Thanks.
Specifically, I get what Math.tan(fovy / 2) is calculating, but why
take the inverse of it?
Because the focal distance d comes from the formula
Math.tan(fovy / 2) = y / d
to get the focal length you need to multiply by
1 / Math.tan(fovy / 2)
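In other words (a minimal sketch, taking the normalized half-height y = 1 of the image plane):
// tan(fovy / 2) = y / d  with y = 1  =>  d = 1 / tan(fovy / 2)
var fovy = Math.PI / 3;           // hypothetical 60-degree vertical FOV
var f = 1.0 / Math.tan(fovy / 2); // the `f` in mat4.perspective above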
why take the inverse of the difference between the near boundary and
the far boundary? Also, why is out[11] set to -1 and what is the value
stored in out[14] for?
You can project (x, y, z) to (x*d/z, y*d/z) using the focal distance d. This is enough, but OpenGL requires a linear transformation of (x, y, z) such that the projection gives coordinates in [-1, 1]. Such normalized coordinates simplify clipping and retain the z information used to remove hidden surfaces.
out[11] is set to -1 because there is no linear transformation that gives normalized coordinates unless a reflection is applied. This -1 switches the handedness of the system in normalized coordinates.
out[14] is used together with out[10] to map z from [-near, -far] to [-1, 1] after projection.
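A quick numeric check of this mapping (a sketch with hypothetical near/far values; clip w is -z because out[11] = -1):
var n = 1, f = 100;
function ndcZ(z) {
    var clipZ = ((f + n) / (n - f)) * z + (2 * f * n) / (n - f); // out[10] * z + out[14]
    return clipZ / (-z);                                         // divide by clip w
}
console.log(ndcZ(-n)); // -1 (near plane)
console.log(ndcZ(-f)); //  1 (far plane)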
Similar to the mat4.perspecive method, why is the inverse of the
length of the vector being calculated? Also, why is that value then
multiplied by the z0, z1 and z2 values? The same thing is being done
for the x0-x2 variables and the y0-y2 variables. Why?
To normalize the vectors x, y and z
what is the meaning of the values set for out[12]-out[14]?
A camera is composed of a base of vectors and a position.
out[12]-out[14] apply an inverse translation to set the camera position.
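As a quick check (a sketch; transformPoint is a hypothetical helper, assuming column-major storage as in glMatrix), the lookAt matrix maps the eye position to the origin of camera space:
function transformPoint(m, p) {
    return [
        m[0] * p[0] + m[4] * p[1] + m[8]  * p[2] + m[12],
        m[1] * p[0] + m[5] * p[1] + m[9]  * p[2] + m[13],
        m[2] * p[0] + m[6] * p[1] + m[10] * p[2] + m[14]
    ];
}
// transformPoint(lookAtMatrix, eye) -> [0, 0, 0]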
However, when I look at the following mat4.translate method in the
glMatrix library, I see that out[12]-out[15] are set via some complex
equations. Why are these values set at all?
The equations look complex because the result is the product of a translation matrix and the existing matrix a.
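To see this, here is a sketch (naiveTranslate is a hypothetical name) that builds the translation matrix T(v) from the question in column-major order, as glMatrix stores matrices, and multiplies a by it; only the last column of the result differs from a, which is exactly what the out[12]-out[15] lines compute:
function naiveTranslate(a, v) {
    var t = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  v[0], v[1], v[2], 1]; // T(v), column-major
    var out = new Array(16);
    for (var col = 0; col < 4; col++) {
        for (var row = 0; row < 4; row++) {
            var sum = 0;
            for (var k = 0; k < 4; k++) {
                sum += a[k * 4 + row] * t[col * 4 + k]; // (a * t)[row, col]
            }
            out[col * 4 + row] = sum;
        }
    }
    return out; // matches mat4.translate(out, a, v) for out !== a
}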
Professional WebGL Programming: Developing 3D Graphics for the Web
I don't know this book; it might explain some of the math, but if you need detailed explanations you should consider Eric Lengyel's book, which explains and derives the important math used in 3D raster graphics.