Convert 2:1 equirectangular panorama to cube map - javascript

I'm currently working on a simple 3D panorama viewer for a website. For mobile performance reasons I'm using the Three.js CSS 3D renderer. This requires a cube map, split up into six single images.
I'm recording the images on the iPhone with Google Photo Sphere, or similar apps that create 2:1 equirectangular panoramas. I then resize and convert these to a cubemap with this website: http://gonchar.me/panorama/ (Flash)
Preferably, I'd like to do the conversion myself, either on the fly in Three.js, if that's possible, or in Photoshop. I found Andrew Hazelden's Photoshop actions, and they seem kind of close, but no direct conversion is available. Is there a mathematical way to convert these, or some sort of script that does it? I'd like to avoid going through a 3D application like Blender, if possible.
Maybe this is a long shot, but I thought I'd ask. I have okay experience with JavaScript, but I'm pretty new to Three.js. I'm also hesitant to rely on the WebGL functionality, since it seems either slow or buggy on mobile devices. Support is also still spotty.

If you want to do it server side, there are many options. ImageMagick has a number of command-line tools which can slice your image into pieces. You could put the command to do this into a script and run it each time you have a new image.
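For example, assuming you have already produced a horizontal-cross layout (as the scripts further down do), a single crop command can cut out one face; the face size and offsets here are hypothetical and depend on your image:

convert cross.png -crop 1024x1024+2048+1024 +repage right.png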
It's hard to tell exactly what algorithm is used in the program, but we can try to reverse engineer what is happening by feeding a square grid into it. I've used a grid from Wikipedia; the converted output gives us a clue as to how the box is constructed.
Imagine a sphere with lines of latitude and longitude on it, and a cube surrounding it. Projecting from the point at the center of the sphere produces a distorted grid on the cube.
Mathematically, take spherical coordinates r, θ, ø; for the unit sphere r = 1, with 0 < θ < π and -π/4 < ø < 7π/4:

x = r sin θ cos ø
y = r sin θ sin ø
z = r cos θ

and centrally project these to the cube. First we divide into four regions by the longitude: -π/4 < ø < π/4, π/4 < ø < 3π/4, 3π/4 < ø < 5π/4, 5π/4 < ø < 7π/4. A point in one of these regions will project either to one of the four sides, or to the top or the bottom.
Assume we are in the first region, -π/4 < ø < π/4. The central projection of (sin θ cos ø, sin θ sin ø, cos θ) is (a sin θ cos ø, a sin θ sin ø, a cos θ), which hits the x = 1 plane when

a sin θ cos ø = 1

so

a = 1 / (sin θ cos ø)

and the projected point is

(1, tan ø, cot θ / cos ø)

If |cot θ / cos ø| < 1, this will be on the front face. Otherwise, it will be projected onto the top or bottom, and you will need a different projection for that. A better test for the top uses the fact that the minimum value of cos ø will be cos π/4 = 1/√2, so the projected point is always on the top if cot θ / (1/√2) > 1, i.e. tan θ < 1/√2. This works out to θ < 0.615 radians (about 35°).
Put this together in Python:
import sys
from PIL import Image
from math import pi, sin, cos, tan

def cot(angle):
    return 1 / tan(angle)

# Project polar coordinates onto a surrounding cube.
# Assume theta is in [0, pi], with 0 at the north pole and pi at the south pole,
# and phi is in the range [0, 2pi].
def projection(theta, phi):
    if theta < 0.615:
        return projectTop(theta, phi)
    elif theta > 2.527:
        return projectBottom(theta, phi)
    elif phi <= pi/4 or phi > 7*pi/4:
        return projectLeft(theta, phi)
    elif phi > pi/4 and phi <= 3*pi/4:
        return projectFront(theta, phi)
    elif phi > 3*pi/4 and phi <= 5*pi/4:
        return projectRight(theta, phi)
    elif phi > 5*pi/4 and phi <= 7*pi/4:
        return projectBack(theta, phi)

def projectLeft(theta, phi):
    x = 1
    y = tan(phi)
    z = cot(theta) / cos(phi)
    if z < -1:
        return projectBottom(theta, phi)
    if z > 1:
        return projectTop(theta, phi)
    return ("Left", x, y, z)

def projectFront(theta, phi):
    x = tan(phi - pi/2)
    y = 1
    z = cot(theta) / cos(phi - pi/2)
    if z < -1:
        return projectBottom(theta, phi)
    if z > 1:
        return projectTop(theta, phi)
    return ("Front", x, y, z)

def projectRight(theta, phi):
    x = -1
    y = tan(phi)
    z = -cot(theta) / cos(phi)
    if z < -1:
        return projectBottom(theta, phi)
    if z > 1:
        return projectTop(theta, phi)
    return ("Right", x, -y, z)

def projectBack(theta, phi):
    x = tan(phi - 3*pi/2)
    y = -1
    z = cot(theta) / cos(phi - 3*pi/2)
    if z < -1:
        return projectBottom(theta, phi)
    if z > 1:
        return projectTop(theta, phi)
    return ("Back", -x, y, z)

def projectTop(theta, phi):
    # (a sin θ cos ø, a sin θ sin ø, a cos θ) = (x, y, 1)
    a = 1 / cos(theta)
    x = tan(theta) * cos(phi)
    y = tan(theta) * sin(phi)
    z = 1
    return ("Top", x, y, z)

def projectBottom(theta, phi):
    # (a sin θ cos ø, a sin θ sin ø, a cos θ) = (x, y, -1)
    a = -1 / cos(theta)
    x = -tan(theta) * cos(phi)
    y = -tan(theta) * sin(phi)
    z = -1
    return ("Bottom", x, y, z)

# Convert coords in the cube to image coords.
# coords is a tuple with the side and the x, y, z coords;
# edge is the length of an edge of the cube in pixels.
def cubeToImg(coords, edge):
    if coords[0] == "Left":
        (x, y) = (int(edge * (coords[2] + 1) / 2), int(edge * (3 - coords[3]) / 2))
    elif coords[0] == "Front":
        (x, y) = (int(edge * (coords[1] + 3) / 2), int(edge * (3 - coords[3]) / 2))
    elif coords[0] == "Right":
        (x, y) = (int(edge * (5 - coords[2]) / 2), int(edge * (3 - coords[3]) / 2))
    elif coords[0] == "Back":
        (x, y) = (int(edge * (7 - coords[1]) / 2), int(edge * (3 - coords[3]) / 2))
    elif coords[0] == "Top":
        (x, y) = (int(edge * (3 - coords[1]) / 2), int(edge * (1 + coords[2]) / 2))
    elif coords[0] == "Bottom":
        (x, y) = (int(edge * (3 - coords[1]) / 2), int(edge * (5 - coords[2]) / 2))
    return (x, y)

# Convert the input (equirectangular) image to the output (cube-map) image.
def convert(imgIn, imgOut):
    inSize = imgIn.size
    outSize = imgOut.size
    inPix = imgIn.load()
    outPix = imgOut.load()
    edge = inSize[0] // 4   # the length of each edge in pixels
    for i in range(inSize[0]):
        for j in range(inSize[1]):
            pixel = inPix[i, j]
            phi = i * 2 * pi / inSize[0]
            theta = j * pi / inSize[1]
            res = projection(theta, phi)
            (x, y) = cubeToImg(res, edge)
            #if i % 100 == 0 and j % 100 == 0:
            #    print(i, j, phi, theta, res, x, y)
            if x >= outSize[0]:
                #print("x out of range", x, res)
                x = outSize[0] - 1
            if y >= outSize[1]:
                #print("y out of range", y, res)
                y = outSize[1] - 1
            outPix[x, y] = pixel

imgIn = Image.open(sys.argv[1])
inSize = imgIn.size
imgOut = Image.new("RGB", (inSize[0], inSize[0] * 3 // 4), "black")
convert(imgIn, imgOut)
imgOut.show()
The projection function takes the theta and phi values and returns coordinates in a cube from -1 to 1 in each direction. cubeToImg takes the (x, y, z) coordinates and translates them to the output image coordinates.
The above algorithm seems to get the geometry right, judging by an image of Buckingham Palace: it gets most of the lines in the paving right.
We are still getting a few image artefacts, though. This is because the map of pixels is not one-to-one. What we need to do is use an inverse transformation: rather than looping through each pixel in the source and finding the corresponding pixel in the target, we loop through the target image and find the closest corresponding source pixel.
import sys
from PIL import Image
from math import pi, sin, cos, tan, atan2, hypot, floor
from numpy import clip

# Get x, y, z coords from out image pixel coords.
# i, j are pixel coords; face is the face number; edge is the edge length.
def outImgToXYZ(i, j, face, edge):
    a = 2.0 * float(i) / edge
    b = 2.0 * float(j) / edge
    if face == 0:    # back
        (x, y, z) = (-1.0, 1.0 - a, 3.0 - b)
    elif face == 1:  # left
        (x, y, z) = (a - 3.0, -1.0, 3.0 - b)
    elif face == 2:  # front
        (x, y, z) = (1.0, a - 5.0, 3.0 - b)
    elif face == 3:  # right
        (x, y, z) = (7.0 - a, 1.0, 3.0 - b)
    elif face == 4:  # top
        (x, y, z) = (b - 1.0, a - 5.0, 1.0)
    elif face == 5:  # bottom
        (x, y, z) = (5.0 - b, a - 5.0, -1.0)
    return (x, y, z)

# Convert using an inverse transformation.
def convertBack(imgIn, imgOut):
    inSize = imgIn.size
    outSize = imgOut.size
    inPix = imgIn.load()
    outPix = imgOut.load()
    edge = inSize[0] // 4   # the length of each edge in pixels
    for i in range(outSize[0]):
        face = i // edge    # 0 - back, 1 - left, 2 - front, 3 - right
        if face == 2:
            rng = range(0, edge * 3)
        else:
            rng = range(edge, edge * 2)
        for j in rng:
            if j < edge:
                face2 = 4   # top
            elif j >= 2 * edge:
                face2 = 5   # bottom
            else:
                face2 = face
            (x, y, z) = outImgToXYZ(i, j, face2, edge)
            theta = atan2(y, x)   # range -pi to pi
            r = hypot(x, y)
            phi = atan2(z, r)     # range -pi/2 to pi/2
            # source img coords
            uf = 2.0 * edge * (theta + pi) / pi
            vf = 2.0 * edge * (pi/2 - phi) / pi
            # Use bilinear interpolation between the four surrounding pixels
            ui = floor(uf)   # coord of pixel to bottom left
            vi = floor(vf)
            u2 = ui + 1      # coords of pixel to top right
            v2 = vi + 1
            mu = uf - ui     # fraction of way across pixel
            nu = vf - vi
            # Pixel values of four corners
            A = inPix[ui % inSize[0], clip(vi, 0, inSize[1] - 1)]
            B = inPix[u2 % inSize[0], clip(vi, 0, inSize[1] - 1)]
            C = inPix[ui % inSize[0], clip(v2, 0, inSize[1] - 1)]
            D = inPix[u2 % inSize[0], clip(v2, 0, inSize[1] - 1)]
            # interpolate
            (r, g, b) = (
                A[0]*(1-mu)*(1-nu) + B[0]*mu*(1-nu) + C[0]*(1-mu)*nu + D[0]*mu*nu,
                A[1]*(1-mu)*(1-nu) + B[1]*mu*(1-nu) + C[1]*(1-mu)*nu + D[1]*mu*nu,
                A[2]*(1-mu)*(1-nu) + B[2]*mu*(1-nu) + C[2]*(1-mu)*nu + D[2]*mu*nu)
            outPix[i, j] = (int(round(r)), int(round(g)), int(round(b)))

imgIn = Image.open(sys.argv[1])
inSize = imgIn.size
imgOut = Image.new("RGB", (inSize[0], inSize[0] * 3 // 4), "black")
convertBack(imgIn, imgOut)
imgOut.save(sys.argv[1].split('.')[0] + "Out2.png")
imgOut.show()
The results of this are:
If anyone wants to go in the reverse direction, see this JSFiddle page.

Given the excellent accepted answer, I wanted to add my corresponding C++ implementation, based on OpenCV.
For those not familiar with OpenCV, think of Mat as an image. We first construct two maps that remap from the equirectangular image to our corresponding cubemap face. Then, we do the heavy lifting (i.e., remapping with interpolation) using OpenCV.
The code could be made more compact, but at the expense of readability.
// Define our six cube faces.
// 0 - 3 are side faces, clockwise order
// 4 and 5 are top and bottom, respectively
float faceTransform[6][2] =
{
    {0, 0},
    {M_PI / 2, 0},
    {M_PI, 0},
    {-M_PI / 2, 0},
    {0, -M_PI / 2},
    {0, M_PI / 2}
};

// Map a part of the equirectangular panorama (in) to a cube face
// (face). The ID of the face is given by faceId. The desired
// width and height are given by width and height.
inline void createCubeMapFace(const Mat &in, Mat &face,
        int faceId = 0, const int width = -1,
        const int height = -1) {

    float inWidth = in.cols;
    float inHeight = in.rows;

    // Allocate map
    Mat mapx(height, width, CV_32F);
    Mat mapy(height, width, CV_32F);

    // Calculate adjacent (ak) and opposite (an) of the
    // triangle that is spanned from the sphere center
    // to our cube face.
    const float an = sin(M_PI / 4);
    const float ak = cos(M_PI / 4);

    const float ftu = faceTransform[faceId][0];
    const float ftv = faceTransform[faceId][1];

    // For each point in the target image,
    // calculate the corresponding source coordinates.
    for(int y = 0; y < height; y++) {
        for(int x = 0; x < width; x++) {

            // Map face pixel coordinates to [-1, 1] on plane
            float nx = (float)y / (float)height - 0.5f;
            float ny = (float)x / (float)width - 0.5f;

            nx *= 2;
            ny *= 2;

            // Map [-1, 1] plane coords to [-an, an];
            // that's the coordinates in respect to a unit sphere
            // that contains our box.
            nx *= an;
            ny *= an;

            float u, v;

            // Project from plane to sphere surface.
            if(ftv == 0) {
                // Center faces
                u = atan2(nx, ak);
                v = atan2(ny * cos(u), ak);
                u += ftu;
            } else if(ftv > 0) {
                // Bottom face
                float d = sqrt(nx * nx + ny * ny);
                v = M_PI / 2 - atan2(d, ak);
                u = atan2(ny, nx);
            } else {
                // Top face
                float d = sqrt(nx * nx + ny * ny);
                v = -M_PI / 2 + atan2(d, ak);
                u = atan2(-ny, nx);
            }

            // Map from angular coordinates to [-1, 1], respectively.
            u = u / (M_PI);
            v = v / (M_PI / 2);

            // Wrap around, if our coordinates are out of bounds.
            while (v < -1) {
                v += 2;
                u += 1;
            }
            while (v > 1) {
                v -= 2;
                u += 1;
            }
            while(u < -1) {
                u += 2;
            }
            while(u > 1) {
                u -= 2;
            }

            // Map from [-1, 1] to in texture space
            u = u / 2.0f + 0.5f;
            v = v / 2.0f + 0.5f;

            u = u * (inWidth - 1);
            v = v * (inHeight - 1);

            // Save the result for this pixel in map
            mapx.at<float>(x, y) = u;
            mapy.at<float>(x, y) = v;
        }
    }

    // Recreate output image if it has wrong size or type.
    if(face.cols != width || face.rows != height ||
        face.type() != in.type()) {
        face = Mat(width, height, in.type());
    }

    // Do the actual resampling using OpenCV's remap.
    remap(in, face, mapx, mapy,
         CV_INTER_LINEAR, BORDER_CONSTANT, Scalar(0, 0, 0));
}
Given the following input:
The following faces are generated:
Image courtesy of Optonaut.

UPDATE 2: It looks like someone else has already built a far superior web application to my own. Their conversion runs client-side, so there aren't any uploads and downloads to worry about.
I suppose if you hate JavaScript for some reason, or are trying to do this on your mobile, then my web application below is okay.
UPDATE: I've published a simple web application where you can upload a panorama and have it return the six skybox images in a ZIP file.
The source is a cleaned up reimplementation of what's below, and is available on GitHub.
The application is presently running on a single free-tier Heroku dyno, so please don't attempt to use it as an API. If you want automation, make your own deployment; a single-click 'Deploy to Heroku' option is available.
Original: Here's a (naively) modified version of Salix Alba's absolutely fantastic answer that converts one face at a time, spits out six different images and preserves the original image's file type.
Aside from the fact most use cases probably expect six separate images, the main advantage of converting one face at a time is that it makes working with huge images a lot less memory intensive.
#!/usr/bin/env python
import sys
from PIL import Image
from math import pi, sin, cos, tan, atan2, hypot, floor
from numpy import clip

# Get x, y, z coords from out image pixel coords.
# i, j are pixel coords; faceIdx is the face number; faceSize is the edge length.
def outImgToXYZ(i, j, faceIdx, faceSize):
    a = 2.0 * float(i) / faceSize
    b = 2.0 * float(j) / faceSize
    if faceIdx == 0:    # back
        (x, y, z) = (-1.0, 1.0 - a, 1.0 - b)
    elif faceIdx == 1:  # left
        (x, y, z) = (a - 1.0, -1.0, 1.0 - b)
    elif faceIdx == 2:  # front
        (x, y, z) = (1.0, a - 1.0, 1.0 - b)
    elif faceIdx == 3:  # right
        (x, y, z) = (1.0 - a, 1.0, 1.0 - b)
    elif faceIdx == 4:  # top
        (x, y, z) = (b - 1.0, a - 1.0, 1.0)
    elif faceIdx == 5:  # bottom
        (x, y, z) = (1.0 - b, a - 1.0, -1.0)
    return (x, y, z)

# Convert a single face using an inverse transformation.
def convertFace(imgIn, imgOut, faceIdx):
    inSize = imgIn.size
    outSize = imgOut.size
    inPix = imgIn.load()
    outPix = imgOut.load()
    faceSize = outSize[0]
    for xOut in range(faceSize):
        for yOut in range(faceSize):
            (x, y, z) = outImgToXYZ(xOut, yOut, faceIdx, faceSize)
            theta = atan2(y, x)   # range -pi to pi
            r = hypot(x, y)
            phi = atan2(z, r)     # range -pi/2 to pi/2
            # source img coords
            uf = 0.5 * inSize[0] * (theta + pi) / pi
            vf = 0.5 * inSize[0] * (pi/2 - phi) / pi
            # Use bilinear interpolation between the four surrounding pixels
            ui = floor(uf)   # coord of pixel to bottom left
            vi = floor(vf)
            u2 = ui + 1      # coords of pixel to top right
            v2 = vi + 1
            mu = uf - ui     # fraction of way across pixel
            nu = vf - vi
            # Pixel values of four corners
            A = inPix[ui % inSize[0], clip(vi, 0, inSize[1] - 1)]
            B = inPix[u2 % inSize[0], clip(vi, 0, inSize[1] - 1)]
            C = inPix[ui % inSize[0], clip(v2, 0, inSize[1] - 1)]
            D = inPix[u2 % inSize[0], clip(v2, 0, inSize[1] - 1)]
            # interpolate
            (r, g, b) = (
                A[0]*(1-mu)*(1-nu) + B[0]*mu*(1-nu) + C[0]*(1-mu)*nu + D[0]*mu*nu,
                A[1]*(1-mu)*(1-nu) + B[1]*mu*(1-nu) + C[1]*(1-mu)*nu + D[1]*mu*nu,
                A[2]*(1-mu)*(1-nu) + B[2]*mu*(1-nu) + C[2]*(1-mu)*nu + D[2]*mu*nu)
            outPix[xOut, yOut] = (int(round(r)), int(round(g)), int(round(b)))

imgIn = Image.open(sys.argv[1])
inSize = imgIn.size
faceSize = inSize[0] // 4
components = sys.argv[1].rsplit('.', 2)

FACE_NAMES = {
    0: 'back',
    1: 'left',
    2: 'front',
    3: 'right',
    4: 'top',
    5: 'bottom'
}

for face in range(6):
    imgOut = Image.new("RGB", (faceSize, faceSize), "black")
    convertFace(imgIn, imgOut, face)
    imgOut.save(components[0] + "_" + FACE_NAMES[face] + "." + components[1])
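For example, assuming the script is saved as convert.py, running python convert.py pano.jpg will write pano_back.jpg, pano_left.jpg, pano_front.jpg, pano_right.jpg, pano_top.jpg and pano_bottom.jpg alongside the input.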

I wrote a script to cut the generated cubemap into individual files (posx.png, negx.png, posy.png, negy.png, posz.png and negz.png). It will also pack the 6 files into a .zip file.
The source is here: https://github.com/dankex/compv/blob/master/3d-graphics/skybox/cubemap-cut.py
You can modify the array to set the image files:
name_map = [
    ["",     "",     "posy", ""],
    ["negz", "negx", "posz", "posx"],
    ["",     "",     "negy", ""]]
The converted files are:

First: unless you really have to convert the images yourself (i.e., because of some specific software requirement), don't.
The reason is that, even though there is a very simple mapping between the equirectangular projection and the cubic projection, the mapping between the areas is not simple. When you establish a correspondence between a specific point of your destination image and a point in the source with an elementary computation, as soon as you convert both points to pixels by rounding, you are making a very crude approximation that doesn't consider the size of the pixels, and the quality of the image is bound to be low.
Second: even if you need to do the conversion at runtime, are you sure that you need to do the conversion at all? Unless there is some very stringent performance problem, if you just need a skybox, create a very big sphere, stitch the equirectangular texture on it, and off you go. Three.js provides the sphere already, as far as I remember ;-)
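For reference, here is a minimal Three.js sketch of that approach (the texture URL and the scene variable are placeholders; inverting the x scale turns the sphere inside out so the texture is visible from within):

var geometry = new THREE.SphereGeometry(500, 60, 40);
geometry.scale(-1, 1, 1); // view the texture from inside the sphere
var texture = new THREE.TextureLoader().load('pano.jpg');
var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));
scene.add(mesh);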
Third: NASA provides a tool to convert between all conceivable projections (I just found out, tested it, and works like a charm). You can find it here:
G.Projector — Global Map Projector
And I find it reasonable to think that the guys know what they are doing ;-)
It turns out that the "guys" know what they are doing only up to a point: the generated cubemap has a hideous border, which makes the conversion not that easy...
I found the definitive tool for equirectangular to cubemap conversion, and it's called erect2cubic.
It's a small utility that generates a script to be fed to hugin, in this way:
erect2cubic --erect=input.png --ptofile=cube.pto
nona -o cube_prefix cube.pto
(information siphoned from Vinay's Hacks page)
And it will generate all six cubemap faces. I'm using it for my project and it works like a charm!
The only downside of this approach is that erect2cubic is not in the standard Ubuntu distribution (which is what I'm using), and I had to resort to the instructions at a blog describing how to install and use erect2cubic to find out how to install it.
It is totally worth it!

cmft Studio supports conversion/filtering of various HDR/LDR projections to cubemaps.
https://github.com/dariomanesku/cmftStudio

Here's a JavaScript version of Benjamin Dobell's code. convertFace needs to be passed two ImageData objects and a face ID (0-5).
The provided code can safely be used in a web worker, since it has no dependencies.
// convert using an inverse transformation
function convertFace(imgIn, imgOut, faceIdx) {
    var inPix = shimImgData(imgIn),
        outPix = shimImgData(imgOut),
        faceSize = imgOut.width,
        pi = Math.PI,
        pi_2 = pi/2;

    for(var xOut=0; xOut<faceSize; xOut++) {
        for(var yOut=0; yOut<faceSize; yOut++) {

            var xyz = outImgToXYZ(xOut, yOut, faceIdx, faceSize);

            var theta = Math.atan2(xyz.y, xyz.x); // range -pi to pi
            var r = Math.hypot(xyz.x, xyz.y);
            var phi = Math.atan2(xyz.z, r); // range -pi/2 to pi/2

            // source image coordinates
            var uf = 0.5 * imgIn.width * (theta + pi) / pi;
            var vf = 0.5 * imgIn.width * (pi_2 - phi) / pi;

            // Use bilinear interpolation between the four surrounding pixels
            var ui = Math.floor(uf); // coordinate of pixel to bottom left
            var vi = Math.floor(vf);
            var u2 = ui + 1; // coordinates of pixel to top right
            var v2 = vi + 1;
            var mu = uf - ui; // fraction of way across pixel
            var nu = vf - vi;

            // Pixel values of four corners
            var A = inPix.getPx(ui % imgIn.width, clip(vi, 0, imgIn.height-1));
            var B = inPix.getPx(u2 % imgIn.width, clip(vi, 0, imgIn.height-1));
            var C = inPix.getPx(ui % imgIn.width, clip(v2, 0, imgIn.height-1));
            var D = inPix.getPx(u2 % imgIn.width, clip(v2, 0, imgIn.height-1));

            // interpolate
            var rgb = {
                r: A[0]*(1-mu)*(1-nu) + B[0]*(mu)*(1-nu) + C[0]*(1-mu)*nu + D[0]*mu*nu,
                g: A[1]*(1-mu)*(1-nu) + B[1]*(mu)*(1-nu) + C[1]*(1-mu)*nu + D[1]*mu*nu,
                b: A[2]*(1-mu)*(1-nu) + B[2]*(mu)*(1-nu) + C[2]*(1-mu)*nu + D[2]*mu*nu
            };

            rgb.r = Math.round(rgb.r);
            rgb.g = Math.round(rgb.g);
            rgb.b = Math.round(rgb.b);

            outPix.setPx(xOut, yOut, rgb);

        } // for(var yOut=0; yOut<faceSize; yOut++) {...}
    } // for(var xOut=0; xOut<faceSize; xOut++) {...}
} // function convertFace(imgIn, imgOut, faceIdx) {...}

// get x, y, z coordinates from out image pixel coordinates
// i, j are pixel coordinates
// faceIdx is face number
// faceSize is edge length
function outImgToXYZ(i, j, faceIdx, faceSize) {
    var a = 2 * i / faceSize,
        b = 2 * j / faceSize;

    switch(faceIdx) {
        case 0: // back
            return({x:-1, y:1-a, z:1-b});
        case 1: // left
            return({x:a-1, y:-1, z:1-b});
        case 2: // front
            return({x: 1, y:a-1, z:1-b});
        case 3: // right
            return({x:1-a, y:1, z:1-b});
        case 4: // top
            return({x:b-1, y:a-1, z:1});
        case 5: // bottom
            return({x:1-b, y:a-1, z:-1});
    }
} // function outImgToXYZ(i, j, faceIdx, faceSize) {...}

function clip(val, min, max) {
    return(val<min ? min : (val>max ? max : val));
}

function shimImgData(imgData) {
    var w = imgData.width*4,
        d = imgData.data;

    return({
        getPx: function(x, y) {
            x = x*4 + y*w;
            return([d[x], d[x+1], d[x+2]]);
        },
        setPx: function(x, y, rgb) {
            x = x*4 + y*w;
            d[x] = rgb.r;
            d[x+1] = rgb.g;
            d[x+2] = rgb.b;
            d[x+3] = 255; // alpha
        }
    });
} // function shimImgData(imgData) {...}
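A minimal usage sketch (assuming an already-loaded <img> element whose width is a multiple of four; the canvas is only used to obtain ImageData):

var cnv = document.createElement('canvas');
cnv.width = img.width;
cnv.height = img.height;
var ctx = cnv.getContext('2d');
ctx.drawImage(img, 0, 0);
var imgIn = ctx.getImageData(0, 0, img.width, img.height);

var faceSize = img.width / 4;
for (var f = 0; f < 6; f++) {
    var imgOut = ctx.createImageData(faceSize, faceSize);
    convertFace(imgIn, imgOut, f);
    // imgOut now holds one cube face; draw it somewhere with putImageData()
}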

I created a solution for this problem using OpenGL and made a command-line tool around it. It works both on images and videos, and it is the fastest tool that I found out there.
Convert360 - Project on GitHub.
OpenGL Shader - The fragment shader used for the re-projection.
The usage is as simple as:
pip install convert360
convert360 -i ~/Pictures/Barcelona/sagrada-familia.jpg -o example.png -s 300 300
To get something like this:

There are various representations of environment maps. Here is a nice overview.
Overview - Panoramic Images
If you use Photosphere (or any panorama app for that matter), you most likely already have the horizontal latitude / longitude representation.
You can then simply draw a textured three.js SphereGeometry. Here is a tutorial on how to render earth.
Tutorial - How to Make the Earth in WebGL?
Best of luck :).

A very simple C++ application which converts an equirectangular panorama to a cube map, based on the answer by Salix Alba:
Photo Panorama Converter

Perhaps I am missing something here, but it seems that most, if not all, of the presented transformation code may be somewhat incorrect. It takes a spherical panorama (equirectangular: 360 degrees horizontally and 180 degrees vertically) and seems to convert to the cube faces using a Cartesian <-> cylindrical transformation. Should it not be using a Cartesian <-> spherical transformation?
See Spherical Coordinates.
I suppose that as long as they reverse the calculation to go from the cube faces to the panorama, then it should work out. But the images of the cube faces may be slightly different when using the spherical transformation.
If I start with this equirectangular (spherical panorama):
Then if I use a cylindrical transformation (which I am not 100% sure is correct at this time), I get this result:
But if I use a spherical transformation, I get this result:
They are not the same. My spherical transformation result seems to match the result of Danke Xie, but his link does not show the kind of transformation he is using, as best I can read it.
So am I misunderstanding the code being used by many of the contributors to this topic?

kubi can convert from an equirectangular image to cube faces. I have written it to be fast and flexible. It provides options to choose the output layout (six separate images is the default) and decide on the resampling method.

Related

Finding length of arc on unit circle only given x position

Some background:
I've been trying to map a texture onto a "sphere" using a look up table of texture co-ordinate changes. This is for a really slow micro controller to draw on a little LCD panel. So Three.JS is out, WebGL etc... the look up table should work!
The equations for texturing a sphere all pinch the poles. I can't "pre-spread" the texture at these extremes, because the texture offset changes to make the "sphere" appear to rotate.
If you examine the code for making the lookup table here, you'll see the approach, and the running demo shows the issue.
https://codepen.io/SarahC/pen/KKoKqKW
I figured I'd try and come up with a new approach myself!
After thinking a while, I realised that a sphere texture in effect moves the location of a texture pixel further from the sphere's origin the further away from the origin it is, in a straight line from the origin.
So I figured: calculate the angle the current pixel is at from the origin, calculate its unit distance, and then all I need is a function that is given the distance and calculates the new distance based on some "sphere calculation". That new distance is almost 1 to 1 near the center of the sphere, and rapidly increases right at the edges. Mapping a sphere texture!
That offset function, I figured (may be wrong here?) (diagrammed below), is: given the distance from the origin L1 (unit circle), it returns the length of the arc L2, which in effect is the flat pixel offset to use from the source texture.
(I asked on Reddit, and was given Math.acos for X, but now I know that's wrong, because that's the X position on the circle, not the straight-line X position from the offset, AND it gives an angle, not the Y position... wrong on two counts. Oooph!
Oddly, surprisingly, because I thought it gave the Y position, I dropped it into an atan2 function, and it ALMOST worked... for all the wrong reasons of course but it made a bump at the center of the "sphere".
The current "state of the mistake" is right here:
https://codepen.io/SarahC/pen/abYbgwa?editors=0010 )
Now I know that aCos isn't the function I need, I'm at a loss for the solution.
But! Perhaps this approach I thought up is stupid? I'd happily use any look-up mapping function you think would work. =)
Thanks for your time and reading and sharing, I like learning new stuff.
An interesting but befuddling problem...
Per Spektre's comment and my follow up comment, the mapping of x to the length of the circle's arc still resulted in the center bubble effect of the image as described in the question. I tried numerous mathematically "correct" attempts, including picking a distant view point from the sphere and then calculating the compression of the 2d image as the view point angle swept from top center of the sphere to the edge, but again, to no avail, as the bubble effect persisted...
In the end, I introduced a double fudge factor. To eliminate the bubble effect, I took the 32nd root of the unit radius to stretch the sphere's central points. Then, when calculating the arc length (per the diagram in the question and the comments on "L2") I undid the stretching fudge factor by raising to the 128th power the unit radius to compress and accentuate the curvature towards the edge of the sphere.
Although this solution appears to provide the proper results, it offends the mathematician in me, as it is a pure fudge to eliminate the confusing bubble effect. The use of the 32nd root and 128th power were simply arrived at via trial and error, rather than any true mathematical reasoning. Ugh...
So, FWIW, the code snippet below exemplifies both the calculation and use of the lookup table in functions unitCircleLut2() and drawSphere2(), respectively...
// https://www.reddit.com/r/GraphicsProgramming/comments/vlnqjc/oldskool_textured_sphere_using_lut_and_texture_xy/
// Perhaps useable as a terminator eye?........
// https://www.youtube.com/watch?v=nSlEQumWLHE
// https://www.youtube.com/watch?v=hx_0Ge4hDpI
// This is an attempt to recreate the 3D eyeball that the Terminator upgrade produces on the Adafruit M4sk system.
// As the micro control unit only has 200Kb RAM stack and C and no 3D graphics support, chances are there's no textured 3D sphere, but a look up table to map an eyeball texture to a sphere shape on the display.
// I've got close - but this thing pinches the two poles - which I can't see happening with the M4sk version.
// Setup the display, and get its pixels so we can write to them later.
// Setup the display, and get its pixels so we can write to them later.
let c = document.createElement("canvas");
c.width = 300;
c.height = 300;
document.body.appendChild(c);
let ctx = c.getContext("2d");
let imageDataWrapper = ctx.getImageData(0, 0, c.width, c.height);
let imageData = imageDataWrapper.data; // 8 bit ARGB
let imageData32 = new Uint32Array(imageData.buffer); // 32 bit pixel

// Declare the look up table - dimensions same as display.
let offsetLUT = null;

// Texture to map to sphere.
let textureCanvas = null;
let textureCtx = null;
let textureDataWrapper = null;
let textureData = null;
let textureData32 = null;

let px = 0;
let py = 0;
let vx = 2;
let vy = 0.5;

// Load the texture and get its pixels.
let textureImage = new Image();
textureImage.crossOrigin = "anonymous";
textureImage.onload = _ => {
    textureCanvas = document.createElement("canvas");
    textureCtx = textureCanvas.getContext("2d");

    offsetLUT = unitCircleLut2(300);

    textureCanvas.width = textureImage.width;
    textureCanvas.height = textureImage.height;
    textureCtx.drawImage(textureImage, 0, 0);
    textureDataWrapper = textureCtx.getImageData(0, 0, textureCanvas.width, textureCanvas.height);
    textureData = textureDataWrapper.data;
    textureData32 = new Uint32Array(textureData.buffer);

    // Draw texture to display - just to show we got it.
    // Won't appear if everything works, as it will be replaced with the sphere draw.
    for(let i = 0; i < imageData32.length; i++) {
        imageData32[i] = textureData32[i];
    }
    ctx.putImageData(imageDataWrapper, 0, 0);

    requestAnimationFrame(animation);
}
textureImage.src = "https://untamed.zone/miscImages/metalEye.jpg";

function unitCircleLut2(resolution) {

    function y(x) {
        // x ** 128 compresses when x approaches 1. This helps accentuate the
        // curvature of the sphere near the edges...
        return (Math.PI / 2 - Math.acos(x ** 128)) / (Math.PI / 2);
    }

    let r = resolution / 2 |0;

    // Rough calculate the length of the arc...
    let arc = new Array(r);
    for (let i = 0; i < r; i++) {
        // The calculation for nx stretches x when approaching 0. This removes the
        // center bubble effect...
        let nx = (i / r) ** (1 / 32);
        arc[i] = { x: nx, y: y(nx), arcLen: 0 };
        if (0 < i) {
            arc[i].arcLen = arc[i - 1].arcLen + Math.sqrt((arc[i].x - arc[i - 1].x) ** 2 + (arc[i].y - arc[i - 1].y) ** 2);
        }
    }
    let arcLength = arc[r - 1].arcLen;

    // Now, for each element in the array, calculate the factor to apply to the
    // metal eye to either stretch (center) or compress (edges) the image...
    let lut = new Array(resolution);
    let centerX = r;
    let centerY = r;
    for(let y = 0; y < resolution; y++) {
        let ny = y - centerY;
        lut[y] = new Array(resolution);
        for(let x = 0; x < resolution; x++) {
            let nx = x - centerX;
            let nd = Math.sqrt(nx * nx + ny * ny) |0;
            if (r <= nd) {
                lut[y][x] = null;
            } else {
                lut[y][x] = arc[nd].arcLen / arcLength;
            }
        }
    }

    return lut;
}

function drawSphere2(dx, dy){
    const mx = textureCanvas.width - c.width;
    const my = textureCanvas.height - c.height;
    const idx = Math.round(dx);
    const idy = Math.round(dy);
    const textureCenterX = textureCanvas.width / 2 |0;
    const textureCenterY = textureCanvas.height / 2 |0;

    let iD32index = 0;
    for(let y = 0; y < c.height; y++){
        for(let x = 0; x < c.width; x++){
            let stretch = offsetLUT[y][x];
            if(stretch == null){
                imageData32[iD32index++] = 0;
            }else{
                // The 600/150 is the ratio of the metal eye to the offsetLut. But, since the
                // eye doesn't fill the entire image, the ratio is fudged to get more of the
                // eye into the sphere...
                let tx = (x - 150) * 600/150 * Math.abs(stretch) + textureCenterX + dx |0;
                let ty = (y - 150) * 600/150 * Math.abs(stretch) + textureCenterY + dy |0;
                let textureIndex = tx + ty * textureCanvas.width;
                imageData32[iD32index++] = textureData32[textureIndex];
            }
        }
    }
    ctx.putImageData(imageDataWrapper, 0, 0);
}

// Move the texture on the sphere and keep redrawing.
function animation(){
    px += vx;
    py += vy;
    let xx = Math.cos(px / 180 * Math.PI) * 180 + 0;
    let yy = Math.cos(py / 180 * Math.PI) * 180 + 0;
    drawSphere2(xx, yy);
    requestAnimationFrame(animation);
}

body {
    background: #202020;
    color: #f0f0f0;
    font-family: arial;
}
canvas {
    border: 1px solid #303030;
}

Find intersection coordinates of two circles on earth?

I'm trying to find the second intersection point of two circles. One of the points, which I already know, was used to calculate a distance that was then used as the circle radius (example). The problem is that I'm not getting the known point back; I'm getting two new coordinates, even though they are similar. The problem is probably related to the earth's curvature, but I have searched for a solution and found nothing.
The circle radii are calculated with the earth's curvature, and this is the code I have:
function GET_coordinates_of_circles(position1, r1, position2, r2) {
    var deg2rad = function (deg) { return deg * (Math.PI / 180); };

    x1 = position1.lng;
    y1 = position1.lat;
    x2 = position2.lng;
    y2 = position2.lat;

    var centerdx = deg2rad(x1 - x2);
    var centerdy = deg2rad(y1 - y2);
    var R = Math.sqrt(centerdx * centerdx + centerdy * centerdy);

    if (!(Math.abs(r1 - r2) <= R && R <= r1 + r2)) { // no intersection
        console.log("nope");
        return []; // empty list of results
    }

    // intersection(s) should exist
    var R2 = R*R;
    var R4 = R2*R2;
    var a = (r1*r1 - r2*r2) / (2 * R2);
    var r2r2 = (r1*r1 - r2*r2);
    var c = Math.sqrt(2 * (r1*r1 + r2*r2) / R2 - (r2r2 * r2r2) / R4 - 1);

    var fx = (x1+x2) / 2 + a * (x2 - x1);
    var gx = c * (y2 - y1) / 2;
    var ix1 = fx + gx;
    var ix2 = fx - gx;

    var fy = (y1+y2) / 2 + a * (y2 - y1);
    var gy = c * (x1 - x2) / 2;
    var iy1 = fy + gy;
    var iy2 = fy - gy;

    // note if gy == 0 and gx == 0 then the circles are tangent and there is only one solution
    // but that one solution will just be duplicated as the code is currently written
    return [[iy1, ix1], [iy2, ix2]];
}
The deg2rad function is supposed to adjust the other calculations for the earth's curvature.
Thank you for any help.
Your calculations for R and so on are wrong, because the plane Pythagorean formula does not work in spherical trigonometry (for example, on a sphere we can have a triangle with three right angles!). Instead we should use special formulas, some of which are taken from this page.
First find the great-circle arcs in radians for both radii, using R = Earth radius = 6371 km:

a1 = r1 / R
a2 = r2 / R

And the distance (again an arc in radians) between the circle centers, using the haversine formula:
var R = 6371e3; // metres
// toRadians() is assumed to be a degrees-to-radians helper on Number.prototype
var φ1 = lat1.toRadians();
var φ2 = lat2.toRadians();
var Δφ = (lat2 - lat1).toRadians();
var Δλ = (lon2 - lon1).toRadians();

var a = Math.sin(Δφ/2) * Math.sin(Δφ/2) +
        Math.cos(φ1) * Math.cos(φ2) *
        Math.sin(Δλ/2) * Math.sin(Δλ/2);
var ad = 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
And the bearing from position1 to position2:
// where φ1,λ1 is the start point and φ2,λ2 the end point
// (Δλ is the difference in longitude)
var y = Math.sin(λ2-λ1) * Math.cos(φ2);
var x = Math.cos(φ1)*Math.sin(φ2) -
        Math.sin(φ1)*Math.cos(φ2)*Math.cos(λ2-λ1);
var brng = Math.atan2(y, x);
Now look at the picture from my answer for the equal-radii case.
(Here the circle radii may be distinct, and we need another approach to find the required arcs.)
We have spherical right-angle triangles ACB and FCB (similar to the plane case: BD is perpendicular to AF at point C, and angle BCA is right).
The spherical Pythagorean theorem (from a book on spherical trigonometry) says that
cos(AB) = cos(BC) * cos(AC)
cos(FB) = cos(BC) * cos(FC)
or (using x for AC, y for BC and (ad-x) for FC)
cos(a1) = cos(y) * cos(x)
cos(a2) = cos(y) * cos(ad-x)
Divide the equations to eliminate cos(y):

cos(a1)*cos(ad-x) = cos(a2)*cos(x)
cos(a1)*(cos(ad)*cos(x) + sin(ad)*sin(x)) = cos(a2)*cos(x)
cos(ad)*cos(x) + sin(ad)*sin(x) = cos(a2)*cos(x) / cos(a1)
sin(ad)*sin(x) = cos(a2)*cos(x) / cos(a1) - cos(ad)*cos(x)
sin(ad)*sin(x) = cos(x) * (cos(a2) / cos(a1) - cos(ad))

TAC = tan(x) = (cos(a2) / cos(a1) - cos(ad)) / sin(ad)
Having the hypotenuse and a cathetus of triangle ACB, we can find the angle between the AC and AB directions (Napier's rules for right spherical triangles). Note we already know TAC = tan(AC) and a1 = AB:

cos(CAB) = tan(AC) * cot(AB)
CAB = Math.acos(TAC / Math.tan(a1))
Now we can calculate the intersection points: they lie at arc distance a1 from position1, along bearings brng - CAB and brng + CAB:
B_bearing = brng - CAB
D_bearing = brng + CAB
Intersection points' coordinates:
var latB = Math.asin(Math.sin(lat1)*Math.cos(a1) +
                     Math.cos(lat1)*Math.sin(a1)*Math.cos(B_bearing));
var lonB = lon1.toRadians() + Math.atan2(Math.sin(B_bearing)*Math.sin(a1)*Math.cos(lat1),
                     Math.cos(a1) - Math.sin(lat1)*Math.sin(latB));

and the same for D_bearing. Here lat1 and lon1 are in radians, and latB, lonB come out in radians (note the last term uses latB, the destination latitude just computed).
I had a similar need (Intersection coordinates (lat/lon) of two circles (given the coordinates of the center and the radius) on earth), so I share my Python solution here in case it helps someone:
'''
FINDING THE INTERSECTION COORDINATES (LAT/LON) OF TWO CIRCLES (GIVEN THE COORDINATES OF THE CENTER AND THE RADII)

Many thanks to Ture Pålsson who directed me to the right source; the code below is based on whuber's brilliant work here:
https://gis.stackexchange.com/questions/48937/calculating-intersection-of-two-circles

The idea is that:
  1. The points in question are the mutual intersections of three spheres: a sphere centered beneath location x1 (on the
     earth's surface) of a given radius, a sphere centered beneath location x2 (on the earth's surface) of a given radius,
     and the earth itself, which is a sphere centered at O = (0,0,0) of a given radius.
  2. The intersection of each of the first two spheres with the earth's surface is a circle, which defines two planes.
     The mutual intersection of all three spheres therefore lies on the intersection of those two planes: a line.
     Consequently, the problem is reduced to intersecting a line with a sphere.

Note that "Decimal" is used to have higher precision, which is important if the distance between the two points is a few
meters.
'''
from decimal import Decimal
from math import cos, sin, sqrt
import math
import numpy as np

def intersection(p1, r1_meter, p2, r2_meter):
    # p1 = Coordinates of Point 1: latitude, longitude. This serves as the center of circle 1. Ex: (36.110174, -90.953524)
    # r1_meter = Radius of circle 1 in meters
    # p2 = Coordinates of Point 2: latitude, longitude. This serves as the center of circle 2. Ex: (36.110174, -90.953524)
    # r2_meter = Radius of circle 2 in meters
    '''
    1. Convert (lat, lon) to (x,y,z) geocentric coordinates.
       As usual, because we may choose units of measurement in which the earth has a unit radius.
    '''
    x_p1 = Decimal(cos(math.radians(p1[1]))*cos(math.radians(p1[0])))  # x = cos(lon)*cos(lat)
    y_p1 = Decimal(sin(math.radians(p1[1]))*cos(math.radians(p1[0])))  # y = sin(lon)*cos(lat)
    z_p1 = Decimal(sin(math.radians(p1[0])))                           # z = sin(lat)
    x1 = (x_p1, y_p1, z_p1)

    x_p2 = Decimal(cos(math.radians(p2[1]))*cos(math.radians(p2[0])))  # x = cos(lon)*cos(lat)
    y_p2 = Decimal(sin(math.radians(p2[1]))*cos(math.radians(p2[0])))  # y = sin(lon)*cos(lat)
    z_p2 = Decimal(sin(math.radians(p2[0])))                           # z = sin(lat)
    x2 = (x_p2, y_p2, z_p2)
    '''
    2. Convert the radii r1 and r2 (which are measured along the sphere) to angles along the sphere.
       By definition, one nautical mile (NM) is 1/60 degree of arc (which is pi/180 * 1/60 = 0.0002908888 radians).
    '''
    r1 = Decimal(math.radians((r1_meter/1852) / 60))  # r1_meter/1852 converts meters to nautical miles.
    r2 = Decimal(math.radians((r2_meter/1852) / 60))
    '''
    3. The geodesic circle of radius r1 around x1 is the intersection of the earth's surface with an Euclidean sphere
       of radius sin(r1) centered at cos(r1)*x1.
    4. The plane determined by the intersection of the sphere of radius sin(r1) around cos(r1)*x1 and the earth's surface
       is perpendicular to x1 and passes through the point cos(r1)*x1, whence its equation is x.x1 = cos(r1)
       (the "." represents the usual dot product); likewise for the other plane. There will be a unique point x0 on the
       intersection of those two planes that is a linear combination of x1 and x2. Writing x0 = a*x1 + b*x2, the two planar
       equations are:
           cos(r1) = x.x1 = (a*x1 + b*x2).x1 = a + b*(x2.x1)
           cos(r2) = x.x2 = (a*x1 + b*x2).x2 = a*(x1.x2) + b
       Using the fact that x2.x1 = x1.x2, which I shall write as q, the solution (if it exists) is given by
           a = (cos(r1) - cos(r2)*q) / (1 - q^2),
           b = (cos(r2) - cos(r1)*q) / (1 - q^2).
    '''
    q = Decimal(np.dot(x1, x2))
    if q**2 != 1:
        a = (Decimal(cos(r1)) - Decimal(cos(r2))*q) / (1 - q**2)
        b = (Decimal(cos(r2)) - Decimal(cos(r1))*q) / (1 - q**2)
        '''
        5. Now all other points on the line of intersection of the two planes differ from x0 by some multiple of a vector
           n which is mutually perpendicular to both planes. The cross product n = x1~Cross~x2 does the job provided n is
           nonzero: once again, this means that x1 and x2 are neither coincident nor diametrically opposite. (We need to
           take care to compute the cross product with high precision, because it involves subtractions with a lot of
           cancellation when x1 and x2 are close to each other.)
        '''
        n = np.cross(x1, x2)
        '''
        6. Therefore, we seek up to two points of the form x0 + t*n which lie on the earth's surface: that is, their length
           equals 1. Equivalently, their squared length is 1:
               1 = squared length = (x0 + t*n).(x0 + t*n) = x0.x0 + 2t*x0.n + t^2*n.n = x0.x0 + t^2*n.n
        '''
        x0_1 = [a*f for f in x1]
        x0_2 = [b*f for f in x2]
        x0 = [sum(f) for f in zip(x0_1, x0_2)]
        '''
        The term with x0.n disappears because x0 (being a linear combination of x1 and x2) is perpendicular to n.
        The two solutions easily are t = sqrt((1 - x0.x0)/n.n) and its negative. Once again high precision
        is called for, because when x1 and x2 are close, x0.x0 is very close to 1, leading to some loss of
        floating point precision.
        '''
        if (np.dot(x0, x0) <= 1) & (np.dot(n, n) != 0):  # This is to secure that (1 - np.dot(x0, x0)) / np.dot(n,n) > 0
            t = Decimal(sqrt((1 - np.dot(x0, x0)) / np.dot(n, n)))
            t1 = t
            t2 = -t
            i1 = x0 + t1*n
            i2 = x0 + t2*n
            '''
            7. Finally, we may convert these solutions back to (lat, lon) by converting geocentric (x,y,z) to geographic
               coordinates. For the longitude, use the generalized arctangent returning values in the range -180 to 180
               degrees (in computing applications, this function takes both x and y as arguments rather than just the
               ratio y/x; it is sometimes called "ATan2").
            '''
            i1_lat = math.degrees(math.asin(i1[2]))
            i1_lon = math.degrees(math.atan2(i1[1], i1[0]))
            ip1 = (i1_lat, i1_lon)
            i2_lat = math.degrees(math.asin(i2[2]))
            i2_lon = math.degrees(math.atan2(i2[1], i2[0]))
            ip2 = (i2_lat, i2_lon)
            return [ip1, ip2]
        elif np.dot(n, n) == 0:
            return "The centers of the circles can be neither the same point nor antipodal points."
        else:
            return "The circles do not intersect"
    else:
        return "The centers of the circles can be neither the same point nor antipodal points."

'''
Example: The output of the call below is [(36.989311051533505, -88.15142628069133), (38.2383796094578, -92.39048549120287)]
intersection_points = intersection((37.673442, -90.234036), 107.5*1852, (36.109997, -90.953669), 145*1852)
print(intersection_points)
'''
Any feedback is appreciated.

How can I offset a global directional force to be applied over a local axis?

I want to apply a forward force in relation to the object's local axis, but the engine I'm using only allows me to apply a force along the global axes.
I have access to the object's global rotation as a quaternion. I'm not familiar with using quaternions, however (generally untrained in advanced maths). Is that sufficient information to offset the applied force along the desired axis? How?
For example, to move forward globally I would do:
this.entity.rigidbody.applyForce(0, 0, 5);
but to keep that force applied along the object's local axis, I need to distribute the applied force in a different way along the axes, based on the object's rotational quat, for example:
w:0.5785385966300964
x:0
y:-0.815654993057251
z:0
I've researched quaternions trying to figure this out, but watching a video on what they are and why they're used hasn't helped me figure out how to actually work with them to even begin to figure out how to apply the offset needed here.
What I've tried so far was sort of a guess on how to do it, but it's wrong:
Math.degrees = function(radians) {
    return radians * 180 / Math.PI;
};

// converted this from a python func on wikipedia,
// not sure if it's working properly or not
function convertQuatToEuler(w, x, y, z){
    ysqr = y * y;

    t0 = 2 * (w * x + y * z);
    t1 = 1 - 2 * (x * x + ysqr);
    X = Math.degrees(Math.atan2(t0, t1));

    t2 = 2 * (w * y - z * x);
    t2 = (t2 >= 1) ? 1 : t2;
    t2 = (t2 < -1) ? -1 : t2;
    Y = Math.degrees(Math.asin(t2));

    t3 = 2 * (w * z + x * y);
    t4 = 1 - 2 * (ysqr + z * z);
    Z = Math.degrees(Math.atan2(t3, t4));

    console.log('converted', {w, x, y, z}, 'to', {X, Y, Z});
    return {X, Y, Z};
}

function applyGlobalShift(x, y, z, quat) {
    var euler = convertQuatToEuler(quat.w, quat.x, quat.y, quat.z);
    x = x - euler.X; // total guess
    y = y - euler.Y; // total guess
    z = z - euler.Z; // total guess
    console.log('converted', quat, 'to', [x, y, z]);
    return [x, y, z];
}

// represents the entity's current local rotation in space
var quat = {
    w: 0.6310858726501465,
    x: 0,
    y: -0.7757129669189453,
    z: 0
}

console.log(applyGlobalShift(-5, 0, 0, quat));
Don't laugh at my terrible guess at how to calculate the offset :P I knew it was not even close but I'm really bad at math
Quaternions are used as a replacement for Euler angles. Your approach thus defeats their purpose. Instead of trying to use Euler angles, leverage the properties of a quaternion.
A quaternion has 4 components, 3 vector components and a scalar component.
q = x*i + y*j + z*k + w
A quaternion therefore has a vector part x*i + y*j + z*k and a scalar part w. A vector is thus a quaternion with a zero scalar or real component.
It is important to note that a vector multiplied by a quaternion is another vector. This can be easily proved by using the rules of multiplication of quaternion basis elements (left as an exercise for the reader).
The inverse of a quaternion is its conjugate divided by its squared magnitude. The conjugate of a quaternion w + (x*i + y*j + z*k) is simply w - (x*i + y*j + z*k), and its magnitude is sqrt(x*x + y*y + z*z + w*w).
A rotation of a vector is simply the vector obtained by rotating that vector through an angle about an axis. Rotation quaternions represent such an angle-axis rotation as shown here.
A vector v can be rotated about the axis and through the angle represented by a rotation quaternion q by conjugating v by q. In other words,
v' = q * v * inverse(q)
Where v' is the rotated vector and q * v * inverse(q) is the conjugation operation.
Since the quaternion represents a rotation, it can be reasonably assumed that its magnitude is one, making inverse(q) = q* where q* is the conjugate of q.
On separating q into real part s and vector part u and simplifying the quaternion operation (as beautifully shown here),
v' = 2 * dot(u, v) * u + (s*s - dot(u, u)) * v + 2 * s * cross(u, v)
Where dot returns the dot product of two vectors, and cross returns the cross product of two vectors.
Putting the above into (pseudo)code,
function rotate(v: vector3, q: quaternion4) -> vector3 {
    u = vector3(q.x, q.y, q.z)
    s = q.w
    return 2 * dot(u, v) * u + (s*s - dot(u, u)) * v + 2 * s * cross(u, v)
}
Now that we know how to rotate a vector with a quaternion, we can use the world (global) rotation quaternion to find the corresponding world direction (or axis) for a local direction by conjugating the local direction by the rotation quaternion.
The local forward axis is always given by 0*i + 0*j + 1*k. Therefore, to find the world forward axis for an object, you must conjugate the vector (0, 0, 1) with the world rotation quaternion.
Using the function defined above, the forward axis becomes
forward = rotate(vector3(0, 0, 1), rotationQuaternion)
Now that you have the world forward axis, a force applied along it will simply be a scalar multiple of the world forward axis.
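Putting this together in JavaScript (a sketch: rotate() is a direct translation of the pseudocode above, and the applyForce call follows the PlayCanvas-style API from the question; the magnitude 5 matches the original example):

function rotate(v, q) {
    var u = { x: q.x, y: q.y, z: q.z };
    var s = q.w;
    var dotUV = u.x*v.x + u.y*v.y + u.z*v.z;
    var dotUU = u.x*u.x + u.y*u.y + u.z*u.z;
    var crossUV = { x: u.y*v.z - u.z*v.y,
                    y: u.z*v.x - u.x*v.z,
                    z: u.x*v.y - u.y*v.x };
    return { x: 2*dotUV*u.x + (s*s - dotUU)*v.x + 2*s*crossUV.x,
             y: 2*dotUV*u.y + (s*s - dotUU)*v.y + 2*s*crossUV.y,
             z: 2*dotUV*u.z + (s*s - dotUU)*v.z + 2*s*crossUV.z };
}

// world-space forward axis for the entity's current rotation
var forward = rotate({ x: 0, y: 0, z: 1 }, this.entity.getRotation());
this.entity.rigidbody.applyForce(5 * forward.x, 5 * forward.y, 5 * forward.z);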

rotating a sphere a to b point on itself

I'm trying to figure out how to rotate a sphere from point A on itself to point B on itself. I found some Unity3D code:
Quaternion rot = Quaternion.FromToRotation (pointA, pointB);
sphere.transform.rotation *= rot; //Multiply rotations to get the resultant rotation
via http://answers.unity3d.com/questions/21921/rotate-point-a-on-sphere-to-point-b-on-sphere.html but I can't figure out how to implement it in Three.js.
Here's my code:
var s = ...;  // sphere mesh
var va = ...; // 1st start vector
var vb = ...; // 2nd end vector

var qa = new THREE.Quaternion(va.x, va.y, va.z, 1);
var qb = new THREE.Quaternion(vb.x, vb.y, vb.z, 1);
var qc = new THREE.Quaternion();
THREE.Quaternion.slerp(qa, qb, qc, 1);

s.useQuaternion = true;
s.quaternion = qc;
Thanks!
Assume the sphere is centered at the origin and that A and B are normalized (i.e. unit length). Then compute the cross product C = A×B. This is the vector representing your rotation axis. The angle of your rotation is given by θ = cos⁻¹(A∙B), where A∙B is the dot product of the unit vectors A and B. Note that the angle in this case is typically in radians, not degrees.
If the sphere is centered at some point P not at the origin, or its radius is not unit length, you will have to translate and scale A and B before deriving the rotation. This looks like:

A' ← (A-P)/|A-P|   // normalized version of A
B' ← (B-P)/|B-P|   // normalized version of B
V  ← A'×B'         // axis about which to rotate the system
θ  ← cos⁻¹(A'∙B')  // angle by which to rotate

You can then use V and θ as your arguments for constructing the quaternion you will use for defining your rotation. This rotation will be centered at the origin, so before applying it, translate by -P, then apply the rotation, and translate back by P.
One note: There may be a sign error in here. If it doesn't work, it's because the sign of the cross product doesn't match with the sign of the dot product. Just reverse the order of the arguments in the cross product to fix it if this is a problem.
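For what it's worth, newer versions of Three.js wrap this axis-angle construction up for you via Quaternion.setFromUnitVectors (a sketch, assuming va and vb are THREE.Vector3 points on a sphere centered at the origin):

var rot = new THREE.Quaternion().setFromUnitVectors(va.clone().normalize(),
                                                    vb.clone().normalize());
s.quaternion.premultiply(rot);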
var c = group.rotation.y;
var d = -b * (Math.PI / 180) % (2 * Math.PI);
var e = Math.PI / 2 * -1;
group.rotation.y = c % (2 * Math.PI);
group.rotation.x = a * (Math.PI / 180) % Math.PI;
group.rotation.y = d + e;

where a = latitude, b = longitude, and group = Object3D (or the sphere).

Find column, row on 2D isometric grid from x,y screen space coords (Convert equation to function)

I'm trying to find the row and column in a 2D isometric grid of a screen-space point (x, y).
Now I pretty much know what I need to do, which is find the lengths of the vectors in red in the pictures above and then compare them to the lengths of the vectors that represent the bounds of the grid (which are represented by the black vectors).
I asked for help over at Mathematics Stack Exchange to get the equation for figuring out what the parallel vectors are of a point x,y compared to the black boundary vectors: Length of Perpendicular/Parallel Vectors
But I'm having trouble converting this to a function.
Ideally I need enough of a function to get the lengths of both red vectors from three sets of points: the x,y of the ends of the two black vectors, and the point at the end of the red vectors.
Any language is fine, but ideally JavaScript.
What you need is a change of basis.
Suppose the coordinates of the first black vector are (x1, x2) and the coordinates of the second vector are (y1, y2).
Then finding the red vectors that reach a point (z1, z2) is equivalent to solving the following linear system:

x1*r1 + y1*r2 = z1
x2*r1 + y2*r2 = z2

or in matrix form:

A x = b

/x1 y1\ |r1|   |z1|
\x2 y2/ |r2| = |z2|

x = inverse(A)*b
For example, let the black vectors be (2, 1) and (2, -1). The corresponding matrix A will be
2 2
1 -1
and its inverse will be
1/4 1/2
1/4 -1/2
So a point (x, y) in the original coordinates can be represented in the alternate basis via the following formula:

(x, y) = (1/4 * x + 1/2 * y)*(2,1) + (1/4 * x - 1/2 * y)*(2,-1)
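As a sketch, the same change of basis in JavaScript (the names are made up: b1 and b2 are the black boundary vectors, p is the point, and r1/r2 are the multiples of b1 and b2 that reach it, solved by Cramer's rule):

function toBasis(b1, b2, p) {
    var det = b1.x * b2.y - b2.x * b1.y; // determinant of A; zero if b1 and b2 are parallel
    return {
        r1: (p.x * b2.y - b2.x * p.y) / det,
        r2: (b1.x * p.y - p.x * b1.y) / det
    };
}

// e.g. toBasis({x: 2, y: 1}, {x: 2, y: -1}, {x: 3, y: 0}) gives r1 = 0.75, r2 = 0.75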
What exactly is the point of doing it like this? Any isometric grid you display usually contains cells of equal size, so you can skip all the vector math and simply do something like:
var xStep = 50,
    yStep = 30, // roughly matches your image
    pointX = 2 * xStep,
    pointY = 0;
Basically, the points on any isometric grid fall onto the intersections of a non-isometric grid. From an isometric grid controller:
screenPositionToIsoXY : function(o, w, h){
    var sX = ((((o.x - this.canvas.xPosition) - this.screenOffsetX) / this.unitWidth ) * 2) >> 0,
        sY = ((((o.y - this.canvas.yPosition) - this.screenOffsetY) / this.unitHeight) * 2) >> 0,
        isoX = ((sX + sY - this.cols) / 2) >> 0,
        isoY = (((-1 + this.cols) - (sX - sY)) / 2) >> 0;
        // isoX = ((sX + sY) / isoGrid.width) - 1
        // isoY = ((-2 + isoGrid.width) - sX - sY) / 2

    return $.extend(o, {
        isoX : Math.constrain(isoX, 0, this.cols - (w||0)),
        isoY : Math.constrain(isoY, 0, this.rows - (h||0))
    });
},

// ...

isoToUnitGrid : function(isoX, isoY){
    var offset = this.grid.offset(),
        isoX = $.uD(isoX) ? this.isoX : isoX,
        isoY = $.uD(isoY) ? this.isoY : isoY;

    return {
        x : (offset.x + (this.grid.unitWidth / 2) * (this.grid.rows - this.isoWidth + isoX - isoY)) >> 0,
        y : (offset.y + (this.grid.unitHeight / 2) * (isoX + isoY)) >> 0
    };
},
Okay, so with the help of the other answers (sorry guys, neither quite provided the answer I was after), I present my function for finding the grid position on a 2D isometric grid using a world x,y coordinate, where the world x,y is an offset screen-space coordinate.
WorldPosToGridPos: function(iPosX, iPosY){
    var d = (this.mcBoundaryVectors.upper.x * this.mcBoundaryVectors.lower.y) - (this.mcBoundaryVectors.upper.y * this.mcBoundaryVectors.lower.x);

    var a = ((iPosX * this.mcBoundaryVectors.lower.y) - (this.mcBoundaryVectors.lower.x * iPosY)) / d;
    var b = ((this.mcBoundaryVectors.upper.x * iPosY) - (iPosX * this.mcBoundaryVectors.upper.y)) / d;

    var cParaUpperVec = new Vector2(a * this.mcBoundaryVectors.upper.x, a * this.mcBoundaryVectors.upper.y);
    var cParaLowerVec = new Vector2(b * this.mcBoundaryVectors.lower.x, b * this.mcBoundaryVectors.lower.y);

    var iGridWidth = 40;
    var iGridHeight = 40;

    var iGridX = Math.floor((cParaLowerVec.length() / this.mcBoundaryVectors.lower.length()) * iGridWidth);
    var iGridY = Math.floor((cParaUpperVec.length() / this.mcBoundaryVectors.upper.length()) * iGridHeight);

    return {gridX: iGridX, gridY: iGridY};
},
The first line is best computed once in an init function or similar, to save doing the same calculation over and over; I just included it here for completeness.
The mcBoundaryVectors are two vectors defining the outer limits of the x and y axis of the isometric grid (The black vectors shown in the picture above).
Hope this helps anyone else in the future
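For example (a hypothetical setup; it assumes a Vector2 class with x, y and a length() method, as used above):

this.mcBoundaryVectors = {
    upper: new Vector2(800, -400), // black vector along the grid's y axis
    lower: new Vector2(800, 400)   // black vector along the grid's x axis
};
var cell = this.WorldPosToGridPos(400, 100);
// cell.gridX and cell.gridY are the column and row in the 40x40 grid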
