Movement relative to a plane - javascript

I have an issue with my current camera code. In my app, the user can rotate the model being viewed, but also needs to be able to "walk" the model. This means that the user can change the relative orientation of the model, but walking and rotating still need to happen relative to the plane of the model.
For instance, imagine looking at the model head on, like a building, such that you see the side of the building. If the user presses "W" in this case, they move closer to the building. Now, they could choose to rotate the model's pitch so that they are looking at the roof of the model (or the user could choose to go up high and then look down at the ground). In this case, hitting "W" needs to maintain the existing altitude and move forward, "up" towards the top of the screen. In general, the motion needs to feel as though you are walking relative to the plane of the model, even if the plane has been rotated relative to your point of view.
The same problem exists with looking right and left. Suppose you are again looking at the side of the building, and hit the arrow keys to look right or left. The rotation at this point is in the same plane as the model. But if the user rotates the pitch of the model to look at the roof and then looks right, the rotation happens about the camera's position instead, and it looks like the model rotated away, rather than just spinning around. Imagine standing on the ground looking down: rotating the camera should make the world appear to spin, but that's not what I get. I get the model rotating away, as though you rotated your eye to look towards the sky.
Here is my update function:
// Reset the working matrices:
mat4.identity(this.eyeMatrix);
mat4.identity(this.orbitMatrix);
mat4.identity(this.mvMatrix);
// Build rotation-only matrices from the two quaternions:
mat4.fromRotationTranslation(this.eyeMatrix, this.eyeRotation, [0, 0, 0]);
mat4.fromRotationTranslation(this.orbitMatrix, this.orbitRotation, [0, 0, 0]);
// Compose: orbit translation and rotation, then eye translation and rotation:
mat4.translate(this.mvMatrix, this.mvMatrix, this.orbit);
mat4.multiply(this.mvMatrix, this.mvMatrix, this.orbitMatrix);
mat4.translate(this.mvMatrix, this.mvMatrix, this.eye);
mat4.multiply(this.mvMatrix, this.mvMatrix, this.eyeMatrix);
this.getModelViewMatrix();
this.getProjectionMatrix();
this.getNormalMatrix();
where orbit and eye are vec3s, orbitMatrix and eyeMatrix are mat4s, and eyeRotation and orbitRotation are quaternions.
I have this for changing orbit orientation:
this.orbitYaw += yawAmount;
this.orbitPitch += pitchAmount;
var orbitRotation = this.orbitRotation;
// Rebuild the orbit rotation from the accumulated pitch, then apply yaw about the up axis:
var rotPitch = this.createOrbitPitchRotation();
quat.copy(orbitRotation, rotPitch);
var rotYaw = quat.create();
quat.setAxisAngle(rotYaw, this.up, this.orbitYaw);
quat.multiply(orbitRotation, rotYaw, orbitRotation);
this.update();
I have this for changing eye orientation:
// Yaw about the world up axis, then pitch about the camera's local X axis:
var rotYaw = quat.create();
quat.setAxisAngle(rotYaw, this.up, yawAmount);
quat.multiply(this.eyeRotation, rotYaw, this.eyeRotation);
quat.rotateX(this.eyeRotation, this.eyeRotation, pitchAmount);
quat.normalize(this.eyeRotation, this.eyeRotation);
this.update();
And finally, I have this for changing eye position (forward):
function moveEye(direction, velocity) {
  vec3.scale(direction, direction, velocity);
  vec3.add(this.eye, this.eye, direction);
  this.update();
};
function moveEyeForward(velocity) {
  var dir = vec3.create();
  var right = this.getEyeRightVector();
  // Forward = right x up, which keeps the direction parallel to the ground plane:
  vec3.cross(dir, right, this.up);
  vec3.normalize(dir, dir);
  this.moveEye(dir, velocity); // moveEye already calls update()
};
function getEyeRightVector() {
  // The camera's local right axis: the unit X axis rotated by eyeRotation.
  var q = this.eyeRotation;
  var qx = q[0], qy = q[1], qz = q[2], qw = q[3];
  var x = 1 - 2 * (qy * qy + qz * qz);
  var y = 2 * (qx * qy + qw * qz);
  var z = 2 * (qx * qz - qw * qy);
  return vec3.fromValues(x, y, z);
};
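For reference, this hand-expanded math is equivalent to rotating the unit X axis by the quaternion with gl-matrix's built-in helper (a minimal sketch of the same computation):

var right = vec3.create();
vec3.transformQuat(right, [1, 0, 0], this.eyeRotation);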
So the question is: how do I ensure that eye motion is always relative to the plane of the model (i.e. relative to the plane of the orbit rotation)?
...

Related

Give curved line a deeper arc in turf.js and mapbox

There's no simple curved-line tool in turf.js, nor is there an easy way to do it in Mapbox (so far as I can see), so I've created a workaround based on this answer in this thread.
However, the curve it creates isn't very smooth or satisfying, and it has an inconsistent hump depending on the angle/length of the line.
Ideally, I'd like an arc that always has a nice, rounded form.
My workaround takes the start and end points and draws a line between them. I then offset the midpoint by distance / 5 and apply a bearing, and finally connect the three points with a turf.bezierSpline.
const start = [parseFloat(originAirport.longitude), parseFloat(originAirport.latitude)];
const end = [
  parseFloat(destinationAirport.longitude),
  parseFloat(destinationAirport.latitude),
];
const distance = turf.distance(start, end, { units: 'miles' });
const midpoint = turf.midpoint(start, end);
const destination = turf.destination(midpoint, distance / 5, 20, { units: 'miles' });
// curvedLine gets rendered to the page
const curvedLine = turf.bezierSpline(
  turf.lineString([start, destination.geometry.coordinates, end]),
);
Desired curvature:
Well, this question was asked a very long time ago, but I recently encountered this problem.
If anybody is still wondering: this code is good in general, but you've missed one detail. We can't use the hardcoded bearing value 20 in the turf.destination method, because it's incorrect in most cases. We need the moved midpoint to sit exactly in the middle of our geometry, so we have to find the right angle:
const bearing = turf.bearing(start, end);
Then, if we want our arc to be on the left side of our line, we need to add 90 degrees to the calculated bearing; if on the right side, subtract 90 degrees:
const leftSideArc = bearing + 90 > 180 ? -180 + (bearing + 90 - 180) : bearing + 90;
NOTE: a bearing is a value between -180 and 180 degrees, so our value has to be wrapped back into that range in case it exceeds it (which is what the ternary above does).
And then we can pass our bearing to the destination method:
const destination = turf.destination(midpoint, distance / 5, leftSideArc, { units: 'miles' });
Now we have a perfect arc.
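Putting the pieces together, here is a minimal sketch of the whole computation (the normalizeBearing helper and the side parameter are illustrative additions, not part of the original code):

// Wrap any angle back into turf's -180..180 bearing range:
function normalizeBearing(bearing) {
  return ((bearing + 180) % 360 + 360) % 360 - 180;
}

function curvedLine(start, end, side) {
  const distance = turf.distance(start, end, { units: 'miles' });
  const midpoint = turf.midpoint(start, end);
  const bearing = turf.bearing(start, end);
  // +90 degrees bows the arc to the left of the line, -90 to the right:
  const offset = normalizeBearing(bearing + (side === 'right' ? -90 : 90));
  const control = turf.destination(midpoint, distance / 5, offset, { units: 'miles' });
  return turf.bezierSpline(turf.lineString([start, control.geometry.coordinates, end]));
}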

Calculate x z movement on plane with camera rotation

I move an Object3D with HammerJS in an AR space.
It works fine as long as I don't move my phone (which is the camera)...
const newTranslation = new THREE.Vector3(
  this._initTranslation.x + e.deltaX,
  this._initTranslation.y,
  this._initTranslation.z + e.deltaY
);
where the _initTranslation values are the original ones of the Object3D.
When I move around, the movement is still on the x/z axes I began with. (I move my finger up on the phone to move the object backwards on the z-axis, but instead it moves from left to right.)
I know that I have to take the camera rotation into account to translate from camera space to world space, but I have no clue how to do that.
Thanks in advance for your help.
I fixed it myself. Here is my solution in case someone needs it:
I now rotate the point with my camera rotation angle:
const movePoint = new THREE.Vector2(e.deltaX, e.deltaY);
movePoint.rotateAround(new THREE.Vector2(0, 0), this.getCameraAngle());
const newTranslation = new THREE.Vector3(
  this._initTranslation.x + movePoint.x,
  this._initTranslation.y,
  this._initTranslation.z + movePoint.y
);
And for the camera angle:
public getCameraAngle(): number {
  // Project the camera's world direction onto the ground plane:
  const cameraDir = new THREE.Vector3();
  this._arCamera.getWorldDirection(cameraDir);
  cameraDir.setY(0);
  cameraDir.normalize();
  // Yaw relative to the -Z reference direction (Math.atan2(-1, 0) is -PI/2):
  return Math.atan2(cameraDir.z, cameraDir.x) - Math.atan2(-1, 0);
}
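For context, a minimal sketch of how this might be wired into a HammerJS pan handler (the handler and the _object3d field are assumptions, not from the original code):

// Hypothetical pan handler, called with HammerJS pan events:
onPan(e) {
  const movePoint = new THREE.Vector2(e.deltaX, e.deltaY);
  // Rotate the screen-space delta by the camera's yaw so that "up" on the
  // screen maps to "away from the camera" on the world x/z plane:
  movePoint.rotateAround(new THREE.Vector2(0, 0), this.getCameraAngle());
  this._object3d.position.set(
    this._initTranslation.x + movePoint.x,
    this._initTranslation.y,
    this._initTranslation.z + movePoint.y
  );
}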

Coordinate detected is offset from mouse click

I would need some advice:
When we click on the second tooth from the right, the unexpected result is that the upper teeth are colored:
I will explain, step by step, what the code does.
1) We get the coordinates where the user clicked inside the canvas:
coordinates relative to the canvas 212.90908813476562 247.5454559326172
These values make sense, because we clicked quite a bit down and to the right.
2) We normalize the coordinates (the code actually maps them into the [-1, 1] range the raycaster expects):
normalizedCoordinates x,y -0.03223141756924719 -0.12520661787553267
These numbers look plausible, because the click is slightly below and to the left of the center:
The code which gets and prints the relative coordinates and finally normalizes them is:
getNormalizedCoordinatesBetween0And1(event, canvas) {
  let coordinatesVector = new THREE.Vector2();
  console.log('coordinates relative to the canvas',
    event.clientX - canvas.getBoundingClientRect().left,
    event.clientY - canvas.getBoundingClientRect().top);
  // Map the click position into normalized device coordinates (-1..1):
  coordinatesVector.x = ((event.clientX - canvas.getBoundingClientRect().left) / canvas.width) * 2 - 1;
  coordinatesVector.y = -((event.clientY - canvas.getBoundingClientRect().top) / canvas.height) * 2 + 1;
  return coordinatesVector;
}
3) We get the coordinate using the THREE raycaster, casting it from the normalized coordinate: -0.03223141756924719 -0.12520661787553267
The coordinate given by THREE, which has its origin of coordinates at the center, is:
Coordinates obtained using THREE Raycast -3.1634989936945734 -12.288972670909427
If we look again at the canvas' dimensions and the image position:
It makes sense that the THREE coordinate is negative in x and negative in y, which tells us that the clicked tooth is slightly below and to the left of the center.
The code of this step is:
getCoordinatesUsingThreeRaycast(coordinatesVector, sceneManager) {
  let raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(coordinatesVector, sceneManager.camera);
  const three = raycaster.intersectObjects(sceneManager.scene.children);
  if (three[0]) {
    console.warn('Coordinates obtained using THREE Raycast',
      three[0].point.x, three[0].point.y);
    coordinatesVector.x = three[0].point.x;
    coordinatesVector.y = three[0].point.y;
    return coordinatesVector;
  }
}
4) From the coordinate given by THREE, we move the origin of coordinates to the top left, to get an IJ coordinate system. The math is:
IJx = abs(coordinatesVector.x + (slice.canvas.width / 2)) = -3 + (352 / 2) = -3 + 176 = 173
IJy = abs(coordinatesVector.y - (slice.canvas.height / 2)) = abs(-12 - (204 / 2)) = abs(-12 - 102) = 114
And our program gives us: 172.83 and 114.28.
The code related to this behaviour is:
getCoordinateInIJSystemFromTheOriginalNRRD(coordinatesVector, slice) {
  // Shift the origin from the canvas center to the top-left corner:
  let IJx = Math.abs(coordinatesVector.x + (slice.canvas.width / 2));
  console.log('Coordinate::IJx', IJx);
  console.log('Coordinate from THREE::', coordinatesVector.x);
  console.log('slice.canvas.width ', slice.canvas.width);
  let IJy = Math.abs(coordinatesVector.y - (slice.canvas.height / 2));
  console.log('Coordinate::IJy', IJy);
  console.log('Coordinate from THREE::', coordinatesVector.y);
  console.log('slice.canvas.height', slice.canvas.height);
  return {IJx, IJy};
}
5) Our fifth step is to scale the point we got from the visible NRRD, (173, 114), to fit the dimensions of the original big NRRD.
This is because the visible image is a small representation of the original image, and our program holds the data of the big image:
If we compute the coordinate by hand:
i = round(IJx * slice.canvasBuffer.width / slice.canvas.width) = round(172.83 * 1000 / 352) = round(172.83 * 2.84) = 491
j = round(IJy * slice.canvasBuffer.height / slice.canvas.height) = round(114.28 * 580 / 204) = round(114.28 * 2.84) = 325
And our program gives us the same: 491, 325:
Coordinates after converting IJ to OriginalNrrd reference system 491 325
The code to get the point in the original NRRD:
/**
 * @member {Function} getStructuresAtPosition Returns a list of structures from the labels map stacked at this position
 * @memberof THREE.MultiVolumesSlice
 * @returns {{i: number, j: number}} the structures (can contain undefined)
 * @param IJx
 * @param IJy
 * @param slice
 */
getStructuresAtPosition: function (IJx, IJy, slice) {
  // Scale from the visible canvas to the original NRRD dimensions:
  const i = Math.round(IJx * slice.canvasBuffer.width / slice.canvas.width);
  const j = Math.round(IJy * slice.canvasBuffer.height / slice.canvas.height);
  console.log('slice.canvasBuffer.width', slice.canvasBuffer.width);
  console.log('slice.canvasBuffer.height', slice.canvasBuffer.height);
  console.log('slice.canvas.width', slice.canvas.width);
  console.log('slice.canvas.height', slice.canvas.height);
  console.warn("Scale coordinates to fit in the original NRRD coordinate system:::",
    'convert translated x, y:::', IJx, IJy, 'to new i, j', i, j);
  if (i >= slice.iLength || i < 0 || j >= slice.jLength || j < 0) {
    return undefined;
  }
  return {i, j};
},
6) Finally we use the calculated coordinate, (491, 325), to get the index of the clicked segment. In this case our program gives us 15, which means that the clicked area has a gray level of 15.
So we can see that if we click on the second tooth of the lower jaw, for some reason the program thinks we are clicking on the teeth of the upper part:
Could you please help me find out why the clicked and coloured segment is offset from the point where you click? Thank you for your time.
EDIT: Add information:
Thank you @manthrax for your information.
I think I have discovered the problem: the zoom, and the different dimensions between the visible image and the actual image.
For example, with the default distance of 300 between the camera and the NRRD, we get (i, j) = (863, 502).
With distance 249, the coordinate (i, j) is (906, 515).
Finally, if we get as close as a distance of 163, the coordinate (i, j) is (932, 519).
I clicked on the bottom left corner of the visible image each time.
The point is that the smaller the distance between the camera and the image, the closer the clicked point is to the real one.
The real one is: (1000, 580).
And we are clicking on:
Could you help me please?
This is a common problem. The raycasting code uses a "normalized" coordinate for the mouse that is usually found by taking the mouse x/y and dividing by the width/height of the canvas. But if your code is mistakenly using different dimensions than the actual canvas width/height to get those coordinates, then you get these kinds of problems: for instance, picking that works fine in the upper left corner but gets progressively more "off" the further down and right you go.
Unfortunately, without a working repro of your problem, I can't show you how to fix it, but I bet dollars to donuts the problem is in using canvas.getBoundingClientRect() to compute your mouse coordinates instead of using the regular canvas.width and canvas.height.
canvas.getBoundingClientRect() gives you back a rectangle that is not equal to the canvas width and height, but the raycaster expects the coordinates minus the canvas.clientLeft/canvas.clientTop of the canvas, divided by canvas.width and canvas.height.
You have to make sure that the mouse calculation comes out with 0,0 at the upper left corner of the canvas and 1,1 at the bottom right.
https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect
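For illustration, here is one consistent way to get that normalization (a sketch, not the poster's code; it deliberately uses the same rectangle for both the offset and the size, so the two cannot disagree, then maps the result into the -1..1 range that Raycaster.setFromCamera expects):

function getNormalizedDeviceCoordinates(event, canvas) {
  const rect = canvas.getBoundingClientRect();
  const x = (event.clientX - rect.left) / rect.width;  // 0 at the left edge, 1 at the right
  const y = (event.clientY - rect.top) / rect.height;  // 0 at the top edge, 1 at the bottom
  // setFromCamera expects NDC in -1..1 with +y pointing up:
  return new THREE.Vector2(x * 2 - 1, -(y * 2 - 1));
}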
Another problem I see in your screenshots that may eventually bite you...
Your canvases are a fixed 400x400 size, but part of the canvas is hidden by its container.
If you ever try to implement things like zooming, you'll find that the zoom will want to zoom around the canvas center, not the center of the container, so it will look wrong.
Additionally, if you switch to a perspective camera instead of an ortho one, your image will look perspective-skewed, because the right edge of the canvas is being hidden.
Generally I think it's good practice to always make the canvas position:absolute; width:100%; height:100%; padding: 0px; because at the end of the day, it is actually a virtual viewport into a 3D scene.
Just setting those params on your canvas might even fix your mouse offset problem, since it might cause the canvas to not be hidden off the screen edge, thereby making its dimensions and those of getBoundingClientRect() the same.

In PaperJS, rotating a shape around a point is not working properly

I am trying to recreate the game Asteroids. This is a sample of the code for the Ship object constructor (I am using a constructor function rather than an object literal because this doesn't resolve properly when referring to variables in a literal):
function Ship(pos) {
  var position = pos ? pos : view.center;
  var segments = [
    new Point(position) + new Point(0, -7.5), // Front of ship
    new Point(position) + new Point(-5, 7.5), // Back left
    new Point(position) + new Point(0, 3.5),  // Rear exhaust indentation
    new Point(position) + new Point(5, 7.5)   // Back right
  ];
  this.shipPath = new Path.Line({
    segments: segments,
    closed: true,
    strokeColor: '#eee',
    strokeWidth: 2
  });
  this.velocity = new Point(0, -1);
  this.steering = new Point(0, -1);
  this.rot = function(ang) {
    this.steering.angle += ang;
    this.shipPath.rotate(ang, this.shipPath.position);
  };
  this.drive = function() {
    this.shipPath.position += this.velocity;
  };
}
var ship = new Ship();
var path = new Path({
  strokeColor: '#ddd',
  strokeWidth: 1
});
function onFrame(event) {
  path.add(ship.shipPath.position);
  ship.drive();
}
I've left out the key handlers, which are how the ship is steered, but basically they call the this.rot() function with different angles depending on whether the right or left keys were pressed.
Basically my problem is that, according to this, when steering the ship, the ship should rotate around its shipPath.position, which would leave that point travelling in a straight line as the ship revolves around it. Instead, this is happening:
The curly bit in the path is from when I continuously steered the ship for a few seconds. Why is this happening? If the ship is revolving around its position, why does the position judder sideways as the ship rotates?
Here is a link to where I've got this working on my own website: http://aronadler.com/asteroid/
I would have loved to put this on JSBin or CodePen, but despite hours of work I have never been able to get the PaperScript working in JavaScript.
Here is a sketch. Because for some reason Sketch won't let arrow keys be detected, I've given it an automatic constant rotation. The effect is the same.
The reason for this is that path.bounds.center is not the center of the triangle, and the default center of rotation is path.bounds.center. See sketch: the red dots are bounds.center, the green rectangles are the bounds rectangles.
You want to rotate around the triangle's center (technically its centroid), which can be calculated by finding the point 2/3 of the way from a vertex to the midpoint of the opposite side.
Here's some code to calculate the centroid of your triangle:
function centroid(triangle) {
  var segments = triangle.segments;
  var vertex = segments[0].point;
  // Midpoint of the side opposite the first vertex:
  var opposite = segments[1].point - (segments[1].point - segments[2].point) / 2;
  // The centroid lies 2/3 of the way from the vertex to that midpoint:
  var c = vertex + (opposite - vertex) * 2/3;
  return c;
}
And here is an updated sketch showing how the center doesn't move relative to your triangle as it is rotated, when calculated as the centroid.
And I've updated your sketch to use the centroid rather than position. It now moves in a straight line.
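For example, the ship's rot function from the question would then become (a sketch building on the centroid helper above):

this.rot = function(ang) {
  this.steering.angle += ang;
  // Rotate around the triangle's centroid instead of bounds.center:
  this.shipPath.rotate(ang, centroid(this.shipPath));
};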

Glitches with Triangulation + Linear Interpolation + 3D projection

Intro
Hey!
Some weeks ago, I did a small demo for a JS challenge. The demo displays a landscape based on a procedurally-generated heightmap. To display it as a 3D surface, I evaluate the interpolated height of random points (Monte-Carlo rendering) and then project them.
At that time, I was already aware of some glitches in my method, but I was waiting for the challenge to be over before seeking help. I'm counting on you. :)
Problem
So the main error I get can be seen in the following screenshot:
Screenshot - Interpolation Error? http://code.aldream.net/img/interpolation-error.jpg
As you can see in the center, some points seem to float above the peninsula, forming a less-dense relief. It is especially obvious against the sea behind, because of the color difference, even though the problem seems to be global.
Current method
Surface interpolation
To evaluate the height of each point of the surface, I'm using triangulation + linear interpolation with barycentric coordinates, ie:
I find in which square ABCD my point (x, y) lies, with A = (X, Y), B = (X+1, Y), C = (X, Y+1) and D = (X+1, Y+1), X and Y being the truncated values of x, y (each point maps onto my heightmap).
I determine in which triangle, ABD or ACD, my point lies, using the condition isInABD = dx > dy, with dx, dy the decimal parts of x, y.
I evaluate the height of my point using linear interpolation:
if in ABD: height = h(B) + [h(A) - h(B)] * (1-dx) + [h(D) - h(B)] * dy
if in ACD: height = h(C) + [h(A) - h(C)] * (1-dy) + [h(D) - h(C)] * dx, with h(X) the height from the map.
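As a sanity check on these two formulas: on the shared diagonal dx = dy = t, both reduce to the same value, so the interpolated surface is at least continuous across the triangle boundary:

h(B) + [h(A) - h(B)] * (1-t) + [h(D) - h(B)] * t = h(A) * (1-t) + h(D) * t
h(C) + [h(A) - h(C)] * (1-t) + [h(D) - h(C)] * t = h(A) * (1-t) + h(D) * t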
Displaying
To display the point, I just convert (x, y, height) into world coordinates and project the vertex (using a simple perspective projection with yaw and pitch angles). I keep a zBuffer updated to check whether or not to draw the resulting pixel.
Attempts
My impression is that for some points I get a wrong interpolated height. I have thus searched for errors or unhandled boundary cases in my implementation of the triangulation + linear interpolation, but if there are any, I can't spot them.
I use the projection in other demos, so I don't think the problem comes from there. As for the zBuffering, I can't see how it could be related...
I'm running out of luck here... Any hints are most welcome!
Thanks for your attention, and have a nice day!
Annexe
JsFiddle - Demo
Here is a jsFiddle http://jsfiddle.net/PWqDL/ of the whole slightly simplified demo, for those who want to tweak around...
JsFiddle - Small test for the interpolation
As I was writing down this question, I got the idea to have a better look at the results of my interpolation. I implemented a simple test in which I use a 2x2 matrix containing some hue values, and I interpolate the intermediate colors before displaying them on the canvas.
Here is the jsFiddle: http://jsfiddle.net/y2K7n/
Alas, the results seem to match the expected behavior for the kind of "triangular" interpolation I'm doing, so I'm definitely running out of ideas.
Code sample
And here is the simplified, most-probably-faulty part of my JS code describing my rendering method (though the language doesn't matter much here, I think), given a square heightmap displayHeightMap of size (dim x dim) for a landscape of size (SIZE x SIZE):
for (k = 0; k < nbMonteCarloPointsByFrame; k++) {
  // Random float indices:
  var i = Math.random() * (dim-1),
      j = Math.random() * (dim-1),
      // Integer part (truncated):
      iTronc = i|0,
      jTronc = j|0,
      indTronc = iTronc*dim + jTronc,
      // Decimal part:
      iDec = i%1,
      jDec = j%1,
      // Now we want to interpolate the value of the float point from the surrounding points of our map. So we want to find in which triangle our point lies, to evaluate the weighted average of the 3 corresponding points.
      // We already know that our point is in the square defined by the map points (iTronc, jTronc), (iTronc+1, jTronc), (iTronc, jTronc+1), (iTronc+1, jTronc+1).
      // If we split this square into two triangles using the diagonal [(iTronc, jTronc), (iTronc+1, jTronc+1)], we can deduce in which triangle our point is with the following condition:
      whichTriangle = iDec < jDec, // ie "are we above or under the line j = jTronc + distanceBetweenLandscapePoints - (i-iTronc)"
      indThirdPointOfTriangle = indTronc + dim*whichTriangle + 1-whichTriangle, // Top-right point of the square or bottom-left, depending on which triangle we are in.
      // Interpolating the point's height:
      deltaHeight1 = (displayHeightMap[indTronc] - displayHeightMap[indThirdPointOfTriangle]),
      deltaHeight2 = (displayHeightMap[indTronc+dim+1] - displayHeightMap[indThirdPointOfTriangle]),
      height = displayHeightMap[indThirdPointOfTriangle] + deltaHeight1 * (1-(whichTriangle? jDec:iDec)) + deltaHeight2 * (!whichTriangle? jDec:iDec),
      posX = i*distanceBetweenLandscapePoints - SIZE/2,
      posY = j*distanceBetweenLandscapePoints - SIZE/2,
      posZ = height - WATER_LVL;
  // 3D Projection:
  var temp1 = cosYaw*(posY - camPosY) - sinYaw*(posX - camPosX),
      temp2 = posZ - camPosZ,
      dX = (sinYaw*(posY - camPosY) + cosYaw*(posX - camPosX)),
      dY = sinPitch*temp2 + cosPitch*temp1,
      dZ = cosPitch*temp2 - sinPitch*temp1,
      pixelY = dY / dZ * minDim + canvasHeight,
      pixelX = dX / dZ * minDim + canvasWidth,
      canvasInd = pixelY * canvasWidth*2 + pixelX;
  if (!zBuffer[canvasInd] || (dZ < zBuffer[canvasInd])) { // We check if what we want to draw will be visible or behind another element. If it will be visible (for now), we draw it and update the zBuffer:
    zBuffer[canvasInd] = dZ;
    // Color:
    a.fillStyle = a.strokeStyle = EvaluateColor(displayHeightMap, indTronc); // Personal tweaking.
    a.fillRect(pixelX, pixelY, 1, 1);
  }
}
Got it. And it was as stupid a mistake as expected: I was reinitializing my zBuffer every frame...
Usually that's what you should do, but in my case, each frame (i.e. each call of my Painting() function) adds details to the same image (a static scene drawn from a constant, given point of view).
If I reset my zBuffer at each call of Painting(), I lose the depth information of the points drawn during the previous calls. The corresponding pixels are thus considered blank, and will be repainted by any projected point, without any regard for its depth.
Note: without reinitialization, the zBuffer gets quite big. Another fix I should have made earlier was thus to convert the pixel positions of the projected points (and thus the indices of the zBuffer) into integer values:
pixelY = dY / dZ * minDim + canvasHeight +.5|0,
pixelX = dX / dZ * minDim + canvasWidth +.5|0,
canvasInd = pixelY * canvasWidth*2 + pixelX;
if (dZ > 0 && (!zBuffer[canvasInd] || (dZ < zBuffer[canvasInd]))) {
// We draw the point and update the zBuffer.
}
Fun fact
If the glitches appeared more obvious on relief with the sea behind it, it wasn't only because of the color difference, but because the hilly parts of the landscape need many more points to be rendered than flat areas (like the sea), given their stretched surface.
My simplistic Monte-Carlo sampling of points doesn't take this characteristic into account, which means that at each call of Painting(), the sea statistically gains more density than the land.
Because of the reinitialization of the zBuffer each frame, the sea was thus "winning the fight" in the areas of the picture where mountains should have covered it (explaining the "ghostly mountains" effect there).
Corrected JsFiddle
Corrected version for those interested: http://jsfiddle.net/W997s/1/
