I move an Object3D with HammerJS in an AR space.
It works fine as long as I don't move my phone (which is the camera)...
const newTranslation = new THREE.Vector3(
    this._initTranslation.x + e.deltaX,
    this._initTranslation.y,
    this._initTranslation.z + e.deltaY
);
(this._initTranslation holds the original position of the Object3D.)
When I move around, the movement still follows the x and z axes I began with. (If I move my finger up on the phone to move the object backwards along the z-axis, it instead moves from left to right.)
I know that I have to take the camera rotation into account to transform from camera space to world space, but I have no clue how to do that.
Thanks in advance for your help.
I fixed it myself. Here is my solution in case someone needs it:
I now rotate the pan delta by the camera's rotation angle:
const movePoint = new THREE.Vector2(e.deltaX, e.deltaY);
movePoint.rotateAround(new THREE.Vector2(0, 0), this.getCameraAngle());
const newTranslation = new THREE.Vector3(
    this._initTranslation.x + movePoint.x,
    this._initTranslation.y,
    this._initTranslation.z + movePoint.y
);
And for the camera angle:
public getCameraAngle(): number {
    // Project the camera's world direction onto the XZ plane
    const cameraDir = new THREE.Vector3();
    this._arCamera.getWorldDirection(cameraDir);
    cameraDir.setY(0);
    cameraDir.normalize();
    // Angle relative to the initial view direction (0, 0, -1);
    // Math.atan2(-1, 0) is the angle of the -Z axis
    return Math.atan2(cameraDir.z, cameraDir.x) - Math.atan2(-1, 0);
}
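For completeness, here is roughly how these pieces could be wired up with HammerJS pan events. This is only a sketch: element and object3d are placeholder names, and in a real app you would likely scale the pixel deltas down to world units.

const hammer = new Hammer(element); // HammerJS 2.x
// Hammer's default pan recognizer is horizontal-only; enable all directions
hammer.get('pan').set({ direction: Hammer.DIRECTION_ALL });

let initTranslation;

hammer.on('panstart', () => {
    // remember the object's position when the gesture begins
    initTranslation = object3d.position.clone();
});

hammer.on('panmove', (e) => {
    // rotate the screen-space pan delta into world space
    const movePoint = new THREE.Vector2(e.deltaX, e.deltaY);
    movePoint.rotateAround(new THREE.Vector2(0, 0), getCameraAngle());
    object3d.position.set(
        initTranslation.x + movePoint.x,
        initTranslation.y,
        initTranslation.z + movePoint.y
    );
});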
There's no simple curved-line tool in turf.js, nor is there an easy way to do it in mapbox (so far as I can see), so I've created a workaround based on this answer in this thread.
However, the curve it creates isn't very smooth or satisfying, and it has an inconsistent hump depending on the angle/length of the line.
Ideally, I'd like an arc that is always in a nice, rounded form.
My approach takes the start and end points and draws a line between them. I then offset the midpoint by distance / 5 and apply a bearing, and finally connect the three points with a turf.bezierSpline.
const start = [parseFloat(originAirport.longitude), parseFloat(originAirport.latitude)];
const end = [
parseFloat(destinationAirport.longitude),
parseFloat(destinationAirport.latitude),
];
const distance = turf.distance(start, end, { units: 'miles' });
const midpoint = turf.midpoint(start, end);
const destination = turf.destination(midpoint, distance / 5, 20, { units: 'miles' });
// curvedLine gets rendered to the page
const curvedLine = turf.bezierSpline(
turf.lineString([start, destination.geometry.coordinates, end]),
);
Desired curvature:
Well, that question was asked a very long time ago, but I recently encountered this problem.
If anybody is still wondering: this code is good in general, but you've missed one detail. We can't use the hardcoded bearing value 20 in the turf.destination call, because it's incorrect for most cases. We need the moved midpoint to sit exactly in the middle of our geometry, so we have to find the right angle.
const bearing = turf.bearing(start, end);
Then, if we want our arc to be on the left side of our line, we add 90 degrees to the calculated bearing; if on the right side, we subtract 90 degrees:
const leftSideArc = bearing + 90 > 180 ? -180 + (bearing + 90 - 180) : bearing + 90;
NOTE: bearing is a value between -180 and 180 degrees, so the result has to be wrapped back into that range in case it exceeds it (which is what the ternary above does).
And then we can pass our bearing to the destination method:
const destination = turf.destination(midpoint, distance / 5, leftSideArc, { units: 'miles' });
Now we have a perfect arc.
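Putting it all together, the corrected version of the original snippet might look like this (same assumed inputs, originAirport and destinationAirport):

const start = [parseFloat(originAirport.longitude), parseFloat(originAirport.latitude)];
const end = [
  parseFloat(destinationAirport.longitude),
  parseFloat(destinationAirport.latitude),
];

const distance = turf.distance(start, end, { units: 'miles' });
const midpoint = turf.midpoint(start, end);

// perpendicular bearing, wrapped back into the -180..180 range
const bearing = turf.bearing(start, end);
const leftSideArc = bearing + 90 > 180 ? -180 + (bearing + 90 - 180) : bearing + 90;

const destination = turf.destination(midpoint, distance / 5, leftSideArc, { units: 'miles' });
const curvedLine = turf.bezierSpline(
  turf.lineString([start, destination.geometry.coordinates, end]),
);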
I need some advice:
When we click on the second tooth from the right, the unexpected result is that the upper teeth are colored:
I will describe step by step what the code does.
1) We get the coordinates where the user clicked inside the canvas:
coordinates relative to the canvas 212.90908813476562 247.5454559326172
The previous values make sense, because we clicked quite a bit down and to the right.
2) We normalize the coordinates between -1 and 1:
normalizedCoordinates x,y -0.03223141756924719 -0.12520661787553267
The previous numbers make sense, because the click is below the center and to the left:
The code which gets and prints the relative coordinates and finally normalizes them is:
getNormalizedCoordinatesBetween0And1(event, canvas) {
let coordinatesVector = new THREE.Vector2();
console.log('coordinates relative to the canvas',
event.clientX - canvas.getBoundingClientRect().left,
event.clientY - canvas.getBoundingClientRect().top);
coordinatesVector.x = ( (event.clientX - canvas.getBoundingClientRect().left) /
canvas.width ) * 2 - 1;
coordinatesVector.y = -( (event.clientY - canvas.getBoundingClientRect().top) /
canvas.height ) * 2 + 1;
return coordinatesVector;
}
3) We get the world coordinates using the THREE raycaster, casting it from the normalized coordinates: -0.03223141756924719 -0.12520661787553267
The coordinates given by THREE, whose origin is at the center, are:
Coordinates obtained using THREE Raycast -3.1634989936945734 -12.288972670909427
If we look again at the canvas' dimensions and the image position:
It makes sense that the THREE coordinates are negative in x and negative in y, which tells us that the clicked tooth is slightly below and to the left of the center.
The code for this step is:
getCoordinatesUsingThreeRaycast(coordinatesVector, sceneManager) {
let raycaster = new THREE.Raycaster();
raycaster.setFromCamera(coordinatesVector, sceneManager.camera);
const three = raycaster.intersectObjects(sceneManager.scene.children);
if (three[0]) {
console.warn('Coordinates obtained using THREE Raycast',
three[0].point.x, three[0].point.y);
coordinatesVector.x = three[0].point.x;
coordinatesVector.y = three[0].point.y;
return coordinatesVector;
}
}
4) From the coordinates given by THREE, we move the origin to the top left, turning them into an IJ coordinate system. The math is:
IJx = abs(coordinatesVector.x + (slice.canvas.width / 2)) = abs(-3 + (352 / 2)) = abs(-3 + 176) = 173
IJy = abs(coordinatesVector.y - (slice.canvas.height / 2)) = abs(-12 - (204 / 2)) = abs(-12 - 102) = 114
And our program gives us: 172.83 and 114.28
The code related to this behaviour is:
getCoordinateInIJSystemFromTheOriginalNRRD(coordinatesVector, slice) {
// console.error('Coordenada::IJ from NRRD');
let IJx = Math.abs(coordinatesVector.x + (slice.canvas.width / 2));
console.log('Coordinate::IJx', IJx);
console.log('Coordinate from THREE::', coordinatesVector.x);
console.log('slice.canvas.width ', slice.canvas.width);
let IJy = Math.abs(coordinatesVector.y - (slice.canvas.height / 2));
console.log('Coordinate::IJy', IJy);
console.log('Coordinate from THREE::', coordinatesVector.y);
console.log('slice.canvas.height', slice.canvas.height);
return {IJx, IJy}
}
5) Our fifth step is to scale the point we got from the visible NRRD, (173, 114), to fit the dimensions of the original, bigger NRRD.
This is because the visible image is a small representation of the original image, and our program holds the data of the big image:
If we compute the coordinates by hand:
i = round(IJx * slice.canvasBuffer.width / slice.canvas.width) = round(172.83 * 1000 / 352) = round(172.83 * 2.84) = 491
j = round(IJy * slice.canvasBuffer.height / slice.canvas.height) = round(114.28 * 580 / 204) = round(114.28 * 2.84) = 325
And indeed our program gives us: 491, 325
Coordinates after converting IJ to OriginalNrrd reference system 491 325
The code to get the point in the original NRRD:
/**
 * @member {Function} getStructuresAtPosition Returns a list of structures from the labels map stacked at this position
 * @memberof THREE.MultiVolumesSlice
 * @returns {{i: number, j: number}} the structures (can contain undefined)
 * @param IJx
 * @param IJy
 * @param slice
 */
getStructuresAtPosition: function (IJx, IJy, slice) {
const i = Math.round(IJx * slice.canvasBuffer.width / slice.canvas.width);
const j = Math.round(IJy * slice.canvasBuffer.height / slice.canvas.height);
console.log('slice.canvasBuffer.width', slice.canvasBuffer.width);
console.log('slice.canvasBuffer.height', slice.canvasBuffer.height);
console.log('slice.canvas.width', slice.canvas.width);
console.log('slice.canvas.height', slice.canvas.height);
console.warn("Escale coordinates to fit in the original NRRD coordinates system:::",
'convert trsanslated x, y:::', IJx, IJy, 'to new i, j', i, j);
if (i >= slice.iLength || i < 0 || j >= slice.jLength || j < 0) {
return undefined;
}
return {i, j};
},
6) Finally we use the calculated coordinates, (491, 325), to get the index of the clicked segment; in this case our program gives us 15, which means that the clicked area has a gray level of 15.
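(The lookup itself isn't shown here; presumably it amounts to indexing the flat labels buffer, along these lines, where labelData is a hypothetical name and iLength is the row width from the slice:)

// Hypothetical: read the segment index (gray level) stored at voxel (i, j)
function getLabelAt(labelData, iLength, i, j) {
    return labelData[j * iLength + i];
}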
Therefore we can see that if we click on the second tooth (counting from the left) of the lower jaw, for some reason the program thinks we are clicking on the teeth of the upper part:
Could you please help me find out why the clicked and coloured segment is offset from the point where you click? Thank you for your time.
EDIT: Additional information:
Thank you @manthrax for your information.
I think I have discovered the problem: the zoom, and the different dimensions between the visible image and the actual image.
For example, with the default distance between the camera and the NRRD, 300, we get (i,j) = (863,502).
With distance 249, the coordinates (i,j) are (906,515).
Finally, if we get as close as a distance of 163, the coordinates (i,j) are (932,519).
In each case I clicked on the bottom-left corner of the visible image.
The point is that the smaller the distance between the camera and the image, the closer the clicked point is to the real one.
The real one is: (1000,580)
And we are clicking on:
Could you help me please?
This is a common problem. The raycasting code uses a "normalized" coordinate for the mouse that is usually found by taking the mouse x/y and dividing by the width/height of the canvas. But if your code is mistakenly using different dimensions than the actual canvas width/height to get those coordinates, then you get these kinds of problems: for instance, picking that works fine in the upper left corner but gets progressively "off" the further down and right you go.
Unfortunately, without a working repro of your problem I can't show you how to fix it, but I bet dollars to donuts the problem is in using
canvas.getBoundingClientRect() to compute your mouse coordinates instead of using the regular canvas.width and canvas.height.
canvas.getBoundingClientRect() is going to give you back a rectangle that is not equal to the canvas width and height, but the raycaster is expecting the coordinates minus the canvas.clientLeft/canvas.clientTop of the canvas, divided by canvas.width and canvas.height.
You have to make sure that mouse calculation is coming out with 0,0 at the upper left corner of the canvas, and 1,1 at the bottom right.
https://developer.mozilla.org/en-US/docs/Web/API/Element/getBoundingClientRect
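For example, one self-consistent pattern (a sketch, not the asker's exact setup) is to use the bounding rect for both the offset and the divisor, so the two can never disagree:

// Map a mouse event to the -1..1 NDC range (y flipped) that
// THREE.Raycaster.setFromCamera expects.
function getPointerNDC(event, canvas) {
    const rect = canvas.getBoundingClientRect();
    return new THREE.Vector2(
        ((event.clientX - rect.left) / rect.width) * 2 - 1,
        -((event.clientY - rect.top) / rect.height) * 2 + 1
    );
}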
Another problem I see in your screenshots that may eventually bite you...
Your canvases are 400x400 fixed size, but part of the canvas is hidden by its container.
If you ever try to implement things like zooming, you'll find that the zoom will want to zoom around the canvas center, not the center of the container, so it will look wrong.
Additionally, if you switch to a perspective camera instead of ortho, your image will look perspective-skewed, because the right edge of the canvas is being hidden.
Generally I think it's good practice to always make the canvas position:absolute; width:100%; height:100%; padding:0px; because, at the end of the day, it is actually a virtual viewport into a 3D scene.
Just setting those params on your canvas might even fix your mouse offset problem, since it might cause the canvas to not be hidden off the screen edge, thereby making its dimensions and those of getBoundingClientRect the same.
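If it helps, here is the same styling applied from script (a sketch that assumes a three.js WebGLRenderer whose domElement is the canvas; the renderer name is assumed):

// Make the canvas fill its container so its on-screen size and
// getBoundingClientRect stay in agreement.
const canvas = renderer.domElement; // assumed three.js renderer
canvas.style.position = 'absolute';
canvas.style.width = '100%';
canvas.style.height = '100%';
canvas.style.padding = '0px';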
I have a given center on the map, [x1, y1]. From that center I am drawing a circle with a 1-mile radius. I need to generate 8 more points around the circle; the distance from each point to the center should be 1 mile, so they lie on the circle's bounds. I do know the formulas to get x2, y2, but the problem is they don't apply to a map of the Earth, since it isn't a perfect sphere.
I've tried using this, but with no luck.
Could anyone point me somewhere, or maybe I got this wrong?
Edit: solved!
Reading carefully through Movable Type Scripts, I found this (slightly modified for my use):
// Helpers from the Movable Type Scripts site (needed for the code below):
Number.prototype.toRadians = function () { return this * Math.PI / 180; };
Number.prototype.toDegrees = function () { return this * 180 / Math.PI; };

// distance in metres, bearing in degrees, center as [lat, lng]
let getPoint = (distance, bearing, center) => {
    let δ = Number(distance) / 6371e3; // angular distance (mean Earth radius 6371 km)
    let θ = Number(bearing).toRadians();
    let φ1 = center[0].toRadians();
    let λ1 = center[1].toRadians();
    let sinφ1 = Math.sin(φ1), cosφ1 = Math.cos(φ1);
    let sinδ = Math.sin(δ), cosδ = Math.cos(δ);
    let sinθ = Math.sin(θ), cosθ = Math.cos(θ);
    let sinφ2 = sinφ1 * cosδ + cosφ1 * sinδ * cosθ;
    let φ2 = Math.asin(sinφ2);
    let y = sinθ * sinδ * cosφ1;
    let x = cosδ - sinφ1 * sinφ2;
    let λ2 = λ1 + Math.atan2(y, x);
    // normalize longitude to -180..+180
    return [φ2.toDegrees(), (λ2.toDegrees() + 540) % 360 - 180];
};
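To generate the eight points, you can then call getPoint at 45-degree steps. Note the distance is in metres (1 mile ≈ 1609.34 m), since the Earth radius constant above is 6371e3; the center coordinates below are hypothetical:

const center = [40.7484, -73.9857]; // [lat, lng]
const points = [];
for (let bearing = 0; bearing < 360; bearing += 45) {
    points.push(getPoint(1609.34, bearing, center));
}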
It did solve my problem.
You are trying to solve what is known as the first (or direct) geodetic problem. Knowing this name will make your research easier.
As pointed out by the answers to "How to draw polyline perpendicular to another polyline using Leaflet" and "Find destination coordinates given starting coordinates, bearing, and distance", your main options for approaching this problem in javascript are cheap-ruler for small(ish) areas and geographiclib for large distances.
cheap-ruler tends to be very fast but inaccurate, and geographiclib tends to be slower but very accurate.
You might find other implementations, each with its own compromises. Geodesy is hard, so there is no "one true way" to calculate distances or azimuths.
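For a 1-mile radius, cheap-ruler should be more than accurate enough. A minimal sketch (assuming cheap-ruler is installed; the center coordinates are hypothetical):

const CheapRuler = require('cheap-ruler');

const center = [-73.9857, 40.7484];               // [lng, lat]
const ruler = new CheapRuler(center[1], 'miles'); // ruler tuned to this latitude

const points = [];
for (let bearing = 0; bearing < 360; bearing += 45) {
    points.push(ruler.destination(center, 1, bearing)); // 1 mile out
}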
I am trying to recreate the game Asteroids. This is a sample of the code for the Ship object constructor (I am using a constructor function and not an object literal, because "this" doesn't work properly when referring to variables in a literal):
function Ship(pos) {
var position = pos ? pos : view.center;
var segments = [
new Point(position) + new Point(0, -7.5), // Front of ship
new Point(position) + new Point(-5, 7.5), // Back left
new Point(position) + new Point(0, 3.5), // Rear exhaust indentation
new Point(position) + new Point(5, 7.5) // Back right
]
this.shipPath = new Path.Line({
segments: segments,
closed: true,
strokeColor: '#eee',
strokeWidth: 2
});
this.velocity = new Point(0, -1);
this.steering = new Point(0, -1);
this.rot = function(ang) {
this.steering.angle += ang;
this.shipPath.rotate(ang, this.shipPath.position);
}
this.drive = function() {
this.shipPath.position += this.velocity;
}
}
var ship = new Ship();
var path = new Path({
strokeColor: '#ddd',
strokeWidth: 1
});
function onFrame(event) {
path.add(ship.shipPath.position);
ship.drive();
}
I've left out the key handlers, which are how the ship is steered, but basically what they do is call the this.rot() function with different angles depending on whether the right or left buttons were pressed.
Basically my problem is that, according to this, when steering the ship, the ship should rotate around its shipPath.position, which would leave that point travelling in a straight line as the ship revolves around it. Instead, this is happening:
The curly bit in the path is from when I continuously steered the ship for a few seconds. Why is this happening? If the ship is revolving around its position, why does the position judder sideways as the ship rotates?
Here is a link to where I've got this working on my own website: http://aronadler.com/asteroid/
I would have loved to put this on jsbin or codepen, but despite hours of work I have never been able to get the paperscript working in javascript.
Here is a sketch. Because for some reason Sketch won't detect arrow keys, I've given it an automatic constant rotation. The effect is the same.
The reason for this is that path.bounds.center is not the center of the triangle, and the default center of rotation is path.bounds.center. See sketch: the red dots are bounds.center, the green rectangles are the bounds rectangles.
You want to rotate around the triangle center (technically centroid) which can be calculated by finding the point 2/3 of the way from a vertex to the midpoint of the opposite side.
Here's some code to calculate the centroid of your triangle:
function centroid(triangle) {
    var segments = triangle.segments;
    var vertex = segments[0].point;
    // midpoint of the side opposite the first vertex
    var opposite = segments[1].point - (segments[1].point - segments[2].point) / 2;
    // the centroid lies 2/3 of the way from the vertex to that midpoint
    var c = vertex + (opposite - vertex) * 2 / 3;
    return c;
}
And here is an updated sketch showing that, when calculating the centroid, the center doesn't move relative to your triangle as it is rotated.
And I've updated your sketch to use the centroid rather than position. It now moves in a straight line.
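In the Ship constructor above, that amounts to swapping the rotation pivot (a sketch against the question's code):

this.rot = function(ang) {
    this.steering.angle += ang;
    // rotate around the triangle's centroid instead of the bounds-based position
    this.shipPath.rotate(ang, centroid(this.shipPath));
}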
I have an issue with my current camera code. In my app, I can rotate the model that is being viewed, but I also need to be able to "walk" the model. This means that the user can change the relative orientation of the model, but walking and rotating still need to be done relative to the plane of the model.
For instance, imagine looking at the model head on, like a building, such that you see the side of the building. If the user presses "W" in this case, they will move closer to the building. Now, they could choose to rotate the model's pitch such that they are now looking at the roof of the model (or the user could choose to go up high and then look down at the ground). In this case, hitting "W" needs to maintain the existing altitude and move forward, "up" towards the top of the screen. In general, the motion needs to feel as though you are walking relative to the plane of the model, even if the plane has been rotated relative to your point of view.
The same problem exists with looking right and left. Suppose you are again looking at the side of the building, and hit the arrow keys to look right or left. The rotation at this point is in the same plane as the model. But if the user rotates the pitch of the model to look at the roof and then looks right, the view rotates about the camera's position instead, and it looks like the model rotated away, rather than just spinning around. Imagine just standing on the ground looking down: rotating the camera should look like the world is spinning, but that's not what I get. I get the model rotating away, as though you rotate your eye to look towards the sky.
Here is my update function:
mat4.identity(this.eyeMatrix);
mat4.identity(this.orbitMatrix);
mat4.identity(this.mvMatrix);
mat4.fromRotationTranslation(this.eyeMatrix, this.eyeRotation, [0, 0, 0]);
mat4.fromRotationTranslation(this.orbitMatrix, this.orbitRotation, [0, 0, 0]);
// mvMatrix = T(orbit) * R(orbit) * T(eye) * R(eye)
mat4.translate(this.mvMatrix, this.mvMatrix, this.orbit);
mat4.multiply(this.mvMatrix, this.mvMatrix, this.orbitMatrix);
mat4.translate(this.mvMatrix, this.mvMatrix, this.eye);
mat4.multiply(this.mvMatrix, this.mvMatrix, this.eyeMatrix);
this.getModelViewMatrix();
this.getProjectionMatrix();
this.getNormalMatrix();
Where orbit and eye are vec3's, orbit and eye matrix are mat4's, and eyeRotation and orbitRotation are quaternions.
I have this for changing orbit orientation:
this.orbitYaw += yawAmount;
this.orbitPitch += pitchAmount;
var orbitRotation = this.orbitRotation;
var rotPitch = this.createOrbitPitchRotation();
quat.copy(orbitRotation, rotPitch);
var rotYaw = quat.create();
quat.setAxisAngle(rotYaw, this.up, this.orbitYaw);
quat.multiply(orbitRotation, rotYaw, orbitRotation);
this.update();
I have this for changing eye orientation:
var rotYaw = quat.create();
quat.setAxisAngle(rotYaw, this.up, yawAmount);
quat.multiply(this.eyeRotation, rotYaw, this.eyeRotation);
quat.rotateX(this.eyeRotation, this.eyeRotation, pitchAmount);
quat.normalize(this.eyeRotation, this.eyeRotation);
this.update();
And finally, I have this for changing eye position (forward):
function moveEye(direction, velocity) {
vec3.scale(direction, direction, velocity);
vec3.add(this.eye, this.eye, direction);
this.update();
};
function moveEyeForward(velocity) {
var dir = vec3.fromValues(0, 0, 0);
var right = this.getEyeRightVector();
vec3.cross(dir, right, this.up);
vec3.normalize(dir, dir);
this.moveEye(dir, velocity);
this.update();
};
function getEyeRightVector() {
    // First column of the rotation matrix built from eyeRotation,
    // i.e. the camera's local +X (right) axis in world space
    var q = this.eyeRotation;
    var qx = q[0], qy = q[1], qz = q[2], qw = q[3];
    var x = 1 - 2 * (qy * qy + qz * qz);
    var y = 2 * (qx * qy + qw * qz);
    var z = 2 * (qx * qz - qw * qy);
    return vec3.fromValues(x, y, z);
};
So the question is, how do I ensure that eye motion is always relative to the plane of the model (relative to plane of orbit rotation)?
...