These two snippets are the culprits:
function tr() {
  ctx.translate(-cameraX, -cameraY);
  ctx.scale(zoom, zoom);
}
function uiPosToWorldPos(pos) {
  return [(pos[0] + cameraX) / zoom, (pos[1] + cameraY) / zoom];
}
What I am doing is moving my world opposite to my camera to create a scrolling/parallax world (fairly standard). This is all well and good until I add camera zoom. I am having trouble finding a way to factor the zoom into the above functions without this one problem: the farther my camera is from (0,0) and the lower the zoom (zooming out), the farther my camera tends to drift away from the origin.
I want to be able to zoom in and out cleanly wherever I am, so that the same point in the world remains at the center of the camera regardless of zoom. The second function is necessary to determine where my mouse is in the world, and also so I can set my bounds and only draw the items that are on screen.
You calculate your scale coefficient and then multiply it by your object position.
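One common fix (a minimal sketch, not necessarily the only approach) is to treat cameraX/cameraY as the world point that should stay at the centre of the screen, apply the zoom about the screen centre, and invert exactly the same steps for the mouse; `canvas` below is assumed to be your drawing surface.
function tr() {
  ctx.translate(canvas.width / 2, canvas.height / 2); // zoom about the screen centre
  ctx.scale(zoom, zoom);
  ctx.translate(-cameraX, -cameraY);                  // then move to the camera
}
function uiPosToWorldPos(pos) {
  // invert the same three steps in reverse order
  return [
    (pos[0] - canvas.width / 2) / zoom + cameraX,
    (pos[1] - canvas.height / 2) / zoom + cameraY
  ];
}
With this setup, changing zoom alone keeps (cameraX, cameraY) fixed at the centre of the view, no matter how far the camera is from the origin.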
I'm currently writing a 3D implementation of the boids algorithm in p5.js, but I'm having trouble orienting my boids according to their direction (velocity). Rotations are limited to rotateX(), rotateY() and rotateZ(). The simplest solution that I feel should work goes along these lines:
push();
translate(this.pos);
rotateZ(createVector(this.vel.x, this.vel.y).heading());
rotateY(createVector(this.vel.x, this.vel.z).heading());
beginShape();
// Draw Boid Vertices..
endShape();
pop();
But it doesn't.
I've written a much smaller version of the program which contains only the orientation for randomly generated particles that go in a single direction. It is available here directly on the p5js website : https://editor.p5js.org/itsKaspar/sketches/JvypSPGGh
There is a default orbit control so you can zoom and drag the mouse to check the orientation of the particles.
Thanks so much, I've been stuck on this for half a day now
From your demo, the z component is flipped, and you can test this by trying only one of the rotations at a time. Second, chaining rotations in 3D this way will usually not do what you want, because rotating changes the "up" or "right" vector of the coordinate system attached to a given object. For example, rotating about the up vector (-y in p5), i.e. the yaw angle, will rotate the right vector. The second rotation then needs to be about the rotated right vector (now the pitch), so you can't just use rotateX/Y/Z, as they are still in world space instead of object space. Note that I'm completely ignoring roll in this solution, but if you look at the boids from the front and top angles, they should be aligned with the velocities.
// axis used for the first rotation: the velocity projected onto the XZ plane
var right = createVector(this.vel.x, 0, this.vel.z);
rotate(atan(this.vel.y / this.vel.x), right);
// then yaw about the world Y axis
rotateY(atan2(-this.vel.z, this.vel.x));
I have an object (a pen) in my scene, which is rotating around its axis in the render loop.
groupPen.rotation.y += speed;
groupPen.rotation.x += speed;
and I also have TrackballControls, which allow the user to rotate the whole scene.
What I now want is to get the "real" position of the pen (or its peak) and place small spheres to create a trail behind it.
This means I need to know where the camera is looking and place the trail spheres behind the peak of the pen, and exclude them from the animation and the TrackballControls.
What I tried is:
groupSphereTrail.lookAt(camera.position);
This didn't work: there was no reaction at all.
camera.add(groupSphereTrail);
This didn't work either: groupSphereTrail is then not in the view area, and I couldn't make it visible; manipulating position.z didn't help.
Then I tried something like casting a ray with the raycaster. The idea was to send a ray from the center of the camera through the peak of the pen and then draw the trail there. But then I still don't have the "real" position.
Another idea was to create a 2d vector of the current position of the pen peak and just draw an html element on top of the canvas:
var p = penPeak.position.clone();
var vector = p.project(camera);
vector.x = (vector.x + 1) / 2 * width;
vector.y = -(vector.y - 1) / 2 * height;
but this also doesn't work.
What could be another working solution?
Current progress:
https://zhaw.swissmade.xyz
(click on the cap of the pen to see the writing - this writing trail should stay at its place when you rotate the camera)
If I understood the question right, you want to show the trail as if it were drawn on the screen itself (screen space)?
yourTrailParticle.position.project(camera)
camera.add(yourTrailParticle)
That's the basic idea, but it gets a bit tricky with a PerspectiveCamera. You could set up a whole new THREE.Scene to hold the trail, and render it with a fixed-size orthographic camera.
The point is that .project() gives you a normalized screen-space coordinate of a world-space vector, and you need to keep it somehow in sync with that camera (since the screen is, too). The perspective camera has distortion, so you need to figure out the appropriate distance to map the coordinate to. With a separate scene, this may become easier.
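As a rough sketch of that separate-scene idea (the names trailScene, trailCamera and addTrailDot are illustrative, and a reasonably recent three.js is assumed):
var trailScene = new THREE.Scene();
var trailCamera = new THREE.OrthographicCamera(-1, 1, 1, -1, 0, 10);
function addTrailDot() {
  // normalized device coordinates (-1..1) of the pen tip, as seen by the main camera
  var ndc = penPeak.getWorldPosition(new THREE.Vector3()).project(camera);
  var dot = new THREE.Mesh(
    new THREE.SphereGeometry(0.01, 8, 8),
    new THREE.MeshBasicMaterial({ color: 0xff0000 })
  );
  dot.position.set(ndc.x, ndc.y, -1); // the ortho camera spans -1..1, so NDC maps directly
  trailScene.add(dot);
}
function render() {
  renderer.autoClear = false;
  renderer.clear();
  renderer.render(scene, camera);           // main scene (pen, TrackballControls)
  renderer.render(trailScene, trailCamera); // screen-space trail rendered on top
  requestAnimationFrame(render);
}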
I am trying to make a Three.js scene where I've got a dodecahedron. I want the camera to be zoomed in on one side of the dodecahedron and when a button is pressed I want it to zoom out, rotate until it is standing across another side and then zoom in again.
To make this clear:
If the camera were fully zoomed in on side 1 and I pressed "5", I would want the camera to zoom out, showing the dodecahedron, then rotate towards side 5 (or let the dodecahedron rotate so side 5 faces the camera?) and zoom in again. It's important that the camera is always set parallel with the base of the pentagon it's facing, not the top or any other rotation.
I thought it would be smart to start off with just a cube, to not start too complicated. I added some tweens (when pressing G) to illustrate some basic movement, but that doesn't look too good anymore in the fiddle. jsfiddle
Because I feel like I should have a function that does all this movement and calculation for me, I first tried to write down the position and rotation of each side view of the cube, so I might detect a pattern. I can see some pattern in the values I wrote down for the cube, but I do not know how to convert this into a working function, let alone for a dodecahedron. My noted values are:
side1 (0, 0, 600) (0, 0, 0)
side2 (600, 0, 0) (0, pi/2, 0)
side3 (0, 0, -600) (0, pi, 0)
side4 (-600, 0, 0) (0, -pi/2, 0)
side5 (0, 600, 0) (-pi/2, 0, 0)
side6 (0, -600, 0) (pi/2, 0, 0)
I can see some sort of recurrence happening and some relationships, but I can't see how to link them in a function. I think that would be a first step in getting to a function doing the same for a more complex shape. Could anyone point me in the direction I should be looking? I could of course work with a lot of if clauses, but that doesn't feel like the right way to go.
To solve the problem, the first thing to do is get the center coordinate of each side of the dodecahedron.
As you know, in three.js a mesh consists of triangles; every triangle has three points, and all the faces and vertices can be found in mesh.geometry.faces and mesh.geometry.vertices. In a dodecahedron, each side has three faces and five vertices, and I use each face normal to divide them into 12 groups that share the same normal and lie in the same plane. Then, with the 5 points of each side, we calculate the average coordinate to get the center coordinate (a rough sketch of this step follows).
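The sketch below assumes the classic THREE.Geometry with .faces/.vertices as described above (call geometry.computeFaceNormals() first if the face normals are missing); getSideCenters is an illustrative name:
function getSideCenters(geometry) {
  var groups = [];
  geometry.faces.forEach(function (face) {
    // faces lying in the same pentagon share (almost) the same normal
    var group = groups.find(function (g) { return g.normal.angleTo(face.normal) < 1e-4; });
    if (!group) {
      group = { normal: face.normal.clone(), vertexIndices: new Set() };
      groups.push(group);
    }
    [face.a, face.b, face.c].forEach(function (i) { group.vertexIndices.add(i); });
  });
  // average the five vertices of each side to get its centre (object space)
  return groups.map(function (g) {
    var center = new THREE.Vector3();
    g.vertexIndices.forEach(function (i) { center.add(geometry.vertices[i]); });
    return center.divideScalar(g.vertexIndices.size);
  });
}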
After getting the coordinates we need to rotate. One way is to rotate the dodecahedron and keep the camera still; another is to keep the dodecahedron and move the camera. In this case, I chose the second one.
Now the camera faces centerA; the green circle is the track of the camera, because we need to keep the distance from the camera to the dodecahedron center constant.
To get the target position, we just scale the centerB vector. Because centerB is in the object's coordinate system, we need to apply the mesh's world matrix to convert it to a world-system coordinate.
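In code that could look roughly like this (centerB is one of the side centres from above; mesh and camera are assumed to already exist):
// convert the side centre from object space to world space
var worldCenter = centerB.clone().applyMatrix4(mesh.matrixWorld);
// keep the camera at the same distance from the dodecahedron's centre
var radius = camera.position.distanceTo(mesh.position);
var targetPos = worldCenter.clone().sub(mesh.position).normalize().multiplyScalar(radius).add(mesh.position);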
Then we move the camera in an animation; the camera needs to follow an arc.
I use the parametric equation of a circle in 3D space to do that; for the equation, see Parametric Equation of a Circle in 3D Space. With this formula, I get the parameter θ1 at the camera position and θ2 at the target position, and I update the camera position with θ1 in every frame of the animation loop.
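One way to realise that arc without writing out the full parametric form is to rotate the start vector about the circle's axis (a sketch; t runs from 0 to 1 over the animation, and the names are illustrative):
function cameraOnArc(startPos, targetPos, center, t) {
  var a = startPos.clone().sub(center);
  var b = targetPos.clone().sub(center);
  var axis = new THREE.Vector3().crossVectors(a, b).normalize();
  // rotate the start vector part of the way (t) towards the target vector
  return a.clone().applyAxisAngle(axis, a.angleTo(b) * t).add(center);
}
Copying the result into camera.position each frame and calling camera.lookAt on the dodecahedron's centre traces the same arc.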
I added some comments on the jsfiddle. (It still has some bugs and needs updating.)
Here is another solution that keeps the camera still and rotates the object instead.
I have a tile-based isometric world and I can calculate which tile is underneath specific (mouse) coordinates by using the following calculations:
function isoTo2D(pt:Point):Point {
  var tempPt:Point = new Point(0, 0);
  tempPt.x = (2 * pt.y + pt.x) / 2;
  tempPt.y = (2 * pt.y - pt.x) / 2;
  return tempPt;
}
function getTileCoordinates(pt:Point, tileHeight:Number):Point {
  var tempPt:Point = new Point(0, 0);
  tempPt.x = Math.floor(pt.x / tileHeight);
  tempPt.y = Math.floor(pt.y / tileHeight);
  return tempPt;
}
(Taken from http://gamedevelopment.tutsplus.com/tutorials/creating-isometric-worlds-a-primer-for-game-developers--gamedev-6511, this is a flash implementation but the maths is the same)
However, my problem comes in when I have tiles that have different elevation levels:
In these scenarios, due to the height of some tiles which have a higher elevation, the tiles (or portions of tiles) behind are covered up and shouldn't be able to be selected by the mouse, instead selecting the tile which is in front of it.
How can I calculate the tile by mouse coordinates taking into account the tiles' elevation?
I'm using a javascript and canvas implementation.
There is a technique for capturing the object under the mouse on a canvas without needing to convert mouse coordinates into your "world" coordinates. It is not perfect and has some drawbacks and restrictions, yet it does its job in some simple cases.
1) Position another canvas atop your main canvas and set its opacity to 0. Make sure the second canvas has the same size as, and exactly overlaps, your main one.
2) Whenever you draw your interactive objects to the main canvas, draw and fill the same objects on the second canvas, but using one unique color per object (from #000000 to #ffffff).
3) Attach the mouse event handling to the second canvas.
4) Read the pixel (e.g. via getImageData) on the second canvas at the mouse position to get the "id" of the object clicked or hovered over (a minimal sketch follows this list).
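A minimal sketch of the colour-id idea (mainCanvas, drawShape and the objects array are illustrative names; here the picking canvas is simply kept off-screen rather than overlaid with opacity 0, which follows the same principle):
var pickCanvas = document.createElement('canvas');
pickCanvas.width = mainCanvas.width;
pickCanvas.height = mainCanvas.height;
var pickCtx = pickCanvas.getContext('2d');
function idToColor(id) {
  // encode an integer id (1..16777215) as a #rrggbb fill colour
  return '#' + id.toString(16).padStart(6, '0');
}
function drawPickingScene(objects) {
  pickCtx.clearRect(0, 0, pickCanvas.width, pickCanvas.height);
  objects.forEach(function (obj, i) {
    pickCtx.fillStyle = idToColor(i + 1); // id 0 is reserved for "background"
    drawShape(pickCtx, obj);              // same geometry as on the main canvas, flat-filled
  });
}
mainCanvas.addEventListener('click', function (e) {
  var d = pickCtx.getImageData(e.offsetX, e.offsetY, 1, 1).data;
  var id = (d[0] << 16) | (d[1] << 8) | d[2];
  if (id > 0) console.log('clicked object', id - 1);
});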
The main advantage is the WYSIWYG principle: if everything is done properly, you can be sure that objects on the main canvas are in the same place as on the second canvas, so you don't need to worry about canvas resizing or object-depth calculations (like in your case) to get the right object.
The main drawback is the need to "double-render" the whole scene, but this can be optimized by not drawing on the second canvas when it isn't necessary, for example:
in an "idle" scene state, when interactive objects stay in place and wait for user action.
in a "locked" scene state, when something is being animated and the user is not allowed to interact with objects.
The main restriction is the maximum number of interactive objects in the scene (up to #ffffff, i.e. 16,777,215 objects).
So... Not recommended for:
Games with a large number of interactive objects in a scene (bad performance).
Fast-paced games where interactive objects are constantly moved, created, or destroyed (bad performance, issues with reusing ids).
Good for:
GUI handling
Turn-based games / slow-paced puzzle games.
Your hit-test function will need access to all your tiles in order to determine which one is hit. It will then perform hit tests starting with the tallest elevation.
Assuming that you only have discrete (integer) tile heights, the general algorithm would be like this (pseudocode, assuming that tiles is a two-dimensional array of objects with an elevation property):
function getTile(mousePt, tiles) {
  var maxElevation = getMaxElevation(tiles);
  var minElevation = getMinElevation(tiles);
  var elevation;
  for (elevation = maxElevation; elevation >= minElevation; elevation--) {
    // convert, compensating for the vertical offset of this elevation level
    var pt = getTileCoordinates(mousePt, elevation);
    if (tiles[pt.x][pt.y].elevation === elevation) {
      return pt;
    }
  }
  return null; // no tile hit
}
This code would need to be adjusted for arbitrary elevations and could be optimized to skip elevations that don't contain any tiles.
Note that my pseudocode ignores the vertical sides of a tile, so clicks on them will select the (lower-elevation) tile obscured by the vertical side. If vertical sides need to be accounted for, then a more generic surface hit-detection approach will be needed. You could visit every tile (from closest to farthest away) and test whether the mouse coordinates are in the "roof" or in one of the viewer-facing "wall" polygons.
If the map is not rotatable and is exactly the same as the picture you posted here:
When you are drawing polygons, save each tile's polygon(s) in a polygon array. Then sort the array only once, using the distance from each polygon's tile to you (closest first, farthest last), while keeping the polygons grouped by tile index.
When a click event happens, get the x,y coordinates of the mouse and do a point-in-polygon test, starting from the first element of the array and moving towards the last. Stop at the first element that is hit.
No matter how high a tile is, it will not hide any tile that is closer to you (or even at the same distance from you).
The point-in-polygon test is already solved:
Point in Polygon Algorithm
How can I determine whether a 2D Point is within a Polygon?
Point in polygon
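A compact sketch of that front-to-back test (tilePolygons is assumed to be the pre-sorted array described above, closest tile first, with each entry holding its canvas-space corner points):
function pointInPolygon(px, py, points) {
  // standard ray-casting test
  var inside = false;
  for (var i = 0, j = points.length - 1; i < points.length; j = i++) {
    var a = points[i], b = points[j];
    if ((a.y > py) !== (b.y > py) &&
        px < (b.x - a.x) * (py - a.y) / (b.y - a.y) + a.x) {
      inside = !inside;
    }
  }
  return inside;
}
function pickTile(mouseX, mouseY, tilePolygons) {
  for (var k = 0; k < tilePolygons.length; k++) {
    var poly = tilePolygons[k];
    if (pointInPolygon(mouseX, mouseY, poly.points)) {
      return { x: poly.tileX, y: poly.tileY }; // first hit is the closest tile
    }
  }
  return null;
}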
You can even check every pixel of the canvas once with this function and save the results into a 2D array, vect2[x][y], which gives the i,j indexes of the tile for the x,y coordinates of the mouse; you can then use this as a very fast lookup table.
Pros:
Fast and parallelizable using web workers (if there are millions of tiles).
Scalable to multiple isometric maps, using arrays of arrays of polygons sorted by distance to you.
Elevation doesn't decrease performance, because there are at most three polygons per tile.
Doesn't need any isometric-to-2D conversion: just the coordinates of the polygon corners on the canvas and the coordinates of the mouse on the same canvas.
Cons:
You need the coordinates of each corner, if you don't have them already.
Clicking a corner will pick the closest tile to you, even though the corner lies on four tiles at the same time.
The answer, oddly, is written up in the Wikipedia page, in the section titled "Mapping Screen to World Coordinates". Rather than try to describe the graphics, just read the section three times.
You will need to determine exactly which isometric projection you are using, often by measuring the tile size on the screen with a ruler.
Oh Great and Knowledgeable Stack Overflow, I humbly request your great minds' assistance...
I'm using the three.js library, and I need to implement a 'show extents' button. It will move the camera to a position such that all the objects in the world are visible in the camera view (provided they are not blocked, of course).
I can find the bounding box of all the objects in the world, say they are w0x,w0y,w0z and w1x,w1y,w1z
How can I, given these two bounds, place the camera such that it will have a clear view of the edges of the box?
Obviously there will have to be a 'side' chosen to view from...I've googled for an algorithm to no avail!
Thanks!
So let's say that you have picked a face, and that you are picking a camera position such that the camera's line of sight is parallel to one of the axes.
Let's say that the face has a certain width, "w", and let's say that your camera has a horizontal field-of-view "a". What you want to figure out is what is the distance, "d" from the center of the face that the camera should be to see the whole width.
If you draw it out you will see that you basically have an isosceles triangle whose base is length w and with the angle a at the apex.
Not only that, but the angle bisector of the apex angle forms two identical right triangles, and its length (to the base) is the distance we need to figure out.
Trig tells us that the tangent of an angle is the ratio of the opposite and adjacent sides of the triangle. So
tan(a/2) = (w/2) / d
simplifying to:
d = w / (2 * tan(a/2))
So if you are placing the camera some axis-aligned distance from one of your bounding box faces then you just need to move d distance along the axis of choice.
Some caveats: make sure you are using radians as input to the JavaScript trig functions. Also, you may have to compute this again for your face height and the camera's vertical field of view, and pick the farther distance, if your face is not square.
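A hedged three.js sketch of that computation (fitCameraToBox is an illustrative name; it assumes a fairly recent three.js where Box3#getSize/getCenter take a target vector, and a box built e.g. with new THREE.Box3().setFromObject(scene)):
function fitCameraToBox(camera, box) {
  var size = box.getSize(new THREE.Vector3());
  var center = box.getCenter(new THREE.Vector3());
  var vFov = camera.fov * Math.PI / 180;                        // vertical FOV in radians
  var hFov = 2 * Math.atan(Math.tan(vFov / 2) * camera.aspect); // horizontal FOV
  // d = w / (2 * tan(a/2)), once for the width and once for the height; keep the larger
  var distW = size.x / (2 * Math.tan(hFov / 2));
  var distH = size.y / (2 * Math.tan(vFov / 2));
  var dist = Math.max(distW, distH) + size.z / 2;               // back off past the near face of the box
  camera.position.set(center.x, center.y, center.z + dist);
  camera.lookAt(center);
}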
If you want to fit the bounding box from an arbitrary angle you can use the same ideas, but first you have to find the (aligned) bounding box of the scene projected onto a plane perpendicular to the camera's line of sight.