The title of this question might be a bit ambiguous, but I don't know how to phrase it in one line.
Basically I've got this situation: there is a perspective camera in the scene and a mesh. The mesh is NOT centered at the origin of the axis.
The camera points directly at the center of this object, and its position (I mean literally the "position" property of the Three.js camera object) is relative to the center of the object, not the origin; so it works in another coordinate system.
My question is: how can I get the position of the camera not with respect to the object center, but with respect to the origin of the "global" coordinate system?
To clarify, the situation is this. In this image you can see a hand mesh positioned far away from the origin of the coordinate system. The camera points directly at the center of the hand (so, from the camera's point of view, the origin is the center of the hand), and if I print its position I get these values:
x: -53.46980763626004; y: -2.7201492246619283; z: -9.814480359970839
while actually I want the position with respect to the origin of the coordinate system (so in this case the values would be different; for example, the y value would be positive).
UPDATE:
I tried #leota's suggestion, so I used the localToWorld method in this way:
var camera = scene.getCamera();
var viewPos = camera.position;
var newView = new THREE.Vector3();
newView.copy(viewPos);
camera.localToWorld(newView);
I did an experiment with this mesh. As you can see this mesh is also not centered on the origin (you can see the axes on the bottom-left corner).
If I print the normal value of the camera's position (so, with respect to the center of the mesh) it gives me these results:
x: 0; y: 0; z: 15
If now I print the resulting values after the code above, the result is:
x: 0; y: 0; z: 30
which is wrong, because as you can see the camera position in the image has x and y values clearly different from 0 (while z = 30 could be true, as far as I can tell).
If for example I rotate the camera so that it's very close to the origin, like this (in the image the camera is just behind the origin, so its position in world coordinates should have negative values for x, y, z), the coordinates with respect to the center of the object are:
x: -4.674180744175711; y: -4.8370441591630255; z: -4.877951155147168
while after the code above they become:
x: 3.6176076166961373; y: -4.98753160894295; z: -4.365141278155379
The y and z values might even be accurate at a glance, but the positive value of x tells me that it's totally wrong, and I don't know why.
I'm going to continue looking for a solution, but this might be a step in the right direction. Still, any more suggestions are appreciated!
UPDATE 2:
Found the solution. What #leota said is correct: that is indeed how you get absolute coordinates for the camera. In my case, though, I finally found a single line of code hidden in the project that was scaling everything according to some (project-related) rule. So the solution for me was to take the camera position as-is and then scale it back according to that rule.
Since #leota's answer was indeed the solution to the original question, I'm accepting it as the correct answer.
Not sure I got your question :) If I did, then you need to switch between World and Local coordinate systems. The THREE.PerspectiveCamera inherits from THREE.Object3D, so you can use the following methods to set your coordinate system:
.localToWorld ( vector )
vector - A local vector.
Updates the vector from local space to world space.
.worldToLocal ( vector )
vector - A world vector.
Updates the vector from world space to local space.
From Three.js Documentation
Update:
First update your camera Matrix:
camera.updateMatrixWorld();
Then:
var vector = camera.position.clone();
vector.applyMatrix4( camera.matrixWorld );
The vector should now hold the position in world coordinates.
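Under the hood, that call is just a 4×4 matrix–vector multiply with matrixWorld. A dependency-free sketch of the multiply (using the column-major element order that three.js's Matrix4.elements uses; the translation values here are made-up example numbers):

```javascript
// Multiply a point by a 4x4 matrix stored in column-major order,
// the same layout three.js uses for Matrix4.elements.
function applyMatrix4(v, e) {
  const { x, y, z } = v;
  const w = 1 / (e[3] * x + e[7] * y + e[11] * z + e[15]);
  return {
    x: (e[0] * x + e[4] * y + e[8] * z + e[12]) * w,
    y: (e[1] * x + e[5] * y + e[9] * z + e[13]) * w,
    z: (e[2] * x + e[6] * y + e[10] * z + e[14]) * w,
  };
}

// Example world matrix that only translates by (10, 20, 30):
const matrixWorld = [
  1, 0, 0, 0,
  0, 1, 0, 0,
  0, 0, 1, 0,
  10, 20, 30, 1,
];

// A point at local (0, 0, 15) ends up at (10, 20, 45) in world space.
const world = applyMatrix4({ x: 0, y: 0, z: 15 }, matrixWorld);
```

If the transformed position still looks wrong, that's usually a sign the matrix wasn't updated (hence the updateMatrixWorld() call above) or that some other transform is being applied elsewhere.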
I had the same question. Trying to answer it, I was confused for a while; my guess (but I'm not sure) is:
var plot = camera.position.x - mesh.position.x;
var plotb = camera.position.y - mesh.position.y;
var plotc = camera.position.z - mesh.position.z;
mesh.position.x = (camera.position.x + plot) - mesh.position.x;
mesh.position.y = (camera.position.y + plotb) - mesh.position.y;
mesh.position.z = (camera.position.z + plotc) - mesh.position.z;
or
var plot = (camera.position.x * mesh.position.x) / 1000;
var plotb = (camera.position.y * mesh.position.y) / 1000;
var plotc = (camera.position.z * mesh.position.z) / 1000;
mesh.position.x = mesh.position.x + plot;
mesh.position.y = mesh.position.y + plotb;
mesh.position.z = mesh.position.z + plotc;
Related
I want to make portals with threejs by drawing an ellipse and then texture mapping a WebGLRenderTarget to its face. I have that function sort of working, but it tries to stretch the large rectangular buffer from the render target to the ellipse. What I want is to project the texture at its original dimensions onto the ellipse and just cut out anything that doesn't hit the ellipse, like so:
Before Projection:
After projection:
How can this be done with threejs?
I've looked into texture coordinates, but don't understand how to use them, and even saw a projection light PR in threejs that might work?
Edit: I also watched a Sebastian Lague video on portals and saw he does this with “screen space coordinates”. Any advice on using those?
Thanks for your help!
Made a codepen available here:
https://codepen.io/cdeep/pen/JjyjOqY
UV mapping lets us specify which parts of the texture correspond to which vertices of the geometry. More details here: https://www.spiria.com/en/blog/desktop-software/understanding-uv-mapping-and-textures/
You could loop through the vertices and set the corresponding UV value:
const uvPositions = [];
const vertices = ellipseGeometry.attributes.position.array;
for (let i = 0; i < numPoints; i++) {
  const [x, y] = [vertices[3 * i], vertices[3 * i + 1]];
  uvPositions.push(0.5 + x * imageHeight / ((2 * yRadius) * imageWidth));
  uvPositions.push(0.5 + y / (2 * yRadius));
}
ellipseGeometry.setAttribute("uv", new THREE.Float32BufferAttribute(uvPositions, 2));
UV coordinates increase from (0, 0) to (1, 1) from bottom left to top right.
The above code works because the ellipse is on the x-y plane. Or else, you'll need to get the x,y values in the plane of the ellipse.
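To sanity-check the formula without three.js, the same mapping can be run standalone (the imageWidth, imageHeight, and yRadius values below are made-up examples, not taken from the codepen):

```javascript
// Example texture and ellipse dimensions (assumed values for illustration).
const imageWidth = 400, imageHeight = 200, yRadius = 50;

// Map an ellipse vertex (x, y) on the x-y plane to a [u, v] pair,
// centering the texture at (0.5, 0.5).
function uvForVertex(x, y) {
  return [
    0.5 + x * imageHeight / ((2 * yRadius) * imageWidth),
    0.5 + y / (2 * yRadius),
  ];
}

// The ellipse center (0, 0) lands in the middle of the texture.
const [u, v] = uvForVertex(0, 0);
```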
More info on texture mapping in three.js here:
https://discoverthreejs.com/book/first-steps/textures-intro/
Edit: Do note that the demo doesn't really look like a portal. For that, you'll need to move the texture based on the camera view which isn't that simple
I am trying to build a raycasting engine. I have successfully rendered a scene column by column using ctx.fillRect() as follows.
canvas-raycasting.png
demo
code
the code i wrote for above render:
var scene = []; // distance of the wall from the player for a particular ray
var points = []; // points at which each ray hits a wall
/*
I have a Raycaster class which does all the math required for ray casting and returns
an object which contains two arrays:
1 > scene : array of numbers representing the distance of the wall from the player.
2 > points : objects of type { x , y } representing the point where each ray hits the wall.
*/
var data = raycaster.cast(wall);
/*
raycaster : instance of the Raycaster class,
walls : array of boundaries containing objects of type { x1 , y1 , x2 , y2 } where
(x1,y1) represents the start point and
(x2,y2) represents the end point.
*/
scene = data.scene;
var scene_width = 800;
var scene_height = 400;
var w = scene_width / scene.length;
for (var i = 0; i < scene.length; ++i) {
  var c = scene[i] == Infinity ? 500 : scene[i];
  var s = map(c, 0, 500, 255, 0); // how dark/bright the wall should be
  var h = map(c, 0, 500, scene_height, 10); // relative height of the wall (the farther, the smaller)
  ctx.beginPath();
  ctx.fillStyle = 'rgb(' + s + ',' + s + ',' + s + ')';
  ctx.fillRect(i * w, 200 - h * 0.5, w + 1, h);
  ctx.closePath();
}
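The map() helper isn't defined in the snippet; it behaves like the p5.js-style linear remap. In case your setup doesn't provide one, a minimal stand-in might look like this:

```javascript
// Linearly remap a value from [inMin, inMax] to [outMin, outMax].
// Works with reversed output ranges too, e.g. map(c, 0, 500, 255, 0)
// makes near walls bright (255) and far walls dark (0).
function map(value, inMin, inMax, outMin, outMax) {
  return outMin + (value - inMin) * (outMax - outMin) / (inMax - inMin);
}
```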
Now I am trying to build a web-based FPS (first-person shooter) and I am stuck on rendering wall textures on the canvas.
The ctx.drawImage() method takes its arguments as follows:
void ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
ctx.drawImage_arguments
but ctx.drawImage() draws the image as a flat rectangle, with no pseudo-3D effect like in Wolfenstein 3D, and I have no idea how to achieve that.
Should I use ctx.transform()? If yes, how? If no, what should I do?
I am looking for the Maths used to produce pseudo 3d effect using 2D raycasting.
Some pseudo-3D games are Wolfenstein 3D and Doom.
I am trying to build something like this.
Thank you :)
The way that you're mapping (or not, as the case may be) texture coordinates isn't working as intended.
I am looking for the Maths used to produce pseudo 3d effect using 2D raycasting
The Wikipedia Entry for Texture Mapping has a nice section with the specific maths of how Doom performs texture mapping. From the entry:
The Doom engine restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct.
A "fast affine mapping" is just a simple 2D interpolation of texture coordinates, and would be an appropriate operation for what you're attempting. A limitation of the Doom engine was also that
Doom renders vertical and horizontal spans with affine texture mapping, and is therefore unable to draw ramped floors or slanted walls.
It doesn't appear that your logic contains any code for transforming coordinates between various coordinate spaces. You'll need to apply transforms between a given raytraced coordinate and texture coordinate spaces in the very least. This typically involves matrix math and is very common and can also be referred to as Projection, as in projecting points from one space/surface to another. With affine transformations you can avoid using matrices in favor of linear interpolation.
The coordinate equation for this adapted to your variables (see above) might look like the following:
u = (1 - a) * wallStart + a * wallEnd
where 0 <= a <= 1
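That equation is an ordinary linear interpolation; in code (with wallStart and wallEnd standing for the texture coordinates at the two ends of the wall segment, and a the fraction of the distance along it):

```javascript
// Affine (linear) interpolation: blends wallStart and wallEnd by factor a,
// where a runs from 0 (at the wall's start) to 1 (at its end).
function interpolate(wallStart, wallEnd, a) {
  return (1 - a) * wallStart + a * wallEnd;
}
```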
Alternatively, you could use a Weak Perspective projection, since you have much of the data already computed. From wikipedia again:
To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:
B_x = A_x * B_z / A_z
where:
B_x is the screen x-coordinate,
A_x is the model x-coordinate,
B_z is the focal length (the axial distance from the camera center to the image plane),
A_z is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above equation.
In your case A_x is the location of the wall, in worldspace. B_z is the focal length, which will be 1. A_z is the distance you calculated using the ray trace. The result is the x or y coordinate representing a translation to viewspace.
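As a sketch of that projection (project is a made-up helper name; the focal length defaults to 1 as described above):

```javascript
// Weak-perspective projection: screen coordinate = model coordinate
// scaled by focal length over subject distance.
// ax: model x (or y), az: ray-traced distance, bz: focal length.
function project(ax, az, bz = 1) {
  return ax * bz / az;
}
```

The same call works for y. Points twice as far away land half as far from the screen center, which is exactly the "farther is smaller" effect the pseudo-3D renderer needs.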
The main draw routine for W3D documents the techniques used to raytrace and transform coordinates for rendering the game. The code is quite readable even if you're not familiar with C/ASM and is a great way to learn more about your topics of interest. For more reading, I would suggest performing a search in your engine of choice for things like "matrix transformation of coordinates for texture mapping", or search the GameDev SE site for similar.
A specific area of that file to zero-in on would be this section starting ln 267:
> ========================
> =
> = TransformTile
> =
> = Takes paramaters:
> = tx,ty : tile the object is centered in
> =
> = globals:
> = viewx,viewy : point of view
> = viewcos,viewsin : sin/cos of viewangle
> = scale : conversion from global value to screen value
> =
> = sets:
> = screenx,transx,transy,screenheight: projected edge location and size
> =
> = Returns true if the tile is withing getting distance
> =
A great book on "teh Maths" is this one - I would highly recommend it for anyone seeking to create or improve upon these skills.
Update:
Essentially, you'll be mapping pixels (points) from the image onto points on your rectangular wall-tile, as reported by the ray trace.
Pseudo(ish)-code:
var image = getImage(someImage); // get the image however you want; make sure it finishes loading before drawing
var iWidth = image.width, iHeight = image.height;
var sX = 0, sY = 0; // top-left corner of the image. Adjust when using e.g. sprite sheets
for (var i = 0; i < scene.length; ++i) {
  var c = scene[i] == Infinity ? 500 : scene[i];
  var s = map(c, 0, 500, 255, 0); // how dark/bright the wall should be
  var h = map(c, 0, 500, scene_height, 10); // relative height of the wall (the farther, the smaller)
  var wX = i * w, wY = 200 - h * 0.5;
  var wWidth = w + 1, wHeight = h;
  // ... render the rectangle shape
  /* we are using the same image, but we are scaling it to the size of the rectangle
     and placing it at the same location as the wall. */
  var u, v, uW, vH; // texture x- and y- values and sizes. compute these.
  ctx.drawImage(image, sX, sY, iWidth, iHeight, u, v, uW, vH);
}
Since I'm not familiar with your code performing the raytrace, its coordinate system, etc., you may need to further adjust the values for wX, wY, wWidth, and wHeight (e.g., translate points from center to top-left corner).
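One piece deliberately left as "compute these" above is choosing which vertical slice of the texture a given column should sample. A hedged sketch (textureColumn is a made-up helper name; the wall and hit shapes match the { x1, y1, x2, y2 } and { x, y } types from the question):

```javascript
// Pick the texture column (an sX value) for a wall hit: the fraction of
// the wall segment covered up to the hit point selects a 1px-wide
// vertical slice of the texture.
function textureColumn(wall, hit, imageWidth) {
  const wallLength = Math.hypot(wall.x2 - wall.x1, wall.y2 - wall.y1);
  const hitDistance = Math.hypot(hit.x - wall.x1, hit.y - wall.y1);
  return Math.floor((hitDistance / wallLength) * imageWidth);
}
```

You would then draw a 1-texel-wide source slice at that column, stretched to the computed wall height, instead of the whole image per column.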
I want to place my already-placed object at a new location, but it moves relative to its local position rather than the global one.
this._scene.updateMatrixWorld();
this._scene.add(mesh);
var v1 = new THREE.Vector3();
v1.setFromMatrixPosition( mesh.matrixWorld );
mesh.position.set(v1.x +2, v1.y + 2, v1.z + 2);
What I did to solve my problem:
Every mesh has a geometry attribute; that is why you can call mesh.geometry.
From this point I created a bounding sphere around my mesh:
mesh.geometry.computeBoundingSphere();
Now it is possible to get the world position of my boundingSphere, which simultaneously is the world position of my mesh.
var vector = mesh.geometry.boundingSphere.center;
Congratulations! 'vector' now holds the center's world position (x, y, z) of your mesh.
Just to clarify, 'mesh' is a THREE.Mesh object.
Wouldn't you just then move the object itself by the increment? I'm not sure why you need the matrix involved?
mesh.position.set(mesh.position.x +2, mesh.position.y + 2, mesh.position.z + 2);
Edit: if you need to use the matrix, you need to set the matrixWorld. Look at http://threejs.org/docs/#Reference/Core/Object3D.matrixWorld, but using the position setter will do the heavy lifting for you.
I'm working with Three.js, version 68. I'm using the same method for collision detection as this guy is using here, which is great most of the time (A big "thank you" goes out to the author!): http://stemkoski.github.io/Three.js/Collision-Detection.html
Here is a link to the source if you want to download it from github. Just look for Collision-Detection.html: https://github.com/stemkoski/stemkoski.github.com
Here is the code that is important to the collision detection:
var MovingCube;
var collidableMeshList = [];
var wall = new THREE.Mesh(wallGeometry, wallMaterial);
wall.position.set(100, 50, -100);
scene.add(wall);
collidableMeshList.push(wall);
var wall = new THREE.Mesh(wallGeometry, wireMaterial);
wall.position.set(100, 50, -100);
scene.add(wall);
var wall2 = new THREE.Mesh(wallGeometry, wallMaterial);
wall2.position.set(-150, 50, 0);
wall2.rotation.y = 3.14159 / 2;
scene.add(wall2);
collidableMeshList.push(wall2);
var wall2 = new THREE.Mesh(wallGeometry, wireMaterial);
wall2.position.set(-150, 50, 0);
wall2.rotation.y = 3.14159 / 2;
scene.add(wall2);
var cubeGeometry = new THREE.CubeGeometry(50,50,50,1,1,1);
var wireMaterial = new THREE.MeshBasicMaterial( { color: 0xff0000, wireframe:true } );
MovingCube = new THREE.Mesh( cubeGeometry, wireMaterial );
MovingCube.position.set(0, 25.1, 0);
// collision detection:
// determines if any of the rays from the cube's origin to each vertex
// intersects any face of a mesh in the array of target meshes
// for increased collision accuracy, add more vertices to the cube;
// for example, new THREE.CubeGeometry( 64, 64, 64, 8, 8, 8, wireMaterial )
// HOWEVER: when the origin of the ray is within the target mesh, collisions do not occur
var originPoint = MovingCube.position.clone();
for (var vertexIndex = 0; vertexIndex < MovingCube.geometry.vertices.length; vertexIndex++) {
  var localVertex = MovingCube.geometry.vertices[vertexIndex].clone();
  var globalVertex = localVertex.applyMatrix4( MovingCube.matrix );
  var directionVector = globalVertex.sub( MovingCube.position );
  var ray = new THREE.Raycaster( originPoint, directionVector.clone().normalize() );
  var collisionResults = ray.intersectObjects( collidableMeshList );
  if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() )
    appendText(" Hit ");
}
This works great most of the time, but there are times when I can move the cube partially into the wall, and it won't register a collision. For example, look at this image:
It should say "Hit" in the top-left corner where there are just a bunch of dots, and it's not.
NOTE: I also tried his suggestion and did the following, but it didn't seem to help much:
THREE.BoxGeometry( 64, 64, 64, 8, 8, 8, wireMaterial ) // BoxGeometry is used in version 68 instead of CubeGeometry
Does anyone know how this method could be more accurate? Another question: Does anyone know what the following if statement is for, i.e. why does the object's distance have to be less than the length of the direction vector?:
if ( collisionResults.length > 0 && collisionResults[0].distance < directionVector.length() )
To answer your last question first: that line detects whether the collision happened inside your MovingCube. Your raycasting code casts a ray from the MovingCube's position towards each of its vertices. Anything that the ray intersects with is returned, along with the distance from the MovingCube's position at which the intersected object was found (collisionResults[0].distance). That distance is compared with the distance from the MovingCube's position to the relevant vertex. If the distance to the collision is less than the distance to the vertex, the collision happened inside the cube.
Raycasting is a poor method of collision detection because it only detects collisions in the exact directions rays are cast. It also has some additional edge cases. For example, if the ray is cast from inside another object, the other object might not be considered to be colliding. As another example, raycasting in Three.js uses bounding spheres (or, if unavailable, bounding boxes) to calculate ray intersection, so rays can "intersect" with objects even if they wouldn't hit them visually.
If you're only dealing with spheres or upright cuboids, it's straightforward math to check collision. (That's why Three.js uses bounding spheres and bounding boxes - and most applications that need to do collision checking use secondary collision-only geometries that are less complicated than the rendered ones.) Spheres are colliding if the distance between their centers is less than the sum of their radii. Boxes are colliding if the edges overlap (e.g. if the left edge of box 1 is to the left of the right edge of box 2, and the boxes are within a vertical distance the sum of their half-heights and a horizontal distance the sum of their half-lengths).
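The sphere and box tests described above come down to a few comparisons each; a minimal sketch (plain objects standing in for three.js's Sphere and Box3 types):

```javascript
// Sphere-sphere: colliding when the distance between centers is less than
// the sum of the radii. Compare squared distances to avoid a sqrt.
function spheresCollide(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  const rr = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz < rr * rr;
}

// Axis-aligned boxes: colliding when their intervals overlap on every axis.
// Boxes are { min: { x, y, z }, max: { x, y, z } }.
function boxesCollide(a, b) {
  return a.min.x < b.max.x && a.max.x > b.min.x &&
         a.min.y < b.max.y && a.max.y > b.min.y &&
         a.min.z < b.max.z && a.max.z > b.min.z;
}
```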
For certain applications you can also use voxels, e.g. divide the world into cubical units, do box math, and say that two objects are colliding if they overlap with the same cube-unit.
For more complex applications, you'll probably want to use a library like Ammo.js, Cannon.js, or Physi.js.
The reason raycasting is appealing is because it's workable with more complex geometries without using a library. As you've discovered, however, it's less than perfect. :-)
I wrote a book called Game Development with Three.js which goes into this topic in some depth. (I won't link to it here because I'm not here to promote it, but you can Google it if you're interested.) The book comes with sample code that shows how to do basic collision detection, including full code for a 3D capture-the-flag game.
In Three.js (which uses JavaScript/WebGL), how would one create a camera which flies around a sphere at fixed height, fixed forward speed, and fixed orientation in relation to the sphere, with the user only being able to steer left and right?
Imagine an airplane on an invisible string to the center of a globe, flying near ground and always seeing part of the sphere:
(I currently have code which rotates the sphere so to the camera it looks like it's flying -- left and right steering not implemented yet -- but I figure before I go further it might be cleaner to move the camera/ airplane, not the sphere group.)
Thanks!
You mean like in my Ludum Dare 23 game? I found this to be a bit more complicated than I expected. It's not difficult, though.
Here I'm assuming that you know the latitude and longitude of the camera and its distance from the center of the sphere (called radius), and want to create a transformation matrix for the camera.
Create the following objects only once to avoid creating new objects in the game loop:
var rotationY = new Matrix4();
var rotationX = new Matrix4();
var translation = new Matrix4();
var matrix = new Matrix4();
Then every time the camera moves, create the matrix as follows:
rotationY.setRotationY(longitude);
rotationX.setRotationX(-latitude);
translation.setTranslation(0, 0, radius);
matrix.multiply(rotationY, rotationX).multiplySelf(translation);
After this just set the camera matrix (assuming camera is your camera object):
// Clear the camera matrix.
// Strangely, Object3D doesn't have a way to just SET the matrix(?)
camera.matrix.identity();
camera.applyMatrix(matrix);
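For intuition, the whole Ry(longitude) * Rx(-latitude) * T(0, 0, radius) chain applied to the origin reduces to ordinary spherical coordinates. A plain-JS sketch of where it puts the camera (cameraPosition is a made-up helper; angles in radians):

```javascript
// Position produced by Ry(longitude) * Rx(-latitude) * T(0, 0, radius)
// applied to the origin: standard spherical coordinates over the globe.
function cameraPosition(latitude, longitude, radius) {
  return {
    x: radius * Math.cos(latitude) * Math.sin(longitude),
    y: radius * Math.sin(latitude),
    z: radius * Math.cos(latitude) * Math.cos(longitude),
  };
}
```

At latitude 0, longitude 0 the camera sits at (0, 0, radius) looking along the z-axis, and increasing latitude lifts it toward the pole, which matches the matrix construction above.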
Thanks for Martin's answer! I've now got it running fine in another approach as follows (Martin's approach may be perfect too; also many thanks to Lmg!):
Position the camera straight atop the sphere to begin with (i.e. a high y value, a bit beyond the radius, which was 200 in my case), and make it look a bit lower:
camera.position.set(0, 210, 0);
camera.lookAt( new THREE.Vector3(0, 190, -50) );
Create an empty group (an Object3D) and put the camera in:
camGroup = new THREE.Object3D();
camGroup.add(camera);
scene.add(camGroup);
Track the mouse position in percent in relation to the screen half:
var halfWidth = window.innerWidth / 2, halfHeight = window.innerHeight / 2;
app.mouseX = event.pageX - halfWidth;
app.mouseY = event.pageY - halfHeight;
app.mouseXPercent = Math.ceil( (app.mouseX / halfWidth) * 100 );
app.mouseYPercent = Math.ceil( (app.mouseY / halfHeight) * 100 );
In the animation loop, apply this percent to a rotation, while automoving forward:
camGroup.matrix.rotateY(-app.mouseXPercent * .00025);
camGroup.matrix.rotateX(-.0025);
camGroup.rotation.getRotationFromMatrix(camGroup.matrix);
requestAnimationFrame(animate);
renderer.render(scene, camera);