I am relatively new to three.js and am trying to position and manipulate a plane object so that it has the effect of lying over the surface of a sphere object (or any object, for that matter), so that the plane takes the form of the object's surface. The intention is to be able to move the plane across the surface later on.
I position the plane in front of the sphere and iterate through the plane's vertices, casting a ray towards the sphere to detect the intersection with the sphere. I then try to change the z position of each intersecting vertex, but this does not achieve the desired result. Can anyone give me some guidance on how to get this working, or suggest another method?
This is how I attempt to change the vertices (with an offset of 1 so the plane is visible 'on' the sphere surface):
planeMesh.geometry.vertices[vertexIndex].z = collisionResults[0].distance - 1;
Making sure to set the following before rendering:
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;
I have a fiddle that shows where I am: here I cast my rays along z, but I do not get intersections (collisions) with the sphere, and so cannot change the plane in the manner I wish.
http://jsfiddle.net/stokewoggle/vuezL/
You can rotate the camera around the scene with the left and right arrows (in Chrome, anyway) to see the shape of the plane. I have made the sphere see-through, as I find it useful for seeing the plane better.
EDIT: Updated the fiddle and corrected a mistake in the description.
Sorry for the delay, but it took me a couple of days to figure this one out. The reason the collisions were not working is that (as we had suspected) the planeMesh vertices are in local space, which is essentially the same as starting in the center of the sphere, and not what you're expecting. At first I thought a quick fix would be to apply the worldMatrix like stemkoski did in his three.js collision example on GitHub that I linked to, but that didn't end up working either, because the plane itself is defined only in x and y coordinates (up and down, left and right); no z (depth) information exists locally when you create a flat 2D planeMesh.
What ended up working is manually setting the z component of each vertex of the plane. You had originally wanted the plane to be at z = 201, so I just moved that code inside the loop that goes through each vertex and manually set each vertex to z = 201. Now all the ray start positions were correct (globally), and a ray direction of (0, 0, -1) resulted in correct collisions.
var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
localVertex.z = 201;
One more thing: to make the plane wrap the sphere perfectly, instead of using (0, 0, -1) as every ray direction, I calculated each ray's direction by subtracting each vertex position from the sphere's center and normalizing the resulting vector. Now the collisionResult intersection points are even more accurate.
var directionVector = new THREE.Vector3();
directionVector.subVectors(sphereMesh.position, localVertex);
directionVector.normalize();
var ray = new THREE.Raycaster(localVertex, directionVector);
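Putting those pieces together, here is a minimal sketch of what the full vertex loop could look like (using the legacy THREE.Geometry API from this era of three.js; sphereMesh, planeMesh, the z = 201 start plane, and the 1-unit offset are all taken from the discussion above):
for (var vertexIndex = 0; vertexIndex < planeMesh.geometry.vertices.length; vertexIndex++) {
    var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
    localVertex.z = 201; // start every ray on the z = 201 plane, in front of the sphere
    // aim the ray at the sphere's center for a snug wrap
    var directionVector = new THREE.Vector3();
    directionVector.subVectors(sphereMesh.position, localVertex);
    directionVector.normalize();
    var ray = new THREE.Raycaster(localVertex, directionVector);
    var collisionResults = ray.intersectObject(sphereMesh);
    if (collisionResults.length > 0) {
        // walk the vertex along its ray, stopping 1 unit short of the surface
        var newPosition = localVertex.clone().add(
            directionVector.clone().multiplyScalar(collisionResults[0].distance - 1));
        planeMesh.geometry.vertices[vertexIndex].copy(newPosition);
    }
}
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;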
Here is a working example:
http://jsfiddle.net/FLyaY/1/
As you can see, the planeMesh fits snugly on the sphere, kind of like a patch or a band-aid. :)
Hope this helps. Thanks for posting the question on three.js's GitHub page; I wouldn't have seen it here. At first I thought it was a bug in THREE.Raycaster, but in the end it was just user error (mine). I learned a lot about collision code from working on this problem, and I will be using it later down the line in my own 3D game projects. You can check out one of my games at: https://github.com/erichlof/SpacePong3D
Best of luck to you!
-Erich
Your ray start position is wrong, probably because the vertex coordinates are local to the plane. You start the raycast from inside the sphere, so it never hits anything.
I changed the ray start position like this as a test and get 726 collisions:
var rayStart = new THREE.Vector3(0, 0, 500);
var ray = new THREE.Raycaster(rayStart, new THREE.Vector3(0, 0, -1));
Forked jsfiddle: http://jsfiddle.net/H5YSL/
I think you need to transform the vertex coordinates to world space to get the start position right. That should be easy to figure out from the docs and examples.
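For example, a minimal sketch of that transform, assuming the same planeMesh and sphereMesh from the question:
// convert a local vertex to world space before building the ray
planeMesh.updateMatrixWorld(); // make sure the mesh's world matrix is current
var worldVertex = planeMesh.localToWorld(planeMesh.geometry.vertices[vertexIndex].clone());
var ray = new THREE.Raycaster(worldVertex, new THREE.Vector3(0, 0, -1));
var collisionResults = ray.intersectObject(sphereMesh);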
So, I'm putting together a 3D project that will eventually make its way onto a kiosk in a client's office; it basically shows all of their branches as points on a globe. One of the requested features is that end users be able to swipe on the screen to orbit around the globe; in addition, they should be able to rotate the camera on its Z axis via a rotation gesture. The problem is that I'm using camera.lookAt in the animation loop, which relies on the up vector being updated correctly whenever I rotate the camera, so that the view doesn't "snap" back into place along the previous up vector when a user swipes. For the life of me, I cannot get it to cooperate.
Currently, what I'm doing to update the up vector (based off of another stackoverflow thread with a similar issue) is this:
// Current full 360-degree angle of rotation, calculated earlier
let radian = THREE.Math.degToRad(full);
// Create a new vector at the radian angle relative to the camera's current position
let v1 = new THREE.Vector3(
    _this.object.position.x + Math.cos(radian),
    _this.object.position.y + Math.sin(radian),
    _this.object.position.z
).sub(_this.object.position).normalize();
// _this.target = (0, 0, 0)
let v2 = _this.target.clone().sub(_this.object.position).normalize();
// Cross the two vectors to get the proper up vector
let v3 = new THREE.Vector3().crossVectors(v1, v2).normalize();
_this.object.up.copy(v3);
And this works... up to the point where the camera seemingly inverts once I get near the side of the globe opposite the camera's starting position (0, 0, 1.75), and then negates my rotations (as far as I can tell), which causes the same kind of "snap", just to a different rotation than before.
Once I rotate the camera, I want it to maintain that rotation when using lookAt, instead of having lookAt invert everything.
Imagine this three.js scene, set up with an OrthographicCamera and OrbitControls:
When the user drags the yellow disc (meant to represent the Sun), the disc needs to move along its yellow circle in response to this action. Here's the scene from another angle, so you can see the full yellow circle:
So, my event handler must determine which point on this circle is closest to the current cursor position. This yellow circle is a THREE.Mesh, by the way.
I'm using THREE.Raycaster to handle some mouseover events via its intersectObjects() function, but it's not clear to me how to find the nearest point of a single object with this Raycaster. I'm guessing there is some simple math I can do after translating the mouse's position to world coordinates. Can someone help me with this? Is three.js's Raycaster useful here? If not, how do I determine the nearest point of this mesh?
The full source code is here, if it's helpful: https://github.com/ccnmtl/astro-interactives/blob/master/sun-motion-simulator/src/HorizonView.jsx Search for this.sunDeclination, which corresponds to the yellow circle's Mesh object.
For a working demo, go here: https://ccnmtl.github.io/astro-interactives/sun-motion-simulator/
For reference, the sun should behave like this: https://cse.unl.edu/~astrodev/flashdev2/sunMotions/sunMotions068.html (requires Flash)
The simplest version:
1. Get the hit point on the disc.
2. Project it onto the plane of the circle.
3. Knowing the radius of the circle, compute the scalar that pushes the projected point out onto the circle.
var point = res.point.clone();
point.z = 0; // project onto the circle's plane
var scale = circleRadius / point.length(); // how far to push the point out to the circle
point.multiplyScalar(scale);
Working example: https://jsfiddle.net/c4m3o7ht/
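For context, a sketch of how res.point can be obtained in the first place (raycasting against the disc; mouseNDC, discMesh, and circleRadius are stand-ins for the question's own names):
var raycaster = new THREE.Raycaster();
raycaster.setFromCamera(mouseNDC, camera); // mouseNDC: cursor in normalized device coordinates
var hits = raycaster.intersectObject(discMesh); // e.g. the draggable sun disc
if (hits.length > 0) {
    var point = hits[0].point.clone();
    point.z = 0; // project the hit onto the circle's plane (z = 0 here)
    point.multiplyScalar(circleRadius / point.length());
    // 'point' is now the nearest position on the yellow circle
}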
The raycaster returns all objects hit by the ray, with all of the hit points in world space (which you can convert to/from model space via object3d.worldToLocal and localToWorld).
It also returns the hit distances, which you can sort by whatever heuristic you need.
What I usually do is cast on mouseDown, record the object and point, then on mouseMove get the same object's hit point and apply my edit operation using the difference between those two points.
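A rough sketch of that pattern (the handler names and the dragTarget/dragStart bookkeeping are illustrative, not from any particular codebase):
var raycaster = new THREE.Raycaster();
var dragTarget = null; // object grabbed on mouseDown
var dragStart = null;  // world-space hit point recorded on mouseDown

function onMouseDown(mouseNDC) {
    raycaster.setFromCamera(mouseNDC, camera);
    var hits = raycaster.intersectObjects(scene.children);
    if (hits.length > 0) {
        dragTarget = hits[0].object;
        dragStart = hits[0].point.clone();
    }
}

function onMouseMove(mouseNDC) {
    if (!dragTarget) return;
    raycaster.setFromCamera(mouseNDC, camera);
    var hits = raycaster.intersectObject(dragTarget);
    if (hits.length > 0) {
        // apply the edit using the difference between the two hit points
        var delta = hits[0].point.clone().sub(dragStart);
        dragTarget.position.add(delta);
        dragStart = hits[0].point.clone(); // incremental dragging
    }
}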
Is this what you're talking about?
Thank you for having a look at my problem. I am currently working on a university project and have run into a problem that I can't seem to solve. I have narrowed down where the mistake most likely occurs, but I am still a little clueless. I'll explain my situation below:
This is a 3D space ship shooter, that we are building using three.js. I am trying to create a ray of particles, that I want to use as an afterburner effect for the space ship.
The first step was to write a particle renderer. The particle renderer has an update function that periodically updates the particles themselves, as well as the start and end point of the particle ray. So in each update call, after a possible movement of the spaceship, I want to calculate the new start and end point of the particle ray.
The particle renderer itself works perfectly fine in a test environment; however, using it in the real program I encountered a weird bug: the particle ray works just fine as long as the coordinate origin is visible to the camera. If the camera cannot see the origin, the particles appear to be invisible. One possible explanation might be that the system actually works, but the particles are rendered in a different position? That seems very unlikely though, since for the most part the program logic is fine.
I am going to post the code below that updates the position of the ship. In case more code is needed, please feel free to comment below or ask me directly. I can send you the link to the git repository; however, since it is a public repository of my university group, I am not comfortable posting the whole code publicly. Thank you for understanding.
// this is the update function that is called each frame
function update() {
    // the ship's current position
    var pos = ship.position;

    // default direction vector (local +z)
    var dirVector = new THREE.Vector3(0, 0, 1);
    // apply the ship's rotation
    dirVector.applyQuaternion(ship.quaternion);

    // the center of the ship has relative coordinates (0, 0, 0)

    // start position of the ray, relative coords: (0, 0, 6)
    var startScale = 6;
    startVector = new THREE.Vector3(
        pos.x + startScale * dirVector.x,
        pos.y + startScale * dirVector.y,
        pos.z + startScale * dirVector.z
    );

    // end position of the ray, relative coords: (0, 0, 8)
    var endScale = 8;
    endVector = new THREE.Vector3(
        pos.x + endScale * dirVector.x,
        pos.y + endScale * dirVector.y,
        pos.z + endScale * dirVector.z
    );

    // update the start and end point of the particle spawner
    particleRay.updateStartAndEndpoint(startVector, endVector);
    // update the particles
    particleRay.update();
}
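As an aside, the same world-space points can be computed by letting the ship's own transform do the work. A sketch, assuming ship is a regular THREE.Object3D with no scaling applied (localToWorld would bake any scale into the result):
// transform the relative offsets (0,0,6) and (0,0,8) from the
// ship's local space directly into world space
ship.updateMatrixWorld();
startVector = ship.localToWorld(new THREE.Vector3(0, 0, 6));
endVector = ship.localToWorld(new THREE.Vector3(0, 0, 8));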
Here you'll find two screenshots, one showing the correct particle ray and one showing the error.
I think the most likely cause of this problem is a math error or a mistake in handling the vectors, but neither I, my tutor, nor anyone in my group can figure it out. I would really appreciate any kind of input on this matter. Thanks a lot!
I have a problem updating a vertex of a line in three.js.
So, I want to have a line in my scene whose start is always at (0, 0, 0) and whose end is always at a specific position on the user's screen (in x, y coordinates).
What I do to achieve this (and I almost succeed) is to have an invisible plane that always faces the camera and is always positioned a little bit in front of it. The reason I do this is that I want the line to seem like it is "going towards" the user's screen. So I cast a ray from the desired screen position (in x, y) and check at which point it intersects the plane; that point is my 3D point in the three.js scene. Then I update one of the line's two vertices.
The problem
What I do works fine; the line's end is where I want it to be. But something in the updating of the camera and the vertex is not synchronized, which causes some noticeable glitches. When I move the camera, the line does not update itself quickly and smoothly, and as a result I see the line in another position before I see it in the calculated, desirable one.
Please take a look at this jsfiddle I created to emulate the problem.
What can I do to avoid these glitches?
Thanks
Code I use in the render function:
var cameToCenterScaled = camera.position.clone();
cameToCenterScaled.setLength(cameToCenterScaled.length() * 0.9); // keep the plane a bit in front of the camera
plane.position.set(cameToCenterScaled.x, cameToCenterScaled.y, cameToCenterScaled.z);
plane.lookAt(camera.position);

// define, in pixels, where on screen we want the line to end
var notePos = findNotePoint(120, 30);
linemesh.geometry.vertices[1].set(notePos.x, notePos.y, notePos.z);
linemesh.geometry.verticesNeedUpdate = true;
When you raycast, you set the raycaster from the camera, so you have to make sure the camera's matrices are updated first.
simply add
camera.updateMatrixWorld();
before you call
raycaster.setFromCamera( new THREE.Vector2( x_, y_ ) , camera );
and the line will behave as you described
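In context, a sketch of where the call belongs in the render loop (findNotePoint is the question's own helper, which performs the raycast):
function render() {
    // update the camera's world matrix first, so the raycast inside
    // findNotePoint uses this frame's camera transform, not last frame's
    camera.updateMatrixWorld();
    // ... position the invisible plane as before ...
    var notePos = findNotePoint(120, 30);
    linemesh.geometry.vertices[1].set(notePos.x, notePos.y, notePos.z);
    linemesh.geometry.verticesNeedUpdate = true;
    renderer.render(scene, camera);
}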
SO!
I was wondering if you could help me solve one problem:
I've got a cube, which can be rotated about three axes.
I can get information about the cube's rotation as an array of three angles, each from 0 to 2π.
The question is: how can I identify which side of the cube is on the bottom, given those three Euler angles?
I think the perfect function would be something like this:
function getSideFromAngles(x,y,z) {
// magic goes here
// for example getSideFromAngles(Math.PI/2, 0, 0)
// if x===PI/2 and y===0 and z===0 then return "front"
// which means front side of cube "looks" down now.
}
Just in case: three.js also allows me to get the cube's quaternion.
Thanks in advance for your help
First of all, your geometry's faces have normals, which give you the base direction of each face.
Also, your mesh has a matrixWorld, which gives the global result of all the rotations combined.
Now, if you apply that matrix's rotation to the normals, you will have the normals in world space.
Finally, create a vector pointing down, which is (0, -1, 0) in three.js's default Y-up convention, and calculate the dot product between it and each world-space normal. The face with the highest value is the one that points most "downward".
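A minimal sketch of that approach, using the legacy Geometry API that matches the rest of this thread (cubeMesh is assumed to be a THREE.Mesh with a box geometry whose faces carry normals; mapping faces to names like "front" is left out):
// returns the face of the cube whose normal points most downward
function getBottomFace(cubeMesh) {
    var down = new THREE.Vector3(0, -1, 0); // world-space down, Y-up convention
    cubeMesh.updateMatrixWorld();
    // extract just the rotation, so normals are rotated but not translated
    var rotation = new THREE.Matrix4().extractRotation(cubeMesh.matrixWorld);
    var bestFace = null;
    var bestDot = -Infinity;
    cubeMesh.geometry.faces.forEach(function (face) {
        var worldNormal = face.normal.clone().applyMatrix4(rotation);
        var d = worldNormal.dot(down);
        if (d > bestDot) { // highest dot product = most aligned with down
            bestDot = d;
            bestFace = face;
        }
    });
    return bestFace;
}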