I have a vector whose length I'm trying to keep while rotating it 90° when it collides with the screen edge, but it gives me a weird "out" effect… I don't know what I could be doing wrong, but it happens when I try to apply a Matrix and a Euler transform… Also, for the screen detection, what I have is OK for detecting on and off screen, but it would be handy to know whether it's the bottom, top, right or left edge… Any clues? Thanks!
var direction = new THREE.Vector3(-0.2, -0.2, 0);
var a = new THREE.Euler( (Math.PI/2), 0, 0, 'XYZ' );
direction.applyEuler(a);
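(A hedged guess, not from the original post: rotating about X takes a vector that lies in the x/y screen plane out of that plane, which would look like the object drifting toward or away from the camera. If the bounce is meant to stay in the screen plane, a 90° rotation about Z keeps the length and leaves z untouched.)
// sketch: rotate 90° about Z, staying in the x/y plane and preserving length
var direction = new THREE.Vector3( -0.2, -0.2, 0 );
direction.applyEuler( new THREE.Euler( 0, 0, Math.PI / 2, 'XYZ' ) );
// direction is now ( 0.2, -0.2, 0 )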
For the collision detection, I'm using the following:
var frustum = new THREE.Frustum();
var cameraViewProjectionMatrix = new THREE.Matrix4();

camera.updateMatrixWorld(); // make sure the camera matrix is updated
camera.matrixWorldInverse.getInverse( camera.matrixWorld );
cameraViewProjectionMatrix.multiplyMatrices( camera.projectionMatrix, camera.matrixWorldInverse );
frustum.setFromMatrix( cameraViewProjectionMatrix );
console.log( frustum.intersectsObject( textMesh ) );
I removed the line that computes the reversed (inverse) camera projection matrix.
The screen edges effectively became an object "outside" my view frustum that my objects can now actively collide with. Since the view frustum is now inverted, everything except what we see acts as a 'walled object' that confines the frustum itself.
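(Not from the original exchange, but regarding which edge was crossed: one simple approach, assuming the textMesh and camera from the snippets above, is to project the object's position into normalized device coordinates, where the visible screen spans -1 to 1 on both x and y.)
var ndc = textMesh.position.clone().project( camera ); // NDC: x and y in [-1, 1] while on screen
if ( ndc.x < -1 ) console.log( 'off the left edge' );
if ( ndc.x >  1 ) console.log( 'off the right edge' );
if ( ndc.y < -1 ) console.log( 'off the bottom edge' );
if ( ndc.y >  1 ) console.log( 'off the top edge' );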
I'm using a PlaneGeometry as water and have added a ship (a glTF model) on it. The problem is that when the boat sinks slightly into the water, the water shows inside the boat even though the boat is afloat. Is there a way to clip the water where a boat (or other models/objects) intersects it while it moves?
Every material in three.js has an alphaMap property that you can use to vary the opacity of the mesh across its surface. So you could draw a black rectangle where the boat is, with an opacity of 0, to "cut out" that part of the water plane.
I presume your boat is going to be moving, so the black rectangle in your texture would also need to move. To solve this, you could use an HTML <canvas> element to draw and move the black rectangle, then use THREE.CanvasTexture to turn the <canvas> into a texture for your plane's alpha map.
const drawingCanvas = document.getElementById( 'drawing-canvas' );
const canvasTexture = new THREE.CanvasTexture( drawingCanvas );

const waterMaterial = new THREE.MeshBasicMaterial({
  transparent: true,       // required for the alpha map to have a visible effect
  alphaMap: canvasTexture  // the <canvas> drives the water's per-pixel opacity
});
See this working demo for how to use a 2D canvas as a texture in your 3D scene. When you draw on the top-left square, you'll see it being applied to the cube. You could copy this approach, but instead of assigning it to material.map, you'd use it on material.alphaMap.
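As a rough sketch of how the per-frame update could look (this is an assumption, not code from the demo; WATER_SIZE, HULL_W and HULL_L are hypothetical constants, and the plane is assumed to be centered at the origin with its UVs matching the canvas):
const ctx = drawingCanvas.getContext( '2d' );

function updateAlphaCanvas( boat ) {
  // white = fully opaque water
  ctx.fillStyle = 'white';
  ctx.fillRect( 0, 0, drawingCanvas.width, drawingCanvas.height );

  // map the boat's world x/z onto canvas pixels (assumed mapping)
  const u = ( boat.position.x / WATER_SIZE + 0.5 ) * drawingCanvas.width;
  const v = ( boat.position.z / WATER_SIZE + 0.5 ) * drawingCanvas.height;

  // black = opacity 0, so the water is hidden under the hull
  ctx.fillStyle = 'black';
  ctx.fillRect( u - HULL_W / 2, v - HULL_L / 2, HULL_W, HULL_L );

  canvasTexture.needsUpdate = true; // re-upload the canvas to the GPU
}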
You could check the position of the ship's hull vertices each frame and, given enough points on the PlaneGeometry, displace every surface vertex inside the hull to the y coordinate of the nearest hull vertex. This could probably also be done in a custom shader, which would likely be more efficient.
@Marquizzo's idea with the alpha channel is probably the better solution, though, since I assume you don't really want to simulate the displacement of the water but simply hide the water inside the ship. In that case, I would place an orthographic camera above the ship, set its near and far clipping planes as close as possible to the water plane, and render into the alpha channel, or rather into the texture used as the alpha map, as a render target. This way you get a real-time alpha map that also reacts to the rolling, pitching and heave of the ship.
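A sketch of that render-target idea (this is my assumption on top of the answer: ship, renderer and waterMaterial are expected to exist, WATER_SIZE is the width of the water plane, and a recent three.js with renderer.setRenderTarget() is used):
const maskTarget = new THREE.WebGLRenderTarget( 512, 512 );
waterMaterial.alphaMap = maskTarget.texture;

// a separate scene holding only a black copy of the hull on a white background
const maskScene = new THREE.Scene();
maskScene.background = new THREE.Color( 0xffffff );
const blackMaterial = new THREE.MeshBasicMaterial( { color: 0x000000 } );
const hullMask = ship.clone();
hullMask.traverse( function ( o ) { if ( o.isMesh ) o.material = blackMaterial; } );
maskScene.add( hullMask );

// orthographic camera looking straight down at the water plane
const maskCamera = new THREE.OrthographicCamera(
  -WATER_SIZE / 2, WATER_SIZE / 2, WATER_SIZE / 2, -WATER_SIZE / 2, 0.1, 50 );
maskCamera.position.set( 0, 10, 0 );
maskCamera.lookAt( 0, 0, 0 );

function renderMask() {
  hullMask.position.copy( ship.position );     // follow the ship's motion,
  hullMask.quaternion.copy( ship.quaternion ); // including roll, pitch and heave
  renderer.setRenderTarget( maskTarget );
  renderer.render( maskScene, maskCamera );
  renderer.setRenderTarget( null );
}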
The floating effect is made with a sine function applied to the y coordinate. You can use the same principle to make the glTF model (the ship) move along any coordinate axis.
Example:
position.y = Math.sin(ship.userData.initFloating + t) * 0.15;
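(A small sketch extending that idea; the roll/pitch frequencies and amplitudes below are arbitrary values I'm assuming:)
function animateShip( t ) {
  ship.position.y = Math.sin( ship.userData.initFloating + t ) * 0.15; // heave
  ship.rotation.z = Math.sin( t * 0.7 ) * 0.05;                        // roll
  ship.rotation.x = Math.sin( t * 0.5 ) * 0.03;                        // pitch
}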
I am trying to make a Three.js scene with a dodecahedron. I want the camera to be zoomed in on one side of the dodecahedron, and when a button is pressed I want it to zoom out, rotate until it faces another side, and then zoom in again.
To make this clear:
If the camera were fully zoomed in on side 1 and I pressed "5", I would want the camera to zoom out - showing the dodecahedron - then rotate towards side 5 (or have the dodecahedron rotate so that side 5 faces the camera?) and zoom in again. It's important that the camera always ends up parallel with the base of the pentagon it's facing, not its top or any other rotation.
I thought it would be smart to start off with just a cube, so as not to begin too complicated. I added some tweens (triggered when pressing G) to illustrate some basic movement, but that doesn't look too good anymore in the fiddle: jsfiddle
Because I feel I should have a function that does all this movement and calculation for me, I first wrote down the position and rotation of each side view of the cube to see if I could detect a pattern. I can see some pattern in the values I wrote down for the cube, but I don't know how to turn this into a working function, let alone for a dodecahedron. My noted values are:
side 1: position (0, 0, 600), rotation (0, 0, 0)
side 2: position (600, 0, 0), rotation (0, pi/2, 0)
side 3: position (0, 0, -600), rotation (0, pi, 0)
side 4: position (-600, 0, 0), rotation (0, -pi/2, 0)
side 5: position (0, 600, 0), rotation (-pi/2, 0, 0)
side 6: position (0, -600, 0), rotation (pi/2, 0, 0)
I can see some sort of recurrence and some relationships happening, but I wouldn't know how to link them in a function. I think that would be a first step towards getting a function that does the same for a more complex shape. Could anyone point me in the right direction? I could of course work with a lot of if clauses, but that doesn't feel like the correct way to go.
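(Not from the original question, but the pattern in the table above can be expressed compactly: the camera sits along each side's outward normal at a fixed distance and looks back at the center; lookAt() then reproduces the rotations listed for sides 1-4, while the top and bottom sides need care with the up vector. A sketch:)
function viewSide( normal, distance ) { // `normal`: unit THREE.Vector3 pointing out of the side
  camera.position.copy( normal ).multiplyScalar( distance );
  camera.lookAt( new THREE.Vector3( 0, 0, 0 ) ); // assumes the shape is centered at the origin
}

viewSide( new THREE.Vector3( 0, 0, 1 ), 600 ); // side 1: (0, 0, 600), rotation (0, 0, 0)
viewSide( new THREE.Vector3( 1, 0, 0 ), 600 ); // side 2: (600, 0, 0), rotation (0, pi/2, 0)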
To solve the problem, the first thing to do is get the center coordinate of each side of the dodecahedron.
As you know, in three.js a mesh consists of triangles; every triangle has three points, and all the faces and vertices can be found in mesh.geometry.faces and mesh.geometry.vertices. In a dodecahedron, each side is made of three triangular faces and five vertices, so I use each face's normal to divide the faces into 12 groups that share the same normal and lie in the same plane. That gives the 5 points of each side, and averaging their coordinates gives the side's center.
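A sketch of that grouping step (using the legacy .faces/.vertices Geometry API of that era; rounding the normal to three decimals as a grouping key is my assumption):
var geometry = dodecahedron.geometry;
var groups = {}; // key: quantized face normal, value: set of vertex indices on that side

geometry.faces.forEach( function ( face ) {
  var key = face.normal.toArray().map( function ( n ) { return n.toFixed( 3 ); } ).join( ',' );
  groups[ key ] = groups[ key ] || {};
  [ face.a, face.b, face.c ].forEach( function ( i ) { groups[ key ][ i ] = true; } );
} );

// average the 5 vertices of each of the 12 groups to get the side centers (object space)
var centers = Object.keys( groups ).map( function ( key ) {
  var indices = Object.keys( groups[ key ] );
  var center = new THREE.Vector3();
  indices.forEach( function ( i ) { center.add( geometry.vertices[ i ] ); } );
  return center.divideScalar( indices.length );
} );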
After getting the coordinates we need to rotate. One way is to rotate the dodecahedron and keep the camera fixed; another is to keep the dodecahedron fixed and move the camera. In this case, I chose the second one.
Now the camera faces centerA; the green circle is the track of the camera, because we need to keep the distance from the camera to the dodecahedron's center constant.
To get the target position, we just scale the centerB vector. Because centerB is in object-space coordinates, we need to apply the object's world matrix to convert it to world-space coordinates.
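Roughly like this (a sketch; it assumes the dodecahedron is centered at the origin and that dodecahedron and camera are in scope):
var radius = camera.position.length();      // keep the camera's orbit radius
var targetPosition = centerB.clone()
  .applyMatrix4( dodecahedron.matrixWorld ) // object space -> world space
  .normalize()
  .multiplyScalar( radius );                // scaled onto the camera's orbit sphere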
Then we translate the camera in an animation; the camera needs to follow an arc.
I use the parametric equation of a circle in 3D space to do that; for the equation, see Parametric Equation of a Circle in 3D Space. With this formula, I get the parameter θ1 for the camera position and θ2 for the target position, and I update the camera position by advancing θ1 towards θ2 in every animation frame.
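In code, the arc could look something like this (a sketch of the same idea, not the exact fiddle code; targetPosition is from the previous sketch, and u, v are orthonormal vectors spanning the plane containing the start and target positions):
var r = camera.position.length();
var u = camera.position.clone().normalize();           // direction at θ = 0
var w = targetPosition.clone().normalize();            // direction we want to reach
var v = new THREE.Vector3()
  .crossVectors( new THREE.Vector3().crossVectors( u, w ), u )
  .normalize();                                        // in-plane, perpendicular to u
var totalAngle = u.angleTo( w );

function moveCamera( t ) { // t goes from 0 to 1 over the animation
  var theta = totalAngle * t;
  camera.position
    .copy( u ).multiplyScalar( Math.cos( theta ) )
    .addScaledVector( v, Math.sin( theta ) )
    .multiplyScalar( r );
  camera.lookAt( scene.position ); // keep facing the dodecahedron's center
}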
I added some comments on the jsfiddle (it still has some bugs and needs updating).
Here is another solution that keeps the camera fixed and rotates the object instead.
I'm trying to create a very simple scene containing a triangular planar face continuously rotating about the x axis.
Here's the code creating the geometry object, as indicated in this previous SO question:
// create triangular plane geometry
var geometry_1 = new THREE.Geometry();
var v1 = new THREE.Vector3(0,0,0);
var v2 = new THREE.Vector3(3,0,0);
var v3 = new THREE.Vector3(0,3,0);
geometry_1.vertices.push(v1);
geometry_1.vertices.push(v2);
geometry_1.vertices.push(v3);
geometry_1.faces.push(new THREE.Face3(0, 1, 2));
The animation function renders the scene and adds a small rotation to the mesh:
function animate() {
requestAnimationFrame( animate );
mesh_1.rotation.x += 0.005;
renderer.render( scene, camera );
}
Everything works fine until the value of mesh.rotation.x goes into the [Math.PI, 2*Math.PI] interval, at which point it disappears for exactly half of the cycle. This JSFiddle replicates the behavior I'm observing.
This is not a lighting problem, as there are an ambient light and a directional light that should illuminate the mesh at every point of its revolution.
This should not be a material problem either, as I set its side property to THREE.DoubleSide, and in fact while mesh.rotation.x is in [0, Math.PI] I can already see both faces.
I tried adding another face to the same geometry with geometry_1.faces.push(new THREE.Face3(0, 2, 1)); but that still didn't solve the problem.
Adding a second geometry with an opposite face (geometry_2.faces.push(new THREE.Face3(0, 2, 1));) and having that mesh rotate negatively (mesh_2.rotation.x -= 0.005;) lets me observe the desired result, because the two geometries now disappear in opposite halves of the [0, 2*Math.PI] interval. This, however, is a hacky and far from ideal solution.
So what is going on? How can I solve it? Thanks!
Documentation says:
OrthographicCamera( left, right, top, bottom, near, far )
so, set your camera like this:
camera = new THREE.OrthographicCamera(-FRUSTUM_SIDE/2, FRUSTUM_SIDE/2,
FRUSTUM_SIDE/2, -FRUSTUM_SIDE/2);
thus you'll have the default near and far camera frustum planes (0.1 and 2000).
Explanation: You set your camera's z position equal to FRUSTUM_SIDE/2 and also set your far camera frustum plane to the same value. So you see everything between your camera position and a distance of FRUSTUM_SIDE/2 from it. In world coordinates, your far plane passes through the point (0, 0, 0). That's why your triangle disappears when it goes farther than FRUSTUM_SIDE/2 from your camera.
Extending the answer from @prisoner849: the problem shown in the JSFiddle has nothing to do with the geometry or the material of the mesh, but with the shape and extent of the frustum defined by the OrthographicCamera.
As explained nicely in this video tutorial and in the documentation, the frustum of an OrthographicCamera is a rectangular parallelepiped defined by the values left, right, top, bottom, near, far:
The camera should effectively be thought of as being attached to the surface on the near side and facing towards negative values of the z axis.
Once the frustum's shape is defined with:
FRUSTUM_SIDE = 20;
camera = new THREE.OrthographicCamera(
-FRUSTUM_SIDE/2, FRUSTUM_SIDE/2,
FRUSTUM_SIDE/2, -FRUSTUM_SIDE/2,
-FRUSTUM_SIDE/2, FRUSTUM_SIDE/2);
we will see in the rendered image all the objects in the scene that fall inside it.
However, after defining the frustum, the position of the camera is changed: camera.position.z = FRUSTUM_SIDE/2; (line 24 of the fiddle). This effectively moves the whole frustum, not just the location of the image, so while previously any object at (0,0,0) was in the center of the frustum, it now lies on the frustum's far plane.
The animation function:
function animate() {
requestAnimationFrame( animate );
mesh_1.rotation.x += 0.005;
renderer.render( scene, camera );
}
rotates the triangle towards the image plane, but for angles in [Math.PI, 2*Math.PI] the triangle effectively lies outside the frustum and thus becomes invisible: the vertex at (0, 3, 0), for instance, ends up at z = 3·sin(rotation.x), which is negative for those angles and therefore behind the far plane sitting at z = 0.
Removing the camera.position.z = FRUSTUM_SIDE/2; line, defining a different frustum, or moving the mesh to the middle of the new frustum are all possible solutions; see e.g. this corrected fiddle.
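For instance, the "different frustum" option amounts to what the previous answer suggested (a sketch):
// keep the camera at z = FRUSTUM_SIDE/2 but give the frustum a conventional
// depth range, so the origin sits well inside it instead of on the far plane
camera = new THREE.OrthographicCamera(
  -FRUSTUM_SIDE / 2, FRUSTUM_SIDE / 2,
   FRUSTUM_SIDE / 2, -FRUSTUM_SIDE / 2,
   0.1, 2000 );
camera.position.z = FRUSTUM_SIDE / 2;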
Three.js version: r79
Basically, I want to have a 3D object (a mesh created with THREE.TextGeometry) act like it's in 2D space but is always in the same exact place on the screen (never moves with the camera, no matter if I zoom or pan). Is there a way to do this?
I'm actually not quite sure how unless I make what I feel is a giant hack and update the coordinates of the text mesh every time there is a mouse scroll event or pan event.
One solution is to add the mesh as a child of the camera.
scene.add( camera ); // required, since the camera has a child
camera.add( mesh );
mesh.position.set( 0, 0, - 100 ); // or whatever
three.js r.79
I'm trying to manage a 3D object that visualizes the controls' target, in order to make rotation and zoom more immediately understandable...
I've made a sample here with an AxisHelper which is supposed to indicate the target and keep a constant size. In OrbitControls.js there are comments on each line I've added.
As you can see, if you pan and zoom (right click and scroll) it manages the cursors too, but the 'helper' has two problems:
the position and scale of the helper are set after the render, which is why it looks somewhat elastic... and moving the position/scale updates into the scope.update() function gives the same result.
the function below scales the helper to a constant size; it computes a world/view scale at a defined point (the controls' target) from a unit vector. But this doesn't seem to be the right approach, because when you scroll to maximum zoom the helper grows.
var point = new THREE.Vector3( 1, 0, 0 );
point = point.applyMatrix4( scope.object.matrixWorld );
var scale = point.distanceTo( scope.target );
helper.scale.set(scale, scale, scale);
So if you have an idea of how to achieve this, you're welcome to share it...
I'm currently developing something similar with three.js, and I think that:
I can't see the demo; my proxy doesn't allow it.
I think that is correct... if the helper object is a mesh or a sprite added to the scene, then when OrbitControls zooms, the camera gets closer to the objects in the scene while the objects' dimensions stay the same.
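(A common approach worth trying, sketched here rather than taken from the answer above: rescale the helper every frame in proportion to its distance from the camera, so that with a perspective camera it keeps a roughly constant on-screen size regardless of zoom. scope.object, scope.target and helper are the names used in the question's code.)
function updateHelperScale() {
  var distance = scope.object.position.distanceTo( scope.target ); // helper sits at the target
  var scale = distance * 0.1; // 0.1 is an arbitrary screen-size factor
  helper.position.copy( scope.target );
  helper.scale.set( scale, scale, scale );
}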