I'm currently writing a 3D implementation of the boids algorithm in p5.js, but I'm having trouble orienting my boids according to their direction (velocity). Rotations are limited to rotateX(), rotateY() and rotateZ(). The simplest solution that I feel should work goes along these lines:
push();
translate(this.pos);
rotateZ(createVector(this.vel.x, this.vel.y).heading());
rotateY(createVector(this.vel.x, this.vel.z).heading());
beginShape();
// Draw Boid Vertices..
endShape();
pop();
But it doesn't.
I've written a much smaller version of the program which contains only the orientation code, for randomly generated particles that all move in a single direction. It is available here directly on the p5.js web editor: https://editor.p5js.org/itsKaspar/sketches/JvypSPGGh
There is a default orbit control so you can zoom and drag the mouse to check the orientation of the particles.
Thanks so much, I've been stuck on this for half a day now
From your demo, the z component is flipped, and you can test this by trying only one of the rotations at a time. Second, chaining rotations in 3D this way will usually not do what you want, because each rotation changes the "up" or "right" vector of the coordinate system attached to the object. For example, rotating about the up vector (-y in p5), i.e. the yaw angle, also rotates the right vector. The second rotation then needs to be about that rotated right vector (now the pitch axis), so you can't just use rotateX/Y/Z, since they still rotate about world-space axes instead of object-space ones. Note that I'm completely ignoring roll in this solution, but if you look at the boids from the front and top angles, they should be aligned with the velocities.
var right = createVector(this.vel.x, 0, this.vel.z); // horizontal projection of the velocity
rotate(atan(this.vel.y / this.vel.x), right);        // pitch about that horizontal axis
rotateY(atan2(-this.vel.z, this.vel.x));             // yaw about the world up axis
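For context, these corrected lines would replace the rotateZ/rotateY pair inside the original push/translate/draw block, i.e. something along these lines (show() is just a hypothetical method name):
show() {
  push();
  translate(this.pos);
  let right = createVector(this.vel.x, 0, this.vel.z); // horizontal projection of the velocity
  rotate(atan(this.vel.y / this.vel.x), right);        // pitch about that horizontal axis
  rotateY(atan2(-this.vel.z, this.vel.x));             // yaw about the world up axis
  beginShape();
  // Draw Boid Vertices..
  endShape();
  pop();
}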
I'm using a PlaneGeometry as water and have added a ship (a glTF model) on it. The problem is that when the boat sits slightly into the water, the water shows inside the boat, even though the boat is afloat. Is there a way to clip the water where a boat (or other models/objects) intersects with it, as it moves?
Every material in Three.js has an alphaMap property that you can use to control the opacity of the mesh per texel. So you could draw a black rectangle where the boat is to "cut out" that part of the water plane, giving it an opacity of 0.
I presume your boat is going to be moving, so your texture would also need to move this black rectangle. To solve this, you could use an HTML <canvas> element to draw and move the black rectangle, then use THREE.CanvasTexture to turn the <canvas> into a texture for your plane's alpha map.
const drawingCanvas = document.getElementById( 'drawing-canvas' );
const canvasTexture = new THREE.CanvasTexture( drawingCanvas );
const waterMaterial = new THREE.MeshBasicMaterial({
  transparent: true,
  alphaMap: canvasTexture
});
See this working demo for how to use a 2D canvas as a texture in your 3D scene. When you draw on the top-left square, you'll see it being applied to the cube. You could copy this approach, but instead of assigning it to material.map, you'd use it on material.alphaMap.
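As a rough sketch of how the rectangle could be redrawn each frame (the mapping from the boat's position to canvas pixels and the footprint size are assumptions):
const ctx = drawingCanvas.getContext('2d');

function updateAlphaMap(boatX, boatZ) {
  // for an alpha map, white is fully opaque water and black is fully cut out
  ctx.fillStyle = '#ffffff';
  ctx.fillRect(0, 0, drawingCanvas.width, drawingCanvas.height);
  ctx.fillStyle = '#000000';
  // map the boat's world x/z into canvas pixels however your plane is laid out
  ctx.fillRect(boatX, boatZ, 60, 120); // hypothetical footprint in canvas pixels
  canvasTexture.needsUpdate = true;    // tell three.js to re-upload the canvas
}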
You could check the position of the ship's hull vertices each frame and, given enough points on the PlaneGeometry, displace every surface vertex inside the ship hull to the y coordinate of the nearest hull vertex. This could also be done with a custom shader, which would probably be more efficient.
@Marquizzo's idea with the alpha channel is probably the better solution, since I assume you don't really want to simulate the displacement of the water, but simply hide the water inside the ship. In this case, I would place an orthographic camera above the ship, set the near and far clipping planes as close as possible to the water plane, and render to a target that is used as the alpha map (in place of the CanvasTexture). This way you get a real-time alpha map that also reacts to the rolling, pitching and heave of the ship.
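A rough sketch of that top-down mask pass; the maskScene, frustum bounds and resolution here are assumptions, not from the original answer:
// orthographic camera looking straight down at the water plane
const maskCamera = new THREE.OrthographicCamera(-50, 50, 50, -50, 0.1, 10); // tune to plane size
maskCamera.position.set(0, 5, 0);
maskCamera.up.set(0, 0, -1);   // avoid a degenerate up vector when looking straight down
maskCamera.lookAt(0, 0, 0);

// render target whose texture is fed to the water material's alphaMap
const maskTarget = new THREE.WebGLRenderTarget(512, 512);
waterMaterial.alphaMap = maskTarget.texture;

function renderMask(renderer, maskScene) {
  // maskScene would hold a white backdrop and the ship hull drawn in plain black
  renderer.setRenderTarget(maskTarget);
  renderer.render(maskScene, maskCamera);
  renderer.setRenderTarget(null);
}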
The floating effect is made with a sine function applied to the y-coordinate. You can use the same principle to make the glTF model (ship) move along any coordinate axis.
Example:
position.y = Math.sin(ship.userData.initFloating + t) * 0.15;
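In the render loop this could be combined with small sine-based rotations; the amplitudes and frequencies here are purely illustrative:
function animateShip(ship, t) {
  ship.position.y = Math.sin(ship.userData.initFloating + t) * 0.15; // heave
  ship.rotation.z = Math.sin(t * 0.7) * 0.05;                        // roll
  ship.rotation.x = Math.cos(t * 0.5) * 0.03;                        // pitch
}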
I have an object (a pen) in my scene, which is rotating around its axis in the render loop.
groupPen.rotation.y += speed;
groupPen.rotation.x += speed;
and I also have a TrackballControls, which allows the user to rotate the whole scene.
What I now want is to get the "real" (world) position of the pen (or of its tip) and place small spheres to create a trail behind it.
This means I need to know where the camera is looking, place the trail spheres behind the tip of the pen, and exclude them from the animation and the TrackballControls.
What I tried is:
groupSphereTrail.lookAt(camera.position);
didn't work; there was no reaction at all.
camera.add(groupSphereTrail);
didn't work either: groupSphereTrail is then no longer in the view area, and I couldn't make it visible; manipulating position.z didn't help.
Then I tried something like casting a ray with a Raycaster. The idea was to send a ray from the center of the camera through the tip of the pen and then draw the trail there. But then I still don't have the "real" position.
Another idea was to create a 2D vector from the current position of the pen tip and just draw an HTML element on top of the canvas:
var p = penPeak.position.clone();
var vector = p.project(camera);
vector.x = (vector.x + 1) / 2 * width;
vector.y = -(vector.y - 1) / 2 * height;
but this also doesn't work.
What could be another working solution?
Current progress:
https://zhaw.swissmade.xyz
(click on the cap of the pen to see the writing; this writing trail should stay in place when you rotate the camera)
If I understood the question right, you want to show the trail as if it were drawn on the screen itself (in screen space)?
yourTrailParticle.position.project(camera)
camera.add(yourTrailParticle)
That's the basic idea, but it gets a bit tricky with a PerspectiveCamera. You could set up a whole new THREE.Scene to hold the trail, and render it with a fixed-size orthographic camera.
The point is that .project() gives you the normalized screen-space coordinate of a world-space vector, and you need to keep it in sync with that camera (since the screen is too). The perspective camera has distortion, so you need to figure out the appropriate distance to map the coordinate to. With a separate scene, this may become easier.
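To make that more concrete, here is a small sketch of the mapping step and of rendering a separate trail scene on top; the scene and variable names are assumptions, not from the original project:
// convert a world-space position to pixel coordinates on the canvas
function worldToScreen(worldPos, camera, renderer) {
  const ndc = worldPos.clone().project(camera);        // normalized device coords in -1..1
  const size = renderer.getSize(new THREE.Vector2());
  return new THREE.Vector2(
    (ndc.x + 1) / 2 * size.x,
    (1 - ndc.y) / 2 * size.y
  );
}

// note: if the pen tip sits inside a rotating group, project its world position,
// e.g. penPeak.getWorldPosition(new THREE.Vector3()), not its local .position

// the trail itself can live in its own scene drawn with an orthographic camera
renderer.autoClear = false;
renderer.render(mainScene, camera);        // hypothetical main scene
renderer.clearDepth();
renderer.render(trailScene, orthoCamera);  // hypothetical screen-space trail scene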
I am trying to make a Three.js scene where I've got a dodecahedron. I want the camera to be zoomed in on one side of the dodecahedron and when a button is pressed I want it to zoom out, rotate until it is standing across another side and then zoom in again.
To make this clear:
If the camera were fully zoomed in on side 1 and I pressed "5", I would want the camera to zoom out, showing the dodecahedron, then rotate towards side 5 (or let the dodecahedron rotate so that side 5 faces the camera?) and zoom in again. It's important that the camera always ends up parallel with the base of the pentagon it's facing, not the top or any other rotation.
I thought it would be smart to start off with just a cube, so as not to begin too complicated. I added some tweens (triggered by pressing G) to illustrate some basic movement, but that doesn't look too good anymore in the fiddle: jsfiddle
Because I feel like I should have a function that does all this movement and calculation for me, I first tried to write down the position and rotation of each side view of the cube, to see if I could detect a pattern. I can see some pattern in the values I wrote down for the cube, but I do not know how to convert it into a working function, let alone one for a dodecahedron. My noted values are:
side1 (0, 0, 600) (0, 0, 0)
side2 (600, 0, 0) (0, pi/2, 0)
side3 (0, 0, -600) (0, pi, 0);
side4 (-600, 0, 0) (0, -pi/2, 0);
side5 (0, 600, 0) (-pi/2, 0, 0);
side6 (0, -600, 0) (pi/2, 0, 0);
I can see some sort of recurrence and some relationships, but I don't see how to link them in a function. I think that would be a first step toward a function doing the same for a more complex shape. Could anyone point me in the direction I should be looking? I could of course work with a lot of if clauses, but that doesn't feel like the correct way to go.
To solve the problem, the first thing we do is get the center coordinate of each side of the dodecahedron.
As you know, in three.js a mesh consists of triangles; every triangle has three points, and all the faces and vertices can be found in mesh.geometry.faces and mesh.geometry.vertices. In a dodecahedron, each side is made of three faces and five vertices, and I use each face's normal to divide the faces into 12 groups that share the same normal and lie in the same plane. That gives us the 5 points of each side; we average their coordinates to get the center coordinate.
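A sketch of that grouping step, assuming the legacy Geometry API with .faces and .vertices described above (newer three.js versions expose BufferGeometry attributes instead):
// group the triangular faces of the dodecahedron by their normals
const groups = []; // each entry: { normal, vertexIndices }
mesh.geometry.faces.forEach(face => {
  let group = groups.find(g => g.normal.angleTo(face.normal) < 0.01);
  if (!group) {
    group = { normal: face.normal.clone(), vertexIndices: new Set() };
    groups.push(group);
  }
  [face.a, face.b, face.c].forEach(i => group.vertexIndices.add(i));
});

// average the five vertices of each side to get its center (object space)
const centers = groups.map(g => {
  const center = new THREE.Vector3();
  g.vertexIndices.forEach(i => center.add(mesh.geometry.vertices[i]));
  return center.divideScalar(g.vertexIndices.size);
});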
After getting the coordinates we need to rotate. One way is to rotate the dodecahedron and keep the camera still; another is to keep the dodecahedron still and move the camera. In this case I chose the second one.
Now the camera faces centerA. The green circle is the track of the camera, since we need to keep the distance from the camera to the dodecahedron's center constant.
To get the target position, we just scale the centerB vector. Because centerB is given in the object's coordinate system, we need to apply the object's world matrix to convert it to world coordinates.
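In code, that step might look roughly like this (the mesh and distance names are assumptions, and it assumes the dodecahedron sits at the world origin):
// bring the side's center into world space, then push it out to the camera's radius
const distance = camera.position.length();      // radius of the green circle
const targetPos = centerB.clone()
  .applyMatrix4(mesh.matrixWorld)               // object space -> world space
  .normalize()
  .multiplyScalar(distance);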
Then we translate the camera in an animation; the camera needs to travel along an arc.
I use the parametric equation of a circle in 3D space to do that; for the equation, see Parametric Equation of a Circle in 3D Space. With this formula I get the parameter θ1 at the camera position and θ2 at the target position, and I step the camera position between them in every animation frame.
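A sketch of that interpolation, moving the camera from its current position A to the target position targetPos (both at radius R from the center); the helper names are illustrative and it assumes the two positions are not parallel:
// orthonormal basis (u, v) for the plane of the arc
const A = camera.position.clone();
const B = targetPos.clone();
const R = A.length();
const u = A.clone().normalize();
const w = new THREE.Vector3().crossVectors(A, B).normalize(); // normal of the circle
const v = new THREE.Vector3().crossVectors(w, u);             // in-plane, perpendicular to u
const thetaMax = A.angleTo(B);

// in the animation loop, step t from 0 to 1
function cameraPosAt(t) {
  const theta = t * thetaMax;
  return u.clone().multiplyScalar(R * Math.cos(theta))
          .add(v.clone().multiplyScalar(R * Math.sin(theta)));
}
// e.g. camera.position.copy(cameraPosAt(t)); camera.lookAt(0, 0, 0);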
I added some comments on the jsfiddle. (It still has some bugs that need to be fixed.)
Here is another solution that keeps the camera fixed and rotates the object instead.
I have a large circle with smaller ones inside made using two.js.
My problem is that these two do not rotate in place but around the top-left origin.
I want the group of circles (circlesGroup) to rotate only inside the large one, staying in a fixed position. The circlesGroup and the large circle are grouped together as rotatoGroup.
two.bind('update', function(frameCount, timeDelta) {
circlesGroup.rotation = frameCount / 120;
});
two.bind('update', function(frameCount, timeDelta) {
rotatoGroup.rotation = frameCount / 60;
});
The whole code is in CodePen.
All visible shapes invoked with two.make... (circles, rectangles, polygons, and lines) are oriented about their center, like this Adobe Illustrator example:
When a shape's translation, rotation, or scale changes, those changes will be reflected as transformations about the center of the shape.
Two.Groups, however, do not behave this way. Think of them as display-less rectangles. Their origin, i.e. the group.translation vector, always begins at (0, 0). In your case you can deal with this by normalizing the translation you're defining on all your circles.
Example 1: Predefined in normalized space
In this codepen example we're defining the positions of all the circles between -100 and 100, effectively half the radius in both the positive and negative x and y directions. Once we've defined the circles within these constraints, we can move the whole group with group.translation.set to place it in the center of the screen. Now when the circles rotate they are perceived as rotating around themselves.
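Roughly, the setup looks like this (the radii, counts and variable names are placeholders, not the exact values from the pen):
// define the small circles around (0, 0) so the group's origin is its visual center
var circlesGroup = two.makeGroup();
for (var i = 0; i < 8; i++) {
  var angle = i / 8 * Math.PI * 2;
  circlesGroup.add(two.makeCircle(Math.cos(angle) * 100, Math.sin(angle) * 100, 20));
}

// then move the whole group to the middle of the screen
circlesGroup.translation.set(two.width / 2, two.height / 2);

// rotating the group now reads as rotating around its own center
two.bind('update', function(frameCount) {
  circlesGroup.rotation = frameCount / 120;
});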
Example 2: Normalizing after the fact
In this codepen example we're working with what we already have: a Two.Group that contains all of our shapes (the bigger circle as well as the array of smaller circles). By using the method group.center() (line 31) we can normalize the children of the group to be around (0, 0). We can then change the translation of the group to put it in the desired position.
N.B.: This example is a bit complicated because it invokes underscore's defer method, which forces the centering of the group after all the changes have been registered. I'm in the process of fixing this.
Oh great and knowledgeable Stack Overflow, I humbly request your great minds' assistance...
I'm using the three js library, and I need to implement a 'show extents' button. It will move the camera to a position such that all the objects in the world are visible in the camera view (given they are not blocked of course).
I can find the bounding box of all the objects in the world, say they are w0x,w0y,w0z and w1x,w1y,w1z
How can I, given these two bounds, place the camera such that it will have a clear view of the edges of the box?
Obviously there will have to be a 'side' chosen to view from...I've googled for an algorithm to no avail!
Thanks!
So let's say that you have picked a face, and that you are picking a camera position so that the camera's line of sight is parallel to one of the axes.
Let's say that the face has a certain width, "w", and that your camera has a horizontal field of view "a". What you want to figure out is the distance "d" from the center of the face at which the camera should sit to see the whole width.
If you draw it out you will see that you basically have an isosceles triangle whose base is length w and with the angle a at the apex.
Not only that, but the angle bisector of the apex angle forms two identical right triangles, and its length (to the base) is the distance we need to figure out.
Trig tells us that the tangent of an angle is the ratio of the opposite and adjacent sides of the triangle. So
tan(a/2) = (w/2) / d
simplifying to:
d = w / (2 * tan(a/2))
So if you are placing the camera some axis-aligned distance from one of your bounding-box faces, you just need to move distance d along the axis of choice.
Some caveats: make sure you are using radians as input to the JavaScript trig functions. Also, you may have to compute this again for the face height and the camera's vertical field of view, and pick the farther of the two distances if your face is not square.
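In three.js terms, a minimal sketch of that calculation, assuming a view straight down the z axis (the box and camera names are illustrative):
// box is a THREE.Box3 enclosing everything in the scene
const size = box.getSize(new THREE.Vector3());
const center = box.getCenter(new THREE.Vector3());

const vFov = THREE.MathUtils.degToRad(camera.fov);              // vertical fov, radians
const hFov = 2 * Math.atan(Math.tan(vFov / 2) * camera.aspect); // horizontal fov

// d = w / (2 * tan(a/2)), computed for both width and height; keep the larger
const distH = size.x / (2 * Math.tan(hFov / 2));
const distV = size.y / (2 * Math.tan(vFov / 2));
const dist = Math.max(distH, distV) + size.z / 2; // step back past the near face of the box

camera.position.set(center.x, center.y, center.z + dist);
camera.lookAt(center);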
If you want to fit the bounding box from an arbitrary angle you can use the same ideas, but first you have to find the (aligned) bounding box of the scene projected onto a plane perpendicular to the camera's line of sight.