I have two meshes: mesh1 and mesh2. Both have the same number of vertices and both are extruded shapes.
mesh1 = 5000 vertices.
mesh2 = 5000 vertices.
I assign the vertices of mesh1 to mesh2. Then I do:
mesh2.geometry.verticesNeedUpdate = true;
mesh2.geometry.vertices = mesh1.geometry.vertices;
This updates mesh2's vertices, but it happens instantly; I cannot see any animation while mesh2's vertices move to mesh1's vertices.
I want to see the transformation as mesh2 starts to become mesh1, i.e. an animation of the vertices while they are changing.
I used "Tween.js" for animations such as position and color. I'm not sure if this can help to view animations when vertices begin to change.
I do:
new TWEEN.Tween( mesh2.geometry.vertices ).to( mesh1.geometry.vertices, 1000 ).start();
but it does not work. Sorry for my level of English.
As you've seen, this doesn't work -- in part because on every frame in which the tween updates the mesh vertices, you must also set mesh2.geometry.verticesNeedUpdate = true;.
--
More specifically, I think you will want to add .onUpdate(function() { mesh2.geometry.verticesNeedUpdate = true; }) to your tween.
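Note also that Tween.js may not interpolate an array of nested Vector3 objects the way you expect; tweening each vertex individually avoids the issue. A minimal sketch, assuming the legacy THREE.Geometry with a vertices array and that TWEEN.update() already runs in your render loop:
// One tween per vertex: move mesh2's vertex toward the matching vertex of mesh1.
mesh2.geometry.vertices.forEach(function (vertex, i) {
    var target = mesh1.geometry.vertices[i];
    new TWEEN.Tween(vertex)
        .to({ x: target.x, y: target.y, z: target.z }, 1000)
        .onUpdate(function () {
            // flag the geometry every frame so the new positions reach the GPU
            mesh2.geometry.verticesNeedUpdate = true;
        })
        .start();
});
// ...and TWEEN.update() must be called from the animation loop:
function animate(time) {
    requestAnimationFrame(animate);
    TWEEN.update(time);
    renderer.render(scene, camera);
}
With 5000 vertices you may prefer a single tween driving an interpolation factor between stored start and end positions, but the idea is the same.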
Related
I made a line mesh with an elliptical shape, representing an orbit with eccentricity e and semi-major axis a. The mesh is a child of a group called orbitGroup that contains other objects.
I also added a GUI to change these parameters. Every time the GUI changes, it calls the following function:
function ElementsUpdate(){
    scene.remove(orbitGroup);
    orbitGroup.remove(Orbit);
    Orbit = undefined;
    Orbit = new THREE.Line( GetGeometryOrbit(GetOrbitLine(a,e,100)), materialOrbit);
    orbitGroup.add(Orbit);
    scene.add(orbitGroup);
}
The mesh (Orbit) is created successfully; however, it does not update. I'm aware that the setGeometry method no longer works. Any solution? I am replacing the whole mesh because replacing only the geometry seemed more complicated.
Thanks in advance for the help.
The project is in this link
You should be able to replace the vertex (position) buffer and call it a day.
function ElementsUpdate(){
    let points = GetOrbitLine(a,e,100).getPoints(); // THREE.Curve.getPoints
    Orbit.geometry.setFromPoints( points );         // replaces the position buffer
}
Curve.getPoints gives you an array of the points on your ellipse.
BufferGeometry.setFromPoints replaces the position buffer, derived from your array of points.
Because it replaces the buffer (and the BufferAttribute), you should not need to mark anything as needing to be re-sent to the GPU.
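If it helps, here is a hypothetical sketch of what GetOrbitLine could look like when it is meant to return a THREE.Curve; the helper in the linked project may differ, and here the segment count is passed to getPoints instead of to the helper:
// Hypothetical helper: build the ellipse as a THREE.EllipseCurve with the focus at the origin.
function GetOrbitLine(a, e) {
    var b = a * Math.sqrt(1 - e * e); // semi-minor axis from a and e
    var c = a * e;                    // center-to-focus distance
    return new THREE.EllipseCurve(-c, 0, a, b, 0, 2 * Math.PI, false, 0);
}
// Usage inside ElementsUpdate: sample the curve, then rebuild the position buffer.
var points = GetOrbitLine(a, e).getPoints(100); // 100 segments along the ellipse
Orbit.geometry.setFromPoints(points);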
I'm trying to use Three.js to generate some points with THREE.Points and then make the points themselves revolve around a single point (or mesh). I've generated the points randomly in a cylindrical region as described in this answer, and reviewed posts such as this one, which doesn't seem to apply because it rotates a mesh around a mesh.
Here's what I've got so far:
// Mesh that is to be revolved around
const blackHoleGeometry = new THREE.SphereGeometry(10, 64, 64);
const blackHoleMaterial = new THREE.MeshBasicMaterial({
    color: 0x000000
});
const blackHole = new THREE.Mesh(blackHoleGeometry, blackHoleMaterial);
scene.add(blackHole);

// Points that are supposed to revolve around the mesh
const particles = new THREE.PointsMaterial({
    color: 0xffffff
});
const geometry = new THREE.Geometry();
// [...] code to generate random points
const pointCloud = new THREE.Points(geometry, particles);
pointCloud.rotation.x = Math.PI / 2; // to rotate it 90 degrees
You can see a full demo here. How can I have the points all revolve around the sphere mesh, i.e. each vertex of the point cloud's geometry revolving around a center point or mesh, like a planet around a star?
There are a couple of problems with your code, but here is an updated fiddle for you: https://jsfiddle.net/pwwkght0/2/
When you create your Three.js scene, keep all of the setup code in the init function. I moved everything in there and then called init outside of the function. When you do this, init builds the scene until it reaches its last line, which calls animate. You want to call animate instead of render because animate requests an animation frame and calls render on each one.
function init() {
    // do all the things to make the scene

    function animate() {
        requestAnimationFrame(animate);
        orbit();
        controls.update();
        render();
    }

    function render() {
        renderer.render(scene, camera);
    }

    animate();
}
init();
So now that you are requesting animation frames, it's time to make things orbit. I made a very simple function to grab your point cloud and rotate it on its z-axis, to simulate it rotating around the sphere. Notice how orbit is called in animate:
function orbit() {
    pointCloud.rotation.z += 0.01;
}
You can take this a step further and have each point orbit at a different speed by updating the individual vertices of pointCloud's geometry each frame (the points are vertices of a single geometry, not separate child objects).
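A sketch of that per-point idea, assuming the legacy THREE.Geometry from the question and the same z-axis convention as orbit above (the speeds array is hypothetical, one random angular speed per point):
// One random angular speed (radians per frame) per point.
var speeds = pointCloud.geometry.vertices.map(function () {
    return 0.002 + Math.random() * 0.01;
});

function orbitPerPoint() {
    pointCloud.geometry.vertices.forEach(function (v, i) {
        var angle = speeds[i];
        var x = v.x * Math.cos(angle) - v.y * Math.sin(angle);
        var y = v.x * Math.sin(angle) + v.y * Math.cos(angle);
        v.x = x; // rotate this vertex around the geometry's z-axis
        v.y = y;
    });
    pointCloud.geometry.verticesNeedUpdate = true; // re-send positions to the GPU
}
Call orbitPerPoint() from animate in place of (or alongside) orbit().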
I have a three.js project where I'm adding 100 meshes, divided across 10 scenes:
// add 100 meshes to 10 scenes based on an array containing 100 elements
for (var i = 0; i < data.length; i++) {
    mesh = new THREE.Mesh(geometry, material);
    // random positions so they don't spawn on the same spot
    mesh.position.x = THREE.Math.randInt(-500, 500);
    mesh.position.z = THREE.Math.randInt(-500, 500);
They're added in a loop, and all of these meshes are assigned to 10 scenes:
    // Assign 10 meshes per scene.
    var sceneIndex = Math.floor(i / 10);
    scenes[sceneIndex].add(mesh);
}
I also wrote some functionality that lets me rotate a mesh around the center of the scene.
But I don't know how to apply that rotation functionality to all the meshes while still keeping them divided into their corresponding scenes. This probably sounds too vague, so I have a fiddle that holds all of the relevant code.
If you comment these two lines back in, you'll see that the meshes all move to scenes[0] and rotate the way I wanted, but I still need them divided into their individual scenes.
spinningRig.add(mesh);
scenes[0].add(spinningRig);
What should the code look like? What is the logic behind it?
The logic is fairly simple. The simplest format would be to have a separate spinningRig for each scene -- essentially a grouping of the meshes for each scene.
When you create each scene, also create a spinningRig, add it to that scene, and keep a reference to it:
// Setup 10 scenes
for (var i = 0; i < 10; i++) {
    scenes.push(new THREE.Scene());

    // Add the spinningRig to the scene
    var spinningRig = new THREE.Object3D();
    scenes[i].add(spinningRig);

    // Track the spinningRig on the scene, for convenience.
    scenes[i].userData.spinningRig = spinningRig;
}
Then instead of adding the meshes directly to the scene, add them to the spinningRig for the scene:
var sceneIndex = Math.floor(i/10);
scenes[sceneIndex].userData.spinningRig.add(mesh);
And finally, rotate the spinningRig assigned to the currentScene:
currentScene.userData.spinningRig.rotation.y -= 0.025;
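For example, the animation loop could look like this (a sketch; currentScene is assumed to be whichever of the 10 scenes is currently displayed, as in the fiddle below):
function animate() {
    requestAnimationFrame(animate);
    // Spin only the rig that belongs to the scene currently being shown.
    currentScene.userData.spinningRig.rotation.y -= 0.025;
    renderer.render(currentScene, camera);
}
animate();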
See jsFiddle: https://jsfiddle.net/712777ee/4/
I have a small web app that I've designed for viewing bathymetric data of the seafloor in Three.js. Basically, I use a loader to bring JSON models of my extruded bathymetry into the scene and allow the user to rotate the model or click "next" to load a new part of the seafloor.
All of my models have the same 2D footprint, so they are identical in two dimensions; only the elevations and textures change from model to model.
My question is this: What is the most cost effective way to update my model?
1. Removing the mesh with scene.remove(mesh);, calling my loader again to load a new model, and then adding it to the scene with scene.add(mesh);.
2. Updating the existing mesh by calling my loader to bring in the material and geometry, then setting mesh.geometry = geometry; and mesh.material = material;, and finally flagging mesh.geometry.needsUpdate.
I've heard that updating is computationally intensive, but all of the articles I've read on this state that the two methods are almost the same. Is that correct? Is there a better way to approach my code in this instance?
An alternative I've considered is skipping the step where I create the model (in Blender) and instead using a displacement map to update the y coordinates of my vertices. To update, I could then push new vertex positions onto an existing plane geometry before replacing the material. Would this be a sound approach? At the very least, the displacement map would be a smaller file to load than a JSON file. I could even optimize the display with a GUI element that divides the mesh into more or fewer divisions for a high- or low-quality render...
I don't know off the top of my head exactly what happens under the hood, but from what I remember these two approaches are essentially the same thing.
You aren't really updating the existing mesh. A mesh extends Object3D, so it just sits there, wiring together some geometry and a material.
mesh.geometry = geometry did not "update the mesh" -- or rather it did, but with new geometry (which may be what you are actually referring to as the mesh).
In other words, you always keep your container, but when you replace the geometry with = geometry you set it up for all sorts of GL calls in the next THREE.WebGLRenderer.render() call.
Where that new geometry gets attached, be it an existing mesh or a new one, shouldn't matter at all. The geometry is what triggers the low-level WebGL calls such as gl.bufferData().
// Uploads two geometries to the GPU on the first render()
var meshA = new THREE.Mesh( new THREE.BoxGeometry(1,1,1) );
var meshB = new THREE.Mesh( new THREE.BoxGeometry(1,1,1) );

// Uploads one geometry to the GPU on the first render()
var bg = new THREE.BoxGeometry();
var meshA = new THREE.Mesh( bg );
var meshB = new THREE.Mesh( bg );
for ( var i = 0; i < someBigNumber; i++ ) {
    var meshTemp = new THREE.Mesh( bg );
}
// It doesn't matter that you have X meshes; you only have one geometry.

// One mesh, two geometries / two "computations"
var meshA = new THREE.Mesh( new THREE.BoxGeometry() ); // first computation - compute box geometry
scene.add( meshA );
renderer.render( scene, camera );  // upload the box to the GPU
meshA.geometry = new THREE.SphereGeometry();
renderer.render( scene, camera );  // upload the sphere to the GPU
THREE.Mesh seems to be the most confusing concept in three.js.
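As for the displacement-map idea in the question, a minimal sketch of updating the heights in place, assuming a legacy plane geometry whose vertex count matches the heightmap resolution and a flat heights array with one elevation per vertex (both names are hypothetical):
function applyHeights(mesh, heights) {
    var vertices = mesh.geometry.vertices;    // legacy THREE.Geometry
    for (var i = 0; i < vertices.length; i++) {
        vertices[i].z = heights[i];           // PlaneGeometry lies in XY, so z is "up" before any rotation
    }
    mesh.geometry.verticesNeedUpdate = true;  // re-upload positions on the next render
    mesh.geometry.computeVertexNormals();     // keep lighting correct after displacement
    mesh.geometry.normalsNeedUpdate = true;
}
Because the vertex count never changes, only the existing position buffer is re-sent; no new geometry has to be uploaded.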
I am currently working on a small project using the new Babylon.js framework. I have two meshes: one is the background, and the other follows the cursor to mark where on the background mesh you are targeting. The problem is that when I move the targeting mesh to the cursor position, it blocks the background mesh when I use scene.pick, so the targeting mesh ends up having its position set on itself.
Is there any way to ignore the targeting mesh when using scene.pick so that I only pick the background mesh, or is there some other method I could use? If not, what would be the steps to implement this sort of feature, essentially raycasting only through certain meshes?
If you need code samples or any other forms of description, let me know. Thanks!
Ok, it's easy.
So, we have two meshes: one called "ground" and one called "cursor". If you want to pick only the ground, you have two solutions:
First:
var ground = new BABYLON.Mesh("ground",scene);
ground.isPickable = true ;
var cursor = new BABYLON.Mesh("cursor", scene);
cursor.isPickable = false;
...
var p = scene.pick(event.clientX, event.clientY); // it returns only "isPickable" meshes
...
Second:
var ground = new BABYLON.Mesh("ground",scene);
var cursor = new BABYLON.Mesh("cursor", scene);
...
var p = scene.pick(event.clientX, event.clientY, function(mesh) {
    return mesh.name == "ground"; // so only the ground will be pickable
});
...
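To tie it together, here is a sketch of moving the cursor mesh to the picked point on pointer move, using the predicate form above (scene.onPointerMove, pickInfo.hit, and pickInfo.pickedPoint are standard Babylon.js API; the rest of the wiring in your app may differ):
scene.onPointerMove = function () {
    // Pick only the ground so the cursor mesh never blocks its own ray.
    var pickInfo = scene.pick(scene.pointerX, scene.pointerY, function (mesh) {
        return mesh.name == "ground";
    });
    if (pickInfo.hit) {
        cursor.position.copyFrom(pickInfo.pickedPoint); // snap the cursor to the hit point
    }
};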
Regards.