Animating LineSegments in THREE.js

I have a solid MorphBlendMesh that is overlaid with a LineSegments object using EdgesGeometry/LineBasicMaterial in order to create a wireframe look without the "diagonals" that result from the triangle approach in newer versions of three.js. The problem is that I cannot find a way to get the LineSegments to animate along with the mesh, presumably because it isn't a mesh, it's simply an Object3D.
Is there a way to animate a LineSegments object with AnimationMixer? Or replicate this same look with a mesh setup that works well with AnimationMixer?
For reference, my question is essentially an expansion of this question -- same idea, but it MUST be capable of animation with AnimationMixer.

You can attach an arbitrary property to the LineSegments object and target it from the mixer. This property will hold the vertices.
// `lines` is the LineSegments being animated; `line` holds the target
// (deformed) vertex positions for the end of the clip.
const a: any = ((lines.geometry as THREE.BufferGeometry).attributes.position as THREE.BufferAttribute).array;
const p: any = ((line.geometry as THREE.BufferGeometry).attributes.position as THREE.BufferAttribute).array;
// Attach a custom property that the mixer can write interpolated values into.
(lines as any).value = [...a];
const keyFrame2 = new THREE.NumberKeyframeTrack(
  '.value',
  [0, 1],
  [...a, ...p],
  THREE.InterpolateSmooth
);
this.lineGeometriesToUpdate.push(lines as THREE.LineSegments);
const clip2 = new THREE.AnimationClip('lines', 1, [keyFrame2]);
const mixer2 = new THREE.AnimationMixer(lines);
const ca2 = mixer2.clipAction(clip2);
ca2.play(); // start the action so the mixer actually interpolates
this.mixer.push(mixer2);
Then, in your animation loop, you use this property to update the geometry:
this.lineGeometriesToUpdate.forEach(l => {
  const geom = l.geometry as THREE.BufferGeometry;
  // Read back the values the mixer wrote into the custom property.
  const values = (l as any).value;
  geom.setAttribute('position', new THREE.BufferAttribute(new Float32Array(values), 3));
  (geom.attributes.position as THREE.BufferAttribute).needsUpdate = true;
});
this.renderScene();
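Note that the keyframe values only advance if every mixer is stepped each frame. A minimal sketch of that part of the loop, assuming this.clock is a THREE.Clock and this.mixer is the array of mixers populated above:
const delta = this.clock.getDelta();
// Advance all mixers; this writes the interpolated values into `.value`.
this.mixer.forEach(m => m.update(delta));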

Related

Scaling a 2D SVG group object in three.js

I'm attempting to create a map of 2D SVG tiles in three.js. I have used SVGLoader() like so (keep in mind some brackets are for parent scopes that aren't shown; that is not the issue):
loader = new SVGLoader();
loader.load(
  // resource URL
  filePath,
  // called when the resource is loaded
  function ( data ) {
    console.log("SVG file successfully loaded");
    const paths = data.paths;
    for ( let i = 0; i < paths.length; i ++ ) {
      const path = paths[ i ];
      const material = new THREE.MeshBasicMaterial( {
        color: path.color,
        side: THREE.DoubleSide,
        depthWrite: false
      } );
      const shapes = SVGLoader.createShapes( path );
      console.log(`Shapes length = ${shapes.length}`);
      try {
        for ( let j = 0; j < shapes.length; j ++ ) {
          const shape = shapes[ j ];
          const geometry = new THREE.ShapeGeometry( shape );
          const testGeometry = new THREE.PlaneGeometry( 2, 2 );
          try {
            const mesh = new THREE.Mesh( geometry, material );
            group.add( mesh );
          } catch (e) { console.log(e) }
        }
      } catch (e) { console.log(e) }
    }
  },
  // called when loading is in progress
  function ( xhr ) {
    console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
  },
  // called when loading has errors
  function ( error ) {
    console.log( 'An error happened' );
  }
);
return group;
}
Dismiss the fact that I surrounded a lot of it in try{}catch(){}.
I have also created grid lines and added them to my axis helper in the application, which allows me to see where each coordinate is in relation to the X and Y axes.
This is how the SVG appears on screen:
[screenshot: Application Output]
I can't seem to figure out how to correlate the scale of the SVG with the individual grid lines. I have a feeling that I'm going to have to dive deeper into the SVG loading script above and then scale each shape mesh specifically. I call the SVG group itself in the following code.
try {
  // SVG returns a group, TGA returns a texture to be added to a material
  var object1 = LOADER.textureLoader("TGA", './Art/tile1.tga', pGeometry);
  var object2 = LOADER.textureLoader("SVG", '/Art/bitmap.svg');
  const testMaterial = new THREE.MeshBasicMaterial({
    color: 0xffffff,
    map: object1,
    side: THREE.DoubleSide
  });
  //const useMesh = new THREE.Mesh(pGeometry, testMaterial);
  // testing scaling the tile
  try {
    const worldScale = new THREE.Vector3();
    object2.getWorldScale(worldScale);
    console.log(`World ScaleX: ${worldScale.x} World ScaleY: ${worldScale.y} World ScaleZ: ${worldScale.z}`);
    //object2.scale.set(2,2,0);
  } catch (error) { console.log(error) }
  scene.add(object2);
}
Keep in mind that the SVG is object2 in this case. Some of the ideas I have had to tackle this problem are looking into what a world scale is, Matrix4 transformations, and the scale methods of either the Object3D parent properties or the BufferGeometry parent properties of this particular SVG group object. I am also fully aware that three.js is designed for 3D graphics; however, I would like to master 2D graphics programming in this library before I get into the 3D aspect of things. I also suspect that the scale of the SVG group is distinctly different from the scale of the scene and its X, Y, and Z axes.
If this question has already been answered a link to the corresponding answer would be of great help to me.
Thank you for the time you take to answer this question.
I messed with the dimensions of the SVG file itself in the editor I used to paint it, and I got it to scale. Not exactly a solution in the code; I guess the code is just closely tied to the data that the SVG file provides and can't be altered too much.
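If you would rather handle it in code than in the SVG file, one option is to measure the loaded group and normalize its scale. A minimal sketch, assuming the group returned by the loader above; the target size of 2 world units is a made-up value, substitute the size of one of your grid tiles:
// Measure the SVG group's bounding box and scale it to a known world size.
const box = new THREE.Box3().setFromObject(group);
const size = new THREE.Vector3();
box.getSize(size);
const targetSize = 2; // hypothetical: one grid tile spans 2 world units
const s = targetSize / Math.max(size.x, size.y);
group.scale.set(s, s, 1);
// SVG Y coordinates run downward, so a flip is often needed too:
// group.scale.y *= -1;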

Clone object in Forge Viewer

I'm trying to clone an object in Forge Viewer.
I have tried using THREE.js and creating a clone, but it has a different structure from the base object.
// loadExtension returns a promise, so it needs to be awaited
sceneBuilder = await viewer.loadExtension("Autodesk.Viewing.SceneBuilder");
let modelBuilder = await sceneBuilder.addNewModel({
  conserveMemory: false,
  modelNameOverride: `Custom model`,
});
let renderProxy = viewer.impl.getRenderProxy(viewer.model, fragId);
let geom = new THREE.Geometry();
let VE = Autodesk.Viewing.Private.VertexEnumerator;
VE.enumMeshVertices(renderProxy.geometry, (v: any, i: any) => {
  geom.vertices.push(new THREE.Vector3(v.x, v.y, v.z));
});
VE.enumMeshIndices(renderProxy.geometry, (a, b, c) => {
  geom.faces.push(new THREE.Face3(a, b, c));
});
geom.computeFaceNormals();
let mesh = new THREE.Mesh(
  new THREE.BufferGeometry().fromGeometry(geom),
  renderProxy.material
);
(mesh as any).dbId = dbId;
modelBuilder.addMesh(mesh);
I found that renderProxy is also a THREE.Mesh, but when I tried let clone = renderProxy.clone(); modelBuilder.addMesh(clone);, it doesn't work. Is there any way to clone an object in the Viewer?
Another thing: when I add a mesh via modelBuilder, I see that the created object has been added to the browser tree, but I still can't use Viewer functions with it (such as viewer.select(dbId); viewer.fitToView();).
Cloning the renderProxy directly probably won't work as Forge Viewer basically returns the same THREE.Mesh instance whenever you request the proxy, just with different internals (for performance reasons).
The code snippet you provided (extracting vertices and faces from the proxy) is a safer choice. Is that snippet working as expected, or is it also causing issues?
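On the second issue, a sketch of what addressing the custom model might look like, assuming modelBuilder.model is how the SceneBuilder extension exposes it: meshes added through a modelBuilder live in their own model, so the dbId has to be resolved against that model rather than the default one.
// Pass the SceneBuilder model explicitly; without the model argument the
// viewer looks the dbId up in the originally loaded model.
viewer.select(dbId, modelBuilder.model);
viewer.fitToView([dbId], modelBuilder.model);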

How to change geometry attributes dynamically using dat GUI in three.js?

I made a sphere geometry with this function
let createBall = () => {
  let geoBall = new THREE.SphereGeometry(5, 32, 16);
  let mat = new THREE.MeshPhongMaterial({ color: "red", transparent: true });
  ball = new THREE.Mesh(geoBall, mat);
  ball.position.set(0, 5, 0);
  ball.geometry.dynamic = true;
  ball.geometry.verticesNeedUpdate = true;
  ball.geometry.__dirtyVertices = true;
  scene.add(ball);
};
and I call the function in the window.onload function. I also use dat.GUI to edit the geometry attribute, in this case the widthSegments of ball.geometry, like this:
ballFolder
  .add(ball.geometry.parameters, "widthSegments", 1, 64, 1)
  .onChange(function () {
    console.log(geoBall);
    ball.geometry.dispose();
    ball.geometry = geoBall.clone();
  });
When I log geoBall in the console, it turns out that the attribute has changed, but the object itself isn't changed. Does anyone know how to solve this problem?
The values in parameters are only used when the geometry is created. Think of the geometry generators (BoxGeometry, SphereGeometry etc.) as factory methods. Changing the parameters has no effect once the object is created.
So I suggest you create a new geometry in your onChange() callback and call dispose() on the previous one (which you already do).
BTW: In recent three.js versions, geometry objects do not have dynamic, verticesNeedUpdate or __dirtyVertices properties.
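A minimal sketch of that suggestion, rebuilding the sphere from the slider value inside onChange() (the radius of 5 and the heightSegments are taken from the original createBall() call):
ballFolder
  .add(ball.geometry.parameters, "widthSegments", 1, 64, 1)
  .onChange(function (value) {
    const old = ball.geometry;
    // Build a fresh geometry with the new segment count, then free the old one.
    ball.geometry = new THREE.SphereGeometry(5, value, old.parameters.heightSegments);
    old.dispose();
  });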

Vertices of THREE.BufferGeometry

Since r125, THREE.Geometry was deprecated. We are now updating our code base and we are running into errors that we don't know how to fix.
We create a sphere and use a raycaster on the sphere to get the intersection point.
worldSphere = new THREE.SphereGeometry(
  worldSize,
  worldXSegments,
  worldYSegments
);
...
const intersect = raycaster.intersectObjects([worldGlobe])[0];
...
if (intersect) {
  let a = worldSphere.vertices[intersect.face.a];
  let b = worldSphere.vertices[intersect.face.b];
  let c = worldSphere.vertices[intersect.face.c];
}
Now, normally variable a would contain three values, one for each axis, namely a.x, a.y and a.z; the same goes for the other variables. However, this code does not work anymore.
We already know that worldSphere is of type THREE.BufferGeometry and that the vertices are stored in a position attribute, but we cannot seem to get it working.
What is the best way to fix our issue?
It should be:
const positionAttribute = worldGlobe.geometry.getAttribute( 'position' );
const a = new THREE.Vector3();
const b = new THREE.Vector3();
const c = new THREE.Vector3();
// in your raycasting routine
a.fromBufferAttribute( positionAttribute, intersect.face.a );
b.fromBufferAttribute( positionAttribute, intersect.face.b );
c.fromBufferAttribute( positionAttribute, intersect.face.c );
BTW: If you only raycast against a single object, use intersectObject() and not intersectObjects().
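Also note that the position attribute stores vertices in the geometry's local space. In case you need the triangle corners in world coordinates, a small follow-up sketch (assuming worldGlobe is the mesh that was hit):
// Convert the local-space vertices to world space in place.
worldGlobe.localToWorld( a );
worldGlobe.localToWorld( b );
worldGlobe.localToWorld( c );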

How to Set-up Raycasting in React-three-fiber

I am trying to set up a scene in react-three-fiber that uses a raycaster to detect if an object intersects it.
My Scene: scene
I have been looking at examples like this example and this other example of raycasters that use simple three.js objects, but they don't utilize separate-component JSX ".gltf" meshes, or they're not in JSX. So I'm not sure how to add my group of meshes to a raycaster.intersectObject();.
It seems that all you do is set up your camera, scene, and raycaster separately in different variables, but my camera and scene are part of the Canvas component.
Question: How do I add raycasting support to my scene? This would obscure the text that is on the opposite side of the sphere.
Thanks!
This is the approach I used. Note that I used useState instead of useRef, since I had problems with the latter.
const Template = function basicRayCaster(...args) {
  const [ray, setRay] = useState(null);
  const [box, setBox] = useState(null);
  useEffect(() => {
    if (!ray || !box) return;
    console.log(ray.ray.direction);
    const intersect = ray.intersectObject(box);
    console.log(intersect);
  }, [box, ray]);
  return (
    <>
      <Box ref={setBox}></Box>
      <raycaster
        ref={setRay}
        ray={[new Vector3(-3, 0, 0), new Vector3(1, 0, 0)]}
      ></raycaster>
    </>
  );
};
Try CycleRaycast from the Drei lib. It is react-three-fiber-friendly:
"This component allows you to cycle through all objects underneath the cursor with optional visual feedback. This can be useful for non-trivial selection, CAD data, housing, everything that has layers. It does this by changing the raycasters filter function and then refreshing the raycaster."
You can retrieve the raycaster from useThree.
Example to modify the raycaster threshold for points onMount:
const { raycaster } = useThree();
useEffect(() => {
if (raycaster.params.Points) {
raycaster.params.Points.threshold = 0.1;
}
}, []);
Once done, you are able to modify its properties or to use it.
