I need to export a scene as a single STL file.
While it's easy to export each single <asset>/<mesh>/<model>, exporting the whole scene with its transformations is another story: it requires applying the world matrix transform to every vertex of each asset's data on the fly before export.
Does XML3D have some mechanism that would help me with that?
Where should I start?
Actually, XML3D is a presentation format and was never designed for extracting anything other than interactive renderings. However, since it is JavaScript, you can access everything somehow, and obviously you can also get the data you need to apply all transformations and create a single huge STL mesh from the scene.
The easiest way I can imagine is using the internal scene:
var scene = document.querySelector("xml3d")._configured.adapters["webgl_1"].getScene();
// Iterate render objects
scene.ready.forEach(function(renderObject) {
// Get world matrix
var worldMatrix = new Float32Array(16);
renderObject.getWorldMatrix(worldMatrix);
// Get local position data
var dataRequest = new Xflow.ComputeRequest(renderObject.drawable.dataNode, ["position"]);
var positions = dataRequest.getResult().getOutputData("position").getValue();
console.log(worldMatrix, positions.length);
// Apply the world matrix to all positions (see the sketch below)
...
});
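To fill in that last step, here is a minimal sketch, not part of any XML3D API, of applying the world matrix to the flat position array. It assumes the matrix is column-major (the usual WebGL convention):
function transformPositions(worldMatrix, positions) {
    // positions is a flat Float32Array of x,y,z triples
    var out = new Float32Array(positions.length);
    for (var i = 0; i < positions.length; i += 3) {
        var x = positions[i], y = positions[i + 1], z = positions[i + 2];
        out[i]     = worldMatrix[0] * x + worldMatrix[4] * y + worldMatrix[8]  * z + worldMatrix[12];
        out[i + 1] = worldMatrix[1] * x + worldMatrix[5] * y + worldMatrix[9]  * z + worldMatrix[13];
        out[i + 2] = worldMatrix[2] * x + worldMatrix[6] * y + worldMatrix[10] * z + worldMatrix[14];
    }
    return out;
}
The transformed triples can then be concatenated across all render objects and written out as one STL file, just like any static mesh.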
I am not able to apply LOD to a 3D object loaded from JSON data.
Here is my implementation:
loader.load('models/robot-threejs/robot.json', function (object) {
    var lod = new THREE.LOD(object);
    for (var i = 1; i <= 3; i++) {
        console.log("this" + i);
        lod.addLevel(object, i);
    }
    lod.updateMatrix();
    lod.matrixAutoUpdate = false;
    // lod.updateMatrix();
    // lod.matrixAutoUpdate = false;
    scene.add(lod);
    // scene.add(object);
    // object.position.set(30, 30, 30);
});
You're implementing THREE.LOD wrong.
The constructor does not take any parameters, so when you do new THREE.LOD(object), the argument is simply ignored. You just have to use new THREE.LOD().
You're adding the same mesh at three LOD levels, so you're not going to see any difference. You need to create separate meshes with different geometries if you want to see any change in detail. Keep in mind that you have to generate these geometries yourself; three.js doesn't automatically change the geometry for you. But you could use the SimplifyModifier for this.
Not sure why you're playing with matrix updates. There's no reason for this here.
You also need to call lod.update(camera) on your render loop if you want to see the change in detail.
I strongly recommend you read the documentation for LOD and read through the code in this example to better understand how it works.
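Putting those points together, a corrected sketch might look like this. It assumes the loaded object is a single THREE.Mesh with a BufferGeometry and that THREE.SimplifyModifier from the three.js examples is included; the vertex-removal counts and switch distances are made-up values you would tune for your scene:
loader.load('models/robot-threejs/robot.json', function (object) {
    var lod = new THREE.LOD();  // constructor takes no arguments
    lod.addLevel(object, 0);    // full-detail mesh up close
    var modifier = new THREE.SimplifyModifier();
    for (var i = 1; i <= 2; i++) {
        var geometry = object.geometry.clone();
        // Remove a growing share of the vertices for each coarser level.
        var removeCount = Math.floor(geometry.attributes.position.count * 0.3 * i);
        var simplified = new THREE.Mesh(modifier.modify(geometry, removeCount), object.material);
        lod.addLevel(simplified, i * 50); // hypothetical switch distances
    }
    scene.add(lod);
});
And as noted above, call lod.update(camera) in your render loop so the levels actually switch.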
I'm trying to clone and then scale a mesh, but the scaling does not seem to take effect immediately on the cloned object, which matters because I'm passing it to CSG ThreeBSP. I think I should call a function after the scaling to force the matrix (or other internal variables) to recalculate immediately, instead of waiting for the next full update loop on the render side.
My code looks something like this:
var someMesh2 = someMesh1.clone();
someMesh2.scale.set(2,2,2);
someProgrammingOperation(someMesh2);
//It turns out that internally, someMesh2 still has the same properties (matrix?) as someMesh1 :(
What am I missing? Suggestions are also welcome :)
object.matrix is updated for you by the renderer whenever you call renderer.render().
If you need to update the object matrix manually, call
object.updateMatrix();
and it will update the matrix from the current values of object.position, object.quaternion, and object.scale.
(Note that object.rotation and object.quaternion remain synchronized. When you update one, the other updates automatically.)
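Applied to the snippet from the question, a minimal fix looks like this (same variable names as above):
var someMesh2 = someMesh1.clone();
someMesh2.scale.set(2, 2, 2);
someMesh2.updateMatrix(); // bake position/quaternion/scale into the matrix right now
someProgrammingOperation(someMesh2);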
three.js r.84
In the end, my problem was that the CSG ThreeBSP object needed to work on the Geometry of the object, not on the Mesh itself. I applied the scaling to the Geometry and it worked as expected.
There is a caveat, though: mesh and geometry instances are shared, so some cloning is needed to keep the original objects as they were, as in the following example:
var clonedMesh = original.mesh.clone();
// Clone the geometry too; otherwise scaling would also modify the original's geometry.
var clonedGeometry = clonedMesh.geometry.clone();
clonedMesh.geometry = clonedGeometry;
clonedMesh.geometry.scale(2, 2, 2);
var someBsp = new ThreeBSP(clonedMesh);
var newMesh = someBsp.toMesh();
someScene.add(newMesh);
I'm new to three.js, so I ask you for advice.
I use a CubeTexture as the envMap for my materials to make my objects look like steel.
loader = new THREE.CubeTextureLoader();
this.cubeTexture = loader.load([
    posXUrl, negXUrl,
    posYUrl, negYUrl,
    posZUrl, negZUrl
]);
...
mesh.material.envMap = this.cubeTexture;
And everything is OK with it, but I want to make one enhancement to my scene.
The thing is that the floor (negYUrl) of the CubeTexture is static, but in my scene the floor is rotated. Unfortunately, I didn't find any API that allows rotating the bottom face of a CubeTexture instance.
Could you help me and point me to techniques that allow doing such things?
I have a Three.js scene rendered and I would like to export how it looks after the animations have rendered. For example, after the animation has gone ~100 frames, the user hits export and the scene should be exported to STL just as it is at that moment.
From what I've tried (using STLExporter.js, that is), it seems to export the model using the initial positions only.
If there's already a way to do this, or a straightforward work around, I would appreciate a nudge in that direction.
Update: After a bit more digging into the internals, I've figured out (at least superficially) why STLExporter did not work. STLExporter finds all objects and asks them for the vertices and faces of their Geometry objects. My model has a bunch of bones that are skinned. During the animation step, the bones get updated, but these updates do not propagate to the original Geometry object. I know the transformed vertices are being calculated and exist somewhere (they get displayed on the canvas).
The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?
The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?
The answer to this, unfortunately, is nowhere. These are all computed on the GPU through calls to WebGL functions by passing in several large arrays.
To explain how to calculate this, let's first review how animation works, using this knight example for reference.
The SkinnedMesh object contains, among other things, a skeleton (made of many Bones) and a bunch of vertices. They start out arranged in what's known as a bind pose. Each vertex is bound to 0-4 bones, and if those bones move, the vertices move with them, creating animation.
If you were to take our knight example, pause the animation mid-swing, and run the standard STL exporter, the generated STL file would show exactly the bind pose, not the animated one. Why? Because the exporter simply looks at mesh.geometry.vertices, which are never changed from the original bind pose during animation. Only the bones change, and the GPU does some math to move the vertices corresponding to each bone.
That math to move each vertex is pretty straightforward - transform the bind-pose vertex position into bone space, then from bone space back to global space, weighting each bone's contribution, before exporting.
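In equation form (this is standard linear blend skinning, using the same names as the code below): for a bind-pose vertex $v$ influenced by bones $b_0 \dots b_3$ with skin weights $w_k$, bone inverses $I_{b_k}$, and bone world matrices $M_{b_k}$,

$$v' = \sum_{k=0}^{3} w_k \, M_{b_k} \, I_{b_k} \, v$$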
Adapting the code from here, we add this to the original exporter:
vector.copy( vertices[ vertexIndex ] );
var boneIndices = []; // which bones we need
boneIndices[0] = mesh.geometry.skinIndices[vertexIndex].x;
boneIndices[1] = mesh.geometry.skinIndices[vertexIndex].y;
boneIndices[2] = mesh.geometry.skinIndices[vertexIndex].z;
boneIndices[3] = mesh.geometry.skinIndices[vertexIndex].w;
var weights = []; // some bones impact the vertex more than others
weights[0] = mesh.geometry.skinWeights[vertexIndex].x;
weights[1] = mesh.geometry.skinWeights[vertexIndex].y;
weights[2] = mesh.geometry.skinWeights[vertexIndex].z;
weights[3] = mesh.geometry.skinWeights[vertexIndex].w;
var inverses = []; // boneInverses are the transforms from bind pose into each bone's local space
inverses[0] = mesh.skeleton.boneInverses[ boneIndices[0] ];
inverses[1] = mesh.skeleton.boneInverses[ boneIndices[1] ];
inverses[2] = mesh.skeleton.boneInverses[ boneIndices[2] ];
inverses[3] = mesh.skeleton.boneInverses[ boneIndices[3] ];
var skinMatrices = []; // each bone's matrixWorld transforms from bone space to global space
skinMatrices[0] = mesh.skeleton.bones[ boneIndices[0] ].matrixWorld;
skinMatrices[1] = mesh.skeleton.bones[ boneIndices[1] ].matrixWorld;
skinMatrices[2] = mesh.skeleton.bones[ boneIndices[2] ].matrixWorld;
skinMatrices[3] = mesh.skeleton.bones[ boneIndices[3] ].matrixWorld;
var finalVector = new THREE.Vector4();
for (var k = 0; k < 4; k++) {
    var tempVector = new THREE.Vector4(vector.x, vector.y, vector.z);
    // weight the transformation
    tempVector.multiplyScalar(weights[k]);
    // the inverse takes the vector into local bone space
    tempVector.applyMatrix4(inverses[k])
        // which is then transformed to the appropriate world space
        .applyMatrix4(skinMatrices[k]);
    finalVector.add(tempVector);
}
output += '\t\t\tvertex ' + finalVector.x + ' ' + finalVector.y + ' ' + finalVector.z + '\n';
This yields STL files that match the paused animation pose.
The full code is available at https://gist.github.com/kjlubick/fb6ba9c51df63ba0951f
After a week of pulling my hair out, I managed to modify the code to include morph target data in the final STL file. You can find the modified version of Kevin's code at https://gist.github.com/jcarletto27/e271bbb7639c4bed2427
As JS is not my favored language, it's not pretty, but it manages to work without much fuss. Hopefully someone besides me gets some use out of this!
I have a question that's been bothering me for some time.
I am using the three.js webgl library to render a large scene with many textures and meshes.
This question is not necessarily bound to webgl, but more javascript arrays and memory management.
I am basically doing this:
var modelArray = [];

var model = function (geometry, db_data) {
    var tex = THREE.ImageUtils.loadTexture('texture.jpg');
    var mat = new THREE.MeshPhongMaterial({ map: tex });
    this.mesh = new THREE.Mesh(geometry, mat);
    this.db = db_data;
    scene.add(this.mesh);
};

function loadModels(model_array) {
    for (var i = 0; i < model_array.length; i++) {
        modelArray.push(new model(model_array[i]['geometry'], model_array[i]['db_info']));
    }
}

loadModels(modelData); // modelData: the array of {geometry, db_info} records, loaded elsewhere
Am I being inefficient here? Am I essentially doubling the amount of memory used, since I have the mesh both added to the scene and stored in an array? Or does the model object in the array (specifically model.mesh) simply point to a single memory block?
Should I just keep an array of mesh IDs and reference the scene objects through those, or is it OK to add the mesh both to the scene and to an array?
Thanks in advance and I hope I was clear enough.
The main thing that jumps out at me is this:
var tex = THREE.ImageUtils.loadTexture('texture.jpg');
var mat = new THREE.MeshPhongMaterial({map:tex})
If you are loading the same texture every time you create a new model, that can create a lot of overhead (and it can also be pretty slow). I would load the texture(s) and corresponding material(s) you need once, outside of your loop, as in the sketch below.
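For example, a minimal restructuring of the code from the question (sharedTex and sharedMat are just illustrative names):
var sharedTex = THREE.ImageUtils.loadTexture('texture.jpg');
var sharedMat = new THREE.MeshPhongMaterial({ map: sharedTex });

var model = function (geometry, db_data) {
    this.mesh = new THREE.Mesh(geometry, sharedMat); // every mesh reuses one texture/material
    this.db = db_data;
    scene.add(this.mesh);
};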
Your modelArray is a list of plain model objects, each of which has a pointer to the corresponding mesh object (and db object). The scene has a pointer to the same mesh object so you are not exploding your memory use by cloning meshes.
It's possible that your memory use is just because your mesh geometries take up a lot of memory. Try loading your models one by one while watching memory usage; perhaps you have one that is unexpectedly detailed.