I have a model that I've animated with Mixamo and then exported as an FBX into Maya. I've then used the Three.js exporter to output the animation 'baked' as morph targets.
Here's how the model looks when loaded into Maya:
However, when I read the data in, it includes not just the animation, but also the base model in a static pose, and each morphTarget array has the vertices repeated in it. This is what it ends up looking like:
Beyond manually writing some code to de-duplicate the vertices, is there any way to just get the animation out and not the model as well? I'm very new to Maya, so I'm guessing there's an option that I need to untick, or some selection step that I'm missing.
Thanks in advance
Should someone else have this problem, there's a simple answer (at least in this instance) - truncate the vertex and face arrays to half their length. After checking the vertices for duplicates, it turned out the duplicates were all in the second half of these arrays and could simply be dropped.
geometry.vertices.length = geometry.vertices.length / 2;
geometry.faces.length = geometry.faces.length / 2;
geometry.morphTargets.forEach(function (target) {
    target.vertices.length = target.vertices.length / 2;
});
There's almost certainly a better way of doing it however.
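For example, one slightly safer variant (a sketch only, still assuming the legacy THREE.Geometry API and that the duplicates all sit in the second half) would be to verify that assumption before truncating:

var half = geometry.vertices.length / 2;

// check that every vertex in the second half duplicates its counterpart in the first
var duplicated = geometry.vertices.slice(half).every(function (v, i) {
    return v.equals(geometry.vertices[i]);
});

if (duplicated) {
    geometry.vertices.length = half;
    geometry.faces.length = geometry.faces.length / 2;
    geometry.morphTargets.forEach(function (target) {
        target.vertices.length = target.vertices.length / 2;
    });
}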
I'm exporting a simple scene from Blender to three.js. Aside from the texture not showing up (which I'm also fighting with), I have a weird problem with the positions of objects. Here's how it looks in Blender:
And this is how it renders in three.js:
As you can see, elements are stacked on top of each other (and the skybox texture is missing, even though it's referenced properly in the JSON, embedded as a base64 image). I'm using the Three.js exporter v1.5.0, three.js r84 and Blender 2.77.
This is my configuration:
Here's the code loading the scene:
var loader = new THREE.ObjectLoader();
loader.load(
    '../dist/landscape.json',
    function (obj) {
        scene.add(obj);
    }
);
Now, I do realise that this way I'm adding a scene to a scene, but for some reason, if I try to extract the children from it like this:
loader.load(
    '../dist/landscape.json',
    function (obj) {
        obj.children.forEach(function (elem) {
            scene.add(elem);
        });
    }
);
I only get half of the objects. No idea why. Besides, the objects are still stacked on top of each other. I checked the positions in the result against the original values in Blender, and aside from the standard y/z swap, the x values are reversed (though that's not the cause of the problem), and the rotation is removed from the bridge, which causes it to render upside down. I'm completely lost.
Also, here are the .blend and .json files:
http://www.filehosting.org/file/details/653174/landscape.blend
http://www.filehosting.org/file/details/653175/landscape.json
EDIT:
Partial solution: the scale was set to 10 in the exporter, which caused the objects to look as if they were misplaced. The thing is, they are still rotated and there's still some mismatch compared to the original. Picture here:
I've just come across this issue for myself once again. Having the scale setting at 1 didn't fix it. The issue was that I hadn't applied object transformations in Blender.
Select all problematic objects in your Blender file (or just select everything with A)
Press CTRL+A
Select Rotation & Scale
Repeat for Location if necessary
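As a side note on the other symptom above - only getting half of the objects when adding the children one by one - a likely cause is that scene.add() re-parents each object, which removes it from obj.children while forEach is still iterating, so every other element gets skipped. Iterating over a copy of the array avoids that (a sketch, and not necessarily the only issue in this scene):

loader.load(
    '../dist/landscape.json',
    function (obj) {
        // copy the array first: scene.add() removes each object from
        // obj.children as it re-parents it
        obj.children.slice().forEach(function (elem) {
            scene.add(elem);
        });
    }
);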
I need to create text with an inset shadow on my object in three.js, which looks like this:
Something like a ring with engraved text.
I think the easiest way to do that would be to use a normal-map for the engraving, at least if the text doesn't have to be dynamic (here's how you can export a normal-map from Blender). And even if it needs to be dynamic, it might be easier to create a normal-map dynamically in a canvas than to actually create a geometry for the engraving.
Another option would be to actually create a geometry that contains the engraving. For that you might want to look at the ThreeCSG library, which lets you use boolean operations on geometries: you create the 3D text mesh, warp and align it to the curvature of the ring and finally subtract it from the ring mesh. This should give you the ring with the engraving cut out.
In fact, I was curious how this would actually work out and implemented something very similar here: https://usefulthink.github.io/three-text-warp-csg/ (source here).
In essence, this is using ThreeCSG to subtract a text-geometry from a cylinder-geometry, like so:
const textBSP = new ThreeBSP(textGeometry);
const cylinderBSP = new ThreeBSP(cylinderGeometry);
const resultGeometry = cylinderBSP.subtract(textBSP).toGeometry();
scene.add(new THREE.Mesh(resultGeometry, new THREE.MeshStandardMaterial()));
Turns out that the tessellation created by ThreeCSG is really slow (I had to move it into a worker so the page doesn't freeze for almost 10 seconds). It doesn't look too good right now, as there is still a problem with the computed normals that I haven't figured out yet.
The third option would be to use a combination of displacement and normal-maps.
This would be a lot easier and faster in processing, but you would need to add a whole lot of vertices in order to have vertices available where you want a displacement to happen. Here is a small piece of code by mrdoob that can help you with creating the normal-map based on the displacement: http://mrdoob.com/lab/javascript/height2normal/
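To illustrate that third option, here is a minimal sketch of a material that combines both maps, assuming they have already been baked or painted (the file names are placeholders):

// engraving via a displacement-map plus a normal-map;
// 'engraving-normal.png' and 'engraving-height.png' are hypothetical files
var loader = new THREE.TextureLoader();
var material = new THREE.MeshStandardMaterial({
    metalness: 0.9,
    roughness: 0.4,
    normalMap: loader.load('engraving-normal.png'),
    displacementMap: loader.load('engraving-height.png'),
    displacementScale: -0.02 // push the engraved areas inwards
});

// the ring needs plenty of vertices for the displacement to have any effect
var ring = new THREE.Mesh(new THREE.TorusGeometry(1, 0.25, 128, 512), material);
scene.add(ring);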
I am developing a three.js WebGL application where I need to render multiple objects with the same geometry, and I've stumbled upon a bottleneck. It seems that my instancing of objects has some issue that I can't really pin down; maybe someone can help me with that. For context, I have a PointCloud with normals that tells me where to position each instanced object, and also its orientation through a quaternion derived from the normal. I then loop through this array and place each instanced object accordingly. After looking at various posts about instancing, merging, etc., I can't figure out what I'm doing wrong.
I attach the code snippet of the method in question:
bitbucket.org/snippets/electricganesha/Mdddz
After reviewing it multiple times, I'm really wondering what is wrong here, and why this particular method slows down my application from 60fps to 20fps.
You might be overcompensating with the optimization.
In your loop where you merge all these geometries, try to add something like this:
var maxVerts = 1 << 16;

// if merging a new object would push the vertex count over 2^16, hand the
// current merged geometry off to the scene and start a new one for the next batch
if (singleGeometry.vertices.length + newObject.geometry.vertices.length > maxVerts) {
    scene.add(new THREE.Mesh(singleGeometry, material)); // material: whatever the instances share
    singleGeometry = new THREE.Geometry();
}

singleGeometry.merge(newObject.geometry, newObject.matrix);
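For context, a rough sketch of how that batching could sit inside the placement loop - names like points, normals, templateGeometry and material are placeholders for however the point-cloud data and instanced geometry are actually stored - and note that the last, partially filled batch still has to be added after the loop:

var maxVerts = 1 << 16;
var up = new THREE.Vector3(0, 1, 0);
var unitScale = new THREE.Vector3(1, 1, 1);
var batch = new THREE.Geometry();

points.forEach(function (position, i) {
    // orient each instance along its point's normal
    var quaternion = new THREE.Quaternion().setFromUnitVectors(up, normals[i]);
    var matrix = new THREE.Matrix4().compose(position, quaternion, unitScale);

    if (batch.vertices.length + templateGeometry.vertices.length > maxVerts) {
        scene.add(new THREE.Mesh(batch, material));
        batch = new THREE.Geometry();
    }
    batch.merge(templateGeometry, matrix);
});

// don't forget the last, partially filled batch
scene.add(new THREE.Mesh(batch, material));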
So - I have a situation where I'm filling the screen with calculated polygons. The polygons are constantly changing shape - i.e. the number of vertices is changing each frame. If I create a new geometry each frame, my machine effectively halts and I'm apparently consuming heaps of memory - it seems I have to change the buffers within a single geometry instead.
So I've tried using BufferGeometry, and basically doing:
var position = asBuffer.attributes.position;
position.array[0] = changedValue;
position.needsUpdate = true;
in my render loop - but it doesn't seem to work at all. If I use just a normal Geometry, it will dynamically change if I set needsUpdate - but only if I change the values in the original vectors. If I change the arrays themselves, the changes don't seem to show up.
I've got an example of all of this here: http://jsbin.com/fanebah/edit?js,console,output - if you swap the lines that create the "cube" - it goes from working to not working.
I'd prefer to use BufferGeometry - it's faster and closer to the way I'm producing the data. What am I doing wrong? Or does three.js just not support dynamic BufferGeometry?
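For reference, the usual pattern for a dynamic BufferGeometry is to allocate the largest buffer you will ever need up front, write into the same typed array each frame, limit the draw range to the vertices that are actually in use, and flag the attribute for re-upload. A minimal sketch (not a diagnosis of the jsbin above; addAttribute is called setAttribute in newer three.js releases):

var MAX_POINTS = 10000;
var positions = new Float32Array(MAX_POINTS * 3);
var geometry = new THREE.BufferGeometry();
geometry.addAttribute('position', new THREE.BufferAttribute(positions, 3));

// called whenever the polygon changes shape;
// newPoints is a flat [x, y, z, x, y, z, ...] array
function updatePositions(newPoints) {
    positions.set(newPoints);
    geometry.setDrawRange(0, newPoints.length / 3);
    geometry.attributes.position.needsUpdate = true;
}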
I'm trying to make a little scene for viewing 3D models.
I modified the GLGE Collada example to add a .dae model from code.
http://goleztrol.nl/SO/GLGE/01/
What I've got
So far it works. The camera is rotated using an animation.
Using the buttons 'Add' and 'Remove', the model is added to and removed from the scene, using the following code. (Don't mind 'duck'; it was a duck in the original example.)
var duck = null;

function addDuck()
{
    if (duck) return;
    duck = new GLGE.Collada();
    doc.getElement("mainscene").addCollada(duck);
    duck.setId("duck");
    duck.setDocument("amyrose.dae");
    duck.setLocY(-15);
    duck.setRotX(1);
    duck.setScale(2);
}

function removeDuck()
{
    if (!duck) return;
    doc.getElement("mainscene").removeChild(duck);
    duck = null;
}
Problem
Now the model is lying down, while it should stand up. The various methods of the element seem to work. The location is set, and the scale is set, but the call to setRotX seems to be ignored. I tried various other methods from the API, but setRotY, setRot, setQuatX and setDRotX all seem to fail. I don't get any errors (well, not about this method). I tried values of 1.57 (which should be about 90 degrees), but other values as well, ranging from 1 to 180.
I can't find out what I'm doing wrong. Of course I could rotate the model itself in Blender, but I'd like to do it using the GLGE API.
Update
When I load the demo model, seymourplane_triangulate.dae, the rotation works. Apparently my model differs in that it cannot be rotated. I just don't understand why. I figured it may be because the model is built of various separate meshes, but then I don't understand why scaling and moving do work.
Does anyone know what's wrong with this model, and what I could do to fix it (maybe using Blender)?
Setting an initial rotation in the XML file that contains the scene does work. Setting rotation on another element (like the whole scene) works as well.
You need to rotate it after it has been loaded.
You can do this in the callback to setDocument:
duck.setDocument("amyrose.dae", null, function() {
    duck.setLocY(-15);
    duck.setScale(2);
    duck.setRotX(0);
    duck.setRotY(0);
    duck.setRotZ(3);
});