Exporting Three.js scene to STL keeping animations intact - javascript

I have a Three.js scene rendered and I would like to export how it looks after the animations have rendered. For example, after the animation has gone ~100 frames, the user hits export and the scene should be exported to STL just as it is at that moment.
From what I've tried (using STLExporter.js, that is), it seems to export the model using the initial positions only.
If there's already a way to do this, or a straightforward workaround, I would appreciate a nudge in that direction.
Update: After a bit more digging into the internals, I've figured out (at least superficially) why STLExporter did not work. STLExporter finds all objects and asks them for the vertices and faces of their Geometry objects. My model has a bunch of bones that are skinned. During the animation step, the bones get updated, but these updates do not propagate to the original Geometry object. I know these transformed vertices are being calculated and exist somewhere (they get displayed on the canvas).
The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?

The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?
The answer to this, unfortunately, is nowhere. These are all computed on the GPU through calls to WebGL functions by passing in several large arrays.
To explain how to calculate this, let's first review how animation works, using this knight example for reference.
The SkinnedMesh object contains, among other things, a skeleton (made of many Bones) and a bunch of vertices. They start out arranged in what's known as a bind pose. Each vertex is bound to 0-4 bones, and if those bones move, the vertices move with them, creating animation.
If you were to take our knight example, pause the animation mid-swing, and try the standard STL exporter, the generated STL file would show the bind pose, not the animated one. Why? Because the exporter simply looks at mesh.geometry.vertices, which are never changed from the original bind pose during animation. Only the bones change, and the GPU does the math to move the vertices bound to each bone.
That math to move each vertex is pretty straightforward: transform the bind-pose vertex position into bone space, then from bone space into global space, before exporting.
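In equation form (my notation, nothing official from three.js): for a bind-pose vertex v with skin weights w_k, bone inverses B_k (skeleton.boneInverses[k]) and bone world matrices M_k (bones[k].matrixWorld),

    v_world = sum over k of ( w_k * M_k * B_k * v )

using homogeneous coordinates, so that weighting and summing the transformed vertices is valid. That is exactly what the loop below computes.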
Adapting the code from here, we add this to the original exporter:
vector.copy( vertices[ vertexIndex ] );

var boneIndices = []; // which bones we need
boneIndices[0] = mesh.geometry.skinIndices[vertexIndex].x;
boneIndices[1] = mesh.geometry.skinIndices[vertexIndex].y;
boneIndices[2] = mesh.geometry.skinIndices[vertexIndex].z;
boneIndices[3] = mesh.geometry.skinIndices[vertexIndex].w;

var weights = []; // some bones affect the vertex more than others
weights[0] = mesh.geometry.skinWeights[vertexIndex].x;
weights[1] = mesh.geometry.skinWeights[vertexIndex].y;
weights[2] = mesh.geometry.skinWeights[vertexIndex].z;
weights[3] = mesh.geometry.skinWeights[vertexIndex].w;

var inverses = []; // boneInverses transform from the bind pose into "bone space"
inverses[0] = mesh.skeleton.boneInverses[ boneIndices[0] ];
inverses[1] = mesh.skeleton.boneInverses[ boneIndices[1] ];
inverses[2] = mesh.skeleton.boneInverses[ boneIndices[2] ];
inverses[3] = mesh.skeleton.boneInverses[ boneIndices[3] ];

var skinMatrices = []; // each bone's matrixWorld transforms "bone space" into global space
skinMatrices[0] = mesh.skeleton.bones[ boneIndices[0] ].matrixWorld;
skinMatrices[1] = mesh.skeleton.bones[ boneIndices[1] ].matrixWorld;
skinMatrices[2] = mesh.skeleton.bones[ boneIndices[2] ].matrixWorld;
skinMatrices[3] = mesh.skeleton.bones[ boneIndices[3] ].matrixWorld;

var finalVector = new THREE.Vector4();
for ( var k = 0; k < 4; k++ ) {
    var tempVector = new THREE.Vector4( vector.x, vector.y, vector.z );
    // weight the transformation
    tempVector.multiplyScalar( weights[k] );
    // the inverse takes the vector into local bone space...
    tempVector.applyMatrix4( inverses[k] )
    // ...which is then transformed into the appropriate world space
              .applyMatrix4( skinMatrices[k] );
    finalVector.add( tempVector );
}
output += '\t\t\tvertex ' + finalVector.x + ' ' + finalVector.y + ' ' + finalVector.z + '\n';
This yields STL files that match the animated pose at the moment of export.
The full code is available at https://gist.github.com/kjlubick/fb6ba9c51df63ba0951f

After a week of pulling my hair out, I managed to modify the code to include morphTarget data in the final STL file. You can find my modification of Kevin's code at https://gist.github.com/jcarletto27/e271bbb7639c4bed2427
As JS is not my favored language, it's not pretty, but it manages to work without much fuss. Hopefully someone besides me gets some use out of this!
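The gist of the change is to blend the morph targets into each vertex before the skinning loop runs, something like this (a sketch, not the exact gist code; morphTargetInfluences and geometry.morphTargets are standard legacy three.js fields, the rest follows the exporter code above):

// blend the morph targets into the bind-pose vertex before skinning
var base = vertices[ vertexIndex ];
var morphed = new THREE.Vector3().copy( base );
for ( var m = 0; m < mesh.morphTargetInfluences.length; m++ ) {
    var influence = mesh.morphTargetInfluences[ m ];
    if ( influence === 0 ) continue;
    var target = mesh.geometry.morphTargets[ m ].vertices[ vertexIndex ];
    // move the vertex toward the morph target, proportional to its influence
    morphed.x += ( target.x - base.x ) * influence;
    morphed.y += ( target.y - base.y ) * influence;
    morphed.z += ( target.z - base.z ) * influence;
}
// then feed `morphed` (instead of the raw bind-pose vertex) into the skinning loop
vector.copy( morphed );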

Related

Object pooling (isometric tiling) on phaser and camera positioning

I am new to Phaser and I am currently having a hard time generating a tilemap with the aid of the Phaser isometric plugin. I am also having trouble understanding some of the concepts around the Phaser world, the game, and the camera, which makes generating the tilemap correctly even harder. I have noticed that this seems to be a common obstacle for Phaser newbies like myself, and having it properly explained would certainly help.
Getting to the matter:
I have generated a tilemap with a for loop out of single grass sprites. The for loop works correctly, and I have also implemented a function where I can specify the number of tiles I want to generate.
{
    spawnBasicTiles: function(half_tile_width, half_tile_height, size_x, size_y) {
        var tile;
        for (var xx = 0; xx < size_x; xx += half_tile_width) {
            for (var yy = 0; yy < size_y; yy += half_tile_height) {
                tile = game.add.isoSprite(xx, yy, 0, 'tile', 0, tiles_group);
                tile.checkWorldBounds = true;
                tile.outOfBoundsKill = true;
                tile.anchor.set(0.5, 0);
            }
        }
    }
}
So the process of generating the static grass tiles is not a problem. The problem, and one of the reasons I am trying to get object pooling working correctly, is that a tile count above 80 has a dramatic impact on the game performance. Since I aim to make HUGE, auto-generated maps that are rendered according to the player character's position, object pooling is essential. I have created a group for these tiles and added the properties I thought would be required for the out-of-bounds tiles not to be rendered (physics, world bounds...). However, after many attempts I concluded that even the tiles that are out of bounds are still being generated. I also tried another method than game.add.isoSprite to generate the tiles, namely .create, but it made no difference. I guess no tiles are being "killed".
Is this process correct? What do I need to do in order to ONLY generate the tiles that appear on camera (I assume the camera is the visible game rectangle), and to generate the rest of them when the character moves to another area, assuming the camera is already tracking the character's movement?
Besides that, I am looking to spawn my character in the middle of the world, which is the middle of the generated grass tilemap. However, I am having a hard time doing that too. I think these are the concepts I should play with in order to achieve it, despite not managing to so far:
.anchor.set()
game.world.setBounds() (especially this one... it doesn't seem to set the bounds where I tell it to)
Phaser.game (I set its size to my window.width and window.height, and am not having much trouble with this)
Summing up:
Using the for-loop method of generating tiles above, how can I make infinite/almost-infinite maps that are generated when the camera that follows my character moves to another region? And besides that, what is the proper way of always spawning my character in the middle of my generated grass map, and of having the camera start where the character is?
I will appreciate the help a lot and hope this might be valuable for people in similar situations to mine. For any extra information just ask and if you want the function for generating the tiles by a specific number I will gladly post it. Sorry for any language mistakes.
Thank you very much.
EDIT:
I was able to always spawn my character in the middle of the grass by setting its X and Y to (size/width) and (size/height). Size is the measure of X and Y in px.
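For reference, the relevant bit looks roughly like this (size_x, size_y and 'character' come from my own setup, so treat them as placeholders):

// spawn the player at the centre of the generated map
player = game.add.isoSprite(size_x / 2, size_y / 2, 0, 'character', 0);
player.anchor.set(0.5);
game.camera.follow(player); // camera starts on, and keeps tracking, the character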

Three JS and Maya: exporting just the animation as JSON

I have a model that I've animated with Mixamo and then exported as an FBX into Maya. I've then used the Three.js exporter to output the animation 'baked' as morph targets.
Here's how the model looks when loaded into Maya:
However, when I read the data in, it includes not just the animation, but also the base model in a static pose, and each morphTarget array has the vertices repeated in it. This is what it ends up looking like:
Beyond manually writing some code to de-duplicate the vertices, is there any way to just get the animation out and not the model as well? I'm very new to Maya, so I'm guessing there's an option that I need to untick, or some selection step that I'm missing.
Thanks in advance
Should someone else have this problem, there's a simple answer (at least in this instance) - truncate the vertex and face arrays by half. After checking the vertices for duplicates it turned out they were all in the second half of these arrays, and could just be dumped.
geometry.vertices.length = geometry.vertices.length / 2;
geometry.faces.length = geometry.faces.length / 2;
geometry.morphTargets.forEach(function (target) {
    target.vertices.length = target.vertices.length / 2;
});
There's almost certainly a better way of doing it however.
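If the duplicates weren't conveniently packed into the second half, the legacy THREE.Geometry helper might be a cleaner route (untested here; note it doesn't touch morphTargets, so those would still need handling separately):

geometry.mergeVertices(); // merges duplicate vertices and remaps face indices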

XML3D: Exporting scene

I need to export scene as single STL file.
Whereas it's easy to export each single <asset>/<mesh>/<model>, exporting the whole scene with transformations is another story. That requires applying the world matrix transform to every vertex of each asset's data on the fly before export.
Does XML3D have some mechanism that would help me with that?
Where should I start?
Actually, XML3D is a presentation format and was never designed for extracting anything other than interactive renderings. However, since it is JavaScript, you can access everything somehow, and obviously you can also get the data you need to apply all transformations and create a single huge STL mesh from the scene.
The easiest way I can imagine is using the internal scene:
var scene = document.querySelector("xml3d")._configured.adapters["webgl_1"].getScene();

// Iterate render objects
scene.ready.forEach(function (renderObject) {
    // Get world matrix
    var worldMatrix = new Float32Array(16);
    renderObject.getWorldMatrix(worldMatrix);

    // Get local position data
    var dataRequest = new Xflow.ComputeRequest(renderObject.drawable.dataNode, ["position"]);
    var positions = dataRequest.getResult().getOutputData("position").getValue();

    console.log(worldMatrix, positions.length);

    // apply worldMatrix to all positions
    ...
});
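To fill in the elided step, a minimal sketch of applying a column-major 4x4 matrix (the WebGL convention for Float32Array matrices) to a flat xyz position array; transformed is a new output array I'm introducing here:

var transformed = new Float32Array(positions.length);
for (var i = 0; i < positions.length; i += 3) {
    var x = positions[i], y = positions[i + 1], z = positions[i + 2], m = worldMatrix;
    // multiply [x, y, z, 1] by the matrix, column-major layout
    transformed[i]     = m[0] * x + m[4] * y + m[8]  * z + m[12];
    transformed[i + 1] = m[1] * x + m[5] * y + m[9]  * z + m[13];
    transformed[i + 2] = m[2] * x + m[6] * y + m[10] * z + m[14];
}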

render a tile map using javascript

I'm looking for a logical understanding with sample implementation ideas on taking a tilemap such as this:
http://thorsummoner.github.io/old-html-tabletop-test/pallete/tilesets/fullmap/scbw_tiles.png
And rendering in a logical way such as this:
http://thorsummoner.github.io/old-html-tabletop-test/
I see all of the tiles are there, but I don't understand how they are placed in a way that forms shapes.
My understanding of rendering tiles so far is simple, and very manual. Loop through map array, where there are numbers (1, 2, 3, whatever), render that specified tile.
var mapArray = [
    [0, 0, 0, 0, 0],
    [0, 1, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 1, 1, 0]
];

function drawMap() {
    background = new createjs.Container();
    for (var y = 0; y < mapArray.length; y++) {
        for (var x = 0; x < mapArray[y].length; x++) {
            var tile;
            if (parseInt(mapArray[y][x]) == 0) {
                tile = new createjs.Bitmap('images/tile.png');
            }
            if (parseInt(mapArray[y][x]) == 1) {
                tile = new createjs.Bitmap('images/tile2.png');
            }
            tile.x = x * 28;
            tile.y = y * 28;
            background.addChild(tile);
        }
    }
    stage.addChild(background);
}
Gets me:
But this means I have to manually figure out where each tile goes in the array so that logical shapes are made (rock formations, grass patches, etc.).
Clearly, the guy who made the github code above used a different method. Any guidance on understanding the logic (with simple pseudocode) would be very helpful.
There isn't any logic there.
If you inspect the page's source, you'll see that the last script tag, in the body, has a huge array of tile coordinates.
There is no magic in that example which demonstrates an "intelligent" system for figuring out how to form shapes.
Now, that said, there are such things... ...but they're not remotely simple.
What is more simple, and more manageable, is a map-editor.
Tile Editors
out of the box:
There are lots of ways of doing this... There are free or cheap programs which will allow you to paint tiles, and will then spit out XML or JSON or CSV or whatever the given program supports/exports.
Tiled ( http://mapeditor.org ) is one such example.
There are others, but Tiled is the first I could think of, is free, and is actually quite decent.
pros:
The immediate upside is that you get an app that lets you load image tiles, and paint them into maps.
These apps might even support adding collision-layers and entity-layers (put an enemy at [2,1], a power-up at [3,5] and a "hurt-player" trigger, over the lava).
cons:
...the downside is that you need to know exactly how these files are formatted, so that you can read them into your game engines.
Now, the outputs of these systems are relatively-standardized... so that you can plug that map data into different game engines (what's the point, otherwise?), and while game-engines don't all use tile files that are exactly the same, most good tile-editors allow for export into several formats (some will let you define your own format).
...so that said, the alternative (or really, the same solution, just hand-crafted), would be to create your own tile-editor.
DIY
You could create it in Canvas, just as easily as creating the engine to paint the tiles.
The key difference is that you have your map of tiles (like the tilemap .png from StarCr... erm... the "found-art" from the example, there).
Instead of looping through an array, finding the coordinates of the tile and painting them at the world-coordinates which match that index, what you would do is choose a tile from the map (like choosing a colour in MS Paint), and then wherever you click (or drag), figure out which array point that relates to, and set that index to be equal to that tile.
pros:
The sky is the limit; you can make whatever you want, make it fit any file-format you want to use, and make it handle any crazy stuff you want to throw at it...
cons:
...this of course, means you have to make it, yourself, and define the file-format you want to use, and write the logic to handle all of those zany ideas...
basic implementation
While I'd normally try to make this tidy, and JS-paradigm friendly, that would result in a LOT of code, here.
So I'll try to denote where it should probably be broken up into separate modules.
// assuming images are already loaded properly
// and have fired onload events, which you've listened for,
// so that there are no surprises when your engine tries to
// paint something that isn't there yet

// this should all be wrapped in a module that deals with
// loading tile-maps, selecting the tile to "paint" with,
// and generating the data-format for the tile, for you to put into the array
// (or accepting plug-in data-formatters, to do so)
var selected_tile = null,
    selected_tile_map = get_tile_map(), // this would be an image with your tiles

    tile_width  = 64, // in image-pixels, not canvas/screen-pixels
    tile_height = 64, // in image-pixels, not canvas/screen-pixels

    num_tiles_x = selected_tile_map.width  / tile_width,
    num_tiles_y = selected_tile_map.height / tile_height,

    select_tile_num_from_map = function (map_px_X, map_px_Y) {
        // there are *lots* of ways to do this, but keeping it simple
        var tile_y = Math.floor(map_px_Y / tile_height), // 4 = floor(280/64)
            tile_x = Math.floor(map_px_X / tile_width ),
            tile_num = tile_y * num_tiles_x + tile_x;
        // 23 = 4 down * 5 per row + 3 over
        return tile_num;
    };

// won't go into event-handling and coordinate-normalization
selected_tile_map.onclick = function (evt) {
    // these are the coordinates of the click,
    // as they relate to the actual image at full scale;
    // normalizing evt.clientX/Y is left out here on purpose
    var map_x = 0, map_y = 0;
    selected_tile = select_tile_num_from_map(map_x, map_y);
};
Now you have a simple system for figuring out which tile was clicked.
Again, there are lots of ways of building this, and you can make it more OO,
and make a proper "tile" data-structure, that you expect to read and use throughout your engine.
Right now, I'm just returning the zero-based number of the tile, reading left to right, top to bottom.
If there are 5 tiles per row, and someone picks the first tile of the second row, that's tile #5.
Then, for "painting", you just need to listen to a canvas click, figure out what the X and Y were,
figure out where in the world that is, and what array spot that's equal to.
From there, you just dump in the value of selected_tile, and that's about it.
// this might be one long array, like I did with the tile-map and the number of the tile
// or it might be an array of arrays: each inner-array would be a "row",
// and the outer array would keep track of how many rows down you are,
// from the top of the world
var world_map = [],
selected_coordinate = 0,
world_tile_width = 64, // these might be in *canvas* pixels, or "world" pixels
world_tile_height = 64, // this is so you can scale the size of tiles,
// or zoom in and out of the map, etc
world_width = 320,
world_height = 320,
num_world_tiles_x = world_width / world_tile_width,
num_world_tiles_y = world_height / world_tile_height,
get_map_coordinates_from_click = function (world_x, world_y) {
var coord_x = Math.floor(world_px_x / num_world_tiles_x),
coord_y = Math.floor(world_px_y / num_world_tiles_y),
array_coord = coord_y * num_world_tiles_x + coord_x;
return array_coord;
},
set_map_tile = function (index, tile) {
world_map[index] = tile;
};
canvas.onclick = function (evt) {
// convert screen x/y to canvas, and canvas to world
world_px_x, world_px_y;
selected_coordinate = get_map_coordinates_from_click(world_px_x, world_px_y);
set_map_tile(selected_coordinate, selected_tile);
};
As you can see, the procedure for doing one is pretty much the same as the procedure for doing the other (because it is -- given an x and y in one coordinate-set, convert it to another scale/set).
The procedure for drawing the tiles, then, is nearly the exact opposite.
Given the world-index and tile-number, work in reverse to find the world-x/y and tilemap-x/y.
You can see that part in your example code, as well.
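A hedged sketch of that reverse mapping, reusing the variable names from the snippets above (draw_tile and the canvas context ctx are my own additions here):

var draw_tile = function (ctx, index, tile_num) {
    // world-index -> world x/y
    var world_x = (index % num_world_tiles_x) * world_tile_width,
        world_y = Math.floor(index / num_world_tiles_x) * world_tile_height,
        // tile-number -> tilemap x/y
        map_x = (tile_num % num_tiles_x) * tile_width,
        map_y = Math.floor(tile_num / num_tiles_x) * tile_height;
    // copy one tile out of the tilemap image into its spot in the world
    ctx.drawImage(selected_tile_map, map_x, map_y, tile_width, tile_height,
                  world_x, world_y, world_tile_width, world_tile_height);
};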
This tile-painting is the traditional way of making 2d maps, whether we're talking about StarCraft, Zelda, or Mario Bros.
Not all of them had the luxury of a "paint with tiles" editor (some were done by hand in text files, or even spreadsheets, to get the spacing right), but if you load up StarCraft or even WarCraft III (which is 3D) and go into their editors, a tile-painter is exactly what you get, and is exactly how Blizzard made those maps.
additions
With the basic premise out of the way, you now have other "maps" which are also required:
you'd need a collision-map to know which of those tiles you could/couldn't walk on, an entity-map, to show where there are doors, or power-ups or minerals, or enemy-spawns, or event-triggers for cutscenes...
Not all of these need to operate in the same coordinate-space as the world map, but it might help.
Also, you might want a more intelligent "world".
The ability to use multiple tile-maps in one level, for instance...
And a drop-down in a tile-editor to swap tile-maps.
...a way to save out both tile-information (not just X/Y, but also other info about a tile), and to save out the finished "map" array, filled with tiles.
Even just copying JSON, and pasting it into its own file...
Procedural Generation
The other way of doing this, the way you suggested earlier ("knowing how to connect rocks, grass, etc") is called Procedural Generation.
This is a LOT harder and a LOT more involved.
Games like Diablo use this, so that you're in a different randomly-generated environment, every time you play. Warframe is an FPS which uses procedural generation to do the same thing.
premise:
Basically, you start with tiles, and instead of just a tile being an image, a tile has to be an object that has an image and a position, but ALSO has a list of things that are likely to be around it.
When you put down a patch of grass, that grass will then have a likelihood of generating more grass beside it.
The grass might say that there's a 10% chance of water, a 20% chance of rocks, a 30% chance of dirt, and a 40% chance of more grass, in any of the four directions around it.
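As a toy sketch of that idea (the weights and tile names here are made up for illustration):

var neighbor_weights = { water: 0.1, rock: 0.2, dirt: 0.3, grass: 0.4 },
    pick_neighbor = function (weights) {
        // roll once, then walk the cumulative weights until we pass the roll
        var roll = Math.random(),
            sum = 0,
            type;
        for (type in weights) {
            sum += weights[type];
            if (roll < sum) { return type; }
        }
        return type; // guard against floating-point leftovers
    };

// a freshly-placed grass tile asks what its northern neighbour should be
var next_tile = pick_neighbor(neighbor_weights);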
Of course, it's really not that simple (or it could be, if you're wrong).
While that's the idea, the tricky part of procedural generation is actually in making sure everything works without breaking.
constraints
You couldn't, for example have the cliff wall, in that example, appear on the inside of the high-ground. It can only appear where there's high ground above and to the right, and low-ground below and to the left (and the StarCraft editor did this automatically, as you painted). Ramps can only connect tiles that make sense. You can't wall off doors, or wrap the world in a river/lake that prevents you from moving (or worse, prevents you from finishing a level).
pros
Really great for longevity, if you can get all of your pathfinding and constraints to work -- not only for pseudo-randomly generating the terrain and layout, but also enemy-placement, loot-placement, et cetera.
People are still playing Diablo II, nearly 14 years later.
cons
Really difficult to get right, when you're a one-man team (who doesn't happen to be a mathematician/data-scientist in their spare time).
Really bad for guaranteeing that maps are fun/balanced/competitive...
StarCraft could never have used 100% random-generation for fair gameplay.
Procedural-generation can be used as a "seed".
You can hit the "randomize" button, see what you get, and then tweak and fix from there, but there'll be so much fixing for "balance", or so many game-rules written to constrain the propagation, that you'll end up spending more time fixing the generator than just painting a map, yourself.
There are some tutorials out there, and learning genetic-algorithms, pathfinding, et cetera, are all great skills to have... ...buuuut, for purposes of learning to make 2D top-down tile-games, are way-overkill, and rather, are something to look into after you get a game/engine or two under your belt.

Efficient way to light up/cast shadows on a voxel terrain

I'm using a BufferGeometry and some predefined data to create an object similar to a Minecraft chunk (made of voxels and containing cave-like structures). I'm having a problem lighting up this object efficently.
At the moment I'm using a MeshLambertMaterial and a DirectionalLight which enables me to cast shadows on voxels not in view of the light, however this isn't efficient to use for a large terrain because it requires a very large shadow map and will often cause glitchy shadow artifacts as a result.
Here's the code I'm using to add the indices and vertices to the BufferGeometry:
// Add indices to BufferGeometry
for (var i = 0; i < section.indices.length; i++) {
    var j = i * 3;
    var q = section.indices[i];

    indices[j]     = q[0] % chunkSize;
    indices[j + 1] = q[1] % chunkSize;
    indices[j + 2] = q[2] % chunkSize;
}

// Add vertices to BufferGeometry
for (var i = 0; i < section.vertices.length; i++) {
    var q = section.vertices[i];
    // There's 1 color for every 4 vertices (square)
    var hexColor = section.colors[i / 4];
    addVertex(i, q[0], q[1], q[2], hexColor);
}
And my 'chunk' example: http://jsfiddle.net/9sSyz/4/
A screenshot:
If I were to remove the shadows from my example, all voxels on the correct side would be lit up even if another voxel obstructed the light. I just need another scalable way to give the illusion of a shadow. Perhaps by changing vertex colors if not in view of the light? It doesn't have to be as accurate as the current shadow implementation so changing the vertex colors (to give a blocky vertex-bound shadow) would be enough.
Would appreciate any help or advice. Thanks.
Generally, if you have large terrains, the idea is to split the scene into several cascades, where each cascade has its own shadow map. The technique is called CSM - cascaded shadow maps. The problem is, I haven't heard of a WebGL example that implements this technique. CSMs are used on dynamic scenes, but I'm not sure how easy it would be to implement this with Three.js.
The second option is adding ambient occlusion, as suggested by WestLangley, but it's just occlusion, not a shadow. The results are very different.
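For the occlusion route, there's a well-known per-vertex trick for voxels (a sketch only; solid() is a hypothetical lookup into your chunk data returning 0 or 1):

// classic voxel AO: darken a vertex based on its two side neighbours and the corner
// returns 0 (fully occluded) .. 3 (fully open), which you map to a vertex-color brightness
function vertexAO(side1, side2, corner) {
    if (side1 && side2) { return 0; }
    return 3 - (side1 + side2 + corner);
}

// e.g. for the top-north-east vertex of the voxel at (x, y, z):
var ao = vertexAO(solid(x + 1, y + 1, z), solid(x, y + 1, z + 1), solid(x + 1, y + 1, z + 1));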
The third option, if your scene is mostly static, is baked shadows: preprocessed textures that you simply apply to the terrain. To support dynamic objects, just render their shadow maps and apply those to some geometry that mimics the shadowed area (perhaps a plane that hovers slightly above the ground and receives the shadow).
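A minimal sketch of that hovering "shadow catcher" plane (old-style three.js API assumed; worldWidth/worldDepth are placeholders for your terrain size):

var shadowPlane = new THREE.Mesh(
    new THREE.PlaneGeometry(worldWidth, worldDepth),
    new THREE.MeshLambertMaterial({ color: 0xffffff })
);
shadowPlane.rotation.x = -Math.PI / 2; // lay it flat
shadowPlane.position.y = 0.01;         // hover just above the ground
shadowPlane.receiveShadow = true;      // only the dynamic casters need castShadow = true
scene.add(shadowPlane);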
Any combination of the techniques mentioned is also an option.
P.S. Could you also supply a screenshot? The fiddle fails to load.
