Efficient way to light up/cast shadows on a voxel terrain - javascript

I'm using a BufferGeometry and some predefined data to create an object similar to a Minecraft chunk (made of voxels and containing cave-like structures). I'm having a problem lighting this object up efficiently.
At the moment I'm using a MeshLambertMaterial and a DirectionalLight, which lets me cast shadows on voxels not in view of the light. However, this isn't efficient for a large terrain because it requires a very large shadow map and often produces glitchy shadow artifacts as a result.
Here's the code I'm using to add the indices and vertices to the BufferGeometry:
// Add indices to the BufferGeometry
for ( var i = 0; i < section.indices.length; i ++ ) {
    var j = i * 3;
    var q = section.indices[i];
    indices[ j ]     = q[0] % chunkSize;
    indices[ j + 1 ] = q[1] % chunkSize;
    indices[ j + 2 ] = q[2] % chunkSize;
}

// Add vertices to the BufferGeometry
for ( var i = 0; i < section.vertices.length; i ++ ) {
    var q = section.vertices[i];
    // There's 1 color for every 4 vertices (square)
    var hexColor = section.colors[ Math.floor( i / 4 ) ];
    addVertex( i, q[0], q[1], q[2], hexColor );
}
And my 'chunk' example: http://jsfiddle.net/9sSyz/4/
A screenshot:
If I were to remove the shadows from my example, all voxels on the correct side would be lit up even if another voxel obstructed the light. I just need another scalable way to give the illusion of a shadow. Perhaps by changing vertex colors if not in view of the light? It doesn't have to be as accurate as the current shadow implementation so changing the vertex colors (to give a blocky vertex-bound shadow) would be enough.
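Something like this rough sketch is what I have in mind (both isVoxelOccludedTowardLight and darken are hypothetical helpers, not code I have yet):
// Rough sketch only -- isVoxelOccludedTowardLight and darken are hypothetical
for ( var i = 0; i < section.vertices.length; i += 4 ) { // one quad = 4 vertices
    var q = section.vertices[ i ];
    var hexColor = section.colors[ i / 4 ];
    if ( isVoxelOccludedTowardLight( q[0], q[1], q[2], lightDirection ) ) {
        hexColor = darken( hexColor, 0.5 ); // blocky, vertex-bound "shadow"
    }
    // ...use hexColor for all 4 vertices of this quad
}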
Would appreciate any help or advice. Thanks.

Generally, if you have large terrains, the idea is to split the scene into several cascades, each with its own shadow map. The technique is called CSM: cascaded shadow maps. The problem is, I haven't heard of a WebGL example that implements this technique. CSMs are used on dynamic scenes, but I'm not sure how easy it would be to implement this with Three.js.
The second option is adding ambient occlusion, as suggested by WestLangley, but it's just occlusion, not a shadow. The results are very different.
The third option, if your scene is mostly static, is baked shadows: preprocessed textures that you simply apply to the terrain. To support dynamic objects, just render their shadow maps and apply those to some geometry that mimics the shadowed area (perhaps a plane that hovers slightly above the ground and receives the shadow).
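For the dynamic-object part, a minimal sketch (assuming a three.js version that ships THREE.ShadowMaterial; otherwise a dark, transparent material can fake it):
// A nearly invisible plane that only receives shadows, hovering just
// above the baked terrain (assumes THREE.ShadowMaterial is available):
var shadowCatcher = new THREE.Mesh(
    new THREE.PlaneGeometry( 200, 200 ),
    new THREE.ShadowMaterial( { opacity: 0.4 } )
);
shadowCatcher.rotation.x = -Math.PI / 2; // lie flat on the ground
shadowCatcher.position.y = 0.01;         // hover slightly above it
shadowCatcher.receiveShadow = true;
scene.add( shadowCatcher );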
Any combination of the techniques mentioned is also an option.
P.S. Could you also supply a screenshot? The fiddles fail to load.

Related

Apply three.js subdivision modifier without changing outer geometry?

I am trying to take any three.js geometry and subdivide its existing faces into smaller faces. This would essentially give the geometry a higher "resolution". There is a subdivision modifier tool in the examples of three.js that works great for what I'm trying to do, but it ends up changing and morphing the original shape of the geometry. I'd like to retain the original shape.
View the Subdivision Modifier Example
Example of how the current subdivision modifier behaves:
Rough example of how I'd like it to behave:
The subdivision modifier is applied like this:
let originalGeometry = new THREE.BoxGeometry(1, 1, 1);
let subdivisionModifier = new THREE.SubdivisionModifier(3);
let subdividedGeometry = originalGeometry.clone();
subdivisionModifier.modify(subdividedGeometry);
I attempted to dig around the source of the subdivision modifier, but I wasn't sure how to modify it to get the desired result.
Note: The subdivision should be able to be applied to any geometry. My example of the desired result might make it seem that a three.js PlaneGeometry with increased segments would work, but I need this to be applied to a variety of geometries.
Based on the suggestions in the comments by TheJim01, I was able to dig through the original source and modify the vertex weight, edge weight, and beta values to retain the original shape. My modifications should remove any averaging, and put all the weight toward the source shape.
There were three sections that had to be modified, so I went ahead and made it an option that can be passed into the constructor called retainShape, which defaults to false.
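To illustrate the idea, here is a sketch of the kind of change involved; the variable names are approximate, and the actual edits are in the gist linked below:
// Sketch of the retainShape idea inside the modifier's subdivide step
// (approximate names; see the gist for the real code):
if ( retainShape ) {
    // New edge points: plain midpoint of the edge, no 3/8-1/8 Loop weighting
    newEdgeVertex.addVectors( edgeVertexA, edgeVertexB ).multiplyScalar( 0.5 );
    // Original vertices: keep the source position, beta averaging removed
    newSourceVertex.copy( oldSourceVertex );
}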
I made a gist with the modified code for SubdivisionGeometry.js.
View the modified SubdivisionGeometry.js Gist
Below is an example of a cube being subdivided with the option turned off, and turned on.
Left: new THREE.SubdivisionModifier(2, false);
Right: new THREE.SubdivisionModifier(2, true);
If anyone runs into any issues with this or has any questions, let me know!
The current version of three.js has optional parameters for PlaneGeometry that specify the number of segments for the width and height; both default to 1. In the example below I set both widthSegments and heightSegments to 128. This has a similar effect to using SubdivisionModifier, except that SubdivisionModifier distorts the shape while specifying the segments does not, which works better for me.
var widthSegments = 128;
var heightSegments = 128;
var geometry = new THREE.PlaneGeometry(10, 10, widthSegments, heightSegments);
// var geometry = new THREE.PlaneGeometry(10, 10); // segments default to 1
// var modifier = new THREE.SubdivisionModifier( 7 );
// geometry = modifier.modify(geometry);
https://threejs.org/docs/#api/en/geometries/PlaneGeometry

How to rotate an array of canvas rectangles

I'm creating a Pentomino puzzle game for a final project in a class I'm taking. I've created all dozen of the required puzzle pieces and can drag them around here. I've also tried this code to rotate the array (without using canvas.rotate(); it's located at the very bottom of the fiddle), which basically swaps the X & Y coordinates when drawing the new piece:
// Pull the target piece out of the list so it can be re-added on top
var newPiece = targetPiece;
pieces.splice(pieces.indexOf(targetPiece), 1);
targetPiece = null;
console.log(newPiece);

// Swap each block's X & Y coordinates to fake the rotation
var geometry = [];
for (var i = 0; i < newPiece.geometry.length; i++) {
    geometry.push([newPiece.geometry[i][1], newPiece.geometry[i][0]]);
}
var offset = [newPiece.offset[1], newPiece.offset[0]];
console.log(geometry);
console.log(offset);

newPiece.geometry = geometry;
newPiece.position = geometry;
newPiece.offset = offset;
pieces.push(newPiece);
console.log(pieces);

// Redraw everything
for (var j = 0; j < pieces.length; j++) {
    draw(pieces[j]);
}
This doesn't work properly, but has promise.
In this fiddle, I've isolated the problem down to a single piece and tried to use canvas.rotate() to rotate the array by double clicking. What's actually happening is that it rotates each block of the array individually (I think), which results in nothing visible happening: each block is just a 50x50 rectangle, and when you rotate a square it still looks like a square.
function doubleClickListener(e) {
    var br = canvas.getBoundingClientRect();
    mouse_x = (e.clientX - br.left) * (canvas.width / br.width);
    mouse_y = (e.clientY - br.top) * (canvas.height / br.height);
    for (var i = 0; i < pieces.length; i++) {
        if (onTarget(pieces[i], mouse_x, mouse_y)) {
            targetPiece = pieces[i];
            rotate(targetPiece);
        }
    }
}

function rotate() {
    // Cycle the rotation state 0 -> 1 -> 2 -> 3 -> 0
    targetPiece.rotationIndex = (targetPiece.rotationIndex + 1) % 4;
    for (var j = 0; j < pieces.length; j++) {
        draw(pieces[j]);
    }
}
Just FYI, I've tried creating the puzzle pieces as individual polygons, but could not figure out how to capture it with a mousedown event and move it with mousemove, so I abandoned it for the canvas rectangle arrays which were relatively simple to grab & move.
There's a brute force solution to this, and a total rewrite solution, both of which I'd rather avoid (I'm up against a deadline-ish). The brute force solution is to create geometry for all possible pieces (rotations & mirroring), which requires 63 separate geometry variants for the 12 pieces and management of those states. The rewrite would be to use fabric.js (which I'll probably do after class is over because I want to have a fully functional puzzle).
What I'd like to be able to do is rotate the array of five blocks with a double click (don't care which way it goes as long as it's sequential 90° rotations).
Approaching a usable puzzle:
With lots of help from @Absolom, here's what I have: you can drag a piece with a mouse click & drag, rotate a piece by double clicking it, and mirror a piece by right clicking it (well, mostly; it won't actually rotate until you next move the piece, and I'm working on that). The Z-order of the pieces is manipulated so that the piece you're working with is always on top (it has to be the last one in the array to appear above all the other pieces):
Pentominoes II
The final solution
I've just handed the game in for grading, thanks for all the help! There was a lot more tweaking to be done, and there are still some things I'd change if I rewrite it, but I'm pretty happy with the result.
Pentominoes Final
Quick & Dirty:
The quick & dirty solution is when 2+ pieces are assembled you create a single image of them (using an in-memory canvas). That way you can move / rotate the 2-piece-as-1-image as a single entity.
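A minimal sketch of that in-memory canvas merge (drawPiece stands in for your existing drawing routine):
// Draw the assembled pieces once into an offscreen buffer; afterwards the
// whole assembly can be moved/rotated via a single drawImage call.
function mergePieces(assembledPieces, width, height) {
    var buffer = document.createElement('canvas'); // in-memory, not in the DOM
    buffer.width = width;
    buffer.height = height;
    var ctx = buffer.getContext('2d');
    for (var i = 0; i < assembledPieces.length; i++) {
        drawPiece(ctx, assembledPieces[i]); // reuse the normal draw code
    }
    return buffer; // e.g. mainCtx.drawImage(buffer, x, y)
}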
More Proper:
If the 2+ piece assembly must later be disassembled, then you will need the more proper way of maintaining transformation state per piece. That more proper way is to assign a transformation matrix to each piece.
Stackoverflow contributor Ken Fyrstenberg (K3N) has coded a nice script which allows you to track individual polygons (eg your rects) using transformation matrices: https://github.com/epistemex/transformation-matrix-js
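Even without a matrix library, the built-in canvas transform can rotate a single piece around its own center at draw time (a sketch; drawPiece and the center fields are assumed):
// Rotate just one piece by saving/restoring the canvas transform state:
function drawRotated(ctx, piece, angleRad) {
    ctx.save();
    ctx.translate(piece.centerX, piece.centerY);   // pivot on the piece center
    ctx.rotate(angleRad);                          // e.g. Math.PI / 2 for 90 degrees
    ctx.translate(-piece.centerX, -piece.centerY);
    drawPiece(ctx, piece);                         // existing draw routine
    ctx.restore();                                 // other pieces unaffected
}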
Does this code do what you need? The rotate method looks like this now:
function rotate(piece) {
    // Swap and negate each block's coordinates: (x, y) -> (-y, x)
    for (var i = 0; i < piece.geometry.length; i++) {
        var x = piece.geometry[i][0];
        var y = piece.geometry[i][1];
        piece.geometry[i][0] = -y;
        piece.geometry[i][1] = x;
    }
    drawAll();
}
I also simplified how your geometry and positioning were handled. It's not perfect, but it can give you some hints on how to handle your issues.
Please note that this solution works because each piece is composed of blocks of the same color and your rotations are 90 degrees. I only move the blocks around to simulate the rotation; nothing is rotated per se. If you build your pieces differently, or if you need to rotate at different angles, you would need another approach such as transformation matrices.
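For reference, the arbitrary-angle version of that same move would rotate each block's coordinates about the piece's pivot (a standalone sketch, not part of the fiddle):
// General rotation of (x, y) about pivot (px, py); with angleRad = Math.PI/2
// and pivot (0, 0) this reduces to the (x, y) -> (-y, x) swap used above.
function rotatePoint(x, y, px, py, angleRad) {
    var dx = x - px, dy = y - py;
    var cos = Math.cos(angleRad), sin = Math.sin(angleRad);
    return [px + dx * cos - dy * sin, py + dx * sin + dy * cos];
}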
UPDATE
Here is a better solution: fiddle

Exporting Three.js scene to STL keeping animations intact

I have a Three.js scene rendered and I would like to export how it looks after the animations have rendered. For example, after the animation has gone ~100 frames, the user hits export and the scene should be exported to STL just as it is at that moment.
From what I've tried (using STLExporter.js, that is), it seems to export the model using the initial positions only.
If there's already a way to do this, or a straightforward work around, I would appreciate a nudge in that direction.
Update: After a bit more digging into the internals, I've figured out (at least superficially) why STLExporter did not work. STLExporter finds all objects and asks them for the vertices and faces of their Geometry object. My model has a bunch of bones that are skinned. During the animation step, the bones get updated, but these updates do not propagate to the original Geometry object. I know these transformed vertices are being calculated and exist somewhere (they get displayed on the canvas).
The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?
The question is where are these transformed vertices and faces stored and can I access them to export them as an STL?
The answer to this, unfortunately, is nowhere. These are all computed on the GPU through calls to WebGL functions by passing in several large arrays.
To explain how to calculate this, let's first review how animation works, using this knight example for reference.
The SkinnedMesh object contains, among other things, a skeleton (made of many Bones) and a bunch of vertices. They start out arranged in what's known as a bind pose. Each vertex is bound to 0-4 bones, and if those bones move, the vertices move with them, creating animation.
If you were to take our knight example, pause the animation mid-swing, and try the standard STL exporter, the STL file generated would be exactly this pose, not the animated one. Why? Because it simply looks at mesh.geometry.vertices, which are not changed from the original bind pose during animation. Only the bones experience change and the GPU does some math to move the vertices corresponding to each bone.
The math to move each vertex is pretty straightforward: transform the bind-pose vertex position into bone space, then from bone space to global space, before exporting.
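In equation form, this is the standard linear blend skinning sum that the adapted code below implements, where $w_k$ are the four skinWeights, $B_k$ the boneInverses (bind pose to bone space), and $M_k$ the bones' matrixWorld (bone space to global space):

$$v_{\text{final}} = \sum_{k=0}^{3} w_k \, M_k \, B_k \, v_{\text{bind}}$$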
Adapting the code from here, we add this to the original exporter:
vector.copy( vertices[ vertexIndex ] );

var boneIndices = []; // which bones we need
boneIndices[0] = mesh.geometry.skinIndices[vertexIndex].x;
boneIndices[1] = mesh.geometry.skinIndices[vertexIndex].y;
boneIndices[2] = mesh.geometry.skinIndices[vertexIndex].z;
boneIndices[3] = mesh.geometry.skinIndices[vertexIndex].w;

var weights = []; // some bones impact the vertex more than others
weights[0] = mesh.geometry.skinWeights[vertexIndex].x;
weights[1] = mesh.geometry.skinWeights[vertexIndex].y;
weights[2] = mesh.geometry.skinWeights[vertexIndex].z;
weights[3] = mesh.geometry.skinWeights[vertexIndex].w;

var inverses = []; // boneInverses are the transform from bind pose to "bone space"
inverses[0] = mesh.skeleton.boneInverses[ boneIndices[0] ];
inverses[1] = mesh.skeleton.boneInverses[ boneIndices[1] ];
inverses[2] = mesh.skeleton.boneInverses[ boneIndices[2] ];
inverses[3] = mesh.skeleton.boneInverses[ boneIndices[3] ];

var skinMatrices = []; // each bone's matrixWorld is the transform from "bone space" to global space
skinMatrices[0] = mesh.skeleton.bones[ boneIndices[0] ].matrixWorld;
skinMatrices[1] = mesh.skeleton.bones[ boneIndices[1] ].matrixWorld;
skinMatrices[2] = mesh.skeleton.bones[ boneIndices[2] ].matrixWorld;
skinMatrices[3] = mesh.skeleton.bones[ boneIndices[3] ].matrixWorld;

var finalVector = new THREE.Vector4();
for ( var k = 0; k < 4; k++ ) {
    var tempVector = new THREE.Vector4( vector.x, vector.y, vector.z );
    // weight the transformation
    tempVector.multiplyScalar( weights[k] );
    // the inverse takes the vector into local bone space
    tempVector.applyMatrix4( inverses[k] )
        // which is then transformed to the appropriate world space
        .applyMatrix4( skinMatrices[k] );
    finalVector.add( tempVector );
}
output += '\t\t\tvertex ' + finalVector.x + ' ' + finalVector.y + ' ' + finalVector.z + '\n';
This yields STL files that look like:
The full code is available at https://gist.github.com/kjlubick/fb6ba9c51df63ba0951f
After a week of pulling my hair out, I managed to modify the code to include morphTarget data in the final STL file. You can find my modification of Kevin's code at https://gist.github.com/jcarletto27/e271bbb7639c4bed2427
As JS is not my favored language, it's not pretty, but it manages to work without much fuss. Hopefully someone besides me gets some use out of this!

Using Javascript to create SVG / canvas sketches with comic style / hand jitter [closed]

Closed 8 years ago. This question needs to be more focused and is not currently accepting answers.
How can I create SVG / canvas sketches with comic style / hand jitter using JavaScript?
I am aware of the xkcd style JS plotter and of this article.
Note that I am not asking for plots, but for any kind of sketching, lines, shapes, text and so on.
What further methods / technologies exist using JS?
EDIT: After having thought and read some time about this I decided to implement my own JS library providing cartoon style drawing for SVG and HTML5 canvas.
It is called comic.js and can be found here.
I would use a Canvas library to do that, just for the sake of simplicity when it comes to manipulating shapes.
The ones I would look at are Paper.js and Fabric.js; I'll focus on Paper.js because it is the one I have worked with.
You can draw beziers or lines to create the shapes, import SVGs, or use predefined shapes such as circles and squares.
So you have the shapes, now what?
You can flatten them (subdivide them into segments, adding more vertices to their geometry). Adjusting the subdivision interval controls the number of vertices/nodes per path; the subdivision interval is the maxDistance parameter of the flatten function.
Then you can walk along the vertices of each path/shape and move each one by a small amount (e.g. 1-2 pixels in a random direction), using position.x and position.y.
If this is what you mean, then here is the code:
//STEP 1 -- create shapes, a circle and a rectangle in this example
var myCircle = new Path.Circle(new Point(100, 70), 50);
myCircle.strokeColor = 'white';
myCircle.strokeWidth = 2;

var mySquare = new Rectangle(new Point(350, 250), new Point(190, 100));
var square = new Path.Rectangle(mySquare);
square.strokeColor = 'white';
square.strokeWidth = 2;

//STEP 2 -- Subdivide the shapes into segments. The parameter is the max distance we walk along the path before adding a new vertex
myCircle.flatten(5);
square.flatten(4);

//STEP 3 -- Loop through the segment points of each path and move each one by a random value between 1 and 3
for (var i = 0; i < myCircle.segments.length; i++) { // loop for circle
    myCircle.segments[i].point.x += getRandomInt(1, 3);
    myCircle.segments[i].point.y += getRandomInt(1, 3);
}
for (var i = 0; i < square.segments.length; i++) { // loop for square
    square.segments[i].point.x += getRandomInt(1, 3);
    square.segments[i].point.y += getRandomInt(1, 3);
}

// draw the paper view
view.draw();

// Utility function that returns a random integer within a range (inclusive)
function getRandomInt(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
and this is the jsFiddle for it:
The issue with this scenario is that each 'point' on your canvas is an object, and the high number of points/nodes/vertices is a bit heavy for the browser to handle. This might be an obstacle if your designs are complex and/or you want the user to interact with your drawings (the interaction might prove sluggish).
Alternatively, you can use plain old canvas to do this, without any libraries, but I wouldn't: you would need algorithms to draw the shapes manually and then introduce jitter into those algorithms. This would be much faster in terms of computation time, but harder to implement, since canvas is a low-level kind of thing: it only remembers the pixels drawn on it, and you would need to roll your own data structures to remember the shapes, their positions, and so on.
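For completeness, here is roughly what that hand-rolled approach looks like for a single "hand-drawn" line on a raw 2D context (a sketch, no libraries; names are illustrative):
// Draw a line as many short segments, nudging each interior point a little
// so it looks hand-drawn:
function jitteryLine(ctx, x1, y1, x2, y2, segments, jitter) {
    ctx.beginPath();
    ctx.moveTo(x1, y1);
    for (var i = 1; i <= segments; i++) {
        var t = i / segments;
        // keep the endpoints exact, jitter only the interior points
        var jx = (i === segments) ? 0 : (Math.random() - 0.5) * jitter;
        var jy = (i === segments) ? 0 : (Math.random() - 0.5) * jitter;
        ctx.lineTo(x1 + (x2 - x1) * t + jx, y1 + (y2 - y1) * t + jy);
    }
    ctx.stroke();
}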

Three.js - What is PlaneBufferGeometry

What exactly is PlaneBufferGeometry, and how is it different from PlaneGeometry? (r69)
PlaneBufferGeometry is a low-memory alternative to PlaneGeometry. The objects differ in a lot of ways; for instance, the vertices of a PlaneBufferGeometry are located in PlaneBufferGeometry.attributes.position instead of PlaneGeometry.vertices.
You can take a quick look in the browser console to figure out more differences, but as far as I understand, since the vertices are usually spaced at a uniform distance (X and Y) from each other, only the heights (Z) need to be given to position a vertex.
The main differences are between Geometry and BufferGeometry.
Geometry is a "user-friendly", object-oriented data structure, whereas BufferGeometry is a data structure that maps more directly to how the data is used in the shader program. BufferGeometry is faster and requires less memory, but Geometry is in some ways more flexible, and certain operations can be done with greater ease.
I have very little experience with Geometry, as I have found that BufferGeometry does the job in most cases. It is useful to learn, and work with, the actual data structures that are used by the shaders.
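As a quick illustration of that difference, here is the same first vertex accessed through both classes (a sketch against the r69-era API):
// Geometry: object-oriented, an array of THREE.Vector3 instances
var geo = new THREE.PlaneGeometry( 10, 10 );
var v   = geo.vertices[ 0 ];                 // v.x, v.y, v.z

// BufferGeometry: one flat Float32Array, three numbers per vertex
var buf = new THREE.PlaneBufferGeometry( 10, 10 );
var pa  = buf.attributes.position.array;     // [ x0, y0, z0, x1, y1, z1, ... ]
var x0  = pa[ 0 ], y0 = pa[ 1 ], z0 = pa[ 2 ];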
In the case of a PlaneBufferGeometry, you can access the vertex positions like this:
let pos = geometry.getAttribute("position");
let pa = pos.array;
Then set z values like this:
var hVerts = geometry.parameters.heightSegments + 1; // vertices = segments + 1
var wVerts = geometry.parameters.widthSegments + 1;
for (let j = 0; j < hVerts; j++) {
    for (let i = 0; i < wVerts; i++) {
        // +0 is x, +1 is y, +2 is z
        pa[3 * (j * wVerts + i) + 2] = Math.random();
    }
}
pos.needsUpdate = true;
geometry.computeVertexNormals();
Randomness is just an example. You could also plot a function of x and y, if you let x = pa[3*(j*wVerts+i)]; and let y = pa[3*(j*wVerts+i)+1]; in the inner loop. For a small performance benefit in the PlaneBufferGeometry case, let y = (0.5 - j/(hVerts-1)) * geometry.parameters.height in the outer loop instead.
geometry.computeVertexNormals(); is recommended if your material uses normals and you haven't calculated more accurate normals analytically. If you don't supply or compute normals, the material will use the default plane normals, which all point straight out of the original plane.
Note that the number of vertices along a dimension is one more than the number of segments along the same dimension.
Note also that (counterintuitively) the y values are flipped with respect to the j indices: vertices.push( x, - y, 0 ); (source)
