Three.js - What is PlaneBufferGeometry - javascript

What exactly is PlaneBufferGeometry, and how is it different from PlaneGeometry? (r69)

PlaneBufferGeometry is a lower-memory alternative to PlaneGeometry. The object itself differs in a lot of ways; for instance, the vertices of a PlaneBufferGeometry are located in PlaneBufferGeometry.attributes.position instead of PlaneGeometry.vertices.
You can take a quick look in the browser console to figure out more differences, but as far as I understand, since the vertices are usually spaced at a uniform distance (X and Y) from each other, only the heights (Z) need to be set to position a vertex.

The main differences are between Geometry and BufferGeometry.
Geometry is a "user-friendly", object-oriented data structure, whereas BufferGeometry is a data structure that maps more directly to how the data is used in the shader program. BufferGeometry is faster and requires less memory, but Geometry is in some ways more flexible, and certain operations can be done with greater ease.
I have very little experience with Geometry, as I have found that BufferGeometry does the job in most cases. It is useful to learn, and work with, the actual data structures that are used by the shaders.
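To make the difference concrete, here is a rough sketch (my own illustration; it assumes a three.js version before r125, where Geometry still exists, and planeGeometry / planeBufferGeometry are placeholder variables) of moving a single vertex in each API:
// Geometry: vertices are an array of THREE.Vector3 objects
planeGeometry.vertices[i].z = 2.0;
planeGeometry.verticesNeedUpdate = true;

// BufferGeometry: positions are a flat typed array of x, y, z triples
var pos = planeBufferGeometry.attributes.position;
pos.setZ(i, 2.0);
pos.needsUpdate = true;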
In the case of a PlaneBufferGeometry, you can access the vertex positions like this:
let pos = geometry.getAttribute("position");
let pa = pos.array;
Then set z values like this:
// width/height segment counts are stored on geometry.parameters
let hVerts = geometry.parameters.heightSegments + 1;
let wVerts = geometry.parameters.widthSegments + 1;
for (let j = 0; j < hVerts; j++) {
  for (let i = 0; i < wVerts; i++) {
    // +0 is x, +1 is y, +2 is z
    pa[3 * (j * wVerts + i) + 2] = Math.random();
  }
}
pos.needsUpdate = true;
geometry.computeVertexNormals();
Randomness is just an example. You could also, for instance, plot a function of x and y, if you let x = pa[3*(j*wVerts+i)]; and let y = pa[3*(j*wVerts+i)+1]; in the inner loop. For a small performance benefit in the PlaneBufferGeometry case, let y = (0.5 - j/(hVerts-1)) * geometry.parameters.height; in the outer loop instead.
geometry.computeVertexNormals(); is recommended if your material uses normals and you haven't calculated more accurate normals analytically. If you don't supply or compute normals, the material will use the default plane normals which all point straight out of the original plane.
Note that the number of vertices along a dimension is one more than the number of segments along the same dimension.
Note also that (counterintuitively) the y values are flipped with respect to the j indices: vertices.push( x, - y, 0 ); (source)
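Putting the pieces together, a minimal self-contained sketch might look like this (the sizes, segment counts and material are arbitrary choices of mine, and a scene with a light is assumed to exist):
var geometry = new THREE.PlaneBufferGeometry(200, 200, 50, 50);
var pos = geometry.getAttribute("position");
for (var i = 0; i < pos.count; i++) {
  pos.setZ(i, Math.random() * 10); // displace each vertex along z
}
pos.needsUpdate = true;
geometry.computeVertexNormals();
var terrain = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ color: 0x88aa66 }));
terrain.rotation.x = -Math.PI / 2; // lay the plane flat
scene.add(terrain);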

Related

Javascript and P5.js - Optimizing a 4D projection code

I know that "how to optimize this code?" kind of question is generally not welcomed in stack overflow. But I think this is only the way i could phrase my question. I wrote a code that projects a 4 dimensional points onto a 3 dimensional space. I then draw the 3d points using the p5.js library.
Here is my code: https://jsfiddle.net/dfq8ykLw/
Now my question here is, how am I supposed to make this run faster and optimize the code? Since I have to draw a few thousand points (sometimes more) per frame, and calculate the rotation for each of those points, my code tends to run incredibly slowly (Mac devices can run this a little faster for some reason).
I tried drawing points instead of vertices, which ended up running even slower.
Are there any suggestions for how to improve the performance? Or advice on what kind of library to use for drawing 3-dimensional shapes?
Just to explain, my program stores the points as a nested array, like [[Point object, Point object, ...], ...].
Each inner array works as a face, with each Point object being a vertex.
I first rotate each of these points by applying 6 rotations (in the xy, xz, xw, yz, yw and zw planes), then draw them by projecting them onto 3D space.
Any help is appreciated, as I am terribly stuck!
In the p5.js source code (_main.default.RendererGL.prototype.beginShape):
Begin shape drawing. However in WEBGL mode, application
performance will likely drop as a result of too many calls to
beginShape() / endShape(). As a high performance alternative,
...
So we may want to avoid too many beginShape() calls.
Likewise, call it per cube instead of per face:
beginShape()
data.forEach((hyperobject, i) => {
  // face
  for (var p in hyperobject) {
    hyperobject[p].rotate(angles[0], angles[1], angles[2], angles[3], angles[4], angles[5])
    hyperobject[p].draw()
  }
  if (i % 6 === 0) {
    endShape(CLOSE);
    beginShape()
  }
})
endShape()
However, some ugly stray lines are drawn, because the default mode is TRIANGLE_FAN:
_main.default.RendererGL.prototype.beginShape = function(mode) {
  this.immediateMode.shapeMode =
    mode !== undefined ? mode : constants.TRIANGLE_FAN;
So we may specify TRIANGLES instead:
function draw() {
  // noLoop()
  background(0);
  // translate(250, 250);
  for (var a in angles) {
    angles[a] += angleSpeeds[a];
  }
  beginShape(TRIANGLES)
  data.forEach((hyperobject, i) => {
    // face
    const [a, b, c, d] = hyperobject.map(a => {
      a.rotate(angles[0], angles[1], angles[2], angles[3], angles[4], angles[5])
      return a
    })
    // first triangle
    a.draw()
    b.draw()
    c.draw()
    // second triangle
    a.draw()
    b.draw()
    d.draw()
    if (i % 6 === 0) {
      endShape()
      beginShape(TRIANGLES)
    }
  })
  endShape()
}
Note that you could factor the rotations into a single matrix:
const [axy, axz, axw, ayz, ayw, azw] = angles
const f = x => [Math.cos(x), Math.sin(x)]
const [Ca, Sa] = f(axy)
const [Cb, Sb] = f(axz)
const [Cc, Sc] = f(axw)
const [Cd, Sd] = f(ayz)
const [Ce, Se] = f(ayw)
const [Cf, Sf] = f(azw)
const R = [
[Ca*Cb*Cc, -Cb*Cc*Sa, -Cc*Sb, -Sc],
[Ca*(-Cb*Sc*Se-Ce*Sb*Sd)+Cd*Ce*Sa, -Sa*(-Cb*Sc*Se-Ce*Sb*Sd)+Ca*Cd*Ce, -Cb*Ce*Sd+Sb*Sc*Se, -Cc*Se],
[Ca*(Sb*(Sd*Se*Sf+Cd*Cf)-Cb*Ce*Sc*Sf)+Sa*(-Cd*Se*Sf+Cf*Sd), -Sa*(Sb*(Sd*Se*Sf+Cd*Cf)-Cb*Ce*Sc*Sf)+Ca*(-Cd*Se*Sf+Cf*Sd), Cb*(Sd*Se*Sf+Cd*Cf)+Ce*Sb*Sc*Sf, -Cc*Ce*Sf],
[Ca*(Sb*(-Cf*Sd*Se+Cd*Sf)+Cb*Ce*Cf*Sc)+Sa*(Cd*Cf*Se+Sd*Sf),-Sa*(Sb*(-Cf*Sd*Se+Cd*Sf)+Cb*Ce*Cf*Sc)+Ca*(Cd*Cf*Se+Sd*Sf), Cb*(-Cf*Sd*Se+Cd*Sf)-Ce*Cf*Sb*Sc, Cc*Ce*Cf]
]
Point.prototype.rotate = function (R) {
  const X = [this.origx, this.origy, this.origz, this.origw]
  const [x, y, z, w] = prod(R, X)
  Object.assign(this, { x, y, z, w })
}
but this is not the bottleneck (roughly 1 ms for the math versus 50 ms for the drawing), so keeping your original per-axis rotation code may be preferable.
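Note that prod in the snippet above is not defined in the answer; a minimal matrix-vector helper would be something like:
// multiply a matrix (array of rows) by a vector (plain array)
function prod(M, X) {
  return M.map(row => row.reduce((sum, mij, j) => sum + mij * X[j], 0));
}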
I can't put a runnable code snippet here since WebGL won't run in the snippet sandbox, so here is a fiddle:
https://jsfiddle.net/gk4Lvptm/
First see this (and all its sub-links) for inspiration:
how should i handle (morphing) 4D objects in opengl?
especially the 4D rotations and reper4D links.
Use 5x5 4D homogeneous transform matrices
This will convert all your transformations into a single matrix * vector operation, without repeating the trigonometry for each vertex, so it should be much faster. It even allows you to stack operations, and much more.
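As an illustrative sketch only (this generic homogeneous form and the applyMat5 helper are my own, not taken from the linked answer), applying such a matrix to a 4D point could look like:
// hypothetical: apply a 5x5 homogeneous transform (array of 5 rows of 5) to a 4D point
function applyMat5(M, p) {
  const v = [p.x, p.y, p.z, p.w, 1]; // homogeneous coordinate
  const out = M.map(row => row.reduce((s, m, j) => s + m * v[j], 0));
  return { x: out[0], y: out[1], z: out[2], w: out[3] }; // out[4] stays 1 for affine transforms
}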
You can port to GLSL
You can move much of the computation (transformations included) into shaders. I know GLSL supports only 4x4 matrices, but you can compute mat5 * vec5 using mat4 and vec4 too... you just put the missing elements into a separate variable and use a dot product for the extra column/row.
Use of VAO/VBO
This can improve speed a lot, as you would not need to pass any data to the GPU during rendering anymore... However, you would need to do the projection on the GPU side (which is doable, since unlike a cross-section, the projection is easy to implement).
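A rough sketch of the VBO idea in raw WebGL (p5.js manages its own buffers internally, so this is illustrative only; gl, positionLoc and the positions Float32Array are assumed to come from your own setup):
// upload the vertex data to the GPU once, outside the draw loop
var vbo = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);

// each frame: bind, point the attribute at the buffer, and draw
gl.bindBuffer(gl.ARRAY_BUFFER, vbo);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(positionLoc);
gl.drawArrays(gl.TRIANGLES, 0, positions.length / 3);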

How to render a sphere with triangle strips

I'm currently going through this tutorial on rendering shapes in WebGL (specifically a sphere in this case) and I understand the math behind the generation of each point on the sphere. In the tutorial though, the author defines one method to find all of the vertices and another to generate all of the squares that will comprise the sphere.
A couple of things are unclear from what is done in the tutorial. First, how exactly are the vertices generated by the parametric equation being connected to the squares (triangle strips) being generated? I've previously made a bare-bones program in plain JavaScript and HTML5 that does the same thing using only the generated vertices, so I'm not seeing how and why they have to be used in conjunction with the triangle strips. The other point of confusion is specifically the function that generates the squares:
var indexData = [];
for (var latNumber = 0; latNumber < latitudeBands; latNumber++) {
  for (var longNumber = 0; longNumber < longitudeBands; longNumber++) {
    var first = (latNumber * (longitudeBands + 1)) + longNumber;
    var second = first + longitudeBands + 1;
    indexData.push(first);
    indexData.push(second);
    indexData.push(first + 1);
    indexData.push(second);
    indexData.push(second + 1);
    indexData.push(first + 1);
  }
}
To generate the first point of each square (the point on the top-left corner), the following is done: var first = (latNumber * (longitudeBands + 1)) + longNumber;
I'm not sure why the number of the latitude line needs to be multiplied by the total number of longitude lines (plus 1 to fully wrap around) at each step.
The code for both functions is toward the bottom of the tutorial. A general explanation of the use of triangle strips in a case like this could also be helpful, thanks.
how exactly are the vertices generated by the parametric equation being connected to the squares (triangle strips) being generated?
A: Vertices are basically points, so this is basically generating points using math. Quote from the tutorial:
"for a sphere of radius r, with m latitude bands and n longitude bands, we can generate values for x, y, and z by taking a range of values for θ by splitting the range 0 to π up into m parts, and taking a range of values for φ by splitting the range 0 to 2π into n parts, and then just calculating:
x = r sinθ cosφ
y = r cosθ
z = r sinθ sinφ"
how and why they have to be used in conjunction with the triangle strips
A: they are not triangle STRIPS as in the primitive type gl.TRIANGLE_STRIP, but merely regular triangles defined with 3 points.
regarding the function that generates the squares
A: It is not generating squares per se, but using the points generated from the parametric equation to create triangles for the GPU to render. The code you showed in the OP basically divides each square into 2 triangles.
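To make the indexing concrete, here is a sketch in the spirit of the tutorial (not its exact code; radius, latitudeBands and longitudeBands are assumed). The vertex loop emits the (longitudeBands + 1) vertices of each latitude row consecutively, so first is a vertex on the current row and second is the vertex directly below it on the next row:
// vertex generation: one row of (longitudeBands + 1) vertices per latitude line,
// with the seam vertex duplicated so the sphere closes cleanly
var vertexData = [];
for (var lat = 0; lat <= latitudeBands; lat++) {
  var theta = lat * Math.PI / latitudeBands;        // 0 .. PI
  for (var lon = 0; lon <= longitudeBands; lon++) {
    var phi = lon * 2 * Math.PI / longitudeBands;   // 0 .. 2*PI
    vertexData.push(radius * Math.sin(theta) * Math.cos(phi));  // x
    vertexData.push(radius * Math.cos(theta));                  // y
    vertexData.push(radius * Math.sin(theta) * Math.sin(phi));  // z
  }
}
// vertex (lat, lon) lives at index lat * (longitudeBands + 1) + lon,
// which is exactly the "first" computed in the index loop;
// "second" = first + longitudeBands + 1 is the same longitude one row down.
So the (longitudeBands + 1) factor is simply the stride of one row of vertices: each row stores one extra vertex to close the seam, and skipping down to the next row means skipping longitudeBands + 1 indices.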

Packing irregular circles on the surface of a sphere

I'm using Three.js to create points on a sphere, similar to the periodic table of elements example.
My data set is circles of irregular size, and I wish to evenly distribute them around the surface of a sphere. After numerous hours searching the web, I realize that is much harder than it sounds.
Here are examples of this idea in action:
Vimeo
Picture
circlePack Java applet
Is there an algorithm that will allow me to do this? The packing ratio doesn't need to be super high and it'd ideally be something quick and easy to calculate in JavaScript for rendering in Three.js (Cartesian or Coordinate system). Efficiency is key here.
The circle radii can vary widely. Here's an example using the periodic table code (screenshot omitted).
Here's a method to try: an iterative search using a simulated repulsive force.
Algorithm
First initialize the data set by arranging the circles across the surface in any kind of algorithm. This is just for initialization, so it doesn't have to be great. The periodic table code will do nicely. Also, assign each circle a "mass" using its radius as its mass value.
Now begin the iteration to converge on a solution. For each pass through the main loop, do the following:
Compute repulsive forces for each circle. Model your repulsive force after the formula for gravitational force, with two adjustments: (a) objects should be pushed away from each other, not attracted toward each other, and (b) you'll need to tweak the "force constant" value to fit the scale of your model. Depending on your math ability you may be able to calculate a good constant value during planning; otherwise just experiment a little at first and you'll find a good value.
After computing the total forces on each circle (please look up the n-body problem if you're not sure how to do this), move each circle along the vector of its total calculated force, using the length of the vector as the distance to move. This is where you may find that you have to tweak the force constant value. At first you'll want movements with lengths that are less than 5% of the radius of the sphere.
The movements in step 2 will have pushed the circles off the surface of the sphere (because they are repelling each other). Now move each circle back to the surface of the sphere, in the direction toward the center of the sphere.
For each circle, calculate the distance from the circle's old position to its new position. The largest distance moved is the movement length for this iteration in the main loop.
Continue iterating through the main loop for a while. Over time the movement length should become smaller and smaller as the relative positions of the circles stabilize into an arrangement that meets your criteria. Exit the loop when the movement length drops below some very small value.
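A minimal sketch of one pass through that main loop (my own illustration of the steps above; it assumes each circle has a position, a THREE.Vector3 lying on a sphere of radius R, and a mass, and that the force constant k is something you tune):
// one iteration of the repulsion pass; k and R are assumed to be defined elsewhere
function iterate(circles, k, R) {
  var maxMove = 0;
  var forces = circles.map(function () { return new THREE.Vector3(); });
  for (var i = 0; i < circles.length; i++) {
    for (var j = 0; j < circles.length; j++) {
      if (i === j) continue;
      var away = new THREE.Vector3().subVectors(circles[i].position, circles[j].position);
      var r = Math.max(away.length(), 1e-6);
      var f = (k * circles[i].mass * circles[j].mass) / (r * r); // repulsive "gravity"
      forces[i].add(away.normalize().multiplyScalar(f));
    }
  }
  for (var n = 0; n < circles.length; n++) {
    var oldPos = circles[n].position.clone();
    circles[n].position.add(forces[n]);               // step 2: move along the total force
    circles[n].position.normalize().multiplyScalar(R); // step 3: project back onto the sphere
    maxMove = Math.max(maxMove, circles[n].position.distanceTo(oldPos)); // step 4
  }
  return maxMove; // step 5: stop iterating when this becomes very small
}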
Tweaking
You may find that you have to tweak the force calculation to get the algorithm to converge on a solution. How you tweak depends on the type of result you're looking for. Start by tweaking the force constant. If that doesn't work, you may have to change the mass values up or down. Or maybe change the exponent of the radius in the force calculation. For example, instead of this:
f = ( k * m[i] * m[j] ) / ( r * r );
You might try this:
f = ( k * m[i] * m[j] ) / Math.pow( r, p );
Then you can experiment with different values of p.
You can also experiment with different algorithms for the initial distribution.
The amount of trial-and-error will depend on your design goals.
Here is something you can build on, perhaps. It will randomly distribute your spheres over the surface of a sphere. Later we will iterate on this starting point to get an even distribution.
// Random point on a sphere of radius R
var sphereCenters = []
var numSpheres = 100;
var R = 1.0;
for (var i = 0; i < numSpheres; i++) {
  // use components in [-1, 1) so the whole sphere is covered, not just one octant
  var vec = new THREE.Vector3(Math.random() * 2 - 1, Math.random() * 2 - 1, Math.random() * 2 - 1).normalize();
  var sphereCenter = new THREE.Vector3().copy(vec).multiplyScalar(R);
  sphereCenter.radius = Math.random() * 5; // Random sphere size. Plug in your sizes here.
  sphereCenters.push(sphereCenter);
  // Create a Three.js sphere at sphereCenter
  ...
}
Then run the below code a few times to pack the spheres efficiently:
for (var i = 0; i < sphereCenters.length; i++) {
  for (var j = 0; j < sphereCenters.length; j++) {
    if (i === j)
      continue;
    // Calculate the distance between sphereCenters[i] and sphereCenters[j]
    var dist = new THREE.Vector3().copy(sphereCenters[i]).sub(sphereCenters[j]);
    // The two circles overlap if their centers are closer than the sum of their radii
    var minDist = sphereCenters[i].radius + sphereCenters[j].radius;
    if (dist.length() < minDist) {
      // Move the center of this sphere to compensate.
      // How far do we have to move?
      var mDist = minDist - dist.length();
      // Perturb the sphere in the direction of dist, by magnitude mDist
      var mVec = new THREE.Vector3().copy(dist).normalize();
      mVec.multiplyScalar(mDist);
      // Offset the actual sphere, then project back onto the sphere surface of radius R
      sphereCenters[i].add(mVec).normalize().multiplyScalar(R);
    }
  }
}
Running the second section a number of times will "converge" on the solution you are looking for. You have to choose how many times it should be run in order to find the best trade-off between speed, and accuracy.
You can use the same code as in the periodic table of elements.
The rectangles there do not touch, so you can get the same effect with circles, virtually by using the same code.
Here is the code they have:
var vector = new THREE.Vector3();
for ( var i = 0, l = objects.length; i < l; i ++ ) {
  var phi = Math.acos( -1 + ( 2 * i ) / l );
  var theta = Math.sqrt( l * Math.PI ) * phi;
  var object = new THREE.Object3D();
  object.position.x = 800 * Math.cos( theta ) * Math.sin( phi );
  object.position.y = 800 * Math.sin( theta ) * Math.sin( phi );
  object.position.z = 800 * Math.cos( phi );
  vector.copy( object.position ).multiplyScalar( 2 );
  object.lookAt( vector );
  targets.sphere.push( object );
}

Efficient way to light up/cast shadows on a voxel terrain

I'm using a BufferGeometry and some predefined data to create an object similar to a Minecraft chunk (made of voxels and containing cave-like structures). I'm having a problem lighting up this object efficently.
At the moment I'm using a MeshLambertMaterial and a DirectionalLight which enables me to cast shadows on voxels not in view of the light, however this isn't efficient to use for a large terrain because it requires a very large shadow map and will often cause glitchy shadow artifacts as a result.
Here's the code I'm using to add the indices and vertices to the BufferGeometry:
// Add indices to BufferGeometry
for ( var i = 0; i < section.indices.length; i ++ ) {
  var j = i * 3;
  var q = section.indices[i];
  indices[ j ] = q[0] % chunkSize;
  indices[ j + 1 ] = q[1] % chunkSize;
  indices[ j + 2 ] = q[2] % chunkSize;
}
// Add vertices to BufferGeometry
for ( var i = 0; i < section.vertices.length; i ++ ) {
  var q = section.vertices[i];
  // There's 1 color for every 4 vertices (square)
  var hexColor = section.colors[i / 4];
  addVertex( i, q[0], q[1], q[2], hexColor );
}
And my 'chunk' example: http://jsfiddle.net/9sSyz/4/
If I were to remove the shadows from my example, all voxels on the correct side would be lit up even if another voxel obstructed the light. I just need another scalable way to give the illusion of a shadow. Perhaps by changing vertex colors if not in view of the light? It doesn't have to be as accurate as the current shadow implementation so changing the vertex colors (to give a blocky vertex-bound shadow) would be enough.
Would appreciate any help or advice. Thanks.
Generally, if you have large terrains, the idea is to split the scene into several cascades, each with its own shadow map. The technique is called CSM - cascaded shadow maps. The problem is, I haven't heard of a WebGL example that implements this technique. CSMs are used for dynamic scenes, but I'm not sure how easy it would be to implement with Three.js.
Second option is adding ambient occlusion, as suggested by WestLagnley, but it's just an occlusion, not a shadow. Results are very different.
Third option, if your scene is mostly static - baked shadows. That is, preprocessed textures that you simply apply to the terrain etc. To support dynamic objects, just render their shadow maps and apply those to some geometry that mimics the shadowed area (perhaps a plane that hovers slightly above the ground and receives the shadow, etc.).
Any combination of the techniques mentioned is also an option.
P.S. Could you also supply a screenshot, fiddles fail to load.
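If you do go down the vertex-color route mentioned in the question, a very rough sketch of the idea (my own illustration, not from the answers; it assumes a hypothetical isSolid(x, y, z) query into your voxel data, a color attribute on the BufferGeometry, and a material with vertexColors enabled):
// darken a face's 4 vertices when the neighbouring voxel toward the light is solid
var colors = geometry.getAttribute('color');  // RGB triples, one per vertex
var lightDir = { x: 0, y: 1, z: 0 };          // assumed light direction (straight down)
function shadeFace(faceIndex, vx, vy, vz) {
  var lit = !isSolid(vx + lightDir.x, vy + lightDir.y, vz + lightDir.z);
  var s = lit ? 1.0 : 0.5;                    // simple two-level "blocky" shadow
  for (var k = 0; k < 4; k++) {
    colors.setXYZ(faceIndex * 4 + k, s, s, s); // 4 vertices per face, as in the question's layout
  }
  colors.needsUpdate = true;
}
You would multiply values like these into whatever per-face colors you already use rather than overwrite them.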

Collision handling in a JavaScript/canvas particle system

I have a basic particle system in JavaScript (utilising canvas for rendering), and I'm trying to find the best way to handle collisions between particles. The particle system can handle about 70,000 particles at a pretty decent FPS.
It consists of an array that contains every Particle object.
Each Particle object contains 3 Vector objects (one for displacement, velocity, and acceleration) which contain an x and a y variable.
Before each frame, acceleration vectors are applied to velocity vectors, and velocity vectors are applied to displacement vectors for every single Particle object.
The renderer then iterates through each Particle and then draws a 1x1 pixel square at the location of every displacement vector.
The particle system also has 'magnetic' fields, which can cause the particles to accelerate towards/away from a given point.
I tried applying a 'magnetic' field to each particle, but the calculations I use to get the updated acceleration vectors for each particle are too inefficient, and this method reduced the FPS considerably.
Below is the code I use to recalculate Particle acceleration vectors, with respect to nearby magnetic fields (This function is called before every frame):
Particle.prototype.submitToFields = function (fields) {
  // our starting acceleration this frame
  var totalAccelerationX = 0;
  var totalAccelerationY = 0;
  // for each passed field
  for (var i = 0; i < fields.length; i++) {
    var field = fields[i];
    // find the distance between the particle and the field
    var vectorX = field.point.x - this.point.x;
    var vectorY = field.point.y - this.point.y;
    // calculate the force via MAGIC and HIGH SCHOOL SCIENCE!
    var force = field.mass / Math.pow(vectorX*vectorX + vectorY*vectorY, 1.5);
    // add to the total acceleration the force adjusted by distance
    totalAccelerationX += vectorX * force;
    totalAccelerationY += vectorY * force;
  }
  // update our particle's acceleration
  this.acceleration = new Vector(totalAccelerationX, totalAccelerationY);
}
It's obvious why the above method reduced the performance drastically - the number of calculations grows with the number of particles multiplied by the number of fields, so it climbs rapidly as either increases.
Is there another method of particle collision detection that will have good performance with thousands of particles? Will these methods work with my current object structure?
Don't create a new Vector here. It means that you're creating 70,000 new Vectors each frame. Just change the vector's values:
this.acceleration.x = totalAccelerationX; // or : this.acceleration[0] = totalAccelerationX;
this.acceleration.y = totalAccelerationY; // or : this.acceleration[1] = totalAccelerationY;
If that doesn't help enough, you'll have to use a Web Worker.
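A rough sketch of that split (my own illustration, not from the answer; physics-worker.js is a hypothetical file, and it assumes the particle positions can be packed into a Float32Array so the buffer can be transferred cheaply):
// main thread: send positions and fields to the worker, read accelerations back
var worker = new Worker('physics-worker.js');
worker.onmessage = function (e) {
  var acc = new Float32Array(e.data);          // interleaved ax, ay per particle
  for (var i = 0; i < particles.length; i++) {
    particles[i].acceleration.x = acc[2 * i];
    particles[i].acceleration.y = acc[2 * i + 1];
  }
};
function stepPhysics(positions, fields) {
  // positions is a Float32Array of interleaved x, y; transfer it to avoid a copy
  worker.postMessage({ positions: positions, fields: fields }, [positions.buffer]);
}
The worker would run the same submitToFields math over the typed array and post back a Float32Array of accelerations.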
