apply heightmap to SphereGeometry in three.js - javascript

[EDIT: see this jsfiddle for a live example plus accompanying code]
Using three.js I'm trying to render some celestial bodies with prominent surface features.
Unfortunately the three.js examples don't show how to apply a spherical heightmap, but they do include one where a heightmap is applied to a plane.
I took that example and modified it to use a SphereGeometry() instead of a PlaneGeometry().
Obviously the geometry of a sphere differs critically from that of a plane, and when the results are rendered the sphere shows up as a flat piece of texture.
The heightmap code for planes:
var plane = new THREE.PlaneGeometry( 2000, 2000, quality - 1, quality - 1 );
plane.applyMatrix( new THREE.Matrix4().makeRotationX( - Math.PI / 2 ) );
for ( var i = 0, l = plane.vertices.length; i < l; i ++ ) {
    var x = i % quality, y = ~~ ( i / quality );
    // sample the 1024px-wide heightmap and displace the vertex vertically
    plane.vertices[ i ].y = data[ ( x * step ) + ( y * step ) * 1024 ] * 2 - 128;
}
Now I'm guessing the solution is relatively simple: instead of mapping to the plane's 2D coordinates in the loop, it has to find the sphere's surface coordinate in 3D space. Unfortunately I'm not really a pro at 3D maths, so I'm pretty much stuck at this point.
An example of the heightmap applied to the sphere, with all the code, is put together in this jsfiddle. An updated jsfiddle shows an altered sphere, but with random data instead of the heightmap data.
I know for a fact you can distort the sphere's 3D points to generate these surface details, but I'd like to do so using a heightmap. This jsfiddle is as far as I got: it randomly alters points to give the sphere a rocky appearance, but that obviously doesn't look very natural.
EDIT: The following is the logic I wish to implement, which maps heightmap data to a sphere.
In order to map the data to a sphere, we will need to map coordinates from a simple spherical coordinate system (longitude φ, latitude θ, radius r) to Cartesian coordinates (x, y, z). Just as in normal height-mapping the data value at (x, y) is mapped to z, we will map the value at (φ, θ) to r. This transformation comes down to:
x = r × cos φ × sin θ
y = r × sin φ × sin θ
z = r × cos θ
r = Rdefault + Rscale × d(φ, θ)
The parameters Rdefault and Rscale can be used to control the size of the sphere and the height map on it.
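As a sanity check, the transformation above can be sketched in plain JavaScript (no three.js required). Here `heightAt` is a stand-in for however you sample d(φ, θ) from your heightmap:

```javascript
// Spherical (phi, theta, r) -> Cartesian (x, y, z), with the radius
// offset by the sampled height, exactly as in the formulas above.
// `heightAt(phi, theta)` is a placeholder for your heightmap lookup.
function sphericalToCartesian(phi, theta, rDefault, rScale, heightAt) {
  const r = rDefault + rScale * heightAt(phi, theta);
  return {
    x: r * Math.cos(phi) * Math.sin(theta),
    y: r * Math.sin(phi) * Math.sin(theta),
    z: r * Math.cos(theta),
  };
}
```

With a zero height sample, φ = 0 and θ = π/2 land on the equator at (Rdefault, 0, 0), which is an easy way to verify the axes are oriented the way you expect.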

Use a Vector3 to move each vertex:
var vector = new THREE.Vector3();
vector.set(geometry.vertices[i].x, geometry.vertices[i].y, geometry.vertices[i].z);
vector.setLength(h); // h = desired radius at this vertex (base radius + height sample)
geometry.vertices[i].x = vector.x;
geometry.vertices[i].y = vector.y;
geometry.vertices[i].z = vector.z;
Example: http://jsfiddle.net/damienlabat/b3or4up3/
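For clarity, here is the idea behind setLength in plain maths (a sketch of the concept, not three.js's actual implementation): keep the vector's direction, replace its magnitude. That is exactly what "push this vertex out to radius h" means.

```javascript
// Plain-JS equivalent of THREE.Vector3#setLength: scale the vector so its
// direction is unchanged but its length becomes h.
function setLength(v, h) {
  const len = Math.hypot(v.x, v.y, v.z);
  if (len === 0) return { x: 0, y: 0, z: 0 }; // degenerate: no direction to keep
  const s = h / len;
  return { x: v.x * s, y: v.y * s, z: v.z * s };
}
```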

If you want to apply a 2D map onto the 3D sphere surface, you will need to use UVs of the sphere. Fortunately, UVs come with THREE.SphereGeometry by default.
The UVs are stored per-face though, so you will need to iterate through the faces array.
For each face in the geometry:
1. Read the corresponding UV value in the faceVertexUvs array for each associated vertex.
2. Read the height map value at that UV location.
3. Shift the vertex along the vertex normal by that value. The face gives the vertex index, which you can use to index into the vertices array to get/set the vertex position.
After this is all done, set verticesNeedUpdate to true to update the vertices.
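The loop above can be sketched on plain arrays (positions and normals laid out as [x, y, z, ...], one UV pair per vertex); `sampleHeight` and `scale` are stand-ins for your heightmap lookup and displacement strength:

```javascript
// Displace each vertex along its normal by the heightmap value sampled
// at that vertex's UV coordinate. `sampleHeight(u, v)` should return a
// value in [0, 1]; `scale` is how far a full-white pixel pushes a vertex.
function displaceAlongNormals(positions, normals, uvs, sampleHeight, scale) {
  for (let i = 0, v = 0; i < positions.length; i += 3, v += 2) {
    const h = sampleHeight(uvs[v], uvs[v + 1]) * scale;
    positions[i]     += normals[i]     * h;
    positions[i + 1] += normals[i + 1] * h;
    positions[i + 2] += normals[i + 2] * h;
  }
}
```

After mutating the positions, remember the verticesNeedUpdate step described above (or, with BufferGeometry, set the position attribute's needsUpdate flag).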

Related

How to project only part of a texture to a face threejs?

I want to make portals with three.js by drawing an ellipse and then texture-mapping a WebGLRenderTarget to its face. I have that function sort of working, but it stretches the large rectangular buffer from the render target over the ellipse. What I want is to project the texture at its original dimensions onto the ellipse and simply cut out anything that doesn't hit the ellipse, like so:
Before Projection:
After projection:
How can this be done with threejs?
I've looked into texture coordinates, but don't understand how to use them, and even saw a projection light PR in threejs that might work?
Edit: I also watched a Sebastian Lague video on portals and saw he does this with “screen space coordinates”. Any advice on using those?
Thanks for your help!
Made a codepen available here:
https://codepen.io/cdeep/pen/JjyjOqY
UV mapping lets us specify which parts of the texture correspond to which vertices of the geometry. More details here: https://www.spiria.com/en/blog/desktop-software/understanding-uv-mapping-and-textures/
You could loop through the vertices and set the corresponding UV value.
const uvPositions = [];
const vertices = ellipseGeometry.attributes.position.array;
for(let i = 0; i < numPoints; i++) {
    const [x, y] = [vertices[3*i], vertices[3*i + 1]];
    // Map x and y into [0, 1], correcting u for the image's aspect ratio
    uvPositions.push(0.5 + x * imageHeight / ((2 * yRadius) * imageWidth));
    uvPositions.push(0.5 + y / (2 * yRadius));
}
ellipseGeometry.setAttribute("uv", new THREE.Float32BufferAttribute(uvPositions, 2));
UV coordinates increase from (0, 0) to (1, 1) from bottom left to top right.
The above code works because the ellipse is on the x-y plane. Or else, you'll need to get the x,y values in the plane of the ellipse.
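The same formula as a standalone function, for experimenting outside three.js (a sketch; it assumes the ellipse is centered at the origin in the x-y plane, as above):

```javascript
// Map an ellipse-plane point (x, y) to a UV pair. The imageHeight/imageWidth
// factor keeps the texture from stretching horizontally when the image
// is not square.
function ellipseUV(x, y, yRadius, imageWidth, imageHeight) {
  return {
    u: 0.5 + (x * imageHeight) / (2 * yRadius * imageWidth),
    v: 0.5 + y / (2 * yRadius),
  };
}
```

The ellipse's center maps to the middle of the texture, (0.5, 0.5), and the top of the ellipse (y = yRadius) maps to v = 1.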
More info on texture mapping in three.js here:
https://discoverthreejs.com/book/first-steps/textures-intro/
Edit: Do note that the demo doesn't really look like a portal. For that, you'll need to move the texture based on the camera view, which isn't that simple.

Three.js: Bounding sphere of a scaled object

I have a set of 3D shapes (pyramid, cube, octahedron, prism, etc.) and I need to build a circumscribed sphere around each of them. It is easy to do using geometry.boundingSphere, as it has the radius of the circumscribed sphere. But if I scale an object, the bounding sphere is not updated. Is it possible to update the bounding sphere relative to the scale?
Using Three.js 0.129.
const { position } = entity.object3D;
const mesh = entity.getObject3D('mesh') as THREE.Mesh;
mesh.geometry.computeBoundingSphere();
const { radius } = mesh.geometry.boundingSphere;
createSphere(radius, position);
The geometry.boundingSphere property represents the geometry. You could technically have two meshes with different scales share the same geometry, so you would want to maintain the geometry's original bounding sphere, and then compute a new one for each mesh, individually.
One problem with scaling the bounding sphere is that you can scale your mesh in x, y, and z separately, and even invert vertex position values given negative scaling values. Unequal scale values would make it less of a sphere and more of an ellipsoid, which would not help you in the math.
What you can do is recompute a bounding sphere for your mesh, given its updated world transformation matrix. I suggest using the world matrix because other ancestors of your mesh could also influence scale in unpredictable ways.
// Given:
// (THREE.Mesh) yourMesh
// Copy the geometry
let geoClone = yourMesh.geometry.clone() // really bad for memory, but it's simple/easy
// Apply the world transformation to the copy (updates vertex positions)
geoClone.applyMatrix4( yourMesh.matrixWorld )
// Convert the vertices into Vector3s (also bad for memory)
let vertices = []
let pos = geoClone.attributes.position.array
for( let i = 0, l = pos.length; i < l; i += 3 ){
vertices.push( new THREE.Vector3( pos[i], pos[i+1], pos[i+2] ) )
}
// Create and set your mesh's bounding sphere
yourMesh.userData.boundingSphereWorld = new THREE.Sphere()
yourMesh.userData.boundingSphereWorld.setFromPoints( vertices )
This will create a world-aligned bounding sphere for your mesh. If you want one based on local transformations, you can follow the same idea using the local yourMesh.matrix matrix instead. Just know that your sphere's center will then be based on your mesh's local transformation/rotation, not just its scale.
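The same idea can be sketched without three.js at all: transform each vertex by the column-major 4x4 world matrix (the layout THREE.Matrix4 uses), then fit a simple sphere from the centroid and the farthest point. This is a centroid-based fit; THREE.Sphere#setFromPoints centers on the bounding box instead, but both produce a valid enclosing sphere.

```javascript
// positions: flat [x, y, z, ...] array; m: column-major 4x4 matrix
// (16 numbers, as in THREE.Matrix4#elements).
function boundingSphereWorld(positions, m) {
  const pts = [];
  for (let i = 0; i < positions.length; i += 3) {
    const [x, y, z] = [positions[i], positions[i + 1], positions[i + 2]];
    // Apply the world transform to each vertex
    pts.push([
      m[0] * x + m[4] * y + m[8]  * z + m[12],
      m[1] * x + m[5] * y + m[9]  * z + m[13],
      m[2] * x + m[6] * y + m[10] * z + m[14],
    ]);
  }
  // Center on the centroid; radius reaches the farthest transformed vertex
  const c = pts
    .reduce((acc, p) => [acc[0] + p[0], acc[1] + p[1], acc[2] + p[2]], [0, 0, 0])
    .map(sum => sum / pts.length);
  const radius = Math.max(
    ...pts.map(p => Math.hypot(p[0] - c[0], p[1] - c[1], p[2] - c[2]))
  );
  return { center: c, radius };
}
```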

How to render raycasted wall textures on HTML-Canvas

I am trying to build a raycasting engine. I have successfully rendered a scene column by column using ctx.fillRect(), as follows.
canvas-raycasting.png
demo
code
The code I wrote for the above render:
var scene = [];  // distance of the wall from the player for a particular ray
var points = []; // points at which each ray hits the wall
/*
I have a Raycaster class which does all the math required for ray casting
and returns an object containing two arrays:
1 > scene  : array of numbers representing the distance of the wall from the player.
2 > points : objects of type { x, y } representing the point where each ray hits the wall.
*/
var data = raycaster.cast(walls);
/*
raycaster : instance of the Raycaster class,
walls     : array of boundaries; contains objects of type { x1, y1, x2, y2 } where
            (x1, y1) is the start point and (x2, y2) the end point.
*/
scene = data.scene;
var scene_width = 800;
var scene_height = 400;
var w = scene_width / scene.length;
for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var s = map(c, 0, 500, 255, 0); // how dark/bright the wall should be
    var h = map(c, 0, 500, scene_height, 10); // relative height of wall (farther = smaller)
    ctx.beginPath();
    ctx.fillStyle = 'rgb(' + s + ',' + s + ',' + s + ')';
    ctx.fillRect(i * w, 200 - h * 0.5, w + 1, h);
    ctx.closePath();
}
Now I am trying to build a web-based FPS (first-person shooter) and am stuck on rendering wall textures on the canvas.
The ctx.drawImage() method takes arguments as follows:
void ctx.drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight);
ctx.drawImage_arguments
But ctx.drawImage() draws the image as a flat rectangle, with no 3D effect like Wolfenstein 3D.
I have no idea how to do it.
Should I use ctx.transform()? If yes, how? If no, what should I do?
I am looking for the maths used to produce a pseudo-3D effect using 2D raycasting.
Some pseudo-3D games are Wolfenstein 3D and Doom.
I am trying to build something like this.
THANK YOU :)
The way that you're mapping (or not, as the case may be) texture coordinates isn't working as intended.
I am looking for the Maths used to produce pseudo 3d effect using 2D raycasting
The Wikipedia Entry for Texture Mapping has a nice section with the specific maths of how Doom performs texture mapping. From the entry:
The Doom engine restricted the world to vertical walls and horizontal floors/ceilings, with a camera that could only rotate about the vertical axis. This meant the walls would be a constant depth coordinate along a vertical line and the floors/ceilings would have a constant depth along a horizontal line. A fast affine mapping could be used along those lines because it would be correct.
A "fast affine mapping" is just a simple 2D interpolation of texture coordinates, and would be an appropriate operation for what you're attempting. A limitation of the Doom engine was also that
Doom renders vertical and horizontal spans with affine texture mapping, and is therefore unable to draw ramped floors or slanted walls.
It doesn't appear that your logic contains any code for transforming coordinates between coordinate spaces. At the very least you'll need to transform between a given raycast coordinate and the texture's coordinate space. This typically involves matrix math and is very common; it can also be referred to as projection, as in projecting points from one space/surface to another. With affine transformations you can avoid matrices in favor of linear interpolation.
The coordinate equation for this adapted to your variables (see above) might look like the following:
u = (1 - a) * wallStart + a * wallEnd, where 0 <= a <= 1
Alternatively, you could use a Weak Perspective projection, since you have much of the data already computed. From Wikipedia again:
To determine which screen x-coordinate corresponds to a point at (A_x, A_z), multiply the point coordinates by:
B_x = A_x * B_z / A_z
where B_x is the screen x-coordinate, A_x is the model x-coordinate, B_z is the focal length (the axial distance from the camera center to the image plane), and A_z is the subject distance.
Because the camera is in 3D, the same works for the screen y-coordinate, substituting y for x in the above equation.
In your case A_x is the location of the wall in world space, B_z is the focal length (which will be 1), and A_z is the distance you calculated with the ray cast. The result is the x or y coordinate translated into view space.
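In code, the weak-perspective step from the quote above is a one-liner; with B_z = 1 it reduces to "divide by depth", which is all a Wolfenstein-style renderer needs:

```javascript
// Weak perspective: screen coordinate = model coordinate * focal length / depth.
// Works identically for x and y.
function projectX(modelX, depth, focalLength = 1) {
  return (modelX * focalLength) / depth;
}
```

A wall edge twice as far away lands half as far from the screen center, which is exactly the "farther = smaller" behavior your height map() call approximates.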
The main draw routine for Wolfenstein 3D documents the techniques used to raycast and transform coordinates for rendering the game. The code is quite readable even if you're not familiar with C/ASM, and is a great way to learn more about your topics of interest. For more reading, I would suggest searching for things like "matrix transformation of coordinates for texture mapping", or searching the GameDev SE site for similar.
A specific area of that file to zero in on is the section starting at line 267:
> ========================
> =
> = TransformTile
> =
> = Takes paramaters:
> = tx,ty : tile the object is centered in
> =
> = globals:
> = viewx,viewy : point of view
> = viewcos,viewsin : sin/cos of viewangle
> = scale : conversion from global value to screen value
> =
> = sets:
> = screenx,transx,transy,screenheight: projected edge location and size
> =
> = Returns true if the tile is withing getting distance
> =
A great book on the maths is this one - I would highly recommend it for anyone seeking to create or improve these skills.
Update:
Essentially, you'll be mapping pixels (points) from the image onto points on your rectangular wall-tile, as reported by the ray trace.
Pseudo(ish)-code:
var image = getImage(someImage); // get the image however you want. make sure it finishes loading before drawing
var iWidth = image.width, iHeight = image.height;
var sX = 0, sY = 0; // top-left corner of image. Adjust when using e.g., sprite sheets
for (var i = 0; i < scene.length; ++i) {
    var c = scene[i] == Infinity ? 500 : scene[i];
    var s = map(c, 0, 500, 255, 0); // how dark/bright the wall should be
    var h = map(c, 0, 500, scene_height, 10); // relative height of wall (farther = smaller)
    var wX = i * w, wY = 200 - h * 0.5;
    var wWidth = w + 1, wHeight = h;
    // ... render the rectangle shape
    /* we are using the same image, but we scale it to the size of the
       rectangle and place it at the same location as the wall. */
    var u, v, uW, vH; // texture x- and y-values and sizes. compute these.
    ctx.drawImage(image, sX, sY, iWidth, iHeight, u, v, uW, vH);
}
Since I'm not familiar with the code performing your ray trace, its coordinate system, etc., you may need to further adjust the values of wX, wY, wWidth, and wHeight (e.g., translate points from center to top-left corner).
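The hardest unknowns in the pseudo-code are sX and sWidth: which strip of the texture each screen column samples. In a Wolfenstein-style renderer, sWidth is 1 (one texture column per screen column) and sX comes from where the ray hit along the wall tile. A hedged sketch, where `tileSize`, `vertical`, and the hit coordinates are assumptions about what your Raycaster's `points` array provides:

```javascript
// Pick the texture column for one screen column: take the hit point's
// fractional position across its wall tile and scale by the texture width.
// `vertical` says whether the wall runs along y (sample hitY) or x (sample hitX).
function textureColumn(hitX, hitY, vertical, tileSize, texWidth) {
  const along = vertical ? hitY : hitX;
  const frac = (along % tileSize) / tileSize; // 0..1 across the tile
  return Math.floor(frac * texWidth);
}
```

Then drawImage(image, textureColumn(...), 0, 1, image.height, wX, wY, wWidth, wHeight) stretches that single texture column to the wall slice's height, which is what produces the classic perspective-textured look.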

Packing irregular circles on the surface of a sphere

I'm using Three.js to create points on a sphere, similar to the periodic table of elements example.
My data set is circles of irregular size, and I wish to distribute them evenly across the surface of a sphere. After numerous hours searching the web, I realize that this is much harder than it sounds.
Here are examples of this idea in action:
Vimeo
Picture
circlePack Java applet
Is there an algorithm that will allow me to do this? The packing ratio doesn't need to be super high, and ideally it'd be something quick and easy to calculate in JavaScript for rendering in Three.js (Cartesian or spherical coordinates). Efficiency is key here.
The circle radii can vary widely. Here's an example using the periodic table code:
Here's a method to try: an iterative search using a simulated repulsive force.
Algorithm
First initialize the data set by arranging the circles across the surface in any kind of algorithm. This is just for initialization, so it doesn't have to be great. The periodic table code will do nicely. Also, assign each circle a "mass" using its radius as its mass value.
Now begin the iteration to converge on a solution. For each pass through the main loop, do the following:
Compute repulsive forces for each circle. Model your repulsive force after the formula for gravitational force, with two adjustments: (a) objects should be pushed away from each other, not attracted toward each other, and (b) you'll need to tweak the "force constant" value to fit the scale of your model. Depending on your math ability, you may be able to calculate a good constant value during planning; otherwise, just experiment a little at first and you'll find a good value.
After computing the total forces on each circle (please look up the n-body problem if you're not sure how to do this), move each circle along the vector of its total calculated force, using the length of the vector as the distance to move. This is where you may find that you have to tweak the force constant value. At first you'll want movements with lengths that are less than 5% of the radius of the sphere.
The movements in step 2 will have pushed the circles off the surface of the sphere (because they are repelling each other). Now move each circle back to the surface of the sphere, in the direction toward the center of the sphere.
For each circle, calculate the distance from the circle's old position to its new position. The largest distance moved is the movement length for this iteration in the main loop.
Continue iterating through the main loop for a while. Over time the movement length should become smaller and smaller as the relative positions of the circles stabilize into an arrangement that meets your criteria. Exit the loop when the movement length drops below some very small value.
Tweaking
You may find that you have to tweak the force calculation to get the algorithm to converge on a solution. How you tweak depends on the type of result you're looking for. Start by tweaking the force constant. If that doesn't work, you may have to change the mass values up or down. Or maybe change the exponent of the radius in the force calculation. For example, instead of this:
f = ( k * m[i] * m[j] ) / ( r * r );
You might try this:
f = ( k * m[i] * m[j] ) / pow( r, p );
Then you can experiment with different values of p.
You can also experiment with different algorithms for the initial distribution.
The amount of trial-and-error will depend on your design goals.
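One pass of the main loop described above might look like the following sketch, using plain objects {x, y, z, mass} and `k` as the force constant you tune:

```javascript
// One iteration: compute pairwise repulsive forces (inverse-square, like
// gravity but pushing apart), move each point along its total force, then
// reproject onto the sphere of radius R. Returns the largest movement,
// which you watch to decide when to stop iterating.
function relaxOnSphere(points, R, k) {
  const forces = points.map(() => ({ x: 0, y: 0, z: 0 }));
  for (let i = 0; i < points.length; i++) {
    for (let j = 0; j < points.length; j++) {
      if (i === j) continue;
      const dx = points[i].x - points[j].x;
      const dy = points[i].y - points[j].y;
      const dz = points[i].z - points[j].z;
      const r2 = dx * dx + dy * dy + dz * dz;
      const r = Math.sqrt(r2);
      const f = (k * points[i].mass * points[j].mass) / r2; // repulsive magnitude
      forces[i].x += (dx / r) * f;
      forces[i].y += (dy / r) * f;
      forces[i].z += (dz / r) * f;
    }
  }
  let maxMove = 0;
  points.forEach((p, i) => {
    const nx = p.x + forces[i].x, ny = p.y + forces[i].y, nz = p.z + forces[i].z;
    const s = R / Math.hypot(nx, ny, nz); // pull back onto the sphere surface
    maxMove = Math.max(maxMove, Math.hypot(s * nx - p.x, s * ny - p.y, s * nz - p.z));
    p.x = s * nx; p.y = s * ny; p.z = s * nz;
  });
  return maxMove;
}
```

Call it in a loop until the returned movement drops below your threshold; the n-body cost is O(n²) per pass, which is fine for a few hundred circles.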
Here is something you can build on, perhaps. It will randomly distribute your spheres across a sphere. Later we will iterate over this starting point to get an even distribution.
// Random point on sphere of radius R
var R = 1.0;
var sphereCenters = [];
var numSpheres = 100;
for (var i = 0; i < numSpheres; i++) {
    // Spread the components over [-1, 1]; using Math.random() directly
    // would put every point in the positive octant of the sphere.
    var vec = new THREE.Vector3(
        Math.random() * 2 - 1,
        Math.random() * 2 - 1,
        Math.random() * 2 - 1
    ).normalize();
    var sphereCenter = new THREE.Vector3().copy(vec).multiplyScalar(R);
    sphereCenter.radius = Math.random() * 5; // Random sphere size. Plug in your sizes here.
    sphereCenters.push(sphereCenter);
    // Create a Three.js sphere at sphereCenter
    ...
}
Then run the below code a few times to pack the spheres efficiently:
for (var i = 0; i < sphereCenters.length; i++) {
    for (var j = 0; j < sphereCenters.length; j++) {
        if (i === j)
            continue;
        // Calculate the distance between sphereCenters[i] and sphereCenters[j]
        var dist = new THREE.Vector3().copy(sphereCenters[i]).sub(sphereCenters[j]);
        // The two spheres overlap if their centers are closer than the sum of the radii
        var minDist = sphereCenters[i].radius + sphereCenters[j].radius;
        if (dist.length() < minDist) {
            // Move the center of this sphere to compensate.
            // How far do we have to move?
            var mDist = minDist - dist.length();
            // Perturb the sphere in the direction of dist, by magnitude mDist
            var mVec = new THREE.Vector3().copy(dist).normalize();
            mVec.multiplyScalar(mDist);
            // Offset the actual sphere and reproject it onto the surface
            sphereCenters[i].add(mVec).normalize().multiplyScalar(R);
        }
    }
}
Running the second section a number of times will converge on the solution you are looking for. You have to choose how many times to run it to find the best trade-off between speed and accuracy.
You can use the same code as in the periodic table of elements.
The rectangles there do not touch, so you can get the same effect with circles using virtually the same code.
Here is the code they have:
var vector = new THREE.Vector3();
for ( var i = 0, l = objects.length; i < l; i ++ ) {
    var phi = Math.acos( -1 + ( 2 * i ) / l );
    var theta = Math.sqrt( l * Math.PI ) * phi;
    var object = new THREE.Object3D();
    object.position.x = 800 * Math.cos( theta ) * Math.sin( phi );
    object.position.y = 800 * Math.sin( theta ) * Math.sin( phi );
    object.position.z = 800 * Math.cos( phi );
    vector.copy( object.position ).multiplyScalar( 2 );
    object.lookAt( vector );
    targets.sphere.push( object );
}
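The distribution math on its own, without the Object3D bookkeeping, is easy to verify: every generated point lands exactly on the radius-800 sphere, spiraling from pole to pole.

```javascript
// The periodic-table spiral distribution in plain JavaScript: l points on
// a sphere of the given radius, evenly spaced along a spherical spiral.
function spherePositions(l, radius = 800) {
  const out = [];
  for (let i = 0; i < l; i++) {
    const phi = Math.acos(-1 + (2 * i) / l);        // polar angle, pole to pole
    const theta = Math.sqrt(l * Math.PI) * phi;     // winds around as phi advances
    out.push({
      x: radius * Math.cos(theta) * Math.sin(phi),
      y: radius * Math.sin(theta) * Math.sin(phi),
      z: radius * Math.cos(phi),
    });
  }
  return out;
}
```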

Convert 3d point on sphere to UV coordinate

I have a 3d point on a sphere and want to convert from that, to a UV point on the sphere's texture.
Could someone point me in the right direction for this, please? A pure maths solution is fine.
Edit:
I currently have this, which does not return the correct UV coordinate.
p is the 3D point on the sphere; mesh.position is the position of the sphere; 500 is the sphere's radius:
var x = (p.x-mesh.position.x)/500;
var y = (p.y-mesh.position.y)/500;
var z = (p.z-mesh.position.z)/500;
var u = Math.atan2(x, z) / (2 * Math.PI) + 0.5;
var v = Math.asin(y) / Math.PI + .5;
The Wikipedia article on UV mapping is correct, but you need to compute the UV coordinate for each vertex of the mesh. You then need to transform the UV coordinate to a pixel coordinate: see Perfect (3D) texture mapping in OpenGL. Here is another example: http://www.cse.msu.edu/~cse872/tutorial4.html. You can also try three.js. You also need to physically copy the UV triangle.
I found this wiki article - http://en.wikipedia.org/wiki/UV_mapping - they seem to use a sphere as their example.
Hope it helps.
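The question's formula, wrapped as one standalone function, is the standard equirectangular mapping (a sketch; it assumes the texture is an equirectangular map on a sphere of radius r centered at `center`):

```javascript
// Convert a 3D point on a sphere to equirectangular UV coordinates.
// u wraps around the equator (from atan2), v runs pole to pole (from asin);
// both land in [0, 1].
function pointToUV(p, center, r) {
  const x = (p.x - center.x) / r;
  const y = (p.y - center.y) / r;
  const z = (p.z - center.z) / r;
  return {
    u: Math.atan2(x, z) / (2 * Math.PI) + 0.5,
    v: Math.asin(y) / Math.PI + 0.5,
  };
}
```

If results still look wrong in three.js even though this math checks out, the usual suspects are the texture's flipY setting and the sphere mesh's own rotation, not the formula.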
