Tileable texture in Three.js

Hello, I have created a simple renderer for my 3D objects (PHP-generated).
I am successfully rendering all the objects, but I have some big issues with textures.
This is my texture (512x512):
I'd like to use it on my object, but this is what happens:
I can't figure out how to display a nicely looking, non-stretched grid at a 1:1 ratio.
I think I need to calculate the repeats somehow. Any ideas?
This is how I'm setting up the texture:
var texture = THREE.ImageUtils.loadTexture(basePath + '/images/textures/dt.jpg', new THREE.UVMapping());
texture.wrapT = THREE.RepeatWrapping;
texture.wrapS = THREE.RepeatWrapping;
texture.repeat.set(1, 1);

stairmaterials[0] = new THREE.MeshBasicMaterial({
    side: THREE.DoubleSide,
    map: texture
});
I tried changing the repeat values to achieve a non-stretched 1:1 ratio, but I wasn't successful at all; it only got worse and worse.
I also use the following algorithm to calculate the vertex UVs:
geom.computeBoundingBox();
var max = geom.boundingBox.max;
var min = geom.boundingBox.min;
// These are 2D: .x maps to U (from world X), .y maps to V (from world Z)
var offset = new THREE.Vector2(0 - min.x, 0 - min.z);
var range = new THREE.Vector2(max.x - min.x, max.z - min.z);

geom.faceVertexUvs[0] = [];
var faces = geom.faces;
for (var i = 0; i < faces.length; i++) {
    var v1 = geom.vertices[faces[i].a];
    var v2 = geom.vertices[faces[i].b];
    var v3 = geom.vertices[faces[i].c];
    geom.faceVertexUvs[0].push([
        new THREE.Vector2((v1.x + offset.x) / range.x, (v1.z + offset.y) / range.y),
        new THREE.Vector2((v2.x + offset.x) / range.x, (v2.z + offset.y) / range.y),
        new THREE.Vector2((v3.x + offset.x) / range.x, (v3.z + offset.y) / range.y)
    ]);
}
geom.uvsNeedUpdate = true;

Is your mesh imported or generated?
You are iterating through every triangle and doing a planar scale/projection. If all of these boxes are a single mesh, this will work to an extent: every box is scaled by the same ratio, and since they all exist in the same model space, they will be scaled properly and aligned.
If they are not the same mesh, it can fail.
The vertices you are looping through exist in model space. Each of these boxes could have its own scale and its own arbitrary local position. So you'd have to transform them into world space using the object's .matrixWorld. Applying this matrix brings all of those vertices into the same space. Try doing this without the bounding box, or grab the bounding box from just one object, and you will see a uniform scale and proper alignment.
You won't be able to use this algorithm to make the map show in 3D though, because you are doing a 2D projection. You can, however, check each triangle's orientation and choose a different projection direction accordingly.
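To make the projection idea concrete, here is a rough plain-JS sketch (the helper name `planarUV` and the `tileSize` parameter are made up for illustration): once the vertices are in world space (e.g. via `vertex.clone().applyMatrix4(mesh.matrixWorld)`), dividing the world X/Z coordinates by a fixed "world units per texture repeat" gives every box the same texture scale, and RepeatWrapping tiles any values outside [0, 1]:

```javascript
// Hypothetical helper: planar XZ projection of a world-space vertex
// onto UV coordinates, with one texture repeat per `tileSize` world
// units. Values outside [0, 1] tile thanks to THREE.RepeatWrapping.
function planarUV(x, z, tileSize) {
  return { u: x / tileSize, v: z / tileSize };
}
```

With this, a floor quad spanning 4x2 world units at `tileSize = 2` gets exactly 2x1 repeats, regardless of which mesh it belongs to.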

Related

Three.js: How would I add a longitude/latitude grid to a world globe?

I'm working on a simulation of the famous Prague Astronomical Clock: https://shetline.com/orloj/
I've already produced a lat/lon grid in a crude way, by drawing 2D lines onto the image used for the surface of the globe, but this method doesn't work very well:
(Yes, the globe is supposed to be upside down, as it is on the original clock.)
The grid lines come out very jagged this way, and they fade out near the poles because of how the original rectangular map image is scaled there.
I'm trying to find a tutorial on how to do a simple bit of decoration like this using Three.js/WebGL constructs, but no luck so far. I can get a bit of a start on what I want with the following code:
this.camera = new PerspectiveCamera(FIELD_OF_VIEW, 1);
this.scene = new Scene();
const globe = new SphereGeometry(GLOBE_RADIUS, 50, 50);
globe.rotateY(-PI / 2);
this.globeMesh = new Mesh(globe, new MeshBasicMaterial({ map: new CanvasTexture(Globe.mapCanvas) }));
this.renderer = new WebGLRenderer({ alpha: true });
this.renderer.setSize(GLOBE_PIXEL_SIZE, GLOBE_PIXEL_SIZE);
this.rendererHost.appendChild(this.renderer.domElement);
this.scene.add(this.globeMesh);
const circle = new Line(
  new CircleGeometry(GLOBE_RADIUS + 0.1, 50),
  new LineBasicMaterial({ color: GRID_COLOR, linewidth: 20 })
);
this.globeMesh.add(circle);
...by adding a circle and making its edges poke out just a bit beyond the surface of the globe, but the result is only a very faint line that doesn't respond to my attempts to make it thicker. I've also tried something similar with a torus floating just above the surface of the globe, but that doesn't work very well either.
What I want is some sort of additional layer for the lines, or a composite material that combines my pixel image with geometrically-defined lines (rather than painted lines), but I'm not finding the Three.js documentation very clear for figuring out how to do this.
I'm not even sure what kind of geometric model to use for these lines. They're all circles of a sort, but to have a real visual extent it seems the geometry would need to be some sort of globe-hugging, narrow, thin ribbons resting on the surface of the globe.
I ended up using (very, very short) cylinders as my lines of longitude and latitude, floating ever so slightly above the surface of the globe. It seems like anything composed of true line objects can't be guaranteed to honor linewidth, and I didn't want to settle for always-one-pixel lines.
The lines of longitude are simple hollow cylinders, as is the equator. All of the other lines of latitude are flared cylinders, wider at one end to conform to the shape of the globe.
const LINE_THICKNESS = 0.03;
const HAG = 0.01; // Slight distance above the globe at which longitude/latitude lines are drawn.
// ...
// Lines of longitude
for (let n = 0; n < 24; ++n) {
const line = new CylinderGeometry(GLOBE_RADIUS + HAG, GLOBE_RADIUS + HAG, LINE_THICKNESS, 50, 1, true);
line.translate(0, -LINE_THICKNESS / 2, 0);
line.rotateX(PI / 2);
line.rotateY(n * PI / 12);
const mesh = new Mesh(line, new MeshBasicMaterial({ color: GRID_COLOR }));
this.globeMesh.add(mesh);
}
// Lines of latitude
for (let n = 1; n < 12; ++n) {
const lat = (n - 6) * PI / 12;
const r = GLOBE_RADIUS * cos(lat);
const y = GLOBE_RADIUS * sin(lat);
const r1 = r - LINE_THICKNESS * sin(lat) / 2;
const r2 = r + LINE_THICKNESS * sin(lat) / 2;
const line = new CylinderGeometry(r1 + HAG, r2 + HAG, cos(lat) * LINE_THICKNESS, 50, 8, true);
line.translate(0, -cos(lat) * LINE_THICKNESS / 2 + y, 0);
const mesh = new Mesh(line, new MeshBasicMaterial({ color: GRID_COLOR }));
this.globeMesh.add(mesh);
}
Perhaps there is a better way where the lines are part of some sort of texture applied to the globe, but this is at least giving satisfactory results for now.

Mapping a stereographic projection to the inside of a sphere in ThreeJS

When it comes to 3D animation, there are a lot of terms and concepts that I'm not familiar with (maybe a secondary question to append to this one: what are some good books for getting familiar with the concepts?). I don't know what a "UV" is (in the context of 3D rendering), and I'm not familiar with the tools that exist for mapping pixels on an image to points on a mesh.
I have the following image being produced by a 360-degree camera (it's actually the output of an HTML video element):
I want the center of this image to be the "top" of the sphere, and any radius of the circle in this image to be an arc along the sphere from top to bottom.
Here's my starting point (copying lines of code directly from the Three.JS documentation):
var video = document.getElementById( "texture-video" );
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera( 75, window.innerWidth / window.innerHeight, 0.1, 1000 );
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
var material = new THREE.MeshBasicMaterial( { map: texture } );
var geometry = new THREE.SphereGeometry(0.5, 100, 100);
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );
camera.position.z = 1
function animate()
{
mesh.rotation.y += 0.01;
requestAnimationFrame( animate );
renderer.render( scene, camera );
}
animate();
This produces the following:
There are a few problems:
The texture is rotated 90 degrees
The ground is distorted, although this may be fixed if the rotation is fixed?
Update: Upon further investigation of the sphere being produced, it's not actually rotated 90 degrees. Instead, the top of the image is the top of the sphere and the bottom of the image is the bottom of the sphere. This causes the left and right edges of the image to become the distorted "sideways ground" I saw.
This is on the outside of the sphere. I want to project this to the inside of the sphere (and place the camera inside the sphere)
Currently if I place the camera inside the sphere I get solid black. I don't think it's a lighting issue because the Three.JS docs said that a MeshBasicMaterial didn't need lighting. I think the issue may be that the normals of all of the sphere faces point outward and I need to reverse them. I'm not sure how one would do this - but I'm pretty sure it's possible since I think this is how skyboxes work.
Doing some research I'm pretty sure I need to modify the "UV"s to fix this, I just don't know how or really what that even means...
Working Example
I forked #manthrax's CodeSandbox.io solution and updated it with my own:
https://codesandbox.io/s/4w1njkrv9
The Solution
So after spending a day researching UV mapping to understand what it meant and how it worked, I was able to sit down and scratch out some trig to map points on a sphere to points on my stereographic image. It basically came down to the following:
Use arccosine of the Y coordinate to determine the magnitude of a polar coordinate on the stereographic image
Use the arctangent of the X and Z coordinates to determine the angle of the polar coordinate on the stereographic image
Use x = Rcos(theta), y = Rsin(theta) to compute the rectangular coordinates on the stereographic image
If time permits I may draw a quick image in Illustrator or something to explain the math, but it's standard trigonometry
I went a step further after this, because the camera I was using only has a 240 degree vertical viewing angle - which caused the image to get slightly distorted (especially near the ground). By subtracting the vertical viewing angle from 360 and dividing by two, you get an angle from the vertical within which no mapping should occur. Because the sphere is oriented along the Y axis, this angle maps to a particular Y coordinate - above which there's data, and below which there isn't.
Calculate this "minimum Y value"
For all points on the sphere:
If the point is above the minimum Y value, scale it linearly so that the first such value is counted as "0" and the top of the sphere is still counted as "1" for mapping purposes
If the point is below the minimum Y value, return nothing
Weird Caveats
For some reason the code I wrote flipped the image upside down. I don't know if I messed up my trigonometry or my understanding of UV maps. Whatever the case, this was trivially fixed by flipping the sphere 180 degrees after mapping.
Also, I don't know how to "return nothing" in a UV map, so instead I mapped all points below the minimum Y value to the corner of the image (which was black).
With a 240-degree viewing angle, the space at the bottom of the sphere with no image data was large enough (on my monitor) that I could see the black circle when looking straight ahead. I didn't like the look of this, so I plugged in 270 for the vertical FOV. This leads to minor distortion around the ground, but not as bad as when using 360.
The Code
Here's the code I wrote for updating the UV maps:
// Enter the vertical FOV for the camera here
var vFov = 270; // = 240;
var material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
var geometry = new THREE.SphereGeometry(0.5, 200, 200);
function updateUVs()
{
var maxY = Math.cos(Math.PI * (360 - vFov) / 180 / 2);
var faceVertexUvs = geometry.faceVertexUvs[0];
// The sphere consists of many FACES
for ( var i = 0; i < faceVertexUvs.length; i++ )
{
// For each face...
var uvs = faceVertexUvs[i];
var face = geometry.faces[i];
// A face is a triangle (three vertices)
for ( var j = 0; j < 3; j ++ )
{
// For each vertex...
// x, y, and z refer to the point on the sphere in 3d space where this vertex resides
var x = face.vertexNormals[j].x;
var y = face.vertexNormals[j].y;
var z = face.vertexNormals[j].z;
// Because our stereograph goes from 0 to 1 but our vertical field of view cuts off our Y early
var scaledY = (((y + 1) / (maxY + 1)) * 2) - 1;
// uvs[j].x, uvs[j].y refer to a point on the 2d texture
if (y < maxY)
{
var radius = Math.acos(1 - ((scaledY / 2) + 0.5)) / Math.PI;
var angle = Math.atan2(x, z);
uvs[j].x = (radius * Math.cos(angle)) + 0.5;
uvs[j].y = (radius * Math.sin(angle)) + 0.5;
} else {
uvs[j].x = 0;
uvs[j].y = 0;
}
}
}
// For whatever reason my UV mapping turned everything upside down
// Rather than fix my math, I just replaced "minY" with "maxY" and
// rotated the sphere 180 degrees
geometry.rotateZ(Math.PI);
geometry.uvsNeedUpdate = true;
}
updateUVs();
var mesh = new THREE.Mesh( geometry, material );
The Results
Now if you add this mesh to a scene everything looks perfect:
One Thing I Still Don't Understand
Right around the "hole" at the bottom of the sphere there's a multi-colored ring. It almost looks like a mirror of the sky. I don't know why this exists or how it got there. Could anyone shed light on this in the comments?
Here is as close as I could get it in about 10 minutes of fiddling with a polar unwrapping of the UVs.
You can modify the polarUnwrap function to try to get a better mapping...
https://codesandbox.io/s/8nx75lkn28
You can replace the TextureLoader().loadTexture() with
//assuming you have created a HTML video element with id="video"
var video = document.getElementById( 'video' );
var texture = new THREE.VideoTexture( video );
texture.minFilter = THREE.LinearFilter;
texture.magFilter = THREE.LinearFilter;
texture.format = THREE.RGBFormat;
to get your video fed in there...
More info here:
https://threejs.org/docs/#api/textures/VideoTexture
Also this may be useful to you:
https://community.theta360.guide/t/displaying-thetas-dual-fisheye-video-with-three-js/1160
I think it would be quite difficult to modify the UVs so that the stereographically projected image fits. The UVs of a sphere are set up for textures with an equirectangular projection.
To transform the image from stereographic to equirectangular, you might want to use panorama tools like PTGui or Hugin. Or you can use Photoshop (apply Filter > Distort > Polar Coordinates > Polar to Rectangular).
Equirectangular projection of the image (done with Photoshop), resized to a 2:1 aspect ratio (not necessary for the texture):
If you want the texture to be inside the sphere (i.e. the normals flipped), you can set the material's side to THREE.BackSide:
var material = new THREE.MeshBasicMaterial( { map: texture, side: THREE.BackSide } );
Maybe you'll then have to flip the texture horizontally: How to flip a Three.js texture horizontally

How to select objects when using the Orthographic camera in Three Js

I've got an isometric level going and I'm trying to select objects within the level.
I've looked at a few Stack Overflow answers, including this one: Orthographic camera and selecting objects with raycast, but at the moment nothing seems to be working. Something is off; perhaps it has something to do with my camera setup, so here is the relevant code I am using.
// set up camera
scope.camera = new THREE.OrthographicCamera( - scope.cameraDistance * aspect, scope.cameraDistance * aspect, scope.cameraDistance, - scope.cameraDistance, - height, 1000 );
scope.camera.position.set(0 , 0 , 0);
scope.camera.rotation.order = scope.rotationOrder; // = "YXZ"
scope.camera.rotation.y = - Math.PI / 4;
scope.camera.rotation.x = Math.atan(- 1 / Math.sqrt(2));
When I loop through my matrix and add the tiles, I add them all to a new THREE.Object3D() and then add that object to the scene. My mousemove handler looks like this:
onMouseMove: function (event) {
    event.preventDefault();
    var scope = Game.GSThree,
        $container = $(scope.container.element),
        width = $container.width(),
        height = $container.height(),
        vector, ray, intersects;

    scope.mouse.x = (event.clientX / width) * 2 - 1;
    scope.mouse.y = -(event.clientY / height) * 2 + 1;
    vector = new THREE.Vector3(scope.mouse.x, scope.mouse.y, 0.5);
    ray = scope.projector.pickingRay(vector, scope.camera);
    intersects = ray.intersectObjects(scope.tiles.children);

    if (intersects.length) {
        console.log(intersects[0]);
    }
}
Now the problem is that something is very off. The ray intersects objects when it is nowhere near them, and it also seems to intersect multiple children of tiles at a time. If I log intersects.length it sometimes returns 3, 2, or 1 object(s). Just in case it's relevant: the material for each object mesh is a new THREE.MeshFaceMaterial() with an array of six new THREE.MeshBasicMaterial() instances passed in.
Any ideas?
It's always the stupidest thing: my container element had padding-left: 250px on it. I removed that and it works. Always correct for your offsets!
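For reference, the offset correction can also be made explicit, so picking keeps working regardless of the element's CSS. A small plain-JS sketch (in practice `rect` would come from `renderer.domElement.getBoundingClientRect()`):

```javascript
// Convert mouse event coordinates to normalized device coordinates
// (-1..1), subtracting the element's offset so CSS padding/margins
// don't skew the picking ray.
function toNDC(clientX, clientY, rect) {
  return {
    x: ((clientX - rect.left) / rect.width) * 2 - 1,
    y: -((clientY - rect.top) / rect.height) * 2 + 1
  };
}
```

Feeding `toNDC(event.clientX, event.clientY, rect)` into the ray construction instead of the raw window-relative math makes any padding or page offset irrelevant.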

THREE.js: Need Help rotating with Quaternions

I'm trying to understand quaternions in three.js, but for all the tutorials, I haven't been able to translate them into the application I need. This is the problem:
Given a sphere centered at (0,0,0), I want to angle an object on the sphere's surface that acts as the focal point for the camera. This point is to be moved and rotated on the surface with keyboard input.
Setting the focal point into a chosen orbit is easy, of course, but maintaining the right rotation perpendicular to the surface escapes me. I know quaternions are necessary for smooth movement and arbitrary-axis rotation, but I don't know where to start.
The second part, then, is rotating the camera offset with the focal point. The snippet I found for this no longer has the desired effect, as the cameraOffset does not inherit the rotation:
var cameraOffset = relativeCameraOffset.clone().applyMatrix4( focalPoint.matrixWorld );
camera.position.copy( focalPoint.position.clone().add(cameraOffset) );
camera.lookAt( focalPoint.position );
Update 1: I tried it with a fixed camera on the pole and rotating the planet instead. But unless I'm missing something important, this fails as well, due to the directions getting skewed completely when moving toward the equator (left becomes forward). The code in the update loop is:
acceleration.set(0,0,0);
if (keyboard.pressed("w")) acceleration.x = 1 * accelerationSpeed;
if (keyboard.pressed("s")) acceleration.x = -1 * accelerationSpeed;
if (keyboard.pressed("a")) acceleration.z = 1 * accelerationSpeed;
if (keyboard.pressed("d")) acceleration.z = -1 * accelerationSpeed;
if (keyboard.pressed("q")) acceleration.y = 1 * accelerationSpeed;
if (keyboard.pressed("e")) acceleration.y = -1 * accelerationSpeed;
velocity.add(acceleration);
velocity.multiplyScalar(dropOff);
velocity.max(minV);
velocity.min(maxV);
planet.mesh.rotation.x += velocity.x;
planet.mesh.rotation.y += velocity.y;
planet.mesh.rotation.z += velocity.z;
So I'm still open for suggestions.
Finally found the solution from a mixture of matrices and quaternions:
//Setup
var ux = new THREE.Vector3(1,0,0);
var uy = new THREE.Vector3(0,1,0);
var uz = new THREE.Vector3(0,0,1);
var direction = ux.clone();
var m4 = new THREE.Matrix4();
var dq = new THREE.Quaternion(); // direction quat base
var dqq; // final direction quat
var dq2 = new THREE.Quaternion();
dq2.setFromAxisAngle(uz,Math.PI/2); //direction perpendicular rot
//Update
if (velocity.length() < 0.1) return;
if (velocity.x) { focalPoint.translateY( velocity.x ); }
if (velocity.y) { focalPoint.translateX( velocity.y ); }
//create new direction from focalPoint quat, but perpendicular
dqq = dq.clone().multiply(focalPoint.quaternion).multiply(dq2);
velocity.multiplyScalar(dropOff);
//forward direction vector
direction = ux.clone().applyQuaternion(dqq).normalize();
//use Matrix4.lookAt to align focalPoint with the direction
m4.lookAt(focalPoint.position, planet.mesh.position, direction);
focalPoint.quaternion.setFromRotationMatrix(m4);
var cameraOffset = relativeCameraOffset.clone();
cameraOffset.z = cameraDistance;
cameraOffset.applyQuaternion(focalPoint.quaternion);
camera.position.copy( focalPoint.position.clone().add(cameraOffset) );
//use direction for camera rotation as well
camera.up = direction;
camera.lookAt( focalPoint.position );
This is the hard core of it. It pans (and with some extension rotates) around the planet without the poles being an issue.
I'm not sure I understand your problem.
But in case it helps, here is how I draw a boat on a sphere with the code below.
var geometry = new THREE.ShapeGeometry(shape);
var translation = new THREE.Matrix4().makeTranslation(boat.position.x, boat.position.y, boat.position.z);
var rotationZ = new THREE.Matrix4().makeRotationZ(-THREE.Math.degToRad(boat.cap));
var rotationX = new THREE.Matrix4().makeRotationX(-THREE.Math.degToRad(boat.latitude));
var rotationY = new THREE.Matrix4().makeRotationY(Math.PI / 2 + THREE.Math.degToRad(boat.longitude));
var rotationXY = rotationY.multiply(rotationX);
geometry.applyMatrix(rotationZ);
geometry.applyMatrix(rotationXY);
geometry.applyMatrix(translation);
First, I apply a rotation on Z to define the boat's heading (cap).
Then, I apply rotations on Y and X to set the boat perpendicular to the surface of the sphere.
Finally, I apply a translation to put the boat on the surface of the sphere.
The order of the rotations is important.
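The order point is easy to verify even without Three.js. A minimal plain-JS sketch (hard-coded 90-degree rotations, purely illustrative) shows that X-then-Y and Y-then-X send the same vector to different places:

```javascript
// 90-degree rotations about the X and Y axes, applied to an [x, y, z]
// vector. Composing them in different orders gives different results,
// which is why the rotation order above matters.
function rotX(v) { var x = v[0], y = v[1], z = v[2]; return [x, -z, y]; }
function rotY(v) { var x = v[0], y = v[1], z = v[2]; return [z, y, -x]; }

var a = rotY(rotX([0, 0, 1])); // X first, then Y -> [0, -1, 0]
var b = rotX(rotY([0, 0, 1])); // Y first, then X -> [1, 0, 0]
```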

Three.js Javascript object transformation issue

I'm trying to scale the height (y) of a cube while simultaneously raising its y position, so that its base remains on the same x-z plane even after scaling. The rendering works fine, but it is not behaving as expected: the object's y position increases rapidly, and then it starts disappearing. Can anyone help me out?
I've added a mousemove event listener with a handler function to do the above transformations:
var fHeight = 5, yShift;

function onDocumentMouseMove(event) {
    var mouseVector = new THREE.Vector3(
        2 * (event.clientX / window.innerWidth) - 1,
        1 - 2 * (event.clientY / window.innerHeight));
    var projector = new THREE.Projector();
    var raycaster = projector.pickingRay(mouseVector.clone(), camera);
    var intersects = raycaster.intersectObjects(Object.children);
    if (intersects.length > 0) {
        intersects[0].object.scale.y += 0.1;
        fHeight = fHeight * intersects[0].object.scale.y;
        yShift = fHeight / 2 - intersects[0].object.position.y - 2.5;
        intersects[0].object.position.y = intersects[0].object.position.y + yShift;
    }
}
You can place the vertices of your cube so its base sits on its local y = 0 plane.
Then object.scale.y will modify the cube's height only, without displacing the base, and you will not have to reposition the object.
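A quick plain-JS sanity check of the arithmetic (a sketch; in Three.js the origin shift itself would be something like `geometry.translate(0, height / 2, 0)` before creating the mesh, which is an assumption about your geometry setup): with the origin at the base, the base plane never moves under scaling.

```javascript
// With the cube's origin at its base, scale.y moves only the top face;
// the base stays at the object's y position with no repositioning.
function cubeExtentsBaseOrigin(positionY, height, scaleY) {
  return {
    base: positionY,                 // unchanged by scaling
    top: positionY + height * scaleY // only the top moves
  };
}
```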
Hope it helps
