I'm trying to make a box in THREE that represents a box of 2x4 Legos, 24 pieces wide by 48 pieces long and an arbitrary number of pieces tall. I've generated a texture that shows this pattern using random colors:
I need to show two sides of this cube, but the textures have to align so that the pieces on the edges are the same colors, like so (generated in Blender):
I'd really prefer not to make six images for a CubeTexture, particularly since four are not visible. Is it possible to flip the texture on one side so that they appear to align? (We're just going for visual effect here.)
Further, not all of the boxes will be cubes, and I can't quite figure out how to set texture.repeat.x and texture.repeat.y so that x is scaled correctly and y is at the same scale but simply cuts off where the height of the object ends, like so:
Thanks!
You can flip an image by flipping the UVs.
You'll need to figure out which UVs correspond to the face you're trying to mirror, and which direction to flip them (not sure how your geometry is created).
Here's an example using a basic BoxBufferGeometry and modifying its uv attribute. (The face on the right is the mirrored-by-UV-flipping face.)
var textureURL = "https://upload.wikimedia.org/wikipedia/commons/0/02/Triangular_hebesphenorotunda.png";
// attribution and license here: https://commons.wikimedia.org/wiki/File:Triangular_hebesphenorotunda.png
var renderer = new THREE.WebGLRenderer({antialias:true});
document.body.appendChild(renderer.domElement);
renderer.setSize(500, 500);
var textureLoader = new THREE.TextureLoader();
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(28, 1, 1, 1000);
camera.position.set(50, 25, 50);
camera.lookAt(scene.position);
scene.add(camera);
camera.add(new THREE.PointLight(0xffffff, 1, Infinity));
var cubeGeo = new THREE.BoxBufferGeometry(20, 20, 20);
var uvs = cubeGeo.attributes.uv;
// originally:
// [0] = 0,1
// [1] = 1,1
// [2] = 0,0
// [3] = 1,0
// convert to:
// [0] = 1,1
// [1] = 0,1
// [2] = 1,0
// [3] = 0,0
uvs.setX(0, 1);
uvs.setY(0, 1);
uvs.setX(1, 0);
uvs.setY(1, 1);
uvs.setX(2, 1);
uvs.setY(2, 0);
uvs.setX(3, 0);
uvs.setY(3, 0);
uvs.needsUpdate = true;
var mat = new THREE.MeshLambertMaterial({
  color: "white",
  map: textureLoader.load(textureURL, function () {
    animate();
  })
});
var mesh = new THREE.Mesh(cubeGeo, mat);
scene.add(mesh);
function render() {
  renderer.render(scene, camera);
}
function animate() {
  requestAnimationFrame(animate);
  render();
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/91/three.min.js"></script>
You can create six PlaneBufferGeometries, assign the same material to each, and position them to form a cube. Rotate them in 90° increments until you reach the desired result. For performance reasons, you could then merge these back into a single BufferGeometry.
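Here's a rough sketch of that plane-based approach, assuming a scene and a loaded texture already exist (the size, the face order, and the exact rotations are illustrative; which faces need flipping depends on your texture):
var size = 20, half = size / 2;
var mat = new THREE.MeshLambertMaterial({ map: texture });
var boxGroup = new THREE.Group();
// position/rotation pairs for the six faces of the box
[
  { pos: [0, 0, half], rot: [0, 0, 0] },             // front
  { pos: [0, 0, -half], rot: [0, Math.PI, 0] },      // back
  { pos: [half, 0, 0], rot: [0, Math.PI / 2, 0] },   // right
  { pos: [-half, 0, 0], rot: [0, -Math.PI / 2, 0] }, // left
  { pos: [0, half, 0], rot: [-Math.PI / 2, 0, 0] },  // top
  { pos: [0, -half, 0], rot: [Math.PI / 2, 0, 0] }   // bottom
].forEach(function (face) {
  var plane = new THREE.Mesh(new THREE.PlaneBufferGeometry(size, size), mat);
  plane.position.fromArray(face.pos);
  plane.rotation.fromArray(face.rot);
  boxGroup.add(plane);
});
scene.add(boxGroup);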
You can export the model you made in blender, either using the THREE.js json exporter, or a format like OBJ or GLTF, and load and render it directly.
What you are talking about is simply having the UVs laid out the way you have them in Blender, so if you need that level of control, it's probably easier to load the model instead of trying to generate it.
If you use either the three.js .json or .gltf format, both exporters have an option to embed the textures directly in the export. This can make it easier to get things working quickly, at the expense of possibly less efficient storage.
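For instance, a minimal loading sketch, assuming GLTFLoader.js is included alongside three.js and the exported file lives at a hypothetical path:
var loader = new THREE.GLTFLoader();
loader.load('models/legoBox.gltf', function (gltf) {
  // materials, textures, and the Blender UV layout come along with the file
  scene.add(gltf.scene);
  animate();
}, undefined, function (err) {
  console.error(err);
});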
I'm trying to use pose estimation coordinates to animate a rigged model in three.js. The pose estimation tech I'm using provides real-time x, y, z coordinates from a person in a video feed, and I'm trying to use those to move the 3D model accordingly. I used the code below (some of which I found in an answer to a related question) as a starting point...
let camera, scene, renderer, clock, rightArm;
init();
animate();
function init() {
  camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.01, 10);
  camera.position.set(2, 2, -2);
  clock = new THREE.Clock();
  scene = new THREE.Scene();
  scene.background = new THREE.Color(0xffffff);
  const light = new THREE.HemisphereLight(0xbbbbff, 0x444422);
  light.position.set(0, 1, 0);
  scene.add(light);
  // model
  const loader = new THREE.GLTFLoader();
  loader.load('https://threejs.org/examples/models/gltf/Soldier.glb', function(gltf) {
    const model = gltf.scene;
    rightArm = model.getObjectByName('mixamorigRightArm');
    scene.add(model);
  });
  renderer = new THREE.WebGLRenderer({
    antialias: true
  });
  renderer.setPixelRatio(window.devicePixelRatio);
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.outputEncoding = THREE.sRGBEncoding;
  document.body.appendChild(renderer.domElement);
  window.addEventListener('resize', onWindowResize, false);
  const controls = new THREE.OrbitControls(camera, renderer.domElement);
  controls.target.set(0, 1, 0);
  controls.update();
}
function onWindowResize() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
}
//This was my attempt at deriving the rotation from two vector3's and applying it to the model
//storedresults is simply an array where I store the pose estimation data for a given position
//getPosition is just a helper function for getting the vector three for a specific position
function setRightArmRotation() {
  if (rightArm) {
    if (storedresults === undefined || storedresults.length == 0) {
      return;
    } else {
      if (vectorarray.length < 2) {
        vectorarray.push(getPosition(12));
      } else {
        vectorarray.pop();
        vectorarray.push(getPosition(12));
        var quaternion = new THREE.Quaternion();
        quaternion.setFromUnitVectors(vectorarray[0], vectorarray[1]);
        var matrix = new THREE.Matrix4();
        matrix.makeRotationFromQuaternion(quaternion);
        rightArm.applyMatrix4(matrix);
      }
    }
  }
}
function animate() {
  requestAnimationFrame(animate);
  const t = clock.getElapsedTime();
  if (rightArm) {
    rightArm.rotation.z += Math.sin(t) * 0.005;
    //setRightArmRotation()
  }
  renderer.render(scene, camera);
}
<script src="https://cdn.jsdelivr.net/npm/three#0.125.2/build/three.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three#0.125.2/examples/js/loaders/GLTFLoader.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three#0.125.2/examples/js/controls/OrbitControls.js"></script>
I also referred to this answer on finding rotations from two vectors, but I haven't been successful in implementing it to achieve the desired results:
How to find rotation matrix between two vectors
I can get the Vector3 from the pose estimation tech easily, and I understand how most of what is in the jsfiddle works, but I can't seem to put it all together to get the desired result of having my 3D model 'mirror' the movement of the person in my video using the pose estimation coords. I pretty much just get the model to 'thrash around'.
As I understand it I need to manipulate the rotations of the bones to achieve the desired results, and to do that I need to compute those rotations using two vectors, but again after much research and trial and error I just can't seem to put it all together. Any help would be appreciated.
This is a rather big answer, so I'll split it down into smaller parts.
I'm going to go out on a limb and guess you're using posenet (tensorflow library).
I've had this issue before, and it is very complicated to come up with a solution, but here's what worked for me.
First, using posenet, match each bone in the mesh to a bone in the keypoints array.
Then I created a Skeleton class that recreated a skeleton in primitive terms: each joint (elbow, knee, etc.) had a separate class. Each of these had max hyperextension and retraction values; it took a series of Vector3 values that gave the min and max x/y/z rotation values (similar to the range of movement we have), converted from degrees to radians.
From there, I applied all the rotations to each bone. The rotations were nowhere near perfect, though, as the arms would often rotate inside the body. To combat this, I created a bounding box around each bone (to the outer mesh) and did collision detection on each part until the bone no longer collided with another part of the mesh, with a tolerance score for each body part (to allow, for example, arms or legs to be crossed).
From there I had a "somewhat" good estimate, which was applied to the 3D mesh.
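As a very rough sketch of the rotate-then-clamp idea (every name here is hypothetical; the bone's rest direction is assumed to be +Y, and the limits are in radians):
// Point a bone from one posenet keypoint toward another, clamped
// to the joint's range of motion.
function alignBone(bone, fromPoint, toPoint, minRot, maxRot) {
  var dir = new THREE.Vector3().subVectors(toPoint, fromPoint).normalize();
  var rest = new THREE.Vector3(0, 1, 0); // assumed rest direction of the bone
  var q = new THREE.Quaternion().setFromUnitVectors(rest, dir);
  var e = new THREE.Euler().setFromQuaternion(q);
  e.x = THREE.MathUtils.clamp(e.x, minRot.x, maxRot.x);
  e.y = THREE.MathUtils.clamp(e.y, minRot.y, maxRot.y);
  e.z = THREE.MathUtils.clamp(e.z, minRot.z, maxRot.z);
  bone.quaternion.setFromEuler(e);
}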
I've got some of the original stuff on github if you'd like to see an example:
https://github.com/hudson1998x/2D-To-3D-Pose-Estimation/
Here I bumped into the problem, since I need to merge two geometries (or meshes) into one. In earlier versions of three.js there was a nice function:
THREE.GeometryUtils.merge(pendulum, ball);
However, it is not in the new versions anymore.
I tried to merge pendulum and ball with the following code:
ball is a mesh.
var ballGeo = new THREE.SphereGeometry(24,35,35);
var ballMat = new THREE.MeshPhongMaterial({color: 0xF7FE2E});
var ball = new THREE.Mesh(ballGeo, ballMat);
ball.position.set(0,0,0);
var pendulum = new THREE.CylinderGeometry(1, 1, 20, 16);
ball.updateMatrix();
pendulum.merge(ball.geometry, ball.matrix);
scene.add(pendulum);
In the end, I got the following error:
THREE.Object3D.add: object not an instance of THREE.Object3D. THREE.CylinderGeometry {uuid: "688B0EB1-70F7-4C51-86DB-5B1B90A8A24C", name: "", type: "CylinderGeometry", vertices: Array[1332], colors: Array[0]…}THREE.error # three_r71.js:35THREE.Object3D.add # three_r71.js:7770(anonymous function) # pendulum.js:20
To explain Darius' answer more clearly (as I struggled with it while trying to update a version of Mr. Doob's procedural city to work with the Face3 boxes):
Essentially you are merging all of your Meshes into a single Geometry. So, if you, for instance, want to merge a box and sphere:
var box = new THREE.BoxGeometry(1, 1, 1);
var sphere = new THREE.SphereGeometry(.65, 32, 32);
...into a single geometry:
var singleGeometry = new THREE.Geometry();
...you would create a Mesh for each geometry:
var boxMesh = new THREE.Mesh(box);
var sphereMesh = new THREE.Mesh(sphere);
...then call the merge method of the single geometry for each, passing the geometry and matrix of each into the method:
boxMesh.updateMatrix(); // as needed
singleGeometry.merge(boxMesh.geometry, boxMesh.matrix);
sphereMesh.updateMatrix(); // as needed
singleGeometry.merge(sphereMesh.geometry, sphereMesh.matrix);
Once merged, create a mesh from the single geometry and add to the scene:
var material = new THREE.MeshPhongMaterial({color: 0xFF0000});
var mesh = new THREE.Mesh(singleGeometry, material);
scene.add(mesh);
A working example:
<!DOCTYPE html>
<script src="https://cdnjs.cloudflare.com/ajax/libs/three.js/r77/three.js"></script>
<!-- OrbitControls.js is not versioned and may stop working with r77 -->
<script src='http://threejs.org/examples/js/controls/OrbitControls.js'></script>
<body style='margin: 0px; background-color: #bbbbbb; overflow: hidden;'>
<script>
// init renderer
var renderer = new THREE.WebGLRenderer();
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);
// init scene and camera
var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.01, 3000);
camera.position.z = 5;
var controls = new THREE.OrbitControls(camera);
// our code
var box = new THREE.BoxGeometry(1, 1, 1);
var sphere = new THREE.SphereGeometry(.65, 32, 32);
var singleGeometry = new THREE.Geometry();
var boxMesh = new THREE.Mesh(box);
var sphereMesh = new THREE.Mesh(sphere);
boxMesh.updateMatrix(); // as needed
singleGeometry.merge(boxMesh.geometry, boxMesh.matrix);
sphereMesh.updateMatrix(); // as needed
singleGeometry.merge(sphereMesh.geometry, sphereMesh.matrix);
var material = new THREE.MeshPhongMaterial({color: 0xFF0000});
var mesh = new THREE.Mesh(singleGeometry, material);
scene.add(mesh);
// a light
var light = new THREE.HemisphereLight(0xfffff0, 0x101020, 1.25);
light.position.set(0.75, 1, 0.25);
scene.add(light);
// render
requestAnimationFrame(function animate() {
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
});
</script>
</body>
At least, that's how I am interpreting things; apologies to anyone if I have something wrong, as I am nowhere close to being a three.js expert (currently learning). I just had the "bad luck" of trying my hand at customizing Mr. Doob's procedural city code right when the latest version broke things (the merge change being one of them; the other being that three.js no longer uses quads for cube, ahem, box geometry, which has led to all kinds of fun getting the shading and such to work properly again).
Finally, I found a possible solution. I am posting it since it could be useful for somebody else, and I wasted a lot of hours on this. The tricky part is keeping the concepts of meshes and geometries straight:
var ballGeo = new THREE.SphereGeometry(10,35,35);
var material = new THREE.MeshPhongMaterial({color: 0xF7FE2E});
var ball = new THREE.Mesh(ballGeo, material);
var pendulumGeo = new THREE.CylinderGeometry(1, 1, 50, 16);
ball.updateMatrix();
pendulumGeo.merge(ball.geometry, ball.matrix);
var pendulum = new THREE.Mesh(pendulumGeo, material);
scene.add(pendulum);
The error message is right. CylinderGeometry is not an Object3D. Mesh is. A Mesh is constructed from a Geometry and a Material. A Mesh can be added to the scene, while a Geometry cannot.
In the newest versions of three.js, Geometry has two merge methods: merge and mergeMesh.
merge takes a mandatory argument geometry, and two optional arguments matrix and materialIndexOffset.
geom.mergeMesh(mesh) is basically a shorthand for geom.merge(mesh.geometry, mesh.matrix), as used in other answers. ('geom' and 'mesh' being arbitrary names for a Geometry and a Mesh, respectively.) The Material of the Mesh is ignored.
This is my ultimate compact version in four (or five) lines, making use of mergeMesh (as long as material is defined somewhere else):
var geom = new THREE.Geometry();
geom.mergeMesh(new THREE.Mesh(new THREE.BoxGeometry(2,20,2)));
geom.mergeMesh(new THREE.Mesh(new THREE.BoxGeometry(5,5,5)));
geom.mergeVertices(); // optional
scene.add(new THREE.Mesh(geom, material));
Edit: added optional extra line to remove duplicate vertices, which should help performance.
Edit 2: I'm using the newest version, r94.
The answers and code that I've seen posted here no longer work, because in recent versions the second argument of the merge method is an integer, not a matrix. As far as I can tell, the merge method is no longer functioning in a useful way, so I used the following approach to make a simple rocket with a nose cone.
import * as BufferGeometryUtils from '../three.js/examples/jsm/utils/BufferGeometryUtils.js'
const lengthSegments = 2
const radius = 5
const radialSegments = 32
// dParamWithUnits is defined elsewhere in this project
const bodyLength = dParamWithUnits['launchVehicleBodyLength'].value
const noseConeLength = dParamWithUnits['launchVehicleNoseConeLength'].value
// Create the vehicle's body
const launchVehicleBodyGeometry = new THREE.CylinderGeometry(radius, radius, bodyLength, radialSegments, lengthSegments, false)
launchVehicleBodyGeometry.name = "body"
// Create the nose cone
const launchVehicleNoseConeGeometry = new THREE.CylinderGeometry(0, radius, noseConeLength, radialSegments, lengthSegments, false)
launchVehicleNoseConeGeometry.name = "noseCone"
launchVehicleNoseConeGeometry.translate(0, (bodyLength+noseConeLength)/2, 0)
// Merge the nosecone into the body
const launchVehicleGeometry = BufferGeometryUtils.mergeBufferGeometries([launchVehicleBodyGeometry, launchVehicleNoseConeGeometry])
// Rotate the vehicle to horizontal
launchVehicleGeometry.rotateX(-Math.PI/2)
const launchVehicleMaterial = new THREE.MeshPhongMaterial( {color: 0x7f3f00})
const launchVehicleMesh = new THREE.Mesh(launchVehicleGeometry, launchVehicleMaterial)
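Assuming a scene object exists, the merged mesh is then added to it as usual:
scene.add(launchVehicleMesh)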
My goal is to create an interactive Earth that has lines normal to the surface so that you can click on them and it pulls up pictures that my health care team has taken from around the world. I have the world completely coded (or more accurately someone else did it and I made a few small changes).
Below is the code for the Earth, which functions as expected. What I want to know is how to make lines normal to the surface and have them be clickable. It would be optimal if the lines faded and disappeared as they rotated to the back of the Earth, or as the user rotated the Earth, so that the lines on the side the user can't see fade out.
I thought about making an array of cities and having a location on the sphere be associated with it but I'm not really sure how to do that. I am very new to Three.js and HTML/JS in general.
It may be helpful to know that I am using three.min.js, Detector.js, and TrackballControls.js.
Code so far as follows:
(function () {
var webglEl = document.getElementById('webgl');
if (!Detector.webgl) {
  Detector.addGetWebGLMessage(webglEl);
  return;
}
var width = window.innerWidth,
    height = window.innerHeight;
// Earth params
var radius = 0.5,
    segments = 32,
    rotation = 6;
var scene = new THREE.Scene();
var uniforms, mesh, meshes =[];
var camera = new THREE.PerspectiveCamera(45, width / height, 0.01, 1000);
camera.position.z = 1.5;
var renderer = new THREE.WebGLRenderer();
renderer.setSize(width, height);
scene.add(new THREE.AmbientLight(0x333333));
var light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(5,3,5);
scene.add(light);
var sphere = createSphere(radius, segments);
sphere.rotation.y = rotation;
scene.add(sphere);
var clouds = createClouds(radius, segments);
clouds.rotation.y = rotation;
scene.add(clouds);
var stars = createStars(90, 64);
scene.add(stars);
var controls = new THREE.TrackballControls(camera);
webglEl.appendChild(renderer.domElement);
render();
function render() {
  controls.update();
  sphere.rotation.y += 0.0005;
  clouds.rotation.y += 0.0007;
  requestAnimationFrame(render);
  renderer.render(scene, camera);
}
function createSphere(radius, segments) {
  return new THREE.Mesh(
    new THREE.SphereGeometry(radius, segments, segments),
    new THREE.MeshPhongMaterial({
      map: THREE.ImageUtils.loadTexture('images/Color_Map.jpg'),
      bumpMap: THREE.ImageUtils.loadTexture('images/elev_bump_4k.jpg'),
      bumpScale: 0.005,
      specularMap: THREE.ImageUtils.loadTexture('images/water_4k.png'),
      specular: new THREE.Color('grey')
    })
  );
}
function createClouds(radius, segments) {
  return new THREE.Mesh(
    new THREE.SphereGeometry(radius + 0.003, segments, segments),
    new THREE.MeshPhongMaterial({
      map: THREE.ImageUtils.loadTexture('images/fair_clouds_4k.png'),
      transparent: true
    })
  );
}
function createStars(radius, segments) {
  return new THREE.Mesh(
    new THREE.SphereGeometry(radius, segments, segments),
    new THREE.MeshBasicMaterial({
      map: THREE.ImageUtils.loadTexture('images/galaxy_starfield.png'),
      side: THREE.BackSide
    })
  );
}
}());
The hope is that it would look like this link, but with the Earth and not a building (http://3d.cl3ver.com/uWfsD?tryitlocation=3) [also click Explore when you go there].
I built a quick demo that most faithfully represents what I think your needs are. It shows some images that seem to be attached to an Earth sphere through lines. It uses sprites to create those images (and the lines themselves, actually). I think it resembles the building demo you linked to quite well. Here is the technique:
1. Images are added using GIMP to this template and saved as PNGs.
2. Those images are loaded as textures in the JS app.
3. A sprite is created, using the loaded texture.
4. The sprite is added to an Object3D and its position set to (0, 0, radiusOfTheEarthSphere).
5. The Object3D is added to the sphere and rotated until the center of the sprite lies at the position on Earth where you want it to rest.
6. Each frame, a dot product between the normalized vector from the center of the Earth to the camera and the normalized vector from the center of the Earth to each sprite is used to calculate the sprite's opacity.
The equation in step 6 is:
opacity = (normalize(cameraPosition - centerOfEarth) · normalize(spriteCenter - centerOfEarth) + 1) * 0.5
where "·" is the dot product.
Also note that the sprite's center is different from its position, due to the Object3D used as its parent; I calculate the center using the .localToWorld(vec) method.
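A sketch of that per-frame opacity update (the names are illustrative: earthCenter is the sphere's world position, holder is the parent Object3D, and the sprite's material is assumed to have transparent set so opacity takes effect):
var toCamera = new THREE.Vector3();
var toSprite = new THREE.Vector3();
function updateSpriteOpacity(sprite, holder, earthCenter) {
  toCamera.subVectors(camera.position, earthCenter).normalize();
  // sprite center in world space, via its Object3D parent
  holder.localToWorld(toSprite.copy(sprite.position));
  toSprite.sub(earthCenter).normalize();
  sprite.material.opacity = (toCamera.dot(toSprite) + 1) * 0.5;
}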
Please see the demo here: https://33983769c6a202d6064de7bcf6c5ac7f51fd6d9e.googledrive.com/host/0B9scOMN0JFaXSE93YTZRTE5XeDQ/test.html
It is hosted on my Google Drive and may take some time to load. Three.js will give some errors in the console until all the textures are loaded, because I coded it quickly just to show you my implementation ideas.
I have added a normal map to a model in Three.js that is mirrored down the middle. It looks like one of the channels (green perhaps?) is flipped on the mirrored side.
I have one ambient light, one directional headlight, and one spotlight. Here is the code that I use to make the material:
// Create a MeshPhongMaterial for the model
var material = new THREE.MeshPhongMaterial();
material.map = THREE.ImageUtils.loadTexture(texture_color);
// Wrapping modes
//THREE.RepeatWrapping = 1000;
//THREE.ClampToEdgeWrapping = 1001;
//THREE.MirroredRepeatWrapping = 1002;
material.map.wrapS = THREE.RepeatWrapping;
material.map.wrapT = THREE.MirroredRepeatWrapping;
if (texture_normal != null) {
  material.normalMap = THREE.ImageUtils.loadTexture(texture_normal);
  material.normalMap.wrapS = THREE.RepeatWrapping;
  material.normalMap.wrapT = THREE.MirroredRepeatWrapping;
}
material.wrapAround = true;
material.morphTargets = true;
material.shininess = 15;
material.specular = new THREE.Color(0.1, 0.1, 0.1);
material.ambient = new THREE.Color(0, 0, 0);
material.alphaTest = 0.5;
var mesh = new THREE.MorphAnimMesh( geometry, material );
// Turn on shadows
mesh.castShadow = true;
if (shadows) {
  mesh.receiveShadow = true;
}
scene.add( mesh );
I tried all of the different combinations of material.normalMap.wrapS and material.normalMap.wrapT but that didn't solve it (tried diffuse map too). What am I doing wrong?
Thank you!
Normal maps are dependent on the geometry, so you can't just mirror it and expect it to work like a diffuse texture would.
To make it work, you need to flip the normal map's red channel wherever the UVWs are mirrored on the model.
http://www.polycount.com/forum/showthread.php?t=116922
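One way to do that flip at load time, as a rough sketch: draw the normal map into a canvas and invert its red channel. Here the whole map is inverted; in practice you would restrict this to the texels covered by mirrored UVs, and the image must be same-origin (or CORS-enabled) for getImageData to work:
var img = new Image();
img.onload = function () {
  var canvas = document.createElement('canvas');
  canvas.width = img.width;
  canvas.height = img.height;
  var ctx = canvas.getContext('2d');
  ctx.drawImage(img, 0, 0);
  var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
  for (var i = 0; i < pixels.data.length; i += 4) {
    pixels.data[i] = 255 - pixels.data[i]; // invert red (the X component)
  }
  ctx.putImageData(pixels, 0, 0);
  material.normalMap = new THREE.Texture(canvas);
  material.normalMap.needsUpdate = true;
};
img.src = 'normal_map.png'; // hypothetical file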
Turns out I was using an older version (1.2) of the Blender Three.js exporter. By switching to the latest version (1.5) of the exporter from the r67 repository, Three.js now correctly handles mirrored normal maps with its Phong shader out of the box.
Edit: The Phong shader was still having issues with the flipped channel. I ended up using the "Normal Map Shader" (see the three.js examples), and that gave me correct results. Unfortunately, the Normal Map Shader doesn't work with morph animations, only skeletal ones.
I don't understand how normals are computed in three.js.
Here is my problem :
I create a simple plane
var plane = new THREE.PlaneGeometry(10, 100, 10, 10);
var material = new THREE.MeshBasicMaterial();
material.setValues({side: THREE.DoubleSide, color: 0xaabbcc});
var mesh = new THREE.Mesh(plane, material);
mesh.rotateY(Math.PI / 2);
scene.add(mesh);
When I read the normal of this plane, I get (0, 0, 1).
But the plane is parallel to the z axis so the value is wrong.
I tried adding
mesh.geometry.computeFaceNormals();
mesh.geometry.computeVertexNormals();
but I still get the same result.
Did I miss anything?
How can I get correct values for normals from three.js?
Thanks.
Geometry normals are in object space. To transform them to world space, first make sure the object matrix is updated.
object.updateMatrixWorld();
(The renderer does this for you in each render loop, so you may be able to skip this step.)
Then, compute the normal matrix:
var normalMatrix = new THREE.Matrix3().getNormalMatrix( object.matrixWorld );
Now transform the normal to world space like so:
var newNormal = normal.clone().applyMatrix3( normalMatrix ).normalize();
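Putting it together for the rotated plane from the question, a quick sketch:
mesh.updateMatrixWorld();
var normalMatrix = new THREE.Matrix3().getNormalMatrix( mesh.matrixWorld );
var normal = mesh.geometry.faces[ 0 ].normal; // (0, 0, 1) in object space
var worldNormal = normal.clone().applyMatrix3( normalMatrix ).normalize();
console.log( worldNormal ); // approximately (1, 0, 0) after the rotateY(Math.PI / 2)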
three.js r.66