Scaling a 2D SVG group object in three.js - javascript

I'm attempting to create a map of 2D SVG tiles in three.js. I have used SVGLoader() like so (keep in mind some brackets are for parent scopes that aren't shown; that is not the issue):
loader = new SVGLoader();
loader.load(
    // resource URL
    filePath,
    // called when the resource is loaded
    function ( data ) {
        console.log("SVG file successfully loaded");
        const paths = data.paths;
        for ( let i = 0; i < paths.length; i ++ ) {
            const path = paths[ i ];
            const material = new THREE.MeshBasicMaterial( {
                color: path.color,
                side: THREE.DoubleSide,
                depthWrite: false
            } );
            const shapes = SVGLoader.createShapes( path );
            console.log(`Shapes length = ${shapes.length}`);
            try {
                for ( let j = 0; j < shapes.length; j ++ ) {
                    const shape = shapes[ j ];
                    const geometry = new THREE.ShapeGeometry( shape );
                    const testGeometry = new THREE.PlaneGeometry( 2, 2 );
                    try {
                        const mesh = new THREE.Mesh( geometry, material );
                        group.add( mesh );
                    } catch (e) { console.log(e) }
                }
            } catch (e) { console.log(e) }
        }
    },
    // called when loading is in progress
    function ( xhr ) {
        console.log( ( xhr.loaded / xhr.total * 100 ) + '% loaded' );
    },
    // called when loading has errors
    function ( error ) {
        console.log( 'An error happened' );
    }
);
return group;
}
Please disregard the fact that I surrounded a lot of it in try{}catch(){} blocks.
I have also created grid lines and added them to my axis helper in the application, which allows me to see where each coordinate is in relation to the X and Y axes.
This is how the SVG appears on screen:
Application Output
I can't seem to figure out how to correlate the scale of the SVG with the individual grid lines. I have a feeling that I'm going to have to dive deeper into the SVG loading script above and scale each shape mesh individually. I call the SVG group itself in the following code.
try {
    // SVG returns a group, TGA returns a texture to be added to a material
    var object1 = LOADER.textureLoader("TGA", './Art/tile1.tga', pGeometry);
    var object2 = LOADER.textureLoader("SVG", '/Art/bitmap.svg');
    const testMaterial = new THREE.MeshBasicMaterial({
        color: 0xffffff,
        map: object1,
        side: THREE.DoubleSide
    });
    //const useMesh = new THREE.Mesh(pGeometry, testMaterial);
    // testing scaling the tile
    try {
        const worldScale = new THREE.Vector3();
        object2.getWorldScale(worldScale);
        console.log(`World ScaleX: ${worldScale.x} World ScaleY: ${worldScale.y} World ScaleZ: ${worldScale.z}`);
        //object2.scale.set(2,2,0);
    } catch (error) { console.log(error) }
    scene.add(object2);
}
Keep in mind that the SVG is object2 in this case. Some of the ideas I have had to tackle this problem are looking into what a world scale is, Matrix4 transformations, and the scale methods of either the Object3D parent properties or the BufferGeometry parent properties of this particular SVG group object. I am also fully aware that three.js is designed for 3D graphics; however, I would like to master 2D graphics programming in this library before I get into the 3D aspect of things. I also have a thought that the scale of the SVG group is distinctly different from the scale of the scene and its X, Y, and Z axes.
If this question has already been answered, a link to the corresponding answer would be of great help to me.
Thank you for the time you take to answer this question.

I messed with the dimensions of the SVG file itself in the editor I used to paint it, and I got it to scale. Not exactly a solution in the code, but I guess the code is just closely tied to the data that the SVG file provides and can't be altered too much.
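For anyone looking for a code-side approach, here is a rough sketch (my own, not from the original post; the target grid size is an assumption) of measuring the loaded group with a Box3 and scaling it to a target width in world units. SVGLoader output also uses SVG's Y-down coordinates, so the Y scale is negated to flip the group upright:
const box = new THREE.Box3().setFromObject(object2);
const size = box.getSize(new THREE.Vector3());
const targetWidth = 2;              // desired width in grid units (assumed value)
const s = targetWidth / size.x;     // uniform scale factor
object2.scale.set(s, -s, 1);        // negative Y flips the Y-down SVG coordinates
The same idea works on the group returned by the loader function before it is added to the scene.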

Related

THREE.Points, parsing JSON, and iterating over materials

So I have some lovely JSON exported from Blender using three.js' addon. Works great.
I want to grab colours from each of the materials that the geometry has been exported with and change their material types to "point."
This again works fine according to my console output of the model.material array before and after my .forEach.
Unfortunately, nothing is displayed, as though no material has been applied whatsoever.
As per the inline comment below, a single colour material does work, as do the original materials loaded from the JSON.
var loader = new THREE.JSONLoader();
var model = loader.parse(json from elsewhere);
var mesh;
var pointMats = [];
model.materials.forEach(function(j) {
    var color = new THREE.Color(j.color.r, j.color.g, j.color.b);
    var specular = new THREE.Color(j.specular.r, j.specular.g, j.specular.b);
    var newPointsMat = new THREE.PointsMaterial({
        name: j.name,
        color: color,
        lights: true,
        size: 1
    });
    pointMats.push(newPointsMat);
});
// var pointsMat = new THREE.PointsMaterial( {
//     color: 0xffffff,
//     size: 0.01
// }); // this works fine and is applied to all the meshes in my scene
mesh = new THREE.Points(model.geometry, pointMats);
scene.add(mesh);
Of course, no errors are given in the console. Probably because there isn't an error to be displayed.
Thanks for your time
THREE.Points accepts a single instance of Material and can't handle an array of them on a single mesh.
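A minimal sketch under the same setup (reusing the first exported material's colour purely for illustration):
var singlePointsMat = new THREE.PointsMaterial({
    color: new THREE.Color(model.materials[0].color.r,
                           model.materials[0].color.g,
                           model.materials[0].color.b),
    size: 1
});
var points = new THREE.Points(model.geometry, singlePointsMat);
scene.add(points);
If per-material colours are needed, one THREE.Points object per material (each with its own subset of the geometry) is the usual workaround.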

Three JS: Load an OBJ, translate to origin (center in scene), orbit

All I want to do is load an OBJ file and translate its coordinates to the world origin (0,0,0) so that orbit controls work perfectly (no pivot points, please).
I'd like to load random OBJ objects with different geometries/center points and have them translated automatically to the scene origin. In other words, a 'hard coded' translate solution for a specific model won't work
This has got to be one of the most common scenarios for Three JS (basic 3d object viewer), so I'm surprised I can't find a definitive solution on SO.
Unfortunately there are a lot of older answers with deprecated functions, so I would really appreciate a new answer even if there are similar solutions out there.
Things I've tried
The code below fits the object nicely to the camera, but doesn't solve the translation/orbiting problem.
// fit camera to object
var bBox = new THREE.Box3().setFromObject(scene);
var height = bBox.size().y;
var dist = height / (2 * Math.tan(camera.fov * Math.PI / 360));
var pos = scene.position;
// fudge factor so the object doesn't take up the whole view
camera.position.set(pos.x, pos.y, dist * 0.5);
camera.lookAt(pos);
Apparently geometry.center() is good for translating an object's coordinates back to the origin (THREE.GeometryUtils.center has been replaced by geometry.center()), but I keep getting errors when trying to use it.
When loading OBJs, Geometry has now been replaced by BufferGeometry. I can't seem to cast the BufferGeometry into Geometry in order to use the center() function. Do I have to place this in the object traverse > child loop like so? This seems unnecessarily complicated.
geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
My code is just a very simple OBJLoader.
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
    object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = material;
        }
    } );
    scene.add(object);
});
(BTW first real question on SO so forgive any formatting / noob issues)
Why not object.geometry.center()?
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
    object.traverse( function ( child ) {
        if ( child instanceof THREE.Mesh ) {
            child.material = material;
            child.geometry.center();
        }
    } );
    scene.add(object);
});
OK figured this out, using some very useful functions from Meshviewer Master, an older Three JS object viewer.
https://github.com/ideesculture/meshviewer
All credit to Gautier Michelin for this code
https://github.com/gautiermichelin
After loading the OBJ, you need to do 3 things:
1. Create a Bounding Box based on the OBJ
boundingbox = new THREE.BoundingBoxHelper(object, 0xff0000);
boundingbox.update();
sceneRadiusForCamera = Math.max(
    boundingbox.box.max.y - boundingbox.box.min.y,
    boundingbox.box.max.z - boundingbox.box.min.z,
    boundingbox.box.max.x - boundingbox.box.min.x
) / 2 * (1 + Math.sqrt(5)); // golden number to beautify display
2. Setup the Camera based on this bounding box / scene radius
function showFront() {
    if (objectCopy !== undefined) objectCopy.rotation.z = 0;
    controls.reset();
    camera.position.z = 0;
    camera.position.y = 0;
    camera.position.x = sceneRadiusForCamera;
    camera.lookAt(scene.position);
}
(the mesh viewer code also contains functions for viewing left, top, etc)
3. Reposition the OBJ to the scene origin
Like any centering exercise, the position is then the width and height divided by 2
function resetObjectPosition() {
    boundingbox.update();
    size.x = boundingbox.box.max.x - boundingbox.box.min.x;
    size.y = boundingbox.box.max.y - boundingbox.box.min.y;
    size.z = boundingbox.box.max.z - boundingbox.box.min.z;
    // Repositioning object
    objectCopy.position.x = -boundingbox.box.min.x - size.x/2;
    objectCopy.position.y = -boundingbox.box.min.y - size.y/2;
    objectCopy.position.z = -boundingbox.box.min.z - size.z/2;
    boundingbox.update();
    if (objectCopy !== undefined) objectCopy.rotation.z = 0;
}
From my understanding of your question, you want objects that are added to the scene to sit at the origin of the camera view. I believe the common way of building an object-viewer solution is to add camera controls to your camera (usually THREE.OrbitControls) and to specify the camera's target as the object you want to focus on. This puts the focused object in the center, and the camera's rotation and movement will be based around that object.
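A short sketch of that idea (variable names assumed, and OrbitControls has to be included separately): compute the loaded object's bounding-box centre and point the controls at it.
var controls = new THREE.OrbitControls(camera, renderer.domElement);
objLoader.load('BasketballNet_Skull.obj', function (object) {
    scene.add(object);
    var box = new THREE.Box3().setFromObject(object);
    var center = box.getCenter(new THREE.Vector3());
    controls.target.copy(center); // orbit around the object's centre
    controls.update();
});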

THREE.js - add a THREE.SkinnedMesh to a custom skeleton structure

I'm trying to define a model of the human body in THREE.js using the classes THREE.Bone, THREE.Skeleton and THREE.SkinnedMesh.
I defined a custom skeleton structure made of 12 body parts, each of which is a THREE.Bone instance, and used the .add() method to define parent / child relationships among them. Finally, I created a standard THREE.Object3D as the body root that is parent of the full skeleton.
Posting only part of the structure for conciseness:
// create person object
var body_root = new THREE.Object3D()
// create torso
var torso = new THREE.Bone();
torso.id = 1;
torso.name = "torso";
x_t = 0;
y_t = 0;
z_t = 0;
torso.position.set(x_t,y_t,z_t);
x_alpha = 0 * Math.PI;
y_alpha = 0 * Math.PI;
z_alpha = 0 * Math.PI;
torso.rotation.set(x_alpha,y_alpha,z_alpha);
// create right arm
var right_arm = new THREE.Bone();
right_arm.id = 2;
right_arm.name = "right_arm";
x_t = -TORSO_WIDTH / 2;
y_t = TORSO_HEIGHT;
z_t = 0;
right_arm.position.set(x_t,y_t,z_t);
x_alpha = 0 * Math.PI;
y_alpha = 0 * Math.PI;
z_alpha = 0 * Math.PI;
right_arm.rotation.set(x_alpha,y_alpha,z_alpha);
// add right_arm as child of torso
torso.add( right_arm );
This works just fine, and after loading the page I can access the model and traverse it correctly through the console.
However, when I try to render the skeleton in the scene things get tricky.
1. How can I add a THREE.SkinnedMesh for a custom skeleton structure?
In the documentation (check the source code), a CylinderGeometry and a SkinnedMesh are created for all the bones jointly:
var geometry = new THREE.CylinderGeometry( 5, 5, 5, 5, 15, 5, 30 );
var mesh = new THREE.SkinnedMesh( geometry, material );
and then the bone structure is bound:
// See example from THREE.Skeleton for the armSkeleton
var rootBone = armSkeleton.bones[ 0 ];
mesh.add( rootBone );
// Bind the skeleton to the mesh
mesh.bind( armSkeleton );
This works perfectly for the simple example in the documentation (5 bones, each one the parent of the next), but how can I adapt this example to a more complex structure in which some bones have multiple children? And how can I implement bones with different geometries? For instance, I would like to implement joints like the shoulder and elbow as spheres (for which I can only change the rotation) and body parts like the arm and forearm as cylinders with different base radii.
Is it possible to define a SkinnedMesh for each joint independently and for a more complex structure than the one in the example? If so, how do you link them all together?
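For what it's worth, a rough sketch of the binding pattern (not a confirmed solution; the geometry is arbitrary and the skin indices/weights are simplified so every vertex follows a single bone):
var bones = [torso, right_arm /* , ...the remaining bones */];
var skeleton = new THREE.Skeleton(bones);
var geometry = new THREE.CylinderGeometry(5, 5, 30, 8);
for (var i = 0; i < geometry.vertices.length; i++) {
    // weight every vertex fully to bone 0, just to demonstrate the binding
    geometry.skinIndices.push(new THREE.Vector4(0, 0, 0, 0));
    geometry.skinWeights.push(new THREE.Vector4(1, 0, 0, 0));
}
var skinnedMesh = new THREE.SkinnedMesh(geometry, new THREE.MeshBasicMaterial({ skinning: true }));
skinnedMesh.add(bones[0]);   // add the root bone to the mesh
skinnedMesh.bind(skeleton);  // bind the whole skeleton, children included
Per-part meshes would follow the same pattern, with each mesh's skin indices pointing at the bones that part should follow.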
2. Can I add a THREE.SkeletonHelper to the scene without defining a skinned mesh but using only the bones?
Since I don't know the answer to question 1 I decided to simply try to render the skeleton structure. This is done in other examples (such as this one) by creating a THREE.SkeletonHelper instance and adding that to the scene.
I tried passing the body_root variable (instead of the SkinnedMesh) to the constructor and the helper is created, but not rendered.
helper = new THREE.SkeletonHelper( body_root );
helper.material.linewidth = 3;
scene.add( helper );
In order to visualize the helper, it apparently needs to be bound to a mesh, even if the mesh is not added directly to the scene; i.e., adding the line mesh.bind(armSkeleton) makes the skeleton helper visible.
Is it possible at all to visualize the skeleton helper without defining a mesh? If so, how?
NOTE on question 2:
I believe this should be possible, since the THREE.SkeletonHelper class internally defines its own geometry and material, so it should be renderable without needing the skeleton's mesh:
THREE.SkeletonHelper = function ( object ) {
    this.bones = this.getBoneList( object );
    var geometry = new THREE.Geometry();
    for ( var i = 0; i < this.bones.length; i ++ ) {
        var bone = this.bones[ i ];
        if ( bone.parent instanceof THREE.Bone ) {
            geometry.vertices.push( new THREE.Vector3() );
            geometry.vertices.push( new THREE.Vector3() );
            geometry.colors.push( new THREE.Color( 0, 0, 1 ) );
            geometry.colors.push( new THREE.Color( 0, 1, 0 ) );
        }
    }
    geometry.dynamic = true;
    var material = new THREE.LineBasicMaterial( { vertexColors: THREE.VertexColors, depthTest: false, depthWrite: false, transparent: true } );
    THREE.LineSegments.call( this, geometry, material );
    this.root = object;
    this.matrix = object.matrixWorld;
    this.matrixAutoUpdate = false;
    this.update();
};
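One thing that may be worth checking (speculative, not a confirmed fix): the helper copies the root's matrixWorld and only draws segments between bones whose parent is also a THREE.Bone, so the bone hierarchy has to be in the scene with up-to-date world matrices, and helper.update() has to run every frame:
scene.add(body_root);            // the bone hierarchy must be in the scene graph
scene.updateMatrixWorld(true);   // make sure world matrices are current
helper = new THREE.SkeletonHelper(body_root);
helper.material.linewidth = 3;
scene.add(helper);
function animate() {
    requestAnimationFrame(animate);
    helper.update();             // refresh the line segments from the bone positions
    renderer.render(scene, camera);
}
animate();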

Three.js lines normal to a sphere

My goal is to create an interactive Earth that has lines normal to the surface so that you can click on them and it pulls up pictures that my health care team has taken from around the world. I have the world completely coded (or more accurately someone else did it and I made a few small changes).
Below is the code for the Earth, which functions as expected. What I want to know is how to make lines normal to the surface and have them be clickable. It would be optimal if the lines faded and disappeared as they moved around to the back of the Earth while it rotated (or while the user rotated it), so that the lines on the side the user can't see fade out.
I thought about making an array of cities and having a location on the sphere associated with each one, but I'm not really sure how to do that. I am very new to Three.js and HTML/JS in general.
It may be helpful to know that I am using three.min.js, Detector.js, and TrackballControls.js.
Code so far is as follows:
(function () {
    var webglEl = document.getElementById('webgl');
    if (!Detector.webgl) {
        Detector.addGetWebGLMessage(webglEl);
        return;
    }
    var width = window.innerWidth,
        height = window.innerHeight;
    // Earth params
    var radius = 0.5,
        segments = 32,
        rotation = 6;
    var scene = new THREE.Scene();
    var uniforms, mesh, meshes = [];
    var camera = new THREE.PerspectiveCamera(45, width / height, 0.01, 1000);
    camera.position.z = 1.5;
    var renderer = new THREE.WebGLRenderer();
    renderer.setSize(width, height);
    scene.add(new THREE.AmbientLight(0x333333));
    var light = new THREE.DirectionalLight(0xffffff, 1);
    light.position.set(5, 3, 5);
    scene.add(light);
    var sphere = createSphere(radius, segments);
    sphere.rotation.y = rotation;
    scene.add(sphere);
    var clouds = createClouds(radius, segments);
    clouds.rotation.y = rotation;
    scene.add(clouds);
    var stars = createStars(90, 64);
    scene.add(stars);
    var controls = new THREE.TrackballControls(camera);
    webglEl.appendChild(renderer.domElement);
    render();
    function render() {
        controls.update();
        sphere.rotation.y += 0.0005;
        clouds.rotation.y += 0.0007;
        requestAnimationFrame(render);
        renderer.render(scene, camera);
    }
    function createSphere(radius, segments) {
        return new THREE.Mesh(
            new THREE.SphereGeometry(radius, segments, segments),
            new THREE.MeshPhongMaterial({
                map: THREE.ImageUtils.loadTexture('images/Color_Map.jpg'),
                bumpMap: THREE.ImageUtils.loadTexture('images/elev_bump_4k.jpg'),
                bumpScale: 0.005,
                specularMap: THREE.ImageUtils.loadTexture('images/water_4k.png'),
                specular: new THREE.Color('grey')
            })
        );
    }
    function createClouds(radius, segments) {
        return new THREE.Mesh(
            new THREE.SphereGeometry(radius + 0.003, segments, segments),
            new THREE.MeshPhongMaterial({
                map: THREE.ImageUtils.loadTexture('images/fair_clouds_4k.png'),
                transparent: true
            })
        );
    }
    function createStars(radius, segments) {
        return new THREE.Mesh(
            new THREE.SphereGeometry(radius, segments, segments),
            new THREE.MeshBasicMaterial({
                map: THREE.ImageUtils.loadTexture('images/galaxy_starfield.png'),
                side: THREE.BackSide
            })
        );
    }
}());
The hope is that it would look like this link but with Earth and not a building (http://3d.cl3ver.com/uWfsD?tryitlocation=3) [also click explore when you go there].
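For the normal-line part specifically, a hypothetical helper (the function name, the red colour, and the latitude/longitude convention are all made up for illustration) that converts latitude/longitude into a direction and builds a short line along it, parented to the sphere so it follows the Earth's rotation:
function addCityLine(sphere, radius, latDeg, lonDeg, length) {
    var lat = THREE.Math.degToRad(latDeg);
    var lon = THREE.Math.degToRad(lonDeg);
    // unit vector from the sphere's centre through the city
    var dir = new THREE.Vector3(
        Math.cos(lat) * Math.cos(lon),
        Math.sin(lat),
        Math.cos(lat) * Math.sin(lon)
    );
    var geometry = new THREE.Geometry();
    geometry.vertices.push(dir.clone().multiplyScalar(radius));          // on the surface
    geometry.vertices.push(dir.clone().multiplyScalar(radius + length)); // out along the normal
    var line = new THREE.Line(geometry, new THREE.LineBasicMaterial({ color: 0xff0000 }));
    sphere.add(line); // parented to the sphere so it rotates with the Earth
    return line;
}
Clicks on the markers could then be detected with a THREE.Raycaster (small invisible spheres at the line tips are often easier to hit than thin lines).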
I built a quick demo that most faithfully represents what I think your needs are. It shows some images that seem to be attached to an Earth sphere through lines. It uses sprites to create those images (and the lines themselves, actually). I think it resembles quite well that demo of a building that you linked to. Here is the technique:
1. Images are added using GIMP to this template and saved as PNGs.
2. Those images are loaded as textures in the JS app.
3. The sprite is created, using the loaded texture.
4. The sprite is added to an Object3D and its position set to (0, 0, radiusOfTheEarthSphere).
5. The Object3D is added to the sphere and rotated until the center of the sprite lies at the position on Earth that you want it to rest in.
6. Each frame, a dot product between a vector from the center of the Earth to the camera and a vector from the center of the Earth to each sprite is used to calculate the sprite's opacity.
That equation in 6 is:
opacity = ((|cameraPosition - centerOfEarth| x |spriteCenter - centerOfEarth|) + 1) * 0.5
where "x" is dot product and "||" denotes normalization.
Also note that the sprite's center is different from its position because of the Object3D used as its parent; I calculate the center using the .localToWorld(vec) method.
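A small sketch of step 6 in code (variable names are assumed):
var camDir = new THREE.Vector3();
var spriteDir = new THREE.Vector3();
function updateSpriteOpacity(sprite, earth, camera) {
    camDir.copy(camera.position).sub(earth.position).normalize();
    // the sprite's centre in world space (it sits inside a parent Object3D)
    spriteDir.copy(sprite.parent.localToWorld(sprite.position.clone())).sub(earth.position).normalize();
    sprite.material.opacity = (camDir.dot(spriteDir) + 1) * 0.5;
}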
Please see the demo here: https://33983769c6a202d6064de7bcf6c5ac7f51fd6d9e.googledrive.com/host/0B9scOMN0JFaXSE93YTZRTE5XeDQ/test.html
It is hosted on my Google Drive and it may take some time to load. Three.js will give some errors in the console until all the textures are loaded, because I coded it quickly just to show you my implementation ideas.

Three.js Mirrored Normal Maps Flipped Channel

I have added a normal map to a model in Three.js that is mirrored down the middle. It looks like one of the channels (green perhaps?) is flipped on the mirrored side.
I have one ambient light, one directional headlight, and one spotlight. Here is the code that I use to make the material:
// Create a MeshPhongMaterial for the model
var material = new THREE.MeshPhongMaterial();
material.map = THREE.ImageUtils.loadTexture(texture_color);
// Wrapping modes
// THREE.RepeatWrapping = 1000;
// THREE.ClampToEdgeWrapping = 1001;
// THREE.MirroredRepeatWrapping = 1002;
material.map.wrapS = THREE.RepeatWrapping;
material.map.wrapT = THREE.MirroredRepeatWrapping;
if (texture_normal != null) {
    material.normalMap = THREE.ImageUtils.loadTexture(texture_normal);
    material.normalMap.wrapS = THREE.RepeatWrapping;
    material.normalMap.wrapT = THREE.MirroredRepeatWrapping;
}
material.wrapAround = true;
material.morphTargets = true;
material.shininess = 15;
material.specular = new THREE.Color(0.1, 0.1, 0.1);
material.ambient = new THREE.Color(0, 0, 0);
material.alphaTest = 0.5;
var mesh = new THREE.MorphAnimMesh( geometry, material );
// Turn on shadows
mesh.castShadow = true;
if (shadows) {
    mesh.receiveShadow = true;
}
scene.add( mesh );
I tried all of the different combinations of material.normalMap.wrapS and material.normalMap.wrapT but that didn't solve it (tried diffuse map too). What am I doing wrong?
Thank you!
Normal maps are dependent on the geometry, so you can't just mirror it and expect it to work like a diffuse texture would.
To make it work, you need to flip the normal map's red channel wherever the UVWs are mirrored on the model.
http://www.polycount.com/forum/showthread.php?t=116922
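If editing the texture by hand isn't an option, a hypothetical workaround is to invert the normal map's red channel on a canvas before creating the three.js texture. Note this flips the whole map, so it only helps if the mirrored half gets its own material/texture; the per-region flip described in the linked thread still needs an image editor or a split mesh.
function redFlippedTexture(image) {
    var canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = image.height;
    var ctx = canvas.getContext('2d');
    ctx.drawImage(image, 0, 0);
    var pixels = ctx.getImageData(0, 0, canvas.width, canvas.height);
    for (var i = 0; i < pixels.data.length; i += 4) {
        pixels.data[i] = 255 - pixels.data[i]; // invert the red channel
    }
    ctx.putImageData(pixels, 0, 0);
    var texture = new THREE.Texture(canvas);
    texture.needsUpdate = true;
    return texture;
}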
Turns out I was using an older version (1.2) of the Blender Three.js exporter. By switching to the latest version (1.5) of the exporter from the r67 repository, Three.js now correctly handles mirrored normal maps with its Phong shader out of the box.
Edit: The Phong Shader was still having issues with the flipped channel. I ended up using the "Normal Map Shader" (see the Three.js examples) and that gave me correct results. Unfortunately the Normal Map Shader doesn't work with Morph animations, only Skeletal.
