I am currently working on a 3D configurator.
So I should be able to import a logo onto an FBX object, which normally already has UV coordinates.
The problem is: I have been struggling for 3 days now, trying to import a texture onto a mesh, but I can't map it using its UV coordinates.
So, I have a texture with a logo.
When I map it onto a simple cube, there is no problem, it works:
But when I try to apply the same texture to my mesh:
The texture is cropped.
So I've been looking inside the mesh's JSON tree and I found this:
So there are UV coordinates, but they seem different from my cube's because, when I look at its JSON, I don't find the same tree, which is (on the cube):
And finally, this is my code :
if ( myMesh.name == 'Logo' ) {
    // Texture (note: the flag is needsUpdate, not needUpdate)
    var texture = new THREE.TextureLoader().load( 'img/logoTesla_Verre_green.jpg', function () {
        texture.needsUpdate = true;
        // Material
        var material = new THREE.MeshLambertMaterial( { map: texture, morphTargets: true } );
        material.needsUpdate = true;
        // Geometry for a reference cube
        var geometry = new THREE.BoxGeometry( 40, 40, 40 );
        // Cube
        var cube = new THREE.Mesh( geometry, material );
        scene.add( cube );
        // Duplicate the logo mesh for testing
        var newGeometry = myMesh.geometry;
        var newMesh = new THREE.Mesh( newGeometry, material );
        newMesh.position.y = 100;
        newMesh.geometry.uvsNeedUpdate = true;
        scene.add( newMesh );
    } );
}
My question is: should I use the geometry.attributes.uv object to map my texture? If so, how do I do that?
Or should I convert these UV coordinates to geometry.faceVertexUvs?
Please help me, I am totally lost :)
Never mind, it has been solved by exporting the .fbx again.
Now the mapping is working fine!
But I don't know why...
Thank you for your question and answer. I was having the same problem, where the custom FBX I imported was only taking the bottom-left pixel of the canvas as the color for the whole mesh. (I was using texture = new THREE.CanvasTexture(ctx.canvas); to get my texture.)
The issue for me was that the FBX had no UV mapping! I solved it by importing the FBX into Maya and opening the UV Editor (in the Modeling menu set, go to UV > UV Editor). In the UV Editor there is a Create section; I hit one of those options (I chose Cylinder) and then exported with the default FBX settings. I am very grateful this worked.
You can see the result of using a canvas context as a custom FBX texture here:
www.algorat.club/sweater
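If you hit the same symptom (the whole mesh tinted with a single texel), a quick hedged check after loading is to look for a uv attribute on each mesh; this assumes the usual FBXLoader callback shape, and 'model.fbx' is a placeholder path:

var loader = new THREE.FBXLoader();
loader.load( 'model.fbx', function ( object ) {
    object.traverse( function ( child ) {
        // without UVs, a texture collapses to a single sampled texel
        if ( child.isMesh && child.geometry.attributes.uv === undefined ) {
            console.warn( 'No UVs on "' + child.name + '"; textures cannot be mapped onto it.' );
        }
    } );
} );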
Related
We want to create a 3D shoe-designing tool, where you can design patterns and upload them to the shoe.
I am trying to place an image on a Three.js material. I am able to update the map, but the texture is blurry. I am new to Three.js, so my concepts are not yet clear. I don't understand whether the aspect ratio is the issue or something else.
This is how I am loading the texture:
var texture_loader = new THREE.TextureLoader();
var texture = texture_loader.load( 'https://ik.imagekit.io/toesmith/pexels-photo-414612_D4wydSedY.jpg', function ( texture ) {
    texture.wrapS = texture.wrapT = THREE.RepeatWrapping;
    texture.offset.set( 0, 0 );
    texture.repeat.set( 1, 1 );
    vamp.material = new THREE.MeshPhongMaterial( {
        map: texture,
        color: new THREE.Color( '#f2f2f2' ),
        shininess: 20,
    } );
} );
This is what I am getting:
But the expected behavior should be:
If anyone could help, that would be great. Thanks
Here is the link to the Codepen code
The problem is that your UVs are occupying a very small area in texture coordinates. As they are now, it looks like your UVs are taking up this much room (see red area):
And that's why it gives the impression that your texture is blurry. What you need to do is make your UVs take up more space, like this:
There are two ways to achieve this.
Scale UVs up: Import your model into Blender, and change the UV mapping of the mesh to occupy more of the [0, 1] range.
Scale texture down: You could get creative with the texture.repeat property and use it to scale down your texture to match your existing UVs. Then you'd need to offset it so it's centered correctly. Something like:
texture.repeat = new THREE.Vector2(10, 10);
texture.offset = new THREE.Vector2(xx, yy);
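To make the second option concrete, here is a hedged sketch of the centering step. The repeat factor and the UV-island center (cx, cy) are assumptions you would read off your actual model, not values from the answer; three.js samples the map at uv * repeat + offset, which is what the offset formula below relies on:

var repeat = 10;          // assumed scale factor, as in the answer
var cx = 0.5, cy = 0.5;   // assumed center of the UV island in UV space
texture.wrapS = texture.wrapT = THREE.RepeatWrapping; // needed for offsets outside [0, 1]
texture.repeat.set( repeat, repeat );
// maps a UV island of width ~1/repeat centered at (cx, cy) onto the full texture
texture.offset.set( 0.5 - cx * repeat, 0.5 - cy * repeat );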
I'm experiencing a graphical glitch with an imported model while using JSONLoader.
I can't really explain it, you'll have to see it.
It may have something to do with the different materials and the camera POV.
You can find the plunk here:
http://plnkr.co/edit/0VjHiGNmWFHxdoMWC3GV?p=info
JSONLoader part of the code:
var loader = new THREE.JSONLoader();
loader.load( 'tv.js', function ( geometry, materials ) {
    var tv = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
    glScene.add( tv );
} );
a screenshot of the glitch
The "glitch" you are referring to is due to z-fighting.
Your camera near plane is 0.01 and far plane is 20000. Small values of the near plane can lead to depth-sorting precision problems.
In your case, set your near plane to 1 or 10.
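A minimal sketch of that fix, with the other camera parameters assumed from typical setups:

// raising the near plane concentrates depth-buffer precision where the scene is
var camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 20000 );

// or, on an existing camera:
camera.near = 1;
camera.updateProjectionMatrix(); // must be called after changing near/far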
ref: http://www.opengl.org/wiki/Depth_Buffer_Precision.
three.js r.81
I have a small web app that I've designed for viewing bathymetric data of the seafloor in Three.js. Basically I am using a loader to bring JSON models of my extruded bathymetry into my scene, allowing the user to rotate the model or click "next" to load a new part of the seafloor.
All of my models have the same 2D footprint so are identical in two dimensions, only elevations and texture change from model to model.
My question is this: What is the most cost effective way to update my model?
Using scene.remove(mesh);, then calling my loader again to load a new model and adding it to the scene with scene.add(mesh);.
Updating the existing mesh by calling my loader to bring in material and geometry, then setting mesh.geometry = geometry; and mesh.material = material; and flagging mesh.geometry.needsUpdate.
I've heard that updating is pretty intensive from a computational point of view, but all of the articles that I've read on this state that the two methods are almost the same. Is this information correct? Is there a better way to approach my code in this instance?
An alternative that I've considered is skipping the step where I create the model (in Blender) and instead using a displacement map to update the y coordinates of my vertices. Then to update I could push new vertices on an existing plane geometry before replacing the material. Would this be a sound approach? At the very least I think the displacement map would be a smaller file to load than a .JSON file. I could even optimize the display by loading a GUI element to divide the mesh into more or fewer divisions for high or low quality render...
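For reference, a hedged sketch of that displacement-map idea against the old Geometry API the JSON loader returns. heightImage and maxElevation are assumed names, and the image's pixel grid is assumed to match the plane's vertex grid (the plane's z becomes y once it is rotated flat):

// sample a grayscale height image and push elevations into an existing plane
var canvas = document.createElement( 'canvas' );
canvas.width = heightImage.width;
canvas.height = heightImage.height;
var ctx = canvas.getContext( '2d' );
ctx.drawImage( heightImage, 0, 0 );
var pixels = ctx.getImageData( 0, 0, canvas.width, canvas.height ).data;

var geometry = mesh.geometry; // a THREE.PlaneGeometry with a matching vertex grid
for ( var i = 0; i < geometry.vertices.length; i ++ ) {
    geometry.vertices[ i ].z = ( pixels[ i * 4 ] / 255 ) * maxElevation; // red channel as height
}
geometry.verticesNeedUpdate = true; // tells the renderer to re-upload positions
geometry.computeVertexNormals();    // re-light the displaced surface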
I don't know off the top of my head what exactly happens under the hood, but from what I remember, these two are the exact same thing.
You aren't updating the existing mesh. A Mesh extends Object3D, so it just sits there, wiring together some geometry and some materials.
mesh.geometry = geometry did not "update the mesh"; or rather it did, but with new geometry (which may be the thing you are actually referring to as the mesh).
In other words, you always keep your container, but when you replace the geometry with = geometry, you set it up for all sorts of GL calls in the next THREE.WebGLRenderer.render() call.
Where that new geometry gets attached, be it an existing mesh or a new one, shouldn't matter at all. The geometry is the thing that triggers the low-level WebGL calls like gl.bufferData().
// upload two geometries to the gpu on first render()
var meshA = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );
var meshB = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );

// upload one geometry to the gpu on first render()
var bg = new THREE.BoxGeometry( 1, 1, 1 );
var meshA = new THREE.Mesh( bg );
var meshB = new THREE.Mesh( bg );
for ( var i = 0; i < someBigNumber; i ++ ) {
    var meshTemp = new THREE.Mesh( bg );
}
// doesn't matter that you have X meshes, you only have one geometry

// 1 mesh, two geometries / "computations"
var meshA = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) ); // first computation - compute box geometry
scene.add( meshA );
renderer.render( scene, camera ); // upload box to the gpu
meshA.geometry = new THREE.SphereGeometry();
renderer.render( scene, camera ); // upload sphere to the gpu
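One cost the answer doesn't mention, added here as a hedged note: a replaced geometry keeps its buffers on the GPU until you dispose of it, so when swapping repeatedly you should free the old one:

// swap geometry and release the old one's GPU buffers
var oldGeometry = mesh.geometry;
mesh.geometry = newGeometry;  // uploaded on the next render() call
oldGeometry.dispose();        // frees the old gl buffers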
THREE.Mesh seems to be the most confusing concept in three.js.
I'm using a plugin that implements 360 / VR video into our video player. It does this by using Three.js to create a sphere and taking the video itself and making it the material the sphere is created out of. The viewport is then set inside of the sphere to give it the 360 view.
The problem I'm running into is that the material is placed on the sphere using THREE.DoubleSide (THREE.BackSide would also work since we're only viewing it from the inside of the sphere), but the image is inverted since we are viewing it from the inside.
Is there a way to invert the image material that is placed on the sphere?
One way to create a spherical panorama, that is not inverted, is to use this pattern:
var geometry = new THREE.SphereBufferGeometry( 100, 32, 16 );
var material = new THREE.MeshBasicMaterial( { map: texture } );
var mesh = new THREE.Mesh( geometry, material );
mesh.scale.set( - 1, 1, 1 );
scene.add( mesh );
It is generally not advisable to set negative scale values in three.js, but in this case, since you are using MeshBasicMaterial which does not utilize normals, it is OK to do so.
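An equivalent approach, seen in the three.js panorama examples, is to bake the flip into the geometry instead of the mesh transform; a sketch under the same setup:

var geometry = new THREE.SphereBufferGeometry( 100, 32, 16 );
geometry.scale( - 1, 1, 1 ); // invert on x so the faces point inward

var material = new THREE.MeshBasicMaterial( { map: texture } );
var mesh = new THREE.Mesh( geometry, material );
scene.add( mesh );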
three.js r.75
I have created a cube (skybox) that uses different materials for each side. There is no problem with that using MeshFaceMaterial:
var imagePrefix = "images-nissan/pano_";
var imageDirections = ["xpos", "xneg", "ypos", "yneg", "zpos", "zneg"];
var imageSuffix = ".png";
var skyGeometry = new THREE.BoxGeometry(1, 1, 1);
var materialArray = [];
for (var i = 0; i < 6; i++) {
    materialArray.push(new THREE.MeshBasicMaterial({
        map: THREE.ImageUtils.loadTexture(imagePrefix + imageDirections[i] + imageSuffix),
        side: THREE.BackSide
    }));
}
var skyMaterial = new THREE.MeshFaceMaterial(materialArray);
var skyBox = new THREE.Mesh(skyGeometry, skyMaterial);
skyBox.name = "interiorMesh";
scene.add(skyBox);
However, now I would like to add a material to one of the faces of the cube and combine the materials on this face of the cube.
So basically I would have one material on 5 faces and 2 materials on 1 face of the cube. I want to overlay the 'original' texture with another, transparent PNG so it covers only a specific part of the original image. Both images have the same dimensions; only the new one is partially transparent. Is it even possible to do with CubeGeometry? Or do I need to do it with planes? Any help greatly appreciated!
You can for sure change the material of one of the faces. You cannot use two materials for one face, though.
I would recommend creating an additional texture as the combination of the previous two, making it into a separate material, and assigning it to the sixth face of the cube when needed. If possible, merge those images beforehand in the graphics editor of your choice. If you can only do it at runtime, you will either have to use a canvas to merge them or a shader, as recommended by @beiller.
I wouldn't recommend transparent planes; transparency can be very tricky sometimes and render in weird ways.
something similar is discussed here - Multiple transparent textures on the same mesh face in Three.js
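For the runtime canvas route mentioned above, a hedged sketch; baseImage and overlayImage are assumed to be already-loaded images of the same dimensions, and materialArray / THREE.BackSide match the question's skybox code:

// merge the two images on a canvas, then use the canvas as the sixth face's texture
var canvas = document.createElement( 'canvas' );
canvas.width = baseImage.width;
canvas.height = baseImage.height;
var ctx = canvas.getContext( '2d' );
ctx.drawImage( baseImage, 0, 0 );
ctx.drawImage( overlayImage, 0, 0 ); // the png's alpha handles the blending

var mergedTexture = new THREE.Texture( canvas );
mergedTexture.needsUpdate = true; // canvas-backed textures need an explicit upload
materialArray[ 5 ] = new THREE.MeshBasicMaterial( { map: mergedTexture, side: THREE.BackSide } );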