Shadow Catcher in three.js / shadow on transparent material

I need to cast a shadow on a boxMesh while the mesh itself should be invisible.
I've found a technique on the three.js GitHub issue tracker that apparently worked a few years ago but doesn't anymore; it involves creating a new shader.
Is there another way, or an updated version of that no-longer-working trick?

You can cast a shadow onto a mesh whose material is transparent by using THREE.ShadowMaterial. Use this pattern:
var material = new THREE.ShadowMaterial();
material.opacity = 0.5;
var mesh = new THREE.Mesh( geometry, material );
mesh.receiveShadow = true;
scene.add( mesh );
There is an example of its use among the official three.js examples.
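For completeness, ShadowMaterial only shows something if shadows are enabled end-to-end: on the renderer, on the light, and on the casting mesh. A minimal sketch of the full setup (the light, caster, and ground names here are illustrative, and scene is assumed to exist):
var renderer = new THREE.WebGLRenderer();
renderer.shadowMap.enabled = true; // shadows are off by default

var light = new THREE.DirectionalLight( 0xffffff, 1 );
light.position.set( 5, 10, 5 );
light.castShadow = true;
scene.add( light );

var caster = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ), new THREE.MeshStandardMaterial() );
caster.castShadow = true;
caster.position.y = 2; // keep the caster above the catcher plane
scene.add( caster );

// the invisible "shadow catcher": only the received shadow is drawn
var ground = new THREE.Mesh( new THREE.PlaneGeometry( 20, 20 ), new THREE.ShadowMaterial( { opacity: 0.5 } ) );
ground.rotation.x = - Math.PI / 2;
ground.receiveShadow = true;
scene.add( ground );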
three.js r.147

Related

three.js map mesh.geometry.attributes.uv

I am currently working on a 3D configurator.
So I need to be able to apply a logo to an FBX object, which should normally already have UV coordinates.
The problem is: I have been struggling for three days, trying to apply a texture to a mesh, but I can't map it using its UV coordinates.
So, I have a texture with a logo.
When I map it onto a simple cube, no problem, it works.
But when I try to apply the same texture to my mesh, the texture is cropped.
So I looked inside the mesh's JSON tree and found its UV coordinates there.
So there are UV coordinates, but they seem different from my cube's, because when I look at the cube's JSON I don't find the same tree structure.
And finally, this is my code :
if ( myMesh.name == 'Logo' ) {
    // Texture (note: the property is needsUpdate, not needUpdate)
    var texture = new THREE.TextureLoader().load( 'img/logoTesla_Verre_green.jpg', function () {
        texture.needsUpdate = true;
        // Material
        var material = new THREE.MeshLambertMaterial( { map: texture, morphTargets: true } );
        material.needsUpdate = true;
        // Geometry Cube
        var geometry = new THREE.BoxGeometry( 40, 40, 40 );
        // Cube
        var cube = new THREE.Mesh( geometry, material );
        scene.add( cube );
        // Duplicate the logo mesh for testing
        var newGeometry = myMesh.geometry;
        var newMesh = new THREE.Mesh( newGeometry, material );
        newMesh.position.y = 100;
        newMesh.geometry.uvsNeedUpdate = true;
        scene.add( newMesh );
    } );
}
My question is : Should I use the geometry.attributes.uv object to map my texture ? If yes, how to do that ?
Or should I convert these UV coordinates to geometry.faceVertexUvs?
Please, help me, I am totally lost :)
Never mind, it was solved by exporting the .fbx again.
Now the mapping works fine!
But I don't know why...
Thank you for your question and answer. I was having the same problem: my custom imported FBX was only taking the bottom-left pixel of the canvas as the color for the whole mesh. (I was using texture = new THREE.CanvasTexture( ctx.canvas ); to get my texture.)
The issue for me was that the FBX had no UV mapping! I solved it by importing the FBX into Maya and opening the UV Editor (in the Modeling menu mode, go to UV → UV Editor). In the UV Editor there was a Create section; I picked one of those options (I chose cylinder) and then exported with the default FBX settings. I am very grateful this worked.
You can see the result of using a canvas context as a custom FBX texture here:
www.algorat.club/sweater
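As a side note, you can verify in code whether an imported mesh actually has UVs before texturing it. A small sketch, assuming a BufferGeometry-based mesh as modern loaders produce (myMesh stands for the mesh your loader returned):
var uv = myMesh.geometry.attributes.uv;
if ( uv === undefined ) {
    console.warn( 'Mesh has no UV coordinates; a texture cannot be mapped onto it.' );
} else {
    console.log( 'UV coordinates:', uv.count ); // one 2D pair per vertex
}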

THREE.js - Graphical Glitch with an imported model

I'm experiencing a graphical glitch with an imported model while using JSONLoader.
I can't really explain it; you'll have to see it.
It may have something to do with the different materials and the camera POV.
You can find the plunk here:
http://plnkr.co/edit/0VjHiGNmWFHxdoMWC3GV?p=info
JSONLoader part of the code:
var loader = new THREE.JSONLoader();
loader.load( 'tv.js', function ( geometry, materials ) {
    var tv = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
    glScene.add( tv );
} );
a screenshot of the glitch
The "glitch" you are referring to is due to z-fighting.
Your camera's near plane is 0.01 and its far plane is 20000. Very small near-plane values leave little depth-buffer precision for the rest of the range, which leads to depth-sorting problems.
In your case, set your near plane to 1 or 10.
ref: http://www.opengl.org/wiki/Depth_Buffer_Precision.
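As an illustration (the exact values depend on your scene's scale, so treat these numbers as an assumption; aspect stands for your viewport aspect ratio):
// raising the near plane spreads depth-buffer precision over the visible range
var camera = new THREE.PerspectiveCamera( 45, aspect, 10, 20000 ); // near = 10 instead of 0.01

// or, on an existing camera:
camera.near = 10;
camera.updateProjectionMatrix(); // required after changing near/far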
three.js r.81

What is more cost effective updating a mesh or removing and adding from the scene?

I have a small web app that I've designed for viewing bathymetric data of the seafloor in Three.js. Basically I am using a loader to bring in JSON models of my extruded bathymetry into my scene, and allowing the user to rotate the model or click next to load a new part of the seafloor.
All of my models have the same 2D footprint, so they are identical in two dimensions; only elevations and textures change from model to model.
My question is this: What is the most cost effective way to update my model?
Using scene.remove( mesh );, then calling my loader again to load a new model, and then adding it to the scene with scene.add( mesh );.
Updating the existing mesh by calling my loader to bring in new material and geometry, then setting mesh.geometry = geometry; and mesh.material = material;, and finally flagging mesh.geometry.needsUpdate.
I've heard that updating is pretty intensive from a computational point of view, but all of the articles that I've read on this state that the two methods are almost the same. Is this information correct? Is there a better way to approach my code in this instance?
An alternative that I've considered is skipping the step where I create the model (in Blender) and instead using a displacement map to update the y coordinates of my vertices. Then to update I could push new vertices on an existing plane geometry before replacing the material. Would this be a sound approach? At the very least I think the displacement map would be a smaller file to load than a .JSON file. I could even optimize the display by loading a GUI element to divide the mesh into more or fewer divisions for high or low quality render...
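Something like this sketch is what I have in mind, using the current BufferGeometry API (heightData and segments are hypothetical here; in practice I'd read one elevation per vertex from the displacement image via a canvas):
var geometry = new THREE.PlaneGeometry( 100, 100, segments, segments );
var position = geometry.attributes.position;
for ( var i = 0; i < position.count; i ++ ) {
    position.setZ( i, heightData[ i ] ); // the plane lies in XY, so displace along Z
}
position.needsUpdate = true; // re-upload the vertices to the GPU
geometry.computeVertexNormals(); // recompute normals so lighting follows the new relief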
I don't know off the top of my head exactly what happens under the hood, but from what I remember, these two are the exact same thing.
You aren't really updating the existing mesh. A mesh extends Object3D, so it just sits there, wiring together some geometry and some materials.
mesh.geometry = geometry did not "update the mesh"; or rather it did, but with new geometry (which may be the thing you are actually referring to as the mesh).
In other words, you always keep your container, but when you replace the geometry with = geometry you set it up for all sorts of GL calls in the next THREE.WebGLRenderer.render() call.
Where that new geometry gets attached, be it an existing mesh or a new one, shouldn't matter at all. The geometry is the thing that triggers the low-level WebGL calls like gl.bufferData().
// upload two geometries to the GPU on first render()
var meshA = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );
var meshB = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );

// upload one geometry to the GPU on first render()
var bg = new THREE.BoxGeometry();
var meshA = new THREE.Mesh( bg );
var meshB = new THREE.Mesh( bg );
for ( var i = 0; i < someBigNumber; i ++ ) {
    var meshTemp = new THREE.Mesh( bg );
}
// doesn't matter that you have X meshes, you only have one geometry

// 1 mesh, two geometries / "computations"
var meshA = new THREE.Mesh( new THREE.BoxGeometry() ); // first computation - compute box geometry
scene.add( meshA );
renderer.render( scene, camera ); // upload box to the GPU
meshA.geometry = new THREE.SphereGeometry();
renderer.render( scene, camera ); // upload sphere to the GPU
THREE.Mesh seems to be the most confusing concept in three.js.
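One caveat regardless of which route you choose: three.js does not free GPU resources just because you drop a reference. If you replace geometry (or remove a mesh for good), dispose of the old data yourself. A sketch, assuming newGeometry and newMaterial came from your loader:
var oldGeometry = mesh.geometry;
var oldMaterial = mesh.material;
mesh.geometry = newGeometry;
mesh.material = newMaterial;
oldGeometry.dispose(); // frees the GPU buffers of the replaced geometry
oldMaterial.dispose(); // frees the material's compiled program (textures need their own dispose())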

Invert material image of sphere in three.js

I'm using a plugin that adds 360 / VR video to our video player. It does this by using Three.js to create a sphere, taking the video itself, and making it the material the sphere is built from. The viewport is then set inside the sphere to give the 360 view.
The problem I'm running into is that the material is placed on the sphere using THREE.DoubleSide (THREE.BackSide would also work, since we're only viewing it from inside the sphere), but the image is inverted because we are viewing it from the inside.
Is there a way to invert the image material that is placed on the sphere?
One way to create a spherical panorama, that is not inverted, is to use this pattern:
var geometry = new THREE.SphereBufferGeometry( 100, 32, 16 );
var material = new THREE.MeshBasicMaterial( { map: texture } );
var mesh = new THREE.Mesh( geometry, material );
mesh.scale.set( - 1, 1, 1 );
scene.add( mesh );
It is generally not advisable to set negative scale values in three.js, but in this case, since you are using MeshBasicMaterial which does not utilize normals, it is OK to do so.
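A variant with the same effect, used by the official three.js panorama example in recent versions, bakes the flip into the geometry instead of the mesh:
var geometry = new THREE.SphereBufferGeometry( 100, 32, 16 );
geometry.scale( - 1, 1, 1 ); // turn the sphere inside out once, at creation time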
three.js r.75

My object doesn't reflect the light in Three.js

I have some CubeGeometry-based meshes in a three.js scene, and all of them reflect the PointLight I'm using globally. But one of them, made by "hand" with plain THREE.Geometry (vertices and faces added in code), does not. It doesn't even have a color; the only way I can give it one is to set a THREE.Color on the "emissive" key of its MeshPhongMaterial.
The geometry is made dynamically by a JS function. I'm using this light:
pointLight = new THREE.PointLight( 0xFFFEF0, 1, 100000 );
pointLight.position = camera.position; // the light shares the camera's position object, so it follows the camera
scene.add( pointLight );
And I'm creating the mentioned mesh with this code:
var floor = new THREE.Mesh(
    ShelfArchitect.Utils.getFloorGeometry( walls ),
    new THREE.MeshPhongMaterial( materialParams )
);
Should I add something to materialParams, or what is the problem?
It sounds like the geometry made by "hand" is missing, or has incorrect, vertex normals.
You can do this:
geometry.computeFaceNormals();
geometry.computeVertexNormals();
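For reference, a minimal hand-built Geometry from that era of three.js; without the two compute calls at the end, MeshPhongMaterial has no normals to light against (the single triangle here is just an illustration):
var geometry = new THREE.Geometry();
geometry.vertices.push(
    new THREE.Vector3( 0, 0, 0 ),
    new THREE.Vector3( 10, 0, 0 ),
    new THREE.Vector3( 0, 10, 0 )
);
geometry.faces.push( new THREE.Face3( 0, 1, 2 ) );
geometry.computeFaceNormals(); // per-face normals for flat shading
geometry.computeVertexNormals(); // averaged per-vertex normals for smooth shading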
