I have tried a few different lights now (Directional, Spot, Point), but none of them produce a nice shadow on MeshFaceMaterial objects. Instead, the entire MeshFaceMaterial object will become black.
My test website (view it with a grain of salt; it is constantly being changed).
How can I use lights to create shadows on MeshFaceMaterials? Does MeshFaceMaterial support shadows? The documentation says "Affects objects using MeshLambertMaterial or MeshPhongMaterial."
Here is sample code showing how I am loading the .json model.
loader.load( 'sample-concrete.js', function ( geometry, materials ) {
  mesh1 = new THREE.Mesh( geometry, new THREE.MeshFaceMaterial( materials ) );
  mesh1.rotation.x = -Math.PI / 2;
  scene.add( mesh1 );
} );
and here is a sample of the material from my .json file.
"materials": [
  {
    "DbgIndex"      : 0,
    "DbgName"       : "Steel",
    "colorDiffuse"  : [0.3059, 0.0471, 0.0471],
    "colorAmbient"  : [0.3059, 0.0471, 0.0471],
    "colorSpecular" : [1.0000, 1.0000, 1.0000],
    "transparency"  : 1.0,
    "specularCoef"  : 25.0,
    "vertexColors"  : false
  }
]
Thank you.
A MeshFaceMaterial is just a collection of materials. So if your materials variable contains MeshLambertMaterial or MeshPhongMaterial you should be fine. Shadows will be generated from a DirectionalLight or a SpotLight.
Just make sure your renderer has:
renderer.shadowMapEnabled = true;
your light has:
light.castShadow = true;
each of your meshes has:
mesh.castShadow = true;
and you have at least one object (a plane for example) where you do:
plane.receiveShadow = true;
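Taken together, the checklist can be wrapped in a small helper. This is only a sketch: `enableShadows` is a hypothetical name, and it does nothing beyond setting the flags listed above (note that in later three.js releases `renderer.shadowMapEnabled` became `renderer.shadowMap.enabled`).

```javascript
// Hypothetical helper that sets every flag the shadow pipeline needs.
// It only assigns properties, so it works on any objects that expose
// them (three.js renderer/light/mesh instances in practice).
function enableShadows(renderer, light, casters, receiver) {
  renderer.shadowMapEnabled = true; // renderer.shadowMap.enabled in later releases
  light.castShadow = true;
  casters.forEach(function (mesh) {
    mesh.castShadow = true;
  });
  receiver.receiveShadow = true;
}
```

Since it only assigns properties, you can call it once after creating the renderer, light, and meshes, e.g. `enableShadows( renderer, spotLight, [ mesh1 ], plane );`.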
How should I make an aoMap work for a normal THREE.Geometry? Is there a demo?
var uvs = geometry.attributes.uv.array;
geometry.addAttribute('uv2', new THREE.BufferAttribute(uvs, 2));
The code above is for BufferGeometry.
An aoMap requires the 2nd set of UVs. You can create a 2nd set of UVs by duplicating the first set if you want.
This is how to do it for Geometry:
geometry.faceVertexUvs[ 1 ] = geometry.faceVertexUvs[ 0 ];
And this is how to do it for BufferGeometry:
var uvs = geometry.attributes.uv.array;
geometry.addAttribute( 'uv2', new THREE.BufferAttribute( uvs, 2 ) );
... or more simply:
geometry.attributes.uv2 = geometry.attributes.uv;
three.js r.88
I have a problem with SpotLight. I was using r73 and had 50 simple spotlights without shadows, etc.; it worked without problems, still 60 fps even on mobile.
Now I have moved to r84 (the problem occurs above r73), and the spotlights are much better quality but also drop my frame rate. I know there were some changes in r74 that added the penumbra option; I don't really understand how I can turn the quality down.
In the fiddle you don't see the quality changes; that doesn't matter, but the frame rate will drop.
So my question: is it possible to set up the spotlights in a way that still gives me 60 fps?
The slowdown occurs only when the mesh (the floor) is big enough.
var spotLightSize = 50;
var spotLight = [];
var geometry = new THREE.BoxGeometry( 500, 1, 500 );
var material = new THREE.MeshPhongMaterial( { color: "blue" } );
var floor = new THREE.Mesh( geometry, material );
var renderer = new THREE.WebGLRenderer( { precision: "lowp", alpha: true } );

for ( var i = 0; i < spotLightSize; i++ ) {
  spotLight.push( new THREE.SpotLight( "green", 2, 20, 0.1, 0, 1 ) );
  spotLight[ spotLight.length - 1 ].position.set( 0, 5, 0 );
  scene.add( spotLight[ spotLight.length - 1 ] );
  var spotLightHelper = new THREE.SpotLightHelper( spotLight[ spotLight.length - 1 ] );
  scene.add( spotLightHelper );
}
http://jsfiddle.net/killerkarnikel/hyqgjLLz/19/
I have a very simple CubeCamera/reflection example (lifted largely from Stemkoski's reflection example). The code looks like this:
var sphereGeom = new THREE.SphereGeometry( 2, 32, 32 );
mirrorSphereCamera = new THREE.CubeCamera( 0.1, 5000, 512 );
scene.add( mirrorSphereCamera );
var mirrorSphereMaterial = new THREE.MeshBasicMaterial( { envMap: mirrorSphereCamera.renderTarget } );
mirrorSphere = new THREE.Mesh( sphereGeom, mirrorSphereMaterial );
mirrorSphere.position.set(0, 2, 0);
mirrorSphereCamera.position = mirrorSphere.position;
scene.add( mirrorSphere );
The render code looks like this:
mirrorSphere.visible = false;
mirrorSphereCamera.updateCubeMap( renderer, scene );
mirrorSphere.visible = true;
renderer.render( scene, camera );
With three.js r60 it works fine, though I get these warnings:
[.Offscreen-For-WebGL-0x7f80b4013200]RENDER WARNING: there is no texture bound to the unit 0
[.Offscreen-For-WebGL-0x7f80b4013200]RENDER WARNING: there is no texture bound to the unit 0
[.Offscreen-For-WebGL-0x7f80b4013200]RENDER WARNING: there is no texture bound to the unit 0
With three.js r75 I get this error:
WebGL: INVALID_OPERATION: bindTexture: textures can not be used with multiple targets
And with the latest build, r82, I get these warnings:
THREE.WebGLPrograms.getTextureEncodingFromMap: don't use render targets as textures. Use their .texture property instead.
THREE.WebGLRenderer.setTextureCube: don't use cube render targets as textures. Use their .texture property instead.
In neither r75 nor r82 does the reflection work. And despite the warning from r82, the example at threejs.org still uses renderTarget directly (AFAICT).
I'm still investigating, but any suggestions are welcome.
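For what it's worth, the r82 warning text itself suggests the likely fix: reference the render target's `.texture` property when building the material. A sketch based only on that message (not verified against every intermediate release):

```javascript
// r75+: pass the render target's .texture as the envMap,
// rather than the render target object itself.
var mirrorSphereMaterial = new THREE.MeshBasicMaterial( {
  envMap: mirrorSphereCamera.renderTarget.texture
} );
```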
I'm a beginner with three.js, and I am trying to define two materials on one single object and alternate between them using a visibility flag, but with no success.
Is there another way, or can this be done?
var materials = [
  new THREE.MeshPhongMaterial( { color: 0x00ff00, visible: true, shininess: 1 } ),
  new THREE.MeshPhongMaterial( { color: 0xff0000, visible: false, shininess: 1 } )
];
obj = THREE.SceneUtils.createMultiMaterialObject( geometry, materials );
scene.add( obj );
scene.traverse( function ( node ) {
  if ( node instanceof THREE.Mesh ) {
    node.visible = !node.visible;
  }
} );
I will eventually apply this to all objects in the scene; that's why I'm using scene.traverse.
It looks like you're trying to apply visibility to the materials, yet you're checking the meshes during your traverse. Remove the visible: true/false flags from your material definitions, and add the following line:
obj= THREE.SceneUtils.createMultiMaterialObject( geometry, materials );
obj.children[1].visible = false; // add this line
scene.add( obj);
This will apply visibility = false to the second mesh created by createMultiMaterialObject. Your traverse will then correctly flip the visibility of the meshes.
As you get to know THREE.js better, you'll want to look into THREE.MultiMaterial and geometry groups for applying multiple materials to a single mesh.
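The traverse-based flip can also be written as a tiny helper that operates directly on the parent returned by `createMultiMaterialObject`. `swapMaterialVisibility` is a hypothetical name; it simply inverts each child's `visible` flag:

```javascript
// createMultiMaterialObject returns a parent Object3D with one child
// mesh per material; inverting each child's visible flag swaps which
// material is shown (assuming exactly one child starts out visible).
function swapMaterialVisibility(obj) {
  obj.children.forEach(function (child) {
    child.visible = !child.visible;
  });
}
```

Because it only touches the children of the object you pass in, it avoids flipping unrelated meshes elsewhere in the scene.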
I have gone through the example here.
var depthShader = THREE.ShaderLib[ "depthRGBA" ];
var depthUniforms = THREE.UniformsUtils.clone( depthShader.uniforms );
depthMaterial = new THREE.ShaderMaterial( { fragmentShader: depthShader.fragmentShader, vertexShader: depthShader.vertexShader, uniforms: depthUniforms } );
depthMaterial.blending = THREE.NoBlending;
// postprocessing
composer = new THREE.EffectComposer( Renderer );
composer.addPass( new THREE.RenderPass( Scene, Camera ) );
depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, { minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter, format: THREE.RGBAFormat } );
var effect = new THREE.ShaderPass( THREE.SSAOShader );
effect.uniforms[ 'tDepth' ].value = depthTarget;
effect.uniforms[ 'size' ].value.set( window.innerWidth, window.innerHeight );
effect.uniforms[ 'cameraNear' ].value = Camera.near;
effect.uniforms[ 'cameraFar' ].value = Camera.far;
effect.renderToScreen = true;
composer.addPass( effect );
That example looks pretty good, and the edges of the blocks are visible and highlighted.
In my code here, the edges are not as pronounced as in the example. Is there anything I am missing?
In order to get quality results with the SSAOShader, you need an accurate measure of depth in the depth buffer. As explained here, for a perspective camera, most of depth buffer precision is close to the near plane. That means you will get the best results if the object is located in the near part of the frustum.
So, by that argument, if your far plane is too close, then the object will be too close to the back of the frustum, and quality will be reduced.
On the other hand, if the far plane is too distant (as it is in your case), the object will be located in such a thin sliver of depth, that due to the precision of the depth buffer, there is not enough variability in depth across the object.
So you have to set your camera's near and far planes at values that give you the best results.
three.js r.75
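To make the "thin sliver" argument concrete, here is a small numeric sketch (plain math, not a three.js API; `windowDepth` and `linearSlice` are illustrative names):

```javascript
// Window-space (hyperbolic) depth in [0, 1] for eye-space distance z:
//   d(z) = (far / (far - near)) * (1 - near / z)
function windowDepth(z, near, far) {
  return ( far / ( far - near ) ) * ( 1 - near / z );
}

// Fraction of the *linear* [near, far] range that an object
// spanning z1..z2 occupies:
function linearSlice(z1, z2, near, far) {
  return ( z2 - z1 ) / ( far - near );
}

// With near = 10 and far = 100000, roughly 90% of the depth range
// is already spent on z between 10 and 100:
var dFront = windowDepth( 100, 10, 100000 );    // ≈ 0.90

// An object spanning z = 200..500 fills only ~0.3% of the linear
// range when far = 100000, but ~30% of it when far = 1000:
var thin = linearSlice( 200, 500, 10, 100000 ); // ≈ 0.003
var fat  = linearSlice( 200, 500, 10, 1000 );   // ≈ 0.303
```

With the same near plane, shrinking far from 100000 to 1000 gives the same object two orders of magnitude more of the usable depth range.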
It depends on your camera.far attribute. You've set it too high (100000).
Just set it to 1000 and you should get better results:
Camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 10, 1000);
Changing this will require you to move the camera closer to your scene; otherwise nothing will be visible at startup and you'll need to zoom in.
Camera.position.z = 200;
These changes worked fine for me.