Is a problem between DirectionalLightHelper and CameraHelper causing an issue with scene shadows? - javascript

I'm working on building up quite a complex static render in the browser with three.js and have gotten stuck early on, trying to produce correct shadows with a single THREE.DirectionalLight representing the sun in my scene. All the geometry (contained in another .js file) has shadows enabled. The green sphere is for debugging: it is translated to (50, 0, 50), the center of the plane, to mark the camera's look-at point and the location of DirectionalLight.target. The directional light position and main camera position are set correctly.
My theory on why the shadows aren't working is that the orthographic camera acting as the shadow camera is pointing in the wrong direction. I spent yesterday trying, and failing, to understand the behaviour of the directional light helper (the white line to the origin) and the shadow camera helper (on the right).
I'm assuming the correct orientation, and the one I'm aiming for, has both the directional light helper and the shadow camera helper aligned to the center of the plane. After a lot of research yesterday, my shadow camera still doesn't seem to pick up the light position / light target vector automatically. Why are they still anchored to the origin?
Does anyone have suggestions on how to fix the DirectionalLight.target in my scene? And why are the DirectionalLightHelper and CameraHelper inconsistent?
// Set up
const canvas = document.getElementById('canvus'); // named "canvas" so the { canvas } shorthand below resolves
const scene = new THREE.Scene();
const renderer = new THREE.WebGLRenderer({ canvas, antialias: true });
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFShadowMap;
//Camera
const camera = new THREE.PerspectiveCamera(50, window.innerWidth / window.innerHeight, 1, 500);
camera.position.set(200, 100, 100);
camera.lookAt(50, 0, 50);
// Lighting
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.8);
directionalLight.position.set(100, 200, 200);
directionalLight.target.position.set(50, 0, 50);
directionalLight.castShadow = true;
directionalLight.shadow.bias = 0.0001;
directionalLight.shadow.mapSize.width = 1024; // default
directionalLight.shadow.mapSize.height = 1024; // default
const view_n = 50;
directionalLight.shadow.camera = new THREE.OrthographicCamera(
  -view_n,
  view_n,
  view_n,
  -view_n,
  60,
  150
);
scene.add(directionalLight, directionalLight.target);
//helpers
const lighthelper = new THREE.DirectionalLightHelper(directionalLight, 10);
const camerahelper = new THREE.CameraHelper(directionalLight.shadow.camera);
scene.add(lighthelper);
scene.add(camerahelper);
//Main Render
createBasicGeometry(scene); // from geometry.js
createGroundPlane(scene); // from geometry.js
renderer.render(scene, camera);
Update 2020-1-5
I had initially tried setting the shadow camera's properties directly and had also found examples of people assigning a new orthographic shadow camera. As I'm motivated to get past this, and for thoroughness, I updated my code to reflect the suggestion, but unfortunately the problem persists. I've re-checked that every mesh has both object.receiveShadow = true and object.castShadow = true and uses MeshPhongMaterial. It's completely confounding why directionalLight.target.position.set(50, 0, 50) is not being respected. What is the cause of this behaviour?
// Updated Lighting
const view_n = 50;
directionalLight.castShadow = true;
directionalLight.shadow.bias = 0.0001;
directionalLight.shadow.camera.right = view_n;
directionalLight.shadow.camera.left = -view_n;
directionalLight.shadow.camera.top = view_n;
directionalLight.shadow.camera.bottom = -view_n;
directionalLight.shadow.camera.near = 60;
directionalLight.shadow.camera.far = 150;
directionalLight.shadow.mapSize.width = 1024; // default
directionalLight.shadow.mapSize.height = 1024; // default
scene.add(directionalLight, directionalLight.target);
When I dump the directionalLight I get the target position I expected, though it is still not aligned correctly in the scene, while the shadow camera position gives another strange result.
console.log(directionalLight.target.position);
//Vector3 {x: 50, y: 0, z: 50, isVector3: true}
console.log(directionalLight.shadow.camera.position);

directionalLight.shadow.camera = new THREE.OrthographicCamera(
  -view_n,
  view_n,
  view_n,
  -view_n,
  60,
  150
);
Please do not overwrite the camera reference of LightShadow.camera. Configure the directional light like so:
dirLight.castShadow = true;
dirLight.shadow.camera.top = view_n;
dirLight.shadow.camera.bottom = - view_n;
dirLight.shadow.camera.left = - view_n;
dirLight.shadow.camera.right = view_n;
dirLight.shadow.camera.near = 60;
dirLight.shadow.camera.far = 150;
Besides, shadow casting only works if all shadow-casting objects (like your boxes) have castShadow set to true. All shadow-receiving objects (like your floor) must have receiveShadow set to true.
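For illustration, a minimal hedged sketch of that per-mesh setup (box and ground here are hypothetical stand-ins for the meshes created in geometry.js):
const box = new THREE.Mesh(
  new THREE.BoxGeometry(10, 10, 10),
  new THREE.MeshPhongMaterial({ color: 0x00ff00 })
);
box.castShadow = true; // occluders must write into the shadow map
const ground = new THREE.Mesh(
  new THREE.PlaneGeometry(200, 200),
  new THREE.MeshPhongMaterial({ color: 0x808080 })
);
ground.rotation.x = -Math.PI / 2;
ground.receiveShadow = true; // the floor only needs to receive shadows
scene.add(box, ground);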

Perhaps this problem cropped up because I'm doing the render statically, without an animation loop, but it was solved by calling updateMatrixWorld on the directional light's target. (Unfortunately I wasn't able to get the shadow's CameraHelper to update, but at least the shadows are now working as expected.)
directionalLight.target.updateMatrixWorld();
scene.add(directionalLight);
scene.add(directionalLight.target);
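For reference, these are the update calls that exist for refreshing the related objects before a single static render (a hedged sketch; in my scene the CameraHelper still didn't refresh, but the calls themselves are real API):
directionalLight.updateMatrixWorld();
directionalLight.target.updateMatrixWorld();
directionalLight.shadow.camera.updateProjectionMatrix(); // after changing the frustum bounds
lighthelper.update();
camerahelper.update();
renderer.render(scene, camera);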

Related

Which axis is used for the near and far plane when using the perspective camera?

I have created an example project and read a great tutorial on three.js, but one thing is not described well when creating the perspective camera.
An object is created and added to the scene at the default location of (0, 0, 0), and the camera is moved back 10 units so the object is easily visible. I have specified a near plane of 0.1 and a far plane of 1000. I am not sure which axis these are specified on; whichever axis it is, none of the default object's coordinates are >= 0.1, yet the near and far planes supposedly require visible objects to lie between them.
Could someone please explain why my object is visible in the scene and which axis the near and far planes are measured along, or provide a useful link describing it, as I cannot find one that explains it well.
Code below if interested.
import * as THREE from 'three';
import 'bootstrap';
import css from '../css/custom_css.css';
let scene = new THREE.Scene();
/* Constants */
let WIDTH = window.innerWidth;
let HEIGHT = window.innerHeight;
/* Camera */
let field_of_view = 75;
let aspect_ratio = WIDTH/HEIGHT;
let near_plane = 0.1;
let far_plane = 1000;
let camera = new THREE.PerspectiveCamera(field_of_view, aspect_ratio, near_plane, far_plane);
// every object is initially created at ( 0, 0, 0 )
// we'll move the camera back a bit so that we can view the scene
camera.position.set( 0, 0, 10 );
/* Renderer */
let renderer = new THREE.WebGLRenderer({ antialias: true, canvas: my_canvas }); // my_canvas refers to the page's canvas element
renderer.setSize(WIDTH, HEIGHT);
renderer.setClearColor(0xE8E2DD, 1);
// Create the shape
let geometry = new THREE.BoxGeometry(1, 1, 1);
// Create a material, colour or image texture
let material = new THREE.MeshBasicMaterial({
  color: 0xFF0000,
  wireframe: true
});
// Cube
let cube = new THREE.Mesh(geometry, material);
scene.add(cube);
// Game Logic
let update = function() {
  cube.rotation.x += 0.01;
  cube.rotation.y += 0.005;
};
// Draw Scene
let render = function() {
  renderer.render(scene, camera);
};
// Run game loop, update, render, repeat
let gameLoop = function() {
  requestAnimationFrame(gameLoop);
  update();
  render();
};
gameLoop();
The near and far planes for a perspective camera are relative to the camera itself, they are not on one of the global scene axes.
In your case, the near plane is a plane 0.1 units away from the camera along the camera's central axis or "look" vector, and the far plane is a plane 1000 units away. Things rendered must be between these two planes inside the camera's "view frustum".
So in your example, since the camera is looking at the object and the object is 10 units away, it's within the view frustum and is therefore visible.
See this youtube video for a more visual representation: https://www.youtube.com/watch?v=KyTaxN2XUyQ
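To make that concrete, here is a small hedged check using the variables from the question above; it measures the cube's depth along the camera's view direction, which is what the near and far planes actually test:
let viewDir = new THREE.Vector3();
camera.getWorldDirection(viewDir); // (0, 0, -1) for this camera, which has no rotation
let toCube = cube.position.clone().sub(camera.position); // (0, 0, -10)
let depth = toCube.dot(viewDir); // 10 units in front of the camera
console.log(depth >= near_plane && depth <= far_plane); // true, so the cube is inside the frustum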

Lights with materials in three.js not working properly

In the end, all I need is a normal shadow, but using Spot / Directional lights with Lambert / Phong materials I get no proper result:
picture with examples
When I use a SpotLight with a Lambert material, the material doesn't react to the light (pic. 1, 2).
When I use a SpotLight with a Phong material, I get a patterned shadow rather than a smooth one (pic. 3, 4).
When I use a DirectionalLight with a Lambert / Phong material, I get a smooth but incorrect shadow (pic. 5 - 8).
I use these settings for shadows:
renderer.shadowMap.enabled = true;
renderer.shadowMapSoft = true;
renderer.shadowCameraNear = 3;
renderer.shadowCameraFar = camera.far;
renderer.shadowCameraFov = 50;
renderer.shadowMapBias = 0.0039;
renderer.shadowMapDarkness = 0.5;
renderer.shadowMapWidth = 1024;
renderer.shadowMapHeight = 1024;
And this for lights:
var ambientLight = new THREE.AmbientLight( 0x555555 );
scene.add(ambientLight);
and
var spotLight = new THREE.SpotLight( 0xffffff);
spotLight.position.set( 12, 22, -25 );
spotLight.castShadow = true;
scene.add(spotLight );
and
var directionalLight=new THREE.DirectionalLight( 0xffffff, 0.5 );
directionalLight.position.set( 12, 22, -25 );
directionalLight.castShadow = true;
scene.add(directionalLight);
Also, I use the same castShadow and receiveShadow properties in all of these examples.
If needed, the rest of the code can be viewed in the page source here:
Spot Light, Lambert Material
The code is the same for all of my examples, apart from the light / material combinations.
Realtime shadows in Three.js are tricky in general. Here are some basics to follow to improve your example.
Limit the shadow.camera-frustum:
spotLight.shadow.camera.near = 25;
spotLight.shadow.camera.far = 50;
spotLight.shadow.camera.fov = 30;
Increase the shadow.mapSize:
spotLight.shadow.mapSize.width = 2048;
spotLight.shadow.mapSize.height = 2048;
Use shadow.bias to reduce artefacts:
spotLight.shadow.bias = -0.003;
The result isn't perfect because now light seams inside the room are showing up. It requires more tweaking and trade-offs, but maybe it's good enough for your needs:
https://jsfiddle.net/wbrj8uak/8/
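Putting those three adjustments together, here is a minimal hedged sketch of the full light setup (the numbers are the ones from the snippets above and will likely need tweaking for your room; the renderer.shadowMap lines are roughly the modern counterpart of your renderer.shadowMapSoft setting):
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap; // soft PCF filtering
var spotLight = new THREE.SpotLight(0xffffff);
spotLight.position.set(12, 22, -25);
spotLight.castShadow = true;
spotLight.shadow.camera.near = 25; // tight frustum around the room
spotLight.shadow.camera.far = 50;
spotLight.shadow.camera.fov = 30;
spotLight.shadow.mapSize.width = 2048; // higher-resolution shadow map
spotLight.shadow.mapSize.height = 2048;
spotLight.shadow.bias = -0.003; // small negative bias against shadow acne
scene.add(spotLight);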
Just leaving a comment here regarding 2pha's updated example and why I'm restoring it:
setting the camera position results in a disappearing shadow inside the room. That would surely confuse the poster, who wants a shadow inside, which is why I left his code the way he supplied it.

Manipulate objects in the browser with three.js using the mouse

I have this piece of code (see below) that I used to draw a cube with three.js:
// revolutions per second
var angularSpeed = 0.0;
var lastTime = 0;
function animate() {
  // update
  var time = (new Date()).getTime();
  var timeDiff = time - lastTime;
  var angleChange = angularSpeed * timeDiff * 2 * Math.PI / 1000;
  cube.rotation.y += angleChange;
  lastTime = time;
  // render
  renderer.render(scene, camera);
  // request new frame
  requestAnimationFrame(animate);
}
// renderer
var container = document.getElementById("container");
var renderer = new THREE.WebGLRenderer();
renderer.setSize(container.offsetWidth, container.offsetHeight);
container.appendChild(renderer.domElement);
// camera
var camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 1000);
camera.position.z = 700;
// scene
var scene = new THREE.Scene();
// cube Length, Height, Width
var cube = new THREE.Mesh(new THREE.CubeGeometry(400, 200, 200), new THREE.MeshBasicMaterial({
  wireframe: true,
  color: '#ff0000'
}));
cube.rotation.x = Math.PI * 0.1;
scene.add(cube);
// start animation
animate();
Does anyone know whether it is possible to allow the user to change the size of the cube by dragging its edges with the mouse?
Check this jsfiddle. I reused the structure of draggableCubes, plus a few small changes:
to drag the vertices I created vertexHelpers (little spheres);
to avoid maths, the trick is to use an invisible plane, perpendicular to the camera, to drag your objects/vertices on. To see it in action, just set plane.visible = true;
now when we drag a vertexHelper, its distance to the center of the cube changes, so we just have to scale the cube by the same ratio.
Within the mouseMove listener's function it becomes:
if (SELECTED) {
  var intersects = raycaster.intersectObject(plane);
  // so we get the mouse 3D coordinates in intersects[0].point
  // clone() so we don't permanently shift the helper's or the intersection's vectors
  var previousDistance = SELECTED.position.clone().sub(cube.position).length();
  var increaseRatio = intersects[0].point.clone().sub(cube.position).length() / previousDistance;
  cube.scale.set(
    cube.scale.x * increaseRatio,
    cube.scale.y * increaseRatio,
    cube.scale.z * increaseRatio
  );
  // then update the vertexHelpers positions (copy the new vertices positions)
}
EDIT:
In your question you specifically ask to resize the cube by dragging its edges. I did not cover that in the example and did not think of it intuitively, but it can be done the same way.
However, since lineWidth is not implemented in ANGLE (the layer used on Windows to translate WebGL), it is not easy to pick lines that are only 1px wide. I remember a three.js example, which I could not find again, where a geometry is associated with the line so it looks outlined. Basically you could do it by creating cylinders as custom 'edge helpers' (I'm specifically not talking about THREE.EdgesHelper), and they would have to be resized each time the cube is. A rough sketch of the idea follows.
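For illustration only, here is a hedged sketch of that cylinder-based edge helper idea (makeEdgeHelper and its dimensions are made up for this example; the picking and resizing logic is left out):
// A thin cylinder laid along one edge of the cube so the edge is easy to raycast.
function makeEdgeHelper(start, end, radius) {
  var direction = new THREE.Vector3().subVectors(end, start);
  var geometry = new THREE.CylinderGeometry(radius, radius, direction.length(), 8);
  var edgeHelper = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ color: 0xffff00 }));
  // Cylinders are built along +Y; rotate that axis onto the edge direction.
  edgeHelper.quaternion.setFromUnitVectors(new THREE.Vector3(0, 1, 0), direction.clone().normalize());
  edgeHelper.position.copy(start).add(end).multiplyScalar(0.5); // midpoint of the edge
  return edgeHelper;
}
// Example: the top front edge of the 400 x 200 x 200 cube from the question.
scene.add(makeEdgeHelper(new THREE.Vector3(-200, 100, 100), new THREE.Vector3(200, 100, 100), 4));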
In your setup code, add an event listener for mousemove:
// Event Handlers
renderer.domElement.addEventListener('mousemove', onDocumentMouseMove, false);
Then in the event handler code, you can check which object is selected and adjust the scale parameter.
function onDocumentMouseMove(event) {
  ...
  var intersects = raycaster.intersectObjects(objects);
  if (intersects.length > 0) {
    var selected = intersects[0].object; // the mesh under the mouse
    // Do things to the scale parameter(s)... Just for demo purposes
    selected.scale.x = selected.scale.x + intersects[0].point.clone().sub(offset).x / 1000;
    selected.scale.y = selected.scale.y + intersects[0].point.clone().sub(offset).y / 1000;
    return;
  }
  ...
}
Since I am typing pseudo-code here, it is all too easy to make an error, so I have left a test version here for you to try: http://www.numpty.co.uk/cubedrag.html
As you can see, the size of the selected object changes in horrible ways as you drag the mouse. You have me interested, so I will look into making it proportional to the movement if I get more time.

Reduce distance blurring in three.js

I have a large plane with a texture map in three.js and I'm finding that the default settings I'm using cause too much blurring in the mid-distance. I want to increase the DOF so more of the floor material is in focus (especially along the right side).
http://i.imgur.com/JBYtFk6.jpg
Original: http://jsfiddle.net/5L5vxjkm/4/
Performance is not a factor, so anything that improves the texture fidelity and/or focus is acceptable, provided it works on the latest Firefox in Xvfb (i.e., using OS Mesa drivers).
I did attempt to adapt http://threejs.org/examples/webgl_postprocessing_dof.html but it isn't giving me the expected results (still too blurry):
With DOF Postprocessing: http://jsfiddle.net/u7g48bt2/1/
The abbreviated code is below (see jsFiddle link for complete source)
doRender = function() {
  renderer = new THREE.WebGLRenderer({ antialias: true, preserveDrawingBuffer: true });
  FOV = 60;
  camera = new THREE.PerspectiveCamera(FOV, WIDTH/HEIGHT, .1, 8000);
  camera.position.x = -100;
  camera.position.y = 300;
  camera.position.z = 1000;
  camera.lookAt(new THREE.Vector3(0, 300, 0)); // look down and center
  // Add Floor planes
  // FLOOR
  floorTexture.needsUpdate = true;
  var floorMaterial = new THREE.MeshPhongMaterial({ map: floorTexture, side: THREE.DoubleSide });
  var floorGeometry = new THREE.PlaneBufferGeometry(4*1024, 4*1024, 256, 256);
  var floor = new THREE.Mesh(floorGeometry, floorMaterial);
  floor.doubleSided = true;
  floor.rotation.x = Math.PI / 2;
  floor.rotation.z = Math.PI / 3.9; // increase to rotate CCW
  scene.add(floor);
  var moreFloor2 = floor.clone();
  moreFloor2.translateY(-4*1024);
  scene.add(moreFloor2);
}
window.onload = function() {
  // Enable cross-origin access to images
  THREE.ImageUtils.crossOrigin = '';
  floorTexture = THREE.ImageUtils.loadTexture('http://i.imgur.com/iEDVgsN.jpg?1', THREE.UVMapping, doRender);
};
Solution was simple in the end:
floorTexture.anisotropy = renderer.getMaxAnisotropy();
Which sets anisotropy to 16 I think.
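(Side note, hedged: on newer three.js releases the same query lives on renderer.capabilities; an equivalent of the line above would be:)
floorTexture.anisotropy = renderer.capabilities.getMaxAnisotropy();
floorTexture.needsUpdate = true; // needed if the texture has already been uploaded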
UPDATE: Works on FF for Windows but under Xvfb / Mesa renderer.maxAnisotropy returns 0. Any workarounds?
UPDATE 2: It LIES! Manually setting floorTexture.anisotropy to values up to 16 actually works, meaning the maxAnisotropy returned by three.js under Xvfb/Mesa is plain wrong. Therefore this solution does work after all, with a minor change:
floorTexture.anisotropy = 16;
UPDATE 3: My mistake! Anisotropic filtering was NOT working. The solution was to switch the Mesa backend driver to one that does support it:
DISPLAY=:5 LIBGL_ALWAYS_SOFTWARE=1 GALLIUM_DRIVER=softpipe firefox &
Many thanks to glennk on dri-devel#irc.freenode.org for this fix.

Three.js point light not working with large meshes

I have the following problem. When I use a Three.js point light like this:
var color = 0xffffff;
var intensity = 0.5;
var distance = 200;
position_x = 0;
position_y = 0;
position_z = 0;
light = new THREE.PointLight(color, intensity, distance);
light.position.set(position_x, position_y, position_z);
scene.add(light);
It works as expected when there is a "small" object (mesh) positioned close to the light in the scene. However, when there is a large object (let us say a floor):
var floorTexture = new THREE.ImageUtils.loadTexture( 'floor.jpg' );
floorTexture.wrapS = floorTexture.wrapT = THREE.RepeatWrapping;
floorTexture.repeat.set( 1, 1);
var floorMaterial = new THREE.MeshBasicMaterial( { map: floorTexture, side: THREE.DoubleSide } );
var floorGeometry = new THREE.PlaneGeometry(1000, 1000, 10, 10);
var floor = new THREE.Mesh(floorGeometry, floorMaterial);
floor.position.y = -0.5;
floor.rotation.x = Math.PI / 2;
scene.add(floor);
Then the light will not be shown on it. At first I thought it was because the floor's center is positioned further away from the point light, so the light could not reach it with the distance set to 200 (even though part of the floor is closer than that distance). So I tried increasing the distance - no luck.
There is a workaround: build the floor out of small parts. Then the point light works as expected again, but this approach drastically decreases the FPS due to the large number of "floor objects" that have to be rendered.
My guess is that I am missing something. I know there are other types of light which cover the whole scene, but I am trying to create a lamp, so I think I need a point light. But I might be wrong. Any help or hint on how to make this work would be appreciated.
MeshBasicMaterial does not support lights. Use MeshPhongMaterial.
MeshLambertMaterial also supports lights, but it is not advisable in your case for reasons explained here: Three.js: What Is The Exact Difference Between Lambert and Phong?.
three.js r.66
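A minimal sketch of that change, reusing the floor from the question (only the material class is swapped; texture and geometry are unchanged):
var floorTexture = THREE.ImageUtils.loadTexture('floor.jpg');
floorTexture.wrapS = floorTexture.wrapT = THREE.RepeatWrapping;
floorTexture.repeat.set(1, 1);
// MeshPhongMaterial reacts to the PointLight; MeshBasicMaterial ignores lights entirely
var floorMaterial = new THREE.MeshPhongMaterial({ map: floorTexture, side: THREE.DoubleSide });
var floor = new THREE.Mesh(new THREE.PlaneGeometry(1000, 1000, 10, 10), floorMaterial);
floor.position.y = -0.5;
floor.rotation.x = Math.PI / 2;
scene.add(floor);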
