Light position in custom shader (ShaderMaterial)

I am developing a volume rendering app in WebGL, and the last thing I have to do is create a lighting model. I have everything prepared, but I am not able to get the light position in the scene. I am adding the light this way:
var light = new THREE.DirectionalLight(0xFFFFFF, 1);
light.position.set(0.5, 0.5, 0.1).normalize();
camera.add(light);
I attach the light to the camera because I need the light to stay fixed relative to the camera as it moves.
The problem is that I am using a ShaderMaterial (custom shader), and I am not able to find any uniform variables that represent the light position. I have read that I should set:
material.lights = true;
but it caused an error:
Uncaught TypeError: Cannot set property 'value' of undefined
I have tried adding a constant vector in the vertex shader, but I need to multiply it by the inverse view matrix (if I am right), and GLSL 1.0 doesn't support the inverse function. My idea is to send the inverse view matrix to the shader as a uniform, but I don't know where I can get the view matrix of the scene in JS.
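(Aside: in three.js the view matrix is camera.matrixWorldInverse, and its inverse is simply camera.matrixWorld, so it can be passed by hand. A minimal sketch, assuming your ShaderMaterial instance is named material and using uInvViewMatrix as a made-up uniform name:)
// Register the uniform once on the ShaderMaterial ('m4' is the r.57-era Matrix4 uniform type)
material.uniforms.uInvViewMatrix = { type: 'm4', value: new THREE.Matrix4() };
// Refresh it each frame, after the camera/controls have moved
function render() {
    camera.updateMatrixWorld();
    material.uniforms.uInvViewMatrix.value.copy( camera.matrixWorld );
    renderer.render( scene, camera );
}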
Thanks for any help. I have tried everything :( ...
Bye.

If you are going to add the light as a child of the camera and set material.lights = true, then you must add the camera as a child of the scene.
scene.add( camera );
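With a ShaderMaterial, material.lights = true also requires the standard light uniforms to be present in material.uniforms; their absence is a likely cause of the "Cannot set property 'value' of undefined" error. A minimal sketch of the r.57-era setup (your own shader sources and uniform names are assumed):
var uniforms = THREE.UniformsUtils.merge( [
    THREE.UniformsLib[ 'lights' ],
    { uOpacity: { type: 'f', value: 1.0 } } // a made-up example of your own uniforms
] );
var material = new THREE.ShaderMaterial( {
    uniforms: uniforms,
    vertexShader: vertexShaderSource,     // assumed to exist already
    fragmentShader: fragmentShaderSource,
    lights: true // now safe: the light uniforms exist for the renderer to fill in
} );
// The directional light data then arrives in built-in uniforms such as
// directionalLightDirection[ 0 ] and directionalLightColor[ 0 ] inside the shader.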
three.js r.57

If you're trying to project the light coordinates from model space to screen space using the camera, here's a function (courtesy of Thibaut Despoulain) that might help. Note that Matrix4 stores its values column-major in the .elements array (the old .n14/.n24/.n34/.n44 properties were removed from Matrix4), so the fourth column is read from elements[12..15]:
var projectOnScreen = function ( object, camera ) {
    var mat = new THREE.Matrix4();
    mat.multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld );
    mat.multiplyMatrices( camera.projectionMatrix, mat );
    var e = mat.elements;
    var c = e[ 15 ]; // w component, for the perspective divide
    var lPos = new THREE.Vector3( e[ 12 ] / c, e[ 13 ] / c, e[ 14 ] / c );
    lPos.multiplyScalar( 0.5 ); // map from NDC [-1, 1] ...
    lPos.addScalar( 0.5 );      // ... to [0, 1] screen space
    return lPos;
};
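For example, the result could then be fed to a screen-space light uniform each frame (uLightScreenPos is a hypothetical name):
material.uniforms.uLightScreenPos.value.copy( projectOnScreen( light, camera ) );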

Related

Parallax effect using three.js

I would like to build a parallax effect from a 2D image using a depth map, similar to this or this, but using three.js.
The question is: where should I start? Using just a PlaneGeometry with a MeshStandardMaterial renders my 2D image without parallax occlusion. Once I add my depth map as the displacementMap property I can see some sort of displacement, but it is very low-res. (Maybe displacement maps are not meant to be used for this?)
My first attempt
import * as THREE from "three";
import image from "./Resources/Images/image.jpg";
import depth from "./Resources/Images/depth.jpg";
[...]
const geometry = new THREE.PlaneGeometry(200, 200, 10, 10);
const material = new THREE.MeshStandardMaterial();
const spriteMap = new THREE.TextureLoader().load(image);
const depthMap = new THREE.TextureLoader().load(depth);
material.map = spriteMap;
material.displacementMap = depthMap;
material.displacementScale = 20;
const plane = new THREE.Mesh(geometry, material);
Or should I use a Sprite object, whose face always points to the camera? But how would I apply the depth map to it then?
I've set up a codesandbox with what I've got so far. It also contains an event listener for mouse movement and rotates the camera on movement, as it is a work in progress.
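(One note on the first attempt: displacementMap can only move existing vertices, so with 10 x 10 segments there are just 11 x 11 points to displace. A much finer grid should already look considerably less low-res:)
const geometry = new THREE.PlaneGeometry(200, 200, 512, 512); // more segments, smoother displacement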
Update 1
So I figured out that I seem to need a custom ShaderMaterial for this. After looking at pixi.js's implementation, I found out that it is based on a custom shader.
Since I have access to the source, all I need to do is rewrite it to be compatible with three.js. But the big question is: HOW?
Would be awesome if someone could point me in the right direction, thanks!
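For what it's worth, the core of the pixi.js filter is just a UV offset scaled by the per-pixel depth sample, which ports to a ShaderMaterial along these lines (a rough sketch, not the original code; the offset uniform, fed from mouse movement, and all names are assumptions):
const material = new THREE.ShaderMaterial({
  uniforms: {
    map: { value: spriteMap },       // the 2D image
    depthMap: { value: depthMap },   // the depth map
    offset: { value: new THREE.Vector2(0, 0) } // update from mouse movement
  },
  vertexShader: `
    varying vec2 vUv;
    void main() {
      vUv = uv;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform sampler2D map;
    uniform sampler2D depthMap;
    uniform vec2 offset;
    varying vec2 vUv;
    void main() {
      // shift the sampling coordinate proportionally to the depth at this pixel
      float depth = texture2D(depthMap, vUv).r;
      gl_FragColor = texture2D(map, vUv + offset * depth);
    }
  `
});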

three.js map mesh.geometry.attributes.uv

I am currently working on a 3D configurator.
I should be able to import a logo onto an FBX object, which normally already has UV coordinates.
The problem is: I have been struggling for 3 days, trying to import a texture onto a mesh, but I can't map it using its UV coordinates.
So, I have a texture with a logo.
When I map it onto a simple cube, no problem, it works:
But when I try to apply the same texture to my mesh:
The texture is cropped.
So I've been looking inside the mesh's JSON tree and I found this:
So there are UV coordinates, but they seem different from my cube's, because when I look at its JSON I don't find the same tree, which is (on the cube):
And finally, this is my code:
if ( myMesh.name == 'Logo' ) {
    // Texture
    var texture = new THREE.TextureLoader().load( 'img/logoTesla_Verre_green.jpg', function () {
        texture.needsUpdate = true;
        // Material
        var material = new THREE.MeshLambertMaterial( { map: texture, morphTargets: true } );
        material.needsUpdate = true;
        // Geometry for a test cube
        var geometry = new THREE.BoxGeometry( 40, 40, 40 );
        // Cube
        var cube = new THREE.Mesh( geometry, material );
        scene.add( cube );
        // Duplicate the logo mesh for testing
        var newGeometry = myMesh.geometry;
        var newMesh = new THREE.Mesh( newGeometry, material );
        newMesh.position.y = 100;
        newMesh.geometry.uvsNeedUpdate = true;
        scene.add( newMesh );
    } );
}
My question is: should I use the geometry.attributes.uv object to map my texture? If yes, how do I do that?
Or should I convert these UV coordinates to geometry.faceVertexUvs?
Please help me, I am totally lost :)
Never mind, it has been solved by exporting the .fbx again.
Now the mapping is working fine!
But I don't know why...
Thank you for your question and answer. I was having the same problem, where the custom FBX I imported was only taking the bottom-left pixel of the canvas as the color for the whole mesh. (I was using texture = new THREE.CanvasTexture(ctx.canvas); to get my texture.)
The issue for me was that the FBX had no UV mapping! How I solved it: I imported the FBX into Maya and opened the UV Editor (under the Modeling menu set, go to UV > UV Editor). In the UV Editor there is a Create section; I hit one of those options (I chose Cylinder) and then exported with the default FBX settings. I am very grateful this worked.
You can see the result of using a canvas context as a custom FBX texture here:
www.algorat.club/sweater
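As a quick diagnostic (a suggestion, not from the answers above), you can check in code whether an imported geometry carries UVs at all before assigning a mapped material:
// BufferGeometry keeps texture coordinates in the 'uv' attribute;
// undefined means none were exported with the model
if (myMesh.geometry.attributes.uv === undefined) {
    console.warn('Mesh "' + myMesh.name + '" has no UVs; the texture will sample a single texel');
}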

What is more cost effective: updating a mesh or removing and adding it from the scene?

I have a small web app that I've designed for viewing bathymetric data of the seafloor in Three.js. Basically I am using a loader to bring in JSON models of my extruded bathymetry into my scene, and allowing the user to rotate the model or click Next to load a new part of the seafloor.
All of my models have the same 2D footprint so are identical in two dimensions, only elevations and texture change from model to model.
My question is this: What is the most cost effective way to update my model?
Using scene.remove(mesh);, then calling my loader again to load a new model, and then adding it to the scene with scene.add(mesh);.
Updating the existing mesh by calling my loader to bring in material and geometry and then calling mesh.geometry = geometry;, mesh.material = material; and then mesh.geometry.needsUpdate = true;.
I've heard that updating is pretty intensive from a computational point of view, but all of the articles that I've read on this state that the two methods are almost the same. Is this information correct? Is there a better way to approach my code in this instance?
An alternative that I've considered is skipping the step where I create the model (in Blender) and instead using a displacement map to update the y coordinates of my vertices. Then to update I could push new vertices on an existing plane geometry before replacing the material. Would this be a sound approach? At the very least I think the displacement map would be a smaller file to load than a .JSON file. I could even optimize the display by loading a GUI element to divide the mesh into more or fewer divisions for high or low quality render...
I don't know off the top of my head exactly what happens under the hood, but from what I remember, I think these two are the same thing.
You aren't really updating the existing mesh. A mesh extends Object3D, so it just sits there, wiring together some geometry and some material.
mesh.geometry = geometry did not "update the mesh", or it did, but with new geometry (which may be the thing you are actually referring to as the mesh).
In other words, you always have your container, but when you replace the geometry with = geometry you set it up for all sorts of GL calls in the next THREE.WebGLRenderer.render() call.
Where that new geometry gets attached, be it an existing mesh or a new one, shouldn't matter at all. The geometry is the thing that will trigger the low-level WebGL calls like gl.bufferData().
// upload two geometries to the gpu on first render()
var meshA = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );
var meshB = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) );

// upload one geometry to the gpu on first render()
var bg = new THREE.BoxGeometry( 1, 1, 1 );
var meshA = new THREE.Mesh( bg );
var meshB = new THREE.Mesh( bg );
for ( var i = 0; i < someBigNumber; i ++ ) {
    var meshTemp = new THREE.Mesh( bg );
}
// doesn't matter that you have X meshes, you still have only one geometry

// 1 mesh, two geometries / "computations"
var meshA = new THREE.Mesh( new THREE.BoxGeometry( 1, 1, 1 ) ); // first computation - compute box geometry
scene.add( meshA );
renderer.render( scene, camera ); // upload box to the gpu
meshA.geometry = new THREE.SphereGeometry( 1 );
renderer.render( scene, camera ); // upload sphere to the gpu
THREE.Mesh seems to be the most confusing concept in three.js.
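Regarding the displacement-map idea at the end of the question: reusing one PlaneGeometry and rewriting its heights in place re-uploads only the position attribute rather than a whole new model. A minimal sketch, assuming the plane has been rotated so y is up and sampleHeight() is your (hypothetical) lookup into the loaded depth data:
const position = plane.geometry.attributes.position;
for (let i = 0; i < position.count; i++) {
    position.setY(i, sampleHeight(i)); // new elevation for this vertex
}
position.needsUpdate = true;           // flag just this attribute for re-upload
plane.geometry.computeVertexNormals(); // keep the lighting correct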

Disappearing Objects - Three.js CanvasRenderer

I am very stuck. I do not understand why my objects are disappearing with CanvasRenderer, while it works exactly as expected with WebGLRenderer. I need to display this on mobile devices and as such do not have access to a WebGL renderer.
I have tried overdraw: true, but this does not seem to make the missing objects reappear.
http://jsfiddle.net/xH9GD/3/
When I comment out the room, the boxes remain, but they get very mangled on my iPhone.
I understand the concept of Z-fighting, however I don't think this is occurring, as the z-position of each of the faces should be separate from the others.
floor = drawTopFacingWall(room.width, room.length );
wall1 = drawLeftFacingWall( room.length, room.depth );
wall2 = drawFrontFacingWall( room.width, room.depth );
wall3 = drawRightFacingWall( room.length, room.depth );
roof = drawBottomFacingWall( room.width, room.length );
wall4 = drawBackFacingWall( room.width, room.depth );
The "disappearing" geometry is caused by a limitation of CanvasRenderer due to the way it handles depth-sorting.
While WebGLRenderer sorts at the pixel level, CanvasRenderer sorts at the polygon level.
The best you can do is to increase the tessellation of your geometry.
var geometry = new THREE.PlaneGeometry( width, height, 10, 10 );
three.js r.66

ThreeJS camera.lookAt() has no effect, is there something I'm doing wrong?

In Three.js, I want a camera to be pointed at a point in 3D space.
For this purpose, I tried using the camera.lookAt function like so:
camera.lookAt(new THREE.Vector3(-100,-100,0));
However, I found out that the call has no effect whatsoever. It just does nothing at all. I tried changing the numbers in the vector, and I always get the same look on screen, when it should be changing.
I have just found that if I remove the THREE.TrackballControls from my code, camera.lookAt() works as it should. Is there something wrong with how I use THREE.TrackballControls? This is how I initialize them:
controls = new THREE.TrackballControls( camera, renderer.domElement );
controls.rotateSpeed = 10.0;
controls.zoomSpeed = 1.2;
controls.panSpeed = 0.2;
controls.noZoom = false;
controls.noPan = false;
controls.staticMoving = true;
controls.dynamicDampingFactor = 1.0;
var radius = 5;
controls.minDistance = radius * 1.1;
controls.maxDistance = radius * 100;
controls.keys = [ 65, 83, 68 ]; // [ rotateKey, zoomKey, panKey ]
And then in my render function I do:
function render() {
controls.update();
renderer.render(scene, camera);
}
Documentation on Three.js is pretty scarce, so I thought I'd ask here. Am I doing something wrong?
Looking at the source code of THREE.TrackballControls, I figured out that I can make the camera look where I want by setting trackballControls.target to the THREE.Vector3 I want it to look at, and then rerendering the scene.
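In code, that looks something like this (controls being the TrackballControls instance from the question):
controls.target.set( -100, -100, 0 ); // the point the camera should look at and orbit
controls.update();                    // applies the new target before the next render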
Yes, please beware... THREE.TrackballControls and THREE.OrbitControls override the camera.lookAt function, since you pass in your camera when you instantiate the controls. You might want to get rid of the controls and then perform camera.lookAt(), or tween your camera some other way, to verify that the controls are having an overriding effect on your camera. I googled for a while to find out why camera.lookAt() seemed to have no effect.
In my opinion, we are not supposed to mess with the original code, so I found a workaround to achieve the objective of looking at any particular point.
After having declared your "control" variable, simply execute these two lines of code:
// Assuming you know how to set the camera and myCanvas variables
control = new THREE.OrbitControls(camera, myCanvas);
// Later in your code
control.object.position.set(camX, camY, camZ);
control.target = new THREE.Vector3(targetX, targetY, targetZ);
Keep in mind that this will switch the center of focus to your new target; in other words, your new target will be the center of all camera rotations. Some parts will be difficult to look at, as you became familiar with manipulating the camera around the default center. Try zooming in as much as you can and you will get a sense of what I am saying.
Hope this helps.
I figured it out. To prevent THREE.TrackballControls or THREE.OrbitControls from overriding camera.lookAt upon initialization, you need to change the line that sets the control's target property so that it equals the sum of the camera.position vector and the camera.getWorldDirection() vector, instead of the current implementation, which uses a new THREE.Vector3() (defaulting to (0, 0, 0)).
So, for THREE.TrackballControls, change line 39 to:
this.target = new THREE.Vector3().addVectors( object.position, object.getWorldDirection() );
The same goes for THREE.OrbitControls, on line 36.
I actually haven't tested it on TrackballControls.js, but it does work on OrbitControls.js. Hope this helps.
Here's an alternative solution: create an object (e.g. a cube) with zero dimensions.
var cameraTarget = new THREE.Mesh( new THREE.CubeGeometry(0,0,0));
In the render function set the camera.lookAt to the position of the cameraTarget.
function render() {
camera.lookAt( cameraTarget.position );
renderer.render( scene, camera );
}
Then just move cameraTarget around as you wish.
I ran into the same problem and was able to make it work by using OrbitControls.target. Below is what I did after declaring a controller.
controller = new THREE.OrbitControls( camera, renderer.domElement );
controller.addEventListener( 'change', render ); // the listener must be a function, e.g. your render callback
controller.target = new THREE.Vector3( 0, 1, 0 );
