Three.js Fog, max opacity - javascript

What I am attempting:
I am building a lightweight GIS application that includes topography. One thing I would like to add is atmospheric haze.
My current code:
(Please excuse my code, I have multiple scenes)
fogColor = new THREE.Color(0x7EC0EE);
this.scenes[name].background = fogColor;
this.scenes[name].fog = new THREE.Fog(fogColor, 250, 2000);
// alternatively:
this.scenes[name].fog = new THREE.FogExp2(fogColor, 0.001);
Problem encountered:
Both Fog and FogExp2 work well for the units of scale in my app when I am close to the ground. However, when moving the camera farther above the ground and looking down, eventually the earth turns 100% blue, as it's obscured by the fog setting.
My Question:
Is there a way to apply a max opacity to the fog?
I would like the topography to stay hazy at a distance but not be completely obscured by a solid color of fog. I was thinking I could calculate the furthest object in view and adjust the fog setting on every camera change, but I am not sure how, or whether I am overthinking this. I'd like to calculate fog based on the amount of "air" between the camera and the object, and never go over a certain opacity of fog. Is this done better in a shader?

There's no way to apply a max opacity to fog, but you could change the fog's near and far parameters on the fly. For example:
var origin = new THREE.Vector3(0, 0, 0);

function update() {
    var dist = camera.position.distanceTo(origin);
    fog.far = 2000 + dist;
}
I'm not sure what kind of units you're dealing with, so you might need to adjust the way you calculate dist. With this approach, the further you are from (0, 0, 0), the further away the fog will reach.
With FogExp2, you could try modifying the .density property.
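For example, a minimal sketch that scales density with camera altitude (the constants are assumptions you would tune for your units, and a single scene and camera are assumed):

function updateFog() {
    var altitude = camera.position.y; // assumes a y-up world
    // decay density with altitude, clamped so distant terrain stays hazy
    // but is never fully obscured
    scene.fog.density = Math.max(0.0002, 0.001 - altitude * 0.0000005);
}

Call this whenever the camera moves, much like the near/far approach above.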

Related

Three JS: Weird rendering issues in close planes

I'm using Three.js to render a few planes with textures, one on top of another.
The distance between the planes is 10 units (though I'm not sure exactly what these units correspond to).
All the planes use MeshBasicMaterial with the following configuration:
let frontMaterial = new THREE.MeshBasicMaterial( {
    map: getFrontCover(),
    side: THREE.FrontSide
} );
When the planes are placed at the regular distance of 10, I can see strange stripes rendered, like in the picture:
Assuming that the distance is mandatory, how can I solve this issue?
Thanks
There are a few things you can do.
Make the near and far settings of your perspective camera as tight-fitting to your content as possible.
For example, let's assume the camera is 2 units from the book and the book is 1 unit deep. In that case, setting your near and far to 0.5 and 3.5 would possibly solve the issue. In other words:
... new THREE.PerspectiveCamera(fieldOfView, aspect, 0.5 /* near */, 3.5 /* far */)
If those numbers are orders of magnitude off, you'll get this issue.
Use a logarithmic depth buffer
See this example: https://threejs.org/examples/?q=log#webgl_camera_logarithmicdepthbuffer
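For reference, it's a single flag passed when constructing the renderer:

var renderer = new THREE.WebGLRenderer({ logarithmicDepthBuffer: true });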
Set the material's polygon offset
You want to set this on the material of the mesh that is supposed to be behind:
material.polygonOffset = true;    // enable depth offsetting for this material
material.polygonOffsetFactor = 1; // positive values push this mesh's depth away from the camera
material.polygonOffsetUnits = 1;

Web Audio Spatialization based on view direction

I'm trying to modulate the volume of a sound based on the view direction between the camera and the sound source. So if you are looking straight at the sound source the volume is 100%, and if you turn away it is turned down.
Setting the built-in directionalCone, which maps to the Panner Audio API, is not what I want. That defines whether audio is enabled while the player is positioned inside the cone; I'd like it to work based on the view direction.
I have something working in A-Frame by taking the dot product of the camera view direction and the direction between the player and the audio clip. However, this is (for some reason) quite expensive; I'm wondering if there is some built-in functionality that I am overlooking.
tick: function () {
    if ( !this.sound.isPlaying ) return; // TODO: this is true even outside the spatial distance!

    var camFwd = this.camFwd;
    this.camera.object3D.getWorldPosition( camFwd );

    var dir = this.dir;
    this.el.object3D.getWorldPosition( dir );

    dir.subVectors(
        camFwd, // camera position
        dir     // element position
    ).normalize();

    this.camera.object3D.getWorldDirection( camFwd );
    var dot = THREE.Math.clamp( camFwd.dot( dir ), 0, 1 );
    // float dot = Mathf.dot(transform.forward, (camTrans.position - transform.position).normalized);

    this.setVolume( THREE.Math.lerp(
        this.data.minVolume,
        this.data.maxVolume,
        dot
    ) );
},
This gives the intended effect, but it shows up in the performance profiler as quite expensive. Especially getWorldDirection is costly for some reason, even though the hierarchy itself is simple.
Especially getWorldDirection is costly for some reason
Object3D.getWorldPosition() and Object3D.getWorldDirection() always force a recomputation of the object's world matrix. Depending on when tick() is executed, it can be sufficient to do this instead:
camFwd.setFromMatrixPosition( this.camera.object3D.matrixWorld );
dir.setFromMatrixPosition( this.el.object3D.matrixWorld );
This code just extracts the position from the world matrix without updating it first. You can use a similar approach for the direction vector, although the code is a bit more complex:
var e = this.camera.object3D.matrixWorld.elements;
camFwd.set( e[ 8 ], e[ 9 ], e[ 10 ] ).normalize(); // third column = the object's local +Z axis in world space
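Putting both together, the tick() above could look roughly like this (a sketch, assuming the world matrices were already updated for the current frame, which the renderer normally does during render()):

tick: function () {
    if ( !this.sound.isPlaying ) return;

    var camFwd = this.camFwd;
    var dir = this.dir;

    // read positions from the cached world matrices (no recomputation)
    camFwd.setFromMatrixPosition( this.camera.object3D.matrixWorld );
    dir.setFromMatrixPosition( this.el.object3D.matrixWorld );
    dir.subVectors( camFwd, dir ).normalize();

    // view direction: third column of the camera's world matrix
    var e = this.camera.object3D.matrixWorld.elements;
    camFwd.set( e[ 8 ], e[ 9 ], e[ 10 ] ).normalize();

    var dot = THREE.Math.clamp( camFwd.dot( dir ), 0, 1 );
    this.setVolume( THREE.Math.lerp( this.data.minVolume, this.data.maxVolume, dot ) );
},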

Ugly render on clouds

I'm trying to implement the code from this tutorial, but at a much larger scale (radius = 100000 units).
I don't know if the size matters, but on my earth render the clouds have a strange appearance.
As the tutorial does, I'm using two spheres and three textures (earth map, bump map, clouds).
Here is the result (it's worse when the clouds are closer):
The closer the clouds are to the planet's surface, the more visible this glitch is. If the clouds are sufficiently far away (which isn't realistic) the problem disappears completely.
What can I do?
Use a logarithmic depth buffer instead of the linear one. This is a very simple change: just enable logarithmicDepthBuffer when you create your THREE.WebGLRenderer, like so:
var renderer = new THREE.WebGLRenderer({ antialias: true, logarithmicDepthBuffer: true });
Here's an example you can have a look at:
http://threejs.org/examples/#webgl_camera_logarithmicdepthbuffer
Using polygonOffset as suggested by LJ_1102 is a possibility, but it shouldn't be necessary.
What you're experiencing is z-fighting due to insufficient depth buffer resolution.
You basically have three options to counteract this:
Write / use a multi-texture shader that renders all three textures on one sphere.
Increase the distance between the sphere surfaces, and/or decrease the distance between your near and far clipping planes.
Use polygonOffset and the POLYGON_OFFSET_FILL render state to offset the depth values written by your outer sphere, as sketched below. Read more about polygonOffset here.
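For the third option, a sketch (cloudMaterial is assumed to be the outer cloud sphere's material; negative values pull the written depth toward the camera so the clouds reliably win the depth test against the surface):

cloudMaterial.polygonOffset = true;     // enables POLYGON_OFFSET_FILL for this material
cloudMaterial.polygonOffsetFactor = -1; // negative = offset toward the camera
cloudMaterial.polygonOffsetUnits = -1;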

How do I 'wrap' a plane over a sphere with three.js?

I am relatively new to three.js and am trying to position and manipulate a plane object so that it lies over the surface of a sphere object (or any object, for that matter), so the plane takes the form of the object's surface. The intention is to be able to move the plane across the surface later on.
I position the plane in front of the sphere and step through the plane's vertices, casting a ray toward the sphere to detect the intersection with it. I then try to change the z position of those vertices, but it does not achieve the desired result. Can anyone give me some guidance on how to get this working, or suggest another method?
This is how I attempt to change the vertices (with an offset of 1 so the plane is visible 'on' the sphere surface):
planeMesh.geometry.vertices[vertexIndex].z = collisionResults[0].distance - 1;
Making sure to set the following before rendering:
planeMesh.geometry.verticesNeedUpdate = true;
planeMesh.geometry.normalsNeedUpdate = true;
I have a fiddle that shows where I am: here I cast my rays in z, and I do not get intersections (collisions) with the sphere, and cannot change the plane in the manner I wish.
http://jsfiddle.net/stokewoggle/vuezL/
You can rotate the camera around the scene with the left and right arrow keys (in Chrome, anyway) to see the shape of the plane. I have made the sphere see-through, as I find it useful to see the plane better.
EDIT: Updated fiddle and corrected description mistake.
Sorry for the delay, but it took me a couple of days to figure this one out. The reason the collisions were not working was that (as we suspected) the planeMesh vertices are in local space, which is essentially the same as starting at the center of the sphere, not where you're expecting. At first I thought a quick fix would be to apply the worldMatrix, like stemkoski did in the three.js collision example on his GitHub that I linked to, but that didn't end up working either, because the plane itself is defined only in x and y coordinates (up and down, left and right); no z (depth) information exists locally when you create a flat 2D planeMesh.
What ended up working is manually setting the z component of each vertex of the plane. You had originally wanted the plane to be at z = 201, so I moved that code inside the loop that goes through each vertex and manually set each vertex to z = 201. Now all the ray start positions were correct (globally), and a ray direction of (0, 0, -1) resulted in correct collisions.
var localVertex = planeMesh.geometry.vertices[vertexIndex].clone();
localVertex.z = 201;
One more thing: to make the plane-wrap fit the shape perfectly, instead of using (0, 0, -1) as each ray direction, I calculated each ray direction manually by subtracting each vertex from the sphere's center position and normalizing the resulting vector. Now the collisionResult intersection points are even more accurate.
var directionVector = new THREE.Vector3();
directionVector.subVectors(sphereMesh.position, localVertex);
directionVector.normalize();
var ray = new THREE.Raycaster(localVertex, directionVector);
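For context, the whole per-vertex loop then looks roughly like this (a sketch; the names and the z = 201 start plane come from the fiddle, and the vertex update inside the if-block is left as a placeholder rather than a guess at the exact formula):

for ( var vertexIndex = 0; vertexIndex < planeMesh.geometry.vertices.length; vertexIndex++ ) {
    var localVertex = planeMesh.geometry.vertices[ vertexIndex ].clone();
    localVertex.z = 201; // start each ray on the plane, in world space

    var directionVector = new THREE.Vector3();
    directionVector.subVectors( sphereMesh.position, localVertex );
    directionVector.normalize();

    var ray = new THREE.Raycaster( localVertex, directionVector );
    var collisionResults = ray.intersectObject( sphereMesh );
    if ( collisionResults.length > 0 ) {
        // ...update this vertex from collisionResults[ 0 ].point here...
    }
}
planeMesh.geometry.verticesNeedUpdate = true;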
Here is a working example:
http://jsfiddle.net/FLyaY/1/
As you can see, the planeMesh fits snugly on the sphere, kind of like a patch or a band-aid. :)
Hope this helps. Thanks for posting the question on three.js's GitHub page; I wouldn't have seen it here. At first I thought it was a bug in THREE.Raycaster, but in the end it was just user error (mine). I learned a lot about collision code from working on this problem, and I will be using it later down the line in my own 3D game projects. You can check out one of my games at: https://github.com/erichlof/SpacePong3D
Best of luck to you!
-Erich
Your ray start position is not good, probably because the vertex coordinates are local to the plane. You start the raycast from inside the sphere, so it never hits anything.
I changed the ray start position like this as a test and got 726 collisions:
var rayStart = new THREE.Vector3(0, 0, 500);
var ray = new THREE.Raycaster(rayStart, new THREE.Vector3(0, 0, -1));
Forked jsfiddle: http://jsfiddle.net/H5YSL/
I think you need to transform the vertex coordinates to world coordinates to get the position correctly; see the sketch below. That should be easy to figure out from the docs and examples.
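A sketch of that transform with the legacy Geometry API (localToWorld() applies the mesh's matrixWorld to a local-space vector):

planeMesh.updateMatrixWorld(); // make sure the world matrix is current
var worldVertex = planeMesh.localToWorld( planeMesh.geometry.vertices[ vertexIndex ].clone() );
var ray = new THREE.Raycaster( worldVertex, new THREE.Vector3( 0, 0, -1 ) );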

Rendering spheres (or points) in a particle system

I am using the Three.js library to display a point cloud in a web browser. The point cloud is generated once at start-up and no further points are added or removed, but it does need to be rotated, panned, and zoomed. I've gone through the tutorial about creating particles in three.js here.
Using the example, I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point cloud without using an image? Sphere geometry, for example.
The problem with the image is that when you have thousands of points they sometimes obscure each other around the edges. From what I can gather, the black region in a point's PNG file blocks the image immediately behind the current point (but it is transparent to points further behind).
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is: how do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient approach for my particular case, or should I just generate the particles and set their individual positions? As I mentioned, I only generate the points once, but then rotate, zoom, and pan them. (I used the TrackBall sample code to get the mouse events working.)
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;
var program = function ( context ) {
    context.beginPath();
    context.arc( 0, 0, 1, 0, PI2, true );
    context.closePath();
    context.fill();
};

var particle = new THREE.Particle( new THREE.ParticleCanvasMaterial( {
    color: Math.random() * 0x808008 + 0x808080,
    program: program
} ) );
Feel free to adapt the code for the WebGL renderer.
Another clever solution I've seen in the examples is using an encoded WebM video to store the data and passing it to a GLSL shader, which is rendered through a particle system in three.js.
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3 I saw that the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard; // drop (mostly) transparent fragments so they don't occlude points behind
gl_FragColor = outColor;
Adding this to the fragment shader fixed the problem for me.
It wasn't z-fighting, because some corners would randomly overlap distant particles. material.alphaTest = 0.5 didn't work, and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
You can get rid of the transparency-overlap problem of the underlying square structure by setting
depthTest: false
The problem then is that if you add additional objects to the scene, depth testing will fail and the point cloud will be rendered in front of the other objects, ignoring the actual order. To get around that you can additionally set
depthWrite: false
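Combined, the material setup could look like this (a sketch; THREE.PointsMaterial is an assumption, as older three.js versions call it PointCloudMaterial or ParticleBasicMaterial):

var material = new THREE.PointsMaterial( {
    map: circleTexture,  // assumed round sprite texture
    transparent: true,
    depthTest: false,    // hides the square-corner overlap between points
    depthWrite: false    // keeps the cloud from occluding later objects
} );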
