I have gone through the example here:
var depthShader = THREE.ShaderLib[ "depthRGBA" ];
var depthUniforms = THREE.UniformsUtils.clone( depthShader.uniforms );
depthMaterial = new THREE.ShaderMaterial( {
    fragmentShader: depthShader.fragmentShader,
    vertexShader: depthShader.vertexShader,
    uniforms: depthUniforms
} );
depthMaterial.blending = THREE.NoBlending;
// postprocessing
composer = new THREE.EffectComposer( Renderer );
composer.addPass( new THREE.RenderPass( Scene, Camera ) );
depthTarget = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight, {
    minFilter: THREE.NearestFilter,
    magFilter: THREE.NearestFilter,
    format: THREE.RGBAFormat
} );
var effect = new THREE.ShaderPass( THREE.SSAOShader );
effect.uniforms[ 'tDepth' ].value = depthTarget;
effect.uniforms[ 'size' ].value.set( window.innerWidth, window.innerHeight );
effect.uniforms[ 'cameraNear' ].value = Camera.near;
effect.uniforms[ 'cameraFar' ].value = Camera.far;
effect.renderToScreen = true;
composer.addPass( effect );
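For completeness, the linked example also renders the scene's depth into depthTarget every frame before the composer runs; a sketch of that loop, using the same r.7x-era API and the variable names above:

// Draw the depth pre-pass, then run the composer (RenderPass + SSAO pass).
Scene.overrideMaterial = depthMaterial;          // render everything with the depth shader
Renderer.render( Scene, Camera, depthTarget );   // into the depth render target
Scene.overrideMaterial = null;                   // restore the original materials
composer.render();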
In the example this looks pretty good: the edges of the blocks are visible and highlighted. In my code here, however, the edges do not look like they do in the example. Is there anything I am missing?
In order to get quality results with the SSAOShader, you need an accurate measure of depth in the depth buffer. As explained here, for a perspective camera, most of the depth buffer's precision is concentrated close to the near plane. That means you will get the best results if the object is located in the near part of the frustum.
So, by that argument, if your far plane is too close, then the object will be too close to the back of the frustum, and quality will be reduced.
On the other hand, if the far plane is too distant (as it is in your case), the object occupies such a thin sliver of the depth range that, given the depth buffer's limited precision, there is not enough variability in depth across the object.
So you have to set your camera's near and far planes at values that give you the best results.
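For instance, here is one hedged way to tighten the frustum around the visible geometry (method names follow the current Box3 docs and may differ in very old releases; the exact near/far values are scene-dependent):

// Sketch: fit near/far around the scene so depth precision is spent
// where the geometry actually is.
var box = new THREE.Box3().setFromObject( Scene );
var size = box.getSize( new THREE.Vector3() ).length();
var dist = Camera.position.distanceTo( box.getCenter( new THREE.Vector3() ) );

Camera.near = Math.max( 1, dist - size ); // push the near plane out as far as possible
Camera.far  = dist + size;                // pull the far plane in as close as possible
Camera.updateProjectionMatrix();          // required after changing near/far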
three.js r.75
It depends on your camera.far attribute; you've set it too high (100000).
Set it to 1000 and you should get better results:
Camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 10, 1000);
Changing this will require you to move the camera closer to your scene; otherwise it won't be visible at startup and you'll have to zoom in.
Camera.position.z = 200;
These changes worked fine for me.
Related
I'm trying to use pose estimation coordinates to animate a rigged model in three.js. The pose estimation tech I'm using provides real-time x, y, z coordinates for a person in a video feed, and I'm trying to use those to move the 3D model accordingly. I used the code below (some of which I found in an answer to a related question) as a starting point...
let camera, scene, renderer, clock, rightArm;
init();
animate();
function init() {
camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.01, 10);
camera.position.set(2, 2, -2);
clock = new THREE.Clock();
scene = new THREE.Scene();
scene.background = new THREE.Color(0xffffff);
const light = new THREE.HemisphereLight(0xbbbbff, 0x444422);
light.position.set(0, 1, 0);
scene.add(light);
// model
const loader = new THREE.GLTFLoader();
loader.load('https://threejs.org/examples/models/gltf/Soldier.glb', function(gltf) {
const model = gltf.scene;
rightArm = model.getObjectByName('mixamorigRightArm');
scene.add(model);
});
renderer = new THREE.WebGLRenderer({
antialias: true
});
renderer.setPixelRatio(window.devicePixelRatio);
renderer.setSize(window.innerWidth, window.innerHeight);
renderer.outputEncoding = THREE.sRGBEncoding;
document.body.appendChild(renderer.domElement);
window.addEventListener('resize', onWindowResize, false);
const controls = new THREE.OrbitControls(camera, renderer.domElement);
controls.target.set(0, 1, 0);
controls.update();
}
function onWindowResize() {
camera.aspect = window.innerWidth / window.innerHeight;
camera.updateProjectionMatrix();
renderer.setSize(window.innerWidth, window.innerHeight);
}
//This was my attempt at deriving the rotation from two vector3's and applying it to the model
//storedresults is simply an array where I store the pose estimation data for a given position
//getPosition is just a helper function for getting the vector three for a specific position
function setRightArmRotation() {
if (rightArm) {
if (storedresults === undefined || storedresults.length == 0) {
return;
} else {
if (vectorarray.length < 2) {
vectorarray.push(getPosition(12));
} else {
vectorarray.pop();
vectorarray.push(getPosition(12));
var quaternion = new THREE.Quaternion();
// Note: setFromUnitVectors assumes both input vectors are already normalized.
quaternion.setFromUnitVectors(vectorarray[0], vectorarray[1]);
var matrix = new THREE.Matrix4();
matrix.makeRotationFromQuaternion(quaternion);
rightArm.applyMatrix4(matrix);
}
}
}
}
function animate() {
requestAnimationFrame(animate);
const t = clock.getElapsedTime();
if (rightArm) {
rightArm.rotation.z += Math.sin(t) * 0.005;
//setRightArmRotation()
}
renderer.render(scene, camera);
}
<script src="https://cdn.jsdelivr.net/npm/three@0.125.2/build/three.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three@0.125.2/examples/js/loaders/GLTFLoader.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three@0.125.2/examples/js/controls/OrbitControls.js"></script>
I also referred to this answer on finding rotations from two vectors but I haven't been successful in implementing it to achieve the desired results...
How to find rotation matrix between two vectors
I can get the Vector3 from the pose estimation tech easily, and I understand how most of what is in the jsfiddle works, but I can't seem to put it all together to get the desired result of having my 3D model 'mirror' the movement in my video using the pose estimation coords. All I can really get the model to do is 'thrash around'.
As I understand it I need to manipulate the rotations of the bones to achieve the desired results, and to do that I need to compute those rotations using two vectors, but again after much research and trial and error I just can't seem to put it all together. Any help would be appreciated.
This is a rather big answer, so I'll break it down into smaller parts.
I'm going to go out on a limb and guess you're using posenet (tensorflow library).
I've had this issue before, and it is very complicated to come up with a solution, but here's what worked for me.
First, using posenet, match each bone in the mesh to a keypoint in the keypoints array.
Then I created a Skeleton class that recreated a skeleton in primitive terms; each joint (the elbow, for instance) had a separate class. Each of these had max hyperextension and retraction values: it took a series of Vector3 values giving the min and max x, y, z rotation values (similar to our own range of movement) and converted them from degrees to radians.
From there, I applied all the rotations to each bone. The rotations were nowhere near perfect, though, as the arms would often rotate inside the body. To combat this, I created a bounding box around each bone (out to the outer mesh) and ran collision detection on each part until the bone no longer collided with another part of the mesh, with a tolerance score for each body part (e.g. allowing the arms or legs to be crossed).
From there I had a "somewhat" good estimate, which was applied to the 3D mesh.
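A rough sketch of that pipeline (the bone names, keypoint mapping, and limits below are illustrative assumptions, not values from the actual project):

// Map pose-estimation keypoints to bone names in the rigged mesh.
const boneMap = { rightShoulder: 'mixamorigRightArm', rightElbow: 'mixamorigRightForeArm' };

// Per-joint rotation limits in radians, standing in for the
// "range of movement" values described above.
const limits = { mixamorigRightArm: { min: -Math.PI * 0.75, max: Math.PI * 0.25 } };

function applyKeypoint(model, keypoint, targetDir) {
  const bone = model.getObjectByName(boneMap[keypoint]);
  if (!bone) return;
  // setFromUnitVectors expects normalized inputs.
  const q = new THREE.Quaternion().setFromUnitVectors(
    new THREE.Vector3(0, 1, 0), targetDir.clone().normalize());
  bone.quaternion.copy(q);
  // Clamp to the joint's allowed range (here only around z, as an example).
  const lim = limits[bone.name];
  if (lim) bone.rotation.z = THREE.MathUtils.clamp(bone.rotation.z, lim.min, lim.max);
}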
I've got some of the original stuff on github if you'd like to see an example:
https://github.com/hudson1998x/2D-To-3D-Pose-Estimation/
So the inputs are three Euler angles x, y, z in radians.
I would like to convert these to a vector location X, Y, Z with the center as the origin.
If it's possible to use https://threejs.org/docs/#api/en/math/Euler.toVector3 to get that vector, I would like to know how. An alternative mathematical (sin/cos) solution would also be appreciated.
So in this snippet the axesHelper represents the angle, and the cube should be placed at the location derived from the Euler. Use dat.GUI to live-edit the rotations.
//add Axis to represent Euler
const axesHelper = new THREE.AxesHelper( 5 );
scene.add( axesHelper );
//add cube to represent Vector
const geometry = new THREE.BoxGeometry( 0.1, 0.1, 0.1 );
const material = new THREE.MeshBasicMaterial( {color: 0x00ff00} );
const cube = new THREE.Mesh( geometry, material );
scene.add( cube );
render()
const gui = new GUI();
const angles={
degX:0,
degY:0,
degZ:0,
}
gui.add( angles, 'degX',0,360,1 ).onChange(function(){
axesHelper.rotation.x=THREE.MathUtils.degToRad(angles.degX)
render()
updateEULtoAngle()
});
gui.add( angles, 'degY',0,360,1 ).onChange(function(){
axesHelper.rotation.y=THREE.MathUtils.degToRad(angles.degY)
render()
updateEULtoAngle()
});
gui.add( angles, 'degZ',0,360,1 ).onChange(function(){
axesHelper.rotation.z=THREE.MathUtils.degToRad(angles.degZ)
render()
updateEULtoAngle()
});
console.log(THREE.MathUtils.radToDeg( axesHelper.rotation.x))
console.log(THREE.MathUtils.radToDeg( axesHelper.rotation.y))
console.log(THREE.MathUtils.radToDeg( axesHelper.rotation.z))
function updateEULtoAngle(){
let eul= new THREE.Euler(
THREE.MathUtils.degToRad(angles.degX),
THREE.MathUtils.degToRad(angles.degY),
THREE.MathUtils.degToRad(angles.degZ)
)
let vec= new THREE.Vector3()
eul.toVector3(vec)
console.log(eul,vec)
cube.position.copy(vec)
}
Fake visual representation: the cube following the axes helper's Y axis.
Related (though it has a problem with axis matching): How to convert Euler angles to directional vector?
Euler.toVector3() does not do what you are looking for. It just copies the x, y and z angles into the respective vector components.
I think you should have a look at THREE.Spherical which is an implementation for using spherical coordinates. You can express a point in 3D space with two angles (phi and theta) and a radius. It's then possible to use these data to setup an instance of Vector3 via Vector3.setFromSpherical() or Vector3.setFromSphericalCoords().
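A short sketch of that route (the angle values here are arbitrary examples):

const radius = 2;                               // distance from the origin
const phi    = THREE.MathUtils.degToRad( 60 );  // polar angle, measured down from +Y
const theta  = THREE.MathUtils.degToRad( 45 );  // azimuthal angle around the Y axis

const vec = new THREE.Vector3().setFromSphericalCoords( radius, phi, theta );
cube.position.copy( vec );                      // `cube` as in the snippet above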
I'm not entirely sure what's going on, but
let vec = new THREE.Vector3(0, 0, 1).applyEuler(eul)
worked for me
also check this:
https://github.com/mrdoob/three.js/issues/1606
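For context, here is that one-liner spelled out against the snippet from the question (assuming the same `angles` object and `cube`):

let eul = new THREE.Euler(
  THREE.MathUtils.degToRad(angles.degX),
  THREE.MathUtils.degToRad(angles.degY),
  THREE.MathUtils.degToRad(angles.degZ)
);
// Rotate the +Z unit vector by the Euler; the result is a direction, so the
// cube ends up one unit from the origin along the rotated axis.
let vec = new THREE.Vector3(0, 0, 1).applyEuler(eul);
cube.position.copy(vec);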
I have a problem with spotlights. I was using r73 and had 50 simple spotlights without shadows, etc.; it worked without problems, still 60 fps, even on mobile.
Now I have changed to r84 (the problem occurs above r73), and the spotlights are much better quality but also drop my frame rate. I know there were some changes with the penumbra option added in r74; I don't really understand how I can turn the quality down.
In the fiddle you don't see the quality changes, but that doesn't matter: the frame rate still drops.
So my question: is it possible to set up the spotlights in a way that still gives 60 fps?
The problem occurs only when the mesh (the floor) is big enough.
var spotLightSize=50;
var spotLight=[];
var geometry = new THREE.BoxGeometry( 500, 1, 500 );
var material = new THREE.MeshPhongMaterial( {color: "blue"} );
var floor = new THREE.Mesh( geometry, material );
var renderer = new THREE.WebGLRenderer({precision:"lowp",alpha:true});
for (var i=0;i<spotLightSize;i++){
spotLight.push(new THREE.SpotLight("green" ,2,20,0.1,0,1));
spotLight[spotLight.length-1].position.set( 0, 5, 0 );
scene.add(spotLight[spotLight.length-1]);
var spotLightHelper = new THREE.SpotLightHelper( spotLight[spotLight.length-1] );
scene.add( spotLightHelper );
}
http://jsfiddle.net/killerkarnikel/hyqgjLLz/19/
I have tried a few different lights now (directional, spot, point), but none of them produce a nice shadow on MeshFaceMaterial objects. Instead, the entire MeshFaceMaterial object becomes black.
My test website (please view with a grain of salt; it is constantly being changed).
How can I use lights to create shadows on MeshFaceMaterials? Does MeshFaceMaterial support shadows? The documentation says "Affects objects using MeshLambertMaterial or MeshPhongMaterial."
Here is sample code showing how I am loading the .json model.
loader.load('sample-concrete.js', function ( geometry, materials ) {
mesh1 = new THREE.Mesh(
geometry, new THREE.MeshFaceMaterial( materials )
);
mesh1.rotation.x = -Math.PI / 2;
scene.add( mesh1 );
});
and here is a sample of the material from my .json file.
"materials": [
{
"DbgIndex" : 0,
"DbgName" : "Steel",
"colorDiffuse" : [0.3059, 0.0471, 0.0471],
"colorAmbient" : [0.3059, 0.0471, 0.0471],
"colorSpecular" : [1.0000, 1.0000, 1.0000],
"transparency" : 1.0,
"specularCoef" : 25.0,
"vertexColors" : false
}
Thank you.
A MeshFaceMaterial is just a collection of materials. So if your materials variable contains MeshLambertMaterial or MeshPhongMaterial you should be fine. Shadows will be generated from a DirectionalLight or a SpotLight.
Just make sure your renderer has:
renderer.shadowMapEnabled = true;
your light has:
light.castShadow = true;
each one of your meshes:
mesh.castShadow = true;
and you have at least one object (a plane for example) where you do:
plane.receiveShadow = true;
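A minimal combined sketch, using the same legacy API as the lines above (`plane` is an assumed ground mesh that catches the shadow):

renderer.shadowMapEnabled = true;            // enable shadow mapping globally

var light = new THREE.SpotLight( 0xffffff );
light.position.set( 100, 200, 100 );
light.castShadow = true;                     // this light produces shadows
scene.add( light );

mesh1.castShadow = true;                     // the MeshFaceMaterial mesh from the question
plane.receiveShadow = true;                  // e.g. a large plane underneath the mesh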
My goal is to create an interactive Earth that has lines normal to the surface, so that you can click on them and pull up pictures that my health care team has taken from around the world. I have the world completely coded (or, more accurately, someone else did and I made a few small changes).
Below is the code for the Earth, which functions as expected. What I want to know is how to make lines normal to the surface and have them be clickable. It would be optimal if the lines faded and disappeared as they rotated to the back of the Earth, so that when the user rotates the Earth, the lines on the side the user can't see fade out.
I thought about making an array of cities and having a location on the sphere associated with each one, but I'm not really sure how to do that. I am very new to three.js and HTML/JS in general.
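A sketch of that city-array idea (the helper and the sample city are illustrative, not from the original post):

// Convert a latitude/longitude pair to a point on a sphere of the given radius.
function latLongToVector3( lat, lon, radius ) {
  var phi   = ( 90 - lat ) * Math.PI / 180;   // polar angle from the north pole
  var theta = ( lon + 180 ) * Math.PI / 180;  // azimuth around the Y axis
  return new THREE.Vector3(
    -radius * Math.sin( phi ) * Math.cos( theta ),
     radius * Math.cos( phi ),
     radius * Math.sin( phi ) * Math.sin( theta )
  );
}

var cities = [ { name: 'Nairobi', lat: -1.29, lon: 36.82 } ];
var markerPosition = latLongToVector3( cities[0].lat, cities[0].lon, 0.5 ); // 0.5 = Earth radius in the code below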
It may be helpful to know that I am using three.min.js, Detector.js, and TrackballControls.js.
The code so far is as follows:
(function () {
var webglEl = document.getElementById('webgl');
if (!Detector.webgl) {
Detector.addGetWebGLMessage(webglEl);
return;
}
var width = window.innerWidth,
height = window.innerHeight;
// Earth params
var radius = 0.5,
segments = 32,
rotation = 6;
var scene = new THREE.Scene();
var uniforms, mesh, meshes =[];
var camera = new THREE.PerspectiveCamera(45, width / height, 0.01, 1000);
camera.position.z = 1.5;
var renderer = new THREE.WebGLRenderer();
renderer.setSize(width, height);
scene.add(new THREE.AmbientLight(0x333333));
var light = new THREE.DirectionalLight(0xffffff, 1);
light.position.set(5,3,5);
scene.add(light);
var sphere = createSphere(radius, segments);
sphere.rotation.y = rotation;
scene.add(sphere)
var clouds = createClouds(radius, segments);
clouds.rotation.y = rotation;
scene.add(clouds)
var stars = createStars(90, 64);
scene.add(stars);
var controls = new THREE.TrackballControls(camera);
webglEl.appendChild(renderer.domElement);
render();
function render() {
controls.update();
sphere.rotation.y += 0.0005;
clouds.rotation.y += 0.0007;
requestAnimationFrame(render);
renderer.render(scene, camera);
}
function createSphere(radius, segments) {
return new THREE.Mesh(
new THREE.SphereGeometry(radius, segments, segments),
new THREE.MeshPhongMaterial({
map: THREE.ImageUtils.loadTexture('images/Color_Map.jpg'),
bumpMap: THREE.ImageUtils.loadTexture('images/elev_bump_4k.jpg'),
bumpScale: 0.005,
specularMap: THREE.ImageUtils.loadTexture('images/water_4k.png'),
specular: new THREE.Color('grey')
})
);
}
function createClouds(radius, segments) {
return new THREE.Mesh(
new THREE.SphereGeometry(radius + 0.003, segments, segments),
new THREE.MeshPhongMaterial({
map: THREE.ImageUtils.loadTexture('images/fair_clouds_4k.png'),
transparent: true
})
);
}
function createStars(radius, segments) {
return new THREE.Mesh(
new THREE.SphereGeometry(radius, segments, segments),
new THREE.MeshBasicMaterial({
map: THREE.ImageUtils.loadTexture('images/galaxy_starfield.png'),
side: THREE.BackSide
})
);
}
}());
The hope is that it would look like this link, but with the Earth instead of a building (http://3d.cl3ver.com/uWfsD?tryitlocation=3) [also, click "explore" when you get there].
I built a quick demo that I think faithfully represents your needs. It shows some images that appear to be attached to an Earth sphere by lines, and it uses sprites to create those images (and the lines themselves, actually). I think it resembles the building demo you linked to quite well. Here is the technique:
1. Images are added to this template using GIMP and saved as PNGs.
2. Those images are loaded as textures in the JS app.
3. A sprite is created using each loaded texture.
4. Each sprite is added to an Object3D whose position is set to (0, 0, radiusOfTheEarthSphere).
5. The Object3D is added to the sphere and rotated until the center of the sprite lies at the position on Earth where you want it to rest.
6. Each frame, a dot product between a vector from the center of the Earth to the camera and a vector from the center of the Earth to each sprite is used to calculate that sprite's opacity.
The equation in step 6 is:
opacity = (dot(normalize(cameraPosition - centerOfEarth), normalize(spriteCenter - centerOfEarth)) + 1) * 0.5
where dot() is the dot product and normalize() scales a vector to unit length.
Also note that a sprite's center differs from its position because of the Object3D used as its parent; I calculate the center using the .localToWorld(vec) method.
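A sketch of step 6 in code (names are illustrative; `earth` is the sphere mesh, and the sprite's material is assumed to have transparent: true):

var camDir    = new THREE.Vector3();
var spriteDir = new THREE.Vector3();

function updateSpriteOpacity( sprite, earth, camera ) {
  camDir.copy( camera.position ).sub( earth.position ).normalize();
  // World-space center of the sprite; the parent Object3D offsets its position.
  spriteDir.copy( sprite.position );
  sprite.parent.localToWorld( spriteDir );
  spriteDir.sub( earth.position ).normalize();
  sprite.material.opacity = ( camDir.dot( spriteDir ) + 1 ) * 0.5;
}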
Please see the demo here: https://33983769c6a202d6064de7bcf6c5ac7f51fd6d9e.googledrive.com/host/0B9scOMN0JFaXSE93YTZRTE5XeDQ/test.html
It is hosted on my Google Drive, so it may take some time to load. three.js will log some errors to the console until all the textures are loaded, because I coded it quickly just to show you my implementation ideas.