Let's say I have a parent and n children objects of the parent in a scene.
Edit: added code for mesh creation (assuming that I have a basic scene and camera set up)
parent group creation:
var geometry = new THREE.PlaneGeometry( 0, 0 );
var surface = new THREE.Group();
surface.geometry = geometry;
Child Mesh:
surface.add(new Sprite());
scene.add(surface);
Is there a way I can ignore the parent's transformation matrix for the children only (but keep its transformation intact as a parent)?
I know that matrixAutoUpdate = false; lets me update the child's transformation matrix manually, but it still applies the parent transformation matrix.
I want to ignore it completely for the children, but still be able to preserve its world transform and extract its rotation, translation, and scaling values.
In a sense there is only one object in the group, so the group will behave like a single entity. You can't associate a geometry with a Group directly, so my suggestion is to add the plane as a Mesh and then do whatever you want with it.
The updated code would look as follows:
var geometry = new THREE.PlaneGeometry( 0, 0 );
var material = new THREE.MeshBasicMaterial( {color: 0xffff00, side: THREE.DoubleSide} );
var plane = new THREE.Mesh( geometry, material );
var surface = new THREE.Group();
surface.add(plane);
surface.add(new Sprite());
scene.add(surface);
Whatever there is to say about JavaScript, it's a pleasure to hack with!
You can just override the #updateMatrixWorld function of the specific child Object3D instance by doing something like this:
myChildObject3D.updateMatrixWorld = function ( force ) {
    if ( this.matrixAutoUpdate ) this.updateMatrix();
    if ( this.matrixWorldNeedsUpdate || force ) {
        // copy the local matrix instead of premultiplying the parent's world matrix
        this.matrixWorld.copy( this.matrix );
        this.matrixWorldNeedsUpdate = false;
        force = true;
    }
    // note: unlike the original, this does not recurse into this object's children
};
The original Object3D#updateMatrixWorld multiplies the local matrix by the parent's world matrix, and also calls the update function on its children.
Here it is replaced with a function that only copies the local transformation matrix, so the child's transformation is independent of its parent.
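The question also asks about still being able to extract the parent's rotation, translation, and scaling; in real three.js you would call parent.matrixWorld.decompose(position, quaternion, scale) for that. Here is a standalone sketch of the translation/scale part of that extraction, with a plain column-major array standing in for THREE.Matrix4.elements (no rotation, for brevity):

```javascript
// translation (1, 2, 3) combined with uniform scale 2, no rotation,
// laid out column-major the way THREE.Matrix4.elements is
const m = [
  2, 0, 0, 0,
  0, 2, 0, 0,
  0, 0, 2, 0,
  1, 2, 3, 1
];

function extractTranslation(e) {
  // translation lives in elements 12..14
  return [e[12], e[13], e[14]];
}

function extractScale(e) {
  // scale is the length of each basis column
  const len = (x, y, z) => Math.hypot(x, y, z);
  return [len(e[0], e[1], e[2]), len(e[4], e[5], e[6]), len(e[8], e[9], e[10])];
}

console.log(extractTranslation(m)); // [ 1, 2, 3 ]
console.log(extractScale(m));       // [ 2, 2, 2 ]
```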
I'm trying to detect when an object in Three.js is partially or fully occluded by (hidden behind) another object.
My current simple solution casts a single ray to the center of the object:
function getScreenPos(object) {
    var pos = object.position.clone();
    camera.updateMatrixWorld();
    pos.project(camera);
    return new THREE.Vector2(pos.x, pos.y);
}
function isOccluded(object) {
    raycaster.setFromCamera(getScreenPos(object), camera);
    var intersects = raycaster.intersectObjects(scene.children);
    if (intersects[0] && intersects[0].object === object) {
        return false;
    } else {
        return true;
    }
}
However it doesn't account for the object's dimensions (width, height, depth).
Not occluded (because center of object is not behind)
Occluded (because center of object is behind)
View working demo:
https://jsfiddle.net/kmturley/nb9f5gho/57/
Currently I'm thinking I could calculate the object's bounding box size and cast rays for each corner of the box. But this might still be a little too simple:
var box = new THREE.Box3().setFromObject(object);
var size = box.getSize();
I would like to find a more robust approach which could give partially occluded and fully occluded boolean values, or maybe even a percentage occluded?
Search Stack Overflow and the Three.js examples for "GPU picking." The concept can be broken down into three basic steps:
Change the material of each shape to a unique flat (MeshBasicMaterial) color.
Render the scene with the unique materials.
Read the pixels of the rendered frame to collect color information.
Your scenario affords you a few shortcuts.
Give only the shape you're testing a unique color--everything else can be black.
You don't need to render the full scene to test one shape. You could adjust your viewport to render only the area surrounding the shape in question.
Because you gave a color only to your test part, the rest of the data should be zeroes, making it much easier to find pixels matching your unique color.
Now that you have the pixel data, you can determine the following:
If NO pixels match the unique color, then the shape is fully occluded.
If SOME pixels match the unique color, then the shape is at least partially visible.
The second bullet says that the shape is "at least partially" visible. This is because you can't test for full visibility with the information you currently have.
What I would do (and someone else might have a better solution) is render the same viewport a second time, but only have the test shape visible, which is the equivalent of the part being fully visible. With this information in hand, compare the pixels against the first render. If both have the same number (perhaps within a tolerance) of pixels of the unique color, then you can say the part is fully visible/not occluded.
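The two-render comparison can be sketched without a renderer at all. In this hedged sketch, the RGBA buffers are hand-made stand-ins for what renderer.readRenderTargetPixels() would fill in, and the unique color [7, 0, 0] is arbitrary:

```javascript
// count pixels in an RGBA buffer that match a given color
function countColor(pixels, r, g, b) {
  let count = 0;
  for (let i = 0; i < pixels.length; i += 4) {
    if (pixels[i] === r && pixels[i + 1] === g && pixels[i + 2] === b) count++;
  }
  return count;
}

// fraction of the shape hidden: compare the full-scene render against a
// render where only the test shape is visible
function occlusionRatio(fullScenePixels, aloneScenePixels, color) {
  const visible = countColor(fullScenePixels, ...color);
  const total = countColor(aloneScenePixels, ...color);
  return total === 0 ? 1 : 1 - visible / total;
}

// 2 of the 4 unique-color pixels survive the full render -> 50% occluded
const alone = new Uint8Array([7,0,0,255, 7,0,0,255, 7,0,0,255, 7,0,0,255]);
const full  = new Uint8Array([7,0,0,255, 0,0,0,255, 7,0,0,255, 0,0,0,255]);
console.log(occlusionRatio(full, alone, [7, 0, 0])); // 0.5
```

Within a tolerance, a ratio of 0 means fully visible and 1 means fully occluded.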
I managed to get a working version for WebGL1 based on TheJim01's answer!
First create a second simpler scene to use for calculations:
pickingScene = new THREE.Scene();
pickingTextureOcclusion = new THREE.WebGLRenderTarget(window.innerWidth / 2, window.innerHeight / 2);
pickingMaterial = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
pickingScene.add(new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries([
    createBuffer(geometry, mesh),
    createBuffer(geometry2, mesh2)
]), pickingMaterial));
Recreate your objects as BufferGeometry (better for performance):
function createBuffer(geometry, mesh) {
    var buffer = new THREE.SphereBufferGeometry(geometry.parameters.radius, geometry.parameters.widthSegments, geometry.parameters.heightSegments);
    // quaternion, matrix and color are reusable helper instances declared once outside
    quaternion.setFromEuler(mesh.rotation);
    matrix.compose(mesh.position, quaternion, mesh.scale);
    buffer.applyMatrix4(matrix);
    applyVertexColors(buffer, color.setHex(mesh.name));
    return buffer;
}
Add a color based on the mesh.name, e.g. an id of 1, 2, 3, etc.:
function applyVertexColors(geometry, color) {
    var position = geometry.attributes.position;
    var colors = [];
    for (var i = 0; i < position.count; i++) {
        colors.push(color.r, color.g, color.b);
    }
    geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
}
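A note on the id-to-color trick this relies on: color.setHex(mesh.name) treats the mesh name as an integer packed into the 24 bits of an RGB triple. A minimal sketch of that packing and its inverse, in plain bit arithmetic with no three.js needed:

```javascript
// pack an integer id (< 2^24) into an 8-bit-per-channel RGB triple
function idToColor(id) {
  return [(id >> 16) & 255, (id >> 8) & 255, id & 255];
}

// recover the id from a pixel's RGB bytes
function colorToId(r, g, b) {
  return (r << 16) | (g << 8) | b;
}

console.log(idToColor(3));                             // [ 0, 0, 3 ]
console.log(colorToId(...idToColor(70000)) === 70000); // true
```

This is why small ids like 1, 2, 3 end up only in the blue channel, which is what the pixelBuffer check below matches against.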
Then during the render loop check the second scene for that texture, and match pixel data to the mesh name:
function isOccludedBuffer(object) {
    renderer.setRenderTarget(pickingTextureOcclusion);
    renderer.render(pickingScene, camera);
    // w * h bytes equals (w / 2) * (h / 2) pixels at 4 RGBA bytes each
    var pixelBuffer = new Uint8Array(window.innerWidth * window.innerHeight);
    renderer.readRenderTargetPixels(pickingTextureOcclusion, 0, 0, window.innerWidth / 2, window.innerHeight / 2, pixelBuffer);
    renderer.setRenderTarget(null);
    return !pixelBuffer.includes(object.name);
}
You can view the WebGL1 working demo here:
https://jsfiddle.net/kmturley/nb9f5gho/62/
One caveat to note with this approach is that your picking scene needs to stay up-to-date with changes in your main scene. So if your objects move position/rotation etc, they need to be updated in the picking scene too. In my example the camera is moving, not the objects so it doesn't need updating.
For WebGL2 we will have a better solution:
https://tsherif.github.io/webgl2examples/occlusion.html
But this is not supported in all browsers yet:
https://www.caniuse.com/#search=webgl
I have a small web app that I've designed for viewing bathymetric data of the seafloor in Three.js. Basically I am using a loader to bring JSON models of my extruded bathymetry into my scene, and allowing the user to rotate the model or click next to load a new part of the seafloor.
All of my models have the same 2D footprint, so they are identical in two dimensions; only elevations and texture change from model to model.
My question is this: What is the most cost effective way to update my model?
Using scene.remove(mesh); then calling my loader again to load a new model and then adding it to the scene with scene.add(mesh);.
Updating the existing mesh by calling my loader to bring in material and geometry and then calling mesh.geometry = geometry;, mesh.material = material and then mesh.geometry.needsUpdate;.
I've heard that updating is pretty intensive from a computational point of view, but all of the articles that I've read on this state that the two methods are almost the same. Is this information correct? Is there a better way to approach my code in this instance?
An alternative that I've considered is skipping the step where I create the model (in Blender) and instead using a displacement map to update the y coordinates of my vertices. Then to update I could push new vertices on an existing plane geometry before replacing the material. Would this be a sound approach? At the very least I think the displacement map would be a smaller file to load than a .JSON file. I could even optimize the display by loading a GUI element to divide the mesh into more or fewer divisions for high or low quality render...
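The displacement-map idea could be sketched in isolation like this. The flat Float32Array below mimics a PlaneGeometry position attribute (x, y, z triples) with elevations written into the z component; the names and layout are illustrative, not any actual bathymetry format:

```javascript
// write a grid of height samples into the z component of each vertex,
// the way you might update geometry.attributes.position.array
function applyHeights(positions, heights) {
  for (let i = 0; i < heights.length; i++) {
    positions[i * 3 + 2] = heights[i]; // z is the elevation here
  }
  return positions;
}

// a 2x2 plane, all vertices flat at z = 0 initially
const positions = new Float32Array([
  0, 0, 0,  1, 0, 0,
  0, 1, 0,  1, 1, 0
]);
applyHeights(positions, [5, 6, 7, 8]);
console.log(positions[2], positions[5]); // 5 6
```

In real three.js you would follow this with position.needsUpdate = true on the attribute so the renderer re-uploads the buffer.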
I don't know off the top of my head what exactly happens under the hood, but from what I remember, these two are essentially the same thing.
You aren't updating the existing mesh. A Mesh extends Object3D, so it just sits there, wiring together some geometry and some materials.
mesh.geometry = geometry did not "update the mesh", or it did, but with new geometry (which may be the thing you are actually referring to as the mesh).
In other words, you always have your container, but when you replace the geometry by doing = geometry, you set it up for all sorts of GL calls in the next THREE.WebGLRenderer.render() call.
Where that new geometry gets attached, be it an existing mesh or a new one, shouldn't matter at all. The geometry is the thing that triggers the low-level WebGL calls like gl.bufferData().
// upload two geometries to the gpu on first render()
var meshA = new THREE.Mesh( new THREE.BoxGeometry(1,1,1) );
var meshB = new THREE.Mesh( new THREE.BoxGeometry(1,1,1) );

// upload one geometry to the gpu on first render()
var bg = new THREE.BoxGeometry();
var meshA = new THREE.Mesh( bg );
var meshB = new THREE.Mesh( bg );
for ( var i = 0; i < someBigNumber; i++ ) {
    var meshTemp = new THREE.Mesh( bg );
}
// doesn't matter that you have X meshes, you only have one geometry

// 1 mesh, two geometries / "computations"
var meshA = new THREE.Mesh( new THREE.BoxGeometry() ); // first computation - compute box geometry
scene.add(meshA);
renderer.render( scene, camera );  // upload box to the gpu
meshA.geometry = new THREE.SphereGeometry();
renderer.render( scene, camera );  // upload sphere to the gpu
THREE.Mesh seems to be the most confusing concept in three.js.
I have two meshes: mesh1 and mesh2. Both have the same number of vertices and have extrusion.
mesh1 = 5000 vertices.
mesh2 = 5000 vertices.
I assign the vertices of mesh1 to mesh2. Then I do:
mesh2.geometry.verticesNeedUpdate = true;
mesh2.geometry.vertices = mesh1.geometry.vertices;
Thus the vertices of mesh2 are updated, but this happens instantly; I cannot see an animation while mesh2's vertices are changed to mesh1's vertices.
I want to see the transformation while mesh2 is becoming mesh1, I mean an animation of the vertices as they change.
I have used "Tween.js" for animations such as position and color. I'm not sure if it can help animate the vertices as they change.
I do:
new TWEEN.Tween( mesh2.geometry.vertices ).to( mesh1.geometry.vertices, 1000 ).start();
but it doesn't work. Sorry for my level of English.
As you've seen, this doesn't work, in part because each time you update the mesh vertices you must also set geometry2.verticesNeedUpdate = true; on every one of those frames.
--
More specifically, I think you will want to add .onUpdate(function() { geometry2.verticesNeedUpdate = true; }) to your tween.
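The per-frame work that the tween performs can be sketched by hand without three.js. Here plain arrays stand in for geometry.vertices, and after each step you would set geometry2.verticesNeedUpdate = true so the renderer re-uploads them:

```javascript
// linearly interpolate each component of `from` toward `to` by factor t
function lerpVertices(from, to, t) {
  return from.map((v, i) => v + (to[i] - v) * t);
}

let current = [0, 0, 0];       // mesh2's starting vertex components
const target = [10, 20, 30];   // mesh1's vertex components

// one tween step halfway toward the target
current = lerpVertices(current, target, 0.5);
// ...here you would set geometry2.verticesNeedUpdate = true and render
console.log(current); // [ 5, 10, 15 ]
```

Tween.js effectively does this component-by-component mutation for you; the onUpdate callback is where the needs-update flag belongs.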
I am currently working on a small project using the new Babylon.js framework. One of the issues I have run into is that I basically have two meshes. One of the meshes is supposed to be the background, and the other is supposed to follow the cursor to mark where on the background mesh you are targeting. The problem is that when I move the targeting mesh to the position of the cursor, it blocks the background mesh when I use scene.pick, resulting in the targeting mesh having its position set onto itself.
Is there any way to ignore the targeting mesh when using scene.pick so that I only pick the background mesh or is there some other method I could use? If not, what would be the steps to implement this sort of feature to essentially raycast only through certain meshes?
If you need code samples or any other forms of description, let me know. Thanks!
Ok, it's easy.
So, we have two meshes. One is called "ground", the second "cursor". If you want to pick only on the ground, you have two solutions:
First:
var ground = new BABYLON.Mesh("ground",scene);
ground.isPickable = true;
var cursor = new BABYLON.Mesh("cursor", scene);
cursor.isPickable = false;
...
var p = scene.pick(event.clientX, event.clientY); // it returns only "isPickable" meshes
...
Second:
var ground = new BABYLON.Mesh("ground",scene);
var cursor = new BABYLON.Mesh("cursor", scene);
...
var p = scene.pick(event.clientX, event.clientY, function(mesh) {
    return mesh.name == "ground"; // so only the ground will be pickable
});
...
regards.
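The predicate variant boils down to "first candidate, in front-to-back order, that the filter accepts". A standalone sketch of that logic, with plain objects standing in for BABYLON.Mesh instances:

```javascript
// return the first mesh the predicate accepts, or null if none match;
// this mirrors how a pick filter skips unwanted meshes along the ray
function pickFirst(meshes, predicate) {
  for (const mesh of meshes) {
    if (predicate(mesh)) return mesh;
  }
  return null;
}

const cursor = { name: 'cursor', isPickable: false };
const ground = { name: 'ground', isPickable: true };

// the cursor sits in front of the ground, but the predicate skips it
const hit = pickFirst([cursor, ground], m => m.isPickable);
console.log(hit.name); // ground
```

Both solutions above reduce to this: either the engine consults isPickable for you, or your callback plays the role of the predicate.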
I am developing a volume rendering app in WebGL, and the last thing I have to do is create a lighting model. I have everything prepared, but I am not able to get the light position in the scene. I am adding the light this way:
var light = new THREE.DirectionalLight(0xFFFFFF, 1);
light.position.set(0.5, 0.5, 0.1).normalize();
camera.add(light);
I attach the light to the camera because I need the light to stay static while I move the camera.
The problem is that I am using ShaderMaterial (a custom shader), and I am not able to find any uniform variables that represent the light position. I have read that I should set:
material.lights = true;
but it caused errors.
Uncaught TypeError: Cannot set property 'value' of undefined
I have tried adding a constant vector in the vertex shader, but I need to multiply it by the inverse view matrix (if I am right), and GLSL 1.0 doesn't support an inverse function. My idea was to send the inverse view matrix to the shader as a uniform, but I don't know where to get the scene's view matrix in JS.
Thanks for the help. I have tried everything :( ...
Bye.
If you are going to add the light as a child of the camera and set material.lights = true, then you must add the camera as a child of the scene.
scene.add( camera );
three.js r.57
If you're trying to project the light coordinates from model space to screen space using the camera, here's a function (courtesy of Thibaut Despoulain) that might help:
var projectOnScreen = function(object, camera)
{
    var mat = new THREE.Matrix4();
    mat.multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld );
    mat.multiplyMatrices( camera.projectionMatrix, mat );
    // Matrix4 stores its values in a column-major elements array; elements
    // 12-15 are what the old .n14/.n24/.n34/.n44 accessors used to expose
    var e = mat.elements;
    var c = e[15];
    var lPos = new THREE.Vector3( e[12] / c, e[13] / c, e[14] / c );
    lPos.multiplyScalar(0.5);
    lPos.addScalar(0.5);
    return lPos;
}
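The tail of projectOnScreen is just a perspective divide followed by a remap from NDC [-1, 1] into [0, 1]. That step can be sketched in isolation, with plain numbers standing in for the matrix products above:

```javascript
// divide a clip-space position by w, then remap each NDC component
// from [-1, 1] to [0, 1] (the * 0.5 + 0.5 in projectOnScreen)
function clipToScreen(x, y, z, w) {
  return [x / w * 0.5 + 0.5, y / w * 0.5 + 0.5, z / w * 0.5 + 0.5];
}

console.log(clipToScreen(0, 0, 0, 1));  // [ 0.5, 0.5, 0.5 ]  screen center
console.log(clipToScreen(2, -2, 0, 2)); // [ 1, 0, 0.5 ]      right edge, bottom
```

The resulting [0, 1] coordinates are what you would hand to a shader as a uniform for screen-space effects like light scattering.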