I'm trying to detect when an object in Three.js is partially or fully occluded by (hidden behind) another object.
My current simple solution casts a single ray to the center of the object:
function getScreenPos(object) {
    var pos = object.position.clone();
    camera.updateMatrixWorld();
    pos.project(camera);
    return new THREE.Vector2(pos.x, pos.y);
}

function isOccluded(object) {
    raycaster.setFromCamera(getScreenPos(object), camera);
    var intersects = raycaster.intersectObjects(scene.children);
    // Visible only if the nearest hit along the ray is the object itself
    return !(intersects[0] && intersects[0].object === object);
}
However it doesn't account for the object's dimensions (width, height, depth).
Not occluded (because center of object is not behind)
Occluded (because center of object is behind)
View working demo:
https://jsfiddle.net/kmturley/nb9f5gho/57/
I'm currently thinking I could calculate the object's bounding box and cast rays to each corner of the box, as sketched below, but this might still be a little too simple:
var box = new THREE.Box3().setFromObject(object);
var size = box.getSize(new THREE.Vector3()); // getSize requires a target vector in current three.js
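A rough sketch of that corner-ray idea might look like the following (the function name is mine, and it reuses the camera, raycaster, and scene globals from above). Note that a bounding-box corner usually does not lie on the object's surface, so these rays can miss the object entirely, which is part of why this stays too simple:

function isOccludedByCorners(object) {
    var box = new THREE.Box3().setFromObject(object);
    var min = box.min, max = box.max;
    // The 8 corners of the bounding box
    var corners = [
        new THREE.Vector3(min.x, min.y, min.z),
        new THREE.Vector3(min.x, min.y, max.z),
        new THREE.Vector3(min.x, max.y, min.z),
        new THREE.Vector3(min.x, max.y, max.z),
        new THREE.Vector3(max.x, min.y, min.z),
        new THREE.Vector3(max.x, min.y, max.z),
        new THREE.Vector3(max.x, max.y, min.z),
        new THREE.Vector3(max.x, max.y, max.z)
    ];
    var visible = 0;
    corners.forEach(function (corner) {
        var p = corner.clone().project(camera); // to normalized device coordinates
        raycaster.setFromCamera(new THREE.Vector2(p.x, p.y), camera);
        var intersects = raycaster.intersectObjects(scene.children);
        if (intersects[0] && intersects[0].object === object) visible++;
    });
    return {
        fullyOccluded: visible === 0,
        partiallyOccluded: visible > 0 && visible < corners.length
    };
}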
I would like to find a more robust approach that could give partially occluded and fully occluded boolean values, or maybe even a percentage occluded?
Search Stack Overflow and the Three.js examples for "GPU picking." The concept can be broken down into three basic steps:
Change the material of each shape to a unique flat (MeshBasicMaterial) color.
Render the scene with the unique materials.
Read the pixels of the rendered frame to collect color information.
Your scenario allows for a few simplifications:
Give only the shape you're testing a unique color; everything else can be black.
You don't need to render the full scene to test one shape. You could adjust your viewport to render only the area surrounding the shape in question.
Because you gave a color only to your test part, the rest of the data should be zeroes, making it much easier to find pixels that match your unique color.
Now that you have the pixel data, you can determine the following:
If NO pixels match the unique color, then the shape is fully occluded.
If SOME pixels match the unique color, then the shape is at least partially visible.
The second bullet says that the shape is "at least partially" visible. This is because you can't test for full visibility with the information you currently have.
What I would do (and someone else might have a better solution) is render the same viewport a second time, but with only the test shape visible, which is the equivalent of the part being fully visible. With this information in hand, compare the pixels against the first render. If both have the same number of pixels of the unique color (perhaps within a tolerance), then you can say the part is fully visible/not occluded.
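A minimal sketch of that comparison, assuming you have already done the two unique-color renders into render targets (the render-target names and the color values below are my own placeholders, not code from the question):

function countMatchingPixels(renderTarget, r, g, b) {
    // Read back the RGBA bytes of the render target
    var buffer = new Uint8Array(renderTarget.width * renderTarget.height * 4);
    renderer.readRenderTargetPixels(renderTarget, 0, 0, renderTarget.width, renderTarget.height, buffer);
    var count = 0;
    for (var i = 0; i < buffer.length; i += 4) {
        if (buffer[i] === r && buffer[i + 1] === g && buffer[i + 2] === b) count++;
    }
    return count;
}

// occlusionTarget: render with occluders present; aloneTarget: test shape only
var occludedCount = countMatchingPixels(occlusionTarget, 255, 0, 0);
var aloneCount = countMatchingPixels(aloneTarget, 255, 0, 0);

var fullyOccluded = occludedCount === 0;
var fullyVisible = occludedCount === aloneCount; // perhaps within a tolerance
var percentVisible = aloneCount > 0 ? 100 * occludedCount / aloneCount : 0;

This also gives the percentage-occluded value the question asked about.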
I managed to get a working version for WebGL1 based on TheJim01's answer!
First create a second simpler scene to use for calculations:
pickingScene = new THREE.Scene();
pickingTextureOcclusion = new THREE.WebGLRenderTarget(window.innerWidth / 2, window.innerHeight / 2);
pickingMaterial = new THREE.MeshBasicMaterial({ vertexColors: THREE.VertexColors });
pickingScene.add(new THREE.Mesh(BufferGeometryUtils.mergeBufferGeometries([
    createBuffer(geometry, mesh),
    createBuffer(geometry2, mesh2)
]), pickingMaterial));
Recreate your objects as BufferGeometry (better for performance):
// Reusable helpers, declared once outside the function
var quaternion = new THREE.Quaternion();
var matrix = new THREE.Matrix4();
var color = new THREE.Color();

function createBuffer(geometry, mesh) {
    var buffer = new THREE.SphereBufferGeometry(geometry.parameters.radius, geometry.parameters.widthSegments, geometry.parameters.heightSegments);
    quaternion.setFromEuler(mesh.rotation);
    matrix.compose(mesh.position, quaternion, mesh.scale);
    buffer.applyMatrix4(matrix);
    applyVertexColors(buffer, color.setHex(mesh.name));
    return buffer;
}
Add a color based on the mesh.name, e.g. an id of 1, 2, 3, etc.:
function applyVertexColors(geometry, color) {
    var position = geometry.attributes.position;
    var colors = [];
    for (var i = 0; i < position.count; i++) {
        colors.push(color.r, color.g, color.b);
    }
    geometry.setAttribute('color', new THREE.Float32BufferAttribute(colors, 3));
}
Then during the render loop, render the second scene into the render target and match the pixel data against the mesh name:
function isOccludedBuffer(object) {
    renderer.setRenderTarget(pickingTextureOcclusion);
    renderer.render(pickingScene, camera);
    // RGBA bytes for the half-size target: (w / 2) * (h / 2) * 4 === w * h
    var pixelBuffer = new Uint8Array(window.innerWidth * window.innerHeight);
    renderer.readRenderTargetPixels(pickingTextureOcclusion, 0, 0, window.innerWidth / 2, window.innerHeight / 2, pixelBuffer);
    renderer.setRenderTarget(null);
    // The numeric id stored in mesh.name was baked into the vertex colors, so
    // if no byte matches it, the object contributed no visible pixels
    return !pixelBuffer.includes(object.name);
}
You can view the WebGL1 working demo here:
https://jsfiddle.net/kmturley/nb9f5gho/62/
One caveat to note with this approach is that your picking scene needs to stay in sync with your main scene. So if your objects change position/rotation etc., they need to be updated in the picking scene too (a sketch of one way to do that is below). In my example the camera is moving, not the objects, so the picking scene doesn't need updating.
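Since my picking scene merges everything into one mesh, syncing would mean re-merging the geometry. If you instead keep one picking mesh per source mesh, a per-frame sync could be as simple as this (pickingPairs is a hypothetical array of { source, picking } pairs, not something in the demo):

function syncPickingScene(pickingPairs) {
    pickingPairs.forEach(function (pair) {
        pair.picking.position.copy(pair.source.position);
        pair.picking.quaternion.copy(pair.source.quaternion);
        pair.picking.scale.copy(pair.source.scale);
    });
}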
For WebGL2 we will have a better solution:
https://tsherif.github.io/webgl2examples/occlusion.html
But this is not supported in all browsers yet:
https://www.caniuse.com/#search=webgl
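For reference, the WebGL2 example linked above is built on occlusion queries. A minimal raw-WebGL2 sketch of the idea (not a three.js API) looks roughly like this:

var gl = canvas.getContext('webgl2');
var query = gl.createQuery();

gl.beginQuery(gl.ANY_SAMPLES_PASSED, query);
// ... draw the object you want to test here ...
gl.endQuery(gl.ANY_SAMPLES_PASSED);

// Query results arrive asynchronously, usually a frame or more later
function checkQuery() {
    if (gl.getQueryParameter(query, gl.QUERY_RESULT_AVAILABLE)) {
        var visible = gl.getQueryParameter(query, gl.QUERY_RESULT);
        console.log(visible ? 'at least partially visible' : 'fully occluded');
    } else {
        requestAnimationFrame(checkQuery);
    }
}
requestAnimationFrame(checkQuery);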
I have been creating a simple Three.js application, and so far I have centered text in the scene that shows "Hello World". I have been copying the code examples to try to understand what is happening; so far I have it working, but I am failing to completely understand why.
My confusion comes from reading all the Three.js tutorials describing that a Geometry object is responsible for creating the shape of the object in the scene. Therefore I did not think it would make sense to have a position on something that is describing the shape of the mesh.
/* Create the scene Text */
let loader = new THREE.FontLoader();
loader.load('fonts/helvetiker_regular.typeface.json', function (font) {
    /* Create the geometry */
    let geometry_text = new THREE.TextGeometry("Hello World", {
        font: font,
        size: 5,
        height: 1,
    });
    /* Create a bounding box in order to calculate the center position of the created text */
    geometry_text.computeBoundingBox();
    let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
    geometry_text.translate(-0.5 * x_mid, 0, 0); // Center the text by offsetting half the width
    /* Currently using basic material because I do not have a light, Phong will be black */
    let material_text = new THREE.MeshBasicMaterial({
        color: new THREE.Color(0x006699)
    });
    let textMesh = new THREE.Mesh(geometry_text, material_text);
    textMesh.position.set(0, 0, -20);
    //debugger;
    scene.add(textMesh);
    console.log('added mesh');
});
Here is the code that I use to add the shape, and my confusion comes from the following steps.
/* Create a bounding box in order to calculate the center position of the created text */
geometry_text.computeBoundingBox();
let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
geometry_text.translate(-0.5 * x_mid, 0, 0); // Center the text by offsetting half the width
First, we translate the geometry to the left to center the text inside the scene.
let textMesh = new THREE.Mesh(geometry_text, material_text);
textMesh.position.set(0, 0, -20);
Secondly, we set the position of the mesh.
My confusion comes from the fact that we need both of these operations to occur to move the mesh backwards and center it.
However, I do not understand why the first operation should be done on the geometry. In fact, what confuses me more is why textMesh.position.set(0, 0, -20); does not override my previously performed translation and simply move the mesh to (0, 0, -20), removing my previous translation. It seems that both are required.
AFAIK, in a scene graph it is recommended to transform (translate, rotate, scale) the whole mesh rather than to bake the transform into the geometry and create an "untransformed" mesh from it, since the mesh in the second case is not transform-friendly. Cumulative transforms on baked geometry give wrong, unexpected results, even for a simple movement.
But sometimes it is useful to create transformed geometry and use it for some algorithms/computations or in meshes.
You are getting somewhat "expected" results in your "combined transform" case only because it is a particular case (for example, it works only while the object position is (0, 0, 0), etc.).
mesh.position.set doesn't modify the geometry: position is only a property of the mesh, and it is used to compute the final mesh triangles. That computation involves the geometry and the object's matrix, which is composed from the object's position, quaternion (3D rotation), and scale. The object's geometry can be modified by "matrix" operations such as translate, but no such operations are performed dynamically by the mesh.
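To illustrate the difference (my own example, assuming a recent three.js where BoxGeometry is a BufferGeometry):

// geometry.translate() bakes the offset into the vertices once
var geometry = new THREE.BoxGeometry(1, 1, 1);
geometry.translate(-0.5, 0, 0);

// mesh.position is stored on the mesh, not in the vertices
var mesh = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial());
mesh.position.set(0, 0, -20);

// At render time the final vertex position is matrixWorld * baked vertex,
// so both offsets apply without overriding each other
mesh.updateMatrixWorld();
var v = new THREE.Vector3().fromBufferAttribute(geometry.attributes.position, 0);
v.applyMatrix4(mesh.matrixWorld); // world-space position of the first vertex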
I'm new to PixiJS and I'm trying something simple like a painting application.
I'm having difficulty trying to capture a collection of shapes as a single grouping. I'm not interested in working code for this as I'd like to figure that out on my own; I'm simply interested in knowing whether I'm on the right track or if I need to explore some other PixiJS concepts to get what I need.
I have one canvas in which I can drag shapes such as rectangles, ellipses, and lines. These "strokes" are being stored as individual Graphics objects, for instance:
var shape = new PIXI.Graphics();
shape.position.set(...);
...
shape.lineStyle(...)
    .beginFill(...)
    .drawRect(...)
    .endFill();
...
stage.addChild(shape);
...
renderer.render(stage);
I'm also holding onto these shapes in an array:
shapes.push(shape);
Now that I have these displayed as well as have the order of the strokes available, I'd like to be able to capture them somehow. Imagine maybe taking the drawing and saving it, or perhaps using it as a thumbnail in a gallery, or simply just storing it on the back-end in a database, preferably keeping all the raw strokes so that they can be scaled up or down as desired.
For now, I'm simply trying to take this collection of strokes and display them again by holding them, clearing the graphics from my canvas, and then plopping down what I have held.
Looking at this example, I've been able to get a texture that I can reliably reproduce wherever I click with the mouse:
http://jsfiddle.net/gzh14bcn/
This means I've been able to take the first part that creates the texture object, and I tweaked the second part to create and display the sprites when I click the mouse.
When I try to replace this example code with my own code to create the texture itself, I can't get that part to work.
So this example snippet works fine when I try to create a sprite from it:
var texture = new PIXI.RenderTexture(renderer, 16, 16);
var graphics = new PIXI.Graphics();
graphics.beginFill(0x44FFFF);
graphics.drawCircle(8, 8, 8);
graphics.endFill();
texture.render(graphics);
FYI to create sprites:
var sprite = new PIXI.Sprite(texture);
sprite.position.set(xPos, yPos);
stage.addChild(sprite);
Since I have my shapes in the shapes array or on the stage, what is the preferred way I proceed to capture this as a single grouping from which I can create one or more sprites?
So basically you've got how to make a PIXI.Graphics shape:
var pixiRect = new PIXI.Graphics();
pixiRect.lineStyle(..);
pixiRect.beginFill(..);
pixiRect.drawRect(..);
pixiRect.endFill(..);
(You can draw as many rects/circles/shapes as you want into one PIXI.Graphics)
But to convert it to a texture, you must tell the renderer to create it:
var texture = renderer.generateTexture(pixiRect);
Then you can easily create a PIXI.Sprite from this texture:
var spr = new PIXI.Sprite(texture);
And the last thing is to add it to your stage or array. You can also make an empty PIXI.Container and addChild to that, and you've got your grouping:
option - add sprite (created from graphics) to stage
stage.addChild(spr);
option - push it to your array
shapes.push(spr);
option - if you have var shapes = new PIXI.Container(); you can make a container for your sprites
shapes.addChild(spr);
Working example : https://jsfiddle.net/co7Lrbq1/3/
EDIT:
To position your canvas on top, you have to addChild it later: the first addChild is effectively zIndex = 0, and every subsequent addChild adds a layer on top of the last. For example:
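(background and canvas here are hypothetical display objects, just to show the ordering:)

var stage = new PIXI.Container();
stage.addChild(background); // drawn first, bottom layer
stage.addChild(canvas);     // drawn second, on top of background

// To move an existing child back to the top, re-add it
stage.removeChild(background);
stage.addChild(background); // background now draws on top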
I figured it out. My stage is a container:
var stage = new PIXI.Container();
var canvas = new PIXI.Graphics();
canvas.lineStyle(4, 0xffffff, 1);
canvas.beginFill(0xffffff);
canvas.drawRect(canvasStartX, canvasStartY, 500, 600);
canvas.endFill();
stage.addChild(canvas);
I changed this to the following:
var canvas = new PIXI.Container();
var canvasRect = new PIXI.Graphics();
canvasRect.lineStyle(4, 0xffffff, 1);
canvasRect.beginFill(0xffffff);
canvasRect.drawRect(canvasStartX, canvasStartY, 500, 600);
canvasRect.endFill();
canvas.addChild(canvasRect);
stage.addChild(canvas);
Then, I replaced stage with canvas where appropriate and canvas with canvasRect where appropriate.
Finally, I got my texture with:
var texture = canvas.generateTexture(renderer);
At the moment, this grabbed the entire width/height of the stage, but I think I just need to tweak how I create my canvas above and I should be fine.
I am trying to take any three.js geometry and subdivide its existing faces into smaller faces. This would essentially give the geometry a higher "resolution". There is a subdivision modifier tool in the examples of three.js that works great for what I'm trying to do, but it ends up changing and morphing the original shape of the geometry. I'd like to retain the original shape.
View the Subdivision Modifier Example
Example of how the current subdivision modifier behaves:
Rough example of how I'd like it to behave:
The subdivision modifier is applied like this:
let originalGeometry = new THREE.BoxGeometry(1, 1, 1);
let subdivisionModifier = new THREE.SubdivisionModifier(3);
let subdividedGeometry = originalGeometry.clone();
subdivisionModifier.modify(subdividedGeometry);
I attempted to dig around the source of the subdivision modifier, but I wasn't sure how to modify it to get the desired result.
Note: The subdivision should be able to be applied to any geometry. My example of the desired result might make it seem that a three.js PlaneGeometry with increased segments would work, but I need this to be applied to a variety of geometries.
Based on the suggestions in the comments by TheJim01, I was able to dig through the original source and modify the vertex weight, edge weight, and beta values to retain the original shape. My modifications should remove any averaging, and put all the weight toward the source shape.
There were three sections that had to be modified, so I went ahead and made it an option that can be passed into the constructor called retainShape, which defaults to false.
I made a gist with the modified code for SubdivisionGeometry.js.
View the modified SubdivisionGeometry.js Gist
Below is an example of a cube being subdivided with the option turned off, and turned on.
Left: new THREE.SubdivisionModifier(2, false);
Right: new THREE.SubdivisionModifier(2, true);
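Hypothetically, using the option mirrors the original example, just with the extra flag:

let originalGeometry = new THREE.BoxGeometry(1, 1, 1);
let modifier = new THREE.SubdivisionModifier(2, true); // 2 subdivisions, retain shape
let subdividedGeometry = originalGeometry.clone();
modifier.modify(subdividedGeometry);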
If anyone runs into any issues with this or has any questions, let me know!
The current version of three.js has optional parameters for PlaneGeometry that specify the number of segments for the width and height; both default to 1. In the example below I set both widthSegments and heightSegments to 128. This has a similar effect to using SubdivisionModifier. In fact, SubdivisionModifier distorts the shape, whereas specifying the segments does not, which works better for me.
var widthSegments = 128;
var heightSegments = 128;
var geometry = new THREE.PlaneGeometry(10, 10, widthSegments, heightSegments);
// var geometry = new THREE.PlaneGeometry(10, 10); // segments default to 1
// var modifier = new THREE.SubdivisionModifier( 7 );
// geometry = modifier.modify(geometry);
https://threejs.org/docs/#api/en/geometries/PlaneGeometry
X and y are the coordinates of the object; z is always 0. How can I move this object to a new location (with a visible move animation, not popping up in a different location) using Three.js?
EDIT: Code example of object (mesh)
var mesh = new THREE.Mesh(new THREE.PlaneBufferGeometry(2, 2), new THREE.MeshBasicMaterial({ map: THREE.ImageUtils.loadTexture('img.png') })); // the texture option is 'map', not 'img'
scene.add(mesh)
EDIT 2: I can make my mesh jump to a new position with mesh.position.x = newx and mesh.position.y = newy, but I want it to look smooth, like jQuery animate().
The key to animating any kind of object is to move it small amounts, at a high refresh rate.
This means rendering the scene many times but moving the object in the direction you wish a little bit per frame.
e.g.
var direction = new THREE.Vector3(0.3, 0.5, 0); // amount to move per frame

function animate() {
    object.position.add(direction); // add to position
    renderer.render(scene, camera); // render new frame (scene first, then camera)
    requestAnimationFrame(animate); // keep looping
}
requestAnimationFrame(animate);
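If you want the object to ease toward a fixed destination instead (closer to how jQuery animate() feels), one common variant is to lerp toward it each frame. Here newX, newY, and the 0.05 easing factor are my own placeholders:

var target = new THREE.Vector3(newX, newY, 0); // destination coordinates

function animate() {
    object.position.lerp(target, 0.05); // move 5% of the remaining distance per frame
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}
requestAnimationFrame(animate);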
I am currently working on a small project using the new Babylon.js framework. One of the issues I have run into is that I basically have two meshes: one is supposed to be the background, and the other is supposed to follow the cursor to mark where on the background mesh you are targeting. The problem is that when I move the targeting mesh to the position of the cursor, it blocks the background mesh when I use scene.pick, resulting in the targeting mesh having its position set onto itself.
Is there any way to ignore the targeting mesh when using scene.pick so that I only pick the background mesh or is there some other method I could use? If not, what would be the steps to implement this sort of feature to essentially raycast only through certain meshes?
If you need code samples or any other forms of description, let me know. Thanks!
Ok, it's easy.
So, we have two meshes. One is called "ground", the second "cursor". If you want to pick only on the ground, you have two solutions:
First:
var ground = new BABYLON.Mesh("ground", scene);
ground.isPickable = true;
var cursor = new BABYLON.Mesh("cursor", scene);
cursor.isPickable = false;
...
var p = scene.pick(event.clientX, event.clientY); // it returns only "isPickable" meshes
...
Second:
var ground = new BABYLON.Mesh("ground", scene);
var cursor = new BABYLON.Mesh("cursor", scene);
...
var p = scene.pick(event.clientX, event.clientY, function (mesh) {
    return mesh.name == "ground"; // so only ground will be pickable
});
...
Regards.