I wanted to know how to write an A-Frame component, usable on any entity, that defines whether the entity is seen by the camera, like a boolean attribute:
"isSeen" = true || false
I tried with trigonometry (knowing the rotation of the camera and the entities' positions), but I failed.
How about frustums: checking whether a point (x, y, z) is within the camera's field of view?
The code is quite simple. To use it within A-Frame, you could create a component that checks on each render loop whether the point is seen:
AFRAME.registerComponent('foo', {
  tick: function() {
    if (this.el.sceneEl.camera) {
      var cam = this.el.sceneEl.camera;
      var frustum = new THREE.Frustum();
      frustum.setFromMatrix(new THREE.Matrix4().multiplyMatrices(cam.projectionMatrix, cam.matrixWorldInverse));
      // Your 3D point to check
      var pos = new THREE.Vector3(x, y, z);
      if (frustum.containsPoint(pos)) {
        // Do something with the position...
      }
    }
  }
});
Check it out in my fiddle
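If you want the component to expose the result as the boolean attribute from the question, a minimal sketch could look like this (the component name is-seen, the reuse of scratch objects, and reading the entity's own world position are my assumptions; note that newer three.js releases renamed setFromMatrix to setFromProjectionMatrix):

AFRAME.registerComponent('is-seen', {
  init: function () {
    // scratch objects, allocated once to avoid garbage in the render loop
    this.frustum = new THREE.Frustum();
    this.matrix = new THREE.Matrix4();
    this.worldPos = new THREE.Vector3();
  },
  tick: function () {
    var cam = this.el.sceneEl.camera;
    if (!cam) { return; }
    this.matrix.multiplyMatrices(cam.projectionMatrix, cam.matrixWorldInverse);
    this.frustum.setFromMatrix(this.matrix);
    // test the entity's own world position instead of a hard-coded point
    this.el.object3D.getWorldPosition(this.worldPos);
    this.el.setAttribute('isSeen', this.frustum.containsPoint(this.worldPos));
  }
});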
I'm trying to use pose estimation coordinates to animate a rigged model in three.js. The pose estimation tech I'm using provides real-time x, y, z coordinates of a person in a video feed, and I'm trying to use those to move the 3D model accordingly. I used the code below (some of which I found in an answer to a related question) as a starting point...
let camera, scene, renderer, clock, rightArm;

init();
animate();

function init() {
  camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.01, 10);
  camera.position.set(2, 2, -2);
  clock = new THREE.Clock();
  scene = new THREE.Scene();
  scene.background = new THREE.Color(0xffffff);
  const light = new THREE.HemisphereLight(0xbbbbff, 0x444422);
  light.position.set(0, 1, 0);
  scene.add(light);
  // model
  const loader = new THREE.GLTFLoader();
  loader.load('https://threejs.org/examples/models/gltf/Soldier.glb', function(gltf) {
    const model = gltf.scene;
    rightArm = model.getObjectByName('mixamorigRightArm');
    scene.add(model);
  });
  renderer = new THREE.WebGLRenderer({
    antialias: true
  });
  renderer.setPixelRatio(window.devicePixelRatio);
  renderer.setSize(window.innerWidth, window.innerHeight);
  renderer.outputEncoding = THREE.sRGBEncoding;
  document.body.appendChild(renderer.domElement);
  window.addEventListener('resize', onWindowResize, false);
  const controls = new THREE.OrbitControls(camera, renderer.domElement);
  controls.target.set(0, 1, 0);
  controls.update();
}

function onWindowResize() {
  camera.aspect = window.innerWidth / window.innerHeight;
  camera.updateProjectionMatrix();
  renderer.setSize(window.innerWidth, window.innerHeight);
}

// This was my attempt at deriving the rotation from two Vector3s and applying it to the model.
// storedresults is simply an array where I store the pose estimation data for a given position.
// getPosition is just a helper function for getting the Vector3 for a specific position.
function setRightArmRotation() {
  if (rightArm) {
    if (storedresults === undefined || storedresults.length == 0) {
      return;
    } else {
      if (vectorarray.length < 2) {
        vectorarray.push(getPosition(12));
      } else {
        vectorarray.pop();
        vectorarray.push(getPosition(12));
        var quaternion = new THREE.Quaternion();
        quaternion.setFromUnitVectors(vectorarray[0], vectorarray[1]);
        var matrix = new THREE.Matrix4();
        matrix.makeRotationFromQuaternion(quaternion);
        rightArm.applyMatrix4(matrix);
      }
    }
  }
}

function animate() {
  requestAnimationFrame(animate);
  const t = clock.getElapsedTime();
  if (rightArm) {
    rightArm.rotation.z += Math.sin(t) * 0.005;
    //setRightArmRotation()
  }
  renderer.render(scene, camera);
}
<script src="https://cdn.jsdelivr.net/npm/three@0.125.2/build/three.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three@0.125.2/examples/js/loaders/GLTFLoader.js"></script>
<script src="https://cdn.jsdelivr.net/npm/three@0.125.2/examples/js/controls/OrbitControls.js"></script>
I also referred to this answer on finding rotations from two vectors, but I haven't been successful in implementing it to achieve the desired results:
How to find rotation matrix between two vectors
I can get the Vector3 from the pose estimation tech easily, and I understand how most of what is in the jsfiddle works, but I can't seem to put it all together to get the desired result of having my 3D model 'mirror' the movement in my video using the pose estimation coords. I pretty much can just get the model to 'thrash around'.
As I understand it, I need to manipulate the rotations of the bones to achieve the desired results, and to do that I need to compute those rotations from two vectors; but again, after much research and trial and error, I just can't seem to put it all together. Any help would be appreciated.
This is a rather big answer, so I'll split it into smaller parts.
I'm going to go out on a limb and guess you're using PoseNet (the TensorFlow library).
I've had this issue before, and it is very complicated to come up with a solution, but here's what worked for me.
First, using PoseNet, match each bone in the mesh to a bone in the keypoints array.
Then I created a Skeleton class that recreated a skeleton in primitive terms; each joint (for instance the elbow) had a separate class. Each of these had max hyperextension and retraction values: it took a series of Vector3 values describing the min and max x/y/z rotation (similar to the range of movement we have) and converted them from degrees to radians.
From there, I applied all the rotations to each bone. The rotations were nowhere near perfect, though, as the arms would often rotate inside the body. To combat this, I created a bounding box around each bone (to the outer mesh) and did collision detection on each part until the bone no longer collided with another part of the mesh. I had a tolerance score for each body part (i.e. allow the arms or the legs to be crossed).
From there I had a "somewhat" good estimate, which was applied to the 3D mesh.
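As a minimal sketch of the degrees-to-radians clamping idea (the Joint class and all names here are hypothetical, not from my actual code):

const DEG2RAD = Math.PI / 180;

// Hypothetical joint wrapper: clamps a pose-estimated rotation to a
// plausible range of motion before applying it to the skeleton bone.
class Joint {
  constructor(bone, minDeg, maxDeg) {
    this.bone = bone; // a THREE.Bone, e.g. model.getObjectByName('mixamorigRightArm')
    this.min = minDeg.clone().multiplyScalar(DEG2RAD); // per-axis minima, converted to radians
    this.max = maxDeg.clone().multiplyScalar(DEG2RAD); // per-axis maxima, converted to radians
  }
  setRotation(euler) {
    // clamp each axis so the bone cannot bend past its joint limits
    this.bone.rotation.set(
      THREE.MathUtils.clamp(euler.x, this.min.x, this.max.x),
      THREE.MathUtils.clamp(euler.y, this.min.y, this.max.y),
      THREE.MathUtils.clamp(euler.z, this.min.z, this.max.z)
    );
  }
}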
I've got some of the original stuff on github if you'd like to see an example:
https://github.com/hudson1998x/2D-To-3D-Pose-Estimation/
I have a scene including an Object3D representing a globe and multiple mesh elements representing points on this globe. I use OrbitControls to allow interaction, and I attach HTMLElements to the points on the globe. Since a globe is basically a sphere, points might not be visible to the camera when placed on the back.
How can I detect whether such a point is visible to the camera or hidden by the object? I want to hide the HTMLElement according to its mesh's visibility. The HTMLElement's position is updated on render, so I assume this check should happen on render as well:
private render() {
  this.renderer.render(this.scene, this.camera);

  this.points.forEach(({ label, mesh }) => {
    const screen = this.toScreenPosition(mesh);
    label.style.transform = `translate3d(${screen.x - 15}px, ${screen.y}px, 0)`;
  });

  this.requestId = window.requestAnimationFrame(this.render.bind(this));
}
Working code within render:
this.points.forEach(({ label, mesh }) => {
  const screen = this.toScreenPosition(mesh);
  label.style.transform = `translate3d(${screen.x - 15}px, ${screen.y}px, 0)`;

  const direction = new Vector3();
  direction.copy(mesh.position).sub(this.camera.position).normalize();
  this.raycaster.set(this.camera.position, direction);
  const intersections = this.raycaster.intersectObject(this.scene, true);
  const intersected = intersections.length > 0 ? intersections[0].object.uuid === mesh.uuid : false;

  if (intersected && label.style.opacity === "0") {
    label.style.opacity = "1";
  } else if (!intersected && label.style.opacity === "1") {
    label.style.opacity = "0";
  }
});
I recommend a simple algorithm with two steps:
First, check if the given point is in the view frustum at all. The code for implementing this feature is shared in: three.js - check if object is still in view of the camera.
If the test passes, you have to verify whether the point is occluded by a 3D object. A typical way of checking this is a line-of-sight test: set up a raycaster from your camera's position with the direction that points from the camera to the given point, then test whether any 3D objects in your scene intersect this ray. If there is no intersection, the point is not occluded; otherwise it is, and you can hide the respective label.
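A minimal sketch of that line-of-sight test (the isOccluded helper and its argument names are my own, not from your code; it mirrors the raycasting in your render loop):

const raycaster = new THREE.Raycaster();
const direction = new THREE.Vector3();

function isOccluded(mesh, camera, scene) {
  // cast a ray from the camera towards the point's mesh
  direction.copy(mesh.position).sub(camera.position).normalize();
  raycaster.set(camera.position, direction);
  const hits = raycaster.intersectObject(scene, true);
  // occluded if the ray hits something else before reaching the mesh
  return hits.length > 0 && hits[0].object !== mesh;
}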
All I want to do is load an OBJ file and translate its coordinates to the world origin (0, 0, 0) so that orbit controls work perfectly (no pivot points, please).
I'd like to load random OBJ objects with different geometries/center points and have them translated automatically to the scene origin. In other words, a 'hard-coded' translate solution for a specific model won't work.
This has got to be one of the most common scenarios for three.js (a basic 3D object viewer), so I'm surprised I can't find a definitive solution on SO.
Unfortunately there are a lot of older answers with deprecated functions, so I would really appreciate a new answer even if there are similar solutions out there.
Things I've tried
The code below fits the object nicely to the camera, but doesn't solve the translation/orbiting problem.
// fit camera to object
var bBox = new THREE.Box3().setFromObject(scene);
var height = bBox.size().y;
var dist = height / (2 * Math.tan(camera.fov * Math.PI / 360));
var pos = scene.position;
// fudge factor so the object doesn't take up the whole view
camera.position.set(pos.x, pos.y, dist * 0.5);
camera.lookAt(pos);
Apparently geometry.center() is good for translating an object's coordinates back to the origin (THREE.GeometryUtils.center has been replaced by geometry.center()), but I keep getting errors when trying to use it.
When loading OBJs, geometry has now been replaced by BufferGeometry. I can't seem to cast the BufferGeometry into a Geometry in order to use the center() function. Do I have to place this in the object traverse > child loop, like so? This seems unnecessarily complicated.
geometry = new THREE.Geometry().fromBufferGeometry( child.geometry );
My code is just a very simple OBJLoader.
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
  object.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      child.material = material;
    }
  });
  scene.add(object);
});
(BTW first real question on SO so forgive any formatting / noob issues)
Why not object.geometry.center()?
var objLoader = new THREE.OBJLoader();
objLoader.setPath('assets/');
objLoader.load('BasketballNet_Skull.obj', function (object) {
  object.traverse(function (child) {
    if (child instanceof THREE.Mesh) {
      child.material = material;
      child.geometry.center();
    }
  });
  scene.add(object);
});
OK, figured this out, using some very useful functions from Meshviewer Master, an older three.js object viewer.
https://github.com/ideesculture/meshviewer
All credit to Gautier Michelin for this code
https://github.com/gautiermichelin
After loading the OBJ, you need to do three things:
1. Create a Bounding Box based on the OBJ
boundingbox = new THREE.BoundingBoxHelper(object, 0xff0000);
boundingbox.update();
sceneRadiusForCamera = Math.max(
  boundingbox.box.max.y - boundingbox.box.min.y,
  boundingbox.box.max.z - boundingbox.box.min.z,
  boundingbox.box.max.x - boundingbox.box.min.x
) / 2 * (1 + Math.sqrt(5)); // golden ratio to beautify the display
2. Set up the camera based on this bounding box / scene radius
function showFront() {
  if (objectCopy !== undefined) objectCopy.rotation.z = 0;
  controls.reset();
  camera.position.z = 0;
  camera.position.y = 0;
  camera.position.x = sceneRadiusForCamera;
  camera.lookAt(scene.position);
}
(the mesh viewer code also contains functions for viewing left, top, etc.)
3. Reposition the OBJ to the scene origin
Like any centering exercise, the new position on each axis is the negated bounding-box minimum minus half the size (i.e. minus the box center):
function resetObjectPosition() {
  boundingbox.update();
  size.x = boundingbox.box.max.x - boundingbox.box.min.x;
  size.y = boundingbox.box.max.y - boundingbox.box.min.y;
  size.z = boundingbox.box.max.z - boundingbox.box.min.z;

  // Repositioning object
  objectCopy.position.x = -boundingbox.box.min.x - size.x / 2;
  objectCopy.position.y = -boundingbox.box.min.y - size.y / 2;
  objectCopy.position.z = -boundingbox.box.min.z - size.z / 2;
  boundingbox.update();

  if (objectCopy !== undefined) objectCopy.rotation.z = 0;
}
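(For reference, with current three.js releases the same centering can be sketched more directly with Box3 — this is an alternative I'm adding, not part of the mesh viewer code:)

// Center the loaded object on the world origin.
const box = new THREE.Box3().setFromObject(object);
const center = box.getCenter(new THREE.Vector3());
object.position.sub(center);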
From my understanding of your question, you want objects added to the scene to be at the origin of the camera view. I believe the common way of achieving an object-viewer solution is to add camera controls to the camera in your scene, mostly THREE.OrbitControls, and to specify the target for the camera as the object you want to focus on. This puts the focused object in the center, and the camera rotation and movement are based on that object.
Three.js r.71
I'm just getting into three.js (awesome, btw) but am having an issue. I am trying to stream geometry and position/scale/rotation changes between clients using Socket.IO and Node.js. On the server I store the JSON representation of the scene and stream object changes between clients.
When the object's matrix changes (position, scale, rotation), I stream the new matrix to the server and forward it to the other clients. On the other clients I call applyMatrix() with the streamed object (the source object's matrix).
The problem I ran into is that when calling applyMatrix(sourceMatrix), it seems to multiply the existing scale by the scale found in sourceMatrix. For example, when the current object has a scale of x:2, y:1, z:1 and I apply a matrix with the same scale, after calling applyMatrix the destination object's scale is x:4, y:1, z:1.
This seems like a bug to me, but wanted to double check.
// Client JS:
client.changeMatrix = function (object) {
  // Set the object's scale to x:2 y:1 z:1 then call this twice.
  var data = {uuid: object.uuid, matrix: object.matrix};
  socket.emit('object:changeMatrix', data);
};

socket.on('object:matrixChanged', function (data) {
  var cIdx = getChildIndex(data.uuid);
  if (cIdx >= 0) {
    scene.children[cIdx].applyMatrix(data.matrix);
    // At this point, the object's scale is incorrect
    ng3.viewport.updateSelectionHelper();
    ng3.viewport.refresh();
  }
});

// Server JS:
socket.on('object:changeMatrix', function (data) {
  socket.broadcast.emit('object:matrixChanged', data);
});
@WestLangley is correct; I really did not understand what applyMatrix is doing (and still don't quite know what it is used for).
I solved my problem by manually setting each element in the source matrix and calling decompose:
// Client JS:
socket.on('object:matrixChanged', function (data) {
  var cIdx = getChildIndex(data.uuid);
  var child = null;
  var key;

  if (cIdx >= 0) {
    child = scene.children[cIdx];
    for (key in child.matrix.elements) {
      if (child.matrix.elements.hasOwnProperty(key)) {
        child.matrix.elements[key] = data.matrix.elements[key];
      }
    }
    child.matrix.decompose(child.position, child.quaternion, child.scale);
  }
});
Unfortunately, once the matrix object has been through the server, calling:
child.matrix.copy(data.matrix);
does not work. That's why I ended up setting each element manually.
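For what it's worth, applyMatrix composes the incoming matrix with the object's current matrix instead of replacing it, which is why the scale kept multiplying. And since the matrix arrives as plain JSON rather than a THREE.Matrix4 instance, rebuilding it with fromArray is a possible shortcut for the element-by-element loop (a sketch, assuming data.matrix.elements survives transport as a flat 16-element array):

// Rebuild the matrix from the serialized elements, then decompose it so
// position/quaternion/scale are replaced rather than accumulated.
child.matrix.fromArray(data.matrix.elements);
child.matrix.decompose(child.position, child.quaternion, child.scale);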
Trying out http://www.goxtk.com, great stuff!
Is there a quick way to get the bounding box for a model or some other point that could be used as the center of rotation of the camera? Worst case, what's the best way to loop over the points? Thanks for any replies!
It is possible to query each X.object() for its centroid, like this:
...
r = new X.renderer('r');
r.init();

o = new X.object();
o.load('test.vtk');

r.add(o);
r.render();

r.onShowtime = function() {
  // print the centroid
  console.log(o.points().centroid());
};
...
You have to override the onShowtime function of the X.renderer to be sure that the X.object was properly set up (.vtk file loaded etc.).
To configure the camera, you can, for example, do the following:
...
r.camera().setPosition(-400,0,0); // set the position
r.camera().setFocus(-10,-10,-10); // set the focus point
r.camera().setUp(1,0,0); // set the (normalized) up vector
r.render();
...
Anyway, to loop over the points:
...
// o is an X.object
var numberOfPoints = o.points().count();
var pointArrayLength = o.points().length(); // equals numberOfPoints * 3
var allPoints = o.points().all(); // as a flat 1D array optimized for WebGL
// just loop it :)
...
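Since the array is flat, the loop itself could look like this (a sketch using the variables above):

// each point occupies three consecutive slots in the flat array
for (var i = 0; i < pointArrayLength; i += 3) {
  var x = allPoints[i];
  var y = allPoints[i + 1];
  var z = allPoints[i + 2];
  // ... do something with (x, y, z)
}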