I have a WebVR project that used resetPose to reset the origin of the scene, but that method is apparently now deprecated. I used it to reset the camera so the user would look at the center of the scene again.
I assume this function isn't too hard to replicate: you either have to rotate the scene or the camera back to the original origin. However, I'm not that experienced with WebVR or, more importantly, THREE.js.
I've tried pointing the camera at the center of the scene with lookAt, but I think the problem is that WebVR has control over the camera, so I can't just move it.
Example initialisation of the camera and scene:
// Create a three.js camera.
camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 10000);
camera.name = "Perspective Camera";
var group = new THREE.Group();
group.name = "Camera Group";
group.rotateY(Math.PI);
group.add(camera);
group.position.set(0, configuration.sphereRadius, 0);
group.permanent = true;
scene.add(group);
// Apply VR headset positional data to camera.
controls = new THREE.VRControls(camera);
According to this issue, immersive-web/webxr dropped support for the resetPose method. They state that devices have a hardware button to force a pose reset, or that resetting the pose should be handled by the application. Three.js doesn't have such a method.
Here is a simple but dirty way to reset the pose (recenter / reset the camera): exit VR and immediately enter it again.
async function recenter() {
  if (renderer.xr.isPresenting) {
    let session = renderer.xr.getSession();
    let buttonVR = document.getElementById("VRButton");
    // Ending the session and re-entering makes the runtime treat
    // the current headset pose as the new origin.
    await session.end();
    buttonVR.click();
  }
}
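Alternatively, the pose can be reset by the application itself, as the WebXR folks suggest. With the setup from the question, VRControls writes the headset pose straight into camera, so instead of moving the camera you can counter-rotate its parent group. A minimal, untested sketch (the sign conventions may need adjusting for your scene):
function recenterByGroup() {
  // Heading (yaw) of the headset relative to the group; the 'YXZ' order
  // isolates yaw so the user's pitch and roll are left untouched.
  var headsetYaw = new THREE.Euler().setFromQuaternion(camera.quaternion, 'YXZ').y;
  // Math.PI is the group's initial yaw from the setup above; this restores
  // the original world heading whichever way the user is currently facing.
  group.rotation.set(0, Math.PI - headsetYaw, 0);
}
Wire either function to a key press or controller event to test it.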
Related
In this example:
https://glitch.com/~query-aframe-cameras
I have registered a component which launches a projectile in the direction the user is looking (with a little boost for elevation)
Spacebar or Screen Tap to launch - be sure to be looking above the horizon!
It fails in mobile VR (stereo camera) mode:
Projectiles continue to fire, but from the default orientation of the mono camera, not from the stereo camera.
I'm using:
var cam = document.querySelector('a-scene').camera.el.object3D;
var camVec = new THREE.Vector3();
var camDir = cam.getWorldDirection(camVec);
to get the camera information and fire the projectiles in that direction.
QUERY: how do I get the stereo camera information?
The problem is due to this bug in THREE.js:
https://github.com/mrdoob/three.js/issues/18448
Following the documented workaround, the code here works with correct direction tracking on my Quest 2.
https://cuddly-endurable-soap.glitch.me
(There are a couple of other changes in here: to test this on my VR headset I also had to upgrade to A-Frame 1.1.0 and make the code fire the balls on a timer, since I don't have any keys to press when using it! Neither of these was enough to fix the problem on its own, though.)
This code will set up camDir as you need it:
var camQ = new THREE.Quaternion();
// Make sure matrixWorld reflects the latest headset pose.
cam.updateMatrixWorld();
camQ.setFromRotationMatrix(cam.matrixWorld);
// Rotate the local +Z axis into world space to get the view direction.
var camDir = new THREE.Vector3(0, 0, 1);
camDir.applyQuaternion(camQ);
The following code also works (and is more efficient), but is less easy to understand.
var camDir = new THREE.Vector3();
var e = cam.matrixWorld.elements;
// Elements 8-10 are the third column of the world matrix.
camDir.set(e[8], e[9], e[10]).normalize();
I picked it up from here:
https://lace-fern-ghost.glitch.me/
Which I found a link to from here:
https://github.com/aframevr/aframe/issues/4412
(but I can't find any reference to explain why these matrixWorld.elements are the right ones to use here to get the direction).
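For what it's worth, Matrix4.elements in THREE.js is stored in column-major order, so elements 8, 9 and 10 are the third column of the matrix: the object's local +Z axis expressed in world space, which is exactly what the quaternion version computes. A quick sanity check (reusing cam from above) shows the two approaches agree:
var camQ = new THREE.Quaternion().setFromRotationMatrix(cam.matrixWorld);
var viaQuaternion = new THREE.Vector3(0, 0, 1).applyQuaternion(camQ);
var e = cam.matrixWorld.elements;
var viaMatrix = new THREE.Vector3(e[8], e[9], e[10]).normalize();
// Should print ~0, assuming the camera has unit scale.
console.log(viaQuaternion.distanceTo(viaMatrix));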
I'm following along with this YouTube tutorial https://www.youtube.com/watch?v=axGQAMqsxdw to create a POV game. I got to episode 8 of the tutorial, and then Google Chrome began showing a white screen no matter what I changed in the JavaScript. There are no warnings or errors when inspecting the code with Chrome's developer tools either.
I've tried starting from scratch with completely new files. I've cleared Chrome's caches. I've double-checked the source code against my altered code, but none of my alterations seem like they would cause a problem (I am new to JavaScript though, so that could be the problem). Below is the newly started JS file, based 100% on the YouTube tutorial.
var scene, camera, renderer, mesh;

function init(){
  scene = new THREE.Scene();
  scene.background = new THREE.Color( 0x8CD9FF );
  camera = new THREE.PerspectiveCamera(90, 1280/720, 0.1, 10);
  mesh = new THREE.Mesh(
    new THREE.BoxGeometry(1,1,1),
    new THREE.MeshBasicMaterial({color: 0xff9999, wireframe: true})
  );
  scene.add(mesh);
  renderer = new THREE.WebGLRenderer();
  renderer.setSize(1280/720);
  document.body.appendChild(renderer.domElement);
  animate();
}

function animate(){
  requestAnimationFrame(animate);
  renderer.render(scene, camera);
}

window.onload = init;
Take a look at the THREE documentation concerning the renderer size here.
You need to pass the width and height in pixels, not their ratio, so replace renderer.setSize(1280/720); with renderer.setSize(1280, 720);.
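For example, sticking with the tutorial's fixed size (the window-filling variant in the comments is a common alternative, not something from the tutorial):
renderer = new THREE.WebGLRenderer();
renderer.setSize(1280, 720); // pixels, matching the camera's 1280/720 aspect ratio
document.body.appendChild(renderer.domElement);
// Or fill the browser window instead:
// renderer.setSize(window.innerWidth, window.innerHeight);
// camera.aspect = window.innerWidth / window.innerHeight;
// camera.updateProjectionMatrix();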
I am trying to implement drag controls on this text geometry that I am creating in the viewer. I create the text like so:
createText(params) {
  const textGeometry = new TextGeometry(params.text,
    Object.assign({}, {
      font: new Font(FontJson),
      params
    }));
  const geometry = new THREE.BufferGeometry();
  geometry.fromGeometry(textGeometry);
  const material = this.createColorMaterial(params.color);
  const text = new THREE.Mesh(geometry, material);
  text.scale.set(params.scale, params.scale, params.scale);
  text.position.set(params.position.x, params.position.y, 10);
  this.intersectMeshes.push(text);
  this.viewer.impl.scene.add(text);
  this.viewer.impl.sceneUpdated(true);
  return text;
}
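For reference, a call to this helper might look like this (the values are hypothetical; the shape matches the parameters the function reads):
const label = this.createText({
  text: 'Room 101',
  color: 0xff0000,
  scale: 0.5,
  position: { x: 10, y: 5 } // z is hard-coded to 10 inside createText
});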
This works great: the meshes get added to the viewer and I can see them. Fantastic! Now I want to be able to drag them around with my mouse after I have added them. I noticed that Three.js already has drag controls built in, so I implemented them like so:
enableDragging(){
  let controls = new THREE.DragControls( this.viewer, this.viewer.impl.camera.perspectiveCamera, this.viewer.impl.canvas );
  controls.addEventListener( 'dragstart', dragStartCallback );
  let startColor;
  controls.addEventListener( 'dragend', dragendCallback );
  function dragStartCallback(event) {
    startColor = event.object.material.color.getHex();
    event.object.material.color.setHex(0x000000);
  }
  function dragendCallback(event) {
    // THREE.Color has no setColor method; setHex restores the saved color.
    event.object.material.color.setHex(startColor);
  }
}
After a bit of debugging, I have seen where the problem occurs. For some reason, when I click on one of the meshes, the raycaster doesn't find any intersections, i.e. the array I get back is empty, no matter where I click on these objects.
Is my implementation wrong, or did I provision these meshes wrong to make them draggable? I have gotten the drag controls to work outside of the viewer, just not within it.
This will not work. Looking at the code of DragControls, the viewer's implementation is too different in the way it handles the camera. You would need to either implement a custom version of DragControls or take a look at my transform tool and adapt it for custom meshes:
Moving visually your components in the viewer using the TransformTool
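As a starting point for a custom version, the picking itself can be done with a plain THREE.Raycaster against your own meshes. This is only a sketch, using the objects the question already references (viewer.impl.camera, viewer.impl.canvas), not an official viewer API:
function hitTest(event, camera, canvas, meshes) {
  // Convert the mouse position to normalized device coordinates (-1..+1).
  const rect = canvas.getBoundingClientRect();
  const mouse = new THREE.Vector2(
    ((event.clientX - rect.left) / rect.width) * 2 - 1,
    -((event.clientY - rect.top) / rect.height) * 2 + 1
  );
  const raycaster = new THREE.Raycaster();
  raycaster.setFromCamera(mouse, camera);
  return raycaster.intersectObjects(meshes);
}
// e.g. hitTest(event, this.viewer.impl.camera.perspectiveCamera,
//              this.viewer.impl.canvas, this.intersectMeshes);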
I'm using Babylon.js 2.4.0.
I have a mesh (in the shape of a couch) loaded from a .obj file, and a camera set up like this:
let camera = new BABYLON.FreeCamera('camera1', new BABYLON.Vector3(0, 2, 0), scene);
camera.checkCollisions = true;
camera.applyGravity = true;
camera.ellipsoid = new BABYLON.Vector3(1, 1, 1);
camera.attachControl(canvas, false);
camera.speed = 0.5;
camera.actionManager = new BABYLON.ActionManager(scene);
I want to set up an event so that when I walk through the couch, "intersection" is logged to the console:
let action = new BABYLON.ExecuteCodeAction(
  { trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: { mesh: couchMesh } },
  (evt) => {
    console.log("intersection");
  }
);
camera.actionManager.registerAction(action);
When I walk through the mesh, nothing is logged to the console.
I've created an example on the Babylon.js Playground using an example that they provide to check that it wasn't a problem with my mesh or camera set up, and it doesn't appear to be (the playground doesn't work either).
A camera in Babylon.js has no action manager, so even if you set one it won't really work.
To get this to work using action managers, you could define an invisible box around the camera with a predefined size and attach the action manager to that mesh. Then set the mesh's parent to be the camera, and you are done. Here is your playground with those changes - http://www.babylonjs-playground.com/#KNXZF#3
Another solution is to use the internal collision system of Babylon.js and set the camera's onCollide function to actually do something :) Here is an example - http://www.babylonjs-playground.com/#KNXZF#4
Notice that in the second playground the camera won't go through the box, as the collision system prevents it from doing so. I am not sure about your use case, so it is hard to say which of the two will work better.
If you need a "gate" system (knowing when a player moved through a gate, for example), use the first method. The second is much cleaner, but has its downsides.
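A minimal sketch of the first method (Babylon 2.4-era API, with couchMesh from the question):
// Invisible box that follows the camera and carries the action manager,
// since the camera itself cannot have one.
let cameraBox = BABYLON.Mesh.CreateBox("cameraBox", 2, scene);
cameraBox.isVisible = false;
cameraBox.parent = camera;
cameraBox.actionManager = new BABYLON.ActionManager(scene);
cameraBox.actionManager.registerAction(new BABYLON.ExecuteCodeAction(
  { trigger: BABYLON.ActionManager.OnIntersectionEnterTrigger, parameter: { mesh: couchMesh } },
  () => { console.log("intersection"); }
));
// And the second method, via the collision system:
// camera.onCollide = (mesh) => { console.log("collided with", mesh.name); };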
I recently started to work with WebGL by using the Three.js library.
I tried to create some sort of image viewer, but I just can't seem to figure out how to do it right.
I want to display the image, a texture fetched from an HTML5 canvas, on a PlaneGeometry object.
My code looks like this:
var RunWebGL = function(img, canvas){
  var renderer = new THREE.WebGLRenderer();
  renderer.setSize(img.width, img.height);
  document.body.append(renderer.domElement);

  //Init the Scene;
  var scene = new THREE.Scene();

  //Init the Camera;
  var camera = new THREE.PerspectiveCamera(70, img.width/img.height, 1, 1000);
  scene.add(camera);

  //Init the texture from the image displayed in the canvas. Then use this image to create the Mesh used to give the created plane a texture;
  var texture = new THREE.Texture(canvas);
  var material = new THREE.MeshBasicMaterial({map : texture});
  var geometry = new THREE.PlaneGeometry(img.widht, img.height);
  var mesh = new THREE.Mesh(geometry, material);

  //Init the Geometry and the texture(material) and mesh it together;
  scene.add(mesh);
  renderer.setClearColor(0xEEEEEE);
  texture.needsUpdate = true;
  renderer.render(scene, camera);
};
The HTML5 canvas, which provides the image, displays it as expected.
Everything I get from the WebGL part, though, is a black square every time this code executes. While searching for an answer to this problem, I added renderer.setClearColor(); - which hasn't changed anything, except that I'm now getting a smooth grey instead of a dead black square.
I'm getting no errors/warnings/whatsoever.
The img object that is passed into my RunWebGL function is a standard JavaScript Image object.
If I could use this image as a texture instead of the canvas, I would be really happy. But an answer using the canvas, or even a hint, could help to decrease my struggle.
// Edit: I have created a JSFiddle that shows my problem. Here's the link: https://jsfiddle.net/a5ufknLu/2/
Cheers and thanks in advance,
Michael