Does anyone know a way to exclude entities from the camera fuse?
I would like to use the fuse cursor to trigger click events, but I don't want the cursor to fuse on every single entity.
I'm sure there is a way to do what I want, but I can't find it.
Just so people know: if you use raycaster and cursor on an entity, you need to set the raycaster before the cursor:
<a-entity raycaster="objects: .clickable" cursor> WORKS!
<a-entity cursor raycaster="objects: .clickable"> DOESN'T!
https://aframe.io/docs/0.4.0/components/cursor.html#configuring-the-cursor-through-the-raycaster-component
The cursor builds on top of and depends on the raycaster component. If we want to customize the raycasting pieces of the cursor, we can do so by changing the raycaster component properties. Say we want to set a max distance, check for intersections less frequently, and set which objects are clickable:
https://aframe.io/docs/0.4.0/components/raycaster.html#whitelisting-entities-to-test-for-intersection
To select or pick the entities we want to test for intersection, we can use the objects property. If this property is not defined, then the raycaster will test every object in the scene for intersection. objects takes a query selector value:
<a-cursor raycaster="objects: .clickable"></a-cursor>
<a-box class="clickable"></a-box>
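Putting that together for the fuse case, a minimal sketch (the .clickable class name and the fuse-timeout value are just examples): the cursor only tests entities carrying that class, so it never fuses on anything else.

<!-- Gaze cursor that only raycasts against .clickable entities -->
<a-camera>
  <a-cursor fuse="true" fuse-timeout="1500" raycaster="objects: .clickable"></a-cursor>
</a-camera>

<a-box class="clickable" position="0 1 -3"></a-box>  <!-- fuses and fires click -->
<a-sphere position="2 1 -3"></a-sphere>              <!-- ignored by the cursor -->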
I am trying to use the look-at feature of AR.js dynamically through JS, but for some reason the 3D object is not looking out of the screen.
Answering the general question
You can set the look-at component using javascript, by adding it to any a-frame entity:
// using an existing entity as a target
entity.setAttribute("look-at", "#selector");
// using a Vector3 position as the target
entity.setAttribute("look-at", {x: 0, y: 0, z: 0});
Answering this specific case
The look-at component (and the lookAt method it uses) assumes that the model is oriented towards the z-axis.
The duck model is oriented towards the x-axis.
If it's possible, reorient the model in a 3D modelling application like Blender.
If not, you can use a parent node to reorient it:
<!-- Parent node will look at the user -->
<a-entity look-at="[camera]">
  <!-- Child node with the rotation offset -->
  <a-entity rotation="0 -90 0" gltf-model="..."></a-entity>
</a-entity>
Check it out in this glitch
If there are more models, you can create a rotation offset array, and apply it to the [gltf-model] node each time you load a new model.
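A minimal sketch of that idea (the model names, offsets, and loadModel helper below are made up for illustration):

// Hypothetical per-model rotation offsets (in degrees), keyed by model name.
var rotationOffsets = {
  duck: '0 -90 0',  // the duck faces the x-axis, so rotate it back towards z
  fox: '0 180 0'
};

function loadModel (name, src) {
  var wrapper = document.createElement('a-entity');
  wrapper.setAttribute('look-at', '[camera]');  // the parent does the looking

  var model = document.createElement('a-entity');
  model.setAttribute('gltf-model', src);        // asset selector or URL
  model.setAttribute('rotation', rotationOffsets[name] || '0 0 0');

  wrapper.appendChild(model);
  document.querySelector('a-scene').appendChild(wrapper);
}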
The <a-camera> already contains a camera. To change its properties, just use the <a-camera> attributes:
<a-camera fov="60" ....></a-camera>
You have to call the lookAt method on the object3D of your entity. Something like this:
var camera = document.querySelector("[camera]");
var position = camera.object3D.position;
entityEl.object3D.lookAt(new THREE.Vector3(position.x, position.y, position.z));
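If the entity should keep facing the camera while things move, one option is to wrap that call in a small component's tick handler. A sketch under that assumption (the face-camera component name is made up):

// Hypothetical component: keeps the entity facing the camera every frame.
AFRAME.registerComponent('face-camera', {
  init: function () {
    this.cameraWorldPos = new THREE.Vector3();
  },
  tick: function () {
    var threeCamera = this.el.sceneEl.camera;  // the active THREE.js camera
    if (!threeCamera) { return; }
    threeCamera.getWorldPosition(this.cameraWorldPos);
    this.el.object3D.lookAt(this.cameraWorldPos);
  }
});

// Usage: <a-entity face-camera gltf-model="..."></a-entity>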
I am new to AR.js and want to make a very simple demo with it. I got a 3D-scanned model from Sketchfab and put it into AR.js.
I tried light="type: point; intensity: 5.1", but the lighting doesn't look like the model on Sketchfab. When I tried light="type: ambient", the whole model is black.
<a-entity id="point_light_1" light="type: point; intensity: 5.1;" position="0 0 0"></a-entity>
Here is the example from Sketchfab and what I get. How could I get the same render effect that Sketchfab shows?
The A-Frame documentation would help you.
https://aframe.io/docs/0.9.0/components/light.html#sidebar
The effect of a point light changes according to its distance from the materials: the closer the light gets to an object, the more strongly the object is lit. Try setting the light's position nearer to the model.
When I tried with light="type: ambient", the whole model is black.
Maybe the model is in the shadowed area.
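As a rough starting point, a hedged sketch combining a dim ambient light with a point light placed close to the model (the intensity, distance, and position values here are guesses to tune against the Sketchfab render):

<!-- Soft overall fill so nothing goes fully black -->
<a-entity light="type: ambient; color: #fff; intensity: 0.6"></a-entity>
<!-- Point light close to the model for the highlights -->
<a-entity light="type: point; intensity: 1.5; distance: 50; decay: 2"
          position="0 2 1"></a-entity>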
I have six planes set up as a cube with textures to display a 360-degree JPG set. I positioned the planes 1000 away and made them 2000 (plus a little, because the photos have a tiny bit of overlap) in height and width.
The a-camera is positioned at the origin, within this cube, with WASD controls set to false, so the camera is limited to rotating in place. (I am coding on a laptop, using mouse drag to move the camera.)
I also have a sphere (invisible), placed in between the camera and the planes, and have added an event listener to it. This seemed simpler than putting event listeners on each of the six planes.
My current problem is wanting to enforce minimum and maximum tilt limits. Following is the function "handleTilt" for this purpose. The minimum tilt allowed depends on the size of the fov.
function handleTilt() {
  console.log("handleTilt called");
  var sceneEl = document.querySelector("a-scene");
  var elCamera = sceneEl.querySelector("#rotationCam");
  var camRotation = elCamera.getAttribute('rotation');
  var xTilt = camRotation['x'];
  var fov = elCamera.getAttribute('fov');
  var minTilt = -65 + fov / 2;
  camRotation['x'] = xTilt > minTilt ? xTilt : minTilt;
  // enforce maximum (straight up)
  if (camRotation['x'] > 90) {
    camRotation['x'] = 90;
  }
  console.log(camRotation);
}
The event handler is set up in this line:
<a-entity geometry="primitive:sphere" id="clickSphere"
          radius="50" position="0 0 0" mousemove="handleTilt()"></a-entity>
When I do this, a console.log call on #clickSphere shows the event handler exists. But it is never invoked when I run the program and move the mouse to drag the camera to different angles.
As an alternative, I made the #clickSphere listen for onClick as follows:
<a-entity geometry="primitive:sphere" id="clickSphere"
          radius="50" position="0 0 0" onclick="handleTilt()"></a-entity>
The only change is "mousemove" to "onclick". Now the handleTilt() function executes with each click, and if the camera was rotated to a value less than the minimum, it is put back to the minimum.
One bizarre thing, though, after clicking and adjusting the rotation a few times, the program goes into a state where I can't rotate the camera down below the minimum any more. It is as if the mousemove listener had become engaged, even though the only listener coded is the onclick. I can't for the life of me figure out why this kicks in.
Would it be possible to get some advice as to what I might be doing wrong, or a plan for troubleshooting? I'm new to aframe and JavaScript.
An alternative plan for enforcing the min and max camera tilts in real time would also be an acceptable solution.
I just pushed out this piece on the docs for ya: https://aframe.io/docs/0.6.0/components/look-controls.html#customizing-look-controls
While A-Frame’s look-controls component is primarily meant for VR with sensible defaults to work across platforms, many developers want to use A-Frame for non-VR use cases (e.g., desktop, touchscreen). We might want to modify the mouse and touch behaviors.
The best way to configure the behavior is to copy and customize the current look-controls component code. This allows us to configure the controls how we want (e.g., limit the pitch on touch, reverse one axis). If we were to include every possible configuration into the core component, we would be left maintaining a wide array of flags.
The component lives within a Browserify/Webpack context so you’ll need to replace the require statements with A-Frame globals (e.g., AFRAME.registerComponent, window.THREE, AFRAME.constants.DEFAULT_CAMERA_HEIGHT), and get rid of the module.exports.
You can modify https://github.com/aframevr/aframe/blob/master/src/components/look-controls.js to hack in your min/max just for mouse/touch.
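If you would rather not fork the whole file, a hedged alternative is a small component that clamps the pitch look-controls applies each frame. This is only a sketch: the clamp-tilt name and defaults are made up, and it reaches into the look-controls internal pitchObject, which may change between A-Frame versions.

// Hypothetical component: clamps the camera pitch applied by look-controls.
// minTilt/maxTilt are degrees; relies on the look-controls pitchObject internal.
AFRAME.registerComponent('clamp-tilt', {
  schema: {
    minTilt: {type: 'number', default: -25},
    maxTilt: {type: 'number', default: 90}
  },
  tick: function () {
    var lookControls = this.el.components['look-controls'];
    if (!lookControls || !lookControls.pitchObject) { return; }
    var min = this.data.minTilt * Math.PI / 180;
    var max = this.data.maxTilt * Math.PI / 180;
    var pitch = lookControls.pitchObject.rotation.x;
    lookControls.pitchObject.rotation.x = Math.min(Math.max(pitch, min), max);
  }
});

// Usage: <a-camera id="rotationCam" clamp-tilt="minTilt: -25; maxTilt: 90"></a-camera>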
I would like to apply the three.js script TrackballControls to a moving object in a way that preserves the ability to zoom and rotate the camera while the object moves through the scene. For example, I would like to be able to have the camera "follow" a moving planet, while the user retains the ability to zoom and rotate around the planet. Here is a basic jsfiddle:
http://jsfiddle.net/mareid/8egUM/3/
(The mouse control doesn't actually work on jsfiddle for some reason, but it does on my machine).
I would like to be able to attach the camera to the red sphere so that it moves along with it, but without losing the ability to zoom and rotate. I can get the camera to follow the sphere easily enough by adding lines like these to the render() function:
mouseControls.target.copy(object.position);
camera.position.set(object.position.x, object.position.y, object.position.z + 20);
But if I do that, the fixed camera.position line overrides the ability of TrackballControls to zoom, rotate, etc. Mathematically, I feel like it should be possible to shift the origin of all of the TrackballControls calculations to the centre of the red sphere, but I can't figure out how to do this. I've tried all sorts of vector additions of the position of the red sphere to the _this.target and _eye vectors in TrackballControls.js, to no avail.
Thanks for your help!
I'd recommend OrbitControls rather than TrackballControls. My answer here applies primarily to OrbitControls; the same principle might also work with TrackballControls (I have not looked at its code in a while).
If you look at how OrbitControls works, you should notice it uses this.object for the camera and this.target for where the camera is looking, and it uses the vector between them for some of its calculations. Movement is applied to both positions when panning; only the camera is moved when rotating. this.pan is set to shift the camera, but out of the box it only deals with panning perpendicular to the this.object-to-this.target vector because of the way it sets the panning vector.
Anyway, if you take the vector from controls.target to object.position and add it to the pan, that should make the controls jump to that object:
Fixed example code:
Add a helper to OrbitControls.js which has access to its local pan variable:
function goTo( vector ) {
  // Clone the given vector since it would be modified otherwise
  pan.add( vector.clone().sub( this.target ) );
}
this.goTo = goTo; // expose the helper on the controls instance
Your own code:
function animate() {
  requestAnimationFrame(animate);
  mouseControls.goTo( object.position ); // Call the new helper
  mouseControls.update();
  render();
}
You'll also probably want to set mouseControls.noPan to true.
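With a reasonably recent OrbitControls you can often avoid patching the file at all: move the camera and controls.target by the same delta every frame, so the user's current zoom and rotation offset is preserved while the target follows the object. A rough sketch under that assumption:

// Re-center the controls on the moving object each frame without
// losing the user's current zoom/rotation offset.
var followDelta = new THREE.Vector3();

function animate() {
  requestAnimationFrame(animate);

  followDelta.copy(object.position).sub(mouseControls.target);
  camera.position.add(followDelta);        // keep the same offset from the object
  mouseControls.target.copy(object.position);

  mouseControls.update();
  render();
}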
I have a little tool that draws up a grid of circles (representing holes) and allows the user to add text and lines to these circles. Right now I have it set up so that if the user clicks on any of the holes and moves it, every other element on the Paper object moves along with it. What I am trying to implement next is the ability to rotate everything as one object. I realize that for this to work I need to know the central point of all the objects, which I can easily get.
What I want to know is whether I should draw everything on another object. This object would act as another Paper object of sorts, but would only serve for movement and rotation. Any click events on the holes drawn on the object would be passed on to the parent (i.e. the pseudo-paper object everything is drawn on). Is this possible? If so, how would I draw everything onto, say, a rectangle? And if not, what would be the best way to implement it?
What you need is a Set. You create it, push objects to it, and then treat it as an entire group, in your case by applying transformations.
Example:
var elements = paper.set();

if (!view.text) {
  view.text = App.R.text(0, 0, this.value);
  view.text.attr({
    'font-size': font_size
  });
  elements.push(view.text);
}

elements.transform('something');
Note that you can also bind events to this entire set.
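For the rotation part specifically, a short sketch assuming Raphael 2.x, where sets expose getBBox() (the 45° angle is just an example): rotate every element in the set around the set's own centre, and bind one click handler to the whole group.

// Rotate the whole set around its bounding-box centre and attach one click handler.
var bbox = elements.getBBox();
var cx = bbox.x + bbox.width / 2;
var cy = bbox.y + bbox.height / 2;

elements.transform('r45,' + cx + ',' + cy); // rotate 45 degrees around the centre

elements.click(function () {
  // Runs when any element in the set is clicked.
  console.log('set clicked');
});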