I have an A-Frame multiplayer project which relies on each user moving around the map, and it works well in desktop browsers. The stumbling block is the lack of motion controls on mobile, like the WASD controls for the camera.
What workaround would you suggest? Perhaps adding on-screen buttons and triggering keypress events with jQuery?
Or is there a more straightforward method: adding a function that changes the camera position on each on-screen button press? In that case, is there a way to find the exact functions bound to the W, A, S, D keys?
Among the several things I've tried was:
$(".up-button").click(function () {
  window.dispatchEvent(new KeyboardEvent('keypress', { keyCode: 38 }));
});
And another one:
var e = jQuery.Event("keydown");
e.which = 38;
$(window).trigger(e);
Neither is producing any change.
Have you tried something like:
window.dispatchEvent(new KeyboardEvent('keypress', { keyCode: 38 }));
to simulate pressing the up arrow key?
You could simulate pressing WASD or the arrow keys. The rotation might be a little more tricky, though; you would have to simulate the mousedown and mousemove events.
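Building on that idea, here is a minimal sketch of wiring an on-screen button to A-Frame's wasd-controls. One likely reason the snippets above produced no change is that most browsers ignore keyCode in the KeyboardEvent constructor (the property stays 0), and wasd-controls listens for keydown/keyup rather than keypress. The sketch assumes a recent A-Frame build that reads event.code; the .up-button element is carried over from the question:

function pressKey(code, key) {
  // wasd-controls listens for keydown/keyup on window, not keypress
  window.dispatchEvent(new KeyboardEvent('keydown', { code: code, key: key }));
}

function releaseKey(code, key) {
  window.dispatchEvent(new KeyboardEvent('keyup', { code: code, key: key }));
}

var upButton = document.querySelector('.up-button');
upButton.addEventListener('touchstart', function () {
  pressKey('KeyW', 'w');   // start moving forward
});
upButton.addEventListener('touchend', function () {
  releaseKey('KeyW', 'w'); // stop moving forward
});

Holding the button keeps the virtual key down, so movement continues until touchend fires.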
I see what you mean. Most people wouldn't even have a controller to properly move the character in your project/game. I've seen a demo in A-Frame that is pretty clever in my opinion: basically, if the user is looking down at the floor, the camera moves forward. This is good because it requires no outside input whatsoever, so it works on everything.
As for how to implement it, this may be a solution:
// First calculate the vertical angle (pitch) of the camera
var verticalAngle = getVerticalAngle(camera.getWorldDirection(new THREE.Vector3()));
console.log('vertical angle: ' + verticalAngle);

function getVerticalAngle(cameraVector) {
  // the direction vector is normalized, so its y component is the sine of the pitch
  return vRadiansToDegrees(Math.asin(cameraVector.y));
}

function vRadiansToDegrees(radians) {
  return radians * 180 / Math.PI;
}
And then move the camera forward if the user is looking down at the floor:
if (verticalAngle < -43) {
//move camera
}
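To tie this together, here is a hedged sketch of the same idea packaged as an A-Frame component; the component name, speed, and angle threshold are my assumptions, not from the demo:

AFRAME.registerComponent('look-to-walk', {
  schema: {
    speed: { default: 2 },   // meters per second
    angle: { default: -43 }  // pitch threshold in degrees
  },
  tick: function (time, deltaMs) {
    if (!deltaMs) { return; }
    // pitch of the gaze direction: the direction vector is normalized,
    // so its y component is the sine of the vertical angle
    var direction = new THREE.Vector3();
    this.el.sceneEl.camera.getWorldDirection(direction);
    var pitch = THREE.Math.radToDeg(Math.asin(direction.y));
    if (pitch < this.data.angle) {
      // walk along the horizontal part of the gaze direction
      direction.y = 0;
      direction.normalize().multiplyScalar(this.data.speed * deltaMs / 1000);
      this.el.object3D.position.add(direction);
    }
  }
});

Attached to the camera rig (e.g. <a-entity look-to-walk><a-camera></a-camera></a-entity>), it walks the rig forward whenever the gaze pitch drops below the threshold.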
I have a mobile web app written in JavaScript, using alloyfinger.js.
I tried Hammer.js, but it didn't work with certain iPhone models (e.g., the iPhone 7+).
I suspect my question is the same for various gesture detection libraries.
My app detects rotate events, but I can't seem to write the correct code to ignore rotate events below a minimum angle of rotation. The code below hides the rotated element, even when the amount of rotation is very small.
handleRotateEvent(evt) {
  const angle = evt.angle;
  const absAngle = Math.abs(angle);
  const minAngle = (90 * Math.PI / 180); // convert 90 degrees to radians
  if (absAngle < minAngle) {
    return;
  }
  hideItem(evt); // use the event to find the target DOM element and hide it
}
My intent is to ignore rotations smaller than 90 degrees. Otherwise, hide the rotated element.
What am I misunderstanding or doing wrong here?
Thanks!
Adam Leffert
https://www.leffert.com
#dntzhang, the author of AlloyFinger.js, answered this question on Twitter. He said that the angle resets as the user rotates the element. My more fundamental intent was to detect small rotations as a way to distinguish rotate events from swipe or pan events. He said that the correct way to do that is to check the number of fingers touching the element: one for swipe or pan, two for rotate. Much simpler.
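For illustration, a minimal sketch of that finger-count check, assuming AlloyFinger passes the underlying TouchEvent (with its touches list) into each handler:

new AlloyFinger(element, {
  touchMove: function (evt) {
    if (evt.touches.length === 1) {
      // one finger down: treat the gesture as a swipe or pan
    } else if (evt.touches.length >= 2) {
      // two fingers down: treat the gesture as a rotate (or pinch)
    }
  }
});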
I have six planes set up as a cube, with textures to display a 360-degree JPG set. I positioned the planes 1000 units away and made them 2000 units in height and width (plus a little, because the photos have a tiny bit of overlap).
The a-camera is positioned at the origin, within this cube, with wasd-controls set to false, so the camera is limited to rotating in place. (I am coding on a laptop, using mouse drag to move the camera.)
I also have an invisible sphere placed between the camera and the planes, and have added an event listener to it. This seemed simpler than putting event listeners on each of the six planes.
My current problem is enforcing minimum and maximum tilt limits. Following is the function handleTilt for this purpose. The minimum tilt allowed depends on the size of the fov.
function handleTilt() {
  console.log("handleTilt called");
  var sceneEl = document.querySelector("a-scene");
  var elCamera = sceneEl.querySelector("#rotationCam");
  var camRotation = elCamera.getAttribute('rotation');
  var xTilt = camRotation['x'];
  var fov = elCamera.getAttribute('fov');
  var minTilt = -65 + fov / 2;
  // enforce minimum (depends on fov)
  camRotation['x'] = xTilt > minTilt ? xTilt : minTilt;
  // enforce maximum (straight up)
  if (camRotation['x'] > 90) {
    camRotation['x'] = 90;
  }
  console.log(camRotation);
}
The event handler is set up in this line:
<a-entity geometry="primitive:sphere" id="clickSphere"
  radius="50" position="0 0 0" mousemove="handleTilt()"></a-entity>
When I do this, a console.log call on #clickSphere shows the event handler exists. But it is never invoked when I run the program and move the mouse to drag the camera to different angles.
As an alternative, I made the #clickSphere listen for onclick as follows:
<a-entity geometry="primitive:sphere" id="clickSphere"
  radius="50" position="0 0 0" onclick="handleTilt()"></a-entity>
The only change is "mousemove" to "onclick". Now the handleTilt() function executes with each click, and if the camera was rotated to a value less than the minimum, it is put back to the minimum.
One bizarre thing, though: after clicking and adjusting the rotation a few times, the program goes into a state where I can't rotate the camera down below the minimum any more. It is as if the mousemove listener had become engaged, even though the only listener coded is the onclick. I can't for the life of me figure out why this kicks in.
Would it be possible to get some advice as to what I might be doing wrong, or a plan for troubleshooting? I'm new to A-Frame and JavaScript.
An alternative plan for enforcing the min and max camera tilts in real time would also be an acceptable solution.
I just pushed out this piece on the docs for ya: https://aframe.io/docs/0.6.0/components/look-controls.html#customizing-look-controls
While A-Frame’s look-controls component is primarily meant for VR with sensible defaults to work across platforms, many developers want to use A-Frame for non-VR use cases (e.g., desktop, touchscreen). We might want to modify the mouse and touch behaviors.
The best way to configure the behavior is to copy and customize the current look-controls component code. This allows us to configure the controls how we want (e.g., limit the pitch on touch, reverse one axis). If we were to include every possible configuration into the core component, we would be left maintaining a wide array of flags.
The component lives within a Browserify/Webpack context so you’ll need to replace the require statements with A-Frame globals (e.g., AFRAME.registerComponent, window.THREE, AFRAME.constants.DEFAULT_CAMERA_HEIGHT), and get rid of the module.exports.
You can modify https://github.com/aframevr/aframe/blob/master/src/components/look-controls.js to hack in your min/max just for mouse/touch.
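If copying the whole component feels heavy, a lighter-weight alternative is a small companion component that clamps the pitch every frame. This is only a sketch; it assumes look-controls keeps exposing its internal pitchObject (which it does in the source linked above), and the component name is made up:

AFRAME.registerComponent('clamp-tilt', {
  schema: {
    min: { default: -25 }, // degrees; e.g. -65 + fov / 2
    max: { default: 90 }   // straight up
  },
  tick: function () {
    var lookControls = this.el.components['look-controls'];
    if (!lookControls) { return; }
    var pitch = lookControls.pitchObject.rotation.x; // radians
    var minRad = THREE.Math.degToRad(this.data.min);
    var maxRad = THREE.Math.degToRad(this.data.max);
    lookControls.pitchObject.rotation.x = Math.min(Math.max(pitch, minRad), maxRad);
  }
});

Used as <a-entity camera look-controls clamp-tilt="min: -25; max: 90"></a-entity>, the clamp runs in tick, so it enforces the limits during mouse drags in real time, with no event listeners on the sphere needed.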
I am working on a Transformer Pad, developing a drawing app.
I use PhoneGap (JavaScript) to write the code instead of Java.
But the touchmove event is quite weird.
I thought that as I move my finger across the pad, it would continuously collect the coordinates I touch on the canvas. BUT IT DOES NOT!
It's ridiculous: it only collects ONE coordinate, the first point on the canvas my finger moves to.
Here is my code for the touchstart and touchmove events:
function touchStart(event) {
  if (event.targetTouches.length == 1) {
    var touch = event.targetTouches[0];
    if (event.type == "touchstart") {
      line_start_x = touch.pageX - canvas_org_x;
      line_start_y = touch.pageY - canvas_org_y;
      context.beginPath();
      context.moveTo(line_start_x, line_start_y);
    }
  }
}

function Touch_Move(event) {
  line_end_x = event.touches[0].pageX - canvas_org_x;
  line_end_y = event.touches[0].pageY - canvas_org_y;
  context.lineTo(line_end_x, line_end_y);
  context.stroke();
  test++;
}
I don't know why, but each time I move my finger on the pad, trying to draw a line, curve, or anything I want, only a VERY SHORT segment appears as the finger moves. So I declared a variable "var test = 0" at the beginning of this JS file, and found that although I move my finger across the pad without lifting or stopping it, the value of "test" remains 1. That means the "Touch_Move" event is triggered only once, not continuously.
What can I do now? I need an event corresponding to "mousemove" for the touch pad. At the very least, the event has to be triggered continuously.
Thank you!
Oh, Jesus. I finally found the answer:
Please refer to this site.
If you work on an Android system, remember that 'touchmove' only fires ONCE as your finger moves around the pad, unless you call preventDefault(). So if you want to draw a line or something, you have to do this:
function onStart(touchEvent) {
  if (navigator.userAgent.match(/Android/i)) { // if you already know you are on Android, you can skip this check
    touchEvent.preventDefault(); // THIS IS THE KEY. You can read the difficult doc released by W3C to learn more.
  }
}
And if you have more time, you can read the W3C's document introducing 'touchmove'; it is REALLY hard to understand. The doc sucks.
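For completeness, here is a hedged sketch of the question's two handlers wired up with that preventDefault() fix applied; the canvas id and the way the origin offsets are computed are my assumptions:

var canvas = document.getElementById('drawingCanvas'); // hypothetical id
var context = canvas.getContext('2d');
var rect = canvas.getBoundingClientRect();
var canvas_org_x = rect.left + window.pageXOffset; // canvas origin in page coordinates
var canvas_org_y = rect.top + window.pageYOffset;

canvas.addEventListener('touchstart', function (event) {
  event.preventDefault(); // keeps Android firing touchmove continuously
  var touch = event.targetTouches[0];
  context.beginPath();
  context.moveTo(touch.pageX - canvas_org_x, touch.pageY - canvas_org_y);
}, false);

canvas.addEventListener('touchmove', function (event) {
  event.preventDefault();
  var touch = event.touches[0];
  context.lineTo(touch.pageX - canvas_org_x, touch.pageY - canvas_org_y);
  context.stroke();
}, false);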
I'm designing user controls and I'm trying to code the controls for the mouse. This is what I came up with to get user input.
var mouseInput = new GLGE.MouseInput(window);
window.onmousemove = function (ev) {
  var dx = window.mouseX - prevMousePos.x;
  var dy = window.mouseY - prevMousePos.y;
  prevMousePos = {
    x: window.mouseX,
    y: window.mouseY
  };
  // I do movement calculations with dx and dy here
};
However, what I came up with above isn't perfect, because if the mouse reaches the edge of the window it no longer detects movement.
Is there a better way of detecting mouse movement? I'd rather not calculate it from coordinates, because with that method I'm unable to measure the distance moved when the mouse is at the edge of the screen.
PS: If anyone was wondering, what I'm designing is sorta like Google Streetview or a first person shooter. I just want the user to be able to move the mouse in one direction infinitely.
I mean, you're already using an onmousemove event handler (the most efficient way, because JavaScript is async). So you just compute distances from the previous position, only when the user moves the mouse. If he doesn't move the mouse further, the player just proceeds in the same direction previously computed.
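To make that concrete, here is a minimal sketch of the delta-tracking approach using standard event coordinates; applyLook is a hypothetical callback, and note this still stops registering movement once the cursor hits the window edge:

var prevMousePos = null;

window.onmousemove = function (ev) {
  if (prevMousePos === null) {
    // first sample: nothing to diff against yet
    prevMousePos = { x: ev.clientX, y: ev.clientY };
    return;
  }
  var dx = ev.clientX - prevMousePos.x;
  var dy = ev.clientY - prevMousePos.y;
  prevMousePos = { x: ev.clientX, y: ev.clientY };
  applyLook(dx, dy); // hypothetical: feed the deltas into the camera
};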
No, there is no method to handle mouse movements outside of the browser window.
I am creating a web app that I want to be usable on mobile devices.
The initial mechanic of said app requires the current X/Y coordinates of the mouse - which doesn't require clicking and dragging, just simply moving the mouse around the browser window.
Now, I have been looking into the various jQuery/JavaScript libraries concerning touch and gestures, yet I am unable to find anything that suits what I am after.
So basically I am looking for a way to get the X/Y position of a single finger touch/drag.
E.g.: touch the screen at 0, 0, then hold and drag to 480, 320. The values would then interpolate from x1, y1 to x2, y2.
Thanks.
$(function() {
  var $log = $("#log");

  function updateLog(x, y) {
    $log.html('X: ' + x + '; Y: ' + y);
  }

  document.addEventListener('touchstart', function(e) {
    // log the first point of contact
    updateLog(e.changedTouches[0].pageX, e.changedTouches[0].pageY);
  }, false);

  document.addEventListener('touchmove', function(e) {
    e.preventDefault(); // stop the page from scrolling while the finger drags
    updateLog(e.targetTouches[0].pageX, e.targetTouches[0].pageY);
  }, false);
});
You shouldn't need a library for this; it's fairly easy to write your own code using the touchstart, touchmove and touchend events.
http://www.sitepen.com/blog/2008/07/10/touching-and-gesturing-on-the-iphone/
As a side note, if you're using the iOS simulator for testing, be aware that it can sometimes give incorrect coordinates in a way that an actual device wouldn't, so you'll always need to keep a physical iOS device handy to verify that your code works.