iPhone web app - touch and gesture events - javascript

Looking at the (quite outdated) documentation on Apple's website and elsewhere for the touch and gesture JavaScript events, I can't help but notice that the gesture event passed to the handler only seems to contain the values 'scale' and 'rotation'. Have I missed something, or are those the only two values provided?
It's hard to inspect the properties of an event object on the iPhone, as objects are annoyingly shown only by type and name; e.g. when logging the event in the console, it just shows up as '[object TouchEvent]'.
What I am trying to achieve is to run a function once a two-finger swipe occurs, and then, depending on whether the swipe was left or right, change the page accordingly.
I have tried a more complex touch-event approach and what I thought would be the easier gesture-event approach, but the gesture event only returns scale and rotation (apparently), and my touch-event code has become quite convoluted.
Does anyone know whether the object really only carries scale and rotation, and if so, how I can get the same effect with touch events instead?

You should use touch events, which give you the trajectory of the movement, and then detect whether that trajectory falls into the "swipe" category using your own definition of a swipe.
A "swipe" gesture is usually context specific: e.g. it should start in a particular DOM element and probably end there too. That means a given gesture may not be a "swipe" for a particular DOM element yet may be one for its container, so the system cannot generate a bubbling swipe event in the general case.
Zoom and rotation gestures can be detected without knowing the context, which is why the system generates those for you.
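If you roll your own, a minimal sketch of detecting a two-finger horizontal swipe with raw touch events might look like this (the element variable and the 50px threshold are illustrative assumptions):
var swipeStartX = null;

element.addEventListener('touchstart', function (e) {
    // Only track gestures that begin with exactly two fingers
    if (e.touches.length === 2) {
        swipeStartX = (e.touches[0].pageX + e.touches[1].pageX) / 2;
    }
}, false);

element.addEventListener('touchend', function (e) {
    if (swipeStartX === null) return;
    // changedTouches holds the finger(s) that were just lifted
    var deltaX = e.changedTouches[0].pageX - swipeStartX;
    swipeStartX = null;
    if (Math.abs(deltaX) > 50) {  // swipe distance threshold: an assumption
        if (deltaX > 0) {
            // swipe right: go to the previous page
        } else {
            // swipe left: go to the next page
        }
    }
}, false);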
There are also ready-to-use frameworks and libraries with swipe gesture detectors, at least for some popular containers such as items in a vertical list.

Related

Handling only single-touch events with jQuery

I'm handling touch events on a <canvas> area (using jQuery). I'd like to only handle single touch events, leaving multiple touch events uncancelled (e.g. so you can pinch-and-zoom).
The problem is that no matter how simultaneous the touches are, I get a touchstart with one point, followed by a touchmove with one point, and finally a touchstart with 2 points.
If I cancel the first touchstart event with preventDefault(), there's no zooming, even if I don't cancel the subsequent touchmove or the second touchstart.
If I don't preventDefault() on that first event, then touchmove events will scroll the page up and down in addition to my handling of the touch movement.
(testing on Android 8.1.0 - Samsung Galaxy Tab A)
I've looked around at various postings and web searches, e.g.:
jquery preventdefault on first touch only
or How to prevent default handling of touch events? or Javascript support for touch without blocking browsers pinch support
... and while none of them are specific to this situation, I get the feeling that I'm out of luck. It seems a shortcoming of touch event handling, though, insofar as it makes multiple touches impossible to detect on the first event.
------------ EDIT -----------
Thanks for the suggestions, folks! I do hope someone finds the answer useful, but unfortunately it doesn't fit my purposes, and I thought I should explain why (superfluous, perhaps, but it may give someone more insight).
To give some background, this is for a reusable module that provides an API for elements, including drawing 'sprites' and handling mouse/touch events like 'down', 'up', 'drag', etc. As such, I need to consider pros and cons carefully in the context of reusability and clarity.
The solutions mentioned here, and all others that I've found or indeed can conceive of, require a delay. There are two problems with this:
The minor problem is that any delay-based implementation of "multiple touch" is subjective. Multiple-touching is not timed out: you can theoretically touch with one finger, take a leisurely sip of your coffee (presumably with your other hand), then touch with another finger, and still be able to (e.g.) zoom. If this were the only problem, I could probably live with a pre-determined time-out, but it would be based on my perception of users' habits. (I could foresee, for instance, someone touching a 'sprite' over a dense background like a geographical map, realizing there's some detail they want to focus on, and then trying to zoom in.)
If I did delay on down, say by choosing a 300ms delay, it becomes a bit of a rabbit hole. A lot can happen in about a third of a second; for instance, the user could start a 'sprite' drag. There are then two choices:
If I wait to make sure it's a single touch, I miss (or cache) at least one 'move' event, and all dragging would then show a slight hesitation at the start. A third of a second is well within the bounds of perceptibility, so this is unacceptable.
Or, I could detect slight movement and assume that it's the start of a motion gesture like dragging. I'd then have to raise the API's 'down' and 'move' events simultaneously, a distasteful twiddle but again tolerable. More ambiguous is the threshold for determining it's actual motion. A very steady touch can easily get 4-6 pixels of movement on a touch screen, and I've seen well over 12 pixels for shaky touches. The gap could well be large enough to show an unseemly jitter.
As you can imagine, this is already a very processor-intensive module, especially problematic on mobile devices. Considering the increased code complexity and size (and further divergence of mouse vs touch event code paths), the possible need to introduce several tweakable settings that may be rather opaque to the user, and the fact that any solution is a compromise (point 1), I've decided that the least bad choice is to live with this limitation. The highest priorities here are smooth graphics handling and lean code (both size and processor intensiveness).
So at this point I'm willing to forego multiple touch gestures on the element. Zooming outside the element works fine, and is not unexpected behaviour — witness Google Maps.
For simpler applications, it should often be acceptable to delay detection of 'touchdown' to check for further touches.
Add a timer inside your first tap handler to watch for the second tap.
For example:
var action;
$(myzone).on('touchstart', function (e) {
    e.preventDefault();
    // Listen for a second tap for the next 500 ms
    $(myzone).off('touchstart', myeventTouch).on('touchstart', myeventTouch);
    clearTimeout(action);
    action = setTimeout(function () {
        // No second tap arrived in time: stop listening
        $(myzone).off('touchstart', myeventTouch);
    }, 500);
});

function myeventTouch() {
    // When a double tap occurs, do...
}
If you don't want to do this, you can also add a jQuery plugin to your page; for example, I searched and found jquery.doubletap.js: https://gist.github.com/attenzione/7098476
Use this:
$(SELECTOR).on('doubletap', function (event) {
    alert('doubletap');
});

Disable Gyroscope/Device Orientation reading in VRView for web

I have a web application used for virtual house tours. Currently I am using VRView for these tours and it has worked pretty well; however, I've run into an issue with the gyroscope that I need fixed as soon as possible.
VRView will automatically rotate the camera based on a users device orientation. As a user turns their phone, the virtual house tour will also turn, so the user is able to “look around” the house. This is great for most use cases, however lower end devices have issues when processing this sort of change. I need a way for users to disable the automatic rotation, and simply swipe on their phones to look around.
I've tried the Permissions API, attempting to revoke access to the gyroscope, but due to browser compatibility issues with that API it doesn't work. I also can't find any documentation on this in the VRView library. Any help is much appreciated.
tldr;
You're right, this doesn't seem to be available via their API. It looks like you may have to fork the library and make some adjustments. If you want to go down this path, I'd suggest forking the repo, seeing if you can successfully disable the motion emitter, and then see if you can use the webvr-polyfill to initiate drag controls. It may also be possible to just disable the gyro-based rotation via webvr-polyfill directly.
More in-depth explanation:
The motion information is being published to the VR View iframes (which I believe then feed it to the webvr-polyfill controls) in two locations:
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/scripts/js/device-motion-sender.js#L35
https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/api/iframe-message-sender.js#L45
When a browser's UA (user agent) flag indicates it can't use gyro controls, you would need to include a flag there that disables this functionality (or disables the listener in the iframe).
Normally, to enable drag rotation, I think you would then need to write a listener for the start and end of drag events that would translate those events into camera rotation. (Something similar to what this person is suggesting: https://github.com/googlearchive/vrview/issues/131#issuecomment-289522607)
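As a rough illustration only (the element and camera objects and the sensitivity constant are assumptions; the real integration point would live inside the forked VR View code):
var dragging = false, lastX = 0;

element.addEventListener('touchstart', function (e) {
    dragging = true;
    lastX = e.touches[0].pageX;
});
element.addEventListener('touchmove', function (e) {
    if (!dragging) return;
    var dx = e.touches[0].pageX - lastX;
    lastX = e.touches[0].pageX;
    // Translate horizontal drag distance into yaw; 0.005 rad/px is a guess
    camera.rotation.y += dx * 0.005;
});
element.addEventListener('touchend', function () {
    dragging = false;
});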
However, it appears that the controls are imported via webvr-polyfill. The 'window.WebVRConfig' object is coming from webvr-polyfill, if I'm following this correctly.
See here: https://github.com/googlearchive/vrview/blob/bd12336b97eccd4adc9e877971c1a7da56df7d69/src/embed/main.js#L77
The code linked above shows VR View adjusting the WebVRConfig when it detects a certain flag (in this case the 'YAW_ONLY' attribute). I believe you would have to do something similar.
https://github.com/immersive-web/webvr-polyfill
See here for the YAW_ONLY attribute: https://github.com/immersive-web/webvr-polyfill/blob/e2958490653bfcda4df83881d554bcdb641cf45b/src/webvr-polyfill.js#L68
See here for an example adjusting controls in webvr-polyfill:
https://github.com/immersive-web/webvr-polyfill#using
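In a fork, the configuration might look something like this (a hedged sketch; whether YAW_ONLY alone produces the swipe-to-look behaviour you want would need testing):
// Must be set before the webvr-polyfill script is loaded
window.WebVRConfig = window.WebVRConfig || {};
// YAW_ONLY restricts rotation to the yaw axis in webvr-polyfill
window.WebVRConfig.YAW_ONLY = true;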

About Crafty - Isometric and gravity; also fourway

I have two questions about Crafty (I've also asked in their Google group community, but it seems very few people look at that).
I've followed this tutorial http://buildnewgames.com/introduction-to-crafty/ and also took a look at the "isometric" demo in crafty's website of a bunch of blocks (http://craftyjs.com/demos/isometric/). And I've been trying some stuff by combining what I've learned in both.
(Q1) When I use the fourway component (used heavily in the tutorial), if I hold the left arrow key, for example, and CTRL-TAB out of the current tab while holding left, and then go back (not necessarily holding left anymore), my character seems to get stuck moving in the "left" direction. The same happens for the other three directions. Is that a known issue? Is there any way to fix it without changing Crafty?
It happens here with Firefox 29 and Chrome 34. My code is pretty much the final version presented at the tutorial's end (it's not identical, but even when it was the same I already had this issue).
By the way, when this happens, if I CTRL-TAB out and back again holding that left key, things go back to normal (the movement stops).
(Q2) The isometric-ish features interpret Z as height, while the gravity component uses Y for height. Isn't this a problem? Can I, for example, tell gravity to use something other than Y for height?
Regarding (Q1), the movement is managed by keydown and keyup events. If you switch tabs after a movement has started, the fourway component never receives the keyup event that would stop it again. You could use a workaround like the following:
Crafty.settings.modify("autoPause", true);
Enabling autoPause (somewhere in your init function) will pause your game when the browser tab crafty is running in is inactive. You can then react to this event by triggering keyup events or preventing the player component to move like this:
player.bind('Pause', function () {
    this.disableControl();
});
player.bind('Unpause', function () {
    this.enableControl();
});
You might want to stop animations there too, if you handle them in your player component.

Dealing with touchMove events in iOS and Ember

I'm adding iOS support to an Ember application which uses Flame.js. Most Flame.js widgets don't have built-in support for touch events at the moment, so I'm working on adding it to the ones I need. I've got touchStart and touchEnd working just fine for clicking and certain state transitions.
However, touchMove has been a mess so far. I need it for dragging, and to do that I need to track where the movement began and where it currently is. So far I haven't been able to get that information consistently from touchMove. Various resources suggest that I should be looking in the event.touches array for my data, but the jsFiddles I've built so far all throw a TypeError when I try to read length on that array, complaining that it is undefined. (The usual places to look, event.pageX, event.pageY, etc., are also undefined.)
I'm testing with an iPad and with Phantom Limb, and with the latter I was able to get at the data by accessing originalEvent; but that doesn't work on the actual iPad, I think because the originalEvent attribute is an artifact of how Phantom Limb works. (My theory is that accessing originalEvent is accessing the original mousemove that Phantom Limb transformed into a touchMove.)
Why is the event.touches array undefined for me? More to the point, where can I get touch position data so I can drag?
Here's my most representative jsFiddle. Click the button to get a Panel, which you should be able to reposition by dragging its title bar if this were working. The extensions of Flame.State in the titleView of App.TestPanel override the original states from Flame itself.
I don't think it is just an artifact of Phantom Limb. We faced a similar challenge and use the following:
var normalizeTouchEvent = function (event) {
    // jQuery wraps the native event; the touch data lives on originalEvent
    if (!event.touches) {
        event.touches = event.originalEvent.touches;
    }
    if (!event.pageX) {
        event.pageX = event.originalEvent.pageX;
    }
    if (!event.pageY) {
        event.pageY = event.originalEvent.pageY;
    }
};
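After normalizing, you'd read the drag position from the first entry in event.touches; something like this (the handler wiring is illustrative):
function onTouchMove(event) {
    normalizeTouchEvent(event);
    event.preventDefault();
    var touch = event.touches[0];
    // Track the drag using the touch point's page coordinates
    console.log('dragging at', touch.pageX, touch.pageY);
}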

How to check keyboard state while painting on HTML5 canvas?

I'm just mucking around a bit with HTML5 canvas and painting. I happened to find this article by Opera that describes well how to set up a simple application.
I would like to extend it so that the user may set up a vanishing point (one-point perspective) that can be used to constrain painting. To allow this, I need to figure out a way to define some modifier keys that constrain the stroke (i.e. the constraints map as (key => axis): a => x, s => y, d => z).
Is there some way to check out which key the user has pressed while handling the "mousedown" event?
AFAIK it will only work while the document has focus.
You should add a listener on the body waiting for the keydown event. When it fires, store the key in a variable, and clear that variable when the user releases the key. For example:
document.body.onkeydown = function (evt) { evt = evt || window.event; console.log(evt); };
document.body.onkeyup = function (evt) { evt = evt || window.event; console.log(evt); };
Then you only need to check evt.keyCode and act on it.
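A minimal sketch of that idea, using the a/s/d modifiers from the question (the canvas variable is an assumption, and this uses the modern evt.key rather than evt.keyCode):
var pressedKeys = {};

document.body.addEventListener('keydown', function (evt) {
    pressedKeys[evt.key] = true;
});
document.body.addEventListener('keyup', function (evt) {
    delete pressedKeys[evt.key];
});

canvas.addEventListener('mousedown', function (evt) {
    if (pressedKeys['a']) {
        // constrain the stroke along the x axis
    } else if (pressedKeys['s']) {
        // constrain the stroke along the y axis
    } else if (pressedKeys['d']) {
        // constrain the stroke along the z axis
    }
});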
You could also try a third-party library like shortcut.js.
