I am currently developing a web application for iPad. I have an HTML canvas on which I want to track the coordinates of the touchmove event.
I can track the touchstart event, which outputs the coordinate I pressed, but when I try to output my current coordinates as I move across the canvas, nothing happens. I have the following code, which uses Angular 2, where this.currentArrowPoint is the label I am outputting the values to on the screen:
e.preventDefault();
this.currentArrowPoint = ["a", "b"];
this.currentYPosition = ["e.changedTouches[0].pageY"];
this.currentArrowPoint = [this.currentYPosition];
It is definitely entering the touchmove handler, as I am getting the a and b back, but when I output anything relating to the event e, I get absolutely no output, not even if I try to output just e. This works on everything but Apple devices, so I am wondering if I am missing something special about Apple browsers or devices?
Thanks
As you can read here, only Chrome and Firefox support changedTouches in a TouchEvent, which in turn means that Safari does not support this.
I also guess you did not mean to put e.changedTouches[0].pageY in between quotes?
You should try to track touches based on their index on touchstart (such a waste they didn't call this touchdown....) and figure out on touchmove whether any of these touches have changed. You are using a canvas, so I can imagine you are trying to draw something. You could stop drawing once you count more than one touch. This way you are always sure you are tracking the right finger (or toe... I'm not judging). A sketch of this idea follows.
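A minimal sketch of that idea, assuming a plain canvas element; the "myCanvas" id and the commented-out draw() call are placeholders, not from the question. Each touch is remembered by its identifier on touchstart and looked up again on touchmove:
var canvas = document.getElementById('myCanvas');
var activeTouches = {}; // touch identifier -> last known position

canvas.addEventListener('touchstart', function (e) {
  e.preventDefault();
  // remember every new touch by its identifier so we can follow it later
  Array.prototype.forEach.call(e.changedTouches, function (t) {
    activeTouches[t.identifier] = { x: t.pageX, y: t.pageY };
  });
});

canvas.addEventListener('touchmove', function (e) {
  e.preventDefault();
  // stop drawing as soon as more than one finger is on the canvas
  if (e.touches.length > 1) { return; }
  Array.prototype.forEach.call(e.changedTouches, function (t) {
    var prev = activeTouches[t.identifier];
    if (prev) {
      // draw(prev.x, prev.y, t.pageX, t.pageY); // your drawing code here
      activeTouches[t.identifier] = { x: t.pageX, y: t.pageY };
    }
  });
});

canvas.addEventListener('touchend', function (e) {
  Array.prototype.forEach.call(e.changedTouches, function (t) {
    delete activeTouches[t.identifier];
  });
});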
I found a workaround if you want to do a swipe or something like that on iOS 13, using 3 event listeners.
const someFunction = () => {
  let touchstartX = 0;
  let touchendX = 0;
  const gesturedZone = this.mainWrapperRef.current; // ref, querySelector or something else

  gesturedZone.addEventListener('touchstart', (event) => {
    if (event.touches.length === 1) {
      touchstartX = event.touches[0].screenX;
    }
  }, false);

  gesturedZone.addEventListener('touchmove', (event) => {
    if (event.touches.length === 1) {
      touchendX = event.touches[0].screenX;
    }
  }, false);

  gesturedZone.addEventListener('touchend', () => {
    this.handleGesture(touchstartX, touchendX);
  }, false);
}

const handleGesture = (touchstartX, touchendX) => {
  // do some stuff
};
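For reference, a minimal sketch of what handleGesture could do for a horizontal swipe; the 50px threshold and the swipeLeft/swipeRight callbacks are placeholders of mine, not part of the original answer:
const handleGesture = (touchstartX, touchendX) => {
  const delta = touchendX - touchstartX;
  if (Math.abs(delta) < 50) { return; } // movement too small to count as a swipe
  if (delta < 0) {
    swipeLeft();  // finger moved right-to-left
  } else {
    swipeRight(); // finger moved left-to-right
  }
};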
I'm attempting to implement some swiping functionality into my app, but I'm currently struggling to get the correct behaviour I want.
I have a function which is called onTouchStart which saves the current touch object's clientX and clientY positions to the state, like so:
touchStart(e) {
  const touchObj = e.touches[0];
  this.setState({
    startX: touchObj.clientX,
    startY: touchObj.clientY,
  })
}
After the user begins to move their finger across the screen, I call another function onTouchMove, which saves the current clientX and clientY positions:
touchMove(e) {
  const touchObj = e.touches[0];
  this.setState({
    currentX: touchObj.clientX,
    currentY: touchObj.clientY,
  })
}
To then figure out whether a swipe was an up or down swipe, I call another function which subtracts the current positions from the start positions. I also compare that against a threshold of 150 to make sure it was an actual swipe, and not a long swipe (like a scroll). Here's what that looks like:
touchEnd(e) {
  this.setState({
    touchStarted: false,
  })
  if (Math.abs(this.state.startX - this.state.currentX) < this.state.threshold) {
    this.setState({
      direction: this.state.startY > this.state.currentY ? "top" : "bottom",
    })
    console.log(this.state.direction);
  }
}
The problem
This is detecting up and down swipes fine, but on the very first swipe a user does, the console.log returns the original state of none (as opposed to up or down).
How do I ensure that the initial swipe is always set to the correct direction?
The problem goes further: the user has to swipe twice in the opposite direction before the state is saved as the direction actually being swiped, like so:
up // none
up // up
down // up
down // down
Apologies if I've not articulated my problem too well. Here's a link to a CodePen where you can see the problem in further detail (you'll need a phone or an emulator to see the touch events).
Here's also a link to a post which I've been following to get this functionality working.
Any pointers would be highly appreciated. Thanks!
Here is the link to the fixed pen.
touchEnd(e) {
  var s = {
    touchStarted: false
  };
  if (Math.abs(this.state.startX - this.state.currentX) < this.state.threshold) {
    s.direction = this.state.startY > this.state.currentY ? "top" : "bottom";
  }
  this.setState(s, () => console.log(this.state.direction))
}
The only "problem" was that you assumed that setState is synchronous and therefore checked state value immediately after its call, whereas it is not. If you want to check value of the state after setState has finished - use its second argument to pass callback function.
I have a Chrome extension in which I'm trying to jump forward or backward (based on a user command) to a specific time in the video by setting the currentTime property of the video object. Before trying to set currentTime, a variety of operations work just fine. For example:
document.getElementsByTagName("video")[1].play(); // works fine
document.getElementsByTagName("video")[1].pause(); // works fine
document.getElementsByTagName("video")[1].muted = true; // works fine
document.getElementsByTagName("video")[1].muted = false; // works fine
BUT as soon as I try to jump to a specific point in the video by doing something like this:
document.getElementsByTagName("video")[1].currentTime = 500; // doesn't work
No errors are thrown, the video pauses, and any attempted actions after this point do nothing. So the items shown above (play/pause/mute/unmute) no longer work after attempting to set currentTime. If I read the value of currentTime after setting it, it correctly displays the new time that I just set it to. Yet nothing I do will make it play, and in fact even trying to make the video play by clicking the built-in toolbar no longer works. So, apparently setting currentTime wreaks all kinds of havoc in the video player. Yet if I reload the video, all works as before as long as I don't try to set currentTime.
I can easily jump to various times (backward or forward) by sliding the slider on the toolbar, so there must be some way internally to do that. Is there some way I can discover what code does a successful time jump? Because it's a Chrome extension I can inject custom js into the executing Hulu js, but I don't know what command I would send.
Any ideas?
Okay I fiddled around with it for a little while to see how I could reproduce the click event on the player and came up with the following solution:
handleViewer = function(){
  var thumbnailMarker = $('.thumbnail-marker'),
      progressBarTotal = thumbnailMarker.parent(),
      controlsBar = $('.controls-bar'),
      videoPlayer = $('#content-video-player');

  var init = function(){
        thumbnailMarker = $('.thumbnail-marker');
        progressBarTotal = thumbnailMarker.parent();
        controlsBar = $('.controls-bar');
        videoPlayer = $('#content-video-player');
      },
      check = function(){
        if(!thumbnailMarker || !thumbnailMarker.length){
          init();
        }
      },
      show = function(){
        thumbnailMarker.show();
        progressBarTotal.show();
        controlsBar.show();
      },
      hide = function(){
        controlsBar.hide();
      },
      getProgressBarWidth = function(){
        return progressBarTotal[0].offsetWidth;
      };

  return {
    goToTime: function(time){
      var seekPercentage,
          duration;
      check();
      duration = videoPlayer[0].duration;
      if(time > 0 && time < duration){
        seekPercentage = time/duration;
        this.jumpToPercentage(seekPercentage);
      }
    },
    jumpToPercentage: function(percentage){
      check();
      if(percentage >= 1 && percentage <= 100){
        percentage = percentage/100;
      }
      if(percentage >= 0 && percentage < 1){
        show();
        thumbnailMarker[0].style.left = (getProgressBarWidth()*percentage)+"px";
        thumbnailMarker[0].click();
        hide();
      }
    }
  }
}();
Once that code is initialized you can do the following:
handleViewer.goToTime(500);
Alternatively
handleViewer.jumpToPercentage(50);
I've tested this in Chrome on a MacBook Pro. Let me know if you run into any issues.
Rather than try to find the javascript responsible for changing the time, why not try to simulate the user events that cause the time to change?
Figure out the exact sequence of mouse events that trigger the time change.
This is probably some combination of mouseover, mousedown, mouseup, and click.
Then recreate those events synthetically and dispatch them to the appropriate elements.
This is the approach taken by extensions like Stream Keys and Vimium.
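A minimal sketch of that approach; the .progress-bar selector and the 50% target position are assumptions, not taken from Hulu's actual player markup:
// hypothetical target: the player's progress bar element
var progressBar = document.querySelector('.progress-bar');
var rect = progressBar.getBoundingClientRect();
var x = rect.left + rect.width * 0.5; // aim for 50% of the bar
var y = rect.top + rect.height / 2;

// replay the sequence a real click would produce
['mouseover', 'mousedown', 'mouseup', 'click'].forEach(function (type) {
  progressBar.dispatchEvent(new MouseEvent(type, {
    bubbles: true,
    cancelable: true,
    view: window,
    clientX: x,
    clientY: y
  }));
});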
The video should be ready to play before setting the currentTime.
Try adding this line before setting currentTime:
document.getElementsByTagName("video")[1].play();
document.getElementsByTagName("video")[1].currentTime = 500;
Looks like it works if you first pause, then set currentTime, then play again.
document.getElementsByTagName("video")[1].pause()
document.getElementsByTagName("video")[1].currentTime = 800.000000
document.getElementsByTagName("video")[1].play()
You would probably need to hook into some event like onseeked to put in the play command to make it more robust.
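A minimal sketch of that idea, assuming the same video element as above; the seeked event fires once the browser has finished moving to the new position:
var video = document.getElementsByTagName("video")[1];
video.pause();
video.addEventListener("seeked", function onSeeked() {
  video.removeEventListener("seeked", onSeeked); // run once
  video.play();
});
video.currentTime = 800;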
I'm facing a strange problem that I think leaves HammerJS internal event loop with a stuck event that ruins subsequent detections.
This only happens on Internet Explorer Edge on a Touch Device with PointerEvents.
Basically, when using HammerJS for a PAN event (panstart -> panmove -> panend), and you cross the current frame boundary (for example, into an IFRAME, or just outside the browser window) AND you release your finger there, then HammerJS never receives the CANCEL event and the session kind of stays stuck.
From then on, all gestures are reported incorrectly, with one more finger ('pointer') than you're using: For example, it will report a PINCH or ROTATE (2 pointers) just tapping (1 pointer) and so on.
I haven't found a way to reset the Hammer Manager once it enters this ghost state. This breaks my app.
I've prepared a Fiddle with a full working example. Please execute it under a Windows/Touch device !
https://jsfiddle.net/28cxrupv/5/
I'd like to know, either how to detect the out-of-bounds event, or just how could I manually reset the Hammer Manager instance if I am able to detect myself by other means that there are stuck events.
UPDATE
I've found in my investigations that the problem is at the lowest level in HammerJS: the PointerEvents handler has an array of detected pointers this.store and there's the stuck event with an old timestamp.
I've found a way to patch Hammer.JS so it can detect stuck pointers. I don't know if this is wrong, but apparently it works!
On HammerJS PointerEvents handler, there's an array this.store that keeps all current pointer events. It's there where, when we pan out of the window and release the touch, a stuck event is kept forever.
Clearing this array causes Hammer to go back to normal again.
I just added a condition where, if we are processing a Primary touch (start of a gesture?), and the store is not empty, it clears the store automatically.
How it works is, on the next interaction with the stuck hammer instance, the internal store gets reset and the gesture is interpreted properly.
On Hammer.js 2.0.6, around line 885
/**
 * handle mouse events
 * @param {Object} ev
 */
handler: function PEhandler(ev) {
  var store = this.store;
  var removePointer = false;

  var eventTypeNormalized = ev.type.toLowerCase().replace('ms', '');
  var eventType = POINTER_INPUT_MAP[eventTypeNormalized];
  var pointerType = IE10_POINTER_TYPE_ENUM[ev.pointerType] || ev.pointerType;

  var isTouch = (pointerType == INPUT_TYPE_TOUCH);

  // get index of the event in the store
  var storeIndex = inArray(store, ev.pointerId, 'pointerId');

  // start and mouse must be down
  if (eventType & INPUT_START && (ev.button === 0 || isTouch)) {

    // NEW CONDITION: Check the store is empty on a new gesture
    // http://stackoverflow.com/questions/35618107/cross-frame-events-on-ie-edge-break-hammerjs-v2
    if (ev.isPrimary && store.length) {
      window.console.warn("Store should be 0 on a primary touch! Clearing Stuck Event!");
      this.reset();
    }

    if (storeIndex < 0) {
      store.push(ev);
      storeIndex = store.length - 1;
    }
  } else if (eventType & (INPUT_END | INPUT_CANCEL)) {
    removePointer = true;
  }

  // it not found, so the pointer hasn't been down (so it's probably a hover)
  if (storeIndex < 0) {
    return;
  }

  // update the event in the store
  store[storeIndex] = ev;

  this.callback(this.manager, eventType, {
    pointers: store,
    changedPointers: [ev],
    pointerType: pointerType,
    srcEvent: ev
  });

  if (removePointer) {
    // remove from the store
    store.splice(storeIndex, 1);
  }
}
});
I also define the function "reset":
/**
 * Reset internal state
 */
reset: function() {
  this.store = (this.manager.session.pointerEvents = []);
},
I am trying to use Mapbox/Leaflet to create a Heatmap. This is the exact example that I have working properly: https://www.mapbox.com/mapbox.js/example/v1.0.0/leaflet-heat/
The operative code is this:
map.on({
  movestart: function () { draw = false; },
  moveend: function () { draw = true; },
  mousemove: function (e) {
    if (draw) {
      heat.addLatLng(e.latlng);
    }
  }
})
However, this does not work for touch screens. I have watched this video to get an idea of what I need to change: https://www.youtube.com/watch?v=wwffqMAS8K8#t=100
Being new to JS and webapps in general, I am unsure of how to use the syntax explained at around 36:00 minutes into this video. He provides a function that forks mouse/touch handling differently according to what type of event is detected:
var posX, posY;
function positionHandler(e) {
  if ((e.clientX) && (e.clientY)) {
    posX = e.clientX; posY = e.clientY;
  } else if (e.targetTouches) {
    posX = e.targetTouches[0].clientX;
    posY = e.targetTouches[0].clientY;
    e.preventDefault();
  }
}
I understand that here we define a function, positionHandler, to capture the position of a mouse or touch event on the screen. But I do not know how to integrate this and make it work with the Leaflet syntax above.
How do I adjust the example above for it to work on both desktops and touch screens? Hopefully I have shown here that I've tried to do the research but I'm stuck.
The answer to this question lies in the Leaflet API. You should be able to use some of its event handlers to add this functionality. While I did not have luck just turning on the tap handler with map.tap.enable();, I reconfigured it by replacing moveend with click and mousemove with dblclick.
See documentation here for further information: http://leafletjs.com/reference.html#map-interaction-handlers
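A minimal sketch of how that reconfiguration might look, based on the map.on example from the question; this is my reading of the answer, not the answerer's exact code:
map.on({
  movestart: function () { draw = false; },
  click: function () { draw = true; },      // was moveend
  dblclick: function (e) {                  // was mousemove
    if (draw) {
      heat.addLatLng(e.latlng);
    }
  }
})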
I'm writing a web application which should support both mouse and touch interactions.
For testing I use a touch screen device with Windows 7. I've tried to sniff touch events in the latest Firefox and Chrome Canary and got the following results:
On touch, Firefox fires the touch event and the corresponding mouse event.
Chrome fires touchstart/mousedown and touchend/mouseup pairs, but mousemove is fired in a very strange manner: one or two times during touchmove.
All mouse events are handled as always.
Is there any way to handle mouse and touch events simultaneously on modern touch screens? If Firefox fires a pair of touch and mouse events, what happens with mousemove during touchmove in Chrome? Should I translate all mouse events to touch or vice versa? I hope to find the right way to create a responsive interface.
You can't really predict in advance which events to listen for (eg. for all you know a USB touch screen could get plugged in after your page has loaded).
Instead, you should always listen to both the touch events and mouse events, but call preventDefault() on the touch events you handle to prevent (now redundant) mouse events from being fired for them. See http://www.html5rocks.com/en/mobile/touchandmouse/ for details.
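A minimal sketch of that pattern; the element id and the startDrag() handler are placeholders, not from the question:
var el = document.getElementById('mydiv');

el.addEventListener('touchstart', function (e) {
  e.preventDefault(); // suppresses the emulated mousedown/click for this touch
  startDrag(e.touches[0].clientX, e.touches[0].clientY);
}, false);

el.addEventListener('mousedown', function (e) {
  startDrag(e.clientX, e.clientY); // now only fires for a real mouse
}, false);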
You should rather check for the availability of a touch interface and bind events according to that.
You can do something like this:
(function () {
  if ('ontouchstart' in window) {
    window.Evt = {
      PUSH: 'touchstart',
      MOVE: 'touchmove',
      RELEASE: 'touchend'
    };
  } else {
    window.Evt = {
      PUSH: 'mousedown',
      MOVE: 'mousemove',
      RELEASE: 'mouseup'
    };
  }
}());

// and then...
document.getElementById('mydiv').addEventListener(Evt.PUSH, myStartDragHandler, false);
If you want to handle both at the same time and the browser does not translate touch events into mouse events well, you can catch the touch events and stop them; the corresponding mouse event then shouldn't be fired by the browser (so you won't get double events), and you can either fire it yourself as a mouse event or just handle the touch event directly.
var mydiv = document.getElementById('mydiv');
mydiv.addEventListener('mousemove', myMoveHandler, false);
mydiv.addEventListener('touchmove', function (e) {
  // stop touch event
  e.stopPropagation();
  e.preventDefault();

  // translate to mouse event
  var clkEvt = document.createEvent('MouseEvent');
  clkEvt.initMouseEvent('mousemove', true, true, window, e.detail,
    e.touches[0].screenX, e.touches[0].screenY,
    e.touches[0].clientX, e.touches[0].clientY,
    false, false, false, false,
    0, null);
  mydiv.dispatchEvent(clkEvt);

  // or just handle touch event
  myMoveHandler(e);
}, false);
The solutions in this thread are outdated - for those (like me) who still land here in 2021, there is a newer W3C specification for pointer events. These events combine mouse and touch into one.
https://developer.mozilla.org/en-US/docs/Web/API/Pointer_events
https://www.w3.org/TR/pointerevents/
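A minimal sketch using pointer events; the element id and the drag handlers are placeholders:
var el = document.getElementById('mydiv');

el.addEventListener('pointerdown', function (e) {
  // e.pointerType is 'mouse', 'touch' or 'pen'
  startDrag(e.clientX, e.clientY);
});

el.addEventListener('pointermove', function (e) {
  moveDrag(e.clientX, e.clientY);
});

el.addEventListener('pointerup', function () {
  endDrag();
});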
MouseEvents and TouchEvents do not technically provide exactly the same functionality, but for most purposes they can be used interchangeably. This solution does not favor one over the other, as the user may have both a mouse and a touch screen. Instead, it allows the user to use whichever input device they wish, as long as they wait at least five seconds before changing inputs. This solution ignores mouse pointer emulation on touchscreen devices when the screen is tapped.
var lastEvent = 3;
var eventTime = 0; // timestamp of the last handled interaction
var MOUSE_EVENT = 1;
var TOUCH_EVENT = 2;

element.addEventListener('touchstart', function (event) {
  if (lastEvent === MOUSE_EVENT) {
    var time = Date.now() - eventTime;
    if (time > 5000) {
      eventTime = Date.now();
      lastEvent = TOUCH_EVENT;
      interactionStart(event);
    }
  } else {
    lastEvent = TOUCH_EVENT;
    eventTime = Date.now();
    interactionStart(event);
  }
});

element.addEventListener('mousedown', function (event) {
  if (lastEvent === TOUCH_EVENT) {
    var time = Date.now() - eventTime;
    if (time > 5000) {
      eventTime = Date.now();
      lastEvent = MOUSE_EVENT;
      interactionStart(event);
    }
  } else {
    lastEvent = MOUSE_EVENT;
    eventTime = Date.now();
    interactionStart(event);
  }
});

function interactionStart(event) { // handle interaction (touch or click) here.
  ...
}
This is by no means a cure-all solution. I have used this a few times and have not found problems with it, but to be fair I usually just use it to start an animation when a canvas is tapped, or to provide the logic to turn a div into a button. I leave it to you all to use this code, find improvements, and help improve it (if you do not find a better solution).
I found this thread because I have a similar & more complex problem:
supposing we create a js-enabled scrollable area with NEXT/PREVIOUS arrows which we want not only to respond to touch and mouse events but also to fire them repeatedly while the user continues to press the screen or hold down the mouse button.
Repetition of events would make my next button advance 2 positions instead of one!
With the help of closures everything seems possible:
(1) First create a self invoking function for variable isolation:
(function(myScroll, $, window, document, undefined){
  ...
}(window.myScroll = window.myScroll || {}, jQuery, window, document));
(2) Then, add your private variables that will hold internal state from setTimeout():
/*
 * Primary events for handlers that respond to more than one event and devices
 * that produce more than one, like touch devices.
 * The first event in browser's queue hinders all subsequent for the specific
 * key intended to be used by a handler.
 * Every key points to an object '{primary: <event type>}'.
 */
var eventLock = {};

// Process ids based on keys.
var pids = {};

// Some defaults
var defaults = {
  pressDelay: 100 // ms between successive calls for continuous press by mouse or touch
};
(3) The event lock functions:
function getEventLock(evt, key){
  if(typeof(eventLock[key]) == 'undefined'){
    eventLock[key] = {};
    eventLock[key].primary = evt.type;
    return true;
  }

  if(evt.type == eventLock[key].primary)
    return true;
  else
    return false;
}

function primaryEventLock(evt, key){
  eventLock[key].primary = evt.type;
}
(4) Attach your event handlers:
function init(){
  $('sth').off('mousedown touchstart', previousStart).on('mousedown touchstart', previousStart);
  $('sth').off('mouseup touchend', previousEnd).on('mouseup touchend', previousEnd);
  // similar for 'next*' handlers
}
Firing of events mousedown and touchstart will produce double calls for handlers on devices that support both (probably touch fires first). The same applies to mouseup and touchend.
We know that input devices (whole graphic environments actually) produce events sequentially, so we don't care which fires first as long as a special key is set at private eventLock.next.primary and eventLock.previous.primary for the first events captured by handlers next*() and previous*() respectively.
That key is the event type, so the second, third etc. events are always losers: they don't acquire the lock, thanks to the lock functions getEventLock() and primaryEventLock().
(5) The above can be seen at the definition of the event handlers:
function previousStart(evt){
  // 'race' condition/repetition between 'mousedown' and 'touchstart'
  if(!getEventLock(evt, 'previous'))
    return;

  // a. !!!you have to implement this!!!
  previous(evt.target);

  // b. emulate successive events of this type
  pids.previous = setTimeout(closure, defaults.pressDelay);

  // internal function repeats steps (a), (b)
  function closure(){
    previous(evt.target);
    primaryEventLock(evt, 'previous');
    pids.previous = setTimeout(closure, defaults.pressDelay);
  }
};

function previousEnd(evt){
  clearTimeout(pids.previous);
};
Similar for nextStart and nextEnd.
The idea is that whichever event comes after the first (touch or mouse) does not acquire a lock, thanks to the function getEventLock(evt, key), and stops there.
The only way to open this lock is to fire the termination event handlers *End() at step (4): previousEnd and nextEnd.
I also handle the problem of touch devices attached in the middle of the session in a very smart way: I noticed that a continuous press longer than defaults.pressDelay produces successive calls of the callback function only for the primary event at that time (the reason is that no end event handler terminates the callback)!
touchstart event
closure
closure
....
touchend event
I define as primary the device the user is using, so all you have to do is press longer and your device immediately becomes the primary one, with the help of primaryEventLock(evt, 'previous') inside the closure!
Also, note that the time it takes to execute previous(event.target) should be smaller than defaults.pressDelay.
(6) Finally, let's expose init() to the global scope:
myScroll.init = init;
You should replace the call to previous(event.target) with the problem at hand: fiddle.
Also, note that at (5b) there is a solution to another popular question, how to pass arguments to a function called from setTimeout(); i.e. setTimeout(previous, defaults.pressDelay) lacks an argument-passing mechanism.
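As a small aside of mine on that point: besides wrapping the call in a closure as previousStart does above, setTimeout also accepts extra arguments after the delay, which it passes to the callback:
// option 1: wrap the call in a closure (as done in previousStart)
setTimeout(function () { previous(evt.target); }, defaults.pressDelay);

// option 2: pass the argument through setTimeout itself
setTimeout(previous, defaults.pressDelay, evt.target);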
I have been using this jQuery helper to bind both touch and click events.
(function ($) {
  $.fn.tclick = function (onclick) {
    this.bind("touchstart", function (e) { onclick.call(this, e); e.stopPropagation(); e.preventDefault(); });
    this.bind("click", function (e) { onclick.call(this, e); }); // substitute mousedown event for exact same result as touchstart
    return this;
  };
})(jQuery);
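A quick usage example of that helper; the selector and handler body are placeholders:
$('#myButton').tclick(function (e) {
  console.log('tapped or clicked via', e.type);
});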