WRT building a Firefox Add-on.
Is it possible to get the element under the mouse via some XPCOM or javascript method? (non-js-ctypes please as that requires OS specificity)
I want to detect what is under the mouse when user presses Ctrl + Shift + M.
Right now I'm adding a mouseover listener to the document when the user presses this hotkey, so I can get the element under the mouse when the user moves it, but not the element that was under the mouse at the exact moment the hotkey combination was pressed.
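In sketch form, that workaround looks roughly like this (the handler wiring is illustrative, not the actual add-on code):
// Sketch of the current workaround: on Ctrl+Shift+M, register a one-shot
// mouseover listener and read the element the mouse next moves over.
window.addEventListener("keydown", function (e) {
    if (e.ctrlKey && e.shiftKey && e.key === "M") {
        document.addEventListener("mouseover", function onOver(ev) {
            document.removeEventListener("mouseover", onOver, true);
            console.log("element under mouse:", ev.target);
        }, true);
    }
});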
I just looked through the source for code that gets (or stores and makes available) the cursor position. I didn't find anything one could use (from Javascript, XPCOM or not). I might have missed something... MXR is your friend.
However, if you want to avoid mousemove (and this is a good idea in general), you can just look for the innermost hovered element, e.g. like so.
function getInnermostHovered() {
    // Walk down the chain of :hover matches until no deeper one is found.
    var n = document.querySelector(":hover");
    var nn;
    while (n) {
        nn = n;
        n = nn.querySelector(":hover");
    }
    return nn; // undefined if nothing in the document is hovered
}
(fiddle demoing the principle)
While this is what I'd consider a hack, it seems to work well enough most of the time, but it will fail if the element has mouse events disabled via pointer-events. There could be other issues I didn't think of...
Of course, this can return nothing when the document has no hovered element (e.g. the mouse is not actually within the document).
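Tying it back to the hotkey from the question, a rough sketch of calling the function directly on keydown (in an add-on this would run against the content document; the key handling here is simplified):
// Sketch: resolve the hovered element directly when Ctrl+Shift+M is pressed,
// with no mousemove/mouseover listener needed.
window.addEventListener("keydown", function (e) {
    if (e.ctrlKey && e.shiftKey && e.key === "M") {
        var el = getInnermostHovered();
        console.log("element under mouse:", el);
    }
});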
Related
I want to offer very limited drawing possibilities in an HTML canvas, on any device. Only three things can happen when the user interacts: a "unit" square (it might be 40px for example, or 10; it is not very important) appears where there was none, a rectangle disappears when "clicked", and lastly, several rectangles are fused.
The first two need a click to be detected (same down and up coordinates), the last needs a drag to be detected (different down and up coordinates).
Therefore, the only thing the app needs to do is to detect (and remember) down, then up coordinates, whether it is a touch, a click, or anything at all.
Lastly, I do not wish to use jquery or any lib, but rather learn something from my coding.
Does this code look ok for that purpose? Can you propose improvements?
canvas.ontouchstart = canvas.onmousedown = onDown;
function onDown(e) {
    saveDownCoords(e);
    e.preventDefault();
};
canvas.ontouchend = canvas.onmouseup = onUp;
function onUp(e) {
    ...do whatever;
};
Second question, about preventDefault() / stopPropagation() (or whatever it is called): I have read it is needed to stop events from registering twice, as touches and clicks - but under which circumstances and on which devices, exactly, do touch events and then click events fire for a single physical user action?
For your first question, it's better to use element.addEventListener(): https://developer.mozilla.org/en-US/docs/Web/API/EventTarget/addEventListener
canvas.addEventListener("mouseup", function(event) {
event.preventDefault(); // If you need to
// ... Do whatever
}
As for your second question, the event.preventDefault() call is there to stop the default behavior. It could be used to prevent double-clicking from selecting text, for example.
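To make the second point concrete: on touch devices the browser usually fires a synthesized mouse/click sequence after the touch ends, so calling preventDefault() in the touch handlers is what stops the action from registering twice. A rough sketch of handling both input types with one pair of handlers (the coordinate helper and variable names are illustrative):
// Sketch: one handler pair for mouse and touch; preventDefault() in the touch
// handlers stops the browser from firing the synthesized mouse/click events.
function getPoint(e) {
    var t = e.changedTouches ? e.changedTouches[0] : e; // touch or mouse
    return { x: t.clientX, y: t.clientY };
}

var downPoint = null;

function onDown(e) {
    downPoint = getPoint(e);
    if (e.type === "touchstart") e.preventDefault();
}

function onUp(e) {
    var upPoint = getPoint(e);
    if (e.type === "touchend") e.preventDefault();
    // same coordinates => treat as a click, different => treat as a drag
}

canvas.addEventListener("mousedown", onDown);
canvas.addEventListener("touchstart", onDown);
canvas.addEventListener("mouseup", onUp);
canvas.addEventListener("touchend", onUp);
Whether to also call preventDefault() on touchstart depends on whether you want to suppress scrolling and zooming that starts on the canvas.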
Question
Having an HTML5 video element with controls, like:
<video class="media video" src="somevid.ogg" controls></video>
And with a click handler assigned to it:
$('.media.video').on('click', function (event) { /* ... */ });
Is it possible to determine inside the handler method if the user simply clicked the video element or one of the controls?
Haven't found documentation on this one, and searched the event object for clues, but no success.
My alternative solutions:
Assign all listeners to the video, and based on video change events determine if the click has been made on a control element, or the video - is too much hassle & unclean.
Use custom control UI, with the default one disabled - if there is no answer for my question i'd go with this.
Is it possible to determine inside the handler method if the user simply clicked the video element or one of the controls?
No, the controls are considered part of the video element itself as they are provided internally by the browser. Therefore a click on the element will only register with that and not the sub-elements.
You could use your point #1 though, to listen to the various command events which correspond to clicking a control. IMO this is the proper way and the one I would recommend.
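For example, a sketch of that approach, listening for the standard media events that the native controls trigger (the exact set of events to watch is your choice):
// Sketch: infer control interaction from the media events the controls fire.
var video = document.querySelector('.media.video');
['play', 'pause', 'volumechange', 'seeking'].forEach(function (name) {
    video.addEventListener(name, function () {
        // Playback state changed around the time of the click,
        // so the click most likely hit a control.
        console.log('media event:', name);
    });
});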
If you only need to determine whether the click was on the video or not, you could probably get away with a simple height check, but the layout/look of the controls could change "next week" and force a browser version check to get an accurate height, which is going in the wrong direction.
But for example, inside the event handler for click/mousedown or mouseup:
var rect = videoElement.getBoundingClientRect(), // get abs. position of element
ctrlHeight = 40, // a guess of ctrl area height
y = event.clientY - rect.top; // relative y pos. to video el
if (y >= rect.height - ctrlHeight) {...in ctrl area...}
The other approach, but a tad more work, is to provide your own controls as in your point #2. Either as buttons, or in the form of an image and image map, or even rendered to canvas. This gives you full control of placement, looks and so forth.
You will probably end up with the same number of event handlers as in the first approach, as you will need to listen to the events for each button instead...
I was playing with drag n drop in full forms (so no instant upload). I thought a small part of it was going to be highlighting a certain fieldset when hovered over with a file. Enter the dragover and dragenter events (and dragleave etc).
Turns out it's not such a small part. The Fiddle: http://jsfiddle.net/rudiedirkx/epp74/
Try it out: drag over a fieldset and move around a bit. The first over triggers the fieldset's dragenter event (the fieldset turns yellow). The moving around after that (within the same fieldset) triggers dragenters and dragleaves (the fieldset is no longer yellow), which is bad.
Which is why I wanted to make what IE made for mouseover and mouseout a long time ago: mouseenter and mouseleave (they trigger just once). For drag events, the exact same thing applies: they should trigger only once in the exact same way. JS libraries spoof these IE events by using Event.fromElement and Event.toElement (and compare them against the event owner element). (See jQuery or Mootools source for specifics.)
To make the same for drag events, I need the same fromElement and toElement. You can see in the Fiddle, I try, but I can't find them.
Anybody know where they are? Why they're not available?
I'm using Chrome primarily, which doesn't have a fromElement in the dragenter event, but does have a toElement in the dragleave event. In Firefox it's slightly worse (but more logical): both are empty.
Any and all ideas are so very welcome.
edit
After a little more debugging I've found out that Chrome's toElement in dragleave isn't always correct. It's never 'bigger' than this, but sometimes it should be: when I leave the fieldset (this) to its parent form (toElement). When I do that, both this and toElement are the fieldset (which is incorrect, right?).
edit Solution:
I ended up with something like this: http://jsfiddle.net/rudiedirkx/Lwd3md71/ which ignores the elements in the event, and uses the event coordinates to find the element under the mouse. To make it trigger at most once per animation frame, it uses requestAnimationFrame, which results in 31-59 fps.
Firefox provides the relatedTarget event property, but Chrome and Safari don't. Sadly, this issue has been open for a couple years as this Chrome bug and this Webkit bug.
Edit: The issue has been fixed in Chrome.
There is a way of faking the relatedTarget for a "dragleave" event, which is to set a variable from the accompanying "dragenter" event -- since dragleave is always preceded by dragenter, a variable set in the latter will be available to the former:
var relatedTarget = null;

document.addEventListener('dragenter', function(e) {
    relatedTarget = e.target;
}, false);

document.addEventListener('dragleave', function(e) {
    console.log('target = ' + e.target + ' relatedTarget = ' + relatedTarget);
}, false);
It won't work the other way round, but you don't really need dragenter for anything else if you use it this way -- i.e. the dragleave alone is enough to tell you when the mouse is moving into, or entirely out of, a particular element.
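Building on the relatedTarget variable set above, a sketch of how the fieldset highlighting from the question could use it (the class name is just an example):
// Sketch: un-highlight only when the newly entered element (tracked above as
// relatedTarget) is no longer inside the fieldset, i.e. a real exit.
var fieldset = document.querySelector('fieldset');

fieldset.addEventListener('dragenter', function (e) {
    fieldset.classList.add('drag-hover');
}, false);

fieldset.addEventListener('dragleave', function (e) {
    if (!fieldset.contains(relatedTarget)) {
        fieldset.classList.remove('drag-hover');
    }
}, false);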
I have a silly (and hopefully easily fixed) problem, which I will now attempt to describe.
The scenario: I am trying to create a context menu using HTML / CSS / JS, just a DIV with a high z-order that appears where a user right-clicks. Simple, and that portion works. The portion which does not is my attempt to make the menu disappear if the user clicks somewhere a context menu is not supported; I am attempting to achieve this with a general function in the BODY tag that fires onclick. Since the BODY tag is given a z-order of -1, and any other tags which might trigger the context menu are given a higher z-order value, my hope was that if I right-clicked an element with a z-order of, say, 3, it would fire the showMenu() function; instead, it appears that it does this and also passes the event on to the underlying BODY tag, which causes the menu to become hidden again.
As you might imagine, it is incredibly frustrating. Does anyone know how to prevent events from being passed down? (The INPUT button is what you may want to look at, the A anchor is something similar, but not coded to work just yet).
Here's the HTML code:
http://pastebin.com/YeTxdHYq
And here's my CSS file:
http://pastebin.com/5hNjF99p
This appears to be a problem with IE, Firefox, and Chrome.
A lot of DOM events "bubble" from the bottom object up through container objects, which means they'll eventually reach the body. But you can stop this - try adding the following code to the click handler on your element:
e.cancelBubble = true;
if (e.stopPropagation) e.stopPropagation();
...where e is the variable you already have in your function representing the event object.
event.stopPropagation(); should work in modern browsers, but the old IE way was event.cancelBubble = true; - to be safe you can just do both (but as shown above check that .stopPropagation is defined before trying to call it).
With the above code added, if you click on the element your function will stop container objects (including the body) from seeing the click. If you click somewhere else, your function isn't called, so the body will process the click.
There's more info about this at MDN and QuirksMode.org.
Note: I've ignored the z-order issue because in this case I think it is a non-issue - all elements are descendants of the body, so (unless you stop it) I would expect events to bubble to the body regardless of z-order.
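A minimal sketch of that pattern, with placeholder names (showMenu/hideMenu and the element id are not from the original markup):
// Sketch: a body click hides the menu; a click on a menu-triggering element
// stops the event from bubbling up to the body, so the menu stays visible.
document.body.addEventListener('click', function () {
    hideMenu(); // placeholder for whatever hides the context menu DIV
});

document.getElementById('menuTrigger').addEventListener('click', function (e) {
    e.cancelBubble = true;                      // old IE
    if (e.stopPropagation) e.stopPropagation(); // everything else
    showMenu(e);                                // placeholder for showing the menu
});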
I am trying to develop a simple drag/drop UI in my web application. An item can be dragged by a mouse or a finger and then can be dropped into one of several drop zones. When an item is dragged over a drop zone (but not yet released), that zone is highlighted, marking safe landing location. That works perfectly fine with mouse events, but I'm stuck with touchstart/touchmove/touchend family on the iPhone/iPad.
The problem is that when an item's ontouchmove event handler is called, its event.touches[0].target always points to the originating HTML element (the item) and not the element which is currently under the finger. Moreover, when an item is dragged by finger over some drop zone, that drop zone's own touchmove handler isn't called at all. That essentially means I can't determine when a finger is above any of the drop zones, and therefore can't highlight them as needed. At the same time, when using a mouse, mousedown is correctly fired for all HTML elements under the cursor.
Some people confirm that it's supposed to work like that, for instance http://www.sitepen.com/blog/2008/07/10/touching-and-gesturing-on-the-iphone/:
For those of you coming from the normal web design world, in a normal mousemove event, the node passed in the target attribute is usually what the mouse is currently over. But in all iPhone touch events, the target is a reference to the originating node.
Question: is there any way to determine the actual element under a finger (NOT the initially touched element which can be different in many circumstances)?
That's certainly not how event targets are supposed to work. Yet another DOM inconsistency that we're probably all now stuck with forever, due to a vendor coming up with extensions behind closed doors without any review.
Use document.elementFromPoint to work around it.
document.elementFromPoint(event.clientX, event.clientY);
The accepted answer from 2010 no longer works: touchmove does not have a clientX or clientY attribute. (I'm guessing it used to since the answer has a number of upvotes, but it doesn't currently.)
Current solution is:
var myLocation = event.originalEvent.changedTouches[0];
var realTarget = document.elementFromPoint(myLocation.clientX, myLocation.clientY);
Tested and works on:
Safari on iOS
Chrome on iOS
Chrome on Android
Chrome on touch-enabled Windows desktop
FF on touch-enabled Windows desktop
Does NOT work on:
IE on touch-enabled Windows desktop
Not tested on:
Windows Phone
I've encountered the same problem on Android (WebView + Phonegap). I want to be able to drag elements around and detect when they are being dragged over a certain other element.
For some reason touch-events seem to ignore the pointer-events attribute value.
Mouse:
if pointer-events="visiblePainted" is set then event.target will point to the dragged element.
if pointer-events="none" is set then event.target will point to the element under the dragged element (my drag-over zone)
This is how things are supposed to work and why we have the pointer-events attribute in the first place.
Touch:
event.target always points to the dragged element, regardless of the pointer-events value, which is IMHO wrong.
My workaround is to create my own drag-event object (a common interface for both mouse and touch events) that holds the event coordinates and the target:
for mouse events I simply reuse the mouse event as is
for touch event I use:
DragAndDrop.prototype.getDragEventFromTouch = function (event) {
    var touch = event.touches.item(0);
    return {
        screenX: touch.screenX,
        screenY: touch.screenY,
        clientX: touch.clientX,
        clientY: touch.clientY,
        pageX: touch.pageX,
        pageY: touch.pageY,
        // elementFromPoint expects viewport (client) coordinates
        target: document.elementFromPoint(touch.clientX, touch.clientY)
    };
};
And then use that for processing (checking whether the dragged object is in my drag-over zone). For some reason document.elementFromPoint() seems to respect the pointer-events value even on Android.
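A sketch of what that processing step might look like with the normalized event object (the drop-zone id and class name are assumptions for illustration):
// Sketch: highlight the drop zone while the drag position is over it.
function handleDragEvent(dragEvent) {
    var dropZone = document.getElementById('dropZone'); // illustrative id
    if (dropZone.contains(dragEvent.target)) {
        dropZone.classList.add('drag-over');
    } else {
        dropZone.classList.remove('drag-over');
    }
}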
Try using event.target.releasePointerCapture(event.pointerId) in the pointerdown handler.
We're now in 2022, and this is intended and specified behavior - it's called "Implicit Pointer Capture"
See the W3 spec on Pointer Events
Direct manipulation devices should behave exactly as if setPointerCapture was called on the target element just before the invocation of any pointerdown listeners. The hasPointerCapture API may be used (eg. within any pointerdown listener) to determine whether this has occurred.
elementFromPoint is a possible solution, but it seems you can also use releasePointerCapture as shown in the following demo. Touching and holding on the green div will get mouse move events for targets outside of it, whereas the red div has the default behavior.
const outputDiv = document.getElementById('output-div');
const releaseDiv = document.getElementById('test-release-div');
const noreleaseDiv = document.getElementById('test-norelease-div');

releaseDiv.addEventListener('pointerdown', function(e) {
    outputDiv.innerHTML = "releaseDiv-pointerdown";
    if (e.target.hasPointerCapture(e.pointerId)) {
        e.target.releasePointerCapture(e.pointerId);
    }
});

noreleaseDiv.addEventListener('pointerdown', function(e) {
    outputDiv.innerHTML = "noreleaseDiv-pointerdown";
});

document.addEventListener('pointermove', function(e) {
    outputDiv.innerHTML = e.target.id;
});
<div id="output-div"></div>
<div id="test-release-div" style="width:300px;height:100px;background-color:green;touch-action:none;user-select:none">Touch down here and move around, this releases implicit pointer capture</div>
<div id="test-norelease-div" style="width:300px;height:100px;background-color:red;touch-action:none;user-select:none">Touch down here and move around, this doesn't release implicit pointer capture<div>
So touch events have a different "philosophy" when it comes to how they interact:
Mouse moves = "hover" like behavior
Touch moves = "drags" like behavior
This difference comes from the fact that there cannot be a touchmove without a touchstart event preceding it, as a user has to touch the screen to start the interaction. With a mouse, of course, a user can mousemove all over the screen without ever pressing a button (mousedown event).
This is why basically we can't hope to use things like hover effects with touch:
element:hover {
background-color: yellow;
}
And this is why, when the user touches the screen with one finger, the first event (touchstart) acquires the target element and the subsequent events (touchmove) keep holding the reference to the original element where the touch started. It feels wrong, but there is a logic to it: you might need the original target info as well. So ideally, in the future, both should be available (source target and current target).
So the common practice of today (2018), where screens can be mouse AND touch at the same time, is still to attach both listeners (mouse and touch), then "normalize" the event coordinates and use the above-mentioned browser API to find the element at those coordinates:
// get coordinates depending on pointer type:
var xcoord = event.touches? event.touches[0].pageX : event.pageX;
var ycoord = event.touches? event.touches[0].pageY : event.pageY;
// get element in coordinates:
var targetElement = document.elementFromPoint(xcoord, ycoord);
// validate if this is a valid element for our case:
if (targetElement && targetElement.classList.contains("dropZone")) {
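    // the touch/pointer is currently over the drop zone; react here (e.g. highlight it)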
}
JSP64's answer didn't fully work since event.originalEvent always returned undefined. A slight modification as follows works now.
var myLocation = event.touches[0];
var realTarget = document.elementFromPoint(myLocation.clientX, myLocation.clientY);
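For completeness, a sketch of that plain-DOM version wired into a touchmove listener (the logging is just illustrative):
// Sketch: on each touchmove, resolve the element currently under the finger.
document.addEventListener('touchmove', function (event) {
    var myLocation = event.touches[0];
    var realTarget = document.elementFromPoint(myLocation.clientX, myLocation.clientY);
    console.log('element under finger:', realTarget);
}, { passive: true });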