Input date/time picker prevents DOM events from triggering - javascript

So I'm working with an <input type='time'>, and I don't understand why, when the picker is open, DOM events stop firing.
My first assumption was shadow DOM, but nothing appears in Chrome DevTools with the shadow tree setting enabled (I can see other shadow trees correctly).
Furthermore, the picker even keeps focus outside the web page: for instance, if I want to close my browser or click an icon on my OS's taskbar, the first click closes the picker and only the second performs the action. This behaviour occurs on Ubuntu 20.04 but not on Windows 10.
Here's a snippet that logs key presses, but whenever the picker is opened, events don't get triggered anymore.
window.addEventListener("keydown", () => console.log("key pressed !"));
<input type="time" id="input">
Is there a way to keep listening to events while the picker is opened?
If there isn't, why not?
Do the events get sent but the listeners can't catch them?
Do the events not get sent at all?
What is that interface made out of? Is it even HTML?

I interpret the question this way: when you refer to the "picker" UI, you mean the user interface elements which are not initially present within the bounding rectangle at first render. Here's an example from a similar question which shows a style used by Chromium browsers (around the time of Chrome version 83):
In the example above, I think you are referring to the "picker" UI as the lower rectangle containing the two vertical columns of two-digit numbers.
What is that interface made out of? Is it even HTML?
If you examine the shadow DOM in a Chromium browser, you can see that there is no HTML corresponding to the "picker" in this case. This is rendered like other native UI components provided by the browser, for example, the Window.prompt() dialog below:
window.addEventListener('keydown', ({key}) => console.log(`${key} was pressed`));
window.prompt('Try typing here first, then focus the demo window and type there afterward');
The code snippet above will result in a natively rendered prompt dialog UI like the one pictured in the screenshot below:
The user agent (browser) is fully responsible for the implementation, style, behavior, etc. for this UI: you cannot influence, control, modify, or respond to it like you can using DOM APIs.
Is there a way to keep listening to events while the picker is opened?
No.
If there isn't, why not?
Explained above.
Do the events get sent but the listeners can't catch them?
No.
Do the events not get sent at all?
There are no keydown events dispatched when the native browser UI is focused.
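That said, while keyboard events will not reach the page, the input's own input and change events still fire when the picker updates or commits a value, so you can react to what the user does there. A minimal sketch (the helper name watchTimeInput is my own, not from the original snippet):

```javascript
// Sketch: instead of keydown, react to value changes coming from the picker.
// "input" fires while the user adjusts the value; "change" fires on commit.
function watchTimeInput(input, onValue) {
  const handler = (event) => onValue(event.target.value);
  input.addEventListener("input", handler);
  input.addEventListener("change", handler);
  // Return a cleanup function so the listeners can be removed later.
  return () => {
    input.removeEventListener("input", handler);
    input.removeEventListener("change", handler);
  };
}
```

Usage, with the snippet's input: watchTimeInput(document.getElementById("input"), (v) => console.log(v));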

Related

Switching touch events to mouse events by using Chrome Extension API

I am developing a Chrome Extension panel in the dev console for mobile-interface testing, and I need to pick elements (similar to the element picker in the dev console) and process things according to the picked element.
The user should be able to use responsive mode or any other mobile interface. Switching to a custom device does not work for me.
I need to be able to listen to MouseEvents like mousemove instead of pointermove, because the user should move the mouse cursor over the site and catch elements as they move. Pointer events do not provide this.
It would also be OK if I could initialise the element picker programmatically and listen to events from it.
I am not expecting someone to share a code sample; a pointer to an API or a naming correction is also welcome.
Thanks in advance
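One possible approach (my own sketch, not from the thread): drive the highlight from mousemove plus document.elementFromPoint, which reports the topmost element under the cursor as it moves.

```javascript
// Sketch: a minimal element picker driven by mousemove.
// elementFromPoint returns the topmost element under the given coordinates.
function startPicker(doc, onHover) {
  const handler = (event) => {
    const el = doc.elementFromPoint(event.clientX, event.clientY);
    if (el) onHover(el);
  };
  // Capture phase, so the page cannot swallow the events first.
  doc.addEventListener("mousemove", handler, true);
  return () => doc.removeEventListener("mousemove", handler, true);
}
```

Whether this works from an extension panel against an emulated mobile page is an open question; the sketch only shows the event wiring.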

Is there a cross browser event for activate in the browser?

Is there a cross browser event that can be used to show a message to the user returning to their web page?
For example, a user has ten applications or tabs open. They get a new notification from our app and I show a notification box. When they switch to our tab I want to begin our notification animation.
The activate event is common in desktop applications, but so far, on window, document and body, neither "activate" nor "DOMActivate" does anything when swapping between applications or tabs, while "focus" and "blur" do. Those events work, but the naming is different, and the events that should be doing this do nothing.
So is that the right event to use cross-browser, or is there another event?
You can test by adding this in the console or page and then swapping between applications or tabs:
window.addEventListener("focus", function(e) {console.log("focused at " + performance.now()) } )
window.addEventListener("blur", function(e) {console.log("blurred at " + performance.now()) } )
Update:
In the link to the possible duplicate is a link to the W3 Page Visibility doc here.
It says to use the visibilitychange event to check when the page is visible or hidden like so:
document.addEventListener('visibilitychange', handleVisibilityChange, false);
But there are issues:
The Document of the top level browsing context can be in one of the following visibility states:
hidden
The Document is not visible at all on any screen.
visible
The Document is at least partially visible on at least one screen. This is the same condition under which the hidden attribute is set to false.
So it explains why it's not firing when switching apps. But even when switching apps and the window is completely hidden the event does not trigger (in Firefox).
So at the end of the page is this note:
The Page Visibility API enables developers to know when a Document is
visible or in focus. Existing mechanisms, such as the focus and blur
events, when attached to the Window object already provide a mechanism
to detect when the Document is the active document.
So it would seem to suggest that it's accepted practice to use focus and blur to detect window activation or app switching.
I found this answer that is close to what would be needed to make a cross browser solution but needs focus and blur (at least for Firefox).
Observation:
StackOverflow has a policy against mentioning frameworks or libraries. The answers linked here have upvotes for the "best" answer.
But these can grow outdated. Since yesterday I found mention of two frameworks (polyfills) that attempt to solve this same problem, visibly and isVis (not creating a link). If this is a question-and-answer site and a valid answer is "here is some code that works for me", but "here is the library I created using the same code, which can be kept up to date and maintained on GitHub" is not valid, then in my opinion it's missing its goal.
I know the above should probably go to meta, and I have raised it there, but they resist changing the status quo for some reason. I mention it here since it's a relevant example.
The Page Lifecycle API can be used to listen for visibilitychange events.
[This event triggers] when a user navigates to a new page, switches tabs, closes a tab, minimizes or closes the browser, or switches apps on mobile operating systems.
Current browser support
Reference on MDN
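Putting the question's own findings together, a cross-browser sketch would combine visibilitychange with the focus/blur fallback the asker found necessary in Firefox (the helper name onActivation is mine):

```javascript
// Sketch: notify once per activation/deactivation, whichever event arrives.
// visibilitychange covers tab switches; focus/blur cover app switches
// (needed in Firefox, per the question's observations).
function onActivation(doc, win, callback) {
  let active = !doc.hidden;
  const update = (next) => {
    if (next !== active) {
      active = next;
      callback(active);
    }
  };
  doc.addEventListener("visibilitychange", () => update(!doc.hidden));
  win.addEventListener("focus", () => update(true));
  win.addEventListener("blur", () => update(false));
}
```

Usage in the notification scenario: onActivation(document, window, (active) => { if (active) { /* start the notification animation */ } }); tracking the boolean state keeps a tab switch from firing the callback twice when both events arrive.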

How can I get a WKWebView to show the keyboard on iOS?

My iOS app uses a WKWebView with contenteditable = true on a specific div. I'd like to have code to make the keyboard show up for the web view, so the user can just start typing. Things I've tried that have had no effect:
Telling the web view to becomeFirstResponder (a long shot, because the web view wouldn't know what div to use).
Injecting JS to tell the div to focus(). (This works in other browsers, but sadly not in WKWebView)
Simulating touch events in JS via TouchEvent and dispatchEvent() in the hope of making it seem that the user had tapped on the div.
In the third case I also used addEventListener() to observe the simulated touches and compare them to real touch events from tapping the screen. It looks like the key difference is that the event's isTrusted value is false for the simulated touches.
I get that it's a potential security issue to let apps simulate touch events, but I didn't have any other ideas. I'm trying to get the keyboard to appear, what the user types is up to them and not something I want to mess with. Basically I want the same thing as calling becomeFirstResponder() on a UITextView.
This is very similar to a WebKit issue 142757 but I haven't figured out how to use the suggested workaround linked from there.
Clarification: I can set up and use an editable web view, but the keyboard doesn't appear until I tap on the web view. I'm trying to make the keyboard appear automatically, without requiring a tap to initiate editing.
I tried this in an iPad playground, and it works without any action on my part. It’s possible there is another view that is capturing touches, or “contenteditable” is misspelled, or something else?

Electron app interacting with WebView drag-drop

I'm developing a (desktop) application in Electron (v1.6.2) that has a <webview> element (hosting 'guest' web pages) and a number of text fields, both <textarea> and <input type="text">.
We would like the user to be able to select text within the guest page inside the WebView and drag-and-drop it into the application's fields -- but, this isn't working.
Without doing anything special, I can select text in the guest page and drag-and-drop it into other applications outside of Electron.
E.g. -
Dropping it into a text editor, a text-field in a web-browser or my terminal window works fine.
It even works dropping it into a field in a different instance of my Electron application.
I can also drag text that is in the application, but not inside the WebView, and drop that in the fields Ok.
However, when dragging the text selected in the WebView, the fields in the application are not sensitive to events -- i.e. they receive no dragover or drop events, and focus does not switch to them as you would expect.
I've tried adding event handlers to the <webview> to intercept the mouse events (mousedown, mousemove, mouseup) in order to control things manually, and tried calling event.preventDefault() to stop the events from passing down to the guest page.
Everything behaves as I would expect until the moment it is recognised that you are dragging text. Visually, this is the moment the pointer switches to a closed fist and a ghostly rectangle appears showing the selected text. At that moment all mouse events cease to be received by the <webview> event handlers. It seems the application is 'frozen' while text is being dragged.
Is there anything I can do about this?
Either of the following would work:
- prevent the WebView from actually dragging text, so I can simulate the drag programmatically;
- or find a way to 'unfreeze' the application during the text drag, so that the fields are active, can see events, and can receive dropped text.
This is a bug present in Electron v1.6.2.
It is fixed in Electron v1.6.5, released at the end of March 2017.
I've now updated to the latest stable release, v1.6.11, and the issue is fixed.
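For reference, once the bug is fixed no special handling is needed on the receiving side; the fields just need the standard drop-target wiring (the helper name makeDropTarget is mine):

```javascript
// Sketch: accept text dropped onto a field.
// dragover must be cancelled, or the browser rejects the drop by default.
function makeDropTarget(field, onText) {
  field.addEventListener("dragover", (event) => event.preventDefault());
  field.addEventListener("drop", (event) => {
    event.preventDefault();
    onText(event.dataTransfer.getData("text/plain"));
  });
}
```
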

Display Triggered JavaScript Events

I am working on a website that uses masonry. I know that the masonry rebuilds itself when the window width changes and I want to be able to trigger that rebuilding at will, like when one of the elements' height is changed. The problem is I don't see any event listeners related to the window's width that I can copy the code from. Is there a way I can see which events are being triggered at the moment?
In Chrome Developer Tools (press F12 within Chrome), go to the Sources tab.
In the right hand pane, expand > Event Listener Breakpoints and tick the ones you want to break on.
Alternatively, if it is using jQuery event handling, you can install the jQuery Debugger extension for Chrome Devtools, and it gives you a jQuery Events tab in the right hand column on the Elements tab. That shows you what events are bound to using jQuery for the selected element. Try selecting the <html> tag or the <body> tag and see if you can find it there.
One final option is to search in the JS you are using for the string "resize".
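And if the goal is simply to run whatever the library does on resize, you can also dispatch a synthetic resize event yourself (a sketch, assuming the handlers are bound to window, which is the usual pattern):

```javascript
// Sketch: trigger the same handlers a real window resize would.
function triggerResize(target) {
  target.dispatchEvent(new Event("resize"));
}
```

Usage: call triggerResize(window) after changing an element's height. Note that synthetic events have isTrusted set to false, though resize handlers rarely check it.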
