I am developing a Chrome Extension that adds a panel to DevTools for mobile interface testing, and I need to pick elements (similar to the element picker in DevTools) and process things according to the picked element.
The user should be able to use responsive mode or any other mobile device emulation. Changing to a custom device does not work for me.
I need to listen for MouseEvents like mousemove rather than pointermove, because the user should be able to move the mouse cursor over the site and catch elements as they move, and pointer events do not provide this.
It would also be OK if I could initialise the element picker programmatically and listen for events from it.
I am not necessarily expecting a code sample; a "check this API" pointer or a naming correction is also welcome.
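For illustration, something like this is roughly what I mean: a minimal sketch of a mousemove-based picker in a content script (the magenta outline is just a stand-in for whatever processing the picked element needs):

// Sketch: track and highlight whichever element is under the mouse cursor.
let lastEl = null;

document.addEventListener('mousemove', (event) => {
  // elementFromPoint resolves the topmost element at the cursor position.
  const el = document.elementFromPoint(event.clientX, event.clientY);
  if (!el || el === lastEl) return;
  if (lastEl) lastEl.style.outline = '';   // clear the previous highlight
  el.style.outline = '2px solid magenta';  // mark the hovered element
  lastEl = el;
}, true); // capture phase, so the page can't swallow the event first

// The problem: in device emulation mode, Chrome dispatches pointer/touch
// events instead of mousemove, so this handler stops firing.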
Thanks in advance
So I'm working with an <input type="time">, and I don't understand why DOM events stop triggering when the picker is opened.
My first assumption was shadow DOM, but it doesn't appear in Chrome DevTools with the shadow-tree setting enabled (I can see other shadow trees correctly).
Furthermore, the picker even keeps focus outside the web page: for instance, if I want to close my browser or click an icon on my OS's taskbar, the first click just closes the picker and only the second performs the action. This behaviour occurs on Ubuntu 20.04 but not on Windows 10.
Here's a snippet that logs key presses, but whenever the picker is opened, events don't get triggered anymore.
window.addEventListener("keydown", () => console.log("key pressed !"));
<input type="time" id="input">
Is there a way to keep listening to events while the picker is open?
If there isn't, why not?
Do the events get sent but the listeners can't catch them?
Do the events not get sent at all?
What is that interface made of? Is it even HTML?
I interpret the question this way: by the "picker" UI, you mean the interface elements which are not initially present within the input's bounding rectangle at first render. Here's an example from a similar question which shows the style used by Chromium browsers (around the time of Chrome version 83):
In the example above, I think you are referring to the "picker" UI as the lower rectangle containing the two vertical columns of two-digit numbers.
What is that interface made of? Is it even HTML?
If you examine the shadow DOM in a Chromium browser, you can see that there is no HTML corresponding to the "picker" in this case. This is rendered like other native UI components provided by the browser, for example, the Window.prompt() dialog below:
window.addEventListener('keydown', ({key}) => console.log(`${key} was pressed`));
window.prompt('Try typing here first, then focus the demo window and type there afterward');
The code snippet above will result in a natively rendered prompt dialog UI like the one pictured in the screenshot below:
The user agent (browser) is fully responsible for the implementation, style, behavior, etc. of this UI: you cannot influence, control, modify, or respond to it the way you can with DOM APIs.
Is there a way to keep listening to events while the picker is open?
No.
If there isn't, why not?
Explained above.
Do the events get sent but the listeners can't catch them?
No.
Do the events not get sent at all?
There are no keydown events dispatched while the native browser UI has focus.
I want to mimic the events of a user typing on a keyboard from within a Google Chrome Extension, in much the same way one can with Puppeteer, so that it generally fires all the same events a user typing manually would.
Are there methods available in Chrome's extension APIs to facilitate this? I have tried using client-side JavaScript and DOM manipulation directly, including firing off synthetic events for mouse clicks, etc. This works on some pages in lieu of manual events, but on a few crafty pages only manual input and Puppeteer's keyboard/mouse events accurately mimic a user and fire off all the events.
If there is a library that facilitates this for Google Chrome Extensions I’d really appreciate someone pointing me in the right direction.
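One avenue worth checking is the chrome.debugger API, which is essentially the mechanism Puppeteer uses: an extension with the "debugger" permission can attach to a tab and send DevTools-protocol input commands, which produce trusted input events. A minimal sketch, assuming a Manifest V3 service worker and a tabId that is already known:

// Sketch: type one character into the focused element of a tab by
// sending input through the DevTools protocol. Requires the
// "debugger" permission in the manifest.
async function typeChar(tabId, ch) {
  const target = { tabId };
  await chrome.debugger.attach(target, '1.3');
  // keyDown with text, followed by keyUp, mirrors a physical key press.
  await chrome.debugger.sendCommand(target, 'Input.dispatchKeyEvent', {
    type: 'keyDown', text: ch, key: ch,
  });
  await chrome.debugger.sendCommand(target, 'Input.dispatchKeyEvent', {
    type: 'keyUp', key: ch,
  });
  await chrome.debugger.detach(target);
}

Note that attaching the debugger shows Chrome's "is being debugged" info bar, which may or may not be acceptable for your use case.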
My iOS app uses a WKWebView with contenteditable = true on a specific div. I'd like to have code to make the keyboard show up for the web view, so the user can just start typing. Things I've tried that have had no effect:
- Telling the web view to becomeFirstResponder (a long shot, because the web view wouldn't know what div to use).
- Injecting JS to tell the div to focus(). (This works in other browsers, but sadly not in WKWebView.)
- Simulating touch events in JS via TouchEvent and dispatchEvent(), in the hope of making it seem that the user had tapped on the div.
In the third case I also used addEventListener() to observe the simulated touches and compare them to real touch events from tapping the screen. It looks like the key difference is that the event's isTrusted value is false for the simulated touches.
I get that it's a potential security issue to let apps simulate touch events, but I didn't have any other ideas. I'm trying to get the keyboard to appear, what the user types is up to them and not something I want to mess with. Basically I want the same thing as calling becomeFirstResponder() on a UITextView.
This is very similar to WebKit issue 142757, but I haven't figured out how to use the suggested workaround linked from there.
Clarification: I can set up and use an editable web view, but the keyboard doesn't appear until I tap on the web view. I'm trying to make the keyboard appear automatically, without requiring a tap to initiate editing.
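For reference, the touch simulation from the third bullet looked roughly like this (a sketch; the selector and coordinates are placeholders for the real contenteditable div):

// Sketch: synthesize a tap on the editable div. The listeners do fire,
// but event.isTrusted is false, so WebKit doesn't treat it as a real
// user tap and the keyboard stays hidden.
const editableDiv = document.querySelector('[contenteditable="true"]');
const touch = new Touch({ identifier: 1, target: editableDiv, clientX: 10, clientY: 10 });
for (const type of ['touchstart', 'touchend']) {
  editableDiv.dispatchEvent(new TouchEvent(type, {
    touches: type === 'touchstart' ? [touch] : [],
    changedTouches: [touch],
    bubbles: true,
    cancelable: true,
  }));
}
editableDiv.focus(); // the focus() attempt from the second bullet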
I tried this in an iPad playground, and it works without any action on my part. It’s possible there is another view that is capturing touches, or “contenteditable” is misspelled, or something else?
I'm developing a (desktop) application in Electron (v1.6.2) that has a <webview> element (hosting 'guest' web pages) and a number of text fields, both <textarea> and <input type="text">.
We would like the user to be able to select text within the guest page inside the WebView and drag-and-drop it into the application's fields -- but, this isn't working.
Without doing anything special, I can select text in the guest page and drag-and-drop it into other applications outside of Electron.
E.g.:
- Dropping it into a text editor, a text field in a web browser, or my terminal window works fine.
- It even works dropping it into a field in a different instance of my Electron application.
I can also drag text that is in the application, but not inside the WebView, and drop that into the fields OK.
However, when dragging the text selected in the WebView, the fields in the application are not sensitive to events -- i.e. they receive no dragover or drop events, and focus does not switch to them as you would expect.
I've tried adding event handlers to the <webview> to intercept the mouse events (mousedown, mousemove, mouseup) in order to control things manually, and tried calling event.preventDefault() to stop the events from passing down to the guest page.
Everything behaves as I would expect until the moment it is recognised that you are dragging text. Visually, this is the moment the pointer switches to a closed fist and a ghostly rectangle appears showing the selected text. At that moment all mouse events cease to be received by the <webview> event handlers. It seems that the application is 'frozen' while text is being dragged. The interception looks roughly like the sketch below.
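A sketch of that interception, with wv standing for the <webview> element:

// Sketch: capture mouse events on the <webview> before the guest page
// sees them, so a drag could be simulated manually instead.
const wv = document.querySelector('webview');
for (const type of ['mousedown', 'mousemove', 'mouseup']) {
  wv.addEventListener(type, (event) => {
    event.preventDefault();   // try to suppress the native text drag
    event.stopPropagation();
    // ...manual drag handling would go here...
  }, true); // capture phase
}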
Is there anything I can do about this?
It would work either to:
- prevent the WebView from actually dragging text, and for me to simulate it programmatically;
- or to find a way to 'unfreeze' the application during the text-dragging, so that the fields are active, can see events and can receive dropped text.
This is a bug present in Electron v1.6.2.
It is fixed in Electron v1.6.5, released at the end of March 2017.
I've now updated to the latest stable release, v1.6.11, and the issue is fixed.
I'm trying to find a way to open the default context menu at an [X, Y] viewport position inside a Chrome extension (content script). Is it possible? How can I do it?
The thing is, I have handlers on mouseup, mousemove, mousedown, and contextmenu which inhibit the context menu and do some magic (gestures), so it would be nice if I could just open the default context menu when no gesture was detected in mouseup (i.e. when there was just a single RMB click).
I'm actually trying to fix this neat-looking extension https://github.com/RyutaKojima/simpleGestures for the Gtk+ version of Chrome, where the context menu is shown on mouse down.
Without this ability I guess I'll have to temporarily disable the mouse(up|down) and contextmenu handlers and somehow simulate an RMB click in mouseup (unless I get stuck on another limitation).
Context-menu invocation is a "user gesture" (Chrome uses it to allow an extension access to a set of otherwise restricted functionality, like reading/writing the clipboard or the DOM of the active page when only the activeTab permission is given in the extension's manifest). Thus your only way out is to make an accompanying cross-platform native app that can send a right-click mouse event. It would be installed separately and communicate with your extension via the Native Messaging API.
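The extension side of that would look roughly like this (a sketch; the host name com.example.rightclick and the message shape are made up for illustration, and the native host itself still has to be written and registered separately):

// Sketch: ask a separately installed native host to synthesize a
// right-click at the given coordinates. Requires the "nativeMessaging"
// permission; the host name and message format here are hypothetical.
const port = chrome.runtime.connectNative('com.example.rightclick');
port.onMessage.addListener((msg) => console.log('host replied:', msg));
port.postMessage({ type: 'right-click', x: 100, y: 200 });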