Let's imagine I have a simple page with the following content:
<form>
<input type="text" id="startText">
</form>
I have a Chrome extension with a script that triggers when this page loads. I have also configured all the relevant permissions in Chrome (e.g. clipboardRead). The script that triggers on page load is called action.js. It currently has a single line of code:
document.getElementById("startText").value = "text";
I know that I can use the "execCommand('paste')" function to paste within a Chrome extension. But I can't figure out how to modify the above code so that it pastes the contents of the user's clipboard into the input element.
I would try something like:
document.getElementById("startText").value.execCommand('paste')
But that, unsurprisingly, does not work.
For security reasons, the clipboard can only be accessed via background pages. The problem is that background pages cannot interact with the DOM; only content scripts can. Check out this gist, which solves this problem by passing messages between the background page and the content script.
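A rough sketch of that message-passing pattern (simplified, with illustrative message names; not the exact code from the gist):
// background.js: read the clipboard into a helper textarea, then hand the text back.
chrome.runtime.onMessage.addListener(function (request, sender, sendResponse) {
  if (request.type === 'getClipboard') {
    var helper = document.createElement('textarea');
    document.body.appendChild(helper);
    helper.focus();
    document.execCommand('paste');              // requires the clipboardRead permission
    sendResponse({ text: helper.value });
    document.body.removeChild(helper);
  }
});
// content script (action.js): ask the background page for the clipboard text.
chrome.runtime.sendMessage({ type: 'getClipboard' }, function (response) {
  document.getElementById('startText').value = response.text;
});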
As of 2014 and this bugfix, you can now use copy/paste directly in content_scripts, assuming you have declared the proper permissions in the manifest.
It's important to remember that execCommand('paste') does not return the contents of the clipboard, but actually triggers a paste action into the focused element and selection region of the document. Therefore, the code to paste into your input element would be:
document.getElementById("startText").select();
var didSucceed = document.execCommand('paste');
If you wish to capture formatted text, you will need to use a DIV contentEditable=true instead of a TEXTAREA.
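For example, a sketch of that variation (the element id is illustrative):
// Assumes <div contenteditable="true" id="pasteTarget"> exists in the page.
var target = document.getElementById('pasteTarget');
target.focus();
document.execCommand('paste');
var formatted = target.innerHTML;   // pasted markup, with formatting preserved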
If you would like to see a working example that uses the older method of using a background page, you can see my BBCodePaste extension.
I am trying to build an automated Puppeteer script to download my monthly bank transactions from my bank website.
However, I am encountering a strange error (see the attached Imgur album for pictures of this behavior):
https://imgur.com/a/rSwCAxj
Problem: querySelector returns null on a DOM element that is clearly visible:
Screenshot: https://imgur.com/d540E6p
(1) The input box for the username is clearly visible on the site (https://internet.ocbc.com/internet-banking/),
(2) However, when I run document.querySelector('#access-code'), the console returns null.
I'm wondering why this happens, and under what circumstances a browser would return null for a querySelector('#id') query when the DOM node is clearly visible.
EDIT: Weird workaround that works:
I was continuing to play around with the browser, and used DevTools to inspect the DOM element and copy its JS path.
Weirdly, after using Chrome DevTools to copy the JS path, document.querySelector('#access-code') returned the correct element.
Screenshot of it returning the correct element: https://imgur.com/a/rSwCAxj
In both cases, the exact same search string is used for document.querySelector.
I believe that you cannot get the proper value using document.querySelector('#access-code') because the website uses a frameset.
In the website there is a frame with a src attribute that loads the content:
<frame src="/internet-banking/Login/Login">
DOMContentLoaded fires when the main document is loaded and does not wait for the frame content to load.
First of all, you need a listener for the load event:
window.addEventListener("load",function() {
...
});
And later on you cannot simply use document.querySelector('#access-code'),
because the input you want is inside the frame. You will need to access the frame's content and then use a simple querySelector inside it.
So something like:
window.addEventListener("load",function() {
console.log(window.frames[0].document.querySelector('#access-code'));
});
BTW, please look at view-source:https://internet.ocbc.com/internet-banking/; it looks like the website is mostly rendered client-side.
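And since the original goal is a Puppeteer script, the same idea applies there: query inside the frame, not the page. A rough sketch (the frame-matching condition is an assumption based on the markup above; adjust it to the real page):
const puppeteer = require('puppeteer');
(async () => {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  await page.goto('https://internet.ocbc.com/internet-banking/', { waitUntil: 'load' });
  // The login form lives inside a frame, so look the frame up first.
  const loginFrame = page.frames().find(f => f.url().includes('Login'));
  if (loginFrame) {
    await loginFrame.waitForSelector('#access-code');
    console.log(await loginFrame.$('#access-code')); // no longer null
  }
  await browser.close();
})();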
I am trying to build a content editor. This content editor will load an HTML document (with JavaScript) into, for example, a #result element. The problem is that if this HTML contains something like $("input").hide();, then all of my inputs are hidden throughout the whole page, not just inside the loaded HTML (which is my goal).
What I want the editor to do: when a client clicks on an element that represents something in the database, the info for this element will pop up and the user will be able to edit it. (So, if a user hovers over a form with the class "contact-form" (which is in the database, connected to the loaded page), a new window will pop up with information about this specific form element.)
Also, I cannot completely disable Javascript, since the loaded HTML might contain Javascript for styling etc.
My goal: remove JavaScript that can be annoying when a user loads an HTML file, like an alert(). Also, remove the ability for the JavaScript to edit something outside its own DOM.
P.S. I am open to better workarounds like using an iframe for this, BUT I want to be able to hover over elements and interact with them.
Edit: It seems that this question might be a bit too broad, looking at the comments. Summary of my question: how can I disable alert() for a specific div, and how can I create a sandbox so that code inside a div can only change elements inside that div?
What you're looking for is HTML sanitization. This is the process by which you remove any dangerous content from a snippet of HTML on the server, before it's loaded in the browser. There are plenty of sanitization libraries out there that can strip script tags, object tags, etc. Just remember, you can't sanitize using javascript because by the time you've injected your script, another malicious script may have already loaded and run.
The only way to effectively sandbox a javascript environment is with iframes. You'll notice that websites like CodePen, JSBin and JSFiddle use them extensively. There's something called the ShadowDOM, which is the basis of Web Components, but it isn't very well supported yet.
To make it possible to run your own frontend scripts that allow for hovering, you can inject your script after your sanitization process. This way, if it's loaded inside an iframe your script will also be loaded.
Finally, alert() doesn't belong to any elements on the DOM. You can trigger an alert as soon as the page loads, for example. However, if you're trying to prevent alerts from popping up on user interactions, you could try removing all event listeners from a particular element. This won't be necessary if you sanitize the HTML of script tags, however, since the script wouldn't have had a chance to load so there won't be any event listeners.
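A minimal sketch of the iframe approach (assuming loadedHtml holds the fetched HTML string and #result is your container): "allow-scripts" lets the loaded document run its own styling scripts, but they cannot reach the parent page's DOM, and leaving out "allow-modals" blocks alert()/confirm()/prompt() in modern browsers.
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts');
frame.srcdoc = loadedHtml;                        // the HTML you fetched (assumption)
document.getElementById('result').appendChild(frame);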
You can use Shadow DOM to load an HTML document into a host node. See also WHY SHADOW DOM?
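A minimal sketch of that approach (assuming the modern attachShadow API and that loadedHtml holds the fetched HTML string):
var host = document.getElementById('result');
var shadow = host.attachShadow({ mode: 'open' });
shadow.innerHTML = loadedHtml;
// Note: <script> tags inserted via innerHTML are not executed, and styles in
// the shadow tree are scoped to the shadow root rather than the whole page.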
I'm trying to implement a UI that lets the end user upload multiple files to a server, with a custom UI - pretty much the same way GMail or Outlook.net does it.
A few things to note:
The <input type="file"> is ugly - and not consistent across browsers (IE shows a button named 'Browse' to the left of the file name; Chrome shows a button named 'Choose' to the right of the file name).
Most suggestions for the UI involve hiding a file input element with opacity=0 on top of my custom UI. The 'click' event will open the dialog box, and upon return the file name (without the path) will be available via $('#file').val(). See this question, as well as the sample on jsfiddle.
I'm also aware HTML5 has a multiple="multiple" attribute now, which will let the user select multiple files.
However, I'm looking for a multiple-file solution that will work on IE8 and above (as well as WebKit and Mozilla).
Some people suggested Google is using Flash. This is not true: their multi-file upload works when Flash is disabled.
Now, here is my biggest surprise: using the developer tools (F12) in both IE and Chrome, and looking at both GMail and Outlook.NET, neither implementation has an <input type='file'> element in the tree (as far as I can tell). Moreover, both implementations work in IE8 (Flash disabled).
What is going on? How do they do it?
EDIT: Why do I think they don't use a file input element? Open the developer tools (F12), switch to Console, and type: document.getElementsByTagName('input'). There are 24 input elements, 19 of which are type=hidden, and none is type=file.
EDIT 2: Thank you to all responders and commenters. Indeed, the "there is no other way" argument (in a comment below) was valid. As it turns out, both Outlook.NET and GMail do have an <input type='file'> element, which they add dynamically only when the user clicks the 'Attach a file' button. Then they send a 'click' event to the element, which triggers the file-select dialog.
To witness this, use the F12 developer tools (either in Chrome or in IE), and in the interactive console type: document.querySelectorAll('input[type=file]'). Note that in both implementations, the element is a direct child of body (with display=none).
They do not use an iframe for the upload (unlike the only answer below), but simple XHR code, which is now available in HTML5.
The best resource on the Web for how to do it is https://developer.mozilla.org/en-US/docs/Using_files_from_web_applications. I've gone through the steps from #Jay below (which are great), but the Mozilla page is simpler, which is my recommendation. Also, take a quick look at the jsfiddle sample in #Niranjan's comment.
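A rough sketch of that pattern (the button id and upload URL are illustrative, not taken from GMail or Outlook.NET):
document.getElementById('attachButton').addEventListener('click', function () {
  var input = document.createElement('input');
  input.type = 'file';
  input.multiple = true;              // HTML5 multiple selection
  input.style.display = 'none';
  document.body.appendChild(input);
  // When the user has picked files, upload them with XHR + FormData.
  input.addEventListener('change', function () {
    var formData = new FormData();
    for (var i = 0; i < input.files.length; i++) {
      formData.append('files[]', input.files[i]);
    }
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload');      // hypothetical endpoint
    xhr.send(formData);
  });
  input.click();                      // opens the file-select dialog
});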
I recently implemented a multi file upload UI for an old asp.net website, but the concepts should be the same.
I'm not very good at writing (or summarizing code), but here goes.
Create a number of IFrames. I had problems trying to write IFrames after the document loaded due to security restrictions, so I had the server render as many as I thought the user would use at once.
Add an 'upload' button and handler which first adds a load handler to one of the iframes.
var frame = $('iframe:first');
In the frame load handler:
frame.load(function () { /* all the code below */ });
2.a. Write the file input tag and whatever other elements you like into the frame, like this:
frame.contents().find('body').html("html goes here");
2.b. Now add a handler to the file input in your frame and submit that form:
frame.contents().find('#fileUpload').change( /*submit the form */)
2.c. Now invoke the file upload dialog
frame.contents().find('#fileUpload').click();
2.d. Now that line will block until the dialog returns. When it does, you have to check the value of the file upload control for null in case they cancelled. This is where I marked the iframe as not in use.
2.e. Either way, you'll need to unbind from the iframe's load event and rebind to a different method that will handle the return (upload complete):
frame.unbind('load');
frame.load(function () { /* handle file uploaded */})
2.e.1. This is where I reported success to the user and released the frame so it could be reused.
2.e.2. Finally, unbind from load again in the upload-complete method.
All of that goes in your frame load handler.
3. Now cause the frame to load:
frame.load();
At least that's how I did it. I uploaded all the files to a handler which reported each file's upload percentage, and a loop inside the parent page fired off AJAX requests to get and display the progress of each file.
The main idea is that if you want multi-file upload in an 'ajaxy' style without using Flash or HTML5, you'll need a collection of iframes and some fancy script.
Hope this helps.
I have a page with a lot of JavaScript. However, once rendered the page remains static; there are no moving things or special effects, etc. It should be possible to render the same HTML without any JavaScript at all, using only plain HTML and CSS. This is exactly what I want - I would like to get a no-JavaScript version of that particular page. Naturally, I do not expect any dynamic behavior, so I am OK if buttons are dead, for example. I just want them rendered.
Now, I do not want an image. It needs to be HTML with CSS (the CSS may be embedded in the HTML, which is fine too).
How can I do it?
EDIT
I am sorry, but I must not have been clear. My web site works with JavaScript and will not work without it. I do not want to check whether it works without JavaScript; I know it will not, and I really do not care. This is not what I am asking. I am asking about a specific page, which I want to grab as pure HTML + CSS. The fact that its dynamic nature is lost is of no importance.
EDIT2
There is a suggestion to grab the HTML from the DOM inspector. This is the first thing I did - in the Chrome developer tools I copied the root html element as HTML and saved it to a file. Of course, this does not work, because it continues to reference the CSS files on the web. I guess I should have mentioned that I want it to work from the file system.
Next was to save the page as complete, with all its resources, using some kind of Save menu (browser dependent). This saves the page and all the related files forming a closure, which can be opened from the file system. But the HTML has to be manually cleaned of all the JavaScript - tedious and error prone.
EDIT3
I seem to keep forgetting things. Images should be preserved, of course.
I have to do a similar task on a semi-regular basis. As yet I haven't found an automated method, but here's my workflow:
Open the page in Google Chrome (I imagine Firefox also has the relevant tools);
"Save Page As" (complete page), rename the html page to something nicer, delete any .js scripts which got downloaded, move everything into a single folder;
On the original page, open the Elements tab (DOM inspector), find and delete any tags which I know cause problems (Facebook "like" buttons, for example; I also try to delete script tags at this stage because it's easier), and copy as HTML (right-click the <html> tag). Paste this into (replacing the contents of) the downloaded HTML file (remember to keep the DOCTYPE, which doesn't get copied);
Search all HTML files for any remaining script sections and delete them (also delete any noscript content), and search for " on" (with a leading space, which StackOverflow won't render) to remove inline handlers (onload, onclick, etc) - see the console sketch below;
Search for images (src=, url()), find common patterns in the image filenames, and use regular expressions to replace them globally - for example src="/images/myimage.png" => |/images/|| (i.e. strip the /images/ prefix so the paths are local). This needs to be applied to all HTML and CSS files. Also make sure the CSS files have the correct path (href). While doing this I usually replace all href links with #;
Finally open the converted page in a browser (actually I tend to do this early on so that I can see if any change I make causes it to break), use the Console tab to check for 404 errors (images that didn't get downloaded or had a different name) and the Network tab to check if anything is still being loaded from the online version;
For any files which didn't get downloaded I go back to the original page and use the Resources tab to find them and download manually;
(Optional) Cull any content which isn't needed (tracker images/iframes, unused CSS, etc).
It's a big job. I'd love a tool which automated all that, but so far I haven't found one. The pages I download are quite badly made (shops) which have a lot of unusual code, so that's why there are so many steps. You might not need to follow every step.
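That said, a small DevTools console snippet can automate the script-stripping step (a rough sketch, run on the original page before copying the HTML):
// Remove script/noscript elements and inline on* handlers, then copy
// document.documentElement.outerHTML (remember to re-add the DOCTYPE).
document.querySelectorAll('script, noscript').forEach(function (el) { el.remove(); });
document.querySelectorAll('*').forEach(function (el) {
  Array.from(el.attributes).forEach(function (attr) {
    if (attr.name.startsWith('on')) el.removeAttribute(attr.name);
  });
});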
I just discovered that the iframe version of the Facebook Like button doesn't honor its query parameters when the iframe is created with JavaScript, rather than included directly in the document's HTML.
Please have a look at this jsFiddle that I created:
http://jsfiddle.net/qQsCC/
I generated a Like button at the URL linked above and first included the HTML exactly as it was provided. Then, I broke it down into the JavaScript code needed to create and append an identical element to the DOM.
In the "Result" window, you'll see the HTML version of the button on top, and the JavaScript-created version below. While the value of the src attribute is identical for both (as well as all other HTML attributes), the lower button doesn't appear to honor any of the parameters that I've passed, such as colorscheme or font.
Does anyone know why this is happening, or have any suggestions for how I might avoid this behavior?
The use case here is that I'm creating HTML ads that will include the iframe version of the "Like" button; a requirement is that the ad can only load 50KB of data initially, then up to 1MB after window.onload has fired. Since the "Like" button weighs in at over 50KB alone, I need to construct the iframe using JavaScript after window.onload rather than just including the <iframe> element in the ad's HTML.
When you add the URL using HTML, HTML entities are automatically decoded. This doesn't happen in JavaScript, so you need to decode the URL before passing it to JavaScript, e.g.:
like.src = 'http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.facebook.com%2F&send=false&layout=standard&width=450&show_faces=false&action=like&colorscheme=dark&font=arial&height=35';
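If you start from the src in the embed markup instead, a quick sketch of the decoding step (variable names are illustrative): in markup the browser decodes &amp; to &, but in a JavaScript string it stays literal, so parameters like colorscheme arrive as amp;colorscheme and are ignored.
var embedSrc = 'http://www.facebook.com/plugins/like.php?href=http%3A%2F%2Fwww.facebook.com%2F&amp;colorscheme=dark&amp;font=arial';
like.src = embedSrc.replace(/&amp;/g, '&');   // decode entities before assigning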
Hope this helps
Updated JSFiddle:
http://jsfiddle.net/qQsCC/1/