I'm creating an Electron application that is designed to take untrusted code and attempt to execute it in an iframe (the code is in a custom format which specifies the composition of "modules" the application is designed to use). I am struggling to find a solution where I can use the iframe to sandbox the untrusted code while still allowing it to access some, but not all, of the application's files. For example, I have jQuery in the app source code, as well as TinyMCE. It would be useful to allow the iframe access to these tools.
I can use <iframe src="Sandbox/iframecontent.html" sandbox="allow-scripts allow-same-origin"></iframe>, but then I cannot prevent the iframe from accessing just about any file it might want, and by extension the same problem exists for the entire renderer process. For example, say I have a directory structure like this:
Projects/
    sneakysecrets.js
    ElectronProject/
        index.js        (main process)
        page.html       (host for the iframe)
        Sandbox/
            iframecontent.html
inside sneakysecrets.js:
console.log('escape detected');
and inside the body of iframecontent.html:
<script src="../../sneakysecrets.js"></script>
If I set the iframe element in page.html like this (with allow-same-origin):
<iframe src="Sandbox/iframecontent.html" sandbox="allow-scripts allow-same-origin"></iframe>
then the iframe has access to everything inside the application directory. And, I assumed, nothing more. But interestingly, in the console I see "escape detected"! How do I limit the access any renderer process has to files within (and outside of!) the project?
If I remove allow-same-origin from the iframe, it cannot access the tools in the application that it needs.
My ideal goal is to set up Electron to treat a certain subdirectory in the project as the "server" and prevent navigation outside of it. But despite my efforts I cannot find any documentation about limiting filesystem access in this case. So either I'm missing something, or this is just an inherent vulnerability that I cannot avoid and I should adjust my approach.
I never thought to ask this, but I work on a security product and so we implement pretty strict protection against XSS:
We disallow < and > in user input both server- and client-side
If the user does manage to make a request containing either of those characters, the server will disable their account and leave a warning for an admin
Angular also sanitizes interpolated content before injecting it into the DOM
This is all well and good, except that it hurts UX and it's bad for performance. Surely, SURELY, there is a way to just tell the browser NOT to execute <script> tags added after the initial document parse, right? We use a modern bundled workflow, and any lazy loading of JavaScript is done via import("/some/js/module") calls, which get rebased by the bundler but are never fed a dynamic value at runtime.
Even if there isn't a way to straight-up tell the browser not to run dynamically added (by JS after page load) <script> tags, is there a tried-and-true workflow for rendering, say, markdown + an HTML subset of user-produced content in iframes? I am familiar with iframes at a high level, but can the parent document manipulate the DOM content of the iframe (or something similar) so that even if a <script> tag does get added inside the iframe, the script will not have access to the parent document's JS environment?
Actually that would be cool as a sandboxed way to display user content because they could intentionally include a script and make a little interactive widget for other users to mess with, in theory (maybe an antifeature in practice).
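Edit: for the record, a minimal sketch of what I have in mind (the srcdoc content is just a placeholder):

```html
<!-- Sandboxed iframe sketch: allow-scripts lets a user widget run,
     but without allow-same-origin the framed content gets an opaque
     origin, so its scripts cannot reach the parent document, its
     cookies, or its storage. -->
<iframe sandbox="allow-scripts"
        srcdoc="<p>rendered user content here</p>">
</iframe>
```

If widgets aren't wanted at all, dropping allow-scripts as well makes the frame fully inert.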
You can do it with CSP (Content Security Policy):
https://developers.google.com/web/fundamentals/security/csp#inline-code-considered-harmful
Example: allow only
<script nonce="EDNnf03nceIOfn39fn3e9h3sdfa">
// Some inline code I can't remove yet, but need to asap.
</script>
by serving the header
Content-Security-Policy: script-src 'nonce-EDNnf03nceIOfn39fn3e9h3sdfa'
Start by blocking everything with:
default-src 'none'
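Putting the two together, a locked-down policy might look like this (one header line; the style-src and img-src parts are just illustrative additions, and the nonce must be unpredictable and regenerated on every response):

```
Content-Security-Policy: default-src 'none'; script-src 'nonce-EDNnf03nceIOfn39fn3e9h3sdfa'; style-src 'self'; img-src 'self'
```

With default-src 'none' in place, any dynamically injected <script> without the current nonce is simply refused by the browser.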
So I'm trying to follow the instructions for a firefox extension using WebExtensions and I want to attach a content script to certain pages (as discussed here: https://developer.mozilla.org/en-US/Add-ons/WebExtensions/Modify_a_web_page ).
The problem is that I don't want to specify the set of pages the content script should run on in manifest.json when I write the extension, but rather load it from local storage; i.e., I have an options page set up where the user can specify the pages the content script should run on. Is it possible to dynamically change the list of pages to be modified that is normally set using the content_scripts key in manifest.json?
Thanks
No, there is no way to modify the URLs into which injection will occur for a manifest.json content_scripts entry (either CSS or JavaScript). The specified code will be injected into all matching URLs. This is for multiple reasons.
One of the reasons this is not possible is security/transparency to the user. The manifest.json explicitly declares which URLs your content script will be modifying, states that it will be modifying the active tab, or that it will have access to all URLs/tabs. If you were permitted to change the URLs, then you would effectively be able to access all URLs without explicitly declaring that you are doing so.
Yes, it would be possible to have a way to declare that you are going to do so. Chrome has an experimental API for this, chrome.declarativeContent. In Chrome it is still considered experimental, even after being available for a couple of years, and it is not available in Firefox. Whether, or when, it will become available in Firefox is unclear. In addition, even in Chrome it lacks some features available to other methods of injecting scripts (e.g. run_at/runAt).
In order to have complete control over injecting, or not injecting, you will need to perform the injection via tabs.insertCSS() and/or tabs.executeScript(). Injecting, or not injecting, scripts and CSS with these methods is fully under the control of your extension's JavaScript. You can get functionality similar to manifest.json content_scripts entries with these methods, but with more control. This greater control comes at the expense of greater complexity.
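A rough sketch of that pattern (the storage key "pages" and the matching rule are my own assumptions): keep the user's list in extension storage, and decide in plain JavaScript whether to inject when a tab finishes loading.

```javascript
// Decide whether a URL's host matches one of the user-configured
// host suffixes (e.g. ['example.com']). Plain JavaScript, so it can
// live in the background script; the extension wiring is in comments.
function matchesAnyHost(urlString, hostSuffixes) {
  try {
    const host = new URL(urlString).hostname;
    return hostSuffixes.some(
      (suffix) => host === suffix || host.endsWith('.' + suffix)
    );
  } catch (e) {
    return false; // not a parseable URL (about:, chrome:, etc.)
  }
}

// Background script wiring (untested sketch):
// browser.tabs.onUpdated.addListener(async (tabId, changeInfo, tab) => {
//   const { pages = [] } = await browser.storage.local.get('pages');
//   if (changeInfo.status === 'complete' && matchesAnyHost(tab.url, pages)) {
//     browser.tabs.executeScript(tabId, { file: 'content.js' });
//   }
// });
```

Note this requires broad host permissions in the manifest, which is exactly the transparency trade-off described above.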
How about this? There are several webNavigation events you can listen to; pick whichever satisfies your requirements:
onBeforeNavigate
onCommitted
onDOMContentLoaded
onCompleted
The documentation is here. And this is an example of how to set a red background on subdomains of example.com. Of course, you can build the list of URL filters dynamically; this is just a PoC:
browser.webNavigation.onCommitted.addListener(
  function (details) {
    browser.tabs.insertCSS(details.tabId, { code: 'body { background: red; }' });
  },
  { url: [{ hostSuffix: '.example.com' }] }
);
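Since the question was about loading the list from storage, the filter argument itself can be built at runtime. A small sketch (the storage key "hosts" is an assumption):

```javascript
// Build the webNavigation URL-filter argument from a stored list of
// host suffixes, e.g. ['example.com', 'example.org'].
function buildUrlFilter(hostSuffixes) {
  return { url: hostSuffixes.map((h) => ({ hostSuffix: '.' + h })) };
}

// Wiring sketch in the background script (untested):
// const { hosts = [] } = await browser.storage.local.get('hosts');
// browser.webNavigation.onCommitted.addListener(handler, buildUrlFilter(hosts));
```

One caveat: the filter is fixed when addListener is called, so if the user edits the list you need to remove and re-add the listener with a freshly built filter.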
Could the Leiningen plug-in figwheel, or boot's counterpart, be used within arbitrary web pages? I'm thinking of it as a replacement for the browser's built-in developer console.
Here is a simple scenario of how I'd imagine this workflow:
You open an arbitrary website in the browser. Besides that, you have a browser REPL inside a terminal window, provided by one of the tools mentioned above. (I guess they both use 'weasel' for this.)
Inside the terminal one could access the current state of the webpage's DOM.
E.g.: (set! (.. js/document -body -style -backgroundColor) "green")
I guess this should not be too problematic to achieve. However, I faced the following problems:
Both tools actually just inject a bunch of JavaScript into the user's HTML page. It's basically the user's ClojureScript compiled to JavaScript, plus an additional implementation of the hot-reloading mechanism via WebSockets. The latter is simply omitted when the project goes into production.
My idea was to just inject the whole bundle into another page.
I tried it with boot.
After setting up boot's ClojureScript REPL, I opened localhost:port in a browser. Its initial source looks like this:
<!doctype html>
<html>
<head>
<title>Hello, World!</title>
</head>
<body>
<script src="js/main.js"></script>
</body>
</html>
After main.js has been executed on page load, many (more than 100) further JavaScript script tags are injected into the page. My initial idea was to just open another page now, say duckduckgo.com, and inject that one script tag into it, augmented with an absolute path to localhost.
So, at the duckduckgo.com page, inside the developer console I did this:
tag = document.createElement("script");
tag.src = "http://localhost:3000/js/main.js";
document.body.appendChild(tag);
As expected, the script gets injected, which immediately executes its code. I was expecting that all the other script tags would then be injected automatically, and finally the WebSockets would connect to the ClojureScript REPL.
However, there's the following error in the browser console: A call to document.write() from an asynchronously-loaded external script was ignored.
Indeed, many of the further script tags have been injected, but not all of them. Effectively, the socket connection is not established.
So, it looks like some script tags are injected by the mechanism I used myself (via appendChild), while others are added via document.write("<script...). The latter causes the problem here.
Does anybody know a way to achieve this?
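Edit: one workaround I'm considering (my own sketch, untested against boot/figwheel): override document.write before injecting main.js, so that the written <script> tags are parsed out and appended instead of being ignored. Extracting the src attributes is plain JavaScript; the console part is in comments.

```javascript
// Pull the src attributes out of <script> tags in a string that was
// passed to document.write.
function extractScriptSrcs(html) {
  const srcs = [];
  const re = /<script[^>]*\bsrc=["']([^"']+)["'][^>]*>/gi;
  let m;
  while ((m = re.exec(html)) !== null) srcs.push(m[1]);
  return srcs;
}

// In the browser console, before injecting main.js (assumption):
// document.write = function (html) {
//   for (const src of extractScriptSrcs(html)) {
//     const s = document.createElement('script');
//     s.src = new URL(src, 'http://localhost:3000/').href; // rebase to the REPL host
//     document.body.appendChild(s);
//   }
// };
```

One caveat: scripts appended this way load asynchronously, so the strict ordering document.write normally guarantees is lost, which may or may not break the bundle.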
I'm new to Chrome extensions. I'm writing a little plug-in that zooms in on a page when the user presses a button (I'm very new at this). However, it won't run unless I allow unsafe scripts, and it won't carry over to new pages, ostensibly because of the unsafe scripts. All I'm doing is zooming.
What I really want to know is, if it is not asking for information or directly accessing their computer, what makes a script unsafe?
There are three things that make a script unsafe in a Chrome extension:
Inline JavaScript
It's a common beginner mistake (I have made it myself). You can't put JavaScript statements inline. For example, you can't handle an event this way:
<img src="myImage.jpg" onclick="doSomething()">
The correct way is to define an id for your DOM element (the image in my example) and to set the event handler in a separate JavaScript file:
page.html:
<img src="myImage.jpg" id="myImage">
<script src="script.js"></script>
script.js:
// In vanilla JavaScript:
document.getElementById("myImage").addEventListener("click", doSomething);
// In jQuery:
$("#myImage").on("click", doSomething);
Eval and related functions
All functions that can evaluate a string as JavaScript on the fly are unsafe.
So eval is not allowed, and neither are equivalents such as new Function("return something.value");
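For instance, if eval was only being used to parse data, JSON.parse is the CSP-safe replacement (a minimal sketch; the jsonText value is just an example):

```javascript
// Unsafe, rejected by the extension CSP:
// const config = eval('(' + jsonText + ')');

// Safe equivalent for data: JSON.parse never executes code.
const jsonText = '{"zoomLevel": 1.5}';
const config = JSON.parse(jsonText);
console.log(config.zoomLevel); // 1.5
```

If you genuinely need to run generated code, it has to be moved into a sandboxed page; the CSP on normal extension pages will not allow it.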
Remote scripts
Only local scripts are safe. If you are using, for example, jQuery, you have to include the library in your extension. Loading an external library via a CDN link is considered unsafe.
This is a quick overview; you can read more about this, and find the explanations of these restrictions, in the Google Chrome extension Content Security Policy documentation.
Another thing to consider is how you're sourcing your files.
For example, if you source a file using http://, but access the site using https://, you will get an unsafe scripts error.
This is the scenario:
I'm working on a new ASP.NET application that uses master pages for virtually all of its web pages, a few of them nested up to 4 levels. Due to the size of the project, I was forced to organize the web pages into folders at various depth levels. The global Master page (in uppercase), located at the root directory, contains some useful JavaScript functions that are used all over the web app, and I wanted to place these functions together in a single .js file, in order to keep things, well, in order :D (they're currently embedded in the Master page).
I discovered recently, however, that <script> tags placed in the <head> block can't have their paths specified as, for example, "~/Scripts/Utils.js", since ASP.NET does not seem to recognize these paths on <script> tags (yet, strangely enough, it does recognize them on <link> tags). I'm trying to avoid inserting a <script> tag on every single web page of the site with a relative path to the .js file (that's sort of why I wanted to use master pages in the first place).
So, in essence, given this scenario, I want to ask you: what's the recommended way of inserting, via code, <script> tags into the <head> block of a web page, so that I can use something like Something.Something(Something, Page.ResolveURL("~/Scripts/Utils.js")); in the global Master page and have it resolve to the right path on all the web pages of the application, no matter what directory they are in?
Or is this not the right approach, and I should be using something else entirely?
You can use the ClientScriptManager:
Page.ClientScript.RegisterClientScriptInclude("MyScript", ResolveUrl("~/Scripts/MyScript.js"));
The first argument is a unique key representing the script file; this stops subsequent scripts from being registered with the same key. E.g., you may have some shared code that does the same thing and could be executed multiple times; by specifying a key, you ensure the script is only registered once.