In-Browser JavaScript Editor and Execution - javascript

I am developing an Enyo web application and would like to allow users to write their JavaScript code in the browser and execute it.
I can do this by using window.eval. However, I have read about the evils of eval.
Could anyone shed some light on how examples like http://learn.knockoutjs.com/, http://jsfiddle.net, etc. do in-browser execution safely, and what the best practices are?

Eval is considered evil for all but one specific case, which is yours: generating programs during runtime (i.e. metaprogramming). The only alternative would be to write your own parser/interpreter (which can be done relatively easily in JavaScript, though rather for a simpler language than JavaScript itself - I did it and it was fun). So using the eval() function here is legitimate; in fact, to build a browser-side compiler producing reasonably fast code, you need eval to run the generated JavaScript anyway.
However, the problem with eval is security: evaluated code has the same privileges and access to its environment as the script that runs it. This topic has been quite hot recently, and EcmaScript 5 was designed to partially address the issue by introducing strict mode, because strict-mode code can be statically analyzed for dangerous operations.
This is usually not enough (or is problematic for backward-compatibility reasons), so there are approaches like Caja that address security by analyzing the code on a server and allowing only a strict, safe subset of JavaScript to be used.
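To illustrate the strict-mode point: direct eval in strict mode gets its own variable scope, so the evaluated code can no longer inject bindings into the caller. A small sketch:

function sloppyCaller() {
    eval('var leaked = 1'); // sloppy mode: "leaked" lands in the caller's scope
    return typeof leaked;   // "number"
}

function strictCaller() {
    'use strict';
    eval('var contained = 1'); // strict mode: eval gets its own scope
    return typeof contained;   // "undefined" - nothing leaked out
}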
Another often-used approach protects the user (though without protecting against malicious attacks) by running the user-generated JavaScript in an <iframe> element embedded in the parent page (this is what sites like jsfiddle usually do). But it is not secure by itself, since the iframe can access its parent page and get at its content.
Even for this iframe approach there has been some progress recently, e.g. in Chrome, making it less vulnerable through the sandbox attribute:
<iframe src="sandboxedpage.html" sandbox="allow-scripts"></iframe>
where you can even specify different privileges.
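For example, such a frame can also be created from script; a minimal sketch (the token list here is illustrative):

// allow-scripts lets the framed page run JavaScript; because
// allow-same-origin is NOT in the list, the frame gets a unique opaque
// origin and cannot touch the parent's DOM, cookies or storage.
var frame = document.createElement('iframe');
frame.setAttribute('sandbox', 'allow-scripts allow-forms');
frame.src = 'sandboxedpage.html'; // the same placeholder page as above
document.body.appendChild(frame);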
Hopefully, we will have an easy way to use safe and easy metaprogramming soon, but we are not there yet.

Related

Are evals in JavaScript really a security risk, when the code is available anyway? [duplicate]

This question was closed as a duplicate of "Exploiting JavaScript's eval() method".
I ended up looking at this question as I was using evals in my current piece of code.
Why is using the JavaScript eval function a bad idea?
When you have JavaScript code in the browser, you download the JavaScript as part of the HTML or as a separate file. The source code is there for anyone to look at and modify. I don't see how injection attacks via an eval() could be any worse than hacking at the source code and altering that to do what the attacker wants.
Can someone explain what I am missing? Or give some scenario where an eval is dangerous that couldn't (easily) be achieved by altering the source code.
In the case of Javascript injection attacks, you're not worried about the browser user providing untrusted code. You're worried about code from other places. For instance, if you download data from a third-party site, and eval it, this code will be executed in the context of the user's page, and may be able to do bad things with the user's data. So you're trusting that third-party not to send you nefarious JS.
Of course, we often do this routinely -- many of us link to the Google or Microsoft CDNs to get jQuery and other libraries. These are well-known sites, and we choose to trust them to get the performance benefits. But as the sites become less trustworthy, you have to be more careful, and not just execute whatever they send you blindly.
To some extent, cross-site AJAX rules limit the damage that this third-party code can do. These browser changes were put in place precisely because XSS attacks were being performed, and sending user private data to the attackers. But eval still allows for some types of malware, so you have to be careful in using it.
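As a concrete illustration of the data-from-elsewhere problem, a minimal sketch (the response content and the attacker function are made up):

// Suppose responseText arrived from a third-party server.
var responseText = '{"user": "bob"}';

// DON'T: eval executes whatever the server sent. Had the response been
// '({}); sendToAttacker(document.cookie)', that code would run right here.
var risky = eval('(' + responseText + ')');

// DO: JSON.parse accepts only data, never code, and throws on anything else.
var safe = JSON.parse(responseText);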
Even in a situation where you're certain that the string you're going to evaluate is trusted and safe, it's not always worth going with eval.
For example, if you'll ever want your webpage to be available on Mozilla Firefox OS as an application, eval will break it, as it's banned. See this page for details.
Similarly, simple use of eval will not work in Google Chrome Extensions, as per this doc.
And if you're not 100% positive on safety of the string you want to evaluate, you should avoid eval entirely.
Let's start from the potato example in the question you linked:
eval('document.' + potato + '.style.color = "red"');
Let's suppose you have an input field called potato on your site, where you ask users to choose "body" or "forms[0]", and you use this input with the code above to change the color of the body or the first form on the pages you deliver to other users (storing the value in a DB, for example).
Now suppose a moderately evil user puts this inside potato:
"title;alert('test');({style:{color:1}})"
Then what happens is an alert. But it could be worse, like an AJAX call to a server, sending it the confidential content of the page.
As you can see, it's about the same kind of problem as SQL injection.
Of course, you have to be very careless to do this, that is, to take user-supplied strings and put them into eval'd strings on other users' machines, on pages with sensitive content. That's why I argue that "eval is evil" is mostly annoying FUD.
But another important point is that eval is slow and most of the time useless, as you usually have better, cleaner, more maintainable solutions.
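For instance, the potato example above needs no eval at all. A minimal sketch (the whitelist is made up to match the two choices from the question):

// Map the only supported user choices to real elements; anything else fails.
function getTarget(potato) {
    if (potato === 'body') return document.body;
    if (potato === 'forms[0]') return document.forms[0];
    return null;
}

function setColor(potato, color) {
    var target = getTarget(potato);
    if (!target) throw new Error('Unsupported target: ' + potato);
    target.style.color = color;
}

setColor('body', 'red'); // same effect as the eval version, no injection risk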
Eval isn't evil, but it's generally bad practice.

Does JavaScript "fake privacy" pose a security risk?

JavaScript doesn't let you give private data or methods to objects, like you can in C++. Oh, well, actually it does, via some workarounds involving closures. But coming from a Python background, I am inclined to believe that "pretend privacy" (via naming conventions and documentation) is good enough, or maybe even preferable to "enforced privacy" (enforced by JavaScript itself). Sure, I can think of situations where this is not true -- e.g. people interface with my code without RTFM but I get blamed -- but I'm not in that situation.
But, something gives me pause. Javascript guru Douglas Crockford, in "Javascript: The Good Parts" and elsewhere, repeatedly refers to fake-privacy as a "security" issue. For example, "an attacker can easily access the fields directly and replace the methods with his own".
I'm confused by this. It seems to me that if I follow minimal security practices (validate, don't blindly trust, data sent from a browser to my server; don't include third-party scripts on my site without inspecting them) then there is no situation where pretend-privacy is less "secure" than enforced privacy. Is that right? If not, what's a situation where pretend-privacy versus enforced-privacy has security implications?
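For reference, the two styles under discussion might look like this (a small sketch with made-up names):

// "Pretend privacy": a naming convention only -- nothing stops access.
function CounterConvention() {
    this._count = 0; // leading underscore means "please don't touch"
}
CounterConvention.prototype.increment = function () { return ++this._count; };

// "Enforced privacy": a closure -- the variable is genuinely unreachable.
function makeCounter() {
    var count = 0; // no outside code can read or replace this
    return {
        increment: function () { return ++count; }
    };
}

var a = new CounterConvention();
a._count = 999; // possible: the convention was violated, nothing enforced it

var b = makeCounter();
b.increment(); // the only handle b exposes on its internal count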
Not in itself. However, it does mean you cannot safely load untrusted JavaScript code into your HTML documents, as Crockford points out. If you really need to run such untrusted JavaScript code in the browser (e.g. for user-submitted widgets in social networking sites), consider iframe sandboxing.
As a Web developer, your security problem is often that major Internet advertising brokers do not support (or even prohibit) framing their ad code. Unfortunately, you have to trust Google to not deliver malicious JavaScript, whether intentionally or unintentionally (e.g. they get hacked).
Here is a short description of iframe sandboxing I had posted as an answer to another question:
Set up a completely separate domain name (e.g. "exampleusercontent.com") exclusively for user-submitted HTML, CSS, and JavaScript. Do not allow this content to be loaded through your main domain name. Then embed the user content in your pages using iframes.
If you need tighter integration than simple framing, window.postMessage() may help, allowing scripts in different frames to communicate with each other in a controlled manner.
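A minimal sketch of that controlled communication (the origins and element id are placeholders):

// In the parent page on https://example.com, send to the sandboxed frame:
var frame = document.getElementById('userContent');
frame.contentWindow.postMessage({ action: 'render' }, 'https://exampleusercontent.com');

// In the framed page, always check event.origin before trusting a message:
window.addEventListener('message', function (event) {
    if (event.origin !== 'https://example.com') return; // ignore other senders
    // ... handle event.data here ...
}, false);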
It seems the answer is "No, fake privacy is fine". Here are some elaborations:
In JavaScript as it exists today, you cannot safely include an unknown, untrusted third-party script on your webpage. It can wreak havoc: it can rewrite all the HTML on the page, it can prompt the user for his password and then send it to an evil server, etc. JavaScript coding style makes no difference to this basic fact. See PleaseStand's answer for a discussion of methods to deal with this.
An incompetent but not evil script might unintentionally mess things up through name conflicts. This is a good argument against creating lots of global variables with common names, but has nothing to do with whether to avoid fake-private variables. For example, my banana-selling website might use the fake-private variable window.BANANA_STORE_MODULE.cart.__cart_item_array. It is not completely impossible that this variable would be accidentally overwritten by a third-party script, but it's extraordinarily unlikely.
There are ideas floating around for a future modification of javascript that would provide a controlled environment where untrusted code can act in prescribed ways. I could let the untrusted third-party javascript interact with my javascript through specific exposed methods, and block the third-party script from accessing the HTML, etc. If this ever exists, it could be a scenario where private variables are essential for security. But it doesn't exist yet.
Writing clear and bug-free code is always, obviously, helpful for security. Insofar as truly-private variables and methods make it easier or harder to write clear and bug-free code, there's a security implication. Whether they are helpful or not will always be a matter of debate and taste, and whether your background is, say, C++ (where private variables are central) versus Python (where private variables are nonexistent). There are arguments in both directions, including the famous blog post Javascript Private Variables are Evil.
For my part, I will keep using fake privacy: A leading underscore (or whatever) indicates to myself and my collaborators that some property or method is not part of the publicly-supported interface of a module. My fake-privacy code is more readable (IMO), and I have more freedom in structuring it (e.g. a closure cannot span two files), and I can access those fake-private variables while I debug and experiment. I'm not going to worry that these programs are somehow more insecure than any other javascript program.

How can I sandbox untrusted user-submitted JavaScript content?

I need to serve user-submitted scripts on my site (sort of like jsfiddle). I want the scripts to run in visitors' browsers in a safe manner, isolated from the page they are served on. Since the code is submitted by users, there is no guarantee it is trustworthy.
Right now I can think of three options:
Serve the user-submitted content in an iframe from a different domain, and rely on the same-origin policy. This would require setting up an additional domain, which I'd like to avoid if possible. I believe this is how jsfiddle does it. The script can still do some damage, changing top.location.href for example, which is less than ideal. http://jsfiddle.net/PzkUw/
Use the sandbox attribute. I suspect this is not well supported across browsers.
Sanitize the scripts before serving them. I would rather not go there.
Are there any other solutions, or recommendations on the above?
Update
If, as I suspect, the first option is the best solution, what can a malicious script do other than change the top window location, and how can I prevent this? I can manipulate or reject certain scripts based on static code analysis, but this is hard given the number of ways objects can be accessed and the difficulty of analysing JavaScript statically in general. At the very least, it would require a full-blown parser and a number of complex rules (some, but I suspect not all, of which are present in JSLint).
Create a well-defined message interface and use a JavaScript Web Worker for the code you want to sandbox: HTML5 Web Workers
Web Workers do not have access to the following DOM objects.
The window object
The document object
The parent object
So they can't redirect your page or alter data on it.
You can create a template and a well-defined messaging interface so that users can write Web Worker scripts, but your script has the final say on what gets manipulated.
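A minimal sketch of such a message interface (the file name, element id and message vocabulary are made up):

// main.js -- the host page keeps the final say over what gets applied.
var worker = new Worker('user-script.js'); // the user-submitted code

worker.onmessage = function (event) {
    var msg = event.data;
    // Honor only a small, whitelisted vocabulary of requests.
    if (msg && msg.type === 'setText' && typeof msg.value === 'string') {
        // textContent (not innerHTML), so the worker cannot inject markup
        document.getElementById('output').textContent = msg.value;
    }
};

worker.postMessage({ type: 'start', input: 42 });

// user-script.js -- runs with no access to window, document or parent.
self.onmessage = function (event) {
    var result = event.data.input * 2; // whatever the user's code computes
    self.postMessage({ type: 'setText', value: 'Result: ' + result });
};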
EDIT: A comment by Jordan Gray mentioned a JavaScript library that seems to do what I described above: https://github.com/eligrey/jsandbox
Some ideas of tools that could be helpful in your application - they attack the problem from two different directions: Caja compiles the untrusted JavaScript code to something that is safe, while AdSafe defines a subset of JavaScript that is safe to use.
Caja
The Caja Compiler is a tool for making third party HTML, CSS and JavaScript safe to embed in your website. It enables rich interaction between the embedding page and the embedded applications. Caja uses an object-capability security model to allow for a wide range of flexible security policies, so that your website can effectively control what embedded third party code can do with user data.
AdSafe
ADsafe makes it safe to put guest code (such as third party scripted advertising or widgets) on a web page. ADsafe defines a subset of JavaScript that is powerful enough to allow guest code to perform valuable interactions, while at the same time preventing malicious or accidental damage or intrusion. The ADsafe subset can be verified mechanically by tools like JSLint so that no human inspection is necessary to review guest code for safety. The ADsafe subset also enforces good coding practices, increasing the likelihood that guest code will run correctly.
As mentioned, the sandbox attribute of the iframe is already supported by major browsers, but I would additionally suggest a mixed solution: start a Web Worker inside the sandboxed iframe. That gives you a separate thread, and protects even the sandboxed iframe's DOM from the untrusted code. That is how my Jailed library works. Additionally, you may work around any restriction by exporting a chosen set of functions into the sandbox.
If you want to sandbox some piece of code by removing its access to, say, window, document and parent, you can achieve that by wrapping it in a closure where these are shadowed by empty local variables:
(function(window, document, parent /* whatever you want to remove */) {
    console.log(this);     // empty object
    console.log(window);   // undefined
    console.log(document); // undefined
    console.log(parent);   // undefined
}).call({});
Calling it with an empty object is important, because otherwise this will point to the window object.
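One caveat worth adding: this shadowing is obfuscation rather than a real security boundary, because the wrapped code can rebuild a reference to the global object, for example through the Function constructor:

(function (window, document, parent) {
    // Function() creates functions in the global scope, so in sloppy mode
    // "this" inside them is the real global object again.
    var realWindow = Function('return this')();
    console.log(realWindow.document.title); // full DOM access regained
}).call({});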

What is javascript injection and how it could be use in software testing?

What is javascript injection? Is it similar to SQL Injection?
How can I use javascript injection in software testing?
JS injection is running JavaScript from the client side, invoked by the client. You can do it in the browser console, as in Chrome. In testing it can be helpful because you can interact with live web apps without having to rewrite, recompile, and retest. It can also be quite useful in hacking, by altering web pages while you are on them, e.g. making a weak password-validation script always return true, granting you logon access. In Chrome, press Ctrl+Shift+J and go to the console. There you can play around with some JavaScript and see for yourself. Other browsers use the URL bar, like:
javascript:alert(some element = some val)
XSS is usually the attack to read up on when one talks about JavaScript injection. Basically, you load malicious JavaScript into a web page, which can later be used for phishing.
I don't think there are great JavaScript tools that can uncover XSS vulnerabilities. When it comes to security, it still takes a person (preferably a security expert) to design the testing, possibly with the help of tools.
While most of the people here refer to client-side JavaScript injection (a.k.a. cross-site scripting):
The expression "cross-site scripting" originally referred to the act of loading the attacked, third-party web application from an unrelated attack site, in a manner that executes a fragment of JavaScript prepared by the attacker in the security context of the targeted domain (a reflected or non-persistent XSS vulnerability).
Wikipedia
with the rise of NoSQL we have a new kind of injection -- server-side JavaScript injection (SSJS), which is in some sense very similar to SQL injection. Consider looking at this paper (PDF!) that describes both of them.
You could be referring to how you can open up any web page's JavaScript in a console like Firebug and overwrite the functions defined there. By doing that, and adding (or removing) code, you can output data that is supposed to be "encapsulated" in closures... it really can go much further than that, though.
In some browsers you can even do this in the URL bar, if you don't mind writing it all on a single line.
NOTE: and then there is cross-site scripting, which I totally forgot about until nonnb mentioned it.

Are DOM based XSS attacks still possible in modern browsers?

I am currently doing some research into XSS prevention but I am a bit confused about DOM based attacks. Most papers I have read about the subject give the example of injecting JavaScript through URL parameters to modify the DOM if the value is rendered in the page by JavaScript instead of server-side code.
However, it appears that all modern browsers encode all special characters given through URL parameters if rendered by JavaScript.
Does this mean DOM based XSS attacks cannot be performed unless against older browsers such as IE6?
They are absolutely possible. If you don't filter output that originated from your users, that output can be anything, including scripts. The browser has no way to know whether it is a legitimate script controlled by you or not.
It's not a matter of modern browsers; it's the basic principle that the browser treats all content that comes from your domain as legitimate to execute.
There are other aspects that are indeed blocked (sometimes, not always) by modern browsers (although security flaws always exist), like cross-domain scripting, third-party access to resources, etc.
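To make this concrete, here is the classic DOM-based sink and its fix (a sketch; the element id is made up):

// Vulnerable: the URL fragment is attacker-controlled, and assigning it to
// innerHTML turns it into live markup, e.g. via a link like
//   https://example.com/page#<img src=x onerror=alert(1)>
document.getElementById('greeting').innerHTML =
    decodeURIComponent(location.hash.slice(1));

// Safe: textContent treats the very same input as inert text, never markup.
document.getElementById('greeting').textContent =
    decodeURIComponent(location.hash.slice(1));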
Forget about those old-school XSS examples from 10 years ago. Programmers who wrote JavaScript that rendered the page from unescaped query params have either been fired or moved to frameworks like Angular/Backbone a long time ago.
However, reflected/stored XSS still widely exists. Defending against it requires proper escaping on both the server side and the client side. Modern frameworks all provide good support for escaping sensitive characters when rendering HTML. For example, when rendering views from model data, Angular has the $sce (strict contextual escaping) service (https://docs.angularjs.org/api/ng/service/$sce) to address possible XSS threats. Backbone models also have methods like "model.escape(attribute)" (http://backbonejs.org/#Model-escape) to eliminate XSS threats.
