I want to allow user contributed Javascript in areas of my website.
1. Is this completely insane?
2. Are there any Javascript sanitizer scripts or good regex patterns out there to scan for alerts, iframes, remote script includes and other malicious Javascript?
3. Should this process be manually authorized (by a human checking the Javascript)?
4. Would it be more sensible to allow users to only use a framework (like jQuery) rather than giving them access to actual Javascript? This way it might be easier to monitor.
Thanks
I think the correct answer is 1.
As soon as you allow Javascript, you open yourself and your users to all kinds of issues. There is no perfect way to clean Javascript, and people like the Troll Army will take it as their personal mission to mess you up.
1. Is this completely insane?
I don't think so, but it's close. Let's see.
2. Are there any Javascript sanitizer scripts or good regex patterns out there to scan for alerts, iframes, remote script includes and other malicious Javascript?
Yes, at least there are Google Caja and ADSafe to sanitize the code, allowing it to be sandboxed. I don't know what degree of trustworthiness they provide, though.
3. Should this process be manually authorized (by a human checking the Javascript)?
The sandbox may fail, so manual review would be a sensible addition, depending on the risk and the trade-off of being attacked by malicious (or faulty) code.
4. Would it be more sensible to allow users to only use a framework (like jQuery) rather than giving them access to actual Javascript? This way it might be easier to monitor.
jQuery is just plain JavaScript, so if you're trying to protect against attacks, it won't help at all.
If it is crucial to prevent these kinds of attacks, you can implement a custom language, parse it on the backend and produce controlled, safe JavaScript; or you may consider another strategy, like providing an API and accessing it from a third-party component of your app.
Take a look at Google Caja:
Caja allows websites to safely embed DHTML web applications from third parties, and enables rich interaction between the embedding page and the embedded applications. It uses an object-capability security model to allow for a wide range of flexible security policies, so that the containing page can effectively control the embedded applications' use of user data and to allow gadgets to prevent interference between gadgets' UI elements.
Instead of checking for evil things like script includes, I would go for regex-based whitelisting of the few commands you expect to be used. Then involve a human to authorize and add new acceptable commands to the whitelist.
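For what it's worth, a minimal sketch of the whitelist idea is below. The allowed calls are purely hypothetical, and, as other answers here point out, regex filtering on its own is easy to bypass:

// Sketch only: the "commands" in this whitelist are made-up examples.
const ALLOWED_CALLS = /^(?:canvas\.drawRect|canvas\.drawCircle|sprite\.move)\(/;

function isWhitelisted(userScript) {
  // Accept only scripts in which every non-empty line is a single allowed call.
  return userScript
    .split('\n')
    .map(line => line.trim())
    .filter(line => line.length > 0)
    .every(line => ALLOWED_CALLS.test(line));
}

// Anything that fails the check goes to a human reviewer, who can then extend
// the whitelist with newly approved commands.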
Think about all of the things YOU can do with JavaScript. Then think about the things you would do if you could do them on someone else's site. These are things that people will do just because they can, or to find out if they can. I don't think it is a good idea at all.
It might be safer to design/implement your own restricted scripting language, which can be very similar to JavaScript, but which is under the control of your own interpreter.
1. Probably. The scope for doing bad things is going to be much greater than it is when you simply allow HTML but try to avoid allowing JavaScript.
2. I do not know.
3. Well, two things: do you really want to spend your time doing this, and if you do, you had better make sure reviewers see the JavaScript code rather than actual live JavaScript!
4. I can't see why this would make any difference, unless you do have someone approving posts and that person happens to be more at home with jQuery than plain JavaScript.
Host it on a different domain. Same-origin security policy in browsers will then prevent user-submitted JS from attacking your site.
It's not enough to host it on a different subdomain, because subdomains can set cookies on the higher-level domain, and this could be used for session fixation attacks.
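To illustrate the subdomain problem, a script running on a subdomain can do something like this (the cookie name and domain are placeholders):

// Running on usercontent.example.com: set a cookie for the parent domain,
// which example.com will then receive on every request.
document.cookie = "sessionid=attacker-chosen-value; domain=.example.com; path=/";
// If example.com blindly trusts whatever session id the browser presents, the
// attacker has fixed the victim's session to a value the attacker already knows.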
Today, I was starting to build a trading tool. Luckily, TradingView.com gives you free access, so you can legally embed charts on your website via iframes.
I planned to select a stock symbol within the iframe (see picture: left corner "BTCUSDT") and read the chosen symbol name via javascript. For example, when I choose "BTCUSDT" (or Bitcoin) I want to fetch this value, so I know which cryptocurrency I want to order by using another API service. I found out that "for security reasons" this is not possible.
However, in the picture you can see that iframe elements can be easily accessed by hand. So, why can't we read them out from javascript as well? What kind of (effective) security breach would that be?
Well... I can understand that some people might use this for evil phishing purposes, but will blocking it effectively stop them? They could use a proxy or some other workaround. On the other hand, for a tool that is meant to be embedded cross-site, merely reading a simple value becomes more complicated than it should be.
Python has a beautiful library called "BeautifulSoup". Here you can enter a URL, and it reads all the DOM elements from the website. I don't understand how this is possible in Python but restricted in JavaScript.
I have found no reasonable answer or solution for this kind of scenario. If this restriction is meant to provide security, there are many other ways to read those values without relying on JavaScript. So why restrict it?
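For reference, this is roughly what the browser blocks; the selector below is a made-up placeholder:

// Reading into a cross-origin iframe throws a SecurityError, while merely
// embedding and displaying it is allowed.
const frame = document.querySelector('iframe'); // e.g. the TradingView widget
try {
  // Works only if the frame is same-origin; throws for a cross-origin frame.
  const symbol = frame.contentWindow.document.querySelector('.symbol-name');
  console.log(symbol && symbol.textContent);
} catch (err) {
  console.log('Blocked by the same-origin policy:', err.name);
}
// A server-side scraper like BeautifulSoup is different: it downloads the page
// with its own HTTP request, carrying none of the user's cookies or logged-in
// state, so there is nothing private of yours for it to read.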
Javascript doesn't let you give private data or methods to objects, like you can in C++. Oh, well actually, yes it does, via some workarounds involving closure. But coming from a Python background, I am inclined to believe that "pretend privacy" (via naming conventions and documentation) is good enough, or maybe even preferable to "enforced privacy" (enforced by Javascript itself). Sure, I can think of situations where this is not true -- e.g. people interface with my code without RTFM but I get blamed -- but I'm not in that situation.
But, something gives me pause. Javascript guru Douglas Crockford, in "Javascript: The Good Parts" and elsewhere, repeatedly refers to fake-privacy as a "security" issue. For example, "an attacker can easily access the fields directly and replace the methods with his own".
I'm confused by this. It seems to me that if I follow minimal security practices (validate, don't blindly trust, data sent from a browser to my server; don't include third-party scripts on my site without inspecting them) then there is no situation where pretend-privacy is less "secure" than enforced privacy. Is that right? If not, what's a situation where pretend-privacy versus enforced-privacy has security implications?
Not in itself. However, it does mean you cannot safely load untrusted JavaScript code into your HTML documents, as Crockford points out. If you really need to run such untrusted JavaScript code in the browser (e.g. for user-submitted widgets in social networking sites), consider iframe sandboxing.
As a Web developer, your security problem is often that major Internet advertising brokers do not support (or even prohibit) framing their ad code. Unfortunately, you have to trust Google to not deliver malicious JavaScript, whether intentionally or unintentionally (e.g. they get hacked).
Here is a short description of iframe sandboxing I had posted as an answer to another question:
Set up a completely separate domain name (e.g. "exampleusercontent.com") exclusively for user-submitted HTML, CSS, and JavaScript. Do not allow this content to be loaded through your main domain name. Then embed the user content in your pages using iframes.
If you need tighter integration than simple framing, window.postMessage() may help, allowing scripts in different frames to communicate with each other in a controlled manner.
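A minimal sketch of the postMessage approach, with placeholder domain names and message types:

// On the parent page (www.example.com), talking to a frame hosted on the
// separate user-content domain.
const userFrame = document.getElementById('user-widget');

userFrame.addEventListener('load', () => {
  // Address the message to the expected origin only.
  userFrame.contentWindow.postMessage({ type: 'init', theme: 'dark' },
                                      'https://exampleusercontent.com');
});

window.addEventListener('message', (event) => {
  // Accept messages only from the sandbox domain, and only known message types.
  if (event.origin !== 'https://exampleusercontent.com') return;
  if (event.data && event.data.type === 'resize') {
    userFrame.style.height = Number(event.data.height) + 'px';
  }
});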
It seems the answer is "No, fake privacy is fine". Here are some elaborations:
In JavaScript as it exists today, you cannot safely include an unknown and untrusted third-party script on your webpage. It can wreak havoc: it can rewrite all the HTML on the page, it can prompt the user for his password and then send it to an evil server, etc. etc. JavaScript coding style makes no difference to this basic fact. See PleaseStand's answer for a discussion of methods to deal with this.
An incompetent but not evil script might unintentionally mess things up through name conflicts. This is a good argument against creating lots of global variables with common names, but has nothing to do with whether to avoid fake-private variables. For example, my banana-selling website might use the fake-private variable window.BANANA_STORE_MODULE.cart.__cart_item_array. It is not completely impossible that this variable would be accidentally overwritten by a third-party script, but it's extraordinarily unlikely.
There are ideas floating around for a future modification of javascript that would provide a controlled environment where untrusted code can act in prescribed ways. I could let the untrusted third-party javascript interact with my javascript through specific exposed methods, and block the third-party script from accessing the HTML, etc. If this ever exists, it could be a scenario where private variables are essential for security. But it doesn't exist yet.
Writing clear and bug-free code is always, obviously, helpful for security. Insofar as truly-private variables and methods make it easier or harder to write clear and bug-free code, there's a security implication. Whether they are helpful or not will always be a matter of debate and taste, and whether your background is, say, C++ (where private variables are central) versus Python (where private variables are nonexistent). There are arguments in both directions, including the famous blog post Javascript Private Variables are Evil.
For my part, I will keep using fake privacy: A leading underscore (or whatever) indicates to myself and my collaborators that some property or method is not part of the publicly-supported interface of a module. My fake-privacy code is more readable (IMO), and I have more freedom in structuring it (e.g. a closure cannot span two files), and I can access those fake-private variables while I debug and experiment. I'm not going to worry that these programs are somehow more insecure than any other javascript program.
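For concreteness, here is a small sketch of the two styles being compared (names are illustrative):

// "Pretend" privacy: the leading underscore is only a convention.
const cart = {
  _items: [],                        // documented as internal; nothing enforces it
  add(item) { this._items.push(item); },
  count() { return this._items.length; }
};

// "Enforced" privacy: the closure variable is unreachable from outside.
function makeCart() {
  const items = [];                  // truly private to this closure
  return {
    add(item) { items.push(item); },
    count() { return items.length; }
  };
}

// Either way, a malicious script already running on the page can simply replace
// cart.add or makeCart wholesale, which is why closures buy little real "security".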
A persistent follow-up of an admittedly similar question I had asked: What security restrictions should be implemented in allowing a user to upload a Javascript file that directs canvas animation?
I like to think I know JS decently enough, and I see common characters in all the XSS examples I've come across, which I am somewhat familiar with. What I'm lacking is good XSS examples that could bypass a securely sound, rationally programmed system. I want people to upload HTML5 canvas creations onto my site. Are there any sites like this yet? People seem to get scared about this all the time, but what if you just wanted to do it for fun, for yourself? If something happens to the server, then oh well, it's just an animation site, and information spreads around like wildfire anyway, so if anyone cares I'll tell them not to sign up.
If I allow a single textarea form field to act as an IDE, using JS, for my programming language written in JS, and do string replacing, filtering, and validation of the user's syntax before finally compiling it into JS to be echoed by PHP, how bad could it get for me to host that content? Please show me how you could bypass all of my combined considerations, taking the server side into account as well:
If JavaScript is disabled, preventing any POST from getting through, keeping constant track of user session.
Namespacing the Class, so they can only prefix their functions and methods with EXAMPLE.
Making instance
Storing my JS Framework in an external (immutable in the browser?) JS file, which needs to be at the top of the page for the single textarea field in the form to be accepted, as well as a server-generated key which must follow it. On the page that hosts the compiled user-uploaded canvas game/animation (1 per page ONLY), the server will verify the correct JS filename string before echoing the rest out.
No external script calls! String replacing on client and server.
Allowing ONLY alphanumeric characters, dashes and asterisks.
Removing alert, eval, window, XMLHttpRequest, prototyping, cookie, obvious stuff. No native JS reserved words or syntax.
Obfuscating and minifying another external JS file that helps to serve the IDE and recognize the programming language's uniquely named Canvas API methods.
When Window unloads, store the external JS code in to two dynamically generated form fields to be checked by the server in POST. All the original code will be cataloged in the DB thoroughly for filtering purposes.
Strict variable naming conventions ('example-square1-lengthPROPERTY', 'example-circle-spinMETHOD')
Copy/Paste Disabled, setInterval to constantly check if enabled by the user. If so, then trigger a block to the database, change window.location immediately and check the session ID through POST to confirm in case JS becomes disabled between that timeframe.
I mean, can I do it then? How can one do harm if they can't use HEX or ASCII and stuff like that?
I think there are a few other options.
Good places to go for real-life XSS tests, by the way, are the XSS Cheat Sheet and the HTML5 Security Cheatsheet (newer). The problem here, however, is that you want to allow JavaScript but disallow bad JavaScript. This is a different, and more complex, goal than the usual way of preventing XSS, which is to prevent all scripts.
Hosting on a separate domain
I've seen this referred to as an "iframe jail".
The goal with XSS attacks is to be able to run code in the same context as your site - that is, on the same domain. This is because the code will then be able to read and set cookies for that domain, initiate user actions or redress your design, redirect, and so forth.
If, however, you have two separate domains - one for your site, and another which only hosts the untrusted, user-uploaded content, then that content will be isolated from your main site. You could include it in an iframe, and yet it would have no access to the cookies from your site, no access to redress or alter the design or links outside its iframe, and no access to the scripting variables of your main window (since it is on a different domain).
It could, of course, set cookies as much as it likes, and even read back the ones that it set. But these would still be isolated from the cookies for your site. It would not be able to affect or read your main site's cookies. It could also include other code which could annoy/harass the user, such as pop-up windows, or could attempt to phish (you'd need to make it visually clear in your out-of-iframe UI that the content served is not part of your site). However, this is still sandboxed from your main site, while your own valuable payload - your session cookies and the integrity of your overarching page design and scripts - is preserved. It would carry no less, but no more, risk than any site on the internet that you could embed in an iframe.
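A minimal sketch of such an embed, with a placeholder user-content domain; the sandbox attribute adds a further restriction on top of the separate origin:

// Embed the untrusted, user-uploaded page from the isolated domain.
const jail = document.createElement('iframe');
jail.src = 'https://exampleusercontent.com/widgets/12345';
// allow-scripts lets the user's code run; omitting allow-same-origin keeps it
// from reading even its own origin's cookies and storage.
jail.setAttribute('sandbox', 'allow-scripts');
jail.width = '600';
jail.height = '400';
document.getElementById('widget-slot').appendChild(jail);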
Using a subset of Javascript
Subsets of Javascript have been proposed, which provide compartmentalisation for scripts - the ability to load untrusted code and have it not able to alter or access other code if you don't give it the scope to do so.
Look into things like Google CAJA - whose aim is to enable exactly the type of service that you've described:
Caja allows websites to safely embed DHTML web applications from third parties, and enables rich interaction between the embedding page and the embedded applications. It uses an object-capability security model to allow for a wide range of flexible security policies, so that the containing page can effectively control the embedded applications' use of user data and to allow gadgets to prevent interference between gadgets' UI elements.
One issue here is that people submitting code would have to program it using the CAJA API. It's still valid Javascript, but it won't have access to the browser DOM, as CAJA's API mediates access. This would make it difficult for your users to port some existing code. There is also a compilation phase. Since Javascript is not a secure language, there is no way to ensure code cannot access your DOM or other global variables without running it through a parser, so that's what CAJA does - it compiles it from Javascript input to Javascript output, enforcing its security model.
HTML Purifier consists of thousands of regular expressions that attempt to "purify" HTML into a safe subset that is immune to XSS. This project is bypassed every few months, because it isn't nearly complex enough to address the problem of XSS.
Do you understand the complexity of XSS?
Do you know that javascript can exist without letters or numbers?
Okay, the very first thing I would try is inserting a meta tag that changes the encoding to, I don't know, let's say UTF-7, which is rendered by IE. Within this UTF-7-encoded HTML there will be JavaScript. Did you think of that? Well, guess what: there are somewhere between a hundred thousand and a few million other vectors I didn't think of.
The XSS cheat sheet is so old my grandparents are immune to it. Here is a more up to date version.
(Oh, and by the way, you will be hacked, because what you are trying to do is fundamentally insecure.)
Question: What precautions should I take when I let clients add custom JS scripts to their pages?
IF you want more details:
I am working on a custom CMS-like project for a company. The CMS has a number of "groups" that each subscriber "owns", where they do their own thing.
The new requirement is that some groups want to add Google Analytics to see how they are doing. So I naturally added a column to the table and made code adjustments so that if there is data in that column, I just use the following line in the master page to write the script out:
ScriptManager.RegisterClientScriptBlock(Page, typeof(Page), "CustomJs", CustomJs, true);
It works just fine, only, It got me thinking...
It's really easy for someone with good knowledge of JavaScript to access cookies etc. Sure, each group is moderated and only the super admin can add this JavaScript, and sure, they wouldn't be silly enough to hack their own group. Each group has its own code, so it's not possible to hack other groups. BUT STILL...
I am not really comfortable letting users add their own JavaScript code.
I could monitor each group myself, but the groups are growing really quickly and I will hit a point when I will no longer be able to do that.
So, to sum it up: what precautions should I take to avoid any mishaps?
ps: did try to google, no convincing answers anywhere.
Instead of allowing the users to add their own JavaScript files, and given that the only requirement here is for Google Analytics, why not just let them put their analytics ID into the CMS and, if it's present, output the relevant Google Analytics code?
This way you fulfill the users' requirement and also avoid the need to protect against malicious scripting.
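A sketch of that idea, written in JavaScript for consistency with the rest of this page (the asker's stack is ASP.NET, and this assumes the legacy ga.js-style tracking snippet):

function analyticsSnippetFor(gaId) {
  // Classic Google Analytics IDs look like UA-1234567-1; reject anything else.
  if (!/^UA-\d{4,10}-\d{1,4}$/.test(gaId)) return '';
  return [
    "var _gaq = _gaq || [];",
    "_gaq.push(['_setAccount', '" + gaId + "']);",
    "_gaq.push(['_trackPageview']);"
  ].join('\n');
}

// Because only the validated ID is interpolated into a fixed template, the
// group owner never gets to inject arbitrary script.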
Letting users use JavaScript is, in general, a very bad idea. Don't do it unless you have to.
I once had a problem where I needed to let clients use JavaScript, but the clients weren't necessarily trusted, so I modified CoffeeScript so that only a small subset was compilable to JavaScript, and it worked pretty well. This may be way too much overkill for you.
You should not let your users access cookies; that's always a pain. Also, no localStorage or WebSQL if you're one of the HTML5 people, and no document.write(), because that's another form of eval, as JSLint tells you.
And, the problem with letting people have javascript is that even if you believe you have trusted users, someone may get a password, and you don't want that person to get access to all the other accounts in the group.
Automatically recognizing whether some JavaScript code is malicious, or sandboxing it, is close to impossible. If you don't want to allow your site to be hacked, you are left with only a few options:
Don't allow users to add JavaScript at all.
Only allow predefined JavaScript code, e.g. for Google Analytics.
Have all custom JavaScript inspected by a human before it is allowed to display on the site. Never trust scripts loaded from third party sites - these can change from one day to another and turn malicious.
If you have no other choice, you may consider separating path/domain of user javascripts (and cookies).
For example, each user's page lives on its own subdomain:
user1.server.com
and other users' pages likewise live on their own subdomains:
user2.server.com
So, if you set session cookies scoped to user1.server.com, they are unobtainable by user scripts from other domains (e.g. user2.server.com).
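A minimal sketch of the cookie scoping this relies on (host names are placeholders):

// Running on user1.server.com: a host-only cookie (no Domain attribute) is
// visible only to user1.server.com itself.
document.cookie = "session=abc123; path=/; secure";

// By contrast, a cookie deliberately widened to the parent domain would be
// shared with every subdomain, defeating the isolation, so avoid this here:
document.cookie = "session=abc123; domain=.server.com; path=/; secure";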
Another option may be executing all user JavaScript in a server-side JS engine (thus controlling all of its I/O and limiting access to browser resources).
There is no simple and easy solution anyway, so better to consider using options from the other answers (e.g. a predefined script API, human inspection).
Duplicate:
Do web sites really need to cater for browsers that don’t have Javascript enabled?
Only supporting users who have Javascript enabled.
How common is it for Javascript to be disabled
How many people disable Javascript?
I've been doing web applications on and off for a few years now and each application I write seems to have more javascript than the previous one.
A frequent comment is: "But what if the user turns off Javascript?".
I take the point, but I've never actually seen a user do this. Not once.
Have you?
This comes up about every other week or so. Did you search first?
See these:
https://stackoverflow.com/questions/121108/how-many-people-disable-javascript
https://stackoverflow.com/questions/379735/how-common-is-it-for-javascript-to-be-disabled
Only supporting users who have Javascript enabled
Do web sites really need to cater for browsers that don't have Javascript enabled?
The main points are:
Google doesn't use javascript when indexing
Mobile browsers (smart phones like the iPhone) sometimes have bad or non-existent javascript
Screen readers don't do javascript well, if at all, and many developers are legally required to support them.
Thanks to filters like NoScript, the number of people browsing with javascript disabled (at least initially) may actually be going up.
So yes, you still need to worry about it.
It depends entirely on what sort of coverage you require.
Do you need 80%, 90%, or 100% of users to be able to use your site / application?
People DO turn off Javascript. The question is, does your site need to work for those people? Can it just tell them to turn it on if they want to continue?
Yes, it happens.
NoScript is a Firefox add-on - downloaded by plenty of people.
You should always make sure your site works without javascript.
People turn JavaScript off for security reasons. Companies sometimes have JavaScript forced off on their in-house computers. Also, search spiders don't run JavaScript, so your site not working without JavaScript is bad SEO practice.
5% of users have JavaScript turned off.
It has become a standard at my office (for better or for worse) to assume that the user has JS available and turned on. The number of people who have it turned off is getting smaller every day, but that still doesn't mean you should skip the necessary validation of submissions on the server side as well, just in case (among other scenarios).
I would say that it is not safe to assume javascript is always on, but it is safe to REQUIRE javascript be turned on.
In other words, you don't need to jump through hoops to make something work without it, just display a message or redirect.
Javascript is an essential technology, and it's not unreasonable to require it.
It's rare, but it's possible. If you are launching an application for "everyone" to use on the internet, then yes, you'll have to prepare for such an event. It really depends on your target audience, but the safest assumption is that someone will have it turned off.
From a security perspective, you definitely need to handle this situation, as turning off JavaScript (or, worse yet, hijacking the scripts you wrote) is an easy way to bypass business logic and validation if they aren't double-checked on the server. Requiring JavaScript to be turned on is not a good enough defense for stopping people in this situation. Remember, you are relying on the browser to tell you what it has enabled and disabled. The user (or attacker, in this case) is in control of the browser, and you can't trust what it says, as it's easy to modify the HTTP headers.
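As a sketch of the server-side double check, using Node/Express purely as an illustrative stack (the route and field names are made up):

const express = require('express');
const app = express();
app.use(express.json());

app.post('/orders', (req, res) => {
  const qty = Number(req.body.quantity);
  // Repeat the same checks the browser-side script performed; the client's
  // JavaScript may have been disabled or tampered with.
  if (!Number.isInteger(qty) || qty < 1 || qty > 100) {
    return res.status(400).json({ error: 'invalid quantity' });
  }
  // ...proceed with the trusted, re-validated value...
  res.json({ ok: true });
});

app.listen(3000);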
Depends on who your target audience is. Some users turn off JS for various reasons. Usually, they will enable it for individual sites that need it, but they might not do that if you don't tell them they need it.
If your site just fails to load correctly, they'll assume it's broken. If it shows a "you need JS to view this page" message, then at least they'll know what to do.
Some will then enable Javascript for your site specifically, but some won't, and they simply won't be able to use your site, unless it is functional without Javascript.
It's rare, but it happens. It really depends on who your user base is. If it's for corporate users, a lot of them have default security settings with javascript disabled. If it's for... pretty much anyone else, odds are they'll have it turned on.
I run with JavaScript off by default for new sites (via the NoScript plugin). I think many tech-savvy users do the same. At least the ones who are paranoid about XSS attacks.
It is best practice to code for users that have JavaScript turned off.
As web developer your goal should be to provide the core basic functionality (without JavaScript). This enables all users to fully use your site. Then through the use of JavaScript, in a process known as "progressive enhancement", spruce up elements of the site for users that have JavaScript turned on.
And in the case where JavaScript is off... your site should gracefully degrade.
Web development is one of those arenas where you can't expect anything. Code for all users to maximise your site's accessibility.
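A minimal sketch of progressive enhancement, with placeholder element ids and URL:

// The form below works as an ordinary HTML submission with JavaScript off;
// when JavaScript is available, the handler upgrades it to an in-page fetch.
const form = document.getElementById('search-form');   // <form action="/search" method="get">
if (form && window.fetch) {
  form.addEventListener('submit', (event) => {
    event.preventDefault();                             // only runs when JS is on
    const query = new URLSearchParams(new FormData(form)).toString();
    fetch(form.action + '?' + query)
      .then(resp => resp.text())
      .then(html => { document.getElementById('results').innerHTML = html; });
  });
}
// With JavaScript disabled, the browser simply performs the normal GET to
// /search and the server renders the results page: the core feature degrades
// gracefully instead of breaking.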