What precautions should I take before I let clients add JavaScript to a webpage?

Question: What precautions should I take when I let clients add custom JS scripts to their pages?
If you want more details:
I am working on a custom CMS-like project for a company. The CMS has a number of "groups" that each subscriber "owns", where they do their own thing.
The new requirement is that some groups want to add Google Analytics to see how they are doing. So I naturally added a column to the table and made code adjustments so that, if there is data in that column, I use the following line in the master page to write the script out:

    ScriptManager.RegisterClientScriptBlock(Page, typeof(Page), "CustomJs", CustomJs, true);

It works just fine. Only, it got me thinking...
It's really easy for someone with good JavaScript knowledge to access cookies and so on. Sure, each group is moderated and only the super admin can add this JavaScript, and sure, they wouldn't be silly enough to hack their own group. Each group has its own code, so it's not possible to hack other groups. But still,
I am not really comfortable letting users add their own JavaScript code.
I could monitor each group myself, but the groups are growing really quickly, and I will hit a point where I can no longer do that.
So, to sum it up: what precautions should I take to avoid any mishaps?
PS: I did try to Google this; there were no convincing answers anywhere.

Instead of allowing the users to add their own JavaScript, and given that the only requirement here is for Google Analytics, why not just let them put their Analytics ID into the CMS and, if it's present, output the relevant Google Analytics code?
This way you fulfill the users' requirement and also avoid the need to protect against malicious scripting.
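For instance, if the stored ID is present, the server can emit the classic asynchronous ga.js snippet itself, with the subscriber's account ID as the only interpolated value (validated against a strict pattern such as /^UA-\d+-\d+$/ before it is saved):

    var _gaq = _gaq || [];
    _gaq.push(['_setAccount', 'UA-XXXXX-Y']); // the only user-supplied value
    _gaq.push(['_trackPageview']);
    (function () {
      var ga = document.createElement('script');
      ga.type = 'text/javascript';
      ga.async = true;
      ga.src = ('https:' == document.location.protocol ?
                'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
      var s = document.getElementsByTagName('script')[0];
      s.parentNode.insertBefore(ga, s);
    })();

Because the snippet itself is fixed and a validated ID cannot contain script, there is nothing for a malicious user to inject.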

Letting users use JavaScript is, in general, a very bad idea. Don't do it unless you have to.
I once had a problem where I needed to let clients use JavaScript, but the clients weren't necessarily trusted, so I modified CoffeeScript so that only a small subset was compilable to JavaScript, and it worked pretty well. This may be way too overkill for you.
You should not let your users access cookies; that's always a pain. Also, no localStorage or Web SQL if you're one of the HTML5 people, and no document.write(), because that's another form of eval, as JSLint tells you.
And the problem with letting people have JavaScript is that even if you believe you have trusted users, someone may get hold of a password, and you don't want that person to get access to all the other accounts in the group.

Automatically recognizing whether some JavaScript code is malicious, or sandboxing it, is close to impossible. If you don't want your site to be hackable, you are left with only a few options:
Don't allow users to add JavaScript at all.
Only allow predefined JavaScript code, e.g. for Google Analytics.
Have all custom JavaScript inspected by a human before it is allowed onto the site. Never trust scripts loaded from third-party sites - these can change from one day to the next and turn malicious.

If you have no other choice, you may consider separating the path/domain of user JavaScript (and cookies).
For example, your main site lives at:
www.server.com
and you keep each user's pages at their own subdomain:
user1.server.com, user2.server.com, ...
So, if you set session cookies for user1.server.com only, it renders them unobtainable by user scripts from the other domains (e.g. user2.server.com).
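As a minimal sketch (using the illustrative host names above), the whole trick is in the cookie's domain attribute:

    // Scoped to this group's host only: a script running on
    // user2.server.com can neither read nor overwrite it.
    document.cookie = 'session=abc123; domain=user1.server.com; path=/; secure';

    // What you must NOT do: scope it to the parent domain, which would
    // expose it to every subdomain, trusted or not.
    // document.cookie = 'session=abc123; domain=.server.com; path=/';

In practice the session cookie is usually set server-side via a Set-Cookie header, but the scoping rule is the same.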
Another option may be executing all user JavaScript in a server-side JS engine (thus controlling all of its I/O and limiting its access to browser resources), as in the sketch below.
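A minimal sketch of that idea using Node's built-in vm module (an assumption; the answer names no particular engine). Note that vm isolates scope but is not a hardened security boundary by itself:

    var vm = require('vm');

    // The sandbox object is the ONLY global state the user script can see.
    var sandbox = { output: [] };
    vm.createContext(sandbox);

    var userCode = 'output.push("hello from user script");';
    vm.runInContext(userCode, sandbox, { timeout: 50 }); // abort runaway loops

    console.log(sandbox.output); // [ 'hello from user script' ]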
There is no simple and easy solution in any case, so it is better to consider the options from the other answers (e.g. a predefined script API, or human inspection).

Related

Security on Web page that will allow user to add javascript dynamically

I have implemented a requirement on my website where I allow my end user to configure a link to execute any JavaScript that he may require. Since he can type in any JavaScript he requires, he also has the ability to open different web pages, create new pages via JavaScript, edit elements on the page via JavaScript, and so on.
I have some security concerns over this functionality and would like to get some opinions. Is it possible that a malicious or unethical script could be added to the page that could bring about legal or credibility problems? If so, is it possible to put in some code that would restrict the kind of JavaScript my user may add?
There's a thing called ADsafe, developed for banner ads, which is a strict subset of JavaScript meant to prevent malicious code. I don't think you'd be able to do things like
open different web pages, create new pages via javascript, edit elements in the page via javascript and so on
though. I think you should rethink your needs and try to determine whether you can offer the user a choice of pre-determined code that you write, perhaps customizable within certain bounds.
Then again, if you're absolutely sure that the JavaScript is only going to run for the user who entered it, there shouldn't be anything they can do that will screw things up for anyone else. If a user were determined, he or she could simply inject their JavaScript through other means anyway, like a rewriting proxy, an extension, or simply the JavaScript console.

Make a programming language for your web app in JS that compiles to JS with PHP, to ensure thorough filtering of user-uploaded HTML5 canvas animations?

A persistent follow-up to an admittedly similar question I asked: What security restrictions should be implemented in allowing a user to upload a JavaScript file that directs canvas animation?
I like to think I know JS decently enough, and I see common characters in all the XSS examples I've come across, which I am somewhat familiar with. What I am lacking are good XSS examples that could bypass a securely sound, rationally programmed system. I want people to upload HTML5 canvas creations onto my site. Are there any sites like this yet? People get scared about this all the time, it seems, but what if you just wanted to do it for fun, for yourself? And if something happens to the server, then oh well, it's just an animation site; information spreads around like wildfire anyway, so if anyone cares, I'll tell them not to sign up.
If I allow a single textarea form field to act as an IDE, using JS, for my programming language written in JS, and do string replacing, filtering, and validation of the user's syntax before finally compiling it into JS to be echoed by PHP, how bad could it get for me to host that content? Please show me how you could bypass all of my combined considerations, taking the server side into account as well:
If JavaScript is disabled, preventing any POST from getting through, and keeping constant track of the user session.
Namespacing the Class, so they can only prefix their functions and methods with EXAMPLE.
Making instance
Storing my JS Framework in an external (immutable in the browser?) JS file, which needs to be at the top of the page for the single textarea field in the form to be accepted, as well as a server-generated key which must follow it. On the page that hosts the compiled user-uploaded canvas game/animation (1 per page ONLY), the server will verify the correct JS filename string before echoing the rest out.
No external script calls! String replacing on client and server.
Allowing ONLY alphanumeric characters, dashes, and asterisks.
Removing alert, eval, window, XMLHttpRequest, prototyping, cookie, obvious stuff. No native JS reserved words or syntax.
Obfuscating and minifying another external JS file that helps to serve the IDE and recognize the programming language's uniquely named Canvas API methods.
When the window unloads, storing the external JS code into two dynamically generated form fields to be checked by the server in the POST. All of the original code will be cataloged thoroughly in the DB for filtering purposes.
Strict variable naming conventions ('example-square1-lengthPROPERTY', 'example-circle-spinMETHOD')
Copy/paste disabled, with setInterval constantly checking whether it has been re-enabled by the user. If so, trigger a block in the database, change window.location immediately, and check the session ID through POST to confirm, in case JS becomes disabled within that timeframe.
I mean, can I do it then? How can one do harm if they can't use hex or ASCII codes and stuff like that?
I think there are a few other options.
Good places to go for real-life XSS tests, by the way, are the XSS Cheat Sheet and the HTML5 Security Cheatsheet (newer). The problem here, however, is that you want to allow JavaScript but disallow bad JavaScript. This is a different, and more complex, goal than the usual way of preventing XSS, which is preventing all scripts.
Hosting on a separate domain
I've seen this referred to as an "iframe jail".
The goal of XSS attacks is to be able to run code in the same context as your site - that is, on the same domain. This is because the code will be able to read and set cookies for that domain, initiate user actions, redress your design, redirect, and so forth.
If, however, you have two separate domains - one for your site, and another which only hosts the untrusted, user-uploaded content, then that content will be isolated from your main site. You could include it in an iframe, and yet it would have no access to the cookies from your site, no access to redress or alter the design or links outside its iframe, and no access to the scripting variables of your main window (since it is on a different domain).
It could, of course, set cookies as much as it likes, and even read back the ones that it set. But these would still be isolated from the cookies for your site. It would not be able to affect or read your main site's cookies. It could also include other code which could annoy/harass the user, such as pop-up windows, or could attempt to phish (you'd need to make it visually clear in your out-of-iframe UI that the content served is not part of your site). However, this is still sandboxed away from your main site, where your own personal payload - your session cookies and the integrity of your overarching page design and scripts - is preserved. It would carry no less, but no more, risk than any site on the internet that you could embed in an iframe.
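A sketch of embedding such content, with usercontent.example.com standing in for the separate domain; the HTML5 sandbox attribute is an extra restriction on browsers that support it:

    // This runs on YOUR domain; the embedded document lives on the
    // user-content domain, so the same-origin policy walls it off from
    // this page's cookies and DOM.
    var frame = document.createElement('iframe');
    frame.src = 'https://usercontent.example.com/groups/42/page.html';
    // Scripts may run inside the frame, but with no same-origin access,
    // no form submission, no plugins and no top-level navigation.
    frame.setAttribute('sandbox', 'allow-scripts');
    document.getElementById('user-content').appendChild(frame);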
Using a subset of Javascript
Subsets of Javascript have been proposed, which provide compartmentalisation for scripts - the ability to load untrusted code and have it not able to alter or access other code if you don't give it the scope to do so.
Look into things like Google CAJA - whose aim is to enable exactly the type of service that you've described:
Caja allows websites to safely embed DHTML web applications from third parties, and enables rich interaction between the embedding page and the embedded applications. It uses an object-capability security model to allow for a wide range of flexible security policies, so that the containing page can effectively control the embedded applications' use of user data and to allow gadgets to prevent interference between gadgets' UI elements.
One issue here is that people submitting code would have to program against the Caja API. It's still valid JavaScript, but it won't have access to the browser DOM, as Caja's API mediates access. This would make it difficult for your users to port some existing code. There is also a compilation phase. Since JavaScript is not a secure language, there is no way to ensure code cannot access your DOM or other global variables without running it through a parser, so that's what Caja does - it compiles from JavaScript input to JavaScript output, enforcing its security model.
HTML Purifier consists of thousands of regular expressions that attempt to "purify" HTML into a safe subset that is immune to XSS. That project is bypassed every few months, because it isn't nearly complex enough to address the problem of XSS.
Do you understand the complexity of XSS?
Do you know that javascript can exist without letters or numbers?
Okay, the very first thing I would try is inserting a meta tag that changes the encoding to, I don't know, let's say UTF-7, which is rendered by IE. Within this UTF-7-encoded HTML there will be JavaScript. Did you think of that? Well, guess what: there are somewhere between a hundred thousand and a few million other vectors I didn't think of.
The XSS cheat sheet is so old my grandparents are immune to it. Here is a more up to date version.
(Oh, and by the way, you will be hacked, because what you are trying to do is fundamentally insecure.)
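That "no letters or numbers" claim is easy to demonstrate: JavaScript's type coercion lets you build strings, and eventually arbitrary code, out of pure punctuation (the "JSFuck" technique). A tiny illustration:

    // ![] is false; ![] + [] coerces that to the string "false"; +[] is 0.
    // So this letter-free, digit-free expression evaluates to the letter "f":
    (![] + [])[+[]]; // "f"

Whole programs can be encoded this way, which is why filters that merely strip alphanumeric keywords are useless.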

Does googlebot recognize an HTML <title> tag altered by JavaScript?

I have a search engine on my website and it works via Ajax. I want to have a specific <title> for each search attempt. To achieve that, I have to alter the title every time I receive a response from Ajax.
Do you have any idea if googlebot will see this altered and use it to index my webpage?
Thanks for any help!
Do you have any idea if googlebot will see this altered and use it to index my webpage?
Most likely not.
You should change the title on server side.
Googlebot does something similar to opening the page URL in Notepad. It will see, as plain text, the JavaScript code which tries to change the title; but it will not see the result of the script's execution, of course.
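In other words, a crawler that reads only the served source never executes something like this (the function name is illustrative):

    // Runs in the browser after the Ajax response arrives. A source-only
    // crawler sees this code as text; it never sees the updated title.
    function onSearchResults(query) {
      document.title = 'Search results for ' + query;
    }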
EDIT:
Ajax-enabled web pages are crawled using the same principle, unless they follow the techniques for Ajax-enabled web sites suggested by Google:
AJAX crawling: Guide for webmasters and developers
Well, Google has added many features to its search engine over the past years, and it will probably be able to see the changes. But how do you imagine a client should reach a page whose address doesn't change but whose content does after a few clicks? You must combine Ajax with normal separate pages; this will also add compatibility for clients that have JavaScript disabled. E.g. have all pages redirect to the one working with Ajax if JavaScript is enabled and the user-agent string doesn't match *bot*.
Simply put, Google will not index any dynamic content from your page.
As Slava said, Google has added many features to its search engine over the past years, and it will probably be able to see the changes. But even if Google does eventually start indexing dynamically changed content, I think it is still unattractive from a search engine optimization standpoint, because that content will not be indexed as quickly as content served from the server.
It's important to know what you're getting and what you're losing. Yes, you may be adding functionality to your page easily and enhancing the user experience, but if you don't get the data indexed, you lose all that juicy keywordy goodness. :)

Is it possible to sanitize Javascript code?

I want to allow user-contributed JavaScript in areas of my website.
1. Is this completely insane?
2. Are there any JavaScript sanitizer scripts or good regex patterns out there to scan for alerts, iframes, remote script includes, and other malicious JavaScript?
3. Should this process be manually authorized (by a human checking the JavaScript)?
4. Would it be more sensible to allow users to only use a framework (like jQuery) rather than giving them access to actual JavaScript? This way it might be easier to monitor.
Thanks
I think the correct answer is 1.
As soon as you allow Javascript, you open yourself and your users to all kinds of issues. There is no perfect way to clean Javascript, and people like the Troll Army will take it as their personal mission to mess you up.
1. Is this completely insane?
I don't think so, but it's close. Let's see.
2. Are there any Javascript sanitizer scripts or good regex patterns out there to scan for alerts, iframes, remote script includes and other malicious Javascript?
Yeah, at least there are Google Caja and ADsafe to sanitize the code, allowing it to be sandboxed. I don't know what degree of trustworthiness they provide, though.
3. Should this process be manually authorized (by a human checking the Javascript)?
It is possible that the sandbox fails, so manual review would be a sensible addition, depending on the risk and on the trade-off against being attacked by malicious (or faulty) code.
4. Would it be more sensible to allow users to only use a framework (like jQuery) rather than giving them access to actual Javascript? This way it might be easier to monitor.
jQuery is just plain JavaScript, so if you're trying to protect against attacks, it won't help at all.
If it is crucial to prevent these kinds of attacks, you can implement a custom language, parse it in the backend, and produce controlled, safe JavaScript; or you may consider another strategy, like providing an API and accessing it from a third-party component of your app.
Take a look at Google Caja:
Caja allows websites to safely embed DHTML web applications from third parties, and enables rich interaction between the embedding page and the embedded applications. It uses an object-capability security model to allow for a wide range of flexible security policies, so that the containing page can effectively control the embedded applications' use of user data and to allow gadgets to prevent interference between gadgets' UI elements.
Instead of checking for evil things like script includes, I would go for regex-based whitelisting of the few commands you expect to be used. Then involve a human to authorize and add new acceptable commands to the whitelist.
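A minimal sketch of that approach; the allowed commands and the widget API here are illustrative assumptions, not a vetted filter:

    // Every non-empty line must match one of the pre-approved patterns;
    // otherwise the whole submission is rejected and queued for human review.
    var ALLOWED = [
      /^widget\.setColor\('#[0-9a-fA-F]{6}'\);$/,
      /^widget\.setTitle\('[A-Za-z0-9 ]{1,40}'\);$/
    ];

    function isWhitelisted(script) {
      return script.split('\n')
        .map(function (line) { return line.trim(); })
        .filter(function (line) { return line.length > 0; })
        .every(function (line) {
          return ALLOWED.some(function (re) { return re.test(line); });
        });
    }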
Think about all of the things YOU can do with JavaScript. Then think about the things you would do if you could do them on someone else's site. These are things that people will do just because they can, or to find out whether they can. I don't think it is a good idea at all.
It might be safer to design/implement your own restricted scripting language, which can be very similar to JavaScript, but which is under the control of your own interpreter.
1. Probably. The scope for doing bad things is going to be much greater than it is when you simply allow HTML but try to avoid allowing JavaScript.
2. I do not know.
3. Well, two things: do you really want to spend your time doing this? And if you do, you had better make sure the reviewer sees the JavaScript code rather than actual live JavaScript!
4. I can't see why this would make any difference, unless you have someone approving posts and that person happens to be more at home with jQuery than with plain JavaScript.
Host it on a different domain. Same-origin security policy in browsers will then prevent user-submitted JS from attacking your site.
It's not enough to host it on a different subdomain, because subdomains can set cookies on the higher-level domain, and this could be used for session fixation attacks.
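Concretely, the attack described above looks like this from a script running on a user-controlled subdomain:

    // Executed on evil-user.server.com: plants a cookie on the PARENT
    // domain, which the browser will then also send to www.server.com,
    // fixing the victim's session to an ID the attacker already knows.
    document.cookie = 'session=attacker-chosen-id; domain=.server.com; path=/';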

Is it safe to assume Javascript is always turned on? [duplicate]

Duplicate:
Do web sites really need to cater for browsers that don’t have Javascript enabled?
Only supporting users who have Javascript enabled.
How common is it for Javascript to be disabled
How many people disable Javascript?
I've been doing web applications on and off for a few years now and each application I write seems to have more javascript than the previous one.
A frequent comment is: "But what if the user turns off Javascript?".
I take the point, but I've never actually seen a user do this. Not once.
Have you?
This comes up about every other week or so. Did you search first?
See these:
https://stackoverflow.com/questions/121108/how-many-people-disable-javascript
https://stackoverflow.com/questions/379735/how-common-is-it-for-javascript-to-be-disabled
Only supporting users who have Javascript enabled
Do web sites really need to cater for browsers that don't have Javascript enabled?
The main points are:
Google doesn't use javascript when indexing
Mobile browsers (smart phones like the iPhone) sometimes have bad or non-existent javascript
Screen readers don't do javascript well, if at all, and many developers are legally required to support them.
Thanks to filters like NoScript, the number of people browsing with javascript disabled (at least initially) may actually be going up.
So yes, you still need to worry about it.
It depends entirely on what sort of coverage you require.
Do you need 80%, 90%, or 100% of users to be able to use your site / application?
People DO turn off Javascript. The question is, does your site need to work for those people? Can it just tell them to turn it on if they want to continue?
Yes, it happens.
NoScript is a Firefox add-on - downloaded by plenty of people.
You should always make sure your site works without javascript.
People turn JavaScript off for security reasons. Companies sometimes have JavaScript forced off on their in-house computers. Also, spiders don't run JavaScript, so your site not working without JavaScript is bad SEO practice.
5% of users have JavaScript turned off.
It has become a standard at my office (for better or for worse) to assume that the user has JS available and turned on. The number of people who have it turned off is getting smaller every day, but this still doesn't mean you should forgo performing the necessary validation of submissions on the server side as well, just in case (among other scenarios).
I would say that it is not safe to assume javascript is always on, but it is safe to REQUIRE javascript be turned on.
In other words, you don't need to jump through hoops to make something work without it, just display a message or redirect.
Javascript is an essential technology, and it's not unreasonable to require it.
It's rare, but it's possible. If you are launching an application for "everyone" to use on the internet, then yes, you'll have to prepare for such an event. It really depends on your target audience, but the safest assumption is that someone will have it turned off.
From a security perspective, you definitely need to handle this situation, as turning off JavaScript (or, worse yet, hijacking the scripts you wrote) is an easy way to bypass business logic and validation if it isn't double-checked on the server. Requiring JavaScript to be turned on is not a good enough defense here. Remember, you're asking the browser to tell you what it has enabled and disabled. The user (or attacker, in this case) is in control of the browser, and you can't trust what it says, as it's easy to modify the HTTP headers.
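A sketch of that principle (the Node/Express-style route is an assumption; the same applies to any server stack): every rule the client-side JavaScript enforces is checked again on the server.

    var express = require('express');
    var app = express();
    app.use(express.urlencoded({ extended: false })); // parse form bodies

    // Server-side re-validation: the client's JS may have been disabled
    // or tampered with, so nothing from the request is trusted.
    app.post('/transfer', function (req, res) {
      var amount = parseFloat(req.body.amount);
      if (isNaN(amount) || amount <= 0) {
        return res.status(400).send('Invalid amount');
      }
      // ...perform the transfer only after every server-side check passes...
      res.send('OK');
    });

    app.listen(3000);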
Depends on who your target audience is. Some users turn off JS for various reasons. Usually, they will enable it for individual sites that need it, but they might not do that if you don't tell them they need it.
If your site just fails to load correctly, they'll assume it's broken. If it shows a "you need JS to view this page" message, then at least they'll know what to do.
Some will then enable Javascript for your site specifically, but some won't, and they simply won't be able to use your site, unless it is functional without Javascript.
It's rare, but it happens. It really depends on who your user base is. If it's for corporate users, a lot of them have default security settings with javascript disabled. If it's for... pretty much anyone else, odds are they'll have it turned on.
I run with JavaScript off by default for new sites, via the NoScript plugin. I think many tech-savvy users do the same - at least the ones who are paranoid about XSS attacks.
It is best practice to code for users that have JavaScript turned off.
As a web developer, your goal should be to provide the core functionality without JavaScript, so that all users can fully use your site. Then, through the use of JavaScript, in a process known as "progressive enhancement", spruce up elements of the site for users who have JavaScript turned on.
And in the case where JavaScript is off, your site should degrade gracefully.
Web development is one of those arenas where you can't expect anything. Code for all users to maximise your site's accessibility.
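A small sketch of that pattern (the form ID, action URL and result container are illustrative): the form submits as a normal full-page POST with JavaScript off, and is upgraded to an Ajax submit when JavaScript is on.

    // Progressive enhancement: without JS this code never runs, and the
    // form falls back to a plain POST handled entirely by the server.
    document.addEventListener('DOMContentLoaded', function () {
      var form = document.getElementById('searchForm');
      if (!form) return;
      form.addEventListener('submit', function (e) {
        e.preventDefault(); // only reached when JS is enabled
        var xhr = new XMLHttpRequest();
        xhr.open('POST', form.action);
        xhr.onload = function () {
          document.getElementById('results').innerHTML = xhr.responseText;
        };
        xhr.send(new FormData(form));
      });
    });

With JavaScript off, the handler never attaches and the server-rendered flow still works.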
