I am a web developer for a site that is occasionally plagued by form bots. Recently I received an error notification about a form submission that should be impossible for a human user: the form cannot be submitted without JavaScript enabled, yet the server-side script received a form field value that the JavaScript validation will not allow.
I suspect that a form bot managed to submit the form without running the JavaScript, but I'm not entirely sure, because a real user had a similar problem. I know how to use honeypot fields as a countermeasure against form bots, but I need to test my countermeasures. Therefore I need a working form bot to attack my form, so I can see what the result would be and verify that my countermeasures work.
I think you can use PHP with cURL to submit web forms, but I can't find any sample code. I would prefer to use an actual form bot so I can be sure that the honeypot fields aren't easily circumvented.
Does anyone know what is currently being used to attack web forms? How do you test your countermeasures to ensure they are effective?
Personally, I use a Firefox extension called Tamper Data. You submit the form normally, but then you can modify the HTTP parameters (variables, cookies, etc.) before the request is sent to the server. That way, you can manually change the validated fields. You could automate it with PHP and cURL...
The thing is, you don't want to run an actual bot against it, because that would only test one (maybe two) methods of breaking your validation. You want to run your own; that way you can test every possible combination you can think of. If you automate it with PHP/cURL, you can then run the test after every change (an integration test) to verify that you didn't "break" anything. It shouldn't be too hard to write, since the cURL functions are pretty well documented...
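If it helps, here is a minimal sketch of that kind of PHP/cURL submitter. The target URL and field names (including the hypothetical honeypot field called website) are placeholders for your own form:

<?php
// Minimal sketch of a "form bot" style submission with PHP and cURL.
// The URL and field names below are placeholders for your own form.
$fields = [
    'name'    => 'Test User',
    'email'   => 'not-an-email',  // deliberately invalid to sidestep the JS rules
    'message' => 'Hello',
    'website' => '',              // leave the honeypot empty, or fill it to test detection
];

$ch = curl_init('https://example.com/contact.php');
curl_setopt_array($ch, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($fields), // application/x-www-form-urlencoded
    CURLOPT_RETURNTRANSFER => true,                      // capture the response body
    CURLOPT_FOLLOWLOCATION => true,
]);

$body   = curl_exec($ch);
$status = curl_getinfo($ch, CURLINFO_HTTP_CODE);
curl_close($ch);

echo "HTTP $status\n$body\n";

Because this posts straight to the server, none of your JavaScript ever runs, which is exactly the scenario you are trying to reproduce.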
What about captchas to protect your form?
When validating forms with JavaScript and PHP, why do we need to include the same validation rules (i.e. correct format, checking that fields are filled in, values within bounds) on both the server and the browser? I understand that JavaScript can be turned off, so why not just have everything on the server and save checking it twice?
Utilising client-side validation generally gives a much nicer experience for the user. For example, instead of the user filling out a form and then submitting it only to find out they've made mistakes, with JavaScript we can let them know as they finish typing. That is a much easier/cleaner/quicker way for the user to fix their mistakes.
When using client-side validation, you must also use backend validation, because JavaScript can be manipulated. E.g. what happens if they go into the dev tools inspector and alter the validation script to allow malicious details to be entered? Or what if they inject their own malicious script? Or what if they turn off JavaScript in the browser entirely? That is where backend validation comes into play.
It is important to validate the form submitted by the user because it can contain inappropriate values, so validation is a must.
JavaScript gives you the facility to validate the form on the client side, so feedback is faster than with server-side validation alone. For that reason, many web developers prefer JavaScript form validation.
Through JavaScript, we can validate fields such as name, password, email, date and mobile number.
I'd suggest doing the validation server side and using JS/AJAX for the client side; that way you have one set of validation rules on the server and you still get client-side feedback without a page refresh. Remember that some people choose not to have JS turned on (I had clients working in government departments where JS was turned off), so you need a fallback position.
To clarify: create a pre-validation JS function that posts your data via an AJAX call to the server; the server applies the validation rules to the data and returns a response (XML/JSON/whatever) to your JS validation function. Your JS then processes the response, checking for success or failure. On failure, display an error message (probably from data set in the response); on success, continue posting the form data to the server for processing. The server then needs to run the validation on the posted data again (using the same validation function/logic) to ensure nothing has been tampered with along the way. Then, finally, process the data on the server to do whatever you want with it.
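A rough sketch of the server half of that pattern (the function and field names are illustrative): the same validation routine serves both the AJAX pre-validation call and the final submit.

<?php
// Shared validation logic, reused by the AJAX endpoint and again
// when the form is finally submitted for processing.
function validate_contact(array $input): array
{
    $errors = [];
    if (trim($input['name'] ?? '') === '') {
        $errors['name'] = 'Name is required.';
    }
    if (!filter_var($input['email'] ?? '', FILTER_VALIDATE_EMAIL)) {
        $errors['email'] = 'Please enter a valid email address.';
    }
    return $errors; // empty array means the data passed
}

// AJAX pre-validation endpoint: return JSON for the client-side JS to display.
$errors = validate_contact($_POST);
header('Content-Type: application/json');
if ($errors) {
    http_response_code(400);
    echo json_encode(['success' => false, 'errors' => $errors]);
} else {
    echo json_encode(['success' => true]);
}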
Each time the user sends data to the server it gets validated on the server, and the response indicates whether the data is valid (HTTP 200) or invalid (HTTP 400).
Example Payload
{ "name": "#batman#123" } //Invalid data gets sent to the server and processed
This whole round trip consumes both memory and time, so to save both we don't send the invalid request at all. In the example above, the client gets an error while typing the name "#batman#123", and the trip to the server is avoided entirely.
From my experience and in my opinion, you only need to do validation in JS. Yes, it may sound weird, but there is no need for full server-side validation; just throw an exception and return a 400 response code.
Using this approach you will avoid headaches with issues like:
"We get different error messages for all the fields when the user uses German. Why does that happen?"
You will also be able to create a clean RESTful service on the backend and a nice JS frontend. Of course, if you go this way, you have to make sure your client part is smart enough.
Notes:
What happens if the user turns off JS in their browser?
It will mean that the user doesn't want to use the vendor-provided client. They will probably just see a "please enable JS" message. It also means the user is expected to send valid requests (following the vendor's API).
How is the server side code going to decide it needs to throw an exception if you don't do validation there?
The database can throw an exception because a foreign, unique or primary key constraint failed, and the server can throw an exception due to a datatype mismatch. A link to newly added data will simply be malformed or invisible.
You can manipulate client-side JavaScript to do whatever you want.
If I manipulate your JS validation and the data goes into the MySQL database, I can use an XSS attack.
If you send a valid request but with an XSS payload (for example in a comment field), you will only be able to perform the attack against yourself, because only you are using an invalid (or self-written) client that allows XSS.
Which is better to do: client-side or server-side validation?
In our situation we are using
jQuery and MVC.
JSON data to pass between our View and Controller.
A lot of the validation I do is validating data as users enter it.
For example, I use the keypress event to prevent letters in a text box, set a maximum number of characters, and check that a number is within a range.
I guess the better question would be: are there any benefits to doing server-side validation over client-side?
Awesome answers, everyone. The website we have is password protected and has a small user base (<50). If they are not running JavaScript we will send ninjas. But if we were designing a site for everyone, I'd agree to do validation on both sides.
As others have said, you should do both. Here's why:
Client Side
You want to validate input on the client side first because you can give better feedback to the average user. For example, if they enter an invalid email address and move to the next field, you can show an error message immediately. That way the user can correct every field before they submit the form.
If you only validate on the server, they have to submit the form, get an error message, and try to hunt down the problem.
(This pain can be eased by having the server re-render the form with the user's original input filled in, but client-side validation is still faster.)
Server Side
You want to validate on the server side because you can protect against the malicious user, who can easily bypass your JavaScript and submit dangerous input to the server.
It is very dangerous to trust your UI. Not only can they abuse your UI, but they may not be using your UI at all, or even a browser. What if the user manually edits the URL, or runs their own Javascript, or tweaks their HTTP requests with another tool? What if they send custom HTTP requests from curl or from a script, for example?
(This is not theoretical; eg, I worked on a travel search engine that re-submitted the user's search to many partner airlines, bus companies, etc, by sending POST requests as if the user had filled each company's search form, then gathered and sorted all the results. Those companies' form JS was never executed, and it was crucial for us that they provide error messages in the returned HTML. Of course, an API would have been nice, but this was what we had to do.)
Not allowing for that is not only naive from a security standpoint, but also non-standard: a client should be allowed to send HTTP by whatever means they wish, and you should respond correctly. That includes validation.
Server side validation is also important for compatibility - not all users, even if they're using a browser, will have JavaScript enabled.
Addendum - December 2016
There are some validations that can't even be properly done in server-side application code, and are utterly impossible in client-side code, because they depend on the current state of the database. For example, "nobody else has registered that username", or "the blog post you're commenting on still exists", or "no existing reservation overlaps the dates you requested", or "your account balance still has enough to cover that purchase." Only the database can reliably validate data which depends on related data. Developers regularly screw this up, but PostgreSQL provides some good solutions.
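As a small illustration of letting the database have the final say, the "nobody else has registered that username" rule can live in a UNIQUE constraint, with the application simply catching the violation (the connection details and table are made up):

<?php
// Assumes a table such as:
//   CREATE TABLE users (id serial PRIMARY KEY, username text UNIQUE NOT NULL);
$pdo = new PDO('pgsql:host=localhost;dbname=app', 'app', 'secret', [
    PDO::ATTR_ERRMODE => PDO::ERRMODE_EXCEPTION,
]);

try {
    $stmt = $pdo->prepare('INSERT INTO users (username) VALUES (?)');
    $stmt->execute([$_POST['username'] ?? '']);
} catch (PDOException $e) {
    // SQLSTATE 23505 = unique_violation in PostgreSQL
    if ($e->getCode() === '23505') {
        http_response_code(409);
        exit('That username is already taken.');
    }
    throw $e; // anything else is a real error
}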
Yes, client side validation can be totally bypassed, always. You need to do both, client side to provide a better user experience, and server side to be sure that the input you get is actually validated and not just supposedly validated by the client.
I am just going to repeat it, because it is quite important:
Always validate on the server
and add JavaScript for user-responsiveness.
The benefit of doing server side validation over client side validation is that client side validation can be bypassed/manipulated:
The end user could have javascript switched off
The data could be sent directly to your server by someone who's not even using your site, with a custom app designed to do so
A Javascript error on your page (caused by any number of things) could result in some, but not all, of your validation running
In short - always, always validate server-side and then consider client-side validation as an added "extra" to enhance the end user experience.
You must always validate on the server.
Also having validation on the client is nice for users, but is utterly insecure.
Well, I still find some room to answer.
In addition to answers from Rob and Nathan, I would add that having client-side validations matters. When you are applying validations on your webforms you must follow these guidelines:
Client-Side
Use client-side validation to filter the requests so that, by and large, only well-formed submissions from genuine users reach your website.
Client-side validation should be used to reduce the errors that might otherwise occur during server-side processing.
Client-side validation should be used to minimize server-side round trips, saving bandwidth and requests per user.
Server-Side
You SHOULD NOT assume that validation done successfully on the client side is 100% reliable, even if the site serves fewer than 50 users. You never know which of your users or employees will turn "evil" and do something harmful knowing you don't have proper validation in place.
Even if the input looks perfect in terms of email addresses, phone numbers or other valid-looking values, it might contain very harmful data, which needs to be filtered on the server side regardless of whether it appears correct.
If client-side validation is bypassed, your server-side validation rescues you from potential damage to your server-side processing. In recent times we have heard plenty of stories of SQL injection and other techniques applied to gain some evil benefit.
Both types of validation play important roles in their respective scopes, but the strongest is server-side. If you receive 10k users at a single point in time, you definitely want to cut down the number of requests reaching your web server. If a single mistake like an invalid email address forces a full post-back so the user can correct it, that eats your server resources and bandwidth, so you had better apply JavaScript validation as well. If JavaScript is disabled, your server-side validation comes to the rescue, and I bet only a few users will have accidentally disabled it, since 99.99% of websites use JavaScript and it is enabled by default in all modern browsers.
You can do server-side validation and send back a JSON object with the validation results for each field, keeping the client-side JavaScript to a minimum (just displaying results) and still having a user-friendly experience without repeating yourself on both client and server.
The client side should use basic validation via HTML5 input types and pattern attributes; these serve only as progressive enhancement for a better user experience (even if they are not supported in < IE9 and Safari, we don't rely on them). But the main validation should happen on the server side.
I suggest implementing both client and server validation; it keeps the project more secure. If I had to choose one, I would go with server-side validation.
You can find some relevant information here
https://web.archive.org/web/20131210085944/http://www.webexpertlabs.com/server-side-form-validation-using-regular-expression/
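In the same spirit, here is a minimal sketch of regular-expression checks on the server side; the patterns are illustrative rules of thumb, not definitive ones:

<?php
$name  = $_POST['name']  ?? '';
$phone = $_POST['phone'] ?? '';
$email = $_POST['email'] ?? '';

$errors = [];
if (!preg_match('/^[\p{L}\' -]{1,100}$/u', $name)) {
    $errors[] = 'Name may only contain letters, spaces, apostrophes and hyphens.';
}
if (!preg_match('/^\+?[0-9 ()-]{7,20}$/', $phone)) {
    $errors[] = 'Phone number looks invalid.';
}
// For email, PHP's built-in filter is usually safer than a hand-rolled regex.
if (!filter_var($email, FILTER_VALIDATE_EMAIL)) {
    $errors[] = 'Email address looks invalid.';
}

if ($errors) {
    http_response_code(400);
    exit(implode("\n", $errors));
}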
I came across an interesting link that makes a distinction between gross, systematic and random errors.
Client-side validation is a perfect fit for preventing gross and random errors, typically a maximum length for any input. Don't mimic the server-side validation rule; provide your own gross, rule-of-thumb limit (e.g. 200 characters on the client side, with a specific n characters below 200 on the server side dictated by a strong business rule).
Server-side validation suits perfectly for preventing systematic errors; it will enforce business rules.
In a project I'm involved in, the validation is done on the server through ajax requests. On the client I display error messages accordingly.
Further reading: gross, systematic, random errors:
https://answers.yahoo.com/question/index?qid=20080918203131AAEt6GO
JavaScript can be modified at runtime.
I suggest a pattern of creating a validation structure on the server, and sharing this with the client.
You'll need separate validation logic on both ends, e.g.:
"required" attributes on inputs client-side
field.length > 0 server-side.
But using the same validation specification will eliminate some redundancy (and mistakes) of mirroring validation on both ends.
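A rough sketch of that shared-specification idea, with made-up field names: one rules array renders the HTML5 attributes and also drives the server-side checks.

<?php
// One validation specification, used twice: to render the client-side
// attributes and to re-check the submitted values on the server.
$rules = [
    'username' => ['required' => true,  'maxlength' => 30],
    'bio'      => ['required' => false, 'maxlength' => 500],
];

// 1) Rendering: emit matching HTML5 attributes for an input.
function attributes(array $rule): string
{
    $attrs = [];
    if (!empty($rule['required'])) { $attrs[] = 'required'; }
    if (isset($rule['maxlength'])) { $attrs[] = 'maxlength="' . $rule['maxlength'] . '"'; }
    return implode(' ', $attrs);
}
// e.g. echo '<input name="username" ' . attributes($rules['username']) . '>';

// 2) Server side: enforce the same rules on the posted data.
function validate(array $rules, array $input): array
{
    $errors = [];
    foreach ($rules as $field => $rule) {
        $value = trim($input[$field] ?? '');
        if (!empty($rule['required']) && $value === '') {
            $errors[$field] = 'This field is required.';
        } elseif (isset($rule['maxlength']) && mb_strlen($value) > $rule['maxlength']) {
            $errors[$field] = 'Too long (max ' . $rule['maxlength'] . ' characters).';
        }
    }
    return $errors;
}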
Client-side data validation can be useful for a better user experience: for example, a user who mistypes his email address should not have to wait until his request is processed by a remote server to learn about the typo he made.
Nevertheless, as an attacker can bypass client-side validation (and may not even use a browser at all), server-side validation is required, and it must be the real gate that protects your backend from nefarious users.
If you are doing light validation, it is best to do it on the client. It saves network traffic, which helps your server perform better. If it is complicated validation that involves pulling data from a database or something like passwords, then it is best to do it on the server, where the data can be securely checked.
My client needs me to implement captcha on his form. The form's action is set to an external page, to which we do not have access.
I wanted to use Google's reCAPTCHA, but it seems that the piece of code which does the checking needs to be placed in the target page (which we cannot access).
What is the solution? I tried using a simple JavaScript array and checking the value with jQuery, but it seems that after a couple of months the spammers learned how to dig the values out of the page code (yes, the values are written there; it's JavaScript and I don't know a better way), because the spam is arriving again.
A good client-side approach would be even better. If you know a script or some code that could be used here, it would be much appreciated.
Host another server with the captcha and submit your form there. On success, submit the form from the captcha server to the one you don't have access to.
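A sketch of that middle-man approach, assuming Google reCAPTCHA on your own server and cURL to relay the fields onward (the secret and the external URL are placeholders):

<?php
// proxy.php on your own server: the form posts here first.
// 1) Verify the reCAPTCHA response with Google.
$verify = curl_init('https://www.google.com/recaptcha/api/siteverify');
curl_setopt_array($verify, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query([
        'secret'   => 'YOUR_RECAPTCHA_SECRET',               // placeholder
        'response' => $_POST['g-recaptcha-response'] ?? '',
        'remoteip' => $_SERVER['REMOTE_ADDR'],
    ]),
    CURLOPT_RETURNTRANSFER => true,
]);
$result = json_decode(curl_exec($verify), true);
curl_close($verify);

if (empty($result['success'])) {
    http_response_code(403);
    exit('Captcha failed.');
}

// 2) Relay the original fields to the external handler you cannot modify.
unset($_POST['g-recaptcha-response']);
$relay = curl_init('https://external.example.com/form-handler'); // placeholder URL
curl_setopt_array($relay, [
    CURLOPT_POST           => true,
    CURLOPT_POSTFIELDS     => http_build_query($_POST),
    CURLOPT_RETURNTRANSFER => true,
]);
echo curl_exec($relay);
curl_close($relay);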
On a site, the user can enter an email account to gain access. I do not want this to be hacked by script kiddies.
The input items are generated by JavaScript and posted via AJAX. Do I need things like fuzzy word matches in this environment?
Any time you give a user the possibility to input something, every time your application expects some data from users, that data can be forged.
No matter how your form is built, your web server expects some data; that form and data can be forged or faked, so you must be prepared for anything that could be sent to your application.
Still, you can add some levels of security, using, for example :
HTTPS so communications cannot be listened to
A nonce in your form, to make things harder when it comes to forging forms
I assume you mean adequate security against someone writing a script to fish for valid e-mails using a brute-force style attack? If so then no: your presumption is false that "script kiddies" are incapable of either scripting a full-fledged browser instance that can execute your JavaScript, or determining what URL your AJAX ultimately submits to and then forging requests.
If you want to protect against these kinds of attacks, then the only effective way to do so is to add code on the server side. For instance, you could track the number of incorrect access attempts posted per IP address, and block requests (for like an hour or so) from any IP that posts more than, say, 10 invalid requests in a 5-minute time span. Then you are reasonably safe against this kind of attack until you come across someone with a million-IP bot-net and a grudge against your site.
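A rough sketch of that throttle, using APCu as the shared store purely for illustration (Redis or a database table works the same way); check_credentials() is a hypothetical stand-in for whatever lookup you already do:

<?php
// Per-IP throttle: block an IP after too many failed attempts.
const MAX_ATTEMPTS = 10;   // failures allowed...
const WINDOW       = 300;  // ...per 5-minute window
const BLOCK_FOR    = 3600; // block for an hour once the limit is hit

$ip  = $_SERVER['REMOTE_ADDR'];
$key = 'failures:' . $ip;

if (apcu_fetch('blocked:' . $ip)) {
    http_response_code(429);
    exit('Too many attempts. Try again later.');
}

$valid = check_credentials($_POST['email'] ?? ''); // hypothetical: your own check

if (!$valid) {
    apcu_add($key, 0, WINDOW);              // start a counter for this window
    if (apcu_inc($key) >= MAX_ATTEMPTS) {
        apcu_store('blocked:' . $ip, true, BLOCK_FOR);
    }
    http_response_code(403);
    exit('Invalid request.');
}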
Another form of protection is to send some random code from the server to the client that gets submitted back with the form (for instance, as a hidden field), and code the server so that it ignores any form submission that does not include this code. This solution works best if you have some way of verifying that the user is trustworthy before you display the form (so it's not really useful in the context of a login form, but it can help secure any post-login forms you may have). Otherwise it is not too hard for an attacker to write a script that simply grabs a code from your server and includes it in a forged request.
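A sketch of that hidden-code idea, using the PHP session to remember what the server issued (the field name is arbitrary):

<?php
session_start();

if ($_SERVER['REQUEST_METHOD'] === 'POST') {
    // Handling the submit: ignore anything without the matching code.
    if (!isset($_SESSION['form_token'], $_POST['form_token'])
        || !hash_equals($_SESSION['form_token'], $_POST['form_token'])) {
        http_response_code(403);
        exit('Invalid or missing form token.');
    }
    unset($_SESSION['form_token']); // one-time use
    // ... process the rest of the form here ...
} else {
    // Rendering the form: issue a random code and remember it server-side.
    $_SESSION['form_token'] = bin2hex(random_bytes(32));
    echo '<form method="post">'
       . '<input type="hidden" name="form_token" value="' . $_SESSION['form_token'] . '">'
       . '<!-- your visible fields go here --></form>';
}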
JavaScript + AJAX forms are just a fancier kind of form. It's still a request with POST/GET data, so the same security measures should be taken as for a normal HTML form.
Whether you use AJAX or basic HTTP requests, don't send back data you don't want users to see. Don't offer services or functionality via JavaScript/AJAX that you wouldn't offer via basic HTTP requests.
Script injection does not need a JavaScript/AJAX vulnerability; it just needs insecure backend code that doesn't catch and eliminate injected code.
I have been reading the ASP.NET MVC learning site about JavaScript injection, and man, it is an eye opener.
I never even realized/thought about someone using JavaScript to do some weird ass injection attacks.
It however left me with some unanswered questions.
First
When do you use Html.Encode? Do you use it only when you are going to display information that the user or some other user has submitted?
Or do I use it for everything? Say I have a form that a user submits and this information will never be displayed to any of the users; should I still be using Html.Encode?
Also, how would I do it? I am not sure how to apply Html.Encode inside, say, an Html.TextBox().
Second
What happens if, say, I have a rich HTML editor on my site? The user is allowed to use it to make things bold and so on. Now I want to display that information back to the user through a label. I can't Html.Encode it, since then the bold formatting and everything else would not be rendered.
Yet I can't leave it as it is, since what would stop a user from adding some JavaScript attack?
So what should I do? Use a regex to filter out all tags?
Third
There is also another helper you can use called AntiForgeryToken; when would you use that one?
Thanks
Edit
Almost everyone says use a "white list" and a "black list"; how would I write such a list and compare it to incoming values (examples in C# would be nice)?
Good question!
For the first answer, I would consider looking here at a previously asked question. As that answer discusses, using HTML Encode will not protect you completely against all XSS attacks. To help with this, you should consider using the Microsoft Web Protection Library (AntiXSS in particular), available from Microsoft.
As has already been mentioned, using a list of allowed tags is the best thing to do, leaving others to be stripped out.
The AntiForgeryToken helper works to prevent request forgery (CSRF) because it gives the user a cookie which is validated against the rendered form field when the page is posted. There's no reason that I am aware of why you can't use it in all of your forms.
Use HTML Encode for any data being displayed that has been submitted by a user. You don't need to use it when inserting into the database, otherwise you would end up with odd stored data like: Simon &amp; Sons. Really, I don't see the harm in using it on any content written to the page dynamically.
Use a list of allowed tags for your HTML editor and discard everything else. As people have said, use a whitelist.
The third one is meant to prevent a cross-site request forgery (CSRF) attack. You use it to stop people doing a POST with a 'stolen' cookie from the user. You may require an authenticated cookie before accepting a POST, but a malicious user could take that cookie when a user visits their site and then submit a form to your site claiming to be them.
See here for more:
http://haacked.com/archive/2009/04/02/anatomy-of-csrf-attack.aspx
How to use it:
http://blog.codeville.net/2008/09/01/prevent-cross-site-request-forgery-csrf-using-aspnet-mvcs-antiforgerytoken-helper/
Always validate the input received against a whitelist. If you use a blacklist you could and probably will come up against encoding issues. Always use a whitelist when validating input.
Do not rely on client-side validation to validate user input. Client-side validation is great for helping the user enter correct data, but a malicious user will not use it and can bypass it, so it should never be considered a security fix. Using JavaScript alone to validate input is not enough: JavaScript is very easy to change and modify on any HTML page, and it can be disabled in the browser entirely. So add a further check in your code-behind file.
Additionally, validate the input every time, not just when the data is initially accepted. For example, if you set a cookie, make sure that the cookie still has the same value and is correct on each and every request. A malicious user could modify and change the value at any time during the session.
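One way to sketch "make sure that cookie still has the value you set" is to sign the value so that tampering shows up on every request; the secret and cookie handling below are illustrative, not a complete scheme:

<?php
// Sign a cookie value so tampering can be detected on every request.
const COOKIE_KEY = 'replace-with-a-long-random-server-side-secret'; // placeholder

function set_signed_cookie(string $name, string $value): void
{
    $sig = hash_hmac('sha256', $value, COOKIE_KEY);
    setcookie($name, $value . '.' . $sig, 0, '/', '', true, true); // Secure + HttpOnly
}

function read_signed_cookie(string $name): ?string
{
    $raw = $_COOKIE[$name] ?? '';
    $dot = strrpos($raw, '.');
    if ($dot === false) {
        return null;
    }
    $value = substr($raw, 0, $dot);
    $sig   = substr($raw, $dot + 1);
    // Re-check on every single request, not just when the cookie was first set.
    return hash_equals(hash_hmac('sha256', $value, COOKIE_KEY), $sig) ? $value : null;
}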
There are various levels of security that can be implemented based on the design considerations of your application.
I would go with the following basic rules:
Sanitize all input, removing known malicious sections (for instance, <script> tags in a rich HTML editor). Regex-based pattern matching is commonly used for this kind of sanitization.
Remove all input that is not in your whitelist of allowed values (see the sketch after this list).
Encode any HTML before storing it in the database, and decode it back when it is retrieved for display.
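As a compact sketch of the whitelist approach (shown here in PHP; the allowed-tag set is illustrative and the same idea carries over to C#):

<?php
// Whitelist approach for a rich-text field: keep only the tags you allow,
// drop everything else (the allowed set below is illustrative).
$allowedTags = '<b><i><strong><em><p><ul><ol><li><br>';

$raw   = $_POST['comment'] ?? '';
$clean = strip_tags($raw, $allowedTags);

// Note: attributes can still smuggle scripts (onclick, javascript: URLs, ...),
// so a real implementation should use a dedicated sanitizer such as HTML Purifier.
echo $clean;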
Edit: @Phoenix talks about validation in this context, so I thought I'd add this. I have said this before and I reiterate: I am not against script-based validation. I only caution people not to rely on it exclusively. A common design pattern is to validate basic criteria using script-based validation and apply rigorous validation on the server side when the data is submitted.