I was wondering something about reCAPTCHA validation.
I currently implemented reCAPTCHA in my form, but it doesn't seem to prevent the form's submit handler from running. So I assume I have to write my own code for that. However!... is this bot-safe? Can't bots just disable the reCAPTCHA validation and still submit the form?
Please enlighten me,
Thanks!
You need to wire up the reCAPTCHA so that a successful check is required before the form can be submitted. As Google's official docs explain, they handle the challenge for the user (or bot) and send you a success flag, either true or false.
If the check does not pass, do not allow them to submit the form. If they disable JS, they will not be able to submit it either, of course.
And yes, reCAPTCHA works well against bots; that is its whole purpose. A leniently configured reCAPTCHA can be a bit too naive, and some bots can detect patterns and still get a submission through, but those cases are few, and you can always demand a score of 0.9 or 1. The score basically measures "how human are you"; a 1 means it is very unlikely that the client is a bot.
Why isn't the submission blocked in your case? That is what you need to investigate, or post a question that includes the relevant code.
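For reference, here is a minimal sketch of the decisive server-side check, assuming Node 18+ (for global fetch) and Express; the siteverify endpoint and the success/score fields come from Google's docs, while the route and environment variable names are placeholders:

```js
// Minimal sketch (Node 18+ for global fetch, Express assumed): verify the
// reCAPTCHA token that arrives with the form before processing anything.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/submit', async (req, res) => {
  // 'g-recaptcha-response' is the field the reCAPTCHA widget adds to the form.
  const token = req.body['g-recaptcha-response'];
  const verification = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: new URLSearchParams({
      secret: process.env.RECAPTCHA_SECRET, // assumption: secret key kept in an env var
      response: token,
    }),
  }).then((r) => r.json());

  // For reCAPTCHA v3, also inspect verification.score (0.0 likely bot, 1.0 likely human).
  if (!verification.success) {
    return res.status(400).send('reCAPTCHA verification failed');
  }
  // ...only now process the form...
  res.send('OK');
});

app.listen(3000);
```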
This module is aimed at Nuxt, so it may be a good starting point.
I have a webform that gets submitted to an endpoint/URL that I cannot access or change.
I would like to know if there is any way to listen for or capture errors that could come from the POST URL. An example would be if the URL is no longer valid, or there is a network issue, so it takes a long time or times out.
So specifically, once the form is submitted, is there any way in JavaScript, on the client, to listen for or be aware of errors caused by issues with the server-side URL the form is posting to? To clarify, the form is submitted as a standard web form, not using Ajax or any newer JS mechanism.
Thanks
When validating forms with JavaScript and PHP, why do we need to include the same validation rules (i.e. correct format, checking that fields are filled in, values within bounds) on both the server and the browser? I understand that JavaScript can be turned off, so why not just put everything on the server and save checking it twice?
Utilising client-side validation generally gives a much nicer experience for the user. For example, instead of the user filling out a form and submitting it only to find out they've made mistakes, with JavaScript we can let them know as they finish typing. That is a much easier/cleaner/quicker way for the user to fix their mistakes.
When using client-side validation, you must also use backend validation, because JavaScript can be manipulated. E.g. what happens if they go into the dev tools inspector and alter the validation script to allow malicious input to be entered? Or inject their own malicious script? Or turn JavaScript off in the browser altogether? That is where backend validation comes into play.
It is important to validate the form submitted by the user because it can contain inappropriate values, so validation is a must.
JavaScript lets you validate the form on the client side, so feedback is faster than waiting for server-side validation; that is why many web developers add JavaScript form validation first.
Through JavaScript, we can validate name, password, email, date, mobile number and other fields, as in the sketch below.
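A rough sketch of such checks, for illustration only (the field IDs and patterns are arbitrary examples, not a standard):

```js
// Rough client-side checks; the field IDs and patterns are arbitrary examples.
function validateForm() {
  const name = document.getElementById('name').value.trim();
  const email = document.getElementById('email').value.trim();
  const mobile = document.getElementById('mobile').value.trim();

  if (name.length === 0) {
    alert('Please enter your name.');
    return false;
  }
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    alert('Please enter a valid email address.');
    return false;
  }
  if (!/^\d{10}$/.test(mobile)) {
    alert('Please enter a 10-digit mobile number.');
    return false;
  }
  return true; // let the submit proceed
}
// Usage: <form onsubmit="return validateForm()"> ... </form>
```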
As others have said, you should do both. Here's why:
Client Side
You want to validate input on the client side first because you can give better feedback to the average user. For example, if they enter an invalid email address and move to the next field, you can show an error message immediately. That way the user can correct every field before they submit the form.
If you only validate on the server, they have to submit the form, get an error message, and try to hunt down the problem.
(This pain can be eased by having the server re-render the form with the user's original input filled in, but client-side validation is still faster.)
Server Side
You want to validate on the server side because you can protect against the malicious user, who can easily bypass your JavaScript and submit dangerous input to the server.
It is very dangerous to trust your UI. Not only can they abuse your UI, but they may not be using your UI at all, or even a browser. What if the user manually edits the URL, or runs their own JavaScript, or tweaks their HTTP requests with another tool? What if they send custom HTTP requests from curl or from a script, for example?
(This is not theoretical; e.g. I worked on a travel search engine that re-submitted the user's search to many airlines, bus companies, etc., by sending POST requests as if the user had filled in each company's search form, then gathered and sorted all the results. Those companies' form JS was never executed, so it was crucial for us that they provide error messages in the returned HTML. Of course, an API would have been nice, but this was what we had to do.)
Not allowing for that is not only naive from a security standpoint, but also non-standard: a client should be allowed to send HTTP by whatever means they wish, and you should respond correctly. That includes validation.
Server side validation is also important for compatibility - not all users, even if they're using a browser, will have JavaScript enabled.
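To make the server-side point concrete, here is a minimal re-check sketch, assuming Express; the route, field names, and rules are only examples:

```js
// Minimal server-side re-check (Express assumed; the route, field names,
// and rules here are illustrative). Never trust what the client sent,
// even if the same rules already ran in the browser.
const express = require('express');
const app = express();
app.use(express.json());

app.post('/register', (req, res) => {
  const { email, age } = req.body;
  const errors = [];
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email || '')) {
    errors.push('Invalid email address.');
  }
  const ageNum = Number(age);
  if (!Number.isInteger(ageNum) || ageNum < 13 || ageNum > 120) {
    errors.push('Age must be a whole number between 13 and 120.');
  }
  if (errors.length > 0) {
    // The request was hand-crafted or the client-side checks were bypassed.
    return res.status(400).json({ errors });
  }
  res.status(200).json({ ok: true }); // input is safe to process further
});
```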
I'd suggest doing the validation server-side and using JS/AJAX for the client side; that way you have one set of validation rules on the server, and you still get a client-side response without a page refresh. Remember that some people choose not to have JS turned on (I had clients working in government departments where JS was turned off), so you need a fallback position.
To clarify: create a pre-validation JS function that posts your data to the server via an AJAX call; the server applies the validation rules to the data and returns a response (XML/JSON/whatever) to your JS validation function. Your JS then processes the response, checking for success or failure. On failure, display an error message (probably from data in the response); on success, continue to post the form data to the server for processing. The server then needs to run the validation on the posted data again (using the same validation function/logic) to ensure nothing has been tampered with along the way. Then, finally, process the data on the server to do whatever you want with it.
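A sketch of that flow, assuming a hypothetical /validate endpoint that returns JSON with a valid flag and a list of errors (the endpoint, response shape, and showErrors helper are all made up for illustration):

```js
// '/validate', the { valid, errors } response shape, and showErrors()
// are all assumptions made for this sketch.
async function preValidate(form) {
  const response = await fetch('/validate', {
    method: 'POST',
    body: new FormData(form), // send the same fields the real submit would
  });
  const result = await response.json();
  if (!result.valid) {
    showErrors(result.errors); // hypothetical helper that renders the messages
    return false;
  }
  return true;
}

document.querySelector('#myForm').addEventListener('submit', async (event) => {
  event.preventDefault(); // hold the real submit until the server says OK
  if (await preValidate(event.target)) {
    // form.submit() skips this handler, so the real post goes through;
    // the server must validate the posted data again anyway.
    event.target.submit();
  }
});
```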
Each time the user sends data to the server, it gets validated there, and the server returns a response saying whether the data is valid (200 HTTP code) or invalid (400 HTTP code).
Example Payload
{ "name": "#batman#123" } //Invalid data gets sent to the server and processed
This whole round trip consumes both memory and time. So, to save both, we don't send that request at all. In the example above, the client entering the name "#batman#123" gets an error immediately, and the server is never involved.
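A tiny sketch of that short-circuit (the letters-and-spaces rule and the /users endpoint are just examples):

```js
// Client-side short-circuit: "#batman#123" is rejected locally, so the
// request is never sent. The letters-and-spaces rule and the /users
// endpoint are just examples.
const NAME_RE = /^[A-Za-z ]+$/;

function submitName(name) {
  if (!NAME_RE.test(name)) {
    console.error('Invalid name:', name); // no round trip, no server time spent
    return;
  }
  // Only valid data costs a request (and the server still re-checks it).
  fetch('/users', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ name }),
  });
}

submitName('#batman#123'); // caught locally, never reaches the server
```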
From my experience, and in my opinion, you only need to do validation in JS. Yes, it may sound weird, but there is no need for hand-written server-side validation. Just throw an exception and return a 400 response code.
Using this approach you will avoid headaches with issues like:
"we get different error messages for every field when the user uses German. Why does that happen?"
You will also be able to keep a clean RESTful service on the backend and a nice JS frontend. Of course, if you go this way, you must make sure your client side is smart enough.
Notes:
What happens if the user turns off JS in their browser?
It means the user doesn't want to use the vendor-provided client; they will probably just see a "please enable JS" notice. It also means such a user is expected to send valid requests on their own (following the vendor's API).
How is the server-side code going to decide it needs to throw an exception if you don't do validation there?
The database can throw an exception because a foreign, unique, or primary key constraint failed, and the server can throw an exception due to a datatype mismatch. A link to newly added data will simply come out malformed or invisible.
You can manipulate client-side JavaScript to do whatever you want.
If I manipulate your JS validation and the data goes into a MySQL database, I can mount an XSS attack.
If you send a valid request but with an XSS payload (for example, in a comment field), you will only be able to perform that attack against yourself, because only you are using an invalid (or self-written) client that allows XSS.
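For what it's worth, the exception-driven backend this answer describes could look roughly like this, assuming Express and a generic db.insert() that throws on constraint violations (both are assumptions, not a specific library):

```js
// Exception-driven backend as described above: no hand-written checks,
// just map the thrown error to a 400.
const express = require('express');
const app = express();
app.use(express.json());
const db = require('./db'); // hypothetical data layer whose insert() throws on bad input

app.post('/users', async (req, res) => {
  try {
    // A foreign/unique/primary key violation or a datatype mismatch throws here.
    const user = await db.insert('users', req.body);
    res.status(201).json(user);
  } catch (err) {
    // Any client that bypassed the JS validation gets a 400 back.
    res.status(400).json({ error: 'Invalid request' });
  }
});
```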
My client needs me to implement captcha on his form. The form's action is set to an external page, to which we do not have access.
I wanted to use Google's reCAPTCHA, but it seems that the piece of code which does the checking needs to be placed in the target page (which we cannot access).
What is the solution? I tried using a simple JavaScript array and some jQuery checking of the value, but it seems that after a couple of months spammers learned how to dig the values out of the page code (yes, the values are written there; it's JavaScript, and I do not know a better way), because the spam is arriving again.
A good client-side way would be even better. If you know a script or some code that could be used here, it would be very appreciated.
Host another server with the captcha and submit your form there. On success, have the captcha server submit the form to the endpoint you don't have access to.
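Roughly, such a relay could look like this, assuming Node 18+ with Express and reCAPTCHA; the external endpoint URL is a placeholder:

```js
// Relay sketch: 1) verify the captcha on a server you control,
// 2) on success, forward the form to the external endpoint you cannot modify.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

const EXTERNAL_ENDPOINT = 'https://example.com/external-form-handler'; // placeholder

app.post('/relay', async (req, res) => {
  const { 'g-recaptcha-response': token, ...fields } = req.body;

  // Step 1: verify the captcha token with Google.
  const verify = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    body: new URLSearchParams({ secret: process.env.RECAPTCHA_SECRET, response: token }),
  }).then((r) => r.json());
  if (!verify.success) {
    return res.status(400).send('Captcha failed');
  }

  // Step 2: forward the remaining fields to the endpoint you cannot change.
  const upstream = await fetch(EXTERNAL_ENDPOINT, {
    method: 'POST',
    body: new URLSearchParams(fields),
  });
  res.status(upstream.status).send(await upstream.text());
});

app.listen(3000);
```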
I am a web developer for a web site that is occasionally plagued by form bots. Recently I received an error notification about a form submission that should be impossible for a human user: you cannot submit the form without JavaScript enabled, yet the server-side script received a form field value that the JavaScript validation will not allow.
I suspect that a form bot managed to submit the form without running the JavaScript, but I'm not entirely sure, because a real user had a similar problem. I know how to use honeypot fields as a countermeasure against form bots, but I need to test my countermeasures. Therefore I need a working form bot to attack my form, so I can see what the result would be and verify that my countermeasures work.
I think you can use PHP with cURL to submit web forms, but I can't find any sample code. I would prefer to use an actual form bot so I can be sure that the honeypot fields aren't easily circumvented.
Does anyone know what is currently being used to attack web forms? How do you test your countermeasures to ensure they are effective?
Personally, I use a Firefox extension called Tamper Data. You submit the form normally, but then you can modify the HTTP parameters (variables, cookies, etc.) before the request is sent to the server. That way, you can manually change the validated fields. You could also automate it with PHP and cURL...
The thing is, you don't want to run an actual bot against your form, because that would only test one (maybe two) methods of breaking your validation. You want to write your own; that way you can test every possible combination you can think of. If you automate it with PHP/cURL, you can then run the test after every change (an integration test) to verify that you didn't "break" anything. It shouldn't be too hard to write, since the cURL functions are pretty well documented...
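The answer suggests PHP/cURL; the same idea sketched in Node (18+, for the global fetch) might look like this, posting straight to the handler so none of the page's JavaScript ever runs (the URL and field names are placeholders):

```js
// Test "bot" sketch: post straight to the form handler, skipping the page's
// JavaScript entirely, exactly like a real form bot would.
// The URL and field names are placeholders.
const TARGET = 'https://example.com/form-handler';

async function attack(fields) {
  const res = await fetch(TARGET, {
    method: 'POST',
    body: new URLSearchParams(fields), // standard form-encoded body
  });
  console.log(res.status, await res.text());
}

// A value the client-side validation would never allow:
attack({ email: 'not-an-email', comment: 'spam' });

// Also fill the honeypot field to confirm the countermeasure rejects it:
attack({ email: 'real@example.com', comment: 'hi', website: 'http://spam.example' });
```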
What about captchas to protect your form?