I have a website with a simple form that posts user input as tweets to a Twitter account.
There is no CAPTCHA or any other kind of anti-spam system in place, so it's only logical that spam bots have been posting their bogus content like crazy.
Now I want to build validation that does not require any sort of CAPTCHA.
This is what I've been thinking and I need your honest opinion.
If robots are posting their stuff without using a mouse or keyboard (probably via some sort of generated script), then that actually means they never give any kind of focus to the form itself.
My idea is to add a simple JS condition so that the form cannot be submitted unless it has received focus at some point.
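Roughly something like this minimal sketch (untested; the form id is just a placeholder):

```javascript
// Sketch of the idea: block submission unless the form received focus at least once.
var formHasFocus = false;
var form = document.getElementById('tweet-form'); // placeholder id

form.addEventListener('focusin', function () {
  formHasFocus = true; // fires whenever any field inside the form gains focus
});

form.addEventListener('submit', function (e) {
  if (!formHasFocus) {
    e.preventDefault(); // nothing was ever focused, probably not a human
  }
});
```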
What do you think about this?
Any other sort of similar ideas?
thank you!
The idea of tweeting messages that have not been validated sounds pretty insecure on its own. Still, there are a few ways to minimize the spam.
For instance, you could implement a honeypot variation: add another field to the submission form, but in a manner that's hidden from a live user. A bot will mistakenly fill it out, and you can reject the submission on the server side. Believe it or not, most comment spam comes from crawlers, so even a basic approach like this can drastically decrease the amount of malicious content submitted.
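As an illustration only (assuming a Node/Express back end; the field name and route are made up), the server-side check could look roughly like this:

```javascript
// Hypothetical Express handler: reject any submission where the hidden
// "website" honeypot field was filled in.
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: false }));

app.post('/tweet', (req, res) => {
  if (req.body.website) {              // a real user never sees this field
    return res.status(400).send('Rejected');
  }
  // ...forward the message to Twitter here...
  res.send('OK');
});
```

Hiding the field with CSS (e.g. an off-screen wrapper) tends to work better than type=hidden, since some bots skip hidden inputs.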
You are correct in your assumption that most robots post "without using mouse or keyboard" - it is far easier to do that than to simulate client-side activity. Again, you can use this to your advantage: add a field that gets populated by JavaScript and validate its content on the server side.
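For example (just a sketch; the field and form names are invented), the client-side half could be as simple as:

```javascript
// Fill the hidden "js_check" field once the page has loaded; a crawler that
// never executes this script will submit the field empty.
window.addEventListener('load', function () {
  var field = document.querySelector('#tweet-form input[name="js_check"]');
  if (field) {
    field.value = 'human-' + Date.now(); // any value your server knows how to verify
  }
});
```

On the server you would then reject any submission where js_check is missing or doesn't match the expected format.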
When you pay through an online payment system (with or without 3-D Secure), you fill in the form and confirm, and from a strictly visual point of view things seem pretty straightforward. Behind the scenes, however, there are often multiple redirections, which are handled through JavaScript.
Basically, your data is submitted and you land on a page with a pre-filled form, which is immediately submitted through JavaScript, sometimes multiple times in a row (with a fast enough connection, you don't even see those steps in the browser).
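Each intermediate page essentially boils down to something like this (simplified sketch; the real pages obviously carry signed payment data in hidden fields):

```javascript
// The page contains a pre-filled, usually hidden, form pointing at the next hop,
// plus a script that submits it as soon as the page has loaded.
window.addEventListener('load', function () {
  document.forms[0].submit(); // the user never has to click anything
});
```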
I was wondering why they do it that way (instead of proper back-end redirections), and I can't find an answer to it.
My guess is that it's just to make it harder for scripts to follow the flow, but it's still possible to do so (so why bother), and in my opinion the "dirty" aspect of it (from a coder's point of view) isn't worth the minor obstacle it puts in front of scripts attempting automated validation.
Do you have any insights on this?
In my view, JavaScript can be used to tell bots and humans apart quite effectively.
You can already see how Google validates visitors with its checkbox (reCAPTCHA).
It's just a simple checkbox, but it's quite complicated to write a bot that can verify or pass the check. (I still don't know how to get past it myself ^)
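For completeness, the checkbox only does half the job; the token it adds to the form still has to be verified server-side. Roughly like this (a sketch assuming reCAPTCHA v2 and a runtime with fetch, e.g. Node 18+; the secret key is a placeholder):

```javascript
// Send the g-recaptcha-response value from the submitted form to Google's
// verification endpoint and check the result.
async function verifyRecaptcha(token) {
  const res = await fetch('https://www.google.com/recaptcha/api/siteverify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ secret: 'YOUR_SECRET_KEY', response: token }),
  });
  const data = await res.json();
  return data.success === true;
}
```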
My question is simple: can spam bots (I mean blog-comment spam bots) execute JavaScript on my site?
For example, Laravel requires a CSRF token on forms. If I expose the CSRF token as a JavaScript variable and append it to the form using jQuery's append method, will spam bots still be able to submit the form?
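To make sure we're talking about the same thing, I mean roughly this (sketch; the selector is just an example):

```javascript
// The token is echoed into a JS variable by the Blade template and only
// injected into the form at runtime.
var csrfToken = 'TOKEN_FROM_TEMPLATE'; // e.g. output by {{ csrf_token() }}

$(function () {
  $('#comment-form').append(
    $('<input>', { type: 'hidden', name: '_token', value: csrfToken })
  );
});
```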
Of course, since the source code is on the client side, they can either run it, alter it before running it, or reverse engineer it and populate the form field with the required value directly. That's why any spam protection mechanism always needs a server-side part; otherwise you're just making it harder to break... That said, consider whether spamming your site is worth the effort it would take to reverse engineer your protection.
So, our company develops anti-spam for websites, and I can say that many spam bots (about 30%) are able to execute JavaScript.
Users on my site have a publicly-visible profile where they accept subscriptions via a simple HTML form. These subscriptions are merged into this user's email list.
Someone could write a script that registers emails constantly to destroy/flood a user's list. This could be mitigated by using IP-based rate-limiting, but this solution does not work if the script runs in a distributed environment.
The only strategy I can think of is using a CAPTCHA, but I'd really like to avoid doing this. What else can I try?
Your question essentially boils down to "How can I tell humans and computers apart without using a CAPTCHA?"
This is indeed quite a complex question with a lot of different answers and approaches. In the following I'll try to name a few. Some of the ideas were taken from this article (German).
Personally, I think some kind of CAPTCHA would be a perfect solution. It doesn't necessarily have to be warped text in an image; you could also use logic puzzles or simple calculations. But with the following methods you can try to avoid CAPTCHAs; keep in mind that these methods will always be easier to bypass than CAPTCHAs that require user interaction.
Use a hidden field as a honeypot in your form (either type=hidden or use CSS). If this field is filled out (or has another value than you'd expect), you have detected a bot (spam bots usually don't perform semantic analyses, so they fill out everything they find). However this won't work correctly if the bot is specifically targeted at you or simply learns the name of the field and avoids it.
Use JavaScript to check how fast the form is submitted. Of course humans need some time (at least a few seconds) to fill in a form, whereas bots are a lot faster (a minimal sketch of this and the focus-based check follows this list).
You should also check if the form is submitted more than once in a short time. This could be done via JavaScript if you use AJAX forms and/or server-side.
The drawback is (as you mentioned yourself) that it won't work in distributed systems.
Use JavaScript to detect focus events, clicks or other mouse events that indicate you're dealing with a human. This method is described in this blog article (including some source code examples).
Check if the user works with a standard web browser; spammers sometimes use self-written programs. You could check the user agent string, but this can be manipulated easily. Feature detection would be another possibility.
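To illustrate methods 2 and 3, a minimal client-side sketch could look like this (the form id is a placeholder, and the result still has to be re-checked on the server):

```javascript
// Record when the form was rendered and whether any human-like interaction
// (focus, click, key press) happened before it is submitted.
var form = document.getElementById('subscribe-form'); // placeholder id
var renderedAt = Date.now();
var interacted = false;

['focusin', 'click', 'keydown'].forEach(function (evt) {
  form.addEventListener(evt, function () { interacted = true; });
});

form.addEventListener('submit', function (e) {
  var tooFast = Date.now() - renderedAt < 3000; // submitted in under ~3 seconds
  if (tooFast || !interacted) {
    e.preventDefault(); // or set a flag field and let the server decide
  }
});
```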
Of course methods 2-4 won't work if a user has JavaScript disabled. In this case you could display a regular CAPTCHA in <noscript> tags for example. In any case you should always combine several methods to get an effective and user friendly test.
What finally comes to my mind (in your specific case) is checking the validity of the email addresses entered (not only syntactically but also check if the addresses really exist). This can be done in several ways (see this question on SO) - none of them is really reliable, though. So, again, you will have to combine different methods in order to reliably tell humans and bots apart.
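As an illustration of the syntactic-plus-existence check (a Node sketch; note that an MX lookup only tells you the domain can receive mail, not that the mailbox exists):

```javascript
// Cheap plausibility check: rough syntax test plus an MX lookup on the domain.
const dns = require('dns').promises;

async function looksDeliverable(address) {
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(address)) return false; // very rough syntax check
  const domain = address.split('@')[1];
  try {
    const records = await dns.resolveMx(domain);
    return records.length > 0;
  } catch (err) {
    return false; // non-existent domain, no MX records, DNS failure, ...
  }
}
```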
Assuming that whoever starts spamming your website is specifically targeting it (not a random spam bot) and will actively work around all countermeasures, then the only option is some kind of CAPTCHA, as anything else can be bypassed automatically.
All non-CAPTCHA methods of preventing fake/spam submissions work either by exploiting flaws in the script doing the automated submissions or by analyzing the submitted content. With this type of submission, content analysis isn't really an option, so what's left is the wide variety of automated-submission countermeasures used to fight, for example, comment spam:
CSS-based solutions (such as this one: http://wordpress.org/extend/plugins/spam-honeypot/)
JS-based solutions: a hidden field is filled with data computed by JavaScript; if the content is submitted by something as simple as a spam script that doesn't support JavaScript, it's easily detectable.
It's possible to work around those two if the attacker knows they are there - for example when your website is a selected, not random, target.
To summarize: there are plenty of solutions that will quite successfully stop random spam submissions, but if someone is specifically targeting your website, the only thing that really works is something computers are bad at: a CAPTCHA.
I'm working on an ajax application that makes extensive use of jQuery. I'm not worried about whether or not the application degrades gracefully.
So far I have been using Malsup's excellent jQuery form plugin to create forms that submit ajax requests. (For example, to submit updated record information.)
However I am considering dispensing with form tags altogether, and instead manually constructing $.post() statements when needed.
I'm wondering: what are people's thoughts on the best way to submit a large amount of information to the server, given that graceful degradation is not a requirement? Are there perils in just using $.post()?
Thanks in advance
Nope, not at all. That's all the plugin is doing anyway, under the hood.
The form tag does at least give you a nice structural grouping of your form fields, so that you can query for them more easily, though.
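For example (a sketch; the URLs and field names are made up), with a form element you can grab everything in one call, while without one you end up collecting the values by hand:

```javascript
// With a <form> you can serialize every field at once...
$.post('/records/update', $('#record-form').serialize(), function (response) {
  console.log('saved', response);
});

// ...without one you have to gather the values yourself.
$.post('/records/update', {
  id: $('#record-id').val(),
  name: $('#record-name').val(),
  notes: $('#record-notes').val(),
});
```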
You've said it yourself - the peril is that it won't degrade gracefully!
Have jQuery add an extra field called UsingjQuery, then output your results based on whether this field is set or not.
This way users with javascript turned off (mobile clients, etc) will still be able to submit.
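Something along these lines (just a sketch):

```javascript
// Mark submissions that came from a JS-capable client.
$(function () {
  $('form').append('<input type="hidden" name="UsingjQuery" value="1">');
});
```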
Edit: I saw you mentioned 'degrades gracefully' but somehow missed that it said 'not worried about' first!
Having a form tag does allow one JavaScript trick that jQuery doesn't expose directly: resetting every field at once via the native reset() method, e.g. $('form')[0].reset() ...
I stopped using FORM tags some time ago, but I also have a captive set of users, so I know exactly what platform they are using.
I agree with David Pfeffer; however, I would also point out that form tags can occasionally get in your way. I've specifically had problems where I wanted multiple forms inside a table, but that caused really ugly positioning problems. So I wound up dropping just the input elements in, copying them into a form that was elsewhere on the page, and then submitting that form. It was a bit of a pain in the butt.
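From memory, the workaround looked roughly like this (the IDs are made up for the example):

```javascript
// Copy the loose inputs from the table row into a real form elsewhere on the
// page, then submit that form.
$('#save-row').on('click', function () {
  var $hiddenForm = $('#hidden-submit-form');
  $hiddenForm.find('.copied').remove();                        // clear previous copies
  $('#row :input').clone().addClass('copied').appendTo($hiddenForm);
  $hiddenForm.submit();
});
```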
If you can do away with forms, and aren't worried about degradation, then I would highly consider it.
Does anyone have suggestions for creating an extremely simple form verification field using jQuery? I need to block basic form spam. I would love to have some type of 1+1= field that is used to make sure it's a human submitting the form. I don't have the ability to put .php or .asp on the site, so it would need to rely on jQuery or some other method.
Any suggestions??
It's highly possible the robot script doesn't even run JavaScript, so a jQuery-only check would be useless; the bot would simply post the form data directly. I don't think this is feasible without a server-side solution.
You cannot solve this issue on the client side (with jQuery). In a spam-submitting situation, the client cannot be trusted.
To solve this issue you will need to modify things server side.
A couple of suggestions that are probably enough:
Send a hash of some session variables unique to this user along with the form, and validate it when the form comes back; this prevents replay attacks by detecting bad hashes (see the sketch after these suggestions).
Add fields that should be left blank, and hide them using css. Spam bots will fill them out. Filter out submissions that contain data in these fields.
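A rough sketch of the first suggestion (assuming Node with express-session; the secret, field names, and helper are placeholders):

```javascript
const crypto = require('crypto');
const SECRET = 'server-side-secret'; // placeholder, never sent to the client

// When rendering the form: derive a token from the visitor's session id and
// embed it in a hidden field.
function formToken(sessionId) {
  return crypto.createHmac('sha256', SECRET).update(sessionId).digest('hex');
}

// When the form comes back: recompute the token and check the honeypot field.
function isValidSubmission(req) {
  return (
    req.body.form_token === formToken(req.session.id) && // hash still matches
    !req.body.leave_me_blank                             // honeypot stayed empty
  );
}
```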
I've been searching for something like that too, and I think I've found something useful that also looks nice:
http://www.webdesignbeach.com/beachbar/ajax-fancy-captcha-jquery-plugin
I haven't tried this yet though.
Regards
Sigersted