JavaScript: Find Captcha on a web page

I am working on a captcha decode/break Firefox extension, and I want to find the captcha field on a page if one exists. I want to make this generic, so that whenever a page is loaded, I get the captcha image. In short: whenever a page is loaded, the extension checks for a captcha and, if there is one, gets its image.
One approach I tried is to find the text 'captcha' on the page and then look for an img tag near it. Can anyone suggest a better solution that will work on as many sites as possible? Thanks in advance.

Saadsaf,
In one of your comments you mentioned that:
"...may be u r right, But actually i'm not using it for any enehical purpose, I'm just collecting captchas"
Collecting Captchas is pretty much a useless task.
How Captchas Work:
The way a Captcha works is that the user enters the characters shown in the image and then submits a form, at which time the characters entered are compared to those in the image. If the two sets of characters match perfectly, a 'Pass' condition is declared and the guarded process is allowed to continue.
The characters in the Captcha image are generally not stored anywhere the user has access to; otherwise there would be a severe security issue for Captchas. Most commonly, Captchas are compared server-side, so that the client has as little access to the character string as possible.
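As a minimal sketch of what such a server-side comparison looks like (assuming a Node/Express-style handler with session middleware; the route and field names here are made up for illustration):

// The expected text was stored in the session when the image was generated,
// so the client never has access to it.
app.post('/submit', function (req, res) {
  if (req.body.captcha === req.session.captchaText) {
    res.send('Pass'); // guarded process continues
  } else {
    res.status(403).send('Captcha failed');
  }
});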
Why collecting Captchas is a poor idea:
If you were to "collect" Captchas for use on your sites, you would have to look at each and every one (thousands, if you set up your code to collect them for you) and then somehow store the correct characters corresponding to each image for later use.
Between writing code to find Captchas for you and then going through them all manually to read the characters in each one, you would waste weeks or months of your life.
What to do instead:
If you are interested in using Captchas on your sites to protect forms and prevent spam and abusive robots, your best bet is to learn how to create your own custom Captchas. There are endless resources at your disposal for just such a thing, including YouTube, Google, Stack Overflow, and more.
Where to start:
Hop on Google and search "How to create a Captcha". That is a good start. Other useful search terms might be "Custom Captcha", "PHP Captcha", "JavaScript and PHP Captcha"... Try the same searches on YouTube. Search for "Captcha" here on Stack Overflow.
Good luck. I hope you have only the best of intentions in mind when using this site.

Related

XSS attack in JavaScript, how to unsanitize HTML [duplicate]

User equals untrustworthy. Never trust untrustworthy user's input. I get that. However, I am wondering when the best time to sanitize input is. For example, do you blindly store user input and then sanitize it whenever it is accessed/used, or do you sanitize the input immediately and then store this "cleaned" version? Maybe there are also some other approaches I haven't thought of in addition to these. I am leaning more towards the first method, because any data that came from user input must still be approached cautiously, whereas the "cleaned" data might still unknowingly or accidentally be dangerous. Either way, what method do people think is best, and for what reasons?
Unfortunately, almost none of the participants clearly understand what they are talking about. Literally. Only Kibbee managed to get it straight.
This topic is all about sanitization. But the truth is, the broad "general purpose sanitization" everyone is so eager to talk about simply doesn't exist.
There are a zillion different mediums, each of which requires its own distinct data formatting. Moreover, even a single medium requires different formatting for its different parts. Say, HTML formatting is useless for JavaScript embedded in an HTML page. Or, string formatting is useless for numbers in an SQL query.
As a matter of fact, "sanitization as early as possible", as suggested in the most upvoted answers, is simply impossible, because you cannot tell in advance in which medium, or which part of a medium, the data will be used. Say we are preparing to defend against "SQL injection", escaping everything that moves. But whoops! Some required fields weren't filled in, and we have to put the data back into the form instead of the database... with all the slashes added.
On the other hand, say we diligently escaped all the "user input"... but in the SQL query we have no quotes around it, because it is a number or an identifier. And no "sanitization" ever helped us.
On the third hand, okay, we did our best in sanitizing the terrible, untrustworthy and disdained "user input"... but in some inner process we used this very data without any formatting (as we had already done our best!), and whoops! We've got second-order injection in all its glory.
So, from a real-life usage point of view, the only proper way is:
formatting, not some vague "sanitization"
right before use
according to the rules of the particular medium
and even following the sub-rules required for that medium's different parts.
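As a sketch of what "formatting right before use" means in JavaScript terms (escapeHtml, userInput, and output are illustrative names, not from any particular library):

// HTML body context: encode at the moment of writing into the page.
function escapeHtml(s) {
  return String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;')
                  .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}
output.innerHTML = '<b>' + escapeHtml(userInput) + '</b>';

// URL context: the same value needs a completely different rule.
var link = '/search?q=' + encodeURIComponent(userInput);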
It depends on what kind of sanitizing you are doing.
For protecting against SQL injection, don't do anything to the data itself. Just use prepared statements; that way, you don't have to worry about messing with the data the user entered and having it negatively affect your logic. You have to sanitize a little bit, to ensure that numbers are numbers and dates are dates, since everything is a string as it comes from the request, but don't try to do things like block keywords.
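For example, a minimal parameterized query in JavaScript using the node-postgres (pg) module's placeholder syntax (the table and variable names are made up):

var pg = require('pg');
var pool = new pg.Pool();

// The driver sends the SQL and the value separately, so the input is
// never interpreted as SQL, no matter what the user typed.
pool.query('SELECT * FROM users WHERE age = $1', [userAge])
    .then(function (result) { console.log(result.rows); });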
For protecting against XSS attacks, it would probably be easier to fix the data before it's stored. However, as others have mentioned, sometimes it's nice to have a pristine copy of exactly what the user entered, because once you change it, it's lost forever. It's almost too bad there's no foolproof way to ensure your application only puts out sanitized HTML, the way you can ensure you don't get caught by SQL injection by using prepared queries.
I sanitize my user data much like Radu...
First, client-side, using both regexes and taking control over the allowable characters input into given form fields, using JavaScript or jQuery tied to events such as onChange or onBlur, which removes any disallowed input before it can even be submitted. Realize, however, that this really only has the effect of letting those users in the know see that the data is going to be checked server-side as well. It's more a warning than any actual protection.
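For instance, a small sketch of that kind of event-driven filtering (jQuery-style; the field selector and allowed character set are made up):

// Strip anything but letters, digits, and spaces when the field loses focus.
$('#username').on('blur', function () {
  this.value = this.value.replace(/[^A-Za-z0-9 ]/g, '');
});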
Second, and I rarely see this done anymore these days: the first check done server-side is to check the location the form is being submitted from. By only allowing form submission from a page that you have designated as a valid location, you can kill the script BEFORE you have even read in any data. Granted, that in itself is insufficient, as a good hacker with their own server can 'spoof' both the domain and the IP address to make it appear to your script that the request is coming from a valid form location.
Next, and I shouldn't even have to say this, but always, and I mean ALWAYS, run your scripts in taint mode. This forces you to not get lazy, and to be diligent about step number 4.
Sanitize the user data as soon as possible, using well-formed regexes appropriate to the data expected from any given field on the form. Don't take shortcuts like the infamous 'magic horn of the unicorn' to blow through your taint checks... or you may as well just turn off taint checking in the first place for all the good it will do for your security. That's like giving a psychopath a sharp knife, baring your throat, and saying 'You really won't hurt me with that, will you?'
And here is where I differ from most others in this fourth step: I only sanitize the user data that I am actually going to USE in a way that may present a security risk, such as in any system calls, assignments to other variables, or any writing of stored data. If I am only using the data input by a user to make a comparison to data I have stored on the system myself (therefore knowing that my own data is safe), then I don't bother to sanitize the user data, as I am never going to use it in a way that presents a security problem. For instance, take a username input as an example. I use the username entered by the user only to check it against a match in my database, and if it matches, after that I use the data from the database to perform all other functions I might call for in the script, knowing it is safe, and never use the user's data again after that.
Last is to filter out all the attempted auto-submits by robots these days, with a 'human authentication' system such as a Captcha. This is important enough these days that I took the time to write my own 'human authentication' scheme that uses photos and an input for the 'human' to enter what they see in the picture. I did this because I've found that Captcha-type systems really annoy users (you can tell by their squinted-up eyes from trying to decipher the distorted letters... usually over and over again). This is especially important for scripts that use either SendMail or SMTP for email, as these are favorites for your hungry spam-bots.
To wrap it up in a nutshell, I'll explain it as I do to my wife: your server is like a popular nightclub, and the more bouncers you have, the less trouble you are likely to have in the nightclub. I have two bouncers outside the door (client-side validation and human authentication), one bouncer right inside the door (checking for a valid form submission location... 'Is that really you on this ID?'), and several more bouncers in close proximity to the door (running taint mode and using good regexes to check the user data).
I know this is an older post, but I felt it important enough for anyone who may read it after my visit here to realize there is no 'magic bullet' when it comes to security, and it takes all of these working in conjunction with one another to make your user-provided data secure. Just using one or two of these methods alone is practically worthless, as their power only exists when they all team together.
Or in summary, as my Mum would often say... 'Better safe than sorry'.
UPDATE:
One more thing I am doing these days is Base64 encoding all my data, and then encrypting the Base64 data that will reside in my SQL databases. It takes about a third more total bytes to store it this way, but the security benefits outweigh the extra size of the data, in my opinion.
I like to sanitize it as early as possible, which means the sanitizing happens when the user tries to enter invalid data. If there's a TextBox for their age and they type in anything other than a number, I don't let the keypress for the letter go through.
Then, whatever is reading the data (often a server), I do a sanity check when I read in the data, just to make sure that nothing slipped in due to a more determined user (such as hand-editing files, or even modifying packets!).
Edit: Overall, sanitize early and sanitize any time you've lost sight of the data for even a second (e.g. File Save -> File Open)
The most important thing is to always be consistent in when you escape. Accidental double sanitizing is lame and not sanitizing is dangerous.
For SQL, just make sure your database access library supports bind variables which automatically escapes values. Anyone who manually concatenates user input onto SQL strings should know better.
For HTML, I prefer to escape at the last possible moment. If you destroy user input, you can never get it back, and if they make a mistake they can edit and fix later. If you destroy their original input, it's gone forever.
Early is good, definitely before you try to parse it. Anything you're going to output later, or especially pass to other components (e.g., shell, SQL, etc.), must be sanitized.
But don't go overboard - for instance, passwords are hashed before you store them (right?). Hash functions can accept arbitrary binary data. And you'll never print out a password (right?). So don't parse passwords - and don't sanitize them.
Also, make sure that you're doing the sanitizing from a trusted process - JavaScript/anything client-side is worse than useless security/integrity-wise. (It might provide a better user experience to fail early, though - just do it in both places.)
My opinion is to sanitize user input as soon as possible, client-side and server-side. I do it like this:
(client side) Allow the user to enter only specific keys in the field.
(client side) When the user goes to the next field, on onblur, test the input they entered against a regexp, and notify the user if something is not good.
(server side) Test the input again: if the field should be an INTEGER, check for that (in PHP you can use is_numeric()); if the field has a well-known format, check it against a regexp; all others (like text comments), just escape them. If anything is suspicious, stop script execution and return a notice to the user that the data they entered is invalid.
If something really looks like a possible attack, the script sends an email and an SMS to me, so I can check and maybe prevent it as soon as possible. I just need to check the log where I log all user input, along with the steps the script made before accepting the input or rejecting it.
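A rough JavaScript equivalent of that server-side integer check (the req/res objects assume an Express-style handler; the field name is made up):

// A stand-in for PHP's is_numeric(), restricted to whole numbers.
function isInteger(value) {
  return /^[0-9]+$/.test(value);
}

if (!isInteger(req.body.age)) {
  // Stop here and tell the user the data is invalid.
  return res.status(400).send('The data you entered is invalid.');
}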
Perl has a taint option which considers all user input "tainted" until it's been checked with a regular expression. Tainted data can be used and passed around, but it taints any data that it comes in contact with until untainted. For instance, if user input is appended to another string, the new string is also tainted. Basically, any expression that contains tainted values will output a tainted result.
Tainted data can be thrown around at will (tainting data as it goes), but as soon as it is used by a command that has effect on the outside world, the perl script fails. So if I use tainted data to create a file, construct a shell command, change working directory, etc, Perl will fail with a security error.
I'm not aware of another language that has something like "taint", but using it has been very eye-opening. It's amazing how quickly tainted data gets spread around if you don't untaint it right away. Things that are natural and normal for a programmer, like setting a variable based on user data or opening a file, seem dangerous and risky with tainting turned on. So the best strategy for getting things done is to untaint as soon as you get some data from the outside.
And I suspect that's the best way in other languages as well: validate user data right away so that bugs and security holes can't propagate too far. Also, it ought to be easier to audit code for security holes if the potential holes are in one place. And you can never predict which data will be used for what purpose later.
Clean the data before you store it. Generally you shouldn't be performing ANY SQL actions without first cleaning up input. You don't want to subject yourself to a SQL injection attack.
I sort of follow these basic rules.
Only do modifying SQL actions, such as INSERT, UPDATE, and DELETE, through POST. Never GET.
Escape everything.
If you are expecting user input to be something, make sure you check that it is that something. For example, if you are requesting a number, then make sure it is a number. Use validations.
Use filters. Clean up unwanted characters.
Users are evil!
Well, perhaps not always, but my approach is to always sanitize immediately to ensure nothing risky goes anywhere near my backend.
The added benefit is that you can provide feed back to the user if you sanitize at point of input.
Assume all users are malicious.
Sanitize all input as soon as possible.
Full stop.
I sanitize my data right before I do any processing on it. I may need to take the first and last name fields and concatenate them into a third field that gets inserted into the database. I'm going to sanitize the input before I even do the concatenation so I don't get any kind of processing or insertion errors. The sooner the better. Even using JavaScript on the front end (in a web setup) is ideal, because that happens without any data going to the server to begin with.
The scary part is that you might even want to start sanitizing data coming out of your database as well. The recent surge of ASPRox SQL injection attacks that have been going around is doubly lethal because it infects all database tables in a given database. If your database is hosted somewhere where multiple accounts are hosted in the same database, your data becomes corrupted because of somebody else's mistake, and now you've joined the ranks of sites serving malware to your visitors through no initial fault of your own.
Sure this makes for a whole lot of work up front, but if the data is critical, then it is a worthy investment.
User input should always be treated as malicious before making it down into the lower layers of your application. Always sanitize input as soon as possible; it should not, for any reason, be stored in your database before being checked for malicious intent.
I find that cleaning it immediately has two advantages. One, you can validate against it and provide feedback to the user. Two, you do not have to worry about consuming the data in other places.

Make JS code unreadable/hard to "hack"

I want to write a little game where the user has to click on appearing elements/objects within a given time. In detail, the objects appear in holes in the ground, and after x seconds the objects disappear. The gamer has y lives, and all clicks get counted until he loses the game.
After that, his high score gets posted to a database (via form POST or AJAX). Long story short: how can I keep the user from faking his high score before it is sent? The programming language is JS.
I know it's not possible to hide all the code and make it un-hackable. But I think it's enough if the code is so difficult that the user has to do a lot of work to understand where he has to intervene to send faked data.
Does anybody have some ideas on how to make the code as difficult as possible?
Thanks in advance for any ideas :)
You should never really try to make your source code unreadable. It will create as great a headache for yourself as any obstruction to anyone modifying it.
That said, you could refactor all your variable names to complete gibberish and play with whitespace, but anyone seriously trying to understand your code could revert that in a decent text editor. To make it any more complex would take away from the efficiency of your program - otherwise you could fill it with useless calls to functions that don't do anything and strange incrementation of counters that the program does not depend on.
There are compressors that do exactly the job you want! Some of them can be downloaded and used as offline tools; some are accessible directly via the web:
http://javascriptcompressor.com
Like jQuery and others, you can keep your readable code for maintaining the scripts and deliver a faster-loading packed version that is hardly readable.
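As a purely illustrative before/after (the function and variable names are invented; real compressor output varies):

// Readable source:
function addScore(points) {
  totalScore = totalScore + points;
  updateDisplay(totalScore);
}

// Roughly what a packer might emit after renaming and whitespace removal:
function a(b){c=c+b;d(c)}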
How about this:
Create two PHP pages, one containing the game interface and the other containing the game's code. Program the first one so that it creates a one-time-use string that the <script> tag will pass along as a parameter when it calls the JS code from the second one. Program the second one so that it checks the validity of the string sent. If the string is valid, the script should output the JS code, then invalidate the string.
Then, when the user copies the URL of the script, pastes it into his browser, and hits "Return," all he sees is either a blank page or a "not authorized" message.
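A minimal sketch of that idea, written as a Node/Express-style server instead of PHP (the routes, the token scheme, and gameCode are all made up for illustration; use a cryptographically secure token generator in practice):

var tokens = {}; // one-time-use strings, keyed by value

app.get('/game', function (req, res) {
  var token = Math.random().toString(36).slice(2); // illustration only
  tokens[token] = true;
  res.send('<script src="/game.js?key=' + token + '"></script>');
});

app.get('/game.js', function (req, res) {
  if (tokens[req.query.key]) {
    delete tokens[req.query.key]; // invalidate after first use
    res.type('application/javascript').send(gameCode); // gameCode: the game's JS source
  } else {
    res.status(403).send('not authorized');
  }
});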

How do you troubleshoot Google Analytics code?

Can anyone share best practices for troubleshooting Google Analytics code?
Has anyone built a debugging tool? Does google have a linter hidden somewhere? Does anybody have a good triage logic diagram?
I'll periodically set up different parts of GA and it seems like every time I do it takes 4 or 5 days to get it working.
The workflow looks like this:
Read the docs on the feature (e.g. events, custom variables).
Implement what appears to be the correct code based on the docs.
Wait a day.
See no data.
Google every version of the problem I can imagine. Find what may be a solution.
Change my code.
Wait a day.
See no data.
Loop:
Randomly move elements of the tracking code around.
Wait a day.
If other parts break, tell the CEO, get yelled at, revert changes.
If data appears, break.
Pray it continues to work/I never have to change the tracking code again.
For obvious reasons, I'm not satisfied with this workflow and hoping someone has figured out something I haven't.
Everything I do, debugging GA code, stops and starts with the Google Analytics Debugger Chrome Extension. It prints out to the console a summary of the data it has sent to Google Analytics which, for all purposes except testing profile filters, is all you need. It'll eliminate the "wait a day" step.
If you're not a fan of Google Chrome, you can inspect the HTTP requests yourself to see what data is being sent. You can use this guide to figure out what each parameter in the URL represents.
In terms of ensuring the features I've installed or the code itself is working, I'll open a fresh browser (cleared of cookies) and navigate to the site I'm testing via Google search. I'll proceed to navigate to all of the pertinent pages and trigger all the pertinent events, all the while ensuring that the requests are being sent to Google, and that the session isn't broken at any point (by either keeping an eye on the session count, or ensuring that the traffic source doesn't change from organic/google to direct or a self-referral).
To begin with, this answer isn't at odds with any portion of either of the two answers before mine--i.e. you could certainly implement them all without conflict.
My answer just reflects my own priority, which is the latency issue. Latency makes debugging far more difficult than it should be. Ten minutes of latency while waiting for the compiler to finish is irritating; four hours (the minimum GA latency) is painful.
So for me, the first step in building a GA debugging framework was to somehow get the GA results in real time. In other words, if I changed a regular expression filter, I needed to catch the traffic processed by that filter. So removing the 4-24 hour latency in getting results from the GA server was critical.
The easiest way I have found so far to do this is to modify the GA tracking code on each page of your site so that it sends a copy of each GIF request to your own server.
To do this, immediately before the call to trackPageview(), add this line:
pageTracker._setLocalRemoteServerMode();
This will send the entire request header to your server access log, which you can parse in real time. (Specifically, your server writes to the access log one line at a time, and one line corresponds to one request. All of the GA data is packaged and sent as a request header, so there's a perfect correspondence between the two.)
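In context, the classic ga.js snippet would look something like this (UA-XXXXX-X stands in for your own account number):

var pageTracker = _gat._getTracker('UA-XXXXX-X');
pageTracker._setLocalRemoteServerMode(); // copy each GIF request to your own access log
pageTracker._trackPageview();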
yahelc's answer is great, but I'd like to add my 2c here.
Get yourself a nice sniffer to see the hits flowing.
Nice options:
Wasp
Charles
HTTPFox
Fiddler
Then implement your changes on QA.
Test this new setup on QA. Things you should keep an eye on.
Always make sure that the basic pageview fires. It should have at least a utmp value and no utmt set.
Make sure the visitor ID doesn't get overwritten. This is the second number in the __utma cookie. This number should be your user ID; if it changes, things are broken.
Make sure your pageviews contain the page and session variables you set, if you set any. They are encoded into the utme parameter.
Make sure that any visitor-level custom var is fired before your basic pageview (utmt=custom variable).
Make sure the source data is not overwritten (campaign/medium/source/content/keyword). These are set in the __utmz cookie. If it gets overwritten by direct traffic or a referral from your own site, there's something wrong.
If you miss any events, it may be due to a required field missing, or to the last value being a float or string; the value of an event must be an integer.
If you're using ecommerce tracking, double-check all your parameters. Make sure that you're firing everything as strings here and that unused parameters are empty strings.
Triple-check your account number: UA-XXXXX-X.
If you're doing something with custom JS, make sure to test on all browsers, and try to keep at least the basic tracking in a safe zone where you are sure things won't break.
Send debug info about javascript code that might break GA to GA. Check this.

JavaScript non-persistent security question

Despite my paranoia, I've never really gotten around to understanding web security better, so my lack of knowledge is causing me a bit of confusion here.
Example: Let's say you have 2 text boxes, both are for user input.
The user types in whatever they want into those two text boxes and clicks a button, the button then uses a bit of JavaScript and concatenates whatever is in those two text boxes and displays it out in a div.
My question is, in this case, since it's using JavaScript client side, do you need to really sanitize user input?
What if it outputted to a text box instead of a div? Or as an alert?
I understand that when it comes to forms/PHP you always want to sanitize input, but I'm not really familiar with JavaScript security precautions.
It's my understanding that since this is client-side, and no data is being saved by the server, whatever the user does (tries to throw in some malicious code or whatnot) won't affect anyone but that user, correct?
No, this is not a security issue. The reason is that an attacker would have to force a victim's browser into performing this action for it to be XSS.
However, if you grab input from something like document.location and then print it to the page using document.write(), then that is DOM-based XSS. But this is a very uncommon form of XSS.
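For illustration, a minimal vulnerable pattern of that kind, and a safer alternative (the element ID is made up):

// Vulnerable: URL data written into the page as markup. Luring a victim to
// page.html#<img src=x onerror=alert(1)> would run the payload in their browser.
var fragment = decodeURIComponent(window.location.hash.slice(1));
document.write(fragment);

// Safer: treat the value as text, not markup.
document.getElementById('output').textContent = fragment;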
You don't have to sanitize anything that is not going to the server.
If people want to do something to their instance of your page, the only one they can hurt is themselves. Look at everything you can do with an extension like GreaseMonkey ... we're talking a lot more than just concatenating strings and displaying them.

What precautions should I take to prevent XSS on user submitted HTML?

I'm planning on making a web app that will allow users to post entire web pages on my website. I'm thinking of using HTML Purifier, but I'm not sure, because HTML Purifier edits the HTML, and it's important that the HTML is maintained just how it was posted. So I was thinking of writing some regexes to get rid of all script tags and all the JavaScript attributes like onload, onclick, etc.
I saw a Google video a while ago that had a solution for this. Their solution was to serve the user-posted JavaScript from another website so that it cannot access the original website. But I don't want to purchase a new domain just for this.
Be careful with homebrew regexes for this kind of thing.
A regex like
s/(<.*?)onClick=['"].*?['"](.*?>)/$1 $2/
looks like it might get rid of onclick events, but you can circumvent it with
<a onClick<a onClick="malicious()">="malicious()">
running the regex on that will get you something like
<a onClick ="malicious()">
You can fix it by repeatedly running the regex on that string until it doesn't match, but that's just one example of how easy it is to get around simple regex sanitizers.
The most critical error people make when doing this is validating things on input.
Instead, you should validate on display.
The context matters when determining what is XSS and what isn't. Therefore, you can happily accept any input, as long as you pass it through the appropriate cleaning functions when displaying it.
Consider that what constitutes 'XSS' will be different when the input is placed in <a href="HERE"> as opposed to <a>here!</a>.
Thus, all you need to do is make sure that any time you write user data, you consider, very carefully, where you are displaying it, and make sure that it can't escape the context you are writing it into.
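A small sketch of that context distinction in JavaScript (link and userValue are illustrative names):

// Element-text context: writing as text is enough.
link.textContent = userValue;

// href context: encoding is not enough, because 'javascript:alert(1)' is
// perfectly well-formed markup; the URL scheme itself has to be checked.
if (/^https?:\/\//i.test(userValue)) {
  link.setAttribute('href', userValue);
}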
If you can find any other way of letting users post content, that does not involve HTML, do that. There are plenty of user-side light markup systems you can use to generate HTML.
So I was thinking making some regex to get rid of all script tags and all the javascript attributes like onload, onclick, etc.
Forget it. You cannot process HTML with regex in any useful way. Let alone when security is involved and attackers might be deliberately throwing malformed markup at you.
If you can convince your users to input XHTML, that's much easier to parse. You still can't do it with regex, but you can throw it into a simple XML parser, and walk over the resulting node tree to check that every element and attribute is known-safe, and delete any that aren't, then re-serialise.
HTML Purifier edits the HTML and it's important that the HTML is maintained just how it was posted.
Why?
If it's so they can edit it in their original form, then the answer is simply to purify it on the way out to be displayed in the browser, not on the way in at submit-time.
If you must let users input their own free-form HTML — and in general I'd advise against it — then HTML Purifier, with a whitelist approach (ban all elements/attributes that aren't known-safe) is about as good as it gets. It's very very complicated and you may have to keep it up to date when hacks are found, but it's streets ahead of anything you're going to hack up yourself with regexes.
But I don't wanna purchase a new domain just for this.
You can use a subdomain, as long as any authentication tokens (in particular, cookies) can't cross between subdomains. (Which, for cookies, they can't by default, as the domain parameter is set to only the current hostname.)
Do you trust your users with scripting capability? If not don't let them have it, or you'll get attack scripts and iframes to Russian exploit/malware sites all over the place...
Make sure that user content doesn't contain anything that could cause JavaScript to be run on your page.
You can do this by using an HTML stripping function that gets rid of all HTML tags (like strip_tags from PHP), or by using another similar tool. There are actually many reasons besides XSS to do this. If you have user submitted content, you want to make sure that it doesn't break the site layout.
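One browser-side way to do that kind of stripping in JavaScript (a rough analogue of strip_tags; for production, prefer a vetted sanitizer library):

// DOMParser builds an inert document, so scripts in the input do not run.
function htmlToText(html) {
  return new DOMParser().parseFromString(html, 'text/html').body.textContent || '';
}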
I believe you can simply use a subdomain of your current domain to host the JavaScript, and you will get the same security benefits for AJAX. Not for cookies, however.
In your specific case, filtering out the <script> tag and Javascript actions is probably going to be your best bet.
1) Use clean simple directory based URIs to serve user feed data.
Make sure that when you dynamically create URIs to address the user's uploaded data, service account, or anything else on your domain, you don't post information as parameters in the URI. That is an extremely easy point of manipulation that could be used to expose flaws in your server security and possibly even inject code onto your server.
2) Patch your server.
Ensure you keep your server up to date on all the latest security patches for all the services running on that server.
3) Take all possible server-side protections against SQL injection.
If somebody can inject code into your SQL database that can execute from services on your box, that person will own your box. At that point they can install malware onto your web server to be fed back to your users, or simply record data from the server and send it out to a malicious party.
4) Force all new uploads into a protected sandboxed area to test for script execution.
No matter how you try to remove script tags from submitted code, there will be a way to circumvent your safeguards and execute script. Browsers are sloppy and do all kinds of stupid crap they are not supposed to do. Test your submissions in a safe area before you publish them for public consumption.
5) Check for beacons in submitted code.
This step requires the previous step and can be very complicated, because the beacon can occur in code that requires a browser plugin to execute, such as ActionScript, but it is just as much a vulnerability as allowing JavaScript to execute from user-submitted code. If a user can submit code that can beacon out to a third party, then your users, and possibly your server, are completely exposed to data loss to a malicious third party.
You should filter ALL HTML and whitelist only the tags and attributes that are safe and semantically useful. WordPress is great at this and I assume that you will find the regular expressions used by WordPress if you search their source code.
