I'm trying to wrap my head around how to really secure AJAX calls of any kind that are publicly available.
Let’s say the JavaScript on a public page (so no user authentication of any kind) contains an AJAX call to a PHP script (REST API or just a script, it doesn’t matter) that does a lot of heavy lifting. Any user can look into the source code, find the AJAX call, rebuild it, and execute it a million times a second to DDoS your site - not so great. At first I thought an HTTP_REFERER check could be helpful, but like any header field it is manipulable (just use a curl request), so the security gain wouldn’t be high.
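For illustration, the kind of referer check I mean is just a few lines (the endpoint name is made up), and the comment shows how trivially it is bypassed:

    <?php
    // api.php - hypothetical endpoint "protected" by a referer check.
    // Trivially bypassed from the command line, e.g.:
    //   curl -H 'Referer: https://example.com/' https://example.com/api.php
    $referer = $_SERVER['HTTP_REFERER'] ?? '';

    if (strpos($referer, 'https://example.com/') !== 0) {
        http_response_code(403);
        exit('Forbidden');
    }

    // ... heavy lifting would happen here ...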
The next approach was a combination of session IDs, cookies, etc. to build some kind of access key for every page viewer, so that when someone exceeds a limit the AJAX call runs into an error. Sounds great so far, but just by clearing the cookies everything is reset, so that is no real solution either. But, of course: use the IP! Great idea! Users in public networks that share one IP for internet access will be totally happy when one miscreant blocks the service for all of them by abusing the call... not. So, also no great solution.
So, I’m really stuck here and can’t think of any great answer for my problem.
I also thought about API keys or something similar. But that is information that is also extractable from the JavaScript source. So how do you prevent other servers from using your service in a proxy kind of manner, serving your data to their users? (E.g. you implemented the GMaps API on your website (or any other API) and someone uses your script, accessing the API with your key.)
tl;dr
Is there any good way to really secure your publicly viewable AJAX calls against abuse - DDoSing your site, presenting your data on other sites, etc.?
I think you're overthinking what AJAX is. When your site makes an AJAX request, server-side it's the same as any other page request (even if some scripts are more process-intensive). You need to protect your entire site, not just specific scripts. If your server does not have any DDoS protection, it can be attacked through any page. Look into services like Cloudflare.
As Sage mentioned, it is similar to a normal HTTP request. You can use normal authentication, as the HTTP headers/cookie information will be passed on to the server every time you make an AJAX call; for a clear view, look into the developer console in your browser. It's the same as exposing your website's root URL. Just make sure you have authentication checks for your AJAX calls too.
Related
I have an API (1) on which I have built a web application with its own AJAX API (2). The reason for this is to avoid exposing the source API.
However, the web application uses AJAX (through jQuery) to get new data from its AJAX API; the data retrieved is currently XML.
Lately I have secured the main API (1) with an authorization algorithm. However, I would like to secure the web application as well so it cannot be parsed. Currently it is being parsed to get the hash used to call the AJAX API, which returns XML.
My question: how can I improve security and decrease the possibility of others being able to parse my web application?
The only ideas I have are: stop sending XML and send HTML instead, or use Flash (yet, this is not an option).
I understand that since the site is public and no login can be implemented, it can be hard to refuse access to bots (non-legitimate users). Also, Flash is not an option... it never is ;)
edit
The Web Application I am referring to: https://bikemap.appified.net/
This is somewhat of an odd request; you wish to lock down a system that your own web application depends on to work. This is almost always a recipe for disaster.
Web applications should always expect to be bypassed, so the real security must come from the server: tarpitting, session tokens, throttling, etc.
If that's already in place, I don't see any reason why you should jump through hoops in your own web application to give robots a tougher time... unless you really want to separate humans from robots ;-)
One way to reduce the refactoring pain on your side is to wrap the $.ajax function in a piece of code that signs the outgoing requests (or somehow adds fields to them)... then minify/obfuscate that code and hope it won't get decoded too fast.
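To make that concrete, here is a minimal sketch of the server-side half of such a scheme, assuming the wrapped $.ajax code appends a sig field computed as an HMAC over the sorted parameters with a secret buried in the minified JS (all names here are assumptions, and the secret is only as safe as the obfuscation):

    <?php
    // verify_request.php - hypothetical check for requests signed by
    // the wrapped $.ajax code. The client computes the same HMAC over
    // the canonicalized query string before sending.
    $secret = 'buried-in-the-minified-js'; // assumption: shared, obfuscated secret

    $sig    = $_GET['sig'] ?? '';
    $params = $_GET;
    unset($params['sig']);
    ksort($params); // same canonical parameter order on both sides

    $expected = hash_hmac('sha256', http_build_query($params), $secret);

    if (!hash_equals($expected, $sig)) {
        http_response_code(403);
        exit('Bad signature');
    }
    // ... handle the request as usual ...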
From what I understand, cross-domain AJAX calls are not possible for security reasons.
I've understood that it's possible to do it by using JSON-P, though.
My question: why are cross-domain AJAX calls forbidden, yet actually possible in a less practical way? It would be simpler to just allow them.
How are you supposed to handle these kinds of simple scenarios:
geocoding a location by calling Google Maps webservice
fetching Flickr images through its webservice
ajax to a different domain but it's the same application (server farms for example?)
... (those are just examples)
If I have to wrap/proxy these calls with a server-side script, that's just boring and time lost... You can't build a pure JavaScript application in the end? (if you want to use external web services, I mean)
why are cross-domain AJAX calls forbidden
You are logged in to your bank, right? OK, I'll just make an AJAX request to your bank and read your account number, sort code, and so on.
How are you supposed to handle these kinds of simple scenarios
Server side proxy
JSON-P
CORS (see the sketch at the end of this answer)
If I have to wrap/proxy these calls with a server-side script, that's just boring and time lost
Many things would be easier if we didn't have to worry about security. We wouldn't need locks on doors, passwords on accounts, etc, etc.
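For the CORS option listed above, the API side boils down to sending the right response headers; a minimal PHP sketch, with a made-up origin whitelist:

    <?php
    // cors_endpoint.php - hypothetical API endpoint that opts in to
    // cross-origin requests from a known list of origins.
    $allowed = ['https://app.example.com']; // assumption: your client sites
    $origin  = $_SERVER['HTTP_ORIGIN'] ?? '';

    if (in_array($origin, $allowed, true)) {
        header('Access-Control-Allow-Origin: ' . $origin);
        header('Vary: Origin');
    }

    // Browsers send a preflight OPTIONS request for non-simple calls.
    if ($_SERVER['REQUEST_METHOD'] === 'OPTIONS') {
        header('Access-Control-Allow-Methods: GET, POST');
        header('Access-Control-Allow-Headers: Content-Type');
        exit;
    }

    header('Content-Type: application/json');
    echo json_encode(['status' => 'ok']);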
Theoretically JS runs in the browser, so after the first download it can easily be copied and run directly from a local copy, without going through the remote server. Because I need to sell a JS application (pay-as-you-use), I need to check each request and make the script available ONLY if it is requested by that particular site and, of course, only if they paid.
It doesn't work. As soon as someone has downloaded the JavaScript file, he or she can keep a copy of it and even redistribute it.
Thus you cannot protect the JavaScript itself - but assuming you rely on some client-server interaction (i.e. AJAX), the server would not respond to requests coming from non-authorized sources, thus rendering the client-side worthless.
If you need to protect your business logic, don't put it into JavaScript. Alternatively, sue everybody who uses your scripts without having obtained a license (not sure if this is practical, though ...).
I wouldn't make the JS file that you plan to sell available directly on a URL like
yourdomain.com/yourfile.js
I would offer it on a URL like
yourdomain.com/getfile
Where /getfile is a URL that is processed by a server-side language (PHP, Java, etc.) in which you can check whatever credentials you need to check, be it the requesting domain name, IP address, some token, or something else.
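A minimal sketch of such a /getfile handler in PHP, assuming a per-customer token scheme (the token table and file path are made up):

    <?php
    // getfile.php - hypothetical handler that serves the paid JS file
    // only when a valid customer token is presented.
    $validTokens = ['a1b2c3' => 'customer-one.example']; // assumption: really a DB lookup

    $token = $_GET['token'] ?? '';

    if (!isset($validTokens[$token])) {
        http_response_code(403);
        exit('// unauthorized');
    }

    header('Content-Type: application/javascript');
    readfile(__DIR__ . '/protected/yourfile.js'); // assumption: file not directly reachable by URL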
If your application is made in Java, you can use a ServletFilter to check whether the request is valid (whether the IP is correct, or maybe you can use a ticket like the Facebook, Twitter, what-you-want REST APIs do), and if it isn't valid, don't return anything.
If you aren't using Java, I think something similar can be done in every programming language; see the sketch below.
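For example, in PHP the equivalent of such a filter can be a small guard script that every endpoint includes first; the IP and ticket checks below are hypothetical sketches:

    <?php
    // filter.php - hypothetical PHP analogue of a ServletFilter:
    // require_once it at the top of every endpoint script.
    $allowedIps = ['203.0.113.10']; // assumption: your customers' server IPs

    $ip     = $_SERVER['REMOTE_ADDR'] ?? '';
    $ticket = $_SERVER['HTTP_X_API_TICKET'] ?? ''; // hypothetical header

    if (!in_array($ip, $allowedIps, true) && !isValidTicket($ticket)) {
        http_response_code(403);
        exit;
    }

    function isValidTicket(string $ticket): bool
    {
        // assumption: issued tickets are stored server-side on creation
        return $ticket !== '' && file_exists('/var/tickets/' . basename($ticket));
    }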
It may be a little more trouble than it's worth. Yes, you could require clients to provide a token and whitelist certain domains, etc. But they can still open any site that uses that particular JavaScript - even someone else's - and just Save As...
A better bet is controlling the script's interaction with your server. If it makes any AJAX calls to a server you control, take that chance to authenticate. If it doesn't depend on data from you in that way, you'll just have to accept that anyone dedicated enough will be able to download your script and use it with a little bit of playing around.
Your best bet is, in addition to the above, to keep track of domains that have paid and search every once in a while to find out whether anyone is taking your code.
I'm building an API and want Ajax to be able to interact with it. The API needs to allow inserting, updating, and deletion of data. Is it a good idea to allow any of these operations via GET?
For example: http://api.domain.com/insert_person/?name=joe
My original plan was to use GET for my "getting" methods (basically, just a simple DB query) and POST for add, edit, and delete. The problem is the JS same-origin policy, which would make it hard for Ajax to interact with my API. There is a jQuery workaround for GET (via JSONP).
Suggestions?
In a word: NO
GET should always be used only for retrieving information and should never have side effects, ever.
This is a best practice across just about every web API out there and has to do with both the intent of the verb and how existing software expects things to behave.
If you're trying to get around the same-origin policy, GET via JSONP is the only possible front-end solution. If you've got control of the back end, you can set up a proxy service that is on the same domain as the page but relays to and from the API service.
If you're going to go down the JSONP GET path, make sure you read up on XSS and CSRF.
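To illustrate the JSONP path: the endpoint simply wraps JSON in a caller-supplied callback, which is also why the callback name must be validated (it is a reflected-XSS vector otherwise). A minimal sketch with made-up data:

    <?php
    // jsonp.php - hypothetical read-only endpoint exposed via JSONP.
    // JSONP only works for GET, which is one more reason such an
    // endpoint must never have side effects.
    $callback = $_GET['callback'] ?? 'callback';

    // Whitelist the callback name to avoid reflected XSS via this parameter.
    if (!preg_match('/^[a-zA-Z_$][a-zA-Z0-9_$]*$/', $callback)) {
        http_response_code(400);
        exit;
    }

    $data = ['name' => 'joe']; // assumption: real data comes from your DB

    header('Content-Type: application/javascript');
    echo $callback . '(' . json_encode($data) . ');';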
Add another layer to handle your code and interact with your database (on a different domain).
You would still use POST, and you can make the request to your DB on the server side using whatever language you are working with; for example, PHP would use curl (to make a request to a different domain).
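A minimal sketch of that PHP curl relay (the upstream URL is made up):

    <?php
    // proxy.php - hypothetical same-origin relay: the page POSTs here,
    // and this script forwards the request to the API on the other domain.
    $upstream = 'https://api.domain.com/insert_person/'; // assumption

    $ch = curl_init($upstream);
    curl_setopt_array($ch, [
        CURLOPT_POST           => true,
        CURLOPT_POSTFIELDS     => http_build_query($_POST),
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_TIMEOUT        => 10,
    ]);

    $response = curl_exec($ch);
    $status   = curl_getinfo($ch, CURLINFO_HTTP_CODE);
    curl_close($ch);

    http_response_code($status ?: 502);
    header('Content-Type: application/json');
    echo $response !== false ? $response : '{"error":"upstream failed"}';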
If you allow interaction with your DB using GET, then anyone can simply type a URL with the commands they want, so yes, avoid it.
As others have pointed out, GET should not be used for actions with side effects like inserting, updating and deleting.
To allow cross-origin use of your API, look into Cross-Origin Resource Sharing (CORS), although it's currently only partially supported by browsers.
I have http://example.com/index.html, which from within the HTML uses JavaScript (XMLHttpRequest) to call a web service at http://example.com/json/?a=...&b=...
The web service returns a JSON array of information to be displayed on index.html.
Since anyone can view the source code for index.html and see how I'm calling the JSON web service (http://example.com/json/), how do I prevent people from calling my JSON web service directly?
Since the web service is essentially an open read into my database, I don't want people to abuse it and start fetching data directly from the web service, DoSing my server, fetching more information than they should, etc.
UPDATE:
Is there no way to limit requests to http://example.com/json/ so they can only come from the same server (IP) and a page request of http://example.com/index.html?
Meaning, can't http://example.com/json/ detect that the Requester is ($_SERVER['REQUEST_URI'] == http://example.com/index.html) and only allow that?
There is no easy way to prevent that. If your service isn't extremely popular, and thus a likely target for denial-of-service attacks, I wouldn't bother.
One thing that comes to mind is using disposable tokens (valid for 10 or 100 requests).
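A sketch of that token idea, assuming PHP sessions (the budget of 100 requests and all names are made up):

    <?php
    // disposable_token.php - a session-bound token honoured for a
    // limited number of requests, then worthless.
    session_start();

    const TOKEN_BUDGET = 100; // assumption: requests allowed per token

    // Call on the page that embeds the AJAX code, then echo the token into it.
    function issueToken(): string
    {
        $_SESSION['token']      = bin2hex(random_bytes(16));
        $_SESSION['token_left'] = TOKEN_BUDGET;
        return $_SESSION['token'];
    }

    // Call at the top of the AJAX endpoint.
    function consumeToken(string $presented): bool
    {
        if (!isset($_SESSION['token'], $_SESSION['token_left'])
            || !hash_equals($_SESSION['token'], $presented)
            || $_SESSION['token_left'] <= 0) {
            return false;
        }
        $_SESSION['token_left']--;
        return true;
    }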
Another (naive) approach is checking that the X-Requested-With header exists in the request, but of course that can easily be faked. So my advice is: do nothing unless the problem is real.
One more idea: hashcash. The idea is to require the client to perform a rather expensive calculation per request, while validating the calculation on the server side is cheap. For a single request the overhead is very small, but for, say, 1000 requests it may take a significant amount of CPU time. I have no idea if hashcash has been used to prevent DoSing of web services; some years ago it was proposed as an antispam measure, but it never became popular.
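A sketch of that idea: the server hands out a challenge, the client must brute-force a nonce whose hash starts with N zeros, and the server verifies the result with a single hash call (difficulty and names are made up):

    <?php
    // pow_check.php - hypothetical hashcash-style check. Finding the
    // nonce costs the client ~16^4 = 65536 hashes on average here;
    // verifying it costs the server exactly one.
    session_start(); // assumption: the challenge was stored here when issued

    const DIFFICULTY = 4; // assumption: required leading hex zeros

    function isValidProof(string $challenge, string $nonce): bool
    {
        $digest = hash('sha256', $challenge . $nonce);
        return strncmp($digest, str_repeat('0', DIFFICULTY), DIFFICULTY) === 0;
    }

    // The AJAX endpoint would reject any request whose proof fails:
    if (!isValidProof($_SESSION['challenge'] ?? '', $_GET['nonce'] ?? '')) {
        http_response_code(403);
        exit('Proof of work required');
    }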
The answer is really simple: use CSRF protection. http://en.wikipedia.org/wiki/Cross-site_request_forgery
Simply, when a user comes to your site (index.php), put this in the session:
CSRF=(RANDOM_HASH)
Make the JSON request as example.com/json.php?hash=$_SESSION['CSRF']
And in json.php, check whether $_GET['hash'] matches $_SESSION['CSRF'].
Simple as that...
It's a server-side solution!
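A sketch of those steps in PHP (the file names follow the answer; the hash generation details are assumptions):

    <?php
    // index.php - generate the per-session hash and hand it to the page.
    session_start();
    if (empty($_SESSION['CSRF'])) {
        $_SESSION['CSRF'] = bin2hex(random_bytes(32));
    }
    // Echo it into the page so the AJAX call can append ?hash=...
    echo '<script>var hash = "' . $_SESSION['CSRF'] . '";</script>';

    <?php
    // json.php - reject any request whose hash does not match the session.
    session_start();
    if (!isset($_GET['hash'], $_SESSION['CSRF'])
        || !hash_equals($_SESSION['CSRF'], $_GET['hash'])) {
        http_response_code(403);
        exit(json_encode(['error' => 'invalid hash']));
    }
    echo json_encode(['data' => '...']); // the real payload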
I would keep track of the IP addresses making the requests. If you ever saw a large number of requests coming from the same IP, you could block it or offer a CAPTCHA.
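A naive sketch of that per-IP counter, assuming the APCu extension for storage (the thresholds are made up; a real setup might use Redis or fail2ban instead):

    <?php
    // ratelimit.php - hypothetical per-IP counter using APCu.
    $ip  = $_SERVER['REMOTE_ADDR'] ?? 'unknown';
    $key = 'hits:' . $ip;

    if (!apcu_exists($key)) {
        apcu_store($key, 0, 60); // fresh 60-second window for this IP
    }

    if (apcu_inc($key) > 60) { // assumption: more than 60 requests/minute is abuse
        http_response_code(429);
        exit('Too many requests - come back later or solve a CAPTCHA');
    }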
There is a wide array of things you can do to add security to your service. Check this out
You can't properly secure a web service that is callable from client-side JavaScript.
You can try security-through-obscurity techniques like JavaScript obfuscation, but that won't stop someone motivated.