Securing AJAX API - javascript

I have an API (1) on which I have built a web application with its own AJAX API (2). The reason for this is to avoid exposing the source API.
However, the web application uses AJAX (through jQuery) to get new data from its AJAX API; the data retrieved is currently XML.
Lately I have secured the main API (1) with an authorization algorithm. However, I would like to secure the web application as well so it cannot be parsed. Currently it is being parsed to get the hash used to call the AJAX API, which returns XML.
My question: how can I improve the security and decrease the possibility of others being able to parse my web application?
The only ideas I have are: stop sending XML and send HTML instead, or use Flash (which is not an option).
I understand that since the site is public and no login can be implemented, it can be hard to refuse access to bots (non-legitimate users). Also, Flash is not an option... it never is ;)
Edit:
The Web Application I am referring to: https://bikemap.appified.net/

This is somewhat of an odd request; you wish to lock down a system that your own web application depends on to work. This is almost always a recipe for disaster.
Web applications should always expect to be bypassed, so the real security must come from the server: tarpitting, session tokens, throttling, etc.
If that's already in place, I don't see any reason why you should jump through hoops in your own web application to give robots a tougher time ... unless you really want to separate humans from robots ;-)
One way to reduce the refactoring pain on your side is to wrap the $.ajax function in a piece of code that signs the outgoing requests (or somehow adds fields to them) ... then minify/obfuscate that code and hope it won't get decoded too fast.
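A minimal sketch of that wrapping idea, assuming the server embeds a per-session token in the page and does the real validation; the field names and the signing scheme here are placeholders, not a security boundary:

// Wrap jQuery's $.ajax so every outgoing request carries extra fields the
// server can check. Only the single-argument $.ajax(settings) form is handled,
// and settings.data is assumed to be a plain object (or undefined).
(function ($) {
  var originalAjax = $.ajax;
  var sessionToken = window.SESSION_TOKEN || ''; // assumed to be embedded in the page by the server

  function signRequest(settings) {
    var ts = Date.now();
    settings.data = $.extend({}, settings.data, {
      ts: ts,
      // Stand-in for a real HMAC computed from the token, URL and timestamp.
      sig: btoa(sessionToken + '|' + (settings.url || '') + '|' + ts)
    });
    return settings;
  }

  $.ajax = function (settings) {
    return originalAjax.call($, signRequest(settings || {}));
  };
}(jQuery));

Minifying/obfuscating this only slows a determined reader down; the server-side checks still carry the actual security.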

Related

API calls from JavaScript to backend: Ensuring legitimacy without server-side code

I want to create a JavaScript widget that my users can put on their websites.
The widget is capable of creating audio, which in turn costs my users money.
For the sake of illustration, let's say that every time a widget, placed on my user's site, is loaded by anyone on the internet (i.e. my users' users), I bill my user $1.
The widget is a Javascript code wrapped around an HTML audio player. The JS code makes a request to my backend API every time it is loaded, and upon receiving the response from my backend API, the player is constructed.
My concern is malicious usage by people who are not my users.
Let's say someone takes the widget's source code they found on a website that belongs to one of my users, and they put it on their site. They will, therefore, use my service but not pay for it. Instead, my actual user will pay for it (assuming I use a public API key as a way of distinguishing my users).
Usually, this is prevented by having a server-side library be responsible for any usages that might spend money. For example, I use Pusher as my WebSockets IaaS, and whenever I want to publish messages, I have to do it server-side, using their PHP SDK, with both private and public API keys.
In my use case, it's mandatory not to have a server-side library.
Question: how do I make sure that API requests I receive are legitimate?
I considered using the hostname where the widget is placed as a legitimacy measure. During the widget set-up, I could ask my users to whitelist certain (sub)domains and reject all requests that don't match the criteria, but this could be easily spoofed by, for example, a custom local domain or a CURL-crafted request.
I understand this may not be possible.
It seems like what you're asking is closely related to the topic of client-side encryption. In most cases the answer would be no, it's not possible. However, in this case it may be possible to implement something along the lines of the following.
If you can get your clients to install a plugin (which you would build), you could encrypt your JS code after finishing it and have your server serve this encrypted file. Normally, where this falls short is that if you're sending an encrypted file, there needs to be a way for the client to decrypt it. That would require you to also serve an unencrypted JS file which does the decoding, but by serving the unencrypted decoder you undo any security gained by encrypting your main JS file (the decryption file could easily be used to reverse-engineer your encryption method, or just straight up run by people other than your intended users).
This is where having those API users (and the ability to communicate with them through means outside of server-client connections) comes in handy. If you build a decryption plugin and give it to the API users (you could issue a unique decryption key for each user, but without server access implementing unique user keys would be very difficult or impossible), the plugin could then decrypt your served file in their browser, essentially guaranteeing that only users you have given the 'key' to can access your software.
However, this approach has a few caveats. It implies that you trust your users enough that they wouldn't distribute the plugin (it would be against their interest to distribute it anyway, as it could lead to higher charges if people impersonate them). There are probably a couple of other security concerns with this approach as well; if any come to mind, I'll edit this post and add them.
Apparently I don't have enough reputation to comment yet, hence the post...
But in response to your post, I think that method seems much better than the one I suggested; I didn't realize you could control the API's response to the server.
I don't quite understand which of the following you mean:
a) Send a JS file to the user, with the sole purpose of determining whether the player should also be sent (i.e. upon arriving, it pings the server with the client's API key/URL), and then the server would serve the file (in which case your approach seems safe to me, but others may find security problems with it).
or
b) Send a file with the JS and the audio player, which upon arriving, determines if the URL and API key are correct, and then allows the audio player to function normally (sending the API key to the server to track usage, not as a security feature).
If using option b, this would not improve security. If your code relies on security that runs on the client side, and the security system was sent by the same means as the code, then almost without exception the design is flawed and inherently unsafe.
I hope this helps, and if you disagree / have more questions, feel free to comment!
How about sending the following parameters from the JavaScript widget to the API backend:
Public API key (e.g. bbbe3b259f881cfc796f468619eb9d)
Current URL (e.g. https://example.com/articles/chiang-mai-thailand-january-2016-june-2016)
I will use the API key as a way of distinguishing my user and the current URL as a way of knowing which audio file to create (my widget will create an audio file based on the URL).
Furthermore, and this is crucial, I will have a user whitelist their domains and subdomains on my central site, where my users will get their widget code.
This is the same as what FB does for their integrations.
So if, for example, my backend API receives the aforementioned sample URL, and the user has set up the widget to only allow URLs that belong to foo.com and bar.baz.com, I will reject the audio creation process and display an error.
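A rough server-side sketch of that check (the whitelist lookup and function name are made up; the whitelist itself would come from my central site):

// Node.js sketch: reject audio-creation requests whose page URL does not
// belong to a domain whitelisted for the given public API key.
const whitelists = {
  'bbbe3b259f881cfc796f468619eb9d': ['foo.com', 'bar.baz.com'] // normally a database lookup
};

function isAllowedRequest(apiKey, pageUrl) {
  const allowed = whitelists[apiKey];
  if (!allowed) return false;            // unknown API key
  let host;
  try {
    host = new URL(pageUrl).hostname;    // e.g. "example.com"
  } catch (e) {
    return false;                        // malformed URL
  }
  // Accept exact matches and subdomains of a whitelisted domain.
  return allowed.some(d => host === d || host.endsWith('.' + d));
}

// The sample URL above is on example.com, which is not whitelisted here,
// so the audio creation would be rejected:
// isAllowedRequest('bbbe3b259f881cfc796f468619eb9d',
//   'https://example.com/articles/chiang-mai-thailand-january-2016-june-2016'); // false

Note that the URL arrives from the client, so (as mentioned earlier) a crafted request can still spoof it; this raises the bar rather than closing the hole.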
Do you see any issues with this approach?

How to prevent a DDoS on AJAX endpoints?

I'm trying to wrap my head around how to really secure AJAX calls of any kind that are publicly available.
Let's say the JavaScript on a public page (so no user authentication of any kind) contains an AJAX call to a PHP script (a REST API or just a script, it doesn't matter) that does a lot of heavy lifting. Any user can look into the source code, find the AJAX call, rebuild it, and execute it a million times a second to DDoS your site that way - not so great. At first I thought an HTTP_REFERER check could be helpful, but like any header field it can be manipulated (just use a curl request), so the security gain wouldn't be very high.
The next approach was a combination of session IDs, cookies, etc. to build some kind of access key for every page viewer, and when someone exceeds the limit the AJAX call would run into an error. Sounds great so far, but just by clearing the cookies everything gets reset. So no real solution either. But, of course! Use the IP! Great idea! Users in public networks that share one IP for internet access will be totally happy when one miscreant blocks the service for all of them by abusing the call... not. So, also no great solution.
So I'm really stuck here and can't think of any great answer to my problem.
I also thought about API keys or something alike, but that is information that is also extractable from the JavaScript source. So how do you prevent other servers from using your service in a proxy kind of manner, serving your data to their users? (E.g. you implemented the GMaps API in your website (or any other API) and someone uses your script to access the API with your key.)
tl;dr
Is there any good way to really protect your publicly viewable AJAX calls from being abused to DDoS your site, present your data on other sites, etc.?
I think you're overthinking what AJAX is. When your site makes an AJAX request, server-side it's the same as any other page request (even if some scripts are more process-intensive). You need to protect your entire site, and not just specific scripts. If your server does not have any DDoS protection, it can be attacked through any page. Look into services like Cloudflare.
As @Sage mentioned, it is similar to a normal HTTP request. You can use normal authentication, as the HTTP headers/cookie information will be passed on to the server every time you make an AJAX call. For a clear view you can look into the developer console in the browser. It's the same as exposing your website's root URL. Just make sure you have authentication checks for AJAX calls too.
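To make the "treat it like any other request" point concrete, here is a minimal hand-rolled throttling sketch for an Express endpoint (in-memory counters and a made-up route, purely illustrative; a real site would lean on a proper rate limiter or an edge service like Cloudflare, and, as noted above, IP-based limits punish users behind shared IPs):

// Express middleware sketch: cap each IP at a number of requests per minute.
// The counters live in memory, so they reset on restart and do not span
// multiple server processes.
const express = require('express');
const app = express();

const WINDOW_MS = 60 * 1000;
const MAX_REQUESTS = 60;
const hits = new Map(); // ip -> { count, windowStart }

function throttle(req, res, next) {
  const now = Date.now();
  const entry = hits.get(req.ip) || { count: 0, windowStart: now };
  if (now - entry.windowStart > WINDOW_MS) {
    entry.count = 0;
    entry.windowStart = now;
  }
  entry.count += 1;
  hits.set(req.ip, entry);
  if (entry.count > MAX_REQUESTS) {
    return res.status(429).json({ error: 'Too many requests' });
  }
  next();
}

app.get('/api/heavy-lifting', throttle, (req, res) => {
  res.json({ ok: true }); // the expensive work would happen here
});

app.listen(3000);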

node.js and single page web application

I am looking at express.js for the back end and JS on the client side.
My app is a single page Web App.
The server will only serve JSON messages and my question is about "routing" for express.
Is one supposed to use routing to connect the UI and the server side business logic?
How will that work with my single page app?
So let's say the client makes an AJAX call to the server looking for a value in the database, and there is a server-side script that provides the JSON back to the UI. How is this UI and Node script relationship set up?
Can someone shed some light on this?
Single page apps are those that live on a single HTML document. This means that if you want to display some different content to the user, depending on the state of the application, you will need to do some DOM manipulation (cutting out and replacing certain elements of the current document with different HTML) in order to update the 'view' that the user sees. Excuse me if this is obvious to you, please don't take offense. I figured I'd start from here. Hang with me and I'll explain how your routing situation is going to play out (more or less).
URLs are composed of a few different parts, each of which informs the browser of a particular bit of information that is required in order to download the resource that the user is attempting to access. Typically the resources that you are looking for are off on a server somewhere and the browser knows this because of pieces in the URL like 'protocol' ('http:') and 'host' ('www.mydomain.com'), so it goes off to that server to find what you're requesting. There are also 'query' parameters in URLs which provide some additional information to the server regarding a particular action, like the search terms of a search query. After the query parameters, comes the 'hash'. The hash is where the magic of single page apps happens... eh, well, kind of.....
First, a bit about the hash. When you add a '#' to a URL, the browser interprets the information that comes after it as a location (element) within the currently displayed document. That means, if you have an element with an 'id' of 'main' and you add '#main' to the end of the URL, like so: 'http://www.example.com#main', the browser will 'scroll' (typically 'jump') to the beginning of that element so that you can see it. Be aware, though, that the hash fragment is never sent to the server; only the part of the URL before the '#' determines which document is requested, so changing that part forces a complete page reload, while changing only the hash does not.
The takeaway here is that the browser will not attempt to navigate away from the current document when only the hash changes, and this is great because single-page apps don't want to navigate away from the page or request a new document from the server. (See how routing is different for single-page apps?)
Now, this whole thing about the hash isn't vital to single-page apps, as you could make one without dealing with it at all. A bunch of click handlers and DOM manipulation is all you'd really need... But that would mean that users have no way of sharing links to particular views in your app. The URL would never change, and we would never be able to navigate to any particular view directly. We'd always be starting from the starting position of your app, which could easily be a very annoying situation.
If your single-page app is going to have different views, and you want users to be able to navigate directly to particular ones via bookmarks or links, then you will need to implement a form of routing on the front-end in addition to the routing that you'll need to implement on the backend (routing for data API, etc.), which means that you will need to make use of the hash.
I don't want to get into how different frameworks accomplish routing on the front-end, but it's basically a matter of updating the browser's address field when the user clicks a link, and watching the address bar to determine what the current URL is and loading the HTML that is associated with that URL into the DOM in the designated location in the document tree.
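As a framework-free illustration of that mechanism (the container id and view names are made up), a minimal hash router might look like this:

// Minimal hash-routing sketch: map the URL fragment to a chunk of HTML and
// swap it into a container element whenever the hash changes.
var routes = {
  '':      '<h1>Home</h1>',
  'about': '<h1>About</h1>',
  'books': '<h1>Books</h1>'
};

function render() {
  var view = location.hash.replace(/^#\/?/, ''); // "#/about" -> "about"
  var html = routes[view] || '<h1>Not found</h1>';
  document.getElementById('app').innerHTML = html;
}

window.addEventListener('hashchange', render); // user clicked a "#/..." link
window.addEventListener('load', render);       // deep link or bookmark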
So, within a single-page app, you have one route on the server that deals with rendering the app HTML document (index.html), and you have routes that are responsible for dealing with the data of your app (creating new instances in the database, logging in and out, editing or destroying instances in the DB, and fetching data...) which are called via AJAX requests.
This is actually a fairly complicated situation, in that HTML5 allows us to forgo the hash (with the help of some link rewriting on the server) and also to use the 'back' and 'forward' buttons as if we've actually navigated away from the original document (which we haven't, because we have only pointed the browser to the exact same URL with modified hash values, so no new page loads have occurred). Traditional site navigation and linking can be achieved by utilizing the browser's History API, which is available in IE beginning with version 10 (I believe); the rest of the big browser vendors were on to it quite a bit earlier, so frameworks that leverage this technology will allow your users to navigate your app without the hash in the URL. Explaining this is a diversion and not necessary for understanding routing in single-page apps, but it is interesting and you'll have to learn it eventually anyway, probably.
AJAX should be used to request JSON from the server. AJAX requests will always hit your server because you don't include the hash symbol in AJAX requests (it would be ridiculous to do so because the hash is meant only for in-document browsing), so server-side routes must be responsible for exposing your data API (consider a RESTful one). While this is not their sole purpose in single-page apps, it is perhaps their most important one.
Soooo, to wrap it up, you will have two sets of routes. One on the client (as part of a client-side framework like AngularJS or EmberJS, the list goes on... I prefer Angular, but there is a fairly steep learning curve for that one.), and one on the server. When you think about 'server routes' think data API. When you think of 'page routing', remember that this all gets handled on the client, by the javascript that you delivered with the initial server response (this is the one and only necessary server-side route involved with rendering HTML to the browser, loading your 'index.html' and all of the necessary scripts and stylesheets, etc). You will use express.static middleware to serve static files, so you don't have to worry about assigning routes for that stuff.
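A sketch of that server-side split (the file paths and route names are just examples, assuming Express 4):

// Express sketch: express.static serves the single-page shell (index.html
// plus scripts and stylesheets); routes under /api return only JSON.
const path = require('path');
const express = require('express');
const app = express();

app.use(express.json());
app.use(express.static(path.join(__dirname, 'public'))); // index.html, app.js, styles...

app.get('/api/books', (req, res) => {
  // In a real app this would query the database.
  res.json([{ id: 123, title: 'Example Book' }]);
});

// Any other URL gets the single-page shell; the client-side router decides
// what to display based on the hash (or the History API).
app.get('*', (req, res) => {
  res.sendFile(path.join(__dirname, 'public', 'index.html'));
});

app.listen(3000);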
EDIT: A quick mention of AJAX implementation.
On the server, you will have routes similar to those that Alex has provided as examples, and you will make calls to those URLs from the client using whatever XMLHttpRequest (XHR) object is exposed by your framework or library of choice. It is now considered more or less standard and best practice for frameworks/libraries to implement these requests as Promises (http://wiki.commonjs.org/wiki/Promises/A). You should read a bit about it on your own, but I might summarize it by saying that it is an asynchronous operation analogous to 'try, catch, throw' in synchronous operations. You instantiate a promise object and, through it, attempt to load data from the server, for instance via a GET request. Make sure that you have assigned functions to handle requests made to the URL you sent the request to (the server-side route)! The object you instantiate, and subsequently make the request to the server through, promises to return the result of the request to you once it comes back from the server, whether it was successful or not. If it is successful, it will call a function that you have written and supply it with the data from the server. If it fails, it will call a different function, also written by you, and supply it with the error object (or 'reason' for failure), so you can handle the error appropriately.
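A small sketch of that promise-wrapped XHR idea in plain browser JavaScript (the '/api/books' URL is just an example):

// Wrap XMLHttpRequest in a Promise: resolve with parsed JSON on success,
// reject with an Error (the 'reason') on network failure or a bad status.
function getJSON(url) {
  return new Promise(function (resolve, reject) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url);
    xhr.onload = function () {
      if (xhr.status >= 200 && xhr.status < 300) {
        resolve(JSON.parse(xhr.responseText));
      } else {
        reject(new Error('Request failed with status ' + xhr.status));
      }
    };
    xhr.onerror = function () {
      reject(new Error('Network error'));
    };
    xhr.send();
  });
}

// Usage: success and failure handlers, analogous to try/catch for async work.
getJSON('/api/books')
  .then(function (books) { console.log(books); })
  .catch(function (err) { console.error(err.message); });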
Hope that helped answer your question.
You only have to route requests you serve dynamically. Your HTML, CSS and JS are all static assets, so all you need to handle routing for is your data.
It sounds like you want a Restful API, which basically means that you have URLs for specific resources, and HTTP verbs for manipulating them.
Something like:
GET /books.json - Get all books
POST /books.json - Create a new book with properties passed in the body of the request
GET /books/123.json - Get book with id of 123
PUT /books/123.json - Update an existing book with properties passed in the body of the request
This blog post seems to show how to set this up in Express.
Once you have a sane API delivering JSON, you just make your AJAX calls use it based on what objects you want to fetch.
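As a rough Express sketch of those routes (not taken from the linked post; the in-memory data store is just for illustration):

// RESTful routes for 'books', matching the verb/URL pairs listed above.
const express = require('express');
const app = express();
app.use(express.json()); // parse JSON request bodies

let nextId = 124;
const books = { 123: { id: 123, title: 'Example Book' } };

app.get('/books.json', (req, res) => res.json(Object.values(books)));

app.post('/books.json', (req, res) => {
  const book = Object.assign({ id: nextId++ }, req.body);
  books[book.id] = book;
  res.status(201).json(book);
});

app.get('/books/:id.json', (req, res) => {
  const book = books[req.params.id];
  book ? res.json(book) : res.sendStatus(404);
});

app.put('/books/:id.json', (req, res) => {
  const book = books[req.params.id];
  if (!book) return res.sendStatus(404);
  Object.assign(book, req.body);
  res.json(book);
});

app.listen(3000);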

How to restrict access to my web service?

I have http://example.com/index.html, which from within the HTML uses JavaScript (XMLHttpRequest) to call a web service at http://example.com/json/?a=...&b=...
The web service returns a JSON array of information that is then displayed on index.html.
Since anyone can view the source code for index.html and see how I'm calling the JSON web service (http://example.com/json/), how do I prevent people from calling my JSON web service directly?
Since the web service is essentially an open read into my database, I don't want people to abuse it and start fetching data directly from the web service, DoSing my server, fetching more information than they should, etc.
UPDATE:
Is there no way to limit requests to http://example.com/json/ so that they can only come from the same server (IP) and from a page request for http://example.com/index.html?
Meaning, can't http://example.com/json/ detect that the requester is ($_SERVER['REQUEST_URI'] == http://example.com/index.html) and only allow that?
There is no easy way to prevent that. If your service isn't extremely popular, and thus a likely target for denial-of-service attacks, I wouldn't bother.
One thing which came into my mind is using disposable tokens (valid for 10 or 100 requests).
Another (naive) approach is checking that the X-Requested-With header exists in the request, but of course that can be easily faked. So my advice is: do nothing unless the problem is real.
One more idea: hashcash. The idea is to require the client to perform a rather expensive calculation per request, while validating that calculation on the server side is cheap. For a single request the overhead is very small, but for, say, 1000 requests it may take a significant amount of CPU time. I have no idea if hashcash has been used to prevent DoS'ing of web services. Some years ago it was proposed as an antispam measure, but it never became popular.
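For what it's worth, a toy Node.js illustration of that hashcash-style idea (the difficulty, token format and numbers are made up; cheap to verify, costly to produce):

// Hashcash-style sketch: the client must find a nonce such that
// sha256(token + ':' + nonce) starts with a given number of zero hex digits.
// Verification is a single hash; producing the nonce takes many attempts.
const crypto = require('crypto');

const DIFFICULTY = 4; // leading zero hex digits required

function sha256(s) {
  return crypto.createHash('sha256').update(s).digest('hex');
}

// Client side of the bargain: brute-force a valid nonce.
function solve(token) {
  let nonce = 0;
  while (!sha256(token + ':' + nonce).startsWith('0'.repeat(DIFFICULTY))) {
    nonce++;
  }
  return nonce;
}

// Server side: one cheap hash to verify before doing the expensive work.
function verify(token, nonce) {
  return sha256(token + ':' + nonce).startsWith('0'.repeat(DIFFICULTY));
}

const token = 'request-12345';     // e.g. issued per request by the server
const nonce = solve(token);        // roughly 65,000 hashes on average at this difficulty
console.log(verify(token, nonce)); // true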
The answer is really simple: use CSRF protection. http://en.wikipedia.org/wiki/Cross-site_request_forgery
Simply, when a user comes to your site (index.php), put in the session:
CSRF=(RANDOM_HASH)
Make the JSON request as example.com/json.php?hash=$_SESSION['CSRF']
And in json.php, check that $_GET['hash'] matches $_SESSION['CSRF'].
Simple as that...
It's a server-side solution!
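The steps above are phrased in PHP terms (index.php, json.php, $_SESSION); the same session-token pattern translated into a Node/Express sketch, assuming the express-session middleware, looks roughly like this:

// Store a random token in the session when the page is served, and require
// the matching ?hash=... value on every JSON request.
const crypto = require('crypto');
const express = require('express');
const session = require('express-session');
const app = express();

app.use(session({ secret: 'change-me', resave: false, saveUninitialized: true }));

app.get('/', (req, res) => {
  // CSRF=(RANDOM_HASH) in the session, embedded in the served page
  req.session.csrf = crypto.randomBytes(16).toString('hex');
  res.send('<script>var CSRF = "' + req.session.csrf + '";</script> ...page...');
});

app.get('/json', (req, res) => {
  // reject requests whose hash does not match the session token
  if (!req.session.csrf || req.query.hash !== req.session.csrf) {
    return res.sendStatus(403);
  }
  res.json({ ok: true });
});

app.listen(3000);

Note that this only proves the caller loaded your page first; a bot that fetches the page, reads the token and then calls the JSON endpoint still gets through.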
I would keep track of the IP addresses making the requests. If you ever saw a large number of requests coming from the same IP, you could block it or offer a CAPTCHA.
There are a wide array of things you can do to add security to your service. Check this out
You can't properly secure a web-service that is callable from client-side javascript.
You can try security through obscurity techniques like javascript obfuscation, but it won't stop someone motivated.

Suppressing browser's authentication dialog

I apologize that there is a similar question already but I'd like to ask it more broadly.
Is there any way at all to determine on the client side of a web application if requesting a resource will return a 401 status code and cause the browser to display an ugly authentication dialog?
Or, is there any way at all to load an mp3 audio resource in flash which fails invisibly in the case of a 401 status code rather than letting the browser show an ugly dialog?
The Adobe AIR runtime will suppress the authentication dialog if I set the "authenticate" property of the URLRequest object, but this property is not in the Flash runtime. Any solution which works on the client will do. An XMLHttpRequest is not likely to work, as the resources in question will be at different domains.
It is important to fail invisibly because the application will have a list of many audio resources to try and it makes no sense to bother the user to try and authenticate for one when there are many others available. It is important that the solution work on the client because the mp3's in question come from various servers outside my control.
I'm having the same problem with the Twitter API - any protected user requires the client to authenticate.
The only solution that I could come up with was to load the pages server-side and return a list of the URLs with their HTTP response codes.
"Is there any way at all to determine on the client side of a web application if requesting a resource will return a 401 status code and cause the browser to display an ugly authentication dialog?"
No, not in general. The 401 response is the only standard way for the server to indicate that authentication is necessary.
Just wrap your access to the resource that might potentially require authentication in an Ajax call. You can catch the response code and use JavaScript to do whatever you want (i.e. play that sound). If the response code is alright, use JavaScript to forward the user to the resource.
Most likely this approach will generate slightly more load on the server (you might have to load the same resource several times in some circumstances), but it should work. Any good tutorial about how to use XMLHttpRequest should contain all you need. Take a look at, for instance, http://www.xul.fr/en-xml-ajax.html
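A sketch of that probe in plain JavaScript, assuming the MP3 hosts permit cross-origin requests (which, as the question notes, many won't) and noting that whether the browser really suppresses its dialog for XHR 401s varies by browser:

// Probe the MP3 URL first (HEAD, to avoid downloading the file twice; fall
// back to GET if a host rejects HEAD) and only hand it to the player if the
// status is 200. Anything else - 401, 404, CORS/network failure - is skipped.
function tryLoad(url, onPlayable, onSkip) {
  var xhr = new XMLHttpRequest();
  xhr.open('HEAD', url);
  xhr.onload = function () {
    if (xhr.status === 200) {
      onPlayable(url);          // safe to pass to the audio player
    } else {
      onSkip(url, xhr.status);  // fail invisibly
    }
  };
  xhr.onerror = function () {
    onSkip(url, 0);             // network or CORS failure
  };
  xhr.send();
}

// tryLoad('http://example.com/song.mp3', play, skipToNext);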
If you are using URLRequest to get the files, then you are running across more than just elegant error handling; you are running into a fundamental difference between the Flash and AIR runtimes.
If using the URLRequest object to retrieve files, you are going to get a security error from Flash on every request to every server that has not set a policy file to allow these sorts of requests. AIR allows these requests since it basically IS the client. This makes sense, since it's the difference between installing an application and visiting a web page.
I hate to provide the non-answer, but if you can't make a server-side call, and you are hitting a range of unknown servers, it's going to be a tough row to hoe.
But maybe I misunderstand: are you just trying to link to the files and prevent the user from getting bad links, or are you trying to actually load the files?
