API - use GET to add, edit, and delete? - javascript

I'm building an API and want Ajax to be able to interact with it. The API needs to allow inserting, updating, and deletion of data. Is it a good idea to allow any of these operations via GET?
For example: http://api.domain.com/insert_person/?name=joe
My original plan was to use GET for my "getting" methods (basically, just a simple DB query) and POST for add, edit, and delete. The problem is the JavaScript same-origin policy, which would make it hard for Ajax to interact with my API. There is a jQuery workaround for GET (via JSONP).
Suggestions?

In a word: NO
GET should always be used only for retrieving information and should never have side effects, ever.
This is a best practice across just about every web API out there, and has to do with both the intent of the verb and how existing software expects things to behave.

If you're trying to get around the same-origin policy, GET via JSONP is the only possible front-end solution. If you've got control of the back end you can set up a proxy service that is on the same domain as the page, but relays to and from the API service.
If you're going to go down the JSONP GET path, make sure you read up on XSS and CSRF.
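To make the JSONP option concrete, here is a minimal jQuery sketch - the endpoint and parameter are hypothetical:

    // jQuery issues JSONP by injecting a <script> tag, which is why it is GET-only.
    $.ajax({
        url: 'http://api.domain.com/get_person/',   // hypothetical read-only endpoint
        data: { name: 'joe' },
        dataType: 'jsonp',                          // use JSONP instead of XHR
        success: function (person) {
            console.log(person);                    // handle the returned data
        }
    });

Because the response is executed as script, only point JSONP at endpoints you trust - which is exactly why the XSS and CSRF reading above matters.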

Add another layer to handle your code and interact with your database (on a different domain).
You would still use POST, and you can make the request to your DB on the server side, using whatever language you are working with; in PHP, for example, you would use cURL to make the request to the other domain.
If you allow interaction with your DB via GET, then anyone can simply type a URL with the commands they want, so yes, avoid it.
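The answer suggests PHP with cURL; purely as an illustration, here is the same relay idea sketched in Node, with a hypothetical host and paths:

    // Minimal same-origin proxy: the page POSTs to /proxy/insert_person on its own
    // domain, and the server relays the request to the API on the other domain.
    const http = require('http');

    http.createServer(function (req, res) {
        if (req.method === 'POST' && req.url === '/proxy/insert_person') {
            let body = '';
            req.on('data', function (chunk) { body += chunk; });
            req.on('end', function () {
                const relay = http.request({
                    host: 'api.domain.com',              // the remote API
                    path: '/insert_person/',
                    method: 'POST',
                    headers: { 'Content-Type': 'application/x-www-form-urlencoded' }
                }, function (apiRes) {
                    res.writeHead(apiRes.statusCode);
                    apiRes.pipe(res);                    // stream the API reply back
                });
                relay.end(body);                         // forward the POST body
            });
        } else {
            res.writeHead(404);
            res.end();
        }
    }).listen(8080);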

As others have pointed out, GET should not be used for actions with side effects like inserting, updating and deleting.
To allow cross-origin use of your API, look into Cross-Origin Resource Sharing, although it's currently only partially supported by browsers.
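As a rough sketch of what opting in to CORS looks like on the server (Node-style, with a made-up origin) - the browser then allows ordinary cross-origin Ajax, POST included:

    // On the API server: allow this one origin to call the API cross-origin.
    res.setHeader('Access-Control-Allow-Origin', 'http://www.domain.com');
    res.setHeader('Access-Control-Allow-Methods', 'GET, POST');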

Related

Can I send a post without a form?

I'm studying web security, and while studying form security a question came up: can I send a POST without a form?
I'm not asking how to transfer data to the server without a POST request; I'm asking whether, when I send a POST, it has to go through a form.
You can use XMLHttpRequest to send requests of all types (POST, GET, ...).
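A minimal sketch, with a made-up endpoint:

    // Send a POST without any <form>, using XMLHttpRequest.
    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/submit');                                    // hypothetical URL
    xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    xhr.onload = function () {
        console.log(xhr.status, xhr.responseText);                  // inspect the reply
    };
    xhr.send('name=joe&age=30');                                    // the request body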
POSTing in HTML without a FORM didn't work in the browsers I tried. Since the FORM element is what specifies the URL to submit to, plain HTML alone gives you no complete basis for such an operation.
However using curl or a custom HTTP client, it is definitely possible to construct & send handcrafted requests (POST, GET, PUT, DELETE, others) to an HTTP server. These are independent of any HTML pages the server might offer -- the client need never perform a GET -- and can be constructed completely arbitrarily.
For example, a request may specify parameters (e.g. "order.total", "customer.id", "developmentMode=true") which the web application never offered in the HTML or expected to receive. This can be a security hole if, say, automatic binding frameworks are used; bindable fields should be carefully controlled when using them.
Applications must be robust against such requests as a basic principle of web security.
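As a concrete illustration, a handcrafted request with curl, reusing the hypothetical parameters from the example above:

    # Hand-rolled POST: no browser, no form. The server must cope with
    # parameters it never offered in any HTML page.
    curl -X POST -d "order.total=0.00" -d "developmentMode=true" \
         http://example.com/orders/checkout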
Google Chrome has an extension called "Postman" that allows sending HTTP POST requests: https://chrome.google.com/webstore/detail/postman/fhbjgbiflinjbdggehcddcbncdddomop?hl=en

Should Ajax actions REALLY have a separate URL?

Every now and then I hear an opinion that having the same URL for non-Ajax and Ajax actions is bad.
In my app, I have forms that are sent with Ajax for a better user experience. For people who disable JavaScript, my forms work too. The same goes for some of my links. I used to have the same URL for both and just serve the appropriate content and Content-Type, according to whether it was an Ajax call or not. This caused a problem with Google Chrome (see: Laravel 5 and weird bug: curly braces on back).
My question now is - is it REALLY a bad idea to have the same URL for Ajax and non-Ajax actions? It's painful to make two separate URLs for each of those actions. Or maybe there is a good workaround to manage caching? In theory, one header can change the behavior entirely, so I don't see why I should create an extra layer in my app and force the same thing to have a separate URL.
Please share your opinions.
HTTP is flexible and allows you to design your resources the way you want. You design the API, and design comes down to personal preference. But in this case, having one resource that responds to different types of request is absolutely fine. This is why HTTP headers like Content-Type exist.
And for caching you can use the HTTP ETag header. It's a caching header that forces the client to validate cached resources before using them.
The ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient, and saves bandwidth, as a web server does not need to send a full response if the content has not changed.
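In practice the exchange looks like this (the URL and tag values are illustrative):

    First request:
        GET /profile HTTP/1.1

        HTTP/1.1 200 OK
        ETag: "33a64df5"
        ...body...

    Later revalidation:
        GET /profile HTTP/1.1
        If-None-Match: "33a64df5"

        HTTP/1.1 304 Not Modified      (no body; the cached copy is reused)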

How to prevent a DDoS on AJAX endpoints?

I'm trying to wrap my head around how to really secure ajax calls of any kind that are publicly available.
Let's say the JavaScript on a public page (so no user authentication of any kind) contains an AJAX call to a PHP script (a REST API or just a script, it doesn't matter) that does a lot of heavy lifting. Any user can look into the source code, find the AJAX call, rebuild it, and execute it a million times a second, DDoSing your site that way - not so great. At first I thought an HTTP_REFERER check could be helpful, but like any header field it can be manipulated (just use a curl request), so the security gain wouldn't be high.
The next approach was a combination of session IDs, cookies, etc. to build some kind of access key for every page viewer, so that when someone exceeds the limit the AJAX call runs into an error. Sounds great so far, but just by clearing the cookies everything is reset. So, no real solution either. But, of course! Use the IP! Great idea! Users on public networks that share one IP for internet access will be totally happy when one miscreant blocks the service for all of them by abusing the call... not. So, also no great solution.
So, I’m really stuck here and can’t think of any great answer for my problem.
I also thought about API keys or something similar. But that is information that is also extractable from the JavaScript source. So how do you prevent other servers from using your service in a proxy-like manner, serving your data to their users? (E.g. you implemented the GMaps API on your website (or any other API) and someone uses your script to access the API with your key.)
tl;dr
Is there any good way to really secure your publicly viewable AJAX calls from being abused - for DDoSing your site, presenting your data on other sites, etc.?
I think you're overthinking what AJAX is. When your site makes an ajax request, on the server side it's the same as any other page request (even if some scripts are more process-intensive). You need to protect your entire site, not just specific scripts. If your server does not have any DDoS protection, it can be attacked through any page. Look into services like CloudFlare.
As @Sage mentioned, it is similar to a normal HTTP request. You can use normal authentication, as the HTTP headers/cookie information will be passed to the server every time you make an ajax call. For a clear view, you can look into the developer console in your browser. It's the same as exposing your website's root URL. Just make sure you have authentication checks for ajax calls too.
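Neither answer shows code, but the usual application-level mitigation is server-side rate limiting per client key (session or IP, with the shared-IP caveat the question already raises). A minimal in-memory sketch in Node - the limits and names are made up:

    // Naive fixed-window limiter: at most 30 calls per minute per client key.
    // A real deployment would keep counters in a shared store (e.g. Redis).
    const hits = new Map();

    function rateLimited(clientKey) {
        const now = Date.now();
        const entry = hits.get(clientKey) || { count: 0, windowStart: now };
        if (now - entry.windowStart > 60 * 1000) {   // window expired: start over
            entry.count = 0;
            entry.windowStart = now;
        }
        entry.count += 1;
        hits.set(clientKey, entry);
        return entry.count > 30;                     // true => respond with HTTP 429
    }

This only raises the cost of abuse; as the answer above says, protection against a real volumetric DDoS has to sit in front of the whole site.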

Can Ajax prevent GET parameters from being seen by the client?

An API key has to be entered as a GET parameter, and these are easily seen in the URL. I could try using ajax to call a page, which in turn uses jQuery.get() to fetch the things I need, but is it possible for someone with more knowledge of a browser's inspector to find their way to the JS variables of a page called via ajax?
If yes, then how do I protect the API key? The tutorials only explain how to use the API itself, as if the rest is common knowledge.
Anything in the browser space is essentially public.
So either you restrict the info available with the key to information you don't mind being seen, or you put an intermediary in the process - i.e. a server that does the lookups on the protected routes and passes back public info.
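A sketch of that intermediary, in the same relay style as above - the upstream API, route, and key handling are all hypothetical:

    // The browser calls your own server; only the server knows the key.
    const https = require('https');
    const API_KEY = process.env.API_KEY;             // never shipped to the browser

    function fetchWeather(city, done) {
        https.get('https://api.example.com/v1/weather?key=' + API_KEY +
                  '&city=' + encodeURIComponent(city), function (res) {
            let body = '';
            res.on('data', function (chunk) { body += chunk; });
            res.on('end', function () { done(JSON.parse(body)); });
        });
    }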
Some notes on GET requests:
GET requests can be cached.
GET requests remain in the browser history.
GET requests can be bookmarked.
GET requests should never be used when dealing with sensitive data.
GET requests have length restrictions.
GET requests should be used only to retrieve data.
Source: W3Schools

Using a client's IP to get content

I'm a bit embarrassed here because I am trying to get content remotely, by using the client's browser and not the server. But I have specifications which make it look impossible to me, I literally spent all day on it with no success.
The data I need to fetch is on a distant server.
I don't own this server (I can't do any modification to it).
It's a string, and I need to get it and pass it to PHP.
It must be the client's (the user browsing the website) browser that actually gets the data (it needs to be its IP, not the server's).
And with the cross-domain policy I don't seem to be able to get around it. I already knew about it, but still tried a simple Ajax query, which failed. Then I thought 'why not use iFrames', but the same limitation seems to apply to them too. I then read about using YQL (http://developer.yahoo.com/yql/) but I noticed the server I was trying to reach blocked YQL's user-agent, making it impossible to use this technique.
So, that's all I could think of or find. But I can't believe it's not possible to achieve such a thing; it doesn't even look hard...
Oh, and my Javascript knowledge is very basic, which doesn't help either.
This is one reason that the same-origin policy exists. You're trying to have your webpage access data on a different server, without the user knowing, and without having "permission" from the other server to do so.
Without establishing a two-way trust system (ie modifying the 'other' server), I believe this is not possible.
Even with new xhr and crossdomain support, two-way trust is still required for the communication to work.
You could consider a fat-client approach, or try @selbie's suggestion and require manual user interaction.
The same origin policy prevents document or script loaded from one origin from getting or setting properties of a document from a different origin.
-- From http://www.mozilla.org/projects/security/components/same-origin.html
Now if you wish to do some hackery to get it... visit this site
Note: I have never tried any of the methods on the aforementioned site and cannot guarantee their success
I can only see a really ugly solution: iFrames. This article contains some good information to start with.
You could do it with a Flash application: Flash uses a crossdomain.xml file to grant cross-domain access (which won't help here, since you don't control the other server).
On new browsers there is CORS - requires Access-Control-Allow-Origin header set on the server side.
You can also try to use JSONP (but I think that won't work since you don't own the other server).
I think you need to bite the bullet and find some other way to get the content (on the server side for example).
