We have a micro-service that allows CORS using the Access-Control-Allow-Origin header. It works fine. However, we have a complicated setup:
subdomain-A.welt.de has an iFrame that points to subdomain-B.welt.de. subdomain-B.welt.de calls the micro-service (somewhere else) via XHR and requires CORS.
Within that iFrame, and only there, subdomain-B.welt.de sends null as the Origin header for every XHR request.
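For reference, a minimal sketch of the setup described (the micro-service URL is a placeholder):

<!-- page on subdomain-A.welt.de -->
<iframe src="https://subdomain-B.welt.de/widget.html"></iframe>

// script running in widget.html on subdomain-B.welt.de
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://microservice.example.com/api/data", true); // placeholder endpoint
xhr.onload = function() {
    console.log(xhr.responseText);
};
xhr.send(); // inside the iFrame, this request goes out with "Origin: null"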
I am not absolutely certain what the cause of this is, and in my research I stumbled upon document.domain; but as I am uncertain, I don't know whether that is something to consider (or the right way to fix whatever is happening here).
Related
This question is related to Cross-Origin Resource Sharing (CORS, http://www.w3.org/TR/cors/).
If there is an error when making a CORS request, Chrome (and AFAIK other browsers as well) logs an error to the error console. An example message may look like this:
XMLHttpRequest cannot load http://domain2.example. Origin http://domain1.example is not allowed by Access-Control-Allow-Origin.
I'm wondering if there's a way to programmatically get this error message. I've tried wrapping my xhr.send() call in try/catch, and I've also tried adding an onerror() event handler; neither receives the error message.
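For illustration, here is roughly what I tried; neither the catch block nor the handler ever sees the message text:

var xhr = new XMLHttpRequest();
xhr.open("GET", "http://domain2.example/resource", true);
xhr.onerror = function() {
    // fires on a CORS failure, but carries no detail about why it failed:
    // status is 0 and statusText is empty
    console.log("error", xhr.status);
};
try {
    xhr.send();
} catch (e) {
    // never reached for an asynchronous request; send() returns immediately
    console.log(e);
}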
See:
http://www.w3.org/TR/cors/#handling-a-response-to-a-cross-origin-request
...as well as notes in XHR Level 2 about CORS:
http://www.w3.org/TR/XMLHttpRequest2/
The information is intentionally filtered.
Edit, many months later: a follow-up comment here asked for the "why"; also, the anchor in the first link was missing a few characters, which made it hard to see what part of the document I was referring to.
It's a security thing - an attempt to avoid exposing information in HTTP headers which might be sensitive. The W3C link about CORS says:
User agents must filter out all response headers other than those that are a simple response header or of which the field name is an ASCII case-insensitive match for one of the values of the Access-Control-Expose-Headers headers (if any), before exposing response headers to APIs defined in CORS API specifications.
That passage includes a link for "simple response header", which lists Cache-Control, Content-Language, Content-Type, Expires, Last-Modified and Pragma. Those get passed through. The "Access-Control-Expose-Headers headers" part lets the remote server expose other headers too, by listing them there. See the W3C documentation for more information.
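As an illustration, if the server wants scripts to read a non-simple header, say a hypothetical X-Request-Id, it has to list it explicitly:

Access-Control-Allow-Origin: https://example.com
Access-Control-Expose-Headers: X-Request-Id

// client side, after a successful cross-origin response:
xhr.getResponseHeader("Content-Type");  // readable: simple response header
xhr.getResponseHeader("X-Request-Id"); // readable only because it was exposed above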
Remember you have one origin - let's say that's the web page you've loaded in your browser, running some bit of JavaScript - and the script is making a request to another origin, which isn't ordinarily allowed because malware can do nasty things that way. So, the browser, running the script and performing the HTTP requests on its behalf, acts as gatekeeper.
The browser looks at the response from that "other origin" server and, if it doesn't seem to be "taking part" in CORS - the required headers are missing or malformed - then we're in a position of no trust. We can't be sure that the script running locally is acting in good faith, since it seems to be trying to contact servers that aren't expecting to be contacted in this way. The browser certainly shouldn't "leak" any sensitive information from that remote server by just passing its entire response to the script without filtering - that would basically be allowing a cross-origin request, of sorts. An information disclosure vulnerability would arise.
This can make debugging difficult, but it's a security vs usability tradeoff where, since the "user" is a developer in this context, security is given significant priority.
Let's consider the following scenario:
a user loads https://foo.com/index.html in one of the modern browsers that allow CORS.
index.html loads a JavaScript file from https://bar.com/script.js via a script tag.
consider a hypothetical situation where this script.js is never cached and its content has changed.
script.js makes an XHR request to https://baz.com.
https://baz.com has enabled Access-Control-Allow-Origin: *, thus this XHR made by script.js goes through.
important user information can now be passed to https://baz.com, which is a security risk.
Prior to CORS, XHR calls strictly followed the same-origin policy, and thus calls to https://baz.com from https://foo.com would not be permitted by the browser.
I am wondering if there is a way for https://foo.com/index.html to specify a list of XHR permissible domain names so that the above scenario would not be possible.
Any pointer is highly appreciated.
[UPDATED] I have found the answer to my question; see my answer below.
I guess I found the answer to my question.
Using the connect-src directive of the Content-Security-Policy header, https://foo.com/ can restrict XHR and fetch calls, along with WebSocket, EventSource and <a> ping requests, to the desired domains.
Content-Security-Policy: connect-src <source> <source>;
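For example, to allow connections only to the page's own origin and one hypothetical API host:

Content-Security-Policy: connect-src 'self' https://api.example.com;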
More information at https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Security-Policy/connect-src
I once thought of deleting my question, but someone else like me might benefit from it.
CORS blocking is enforced by the browser, based on the response headers the server sets to allow requests from certain origins or not.
If bar.com/script.js fetches data from baz.com, then baz.com is where those CORS headers must be set.
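For example, since the script runs under the https://foo.com origin, baz.com could restrict access to exactly that origin by responding with:

Access-Control-Allow-Origin: https://foo.com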
I'm trying to better understand Cross Site Scripting, so let's use:
http://api.beatport.com/crossdomain.xml as the example.
The XML states that all domains are allowed access. However, when I make the request from within my HTML page (or from the console), it fails with an error similar to:
XMLHttpRequest cannot load http://api.beatport.com/catalog/tracks. Origin <mydomain> is not allowed by Access-Control-Allow-Origin.
What I find weird, though, is that if I put the request in the address bar of my browser, the request goes through.
Can someone please explain what is going on and what I need to do to fix this because clearly the API allows access from any domain.
XMLHttpRequest doesn't look at crossdomain.xml (that file is a Flash/Silverlight mechanism); it looks at the Access-Control-Allow-Origin header, as mentioned in the error message.
So the server needs to send a header like this:
Access-Control-Allow-Origin: *
If they don't send that header (http://api.beatport.com/catalog/tracks doesn't), it will be denied.
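As a minimal sketch of what the fix looks like server-side (an Express-style Node handler; the framework and route are just for illustration, not Beatport's actual stack):

var express = require("express");
var app = express();

app.get("/catalog/tracks", function(req, res) {
    res.set("Access-Control-Allow-Origin", "*"); // let any origin read this response
    res.json({ tracks: [] }); // placeholder payload
});

app.listen(3000);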
This is bizarre, I was wondering if anyone could shed some light on why this happened.
Basically, I've been pulling my hair out trying to test JSONP so I can implement a JSON web service that other sites can use. I'm doing development on localhost; specifically, Visual Studio 2008 and its built-in web server.
So, as a JSONP test run with jQuery, I implemented the following:
$().ready(function() {
    debugger;
    try {
        $.getJSON("<%= new Uri(Request.Url, "/").ToString() %>XssTest?callback=?", function(data) {
            alert(data.abc);
        });
    } catch (err) {
        alert(err);
    }
});
And on the server:
<%= Request["callback"] %>({abc : 'def'})
So what ends up happening is: I set a breakpoint on the server, and I hit the breakpoint both on the first "debugger;" statement in the client-side script and on the server. The JSONP URL is indeed being invoked after the page loads. That's working great.
The problem I was having was that the callback would never execute. I tested this in both IE8 as well as Firefox 3.5. Neither one would invoke the callback. The catch(err) was never reached, either. Nothing happened at all!
I'd been stuck on this for a week, and even tested with a manually keyed HTTP request in Telnet on the specified port to be sure that the server is returning the format...
callbackfn({abc : 'def'})
.. and it is.
Then it dawned on me: what if I change the hostname from localhost to localhost with a trailing dot, i.e. http://localhost.:41559/ instead of http://localhost:41559/ (yes, adding a trailing dot to a hostname is legal; it is to DNS what global:: is to C# namespaces). And then it worked! Internet Explorer and Firefox 3.5 finally showed me an alert message when I just added the dot.
So this makes me wonder, what is going on here? Why would late script tag generation work with an Internet hostname and not with plain localhost? Or is that the right question?
Clearly this is implemented for security reasons, but what are they trying to secure?? And, by getting it to work with a dot, did I just expose a security hole in this security feature?
By the way, my hosts file, while altered for other hosts, has nothing special going on with localhost; the default 127.0.0.1 / ::1 are still in place with no overrides below.
FOLLOW-UP: I got past this for local development purposes by adding:
127.0.0.1 local.mysite.com
.. to my hosts file, then adding the following code to my global.asax:
protected void Application_BeginRequest(object sender, EventArgs e)
{
    if (Request.Headers["Host"].Split(':')[0] == "localhost")
    {
        Response.Redirect(
            Request.Url.Scheme
            + "://"
            + "local.mysite.com"
            + ":" + Request.Url.Port.ToString()
            + Request.Url.PathAndQuery
            , true);
    }
}
I'm going to throw an answer out there; after some thought I've reached my own conclusions.
It could be that this is a security feature that's implemented to try to thwart an Internet web site from invoking JSONP services running on the client machine.
A web site could just go through a list of ports and keep invoking localhost on different ports and paths. 'localhost' is one of the few DNS hostnames whose meaning is dynamic, depending on when and where it's queried, which makes the potential targets vulnerable. And yes, the fact that appending a dot to 'localhost' ('localhost.') produces a working workaround does expose a security vulnerability, but it also offers a [tentative] workaround for development purposes.
A better approach is to map the loopback IP to a new hostname entry in the hosts file so that it works locally, isn't prone to be "fixed" by a browser update, and doesn't work anywhere else but on the development workstation.
I'm experiencing a similar problem. Most of the solutions I've tried work with IE (7), but I'm having difficulty getting Firefox (3.5.2) to play ball.
I've installed HttpFox in order to see how my server's responses are being interpreted on the client, and I'm getting NS_ERROR_DOM_BAD_URI. My situation is a little different to yours though, as I'm trying to invoke a JSONP call back to the same site the hosting page came from, and then this call is responding with a 302 redirect to another site. (I'm using the redirect as a convenient way to get cookies from both domains returned to the browser.)
I'm using jQuery, and I originally tried doing a standard AJAX call via $.ajax(). I figured that as the initial request was to the same site as the hosting page, Firefox would just follow the 302 response to another domain. But no, it appeared to fall foul of XSS defenses. (Note that contrary to what Returning redirect as response to XHR request implies, jQuery does follow the 302 redirect for a standard dataType="json" call: a redirect to the same domain works fine; a redirect to another domain generates NS_ERROR_DOM_BAD_URI in the browser.) As an aside, I don't see why same-domain 302 redirects to other domains can't just be followed - after all, it's the hosting page's domain that is issuing the redirect, so why can't it be trusted? If you're worried about scripting injection attacks, then the JSONP route is open for abuse anyway...
jQuery's $.getJSON() with a ?callback=? suffix also fails in Firefox with the same error, as does using $.getScript() to roll my own JSONP <script> tag.
What does appear to work is having a pre-existing <script id="jsonp" type="text/javascript"></script> in the HTML and then using $("#jsonp").attr("src", url + "?callback=myCallback") to invoke the JSONP call. If I do that, then the cross-domain 302 redirect is followed and I get my JSON response passed to myCallback (which I've defined at the same time as the <script/> tag).
And, yes, I'm developing all this using Cassini with localhost:port URLs. Cassini won't respond to non-localhost URLs, so I can't easily try local.mysite.com to see if that has any effect on the solutions I've tried above. However, sticking a dot at the end of localhost appears to have fixed all my problems!
Now I can go back to a standard $.ajax({ ... dataType:"jsonp" ... }) call with localhost.:port instead of localhost:port, and all is well. I find it interesting that modifying the src attribute of a script tag that pre-exists in the page's HTML does allow ordinary localhost URLs to be invoked; I guess, following your thought process, this could be another security vulnerability.
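For reference, a sketch of the pre-existing script tag approach described above (the URL and callback name are placeholders):

<script id="jsonp" type="text/javascript"></script>
<script type="text/javascript">
    function myCallback(data) {
        // the JSON response lands here
    }
    var url = "http://other.example/service"; // placeholder
    $("#jsonp").attr("src", url + "?callback=myCallback");
</script>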
I need to make an AJAX request from a website to a REST web service hosted in another domain.
Although this works just fine in Internet Explorer, other browsers such as Mozilla Firefox and Google Chrome impose far stricter security restrictions, which prohibit cross-site AJAX requests.
The problem is that I have no control over the domain nor the web server where the site is hosted. This means that my REST web service must run somewhere else, and I can't put in place any redirection mechanism.
Here is the JavaScript code that makes the asynchronous call:
var serviceUrl = "http://myservicedomain";
var payload = "<myRequest><content>Some content</content></myRequest>";
var request = new XMLHttpRequest();
request.open("POST", serviceUrl, true); // <-- This fails in Mozilla Firefox amongst other browsers
request.setRequestHeader("Content-type", "text/xml");
request.send(payload);
How can I have this work in other browsers beside Internet Explorer?
Maybe JSONP can help.
NB: you'll have to change your messages to use JSON instead of XML.
Edit
Major sites such as Flickr and Twitter support JSONP with callbacks etc.
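A minimal sketch of the JSONP pattern with jQuery (the service URL is a placeholder, and the service must wrap its JSON in the supplied callback):

// jQuery replaces the trailing "?" with a generated callback name
$.getJSON("http://myservicedomain/resource?callback=?", function(data) {
    // data is the parsed JSON the service wrapped in the callback
    console.log(data);
});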
The post marked as the answer is erroneous: the iframe's document is NOT able to access the parent. The same-origin policy works both ways.
The fact is that it is not possible in any way to consume a REST-based web service on another domain using XMLHttpRequest. The only way to load data from a different domain (without any framework) is to use JSONP. Any other solution demands a server-side proxy located on your own domain, or a client-side proxy located on the remote domain and some sort of cross-site communication (like easyXDM) to communicate between the documents.
The fact that this works in IE is a security issue with IE, not a feature.
Unfortunately, cross-site scripting is prohibited, and the accepted workaround is to proxy the requests through your own domain: do you really have no ability to add or modify server-side code?
Furthermore, the secondary workaround - involving the acquisition of data through script tags - is only going to support GET requests, which you might be able to hack with a SOAP service, but not so much with the POST request to the RESTful service you describe.
I'm really not sure an AJAX solution exists, you might be back to a <form> solution.
A not very clear workaround (but one that works) is to use an iframe as a container for requests to other sites. The problem is that the parent cannot access the iframe's content; it can only set the iframe's src attribute. The iframe's content, however, can access the parent.
So, if the iframe's content knows about the parent, it can call a JavaScript function in the parent page or directly access the parent's DOM.
EDIT:
Sample:
function ajaxWorkaround() {
    // point the iframe at the other domain; its content can then call back into this page
    var frm = document.getElementById("myIFrame");
    frm.src = "http://some_other_domain";
}

function ajaxCallback(parameter) {
    // this function will be called from myIFrame's content
}
Make your service domain accept cross-origin resource sharing (CORS).
Typical scenario: most CORS-compliant browsers will first send an OPTIONS request (the "preflight"), to which the server should return information about which origins, methods and headers it accepts. If these satisfy the service's requirements for the request (e.g. allowed methods GET and POST, allowed origin *, etc.), the browser will then resend the actual request with the appropriate method (GET, POST, etc.).
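Illustratively, a preflight exchange might look like this (host and path are placeholders):

OPTIONS /api/resource HTTP/1.1
Host: myservicedomain
Origin: http://mysite.example
Access-Control-Request-Method: POST
Access-Control-Request-Headers: Content-Type

HTTP/1.1 200 OK
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: GET, POST
Access-Control-Allow-Headers: Content-Type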
From that point forward, everything is the same as when you are using IE or, more simply, when you are posting to the same domain.
Caveats: some service development SDKs (WCF in particular) will attempt to process the request, in which case you need to handle the OPTIONS method yourself, responding to the preflight so the service method isn't invoked twice on the server.
In short, the problem lies server-side.
Edit: there is one issue with IE 9 and below, in that CORS is not fully implemented there. Luckily, you can solve this problem by making your calls from server-side code to the service and having the result come back through your own server (e.g. mypage.aspx?service=blah&method=blahblah&p0=firstParam=something). From there, your server-side code should implement a request/response stream model.
Just use a server side proxy on your origin domain. Here is an example: http://jquery-howto.blogspot.com/2009/04/cross-domain-ajax-querying-with-jquery.html
This can also be done using a web server set up locally that calls curl with the correct arguments and returns the curl output.
app.rb
require 'sinatra'
require 'curb'
set :views, lambda { "views/" + self.name.to_s.downcase.sub("controller", "") }
set :haml, :layout => :'../layout', :format => :html5, :escape_html => true
disable :raise_errors

get '/data/:brand' do
  data_link = "https://externalsite.com/#{params[:brand]}"
  c = Curl::Easy.perform(data_link)
  c.body_str # return the remote response body as-is
end
Sending an ajax request to localhost:4567/data/something will return the result from externalsite.com/something.
Another option would be to set up a CNAME record on your own domain to "mask" the remote domain's hostname.
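For example, a hypothetical zone-file entry that serves the remote host under your own name (both names are placeholders):

api.mydomain.com.    IN    CNAME    myservicedomain.example.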