Confusion over how Cross Origin Resource Sharing (CORS) works - javascript

From what I understand about CORS, this is how it works: I have a site foo.com which serves a page X. X wants to post data to another domain bar.com. If bar.com is CORS enabled (its headers produce Access-Control-Allow-Origin foo.com) then page X can now send data to bar.com.
As I understand it, getting CORS to work is all about setting it up on bar.com; it has nothing to do with foo.com. It's all about making sure bar.com doesn't accept requests from any old domain.
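The setup described here can be sketched as a small allowlist check on the server side. This is a minimal illustration, not a real implementation: the origin list and function names are made up, and a production server would also handle preflight and credentials.

```javascript
// Hypothetical allowlist check a server like bar.com might run.
// Only origins we explicitly trust get an Access-Control-Allow-Origin
// header echoed back; any other origin gets nothing, and the browser
// will then refuse to expose the response to the requesting page.
const ALLOWED_ORIGINS = new Set(['https://foo.com']);

function corsHeadersFor(requestOrigin) {
  if (ALLOWED_ORIGINS.has(requestOrigin)) {
    return { 'Access-Control-Allow-Origin': requestOrigin };
  }
  return {}; // no CORS header: cross-origin reads are blocked
}
```

Note that the server still responds either way; it is the browser that acts on the presence or absence of the header.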
However, this really doesn't make sense to me. I thought CORS was designed to let foo.com dictate who X is allowed to communicate with. Going back to the previous example, suppose X is compromised by a dodgy script that secretly sends data to evil.com. How is CORS going to stop that? evil.com is CORS enabled, with its header set to *, so it will accept requests from anywhere. That way a user who thinks they're using foo.com is unwittingly sending data to evil.com.
If it is really all about bar.com protecting itself, then why does it make the browser enforce the policy? The only conceivable situation in which this makes sense is if evil.com serves up a page Y that impersonates foo.com and tries to send data to bar.com. But since CORS is enforced by the browser, all you'd have to do is make evil.com a proxy that sends requests with a faked origin to bar.com (data goes from Y to evil.com; evil.com sets its fake origin to foo.com, then sends it on to bar.com).
It only makes sense to me if it works the other way round: foo.com is CORS enabled, and its headers are set to Access-Control-Allow-Origin bar.com. That way rogue scripts would be denied access to evil.com by the browser. It then makes sense for the browser to enforce the policy, because it's the browser running the scripts that could go rogue. It won't stop rogue sites from sending rogue data to bar.com, but bar.com can protect itself with a username/password. If foo.com has endpoints that expect data back from X, you can embed tokens into X to ensure evil.com doesn't send data to them instead.
I feel like I'm not understanding something fundamentally important here. Would really appreciate the help.

However this really doesn't make sense to me. I thought CORS was designed to enable foo.com to dictate who X is allowed to communicate with.
No, it's about bar.com controlling use of its content.
But CORS is enforced by the browser, all you'd have to do is make evil.com a proxy that sends faked origin requests to bar.com...
Yup. And if you do, and the people at bar.com notice and care, they disallow requests from your server. You move it, they disallow the new one. Whack-a-mole time. But painful as that game of whack-a-mole is, it's a lot less painful than if the requests come directly from each individual user of foo.com, from their desktop.
Having foo.com enforce what foo.com can do doesn't make any sense. foo.com already enforces what foo.com can do, because it's foo.com that serves foo.com's content and scripts.

It isn't about foo.com, nor about bar.com. It is about the user.
There are two things that CORS protects against. The first is access to resources behind a firewall. The second is resources that are normally protected unless the request comes from a browser carrying authentication cookies or other sensitive data.
CORS is a browser technology, with support from servers, that allows foo limited freedom to call outside of its domain. It is a restricted hole punched in the general prohibition on cross-domain scripting.
Anyone can fake the Origin header and create a CORS preflight or simple request -- of course, anyone can also connect to the bar.com server directly and make requests without using CORS at all. Any browser can directly connect to bar.com and get data. But a modern browser will not run a script from foo.com that accesses a bar.com resource. People visiting websites are thereby protected against a site designed to exploit their cookies or the fact that their browser sits behind a corporate firewall.
So the accepted answer is WRONG. It isn't about bar.com protecting its resources -- it does that through authentication and authorization. You don't have to create a proxy to send CORS requests -- you create a proxy to strip out the CORS mechanics (automatically responding to the preflight request and returning the proper headers to the browser, while sending a normal request to bar.com). But you would still need authentication to get bar.com's resources, and foo.com would still need to somehow get you to install a proxy to exploit the cross-domain scripting hole that CORS protects against.
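The point that the Origin header carries no guarantee can be seen from how trivially a non-browser client constructs one. This sketch only builds the request options; the hostname and path are made up, and nothing is actually sent:

```javascript
// Outside a browser, nothing stops a client from attaching any Origin
// header it likes. A server that trusts Origin for access control is
// trusting a value the client fully controls.
function spoofedRequestOptions() {
  return {
    hostname: 'bar.com',                     // hypothetical target
    path: '/api/data',
    method: 'GET',
    headers: { Origin: 'https://foo.com' },  // freely faked
  };
}
```

This is why Origin is useful only as a signal to the browser, never as an authentication mechanism for the server.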
But the concluding sentence is correct -- foo.com isn't in control of the resources; the browser is, with a quick check with bar.com to ask whether this is something that was intended.
From the OP:
If it is really all about bar.com protecting itself, then why does it make the browser enforce the policy? The only conceivable situation in which this makes sense is if you have evil.com serving up page Y that impersonates foo.com and tries to send data to bar.com. But CORS is enforced by the browser, so all you'd have to do is make evil.com a proxy that sends faked origin requests to bar.com (data goes from Y to evil.com, evil.com sets its fake origin to foo.com, then sends it to bar.com).
evil.com can already contact bar.com -- just like any human using a browser can (or curl, wget, etc.). The issue is whether evil.com can force your browser to connect to bar.com, which may have IP filters, cookies, firewalls, etc. protecting it, but which JavaScript can reach using your browser. So the browser is the thing that protects the user, by disallowing cross-domain scripting. But sometimes cross-domain scripting is useful (e.g. Google APIs, or a bank connecting to a bill-paying service), and CORS tells the browser that it is OK in this instance.
That isn't to say that there are no holes, or that the standard is the best possible, or that there aren't holes in browser implementations, or that sites aren't too permissive. But those are different questions...
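The "quick check with bar.com" mentioned above takes the form of a preflight exchange: the browser sends an OPTIONS request, and the server answers with what it permits. A hypothetical server-side responder might look like this (all values are illustrative):

```javascript
// Sketch of how a server might answer a CORS preflight (OPTIONS)
// request, telling the browser which origin, methods, and headers
// it is willing to accept. Returning null stands in for sending no
// CORS headers at all, in which case the browser blocks the request.
function preflightHeaders(origin) {
  if (origin !== 'https://foo.com') return null;
  return {
    'Access-Control-Allow-Origin': origin,
    'Access-Control-Allow-Methods': 'GET, POST',
    'Access-Control-Allow-Headers': 'Content-Type',
  };
}
```

Only after a satisfactory preflight response does the browser send the real request on the script's behalf.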

Related

Access to fetch at 'https://www.google.com/' from origin 'null' has been blocked by CORS policy [duplicate]

I've been reading up on CORS and how it works, but I'm finding a lot of things confusing. For example, there are lots of details about things like
User Joe is using browser BrowserX to get data from site.com, which in turn sends a request to spot.com. To allow this, spot has special headers... yada yada yada
Without much background, I don't understand why websites wouldn't allow requests from some places. I mean, they exist to serve responses to requests, don't they? Why would certain people's requests not be allowed?
I would really appreciate a nice explanation (or a link to one) of the problem that CORS is meant to solve.
So the question is,
What is the problem CORS is solving?
The default behavior of web browsers that initiate requests from a page via JavaScript (AKA AJAX) is to follow the same-origin policy. This means that AJAX requests can only be made to the same domain (or a subdomain). Requests to an entirely different domain will fail.
This restriction exists because requests made to other domains by your browser would carry your cookies, which often means you'd be logged in to the other site. So, without the same-origin policy, any site could host JavaScript that called the logout endpoint on stackoverflow.com, for example, and it would log you out. Now imagine the complications when we talk about social networks, banking sites, etc.
So, all browsers simply restrict script-based network calls to their own domain to make it simple and safe.
Site X at www.x.com cannot make AJAX requests to site Y at www.y.com, only to *.x.com
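The same-origin comparison the browser makes can be sketched as a check on scheme, host, and port. This is a simplified model (real browsers have additional special cases), but it captures the rule the answer describes:

```javascript
// Two URLs share an origin only if protocol, hostname, and port all
// match. Paths, query strings, and fragments are irrelevant.
function sameOrigin(a, b) {
  const ua = new URL(a);
  const ub = new URL(b);
  return ua.protocol === ub.protocol &&
         ua.hostname === ub.hostname &&
         ua.port === ub.port;
}
```

So www.x.com and www.y.com are different origins, and even http://x.com vs https://x.com differ, because the scheme differs.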
There are some known workarounds (such as JSONP, which doesn't include cookies in the request), but these are not a permanent solution.
CORS allows these cross-domain requests to happen, but only when each side opts into CORS support.
First, let's talk about the same origin policy. I'll quote from a previous answer of mine:
The same-origin policy was invented because it prevents code from one website from accessing credential-restricted content on another site. Ajax requests are by default sent with any auth cookies granted by the target site.
For example, suppose I accidentally load http://evil.com/, which sends a request for http://mail.google.com/. If the SOP were not in place, and I were signed in to Gmail, the script at evil.com could see my inbox. If the site at evil.com wants to load mail.google.com without my cookies, it can just use a proxy server; the public contents of mail.google.com are not a secret (but the contents of mail.google.com when accessed with my cookies are a secret).
(Note that I've said "credential-restricted content", but it can also be topology-restricted content when a website is only visible to certain IP addresses.)
Sometimes, however, it's not evil.com trying to peek into your inbox. Sometimes, it's just a helpful website (say, http://goodsite.foo) trying to use a public API from another origin (say, http://api.example.com). The programmers who worked hard on api.example.com want all origins to access their site's contents freely. In that case, the API server at api.example.com can use CORS headers to allow goodsite.foo (or any other requesting origin) to access its API responses.
So, in sum, we assume by default that cross-origin access is a bad thing (think of someone trying to read your inbox), but there are cases where it's a good thing (think of a website trying to access a public API). CORS allows the good case to happen when the requested site wants it to happen.
There are security and privacy reasons for not allowing requests from anywhere. If you visited my website, you wouldn't want my code to make requests to Facebook, reddit, your bank, eBay, etc. from your browser using your cookies, right? My site would then be able to make posts, read information, place orders, etc. on your behalf. Or on my behalf with your accounts.

Can a https request header be faked as a security threat?

I am working on an app and would like to use HTTP POST requests to send data from client to server. The server uses the origin URL from the client to decide whether to allow use of our services. So my question is: can the origin URL from the client side be faked, posing a security threat to my services? And if so, what alternatives should I consider?
An example is, if a client-side script running on a page from foo.com wants to request data from bar.com, in the request it must specify the header Origin: http://foo.com, and bar must respond with Access-Control-Allow-Origin: http://foo.com.
What is there to stop malicious code from the site roh.com from simply spoofing the header Origin: http://foo.com to request pages from bar?
What is there to stop malicious code from the site roh.com from simply spoofing the header Origin
The browser is. CORS restrictions and headers only work in the browser and the browser is very strict about controlling them. This is specifically to prevent drive-by attacks from random scripts on random sites a user may unknowingly visit. It’s to protect the user of the browser.
However, absolutely nothing prevents a rogue server from sending an arbitrary HTTP request with any arbitrary headers to your server. I could do so from my command line using curl or such right now. These headers do not pose any sort of guarantee or protection for your server.
'on which the server uses the origin url from the client to allow to use our services'
The origin URL can be faked very easily.
Add a login with a token to prevent that.

How is Cross-Domain even a thing?

I fail to understand certain things that I'd like to ask.
Scenario 1: I create an HTML/JavaScript website in which I use AJAX to obtain the HTML of google.com, and I'm met with the infamous cross-domain issue (No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.)
Scenario 2: I go to www.google.com, I select Source Code in my context menu, and I get the entire code.
Here are the questions:
What purpose does this message (and its implications) have? How is Google protected from my devilishly evil script while I can request the same website through the browser? Isn't the request identical?
How are there origin differences between Scenario 1 and Scenario 2 when the source is the browser, my laptop, my router, my ISP, the internet, and then Google in both cases?
Why, and by whom, was this way of discriminating between local scripts and the browser itself invented? What purpose does it serve? If the request were malicious, it would be equally malicious in both scenarios.
How does Google know what origin the request comes from, and how is it any different from me requesting their website through the address bar? Once again, the same exact origin.
Origin has nothing to do with your browser, laptop, router, ISP, etc. Origin is about the domain making the request. So if a script https://evil.com/devious.js makes an XHR request to http://google.com, the origin of the request is evil.com. Google may never even see it: the user's browser checks the access-control headers (the Access-Control-Allow-Origin you mentioned) and verifies that the script is permitted to make that request.
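The check the browser performs on the response can be modeled as a tiny predicate. This is a simplification (with credentialed requests the wildcard is treated more strictly), but it mirrors the verification step described above:

```javascript
// The requesting script only gets to read the response body if the
// Access-Control-Allow-Origin header matches the page's origin or is
// the wildcard *. Otherwise the browser withholds the response.
function browserExposesResponse(pageOrigin, allowOriginHeader) {
  return allowOriginHeader === '*' || allowOriginHeader === pageOrigin;
}
```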
The purpose of all of this is to prevent a malicious script from (unbeknownst to a user) making requests to Google on their behalf. Think about this in the context of a bank. You wouldn't want any script from any other website being able to make requests to the bank (from your browser on your behalf). The bank can prevent this by disabling cross domain requests.
And to (1): when you open the console on a google.com page, any requests you make have the origin google.com, which is why you are able to make them. This is different from the case I just mentioned, because the user would have to make a conscious effort to copy some malicious JavaScript, go to their bank's website, open up the console, and paste and run it (which many users would be suspicious of).
Websites are not typically stateless; they often store information on your machine, such as a session cookie that identifies the account you're logged into. If your browser didn't block cross-origin requests that aren't explicitly allowed, you could be logged into Gmail and then go to randomguysblog.org, and a script on randomguysblog.org could make a POST request to Gmail using your browser. Since you are logged in, he could send emails on your behalf. Perhaps you're also logged in to your bank, and randomguy decides to initiate a transfer of all your money to his account, or just look around and see how much money you have.
To respond to your questions individually:
What purpose does this message (and its implications) have? How is Google protected from my devilishly evil script while I can request the same website through the browser? Isn't the request identical?
It's not Google directly who is protected; it's the users of your website who are also logged in to Google. The request is identical, but the user's browser won't even send it, assuming the server supports preflight. If the server doesn't support preflight requests, the browser will send the request but won't allow the script that initiated it to see the response. There are other ways to send requests without seeing the response that don't use AJAX, such as submitting a hidden form, which is why CSRF tokens are also needed. A CSRF token makes an action require two requests: a token from the response to the first request is needed to make the second one.
How are there origin differences between Scenario 1 and Scenario 2 when the source is browser, my laptop, my router, my ISP, the internet and then Google in both cases.
In Scenario 2 the user is making both requests themselves, so they must intend to make both. In Scenario 1 the user is only trying to access your website, and your website is making the request to Google using their browser, which they might not want your website to do.
Why and who invented the way to discriminate against local scripts against browser itself, what purpose does it serve? If request would be malicious it would be equally malicious in both scenario's.
The purpose is to protect browser users against malicious scripts. The malicious script can't access the response from Google in Scenario 1. The user can, but the policy isn't intended to protect users from attacking themselves.
How does Google know what origin it comes from and how is it any different than me requesting their website through address bar? Once again, same exact origin.
Google could check the Referer header, but it doesn't actually need to know where the request is from. Google only needs to tell the browser where requests are allowed to come from, and the user's browser decides whether or not to send the request to Google.

Understanding CORS security: what's the use of server allowing browser cross-domain access?

I just found out that in order to allow cross-domain AJAX calls, Access-Control-Allow-Origin header should be set on SERVER side. This looks frustrating to me, let me explain why:
1) The typical use case is that the client wants to make a cross-domain request. I have never heard of a server trying to restrict access from alien webpages. Oh, I remember 'prevent image hotlinking', a funny feature of my hosting, which can easily be beaten by sending a fake Referer header.
2) Even if a server wanted to restrict connections from other domains, it's impossible to do using the capabilities of the HTTP protocol alone. I suggest using tokens for that.
3) What's the use of blocking XMLHttpRequests when you can still use JSONP?
Can you explain why it is done this way?
For those who are still reading, there's a bonus question:
4) Do you know a way to prevent any cross-domain request from a webpage? Imagine a junior web developer creating a login form on a page that has ads or other scripts potentially sniffing passwords. Isn't this the essence of web security? Why isn't anyone talking about that?
I have never heard of a server trying to restrict access from alien webpages.
The Same Origin Policy is a restriction imposed by browsers, not servers.
CORS is the server telling the browser that it can relax its normal security because the data doesn't need that level of protection.
Even if a server wanted to restrict connections from other domains, it's impossible to do using the capabilities of the HTTP protocol.
Which is why HTTP the protocol isn't used for that.
I suggest using tokens for that.
Using a nonce to protect against CSRF solves a different problem.
It's a relatively expensive solution that you only need to get out when it is the side effects of the request that can be problematic (e.g. "Post a new comment") rather than the data being passed back to JavaScript running on another site.
You couldn't use them instead of the Same Origin Policy to protect against reading data across origins because (without the Same Origin Policy) the attacking site would be able to read the token.
What's the use of blocking XMLHttpRequests when you can still use JSONP?
You can't use JSONP unless the server provides the data in JSONP.
Providing the data in JSONP and using CORS to grant permission to access resources are two different ways that the server can allow the browser to access data that is normally protected by the Same Origin Policy.
JSONP was a hack. CORS came later and is more flexible (since it can allow access to any kind of data, respond to request methods other than GET, and allow custom HTTP headers to be added).
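The JSONP hack referred to here amounts to wrapping data in a function call so it can be loaded via a script tag, which the same-origin policy does not block. A minimal sketch of what such an endpoint returns:

```javascript
// A JSONP endpoint doesn't return raw JSON; it returns a script that
// calls the client-supplied callback with the payload. The client
// loads it with <script src="...?callback=handle"></script> and the
// browser executes it, invoking the client's handle() function.
function jsonpResponse(callbackName, data) {
  return `${callbackName}(${JSON.stringify(data)});`;
}
```

This is why JSONP only works when the server cooperates: an ordinary JSON endpoint produces no callable script, so a script tag pointed at it yields nothing useful.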
Can you explain why it is done this way?
The default policy is "No Access" since there is no way for the browser to know if the data being requested is public or not.
Consider this situation:
Alice has an account on Bob's website. That account is password protected and has information that should be kept secret between Alice and Bob (bank records or exam results, for example).
Mallory has another website. It uses Ajax to try to access Bob's site.
Without the Same Origin Policy, Alice might (while logged in to Bob's site) visit Mallory's website. Without Alice's knowledge or permission, Mallory's website sends JavaScript to Alice's browser that uses Ajax to fetch data from Bob's site. Since it is coming from Alice's browser, all of Alice's private information is given to the JavaScript. The JavaScript then sends it to Mallory.
This is clearly not a good thing.
The Same Origin Policy prevents that.
If Bob, as the person running the site, decides that the information is not secret and can be shared publicly, then he can use CORS or JSONP to provide access to it to JavaScript running on other sites.
Do you know a way to prevent any cross-domain request from a webpage?
No. The webpage is a single entity. Trying to police parts of it from other parts is a fool's errand.
Imagine a junior web developer creating a login form on a page that has ads or other scripts potentially sniffing passwords. Isn't this the essence of web security? Why isn't anyone talking about that?
"Be careful about trusting third party scripts" is something that doesn't get mentioned as much as it should be. Thankfully, most ad providers and CDN hosted libraries are supplied by reasonably trustworthy people.
Do you know an easy way of overcoming the problem of missing Access-Control-Allow-Origin
Configure the server so it isn't missing.
Use JSONP instead
Use a proxy that isn't blocked by the same-origin policy to fetch the data instead (though you won't get any credentials the browser might send, because it's Alice who has the account with Bob).

Usefulness of Same Origin Policy with CORS

I have read some articles on the Same Origin Policy and CORS and still don't understand well the security that it brings to the user.
The Same Origin Policy provides genuinely valuable security, preventing a site from one origin from accessing webpage content on another website. This prevents, for instance, the content of an iframe being read by the script of its container, which could be a fake or phishing website.
But then come AJAX and CORS. CORS gives the server the ability to control which origins can access it. But in the end it is the browser that stops the request if it's not allowed, after a header handshake.
So, imagine you have some malicious website myphishing.com. You want to show information from another, trusted website mybank.com through an AJAX request to that site, which is protected by well-configured CORS headers only allowing requests from the mybank.com origin. What if I, the author of myphishing.com, relay all requests to mybank.com through a proxy that alters headers in both directions to fool the client browser and the bank server? It seems one could change the Origin header of the request to mybank.com, and change the CORS response headers to make the browser think myphishing.com is allowed to make the request. With the header handshake passed, you could then send the request and get the response with similar header-substitution tricks.
Possible duplicate but I didn't find my answer here: What is the threat model for the same origin policy?.
What if I, the author of myphishing.com, relay all requests to mybank.com through a proxy that alters headers in both directions to fool the client browser and the bank server?
You could do that anyway, CORS or no CORS.
If the request is coming from your proxy, however, then your proxy has no way to know what credentials the browser would have sent to the server if the request had come from the browser.
