I am working on an app and would like to use HTTP POST requests to send data from the client to the server. The server uses the origin URL from the client to allow access to our services. My question is: can the origin URL from the client side be faked, as a security threat, to access my services? If so, what alternatives should I consider?
For example, if a client-side script running on a page from foo.com wants to request data from bar.com, it must specify the header Origin: http://foo.com in the request, and bar must respond with Access-Control-Allow-Origin: http://foo.com.
What is there to stop malicious code from the site roh.com from simply spoofing the header Origin: http://foo.com to request pages from bar?
What is there to stop malicious code from the site roh.com from simply spoofing the header Origin?
The browser is. CORS restrictions and headers only work in the browser and the browser is very strict about controlling them. This is specifically to prevent drive-by attacks from random scripts on random sites a user may unknowingly visit. It’s to protect the user of the browser.
However, absolutely nothing prevents a rogue server from sending an arbitrary HTTP request with any arbitrary headers to your server. I could do so from my command line using curl or the like right now. These headers do not provide any sort of guarantee or protection for your server.
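For example (a minimal sketch using Node's built-in http module; bar.com/api/data is a hypothetical endpoint standing in for your server), any client running outside a browser can claim whatever Origin it likes:

    // Equivalent in spirit to: curl -X POST -H "Origin: http://foo.com" http://bar.com/api/data
    const http = require('http');

    const req = http.request(
      {
        hostname: 'bar.com',
        path: '/api/data',
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          // Nothing stops a non-browser client from setting this to anything:
          'Origin': 'http://foo.com',
        },
      },
      (res) => {
        console.log('status:', res.statusCode);
        res.resume(); // drain the response body
      }
    );

    req.end(JSON.stringify({ hello: 'world' }));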
'the server uses the origin URL from the client to allow access to our services'
The origin URL can be faked very easily.
Add a login with a token to prevent that.
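For instance, a minimal Express-style middleware sketch of that idea (isValidToken is a placeholder for whatever token or session check you implement; this is not a complete auth system):

    // Require a secret the client must present, rather than trusting the Origin header.
    function requireToken(req, res, next) {
      const auth = req.headers['authorization'] || '';
      const token = auth.startsWith('Bearer ') ? auth.slice(7) : null;

      // isValidToken is a placeholder: verify a signed token (e.g. a JWT) or look up a session.
      if (!token || !isValidToken(token)) {
        return res.status(401).json({ error: 'unauthorized' });
      }
      next();
    }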
I currently have my server file set up like so:
    export default createServer = (container) => {
        const env = process.env.NODE_ENV
        const allowedOrigins = process.env.ALLOWED_ORIGINS || ''
        const allowedOriginsArray = allowedOrigins.split(",").map(item => item.trim());
        const cors = Cors({
            origins: allowedOriginsArray,
            allowedHeaders: [
                'access-control-allow-origin',
                'authorization',
                'Pragma',
                'contact',
            ],
            exposeHeaders: []
        })
    }
Here, I have origins set to an array of strings from my env file. (I have checked the cors documentation page; I believe there may be a typo and origins should be origin. Either way, it does not seem to make a difference.)
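For reference, a minimal sketch assuming the widely used Express cors middleware (an assumption on my part; other CORS libraries use similar but differently named options):

    // Assuming the Express "cors" package: the option is "origin" (singular)
    // and the response-header option is "exposedHeaders".
    const cors = require('cors');

    const corsMiddleware = cors({
      origin: allowedOriginsArray,   // array of allowed origins from the env file
      allowedHeaders: ['Authorization', 'Pragma', 'Contact'],
      exposedHeaders: [],
    });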
In my Postman request, just to test it out, I have set the Origin header to "http://www.test.com" (this is not one of the trusted origins I have in my env file). The request succeeds when it should fail. I'm wondering if I am testing this incorrectly in Postman or if something in my code is incorrect.
The Same Origin Policy is enforced by browsers to stop Mallory's Evil Website from sending your browser some JavaScript that will make an Ajax request to your online banking and send your credit history to Mallory.
CORS is used to relax the Same Origin Policy (so that websites can give access to the data you share with them to certain trusted other websites).
Postman is not a browser. You do not use it to visit websites. It doesn't execute JS embedded in those websites. Mallory can't tell it to make HTTP requests. Only you can do that. Postman doesn't need to enforce the Same Origin Policy, so it doesn't.
You can use Postman to make an HTTP request with an Origin header. Your server can then send back an appropriate Access-Control-Allow-Origin header. However, Postman won't do anything with it except display it in the list of response headers.
It certainly won't relax the Same Origin Policy because it doesn't enforce it in the first place.
From your comment on another answer, I think you have it backwards in your mind; Postman can add Origin because Postman is a tool for "faking" HTTP requests sent by browsers, and not having the ability to add an Origin header would make Postman a poor faker. Origin might be used by some servers, and Postman's ability to send an Origin header means that whatever the server does with it can be tested.
By and large, Origin as a browser-to-server communication has little to do with CORS and SOP, which are security mechanisms that work in the "server-to-honest-browser" direction. Including an Origin header on a request may cause a server to respond with CORS headers, but you should not hard-link the two concepts, because Origin can be sent for non-CORS requests too, and CORS wouldn't specifically require any information in an Origin header in order to work.
For CORS-enabled scenarios the server says (in a response header) which sites are supposed to be using it, and an honest browser (like most people have, when they install the latest Chrome) decides whether the page on show should be fetching data from the server or not. Imagine a browser is showing delta.com and a script on the page tries to fetch some data from some back-end server. Before it actions the request for the data the script wants, the browser makes its own request to the server to check whether it's OK; this is an OPTIONS request. If the server responds to the OPTIONS saying "only scripts served from acme.com should use me" and the browser isn't showing acme.com, then the browser doesn't perform the request the script on the page wants to do; instead an error appears in the console, because the delta.com script wanted data and the browser decided not to carry out the request after doing a quick "whisper in the server's ear".
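A minimal sketch of the server's side of that exchange (plain Node, reusing the acme.com name from this example; a real handler would also deal with allowed headers, max-age, credentials, and so on):

    // What the server might do when the browser's preflight OPTIONS arrives.
    const http = require('http');

    http.createServer((req, res) => {
      if (req.method === 'OPTIONS') {
        // "Only scripts served from acme.com should use me"
        res.setHeader('Access-Control-Allow-Origin', 'https://acme.com');
        res.setHeader('Access-Control-Allow-Methods', 'GET, POST');
        res.writeHead(204);
        return res.end();
      }
      res.end('the data the script actually wanted');
    }).listen(8080);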
A malicious actor using a browser that doesn't care about CORS responses of "please only use me if you're showing a page from acme.com" will be able to use the server anyway. Postman is an example of such a "browser": it makes requests at your behest and doesn't care about CORS at all. If you were at all interested in making Postman behave like a browser, you should:
Pretend to be delta.com
Use postman to issue an OPTIONS to server.com (with an Origin)
Postman gets the "i only talk to acme.com" response in the A-C-A-O header
Your own brain decides not to issue the POST request you originally wanted to do, because you've realized you're pretending to be delta.com and the server only talks to acme.com
This would be a more accurate "Postman pretends to be a normal everyday browser" interaction.
Origin can feed into CORS; the server could have 1000 different sites it is willing to allow requests from. Clearly, sending 1000 site names in every response ("please only use me if you're showing a page from acme1.com, or acme2.com, or... acme999.com") would make the response headers huge, and the browser would then have a thousand sites to check through. If the server is intelligent it can use the Origin to drive the response; if the submitted "Origin: acme547.com" is in the allowed list of 1000 sites the server knows of, then the server can send just that one, acme547.com, back in the "please only use me if..." header; it doesn't need to send the other 999 sites too. If the Origin is from delta.com, then it doesn't send a "please only use me..." header at all, causing an honest browser to fail the request.
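A rough sketch of that origin-reflection idea (plain Node-style code with made-up acme site names; not a complete CORS implementation):

    // Echo back only the requesting Origin if it is on the allow list,
    // instead of advertising every allowed site in every response.
    const allowedSites = [
      'https://acme1.com', 'https://acme2.com', /* ... */ 'https://acme999.com',
    ];

    function addCorsHeaders(req, res) {
      const origin = req.headers['origin'];
      if (origin && allowedSites.includes(origin)) {
        // Only the one matching site is sent back, not the whole list of 1000.
        res.setHeader('Access-Control-Allow-Origin', origin);
        res.setHeader('Vary', 'Origin'); // so caches don't reuse the header for other origins
      }
      // If the Origin isn't on the list (e.g. delta.com), no CORS header is sent,
      // and an honest browser will fail the script's request.
    }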
The request succeeds when it should fail.
That's a misunderstanding of how the Same Origin Policy and CORS work. The request shouldn't fail; it's just that the response will or won't include information that a browser can use to determine whether to grant the requesting origin access to it. If the response doesn't allow the origin that requested it, the browser prevents code from that origin from seeing the response.
Postman, not being a browser, doesn't do that (not least because there's no origin for it to test the response against).
You can't enforce this at the server side reliably, because malicious actors would just send your server what they think your server wants. The point is for the server to reply with information telling a browser who can access that information.
The goal isn't to keep the information the server sends back private (that's what SSL and authentication are for). It's to prevent a user's stored authentication information or active session from being used to steal information using that user's credentials. Here's how it works:
Let's say Alice goes online to Bank B to pay a bill, and while she has that session active she also goes to check on what something costs and visits Malicious Site X to find out. Malicious Site X, hoping that Alice just happens to be a customer of Bank B, sends a request to Bank B's website for (say) Alice's account information. Let's say that request is a perfect mimic of a real request, so Bank B's website happily returns the information — because Alice does happen to be signed in with Bank B's site. Oh no!
This is where the SOP comes in. The browser knows that it was code on X's page that requested the information, and knows that it asked for information from Bank B's website. So it checks the response to see if the response specifically says "yes, it's okay to share this with X." Since the response doesn't say that, the browser prevents Malicious Site X from seeing the response and their evil plot to steal Alice's information is foiled.
I read this blog What Happens If Your JWT Is Stolen?
The blog says that if someone gets the JWT, they can send requests to the server on behalf of the user.
My question is: if the JWT is stolen and my website doesn't allow requests from unknown domains (due to the same-origin policy), will I be safe? Is there a way for the hacker to override the same-origin policy?
I know that with an XSS attack a hacker can send the request from my domain. But here, just assume the hacker only has the JWT and there is no XSS attack.
These policies are a set of rules for browsers. Every HTTP client like curl or Postman can "override" these policies and send custom requests. With Postman you can configure the request as you want.
Same-origin policies don't protect your server from attackers. They protect users of your web application from involuntarily executing malicious code.
If attackers get a valid token they can send valid requests.
"Can same orgin policy prevent attack if jwt is stolen?" No, they can't.
Your website is a server that answers HTTP(S) requests. An HTTP request is just a bunch of text in a few IP packets. Anybody connected to the Internet can send any text they want to your server. The only reliable information in there is the source and destination IP addresses of the client and the server (as otherwise the packets won't arrive). Now, to ensure that a certain request comes from a certain user, you share a secret with the user, which gets sent with the request. When a request arrives, this secret is the only way to authenticate the user. If someone is able to steal or guess that secret, there is no way for the server to distinguish between the user and the attacker.
my website doesn't allow requests from unknown domains
Not quite. The browser, which the user uses to store the secret the server gave them, ensures that the secret is only shared with your server. If you disable cross-origin sharing, code from other websites the user visits with their web browser is unable to perform requests to your server in the background. Thus it prevents that other code from using the secret to perform an action on your server.
In conclusion, CORS policies only help to keep secrets secret; if the secret is not secret anymore, they won't help.
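To make that concrete, here is a minimal sketch of a server-side check using the jsonwebtoken package (an assumption about the stack; the secret name and how it is loaded are hypothetical). The check looks only at the token itself, not at who presented it:

    // The server authenticates the token, not the sender: whoever presents a
    // valid token (the legitimate user or a thief) passes this check.
    const jwt = require('jsonwebtoken');

    function authenticate(req) {
      const auth = req.headers['authorization'] || '';
      const token = auth.startsWith('Bearer ') ? auth.slice(7) : null;
      if (!token) return null;
      try {
        // JWT_SECRET is a hypothetical environment variable holding the signing secret.
        return jwt.verify(token, process.env.JWT_SECRET);
      } catch (err) {
        return null; // invalid or expired token
      }
    }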
I'm working on an Angular web application. I need to make a POST request with a XML body to a server I don't have control over. The request needs an Authorization header. I tried the following:
Send the request directly: It only works when the application is served on http://localhost. Otherwise, the browser shows the following error: Access to XMLHttpRequest at 'server.com' from origin 'my-server.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource..
Use a browser extension that adds the missing header to responses: Unsafe, because the extension adds Access-Control-Allow-Origin: * to responses from all domains and that header allows requests from any domain.
Disable browser security: I ran Chrome using this command: chrome.exe --user-data-dir="C:/Chrome dev session" --disable-web-security. Works when the application is running on an HTTPS server. However, it's unsafe, for the same reasons stated for the previous approach.
Use a third-party proxy: Works for a few requests, but the server blocks the proxy IP because the requests of all clients pass through the same proxy.
My project requires bypassing browser security without compromising security for unrelated domains. My project also requires a different IP to be sent to the server by each client, so that if a client overuses the feature, it won't affect other clients.
Is there a way I can add Access-Control-Allow-Origin: my-server.com to all responses or add the header only for a specific server? Is there a way I can redirect each request to a different IP so that the server won't block all my clients? Are there any other workarounds?
To protect end users, browsers block requests to other servers. Yes, you can use a CORS browser extension, but that is a temporary solution.
You need to set up an endpoint on your server 'my-server.com' to consume your web application's POST requests. From there you can communicate with the server you don't own and set up the proper auth headers, etc.
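A rough sketch of such an endpoint (Express-style, on Node 18+ so that fetch is built in; the route and target URLs are placeholders, and error handling is omitted):

    // my-server.com receives the browser's POST (same origin, so no CORS issue),
    // then forwards it server-to-server to the third party with the auth header.
    const express = require('express');
    const app = express();

    app.use(express.text({ type: 'application/xml' })); // keep the XML body as raw text

    app.post('/proxy/third-party', async (req, res) => {
      const upstream = await fetch('https://server.com/some-endpoint', { // placeholder target
        method: 'POST',
        headers: {
          'Content-Type': 'application/xml',
          // The credential stays on the server instead of being exposed in the browser.
          'Authorization': process.env.THIRD_PARTY_AUTH,
        },
        body: req.body,
      });
      res.status(upstream.status).send(await upstream.text());
    });

    app.listen(3000);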
From what I understand about CORS, this is how it works: I have a site foo.com which serves a page X. X wants to post data to another domain bar.com. If bar.com is CORS-enabled (its responses include Access-Control-Allow-Origin: foo.com), then page X can now send data to bar.com.
As I understand it, to get CORS to work it's all about setting it up on bar.com; it has nothing to do with foo.com. It's all about making sure bar.com doesn't accept requests from any old domain.
However this really doesn't make sense to me. I thought CORS was designed to enable foo.com to dictate who X is allowed to communicate with. If we go back to the previous example, but this time X is compromised by a dodgy script so that it sends data secretly to evil.com, how is CORS going to stop that? evil.com is CORS enabled, and set to *, so it will accept requests from anything. That way a user thinking they are using the site foo.com is unwittingly sending data to evil.com.
If it is really all about bar.com protecting itself, then why does it make the browser enforce the policy? The only conceivable situation in which this makes sense is if you have evil.com serving up page Y that impersonates foo.com, that tries to send data to bar.com. But CORS is enforced by the browser, all you'd have to do is make evil.com a proxy that sends faked origin requests to bar.com (data goes from Y to evil.com, evil.com sets its fake origin to foo.com then sends it to bar.com).
It only makes sense to me if it works the other way round. foo.com is CORS enabled, and its headers are set to Access-Control-Allow-Origin bar.com. That way rogue scripts would be denied access to evil.com by the browser. It then makes sense for the browser to enforce the policy because it's running the scripts that could go rogue. It won't stop rogue sites from trying to send rogue data to bar.com, but bar.com can protect itself with a username/password. If foo.com has endpoints that it's expecting data back from X, then you can embed tokens into X to ensure evil.com doesn't send data to it instead.
I feel like I'm not understanding something fundamentally important here. Would really appreciate the help.
However this really doesn't make sense to me. I thought CORS was designed to enable foo.com to dictate who X is allowed to communicate with.
No, it's about bar.com controlling use of its content.
But CORS is enforced by the browser, all you'd have to do is make evil.com a proxy that sends faked origin requests to bar.com...
Yup. And if you do, and the people at bar.com notice and care, they disallow requests from your server. You move it, they disallow the new one. Whack-a-mole time. But painful as that game of whack-a-mole is, it's a lot less painful than if the requests come directly from each individual user of foo.com, from their desktop.
Having foo.com enforce what foo.com can do doesn't make any sense. foo.com already enforces what foo.com can do, because it's foo.com that serves foo.com's content and scripts.
It isn't about foo.com, nor about bar.com. It is about the user.
There are two things that CORS protects against. The first is access to resources behind the firewall. The second is resources that are normally protected, except when a request is sent from a browser carrying authentication or other sensitive-data cookies.
CORS is a browser technology, with support from servers, that allows foo limited freedom to call outside of its domain. It is a restricted hole punched in the restriction against cross-domain scripting.
Anyone can fake the Origin header and create a CORS preflight or simple request -- of course, anyone can connect directly to the bar server and make the requests without using CORS at all. Any browser can directly connect to bar.com and get data. But a modern browser will not run a script from foo.com that accesses a bar.com resource. People visiting websites are protected against sites designed to exploit their cookies or the fact that their browser is behind the corporate firewall.
So the accepted answer is WRONG. It isn't about bar.com protecting its resources -- it does this through authentication and authorization. You don't have to create a proxy to send CORS requests -- you create a proxy to strip out the CORS requests (automatically responding to the preflight request, and returning the proper headers to the browser, but sending a normal request to bar.com). But you will still need authentication to get bar.com's resources, and foo.com would still need to somehow get you to install a proxy to exploit the cross domain scripting hole that CORS protects against.
But the concluding sentence is correct -- foo.com isn't in control of the resources -- it is the browser, with a quick check with bar.com to ask it if this is something that was intended.
From the OP:
If it is really all about bar.com protecting itself, then why does it make the browser enforce the policy? The only conceivable situation in which this makes sense is if you have evil.com serving up page Y that impersonates foo.com, that tries to send data to bar.com. But CORS is enforced by the browser, all you'd have to do is make evil.com a proxy that sends faked origin requests to bar.com (data goes from Y to evil.com, evil.com sets its fake origin to foo.com then sends it to bar.com).
evil.com can already contact bar.com -- just like any human using a browser can (or curl or wget, etc). The issue is whether evil.com can force your browser to connect to bar.com, which may have IP filters, cookies, firewalls, etc. protecting it, but which JavaScript can reach using your browser. So the browser is the thing that protects the user, by disallowing cross-domain scripting. But sometimes it is useful to cross-domain script (e.g. Google APIs, or a bank connecting to a bill-paying service), and CORS tells the browser that it is OK in this instance.
That isn't to say that there are no holes, or that the standard is the best, or that there aren't holes in the implementation in the browser, or that sites are too permissive. But those are different questions...
I have read some articles on the Same Origin Policy and CORS and still don't understand well the security that it brings to the user.
The Same Origin Policy provides truly valuable security, preventing a site from one origin from accessing some webpage content on another website. This prevents the threat of having the content of an iframe accessed by the script of the containing page, which might be a faked/phishing website.
But here come AJAX and CORS. CORS gives the server the ability to control which origins can access it. But, in the end, it is the browser that stops the request if it is not allowed, after the header handshake.
So, imagine you have some malicious website myphishing.com. You want to show information from another, trusted website mybank.com through an AJAX request to that site. That site is protected by well-configured CORS headers that only allow requests from the mybank.com origin. What if I, the author of myphishing.com, relay all requests to mybank.com through a proxy that alters headers in both directions to fool the client browser and the bank server? It seems one could change the Origin header in the request to a mybank.com one, and change the CORS response headers to make the browser think myphishing.com is allowed to make the request. With the header handshake passed, you could then send the request and get the response with similar header-substitution tricks.
Perhaps I'm totally misled, but I would be very pleased if someone could show me where I have misunderstood the whole thing.
Possible duplicate, but I didn't find my answer here: What is the threat model for the same origin policy?
What if I, the author of myphishing.com, relay all requests to mybank.com through a proxy that alters headers in both directions to fool the client browser and the bank server?
You could do that anyway, CORS or no CORS.
If the request is coming from your proxy, however, then it has no way to know what credentials the browser would have sent to the server if the request had come from the browser.