I already have a working modal login dialog. The problem is that even when the originating page is loaded via http, I still want to pass the credentials to the server via https. And of course I want to do this with as little rewriting of working code as possible.
I cannot use JSONP in my case because the login data is passed to the server via a POST AJAX request.
Any ideas?
The Same Origin Policy makes this impossible (at least in browsers that don't support cross-domain XHR, and there are enough of those to matter).
(And since the host document is served over HTTP, it is subject to interception and alteration on the wire, which would make the data vulnerable even if it were transported over SSL.)
Just out of curiosity, why don't you force the user to a secure page to begin with? We had a similar issue a while back, so now we force the user to https (via a redirect) as soon as they hit our page.
Please note that according to the same-origin policy this should not be possible, as you're trying to post credentials from a non-secured page to a secured one. And if the login landing page does not use SSL, an attacker could modify the page as it is sent to the user and change the form submission location, or insert JavaScript that steals the username/password as it is typed. So the login landing page must use SSL.
To illustrate, the following table gives an overview of typical outcomes for checks against the URL "http://www.example.com/dir/page.html".
Compared URL Outcome Reason
http://www.example.com/dir/page2.html Success Same protocol and host
http://www.example.com/dir2/other.html Success Same protocol and host
http://u:pass@www.example.com/x/o.html Success Same protocol and host
http://www.example.com:81/dir/other.html Failure Same protocol and host but different port
https://www.example.com/dir/other.html Failure Different protocol
http://en.example.com/dir/other.html Failure Different host
http://example.com/dir/other.html Failure Different host (exact match required)
http://v2.www.example.com/dir/other.html Failure Different host (exact match required)
http://www.example.com:80/dir/other.html Depends Port explicit. Depends on implementation in browser.
Unlike other browsers, Internet Explorer does not include the port in the calculation of the origin, using the Security Zone in its place.
How to relax the same-origin policy
In some circumstances the same-origin policy is too restrictive, posing problems for large websites that use multiple subdomains. Here are four techniques for relaxing it:
document.domain property,
Cross-Origin Resource Sharing,
Cross-document messaging (see the sketch after this list),
JSONP.
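Of these, cross-document messaging is perhaps the easiest to sketch briefly. Roughly (the frame URL and field names here are made up, and note that, as pointed out above, a parent page served over plain HTTP can still be tampered with, so this does not make the flow truly secure):

// Parent page, served from http://www.example.net (hypothetical):
var frame = document.getElementById('loginFrame'); // <iframe src="https://www.example.net/login-frame.html">
// Once the frame has loaded, hand the credentials to the HTTPS document:
frame.contentWindow.postMessage(
  { user: 'alice', pass: 'secret' },   // objects are structured-cloned in modern browsers
  'https://www.example.net'            // deliver only to the HTTPS frame
);

// Inside the HTTPS iframe, which is same-origin with the secure server:
window.addEventListener('message', function (event) {
  if (event.origin !== 'http://www.example.net') return; // accept only the expected parent
  // Submit event.data to the HTTPS login endpoint with a normal same-origin XHR here.
});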
If you really want to do that, it is possible, but you need to make sure that your website's public key certificate has been signed by a certification authority and is therefore valid.
If it is not, you may try adding your certificate to the whitelist in your web browser, or try different web browsers.
Alternatively, you can make sure that users are always on a secure page when they are presented with the login form, or disable the modal dialog for login forms.
Another workaround is to add a rewrite rule that redirects the non-secured traffic to SSL, e.g.
# Various rewrite rules.
<IfModule mod_rewrite.c>
  RewriteEngine on
  # Force <front> to SSL for modal use of the secure login module.
  # (.htaccess context: the empty pattern matches the site's front page only.)
  RewriteCond %{HTTPS} off
  RewriteRule ^$ https://www.example.net/ [R=301,L]
</IfModule>
See also:
Is posting from HTTP to HTTPS a bad practice?
Is it secure to submit from a HTTP form to HTTPS?
How to stop “secure and nonsecure items” warning on your site?
Getting Chrome to accept self-signed localhost certificate
Installing root certificate in Google Chrome
I've been reading up on CORS and how it works, but I'm finding a lot of things confusing. For example, there are lots of details about things like
User Joe is using browser BrowserX to get data from site.com,
which in turn sends a request to spot.com. To allow this, spot has
special headers... yada yada yada
Without much background, I don't understand why websites wouldn't allow requests from some places. I mean, they exist to serve responses to requests, don't they? Why would certain people's requests not be allowed?
I would really appreciate a nice explanation (or a link to one) of the problem that CORS is meant to solve.
So the question is,
What is the problem CORS is solving?
The default behavior of web browsers that initiate requests from a page via JavaScript (AKA AJAX) is that they follow the same-origin policy. This means that requests can only be made via AJAX to the same domain; requests to an entirely different domain will fail.
This restriction exists because requests made at other domains by your browser would carry along your cookies which often means you'd be logged in to the other site. So, without same-origin, any site could host JavaScript that called logout on stackoverflow.com for example, and it would log you out. Now imagine the complications when we talk about social networks, banking sites, etc.
So, all browsers simply restrict script-based network calls to their own domain to make it simple and safe.
Site X at www.x.com cannot make AJAX requests to site Y at www.y.com, only to *.x.com
There are some known workarounds in place (such as JSONP), but these are not a permanent solution.
CORS allows these cross-domain requests to happen, but only when each side opts into CORS support.
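As a rough sketch of what that opt-in looks like from the page's side (reusing the x.com/y.com names above; the endpoint path is made up):

// Script running on a page at http://www.x.com:
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www.y.com/api/data'); // cross-origin; the browser adds "Origin: http://www.x.com"
xhr.onload = function () {
  // The browser only exposes the response to this script if www.y.com replied with
  //   Access-Control-Allow-Origin: http://www.x.com   (or *)
  console.log(xhr.responseText);
};
xhr.send();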
First, let's talk about the same origin policy. I'll quote from a previous answer of mine:
The same-origin policy was invented because it prevents code from one website from accessing credential-restricted content on another site. Ajax requests are by default sent with any auth cookies granted by the target site.
For example, suppose I accidentally load http://evil.com/, which sends a request for http://mail.google.com/. If the SOP were not in place, and I was signed into Gmail, the script at evil.com could see my inbox. If the site at evil.com wants to load mail.google.com without my cookies, it can just use a proxy server; the public contents of mail.google.com are not a secret (but the contents of mail.google.com when accessed with my cookies are a secret).
(Note that I've said "credential-restricted content", but it can also be topology-restricted content when a website is only visible to certain IP addresses.)
Sometimes, however, it's not evil.com trying to peek into your inbox. Sometimes, it's just a helpful website (say, http://goodsite.foo) trying to use a public API from another origin (say, http://api.example.com). The programmers who worked hard on api.example.com want all origins to access their site's contents freely. In that case, the API server at api.example.com can use CORS headers to allow goodsite.foo (or any other requesting origin) to access its API responses.
So, in sum, we assume by default that cross-origin access is a bad thing (think of someone trying to read your inbox), but there are cases where it's a good thing (think of a website trying to access a public API). CORS allows the good case to happen when the requested site wants it to happen.
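The server-side half of that opt-in could be sketched like this, assuming for illustration that api.example.com were a plain Node.js service (this is not their real API, just a sketch):

const http = require('http');

http.createServer(function (req, res) {
  res.writeHead(200, {
    'Content-Type': 'application/json',
    // Let any requesting origin read the response; a specific origin such as
    // 'http://goodsite.foo' could be echoed back instead of '*'.
    'Access-Control-Allow-Origin': '*'
  });
  res.end(JSON.stringify({ hello: 'world' }));
}).listen(8080);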
There are security and privacy reasons for not allowing requests from anywhere. If you visited my website, you wouldn't want my code to make requests to Facebook, reddit, your bank, eBay, etc. from your browser using your cookies, right? My site would then be able to make posts, read information, place orders, etc. on your behalf. Or on my behalf with your accounts.
I fail to understand certain things that I'd like to ask.
Scenario 1: I create an HTML/JavaScript website, in which I use AJAX to obtain the HTML of Google.com. I'm met with the infamous cross-domain issue (No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'null' is therefore not allowed access.)
Scenario 2: I enter www.google.com, I select Source Code in my context menu, and I get the entire code.
Here are the questions:
What purpose does this message (and its implications) have? How is Google protected from my devilish evil script whilst I can request the same website through the browser? Isn't the request identical?
How are there origin differences between Scenario 1 and Scenario 2 when the source is the browser, my laptop, my router, my ISP, the internet and then Google in both cases?
Why, and by whom, was this way of discriminating against local scripts (as opposed to the browser itself) invented, and what purpose does it serve? If the request were malicious it would be equally malicious in both scenarios.
How does Google know what origin the request comes from, and how is that any different from me requesting their website through the address bar? Once again, it is the exact same origin.
Origin has nothing to do with your browser, laptop, router, ISP, etc. Origin is about the domain which is making the request. So if a script https://evil.com/devious.js is making an XHR request to http://google.com, the origin for the request is evil.com. Google never knows this. The user's browser checks the access control headers (the Access-Control-Allow-Origin you mentioned) and verifies that that script is permitted to make that request.
The purpose of all of this is to prevent a malicious script from (unbeknownst to a user) making requests to Google on their behalf. Think about this in the context of a bank. You wouldn't want any script from any other website being able to make requests to the bank (from your browser on your behalf). The bank can prevent this by disabling cross domain requests.
And to (1): When you open console on a google.com page, any requests you make have the origin google.com, which is why you are able to make these requests. This is different than the case I just mentioned, because the user would have to make a conscious effort to copy some malicious javascript, go to their bank's website, open up the console, and paste and run it (which many users would be suspicious of).
Websites are not typically stateless; they often store information on your machine, such as a session cookie that identifies the account you're logged into. If your browser didn't block cross-origin requests that aren't explicitly allowed, you could be logged into Gmail and then go to randomguysblog.org, and a script on randomguysblog.org could make a POST request to Gmail using your browser. Since you are logged in, he could send emails on your behalf. Perhaps you're also logged in to your bank, and randomguy decides to initiate a transfer of all your money to his account, or just look around and see how much money you have.
To respond to your questions individually:
What purpose does this message (and its implications) have? How is Google protected from my devilish evil script whilst I can request the same website through the browser? Isn't the request identical?
It's not Google directly who is protected, it's the users of your website who are also logged into Google. The request is identical, but if the request triggers a pre-flight check and the server does not allow the origin, the user's browser won't even send the actual request; for requests that don't trigger a pre-flight, the browser will send the request but won't allow the script that initiated it to see the response. There are other ways to send requests without seeing the response that don't use Ajax, such as submitting a hidden form, which is why CSRF tokens are also needed. A CSRF token makes an action require two requests, and the token returned in the response to the first request is needed to make the second one.
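As a rough sketch of the token idea (the field, endpoint and token value are made up): the first response embeds a per-session token that another origin cannot read, and the server rejects any POST that does not echo it back.

<!-- Returned by the first request; the token is random and tied to the user's session: -->
<form action="/transfer" method="POST">
  <input type="hidden" name="csrf_token" value="a7f3c91e">
  <input type="text" name="amount">
  <input type="submit" value="Send">
</form>
<!-- The server performs the action only if csrf_token matches the value stored in the
     session. A page on another origin can auto-submit a hidden form at /transfer, but it
     cannot read this page to learn the token, so its forged POST is rejected. -->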
How are there origin differences between Scenario 1 and Scenario 2 when the source is the browser, my laptop, my router, my ISP, the internet and then Google in both cases?
In Scenario 2 the user is making both requests themselves, so they must intend to make both requests. In Scenario 1 the user is only trying to access your website, and your website is making the request to Google using their browser, which they might not want your website to do.
Why, and by whom, was this way of discriminating against local scripts (as opposed to the browser itself) invented, and what purpose does it serve? If the request were malicious it would be equally malicious in both scenarios.
The purpose is to protect browser users against malicious scripts. The malicious script can't access the response from Google in Scenario 1. The user can, but the policy is not intended to protect users from attacking themselves.
How does Google know what origin the request comes from, and how is that any different from me requesting their website through the address bar? Once again, it is the exact same origin.
Google could check the Referer header, but they don't actually need to know where the request is from. Google only needs to tell the browser where requests are allowed to come from, and the user's browser decides whether or not to send the request to Google.
I have a website with SSL (https instead of http). I am attempting to embed a widget that references files (js, css) from another domain. This other domain does not have SSL (http instead of https). As a result, I get a net::ERR_INSECURE_RESPONSE error and the widget will not load.
How can I tell my site to allow the insecure content used by the widget?
You cannot tell a website to allow insecure content in the page, as it is not the site that is blocking access to the scripts - it's the browser.
When a page is loaded over SSL, the browser enforces a minimum level of security for the connection and ensures that no unencrypted connections are made from the page, so no request data can be transmitted over an unencrypted channel.
The simplest option I can think of to work around this is to host the scripts yourself on the domain that is secured by SSL. It may or may not be possible to edit the widget easily so it uses the locally hosted script instead of the ones from the non-https site.
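For instance, if the widget pulls in its script with a plain <script> tag, hosting it yourself amounts to something like this (both URLs are hypothetical):

<!-- Before: blocked as insecure content on an https page -->
<script src="http://widgets.example.net/widget.js"></script>

<!-- After: serve your own copy from the SSL-secured domain -->
<script src="https://www.yoursite.example/assets/widget.js"></script>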
I'd like to generate a file on a website using JavaScript and provide it for download by the user. I learned that this is not possible using plain JavaScript and HTML5.
I'm thinking of posting the generated file contents to a CGI function on my server that just echoes the data. By setting the right headers I could provide the data for download this way.
I'm wondering if such an echo CGI function could be misused and result in security problems. The website (also the CGI function) is password protected using basic authentication.
Any comments?
It is indeed possible to do this using data: URLs. A minimal sketch (the file name and contents are placeholders), using a link with the download attribute:
<a download="file.txt" href="data:text/plain;charset=utf-8,Hello%20World">text file</a>
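A rough JavaScript sketch of the same idea, generating the contents on the fly (works in browsers that support the download attribute; the file name and contents are again placeholders):

var contents = 'Hello World';                        // data generated in the page
var link = document.createElement('a');
link.href = 'data:text/plain;charset=utf-8,' + encodeURIComponent(contents);
link.download = 'file.txt';                          // suggested file name
document.body.appendChild(link);
link.click();                                        // triggers the download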
However, if you still wish to use the CGI method, there are security risks of allowing a page to echo POSTed data. This is known as reflected Cross Site Scripting (XSS).
Say you have a CGI listening on https://example.com/cgi-bin/downloader.
Assume the user is already authenticated with your basic authentication and then receives an email containing a link. The user visits the website in the email (say evil.com), which submits a POST request via JavaScript to https://example.com/cgi-bin/downloader containing an HTML document that in turn contains some JavaScript to send the cookies from your domain to the attacker. Even if you set the correct Content-Type header to identify the HTTP response as XML, some browsers (IE) will try to sniff the content and present it to the user as the content type the browser thinks it is (HTML in this case). To avoid this, make sure that the following header is set in the response:
X-Content-Type-Options: nosniff
So using this header in combination with a Content-Type of text/xml in the response should mitigate the risk of XSS. I also recommend setting the Content-Disposition header so the file will be downloaded rather than displayed:
Content-Disposition: attachment; filename="bar.xml"
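Put together, an echo endpoint with those headers might look roughly like this (a Node.js sketch rather than your actual CGI; the port is arbitrary):

const http = require('http');

http.createServer(function (req, res) {
  if (req.method !== 'POST') { res.writeHead(405); return res.end(); }
  var body = '';
  req.on('data', function (chunk) { body += chunk; });
  req.on('end', function () {
    res.writeHead(200, {
      'Content-Type': 'text/xml',                               // declare the real type
      'X-Content-Type-Options': 'nosniff',                      // stop content sniffing
      'Content-Disposition': 'attachment; filename="bar.xml"'   // download, don't display
    });
    res.end(body);                                              // echo the client's own data back
  });
}).listen(8000);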
To prevent other sites from making requests to your CGI service in the first place, you could use a CSRF prevention method. The recommended method is the "Synchronizer Token Pattern"; this involves creating a server-side token that is tied to the user's session and must be submitted with the POST request. Whether this is possible in your system using basic authentication is for you to decide. The Referer header can also be used, although this is less secure:
checking the referer is considered to be a weaker form of CSRF protection. For example, open redirect vulnerabilities can be used to exploit GET-based requests that are protected with a referer check, and some organizations or browser tools remove referrer headers as a form of data protection. There are also common implementation mistakes with referer checks. For example, if the CSRF attack originates from an HTTPS domain then the referer will be omitted. In this case the lack of a referer should be considered to be an attack when the request is performing a state change. Also note that the attacker has limited influence over the referer. For example, if the victim's domain is "site.com" then an attacker can have the CSRF exploit originate from "site.com.attacker.com", which may fool a broken referer check implementation. XSS can be used to bypass a referer check.
In short, referer checking is a reasonable form of CSRF intrusion detection and prevention even though it is not a complete protection. Referer checking can detect some attacks but not stop all attacks. For example, if the HTTP referer is from a different domain and you are expecting requests from your domain only, you can safely block that request.
You should also test this to make sure that script will not be executed when an XML is downloaded using Internet Explorer.
If all it does is echo data coming from the client, I don't see any security issues. That data was already known to the client. You are just changing headers so the browser will allow you to save the contents as a file.
You are not disclosing any information to parties that don't already have access.
I am designing a website that uses JavaScript Ajax XHR calls to retrieve dynamic data.
I have two C++ based applications that serve data on their own ports, and I have control of the ports that they use.
Dynamic Data is requested with HTTP 1.1 requests and data is returned with an HTTP 1.1 header, and I have control of the header data. Effectively, I have a custom HTTP server embedded in my dynamic data applications, so I have full control of both ends of the conversation.
If I choose two arbitrary ports to serve the dynamic data on, will the browser-based user have to open those ports on their firewall to allow the request from my web page?
For example, the web page would be served as www.mydomain.com/default.aspx, and within it, it would have Ajax XHR calls to make connections to www.mydomain.com:8080 and www.mydomain.com:8081 (or whatever port numbers are chosen).
Am I going to be blocked by the same origin policy?
Could I get away with using ports that are often open on firewalls, but not actively being served on my server?
What is the best way to work around this so that the user does not have to make firewall changes and does not get a cross domain warning? I'm hoping not to use iFrames if possible.
This topic may have been asked before, I have searched thoroughly but have not found anything that matches.
Wikipedia says you have to keep the same scheme, host and port but notes that some unnamed browsers do not enforce the port.
http://en.wikipedia.org/wiki/Same_origin_policy
The scheme is the protocol, e.g. http: or https:.
The host name is like my.yahoo.com, but in some browsers there are ways to access any ???.yahoo.com host.
The port is pretty clear, but notice that HTTP and HTTPS use different ports by default (80 and 443).
This page is interesting:
http://www.w3.org/Security/wiki/Same_Origin_Policy
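To make the port point concrete for your setup (a sketch; the path is made up, and CORS, discussed earlier in this thread, is one way to allow it since you control the embedded server's headers):

// Page served from http://www.mydomain.com/default.aspx
var xhr = new XMLHttpRequest();
xhr.open('GET', 'http://www.mydomain.com:8080/dynamic-data'); // same host, different port = different origin in most browsers
xhr.onload = function () { console.log(xhr.responseText); };
xhr.send();
// For the browser to expose the response, your embedded server can reply with:
//   Access-Control-Allow-Origin: http://www.mydomain.com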