I'm using a script that detects JavaScript errors and reports them to my backend. However, I am getting cryptic "Script error." messages, which is not very helpful for debugging.
According to Cryptic "Script Error." reported in Javascript in Chrome and Firefox, the reason is that the script that threw the error is served from a different origin than my site.
Since I'm using a CDN, all of my scripts are effectively served from another domain. Is there a way to get more useful error messages while still using a CDN?
Also, everything is served over SSL, and I would like to keep it that way.
I had a similar problem: my scripts are served by a subdomain and fall under the same origin restriction. However, I solved this by:
1) adding the crossorigin attribute to every script tag, like this:
<script type="text/javascript" src="https://subdomain.mydomain.tld/script.js" crossorigin="anonymous"></script>
2) modifying the Apache httpd.conf by adding the following inside every vhost (you must enable mod_headers):
<IfModule mod_headers.c>
Header add Access-Control-Allow-Origin "*.mydomain.tld"
</IfModule>
On one of my servers I was not able to make this work except by replacing *.mydomain.tld with * (note that Access-Control-Allow-Origin does not actually accept partial wildcards like *.mydomain.tld; the value must be a single exact origin or *).
Be aware of the security implications of using *: any origin can then read the responses, which can leak information. There is plenty of documentation on CORS, the same-origin policy and the crossorigin attribute for images and fonts, but very little about the details of crossorigin on script tags.
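For context on how this ties back to error reporting: once the script tag carries crossorigin and the CDN answers with the CORS header, window.onerror receives the real message, file, line and stack instead of the opaque "Script error.". A minimal sketch of such a handler (the /log-js-error reporting endpoint is a made-up example):
// With crossorigin="anonymous" on the script tag and a matching
// Access-Control-Allow-Origin header from the CDN, full error details
// show up here instead of just "Script error."
window.onerror = function (message, source, lineno, colno, error) {
  var payload = JSON.stringify({
    message: message,
    source: source,
    line: lineno,
    column: colno,
    stack: error && error.stack
  });
  // "/log-js-error" is a hypothetical reporting endpoint on your backend.
  if (navigator.sendBeacon) {
    navigator.sendBeacon("/log-js-error", payload);
  } else {
    new Image().src = "/log-js-error?e=" + encodeURIComponent(payload);
  }
  return false; // let the browser's default handling run as well
};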
Hope this helps ...
Try using jsonp for the dataType option in jQuery.ajax. The remote server will also need to support JSONP. It gets around the browser's same-origin restrictions on cross-domain requests.
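A minimal sketch of what that looks like (the endpoint URL is a made-up example; the server has to wrap its JSON response in the callback name jQuery passes in the query string):
// jQuery turns this into a <script> tag request with a ?callback=... parameter,
// so it is not subject to the XMLHttpRequest same-origin restriction.
$.ajax({
  url: "https://api.example.com/data",   // hypothetical JSONP-aware endpoint
  dataType: "jsonp",
  success: function (data) {
    console.log("Received:", data);
  },
  error: function () {
    console.log("JSONP request failed");
  }
});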
Alternatively, you could use an iframe with jQuery in each window, and use HTML5 postMessage to communicate back and forth between the two windows on different domains.
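A rough sketch of the postMessage approach (the domains, element id and message shapes are assumptions):
// In the parent page, e.g. https://www.example.com:
var frame = document.getElementById("remoteFrame");
frame.addEventListener("load", function () {
  frame.contentWindow.postMessage({ action: "getData" }, "https://cdn.example.com");
});
window.addEventListener("message", function (event) {
  if (event.origin !== "https://cdn.example.com") return; // only trust the expected origin
  console.log("Reply from iframe:", event.data);
});

// In the embedded page, e.g. https://cdn.example.com:
window.addEventListener("message", function (event) {
  if (event.origin !== "https://www.example.com") return;
  event.source.postMessage({ result: "hello" }, event.origin);
});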
Or, if you control both servers, you can set CORS headers so that each origin allows the other.
JSONP has been my weapon of choice for this kind of problem, but the others are just as legitimate.
I'm trying to load some external JavaScript, but Chrome isn't having it.
To be clear, I'm using a bookmarklet to create a new script element, set its src attribute to the URL of my source code, and then append it to the document head.
Here's the code:
javascript:var script=document.createElement("script");script.src="URL";document.getElementsByTagName('head')[0].appendChild(script);
Unfortunately, Chrome's Cross-Origin Read Blocking (CORB) algorithm is blocking the injected script. I believe this is because the source document is served as plain text rather than with an acceptable JavaScript content type.
Is there any workaround for this?
Many websites use CSP (Content Security Policy) headers that do not let you load scripts from other URLs. You should check the CSP headers of the website you are trying to inject scripts into.
More info on CSP:
https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP
UPDATE
I see that you edited your question (after my initial answer) and are now asking specifically about the CORB issue. In that case you have to ensure that your script is returned with the correct Content-Type header. If your script doesn't come back as text/javascript, Chrome will not execute it.
How do you serve the JavaScript file? If you give more details we can try to help.
Usually servers send the correct header for JS files, but there are exceptions. For example, when a user uploads a JS file to GitHub, GitHub serves it without a JavaScript content type. This is deliberate: they don't want a webmaster who trusts GitHub to end up executing arbitrary user-uploaded scripts on his website.
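If you control the server the script comes from, the fix is simply to send a JavaScript Content-Type with the file. A minimal sketch using Node's built-in http module (the path and port are made up for illustration):
// Serve the script with an explicit JavaScript Content-Type so Chrome's
// CORB check does not classify it as plain text and block it.
var http = require("http");
var fs = require("fs");

http.createServer(function (req, res) {
  if (req.url === "/bookmarklet.js") {            // hypothetical path
    res.writeHead(200, { "Content-Type": "text/javascript" });
    fs.createReadStream("./bookmarklet.js").pipe(res);
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);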
More info:
https://www.chromium.org/Home/chromium-security/corb-for-developers
I am looking for an approach to allow only whitelisted scripts to run within a sandboxed iframe. I was thinking of an iframe-sandbox directive that allows only whitelisted scripts to run within an iframe. The analogy is the script-src directive in the Content Security Policy.
The problem:
<iframe sandbox="allow-same-origin allow-scripts" src="https://app.thirdparty.com" width="100%" height="800" frameBorder="0"></iframe>
The app in the iframe provides valuable functionality for my website. However, it pulls in external resources that I would like to control (i.e., block), e.g., AnalyticsJavaScript.com and TrackingPixel.com. I would like to allow scripts from app.thirdparty.com but block AnalyticsJavaScript.com and TrackingPixel.com.
Any help appreciated.
The answer to this is unfortunately complicated. With the advent of iframe sandboxing the question seems simple enough, but the spec that you're looking for is very much a work in progress. Thus, if you want decent browser support, the issue devolves into how to modify an iframe's content, which usually involves some sort of proxy.
Content Security Policy
The spec you really need is CSP Embedded Enforcement. At its simplest, you would allow specific scripts with the iframe attribute csp="...".
<iframe ...
src=""
csp="script-src https://app.thirdparty.com/"
...></iframe>
Any scripts from domains not specified (i.e. tracking scripts as in the question) would not be allowed in the response. Note that limiting scripts to those from a specified source does rely on cooperation with the third party app's server. If the server does not inform the user agent that it will adhere to the CSP restrictions then the response will be blocked.
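To spell out how that cooperation works (per the Embedded Enforcement draft): the browser sends your required policy to the embedded app in a Sec-Required-CSP request header, and the app must answer either with a Content-Security-Policy at least as strict as the one required, or with an Allow-CSP-From header naming your origin (or *). A minimal sketch of the latter, assuming the third party happened to run a Node http server (the origin value is just an example):
// Hypothetical handler on app.thirdparty.com opting in to the embedder's policy.
var http = require("http");

http.createServer(function (req, res) {
  // The embedding page's required policy arrives in this request header.
  console.log("Required CSP:", req.headers["sec-required-csp"]);

  // Declare which embedders may impose a CSP on this response.
  res.setHeader("Allow-CSP-From", "https://www.yoursite.com");
  res.setHeader("Content-Type", "text/html");
  res.end("<html><body>third-party app</body></html>");
}).listen(8080);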
The CSP is still a working draft and may change in the future. As stated in the comments, Chrome 61 and Opera 48 have implemented the CSP spec, but at this stage there is no sign from Firefox, Edge or Safari that they will also implement it. Unless you can guarantee that your users will only be using a browser that supports the spec, the tracking scripts will still be present for a very large percentage of users.
The remaining suggestions all involve modifying the iframe's content to remove the offending scripts.
Reverse proxy
Creating a reverse proxy to block a couple of tracking scripts in an iframe is probably equivalent to using a nuclear warhead to light a camp fire as far as overkill goes. But, if you are able to configure your server to this extent, it is the most reliable and seamless method for iframe content injection/modification/blocking that I've found.
The Wikipedia page states:
A reverse proxy is a type of proxy server that retrieves resources on behalf of a client from one or more servers. These resources are then returned to the client, appearing as if they originated from the proxy server itself.
Because the reverse proxy is an intermediary between the third party app and your site, it can transparently modify the responses to remove the undesired scripts. I'll use Apache in this example, but your implementation really depends on what server you're already using.
You need a subdomain for the proxy that points to your server IP, e.g. proxywebapp.yourdomain.com. On your server you would then create a virtual host in httpd.conf that uses the Apache mod_proxy module. Within your virtual host configuration you would then substitute the script calls to AnalyticsJavaScript.com and TrackingPixel.com with blanks. If the third party app must use HTTPS then reverse proxying gets trickier, as you need an SSL virtual host and an SSL certificate for the proxy's FQDN.
<VirtualHost *:80>
ServerName proxywebapp.yourdomain.com
ProxyPreserveHost On
ProxyPass "/" "http://app.thirdparty.com/"
ProxyPassReverse "/" "http://app.thirdparty.com/"
# mod_substitute must be enabled; apply its filter to the proxied HTML
AddOutputFilterByType SUBSTITUTE text/html
# in case any URLs have the original domain hard coded
Substitute "s|app.thirdparty.com/|proxywebapp.yourdomain.com/|i"
# replace the undesired scripts with blanks
Substitute "s|AnalyticsJavaScript/| /|i"
Substitute "s|TrackingPixel/| /|i"
</VirtualHost>
Your iframe would then point to proxywebapp.yourdomain.com.
<iframe ... src="proxywebapp.yourdomain.com" ...></iframe>
Again: total overkill but should work transparently.
Proxy scripts
A third option to consider is implementing a proxy script on your server that sits between the iframe and the third party app. You would add functionality to the proxy script that searches for and removes the undesired scripts before they reach the iframe. Additionally, because of the proxy the iframe's content now satisfies the same-origin policy, so you could instead remove the undesired content with JavaScript on the frontend, although this may not guarantee that the scripts won't run before they are removed. There are many proxy scripts available online for all manner of backends (PHP, Node.js etc. ad nauseam). You would typically install the script and use it as the iframe's src, something like <iframe ... src="proxy.php?https://app.thirdparty.com/" ...>.
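As a rough illustration of the idea (not a production proxy: the upstream URL and the script-stripping pattern are assumptions, and cookies, POST requests and relative URLs are ignored), a Node.js sketch might look like this:
// Fetch the third-party page and strip the unwanted script tags
// before handing the HTML to the iframe.
var http = require("http");
var https = require("https");

http.createServer(function (req, res) {
  https.get("https://app.thirdparty.com" + req.url, function (upstream) {
    var body = "";
    upstream.on("data", function (chunk) { body += chunk; });
    upstream.on("end", function () {
      // Remove script tags that reference the tracking domains.
      var cleaned = body.replace(
        /<script[^>]*(AnalyticsJavaScript\.com|TrackingPixel\.com)[^>]*>\s*<\/script>/gi,
        ""
      );
      res.writeHead(200, { "Content-Type": "text/html" });
      res.end(cleaned);
    });
  }).on("error", function () {
    res.writeHead(502);
    res.end("upstream error");
  });
}).listen(8080);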
Unless properly configured for all cases, the proxy may not correctly transfer data between the third party app and its parent server. Testing will be required.
Writing your own server side proxy to remove a couple of scripts from an iframe is probably a bit excessive.
If you can't access the backend, it is possible to scrape the web app's content using JavaScript and a CORS or JSONP web app, and modify it to remove the scripts. Essentially making your own proxy in JavaScript. Such web apps (Any Origin, All Origins, etc) allow you to bypass cross-domain policy restrictions, but because they are third party you can no longer assume any of the web app's data is private. The issue with correctly communicating any data transfer between the app and its parent server will still be present.
Summary
A widely supported pure frontend solution is not feasible at the moment. But there are many ways to skin a cat and perhaps even more ways to modify an iframe's content, regardless of cross-domain restrictions.
Content Security Policy does look promising and is exactly what you're asking for, but currently its lack of widespread support means it can only be used in very niche situations. A reverse proxy that modifies content may take a lot of configuring and in this situation is like driving a full size semi-trailer over a Hot Wheels track, but will likely operate seamlessly. Content modification from a forward proxy is somewhat simpler to implement, but may break communications with the third party app's parent server.
You can't do this the way you want (for now). As mentioned in the comments, CSP Embedded Enforcement (CSP:EE) is yet to come.
However you can try proxying the request and removing the unnecessary scripts from the body on the server side or on the client side, e.g.:
1) Get the needed page via XMLHttpRequest
2) Remove the unwanted scripts
3) Inject the result into an iframe on the page (a sketch follows below)
Whether this works depends purely on the external app's functionality: it will not work if the app requires the end user to register or log in, but it can still be suitable for some simple cases.
P.S.: you could implement a workaround to make such a thing work via a browser extension, but I'm sure this is not what you want.
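A bare-bones sketch of those three steps (the URL and the appFrame element id are assumptions, and the response must be readable cross-origin, e.g. because it passes through a CORS-enabled proxy):
// 1) Fetch the page, 2) strip the unwanted scripts, 3) inject into an iframe.
var xhr = new XMLHttpRequest();
xhr.open("GET", "https://app.thirdparty.com/", true); // needs CORS or a proxy in front
xhr.onload = function () {
  var doc = new DOMParser().parseFromString(xhr.responseText, "text/html");

  // Drop script elements that point at the tracking domains.
  Array.prototype.slice.call(doc.querySelectorAll("script")).forEach(function (s) {
    if (/AnalyticsJavaScript\.com|TrackingPixel\.com/i.test(s.src)) {
      s.parentNode.removeChild(s);
    }
  });

  // Write the cleaned markup into the sandboxed iframe.
  document.getElementById("appFrame").srcdoc = doc.documentElement.outerHTML;
};
xhr.send();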
I have just moved my site from HTTP to HTTPS and IE9 started showing a non-secure content warning on the home page. This warning is understandable because I have one HTTP call to googleapis for getting the jQuery script. But when I log in and enter the inner pages there is no warning from IE, despite the fact that most of the images are coming from other servers over HTTP.
So the question: is getting images over HTTP fine when accessing the site over HTTPS? Do only CSS and JS matter, or do I have to get all the data over HTTPS? If so, how is my scenario explainable (images loaded over HTTP from other servers on an HTTPS page without a warning)?
If you load CSS and JS over HTTP then an attacker can inject executable code; unfortunately IE will even execute JavaScript within CSS. The problem with loading images over HTTP from the same domain is that the browser will likely spill the session id in plain text, which is a violation of OWASP A9.
You can use the protocol-relative URL on all your urls to avoid this issue in IE.
Basically, instead of linking to a js/image/css file using its full path with the protocol, you link to it by leaving out the protocol and just using a double slash, e.g. //example.com/script.js instead of https://example.com/script.js.
This will have the effect of all the above links inheriting the protocol from the parent page.
Of course this depends on you having valid SSL certs on the domains you're serving the different files from.
One other thing to note also is that images in your pages or CSS that are done using data URI could also cause mixed content warnings in IE.
To find out what files are causing issues, I recommend using Fiddler
There is also another tool that a fellow SO user, Eric Law wrote:
Install it from http://www.bayden.com/dl/scriptfreesetup.exe and you will get a different mixed content prompt which shows the exact URL of the first insecure resource on the page. That tool is basically a prototype and you should uninstall it when you're done with it. It works on IE8 and you should install it as admin.
I am working on an ASP.NET MVC application which consists of a website where all the JavaScript, CSS and images are hosted and then the main web app which uses the resources hosted on this website.
Let's say that the resource URL is resources.example.com and the web app URL is webapp.example.com.
One of the JavaScript files, IE9.js (http://code.google.com/p/ie7-js/), makes a request to the CSS file (resources.example.com/styles.css) in order to work. This, however, gets blocked because the request goes to another domain, due to the same-origin policy (http://en.wikipedia.org/wiki/Same_origin_policy).
I thought I had found a way around this by adding the Access-Control-Allow-Origin header to the resources.example.com site. My understanding was that requests made to the resources.example.com website would then be permitted, provided the requesting origin was allowed by that header.
To try this out I added the header to the web.config of the website as follows:
<system.webServer>
  <httpProtocol>
    <customHeaders>
      <add name="Access-Control-Allow-Origin" value="*"/>
    </customHeaders>
  </httpProtocol>
</system.webServer>
I thought this would allow any request to proceed, but when I step through IE9.js to the point where the request is made, it still hits a PermissionDenied error when requesting the CSS file resources.example.com/styles.css.
The CSS has to be hosted on the other domain and IE9.js needs to be able to request it. I believe that the answer is to do with these HTTP headers but that I might have misunderstood how to use them.
Any insight on this issue would be appreciated. As an aside, I should add that the JavaScript file is linked from the Google Code website:
<!--[if lt IE 9]>
<script src="http://ie7-js.googlecode.com/svn/version/2.1(beta4)/IE9.js"></script>
<![endif]-->
in a conditional comment and this is only an issue in IE. Any solution has to work in IE7+.
Update:
I have successfully made a request for the CSS using the XDomainRequest object for IE. The content I am now getting back is gzipped, so I just need to decode it and I should be away. I will post a full update and answer when I have it working.
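For reference, the shape of that request is roughly the following (a sketch; the handlers are placeholders and the request must use the same scheme as the hosting page):
// IE8/IE9 cross-domain request via XDomainRequest; the target server must
// still send Access-Control-Allow-Origin, and custom request headers
// (such as Accept-Encoding) cannot be set on this object.
if (window.XDomainRequest) {
  var xdr = new XDomainRequest();
  xdr.open("GET", "https://resources.example.com/styles.css");
  xdr.onload = function () {
    console.log("CSS received, length:", xdr.responseText.length);
  };
  xdr.onerror = function () {
    console.log("XDomainRequest failed");
  };
  xdr.send();
}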
Update:
Further to the last update: the XDomainRequest object doesn't seem to support modifying the Accept-Encoding header, which means the response seems to always come back encoded; decoding it will slow down the page load and doesn't feel right. Furthermore, it appears that the XDomainRequest object is not supported in IE7, which is a required browser.
The only other thing I can think of is to set up a handler which IE9.js can call instead which will be on the same domain and load the contents of the required file. This also does not feel great but is the only other solution I can currently think of. Any other suggestions are welcome.
Another way of solving this problem could be to put a reverse proxy (such as Nginx) in front of IIS.
Then add location blocks with proxy_pass for both webapp.example.com and resources.example.com, such that requests default to webapp.example.com, but anything under webapp.example.com/resources goes off to resources.example.com.
This way the client only ever talks to one origin, so there are no cross-origin requests at all.
This trick is also very handy when testing locally.
Just an idea...
To get around this issue I modified the IE9.js file that was making the request so that it calls a handler in the main MVC web app instead. That handler then makes the requests to the resources that IE9.js needs, so IE9.js doesn't have to request the resources site directly itself.
In the scenario outlined in the question I was unable to find another solution to the problem.
I am posting this question on Super User as well. In my opinion this question overlaps the two...
I am creating a simple JavaScript wrapper for CouchDB's REST-ful interface, but I am stuck on same-origin policy issues.
So far I've been developing my code to work locally, and only as a proof of concept, in Mozilla Firefox. My server is running on localhost, port 5984.
To disable the cross-origin policy in Firefox you can use the PrivilegeManager, but it only gets me half-way, in the sense that I can't do PUT requests against my server...
/*
* Including this in my JavaScript file only seems to disable cross-origin
 * policy checks for POST and GET requests in Mozilla Firefox.
* PUT requests fail.
*/
netscape.security.PrivilegeManager.enablePrivilege(
"UniversalBrowserRead UniversalBrowserWrite"
);
Is there any way that I can configure my server to hide its location, so I won't have to implement browser-specific workarounds to avoid same-origin policy issues? If not: what browser workarounds exist to disable the same-origin policy completely?
Unfortunately, any browser workarounds to disable same-origin policies are likely to be treated as serious security bugs and fixed as soon as possible.
See if you can come up with a way to work within the same-origin policy without trying to bypass it.
Can you serve your example scripts on the target server? Could you build a reflection script that loads the target script on your server after a local script on the user's computer uploads whatever they modified?
There should be a good solution that doesn't involve bypassing the same-origin policy. Trying to hack your way around it is a good way to ensure that your code doesn't work properly in future browsers.
I struggled with this issue too, trying to run automated tests on a local HTML file connecting to a virtualized CouchDB server. Here's my solution:
I created (and open sourced) a small implementation of the simplest solution for when you can't enable CORS on the server.
You need to upload a .js and an .html file to the target server (you can use any security mechanism to restrict access to these files if you want, or change some simple parameters in the HTML file to restrict access by domain).
On your page you use the same script to create an invisible iframe in which the hosted .html is loaded, and proxy certain methods (a sort of RPC) through that iframe using window.postMessage(). By default, jQuery ajax methods can be proxied without extra configuration.
All this with one line of js code :)
FrameProxy at GitHub
(feel free to use it and fork it!)