How can XSS be avoided in HTML downloads?

We have an internal web application that acts as a repository to which users can upload files. These files can be any format, including HTML pages.
We have tested that in IE8, if you download an HTML file that contains a script that tries to access your cookies and, after downloading, you choose the "Open" option, the script executes and reads your cookie information with no problem at all.
In fact, that script could use the XMLHttpRequest object to call the server and perform malicious operations within the session of the user who downloaded the file.
Is there any way to avoid this? We have verified that both Chrome and Firefox do not let this happen. How could this behaviour be prevented in any browser, including IE8?

Don't allow the upload of arbitrary content. It's simply a terrible idea.
One potential "solution" would be to host the untrusted uploads on a separate domain that carries no cookies and that users don't associate with any trust. This would be a "solution", but certainly not an ideal one.
A more practical option is an authorisation-based process, where each file goes through automated cleaning/analysis and a manual review then confirms the result before the file is published.
All in all though, it's a very bad idea to allow the general public to do this.

That's a really bad idea from a security point of view. Still, if you wish to do this, include the HTTP response header Content-Disposition: attachment. It forces the browser to download the file instead of opening it. In Apache, this is done by adding Header set Content-Disposition "attachment" to the .htaccess file.
Note that it's a bad idea to just add Content-Type: text/plain, as suggested in one of the answers, because it won't work in Internet Explorer. When IE receives a file with a text/plain content-type header, it turns on its MIME sniffer, which tries to determine the file's real content type (because some servers send all files as text/plain). If it finds HTML code inside the file, it serves the file as text/html and renders it.
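As a sketch, assuming Apache with mod_headers enabled and that uploads live in their own directory, the .htaccess could look like this (the nosniff header additionally disables IE8's MIME sniffing, addressing the caveat above):

```apache
# Sketch of an .htaccess for the uploads directory (assumes mod_headers)
<IfModule mod_headers.c>
    # Force every file in this directory to be downloaded, not rendered
    Header set Content-Disposition "attachment"
    # Tell IE8+ not to second-guess the declared content type
    Header set X-Content-Type-Options "nosniff"
</IfModule>
```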

If you really need to let users upload HTML files, you should make sure the HTML files in this directory are served with the MIME type text/plain rather than text/html or similar.
This will prevent the opened files from executing scripts in the browser. If you're using Apache, see the AddType directive.
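As an illustrative sketch, assuming Apache and that uploads are kept under one directory, a .htaccess there could remap the extensions (but note the caveat in the other answer: in IE8, text/plain alone triggers MIME sniffing and is not enough on its own):

```apache
# Sketch of an .htaccess for the upload directory (extensions are illustrative)
AddType text/plain .html .htm
```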

Related

.htc files - do I need a web server?

I want to use PIE in my project with IE7.
However, one thing I don't understand: can .htc files only be used on a web server?
Can I use them in local pages loaded in the browser without a web server?
I looked at PIE's documentation, which says the following:
Serving the correct Content-Type
IE requires that HTC behaviors are served up with a content-type
header of "text/x-component", otherwise it will simply ignore the
behavior. Many web servers are preconfigured to serve the correct
content-type, but others are not.
If you have problems with the PIE behavior not being applied, check
your server configuration and if possible update it to use the correct
content-type. For Apache, you can do this in a .htaccess file:
AddType text/x-component .htc
So if it needs to be loaded as "text/x-component", can I do that via AJAX?
It doesn't have anything to do with AJAX :)
It's just a response header of the file. E.g. an HTML page is usually served with the text/html content type; a JavaScript file with application/javascript. This is called the MIME type or content type, and it helps the browser know what type of file it's receiving.
You need to either add a .htaccess file to the main folder of your web project containing:
AddType text/x-component .htc
or add that exact same line to your Apache/virtual host configuration.

Will any browser ignore cache control headers for URLs with a query string?

I have a lot of CSS changes for my site, so I have used versioning (query strings) to load the updated CSS files. But I read in an article that when some browsers, like IE, see a question mark in a URL, they always hit the server to get the file and don't use the cache.
Is this true?
It varies. The main concern is not IE, but rather proxy servers between you and the client.
Personally, I use links of the form //example.com/t=12345/css/main.css
That t=12345 is the file's modification time, inserted by my "static resource management" class.
Then, a simple .htaccess rewrite rule strips that part out, leaving just /css/main.css as the target file.
From the browser's perspective, it's just a weirdly named folder, and it will cache according to the headers it receives. This will work for proxy servers too. Anything that can cache, will cache.
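A minimal sketch of such a rule, assuming mod_rewrite and that the marker always has the form t=&lt;digits&gt; as in the example link above:

```apache
# Sketch: strip the /t=12345/ cache-busting segment before serving the file
<IfModule mod_rewrite.c>
    RewriteEngine On
    # /t=12345/css/main.css  ->  /css/main.css
    RewriteRule ^t=\d+/(.*)$ /$1 [L]
</IfModule>
```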

Open files in network not working

I have WAMP installed on a network machine. I have a table with file links, so that people can open those files directly from a web page.
Those files are on another server, on the same network as WAMP.
When users click on a link, the following error appears:
"not allowed to load local resource: file:///networkdrive/directorie/file.xls"
How can I resolve this?
I have this:
<button type="button" onClick="openfile('networkdrive/ptlr/Sectorial/LRCD/Horários/Equipas Turno.xls')">botao</button>
<script>
function openfile(file) {
    window.location = "file:///" + file;
}
</script>
Just read the error: "not allowed to load local resource (...)"
Or on Firefox I get "Access to (...) from script denied".
It seems you are looking for a magical solution, but no, it's exactly as the error description says: you are not allowed to do that, for security reasons.
The problem is that you're trying to make the browser open a file on a local drive, and that's not allowed from any protocol other than file:/// itself. So you'll want to either make sure the file is also accessible via a server, or open the web page that contains this script from file:/// as well.
You can see this at work by first opening http://jsbin.com/OYObEMA/1/ and seeing the same error occur, then pressing CTRL+S, saving it as a single HTML file, and opening that saved file. The JSBin page is opened via the internet, so it isn't given access to the file:/// protocol, but the local (downloaded) HTML file can access it.
One way you could kind of do this is to provide the URL the user needs to go to instead. Make an <input> whose value is set to that URL, with the instruction "please copy this URL into your address bar to open this file". It's not elegant, but it would kind of work.
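A minimal sketch of that fallback, assuming the same share paths as in the question (the function names are made up for illustration):

```javascript
// Build a file:/// URL the user can paste into the address bar.
// Each path segment is percent-encoded, but the slashes are kept.
function buildFileUrl(file) {
  return "file:///" + file.split("/").map(encodeURIComponent).join("/");
}

// Browser-only part: show the URL in a read-only input with instructions.
function showFileLink(file) {
  var p = document.createElement("p");
  p.textContent = "Please copy this URL into your address bar to open the file:";
  var input = document.createElement("input");
  input.value = buildFileUrl(file);
  input.readOnly = true;
  // Select the text on focus so the user can copy it easily
  input.onfocus = function () { this.select(); };
  document.body.appendChild(p);
  document.body.appendChild(input);
}
```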
About your answer to my initial comment: sure, I understood that. The question was not meant literally, but to make you think about what you are actually trying to do: you are mixing different environments. You either:
need to use WebDAV, if your client-side applications can use that HTTP extension to load and save files, or
have to use the old scheme known from the IT middle ages, which is still typical for MS Windows systems: offer the file for download via HTTP and add an additional upload service (which will give you versioning pain), or
deliver your web page via the same protocol (the network share), so that it is opened as a local file on the client side; then you are allowed to open additional local files referenced inside the page.

Javascript used to include html, is it cached?

I'm using a method of creating a .js file on server #1 that contains document.write calls to write HTML code, then a simple JS include inside the HTML on server #2 to load that HTML (there are multiple server #2s). This basically replaces an iframe approach, with the advantage that each server #2 owner controls their own CSS.
The method works perfectly as is. My question concerns caching. Each time the page is loaded on server #2, I want the .js reloaded, as it will change frequently on server #1. This appears to be the case in each browser I tested, but can I rely on it as the default, or is it dependent on browser settings? Despite all I've read on caching, I can't figure out what triggers the load in a case like this.
You can control browser caching using HTTP headers on the server side, like Cache-Control and Expires. More here: http://www.w3.org/Protocols/rfc2616/rfc2616-sec13.html
In a case like this, caching is triggered by the cache policy of the .js file, not the HTML file.
The browser doesn't cache the rendered page (well, it does for back buttons, but that's not what we're talking about); it caches the source files. So even if the HTML page is configured to be cached for a long time, the JavaScript-injected content will only be cached for as long as the .js file has been configured to be.
To configure the caching policy you need to set specific headers on the server side. Sometimes you can do this in a CGI script; sometimes in the server configuration files.
Google "http caching" and read up on how to configure a page to be cached or not cached (also google "json disable caching" or "ajax disable caching", because this issue crops up a lot with AJAX).
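As an illustrative Apache sketch (the file name include.js is made up; adjust it to your generated script), you could force revalidation of just that one file while leaving other assets cacheable:

```apache
# Sketch: always revalidate the generated include script (assumes mod_headers)
<IfModule mod_headers.c>
    <FilesMatch "^include\.js$">
        Header set Cache-Control "no-cache, must-revalidate"
    </FilesMatch>
</IfModule>
```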

Can JavaScript detect if the user's browser supports gzip?

Can I use JavaScript to detect if the user's browser supports gzipped content (client side, not node.js or similar)?
I am trying to support the following edge case:
There are a lot of possible files that can load on a particular web app and it would be better to load them on demand as necessary as the application runs rather than load them all initially. I want to serve these files off of S3 with a far-future cache expiration date. Since S3 does not support gzipping files to clients that support it, I want to host two versions of each file -- one normal, and one gzipped with content-type set to application/gzip. The browser of course needs to know which files to request. If JavaScript is able to detect if the browser supports gzipped content, then the browser will be able to request the correct files.
Is this possible?
JavaScript can't query this capability directly, but you can use a small trick to detect whether or not the browser supports gzipped content.
I commented above and would just like to reiterate: you should use CloudFront anyway, which can serve gzipped content. If you are using S3, there is little reason not to put CloudFront in front of it. However, for the purposes of answering your question...
This blog post perfectly addresses how you would detect if the browser supports Gzip.
http://blog.kenweiner.com/2009/08/serving-gzipped-javascript-files-from.html
Here is a quick summary:
1) Create a small gzipped file, gzipcheck.js.jgz, and make it available in CloudFront. This file should contain one line of code:
gzipEnabled = true;
2) Use the following code to attempt to load and run this file. You'll probably want to put it in the HTML HEAD section before any other Javascript code.
<script type="text/javascript" src="gzipcheck.js.jgz">
</script>
If the file loads, it sets a flag, gzipEnabled, that indicates whether or not the browser supports gzip.
Well, CloudFront does not gzip content automatically. Until Amazon decides to do automatic gzip compression in S3 and CloudFront, one has to use the workaround below.
In addition to the normal version, create a gzipped version of the file and upload it to S3. If the file name is style.css, the gzipped version should be named style.css.gz.
Add a header with key=Content-Encoding & value=gzip to the file. This is needed so that browsers understand that the content is gzip-encoded. The header can be added using the S3 API or popular S3 file managers like CloudBerry, Bucket Explorer, etc.
Also add the correct Content-Type header for the file, e.g. for style.css it should be Content-Type: text/css.
In the web page, include the file normally.
Use the above-mentioned JavaScript to detect whether the browser supports gzip encoding. If it does, replace the file name, e.g. style.css with style.css.gz.
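A sketch of that last step, assuming the gzipEnabled flag convention from the gzipcheck.js.jgz probe (the helper names here are made up):

```javascript
// Map a resource name to the variant to request, based on gzip support.
function resourceName(name, gzipOk) {
  // style.css -> style.css.gz when gzip is supported
  return gzipOk ? name + ".gz" : name;
}

// Browser-only part: inject a stylesheet chosen by the detected capability.
// If the gzipcheck probe never ran, gzipEnabled is undefined and we fall
// back to the uncompressed file.
function loadStylesheet(name) {
  var gzipOk = typeof gzipEnabled !== "undefined" && gzipEnabled;
  var link = document.createElement("link");
  link.rel = "stylesheet";
  link.href = resourceName(name, gzipOk);
  document.head.appendChild(link);
}
```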
