Can I use JavaScript to detect if the user's browser supports gzipped content (client side, not node.js or similar)?
I am trying to support the following edge case:
There are a lot of possible files that can load on a particular web app and it would be better to load them on demand as necessary as the application runs rather than load them all initially. I want to serve these files off of S3 with a far-future cache expiration date. Since S3 does not support gzipping files to clients that support it, I want to host two versions of each file -- one normal, and one gzipped with content-type set to application/gzip. The browser of course needs to know which files to request. If JavaScript is able to detect if the browser supports gzipped content, then the browser will be able to request the correct files.
Is this possible?
JavaScript can't query this directly, but you can use JavaScript to detect whether or not the browser supports gzipped content.
I commented above and would just like to reiterate: you should use CloudFront anyway, which does gzip content. If you are using S3, there is zero reason not to use CloudFront. However, for the purposes of answering your question...
This blog post addresses exactly how you would detect whether the browser supports gzip:
http://blog.kenweiner.com/2009/08/serving-gzipped-javascript-files-from.html
Here is a quick summary:
1) Create a small gzipped file, gzipcheck.js.jgz, and make it available in CloudFront. This file should contain one line of code:
gzipEnabled = true;
2) Use the following code to attempt to load and run this file. You'll probably want to put it in the HTML HEAD section before any other JavaScript code.
<script type="text/javascript" src="gzipcheck.js.jgz">
</script>
If the file loads, it sets a flag, gzipEnabled, that indicates whether or not the browser supports gzip.
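For example, later code can use the flag to decide which version of each script to request. A minimal sketch; the CDN URL, file names, and the loadModule helper are placeholders, not part of the blog post:

<script type="text/javascript">
// gzipEnabled was set by gzipcheck.js.jgz only if the browser decoded it.
var suffix = window.gzipEnabled ? ".js.jgz" : ".js";
// Hypothetical on-demand loader for the app's extra modules.
function loadModule(name) {
  var s = document.createElement("script");
  s.src = "https://example.cloudfront.net/" + name + suffix;
  document.getElementsByTagName("head")[0].appendChild(s);
}
loadModule("editor"); // requests editor.js.jgz or editor.js
</script>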
Well, CloudFront does not gzip content automatically. Until Amazon adds automatic gzip compression to S3 and CloudFront, one has to use the workaround below.
In addition to the normal version, create a gzipped version of the file and upload it to S3. If the file name is style.css, the gzipped version should be named style.css.gz.
Add a header with key Content-Encoding and value gzip to the file. This is needed so that browsers understand that the content is encoded using gzip. The header can be added using the S3 API or popular S3 file manager tools like CloudBerry, Bucket Explorer, etc. (a sketch of doing this with the AWS SDK follows this list).
Also add the correct Content-Type header for the file, e.g. for style.css it should be Content-Type: text/css.
In the web page, include the file normally.
Use the above-mentioned JavaScript to detect whether the browser supports gzip encoding. If it does, replace the file name, e.g. style.css with style.css.gz, as shown in the second sketch below.
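A rough sketch of the upload and header steps using the AWS SDK for JavaScript in Node.js. The bucket and file names are placeholders, and this assumes the classic aws-sdk v2 API:

// upload-gzipped.js -- upload a pre-gzipped file to S3 with the right headers.
var fs = require("fs");
var AWS = require("aws-sdk");

var s3 = new AWS.S3();

s3.putObject({
  Bucket: "my-bucket",             // placeholder bucket name
  Key: "style.css.gz",
  Body: fs.readFileSync("style.css.gz"),
  ContentEncoding: "gzip",         // so browsers decode the body
  ContentType: "text/css",         // the type of the *decoded* content
  CacheControl: "max-age=31536000" // far-future caching, per the question
}, function (err) {
  if (err) throw err;
  console.log("uploaded");
});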
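And a minimal sketch of the last step, swapping the style sheet reference once the gzip test has run. The element id is made up for illustration:

<script type="text/javascript">
// Assumes gzipcheck.js.jgz (see the previous answer) already set gzipEnabled.
// Run this after the <link> element exists, e.g. at the end of <body>.
if (window.gzipEnabled) {
  var link = document.getElementById("main-css"); // <link id="main-css" rel="stylesheet" href="style.css">
  link.href = link.href.replace(/\.css$/, ".css.gz");
}
</script>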
Related
I am in developer mode in the .region file trying to add a background video with the video tag. I put the mp4 file into the template folder and I have been trying to access it through src="video.mp4" and display the video. It doesn't display the video and I am not sure why I can't grab it. When I change the source to any http:// video online it works, so it's not the code. It only doesn't display the video when I try grabbing it from the local folder. Any leads or help would be appreciated. Thank you!
Files that are directly located in the /template folder are not intended to be accessible via http. Instead, put the file within /template/assets and then reference the file as /assets/video.mp4.
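For example, assuming the file now lives at /template/assets/video.mp4 (the extra attributes are just illustrative):

<video src="/assets/video.mp4" autoplay muted loop></video>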
If that doesn't help, ensure that the file is even accessible via http by entering http://yoursite.squarespace.com/assets/video.mp4 in the address bar (using your site's correct URL). If you can access the video file, then it will work as a src attribute of a video element. If you cannot access it, then something else is going on: either you haven't uploaded the file or the file name is incorrect.
Another tip: if using the full URL for a file (as opposed to the relative URL), try using https for the protocol in place of http. The correct protocol depends on your site's settings, of course, and whether you are using your built-in or custom domain.
If using the local development server via Node.js (as opposed to the live server, that is, your actual Squarespace site), try pushing/uploading the files to the live server on Squarespace (via Git or SFTP) and then retesting locally. I've found that this is sometimes required due to caching in the local environment. This will also reveal whether the file you are uploading is too large: the documentation claims a 1MB limit, which may be true, though it may be as large as 5MB or 20MB if the docs are out of date; I cannot recall whether this has changed.
If the file is too large for the /assets folder, then your only other option besides hosting it via a different service entirely is to use the file storage via the Squarespace Config UI, which allows up to 20MB, and referencing your video via that path. You'd have to get the video down to 20MB by shortening, scaling or further compressing it.
If hosting the file via a different service, Cloudinary may be worth considering; a free account may allow up to a 100MB video file and enough bandwidth (assuming your website's traffic is relatively low).
I want to use PIE in my project with IE7.
However, something I didn't understand: can you use .htc files only on a web server?
Can I use them in local pages loaded in the browser without a web server?
I looked at PIE's documentation, and it says the following:
Serving the correct Content-Type
IE requires that HTC behaviors are served up with a content-type
header of "text/x-component", otherwise it will simply ignore the
behavior. Many web servers are preconfigured to serve the correct
content-type, but others are not.
If you have problems with the PIE behavior not being applied, check
your server configuration and if possible update it to use the correct
content-type. For Apache, you can do this in a .htaccess file:
AddType text/x-component .htc
So if it needs to be loaded with "text/x-component", can I do that via AJAX?
It doesn't have anything to do with AJAX :)
It's just a response header of the file. E.g. an HTML page is usually the text/html content type; a JavaScript file is application/javascript. This is called the MIME type or content type, and it helps the browser know what type of file it's receiving.
You need to either add a .htaccess file to the main folder of your web project, containing:
AddType text/x-component htc
or add that exact same line into your apache/virtual host configuration.
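If you want to test locally without Apache, a tiny Node.js static server can serve the behavior file with the right header. This is just a sketch for local testing; the file layout and port are assumptions:

// serve.js -- minimal static server that maps .htc to text/x-component
var http = require("http");
var fs = require("fs");
var path = require("path");

var types = {
  ".html": "text/html",
  ".css": "text/css",
  ".js": "application/javascript",
  ".htc": "text/x-component" // the content type PIE's docs call for
};

http.createServer(function (req, res) {
  var file = path.join(__dirname, req.url === "/" ? "index.html" : req.url);
  fs.readFile(file, function (err, data) {
    if (err) { res.writeHead(404); res.end("Not found"); return; }
    res.writeHead(200, { "Content-Type": types[path.extname(file)] || "text/plain" });
    res.end(data);
  });
}).listen(8080); // then browse to http://localhost:8080/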
I was just checking out the Ajax Minifier tool http://ajaxmin.codeplex.com and used it to create a minified JS file. The result shows the minified and then gzipped file content, but I just wanted to make sure that the file is indeed gzipped. How can I test that? Also, what happens if we have gzipping enabled in IIS but the file is already gzipped: does IIS just serve the file without zipping it again, or something else?
Use your favorite debugging tool for the web browser (like Firebug) and inspect the HTTP response. If the file was indeed successfully transmitted as a gzipped file, you will see a Content-Encoding: gzip header. It may also say deflate, which is more or less the same thing.
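You can also check from a script. A minimal Node.js sketch; the host and path are placeholders:

// check-gzip.js -- request a URL with Accept-Encoding and print the result.
var https = require("https");

https.get({
  host: "example.com",            // placeholder host
  path: "/scripts/app.min.js",    // placeholder path
  headers: { "Accept-Encoding": "gzip" }
}, function (res) {
  // "gzip" here means the server compressed (or passed through) the body.
  console.log("Content-Encoding:", res.headers["content-encoding"] || "(none)");
  res.resume(); // discard the body
});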
As I have read, it is not easy for JavaScript to modify files on a client PC. I am working on a web-based file manager and would need to know the following:
Can JavaScript list files and folder structure on a client PC?
Can JavaScript list files and folder structure on a server?
If your answer is no, that JavaScript cannot list files and folders on, say, the client's or server's C:\ drive, then would a CGI script be the only solution?
Browser JS reading client PC's files: Depends
For security reasons, you can't access the files on the user's PC without the user's consent.
That's why the FileReader API is built around the file input box <input type="file"> and drag-and-drop areas: the whole idea is to access the file with the user's consent. Without the user intentionally providing the file, you can't access it at all.
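For example, reading a file the user has explicitly picked (a minimal sketch):

<input type="file" id="picker">
<script>
// Runs only after the user chooses a file -- that choice is the "consent".
document.getElementById("picker").onchange = function () {
  var reader = new FileReader();
  reader.onload = function (e) {
    console.log(e.target.result); // the file's text content
  };
  reader.readAsText(this.files[0]);
};
</script>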
Server-side JS reading own server's files: Yes
As for the server: if you mean accessing the server's own files using server-side JS (Node.js or Rhino), yes you can (how else would it serve web pages anyway?).
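For instance, with Node.js (a sketch; the directory is a placeholder):

// list-files.js -- list the server's own files and folders with Node.js.
var fs = require("fs");

fs.readdir("/var/www", function (err, entries) { // placeholder directory
  if (err) throw err;
  entries.forEach(function (name) {
    var isDir = fs.statSync("/var/www/" + name).isDirectory();
    console.log((isDir ? "[dir] " : "      ") + name);
  });
});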
Browser JS reading own server's files: Depends
Accessing the server's files from the browser using JS works if the server exposes an API to read files from it.
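For example, if the server exposed a listing endpoint (the /api/files URL here is entirely hypothetical), the browser side could be as simple as:

// Ask the server's (hypothetical) listing API for its files.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/files?path=/uploads", true); // endpoint is an assumption
xhr.onload = function () {
  var files = JSON.parse(xhr.responseText); // e.g. ["a.txt", "b/"]
  console.log(files);
};
xhr.send();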
Browser JS reading other server's files: Yes, with a catch
To access another server's files without an API, you could resort to creating a web scraper or web spider that runs server-side (since the browser can't cross domains due to the same-origin policy) and expose an API to your browser.
However:
you can't crawl all files, as some may be restricted from outside access.
the public appearance of the structure could differ from the internal structure, especially if the site uses a segmented URL scheme.
sites using query strings to generate pages cannot be crawled easily due to the number of permutations they allow, so some pages might be unreachable.
CGI won't be a solution either, as it only has access to the filesystem of your server, not that of the client visiting your site. The only way to access your client's filesystem from JavaScript seems to be the File API, which apparently is not implemented by many browsers.
It's a kludge, but you could resort to a Java applet or the dreaded ActiveX control.
We have an internal web application that acts as a repository to which users can upload files. These files can be any format, including HTML pages.
We have tested that in IE8, if you download an HTML file that contains script that tries to access your cookies and, after downloading, you choose the "Open" option, the script executes and reads your cookie information with no problems at all.
Actually, that script could use the XMLHttpRequest object to call the server and perform malicious operations within the session of the user who downloaded the file.
Is there any way to avoid this? We have tested that both Chrome and Firefox do not let this happen. How could this behaviour be avoided in any browser, including IE8?
Don't allow the upload of arbitrary content. It's simply a terrible idea.
One potential "solution" could be to only host the untrusted uploads on a domain that doesn't have any cookies and that the user doesn't associate any trust with in any way. This would be a "solution", but certainly not the ideal one.
Some more practical options could be an authorisation-based process, where each file goes through an automated review and then a manual confirmation of the automated cleaning/analysis phase.
All in all though, it's a very bad idea to allow the general public to do this.
That's a really bad idea from a security point of view. Still, if you wish to do this, include the HTTP response header Content-Disposition: attachment. It will force the browser to download the file instead of opening it. In Apache, it's done by adding Header set Content-Disposition "attachment" to the .htaccess file.
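Outside Apache, the same header can be set by whatever serves the files. For example, a Node.js sketch; the upload path and port are placeholders:

// download-only.js -- serve uploaded files so the browser saves rather than opens them.
var http = require("http");
var fs = require("fs");
var path = require("path");

http.createServer(function (req, res) {
  // "/uploads" is a placeholder for wherever the repository stores files;
  // path.basename keeps requests from escaping that folder.
  var file = path.join("/uploads", path.basename(req.url));
  res.writeHead(200, {
    "Content-Type": "application/octet-stream", // opaque type; nothing to render
    "Content-Disposition": "attachment"         // forces the download dialog
  });
  fs.createReadStream(file).pipe(res);
}).listen(8080);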
Note that it's a bad idea just to add Content-Type: text/plain as mentioned in one of the answers, because it won't work in Internet Explorer. When IE receives a file with a text/plain content-type header, it turns on its MIME sniffer, which tries to determine the file's real content type (because some servers send all files as text/plain). If it finds HTML code inside the file, it will treat the file as text/html and render it.
If you really need to let users upload HTML files, you should make sure the HTML files in this directory are served with the MIME type text/plain rather than text/html or similar.
This will prevent the opened files from executing scripts in the browser. If you're using Apache, see the AddType directive.
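For example, a .htaccess file in the upload directory could remap the usual HTML extensions (assuming the uploads keep their .html/.htm extensions):

AddType text/plain .html .htm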