I know it's better to use something like AWS for static files, but I am at the development stage and I prefer to have the JavaScript/CSS files on localhost.
It would be great if I could get gzip working on my JavaScript files for testing. I am using the default GZip middleware, but it only compresses the view responses, not the static files.
My template looks like:
<script src='file.js' type='application/javascript'></script>
There should be a file-type list, similar to Nginx's, for the Django-based server. How can I add application/javascript, text/javascript, etc. for gzip compression?
You should read the GZipMiddleware documentation, which explains that the middleware will not compress responses when the Content-Type header "contains javascript or starts with anything other than text/".
EDIT:
To clarify what the documentation says: if the Content-Type header value contains javascript or doesn't begin with text/, then the response won't be compressed. That means both text/javascript and application/javascript responses are excluded, since they match javascript.
Those restrictions are intentionally imposed by the middleware itself, but you can still get around them by wrapping the static files view handler with the gzip_page() decorator and adding it to your URL configuration manually.
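A rough, development-only sketch of that setup in urls.py (the static/ prefix and STATIC_ROOT are my assumptions, not part of the answer; on older Django versions use url() instead of re_path()):

from django.conf import settings
from django.urls import re_path
from django.views.decorators.gzip import gzip_page
from django.views.static import serve

urlpatterns = [
    # Route static files through gzip_page() so their responses are
    # gzip-compressed, which is the answer's suggested way around the
    # middleware's content-type restriction.
    re_path(r'^static/(?P<path>.*)$', gzip_page(serve),
            {'document_root': settings.STATIC_ROOT}),
]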
During development you are using the Django built-in web server. This server is really simple and has no options beyond what you can see with ./manage.py help runserver.
Your options are either to set up a real web server or to use the staticfiles app with a custom StaticFilesStorage.
But honestly, this is overkill. Why would you want to test gzip compression?
I return whole HTML as the response of an AJAX request (not just an array as JSON). As you know, the response will be a bit larger than JSON, because it contains extra things like HTML tags, attributes, etc. So, to get:
Increased scalability (reduced server load).
Less network bandwidth (lower cost).
Better user experience (faster).
I want to compress the response, something like the gzip format. Based on some tests comparing the size of a plain AJAX response with the size of the same response zipped, there is a huge difference between them.
All I want to know is: is it possible to compress the response of an AJAX request on the way and convert it back to regular text on the client side? To do that, do I need to make some changes in the web server (e.g. nginx) configuration, or does it do that automatically?
You could use the lz-string method to compress the data before you send it with the AJAX call, and then use it again at response time to decompress it.
GitHub link: https://github.com/pieroxy/lz-string/
Documentation & usage: https://coderwall.com/p/mekopw/jsonc-compress-your-json-data-up-to-80
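As a rough sketch of that idea (the endpoint, payload shape and use of jQuery are my own illustrative assumptions, not part of lz-string itself):

// Assumes lz-string and jQuery are already loaded on the page.
var data = { html: '<div class="profile">... lots of markup ...</div>' };

// Compress before sending; the server needs a compatible lz-string
// implementation to decompress it.
var payload = LZString.compressToBase64(JSON.stringify(data));

$.ajax({
    url: '/widget',                       // illustrative endpoint
    type: 'POST',
    data: { payload: payload },
    success: function (response) {
        // Assuming the server replies with an lz-string Base64 string too.
        var html = LZString.decompressFromBase64(response);
        document.getElementById('result').innerHTML = html;
    }
});

Note that, unlike HTTP-level gzip, this needs matching compression code on both the client and the server.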
You may want to check out some of the other Stack Overflow posts, plus Wikipedia links and articles on the web regarding Deflate and gzip. Here are some links for you to peruse; they have some interesting charts and information in them:
Stack Overflow: Why Use Deflate Instead of Gzip for Text Files Served by Apache
Stack Overflow: Is There Any Performance Hit Involved in Choosing Gzip Over Deflate for HTTP Compression
Wikipedia: Deflate
Wikipedia: Gzip
Article: How to Optimize Your Site with GZip Compression
RFC: Deflate
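To answer the nginx part of the question directly: the browser advertises gzip support via the Accept-Encoding header and decompresses the response transparently, so nothing changes in your JavaScript. A minimal nginx sketch (my own illustration, values are just reasonable defaults) would be:

# Inside the http {} or server {} block.
gzip            on;
gzip_comp_level 5;       # CPU vs. compression-ratio trade-off
gzip_min_length 256;     # skip very small responses
# text/html is compressed by default; add the other AJAX-ish types.
gzip_types      application/json text/javascript application/javascript text/css text/plain;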
I have a mostly static HTML website served from a CDN (plus a bit of AJAX to the server), and I want users' browsers to cache everything until I update any of the files, at which point I want their browsers to get the new version.
How do I achieve this, please, for all types of static files on my site (HTML, JS, CSS, images, etc.)? (Settings in HTML or elsewhere.) Obviously I can tell the CDN to expire its cache, so it's the client side I'm thinking of.
Thanks.
One way to achieve this is to make use of the HTTP Last-Modified or ETag headers. In the HTTP headers of the served file, the server will send either the date when the page was last modified (in the Last-Modified header), or a random ID representing the current state of the page (ETag), or both:
HTTP/1.1 200 OK
Content-Type: text/html
Last-Modified: Fri, 18 Dec 2015 08:24:52 GMT
ETag: "208f11-52727df9c7751"
Cache-Control: must-revalidate
If the header Cache-Control is set to must-revalidate, it causes the browser to cache the page along with the Last-Modified and ETag headers it received with it. On the next request, it will send them as If-Modified-Since and If-None-Match:
GET / HTTP/1.1
Host: example.com
If-None-Match: "208f11-52727df9c7751"
If-Modified-Since: Fri, 18 Dec 2015 08:24:52 GMT
If the current ETag of the page matches the one that comes from the browser, or if the page hasn't been modified since the date that was sent by the browser, instead of sending the page the server will send a 304 Not Modified response with an empty body:
HTTP/1.1 304 Not Modified
Note that only one of the two mechanisms (ETag or Last-Modified) is required; they both work on their own.
The disadvantage of this is that a request still has to be sent, so the performance benefit is mostly for pages that contain a lot of data; on connections with high latency in particular, the page will still take a long time to load. (It will certainly reduce your traffic, though.)
Apache automatically generates an ETag (using the file’s inode number, modification time, and size) and a Last-Modified header (based on the modification time of the file) for static files. I don’t know about other web-servers, but I assume it will be similar. For dynamic pages, you can set the headers yourself (for example by sending the MD5 sum of the content as ETag).
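For a dynamic page, a hand-rolled version could look roughly like this PHP sketch (render_page() is a placeholder for however you generate the content):

<?php
// Sketch only: derive an ETag from the generated content and answer
// conditional requests with 304 Not Modified.
$content = render_page();                      // placeholder function
$etag = '"' . md5($content) . '"';

header('Cache-Control: must-revalidate');
header('ETag: ' . $etag);

if (isset($_SERVER['HTTP_IF_NONE_MATCH']) &&
    trim($_SERVER['HTTP_IF_NONE_MATCH']) === $etag) {
    http_response_code(304);                   // empty body; the browser reuses its copy
    exit;
}

echo $content;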
By default, Apache doesn’t send a Cache-Control header (and the default is Cache-Control: private). This example .htaccess file makes Apache send the header for all .html files:
<FilesMatch "\.html$">
Header set Cache-Control "must-revalidate"
</FilesMatch>
The other mechanism is to make the browser cache the page by sending Cache-Control: public, but to dynamically vary the URL, for example by appending the modification time of the file as a query string (?12345). This is only really possible if your page/file is only linked from within your web application, in which case you can generate the links to it dynamically. For example, in PHP you could do something like this:
<script src="script.js?<?php echo filemtime("script.js"); ?>"></script>
To achieve what you want on the client side, you have to change the URL of your static files when you load them in HTML, i.e. change the file name, add a query string like unicorn.css?p=1234, etc. An easy way to automate this is to use a task runner such as Gulp and have a look at the gulp-rev package.
In short, if you integrate gulp-rev into your Gulp task, it will automatically append a content hash to all the static files piped into the task stream and generate a JSON manifest file which maps the old files to the newly renamed files. So a file like unicorn.css will become unicorn-d41d8cd98f.css. You can then write another Gulp task to crawl through your HTML/JS/CSS files and replace all the URLs, or use the gulp-rev-replace package (a minimal sketch follows below).
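A minimal gulpfile sketch of that flow (assuming gulp 3 with gulp-rev and gulp-rev-replace installed; the src/ and dist/ paths are just examples):

var gulp = require('gulp');
var rev = require('gulp-rev');
var revReplace = require('gulp-rev-replace');

// Append a content hash to each asset (unicorn.css -> unicorn-d41d8cd98f.css)
// and write rev-manifest.json mapping the old names to the new ones.
gulp.task('revision', function () {
    return gulp.src(['src/**/*.css', 'src/**/*.js'])
        .pipe(rev())
        .pipe(gulp.dest('dist'))
        .pipe(rev.manifest())
        .pipe(gulp.dest('dist'));
});

// Rewrite the references inside the HTML to point at the revisioned names.
gulp.task('revreplace', ['revision'], function () {
    var manifest = gulp.src('dist/rev-manifest.json');
    return gulp.src('src/**/*.html')
        .pipe(revReplace({ manifest: manifest }))
        .pipe(gulp.dest('dist'));
});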
There should be plenty of online tutorials that show you how to accomplish this. If you use Yeoman, you can check out this static webapp generator I wrote here, which contains a Gulp routine for this.
This is what the HTML5 Application Cache does for you. Put all of your static content into the Cache Manifest and it will be cached in the browser until the manifest file is changed. As an added bonus, the static content will be available even if the browser is offline.
The only change to your HTML is in the <head> tag:
<!DOCTYPE HTML>
<html manifest="cache.appcache">
...
</html>
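The manifest itself is a plain text file listing the assets to cache; an illustrative cache.appcache (these file names are made up) could be:

CACHE MANIFEST
# v1.0.0 - bump this comment to make browsers re-download everything

CACHE:
styles/main.css
js/app.js
images/logo.png

NETWORK:
*

Browsers only re-fetch the cached files when the manifest file itself changes, which is why changing the version comment is the usual way to push an update.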
I am creating a site using Sails and Passport for authentication purposes. I've got problems when it comes to the use of jQuery and Backbone in my code, though. It seems that both break when I try to use them with Sails. What I am trying to do: after user authentication I route the user to the home page, where all the scripts exist. I put all the .js files in layout.ejs, for example:
<link rel="stylesheet" href="styles/bootstrap-theme.min.css">
<script type="text/javascript" src="js/jquery.js"></script>
Most of the CSS and JS files work, but I run into problems with Backbone and jQuery (and jQuery Mobile). I am using jQuery 1.10.2 and Backbone 1.1.0. Any idea what might be wrong?
In the Backbone code, I am trying to make AJAX requests via a PHP file.
var ProfileList = Backbone.Collection.extend({
    model: ProfileModel,
    url: 'data.php'
});
What exactly do I have to do? Add the URL to the routes? Or where should I place the data.php file?
EDIT: I've changed Gruntfile.js, putting jQuery at the top, and it works fine now. The problem that remains is Backbone. I am guessing that my trouble arises because I request access to different domains using Passport and Backbone. When I request data with data.php, the following jQuery code is called:
xhr.send( ( s.hasContent && s.data ) || null ): jquery.js (line 8706)
This is an Ajax XMLHttpRequest send request (http://www.w3schools.com/ajax/ajax_xmlhttprequest_send.asp). I am generally trying to request data from two domains, localhost and localhost:1337, so a cross-domain issue arises.
I've found that CORS (cross-origin resource sharing) can handle this issue. Any idea how to allow CORS in Sails.js?
EDIT: In the routes.js file I set cors to true for the home page:
'/secondscreen': {
    view: 'secondsocialscreen/index',
    cors: true
}
I am still receiving the same error. I also changed the allRoutes variable to true in the config/cors.js file. My Backbone file works (checked using the console), however I can't fetch data. The config/cors.js file is the following:
module.exports.cors = {

    // Allow CORS on all routes by default? If not, you must enable CORS on a
    // per-route basis by either adding a "cors" configuration object
    // to the route config, or setting "cors:true" in the route config to
    // use the default settings below.
    allRoutes: true,

    // Which domains which are allowed CORS access?
    // This can be a comma-delimited list of hosts (beginning with http:// or https://)
    // or "*" to allow all domains CORS access.
    origin: '*',

    // Allow cookies to be shared for CORS requests?
    credentials: true,

    // Which methods should be allowed for CORS requests? This is only used
    // in response to preflight requests (see article linked above for more info)
    methods: 'GET, POST, PUT, DELETE, OPTIONS, HEAD',

    // Which headers should be allowed for CORS requests? This is only used
    // in response to preflight requests.
    headers: 'content-type'
};
Am I missing something? OK, I think I've found something. Actually, I am trying to direct to a home.ejs where I define a fetching.js (fetching data from data.php, plus the definition of the model-controller-views) where the Backbone code exists. Is it possible for this file to work as it is, or do I have to define my MVC from Sails? Also, I found that I am trying to get data from localhost:1337 (the Sails server). However, I want to fetch data from the Apache localhost. How is it possible to request data from the Apache server while running Sails? In Firebug I received the following:
GET http://localhost/sitec/fetchdata.php?widget=highlights 200 OK 47ms jquery.js (line 8706)
GET http://localhost/sitec/fetchdata.php?widget=sentiment 200 OK 47ms jquery.js (line 8706)
GET http://localhost/sitec/fetchdata.php?widget=tagsCloud 200 OK 47ms jquery.js (line 8706)
In the response, instead of getting the JSON data object (data.php returns a JSON file), I can actually see the source code of the data.php file. Weird.
We use CORS in Sails by modifying config/routes.js:
module.exports.routes = {
    '/*': {
        cors: true
    }
}
Documented here: http://sailsjs.org/#!documentation/config.routes
Sails.js by default uses Grunt to add CSS and JS files. Adding something by hand to the layout isn't a good idea. In the Gruntfile.js file in the main root you will see two important arrays: cssFilesToInject and jsFilesToInject. To load files in the proper order, just add them here in the order that you need.
var jsFilesToInject = [

    // Below, as a demonstration, you'll see the built-in dependencies
    // linked in the proper order

    // Bring in the socket.io client
    'linker/js/socket.io.js',

    // then beef it up with some convenience logic for talking to Sails.js
    'linker/js/sails.io.js',

    // A simpler boilerplate library for getting you up and running w/ an
    // automatic listener for incoming messages from Socket.io.
    'linker/js/app.js',

    // *-> put other dependencies here <-*
    'path_to_jquery',
    'path_to_underscore',
    'path_to_backbone',

    // All of the rest of your app scripts imported here
    'linker/**/*.js'
];
I've been looking for ways of making my site load faster and one way that I'd like to explore is making greater use of Cloudfront.
Because Cloudfront was originally not designed as a custom-origin CDN and because it didn't support gzipping, I have so far been using it to host all my images, which are referenced by their Cloudfront cname in my site code, and optimized with far-futures headers.
CSS and JavaScript files, on the other hand, are hosted on my own server, because until now I was under the impression that they couldn't be served gzipped from CloudFront, and that the gain from gzipping (about 75 per cent) outweighs that from using a CDN (about 50 per cent): Amazon S3 (and thus CloudFront) did not support serving gzipped content in the standard manner, using the HTTP Accept-Encoding header that browsers send to indicate their support for gzip compression, and so they were not able to gzip and serve components on the fly.
Thus I was under the impression, until now, that one had to choose between two alternatives:
move all assets to Amazon CloudFront and forget about gzipping;
keep components self-hosted and configure our server to detect incoming requests and perform on-the-fly GZipping as appropriate, which is what I chose to do so far.
There were workarounds to solve this issue, but essentially these didn't work. [link].
Now, it seems Amazon Cloudfront supports custom origin, and that it is now possible to use the standard HTTP Accept-Encoding method for serving gzipped content if you are using a Custom Origin [link].
I haven't so far been able to implement the new feature on my server. The blog post I linked to above, which is the only one I found detailing the change, seems to imply that you can only enable gzipping (bar workarounds, which I don't want to use) if you opt for custom origin, which I'd rather not: I find it simpler to host the corresponding files on my CloudFront server and link to them from there. Despite carefully reading the documentation, I don't know:
whether the new feature means the files should be hosted on my own domain server via custom origin, and if so, what code setup will achieve this;
how to configure the css and javascript headers to make sure they are served gzipped from Cloudfront.
UPDATE: Amazon now supports gzip compression, so this is no longer needed. Amazon Announcement
Original answer:
The answer is to gzip the CSS and JavaScript files. Yes, you read that right.
gzip -9 production.min.css
This will produce production.min.css.gz. Remove the .gz from the file name, upload it to S3 (or whatever origin server you're using) and explicitly set the Content-Encoding header for the file to gzip.
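For example, those steps could look like this in a script (the bucket, paths and use of the aws CLI are my additions, not from the original answer):

# Compress at the highest level, then drop the .gz suffix again.
gzip -9 production.min.css
mv production.min.css.gz production.min.css

# Upload with the Content-Encoding (and Content-Type) headers set explicitly.
aws s3 cp production.min.css s3://my-bucket/assets/production.min.css \
    --content-encoding gzip \
    --content-type text/css

# If the file was already cached in CloudFront, invalidate it as noted below.
aws cloudfront create-invalidation --distribution-id EXAMPLEID \
    --paths "/assets/production.min.css"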
It's not on-the-fly gzipping, but you could very easily wrap it up into your build/deployment scripts. The advantages are:
It requires no CPU for Apache to gzip the content when the file is requested.
The files are gzipped at the highest compression level (assuming gzip -9).
You're serving the file from a CDN.
Assuming that your CSS/JavaScript files are (a) minified and (b) large enough to justify the CPU required to decompress on the user's machine, you can get significant performance gains here.
Just remember: If you make a change to a file that is cached in CloudFront, make sure you invalidate the cache after making this type of change.
My answer is a take off on this: http://blog.kenweiner.com/2009/08/serving-gzipped-javascript-files-from.html
Building off skyler's answer, you can upload a gzipped and a non-gzipped version of the CSS and JS. Be careful naming them, and test in Safari, because Safari won't handle .css.gz or .js.gz files.
site.js and site.js.jgz, and
site.css and site.gz.css (you'll need to set the Content-Encoding header to gzip and the correct Content-Type to get these to serve right).
Then in your page put:
<script type="text/javascript">var sr_gzipEnabled = false;</script>
<script type="text/javascript" src="http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr.gzipcheck.js.jgz"></script>
<noscript>
<link type="text/css" rel="stylesheet" href="http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css">
</noscript>
<script type="text/javascript">
(function () {
    var sr_css_file = 'http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css';
    if (sr_gzipEnabled) {
        sr_css_file = 'http://d2ft4b0ve1aur1.cloudfront.net/css-050/sr-br-min.css.gz';
    }
    var head = document.getElementsByTagName("head")[0];
    if (head) {
        var scriptStyles = document.createElement("link");
        scriptStyles.rel = "stylesheet";
        scriptStyles.type = "text/css";
        scriptStyles.href = sr_css_file;
        head.appendChild(scriptStyles);
        //alert('adding css to header:'+sr_css_file);
    }
}());
</script>
gzipcheck.js.jgz just contains sr_gzipEnabled = true;. This tests to make sure the browser can handle gzipped code and provides a fallback if it can't.
Then do something similar in the footer, assuming all of your JS is in one file and can go in the footer.
<div id="sr_js"></div>
<script type="text/javascript">
(function () {
    var sr_js_file = 'http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr-br-min.js';
    if (sr_gzipEnabled) {
        sr_js_file = 'http://d2ft4b0ve1aur1.cloudfront.net/js-050/sr-br-min.js.jgz';
    }
    var sr_script_tag = document.getElementById("sr_js");
    if (sr_script_tag) {
        var scriptStyles = document.createElement("script");
        scriptStyles.type = "text/javascript";
        scriptStyles.src = sr_js_file;
        sr_script_tag.appendChild(scriptStyles);
        //alert('adding js to footer:'+sr_js_file);
    }
}());
</script>
UPDATE: Amazon now supports gzip compression, so this is no longer needed. See the Amazon announcement.
Cloudfront supports gzipping.
CloudFront connects to your server via HTTP 1.0. By default some web servers, including nginx, don't serve gzipped content to HTTP 1.0 connections, but you can tell them to by adding:
gzip_http_version 1.0;
to your nginx config. The equivalent config could be set for whichever web server you're using.
This does have the side effect of making keep-alive connections not work for HTTP 1.0 connections, but as the benefits of compression are huge, it's definitely worth the trade-off.
Taken from http://www.cdnplanet.com/blog/gzip-nginx-cloudfront/
Edit
Serving content that is gzipped on the fly through Amazon CloudFront is dangerous and probably shouldn't be done. Basically, if your web server is gzipping the content, it will not set a Content-Length header and will instead send the data chunked.
If the connection between Cloudfront and your server is interrupted and prematurely severed, Cloudfront still caches the partial result and serves that as the cached version until it expires.
The accepted answer of gzipping it first on disk and then serving the gzipped version is a better idea as Nginx will be able to set the Content-Length header, and so Cloudfront will discard truncated versions.
We've made a few optimisations for uSwitch.com recently to compress some of the static assets on our site. Although we set up a whole nginx proxy to do this, I've also put together a little Heroku app that proxies between CloudFront and S3 to compress content: http://dfl8.co
Given that publicly accessible S3 objects can be accessed using a simple URL structure, http://dfl8.co just uses the same structure. That is, the following URLs are equivalent:
http://pingles-example.s3.amazonaws.com/sample.css
http://pingles-example.dfl8.co/sample.css
http://d1a4f3qx63eykc.cloudfront.net/sample.css
Yesterday Amazon announced a new feature: you can now enable gzip on your distribution.
It works with S3 without adding .gz files yourself. I tried the new feature today and it works great (you need to invalidate your current objects, though).
More info
You can configure CloudFront to automatically compress files of certain types and serve the compressed files.
See AWS Developer Guide
I keep getting "Resource interpreted as other but transferred with MIME type text/javascript.", but everything seems to be working fine. This only seems to happen in Safari 4 on my Mac.
I was advised to add <meta http-equiv="content-script-type" content="text/javascript"> to the header, although that did nothing.
The most common way to get the error is with the following code:
<img src="" class="blah" />
A blank URL is a shortcut for the current page URL, so a duplicate request is made, which returns content type HTML. The browser expects an image but instead gets HTML.
I received this error due to a missing element that a jQuery plugin tried to reference via the JS variable btnChange. I commented out the unneeded (and non-existent) images and the warning (in the Google Chrome dev tools) was fixed:
$(mopSliderName+" .sliderCaseRight").css({backgroundImage:"url("+btnChange.src+")"});
The (webkit-based) browser is issuing a warning that it has decided to ignore the mimetype provided by the webserver - in this case text/javascript - and is applying a different mimetype - in this case "other".
It's a warning which users can typically ignore, but which a developer might find useful when looking for clues to a problem. In this example it might explain why some JavaScript wasn't being executed.
Your web server is sending the content with a certain MIME type. For example, a PNG image would be sent with the HTTP header Content-type: image/png. Configure your web server or script to send the proper content type.
It does cause issues if you are calling JavaScript that adds functionality; it is likely to fail, as it does for me. No real answers yet.
I was getting this error due to a script with bad permissions bringing up an HTTP 403 error. I gave it read and execute rights across the board and it worked.
There is a setting for the Apache MIME module where it misses adding the type for JavaScript. To resolve it, simply open the .htaccess file (or httpd.conf) and add the following lines:
<IfModule mod_mime.c>
    AddType text/javascript .js
</IfModule>
Restart the Apache server and the issue will be resolved.