I'm not even sure if what I want is possible, so I'm asking you to let me know if anyone has done this before. My goal: when I click "Publish" in VS2010, all my JavaScript files get compressed into one, the same happens for CSS, and the references in my layout file are changed from all the separate JS and CSS files to just those two merged ones. Is that doable? Or maybe it's doable, but in a more manual way?
Of course, the goal here is to have only two calls to external files on the website, but during development I need to see all the files so I can actually work with them. I guess I could do it manually before each push, but I'd rather have it done automatically by some script or similar. I haven't tried anything yet, and I'm not looking for a ready-made solution; I'm just trying to understand the problem better and maybe pick up some tips.
Thanks a lot!
This is built into ASP.NET 4.5, but in the meantime you should look at the following projects:
YUI Compressor
The objective of this project is to compress any Javascript and Cascading Style Sheets to an efficient level that works exactly as the original source, before it was minified.
Cassette
Cassette automatically sorts, concatenates, minifies, caches and versions all your JavaScript, CoffeeScript, CSS, LESS and HTML templates.
RequestReduce
Super Simple Auto Spriting, Minification and Bundling solution
No need to tell RequestReduce where your resources are
Your CSS and Javascript can be anywhere - even on an external host
RequestReduce finds them at runtime automatically
SquishIt
SquishIt lets you squish some JavaScript and CSS. And also some LESS and CoffeeScript.
Combres
.NET library which enables minification, compression, combination, and caching of JavaScript and CSS resources for ASP.NET and ASP.NET MVC web applications. Simply put, it helps your applications rank better with YSlow and PageSpeed.
Chirpy
Mashes, minifies, and validates your javascript, stylesheet, and dotless files. Chirpy can also auto-update T4MVC and other T4 templates.
Scott Hanselman wrote a good overview blog post about this topic a while back.
I voted up the answer that mentioned Cassette, but I'll detail that particular choice a little more. Cassette is pretty configurable, but under the most common option it allows you to reference CSS and JavaScript resources through syntax like this:
Bundles.Reference("Scripts/aFolderOfScriptsThatNeedsToLoadFirst", "first");
Bundles.Reference("Scripts/aFolderOfScripts");
Bundles.Reference("Styles/aFolderOfStyles");
You would then render these in your master or layout pages like this:
#Bundles.RenderStylesheets()
#Bundles.RenderScripts("first")
#Bundles.RenderScripts()
During development, your scripts and styles will be included as individual files, and Cassette will try to help you out by detecting changes and making the browser reload those files. This approach is great for debugging into libraries like Knockout when they're doing something you don't expect. Best of all, when you launch the site, you just change the web.config and Cassette will minify and bundle all your files into as few bundles as possible.
You can find more detail in their documentation (which is pretty good though sometimes lags behind development): http://getcassette.net/documentation/getting-started
Have a look at YUI Compressor at codeplex.com; it could be really helpful.
What I have done before is set up a post-build event that runs a simple batch file which minifies your source files. Then, if you're in release mode (not debug mode), you reference the minified source files. http://www.west-wind.com/weblog/posts/2007/Jan/19/Detecting-ASPNET-Debug-mode
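If it helps, here's the same kind of post-build step sketched as a small Node script instead of a batch file. This is only an illustration: the file names and paths are hypothetical, and it assumes Java and the YUI Compressor jar are available on the build machine.

// build.js -- a minimal sketch of a post-build bundle-and-minify step.
// Concatenates the individual source files, then shells out to YUI Compressor.
var fs = require('fs');
var execSync = require('child_process').execSync;

var sources = ['Scripts/menu.js', 'Scripts/validation.js']; // hypothetical file list
var bundle = sources.map(function (f) {
    return fs.readFileSync(f, 'utf8');
}).join('\n;\n'); // the ';' guards against files that end without a semicolon

fs.writeFileSync('Scripts/site.js', bundle);
// assumes tools/yuicompressor.jar exists; -o names the minified output file
execSync('java -jar tools/yuicompressor.jar Scripts/site.js -o Scripts/site.min.js');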
I haven't heard of publish-time minification. I think you should choose between dynamic minification, like SquishIt, and compile-time minification, like YUI Compressor or Ajax Minifier.
I prefer compile time. I don't think it's a problem to have the files changed at compile time. If you have a huge amount of CSS/JS, you can enable this step only for release compilation and, if it helps, publish these files only in the build configurations that need them.
I don't know if there is any way to hook into the functionality behind that 'Publish' button, but it's certainly possible to have that kind of 'static build process'.
Personally I'm using Apache Ant to script exactly what you've described. You develop against your uncompressed js/html/css files, and when you're done you call ant build, which then minifies, compresses, strips and publishes your whole web application.
Example script: https://github.com/jAndreas/typeof-NaN-2.0/blob/master/build/build.xml
Multiple sites recommend combining JavaScript and CSS files to improve web page performance, including examples of using Ant build scripts to concatenate the files prior to deployment.
I've searched and haven't found any information on how to automate updating the references to those files in HTML and other documents. I want to avoid hacking together something error-prone, and I'd like to learn from others who have automated builds in the past.
Are there automated tools in the wild to complete this task that I'm not seeing? Are there recommended processes to update the script and link tags in HTML? Can these solutions be integrated with ANT or similar build tools?
There sure is and it's a smart thing to do.
I found a PHP solution; I don't know if that's okay for you, but if it isn't you can still read its source (it's not difficult) and learn a lot. The solution works like this:
Rewrite your requests like this: from css/main.css and css/skin.css to css/main.css,skin.css (of course you can list many more).
Use Apache's mod_rewrite to redirect this request to a script (in our case combine.php) that will combine all the files into a single one.
The script combines all the files and sends the combined file. Then it saves it to a cache folder.
The next time around, it checks whether there is an up-to-date version in the cache and serves that one. If the latest file modification time has changed, it discards the cache.
The solution works great, and it even makes use of HTTP cache headers and emits ETags, which you should do anyway.
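To make the mechanics concrete, here's the same idea sketched in Node-flavored JavaScript rather than PHP. Everything here (paths, port, the md5 cache key) is a hypothetical stand-in for what combine.php actually does, and error handling is omitted.

// combine.js -- a rough sketch of the combine-and-cache idea described above.
var fs = require('fs');
var path = require('path');
var http = require('http');
var crypto = require('crypto');

var CSS_DIR = 'css';
var CACHE_DIR = 'cache';

http.createServer(function (req, res) {
    // A request like /css/main.css,skin.css names several files at once.
    var files = req.url.replace('/css/', '').split(',').map(function (f) {
        return path.join(CSS_DIR, f);
    });

    // Newest modification time among the source files.
    var newest = Math.max.apply(null, files.map(function (f) {
        return fs.statSync(f).mtime.getTime();
    }));
    var cacheFile = path.join(CACHE_DIR,
        crypto.createHash('md5').update(req.url).digest('hex') + '.css');

    // Rebuild the cached combination only when a source file is newer.
    if (!fs.existsSync(cacheFile) || fs.statSync(cacheFile).mtime.getTime() < newest) {
        var combined = files.map(function (f) {
            return fs.readFileSync(f, 'utf8');
        }).join('\n');
        fs.writeFileSync(cacheFile, combined);
    }

    var body = fs.readFileSync(cacheFile);
    res.writeHead(200, {
        'Content-Type': 'text/css',
        'ETag': crypto.createHash('md5').update(body).digest('hex')
    });
    res.end(body);
}).listen(8080);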
You are correct; this is a great way to speed up page loading. It will even work in conjunction with a CDN, which the other poster recommended.
Here is a small script that will pack multiple files into one for deployment. It supports both JS and CSS, and will even "minify" them by removing whitespace, etc. Just hook this into your build and deploy scripts.
juicer: http://cjohansen.no/en/ruby/juicer_a_css_and_javascript_packaging_tool
What's even better, it will follow JS and CSS import statements, so you only need to point your HTML files to the loader file and it will work in both development and production. (Assuming you replace the loader file with the combined file on deployment.)
There are others, including some run-time solutions. But it sounds like you have a build process in place anyway.
As far as updating the HTML goes, if you still need it: automated deployments are very popular in the Ruby world, so you may find standalone utilities that help even with non-Ruby projects. Methinks this would be best handled by your own project's template language, though (with a static resource revision id, or some such).
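For example, a template helper along these lines (names entirely hypothetical) could emit the individual files in development and one revision-stamped combined file in production:

// Hypothetical template helper: individual <script> tags in development,
// one combined, revision-stamped tag in production.
var REVISION = 42; // bump this on each deployment so browsers refetch
var PRODUCTION = process.env.NODE_ENV === 'production';

function scriptTags(files, combinedFile) {
    if (!PRODUCTION) {
        return files.map(function (f) {
            return '<script src="' + f + '"></script>';
        }).join('\n');
    }
    return '<script src="' + combinedFile + '?rev=' + REVISION + '"></script>';
}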
Good luck, and let us know what you find.
I think what you really want is a CDN (Content Delivery Network).
Read about it here
http://developer.yahoo.com/performance/rules.html
http://en.wikipedia.org/wiki/Content_delivery_network
I'm starting to use the Dojo toolkit and it has rich features like Dijits and themes which are useful but take forever to load.
I've got a good internet connection but those with slower connections would experience rather slow page loads.
This is also a question about heavy vs light frameworks. If you make heavy use of widgets, what are some techniques to keep page load times down?
Dojo has a build system that will drastically improve load times. Take a look at one of the Dojo books or the online docs and read up on layered builds. In order to do a build, you need to have the "source" (or "full") version of Dojo, which has the build tool included -- you can tell if you have it by the presence of the util directory (at the same level as dojo, dijit, dojox). If you don't have the full version, go back to the Dojo site and delve down into the download area -- it's perhaps not completely obvious.
Anyway, if you have the right version, you basically just need to make a "build profile" file (or files, for a layered build), which is essentially the list of dojo.requires that you would normally have in your HTML. The build system will jam all the JavaScript code for all the dijits, dojox stuff, etc. together into a "layered build" (a file), and it will run ShrinkSafe on it, which minifies the code (removes whitespace, shortens names, and so on). It will also do some of this to the CSS files. Aside from making things much smaller, you get just a single file for all the JS code (or a few files if you do more than one layer, but most of the time a single layer suffices).
This will improve your load times at least ten-fold, if not more. It might take you a bit of reading to get down the format of the profiles and the build command itself, but it's not too hard really. Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast.
Here's a link to the older build docs, which mostly still hold true:
http://www.dojotoolkit.org/book/dojo-book-0-9/part-4-meta-dojo/package-system-and-custom-builds
Here's the updated docs (though perhaps a little incomplete):
docs.dojocampus.org/build/index
It reads harder than it really is. Use the layer.profile file in the profiles directory as a starting point. Just put in a couple of things, then do a build and see if you get the release directory created (which should be at the same level as dojo, dijit, etc.); it will have the entire Dojo system in it (all minified) as well as your built (layered) stuff. Much faster.
Dylan Tynan
It's not that big (28k gzipped). Nevertheless you can use Google's hosted version of Dojo. Many of your users will already have it cached.
Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast
Slight correction -- you don't dojo.require that file, you reference it in an ordinary script tag.
<script type="text/javascript" src="js/dojo/dojo/dojo.js" ></script>
<script type="text/javascript" src="js/dojo/mystuff/mystuff.js"></script>
For the directory layout I put the built file "mystuff.js" into the same directory as my package. So at the same level as dojo, dojox, and dijit, I would have a directory named "mystuff", and within that I have MyClass1.js and MyClass2.js. Then a fragment from the profile.js file for the build looks like:
layers: [
    {
        name: "../mystuff/mystuff.js",
        dependencies: [
            "mystuff.MyClass1",
            "mystuff.MyClass2"
        ]
    },
    ...
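For context, here's a sketch of what the complete profile file might look like, with a prefixes section telling the build where each namespace lives. The paths are assumptions about your checkout layout, and the details of the Dojo 1.x profile format here are from memory, so check them against the build docs linked above.

// mystuff.profile.js -- sketch of a complete Dojo 1.x build profile
dependencies = {
    layers: [
        {
            name: "../mystuff/mystuff.js",
            dependencies: [
                "mystuff.MyClass1",
                "mystuff.MyClass2"
            ]
        }
    ],
    prefixes: [
        // [ namespace, path relative to the dojo directory ]
        [ "dijit", "../dijit" ],
        [ "dojox", "../dojox" ],
        [ "mystuff", "../mystuff" ]
    ]
};
// then, from util/buildscripts: ./build.sh profile=mystuff action=release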
I know this is an old thread. But I'm posting this answer for the benefit of other users like me who may read this.
If you are serving from Apache, there are other factors too. These settings can make a huge difference: MaxClients and MaxRequestsPerChild. You will need to tweak them based on the resources available to the server/machine serving the files.
Changing this worked really well for me.
Using the google CDN is also a good option although it may not be practical in some situations.
Custom build also has an effect as pointed out in other answers.
Do you localize your javascript to the page, or have a master "application.js" or similar?
If it's the latter, what is the best practice to make sure your .js isn't executing on the wrong pages?
EDIT: by javascript I mean custom javascript you write as a developer, not js libraries. I can't imagine anyone would copy/paste the jQuery source into their page but you never know.
Putting all your JS in one file can help performance (one request versus several). And if you're using a content delivery network like Akamai, it improves your cache hit ratio. Also, always put inline JS at the very bottom of the page (just above the closing body tag), because it is executed synchronously and can delay your page from rendering.
And yes, if one of the js files you are using is also hosted at google, make sure to use that one.
Here are my "guidelines". Note that none of these are formal; they just seem like the right thing to do.
All shared JS code lives in the SITE/javascripts directory, but it's loaded in 'tiers'
For site-wide stuff (like jQuery, or my site-wide application.js), the site-wide layout (this would be a master page in ASP.NET) includes the file. The script tags go at the top of the page.
There's also 'region-wide' stuff (e.g. JS code which is only needed in the admin section of the site). These regions either have a common layout (which can then include the script tags) or render a common partial, and that partial can include the script tags.
For less-shared stuff (say a library that's only needed in a few places), I put a script tag in those HTML pages individually. The script tags go at the top of the page.
For stuff that's only relevant to a single page, I just write inline JavaScript. I try to keep it as close to its "target" as possible. For example, if I have some onclick JS for a button, the script tag will go below the button.
For inline JS that doesn't have a target (eg: onload events) it goes at the bottom of the page.
So, how does something get into a localised library, or a site-wide library?
The first time you need it, write it inline
The next time you need it, pull the inline code up to a localised library
If you're referencing some code in a localised library from (approximately) 3 or more places, pull the code up to a region-wide library
If it's needed from more than one region, pull it up to a site-wide library.
A common complaint about a system such as this, is that you wind up with 10 or 20 small JS files, where 2 or 3 large JS files will perform better from a networking point of view.
However, both rails and ASP.NET have features which handle combining and caching multiple JS files into one or more 'super' js files for production situations.
I'd recommend using features like this rather than compromising the quality/readability of the actual source code.
Yahoo!'s Exceptional Performance Team has some great performance suggestions for JavaScript. Steve Souders used to be on that team (he's now at Google) and he's written some interesting tools that can help you decide where to put JavaScript.
I try to avoid putting javascript functions on the rendered page. In general, I have an application.js (or root.js) that has generic functionality like menu manipulation. If a given page has specific javascript functionality, I'll create a .js file to handle that code and mimic the dir structure on how to get to that file (also using the same name as the rendered file).
In other words, if the rendered page is in public/dir1/dir2/mypage.html, the js file would be in public/js/dir1/dir2/mypage.js. I've found this style works well for me, especially when doing templating on a site. I build the template engine to "autoload" my resources (css and js) by taking the request path and doing some checking for the css and js equivalents in the css and js directories on the root.
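A rough sketch of that "autoload" idea in JavaScript (the poster's engine is PHP; all names and paths here are hypothetical):

// Derive the page-specific JS path from the request path and include it
// only if the corresponding file actually exists.
var fs = require('fs');
var path = require('path');

function pageScriptTag(requestPath) { // e.g. "/dir1/dir2/mypage.html"
    var jsPath = path.join('js', requestPath.replace(/\.html$/, '.js'));
    if (!fs.existsSync(path.join('public', jsPath))) {
        return ''; // this page has no specific script
    }
    return '<script src="/' + jsPath + '"></script>';
}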
Personally, I try to include several Javascript files, sorted by module (like YUI does). But once in a while, when I'm writing essentially a one-liner, I'll put it on the page.
Best practice is probably to put it on Google's servers.
(Depends what you mean by "your" javascript though I suppose :)
This is something I've been wrestling with, too. I've ended up using my back-end PHP script to intelligently build a list of required JS files based on the content requested by the user.
By organizing my JS files into a repository that contains multiple files organized by purpose (be they general use, focused for a single page, single section, etc) I can use the chain of events that builds the page on the back-end to selectively choose which JS files get included based on need (see example below).
This is after implementing my web app without giving this aspect of the code enough thought. Now, I should also add that the javascript I use enhances but does not form the foundation of my site. If you're using something like SproutCore or Ext I imagine the solution would be somewhat different.
Here's an example for a PHP-driven website:
Say your site is divided into sections, and one of those sections is a calendar. The user navigates to "index.php?module=calendar&action=view". If the PHP code is class-based, the routing algorithm instantiates the CalendarModule class, which is based on 'Module' and has a virtual method 'getJavascript'. This returns the JavaScript classes required to perform the action 'view' on the 'calendar' module. It can also take into account any other special requirements and return JS files for those as well. The rendering code can verify that there are no duplicate JS files when the JavaScript include list is built for the final page. So the getJavascript method returns an array like this:
return array('prototype.js','mycalendar.js');
Note that this, or some form of this, is not a new idea. But it took me some time to think it important enough to go to the trouble.
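Sketched in JavaScript for illustration (the original is PHP, and every name here is hypothetical), the per-module lookup plus the duplicate check might look like:

// Each module reports the scripts it needs; the renderer de-duplicates
// before emitting the include list for the final page.
var modules = {
    calendar: { getJavascript: function () { return ['prototype.js', 'mycalendar.js']; } },
    mail:     { getJavascript: function () { return ['prototype.js', 'mail.js']; } }
};

function scriptsFor(activeModules) {
    var seen = {};
    var files = [];
    activeModules.forEach(function (name) {
        modules[name].getJavascript().forEach(function (f) {
            if (!seen[f]) { seen[f] = true; files.push(f); }
        });
    });
    return files;
}

// scriptsFor(['calendar', 'mail']) -> ['prototype.js', 'mycalendar.js', 'mail.js']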
If it's only a few hundred bytes or less, and doesn't need to be used anywhere else, I would probably inline it. The network overhead for another http request will likely outweigh any performance gains that you get by pulling it out of the page.
If it needs to be used in a few places, I would put the function(s) into a common external file, and call it from an inline script as needed.
If you are targeting an iPhone, try to keep anything that you want cached under 25 KB.
No hard and fast rules, really; every approach has pros and cons. I'd strongly recommend you check out the articles in Yahoo's developer section so you can make informed decisions on a case-by-case basis.
Nowadays, we have tons of Javascript libraries per page in addition to the Javascript files we write ourselves. How do you manage them all? How do you minify them in an organized way?
Organization
All of my scripts are maintained in a directory structure that I follow whenever I work on a site. The directory structure normally goes something like this:
+--root
|--javascript
|--lib
|--prototype.js
|--scriptaculous
|--scriptaculous.js
|--effects.js
|--..
|--myOwnScript.js
|--myOwnScript2.js
If, on the off chance, I'm working on a team that uses an inordinate number of scripts, then I'll normally create a custom directory in which we'll organize scripts by relationship. This doesn't happen terribly often, though.
Compression
Though there are a lot of different compressors and obfuscators out there, I always come back to YUI Compressor.
Inclusion
Unless a site is using some form of master page, CMS, or something beyond my control that dictates what can be included on a page, I only include the scripts necessary for the given page, for the small performance gain. If a page doesn't require any script, there will be no script inclusions on that page.
First of all, YUI Compressor.
Keeping them organized is up to you, but most groups that I've seen have just come up with a convention that makes sense for their application.
It's generally optimal to package up your files in such a way that you have a small handful of packages which can be included on any given page for optimal caching.
You also might consider dividing your javascript up into segments that are easy to share across the team.
Cal Henderson (of Flickr fame) wrote Serving JavaScript Fast a while back. It covers asset delivery, not organization, but it might answer some of your questions.
Here are the bullet points:
Yes, you ought to concatenate JavaScript files in production to minimize the number of HTTP requests.
BUT you might not want to concatenate into one giant file; you might want to break it into logical pieces and spread the transfer cost over several pages.
gzip compression is good, but you shouldn't serve gzipped assets to IE <= 6, so you might also want to minify/compress your JavaScript (a sketch of this point follows the lists below).
I'll add a few bullet points of my own:
You ought to come up with a solution that works for both development and production. In development mode, it should pull in extra JavaScript files on demand; in production it should bundle everything ahead of time. Switching from one behavior to the other should be as easy as setting a flag.
Rails 2.0 handles all this through an asset cache; other web app frameworks might offer similar solutions.
As another answer suggests, placing third-party libraries in a lib directory is a good start. You can also divide your own JS files into sub-directories if it makes sense. Ideally, you'll be able to arrange them in such a way that the files in a given sub-directory can be concatenated into one file.
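Returning to the gzip bullet above: here's a rough Node sketch of the idea (hypothetical; in practice this usually lives in the web server config). Serve the minified file, gzipping only for clients that can safely accept it:

// Serve a pre-minified script, skipping gzip for IE 6 and older.
var fs = require('fs');
var http = require('http');
var zlib = require('zlib');

http.createServer(function (req, res) {
    var body = fs.readFileSync('site.min.js'); // already minified at build time
    var acceptsGzip = /\bgzip\b/.test(req.headers['accept-encoding'] || '');
    var oldIE = /MSIE [1-6]\./.test(req.headers['user-agent'] || '');

    if (acceptsGzip && !oldIE) {
        res.writeHead(200, { 'Content-Type': 'application/javascript',
                             'Content-Encoding': 'gzip' });
        res.end(zlib.gzipSync(body));
    } else {
        res.writeHead(200, { 'Content-Type': 'application/javascript' });
        res.end(body);
    }
}).listen(8080);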
I will have a folder for all javascript, and a sub folder of that for 3rd party/shared libraries, and sub folders for each component of the site to keep everything organized.
For example:
/
+--/javascript/
+-- lib/
+-- admin/
+-- component1/
+-- component2/
Then run everything through a minifier/obfuscator during the build process.
I've been using this lately:
http://code.google.com/apis/ajaxlibs/
And then have a "jscripts" folder where I keep my custom code.
In my last project, we had three kinds of JS files, all of them inside a JS folder.
Library code. A bunch of functions used on most all of the pages, so they were put together in one or a few files.
Classes. These had their own files, organized in folders as needed, but not necessarily so.
Ad hoc JS. Code that was specific to that page. These were saved in files that had the same name as the JSP pages they were supposed to run in.
The biggest effort was in keeping most of the code in the first two kinds, so that the ad hoc code only had to know what to call, and when.
This might be a different approach than what you're looking for, but I've been playing around with the idea of JavaScript templates in our blog engine. In a nutshell, you assign a JavaScript template to a page id using the database, and it dynamically includes and minifies all the JavaScript files associated with that template, creating a file in a server-side cache with the template id as the file name. When a page is loaded, it calls the template file, which first checks whether the file exists in the cache and loads it if it does. If it doesn't exist, it creates it on the fly and includes it. I also use the template file to gzip the conglomerate JavaScript file.
The template idea would work well for site-wide JavaScript (like a JavaScript library), but it doesn't cover page-specific JavaScript. However, you can still use the same approach for page specific JavaScript by including a second file that does the same as above.
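As a rough illustration of that template-cache idea (names hypothetical; the database lookup and the minification step are elided):

// The template id keys both the list of scripts (normally from the database)
// and the cached, gzipped bundle on disk.
var fs = require('fs');
var zlib = require('zlib');

var templates = { 42: ['lib/jquery.js', 'blog/comments.js'] };

function bundleFor(templateId) {
    var cacheFile = 'cache/' + templateId + '.js.gz';
    if (!fs.existsSync(cacheFile)) { // build on first request, reuse afterwards
        var joined = templates[templateId].map(function (f) {
            return fs.readFileSync(f, 'utf8');
        }).join('\n');
        fs.writeFileSync(cacheFile, zlib.gzipSync(joined)); // minification elided
    }
    return cacheFile;
}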