Are there ways to improve JavaScript (Dojo) loading?

I'm starting to use the Dojo toolkit and it has rich features like Dijits and themes which are useful but take forever to load.
I've got a good internet connection but those with slower connections would experience rather slow page loads.
This is also a question about heavy vs light frameworks. If you make heavy use of widgets, what are some techniques to keep page load times down?

Dojo has a build system that will drastically improve load times. Take a look at one of the dojo books or the online docs & look at layered builds. In order to do a build, you need to have the "source" (or "full") version of dojo, which has the build tool included -- you can tell if you have this by the presence of the 'util' directory (which is at the same level as dojo, dijit, dojox). If you don't have the full version, go back to the dojo site & delve down into the download area -- it's not completely obvious perhaps.
Anyway, if you have the right version, you basically just need to make a "build profile" file (or files ... aka a layered build), which is essentially your list of dojo.requires that you would normally have in your HTML. The build system will jam all the JavaScript code for all the dijits, dojox stuff, etc. together into a "layered build" (a file) and it will run ShrinkSafe on it, which sort of minifies the code (removes whitespace, shortens names, etc.). It will also do some of this to the CSS files. Aside from making things much smaller, you get just a single file for all the JS code (or a few files if you do more than one layer, but most of the time a single layer suffices).
This will improve your load times at least ten-fold, if not more. It might take you a bit of reading to get down the format of the profiles and the build command itself, but it's not too hard really. Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast.
Here's a link to the older build docs, which mostly still hold true:
http://www.dojotoolkit.org/book/dojo-book-0-9/part-4-meta-dojo/package-system-and-custom-builds
Here's the updated docs (though perhaps a little incomplete):
http://docs.dojocampus.org/build/index
It reads as though it's harder than it really is ... use the layer.profile file in the profiles directory as a starting point. Just put a couple of things in, then do a build & see if you get the release directory created (which should be at the same level as dojo, dijit, etc.); it will have the entire dojo system in it (all minified) as well as your built (layered) stuff. Much faster.
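To give a concrete picture, a minimal layered-build profile looks roughly like the sketch below. The module names and the "mystuff" package are illustrative, and the exact options vary a bit between Dojo versions, so treat this as a starting point rather than a definitive profile:

// mystuff.profile.js -- minimal sketch of a layered-build profile.
// Run from util/buildscripts with something like:
//   ./build.sh profile=mystuff action=release
dependencies = {
    layers: [
        {
            // the combined file the builder writes into the release directory
            name: "../mystuff/mystuff.js",
            dependencies: [
                // everything you would otherwise dojo.require in your pages
                "dijit.form.Button",
                "dijit.layout.BorderContainer",
                "mystuff.MyClass1"
            ]
        }
    ],
    prefixes: [
        // where each non-core package lives, relative to the dojo directory
        ["dijit",   "../dijit"],
        ["dojox",   "../dojox"],
        ["mystuff", "../mystuff"]
    ]
};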
Dylan Tynan

It's not that big (28k gzipped). Nevertheless you can use Google's hosted version of Dojo. Many of your users will already have it cached.
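For example, pulling Dojo from Google's CDN is just a script tag; the version and exact path here are illustrative, so check the Google Hosted Libraries page for the release you want:

<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/dojo/1.6.1/dojo/dojo.xd.js"></script>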

Once you create a build file, name it something obvious like "mystuff"
and then you can dojo.require the "mystuff" file (which will be in the
new build directory that is created when you build, then underneath
that & hanging out with the dojo.js file in the dojo directory).
Requiring in your built file will satisfy all the dojo.require's you
normally do (assuming you have them all listed in the profile to
build) and things will load very fast
Slight correction -- you don't dojo.require that file, you reference it in an ordinary script tag.
<script type="text/javascript" src="js/dojo/dojo/dojo.js" ></script>
<script type="text/javascript" src="js/dojo/mystuff/mystuff.js"></script>
For the directory layout I put the built file "mystuff.js" into the same directory as my package. So at the same level as dojo, dojox, and dijit, I would have a directory named "mystuff", and within that I have MyClass1.js and MyClass2.js. Then a fragment from the profile.js file for the build looks like:
layers: [
    {
        name: "../mystuff/mystuff.js",
        dependencies: [
            "mystuff.MyClass1",
            "mystuff.MyClass2"
        ]
    },
    ...

I know this is an old thread. But I'm posting this answer for the benefit of other users like me who may read this.
If you are serving from Apache, there are other factors too. These settings can make a huge difference - MaxClients and MaxRequestsPerChild. You will need to tweak them based on the resources available to your server/machine serving the files.
Changing these worked really well for me.
Using the Google CDN is also a good option, although it may not be practical in some situations.
Custom build also has an effect as pointed out in other answers.

Related

Javascript build tools that update script tags after concatenation

I'm very keen to make use of some build techniques in my JavaScript/web app development such as
Concatenation
Minification
Image replacement with data:uri's
Build vs Source *
App Cache Manifest generation *
It's those last two that I haven't found an answer for yet.
Build vs Source
By this I mean having a "source" version of my HTML and JavaScript that is untouched, so that I do not have to build each time to preview a change. All of my JS files stay as separate <script> tags as usual in the source version, with the build updating those script sections to reference the final concatenated versions. To be honest I feel like I'm missing something here with all of these new JavaScript build systems, as this seems like an obvious need, but I can't find anyone else talking about it. How is everyone else dealing with this? Build on each change during development? Surely not.
App Cache Manifest generation
This explains itself - walk through my source tree and build up a manifest and insert it into my <html> tag.
I've searched for these two with no luck - any pointers?
I'd be on the road with a killer build system if it wasn't for those two.
Thanks!!!
Re: Build vs Source
It sounds like you're already familiar with grunt. You may want to consider looking into the grunt node-build-script plugin.
It adds a number of new tasks, notably grunt mkdirs and grunt copy, which duplicate your project directory into a separate staging folder and then copy your optimised project into a publish folder. If I'm not mistaken, this is what you mean by keeping an 'untouched' version of your source files?
Running grunt server will then serve up the contents of your publish files on localhost. You could always point your web server to your initial project directory if you want to examine your application in its unoptimised state.
node-build-script adds a bunch of other super convenient tasks, such as image optimisation, automatic file revving and substitution. It's incredibly easy to use and super customisable.
I have a basic single page template which uses node-build-script which also may be of interest.
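If node-build-script turns out to be more than you need, the same keep-source-untouched / write-to-publish split can be sketched with the stock grunt-contrib plugins. This is only an illustration with made-up paths and task names, not node-build-script's own configuration:

// Gruntfile.js -- sketch: leave src/ untouched, write the optimised copy to publish/
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['src/js/**/*.js'],        // your unbuilt, individual script files
        dest: 'publish/js/app.js'       // concatenated output
      }
    },
    uglify: {
      dist: {
        src: 'publish/js/app.js',
        dest: 'publish/js/app.min.js'   // minified bundle referenced from the published HTML
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.registerTask('build', ['concat', 'uglify']);
};

You keep developing against the individual files in src/, and only the publish/ copy ever sees the concatenated, minified bundle.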
Re: App Cache Manifest generation
I believe this used to be part of node-build-script but was since removed, see 1, 2
There would be nothing stopping you from creating a custom grunt task that utilised something like confess.js however.
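As a rough illustration of what such a task would need to do (a standalone Node sketch, not confess.js or an existing plugin; the publish directory name is an assumption), generating a manifest is mostly a directory walk:

// make-manifest.js -- sketch: walk the built site and write an HTML5 appcache manifest
var fs = require('fs');
var path = require('path');

function walk(dir, files) {
  files = files || [];
  fs.readdirSync(dir).forEach(function (entry) {
    var full = path.join(dir, entry);
    if (fs.statSync(full).isDirectory()) walk(full, files);
    else files.push(full);
  });
  return files;
}

var root = 'publish';  // assumption: the optimised build lives here
var assets = walk(root).map(function (f) {
  return '/' + path.relative(root, f).split(path.sep).join('/');
});

var manifest = ['CACHE MANIFEST', '# rev ' + Date.now()]
  .concat(assets)
  .concat(['NETWORK:', '*'])
  .join('\n');

fs.writeFileSync(path.join(root, 'app.manifest'), manifest);
// then reference it from the html tag: <html manifest="app.manifest">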
Finally, it looks like Google's upcoming Yeoman might be worth keeping an eye on if you're not already!

Should I version control the minified versions of my jQuery plugins?

Let's say I write a jQuery plugin and add it to my repository (Mercurial in my case). It's a single file, say jquery.plugin.js. I'm using BitBucket to manage this repository, and one of its features is a Downloads page. So, I add jquery.plugin.js as one of the downloads.
Now I want to make available a minified version of my plugin, but I'm not sure what the best practice is. I know that it should be available on the Downloads page as jquery.plugin.min.js, but should I also version control it each time I update it to reflect the unminified version?
The most obvious problem I see with version controlling the minified version is that I might forget to update it each time I make a change to the unminified version.
So, should I version control the minified file?
No, you should not need to keep generated minimized versions under source control.
We have had problems when adding generated files into source control (TFS), because of the way TFS sets local files to be read-only. Tools that generate files as part of the build process then have write access problems (this is probably not a problem with other version control systems).
But importantly, all the:
tools
scripts
source code
resources
third party libraries
and anything else you need to build, test and deploy your product should be under version control.
You should be able to check out a specific version from source control (by tag or revision number or the equivalent) and recreate the software exactly as it was at that point in time. Even on a 'fresh' machine.
The build should not be dependent on anything which is not under source control.
Scripts: build scripts, whether Ant, make, MSBuild command files or whatever you are using, and any deployment scripts you may have, need to be under version control - not just on the build machine.
Tools: this means the compilers, minimizers, test frameworks - everything you need for your build, test and deployment scripts to work - should be under source control. You need the exact version of those tools to be available to recreate the software as it was at a point in time.
The book 'Continuous Delivery' taught me this lesson - I highly recommend it.
Although I believe this is a great idea - and stick to it as best as possible - there are some areas where I am not 100% sure. For example the operating system, the Java JDK, and the Continuous Integration tool (we are using Jenkins).
Do you practice Continuous Integration? It's a good way to test that you have all the above under control. If you have to do any manual installation on the Continuous Integration machine before it can build the software, something is probably wrong.
My simple rule of thumb:
Can this be automatically generated during a build process?
If yes, then it is a resource, not a source file. Do not check it in.
If no, then it is a source file. Check it in.
Here are the Sensible Rules for Repositories™ that I use for myself:
If a blob needs to be distributed as part of the source package in order to build it, use it, or test it from within the source tree, it should be under version control.
If an asset can be regenerated on demand from versioned sources, do that instead. If you can (GNU) make it, (Ruby) rake it, or just plain fake it, don't commit it to your repository.
You can split the difference with versioned symlinks, maintenance scripts, submodules, externals definitions, and so forth, but the results are generally unsatisfactory and error prone. Use them when you have to, and avoid them when you can.
This is definitely a situation where your mileage may vary, but the three Sensible Rules work well for me.

compressing .js and .css files on push of the website

I am not even sure if something like what I want is possible, so I am asking you guys to just let me know if anyone has done this before. My goal is that when I click "Publish" website in VS2010, all JavaScript files get compressed into one (and the same for CSS), and the references in my layout file change from all the different JS and CSS files to only those two merged ones. Is that doable? Or maybe it's doable but in a more manual way?
Of course the goal here is to have only two calls to external files on the website, but when I develop I need to see all the files so that I can actually work with them. I guess I could do it manually before each push, but I'd rather have it done automatically using some script or something. I haven't tried anything yet, and I am not looking for a ready-made solution; I am just looking to get to know the problem better and maybe get some tips.
Thanks a lot!
This is built into ASP.NET 4.5. But in the meantime, you should look at the following projects:
YUI Compressor
The objective of this project is to compress any Javascript and Cascading Style Sheets to an efficient level that works exactly as the original source, before it was minified.
Cassette
Cassette automatically sorts, concatenates, minifies, caches and versions all your JavaScript, CoffeeScript, CSS, LESS and HTML templates.
RequestReduce
Super Simple Auto Spriting, Minification and Bundling solution
No need to tell RequestReduce where your resources are
Your CSS and Javascript can be anywhere - even on an external host
RequestReduce finds them at runtime automatically
SquishIt
SquishIt lets you squish some JavaScript and CSS. And also some LESS and CoffeeScript.
Combres
.NET library which enables minification, compression, combination, and caching of JavaScript and CSS resources for ASP.NET and ASP.NET MVC web applications. Simply put, it helps your applications rank better with YSlow and PageSpeed.
Chirpy
Mashes, minifies, and validates your javascript, stylesheet, and dotless files. Chirpy can also auto-update T4MVC and other T4 templates.
Scott Hanselman wrote a good overview blog post about this topic a while back.
I voted up the answer that mentioned Cassette but I'll detail that particular choice a little more. Cassette is pretty configurable, but under the most common option, it allows you to reference CSS and Javascript resources through syntax like this:
Bundles.Reference("Scripts/aFolderOfScriptsThatNeedsToLoadFirst", "first");
Bundles.Reference("Scripts/aFolderOfScripts");
Bundles.Reference("Styles/aFolderOfStyles");
You would then render these in your master or layout pages like this:
#Bundles.RenderStylesheets()
#Bundles.RenderScripts("first")
#Bundles.RenderScripts()
During development, your scripts and styles will be included as individual files, and Cassette will try to help you out by detecting changes and trying to make the browser reload those files. This approach is great for debugging into libraries like knockout when they're doing something you don't expect. And, the best part, when you launch the site, you just change the web.config and Cassette will minify and bundle all your files into as few bundles as possible.
You can find more detail in their documentation (which is pretty good though sometimes lags behind development): http://getcassette.net/documentation/getting-started
Have a look at YUI Compressor on codeplex.com; this could be really helpful.
What I have done before is set up a post-build event and have it run a simple batch file which minimizes your source files. Then if you're in release mode (not in debug mode), you would reference the minimized source files. http://www.west-wind.com/weblog/posts/2007/Jan/19/Detecting-ASPNET-Debug-mode
I haven't heard about publish-time minification. I think you should choose between dynamic minification like SquishIt or compile-time minification like YuiCompressor or AjaxMinifier.
I prefer compile time. I don't think it's a big problem to have files changed at compile time. If you have a huge amount of CSS/JS, you can choose to do this only for release compilation and, if it helps, publish these files only in the build configurations that need them.
I don't know if there is any possible way to somehow hook into the functionality from that 'Publish' button/whatever it is, but it's surely possible to have that kind of 'static build process'.
Personally I'm using Apache ANT to script exactly what you've described there. So you're developing on your uncompressed js/html/css files and when you're done, you run something like ant build, which then minifies, compresses, strips and publishes your whole web application.
Example script: https://github.com/jAndreas/typeof-NaN-2.0/blob/master/build/build.xml

How to update HTML script and link references when combining JavaScript and CSS files?

Multiple sites reference combining JavaScript and CSS files to improve web page performance, including examples of using ANT build scripts to concatenate the files prior to deployment.
I've searched, and haven't found any information on how to automate updating references to those files in HTML and other documents. I am looking to avoid hacking together something error prone, and want to learn from others who have automated builds in the past.
Are there automated tools in the wild to complete this task that I'm not seeing? Are there recommended processes to update the script and link tags in HTML? Can these solutions be integrated with ANT or similar build tools?
There sure are, and it's a smart thing to do.
I found a PHP solution; I don't know if that's okay for you, but even if it isn't you can still read its source (it's not difficult) and learn a lot. The solution works like this:
Rewrite your requests like this: from css/main.css and css/skin.css to css/main.css,skin.css (of course you can put many more).
Use Apache's mod_rewrite to redirect this request to a script (in our case combine.php) that will combine all the files into a single one.
The script combines all the files and sends the combined file. Then it saves it to a cache folder.
Next time around it checks if there is an up-to-date version of the cache and serves that one. If the latest file modification time has changed, it discards the cache.
The solution works great and it even makes use of HTTP cache headers and spits out ETags, which you should do anyway.
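If PHP isn't an option, the same combine-and-cache idea is easy to port. Here is a rough Node sketch of the flow described above; the file names, port and directory layout are made up for illustration:

// combine.js -- sketch of serving combined CSS for requests like /css/main.css,skin.css
var fs = require('fs');
var path = require('path');
var http = require('http');

var CSS_DIR = path.join(__dirname, 'css');     // assumed location of the individual files
var CACHE_DIR = path.join(__dirname, 'cache'); // combined files get written here

http.createServer(function (req, res) {
  var match = req.url.match(/^\/css\/(.+)$/);
  if (!match) { res.writeHead(404); res.end(); return; }

  var names = match[1].split(',');
  var files = names.map(function (n) { return path.join(CSS_DIR, path.basename(n)); });
  var newest = Math.max.apply(null, files.map(function (f) { return fs.statSync(f).mtime.getTime(); }));
  var cacheFile = path.join(CACHE_DIR, names.join('_'));

  var body;
  if (fs.existsSync(cacheFile) && fs.statSync(cacheFile).mtime.getTime() >= newest) {
    body = fs.readFileSync(cacheFile);          // cached combination is still current
  } else {
    body = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join('\n');
    if (!fs.existsSync(CACHE_DIR)) fs.mkdirSync(CACHE_DIR);
    fs.writeFileSync(cacheFile, body);          // rebuild the cached combination
  }
  res.writeHead(200, { 'Content-Type': 'text/css', 'ETag': '"' + newest + '"' });
  res.end(body);
}).listen(8080);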
You are correct this is a great way to speed up page loading. It will even work in conjunction with a CDN, which the other poster recommended.
Here is a small script that will pack multiple files into one for deployment. It supports both JS and CSS, and will even "minify" them by removing whitespace, etc. Just hook this into your build and deploy scripts.
juicer: http://cjohansen.no/en/ruby/juicer_a_css_and_javascript_packaging_tool
What's even better, it will follow JS and CSS import statements, so you only need to point your HTML files to the loader file and it will work in both development and production. (Assuming you replace the loader file with the combined file on deployment.)
There are others, including some run-time solutions. But it sounds like you have a build process in place anyway.
As far as HTML updating goes, if you still need it: automated deployments are very popular in the Ruby world, so you may find some standalone utilities that help even for non-Ruby projects (as above). Methinks this would be best handled by your own project's template language, though (with a static resource revision id, or such).
Good luck, and let us know what you find.
I think what you really want is a CDN (Content Delivery Network).
Read about it here:
http://developer.yahoo.com/performance/rules.html
http://en.wikipedia.org/wiki/Content_delivery_network

How do you manage your Javascript files?

Nowadays, we have tons of JavaScript libraries per page in addition to the JavaScript files we write ourselves. How do you manage them all? How do you minify them in an organized way?
Organization
All of my scripts are maintained in a directory structure that I follow whenever I work on a site. The directory structure normally goes something like this:
+--root
   |--javascript
      |--lib
      |  |--prototype.js
      |  |--scriptaculous
      |     |--scriptaculous.js
      |     |--effects.js
      |     |--..
      |--myOwnScript.js
      |--myOwnScript2.js
If, on the off chance, I'm working on a team that uses an inordinate amount of scripts, then I'll normally create a custom directory in which we'll organize scripts by relationship. This doesn't happen terribly often, though.
Compression
Though there are a lot of different compressors and obfuscators out there, I always come back to YUI Compressor.
Inclusion
Unless a site is using some form of master page, CMS, or something that dictates what can be included on a page beyond my control, I only include the scripts necessary for the given page, just for the sake of the small performance gain. If a page doesn't require any script, there will be no script inclusions on that page.
First of all, YUI Compressor.
Keeping them organized is up to you, but most groups that I've seen have just come up with a convention that makes sense for their application.
It's generally optimal to package up your files in such a way that you have a small handful of packages which can be included on any given page for optimal caching.
You also might consider dividing your javascript up into segments that are easy to share across the team.
Cal Henderson (of Flickr fame) wrote Serving JavaScript Fast a while back. It covers asset delivery, not organization, but it might answer some of your questions.
Here are the bullet points:
Yes, you ought to concatenate JavaScript files in production to minimize the number of HTTP requests.
BUT you might not want to concatenate into one giant file; you might want to break it into logical pieces and spread the transfer cost over several pages.
gzip compression is good, but you shouldn't serve gzipped assets to IE <= 6, so you might also want to minify/compress your JavaScript.
I'll add a few bullet points of my own:
You ought to come up with a solution that works for both development and production. In development mode, it should pull in extra JavaScript files on demand; in production it should bundle everything ahead of time. Switching from one behavior to the other should be as easy as setting a flag (see the sketch after this list).
Rails 2.0 handles all this through an asset cache; other web app frameworks might offer similar solutions.
As another answer suggests, placing third-party libraries in a lib directory is a good start. You can also divide your own JS files into sub-directories if it makes sense. Ideally, you'll be able to arrange them in such a way that the files in a given sub-directory can be concatenated into one file.
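Regarding the development/production flag in the first bullet above, the switch can be as small as a template helper that emits either the individual files or the pre-built bundle. This is just a sketch with made-up names, roughly what framework asset helpers do for you:

// scriptTags.js -- sketch: individual <script> tags in development, one bundle in production
function scriptTags(files, options) {
  options = options || {};
  if (options.production) {
    // the bundle is assumed to have been concatenated/minified ahead of time
    return '<script src="' + (options.bundlePath || '/js/all.min.js') + '"></script>';
  }
  return files.map(function (f) {
    return '<script src="' + f + '"></script>';
  }).join('\n');
}

// usage in a page template
console.log(scriptTags(['/js/lib/prototype.js', '/js/app.js'], { production: false }));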
I will have a folder for all javascript, and a sub folder of that for 3rd party/shared libraries, and sub folders for each component of the site to keep everything organized.
For example:
/
+--javascript/
   +--lib/
   +--admin/
   +--component1/
   +--component2/
Then run everything through a minifier/obfuscator during the build process.
I've been using this lately:
http://code.google.com/apis/ajaxlibs/
And then have a "jscripts" folder where I keep my custom code.
In my last project, we had three kinds of JS files, all of them inside a JS folder.
Library code. A bunch of functions used on most all of the pages, so they were put together in one or a few files.
Classes. These had their own files, organized in folders as needed, but not necessarily so.
Ad hoc JS. Code that was specific to that page. These were saved in files that had the same name as the JSP pages they were supposed to run in.
The biggest effort was in having most of the code be of the first two kinds, with the ad hoc code only knowing what to call, and when.
This might be a different approach than what you're looking for, but I've been playing around with the idea of JavaScript templates in our blog engine. In a nutshell, you assign a Javascript template to a page id using the database and it will dynamically include and minify all the JavaScript files associated with that template and create a file in a server-side cache with the template id as a file name. When a page is loaded, it calls the template file which first checks if the file exists in the cache and loads it if it does. If it doesn't exist, it creates it on the fly and includes it. I also use the template file to gzip the conglomerate JavaScript file.
The template idea would work well for site-wide JavaScript (like a JavaScript library), but it doesn't cover page-specific JavaScript. However, you can still use the same approach for page specific JavaScript by including a second file that does the same as above.
