When I write the JS files for a Django project, I naturally make some AJAX calls, and for the moment the URLs for those calls are hard-coded (which is very ugly).
I was thinking of having the JS files served by Django (instead of Apache) so I could take advantage of the template tags ({% url %}!!!).
Is there a reason why I shouldn't do this? Or is there a right way to do it?
(I can give at least one: it would waste time re-sending JS files that haven't changed. What would be great is an application that generates the files when the Django server restarts and serves them statically afterwards!)
I would go for a hybrid technique. Serve most of your JavaScript statically, but in your Django template have a <script> block that defines various global variables generated by the server-side code - a URL is a good example. Then your static JS can refer to the variables generated in the dynamic code.
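For example, a minimal sketch of that hybrid approach (the URL names, variable names and jQuery usage here are made up for illustration; quote the url argument or not depending on your Django version):

In the Django template:

<script>
    // Generated server-side on every request; static code reads these.
    var SETTINGS = {
        searchUrl: "{% url 'search' %}",
        itemDetailUrl: "{% url 'item_detail' %}"
    };
</script>

In the static JS file:

// Uses the server-generated global instead of a hard-coded URL.
$.get(SETTINGS.searchUrl, {q: query}, handleResults);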
I searched deeper into those asset-manager applications on djangopackages and found that django-mediagenerator provides this feature, even if it is not well documented: you can render your JS or CSS files as Django templates and then serve them statically (they are also bundled, caching is managed, etc., so two birds with one stone; it is also really easy to set up!).
To have JS files rendered as Django templates (after setting up django-mediagenerator), just add the filter:
ROOT_MEDIA_FILTERS = {
'js': 'mediagenerator.filters.template.Template',
}
in your settings.
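With that filter in place, something along these lines should work inside one of your bundled JS files (the URL name is hypothetical; quote it or not depending on your Django version):

var SEARCH_URL = "{% url 'search' %}"; // resolved when the bundle is generated, then served statically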
Dynamically generating JavaScript on your server can be a tremendously powerful tool, and I've experienced both its upsides and downsides in my projects.
In general you want to keep as much as possible static to minimize the work done on every request. That includes letting the browser cache as much as possible, which might become a problem in your case.
What I usually do is have a block in the header of my base template. In templates that need custom JavaScript that is only known at runtime (customization based on the logged-in user, for example), I add it to that block. There I can dynamically generate JavaScript that I know won't be cached, so I can make some assumptions. The downside is more complexity.
If all you need is to point at URLs, or some simple configuration, then I would suggest creating a view that returns a JavaScript file with these settings. You can set the correct headers (ETag, Cache-Control, etc.) so the browser will cache the file for some reasonable time. When you upgrade your code, make sure the ETag changes.
In the code that uses the configuration, always check that the variable you are looking for is actually defined; otherwise you will run into hard-to-debug problems when, for some reason, the configuration JavaScript is not loaded correctly.
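For example, a defensive sketch in the consuming code (CONFIG and its fields are hypothetical names for whatever your view emits):

// Fail loudly if the configuration script was not loaded.
if (typeof CONFIG === "undefined" || !CONFIG.searchUrl) {
    throw new Error("CONFIG is missing - was the settings JS loaded?");
}
$.get(CONFIG.searchUrl, {q: query}, handleResults);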
The .js that gets sent to the browser would vary, which could make debugging more cumbersome. Maybe not a problem, but something to consider...
Nowadays, the best way to do this is to use Django.js.
Here is the doc where they talk about the URL reversing: http://djangojs.readthedocs.org/en/0.8.1/djangojs.html#reverse-urls
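Judging from those docs, usage looks roughly like this (the view name and arguments are examples):

// Reverse a named Django URL from JavaScript, like {% url %} in a template.
var url = Django.url('post_detail', {id: 42});
$.get(url, handleResponse);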
Related
I was thinking about creating a script that would do the following:
Get all JavaScript files from the JS directory used on the server
Combine all the scripts into one - that means a single request instead of several
Minify the combined script
Cache the file
Let's say that the order in which the files need to be loaded is written in a config file somewhere.
Now when I load myexamplepage.com I actually use jQuery, Backbone, MooTools, Prototype and a few other libraries, but instead of asking the server for these multiple files, I call myexamplepage.com/js/getjs and get one combined and minified JS file, eliminating those additional requests to the server. Everything I've read about speeding up websites says that the more requests you make to the server, the slower your site becomes.
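In markup terms, the idea is to collapse something like this (paths made up):

<script src="/js/jquery.js"></script>
<script src="/js/backbone.js"></script>
<script src="/js/app.js"></script>

into a single request:

<script src="/js/getjs"></script>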
Since I'm pretty new to the programming world, I know that many things I think of already exist, and I don't expect this to be an exception.
So please list anything you know of that does exactly this or something similar. (Note that you shouldn't need to run a minifier or other third-party software by hand every time your scripts change; you keep the original file structure and only use a helper class.)
P.S. I think the same method could be used for CSS files as well.
I'm using PHP and Apache.
Rather than having the server do this on the fly, I'd recommend doing it in advance: just concatenate the scripts and run them through a non-destructive minifier, like JSMin or Google Closure Compiler in "simple" mode.
This also gives you the opportunity to put a version number on that file, and to give it a long cache life, so that users don't have to re-download it each time they come to the page. For example: Suppose the content of your page changes frequently enough that you set the cache headers on the page to say it expires every day. Naturally, your JavaScript doesn't change every day. So your page.html can include a file called all-my-js-v4.js which has a long cache life (like, a year). If you update your JavaScript, create a new all-in-one file called all-my-js-v5.js and update page.html to include that instead. The next time the user sees page.html, they'll request the updated file; but until then, they can use their cached copy.
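Concretely, the only thing that changes between releases is the include in page.html:

<!-- release 4 -->
<script src="all-my-js-v4.js"></script>
<!-- after updating your JavaScript, swap it for: -->
<script src="all-my-js-v5.js"></script>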
If you really want to do this on the fly and you're using Apache, you could use mod_pagespeed.
If you're using .NET, I can recommend Combres. It does combination and minification of JavaScript and CSS files.
I know this is an old question, but you may be interested in this project: https://github.com/OpenNTF/JavascriptAggregator
Assuming you use AMD modules for your javascript, this project will create highly cacheable layers on demand. It has other features you may be interested in as well.
Multiple sites recommend combining JavaScript and CSS files to improve web-page performance, including examples of using Ant build scripts to concatenate the files prior to deployment.
I've searched and haven't found any information on how to automate updating the references to those files in HTML and other documents. I want to avoid hacking together something error-prone, and would like to learn from others who have automated builds in the past.
Are there automated tools in the wild that complete this task that I'm not seeing? Are there recommended processes for updating the script and link tags in HTML? Can these solutions be integrated with Ant or similar build tools?
There sure is, and it's a smart thing to do.
I found a PHP solution; I don't know if that's okay for you, but if it isn't you can still read its source (it's not difficult) and learn a lot. The solution works like this:
Rewrite your requests like this: from css/main.css and css/skin.css to css/main.css,skin.css (you can chain many more, of course).
Use Apache's mod_rewrite to redirect this request to a script (in our case combine.php) that will combine all the files into a single one.
The script combines all the files, sends the combined file, and then saves it to a cache folder.
Next time around it checks whether there is an up-to-date version in the cache and serves that one. If the latest file-modification time has changed, it discards the cache.
The solution works great, and it even makes use of HTTP cache headers and emits ETags, which you should do anyway.
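The rewrite part is just a pattern match on the comma list; a rough, untested sketch (the parameter names combine.php expects are a guess):

# Route css/main.css,skin.css to the combiner script
RewriteEngine On
RewriteRule ^css/(.+,.+)$ combine.php?type=css&files=$1 [L]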
You are correct, this is a great way to speed up page loading. It will even work in conjunction with a CDN, which another poster recommended.
Here is a small script that packs multiple files into one for deployment. It supports both JS and CSS, and will even "minify" them by removing whitespace, etc. Just hook it into your build and deploy scripts.
juicer: http://cjohansen.no/en/ruby/juicer_a_css_and_javascript_packaging_tool
What's even better, it will follow JS and CSS import statements, so you only need to point your HTML files at the loader file and it will work in both development and production (assuming you replace the loader file with the combined file on deployment).
There are others, including some run-time solutions. But it sounds like you have a build process in place anyway.
As far as updating the HTML goes, if you still need it: automated deployments are very popular in the Ruby world, so you may find standalone utilities that help even with non-Ruby projects (as above). Methinks this would be best handled by your own project's template language, though (with a static resource revision id, or some such).
Good luck, and let us know what you find.
I think what you really want is a CDN (Content Delivery Network).
Read about it here:
http://developer.yahoo.com/performance/rules.html
http://en.wikipedia.org/wiki/Content_delivery_network
I have a legacy JSF application. Now we want to add i18n support to it. Some of the textual content comes from JavaScript code and is added dynamically. How do we best internationalize this? The approach I have in mind is to fetch the entire resource bundle as a JSON object from a server-side script and then use jsonobj.propname wherever needed.
Is this the right approach? Do you have a better solution?
That certainly sounds like an approach that will work.
A commercial application I worked on generated a separate JavaScript library for each language as part of the build process. This was done for performance reasons - the results were minified, a single file reduced latency and the results were cached "forever." But not every application has such tight constraints.
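For the JSON-bundle approach, a minimal sketch (the endpoint, key names and jQuery usage are made up):

// Load the bundle for the user's locale once, then read keys from it.
var i18n = {};
$.getJSON('/i18n/bundle.json?locale=' + locale, function (bundle) {
    i18n = bundle;
    // Fall back to a default if a key is missing from the bundle.
    document.getElementById('greeting').textContent = i18n.greeting || 'Hello';
});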
I've been using yuicompressor.jar on my test server for on-the-fly minimisation of changed JavaScript files. Now that I have deployed the website to the public server, I noticed that the server's policies forbid the use of exec() or its equivalents, so no more java execution for me.
Is there a decent on-the-fly JS compressor implemented in PHP? The only thing resembling this that I was able to find was Minify, but it's more of a full-blown compression solution with cache and everything. I want to keep the files separate and have the minimised files follow my own naming conventions, so Minify is a bit too complex for this purpose.
The tool, like yuicompressor, should be able to take either a filename or JavaScript as input and should either write to a file or output the compressed JavaScript.
EDIT: To clarify, I'm looking for something that does not have to be used as a standalone (i.e. it can be called from a function, rather than sniffing my GET variables). If I just wanted a compressor, Minify would obviously be a good choice.
EDIT2: A lot has changed in the five years since I asked this question. Today I would strongly recommend separating the front-end workflow from the server code. There are plenty of good tools for JS development around and except for the most trivial jQuery enhancements it's a better idea to have a full workflow with automated bundling, testing and linting in place and just deploy the minified bundles rather than the raw files.
Yes there is - it's called Minify.
The only thing to worry about in the way of complexity is setting up a group, and there's really nothing to it. Edit the groupsConfig.php file if you want multiple JS/CSS files in one <script> or <link> statement:
return array(
    'js-common' => array(
        '//js/jquery/jquery-1.3.2.min.js',
        '//js/common.js',
        '//js/visuals.js',
        '//js/jquery/facebox.js',
    ),
    'css-common' => array(
        '//css/main.css',
        '//css/layout.css',
        '//css/facebox.css',
    ),
);
To include the above 'js-common' group, do this:
<script type="text/javascript" src="/min/g=js-common"></script>
(I know, I was looking for the exact same thing, not knowing how to deal directly with the jar file using PHP - that's how I ended up here, so I'm sharing what I found.)
Minify is a huge library with tons of functionality. However, the minifying part is a very tiny class: http://code.google.com/p/minify/source/browse/trunk/min/lib/Minify/YUICompressor.php
It's very, very easy to use:
// set the path to the jar file
Minify_YUIcompressor::$jarFile = _ROOT . 'libs/java/yuicompressor.jar';
// set the path to a writable temp folder
Minify_YUIcompressor::$tempDir = _ROOT . 'temp/';
// minify
$yourcssminified = Minify_YUIcompressor::minifyCss($yourcssstringnotminified, $youroptions);
The same process works for JS. If you need more functionality, just pick from the library and read the source to see how to make direct calls from your app.
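For JavaScript that would presumably be the matching call (assuming the class exposes a minifyJs counterpart to minifyCss):

$yourjsminified = Minify_YUIcompressor::minifyJs($yourjsstringnotminified, $youroptions);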
(I didn't read the question well: since this approach relies on the jar file, the OP can't use it anyway with his server config.)
Minify also includes minification methods other than YUI, for example:
http://code.google.com/p/minify/source/browse/trunk/min/lib/JSMinPlus.php?r=443&spec=svn468
Try Lissa:
Lissa is a generic CSS and JavaScript loading utility. Lissa is an extension of the YUI PHP Loader aimed at solving one of the current loader limitations: combo loading. YUI PHP Loader ships with a combo loader that is capable of reducing HTTP requests and increasing performance by outputting all the YUI JavaScript and/or CSS requirements as a single request per resource type. Meaning even if you needed 8 YUI components, which ultimately boil down to, say, 13 files, you would still only make 2 HTTP requests: one for the CSS and another for the JavaScript. That's great, but what about custom non-YUI resources? YUI PHP Loader will load them, but it loads them as separate includes, so they miss out on the benefits of the combo service and the number of HTTP requests for the page increases. Lissa works around this limitation by using the YUI PHP Loader to handle the loading and sorting of YUI and/or custom resource dependencies, and pairs that functionality with Minify.
Do you localize your javascript to the page, or have a master "application.js" or similar?
If it's the latter, what is the best practice to make sure your .js isn't executing on the wrong pages?
EDIT: by JavaScript I mean custom JavaScript you write as a developer, not JS libraries. I can't imagine anyone would copy/paste the jQuery source into their page, but you never know.
Putting all your JS in one file can help performance (one request versus several). And if you're using a content delivery network like Akamai, it improves your cache-hit ratio. Also, always put inline JS at the very bottom of the page (just above the closing body tag), because it executes synchronously and can delay your page from rendering.
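Structurally, that means something like this (a bare-bones sketch):

<body>
    <!-- page content ... -->
    <script>
        // Inline startup code runs here, after the DOM above has been parsed.
        init();
    </script>
</body>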
And yes, if one of the JS files you are using is also hosted by Google, make sure to use that one.
Here are my "guidelines". Note that none of these are formal; they just seem like the right thing to do.
All shared JS code lives in the SITE/javascripts directory, but it's loaded in 'tiers'
For site-wide stuff (like jQuery, or my site-wide application.js), the site-wide layout (this would be a master page in ASP.NET) includes the file. The script tags go at the top of the page.
There's also "region-wide" stuff (e.g. JS code which is only needed in the admin section of the site). These regions either have a common layout (which can then include the script tags) or render a common partial, and that partial can include the script tags.
For less-shared stuff (say, a library that's only needed in a few places) I put a script tag in those HTML pages individually. The script tags go at the top of the page.
For stuff that's only relevant to a single page, I just write inline JavaScript, and I try to keep it as close to its "target" as possible. For example, if I have some onclick JS for a button, the script tag goes below the button.
For inline JS that doesn't have a target (e.g. onload events), it goes at the bottom of the page.
So, how does something get into a localised library, or a site-wide library?
The first time you need it, write it inline
The next time you need it, pull the inline code up into a localised library
If you're referencing code in a localised library from (approximately) 3 or more places, pull it up into a region-wide library
If it's needed in more than one region, pull it up into the site-wide library.
A common complaint about a system like this is that you wind up with 10 or 20 small JS files where 2 or 3 large ones would perform better from a networking point of view.
However, both Rails and ASP.NET have features which handle combining and caching multiple JS files into one or more "super" JS files for production.
I'd recommend using features like this rather than compromising the quality/readability of the actual source code.
Yahoo!'s Exceptional Performance Team has some great performance suggestions for JavaScript. Steve Souders used to be on that team (he's now at Google) and he's written some interesting tools that can help you decide where to put JavaScript.
I try to avoid putting JavaScript functions on the rendered page. In general, I have an application.js (or root.js) with generic functionality like menu manipulation. If a given page needs specific JavaScript, I create a .js file for that code whose path mirrors the path to the rendered file (and shares its name).
In other words, if the rendered page is public/dir1/dir2/mypage.html, the JS file would be public/js/dir1/dir2/mypage.js. I've found this style works well for me, especially when templating a site. I build the template engine to "autoload" my resources (CSS and JS) by taking the request path and checking for CSS and JS equivalents in the css and js directories at the root.
Personally, I try to include several Javascript files, sorted by module (like YUI does). But once in a while, when I'm writing essentially a one-liner, I'll put it on the page.
Best practice is probably to put it on Google's servers.
(Depends what you mean by "your" javascript though I suppose :)
This is something I've been wrestling with too. I've ended up using my back-end PHP script to intelligently build a list of required JS files based on the content the user requested.
By organizing my JS files into a repository of multiple files grouped by purpose (general use, a single page, a single section, etc.), I can use the chain of events that builds the page on the back-end to selectively choose which JS files get included (see the example below).
This came after implementing my web app without giving this aspect of the code enough thought. I should also add that the JavaScript I use enhances, but does not form the foundation of, my site. If you're using something like SproutCore or Ext, I imagine the solution would be somewhat different.
Here's an example for a PHP-driven website:
Say your site is divided into sections, and one of those sections is a calendar. The user navigates to "index.php?module=calendar&action=view". If the PHP code is class-based, the routing algorithm instantiates the CalendarModule class, which derives from 'Module' and has a virtual method 'getJavascript'. That method returns the JavaScript files required to perform the 'view' action on the 'calendar' module. It can also take any other special requirements into account and return JS files for those as well. The rendering code can verify that there are no duplicate JS files when the include list is built for the final page. So the getJavascript method returns an array like this:
return array('prototype.js','mycalendar.js');
Note that this, or some form of it, is not a new idea, but it took me a while to consider it important enough to go to the trouble.
If it's only a few hundred bytes or less and doesn't need to be used anywhere else, I would probably inline it. The network overhead of another HTTP request will likely outweigh any performance gain from pulling it out of the page.
If it needs to be used in a few places, I would put the function(s) into a common external file, and call it from an inline script as needed.
If you are targeting the iPhone, try to keep anything that you want cached under 25 KB.
There are no hard and fast rules, really; every approach has pros and cons. I'd strongly recommend checking out the articles in Yahoo's developer section so you can make informed decisions case by case.