I usually have jQuery code that is page-specific, along with a handful of functions that many pages share. One approach is to keep separate files for organization, but I'm thinking that putting all the script in one file and adding comments for readability would also work. Then when the site goes live I can minify and obfuscate if needed.
I think the question comes down to limiting HTTP requests versus limiting file size. Is one of these a bad habit?
You can have it both ways. Develop with as many individual .js files as you need. Then use a build/deployment process that assembles the files into one larger one, then pushes them through something like Google's Closure Compiler. Compression can be handled transparently by your web server if configured properly.
Of course, this implies a structured development and deployment workflow -- e.g., with files to be assembled/compiled in a specific directory, separated from files that should be served as-is.
References:
Closure Compiler
Apache Ant
Automating the Closure Compiler with Ant
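To make the assemble-then-compile step concrete, here is a minimal build sketch, written as a Node.js script purely for illustration; the source file names, the build/ directory, and the tools/compiler.jar path are assumptions, not details from the answer above.

// Assumes the build/ directory already exists.
var fs = require('fs');
var execFile = require('child_process').execFile;

// Placeholder source list; in practice, read this from a manifest.
var sources = ['src/shared.js', 'src/page-home.js', 'src/page-about.js'];

// 1. Assemble the individual .js files into one larger file.
var combined = sources.map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join('\n;\n'); // extra semicolons guard against missing terminators
fs.writeFileSync('build/app.js', combined);

// 2. Push the result through the Closure Compiler
//    (add --compilation_level ADVANCED_OPTIMIZATIONS if your code tolerates it).
execFile('java', [
  '-jar', 'tools/compiler.jar',
  '--js', 'build/app.js',
  '--js_output_file', 'build/app.min.js'
], function (err) {
  if (err) throw err;
  console.log('wrote build/app.min.js');
});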
If you can put all the scripts into one minified file, then that's what you should do first.
Also, if your web server sends out gzipped content, the actual script transfer will be small, and the script will be cached on the client. Since TCP transfers start out slow and ramp up in speed, limiting the number of requests is the best way to speed up the overall loading of a page.
This is the same reason you see sites concatenating images into one larger image (a sprite) and using CSS to display the correct part of it.
I am wondering whether I have to combine all my JavaScript files into one single file, or whether keeping them as separate files in a folder is fine. I have, for example, 10 separate .js files in one javascript folder. Is that bad for the website, or is it OK?
P.S. I am new to web development.
If you're on a halfway decent server - one that can respond to clients in your service area quickly - it's unlikely to matter.
The biggest issue with including many separate files is that, if your server uses HTTP/1.1, clients will usually only be able to request a small handful of resources at a time (typically six or fewer per host). If you have a whole bunch of resources that the client needs to download for the app to function, and the latency between the client and your server isn't great, this could increase the time it takes for your site to initially appear functional. But for only 10 script files, it's probably not an issue. (Look at many of the large sites of today, and you'll see that quite a few of them load a huge number of resources.)
If your server uses HTTP/2, clients can download all requested files at once over a single multiplexed connection, with no practical limit on their number (other than bandwidth), so in that situation using many different files is less likely to create issues.
Still, for larger applications, it can be a good idea to bundle your scripts into a single file for production. There are many bundlers out there, and they bring a bunch of benefits; they're often considered essential for a modern application.
Ideally it is all in one file. On your development machine you will probably have several JS files, especially as you begin including various JS packages or frameworks in your web application (website). There, you can use a tool like Grunt to automate the creation of that single JS file from the several different source files, or you can automate the build process and other pre-deployment tasks in a CI/CD pipeline.
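For illustration, a minimal sketch of what that Grunt setup might look like; the source/destination paths and the choice of the concat and uglify plugins are assumptions for the example, not something from the answer.

// Gruntfile.js: concatenate all source files, then minify the result.
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: { src: ['src/**/*.js'], dest: 'build/site.js' }
    },
    uglify: {
      dist: { files: { 'build/site.min.js': ['build/site.js'] } }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  // Running plain `grunt` now produces build/site.min.js.
  grunt.registerTask('default', ['concat', 'uglify']);
};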
I have a web application that is currently split into 40+ JavaScript files. When I run the application, some subset of those files needs to be downloaded by the browser. Obviously, given that browsers use only ~6 parallel connections to download files, this is not the optimal solution. One optimization idea that came to my mind was to embed all those JavaScript files (except external ones) inside the served .aspx page, so that the browser gets one big HTML file and does not need to make any round trips to the server. The HTML page alone may contain user-specific data that will be different on every request. The scripts, however, do not change between requests. In the typical use case the page (complete with scripts) is 180 KB (scripts not minified) or 130 KB (minified).
Now the question: does this approach have any drawbacks performance-wise (network, browsers' JavaScript engines)? Do you know of any big applications doing something like this? Note that I am not interested in arguments about, e.g., maintainability, as the individual scripts will still be available as separate files during development. The same question applies to CSS files (even though this is less of an issue in my app).
One bit of information that may be important here: the application is one big multipage form that does not require postbacks to go between the pages, validate the form, submit it, etc. However, the application in which it is embedded may have multiple such forms.
In general, it is a very good idea to concatenate JavaScript and CSS files together. I'm just not so sure about your concept. My biggest concern and question here would be: can that .aspx file potentially change in any way through (dynamic) code?
That would make it impossible for the browser to cache the file, which would be a horrible scenario.
The great thing about concatenating files is that we get one big downstream (which is still a lot faster than downloading with several single requests and their HTTP overhead), and the browser can cache this file afterwards.
There are some great build tools and scripts available; Apache Ant is one I can really recommend. You should have a look at the HTML5 Boilerplate, which makes heavy use of Ant.
I have a couple of questions that are somewhat related, so I'm posting them all in a single question on SO...
Question 1:
I'm currently doing this Facebook application where I'm using jQuery UI Tabs. There are only 4 tabs, 2 of which are loaded through Ajax. The main page is index.html; this is where the tabs code is placed. For the 2 tabs loaded through Ajax, I have two different files, tab1.html and tab2.html.
Currently, the jQuery tabs initialization and the Facebook JavaScript initialization are done in index.html. Both tab1.html and tab2.html have JavaScript code that belongs to those pages. For instance, tab2.html has a form and some JS (with jQuery) code to validate it; this code is irrelevant to tab1.html, just as the JS code in tab1.html is irrelevant to tab2.html.
My question is, should I keep doing this, or should I aggregate all the JS/jQuery code from index.html, tab1.html, and tab2.html into a single global.js file and then include it in index.html?
I thought of doing this, but then irrelevant code is loaded if the user never opens tab1 or tab2. The benefit of a single global.js file is that I could pack/minify it, which I couldn't do if I included each code block in each respective tabX.html file.
Question 2:
As I'm using jQuery, I'm also using lots of plugins (actually only 3 for now, but that number can grow). Some of them provide a minified version, and I use those when available; when they are not, I use the normal versions, of course.
There's also the requests problem. If I have lots of plugins, say 10, that's 10 requests just for the plugins. And some plugins are used in tab1.html but not in tab2.html, and vice versa.
How should I load all the plugins, minified/packed, in a single web request? Should I do that manually before publishing my app (packing and merging them into a single file), or could I use the PHP version of Dean Edwards's Packer and pack/merge all plugins on the fly? Would this be a good approach?
Question 3:
If the answer on Q1 was something like "merge all code in a single global.js file", should I include the global.js file in the packing/merging script I described above on Q2?
Doing this would simplify everything. I could have my development environment properly organized, with all the .js files (the plugins and global.js) in the appropriate folders, without bothering with anything else. The packing/merging should take care of the rest (pull the files from the respective folders, send the proper JS headers, and output one single packed .js file).
The one thing that's confusing me the most is that not all plugins are used on every tab, and not all code is for every tab either. Still, a chunk of the code is global to every tab and to the index. This also simplifies everything, as: a) I don't have to worry about adding the needed code to each tabX.html file and can simply treat them as HTML templates and nothing else; b) I don't have to bother with including the necessary plugins where I need them. I'm currently using $.getScript() from jQuery to load the plugins when and only when I need them, but I'm not sure this is a good approach, and the code feels dirty and ugly like this.
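For context, the on-demand pattern described above looks roughly like this; the tab index, plugin file, form selector, and validate() call are hypothetical stand-ins, not the app's real code.

// Load a tab-specific plugin only when its tab is selected
// (a real version would remember that the plugin is already loaded).
$('#tabs').bind('tabsselect', function (event, ui) {
  if (ui.index === 1) { // hypothetical: the second tab holds the form
    $.getScript('/js/plugins/jquery.validate.min.js', function () {
      // The plugin script has executed by the time this callback runs.
      $('#contact-form').validate();
    });
  }
});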
Question 1:
Pack them all into a single .js file. This will make maintenance easier, and the tiny bit of overhead from the user loading a little JS they potentially may not use does not matter. I would also let Google load the jQuery library for you, and then have all of your JS code in a single separate file.
Question 2:
As these plugins don't really change, I would combine them manually. Closure Compiler is good at this. When minifying, use the highest setting that does not produce any warnings.
Question 3:
Yes, you will want to minify global.js as well.
When the browser downloads global.js, it's cached for an amount of time. Thus, when a different page references the same global.js, it's not re-downloaded; the browser checks its local copy first. So you do a bit more work on the initial download, but from then on it should be quicker.
Generally, the best practices related to JavaScript for speeding up website loads are:
Minify all javascript and put all of it into a single file (make as much of your javascript external as possible).
Put javascript at the bottom of the document.
Force the web server to assign an expiration date far in the future, and use a timestamped query string to invalidate old versions of JavaScript files; this prevents unnecessary requests for your JavaScript if it has not changed. (E.g., in httpd.conf: ExpiresByType application/x-javascript "access plus 1 year"; in your document: <script type="text/javascript" src="/allmy.js?v=1285877202"></script>. See the sketch after this list.)
Configure your web server to gzip all text files.
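A hedged sketch of the expiration and versioning points above, using Node/Express instead of Apache purely for illustration; the framework choice, paths, and the versionedScript helper are assumptions.

var express = require('express');
var fs = require('fs');
var app = express();

// Far-future expiration for static assets (one year, in milliseconds).
app.use(express.static('public', { maxAge: 365 * 24 * 60 * 60 * 1000 }));

// Build a script tag whose query string changes whenever the file does,
// so a redeploy automatically invalidates the cached copy.
function versionedScript(file) {
  var mtime = Math.floor(fs.statSync('public' + file).mtime.getTime() / 1000);
  return '<script type="text/javascript" src="' + file +
         '?v=' + mtime + '"></script>';
}

// versionedScript('/allmy.js') yields something like:
//   <script type="text/javascript" src="/allmy.js?v=1285877202"></script>
app.listen(3000);

Gzipping (the last point) can be handled in the same stack with standard compression middleware, or left to the web server.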
The main reason to keep too much JavaScript away from the tab pages is user experience. When a user clicks a tab for the first time, it grabs all the components it needs on the fly, which makes it feel sluggish.
Your question is only semi-specific, as we don't know a lot of things about your site, like exact file sizes or how the modules are really used.
The general idea would be to find a balance between modularity and speed.
When you're combining modules together these are the general ideas you should consider:
how often does this module change?
how often is this module used?
how big is this module (filesize)?
Then take the most used, stable code and merge it into one file, and include the remaining site-specific functionality on the tab pages.
Also, make sure to load JavaScript asynchronously, so it won't block rendering of the page (and tabs).
Another combined answer:
If adding all the JS together in a packed/minified version generates no more than 30 KB of file size, you're better off combining it. A single extra connection for a file (assuming it's not cached) costs about as much as 10-20 KB of extra JS download. This has to do with browsers opening and closing connections versus streaming an extra 20 KB over an established connection. The threshold also depends on your user distribution: if you have a lot of dial-up or low-bandwidth users, your threshold will be smaller.
I typically recommend combining and loading as one file, unless the library is very obscure and only a real edge case triggers it on a page. Example: a hover triggers functionality Y, but it sits on a feedback widget that gets less than 1% of traffic; don't bother combining that.
Minifying and packing are a little overrated these days. With the vast majority of browsers supporting gzip, compressing the file over the wire has virtually the same effect as minifying/packing it, though there is a small cost on the browser to unpack it. Having said that, it's still good practice to minify/pack the code, since not all browsers support gzip, you may not want the file to be gzip-enabled, etc.
I've used online packers against 3rd-party modules and it works fairly well. However, there are times when it can cause an issue, so make sure to test your manually packed version before deploying.
Alternate:
If you feel that your users will rest on your index page for longer than 10 seconds, you could pre-load the additional libraries separately using the Js Loader Prototype pattern.
Steve Souders' Even Faster Websites is a book you should look into.
Firstly, one experiences slowdowns because whenever an external script is linked, the browser waits for the script to download, parse, and execute, and only then resumes processing the rest of the page. To avoid such slowdowns, look at downloading the scripts in parallel. A few techniques: Ajax the scripts in if they live on the same domain, or use a script DOM element or a script in an iframe if the scripts are on external domains.
Q1: For me, modularising the content is the better option with respect to further development if the page content has to change constantly. Responsiveness is very important for the end user. A small global.js will help get the app up and running, while the tabX.html files are downloaded in parallel.
Q2: The jQuery plugins rarely change, so the plugins for the tabX.html pages can be downloaded in parallel and cached locally; when a tabX.html is loaded, its required plugins need not be fetched again. So all the plugins required by the main page should be in one single file, and the ones used by the tabX.html pages should be in separate files.
Q3: It's a personal choice here. Do you want it to be developer-friendly or user-friendly? I bank on user-friendliness; making responsive and efficient apps is our job! The one advantage of packing everything into a single file is ease of development, and well, ugly code begets beautiful apps :). Users are speed-aholics. For example, when Google changed from 10 results per page to 20, they saw a considerable drop in search queries. So my opinion is not to pack everything into one file, but to load the pieces in parallel.
Some of the techniques, with links for testing each (a sketch of the script DOM element approach follows the list):
XHR eval / Ajax : http://stevesouders.com/cuzillion/?ex=10009
XHR Injection : http://stevesouders.com/cuzillion/?ex=10015
Script in Iframe : http://stevesouders.com/cuzillion/?ex=10012
Script DOM element : http://stevesouders.com/cuzillion/?ex=10010
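As a concrete illustration of the script DOM element technique linked above, a minimal sketch (the file URL is a placeholder):

// Create a script element and append it to the head; the browser
// downloads the file in parallel without blocking page rendering.
var script = document.createElement('script');
script.type = 'text/javascript';
script.src = '/js/plugins.js';
document.getElementsByTagName('head')[0].appendChild(script);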
Question 1:
The best practice would be to place all JS files in a single "global" file. This minimizes your HTTP requests. Let's say you have 5 plug-ins: that would mean 5 requests, whereas if you combine them into one, you only need to request it once. This might be a little heavy on the first load, but the next time around the file will be cached by the browser, so no worries about the size. HOWEVER, be careful about the order of the scripts when combining them (e.g., the jQuery script should be placed before jQuery UI's in the combined file).
http://articles.sitepoint.com/article/web-site-optimization-steps/4
http://code.google.com/speed/page-speed/docs/rtt.html
Question 2:
You can do it manually or automatically. Dean Edwards's Packer is a good choice. If you're using ASP.NET, you can check out MB Compression Handler; if you're using Apache with PHP, perhaps you can change your .htaccess configuration to gzip the output.
Question 3:
It'd be better if you pack the "global" JavaScript file as well. This saves bandwidth and loading time. You get the point: combining all the JS files the site needs saves you from including individual scripts everywhere.
Our file structure is pretty good, organizing functionality into separate folders. My question is how others work on applications that involve upwards of 500 JavaScript files.
We have written a Maven plugin to concatenate these files together (it also runs the YUI Compressor). However, this means 3-10 seconds of compiling for every change.
Is this step necessary for the organization of a large application? I feel like a well-structured HTML file pulling in all these resources directly would save me 45 minutes every day.
For my own framework projects (typically monitoring, testing, or in-page services to orchestrate other toolkits, though not at your file count), my approach has been to target the individual, dynamically loaded files during development. For testing, I'll run one build to compress and version the individual files, then test those individual files again, because depending on the concatenation order, compression technique, and browser, I may wind up with a script error, and it's a pain to dig it out of one monster file. Third, I'll concatenate everything together and test once more.
In the HTML, I'll reference either the uncompressed file, which loads its specified dependencies, or the compound file. A separate bootstrap file names the dependencies, which are either included in the compound file or loaded dynamically as needed.
This way I can add or change a file, and start developing and testing without rebuilding.
The solution is likely to concatenate and compress for user testing and production only.
For development, it's probably best to simply import them all into the HTML file. It speeds up the dev process, and also simplifies debugging. It also allows the browser to cache some of those files.
When you can't rely on cached copies (which, with 500 files, I don't think will be very often), it will slow down load times.
You can likely save a lot of time by only running the compressor for production. The YUI Compressor is notoriously slow because it uses the (Java-based) Rhino interpreter to actually parse and analyze the JavaScript.
I'm currently developing an application that will be run on local network in B2B environment. So I can almost forget about micro(mini?) optimizations in terms of saving bandwidth because Hardware Is Cheap, Programmers Are Expensive.
We have well-structured, object-oriented JS code in the project and, obviously, lots of JS classes. If all the classes are stored in separate files, it is quite easy to navigate through the code and hence maintain it.
But this makes my browser generate a couple dozen HTTP requests to get all the JS files/classes needed on the page. Even in a local environment it is not super fast on first load (with an empty cache), nor later, whenever a file is modified and the cache has to be invalidated.
Possible solutions:
violate the "one class per file" rule
use the YUI Compressor all the time (in development & production) to generate one big JS file
But if we choose the YUI Compressor route (no minification in the dev environment, minification for production), then we need to regenerate this big JS file on every modification to any JS file.
What would you recommend for solving this problem?
Keep all the .js files separate. Keep your "one class per file" rule.
Then, use a server-side technology to aggregate the script into one request.
Options:
Use an ASPX or PHP or whatever server-side scripting thing you have to aggregate all the JS into one request (see the sketch after this list). The request for a .js file is no longer for a static file, but with caching on the server it should be relatively cheap to serve.
Use Server Side Includes in a consolidated .js file.
<!--#include virtual="/class1.js"-->
<!--#include virtual="/class2.js"-->
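To illustrate the first option, here is a hedged sketch of aggregating on the server in Node/Express; the original suggestion is ASPX/PHP/SSI, so treat the framework, the route, and the file names (borrowed from the SSI example) as assumptions.

var express = require('express');
var fs = require('fs');
var app = express();

var classFiles = ['class1.js', 'class2.js']; // one class per file, as before
var cache = null; // assembled once, then served from memory

app.get('/all-classes.js', function (req, res) {
  if (!cache) {
    cache = classFiles.map(function (f) {
      return fs.readFileSync('js/' + f, 'utf8');
    }).join('\n;\n');
  }
  res.set('Content-Type', 'application/javascript');
  res.send(cache);
});

app.listen(3000);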
Your approach of having separate files for each class is good - practices that make development easier are always good.
Here are some tips for making the loading faster:
Compress your code. As you say, you could use YUICompressor, or the newly released Google Closure Compiler.
When concatenating multiple files into one, think of what you need and when: If you only need files A, B and C when the app starts, but not Z and X, put only A, B and C into a single file. Load another file with Z and X concurrently after A/B/C.
You can use the Firefox plugins YSlow and Page Speed to test for load-performance bottlenecks.
As you mention, you would need to rerun the compressor each time you make a change. I don't think this is a big problem; on a decent machine it should run pretty fast even with a lot of files. Alternatively, you could use a daily build process that builds the latest revision from your source control (you do use SCM, right?), runs unit tests, and deploys if everything goes OK.
I would recommend using Ant or some other automation tool to create a build script. This will make it as simple as running one command to build your compressed script, reducing the repetitive work you would otherwise need to do. You could even have Ant deploy your code to the server.
You may have the best of both worlds: a development environment with one class per JS file, without the need to compile/deploy for every iteration, AND one (or several) concatenated, larger JS files (minified if desired) in production.
Depending on your build environment, this may be set up in a number of different ways, but using Ant may be the easiest. With Ant you can run tasks for both concatenation and minification (running the YUI Compressor through the Java task), producing the concatenated and minified large JS file.
However, to maintain productivity you want to avoid doing this for every code iteration. And changing the script tags from one to several (one per class file) is out of the question.
So, you load your big js file as expected:
<script src="application.js"></script>
When deploying to production this file is the concatenated/minified version of all your js files.
However, during development this file is a bootstrap/loader file that simply loads all your individual JS files (illustrative example using jQuery):
$.getScript('/class1.js');
$.getScript('/class2.js');
$.getScript('/class3.js');
$.getScript('/class4.js');
$.getScript('/classn.js');
....
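// Note: $.getScript() fetches asynchronously, so execution order across
// these files is not guaranteed; keep top-level code free of inter-class
// dependencies, or chain the callbacks if ordering matters.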
If you are using YUI 3 look into the module behavior and how to specify dependencies.
Using different Ant targets, the generation and copying of these files to the correct locations can easily be managed.
And now you may simply reload your browser whenever you need to test a change in a file, yet still get the performance benefit in production, all without sacrificing productivity or maintainability.