How to make page loading feel faster? - javascript

I want to decrease the time my pages take to load and be displayed, assuming I start with an empty browser cache; the pages may or may not have inline CSS and JavaScript in the HTML file. Does changing the order in which files are sent to the browser decrease the display time, and thus make pages seem to load faster?
For example, if a page has some .css, .js, and .png files, would loading the CSS first display things faster?
And is there a standard or recommended order in which to load the different file types?

Here are a few steps that can improve the perceived performance of your web pages.
Put CSS at the top (in the head).
Put JavaScript at the bottom (just before the closing body tag).
Cache everything you can.
Set far-future Expires headers.
Return 304 Not Modified when appropriate.
Use unique (versioned) URLs for CSS and JS so that changes propagate past the cache; a minimal sketch of this is shown below.
Apart from that, use Ajax wherever it makes sense.
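As a hedged illustration of the versioned-URL point above (the helper and the version constant are assumptions, not part of the original answer), a cache-busting query string can be appended at render time:
// Minimal sketch: append a build number to asset URLs so a new release
// busts the cached copy while far-future Expires headers stay in place.
// ASSET_VERSION is a hypothetical build-time constant.
var ASSET_VERSION = '42';

function versionedUrl(path) {
    return path + (path.indexOf('?') === -1 ? '?v=' : '&v=') + ASSET_VERSION;
}

// versionedUrl('/js/site.js') -> '/js/site.js?v=42'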

Beware of too many HTTP connections. It takes time to establish an HTTP connection and it can easily eat up loading time if you have many elements linked in your HTML file.
If you have many small icons, glyphs, etc. combine them into a sprite so only one image is loaded. Facebook for instance makes use of the sprite technique - you can see that if you inspect the images it loads.
You can also consolidate your CSS files into one file - same with Javascript files.
Also, if you have JavaScript that affects the content of your page when it loads, make sure to use the event that notifies you when the DOM is ready, instead of waiting for the body load event, which doesn't fire until all resources such as images, CSS files and JavaScript are loaded (see the sketch below).
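A minimal sketch of that distinction, using the standard DOM events (the log messages are just placeholders):
// DOMContentLoaded fires as soon as the HTML has been parsed;
// the window load event waits for images, stylesheets and other resources.
document.addEventListener('DOMContentLoaded', function () {
    // Safe to query and modify the DOM here; images may still be downloading.
    console.log('DOM is ready');
});

window.addEventListener('load', function () {
    // Everything, including images, has finished loading.
    console.log('Page fully loaded');
});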

JS files block page rendering until they are downloaded and executed. When possible, include them just before the closing body tag.

First, make sure your web host doesn't have slow servers; this can happen with very cheap shared hosting. Then check that you remove all unnecessary markup from your HTML output. Next, check whether your content is dynamic or static; if it is dynamic, try to convert it to static content.
In some cases you can simply activate the caching functions of a CMS, which should also help the server send the page content faster. On slow connections it can be better to use gzip to compress the output stream, but compression costs time - the server has to compress and the client has to decompress - so measure that too.
If you use JavaScript and its execution can be delayed, you can also use the DOM ready event (rather than the later window load event, which waits for all images and so on) to run your JavaScript once the HTML document has been parsed.

You can reduce your page load time with a few tricks: use CSS image sprites rather than a separate request for every single image, which minimizes your website's HTTP requests, and remove unnecessary div tags and dead rules from your HTML markup and CSS.
Where CSS alone gives good results, don't use JavaScript for it.
Always keep your HTML markup clean, without any irrelevant code.
Combined files are a way to reduce the number of HTTP requests by combining all scripts into a single script, and similarly combining all CSS into a single stylesheet. Combining files is more challenging when the scripts and stylesheets vary from page to page, but making this part of your release process improves response times.

The solution turned out to be simple: combine all the different files into a single large file and compress that file (with gzip, for example). Unfortunately, if you do this manually you will run into maintenance problems. That single compressed file is no longer editable, so after editing one of the original source files you have to re-combine it with the other files and re-compress it.

Related

Page Specific JavaScript using Content Security Policy (CSP) [duplicate]

I want to use Content Security Policy (CSP) across my entire site. This requires all JavaScript to be in separate files. I have shared JavaScript used by all pages but there is also page specific JavaScript that I only want to run for a specific page. What is the best way to handle page specific JavaScript for best performance?
Two ways I can think of to work around this are page-specific JavaScript bundles, or a single JavaScript bundle with a switch statement that executes the page-specific code.
There are lots of ways to execute page-specific JavaScript.
Option 1 (check via class)
Set a class on the body tag:
<body class="PageClass">
and then check via jQuery:
$(function () {
    if ($('body').hasClass('PageClass')) {
        //your code
    }
});
Option 2 (check via switch case)
var windowLoc = $(location).attr('pathname'); // jQuery way to get window.location.pathname
switch (windowLoc) {
    case "/info.php":
        //code here
        break;
    case "/alert.php":
        //code here
        break;
}
Option 3 (check via function)
Put all the page-specific script in a function:
function homepage() {
    alert('homepage code executed');
}
and then call the function on the specific page:
homepage();
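A hedged variation that combines options 1 and 3 (the class names and the function map are illustrative assumptions, not from the original answer) is to keep one map of page initializers and dispatch on the body class:
// Assumed markup: <body class="homepage">
var pageInit = {
    homepage: function () { alert('homepage code executed'); },
    contact:  function () { /* contact-page code */ }
};

$(function () {
    $.each(document.body.className.split(/\s+/), function (_, cls) {
        if (pageInit[cls]) {
            pageInit[cls]();
        }
    });
});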
Sorry, I know this ended up being a long read, but it'll be worth it to do it as you'll be able to make the choice that's right for your site. For a tl;dr, read the first sentence of each paragraph.
First of all, no matter which route you choose you should put all of the JS common to each page in the same file to take maximum advantage of caching. That's just common sense. Also, in all cases, I assume you're using a competent minifier since that will make a bigger difference than anything else. Packagers also exist if you need one of those -- Google is your friend if you need either of these.
For the page specific JS, you should decide whether it's most important to have your first page load (the user's first contact with your site) be 'fast', or if it's most important to have the following page loads (the user's first contact with any given page) be 'fast'. Modern browser caching is quite good now, so you can rely on the browser loading from cache whenever it can. In general, if it's most important for the first page load to be fast, then create separate JS files (this way, the user isn't stuck downloading 10 MB of data before they even get to your site). If not, then put all the JS in the same file, keeping in mind that if one page has significantly more JS than others, it will adversely affect the load time of every page on your site. Note that this extra load time can be mitigated with the use of async or defer tags, more on that later.
Consider the case where page A has 5 KB of JS and page B has 5 MB of JS. If you put both scripts in the same file, page A will load more slowly (since it needs to load ~5 MB of JS) but page B will load much faster due to the JS file being cached already. If you keep them separate, page A will load much faster than page B, but there will be an average speed decrease compared to the first case. If one page doesn't have significantly more JS than another, use separate files. You'll encounter much better average load time since the "savings" of loading the big file ahead of time will be greatly diminished (you'll also avoid the issue mentioned below).
Another consideration is whether one of the JS files will change often, as this will invalidate the cached version and require the browser to redownload it. If you put all your JS together and only one of the files is volatile (especially if it's a page not often visited, such as a registration page), the end user will face a higher average load time than if you keep them separate. Stack Overflow themselves took an interesting approach to this. It appears they have a function to invalidate the cache of JS unrelated to the page and load it (if necessary) when the JS on the page loads from the cache to save loading time later.
One more thing! Beyond all this, you should also decide whether or not you should use async or defer in your script tags since you're migrating to fully "external" JS.
async allows the page to load and display to the user before the JS is finished downloading. This is a great way to hide the download of a big JS file if you decide to go the "one file to rule them all" route. However, you might also find the JS needs to be downloaded and executed in order for the page to display properly (as is the case when not using async or defer).
As a result, it might be a good idea to use a hybrid of the two suggestions: split your JS into the per-page files that must be loaded for each page to display correctly (one per page), and put everything that doesn't need to block into a script loaded through an async or defer tag (this being the "one big file"); a sketch follows. defer lets the browser load the file in the background after the page is displayed to the user.
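A minimal sketch of that hybrid, assuming hypothetical file names (the per-page file keeps an ordinary script tag with the defer attribute in the HTML, while the shared bundle is injected so it never blocks rendering):
// In the HTML: <script src="/js/page-home.js" defer></script>  (page-specific part)
// The big shared bundle is added asynchronously from a small inline snippet:
var s = document.createElement('script');
s.src = '/js/site-bundle.js';   // hypothetical "one big file"
s.async = true;
document.head.appendChild(s);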
Ultimately, only you can make the decisions that are right for your app. There's no one magic option that will work in all cases, but that's the reality of software design/engineering. I hope I've made the process clearer for you so you can arrive at the right choice more easily, though.

Combine scripts to reduce HTTP requests -- Invasion Of The Body Switchers

I'm using the Invasion Of The Body Switchers script (http://www.brothercake.com/site/resources/scripts/iotbs/) as a styleswitcher. This requires loading three js files in the header.
I'm trying to reduce HTTP requests by combining scripts when possible. Has anybody used IOTBS before and successfully combined the scripts into one file? Would I need to make any modifications to the scripts or to the HTML switchers on the page to make that work?
The first thing you want to pay attention to is the global namespace. IOTBS uses one global variable: switcher. Since these files work together, it is safe to consolidate them into one file. When you consolidate them, add them to the combined file in the order that they are called on the page.
And kudos for deciding to reduce HTTP requests. You're making the Internet a faster place.
If you can put the three scripts into your HTML header in a row and it all loads properly, then you should be able to just concatenate the files together and load them with one script tag.
Separate script tags in HTML does not imply separate namespaces for the code inside.
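If you do the combining as a build step rather than by hand, a minimal Node sketch might look like this (the file names are placeholders, not the actual IOTBS file names):
// Concatenate the three switcher scripts into one file, in page order.
var fs = require('fs');
var files = ['iotbs-core.js', 'iotbs-config.js', 'iotbs-init.js'];  // hypothetical names
var combined = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join(';\n');
fs.writeFileSync('iotbs-combined.js', combined);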

jquery and script speed?

Quick question: I have some scripts that only need to run on some pages, and some only on a certain page. Would it be best to include the script at the bottom of the actual page with script tags, or do something like this in my JS include:
var pageURL = window.location.href;
if (pageURL == 'http://example.com') {
    // run code
}
Which would be better and faster?
The best approach is to include the script only on pages that need it. Also, in terms of maintenance, your script stays more independent of the pages that use it. Putting those ifs in your script makes it tightly coupled to the structure of your site, and if you decide to rename a page it will no longer work.
I can recommend using an asynchronous resource loader, LAB.js for example. Then you can build a dependencies list, for instance:
var MYAPP = MYAPP || {};
/*
 * Bunches of scripts
 * to load together
 */
MYAPP.bunches = {
    defaults: ["libs/jquery-1.6.2.min.js"],
    cart: ["plugins/jquery.tmpl.min.js",
           "libs/knockout-1.2.1.min.js",
           "scripts/shopping-cart.js"],
    signup: ["libs/knockout-1.2.1.min.js",
             "scripts/validator.js"]
    /*
     ... etc
     */
};
/*
 * Loading default libraries
 */
$LAB.script(MYAPP.bunches.defaults);
if (typeof MYAPP.require !== 'undefined') {
    $LAB.script(MYAPP.bunches[MYAPP.require]);
}
and at the end of your page you could write:
<script type="text/javascript">
    var MYAPP = MYAPP || {};
    MYAPP.require = "cart";
</script>
<script type="text/javascript" src='js/libs/LAB.min.js'></script>
<script type="text/javascript" src='js/dependencies.js'></script>
By the way, a question to everyone, is it a good idea to do so?
In so far as possible, only include the scripts on the pages that require them. That said, if you're delivering content via AJAX that can be hard to do, since the script might already be loaded and reloading could cause problems. Of course you can deliver code in a script block (as opposed to referencing an external JS file) within content delivered via AJAX.
In cases where you need to load scripts (say via a master page) for all pages, but that only apply to certain pages, take advantage of the fact that jQuery understands and deals well with selectors that don't match any elements. You can also use live handlers along with very specific selectors to allow scripts loaded at page load time to work with elements added dynamically later.
Note: if you use scripts loaded via content distribution network, you'll find that they are often cached locally in the browser anyway and don't really hurt your page load time. The same is true with scripts on your own site, if they've already been loaded once.
You have two competing things to optimize for, page load time over the network and page initialization time.
You can minimize your page load time over the network by taking maximum advantage of browser caching so that JS files don't have to be loaded over the network at all. To do this, you want as much of your site's JavaScript as possible in one or two larger, fully minified JS files, which means putting the JS for multiple different pages into a common JS file. It will vary from site to site whether all pages should share one or two larger JS files or whether you group the code into a small number of common JS files, each targeted at part of your site. But the general idea is to combine the JS from different pages into common files that can be cached most effectively.
You can minimize your page initialization time by only calling initialization code that actually needs to execute on the particular page being displayed. There are several ways to approach this. I agree with the other posters that you do not want to be looking at URLs to decide which code to execute, because that ties your code to the URL structure, which is better avoided. If your site has a manageable number of page types, then I'd recommend identifying each of those page types with a unique class name on the body tag. Your initialization code can then look for the appropriate class on the body tag and branch to the appropriate initialization code based on that. I've even seen it done where you find a class name with a particular common prefix, parse out the non-common part of the name, and call an initialization function by that name (a sketch of this follows). This allows you to give a page a specific set of behaviors just by adding a class name to the body tag, and the code remains very separate from the actual page.
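A hedged sketch of that prefix technique (the "init-" prefix and the function names are assumptions, not from the answer):
// Assumed markup: <body class="init-checkout init-newsletter">
// Each "init-" class names an initializer that lives in the common, cached JS file.
var initializers = {
    checkout:   function () { /* checkout page behaviour */ },
    newsletter: function () { /* newsletter signup behaviour */ }
};

$(function () {
    $.each(document.body.className.split(/\s+/), function (_, cls) {
        if (cls.indexOf('init-') === 0) {
            var fn = initializers[cls.slice(5)];   // strip the "init-" prefix
            if (fn) { fn(); }
        }
    });
});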
The less general-purpose way of doing this is to keep all the code in the one or two common JS files, but add the appropriate initialization call to each specific page's HTML. The JS that does the initialization lives in the common JS files and thus is maximally cached, but the call to the appropriate initialization code is embedded inline in each specific page. This minimizes the execution time of the initialization while still letting you use maximal caching. It's slightly less generic than the class name technique mentioned earlier, but some may prefer the more direct calling style.
Include scripts at the bottom of only the pages that need them.
The YSlow add-on is the best way to find out why your website is slow.
There are many issues that could be the cause of slowness.
Combining many jQuery files into one can help increase performance.
Also, put scripts at the bottom of your page and CSS at the top.
It's basically up to you and depends on what the code is.
Generally with small things I will slip it into the bottom of the page. (I'm talking minor ui things that relate only to that page).
If you're doing the location ref testing for more than a couple pages it probably means you're doing something wrong.
You might want to take a look at one of these:
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
http://2tbsp.com/node/91
And as for which is faster: the difference is negligible, so pick whatever is easier for you to maintain.

Improving Javascript Load Times - Concatenation vs Many + Cache

I'm wondering which of the following is going to result in better performance for a page which loads a large amount of javascript (jQuery + jQuery UI + various other javascript files). I have gone through most of the YSlow and Google Page Speed stuff, but am left wondering about a particular detail.
A key thing for me here is that the site I'm working on is not on the public net; it's a business to business platform where almost all users are repeat visitors (and therefore with caches of the data, which is something that YSlow assumes will not be the case for a large number of visitors).
First up, the standard approach recommended by tools such as YSlow is to concatenate it, compress it, and serve it up in a single file loaded at the end of your page. This approach sounds reasonably effective, but I think that a key part of the reasoning here is to improve performance for users without cached data.
The system I currently have is something like this
All javascript files are compressed and loaded at the bottom of the page
All javascript files have far future cache expiration dates, so will remain (for most users) in the cache for a long time
Pages only load the javascript files that they require, rather than loading one monolithic file, most of which will not be required
Now, my understanding is that, if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all. If this is correct, I would assume that having multiple script tags is not causing any performance penalty, as I'm still not making any additional requests on most pages (recalling from above that almost all users have populated caches).
In addition to this, not loading the JS means that the browser doesn't have to interpret or execute all this additional code which it isn't going to need; as a B2B application, most of our users are unfortunately stuck with IE6 and its painfully slow JS engine.
Another benefit is that, when code changes, only the affected files need to be fetched again, rather than the whole set (granted, it would only need to be fetched once, so this is not so much of a benefit).
I'm also looking at using LabJS to allow for parallel loading of the JS when it's not cached.
Specific questions
If there are many script tags, but all files are being loaded from the local cache, and less JavaScript is being loaded overall, is this going to be faster than one script tag that is also loaded from the cache, but contains all the JavaScript needed anywhere on the site, rather than an appropriate subset?
Are there any other reasons to prefer one over the other?
Does similar thinking apply to CSS? (I'm currently using a much more monolithic approach to CSS)
2021 Edit:
As this answer has had some recent upvotes, do notice that with HTTP/2 things changed a lot: you no longer pay the per-request hit, since requests are multiplexed over a single TCP connection, and you also get server push. While most of this answer is still valid, take it as a description of how things were previously done.
I would say that the most important thing to focus on is the perception of speed.
First thing to take into consideration, there is no win-win formula out there but a threshold where a javascript file grows into such a size that it could (and should) be split.
GWT uses this and calls it DFN (Dead-for-now) code. There isn't much magic here: you just have to manually define when you'll need a new piece of code and, should the user need it, just load that file.
How, when, where will you need it?
Benchmark. Chrome has a great benchmarking tool. Use it extensively. See if having just a small JavaScript file greatly improves the loading of that particular page. If it does, by all means start DFNing your code.
Apart from that it's all about the perception.
Don't let the content jump!
If your page has images, set their widths and heights up front. The page will load with the elements positioned right where they are supposed to be, with no content fitting and adjusting afterwards, and the user's perception of speed will increase.
Defer javascript!
All major libraries can wait for the DOM to be ready before executing JavaScript. Use it. jQuery's version goes like this: $(document).ready(function(){ ... }). It doesn't delay parsing of the code, but makes the parsed code fire exactly when it should: after the DOM is parsed, before images finish loading.
Important things to take into consideration:
Make sure JS files are cached by the client (everything else matters little compared to this one)
Compile your code with the Closure Compiler
Deflate your code; it's faster than gzipping it (on both ends)
Apache example of caching:
# Set up caching on media files for 1 month
<FilesMatch "\.(gif|jpg|jpeg|png|swf|js|css)$">
    ExpiresDefault A2629744
    Header append Cache-Control "public, proxy-revalidate"
    Header append Vary "Accept-Encoding: *"
</FilesMatch>
Apache example of deflating:
# Compress text files for faster transfer
LoadModule deflate_module modules/mod_deflate.so
AddOutputFilterByType DEFLATE text/html text/plain text/xml font/opentype font/truetype font/woff
<FilesMatch "\.(js|css|html|htm|php|xml)$">
    SetOutputFilter DEFLATE
</FilesMatch>
And last, and probably least, serve your Javascript from a cookie-less domain.
And to keep your question in focus, remember that when you have DFN code, you'll have several smaller javascript files that, precisely for being split, won't have the level of compression Closure can give you with a single one. The sum of the parts isn't equal to the whole in this scenario.
Hope it helps!
I really think you need to do some measurement to figure out if one solution is better than the other. You can use JavaScript and log data to get a clear idea of what your users are seeing.
First, analyze your logs to see if your cache rate is really as good as you would expect for your userbase. For example, if each html page includes jquery.js, look over the logs for a day--how many requests were there for html pages? How many for jquery.js? If the cache rate is good, you should see far fewer requests for jquery.js than for html pages. You probably want to do this for a day right after an update, and also a day a few weeks after an update, to see how that affects the cache rate.
Next, add some simple measurements to your page in JavaScript. You said the script tags are at the bottom, so I assume it looks something like this?
<html>
<!-- all your HTML content... -->
<script src="jquery.js"></script>
<script src="jquery-ui.js"></script>
<script src="mycode.js"></script>
In that case, you time how long it takes to load the JS, and ping the server like this:
<html>
<!-- all your HTML content... -->
<script>var startTime = new Date().getTime();</script>
<script src="jquery.js"></script>
<script src="jquery-ui.js"></script>
<script src="mycode.js"></script>
<script>
    var endTime = new Date().getTime();
    var totalTime = endTime - startTime; // In milliseconds
    new Image().src = "/time_tracker?script_load=" + totalTime;
</script>
Then you can look through the logs for /time_tracker (or whatever you want to call it) and see how long it's taking people to load the scripts.
If your cache rate isn't good, and/or you're dissatisfied with how long it takes to load the scripts, then try moving all the scripts to a concatenated/minified file on one of your pages, and measure how long that takes to load in the same way. If the results look promising, do the rest of the site.
I would definitely go with the non-monolithic approach. Not only in your case, but in general it gives you more flexibility when you need something changed or reconfigured.
If you make a change to one of these files then you will have to merge, compress and deliver again; if you are doing this in an automated way then you are OK.
As far as the browser question - "if the cache expiration date for a javascript file has not been reached, then the cached version is used immediately; there is no HTTP request sent to the server at all" - I think there is still an HTTP request made, but with a 304 Not Modified response. To be sure, you should check all the requests made to the web server (using one of the tools available). After that response is given, the browser uses the unmodified resource - the JS file, image or whatever it is.
Good luck with your B2B.
Even though you are dealing with repeat-visitors, there are many reasons why their cache may have been cleared, including privacy and performance tools that delete temporary cache files to "speed up your computer".
Merging and minifying your scripts doesn't have to be an onerous process. I write my JavaScript in separate files, nicely spaced out so they are readable to me and easier to maintain. However, I serve them via a script page that combines all of the scripts into a single script and minifies the result, so one script gets sent to the browser containing all my scripts. This is the best of both worlds: I work on a collection of readable JavaScript files, and the visitor gets one compressed JavaScript file, which is the recommendation for reducing HTTP requests (and therefore queue time).
Did you try Google Closure? From what I've read about it, it seems quite promising.
http://code.google.com/closure/
http://googlecode.blogspot.com/2009/11/introducing-closure-tools.html - blog post
http://axod.blogspot.com/2010/01/google-closure-compiler-advanced-mode.html - performance of GC
http://www.sitepoint.com/google-closure-how-not-to-write-javascript/ - a few tips for javascript
Generally it's better to have fewer, larger requests than to have many small requests, since the browser will only do two (?) requests in parallel to a particular domain.
So whilst you say that most users are repeat visitors, when the cache expires there will be many round-trips for the many files, rather than one for a monolithic file.
If you take this to an extreme and have potentially thousands of files with individual functions in them, it would become obvious that this would lead to a huge number of requests when the cache expires.
Another reason to have a monolithic file is for when various parts of the site have different chunks of javascript associated with them, as you again get this in the cache when you hit the first page, saving later requests and round-trips.
If you're worried about the initial hit loading a "large" javascript file you can try loading it asynchronously, using the method described here : http://www.webmaster-source.com/2010/06/07/loading-javascript-asynchronously/
Whichever way you go in the end, remember that since you're setting a far-future expiry date, you'll need to change the names of the JavaScript (and CSS) files when you change their contents, otherwise clients won't pick up the changes until their cache expires anyway.
PS : Profile it on the different browsers with the differing methods and write it up, as it will prove useful to those who are also stuck on slow JS engines like IE6 :)
I've used the following for both CSS and Javascript -- most of my pages in Google Speed report being 94-96/100 and they load very fast (always within a second, even if there are 100kb's of Javascript).
1. I have a PHP function to call files -- this is a class and stores all the unique files that are asked for. My call looks something like:
javascript( 'jquery', 'jquery.ui', 'home-page' );
2. I spit out a url-encoded version of these strings combined together to call a dynamic PHP page:
<script type="text/javascript" src="/js/?files=eNptkFsSgjAMRffCP4zlTVmDi4iQkVwibbEUHzju3UYEHMffc5r05gJnEX8IvisHnnHPQN9cMHZeKThzJOVeex7R3AmEDhQLCEZBLHLMLVhgpaXUikRMXCJbhdTjgNcG59UJyWSVPSh_0lqSSp0KN6XNEZSYwAqt_KoBY-lRRvNblBZrYeHQYdAOpHPS-VeoTpteVFwnNGSLX6ss3uwe1fi-mopg8aqt7P0LzIWwz-T_UCycC2sQavrp-QIsrnKh"></script>
3. That dynamic PHP page decodes the string and creates an array of the files that need to be loaded. A cache file path is created:
$compressed_js_file_path = $_SERVER['DOCUMENT_ROOT'] . '/cache/js/' . md5( implode( '|', $js_files ) ) . '.js';
4. It checks to see if that file path already exists in the cache, if so, it just reads the file:
if( file_exists( $compressed_js_file_path ) ) {
    echo file_get_contents( $compressed_js_file_path );
} else {
5. If it doesn't exist, it compresses all the javascript into one "monolith" file, but realize it has ONLY the necessary javascript for that page, not for the entire site.
    if( $fh = @fopen( $compressed_js_file_path, 'w' ) ) {
        fwrite( $fh, $js );
        fclose( $fh );
    }
    // Echo the compressed Javascript
    echo $js;
I've given you excerpts of the code. The program you use to compress the JavaScript is completely up to you. I use this for both CSS and JavaScript so that all those files require only one HTTP request, ever; the result is cached on the server (simply delete that cache file if you change something), and it contains only the JavaScript and CSS necessary for that page.

Put javascript in one .js file or break it out into multiple .js files?

My web application uses jQuery and some jQuery plugins (e.g. validation, autocomplete). I was wondering if I should stick them into one .js file so that it could be cached more easily, or break them out into separate files and only include the ones I need for a given page.
I should also mention that my concern is not only the time it takes to download the .js files but also how much the page slows down based on the contents of the .js file loaded. For example, adding the autocomplete plugin tends to slow down the response time by 100ms or so from my basic testing even when cached. My guess is that it has to scan through the elements in the DOM which causes this delay.
I think it depends how often they change. Let's take this example:
JQuery: change once a year
3rd party plugins: change every 6 months
your custom code: change every week
If your custom code represents only 10% of the total code, you don't want users to download the other 90% every week. You would split it into at least two JS files: jQuery + plugins, and your custom code. Now, if your custom code represents 90% of the full size, it makes more sense to put everything in one file.
When choosing how to combine JS files (and same for CSS), I balance:
relative size of the file
number of updates expected
Common but relevant answer:
It depends on the project.
If you have a fairly limited website where most of the functionality is re-used across multiple sections of the site, it makes sense to put all your script into one file.
In several large web projects I've worked on, however, it has made more sense to put the common site-wide functionality into a single file and put the more section-specific functionality into their own files. (We're talking large script files here, for the behavior of several distinct web apps, all served under the same domain.)
The benefit to splitting up the script into separate files, is that you don't have to serve users unnecessary content and bandwidth that they aren't using. (For example, if they never visit "App A" on the website, they will never need the 100K of script for the "App A" section. But they would need the common site-wide functionality.)
The benefit to keeping the script under one file is simplicity. Fewer hits on the server. Fewer downloads for the user.
As usual, though, YMMV. There's no hard-and-fast rule. Do what makes most sense for your users based on their usage, and based on your project's structure.
If people are going to visit more than one page in your site, it's probably best to put them all in one file so they can be cached. They'll take one hit up front, but that'll be it for the whole time they spend on your site.
At the end of the day it's up to you.
However, the less information each web page contains, the quicker it will be downloaded by the end user.
If you only include the JS files required by each page, your website is likely to be more efficient and streamlined.
If the files are needed on every page, put them in a single file. This will reduce the number of HTTP requests and improve the response time (across many visits).
See Yahoo best practice for other tips
I would pretty much concur with what bigmattyh said: it does depend.
As a general rule, I try to aggregate the script files as much as possible, but if you have some scripts that are only used on a few areas of the site, especially ones that perform large DOM traversals on load, it would make sense to leave those in separate file(s).
e.g. if you only use validation on your contact page, why load it on your home page?
As an aside, you can sometimes sneak these files into interstitial pages, where not much else is going on, so when a user lands on an otherwise quite heavy page that needs it, it should already be cached - use with caution - but can be a handy trick when you have someone benchmarking you.
So, as few script files as possible, within reason.
If you are sending a 100K monolith, but only using 20K of it for 80% of the pages, consider splitting it up.
It depends pretty heavily on the way that users interact with your site.
Some questions for you to consider:
How important is it that your first page load be very fast?
Do users typically spend most of their time in distinct sections of the site with subsets of functionality?
Do you need all of the scripts ready the moment that the page is ready, or can you load some of them after the page has loaded by inserting <script> elements into the page? (A minimal sketch of this follows the list.)
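For that last question, here is a minimal sketch of loading a non-critical script after the page is ready by inserting a script element (the file name and callback body are assumptions):
function loadScript(src, onLoad) {
    var s = document.createElement('script');
    s.src = src;
    s.async = true;
    if (onLoad) { s.onload = onLoad; }
    document.getElementsByTagName('head')[0].appendChild(s);
}

window.addEventListener('load', function () {
    loadScript('/js/autocomplete-extras.js', function () {   // hypothetical file
        // initialize the plugin once its code is available
    });
});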
Having a good idea of how users use your site, and what you want to optimize for is a good idea if you're really looking to push for performance.
However, my default method is to just concatenate and minify all of my javascript into one file. jQuery and jQuery.ui are small and have very low overhead. If the plugins you're using are having a 100ms effect on page load time, then something might be wrong.
A few things to check:
Is gzipping enabled on your HTTP server?
Are you generating static files with unique names as part of your deployment?
Are you serving static files with never ending cache expirations?
Are you including your CSS at the top of your page, and your scripts at the bottom?
Is there a better (smaller, faster) jQuery plugin that does the same thing?
I've basically gotten to the point where I reduce an entire web application to 3 files.
vendor.js
app.js
app.css
vendor.js is neat because it has all the vendor styles in it too: I convert all my vendor CSS into minified CSS, then convert that to JavaScript and include it in the vendor.js file (after it has been through the Sass transform as well).
Because my vendor stuff does not update often, once it's in production updates are pretty rare. When it does update I just rename it to something like vendor_1.0.0.js.
Also there are minified versions of those files. In dev I load the unminified versions and in production I load the minified versions.
I use gulp to handle all of this. The main plugins that make it possible are listed below (a minimal gulpfile sketch follows the list).
gulp-include
gulp-css2js
gulp-concat
gulp-csso
gulp-html-to-js
gulp-mode
gulp-rename
gulp-uglify
node-sass-tilde-importer
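As a hedged sketch of such a build (paths, task name and output names are assumptions, not the author's actual gulpfile), the app scripts could be concatenated and minified like this:
var gulp = require('gulp');
var concat = require('gulp-concat');
var uglify = require('gulp-uglify');
var rename = require('gulp-rename');

gulp.task('app-js', function () {
    return gulp.src('src/js/**/*.js')
        .pipe(concat('app.js'))     // unminified file for dev
        .pipe(gulp.dest('dst'))
        .pipe(uglify())             // minified copy for production
        .pipe(rename('app.min.js'))
        .pipe(gulp.dest('dst'));
});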
This setup also covers my images, because I use Sass and I have a Sass function that compiles images into data URLs in my CSS sheet.
function sassFunctions(options) {
    options = options || {};
    options.base = options.base || process.cwd();
    var fs = require('fs');
    var path = require('path');
    var types = require('node-sass').types;
    var funcs = {};

    funcs['inline-image($file)'] = function (file, done) {
        var file = path.resolve(options.base, file.getValue());
        var ext = file.split('.').pop();
        fs.readFile(file, function (err, data) {
            if (err) return done(err);
            data = new Buffer(data);
            data = data.toString('base64');
            data = 'url(data:image/' + ext + ';base64,' + data + ')';
            data = types.String(data);
            done(data);
        });
    };

    return funcs;
}
So my app.css will have all of my application's images in the CSS, and I can add the images to any chunk of styles I want. Typically I create classes for images that are unique, and I just give an element that class if I want it to have that image. I avoid using img tags completely.
Additionally, using the html-to-js plugin I compile all of my HTML into the JS file as a template object hashed by the path to the HTML files, i.e. 'html\templates\header.html', and then using something like Knockout I can data-bind that HTML to an element, or to multiple elements.
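As a hedged illustration of how such compiled templates might be wired up with Knockout (the object shape, names and markup here are assumptions, not the plugin's exact output):
// Assumed shape: the build step produces a map of path -> markup string.
var templates = {
    'html/templates/header.html': '<header data-bind="text: title"></header>'
};

// Register a component whose template comes straight out of that map.
ko.components.register('xyz-header', {
    viewModel: function () { this.title = 'My Site'; },
    template: templates['html/templates/header.html']
});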
The end result is an entire web application that spins up off one "index.html" containing nothing but this:
<html>
  <head>
    <link rel="stylesheet" href="dst/app.css">
    <script src="dst/vendor.js"></script>
    <script src="dst/app.js"></script>
  </head>
  <body id="body">
    <xyz-app params="//xyz.com/api/v1"></xyz-app>
    <script>
      ko.applyBindings({}, document.getElementById("body"));
    </script>
  </body>
</html>
This will kick off my component "xyz-app", which is the entire application, and it doesn't involve any server-side rendering or events. It's not running on PHP, .NET Core MVC, MVC in general or any of that stuff. It's just basic HTML managed with a build system like gulp, and everything it needs data-wise comes from REST APIs.
Authentication -> Rest Api
Products -> Rest Api
Search -> Google Compute Engine (python apis built to index content coming back from rest apis).
So I never have any html coming back from a server (just static files, which are crazy fast). And there are only 3 files to cache other than index.html itself. Webservers support default documents (index.html) so you'll just see "blah.com" in the url and any query strings or hash fragments used to maintain state (routing etc for bookmarking urls).
Crazy quick - it all depends on the JS engine running it.
Search optimization is trickier. It's just a different way of thinking about things. I.e. you have google crawl your apis, not your physical website and you tell google how to get to your website on each result.
So say you have a product page for ABC Thing with a product ID of 129. Google will crawl your products API to walk through all of your products and index them. In each result, your API returns a URL that tells Google how to get to that product on the website, i.e. "http://blah#products/129".
So when users search for "ABC thing" they see the listing and clicking on it takes them to "http://blah#products/129".
I think search engines need to start getting smart like this, it's the future imo.
I love building websites like this because it gets rid of all the back-end complexity. You don't need Razor, PHP, Java, ASPX web forms or whatever - you get rid of those entire stacks. All you need is a way to write REST APIs (WebApi2, Java Spring, etc.).
This separates web design into UI Engineering, Backend Engineering, and Design and creates a clean separation between them. You can have a UX team building the entire application and an Architecture team doing all the rest api work, no need for full stack devs this way.
Security isn't a concern either, because you can pass credentials on ajax requests and if your stuff is all on the same domain you can just make your authentication cookie on the root domain and presto (automatic, seamless SSO with all your rest apis).
Not to mention how much simpler server farm setup is. Load-balancing needs are a lot smaller and traffic capacity a lot higher. It's way easier to cluster REST API servers behind a load balancer than entire websites.
Just set up one nginx reverse proxy server to serve your index.html and direct API requests to one of four REST API servers.
Api Server 1
Api Server 2
Api Server 3
Api Server 4
And your sql boxes (replicated) just get load balanced from the 4 rest api servers (all using SSD's if possible)
Sql Box 1
Sql Box 2
All of your servers can be on internal network with no public ips and just make the reverse proxy server public with all requests coming in to it.
You can load balance reverse proxy servers on round robin DNS.
This means you only need one SSL cert, since it's all one public domain.
If you're using Google Compute Engine for search and seo, that's out in the cloud so nothing to worry about there, just $.
If you like the code in separate files for development you can always write a quick script to concatenate them into a single file before minification.
One big file is better for reducing HTTP requests as other posters have indicated.
I also think you should go the one-file route, as the others have suggested. However, to your point on plugins eating up cycles by merely being included in your large js file:
Before you execute an expensive operation, use some checks to make sure you're even on a page that needs the operations. Perhaps you can detect the presence (or absence) of a dom node before you run the autocomplete plugin, and only initialize the plugin when necessary. There's no need to waste the overhead of dom traversal on pages or sections that will never need certain functionality.
A simple conditional before an expensive code chunk will give you the benefits of both approaches you are deciding between; a minimal sketch follows.
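A minimal sketch of that kind of guard, assuming jQuery UI's autocomplete and a hypothetical selector and endpoint:
$(function () {
    var $search = $('#search-box');           // hypothetical field
    if ($search.length) {
        // Only pay for the plugin's DOM work on pages that actually have the field.
        $search.autocomplete({ source: '/api/suggestions' });   // hypothetical endpoint
    }
});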
I tried breaking my JS into multiple files and ran into a problem. I had a login form, and the code for it (AJAX submission, etc.) went into its own file. When the login was successful, the AJAX callback then called functions to display other page elements. Since those elements were not part of the login process, I put their JS code in a separate file. The problem is that JS in one file can't call functions in a second file unless the second file is loaded first (see Stack Overflow Q. 25962958), so in my case the called functions couldn't display the other page elements. There are ways around this loading-sequence problem (see Stack Overflow Q. 8996852), but I found it simpler to put all the code in one larger file and clearly separate and comment sections of code that fall into the same functional group, e.g. keep the login code together and clearly commented as the login code.
