Why doesn't Facebook combine its CSS/JS files?

I am curious as to why the Facebook developers have chosen to not combine their scripts and stylesheets into single files. Instead they are loaded on demand via their CDN.
Facebook is obviously a very complex application and I can understand how such modularity might make Facebook easier to maintain, but wouldn't the usual optimisation advice still apply (especially given its high level of usage)?
Or, does the fact that they are using a CDN avoid the usual performance impact of having lots of small scripts / styles?

In a word: BigPipe. They divide the page up into 'pagelets'; each is processed separately on their servers and sent to the browser in parallel. Essentially almost everything (CSS, JS, images, content) is lazy loaded, so it comes down as a bunch of small files.
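To make "lazy loaded" concrete: a minimal sketch of on-demand script loading (a generic pattern, not Facebook's actual BigPipe code; the URL and function names are made up) just injects a script tag when a feature is first needed:

    // generic on-demand loader: fetch and run a script only when it's first needed
    function loadScript(url, callback) {
      var script = document.createElement('script');
      script.src = url;
      script.onload = callback; // fires after the file downloads and executes (modern browsers)
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    // e.g. only fetch the chat code when the user opens the chat panel (hypothetical URL)
    loadScript('https://cdn.example.com/js/chat.js', function () {
      initChat(); // assumed to be defined by chat.js
    });

Multiply that by hundreds of pagelets and you get the many-small-files pattern you see in the network panel.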

They might be in a situation where the savings from serving different combinations of JS files at different times (for different pages, or for different application configurations for different users) outweigh the savings in HTTP request overhead that combining everything into one file would give.
If a browser only ever executes a small percentage of the total JS code base at any given time, this would make sense. Because they have so many different users, with different parts of different applications running in different configurations for those users, it is arguable that this is the case.
Second, those files only need to be downloaded once; the browser won't ask for them again until they change or the cache expires, so only the first visit really benefits from the all-in-one style. And yes, having an advanced CDN with many edge locations around the world definitely helps.

Maybe they think it's more likely that you visit Facebook more often than you clear your browser cache.

Related

Do browsers prefer leaner JS bundles?

I'm working on an MVC application that has about 10 BIG JavaScript libraries (jquery, modernizr, knockout, flot, bootstrap...), about 30 jQuery plugins, and each view (hundreds of them) has its own corresponding JavaScript file.
The default MVC4 bundling is used, but all these JavaScript files are packaged into two bundles: one containing all the plugins and libraries, and one for the view-specific files. These two bundles are loaded on EVERY page, regardless of whether they're needed or not.
Now, they're only downloaded the first time the user opens the application, but even minified the two come to about 300 KB (far more raw), and the bulk of that code is very specific to certain pages.
So, is it better for browsers to have 2 giant bundles, or to create "smarter", lighter bundles specific to pages, but have more of them? The browser would cache them either way the first time they're requested, but is there any performance benefit to loading less JavaScript per page vs. having all of it loaded on every page?
If there's any chance of not needing them for a session then it would make sense to split them into smaller bundles. Obviously any bytes that you don't have to send are good bytes ;)
You're right that caching somewhat eliminates this problem, since once a bundle has been needed it can be cached; but if, for example, you have a calendar page and a news page, it's conceivable that someone never looks at the calendar, so it wouldn't make sense to load it.
That said, you can probably go overboard on splitting things up and the overhead caused by creating each new request will add up to more than you save by not loading larger libraries all at once.
The size of the individual files is irrelevant to the browser on its own; the size of the page as a whole is what matters to the user's machine, since it affects processor, network and memory (and how those three are impacted will depend somewhat on the browser used).
Many small files will probably feel more responsive on slow clients, because each file can be downloaded and executed as it arrives, versus waiting for one big file to download (and for memory to be allocated to read it) before any of the scripts execute.
People will probably suggest going easy on the scripts and plugins if you want a leaner web application.
Concepts like image sprites and JS bundling were invented to minimise HTTP requests. Each HTTP request carries overhead and can become a bottleneck, so it's better to have one big bundle than many small bundles.
Having said that, as Grdaneault said, you don't want users to load JS that they won't use.
So the best approach would be to bundle all the common stuff into one, then make separate bundles for the uncommon stuff. Possibly a bundle per view, depending on your structure. But don't let your bundles overlap (e.g. bundle A has files A & B, bundle B has files A & C), as this will result in duplicate loading.
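As a rough sketch of that layout (the bundle and file names here are hypothetical), the idea is one shared bundle plus non-overlapping view-specific ones:

    // hypothetical bundle layout: one common bundle for shared libraries,
    // plus per-view bundles that never repeat what 'common' already contains
    var bundles = {
      common:   ['jquery.js', 'knockout.js', 'bootstrap.js'], // loaded on every page
      calendar: ['calendar-view.js', 'flot.js'],              // only on calendar pages
      news:     ['news-view.js']                               // only on news pages
    };

Whether you express this as MVC4 bundle definitions or anything else, the point is that each file lives in exactly one bundle.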
Though 30 plugins, wow. The initial load is just one of the many issues to sort out. Think carefully about whether you need them all - not everyone will have an environment as performant as yours hopefully is!

How to optimally serve and load JavaScript files?

I'm hoping someone with more experience with global-scale web applications could clarify some questions, assumptions and possible misunderstandings I have.
Let's take a hypothetical site (heavy amount of client-side / dynamic components) which has hundreds of thousands of users globally and the sources are being served from one location (let's say central Europe).
If the application depends on popular JavaScript libraries, would it be better to take it from the Google CDN and compile it into one single minified JS file (along with all application-specific JavaScript) or load it separately from the Google CDN?
Assetic vs. headjs: does it make more sense to load one single JS file, or to load all the scripts in parallel (executing in order of dependencies)?
My assumptions (please correct me):
Compiling all application-specific/local JS code into one file, using CDNs like Google's for popular libraries, etc. but loading all of these via headjs in parallel seems optimal, but I'm not sure. Server-side compiling of third party JS and application-specific JS into one file seems to almost defeat the purpose of using the CDN since the library is probably cached somewhere along the line for the user anyway.
Besides caching, it's probably faster to download a third party library from Google's CDN than the central server hosting the application anyway.
If a new version of a popular JS library is released with a big performance boost, is tested with the application and then implemented:
If all JS is compiled into one file then every user will have to re-download this file even though the application code hasn't changed.
If third party scripts are loaded from CDNs then the user only has to download the new version from the CDN (or from a cache somewhere).
Are any of the following legitimate worries in a situation like the one described?
Some users (or browsers) can only have a certain number of connections to one hostname at once, so retrieving some scripts from a third party CDN would result in faster overall loading times.
Some users may be using the application in a restricted environment, so the domain of the application may be white-listed but not the CDNs' domains. (If this could realistically be a concern, is it at all possible to try loading from the CDN and fall back to the central server on failure?)
Compiling all application-specific/local JS code into one file
Since some of our key goals are to reduce the number of HTTP requests and minimize request overhead, this is a very widely adopted best practice.
The main case where we might consider not doing this is in situations where there is a high chance of frequent cache invalidation, i.e. when we make changes to our code. There will always be tradeoffs here: serving a single file is very likely to increase the rate of cache invalidation, while serving many separate files will probably cause a slower start for users with an empty cache.
For this reason, inlining the occasional bit of page-specific JavaScript isn't as evil as some say. In general though, concatenating and minifying your JS into one file is a great first step.
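As a hedged illustration of that first step (assuming a Node.js build machine; the file names are made up, and a real build would also minify the output), concatenation can be as simple as:

    // build.js - naive concatenation sketch (run with: node build.js)
    var fs = require('fs');

    var files = ['src/core.js', 'src/widgets.js', 'src/app.js'];
    var bundle = files.map(function (f) {
      return fs.readFileSync(f, 'utf8');
    }).join(';\n'); // the ';' guards against files that lack a trailing semicolon

    fs.writeFileSync('dist/app.combined.js', bundle);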
using CDNs like Google's for popular libraries, etc.
If we're talking about libraries where the code we're using is fairly immutable, i.e. unlikely to be subject to cache invalidation, I might be slightly more in favour of saving HTTP requests by wrapping them into your monolithic local JS file. This would be particularly true for a large code base heavily based on, for example, a particular jQuery version. In cases like this bumping the library version is almost certain to involve significant changes to your client app code too, negating the advantage of keeping them separate.
Still, mixing request domains is an important win, since we don't want to be throttled excessively by the maximum connections per domain cap. Of course, a subdomain can serve just as well for this, but Google's domain has the advantage of being cookieless, and is probably already in the client's DNS cache.
but loading all of these via headjs in parallel seems optimal
While there are advantages to the emerging host of JavaScript "loaders", we should keep in mind that using them does negatively impact page start, since the browser needs to go and fetch our loader before the loader can request the rest of our assets. Put another way, for a user with an empty cache a full round-trip to the server is required before any real loading can begin. Again, a "compile" step can come to the rescue - see require.js for a great hybrid implementation.
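As a rough sketch of what that hybrid looks like with require.js (the module names and paths here are assumptions about your project layout):

    // main.js - referenced from the page via <script data-main="main" src="require.js"></script>
    // require.js fetches the listed modules (and their dependencies) asynchronously,
    // then runs the callback once everything is ready
    require(['jquery', 'app/dashboard'], function ($, dashboard) {
      dashboard.init($('#content'));
    });

The r.js optimizer can then trace those same dependencies at build time and concatenate them into a single file, so you keep the modular source without paying the extra round-trip cost in production.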
The best way of ensuring that your scripts do not block UI painting remains to place them at the end of your HTML. If you'd rather place them elsewhere, the async or defer attributes now offer you that flexibility. All modern browsers request assets in parallel, so unless you need to support particular flavours of legacy client this shouldn't be a major consideration. The Browserscope network table is a great reference for this kind of thing. IE8 is predictably the main offender, still blocking image and iFrame requests until scripts are loaded. Even back at 3.6 Firefox was fully parallelising everything but iFrames.
Some users may be using the application in a restricted environment, so the domain of the application may be white-listed but not the CDNs' domains. (If this could realistically be a concern, is it at all possible to try loading from the CDN and fall back to the central server on failure?)
Working out if the client machine can access a remote host is always going to incur serious performance penalties, since we have to wait for it to fail to connect before we can load our reserve copy. I would be much more inclined to host these assets locally.
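For completeness, the commonly seen fallback pattern is an inline script placed immediately after the CDN script tag (shown here for jQuery; the local path is hypothetical), though it still pays the failed-connection penalty described above:

    // if the CDN copy of jQuery failed to load, window.jQuery is undefined,
    // so write out a script tag pointing at a locally hosted copy instead
    window.jQuery || document.write('<script src="/js/jquery.min.js"><\/script>');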
Many small JS files are better than a few large ones for many reasons, including changes/dependencies/requirements.
JavaScript/CSS/HTML and any other static content is handled very efficiently by any current web server (Apache, IIS and many others); most of the time a single web server is more than capable of serving hundreds or thousands of requests per second, and in any case this static content is likely to be cached somewhere between the client and your server(s).
Using any external repository (not controlled by you) for code you want to run in a production environment is a NO-NO (for me and many others): you don't want a sudden, catastrophic and irrecoverable failure of your whole site's JavaScript functionality just because somebody somewhere pressed commit without thinking or checking.
Compiling all application-specific/local JS code into one file, using
CDNs like Google's for popular libraries, etc. but loading all of
these via headjs in parallel seems optimal...
I'd say this is basically right. Do not combine multiple external libraries into one file, since—as it seems you're aware—this will negate the majority case of users' browsers having cached the (individual) resources already.
For your own application-specific JS code, one consideration you might want to make is how often this will be updated. For instance if there is a core of functionality that will change infrequently but some smaller components that might change regularly, it might make sense to only compile (by which I assume you mean minify/compress) the core into one file while continuing to serve the smaller parts piecemeal.
Your decision should also account for the size of your JS assets. If—and this is unlikely, but possible—you are serving a very large amount of JavaScript, concatenating it all into one file could be counterproductive as some clients (such as mobile devices) have very tight restrictions on what they will cache. In which case you would be better off serving a handful of smaller assets.
These are just random tidbits for you to be aware of. The main point I wanted to make was that your first instinct (quoted above) is likely the right approach.

What's the impact of not compressing javascript and CSS on client side processing

Ignoring download times, what's the performance impact of making the browser interpret several separate small files as opposed to one big one. In particular, could it make a significant difference to page rendering speed in ie6 and 7?
Browsers typically limit themselves to a certain number of simultaneous requests. This number is dependent on how "server friendly" they are.
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
So, depending on the number of artifacts the browser has to load, it may have to wait for others to complete first. Artifacts include everything the browser has to go back to the server for: images, javascript, css, flash, etc. Even the favicon if you have one.
That aside, rendering speed normally boils down to how the pages are structured, i.e. how many calculations you depend on the browser to make (% width vs. fixed width).
It has to make more round-trip HTTP requests. It may or may not have significant consequences.
Apart from download times, if you have many JavaScript and CSS files, each one is an extra HTTP call from client to server. If page load time is one of your main criteria, you should definitely think about this.
Read this doc as well:
http://developer.yahoo.com/performance/rules.html
I work for a gov't organization with a large scale enterprise intranet and when we had around 25+ JS files and 10+ CSS files loading on our intranet portal we did notice a dramatic lag in page load time in IE6 and 7. Newer browsers have faster routines for loading and executing JavaScript. I used YUI Compressor to minify everything including CSS.
If you include minification in along with combining files, then dead code often gets removed (depending on the minifier) and some code can be optimized (see YUI Compressor: What are micro optimizations? and Which javascript minification library produces better results?).
I've asked this question a bunch of times when I first started out with web development.
If you have under 10 JavaScript files and 10 CSS files (CSS is not so important in my opinion), then I don't think there is much use minifying and compressing. However, if you are dealing with a bunch of JavaScript files (more than 10), then YES, it's gonna make a difference.
What you may experience is that even after compressing, minifying and combining your scripts, you may still experience slowness. That's when HTML caching plays a huge role in website optimization, at least that's what I experienced in my web application. Try looking into Memcached and use it to cache your HTML files. This technique speeds up your web application a WHOLE LOT!!!
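To make the Memcached suggestion concrete, here is a minimal cache-aside sketch; it assumes a Node backend and the 'memcached' npm client purely for illustration (the poster's actual stack isn't specified), and renderPage() is a hypothetical function that produces the HTML:

    var Memcached = require('memcached');
    var cache = new Memcached('localhost:11211');

    // look the page up in the cache first; only render (and store) it on a miss
    function getPage(key, renderPage, callback) {
      cache.get(key, function (err, html) {
        if (!err && html) return callback(html);    // cache hit: skip rendering entirely
        var fresh = renderPage();                   // cache miss: build the HTML
        cache.set(key, fresh, 300, function () {}); // keep it for 5 minutes
        callback(fresh);
      });
    }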
I am assuming your question is related to web optimization and high performance websites.
Just my 2 cents.

performance loading js files

I was wondering what would be best. I have different JS functions; for instance I have the accordion plugin and a script for the contact page, but I only use each script on one page, e.g. the FAQ page uses the accordion JS but not the contact JS, obviously.
This along with many other examples (my js dir is 460 KB in total, separated into different files).
So what's best: put all the scripts in one file and load it in my header template, or separate them into about 10 different files and load them when I need them?
Regards
You want to place them all in one file. It cuts down on the number of trips to the server and reduces overhead.
Placing them at the end of the document is generally recommended as that way the rest of the page downloads beforehand.
Here's a link describing the best practices by Yahoo on where to include scripts and about minimizing trips to the server.
http://developer.yahoo.com/performance/rules.html
The "best" isn't usually a one-size-fits-all.
Merging the files together means fewer connections and (assuming your cache settings are correct) it will allow your first page view to take a hit and then all other pages would benefit.
Splitting them out gives you more granularity in terms of caching, but it comes at the cost of many connections (each connection has overhead associated with it). Remember that many browsers only make 2 connections to any given hostname; this is an old restriction imposed by the HTTP spec.
Usually "best" is finding the way to chunk them into large enough groups that for any one page you aren't downloading too much extra but you are able to share a lot between pages. Favor fewer groups over worrying about downloading too much.
Whichever way you go, make sure you compact your scripts (remove whitespace, comments, etc.), serve them up GZipped or Deflated, and set your expire headers appropriately so that a user isn't downloading the same scripts over and over.
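As one hedged example of that last point, in a Node/Express setup (assuming the express and compression packages; other servers have equivalent settings) gzip and far-future expiry headers can be configured like this:

    // server.js - sketch of gzip plus long cache lifetimes for static assets
    var express = require('express');
    var compression = require('compression');
    var app = express();

    app.use(compression());                        // gzip/deflate responses
    app.use('/static', express.static('public', {
      maxAge: '30d'                                // sets far-future Cache-Control headers
    }));

    app.listen(3000);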
I would group it into two or three files based on what is used everywhere versus only in certain places.
Also, with that much code, you should look at minifying it to reduce the download time. I've used the YUI Compressor before; it does a good job and is easy to integrate into a build file.
Combine them into a single file - it will mean fewer HTTP requests.
However, it is very important that you are setting expiry headers on your CSS and JS files. You should always have these headers set, but it's especially bad if you're forcing the user to re-download the contents of 10 files each page load.
If you really only use each function on a single page, you won't gain much by combining them into a single file. It'll take longer to load whatever page a visitor hits first, but subsequent pages will load faster.
If most scripts are only used on a few pages, then it might make sense to figure out which pages visitors are likely to hit first (main page, plus whatever's bookmark-worthy) and produce combined js files for those pages, so they load as quickly as possible. Then just load the less-used scripts on whatever page they're used.

JavaScript and CSS loading

I was wondering: if I have, let's say, 6 JavaScript includes on a page and 4-5 CSS includes on the same page as well, would the page actually load more optimally if I created one file, or perhaps two, and appended them all together instead of having a bunch of them?
Yes. You will get better performance with fewer files.
There are a few reasons for this and I'm sure others will chime in as I won't list them all.
There is overhead in each request in addition to the size of the file: the request itself, the headers, cookies (if sent) and so on. Even in many caching scenarios the browser will send a request to check whether the file has been modified. Of course, proper headers/server configuration can help with this.
Browsers by default have a limited number of simultaneous connections that they will open at a time to a given domain. I believe IE allows 2 and Firefox 4 (I could be mistaken on the numbers). Anyway, the point is: if you have 10 images, 5 JS files and 2 CSS files, that's 17 items that need to be downloaded, only a few will be fetched at the same time, and the rest are just queued.
I know these are vague and simplistic explanations, but I hope it gets you on the right track.
One of your goals is to reduce http requests, so yes. The tool called yslow can grade your application to help you see what you can do to get a better user experience.
http://developer.yahoo.com/yslow/
Even if the browser is making several requests, it tries to open the fewest TCP connections it can (see the Keep-Alive HTTP header docs). Page loading speed can also be improved by setting up compression (DEFLATE or GZIP) on the server side.
Each include is a separate HTTP request the user's browser has to make, and with an HTTP request comes overhead (on both the server and the connection). Combining multiple CSS and JavaScript files will make things easier on you and your users.
This can be done with images as well, via a technique called CSS sprites.
Yes. You are making fewer HTTP requests that way.
The best possible solution would be to add all the code to one page so it can be fetched in one GET request by the browser. If you are linking to multiple files, the browser has to request each of these external files every time the page is loaded.
This may not cause a problem if pipelining is enabled in the browser and the site is not generating much traffic.
Google have streamlined their code into one file. I can't even imagine how many requests that has saved, and how much it has lightened the load on their servers, given that amount of traffic.
There's no longer any reason to feel torn between wanting to partition JS & CSS files for the sake of organisation on the one hand and wanting few files for the sake of efficiency on the other. There are tools that allow you to achieve both.
For instance, you can incorporate Blender into your build process to aggregate (and compress) CSS and JavaScript assets. Java developers should take a look at JAWR, which is state of the art.
I'm not really very versed in the factors which affect server load, but I think the best thing to do would be to find a balance between having one big chunk and having your scripts organised into meaningful separate files. I don't think that having five or so different files should influence performance too much.
A more influential factor to look at would be the compression of the scripts; there are various online utilities which get rid of whitespace and use more efficient variable names. I think these will result in much more dramatic improvements than putting the files together.
As others have said, yes, the fewer files you can include, the better.
I highly recommend Blender for minifying and consolidating multiple CSS/JS files. If you're like me and often end up with 10-15 stylesheets (reset, screen, print, home, about, products, search, etc...) this tool is a great help.
