Good Practice with External JavaScript Files

I'm new to JavaScript.
How should I split my functions across external scripts? What is considered good practice? Should all my functions be crammed into one external .js file, or should I group like functions together?
I would guess more files mean more HTTP requests to obtain the scripts, and that could slow down performance. However, more files keep things organized: for example, onload.js initializes things on load, data.js retrieves data from the server, ui.js holds the UI handlers...
What do the pros advise on this?
Thanks!

As Pointy mentioned, you should try a tool. Try Grunt or Brunch; both are meant to help you in the build process, and you can configure them to combine (and minify, etc.) all your files when you are ready for production, while keeping separate files when you are developing.
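For instance, here is a minimal Gruntfile sketch (the file names and task layout are assumptions for illustration, not from the question) that keeps sources separate during development and combines and minifies them for release:

    // Gruntfile.js - a sketch using grunt-contrib-concat and grunt-contrib-uglify
    module.exports = function (grunt) {
        grunt.initConfig({
            concat: {
                dist: {
                    // developed as separate files, combined for release
                    src: ['src/onload.js', 'src/data.js', 'src/ui.js'],
                    dest: 'build/app.js'
                }
            },
            uglify: {
                dist: {
                    files: { 'build/app.min.js': ['build/app.js'] }
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-concat');
        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.registerTask('build', ['concat', 'uglify']);
    };

Running grunt build then produces a single minified file to ship, while development continues against the individual source files.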

When releasing a product, you generally want as few HTTP requests as possible to load a page (hence image sprites, for example).
So I'd suggest concatenating your .js files for release, but keeping them separated in a way that works well for you during development.
Keep in mind that if you "use strict", concatenating scripts might be a source of errors.
(more info on js strict mode here)
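To illustrate the pitfall (a minimal sketch; the file contents are hypothetical): if a file that begins with "use strict" ends up first in the bundle, the directive applies to the entire concatenated script, so sloppy-mode code later in the bundle can break:

    // a.js - strict on its own; once concatenated first, the directive
    // covers the whole combined script
    "use strict";
    function add(x, y) { return x + y; }

    // b.js - relies on an implicit global; fine as a separate file, but throws
    // a ReferenceError under strict mode when setup() runs in the combined bundle
    function setup() {
        counter = 0;
    }

A common remedy is to wrap each file in an immediately invoked function expression so the directive stays scoped to its own file.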

It depends on the size and count of your scripts, and on how many of them you use at any time.
Many performance best practices claim (and there's good logic in this) that it's good to inline your JavaScript if it's small enough. This leads to a lower count of HTTP requests, but it also prevents the browser from caching the JavaScript, so you should be very careful. That's why there are even practices of inlining your images (using base64 encoding) in some special cases (for example, look at Bing.com: all their JavaScript is inline).
If you have a lot of JavaScript files and you're using just a small part of them at any time (not only by count but by size), you can load them asynchronously (for example using require.js). But this will require a lot of changes in your application design if you haven't considered it from the beginning (and it will also increase your application's complexity).
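As a sketch of what asynchronous loading looks like with require.js (the module names and their contents are hypothetical):

    // main.js - require.js fetches both modules asynchronously and runs the
    // callback only once they are available
    require(['data', 'ui'], function (data, ui) {
        data.load(function (items) {
            ui.render(items);
        });
    });

    // ui.js - an AMD module that require.js can load on demand
    define(function () {
        return {
            render: function (items) { /* page-specific rendering */ }
        };
    });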
There are even practices of caching your CSS/JavaScript in localStorage. For further information you can read the Web Performance Daybook.
So let's do a short recap.
If you have your JavaScript inline, this will reduce the time of the first load of the page. But inline JavaScript won't be cached by the browser, so every subsequent load of your page will be slower than if you had used external files.
If you are using different external files, make sure that you're using all of them (or at least a big part of them), because otherwise you'll have redundant HTTP requests for files that are loaded unnecessarily. This leads to better organization of your code but probably greater load time (though don't forget the browser cache, which will help you).
Putting everything in a single file will reduce your HTTP requests, but you'll have one big file which will block your page loading (if you're loading the JS file synchronously) until the file has loaded completely. In such a case I can recommend putting this big file at the end of the body.
For performance tracking you can use tools like YSlow.

When I think about good practice, I think of MVC patterns. One might argue whether this is the way to go in development, but many people use it to structure what they want to achieve. Usually it is not advisable to use MVC at all if the project is just too small: that would be like creating a full C++ Windows app when you just needed a simple C program with a for loop.
In any case, MVC or MV* in JavaScript will help you structure your code to the extent that all the actions are part of the controllers, while object properties are just stored in the model. The views are then just for presentation purposes and are rendered for the user via special requests or rendering engines. When I started using MV*, I started with BackboneJS and the guide "Developing Backbone.js Applications" by Addy Osmani. Of course there is a multitude of other frameworks that you can use to structure your code. They can be found on the TodoMVC website.
What you can also do is derive your own structure from their apps and then use the directory structure for your development (but without the MV* framework).
I do not agree with your concern that using such a structure leads to more files, which mean more HTTP requests. Of course this is true during development, BUT remember: the user should get a performance-enhanced (i.e. compiled) and minified version as a script. Therefore, even if you are developing in such an organized way, it makes more sense to minify/uglify and compile your scripts with Google's Closure Compiler.

Related

Do browsers prefer leaner JS bundles?

I'm working on an MVC application that has about 10 BIG JavaScript libraries (jQuery, Modernizr, Knockout, Flot, Bootstrap...), about 30 jQuery plugins, and each view (hundreds of them) has its own corresponding JavaScript file.
The default MVC4 bundling is used, but all these JavaScript files are packaged in two bundles: one containing all the plugins and libraries, and one for the view-specific files. And these two bundles are loaded on EVERY page, regardless of whether they're needed or not.
Now, they're loaded only the first time the user opens the application, but even minified the two are about 300 KB (way more raw), and the bulk of that code is very specific to certain pages.
So, is it better for the browsers to have 2 giant bundles, or to create "smarter" and lighter bundles specific to pages, but have more of them? The browser will cache them regardless the first time they're loaded, but is there any performance benefit to loading less JavaScript per page versus having all of it loaded on every page?
If there's any chance of not needing them for a session then it would make sense to split them into smaller bundles. Obviously any bytes that you don't have to send are good bytes ;)
You're right that caching somewhat eliminates this problem, as once you need something it can be cached; but if, for example, you have a calendar page and a news page, it's conceivable that someone might not care at all about the calendar, and so it wouldn't make sense to load it.
That said, you can probably go overboard on splitting things up and the overhead caused by creating each new request will add up to more than you save by not loading larger libraries all at once.
The size of the files on its own is irrelevant to a browser; the size of the page as a whole is what matters to the user's computer, as it will impact processor, network and memory (and how those three components perform will depend somewhat on the browser used).
Many small files will probably give a better response on slow clients, because each file can be downloaded and executed as it arrives, versus waiting for one big file to download (and for memory to be allocated to read it) before the scripts execute.
People will probably suggest to go easy on the scripts and plugins if you want a leaner web application.
Concepts like image sprites and JS bundling were invented to minimise HTTP requests. Each HTTP request has overhead and can become a bottleneck, so it's better to have one big bundle than many small ones.
Having said that, as Grdaneault said, you don't want users to load JS that they won't use.
So the best approach would be to bundle all the common stuff into one bundle, then create separate bundles for the uncommon stuff, possibly a bundle per view, depending on your structure. But don't let your bundles overlap (e.g. bundle A has files A and B, bundle B has files A and C), as this will result in duplicate loading; see the sketch below.
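A sketch of the idea (bundle and file names are hypothetical):

    // non-overlapping bundle composition: every file appears in exactly one bundle
    var bundles = {
        common:   ['jquery.js', 'bootstrap.js', 'knockout.js'], // loaded on every page
        calendar: ['calendar-view.js', 'datepicker-plugin.js'], // calendar pages only
        news:     ['news-view.js']                              // news pages only
    };

Each page then loads the common bundle plus at most one page-specific bundle, so no file is ever downloaded twice.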
Though 30 plugins, wow. The initial load is just one of the many issues to sort out. Think carefully about whether you need them all; not everyone will have an environment as performant as yours hopefully is!

Can the number of JavaScript files make a project slower, or have any other effects?

I am working on a project using HTML and JavaScript, and I am using a number of JavaScript files in it. I want to know whether the number of JavaScript files in a web project has any side effects, in terms of efficiency or speed.
If you mean, is there a speed cost associated with having all of your JavaScript embedded in inline script tags within your HTML, then maybe, maybe not; it depends on how much JavaScript there is.
It's a trade-off:
If you put all of your JavaScript in the HTML files, that means you have to duplicate it in each HTML file, making each one larger. If it's a lot of script, that adds up to each HTML file being heavier, and it means when you change the content (rather than the code), you force the user to re-download all of that code.
If you put your JavaScript in a separate file, you have an additional HTTP request when your page is loaded, but only if the JavaScript file isn't already in cache — and all of your HTML files are that much smaller.
So in most cases, if you have any significant amount of code, you're better off using a separate JavaScript file and setting the caching headers correctly so that the browser doesn't need to send the separate HTTP request each time. For instance, configure your server to tell the browser that JavaScript file is good for a long time, so it doesn't even do an If-Modified-Since HTTP request. Then if you need to update your JavaScript, change the filename and refer to the new file in your HTML.
That's just one approach to caching control, which is a significant and non-trivial topic. But you get the idea: making it separate gives you options like that, with a fairly low initial cost (the one extra HTTP request to load it into cache). It can be overkill for a quite small amount of code, though.
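As a sketch of the far-future caching described above (assuming a Node/Express server, which the question doesn't specify):

    // serve static assets with a one-year lifetime; until then the browser
    // won't even send an If-Modified-Since request for them
    var express = require('express');
    var app = express();

    app.use('/static', express.static('public', { maxAge: '365d' }));
    app.listen(3000);

When the script changes, rename it (e.g. app.v2.js) and update the reference in your HTML so clients fetch the new copy immediately.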
In terms of execution, I believe there should be no considerable difference depending on the number of JavaScript files you have. However, a large number of JavaScript files requires a large number of requests when your page is loaded, which definitely may impact the time it takes to load your page.
There are at least two ways to handle this and which approach to use depends on the situation at hand.
Bundle all files into one:
Each request to the server adds to the time it takes to load your page. Because of this, it is often considered good practice to bundle and minify your JavaScript files into one single file containing all your scripts, thus requiring only a single request to load all the JavaScript.
In some cases, however, if you have a great deal of JavaScript for instance, that single file can become quite large, and even though you load just a single file, loading it may take some time, thus slowing down your page load. In that case, it might be worth considering option two.
Load modules as you need them:
In other cases, for example if you have a lot of JavaScript for your site but only a few modules of your code are used on each page, conditional loading might give you better performance. In that case, you keep your code in separate modules, with each module in a separate file, and load only the modules you need on each page, as in the sketch below.
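A minimal sketch of conditional loading (the element ID, path and Calendar entry point are hypothetical):

    // inject a script tag only when the current page actually needs the module
    function loadScript(src, onload) {
        var s = document.createElement('script');
        s.src = src;
        s.async = true;
        s.onload = onload;
        document.head.appendChild(s);
    }

    // only calendar pages pay for the calendar module
    if (document.getElementById('calendar')) {
        loadScript('/js/calendar.js', function () {
            Calendar.init();
        });
    }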
This will depend entirely on the JavaScript files that you include. Naturally, when including a JavaScript file, it will have to be loaded by the browser, and how taxing this is depends on the size of the file. Generally speaking, it is advantageous to minimize the number of separate script requests by combining all code into a single file.
A larger number of JavaScript files can cause your page to load more slowly, but if everything caches correctly there shouldn't be any problem. A browser that requests a lot of files from cache is a bit slower before rendering, but we are talking about 0.001 seconds.
In Crowder's answer, if you are caching correctly you have to change your filename and update references to it; but if you implement a nice caching mechanism in your web project, it's easier to reference your JavaScript with a querystring like:
scripts/foo.js?version=1
We use the software version number, so on every new release we change the querystring and all clients get the new script files.
Every time you release, it's best to bundle and minify all of your JavaScript, though it isn't that useful for small projects.

How to optimally serve and load JavaScript files?

I'm hoping someone with more experience with global-scale web applications could clarify some questions, assumptions and possible misunderstandings I have.
Let's take a hypothetical site (heavy amount of client-side / dynamic components) which has hundreds of thousands of users globally and the sources are being served from one location (let's say central Europe).
If the application depends on popular JavaScript libraries, would it be better to compile them into one single minified JS file (along with all application-specific JavaScript) or to load them separately from the Google CDN?
Assetic VS headjs: Does it make more sense to load one single JS file or load all the scripts in parallel (executing in order of dependencies)?
My assumptions (please correct me):
Compiling all application-specific/local JS code into one file, using CDNs like Google's for popular libraries, etc. but loading all of these via headjs in parallel seems optimal, but I'm not sure. Server-side compiling of third party JS and application-specific JS into one file seems to almost defeat the purpose of using the CDN since the library is probably cached somewhere along the line for the user anyway.
Besides caching, it's probably faster to download a third party library from Google's CDN than the central server hosting the application anyway.
If a new version of a popular JS library is released with a big performance boost, is tested with the application and then implemented:
If all JS is compiled into one file then every user will have to re-download this file even though the application code hasn't changed.
If third-party scripts are loaded from CDNs, then the user only has to download the new version from the CDN (or from a cache somewhere).
Are any of the following legitimate worries in a situation like the one described?
Some users (or browsers) can only have a certain number of connections to one hostname at once, so retrieving some scripts from a third-party CDN would result in overall faster loading times.
Some users may be using the application in a restricted environment, so the domain of the application may be white-listed but not the CDNs' domains. (If this is a realistic concern, is it at all possible to try to load from the CDN and fall back to the central server on failure?)
Compiling all application-specific/local JS code into one file
Since some of our key goals are to reduce the number of HTTP requests and minimize request overhead, this is a very widely adopted best practice.
The main case where we might consider not doing this is in situations where there is a high chance of frequent cache invalidation, i.e. when we make changes to our code. There will always be tradeoffs here: serving a single file is very likely to increase the rate of cache invalidation, while serving many separate files will probably cause a slower start for users with an empty cache.
For this reason, inlining the occasional bit of page-specific JavaScript isn't as evil as some say. In general though, concatenating and minifying your JS into one file is a great first step.
using CDNs like Google's for popular libraries, etc.
If we're talking about libraries where the code we're using is fairly immutable, i.e. unlikely to be subject to cache invalidation, I might be slightly more in favour of saving HTTP requests by wrapping them into your monolithic local JS file. This would be particularly true for a large code base heavily based on, for example, a particular jQuery version. In cases like this bumping the library version is almost certain to involve significant changes to your client app code too, negating the advantage of keeping them separate.
Still, mixing request domains is an important win, since we don't want to be throttled excessively by the maximum connections per domain cap. Of course, a subdomain can serve just as well for this, but Google's domain has the advantage of being cookieless, and is probably already in the client's DNS cache.
but loading all of these via headjs in parallel seems optimal
While there are advantages to the emerging host of JavaScript "loaders", we should keep in mind that using them does negatively impact page start, since the browser needs to go and fetch our loader before the loader can request the rest of our assets. Put another way, for a user with an empty cache a full round-trip to the server is required before any real loading can begin. Again, a "compile" step can come to the rescue - see require.js for a great hybrid implementation.
The best way of ensuring that your scripts do not block UI painting remains to place them at the end of your HTML. If you'd rather place them elsewhere, the async or defer attributes now offer you that flexibility. All modern browsers request assets in parallel, so unless you need to support particular flavours of legacy client this shouldn't be a major consideration. The Browserscope network table is a great reference for this kind of thing. IE8 is predictably the main offender, still blocking image and iFrame requests until scripts are loaded. Even back at 3.6 Firefox was fully parallelising everything but iFrames.
Some users may be using the application in a restricted environment, so the domain of the application may be white-listed but not the CDNs' domains. (If this is a realistic concern, is it at all possible to try to load from the CDN and fall back to the central server on failure?)
Working out whether the client machine can access a remote host will always incur a serious performance penalty, since we have to wait for the connection attempt to fail before we can load our reserve copy. I would be much more inclined to host these assets locally.
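That said, if you do want a fallback, a common sketch (the local path is hypothetical) is to check for the library's global immediately after the CDN script tag and document.write a local copy if it is missing:

    // inline in the page, placed right after the CDN <script> tag for jQuery
    if (!window.jQuery) {
        document.write('<script src="/js/jquery.min.js"><\/script>');
    }

Note that when the CDN is blocked rather than fast-failing, the page still pays the full timeout cost before this check runs, which is exactly the penalty described above.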
Many small JS files are better than a few large ones, for many reasons including changes, dependencies and requirements.
JavaScript/CSS/HTML and any other static content is handled very efficiently by any current web server (Apache/IIS and many others); most of the time one web server is more than capable of serving hundreds or thousands of requests per second, and in any case this static content is likely to be cached somewhere between the client and your server(s).
Using any external repository (not controlled by you) for code that you want to use in a production environment is a no-no (for me and many others): you don't want a sudden, catastrophic and irrecoverable failure of your whole site's JavaScript functionality just because somebody somewhere pressed commit without thinking or checking.
Compiling all application-specific/local JS code into one file, using CDNs like Google's for popular libraries, etc. but loading all of these via headjs in parallel seems optimal...
I'd say this is basically right. Do not combine multiple external libraries into one file, since—as it seems you're aware—this will negate the majority case of users' browsers having cached the (individual) resources already.
For your own application-specific JS code, one consideration you might want to make is how often this will be updated. For instance if there is a core of functionality that will change infrequently but some smaller components that might change regularly, it might make sense to only compile (by which I assume you mean minify/compress) the core into one file while continuing to serve the smaller parts piecemeal.
Your decision should also account for the size of your JS assets. If—and this is unlikely, but possible—you are serving a very large amount of JavaScript, concatenating it all into one file could be counterproductive as some clients (such as mobile devices) have very tight restrictions on what they will cache. In which case you would be better off serving a handful of smaller assets.
These are just random tidbits for you to be aware of. The main point I wanted to make was that your first instinct (quoted above) is likely the right approach.

What is the proper way to structure/organize javascript for a large application?

Let's say you are building a large application, and you expect to have tons of javascript on the site. Even if you separated the javascript into 1 file per page where javascript is used, you'd still have about 100 javascript files.
What is the best way to keep the file system organized, include these files on the page, and keep the code structure itself organized as well? All the while, having the option to keep things minified for production is important too.
Personally I prefer the module pattern for structuring the code, and I think this article gives a great introduction:
http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth
It keeps my global scope clean, enables namespacing and provides a nice way to separate public and private methods and fields, as the sketch below shows.
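A minimal module-pattern sketch (the namespace and member names are hypothetical):

    var MYAPP = MYAPP || {};

    MYAPP.counter = (function () {
        var count = 0; // private: invisible outside the closure

        function increment() { count += 1; return count; }
        function reset() { count = 0; }

        // only the returned members are public
        return { increment: increment, reset: reset };
    }());

    MYAPP.counter.increment(); // 1
    // MYAPP.counter.count is undefined: the field stays private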
I'd structure the code in separate files, and seek to keep coupling low and cohesion high. Although multiple files are bad for client performance (you should minimize the number of requests), you'll easily get the best of both worlds by using a compression utility.
I have some experience with YUI Compressor, for which there is a Maven plugin (http://alchim.sourceforge.net/yuicompressor-maven-plugin/compress-mojo.html - I haven't used this myself). If you need to order the JavaScript files in a certain way (this also applies to CSS), you could make a shell script that concatenates the files in the given order in advance (tip: YUI Compressor defaults to STDIN).
Besides, whichever way you decide to minify your files will probably do just fine. Your focus should instead be on how you structure your code, so that it performs well on the client (this should be the highest priority, in order to deliver a good UX) and is maintainable in a way that works for you.
You mention namespaces, which is a good idea, but I'd be careful not to adopt too many concepts from the traditional object-oriented domain; instead, learn to utilize the many excellent features of the JavaScript language :)
If the overall size is not too big, I'd use a script to compile all files into a single file and minify that file. Then I'd use aggressive caching plus the mtime in the URL, so people load the file only once but get the new one as soon as it's available.
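A sketch of that mtime-based cache busting (assuming a Node server; the paths and helper name are hypothetical):

    var fs = require('fs');

    // append the file's last-modified time so the URL changes whenever the file does
    function versionedUrl(urlPath) {
        var mtime = fs.statSync('public' + urlPath).mtime.getTime();
        return urlPath + '?v=' + mtime;
    }

    // e.g. '/js/app.min.js?v=1362096000000', emitted into the page's script tag
    console.log(versionedUrl('/js/app.min.js'));

Combined with aggressive cache headers, the old URL stays cached indefinitely while any rebuild automatically produces a new URL.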
Use:
jslint - to keep checks on code quality.
require.js - to get just the code you need for each page (or equivalent).
Organise your JS as you would any other software project: break it up by roles and responsibilities, and separate concerns.

embedded vs linked JS / CSS

I am familiar with the benefits of linked CSS vs. embedded and inline for maintainability and modularity. I have, however, read that in certain mobile applications of web dev, it can be beneficial (faster performance) to embed or even inline your CSS.
I would avoid any inline JS, and to a lesser extent CSS, but I have noticed that on many sites, including plenty of Google pages, JS is embedded right in the header of pages.
A colleague of mine insists on always linking to external js files. I think it makes more sense to embed js if the function is specific to one page or varies slightly per page, to save the processing overhead of a linked script.
The one thing the other answers didn't touch on is developer efficiency. If it's easier to put it inline, and there's no immediate performance requirement/concern, then do that. There is real business value to "easy", and it trumps eventual or non-existent performance concerns. Don't prematurely optimize.
The advantage of linked JS files is that they can be cached by the browser and loaded from local disk or memory on subsequent pages.
The advantage of inline JS is that you might have fewer network requests per page.
The best compromise is usually a small number of linked JS files (one or two) consisting of a minified combination of all your JS, so that it is combined into as few files as possible and is as small as possible.
The benefits of local caching far exceed the cost of the extra parsing of a little JS that might not be used on some pages.
Embedded JS that makes sense (even when most of your JS is in linked files) is the setting of a few JS variables containing state that is specific to your page. That would typically be embedded into the head section of the page as it's generated dynamically at your server, different for every page and usually not cacheable. But this data should typically be small and page-specific.
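A sketch of that pattern (the variable name, values and entry point are illustrative):

    // emitted inline by the server: different on every page, never cached
    var PAGE_STATE = { userId: 42, locale: "en-US" };

    // app.js (the external, cached script) reads the per-page state at startup
    initApp(PAGE_STATE);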
Linking a script incurs a small penalty in the form of an extra request to the server. If you keep it inline this request is not made and depending on the situation you may get a faster loading page. It makes sense to inline your code if:
it is very small
it is dynamically generated, since then you won't get the benefits of caching anyway
In the case of Google and Facebook, you're most likely seeing inline JavaScript because it's being generated by server-side code.
Other answers have already mentioned the advantages of caching with external JS files. I would almost always go that way for any library or framework type functionality that is likely to be used by at least two pages. Avoid duplication when you can.
I thought I would comment on "inline" vs "embedded".
To me, "inline" means mixing the JS in with the HTML, perhaps with several separate <script> blocks that may or may not refer to each other, almost certainly with a number of event handlers set directly with HTML element properties like <div onclick="..."/>. I would discourage this in most circumstances, but I wouldn't get too hung up about it for occasional uses. Sometimes it's simply less hassle and pretending otherwise just wastes time that you could spend on more important issues.
I would define "embedded" as having (preferably) a single <script> block in the head or at the end of the body, with event handlers assigned within that block using document ready or onload function(s). I see nothing wrong with this for functions specific to one page, in fact I tend to prefer this over an external file for that purpose if it's only a small amount of script and I don't care about caching it client-side. Also if the page is generated dynamically and something in the JavaScript needs to be generated on the server it is generally much easier to do it if the script is on the same page.
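For instance, an "embedded" block in that style might look like this (the element ID is hypothetical):

    // a single block at the end of the body: handlers wired up in one place,
    // with no onclick="" attributes scattered through the markup
    document.addEventListener('DOMContentLoaded', function () {
        document.getElementById('save').addEventListener('click', function () {
            // page-specific behaviour lives here
        });
    });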
One last note on external files: during development watch out for IE's tendency to "over cache". Sometimes while testing I've made some small changes to an external file or two and pulled my hair out wondering why it didn't work only to eventually realise that IE was still using an old cached version. (On the one hand of course this is my fault, but on the other I know a lot of people who have fallen victim to this from time to time.)
All the answers above are very good answers. But I would like to add my own based on 15 years of web dev experience.
ALWAYS use linked resources for both CSS and ECMAScript rather than inline code, as linked content is cached by browsers in most cases and used across potentially thousands of pages over hours and days of a user's interaction with a given domain online. The advantages are as follows:
The bandwidth savings from linked scripts are HUGE, as you simply deliver less script over the wire across the user experience, which might use the cache for weeks.
There's also a better CSS cascade, as embedded and inline styles override linked styles by weight, causing confusion for designers.
There is avoidance of duplicate scripts which happens a lot with inline scripts.
You reuse the same libraries over many pages with cache control on the client now possible using versioning and date-based querystrings.
Linked resources tell the browser to preload all resources BEFORE initializing scripts or styles in the DOM. I've seen issues related to this where users pressed buttons before scripts had pre-processed date-times in time-sheet apps, causing major problems.
You have more control over the script and CSS libraries used by all developers in a project, avoiding polluting your app with hundreds of poorly vetted custom scripts in pages.
It's very easy to update libraries for your users, as well as to version linked libraries.
External script libraries from Google or others are now accessible. You can even reuse your own linked libraries and CSS using linked resources.
Best of all, there are processing speed gains from using cached client-side resources. Cached resources outperform on-demand resources every time!
Linked scripts also enforce style and layout consistency instead of custom layout shifts per page. If you use HTML layouts consistently, you can achieve flash-free page views, because cached CSS is used by the DOM across web pages to render pages faster.
Once you pull in linked resources on the first domain request/response, the user's experience is fast, and server-side page delivery means the DOM and HTML layouts will not shift or refresh despite numerous page views or links to pages across the site. We often then added limited custom page-level embedded style and script resources on top of the cached linked stack of libraries, if needed for a narrow range of customizations; global variables and custom CSS can then override linked values. This allows you to maintain websites much more easily, without page-by-page guesswork as to which libraries are missing or already used. I've added custom linked jQuery or other libraries in sub-sections to gain more speed this way, which means you can use linked scripts to manage custom subgroups of website sections as well.
GOOGLE ANGULAR
What you are seeing in Google's web presence is often an implementation of Angular's very complex ES5 and ES6 modular script cache systems, which utilize fully JavaScripted DOM manipulation in single-page applications, using the scripts and embedded CSS in page views now exclusively managed in Angular (2+). They utilize elaborate modules to load on-demand and lazy-load components and modules, with HTML/CSS templates pre-loaded into memory from the server behind the scenes to speed delivery of news and other pages they manage.
The FLAW in all that is that they demand browsers stream HUGE megabytes of ECMAScript, preloaded with HTML and CSS embedded into these web pages behind the scenes, as the user interacts with these cached pages and modules. The problem is they have HUGE stacks of the same CSS and scripts that get injected into multiple modules and then into parts of the DOM, which is sloppy and wasteful. They argue there is no need now for server-side delivery or caching when they can easily manage all that via inline style and script content downloaded through hidden XMLHttpRequest Web API calls to and from the server. Why download all that and constantly rebuild and store inline pages in memory when a much smaller file linked from the page would suffice?
Honestly, this is the sloppiest approach to cache management of styles, content, and CSS I have seen yet in web dev frameworks, as it still demands huge megabytes of scripts just to parse a simple news page with a few lines of text. Someone at Google didn't really think that framework through, lol. It's wasteful of bandwidth and processing in the browser, if you ask me, and overkill. It's typical of the over-engineering at these bloated vendors.
That is why I always argue for linked CSS and scripts. Less code and more content is why these frameworks were invented. Linked and cached code means the SIMPLER, OLDER model has worked better: fast delivery of smaller markup pages that cache tiny kilobytes of linked ECMAScript and CSS libraries, so that less code is used to display content. The browser's relationship with the server is now so fast and efficient compared to years ago that the initial caching of these smaller linked files directly from the server (rather than giant inline pages of duplicate scripts yanked down via Web API in Angular on every page view) means linked resources are delivered much faster over the initial visit to a typical web domain.
It's only recently that the 'script kiddies' have forgotten all this and started going backwards to a failed way of using local embedded and inline styles and scripts, which we stopped using 20 years ago for a reason. It is a very poor choice, and it shows inexperience with the web and its markup and content model by many new developers today.
Stokely
