I'm developing a Node.js application. My client-side JavaScript is about 1,000 LoC, plus 3 libraries. I minified and concatenated my JavaScript files into a single file of 150 kB (50 kB gzipped).
I wonder at what file size it becomes reasonable to think about using AMD, require.js, or similar.
Short answer:
~100-150 kB is a big warning sign, but file size is not the only indicator. Checking whether, and by how much, loading the script blocks responsiveness and UI rendering (take a look at the Network panel in Chrome's DevTools) is just as important for making a good decision.
Long answer:
The reason for using asynchronous loading is less about file size and more about the usage patterns and timing of parts of the code. The idea is to load the minimum required up front, so the page loads as fast as it can and the user gets the most responsive experience, and then to load, in the background, resources that will be needed later.
A 150 kB file is a lot (though yours, at 50 kB gzipped, is less scary), but if loading the JS file doesn't block UI rendering, then file size is less of a problem. In other words, if the user doesn't notice a page-loading delay, there isn't a file-size problem. In that case you might also want to consider using the async script attribute. This is of course moot for applications built on client-side frameworks like Angular or ExtJS, where the JS renders the UI.
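For reference, a minimal sketch of what that looks like (the bundle's file name is hypothetical): with async, the browser downloads the script in parallel with HTML parsing and executes it as soon as it arrives.

    <!-- bundle.min.js is a hypothetical file name; 'async' downloads the
         script without blocking parsing and runs it as soon as it's ready -->
    <script async src="/js/bundle.min.js"></script>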
Ask yourself how much of your code is used by how many of your users, and how often (be honest with yourself). A rule of thumb is that if a UI element appears later in the application flow, its code can be loaded later. But remember that, as with all rules of thumb, there are exceptions, so pay attention. Good analytics data is better than an arbitrary file-size threshold: you can see exactly which parts of the UI (and code) are used and when. Maybe some only need to load very late in the flow.
If all the code is needed upfront (which is rarely the case), then AMD is probably not for you.
If your application has, let's say, a statistics-charts dialog that uses a lot of unique code, but only 20% of users click the button that opens it, and that button lives on the settings page, then loading all of its code along with everything else on every other page is a waste. It's better to load it when the user reaches the settings page, or when the button is clicked (or even when the mouse hovers over the button) if there isn't a settings page.
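As a sketch of that pattern in AMD terms (the module and element names here are hypothetical), require.js lets you pull in the dialog's module only when the button is actually used:

    // Load the charts dialog only when the user asks for it.
    // 'statsCharts' is a hypothetical AMD module defining the dialog.
    document.getElementById('open-stats').addEventListener('click', function () {
        require(['statsCharts'], function (statsCharts) {
            statsCharts.open(); // the module and its dependencies arrive on demand
        });
    });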
Understanding usage patterns and timing is the key criterion.
These two articles are very interesting and might help in making some deployment decisions:
https://alexsexton.com/blog/2013/03/deploying-javascript-applications/
http://tomdale.net/2012/01/amd-is-not-the-answer/
And if you still want to load everything in one file, require.js also has an optimizer tool worth a look:
http://requirejs.org/docs/optimization.html
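For what it's worth, a minimal build profile for the optimizer might look like this (paths and file names are hypothetical); run it with r.js -o build.js:

    // build.js - a minimal r.js optimizer profile (file names are hypothetical)
    ({
        baseUrl: "js",        // where the modules live
        name: "main",         // the entry-point module
        out: "main-built.js", // single optimized output file
        optimize: "uglify"    // minify the result (this is the default)
    })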
Related
I'm building a blogging plugin that enables commenting on specific pages. The way it works at the moment is that you include a js script in your html page, which then triggers at load time and adds a comment box to the page.
It's all running properly; however, when running Lighthouse reports I still get a "Remove unused JavaScript" warning.
I assume this is happening because the code isn't immediately used, but rather initiated once the page has fully loaded?
Any idea how I could fix this?
"Remove Unused JavaScript" isn't telling you that the script isn't used, it is telling you that within that script you have JavaScript that isn't needed for page load.
What it is encouraging you to do is "code splitting", which means you serve JS essential for the initial page content (known as above the fold content) first and then later send any other JS that is needed for ongoing usage on the site.
If you look at the report and open the entry, you will see a value showing how much of the code base is not needed initially.
If the script is small (less than 5 kB), which I would imagine it is if all it does is create a comment box, then simply ignore this point. If it is larger (and you can't easily make it smaller), split it into "initialisation" code and "usage" code (the rest of the code), and serve the "initialisation" code early and the rest after all essential items have loaded, or on intent.
It does not contribute to your score directly and is there purely to indicate best practices and highlight potential things that are slowing your site down.
Additional
From the comments, the author has indicated the library is rather large at 70 kB, due to the fact that it includes a WYSIWYG editor, etc.
If you are trying to load a large library while also being conscious of performance, the trick is to load the library either "on intent" or after critical files have loaded.
There are several ways to achieve this.
On intent
If you have a complex item that must appear "above the fold" and you want high performance scores, you need to make a trade-off.

One way to do this is, instead of initialising the component immediately, to defer the initialisation of the library / component until someone needs to use it.

For example, you could have a button on the page that says "leave a comment" (or whatever is appropriate), linked to a function that only loads the library once it is clicked.

Obviously the trade-off here is that there will be a slight delay while the library loads and initialises (but a simple loading spinner should suffice, as even a 70 kB library should only take a second or so to load, even on a slow connection).

You can even fetch the script once someone's mouse cursor hovers over the button; those extra few milliseconds can add up to a perceivable difference.
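A minimal sketch of that idea (the element IDs, script URL, and CommentWidget global are all hypothetical): inject the library's script tag on click, and optionally start the download early on hover.

    // Resolves once the given script has loaded.
    function loadScript(src) {
        return new Promise(function (resolve, reject) {
            var s = document.createElement('script');
            s.src = src;
            s.onload = resolve;
            s.onerror = reject;
            document.head.appendChild(s);
        });
    }

    var pending; // cache the in-flight load so hover + click don't fetch twice
    function loadCommentLib() {
        pending = pending || loadScript('/js/comment-widget.js'); // hypothetical URL
        return pending;
    }

    var btn = document.getElementById('leave-a-comment'); // hypothetical ID
    btn.addEventListener('mouseover', loadCommentLib);    // warm the cache on intent
    btn.addEventListener('click', function () {
        loadCommentLib().then(function () {
            window.CommentWidget.init('#comments'); // hypothetical library global
        });
    });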
Deferred
We obviously have the async and defer attributes.

The problem is that both mean you are downloading the library alongside critical assets; the difference between them is when the script is executed.
What you can do instead is use a function that listens for the page load event (or listens for all critical assets being loaded if you have a way of identifying them) and then dynamically adds the script to the page.
This way it does not take up valuable network resources at a time when other items that are essential to page load need that bandwidth / network request slot.
Obviously the trade-off here is that the library will be ready slightly later than the main "above the fold" content. Yet again, a simple loading spinner should be sufficient to fix this problem.
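As a sketch (the script URL is hypothetical): listen for the window load event and only then add the script, so it never competes with critical assets for bandwidth.

    // Inject the library only after everything critical has loaded.
    window.addEventListener('load', function () {
        var s = document.createElement('script');
        s.src = '/js/comment-widget.js'; // hypothetical URL
        document.body.appendChild(s);    // injected scripts don't block parsing
    });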
I am creating a website professionally for a client as my first project, and I am using a lot of libraries, for instance velocity.js, jQuery, jQuery UI, animate.css, and an image-slider plugin for jQuery. Right now I am using the min versions and all of the files are hosted on my own machine, but when I put the site live, will it severely affect the loading time of the website or will it be normal?
You can put it up and test it with an online speed-test tool, but the best way is to put it live and test the ping.
Yes, it will severely affect the loading of the page. Chrome comes with developer tools out of the box, and Firebug for Firefox is only a couple of clicks away. That, combined with the RTT and bandwidth to the site, gives you enough information to estimate how slow the first-hit page load will be.

Assuming that caching is configured correctly, subsequent page transitions should be faster - but even loading all this JavaScript from cache and parsing it will have a big impact on load time. Note that in the case of DokuWiki, the CMS already handles merging of JavaScript into a single file.

Using PJAX or similar will help with subsequent transitions. Using online repositories for standard libraries does not help with performance. Minifying the JavaScript does not give very big wins (since you are already compressing the files on the fly, aren't you?).

There are big wins to be had from deferring JavaScript loading and stripping/optimizing/merging CSS files.
There are a lot of other things you can do to make your page render faster - but space here is too limited.
I'm working on an MVC application that has about 10 BIG JavaScript libraries (jQuery, Modernizr, Knockout, Flot, Bootstrap...), about 30 jQuery plugins, and each view (hundreds of them) has its own corresponding JavaScript file.

The default MVC4 bundling is used, but all these JavaScript files are packaged into two bundles: one containing all the plugins and libraries, and one for the view-specific files. And these two bundles are loaded on EVERY page, regardless of whether they're needed or not.
Now, they're loaded only the first time the user opens the application, but even minified the two are about 300 KB (way more raw), and the bulk of that code is very specific to certain pages.
So, is it better for browsers to have 2 giant bundles, or to create "smarter", lighter bundles specific to pages, but have more of them? The browser will cache them either way the first time they're requested, but is there any performance benefit to loading less JavaScript per page versus loading all of it on every page?
If there's any chance of not needing them for a session then it would make sense to split them into smaller bundles. Obviously any bytes that you don't have to send are good bytes ;)
You're right that caching somewhat eliminates this problem - once a bundle has been needed once, it can be served from cache. But if, for example, you have a calendar page and a news page, it's conceivable that someone won't care at all about the calendar, so it wouldn't make sense to load it for them.
That said, you can probably go overboard on splitting things up and the overhead caused by creating each new request will add up to more than you save by not loading larger libraries all at once.
The size of the files on its own is irrelevant to the browser; the size of the page as a whole is what matters to the user's computer, as it impacts the processor, network, and memory (and how those three components perform will somewhat depend on the browser used).

Many small files will probably feel more responsive on slow clients, because each file is executed as soon as it downloads, versus waiting for a big file to download (and for memory to be allocated to read it) before executing the scripts.
People will probably suggest to go easy on the scripts and plugins if you want a leaner web application.
Concepts like image sprites and JS bundling were invented to minimise the number of HTTP requests. Each HTTP request has overhead and can become a bottleneck, so it's better to have one big bundle than many small ones.
Having said that, as Grdaneault said, you don't want users to load JS that they won't use.
So the best approach would be to bundle all the common stuff into one bundle, then create separate bundles for the uncommon stuff - possibly one bundle per view, depending on your structure. But don't let your bundles overlap (e.g. bundle A has files A & B, bundle B has files A & C), as this will result in duplicate loading.

Though 30 plugins - wow. The initial load is just one of the many issues to sort out. Think carefully about whether you need them all: not everyone will have an environment as performant as the one you (hopefully) develop on!
Sites like Facebook use "lazy" loading of js.
Take into consideration that I have one server with heavy traffic.
I'm interested - which one is better?
If I make many HTTP requests at once, the page loads more slowly (due to the browser's limit on parallel requests per host - historically as few as 2 at once).

If I make one HTTP request for all the code combined, traffic (in GB) goes up, but the Apache workers get to rest a little more. Still, we may get slower loading of the page.

Which is faster in the end?
Fewer requests! That's the reason we combine JS files, combine CSS files, use image sprites, etc. You see, the problem of the web is not the speed of processing by the server or the browser; the biggest bottleneck is latency! You should look up Steve Souders' talks.
It really depends on the situation, device, audience, and internet connection.
Mobile devices, for example, need as few HTTP requests as possible, as they are on slower connections and every round trip takes longer. You could go as far as inlining (base64-encoded) images inside the CSS files.

Generally, I compress the main platform and JS libs + CSS into one file each, which are cached on a CDN. JavaScript or CSS functionality that is only used on one page I'll either inline or include in its own file. JS functionality that isn't important right away I'll move to the bottom of the page. For all files, I set a far-future HTTP Expires header so they stay in the browser cache forever (or until I update them or they get bumped out when the cache fills).

Additionally, to get around per-host download limits, you can set up CNAMEs like images.yourcdn.com and scripts.yourcdn.com so that the user can download more files in parallel. Even if you don't use a CDN, you should host your static media on a separate hostname (it can point to the same box) so that the user isn't sending cookies when there's no need to. This sounds like overkill, but cookies can easily add an extra 4-8 kB to every request.

In a development environment, you should be working with uncompressed, individual files - there's no need to merge every plugin into one script by hand, for example, as that's hard to maintain when there are updates. Instead, have a script that merges the files before testing and deployment. This sounds like a lot of work, but it's something you do once and can reuse for all future projects.
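A minimal sketch of such a merge step in Node.js (the file names are hypothetical, and this only concatenates - a real build would also minify, e.g. with UglifyJS):

    // build.js - concatenate source files into one bundle (file names are hypothetical)
    const fs = require('fs');

    const sources = ['js/jquery.js', 'js/plugins.js', 'js/app.js'];
    const bundle = sources
        .map(function (f) { return fs.readFileSync(f, 'utf8'); })
        .join(';\n'); // the semicolon guards against files missing a trailing one

    fs.writeFileSync('dist/bundle.js', bundle);
    console.log('Wrote dist/bundle.js (' + bundle.length + ' bytes)');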
TL;DR: It depends, but generally a mixture of both is appropriate. 'Cept for mobile, where fewer HTTP requests is better.
The problem is a bit more nuanced than that.

If you put your script tags anywhere but at the bottom of the page, you are going to slow down page rendering, since the browser can't do much when it hits a script tag other than download it and execute it. So if the script tag is in the header, that happens before anything else, which leads to users sitting there staring at a white screen until everything downloads.
The "right" way is to put everything at the bottom. That way, the page renders as assets are downloaded, and the last step is to apply behavior.
But what happens if you have a ton of JavaScript? (In Facebook's case, about a meg.) What you get is a page that renders but is completely unusable until the JS comes down.

At that point, you need to look at what you have and start splitting it into vital and non-vital JS. That way you can take a multi-stage approach: quickly bringing in the stuff that is necessary for the page to function at a bare-minimum level, and then loading the less essential stuff afterwards, or even on demand.

Generally, you will know when you get there; at that point you need to look at more advanced techniques like script loaders. Before that, the answer is always "fewer HTTP requests".
"The jQuery Mobile "page" structure is optimized to support either single pages, or local internal linked "pages" within a page." jQuery docs
What gives better performance for a jQuery Mobile application which runs on PhoneGap:

all pages in a single HTML file, with internal loading, or

separate pages with external links?
Any other aspects to consider?
I use jQuery Mobile, and all the sites that I've made have been one-page sites. The only external pages I create are those with embedded Google Maps, just so the iframe loading doesn't happen if the user doesn't need it.

I think it boils down to this: one page with lots of content may slow the initial load but will be snappier once loaded, whereas a tiny home page will be quick from the start, but each linked page will trigger an Ajax request. When designing for mobile, my rule of thumb is to minimize HTTP requests as much as possible. Though many users are on 3G or better networks, it can still be a wait depending on connectivity. Also, connectivity can change in an instant, and if the user has been navigating through the site successfully and all of a sudden things slow to a crawl, this may create a bit of frustration. So, from a user-experience POV, I think users are willing to wait a few extra ticks on the initial load if everything else is quick once it's loaded.

Designing all in one page is also good for development with jQM, in my opinion, because I just create a cache manifest that includes only one page (plus the CSS and JS files). Then my site is cached and runs even if the user has no connectivity. If you've worked with applicationCache, you quickly realize that the more files you have, the more difficult the cache manifest is to maintain and the slower updates become.
I can't say much about browser performance, but you should consider load times.
Multiple pages in one document are loaded with the document, so the more of them there are, the later DOMready will happen, which can give an unpleasant look. Separate pages are fetched when you need them, so if there is no reason to use a multipage document, then for an online app I'd recommend sticking with multiple HTML files.

Also, a multipage document can't do much for you if you want to stick with progressive enhancement, which is jQM's development philosophy.
Any other aspects to consider?
Yes... As far as I know, there still might be some problems (e.g. with dialogs) in multipage documents. jQM alpha 3 didn't want to display dialogs for me if there was more than one in a multipage document.

This depends on your app's size. Personally, I've found that a one-page app is more responsive if you have only a few pages; in my case it was only 3 screens loading external data, which was more responsive than 3 separate pages.
Hope that helps.