How much speed is gained with RequireJS/AMD in JS? [closed] - javascript

How much faster is requireJS actually, on a large website?
Has anyone done any tests on the speed of large websites that use asynchronous loading vs not?
For instance, using Backbone with a lot of views (> 100), is it better to simply have a views object that gets loaded with all the views at once and is then always available, or should they all be loaded asynchronously as needed?
Also, are there any differences for these considerations for mobile vs desktop? I've heard that you want to limit the number of requests on mobile instead of the size.

I don't believe that the intent of require.js is to load all of your scripts asynchronously in production. In development, async loading of each script is convenient because you can make changes to your project and reload without a "compile" step. However, in production you should be combining all of your source files into one or more larger modules using the r.js optimizer. If your large webapp can defer loading of a subset of your modules until a later time (e.g. after a particular user action), these modules can be optimized separately and loaded asynchronously in production.
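For illustration, a minimal r.js build profile could look something like this (the file and module names are assumptions, not taken from the question):
({
    baseUrl: "js",
    mainConfigFile: "js/main.js",  // reuse the app's paths/shim configuration
    name: "main",                  // entry module whose dependency graph is traced
    out: "dist/main-built.js",     // single combined, minified output file
    optimize: "uglify"
})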
Regarding the speed of loading a single large JS file vs multiple smaller files, generally:
“Reduce HTTP requests” has become a general maxim for speeding up frontend performance, and that is a concern that’s even more relevant in today’s world of mobile browsers (often running on networks that are an order of magnitude slower than broadband connections). [reference]
But there are other considerations such as:
Mobile Caches: iPhones limit the size of files they cache so big files may need to be downloaded each time, making many small files better.
CDN Usage: If you use a common 3rd party library like jQuery it's probably best not to include it in a big single JS file and instead load it from a CDN since a user of your site may already have it in their cache from another website (reference). For more info, see update below
Lazy Evaluation: AMD modules can be lazily parsed and evaluated, allowing download on app load while deferring the cost of parse+eval until the module is needed (a small sketch follows this list). See this article and the series of other older articles that it references.
Target Browser: Browsers limit the number of concurrent downloads per hostname. For example, IE 7 will only download two files from a given host concurrently. Others limit to 4, and others to 6. [reference]
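As a sketch of the lazy-evaluation point above, one common pattern is to request a module only when it is actually needed; the module name and click target below are hypothetical:
// Download, parse and evaluate the reports module only on first use.
document.getElementById("show-report").addEventListener("click", function () {
    require(["app/reports"], function (reports) {
        reports.render();
    });
});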
Finally, here's a good article by Steve Souders that summarizes a bunch of script loading techniques.
Update: Re CDN usage: Steve Souders posted a detailed analysis of using a CDN for 3rd party libraries (e.g. jQuery) that identifies the many considerations, pros and cons.

This question is a bit old now, but I thought I might add my thoughts.
I completely agree with rharper on using r.js to combine all your code for production, but there is also a case for splitting functionality.
For single-page apps I think having everything together makes sense. For large-scale, more traditional page-based websites with in-page interactions, this can be quite cumbersome and result in loading a lot of unnecessary code for many users.
The approach I have used a few times is
define core modules (needed on all pages for them to operate properly), this gets combined into a single file.
create a module loader that understands DOM dependencies and paths to modules
on doc.ready, loop through the module loader and asynchronously load the modules needed for enhanced functionality on specific pages.
The advantage here is that you keep the initial page weight down, and as additional scripts are loaded asynchronously after page load the perceived performance should be faster. That said, all functionality loaded this way should be approached as progressive enhancement (i.e. ajax on forms) so that in the event of slow loading or errors the basic functionality is still available.
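A minimal sketch of that pattern, assuming modules are declared through data-module attributes in the markup (the attribute name and module paths are made up):
// On doc.ready, find every element that declares a module and load it asynchronously.
$(function () {
    $("[data-module]").each(function () {
        var el = this;
        require(["modules/" + el.getAttribute("data-module")], function (mod) {
            mod.init(el); // each module is assumed to expose an init(element) function
        });
    });
});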

Related

Javascript/jQuery best practice for web app: centralization vs. specification [closed]

Imagine a web app that has dozens of page groups (pg1, pg2, ...), and some of these page groups need some JavaScript code specific only to them, but not to the entire app. For example, some visual fixes on window.resize() might be relevant only in pg2 but nowhere else.
Here are some possible solutions:
1/ Centralized: having one script file for the entire app that deals with all page groups. It's quite easy to know if the relevant DOM object is present, so all irrelevant pages simply do a minor extra if().
Biggest advantage is that all JS is loaded once for the entire web app and no modification of the HTML code is needed. Disadvantage is that additional checks are added to irrelevant pages.
2/ Mixed: the centralized script checks for the existence of a specific function on a page and launches it if it exists. For example, we could add:
if (typeof page_specific_resize === 'function') page_specific_resize();
The specific page group in this case will have:
<script>
  function page_specific_resize() {
    // ....
  }
</script>
Advantage is that the code exists only for relevant pages and so isn't tested on every page. Disadvantage is the additional size added to the HTML output for the entire page group. If there are more than a few lines of code, the page group could load an additional script specific to it, but then we're adding an HTTP call there to possibly save a few kilobytes in the centralized script.
Which is the best practice? Please comment on these solutions or suggest your own solution. Adding some resources to support your claims for what's better (consider performance and ease of maintenance) would be great. The most detailed answer will be selected. Thanks.
It's a bit tough to think of a best solution, since it's a hypothetical scenario and we don't have numbers to crunch: which pages are loaded the most, how many there are, the total script size, etc.
That said, I didn't find the one specific answer, the Best Practice™, but rather general points that people agree on.
1) Caching:
According to this:
https://developer.yahoo.com/performance/rules.html
"Using external files in the real world generally produces faster
pages because the JavaScript and CSS files are cached by the browser"
2) Minification
Still according to the Yahoo link:
"[minification] improves response time performance because the size of the downloaded file is reduced"
3) HTTP Requests
It's best to reduce HTTP calls (based on community answer).
One big javascript file or multiple smaller files?
Best practice to optimize javascript loading
4) Do you need that specific script at all?
According to: https://github.com/stevekwan/best-practices/blob/master/javascript/best-practices.md
"JavaScript should be used to decorate your site with additional functionality, and should not be required for your site to be operational."
It depends on the resources you have to load, and on how frequently a specific page group is loaded or how frequently you expect it to be requested. Is the web app a single page? What does each specific script do?
If the script loads a form, the user will probably not need to visit the page more than once, and will need an internet connection to post the data later anyway.
But if it's a script to resize a page and the user has some connection hiccups (e.g. visiting your web app on a mobile, while taking the subway), it may be better to have the code already loaded so the user can freely navigate. According to the GitHub link I posted earlier:
"Events that get fired all the time (for example, resizing/scrolling)"
is one thing that should be optimized, because it's directly related to performance.
Minifying all the code into one JS file to be cached early will reduce the number of requests made. Also, it may take a few seconds for a connection to be established, but only milliseconds to process a bunch of "if" statements.
However, if you have a heavy JS file for just one feature which is not the core of your app (say, this one single file is almost n% the size of the total of other scripts combined), then there is no need to make the user wait for that specific feature.
"If your page is media-heavy, we recommend investigating JavaScript techniques to load media assets only when they are required."
This is the holy grail of JS and hopefully what modules in ECMA6/7 will solve!
There are module loaders on the client such as JSPM, and there are now bundlers in Node.js, such as webpack, that can split client code into chunks and even hot-swap it at runtime.
I experimented successfully last year with creating a simple application that only loaded the required chunk of code/file as needed at runtime:
It was a simple test project I called praetor.js on github: github.com/magnumjs/praetor.js
The gh-pages-code branch can show you how it works:
https://github.com/magnumjs/praetor.js/tree/gh-pages-code
In the main.js file you can see how this is done with webpackJsonp once compiled:
JS:
module.exports = {
    getPage: function (name, callback) {
        // no better way to do this with webpack for chunk files ?
        if (name.indexOf("demo") !== -1) {
            require(["./modules/demo/app.js"], function (require) {
                callback(name, require("./modules/demo/app.js"));
            });
        }
        // ..... (other page branches elided)
    }
};
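Hypothetical usage of that loader (assuming the exported object above is available as pages, and that the chunk exports an init function):
pages.getPage("demo", function (name, demoApp) {
    demoApp.init(); // runs only after the demo chunk has been fetched
});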
To see it working live, go to http://magnumjs.github.io/praetor.js/ and open the network panel and view the chunks being loaded at runtime when navigating via the select menu on the left.
Hope that helps!
It is generally better to have fewer HTTP requests, but it depends on your web app.
I usually use requirejs to include the right models and views I need in each file.
While in development it saves me considerable time on debugging (because I know which files are present), in production this could be problematic considering the number of requests.
So I use r.js, which is conceived especially for requirejs, to compile my JS files when it's time for deployment.
Hope this helps

What are the best practices for including CSS and JavaScript in a page? [closed]

I would like to know if it is advisable to embed (inline) all CSS and JavaScript required by a webpage into <style> and <script> tags, instead of letting the browser download these files. Is such a practice advisable?
My "application" is a SPA and I have managed to compile everything, even images and font-icons (as base64) into a single index.html, but I am not sure if this is a common practice.
Thanks in advance.
You are ignoring some crucial things:
Browsers can fetch specific resources in parallel, thus reducing load time compared to the "pack it all together" approach.
Browsers can apply different caching policies to different types of resources, which can also be used for some clever time- and/or bandwidth-saving tuning.
People can get some useful data even if not all resources are loaded.
Not all functionality in SPA is heavily used, so sometimes it makes sense to load some entities lazily, on demand.
This is a very basic and simplified overview; there are a lot of things to consider here. Moreover, bundling into bigger chunks is actually used in practice; quite often all JS resources are bundled, for instance. But trying to get rid of every additional HTTP request will definitely make your architecture less flexible, less cacheable, and so on. So, it's overkill.
Best practice would be to split resources (scripts, CSS, images, etc.) into separate files, which allows the browser to download and cache each resource for future reuse (even on other pages). But browsers limit parallel connections per origin to six (at the time of writing), which is why a lot of external resources on a page cause poor page-loading performance and a bad waterfall.
There are a lot of techniques to improve performance, such as bundling, domain sharding, image sprites, etc. Also, for some critical resources you can use the inlining technique, which lets browsers use those resources instantly without additional requests. For example, you can embed all the resources (images, CSS, scripts) required for a loading indicator, and the browser will render it without additional requests.
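As a small sketch of the lazy side of this, non-critical CSS can be attached after the page is usable while the critical rules (e.g. for the loading indicator) stay inlined; the file path is a placeholder:
// Inject a non-critical stylesheet after load so it doesn't block the first render.
window.addEventListener("load", function () {
    var link = document.createElement("link");
    link.rel = "stylesheet";
    link.href = "/css/non-critical.css"; // hypothetical path
    document.head.appendChild(link);
});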
For the best development style, do not embed resources; use separate files. If you care about performance, you should investigate the waterfall of your page (e.g. here, or the network tab in any browser's developer tools) and use some of these techniques to improve it. If you are interested in this field, I recommend the books below:
High Performance Web Sites by Steve Souders
Even Faster Web Sites by Steve Souders
High Performance Browser Networking by Ilya Grigorik
Note that these techniques are relevant only for HTTP/1.1. For HTTP/2 they may even be harmful, because the new version is designed to address exactly these performance issues.
No; always avoid inline styling and scripting, to reduce the size of the HTML file. Also, separating your HTML, CSS, and JS keeps your code clean, semantic, and reusable for other external pages or applications that may require a common CSS property or script.
It's all about where in the pipeline you need the CSS as I see it.
inline css
Pros: Great for quick fixes/prototyping and simple tests without having to swap back and forth between the .css document and the actual HTML file.
Pros: Many email clients do NOT allow the use of external .css referencing because of possible spam/abuse. Embedding might help.
Cons: Fills up HTML space/takes bandwidth; not reusable across pages, not even in iframes.
embedded css
Pros: Same as above regarding prototype, but easier to cut out of the final prototype and put into an external file when templates are done.
Cons: Some email clients do not allow styles in the [head] as the head-tags are removed by most webmail clients.
external css
Pros: Easy to maintain and reuse across websites with more than 1 page.
Pros: Cacheable = less bandwidth = faster page rendering after second page load
Pros: External files including .css can be hosted on CDNs, thereby making fewer requests to the firewall/webserver hosting the HTML pages (if on different hosts).
Pros: Compilable, you could automatically remove all of the unused space from the final build, just as jQuery has a developer version and a compressed version = faster download = faster user experience + less bandwidth use = faster internet! (we like!!!)
Cons: Normally removed from HTML mails = messy HTML layout.
Cons: Makes an extra HTTP request per file = more resources used in the Firewalls/routers.
Source/Reference : here
Keeping all the HTML, CSS, and JavaScript code in one file can make it difficult to work with. Stylesheet and JavaScript files must contain <style> and <script> tags respectively, because they are HTML snippets and not pure .css or .js files.
Many web developers recommend loading JavaScript code at the bottom of the page to increase responsiveness, and this is even more important with the HTML service. In the NATIVE sandbox mode, all scripts you load are scanned and sanitized client-side, which may take a couple of seconds. Moving your <script> tags to the end of your page will let HTML content render before the JavaScript is processed, allowing you to present a spinner or other message to the user.
When you're hard at work on your markup, sometimes it can be tempting to take the easy route and sneak in a bit of styling.
<p style="color: red;">I'm going to make this text red so that it really stands out and makes people take notice!</p>
Sure -- it looks harmless enough. However, this points to an error in your coding practices.
Instead, finish your markup, and then reference that P tag from your external stylesheet.
someElement > p { color: red;}
Remember -- the primary goal is to make the page load as quickly as possible for the user. When loading a script, the browser can't continue on until the entire file has been loaded. Thus, the user will have to wait longer before noticing any progress.
If you have JS files whose only purpose is to add functionality -- for example, after a button is clicked -- go ahead and place those files at the bottom, just before the closing body tag. This is absolutely a best practice.

Do browsers prefer leaner JS bundles?

I'm working on an MVC application that has about 10 BIG JavaScript libraries (jquery, modernizr, knockout, flot, bootstrap...), about 30 jQuery plugins, and each view (hundreds of them) has its own corresponding JavaScript file.
The default MVC4 bundling is used, but all these JavaScript files are packaged in two bundles; one containing all the plugins and libraries, and one for the view specific files. And these two bundles are loaded on EVERY page, regardless if needed or not.
Now, they're loaded only the first time the user opens the application, but even minified the two are about 300 KB (way more raw), and the bulk of that code is very specific to certain pages.
So, is it better for the browsers to have 2 giant bundles, or to create "smarter" and lighter bundles specific to pages, but have more of them? The browser would cache them regardless first time they're opened, but is there any performance benefit to having less javascript loaded per page vs having all of it loaded on every page?
If there's any chance of not needing them for a session then it would make sense to split them into smaller bundles. Obviously any bytes that you don't have to send are good bytes ;)
You're right about the caching somewhat eliminating this problem as once you need it once it can be cached, but if, for example, you have a calendar page and a news page, it's conceivable that someone could not care at all about the calendar and so it wouldn't make sense to load it.
That said, you can probably go overboard on splitting things up and the overhead caused by creating each new request will add up to more than you save by not loading larger libraries all at once.
The size of the files on its own is irrelevant to a browser; the size of the page as a whole is what matters to the user's computer, as it impacts processor, network, and memory (and the performance of those three components will depend somewhat on the browser used).
Many small files will probably give a better response on slow clients, because each file downloads and executes as it arrives, versus waiting for one big file to download (and for memory to be allocated to read it) and then executing the scripts.
People will probably suggest to go easy on the scripts and plugins if you want a leaner web application.
Concepts like image sprites and JS bundling are inventions due to the goal of minimising HTTP requests. Each HTTP request has an overhead and can result in bottlenecks, so it's better to have one big bundle than many small bundles.
Having said that, as Grdaneault said, you don't want users to load JS that they won't use.
So the best approach would be to bundle all the common stuff into one, then do separate bundles for uncommon stuff. Possibly bundle per view, depends on your structure. But don't let your bundles overlap (e.g. bundle A has file A & B, bundle B has file A & C), as this will result in duplicate loading.
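To illustrate the no-overlap idea in RequireJS terms rather than the MVC bundler (module names here are invented), an r.js build can exclude the shared bundle from each view bundle:
({
    baseUrl: "js",
    dir: "dist",
    modules: [
        { name: "common" },                               // libraries and shared plugins
        { name: "views/calendar", exclude: ["common"] },  // view bundle without duplicated files
        { name: "views/news", exclude: ["common"] }
    ]
})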
Though 30 plugins, wow. The initial load is just one of the many issues to sort out. Think carefully as to whether you need them all - not everyone will have an environment that's as performant as you hopefully do!

What is the acceptable number of javascript external linked files [closed]

What is an acceptable number of externally linked JavaScript files within HTML?
And can a browser ever fail to download an external JS file? If so, why might that happen?
Thanks in advance.
Take into consideration that every external script will take time to load, and the server that serves it may be offline.
You should consider including only the scripts you use on the current page and not all libraries in the world for small things.
In my opinion, an acceptable number of external files is 0.
If you want your webpage to run smoothly, you should not consider loading anything external.
External files are often included for testing purposes, when you don't want to save scripts or CSS on localhost (e.g. jQuery and jQuery UI). But in live production you should have them on your own host/server; the external server may not be available anymore in the future.
A browser does NOT choose what to download; it downloads what it is asked to. But if a script fails, or there are actions in that script that require an additional library and that library isn't available, the browser will stop loading it and will give errors.
The answer to this question is quite complicated. You have to take into account caching, number of simultaneous requests and things like authentication.
The disadvantage of inline scripts is that you can't take good advantage of caching. If you move your scripts to external files, revisiting users may still have your files in cache and the page will load faster for them. How many scripts you should have depends on the number of simultaneous requests a browser will make (typically 4), the size of the scripts, and their execution complexity. Keep in mind that CSS files, or basically any resource on the same domain, count towards this limit as well. You may ignore stylesheets with media="print", as modern browsers will delay loading them.
If you have more than 4 scripts, the 5th script will only start loading when one of the other 4 has been loaded. If this script contains DOM-ready event code, it will be delayed. You could consider merging scripts or changing the order in which they are loaded.
Another problem to be well aware of is updates. If you update your scripts and users still have the old one cached you're going to run into problems. Some users might even get some of the newer scripts and some of the older scripts. Make sure you have a mechanism in place for this. I've found fingerprinting to be really useful in cache management.
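A minimal sketch of fingerprinting as part of a Node-based build step (the file names are placeholders):
// Copy app.js to a name that embeds a hash of its contents, so a changed file
// gets a new URL and stale cached copies are bypassed.
var fs = require("fs");
var crypto = require("crypto");

var source = fs.readFileSync("public/js/app.js");
var hash = crypto.createHash("md5").update(source).digest("hex").slice(0, 8);
fs.writeFileSync("public/js/app." + hash + ".js", source);
// The page template then references /js/app.<hash>.js instead of /js/app.js.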
You could consider a lazy loading principle where you first only load the most basic scripts for showing what the user absolutely must see. Then load other scripts in the background as they are needed.
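A tiny sketch of that lazy-loading principle using plain DOM script injection (the URL is a placeholder):
// Load a non-essential script in the background without blocking rendering.
function loadScript(src, done) {
    var s = document.createElement("script");
    s.src = src;
    s.async = true;
    s.onload = done;
    document.head.appendChild(s);
}

loadScript("/js/comments-widget.js", function () {
    // enhance the page once the extra script has arrived
});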
Then there are 3rd-party services like Google Maps; you can't really cache these files because they change over time and may contain authentication steps to prevent abuse and such. You have limited control over these scripts.
Overall it depends on the kind of website you're making. If you're making more of a business application a relatively long load time may be acceptable. If you're making a fancy promotional site, load time is absolutely key and inline scripts may be for you.
This is quite an advanced topic, don't worry about it too much unless you run into actual performance issues. Premature optimization is the root of all evil.

How to optimally serve and load JavaScript files?

I'm hoping someone with more experience with global-scale web applications could clarify some questions, assumptions and possible misunderstandings I have.
Let's take a hypothetical site (heavy amount of client-side / dynamic components) which has hundreds of thousands of users globally and the sources are being served from one location (let's say central Europe).
If the application depends on popular JavaScript libraries, would it be better to take it from the Google CDN and compile it into one single minified JS file (along with all application-specific JavaScript) or load it separately from the Google CDN?
Assetic VS headjs: Does it make more sense to load one single JS file or load all the scripts in parallel (executing in order of dependencies)?
My assumptions (please correct me):
Compiling all application-specific/local JS code into one file, using CDNs like Google's for popular libraries, etc. but loading all of these via headjs in parallel seems optimal, but I'm not sure. Server-side compiling of third party JS and application-specific JS into one file seems to almost defeat the purpose of using the CDN since the library is probably cached somewhere along the line for the user anyway.
Besides caching, it's probably faster to download a third party library from Google's CDN than the central server hosting the application anyway.
If a new version of a popular JS library is released with a big performance boost, is tested with the application and then implemented:
If all JS is compiled into one file then every user will have to re-download this file even though the application code hasn't changed.
If third-party scripts are loaded from CDNs, then the user only has to download the new version from the CDN (or from a cache somewhere).
Are any of the following legitimate worries in a situation like the one described?
Some users (or browsers) can only have a certain number of connections to one hostname at once, so retrieving some scripts from a third-party CDN would result in overall faster loading times.
Some users may be using the application in a restricted environment, therefore the domain of the application may be white-listed but not the CDNs's domains. (If it's possible this is realistic concern, is it at all possible to try to load from the CDN and load from the central server on failure?)
Compiling all application-specific/local JS code into one file
Since some of our key goals are to reduce the number of HTTP requests and minimize request overhead, this is a very widely adopted best practice.
The main case where we might consider not doing this is in situations where there is a high chance of frequent cache invalidation, i.e. when we make changes to our code. There will always be tradeoffs here: serving a single file is very likely to increase the rate of cache invalidation, while serving many separate files will probably cause a slower start for users with an empty cache.
For this reason, inlining the occasional bit of page-specific JavaScript isn't as evil as some say. In general though, concatenating and minifying your JS into one file is a great first step.
using CDNs like Google's for popular libraries, etc.
If we're talking about libraries where the code we're using is fairly immutable, i.e. unlikely to be subject to cache invalidation, I might be slightly more in favour of saving HTTP requests by wrapping them into your monolithic local JS file. This would be particularly true for a large code base heavily based on, for example, a particular jQuery version. In cases like this bumping the library version is almost certain to involve significant changes to your client app code too, negating the advantage of keeping them separate.
Still, mixing request domains is an important win, since we don't want to be throttled excessively by the maximum connections per domain cap. Of course, a subdomain can serve just as well for this, but Google's domain has the advantage of being cookieless, and is probably already in the client's DNS cache.
but loading all of these via headjs in parallel seems optimal
While there are advantages to the emerging host of JavaScript "loaders", we should keep in mind that using them does negatively impact page start, since the browser needs to go and fetch our loader before the loader can request the rest of our assets. Put another way, for a user with an empty cache a full round-trip to the server is required before any real loading can begin. Again, a "compile" step can come to the rescue - see require.js for a great hybrid implementation.
The best way of ensuring that your scripts do not block UI painting remains to place them at the end of your HTML. If you'd rather place them elsewhere, the async or defer attributes now offer you that flexibility. All modern browsers request assets in parallel, so unless you need to support particular flavours of legacy client this shouldn't be a major consideration. The Browserscope network table is a great reference for this kind of thing. IE8 is predictably the main offender, still blocking image and iFrame requests until scripts are loaded. Even back at 3.6 Firefox was fully parallelising everything but iFrames.
Some users may be using the application in a restricted environment, therefore the domain of the application may be white-listed but not the CDNs's domains. (If it's possible this is realistic concern, is it at all possible to try to load from the CDN and load from the central server on failure?)
Working out if the client machine can access a remote host is always going to incur serious performance penalties, since we have to wait for it to fail to connect before we can load our reserve copy. I would be much more inclined to host these assets locally.
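For reference, the usual fallback pattern looks like the snippet below; it only works for libraries that expose a detectable global (jQuery here), the local path is a placeholder, and it still pays the connection-timeout cost described above:
// Placed immediately after the CDN script tag: if the global is missing, fall back to a local copy.
if (!window.jQuery) {
    document.write('<script src="/js/vendor/jquery.min.js"><\/script>');
}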
Many small JS files are better than a few large ones for many reasons, including changes/dependencies/requirements.
JavaScript/CSS/HTML and any other static content is handled very efficiently by any of the current web servers (Apache/IIS and many others); most of the time one web server is more than capable of serving hundreds or thousands of requests per second, and in any case this static content is likely to be cached somewhere between the client and your server(s).
Using any external (not controlled by you) repositories for code that you want to use in a production environment is a NO-NO (for me and many others): you don't want a sudden, catastrophic and irrecoverable failure of your whole site's JavaScript functionality just because somebody somewhere pressed commit without thinking or checking.
Compiling all application-specific/local JS code into one file, using
CDNs like Google's for popular libraries, etc. but loading all of
these via headjs in parallel seems optimal...
I'd say this is basically right. Do not combine multiple external libraries into one file, since—as it seems you're aware—this will negate the majority case of users' browsers having cached the (individual) resources already.
For your own application-specific JS code, one consideration you might want to make is how often this will be updated. For instance if there is a core of functionality that will change infrequently but some smaller components that might change regularly, it might make sense to only compile (by which I assume you mean minify/compress) the core into one file while continuing to serve the smaller parts piecemeal.
Your decision should also account for the size of your JS assets. If—and this is unlikely, but possible—you are serving a very large amount of JavaScript, concatenating it all into one file could be counterproductive as some clients (such as mobile devices) have very tight restrictions on what they will cache. In which case you would be better off serving a handful of smaller assets.
These are just random tidbits for you to be aware of. The main point I wanted to make was that your first instinct (quoted above) is likely the right approach.
