Non-blocking JavaScript and CSS in modern browsers. Is it still needed? - javascript

I am playing around with non-blocking JavaScript loading: I have a small snippet of JavaScript in my head that loads all my external files at runtime. I even took it a little further and load the CSS in a non-blocking way as well.
The articles I could find on this are a little outdated, which is why I want to know whether it is all still relevant.
Now first the scripts, they look like this:
<script>
(function () {
    var head = document.getElementsByTagName('head')[0];

    // Inject every stylesheet as a <link> element
    var styles = JSON.parse(myObject.styles);
    for (var name in styles) {
        var link = document.createElement('link');
        link.setAttribute('rel', 'stylesheet');
        link.setAttribute('type', 'text/css');
        link.setAttribute('href', styles[name]);
        head.appendChild(link);
    }

    // Inject every script as an async <script> element
    var scripts = JSON.parse(myObject.scripts);
    for (var name in scripts) {
        var e = document.createElement('script');
        e.src = scripts[name];
        e.async = true;
        head.appendChild(e);
    }
}());
</script>
Here, myObject.styles is just an object that holds the URLs of all the files.
I have run three tests; here are the results:
Normal setup
This is just the normal setup: we have 4 CSS files in the head and 3 JS files at the bottom of the page.
Here I do not see anything blocking. What I see is that everything loads at the same time.
Non-blocking JS
To take this a little further, I made ONLY the JS files non-blocking, using the script above. Suddenly my CSS files are blocking the load. This is strange, because they were not blocking anything in the first example. Why is the CSS suddenly blocking the load?
Everything non-blocking
Finally I ran a test where all the external files are loaded in a non-blocking way. Now I do not see any difference from the first method; they both look the same.
Conclusion
My conclusion is that the files are already loaded in a non-blocking way, so I do not see a need to add a special loader script.
Or am I missing something here?
More:
Article: http://www.yuiblog.com/blog/2008/07/22/non-blocking-scripts/
Question: Javascript non-blocking scripts, why don't simply put all scripts before </body> tag?
Question: Do modern browsers still limit parallel downloads?
Code: http://calendar.perfplanet.com/2010/the-truth-about-non-blocking-javascript/
Code: http://blog.fedecarg.com/2011/07/12/javascript-asynchronous-script-loading-and-lazy-loading/

Yes, in today's browsers, referenced files are loaded in a non-blocking way. But there are differences:
The ready event fires sooner if you load the "things you do not need to wait for" dynamically, as you can see from the timing of the blue bar. So actions on the page may start sooner.
Scripts referenced in the page markup (as opposed to dynamically loaded ones) are executed in order, so they must wait for each other if one takes longer to load. Dynamically loaded scripts, on the other hand, execute as soon as they have loaded, unless you set .async = false on the script element.
So, in contemporary browsers, the difference is mostly semantic: static loading simulates the old sequential behaviour, while dynamic loading is much more parallel.
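For illustration, here is a minimal sketch (the file names are placeholders) of dynamic loading that still preserves execution order by setting .async = false:
<script>
(function () {
    // Dynamically injected scripts are async by default and run as soon
    // as they finish downloading. Setting .async = false makes them
    // execute in insertion order while still downloading in parallel.
    var files = ['lib.js', 'plugin-that-needs-lib.js']; // placeholder URLs
    var head = document.getElementsByTagName('head')[0];
    for (var i = 0; i < files.length; i++) {
        var s = document.createElement('script');
        s.src = files[i];
        s.async = false; // preserve execution order
        head.appendChild(s);
    }
}());
</script>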

It depends on how many files you want to load at the same time. In your case you are using 3 JavaScript files. Different browsers have different limits, which means that if you have, for example, 7 JavaScript files in Firefox, the 7th will be loaded only after one of the first 6 has finished, since Firefox has a limit of 6 parallel downloads.
Using a loader script, or simply placing your scripts just before the </body> tag, is still a good approach. Try to repeat your test with more JavaScript files, like 10 or so.
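For example, a quick sketch for such a test (the file names here are made up) is to inject a batch of script elements and watch the network panel:
(function () {
    // Inject ten scripts and watch the network panel: requests beyond
    // the browser's per-host parallel-download limit are queued.
    var head = document.getElementsByTagName('head')[0];
    for (var i = 1; i <= 10; i++) {
        var s = document.createElement('script');
        s.src = '/js/test-file-' + i + '.js'; // hypothetical test files
        s.async = true;
        head.appendChild(s);
    }
}());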

Related

Third-party code blocked the main thread - Defer and Async don't solve it

I'm trying to improve the performance of a website that uses third party JS (as they all do :) ).
After running the Lighthouse analysis the report says:
Reduce the impact of third-party code: "Third-party code blocked the main thread"
Since every JS file blocks the critical path while it is downloaded, parsed and executed, I pushed all non-critical JS to the bottom of the page and added the defer attribute.
Nevertheless, I still see that particular JS resource listed as blocking the main thread.
Deferring the resource should download it in parallel and execute it only once rendering has finished, so I really don't get why Lighthouse keeps showing it in the list of main-thread-blocking resources. Sure, it gets parsed and executed on the main thread, but it's not blocking the critical path and it shouldn't affect the UX that much.
What is the best solution to add, for example, Tidio chat widget to the web page without affecting the lighthouse performance score?
Cheers
EDIT
I've tested, and both defer and async block the main thread.
The following code also blocks it:
window.addEventListener('DOMContentLoaded', (event) => {
    var tidioScript = document.createElement("script");
    tidioScript.src = "//code.tidio.co/xxxx.js";
    document.body.appendChild(tidioScript);
});
What works is explicitly delaying the injection of the script tag into the DOM:
setTimeout(function() {
    var tidioScript = document.createElement("script");
    tidioScript.src = "//code.tidio.co/#{tidio_id}.js";
    document.body.appendChild(tidioScript);
}, 3 * 1000);
but this simply feels wrong :/ I thought defer was supposed to achieve the same result :/
I advise you against modifying any of your code so as to accommodate 3rd party scripts.
On the contrary, it is the 3rd party scripts that should accommodate themselves around yours!
So to take full control of the timing of the loading of 3rd party scripts, don’t load them via HTML like this...
<script src="https://someplace.com/ThirdParty.js" async></script>
but rather via JS with a time delay of your choice, like this...
setTimeout(function(){
    let S = document.createElement('script');
    S.type = 'text/javascript';
    S.src = 'https://someplace.com/ThirdParty.js';
    try {
        document.getElementsByTagName('HEAD')[0].appendChild(S);
    } catch(e) {
        alert(e);
    }
}, 3000);
Also, try to keep local copies of any 3rd party scripts you are allowed to, so as to avoid the loading/timing dilemma whenever possible.
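As a variation on the delayed-injection idea above (just a sketch, not part of the original recommendation), you can key the delay off the window load event instead of a timer that starts as soon as the page begins parsing:
window.addEventListener('load', function () {
    // Wait until the page itself (including images and CSS) has finished
    // loading, then give it a little extra breathing room before
    // requesting the third-party script.
    setTimeout(function () {
        var s = document.createElement('script');
        s.src = 'https://someplace.com/ThirdParty.js'; // placeholder URL
        document.getElementsByTagName('head')[0].appendChild(s);
    }, 1000); // extra delay; tune or drop as needed
});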

Where to add asynchronous scripts in javascript

I want to load a non-blocking JavaScript file on my page. Since async is not yet supported everywhere, the best practice seems to be to load it dynamically via a small bootstrap script.
Here is an example that works fine in which it is inserted before the first <script>:
var myscript = document.createElement('script');
myscript.async = true; // cannot hurt, right?
myscript.type = 'text/javascript';
myscript.src = 'myscript.js';
var node = document.getElementsByTagName('script')[0];
node.parentNode.insertBefore(myscript, node);
I found several versions inserting the script in different places like the end of <head> or the <body>:
document.getElementsByTagName("head")[0].appendChild(myscript);
document.getElementsByTagName("body")[0].appendChild(myscript);
The placement seems to matter in some browsers, even though the load is asynchronous.
Are there any differences in terms of browser support? Performance? Blocking risk?
I don't have any constraint in terms of order (the scripts don't impact each other), but I want to make sure that if my script takes too long to load, the page content will still load just fine. I would think the last solution works best, but I am not sure of the differences.
You'll want to use something like $script.js: http://www.dustindiaz.com/scriptjs
Appending the scripts at the end of the body is the best solution here. It still lets the DOM load without blocking on the script tags. Also, if you put your scripts at the end of the document, you no longer need to wrap your functions in a DOM-ready event, because by the time your scripts start executing the DOM will already be loaded by the browser, and you can directly start manipulating it or subscribing to events.
You could try having a script that waits for the page to be complete and then loads the script that you want to add. I have done this recently: the page loads fine and then the new block appears.
var Widget = {};

Widget.myDocReadyInterval = setInterval(function () {
    if (document.readyState === "complete") {
        clearInterval(Widget.myDocReadyInterval);
        Widget.startLoading();
    }
}, 20);

Widget.startLoading = function () {
    // do what you need here...
};
Your question focuses on the load part, but actually performance can be impacted by different phases:
load
parsing
execution
For this reason, adding the script at the end of the body is usually considered the least obtrusive option.
To push it even further, you could wait for DOM ready to run your load script. In this case, it won't matter whether you attach the script to the head or the body.
[Edit] Side comment: the head and body tags are not mandatory in HTML pages. document.getElementsByTagName('script')[0] is a good approach in such edge cases, as it guarantees that you'll get an element (there's at least your load script in the page).
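A rough sketch of that DOM-ready variant, reusing the question's myscript.js example:
document.addEventListener('DOMContentLoaded', function () {
    // The DOM is fully parsed here, so rendering is no longer blocked
    // regardless of where the new script element is attached.
    var myscript = document.createElement('script');
    myscript.async = true;
    myscript.src = 'myscript.js';
    var node = document.getElementsByTagName('script')[0];
    node.parentNode.insertBefore(myscript, node);
});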

Modernizr - Which scripts get loaded asynchronously?

I have the following:
Modernizr.load([
    {
        load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js',
        complete: function () {
            if (!window.jQuery) {
                Modernizr.load('/js/jquery-1.6.2.min.js');
            }
        }
    },
    {
        load: ["/js/someplugin.js", "/js/anotherplugin.js"],
        complete: function () {
            // do some stuff
        }
    },
    {
        load: ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js'
    }
]);
I read that Modernizr loads scripts asynchronously. But in the above example, which ones are loaded async?
Do all of the following get loaded asynchronously?
jquery.min.js
someplugin.js
anotherplugin.js
ga.js
Or is it a combination of async and ordered loading, like this:
jquery.min.js is loaded first
Then someplugin.js and anotherplugin.js are loaded async
Finally, ga.js is loaded
I'm having a hard time testing which case it is.
You've selected a fairly complex example to dissect. So let's do it in steps.
The three parameter sets {...},{...},{...} will execute sequentially.
Inside the first parameter set you load jQuery from Google's CDN, then when complete you test whether jQuery actually loaded. If not (perhaps you are developing offline and the Google CDN is not reachable), you load a local copy of jQuery. So this one is "sequential", but really only one of the two will load.
In the second parameter set you load someplugin.js and anotherplugin.js simultaneously and asynchronously, so they will be downloaded in parallel. It's great when you can parallel two items at a time, as that is the "weakest link" for browsers today (yup, only IE; every other browser will parallel 6-8 files at a time).
In the third parameter set you load the Google Analytics script.
Remember that Modernizr is a collection of tools. The included loader is actually just a repackaged yepnope. So you can google for more about yepnope.
The idea with the sequential loads is to be able to ensure that dependencies load in order (for example, your jQuery plugins must load after the jQuery framework). The purpose of the parallel-download syntax in parameter set two is to speed up performance for multiple files that are not co-dependent (for example, you could load multiple jQuery plugins in parallel once jQuery is loaded, as long as they don't depend on each other).
So to answer your question, your hypothesis #2 is correct. If you'd like to explore more using Firebug, add some console.log statements in each parameter set's complete function. You should see the 3 groups complete sequentially every time. Now switch Firebug to the "Net" tab to watch the file requests go out. The files someplugin.js and anotherplugin.js won't necessarily load in the same order every time. The requests will go out in order, but their timing bars should overlap (showing them as parallel). Testing this locally will be hard because it'll be so fast. Put your two test files on a slow server somewhere, or bias them the opposite of what you are expecting (make the first file 1 MB and the second <1 kB) so you can see the overlapping downloads in the network monitor tab of Firebug (this is just an artificial scenario for testing purposes; you could fill a file with repeated JS comments just to make an artificially slow file for testing).
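For instance, the question's snippet with console.log calls added to each complete callback (purely for testing) might look like this:
Modernizr.load([
    {
        load: '//ajax.googleapis.com/ajax/libs/jquery/1.6.2/jquery.min.js',
        complete: function () {
            console.log('group 1 complete', Date.now());
            if (!window.jQuery) {
                Modernizr.load('/js/jquery-1.6.2.min.js');
            }
        }
    },
    {
        load: ["/js/someplugin.js", "/js/anotherplugin.js"],
        complete: function () {
            console.log('group 2 complete', Date.now());
            // do some stuff
        }
    },
    {
        load: ('https:' == location.protocol ? '//ssl' : '//www') + '.google-analytics.com/ga.js',
        complete: function () {
            console.log('group 3 complete', Date.now());
        }
    }
]);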
EDIT: To clarify a little more accurately, I'd like to add a quote from the yepnopejs.com homepage. yepnope.js is the resource loader that is included (and aliased) inside Modernizr:
"In short, whatever order you put them in, that's the order that we execute them in. The 'load' and 'both' sets of files are executed after your 'yep' or 'nope' sets, but the order that you specify within those sets is also preserved. This doesn't mean that the files always load in this order, but we guarantee that they execute in this order. Since this is an asynchronous loader, we load everything all at the same time, and we just delay running it (or injecting it) until the time is just right."
So yes, you could put jQuery, followed by some plugins, in the same Modernizr.load statement, and they will be downloaded in parallel and injected into the DOM in the same order as specified in the arguments. The only thing you are giving up here is the ability to test whether jQuery loaded successfully and perhaps grab a backup non-CDN version of jQuery if necessary. So it's your choice how critical fallback support is for you. (I don't have any sources, but I seem to recall that the Google CDN has only gone down once in its entire lifetime.)

jquery and script speed?

Quick question: I have some scripts that only need to run on some pages, and some only on a certain page. Would it be best to include the script at the bottom of the actual page with script tags, or to do something like this in my JS include:
var pageURL = window.location.href;
if (pageURL == 'http://example.com') {
    // run code
}
Which would be better and faster?
The best approach is to include the script only on the pages that need it. Also, in terms of maintenance, your script stays more independent from the pages that use it. Putting those ifs in your script makes it tightly coupled to the structure of your site, and if you decide to rename some page it will no longer work.
I can recommend using an asynchronous resource loader, LAB.js for example. Then you could build a dependency list, for instance:
var MYAPP = MYAPP || {};

/*
 * Bunches of scripts
 * to load together
 */
MYAPP.bunches = {
    defaults: ["libs/jquery-1.6.2.min.js"],
    cart: ["plugins/jquery.tmpl.min.js",
           "libs/knockout-1.2.1.min.js",
           "scripts/shopping-cart.js"],
    signup: ["libs/knockout-1.2.1.min.js",
             "scripts/validator.js"]
    /*
    ... etc
    */
};

/*
 * Load the default libraries, then the bunch
 * required by the current page (if any)
 */
$LAB.script(MYAPP.bunches.defaults);
if (typeof MYAPP.require !== 'undefined') {
    $LAB.script(MYAPP.bunches[MYAPP.require]);
}
and at the end of your page you could write:
<script type="text/javascript">
var MYAPP = MYAPP || {};
MYAPP.require = "cart";
</script>
<script type="text/javascript" src='js/libs/LAB.min.js'></script>
<script type="text/javascript" src='js/dependencies.js'></script>
By the way, a question to everyone, is it a good idea to do so?
In so far as possible, only include the scripts on the pages that require them. That said, if you're delivering content via AJAX, that can be hard to do, since the script might already be loaded and reloading it could cause problems. Of course, you can deliver code in a script block (as opposed to referencing an external js file) in content delivered via AJAX.
In cases where you need to load scripts (say via a master page) for all pages, but that only apply to certain pages, take advantage of the fact that jQuery understands and deals well with selectors that don't match any elements. You can also use live handlers along with very specific selectors to allow scripts loaded at page load time to work with elements added dynamically later.
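For example (the selectors here are hypothetical), code like the following can safely be loaded on every page:
// Safe on every page: if the selector matches nothing, nothing happens.
$('#signup-form .validate-me').each(function () {
    // attach validation behaviour here
});

// Delegated handler: also works for matching elements added to the page
// later (.on() in jQuery 1.7+; older versions used .live()/.delegate()).
$(document).on('submit', '#signup-form', function (e) {
    // handle the submit here
});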
Note: if you use scripts loaded via content distribution network, you'll find that they are often cached locally in the browser anyway and don't really hurt your page load time. The same is true with scripts on your own site, if they've already been loaded once.
You have two competing things to optimize for: page load time over the network, and page initialization time.
You can minimize your page load time over the network by taking maximum advantage of browser caching so that JS files don't have to be loaded over the network. To do this, you want as much of your site's JavaScript code as possible in one or two larger, fully minified JS files. To do this, you should put JS for multiple different pages in one common JS file. It will vary from site to site whether the JS for all pages should be in one or two larger JS files, or whether you group it into a small number of common JS files that are each targeted at part of your site. But the general idea is that you want to combine the JS code from different pages into a common JS file that can be most effectively cached.
You can minimize your page initialization time by only calling initialization code that actually needs to execute on the particular page being displayed. There are several different ways to approach this. I agree with the other answers that you do not want to be looking at URLs to decide which code to execute, because this ties your code to the URL structure, which is better to avoid. If your site has a manageable number of different page types, then I'd recommend identifying each of those page types with a unique class name on the body tag. Your initialization code can then look for the appropriate class on the body tag and branch to the appropriate initialization code based on that. I've even seen it done where you find a class name with a particular common prefix, parse out the non-common part of the name, and call an initialization function by that name. This allows you to give a page a specific set of behaviors just by adding a class name to the body tag, and the code remains nicely separate from the actual page.
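A minimal sketch of that body-class technique (the class prefix and function names are invented for illustration): give each page a body tag such as <body class="page-cart">, then dispatch from the shared, cached file:
// Lives in the combined, cached JS file:
var PageInit = {
    cart: function () { /* set up the shopping-cart page */ },
    signup: function () { /* set up the signup page */ }
};

$(document).ready(function () {
    // Look for a class such as "page-cart" on the body tag and run the
    // matching initializer, if one exists.
    var match = document.body.className.match(/\bpage-(\w+)\b/);
    if (match && typeof PageInit[match[1]] === 'function') {
        PageInit[match[1]]();
    }
});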
The less general-purpose way of doing this is to keep all the code in the one or two common JS files, but to add the appropriate initialization call to each specific page's HTML. So the JS code that does the initialization lives in the common JS files and thus is maximally cached, but the call to the appropriate initialization code is embedded inline in each specific page. This minimizes the execution time of the initialization while still letting you use maximal caching. It's slightly less generic than the class-name technique mentioned earlier, but some may prefer the more direct calling technique.
Include scripts at the bottom of only the pages that need them.
The YSlow add-on is the best way to find out why your website is slow.
There are many issues which could be the reason for slowness.
Combining many jQuery scripts into one file could help you increase performance.
Also, you can put the scripts at the bottom of your page and the CSS at the top.
It's basically up to you, and depends on what the code is.
Generally, with small things I will slip it into the bottom of the page (I'm talking minor UI things that relate only to that page).
If you're doing the location-ref testing for more than a couple of pages, it probably means you're doing something wrong.
You might want to take a look at one of these:
http://en.wikipedia.org/wiki/Unobtrusive_JavaScript
http://2tbsp.com/node/91
And as for which is faster: the difference is negligible, so pick whichever is easier for you to maintain.

Does the ORDER of javascript files matter, when they are all combined into one file?

In today's modern age, where lots of (popular) JavaScript files are loaded externally and locally, does the order in which the JavaScript files are called matter, especially when all the local files are combined (minified) into one file?
Furthermore, many claim that JavaScript should go at the bottom of the page, while others say it is best left in the head. Which should one do, and when? Thanks!
google cdn latest jquery js | external
another cdn loaded javascript js | external
TabScript ...js \
GalleryLightbox ...js \
JavascriptMenu ...js \
HTMlFormsBeautifier ...js > all minified and combined into one .js file!
TextFieldResize ...js /
SWFObjects ...js /
Tooltips ...js /
CallFunctions ...js /
Order matters in possibly one or more of the following situations:
When one of your scripts contains dependencies on another script.
If the script is in the BODY and not the HEAD.. UPDATE: HEAD vs BODY doesn't seem to make a difference. Order matters. Period.
When you are running code in the global namespace that requires a dependency on another script.
The best way to avoid these problems is to make sure that code in the global namespace is inside a $(document).ready() wrapper. Code in the global namespace must be loaded in an order such that anything it calls has already been defined.
Checking the JavaScript error console in Firebug or Chrome Debugger can possibly tell you what is breaking in the script and let you know what needs to be modified for your new setup.
Order generally doesn't matter if functions are invoked based on events, such as pageload, clicks, nodes inserted or removed, etc. But if function calls are made outside of the events in the global namespace, that is when problems will arise. Consider this code:
JS file: mySourceContainingEvilFunctionDef.js
function evilGlobalFunctionCall() {
    alert("I will cause problems because the HTML page is trying to call " +
          "me before it knows I exist... It doesn't know I exist, sniff :( ");
}
HTML:
<script>
    evilGlobalFunctionCall(); // JS Error - the function is not defined yet (ReferenceError)
</script>
<!-- Takes time to load -->
<script type="text/javascript" src="mySourceContainingEvilFunctionDef.js"></script>
...
In any case, the above tips will help prevent these types of issues.
As a side note, you may want to consider that there are certain speed advantages to utilizing the asynchronous nature of the browser to pull down resources. Browsers can keep several connections to a host open at a time (historically around 4, 6-8 in modern browsers), meaning that it's quite possible your one massive script might take longer to load than the same script split up into chunks! There is also Yahoo research showing that combining scripts produces the faster result, so results vary from one situation to another.
Since it's a balance between the time taken to open and close several HTTP connections vs the time lost in limiting yourself to a single connection instead of multiple asynchronous connections, you may need to do some testing on your end to verify what works best in your situation. It may be that the time taken to open all of the connections is offset by the fact that the browser can download all the scripts asynchronously and exceed the delays in opening/closing connections.
With that said, in most cases, combining the script will likely result in the fastest speed gains and is considered a best practice.
Yes, depending very much on what you do.
For example, if a.js had...
var a = function() {
    alert('a');
};
...and b.js had...
a()
...then you wouldn't want to include b.js before a.js, or a() won't be available.
This only applies to function expressions; declarations are hoisted to the top of their scope.
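A small illustration of that difference (the function names are hypothetical):
// Function declarations are hoisted in full, so this call works even
// though it appears before the definition:
hoisted(); // alerts "ok"
function hoisted() { alert('ok'); }

// With a function expression only the variable name is hoisted, not the
// assigned value, so calling it too early fails:
tooEarly(); // TypeError: tooEarly is not a function
var tooEarly = function () { alert('too late'); };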
As for whether you should combine jQuery, I reckon it would be better to use the Google hosted copy - adding it to your combined file will make it larger when there is a great chance the file is already cached for the client.
Read this post from the webkit team for some valuable information about how browsers load and execute script files.
Normally when the parser encounters an external script, parsing is paused, a request is issued to download the script, and parsing is resumed only after the script has fully downloaded and executed.
So normally (without the async or defer attributes), scripts get executed in the order in which they are specified in the source code, each one waiting for the previous ones to download and execute. And if the script tags are in the <head>, nothing below them is rendered until all of those scripts have loaded and executed.
This means that it makes no difference whether the script is split into multiple files or not.
If I'm understanding your question, I think you're asking whether it matters where in a file a function/method is defined, and the answer is no: you can define them anywhere in a single source file. The JavaScript parser will read in all the declarations before trying to run the code.
If you have two files that define variables or functions with the same name, the order in which they're included will change which one actually ends up defined.
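For example (hypothetical files):
// a.js
var config = { mode: 'a' };

// b.js
var config = { mode: 'b' };

// Combined as a.js then b.js: config.mode is 'b'.
// Combined as b.js then a.js: config.mode is 'a'.
// The later definition silently overwrites the earlier one.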
