Despite the growing adoption of the script element's async attribute, there is still advice to place scripts at the bottom of the document without the async attribute. If document.write is avoided, are there any possible pitfalls when using async?
For example, if I load a script that I wrap in jQuery's $(document).ready(...), is there any chance that I might experience any negative effects when adding the async attribute? Can I reliably specify async on all such scripts?
If your scripts rely on being run in a particular order, you cannot use async. This is typically the case when they depend on each other.
A recent example of this happened to me with Google's prettify syntax-highlighting library. You include the library and then call prettyPrint() to apply syntax highlighting to the relevant blocks.
<script src='http://cdnjs.cloudflare.com/ajax/libs/prettify/r298/prettify.js' type='text/javascript'></script>
<script type='text/javascript'>prettyPrint();</script>
So if all your scripts are wrapped in a $(document).ready then they are most likely perfectly fine to go with async. The question you should be asking though is whether you can combine all your files so you only need to make one request.
The defer attribute is similar to async but waits until the HTML is parsed and then runs the scripts in the same order they appear in your HTML file. This would work in the above situation provided the call to prettyPrint() were in an external file instead of inline, as I don't believe defer applies to inline scripts.
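A sketch of that arrangement, assuming the prettyPrint() call is moved into a hypothetical external file (here called apply-prettify.js) containing nothing but prettyPrint();:

<script src='http://cdnjs.cloudflare.com/ajax/libs/prettify/r298/prettify.js' type='text/javascript' defer></script>
<script src='apply-prettify.js' type='text/javascript' defer></script>

Both files download without blocking the parser, and defer runs them in document order after parsing, so prettify.js is in place before the call.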
Related
I thought I knew how to use 'defer' attribute when referencing external scripts from my HTML pages.
I even thought there is no reason for me NOT to use it. But after a couple of unexpected things I started to research (even here) and I think I'm not 100% sure when it's safe to use it every time I use the script tag.
Is there somewhere a list of known use cases when defer should NOT be used?
The only thing defer does is run your script once the DOM has finished parsing, but before the DOMContentLoaded event is fired.
So: if your code does not depend on the DOM (directly, or indirectly through access to document properties that can only be determined once the DOM is done), there is no reason to defer. For example: a utility library that adds a new namespace ComplexNumbers, with a ComplexNumber object type and associated utility functions for performing complex number maths, has no reason to wait for a DOM: it doesn't need to be deferred. Same for a custom websocket library: even if your own use of that library requires performing DOM updates, it does not depend on the DOM and doesn't need defer.
But for any code that tries to access anything related to the DOM: you need to use defer. And yes: you should pretty much have defer on any script that loads as part of the initial page load, and if you did your job right, none of those scripts interfere with each other when they try to touch the various pieces of the DOM they need to work with.
In fact, you should have both defer *and* async, so as not to block the page thread. The exception is if you're loading a type="module" script, in which case you don't get a choice about deferral: it's deferred by default. But it'll still need async.
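A sketch of what that markup looks like (app.js and app.mjs are placeholder names):

<!-- classic script: async so it doesn't block, defer as the ordering/DOM-safety fallback -->
<script src="app.js" async defer></script>
<!-- module script: deferred by default, so only async needs to be added -->
<script type="module" src="app.mjs" async></script>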
I'm using polyfill.io to polyfill Promise and fetch for older clients. On their website they recommend using a script loader or their callback to make sure the script has loaded completely before running the modern code:
We recommend the use of the async and defer attributes on the <script> tags that load from the polyfill service, but loading from us in a non-blocking way means you can't know for certain whether your own code will execute before or after the polyfills are done loading.
To make sure the polyfills are present before you try to run your own code, you can attach an onload handler to the https://cdn.polyfill.io script tag, use a more sophisticated script loader or simply use our callback argument to evaluate a global callback when the polyfills are loaded:
However, shouldn't setting defer on both scripts already guarantee that they are loaded async but still in the order in which they appear in the document (unless the browser doesn't support defer)?
<script src="https://cdn.polyfill.io/v2/polyfill.min.js" defer></script>
<script src="modernscript.js" defer></script>
According to the MDN documentation, the defer attribute just defines the point during page load at which the script will run.
As can be seen from the documentation you cited:
To make sure the polyfills are present before you try to run your own
code, you can attach an onload handler to the https://cdn.polyfill.io
script tag
Since (as pointed out in the comments on this answer) it isn't clearly specified whether defer scripts are guaranteed to execute in order (1, 2), and bearing in mind possible differences between browser implementations, it may not be the best idea to rely on such behavior.
So a better way would be one of the following:
to use a script loader (RequireJS, for example)
to add the proposed onload handler to the first <script> tag and, inside that handler, create a dynamic <script> tag that loads your code (see the sketch after this list)
to bundle your code together with the Promise polyfill (manually or using a bundler like webpack) and load it as a single bundle.
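A sketch of the second option (loadModernScript is just an illustrative name; modernscript.js is the placeholder from the question):

<script>
  // Define the handler first so it is guaranteed to exist when the polyfill's load event fires.
  function loadModernScript() {
    var s = document.createElement('script');
    s.src = 'modernscript.js';
    document.head.appendChild(s);
  }
</script>
<script src="https://cdn.polyfill.io/v2/polyfill.min.js" async onload="loadModernScript()"></script>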
UPDATE: As pointed out by @PeterHerdenborg in a comment, the MDN document now clearly states that:
Scripts with the defer attribute will execute in the order in which they appear in the document.
The Google Maps JavaScript does some heavy DOM manipulation. Even so, the fine docs suggest loading it with the defer flag:
<script src="https://maps.googleapis.com/maps/api/js?key=YOUR_API_KEY&callback=initMap" async defer></script>
Why would the defer flag be suggested for a script that performs DOM manipulations? I ask to learn both about the defer flag and to learn about the Google Maps API as I seem to have a misunderstanding about what one of them is doing.
Normally, a script tag tells the browser to stop parsing the HTML, fetch the script, run it, and only then continue parsing the HTML. This is because the script code may use document.write to output to the HTML token stream.
async and defer are both mechanisms for telling the browser that it's okay to go ahead and keep parsing the HTML in parallel with downloading the script file, and to run the script file later, not right away.
They're slightly different, though; the diagram in the script section of the WHAT-WG version of the HTML spec is useful for envisioning the differences.
Full details in the linked spec above, but in brief, for "classic" scripts (the kind you're used to; but module scripts are coming soon!):
Both async and defer allow the parsing of the HTML to continue without waiting for the script to download.
defer will make the browser wait to execute the script until the parsing is complete.
async will only make the browser wait until the script download is complete, which means it may run the script either before parsing is complete or afterward, depending on when download finishes (and remember it could come from cache).
If async is present and supported by the browser, it takes precedence over defer.
async scripts may be run in any order, regardless of the order in which they appear in the HTML.
defer scripts will be run in the order they appear in the HTML, once parsing is complete.
async and defer are well-supported in even semi-modern browsers, but are not properly supported in IE9 and earlier; see here and here.
Why would the defer flag be suggested for a script that performs DOM manipulations?
Two reasons:
It allows the parsing to continue while the script is downloaded, and
It means the script isn't run until parsing is complete.
Even if you place your script tags non-optimally, using defer helps the API script behave properly by letting the browser finish building the DOM before the script tries to manipulate it.
A lot of people still put script tags in the head section of the document, even though that's usually the worst place to put them unless you use defer (or async). In most cases, the best place (unless you have a reason to do something else) is at the very end, just before the closing </body> tag, so that A) Your site renders quickly, without waiting for scripts; and B) The DOM is fully built before you try to manipulate it. Recommending defer may be saving them support hassles from people putting their script tags too early in the HTML.
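In other words, either of these placements avoids touching an unfinished DOM (page.js is a placeholder name):

<!-- Option A: in the head, but deferred so parsing isn't blocked -->
<head>
  <script src="page.js" defer></script>
</head>

<!-- Option B: no defer, placed just before the closing body tag -->
<body>
  ...page content...
  <script src="page.js"></script>
</body>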
The Google Maps examples use both the async and defer flags.
The async flag allows the script to load in parallel to the DOM parsing, and to execute as soon as the API is ready.
The defer flag allows the script to load in parallel to the DOM parsing, but guarantees that the script will not execute until the DOM is finished parsing.
async is supported by modern HTML5 browsers, while defer support is universal. When the attributes are used together, defer is just a fallback for older browsers, and will be ignored if async is supported.
In these simple examples, either async or defer will work, though neither is necessary. In this case it's for performance only.
Refs:
Speed up Google Maps(and everything else) with async & defer
async vs defer attributes - Growing with the Web
Via jQuery's $.getScript method, can you give the included script a DOM id?
So generated code should be:
<script type="text/javascript" id="xxxxxx" src="..."></script>
I know I could probably just document.write that line myself, but $.getScript must be there for a reason, right? (cross-browser compatibility, etc.?)
I think this has maybe been answered/discussed before here: Why call $.getScript instead of using the <script> tag directly?.
getScript allows you to dynamically load a script in situations where it's desirable to delay loading, where you want a callback telling you the script has loaded, or where you couldn't use a script tag at all.
getScript has some downsides in that it's subject to the same-origin policy whereas a script tag is not.
I have seen other web pages put an ID on a script tag (smugmug.com), but I've also seen that flagged as non-standard when testing standards compliance. It seems to work and be used by others, but I'm guessing it isn't standard.
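If the id is the part you actually need, one workaround (a sketch, not something $.getScript supports directly) is to insert the script element yourself; the id value is the one from the question and the URL is a placeholder:

var script = document.createElement('script');
script.id = 'xxxxxx';                       // the DOM id you wanted on the tag
script.src = 'path/to/external/script.js';  // placeholder URL
script.onload = function () {
  // roughly what $.getScript's success callback gives you
};
document.head.appendChild(script);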
I would like to synchronously include a JavaScript file from a different domain via code. This means that using a synchronous XMLHttpRequest will not work. I also want to avoid document.write because my code will be executed when the document is fully loaded. Is this even possible? Does any of the existing JavaScript libraries support that feature?
Basically I want this to work:
<script type="text/javascript">
  $(document).ready(function() {
    load("path_to_jQuery_UI_from_another_domain");
    console.log(jQuery.ui.version); // outputs the version of jQuery UI
  });
</script>
EDIT:
My idea is to create a jQuery plugin which loads its JavaScript files based on the enabled features. jQuery plugins can be initialized at any time, which means no document.write. It is perfectly fine to load the JavaScript files asynchronously, but people expect their plugins to be fully initialized after calling $("selector").something();. Hence the need for synchronous JavaScript loading without document.write. I guess I just want too much.
The only way to synchronously load files is to document.write a script tag into your page. This is generally considered a bad practice. There is probably a better way to do what you actually want, but, in the spirit of transparency:
document.write('<script src="http://otherdomain.com/script.js"></'+'script>')
should do the trick. You have to escape the closing script tag so the parser doesn't close the script tag that you wrote.
Note that you can't dynamically load scripts that contain a document.write.
You should be able to use .getScript()
Edit: Cross-domain requests are always loaded asynchronously in jQuery.
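For the example in the question, that would look roughly like this (the load is asynchronous, so the version check has to go inside the callback):

$(document).ready(function() {
  $.getScript("path_to_jQuery_UI_from_another_domain", function() {
    // the script has been fetched and executed by the time this runs
    console.log(jQuery.ui.version);
  });
});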
A great library called YepNope exists for loading javascript dependencies from any location - developed by a member of the yayQuery podcast. It can be found here: http://yepnopejs.com/
It's not possible to synchronously execute a script at a URL. Note further that anything synchronous, when networks (or even file systems!) are involved, is a Bad Idea. Someone, sometime, somewhere will be on a slow system, or a slow network, or both, and suddenly you've just hung their UI in the process.
Synchronous is bad. Asynchronous with callbacks is good.
Note that, as a worst-case hack, you could overwrite $ with your own function that returns an object with just the right properties, and semi-lazily evaluate all the actual calls. This of course breaks if you start relying on immediate execution of the calls, or on their execution being intermingled with the evaluation of arguments, but in the worst case it's not completely implausible.
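A minimal sketch of that queueing hack, purely for illustration; it assumes the plugin exposes a single method (here called something(), as in the question) and that loadPluginAsync is some hypothetical asynchronous loader:

(function () {
  var realJQuery = window.jQuery;
  var queued = [];

  // Temporary stand-in for $ that records calls instead of running them.
  window.$ = function (selector) {
    return {
      something: function () {
        var args = arguments;
        queued.push(function () {
          var el = realJQuery(selector);
          el.something.apply(el, args);
        });
        return this; // tolerate naive chaining
      }
    };
  };

  // Once the plugin's files have loaded, restore $ and replay the queued calls.
  loadPluginAsync(function () {
    window.$ = realJQuery;
    queued.forEach(function (run) { run(); });
    queued = [];
  });
})();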
LABjs is a nice library. I've used it and it works well.
http://labjs.com/