Executing code before document.ready in requirejs - javascript

I see that require.js has support for executing code when document.ready() is fired; see: http://requirejs.org/docs/api.html#pageload
I was just wondering: is there any way to guarantee that code executes before document.ready() is invoked? For example, does the code in main.js get executed before the document.ready() event? Thanks.

There is no pure RequireJS solution to your request. In essence what you are asking for is:
I'd like (a) to load my modules asynchronously, but (b) I want to load them synchronously.
Requirement (a) is entailed by the fact that you are using RequireJS. RequireJS is a loader designed to load AMD modules. AMD stands for asynchronous module definition. If you use RequireJS, you have to be okay with the fact that it operates asynchronously. So it cannot guarantee that any module will load and execute before the document is ready.
Requirement (b) is entailed by your requirement that the code executes before the document is ready. This cannot be done otherwise than by forcing the execution of your modules to be synchronous. (I've given some thought to ways of delaying the ready event, but I don't see this as being generally possible. Solutions like jQuery.holdReady are not really delaying the event but only delaying those handlers that were set using jQuery.ready. The event still happens whenever the browser decides it happens, but the handlers get fired later. And only the jQuery handlers are affected, nothing else.)
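As an illustration of that last point, here is a minimal sketch (the module id app/main and its init method are placeholders, nothing from your code): jQuery.holdReady only postpones handlers registered through jQuery's ready mechanism; DOMContentLoaded itself still fires whenever the browser decides.
// Pause jQuery's ready queue until the module has loaded and run.
jQuery.holdReady(true);
require(['app/main'], function (main) {
    main.init();             // placeholder call into the loaded module
    jQuery.holdReady(false); // release the queued jQuery ready handlers
});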
You can load your code using script elements (that don't use the async attribute) but then you won't be using RequireJS.
A solution that would give you something similar to RequireJS would be to use Almond. Your code would have to be built into a single bundle. And you could load this bundle synchronously. It would essentially have the same effect as using script elements but you would still benefit from modularization. Note that Almond comes with a series of limitations. You'll have to decide for yourself whether these limitations are a problem for you. I've explained how to use Almond synchronously in this answer.
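In outline, the usage looks something like this (a sketch only, assuming the bundle containing Almond and all your modules was built with r.js and is loaded from a plain, non-async <script> tag in the <head>; the module id is a placeholder):
// Everything is already in the bundle, so Almond resolves this synchronously.
var main = require('app/main'); // placeholder module id
main.init();                    // runs before the document is ready, because the bundle script is blocking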

Related

When NOT to use defer attribute

I thought I knew how to use 'defer' attribute when referencing external scripts from my HTML pages.
I even thought there is no reason for me NOT to use it. But after a couple of unexpected things I started to research (even here) and I think I'm not 100% sure when it's safe to use it every time I use the script tag.
Is there somewhere a list of known use cases when defer should NOT be used?
The only thing defer does is run your script when the DOM has finished parsing, but before the DOMContentLoaded event is fired off.
So: if your code does not depend on the DOM (directly, or indirectly through access to document properties that can only be determined once the DOM is done), there is no reason to defer. For example: a utility library that adds a new namespace ComplexNumbers, with a ComplexNumber object type and associated utility functions for performing complex number maths, has no reason to wait for a DOM: it doesn't need to be deferred. Same for a custom websocket library: even if your own use of that library requires performing DOM updates, it does not depend on the DOM and doesn't need defer.
But for any code that tries to access anything related to the DOM: you need to use defer. And yes: you should pretty much have defer on any script that loads as part of the initial page load, and if you did your job right, none of those scripts interfere with each other when they try to touch the various pieces of the DOM they need to work with.
In fact, you should have both defer *and* async, so as not to block the page thread. The exception is if you're loading a type="module" script, in which case you don't get a choice in deferral: it's deferred by default, but it'll still need async.

Dynamically loaded js, mutation observer and promises for nested js loading

I am working on a mechanism to dynamically load a set of JS files that have some dependencies between them (e.g., my code after Bootstrap after jQuery). I defined a nested JSON-type structure with the dependency relationships and then invoke a Promise-based loadScript mechanism (thanks to some of the posts here) that makes async calls to load JS files that are at the same level and invokes a recursive loadScript for JS files that have dependencies between them.
The files are all correctly appended to HEAD in the order I expect. For the dependencies, I start loading a dependent file in the onload handler of its predecessor. However, this does not seem to guarantee that the loaded JS code is available to use (i.e., it did not seem that the append had completed and the code had been executed; e.g., var/const definitions were not available at the time onload was invoked, based on breakpoints and so on in the debugger, which I guess seems correct).
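Roughly, the loadScript mechanism follows the usual pattern (this is just a sketch, not my actual code; the URLs are placeholders):
function loadScript(src) {
    return new Promise(function (resolve, reject) {
        var script = document.createElement('script');
        script.src = src;
        script.onload = function () { resolve(src); };  // resolves on the script element's load event
        script.onerror = function () { reject(new Error('Failed to load ' + src)); };
        document.head.appendChild(script);
    });
}
// Dependent files load sequentially; files at the same level could use Promise.all.
loadScript('https://example.com/jquery.min.js')
    .then(function () { return loadScript('https://example.com/bootstrap.min.js'); })
    .then(function () { return loadScript('/js/my-code.js'); })
    .catch(console.error);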
So I added a MutationObserver that observed that the files were added to the DOM (i.e., a childList change on HEAD). However, it appears that the mutation callback is invoked before the code is actually available for use.
Why do I think this? I put breakpoints, console logs with timing info, etc., at the point where the MutationObserver indicated that a change corresponding to the insertion of the newly loaded script had occurred (i.e., the observer firing because of the added script), but at that point the code (e.g., some var definition) was not available in the globals in the debugger.
So the question is not specifically about my code but more about whether it is possible to know that dynamically included JS code has run and is available to use. For my own code I can handle this through other mechanisms, but I do not want to touch third-party JS files.
So, more specifically, it does not seem that the actual appending of the script to the DOM HEAD at the point detected by the MutationObserver means that the code has been executed, and it seems from other posts (e.g., Execution of dynamically loaded JS files or load and execute order of scripts) that the actual execution of the code is somewhat asynchronous, depending on what the browser is actually doing.
Any thoughts on the overall process understanding would be greatly appreciated.
Thanks,

Will setting defer on my polyfill and other scripts guarantee that they're loaded in order?

I'm using polyfill.io to polyfill Promise and fetch for older clients. On their website they recommend using a script loader or their callback to make sure the script has loaded completely before running the modern code:
We recommend the use of the async and defer attributes on <script> tags that load from the polyfill service, but loading from us in a non-blocking way means you can't know for certain whether your own code will execute before or after the polyfills are done loading. To make sure the polyfills are present before you try to run your own code, you can attach an onload handler to the https://cdn.polyfill.io script tag, use a more sophisticated script loader or simply use our callback argument to evaluate a global callback when the polyfills are loaded:
However, shouldn't setting defer on both scripts already guarantee that they are loaded async but still in the order in which they appear in the document (unless the browser doesn't support defer)?
<script src="https://cdn.polyfill.io/v2/polyfill.min.js" defer></script>
<script src="modernscript.js" defer></script>
According to the MDN documentation, the defer attribute just defines the point during page loading at which the script will be executed.
From the documentation that you've cited, it can be seen that:
To make sure the polyfills are present before you try to run your own code, you can attach an onload handler to the https://cdn.polyfill.io script tag
Since (as pointed out in the comments to this answer) it can't be clearly seen whether defer scripts will be executed in order (1, 2), and taking possible browser implementation differences into account, it may not be the best idea to rely on such behavior.
So a better way would be one of the following:
to use a script loader (RequireJS, for example)
to add the proposed onload handler to the first <script> tag and create a dynamic <script> tag for loading your code inside that handler (see the sketch below)
to bundle your code together with the Promise polyfill (manually or using a bundler like webpack) and load it as a single bundle.
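A rough sketch of the second option (the handler name loadAppScript is a placeholder): give the polyfill tag from the question an onload="loadAppScript()" attribute and let the handler inject your own script.
function loadAppScript() {
    // Runs only once the polyfill script has loaded and executed.
    var s = document.createElement('script');
    s.src = 'modernscript.js'; // your own code from the question
    document.head.appendChild(s);
}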
UPDATE: As pointed out by #PeterHerdenborg in a comment, the MDN document now clearly states that:
Scripts with the defer attribute will execute in the order in which they appear in the document.

How requirejs load javascript async?

I am wondering if anyone knows the internals of RequireJS: why can it load JS asynchronously? I know JavaScript has no threads, so how is async done by RequireJS?
How does RequireJS work?
Each module is contained inside a define call, which defines the module dependencies. With that, RequireJS makes a kind of tree to order each module from the one without dependencies, to the one with the most dependencies.
A module with only one direct dependency could be the one that depends on everything, if its dependency depends on another module, which depends on 2-3 other modules, and so on.
define(['some/dep'], function(someDep){ /* module code */ });
In that order, RequireJS creates a <script> tag with the URL of the module file and inserts that script tag at the end of the <head>. The browser loads the JavaScript files and runs them in the order that they are present in the HTML.
Then, when every dependency is defined for the module to run, the function of that module is called with each dependency (previously defined) injected into the module factory function, and its result is stored.
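Very roughly, the loading step looks something like this (a simplified sketch, not actual RequireJS source):
function loadModuleScript(url, onLoad) {
    var node = document.createElement('script');
    node.src = url;                        // URL derived from the module id
    node.addEventListener('load', onLoad); // the module's define() call has run by this point
    document.head.appendChild(node);       // inserted at the end of <head>
}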
How can it be async without threads?
It's async, but not necessarily parallel. Loading scripts can be parallel, as the browser (at least Chrome, for sure) makes multiple connections to the server to fetch more files at once, but this has nothing to do with JS.
The async nature of JavaScript comes from the event loop.
Each async callback is put in an event queue and when the synchronous call stack has completely finished executing, the next event callback from the queue is called.
It's easier to grasp when you see it, and you can in Chrome's dev tools Timeline tab.
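A tiny example of that queueing behavior (safe to paste into any console):
console.log('start');
setTimeout(function () {
    console.log('from the queue'); // runs only after the synchronous code has finished
}, 0);
console.log('end');
// Output order: start, end, from the queue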

Dynamically and synchronously load JavaScript file from a different domain

I would like to synchronously include a JavaScript file from a different domain via code. This means that using a synchronous XMLHttpRequest will not work. I also want to avoid document.write because my code will be executed when the document is fully loaded. Is this even possible? Does any of the existing JavaScript libraries support that feature?
Basically I want this to work:
<script type="text/javascript">
  $(document).ready(function() {
    load("path_to_jQuery_UI_from_another_domain");
    console.log(jQuery.ui.version); // outputs the version of jQuery UI
  });
</script>
EDIT:
My idea is to create a jQuery plugin which loads its JavaScript files based on the enabled features. jQuery plugins can be initialized at any time which means no document.write. It is perfectly fine to load the JavaScript files asynchronously but people expect their plugins to be fully initialized after calling $("selector").something();. Hence the need of synchronous JavaScript loading without document.write. I guess I just want too much.
The only way to synchronously load files is to document.write a script tag into your page. This is generally considered a bad practice. There is probably a better way to do what you actually want, but, in the spirit of transparency:
document.write('<script src="http://otherdomain.com/script.js"></'+'script>')
should do the trick. You have to escape the closing script tag so the parser doesn't close the script tag that you wrote.
Note that you can't dynamically load scripts that contain a document.write.
You should be able to use .getScript()
Edit: Cross-domain requests are always loaded asynchronously in jQuery.
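A short sketch of that approach (the URL is a placeholder); because the request is asynchronous, anything that needs jQuery UI has to go inside the callback:
$.getScript('https://otherdomain.example/jquery-ui.min.js')
    .done(function () {
        console.log(jQuery.ui.version); // jQuery UI is available here
    })
    .fail(function () {
        console.error('Could not load jQuery UI');
    });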
A great library called YepNope exists for loading JavaScript dependencies from any location; it was developed by a member of the yayQuery podcast. It can be found here: http://yepnopejs.com/
It's not possible to synchronously execute a script at a URL. Note further that synchronous anything, when networks (or even file systems!) are involved, is a Bad Idea. Someone, sometime, somewhere will be on a slow system, or a slow network, or both, and suddenly you've just hung their UI in the process.
Synchronous is bad. Asynchronous with callbacks is good.
Note that, as a worst-case hack, you could overwrite $ with your own function, which returned an object with just the right properties, and you could semi-lazily evaluate all actual calls. This of course breaks if you start relying on immediate execution of the calls, or on their execution being intermingled with the evaluation of arguments, but in the worst case it's not completely implausible.
LABjs is a nice library. I used it and it works well.
http://labjs.com/
