I've just learned an important fact about the execution of JavaScript when an error is thrown. Before I start drawing conclusions from this, I'd better verify whether I am right.
Given an HTML page including 2 scripts:
<script src="script1.js"></script>
<script src="script2.js"></script>
script1:
doSomething();
script2:
doSomeOtherThing();
This effectively results in a single script being processed as one unit:
doSomething();
doSomeOtherThing();
In particular, if doSomething throws an error, execution is broken: script2 is never executed.
This is my "Lesson 1": one might think that since it is a separately included file, it is not affected by script1. But it is. => see "late update" below
Now, if we change script2 as follows (presuming we have jQuery included somewhere above):
$(document).ready(function() { doSomeOtherThing(); });
and include script2 before script1:
<script src="script2.js"></script>
<script src="script1.js"></script>
The order of execution is effectively still 'doSomething()' followed (sometime) by 'doSomeOtherThing()'.
However it is executed in two "units":
doSomething is executed early, as part of the document's JavaScript
doSomeOtherThing is executed when the document.ready event is processed.
If doSomething throws an exception, it will not break the second processing "unit".
(I refrain from using the term thread because I reckon that all script is usually executed by the same thread, or more precisely this may depend on the browser.)
So, my Lesson 2: Even though a JavaScript error may prevent any subsequent scripts from executing, it does not stop the event loop.
Conclusion 1
$(document).ready() does a great job of defining chunks of JavaScript code that should be executed independently of whether any other scripts succeed.
Or, in other words: If you have a piece of JavaScript and want to make sure it gets executed even if other scripts fail, place it within a $(document).ready().
This is new to me in that I would previously have used the event only if the script depended on the document being fully loaded.
Conclusion 2
Taking it a step further, it might be a good architectural decision to wrap all scripts in $(document).ready() to make sure that all scripts are "queued" for execution. In the second example above, if script2.js were included after script1.js, as in example 1:
<script src="script1.js"></script>
<script src="script2.js"></script>
An error in script1.js would prevent the doSomeOtherThing() from even being registered, because the $(document).ready() function would not be executed.
However, if script1.js used $(document).ready(), too, that would not happen:
$(document).ready(function() { doSomething(); });
$(document).ready(function() { doSomeOtherThing(); });
Both lines would be executed. Then later the event loop would execute doSomething which would break, but doSomeOtherThing would not be affected.
One more reason to do so would be that the thread rendering the page can return as soon as possible, and the event loop can be used to trigger the code execution.
Critique / Questions:
Was I mistaken?
What reasons are there that make it necessary to execute a piece of code immediately, i.e. not wrapping it into the event?
Would it impact performance significantly?
Is there another/better way to achieve the same rather than using the document ready event?
Can the execution order of scripts be defined if all scripts just register their code as an event handler? Are the event handlers executed in the order they were registered?
Looking forward to any helpful comments!
Late Update:
As Briguy37 correctly pointed out, my observation must have been wrong in the first place ("Was I mistaken - yes!"). Using his simple example, I can reproduce in all major browsers, and even in IE8, that script2 is executed even if script1 throws an error.
Still, Marcello's great answer helps get some insight into the concepts of execution stacks etc. It just seems that each of the two scripts is executed in a separate execution stack.
The way JS handles errors depends on the way JS processes the script. It has nothing (or little) to do with threads. Therefore you first have to think about how JS works through your code.
First of all, JS reads in every script file/block sequentially (at this point your code is just seen as text).
Then JS starts interpreting those text blocks and compiling them into executable code. If a syntax error is found, JS stops compiling and moves on to the next script. In this process JS treats every script block/file as a separate entity; that's why a syntax error in script 1 does not necessarily break the execution of script 2. At this point the code is interpreted and compiled but not yet executed, so a throw new Error statement wouldn't break anything yet.
After all script files/blocks are compiled, JS goes through the code (starting with the first code file/block) and builds up a so-called execution stack (function a calls function b, which calls functions d and c...), executing it in the given order. If at any point a processing error occurs or is thrown programmatically (throw new Error('fail')), the whole execution of that stack is stopped, and JS goes back to the beginning of that stack and starts with the execution of the next possible stack.
That said, the reason your onload function is still executed after an error in script1.js is not a new thread or anything like that; it's simply that an event builds up a separate execution stack which JS can jump to after the error in the previous execution stack happened.
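A rough simulation of that stack-per-script behaviour (runScript and the script names are illustrative helpers, not browser API; the browser's boundary between stacks is modelled here with try/catch):

```javascript
// Each <script> block runs as its own "execution stack". An uncaught
// error aborts one stack, but the browser still starts the next one.
const executed = [];

function runScript(name, fn) {
  try {
    fn();
    executed.push(name);
  } catch (e) {
    console.error(name + ' aborted: ' + e.message);
  }
}

runScript('script1', function () {
  throw new Error('fail'); // like doSomething() throwing
});

runScript('script2', function () {
  // like doSomeOtherThing(): runs despite script1's error
});

console.log(executed); // ['script2']
```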
Coming to your questions:
What reasons are there that make it necessary to execute a piece of code immediately, i.e. not wrapping it into the event?
I would advise you to have no "immediately" called code in your web application at all. The best practice is to have a single point of entry in your application that is called inside an onload event:
$(document).ready(function () {App.init()});
This, however, has nothing to do with error handling as such. Error handling itself should definitely be done inside your code, with either conditionals (if (typeof myValue !== 'undefined')) or try/catch/finally blocks where you might expect potential errors. This also gives you the opportunity to try a second way inside the catch block, or to gracefully handle the error in finally.
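A minimal sketch of both techniques; readConfig and its fallback behaviour are hypothetical names for illustration, not part of any library:

```javascript
// Two complementary techniques: a conditional guard avoids the error
// entirely; try/catch/finally recovers from one that does occur.
function readConfig(raw) {
  if (typeof raw !== 'string') {
    return {}; // guard: the error never happens
  }
  try {
    return JSON.parse(raw); // may throw on malformed input
  } catch (e) {
    return {}; // the "second way": fall back to a default
  } finally {
    // runs whether parsing succeeded or failed: cleanup/logging goes here
  }
}

console.log(readConfig('{"debug":true}')); // { debug: true }
console.log(readConfig('{oops'));          // {}
```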
If you can build your application event-driven (not for error-handling reasons, of course), do so. JS is an event-driven language, and you get the most out of it when writing event-driven code...
Would it impact performance significantly?
An event-driven approach would IMHO make your application perform even better and make it more solid at the same time. Event-driven code can help you reduce the amount of internal processing logic; you just have to get into it.
Is there another/better way to achieve the same rather than using the document ready event?
As mentioned before: try/catch/finally
Can the execution order of scripts be defined if all scripts just register their code as an event handler? Are the event handlers executed in the order they were registered?
If you register the same event on the same object, the order is preserved.
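This is easy to verify with plain DOM events; a small sketch (EventTarget and Event are standard in browsers and available globally in Node.js 15+):

```javascript
// Listeners registered for the same event on the same object fire
// in registration order.
const order = [];
const target = new EventTarget();

target.addEventListener('ping', () => order.push('first'));
target.addEventListener('ping', () => order.push('second'));
target.addEventListener('ping', () => order.push('third'));

target.dispatchEvent(new Event('ping'));
console.log(order); // ['first', 'second', 'third']
```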
Your first assumption that they run as one script is incorrect. Script2 will still execute even if Script1 throws an error. For a simple test, implement the following file structure:
-anyFolder
--test.html
--test.js
--test2.js
The contents of test.html:
<html>
<head>
<script type="text/javascript" src="test.js"></script>
<script type="text/javascript" src="test2.js"></script>
</head>
</html>
The contents of test.js:
console.log('test before');
throw('foo');
console.log('test after');
The contents of test2.js:
console.log('test 2');
The output when you open test.html (in the console):
test before test.js:1
Uncaught foo test.js:2
test 2
From this test, you can see that test2.js still runs even though test.js throws an error. However, test.js stops executing after it runs into the error.
I'm not sure about syntax errors, but you can use try {} catch(e) {} to catch the errors and keep the rest of the code running.
Will NOT run until the end
var json = '{"name":"John"'; // Notice the missing closing curly bracket }
// This will throw an error, since the string is malformed
var obj = JSON.parse(json);
alert('This will NOT run');
Will run until the end
var json = '{"name":"John"'; // Notice the missing closing curly bracket }
// But like this you can catch the error
try {
    var obj = JSON.parse(json);
} catch (e) {
    // Do or don't handle it here
}
alert('This will run');
UPDATE
I just wanted to show how to make sure that the rest of the code gets executed in case the error occurs.
What reasons are there that make it necessary to execute a piece of code immediately, i.e. not wrapping it into the event?
Performance. Every such event fills up the event queue. Not that it will hurt much, but it's just not necessary... Why do some work later if it can be done right now? For example, browser detection and the like.
Would it impact performance significantly?
If you are doing this many times per second, then yes.
Is there another/better way to achieve the same rather than using the document ready event?
Yes. See above example.
Can the execution order of scripts be defined if all scripts just register their code as an event handler? Are the event handlers executed in the order they were registered?
Yes, I'm pretty sure that they are.
Related
I thought I understood how the setTimeout method worked, but this is confusing me.
test.html (I'm purposefully loading the test.js file before the jQuery file for demonstration. Let's say the jQuery file is hosted locally).
<body>
// ...code
<div id="area"></div>
// ...code
<script src="test.js"></script>
<script src="jquery.js"></script>
</body>
test.js
$('#area').text('hello');
I understand in this case "hello" won't get printed on the browser because jQuery is being loaded after the test.js file. Switching the order of these files solves the problem. But if I leave the order alone, and alter the test.js file, a setTimeout makes it work:
function wait() {
if(window.jQuery) {
$('#area').text("hello");
}
else
{
setTimeout(wait, 10);
}
}
wait();
In this case the "hello" text gets printed on the browser. But I'm sort of scratching my head because somehow the jQuery file does get loaded. But how? Why doesn't the test.js file get caught in an infinite loop forever checking to see if jQuery has loaded? I'd be grateful for some insight on the mechanics of what's going on.
There would be an infinite loop if jQuery never loaded. But in the normal case:
1. The first time through, jQuery isn't loaded, so we setTimeout().
2. Other things happen in the meantime, including the loading of resources like jQuery.
3. 10ms later, we check again.
4. Is jQuery loaded now? If not, set another timeout and go back to step 2.
5. After some number of retries, jQuery does load, and we're off.
The better way to do all of this, of course, would be to
Load jQuery first
Run your code in a ready() handler so it doesn't run until it's needed.
<script src="jquery.js"></script>
<script src="test.js"></script>
// test.js
$(document).ready(
function()
{
$('#area').text("hello");
}
);
Why doesn't the test.js file get caught in an infinite loop forever checking to see if jQuery has loaded?
setTimeout works asynchronously. It does not pause the browser; it simply asks it to execute a certain function after a certain number of milliseconds.
jquery.js gets loaded and executed in between wait() invocations.
Without that setTimeout() code, when the contents of "test.js" are evaluated the browser will immediately run into the problem of $ (jQuery) not being defined. With the setTimeout(), however, the code does not attempt to use the global jQuery symbols until it verifies that the symbols are defined.
Without the setTimeout the code fails with a runtime error. The code in the other version explicitly tests for that failure possibility to avoid it.
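The queueing behaviour described in these answers can be seen in a few lines (a minimal sketch):

```javascript
// setTimeout does not pause anything: the callback is queued and runs
// only after the current synchronous code has finished, even at 0ms.
const log = [];

log.push('before setTimeout');
setTimeout(() => {
  log.push('inside callback'); // runs last, via the event loop
}, 0);
log.push('after setTimeout');

console.log(log); // ['before setTimeout', 'after setTimeout']
```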
The setTimeout callback is placed in a separate queue (the asynchronous callback queue). Once the interpreter reaches this line, the callback is moved to that queue and parsing continues (which then executes jquery.js). After the synchronous code has executed, the browser checks the queue to see whether the timeout has elapsed, and then executes the method passed to setTimeout. By that time jquery.js is already loaded.
More on this
https://youtu.be/8aGhZQkoFbQ
JavaScript is not pre-compiled. It works "on the fly".
You can add code on the fly whenever you want, and this includes loading whole libraries. Once the browser has loaded an external JS file, it parses it, and it's all ready to use.
So if you wait for jQuery, and have the proper code to load it, it will eventually be loaded by the browser and work.
I'm using a framework which features auto-connecting to server on page load. I can disable it by passing options arguments, but the line that confuses me is this:
You can prevent this initial socket from connecting automatically by disabling io.sails.autoConnect before the first cycle of the event loop elapses.
My questions are:
When does the first cycle of the event loop elapses?
Is this behaviour the same across ALL modern (IE9+) browsers?
I have a bunch of scripts (in <body>) loading between the lib and my entry file. Does this affect when the first cycle elapses? EDIT: Yes, it does.
How can I ensure my code runs before the first cycle elapses?
Is this kind of implementation of auto-connect considered good practice?
The documentation for the source file is a little more explicit; it says "This can be disabled or configured by setting io.socket.options within the first cycle of the event loop."
Basically, what's happening is that the library contains a setTimeout(fn, 0) call, which is idiomatic for deferring work until the current code has finished. The JS standards explicitly state that JS is single-threaded: in other words, even though setTimeout and setInterval are asynchronous, they are not actually parallel in the sense that any of their code would be executing simultaneously with any other code. They wait until the current execution is over before they run. This queueing mechanism is known as the JavaScript event loop.
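A sketch of why setting the flag synchronously beats the setTimeout(fn, 0) call; the internals shown here are a simplified assumption for illustration, not the actual sails.io.js source:

```javascript
// Simplified assumption of the library's behaviour: it schedules the
// connect with setTimeout(fn, 0), so any code that runs synchronously
// within the first event-loop cycle can still flip the flag in time.
const io = { sails: { autoConnect: true } }; // stand-in, not the real library
let connected = false;

// what the library (hypothetically) does on load:
setTimeout(function () {
  if (io.sails.autoConnect) {
    connected = true; // the socket would be opened here
  }
}, 0);

// your code, still in the same event-loop cycle:
io.sails.autoConnect = false; // beats the timer, so no auto-connect
```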
I believe that what you are asked to do by the script author is to modify the source to include the relevant change, perhaps at the bottom of the file for your convenience.
It is also likely that a similar effect will be achieved by putting a <script> tag underneath the one that loads the given JS. This has not been explicitly standardized by HTML 4, but may be implicitly standardized in the new HTML 5 spec (it's a complicated interaction between different parts of the specs).
In terms of HTML5, it looks like the current specs say that there is a afterscriptexecute event and a load event which occur immediately after any remote script is loaded (or, if it's an inline script, the load event is scheduled as a task -- I am not sure when those occur). So you might be able to guarantee it without modifying the script by instead doing:
<script>
function do_not_autoload() { /* ... */ }
</script>
<script onload="do_not_autoload()" src="./path/to/sails.io.js"></script>
but I'm not sure what the compatibility table for script#onload is going to look like.
I made you a jsfiddle which can be used to grab a 'fingerprint' for different browsers to get an idea of what evaluation orders are out there in the wild. The * is the document.body.onload event. On my system it produces:
Firefox 32.0.3 : cafdbe*
Chrome 37.0.2062 : cafd*be
IE 11.0.9600 : cafd*be
In other words, the exact evaluation order of scripts relative to the load event differs from browser to browser.
I know it may sound very strange, but I need to know if there is any active/running javascript in the page.
I am in situation in which I have to run my javascript/jquery code after everything on the page is rendered and all other scripts have finished.
Is it possible to detect this?
EDIT:
Thank you all for the answers. Unfortunately, I was not able to find a solution, because I have no full control of what is going on the page.
Even if I were able to put my JavaScript at the end of the page, I think it would not be a solution either. The reason is that when the page is rendering, a function is triggered; it calls other functions, and they call others, and so on. As a result, some of the data is incorrect, and that's why I need to run my code to correct it.
I use setTimeout with 2 seconds to ensure that my code is executed last, but this is ugly...
So, thank you all, but this is more problem with the system, not the js.
JavaScript on web browsers is single-threaded (barring the use of web workers), so if your JavaScript code is running, by definition no other JavaScript code is running.*
To try to ensure that your script occurs after all other JavaScript on the page has been downloaded and evaluated and after all rendering has occurred, some suggestions:
Put the script tag for your code at the very end of the file.
Use the defer and async attributes on the tag (they'll be ignored by browsers that don't support them, but the goal is to make yours the last as much as we can).
Hook the window load event via a DOM2 style hookup (e.g., addEventListener on browsers with standards support, or attachEvent on older IE versions).
In the load event, schedule your code to run after a setTimeout with a delay of 0ms (it won't really be zero, it'll be slightly longer).
So, the script tag:
<script async defer src="yourfile.js"></script>
...and yourfile.js:
(function() {
if (window.addEventListener) {
window.addEventListener("load", loadHandler, false);
}
else if (window.attachEvent) {
window.attachEvent("onload", loadHandler);
}
else {
window.onload = loadHandler; // Or you may want to leave this off and just not support REALLY old browsers
}
function loadHandler() {
setTimeout(doMyStuff, 0);
}
function doMyStuff() {
// Your stuff here. All images in the original markup are guaranteed
// to have been loaded (or failed) by the `load` event, and you know
// that other handlers for the `load` event have now been fired since
// we yielded back from our `load` handler
}
})();
That doesn't mean that other code won't have scheduled itself to run later (via setTimeout, for instance, just like we did above but with a longer timeout), though.
So there are some things you can do to try to be last, but I don't believe there's any way to actually guarantee it without having full control of the page and the scripts running on it (I take it from the question that you don't).
(* There are some edge cases where the thread can be suspended in one place and then allow other code to run in another place [for instance, when an ajax call completes while an alert message is being shown, some browsers fire the ajax handler even though another function is waiting on the alert to be dismissed], but they're edge cases and there's still only one thing actively being done at a time.)
There is no definitive way to do this because you can't really know what the latest is that other scripts have scheduled themselves to run. You will have to decide what you want to target.
You can try to run your script after anything else that may be running when the DOM is loaded.
You can try to run your script after anything else that may be running when the page is fully loaded (including images).
There is no reliable, cross-browser way to know which of these events, the scripts in the page are using.
In either case, you hook the appropriate event and then use a setTimeout() to try to run your script after anything else that is watching those events.
So, for example, if you decided to wait until the whole page (including images) was loaded and wanted to try to make your script run after anything else that was waiting for the same event, you would do something like this:
window.addEventListener("load", function() {
setTimeout(function() {
// put your code here
}, 1);
}, false);
You would have to use attachEvent() for older versions of IE.
When using this method, you don't have to worry about where your scripts are loaded in the page relative to other scripts in the page since this schedules your script to run at a particular time after a particular event.
A way to know when multiple functions have all finished executing
This can be useful if you have to wait multiple API calls or initialisation functions
let processRemaining = 0;
async function f1() {
  processRemaining++
  await myAsyncFunction()
  processFinished()
}
async function f2() {
  processRemaining++
  await myAsyncFunction2()
  processFinished()
}
function processFinished() {
  processRemaining--
  setTimeout(() => { // this is not needed if all the functions are async
    if (processRemaining === 0) {
      // Code to execute when all the functions have finished executing
    }
  }, 1)
}
f1()
f2()
I often couple it with a freezeClic function to prevent users from interacting with the page while a script is still waiting for an ajax/async response (and optionally display a preloader icon or screen).
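For what it's worth, a modern alternative to a hand-rolled counter is Promise.all, which resolves once every listed task has finished (myAsyncFunction and myAsyncFunction2 stand in for your real API calls):

```javascript
// Promise.all resolves once every listed async task has finished.
function myAsyncFunction() {
  return new Promise((resolve) => setTimeout(() => resolve('one'), 10));
}

function myAsyncFunction2() {
  return new Promise((resolve) => setTimeout(() => resolve('two'), 5));
}

Promise.all([myAsyncFunction(), myAsyncFunction2()]).then((results) => {
  // runs only when all the functions have finished executing;
  // results keep the order of the input array, not completion order
  console.log(results); // ['one', 'two']
});
```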
I'm adding dynamic script by creating a script tag, setting its source and then adding the tag to the DOM. It works as expected, the script is getting downloaded and executes. However sometimes I would like to cancel script execution before it was downloaded. So I do it by removing the script tag from the DOM.
In IE9, Chrome and Safari it works as expected - after the script tag is removed from the DOM it doesn't execute.
However, it doesn't work in Firefox: the script executes even if I remove it from the DOM or change its src to "" or anything else I tried; I cannot stop the execution of a script after it was added to the DOM. Any suggestions?
Thanks!
How about some sort of callback arrangement? Rather than have the dynamically added script simply execute itself when it loads, have it call a function within your main script which will decide whether to go ahead. You could have the main script's function simply return true or false (execute / don't execute), or it could accept a callback function as a parameter so that it can decide exactly when to start the dynamic script - that way if you had several dynamic scripts the main script could wait until they're all loaded and then execute them in a specific order.
In your main script JS:
function dynamicScriptLoaded(scriptId,callback) {
if (scriptId === something && someOtherCondition())
callback();
// or store the callback for later, put it on a timeout, do something
// to sequence it with other callbacks from other dynamic scripts,
// whatever...
}
In your dynamically added script:
function start() {
doMyThing();
doMyOtherThing();
}
if (window.dynamicScriptLoaded)
dynamicScriptLoaded("myIdOrName",start);
else
start();
The dynamic script checks to see if there is a dynamicScriptLoaded() function defined, expecting it to be in the main script (feel free to upgrade this to a more robust test, i.e., checking that dynamicScriptLoaded actually is a function). If it is defined it calls it, passing a callback function. If it isn't defined it assumes it is OK to go ahead and execute itself - or you can put whatever fallback functionality there that you like.
UPDATE: I changed the if test above since if(dynamicScriptLoaded) would give an error if the function didn't exist, whereas if(window.dynamicScriptLoaded) will work. Assuming the function is global - obviously this could be changed if using a namespacing scheme.
In the year since I originally posted this answer I've become aware that the yepnope.js loader allows you to load a script without executing it, so it should be able to handle the situation blankSlate mentioned in the comment below. yepnope.js is only 1.7kb.
Here's my issue - I need to dynamically download several scripts using jQuery.getScript() and execute certain JavaScript code after all the scripts were loaded, so my plan was to do something like this:
function GetScripts(scripts, callback)
{
    var len = scripts.length;
    for (var i = 0; i < scripts.length; i++)
    {
        jQuery.getScript(scripts[i], function()
        {
            len--;
            // execute the callback function if this is the last script that loaded
            if (len === 0)
                callback();
        });
    }
}
This will only work reliably if we assume that the script.onload events for each script fire and execute sequentially and synchronously, so that there would never be a situation where two or more of the event handlers pass the check for (len == 0) and execute the callback method.
So my question - is that assumption correct and if not, what's the way to achieve what I am trying to do?
No, JavaScript is not multi-threaded. It is event driven and your assumption of the events firing sequentially (assuming they load sequentially) is what you will see. Your current implementation appears correct. I believe jQuery's .getScript() injects a new <script> tag, which should also force them to load in the correct order.
Currently JavaScript is not multithreaded, but things will change in the near future. There is a new thing in HTML5 called a Worker. It allows you to do some work in the background.
But it is currently not supported by all browsers.
The JavaScript (ECMAScript) specification does not define any threading or synchronization mechanisms.
Moreover, the JavaScript engines in our browsers are deliberately single-threaded, in part because allowing more than one UI thread to operate concurrently would open an enormous can of worms. So your assumption and implementation are correct.
As a side note, another commenter alluded to the fact that any JavaScript engine vendor could add threading and synchronization features, or a vendor could enable users to implement those features themselves, as described in this article: Multi-threaded JavaScript?
JavaScript is absolutely not multithreaded - you have a guarantee that any handler you use will not be interrupted by another event. Any other events, like mouse clicks, XMLHttpRequest returns, and timers will queue up while your code is executing, and run one after another.
No, all the browsers give you only one thread for JavaScript.
To be clear, the browser JS implementation is not multithreaded.
The language, JS, can be multi-threaded.
The question does not apply here however.
What applies here is that getScript() is asynchronous (it returns immediately and gets queued); however, the browser will execute DOM-attached <script> content sequentially, so your dependent JS code will see them loaded sequentially. This is a browser feature and is not dependent on JS threading or on the getScript() call.
If getScript() retrieved scripts with XMLHttpRequest, setTimeout(), WebSockets or any other async call, then your scripts would not be guaranteed to execute in order. However, your callback would still get called after all scripts execute, since the execution context of your 'len' variable is in a closure, which persists its context through asynchronous invocations of your function.
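The closure point can be illustrated in isolation (makeCountdown and onDone are illustrative names, not part of jQuery):

```javascript
// Each call to makeCountdown creates a closure over len, so the
// counter survives across however many async callbacks decrement it.
function makeCountdown(count, onDone) {
  let len = count; // captured by the returned function
  return function oneFinished() {
    len--;
    if (len === 0) {
      onDone();
    }
  };
}

let allDone = false;
const oneFinished = makeCountdown(3, () => { allDone = true; });

oneFinished();
oneFinished();
oneFinished(); // third call drives len to 0 and fires onDone
console.log(allDone); // true
```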
JS in general is single threaded. However HTML5 Web workers introduce multi-threading. Read more at http://www.html5rocks.com/en/tutorials/workers/basics/
Thought it might be interesting to try this out with a "forced", delayed script delivery ...
added two available scripts from google
added delayjs.php as the 2nd array element; delayjs.php sleeps for 5 seconds before delivering an empty js object
added a callback that "verifies" the existence of the expected objects from the script files
added a few js commands that are executed on the line after the GetScripts() call, to "test" sequential js commands
The result with the script load is as expected: the callback is triggered only after the last script has loaded. What surprised me was that the js commands following the GetScripts() call were triggered without waiting for the last script to load. I was under the impression that no js commands would be executed while the browser was waiting for a js script to load...
var scripts = [];
scripts.push('http://ajax.googleapis.com/ajax/libs/prototype/1.6.1.0/prototype.js');
scripts.push('http://localhost/delayjs.php');
scripts.push('http://ajax.googleapis.com/ajax/libs/scriptaculous/1.8.3/scriptaculous.js');
function logem() {
console.log(typeof Prototype);
console.log(typeof Scriptaculous);
console.log(typeof delayedjs);
}
GetScripts( scripts, logem );
console.log('Try to do something before GetScripts finishes.\n');
$('#testdiv').text('test content');
<?php
sleep(5);
echo 'var delayedjs = {};';
You can probably get some kind of multithreadedness if you create a number of frames in an HTML document and run a script in each of them, each calling a function in the main frame that makes sense of the results of those functions.