I'm a little bit confused about how browsers handle JavaScript events.
Let's say I have two event handlers attached to buttons A and B, and both handlers take exactly the same time to complete. If I click button A first and button B next, is it true that the handler for button A is always executed first (because the event loop is a FIFO queue), but that exactly when each of them finishes is completely unpredictable? If so, what actually determines this order?
Yes. The order in which event handlers are executed is guaranteed, and in practice they will not overlap.
This is the beauty of the event loop as a concurrency model. You don't have to think about threading issues like deadlocks, livelocks and race conditions most of the time (though not always).
The order of execution is simple: JavaScript in the browser is single threaded most of the time, so in practice you do not have to worry about the order in which things execute.
However, the fact that the order of mouse events is guaranteed has hardly anything to do with JavaScript. It is not part of the JavaScript language but of something called the DOM API; the DOM (Document Object Model) is how JavaScript interacts with your browser and the HTML you write.
Things called host objects are defined in the JavaScript specification as external objects that JS in the browser works with, and their behavior in this case is specified by the DOM API.
Whether or not the order in which DOM events are registered is guaranteed is not part of JavaScript but part of that API. More specifically, it is defined right here. So, to your question: yes, the order of event execution is certain, except for control keys (like Ctrl+Alt+Delete), which can mess the order of evaluation up.
The JavaScript engine is single threaded. All of your event handlers run sequentially; the click handler for A will be called and will finish before the handler for B ever starts. You can see this by sleep()ing (busy-waiting) in one handler and verifying that the second handler does not start until the first has finished.
Note that setTimeout is not valid for this test, because it essentially registers a function with the engine to be called back at a later time. setTimeout returns immediately.
This fiddle should demonstrate this behavior.
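Since the fiddle itself isn't reproduced here, here is a minimal sketch of the same idea (the button ids a and b are assumptions, not from the original):

// Two buttons with ids "a" and "b" are assumed to exist in the page.
function blockFor(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* busy-wait to simulate a slow, synchronous handler */ }
}

document.getElementById('a').addEventListener('click', () => {
  console.log('A started');
  blockFor(3000);            // block the single thread for ~3 seconds
  console.log('A finished');
});

document.getElementById('b').addEventListener('click', () => {
  console.log('B started');  // even if B is clicked while A is busy,
});                          // this never logs before "A finished"

Clicking A and then quickly clicking B always logs A started, A finished, B started, in that order.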
Well, the commands are indeed in a FIFO queue when executed by JavaScript. However, the handlers may take different amounts of time to send you their results; in that case the response from handler B may come back earlier and the response from handler A later.
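In other words, the handlers themselves start in click order, but any asynchronous work they kick off can finish in any order. A rough sketch (the endpoints are placeholders, not real URLs):

document.getElementById('a').addEventListener('click', () => {
  fetch('/slow-endpoint').then(() => console.log('A response')); // placeholder URL
});
document.getElementById('b').addEventListener('click', () => {
  fetch('/fast-endpoint').then(() => console.log('B response')); // placeholder URL
});
// Clicking A then B: the handlers run in that order,
// but "B response" may well be logged before "A response".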
Related
Conceptually, just having one queue for jobs seems to be sufficient for most use cases.
What are the reasons for having multiple queues and distinguishing those into "microtasks" and (macro)"tasks"?
Having multiple (macro) task queues allows for task prioritization.
For instance, a User Agent (UA) can choose to prioritize a user event (e.g. a click event) over a network event, even if the latter was actually registered by the system first, because the former is probably more visible to the user and requires lower latency.
In HTML this is allowed by the first step of the Event Loop's Processing model, which states that the UA must choose the next task to execute from one of its task queues.
(note: HTML specs do not require that there are multiple queues, but they do define multiple task sources to guarantee the execution order of similar tasks).
Now, why have yet another beast called the microtask queue?
We can find a few early discussions about how it should be implemented, but I didn't dig far enough to find who proposed this idea first, or for what use case.
However, from the discussions I found, we can see that a few proposals needing such a mechanism were in the works:
Mutation Observers
Now-deprecated ES Object.observe()
At-that-time-incoming ES Promises
Also-incoming-at-that-time-and-I-don't-know-why-it's-cited ES WeakRefs
HTML Custom Elements callback
Since the first two were also the first to be implemented in browsers, we can probably say that their use case was the main reason for implementing this new kind of queue.
Both actually did similar things: they listened for changes on an object (or the DOM tree), and coalesced all the changes that occurred during a job into a single event (not to be read as Event).
One could argue that this event could have been queued in the next (macro) task, with the highest priority, except that the Event Loop was already a bit complex, and not every job necessarily came from a task.
For instance, rendering is actually part of every Event Loop iteration, except that most of the time it exits early because it isn't yet time to render.
So if you do your DOM modifications during a rendering frame, you could have the modifications rendered, and only after the whole rendering had taken place would you get the callback.
Since the main use case for observers is to act on the observed changes before they trigger performance-heavy side effects, I guess you can see why it was necessary to have a means of inserting the callback right after the job that made the modifications.
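A rough illustration of that coalescing behaviour with MutationObserver (run in a browser page; document.body is just a convenient target):

// Several synchronous DOM changes made in one job are delivered to the
// observer as a single batch, in a microtask, before the next (macro) task
// and before the browser gets a chance to render.
const observer = new MutationObserver((records) => {
  console.log('observer callback, records:', records.length); // one callback, three records
});
observer.observe(document.body, { childList: true });

document.body.appendChild(document.createElement('div'));
document.body.appendChild(document.createElement('div'));
document.body.appendChild(document.createElement('div'));

setTimeout(() => console.log('next (macro) task'), 0);
console.log('end of current job');
// Logged order: "end of current job", then the observer callback, then "next (macro) task".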
PS: At that time I didn't know much about the Event Loop, I was far from spec matters, and I may have introduced some anachronisms in this answer.
It is my understanding that the node event loop will continue to handle requests until the event loop is empty, at which point it will look to the event queue to complete the blocking I/O requests.
My question is: what happens if the event loop never becomes empty? Not due to bad code (i.e. a never-ending loop) but due to constant client requests (think something like Google, which gets never-ending requests)?
I realize there is a possibility I am misunderstanding a fundamental aspect of how client requests are handled by a server.
There are actually several different phases of the event loop (timers, I/O, check events, pending callbacks, etc.) and they are checked in a circular order. In addition, some things (like promises and other microtasks) go to the front of the line no matter which phase of the event loop is currently being processed.
It is possible that a never ending set of one type of event can block the event queue from serving other types of events. That would be a design/implementation problem that needs to be prevented.
You can read a bit more about the different types of things in the event loop here: https://developer.ibm.com/tutorials/learn-nodejs-the-event-loop/ and https://www.geeksforgeeks.org/node-js-event-loop/ and https://snyk.io/blog/nodejs-how-even-quick-async-functions-can-block-the-event-loop-starve-io.
While it is possible to overload the event loop such that it doesn't get out of one phase, it's not very common, because most processing in node.js consists of multiple events, which gives other things a chance to interleave. For example, processing an incoming http request consists of connecting, reading, writing, closing, etc., and the processing of that request may involve other types of events. So, it can happen that you overload one type of event (I've done it only once in my own code, and that was because of poorly written communication between the main thread and a bunch of WorkerThreads; it was easily fixed once I realized what the problem was).
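As a small, contrived sketch of that kind of starvation in node.js: a callback that keeps rescheduling itself with process.nextTick never lets the loop move on to the next phase, so the timer below never fires, whereas the setImmediate variant yields back to the loop between callbacks and the timer does fire.

setTimeout(() => console.log('timer fired'), 100); // never runs with the nextTick variant

function starve() {
  process.nextTick(starve); // the nextTick queue is drained before the loop continues
}
starve();

// By contrast, a self-rescheduling setImmediate lets other phases run,
// so the timer above would fire:
// function polite() { setImmediate(polite); }
// polite();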
I got confused when I read that if I set the same delay in two setTimeout calls, the order in which the functions will be called cannot always be predicted (no reasoning was provided).
setTimeout(()=> console.log("first"), 5000);
setTimeout(()=> console.log("second"), 5000);
As I understand it, the setTimeouts are added to the event table, which keeps track of the events (in this case the event is the timer); it then sends them to the event queue, which stores the order in which the functions should be executed; then the event loop checks the queue and sends those functions to the call stack.
Since the first setTimeout (logging "first") is added FIRST to the event table, I would imagine it being the FIRST to be sent to the event queue (also considering the event queue is a FIFO data structure) and the FIRST to be sent to the call stack during the event loop.
I couldn't find anything about that online, nor any reasons why it could be executed in random order, and would be happy about an answer!
Also, on a side note: could someone please explain when race conditions could be possible? If the above is not true, I cannot think of any case, since JavaScript is single threaded.
Thanks!!
The specification states the following (emphasis added):
If method context is a Window object, wait until the Document associated with method context has been fully active for a further timeout milliseconds (not necessarily consecutively). Otherwise, method context is a WorkerGlobalScope object; wait until timeout milliseconds have passed with the worker not suspended (not necessarily consecutively).
Wait until any invocations of this algorithm that had the same method context, that started before this one, and whose timeout is equal to or less than this one’s, have completed.
As far as I understand, this should ensure that execution happens in order, and you will always see first and then second.
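A quick way to see the rule in action: with equal delays the registration order wins, and only a smaller timeout in the later call lets it jump ahead.

setTimeout(() => console.log('first'), 5000);
setTimeout(() => console.log('second'), 5000);
// always logs "first", then "second"

setTimeout(() => console.log('slow'), 50);
setTimeout(() => console.log('fast'), 5);
// logs "fast", then "slow": the second call asks for a smaller timeout,
// so it does not have to wait for the first one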
Is there a set number of instructions that get processed before checking the event queue/per tick/per loop (ways of saying the same thing, I think?)
No, there is not.
In the node.js architecture, when an event is pulled from the event queue, it's tied to a callback. The interpreter calls that callback and that callback runs to completion. Only when it returns and the stack is again empty does it check to see if there is another event in the event queue to run.
So, it has absolutely nothing to do with a number of instructions. node.js runs your Javascript as single-threaded, so there is no time slicing between pieces of Javascript, which it sounds like your question was perhaps anticipating. Once a callback corresponding to an event in the event queue is called, that callback runs until it finishes and returns control back to the interpreter.
So, it goes like this:
Pull event from the event queue
Call the Javascript callback associated with that event
Javascript callback runs until completion and then returns from the callback
node.js internals check event queue for next event. If an event is there, go to step 1 and repeat
If no event is there, go to sleep until an event is placed into the event queue.
In reality, this is a bit of a simplification because there are several different types of event queues with a priority order for which one gets to go first, but this describes the general process as it relates to your question.
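Purely as an illustration of that cycle (the names eventQueue and waitForEvent are made up for this sketch and are not part of any real node.js API):

// Toy model of the loop described above: pull an event, run its callback
// to completion, then look for the next one.
function runEventLoop(eventQueue, waitForEvent) {
  while (true) {
    if (eventQueue.length === 0) {
      waitForEvent(eventQueue);   // sleep until something is placed into the queue
    }
    const { callback, data } = eventQueue.shift(); // pull the next event (FIFO)
    callback(data);               // runs to completion; nothing can pre-empt it
  }
}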
There is no set number of instructions that get processed before checking the event queue. Each message is run to completion. From the Mozilla documentation (https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop):
Each message is processed completely before any other message is processed. This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it may be stopped at any point by the runtime system to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
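One common way to cut one long message into several shorter ones is to process work in chunks and hand control back to the event loop between chunks, for example with setTimeout (processInChunks is just an illustrative helper, not a standard API):

// Process a big array in small chunks so clicks, scrolling and rendering
// can be handled between chunks instead of being blocked for the whole job.
function processInChunks(items, handleItem, chunkSize = 500) {
  let i = 0;
  function doChunk() {
    const end = Math.min(i + chunkSize, items.length);
    for (; i < end; i++) {
      handleItem(items[i]);
    }
    if (i < items.length) {
      setTimeout(doChunk, 0); // queue the next chunk as a separate message
    }
  }
  doChunk();
}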
Does the code (e.g. functions) execute at the same time, or does it follow the order in which it was written (top to bottom)? I know that order matters in HTML; what about JavaScript?
For instance, if there are two function calls one after the other, will they get executed simultaneously or one after the other even if they have nothing to do with each other?
It may seem as if Javascript functions are executing in an unpredictable order because the model for Javascript in a browser is event-driven. This means that a Javascript program typically attaches event handlers to DOM elements and these are triggered by user actions such as clicking or moving the pointer over an element. However, the script that sets up the event handlers runs as a traditional structured imperative program.
A further complication is that modern Javascript applications make extensive use of asynchronous functions. This means that a function call might return quickly but will have set in motion an action which completes at a later time. An obvious example is the sending of requests to a server in so-called AJAX applications. Typically the request function is passed a callback function which is called when the request completes. However the Javascript program will go on to the next statement without waiting for the completion of the request. This can be somewhat confusing if you aren't thinking clearly enough about what your program is actually doing.
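For example, with a fetch-based request as a stand-in (the URL is a placeholder), the statement after the call runs before the response is handled:

console.log('before request');
fetch('/api/data')                                        // placeholder URL
  .then((response) => response.json())
  .then((data) => console.log('response handled', data)); // runs later, when the response arrives
console.log('after request');                             // logs before "response handled"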
Another example that you might sometimes encounter is the launching of animations in jQuery. These too work asynchronously and you can pass a callback function that runs after the animation completes. Once again this can be surprising sometimes if you expect the next statement to be executed after the animation completes rather than after it starts.
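For instance, with a jQuery animation (assuming jQuery is loaded and an element with id box exists):

$('#box').fadeOut(1000, function () {
  console.log('animation finished'); // runs about a second later, when the animation completes
});
console.log('next statement');       // runs immediately, as soon as the animation starts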
It occurs in the order it was written (with various exceptions). More specifically, it's an imperative, structured, object-oriented, prototype-based scripting language :)
See Imperative Programming