How many events can Node.js queue?

From what I see, if an event in Node takes a "long time" to be dispatched, Node creates some kind of "queue of events", and they are triggered as soon as possible, one by one.
How long can this queue be?

While this may seem like a simple question, it is actually a rather complex problem; unfortunately, there's no simple number that anyone can give you.
First: wall time doesn't really play a part in anything here. All events are dispatched in the same fashion, whether or not things are taking "a long time." In other words, all events pass through a "queue."
Second: there is no single queue. There are many places where different kinds of events can be dispatched into JS. (The following assumes you know what a tick is.)
There are the things you (or the libraries you use) pass to process.nextTick(). They are called at the end of the current tick until the nextTick queue is empty.
There are the things you (or the libraries you use) pass to setImmediate(). They are called at the start of the next tick. (This means that nextTick tasks can add things to the current tick indefinitely, preventing other operations from happening, whereas setImmediate tasks can only add things to the queue for the next tick.) A minimal ordering sketch follows this list.
I/O events are handled by libuv via epoll/kqueue/IOCP on Linux/Mac/Windows respectively. When the OS notifies libuv that I/O has happened, it in turn invokes the appropriate handler in JS. A given tick of the event loop may process zero or more I/O events; if a tick takes a long time, I/O events will queue in an operating system queue.
Signals sent by the OS.
Native code (C/C++) executed on a separate thread may invoke JS functions. This is usually accomplished through the libuv work queue.
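To make the first two of these concrete, here is a minimal ordering sketch using only the standard process.nextTick() and setImmediate() APIs:

setImmediate(() => console.log('setImmediate: start of a later tick'));

process.nextTick(() => console.log('nextTick: end of the current tick'));

console.log('synchronous: current tick');

// Output:
// synchronous: current tick
// nextTick: end of the current tick
// setImmediate: start of a later tick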
Since there are many places where work may be queued, it is not easy to answer "how many items are currently queued", much less what the absolute limits of those queues are. Essentially, the hard limit for the size of your task queues is available RAM.
In practice, your app will:
Hit V8 heap constraints
For I/O, max out the number of allowable open file descriptors.
...well before the size of any queue becomes problematic.
If you're just interested in whether or not your app is under heavy load, toobusy may be of interest -- it times each tick of the event loop to determine whether or not your app is spending an unusual amount of time processing each tick (which may indicate that your task queues are very large).
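For illustration, a minimal sketch of how toobusy is typically wired into a server -- this assumes the toobusy-js package, and the port and 503 response are arbitrary choices:

const http = require('http');
const toobusy = require('toobusy-js');

http.createServer((req, res) => {
  if (toobusy()) {
    // The event loop is lagging behind; shed load rather than queue more work.
    res.writeHead(503);
    return res.end('Server too busy, try again later.');
  }
  res.end('OK');
}).listen(3000);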

Handlers for a specific event are called synchronously (in the order they were added) as soon as the event is emitted; they are not delayed at all.
The total number of event handlers is limited only by v8 and/or the amount of available RAM.
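A short sketch of that synchronous dispatch, using the standard EventEmitter API:

const EventEmitter = require('events');
const emitter = new EventEmitter();

emitter.on('ping', () => console.log('handler 1'));
emitter.on('ping', () => console.log('handler 2'));

console.log('before emit');
emitter.emit('ping'); // both handlers run synchronously, in registration order
console.log('after emit');
// Output: before emit, handler 1, handler 2, after emit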

I believe you're talking about operations that can take an undefined amount of time to complete, such as an http request or filesystem access.
Node gives you a method to complete these types of operations asynchronously, meaning that you can tell node, or a 3rd party library, to start an operation, and then call some code (a function that you define) to inform you when the operation is complete. This can be done through event listeners, or callback functions, both of which have their own limitations.
With event listeners the maximum number of listeners you can have depends on the maximum array size of your environment. In the case of node.js the JavaScript engine is V8, but according to this post there is a maximum set out by the 5th ECMA standard of ~4 billion elements, a limit you shouldn't ever reach in practice.
With callbacks the limitation you have is the max call stack size, meaning how deep your functions can call each other. For instance, you can have a callback calling a callback calling a callback, and so on. The call stack size dictates how many callbacks calling callbacks you can have. Note that the call stack size can be a limitation with event listeners as well, as they're essentially callbacks that can be executed multiple times.
And these are the limitations with each.
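If you're curious where that call-stack limit sits in your environment, a quick (and deliberately rough) probe is to recurse until the engine throws:

// Measures approximately how deep the call stack can go before a
// RangeError ("Maximum call stack size exceeded") is thrown.
function probeDepth(n) {
  try {
    return probeDepth(n + 1);
  } catch (e) {
    return n;
  }
}
console.log('max call stack depth ~', probeDepth(0));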

Related

Why is there a distinction between microtask and (macro)task in JavaScript?

Conceptually just having one queue for jobs seems to be sufficient for most use-cases.
What are the reasons for having multiple queues and distinguishing those into "microtasks" and (macro)"tasks"?
Having multiple (macro) task queues allows for task prioritization.
For instance, a User Agent (UA) can choose to prioritize a user event (e.g. a click event) over a network event, even if the latter was actually registered by the system first, because the former is probably more visible to the user and requires lower latency.
In HTML this is allowed by the first step of the Event Loop's Processing model, which states that the UA must choose the next task to execute from one of its task queues.
(note: HTML specs do not require that there are multiple queues, but they do define multiple task sources to guarantee the execution order of similar tasks).
Now, why have yet another beast called the microtask queue?
We can find a few early discussions about "how" it should be implemented, but I didn't dig far enough to find who first proposed the idea, or for what use case.
However, from the discussions I found, we can see that a few proposals needing such a mechanism were in the works:
Mutation Observers
Now-deprecated ES Object.observe()
At-that-time-incoming ES Promises,
Also-incoming-at-that-time-and-I-don't-know-why-it's-cited ES WeakRefs
HTML Custom Elements callback
Since the first two were also the first to be implemented in browsers, we can probably say that their use case was the main reason for implementing this new kind of queue.
Both actually did similar things: they listened for changes on an object (or the DOM tree), and coalesced all the changes that occurred during a job into a single event (not to be read as Event).
One could argue that this event could have been queued in the next (macro)task, with the highest priority, except that the Event Loop was already a bit complex, and not every job came from a task.
For instance, the rendering is actually a part of every Event Loop iteration, except that most of the time it exits early because it isn't yet time to render.
So if you did your DOM modifications during a rendering frame, you could have the modifications rendered, and only after the whole rendering took place would you get the callback.
Since the main use case for observers is to act on the observed changes before they trigger performance-heavy side effects, I guess you can see how it was necessary to have a means to insert our callback right after the job that made the modifications.
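The practical effect is easy to observe. In this minimal sketch, the promise callback goes to the microtask queue and the setTimeout callback to a (macro)task queue:

console.log('script start');

setTimeout(() => console.log('(macro)task: setTimeout'), 0);

Promise.resolve().then(() => console.log('microtask: promise'));

console.log('script end');

// Output:
// script start
// script end
// microtask: promise      <- runs right after the current job
// (macro)task: setTimeout <- waits for a later event loop iteration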
PS: At that time I didn't know much about the Event Loop, I was far from specs matters and I may have put some anachronisms in this answer.

When does the event loop block the application from I/O actions?

I just read an article about the event loop in JavaScript.
I found two contradictory phrases and I would be glad if someone could clarify.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog.
A very interesting property of the event loop model is that JavaScript, unlike a lot of other languages, never blocks. Handling I/O is typically performed via events and callbacks, so when the application is waiting for an IndexedDB query to return or an XHR request to return, it can still process other things like user input.
So, when is the first one true and when is the second one true?
"A very interesting property of the event loop model is that
JavaScript, unlike a lot of other languages, never blocks.
This is misleading. Without clever programming, JavaScript would always block the UI thread, because runtime logic always blocks the UI, by design. At a smooth sixty frames a second, that means your application logic must always cooperatively yield control (or simply complete execution) within about 16 milliseconds, otherwise your UI will freeze or stutter.
Because of this, most JavaScript APIs that might take a long time (e.g. network requests) are designed to use techniques (e.g. callbacks, promises) that circumvent this problem, so that they do not block the event loop and the UI stays responsive.
Put another way: host environments (e.g. a Web browser or a Node.js runtime) are specifically designed around an event-based programming model (originally inspired by environments like HyperCard on the Mac). The host environment can be asked to perform a long-running task (e.g. run a timer) without blocking the main thread of execution, and your program is notified later, via an "event", when the long-running task is complete, so it can pick up where it left off.
Both are correct, even though I agree they are somewhat confusingly expressed.
So by points:
It's true that if a synchronous task takes too long to complete, the event loop "gets stuck" there and then all other queued tasks can't run till it finishes.
Here it is talking about asynchronous tasks: even if an HTTP request, an I/O request, or any other async operation takes a long time to complete, all the synchronous tasks can keep doing their job, like processing user input.
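As a concrete illustration of the second point (the file path here is arbitrary):

const fs = require('fs');

fs.readFile('/etc/hostname', 'utf8', (err, data) => {
  // Runs later, once the OS has finished the read.
  console.log('file contents:', err ? err.message : data.trim());
});

console.log('this line runs first; the read did not block the event loop');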
There are two types of code in JavaScript:
Synchronous (it's like going one by one).
Asynchronous (it's like deferring work to the future).
Synchronous code
Say you want to find the prime numbers from 1 to 10000000. With synchronous code you write a function that performs the calculation and finds the primes in the given range. But what happens with synchronous code? The JavaScript engine is not able to do any other task until that calculation finishes.
Asynchronous Code
If you wrap the same code inside a callback, most commonly with the setTimeout method, JavaScript puts that function into the event queue and performs other operations in the meantime. When the timer expires, the callback fires, but only once there is nothing left on the call stack: the engine asks the event loop to pass it the first thing waiting in the event queue. So this is more about finding idle time to perform the heavy operation.
Use JavaScript workers to perform heavy mathematical tasks, not setTimeout, because the function will still block the engine once it is on the call stack.
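A sketch of what that advice suggests, using Node's built-in worker_threads module; the prime-counting workload and the range are illustrative:

// Main thread and worker share this one file for brevity.
const { Worker, isMainThread, parentPort, workerData } = require('worker_threads');

function countPrimes(limit) {
  let count = 0;
  for (let n = 2; n <= limit; n++) {
    let prime = true;
    for (let d = 2; d * d <= n; d++) {
      if (n % d === 0) { prime = false; break; }
    }
    if (prime) count++;
  }
  return count;
}

if (isMainThread) {
  const worker = new Worker(__filename, { workerData: 10000000 });
  worker.on('message', (count) => console.log('primes found:', count));
  console.log('the main thread stays responsive while the worker crunches numbers');
} else {
  // The heavy CPU work happens here, off the main event loop.
  parentPort.postMessage(countPrimes(workerData));
}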

Bull Queue Concurrency Questions

I need help understanding how Bull Queue (bull.js) processes concurrent jobs.
Suppose I have 10 Node.js instances that each instantiate a Bull Queue connected to the same Redis instance:
const bullQueue = require('bull');
const queue = new bullQueue('taskqueue', {...})
const concurrency = 5;
queue.process('jobTypeA', concurrency, job => {...do something...});
Does this mean that globally across all 10 node instances there will be a maximum of 5 (concurrency) concurrently running jobs of type jobTypeA? Or am I misunderstanding and the concurrency setting is per-Node instance?
What happens if one Node instance specifies a different concurrency value?
Can I be certain that jobs will not be processed by more than one Node instance?
The TL;DR is: under normal conditions, jobs are being processed only once. If things go wrong (say Node.js process crashes), jobs may be double processed.
Quoting from Bull's official README.md:
Important Notes
The queue aims for an "at least once" working strategy. This means that in some situations, a job could be processed more than once. This mostly happens when a worker fails to keep a lock for a given job during the total duration of the processing.
When a worker is processing a job it will keep the job "locked" so other workers can't process it.
It's important to understand how locking works to prevent your jobs from losing their lock - becoming stalled - and being restarted as a result. Locking is implemented internally by creating a lock for lockDuration on interval lockRenewTime (which is usually half lockDuration). If lockDuration elapses before the lock can be renewed, the job will be considered stalled and is automatically restarted; it will be double processed. This can happen when:
The Node process running your job processor unexpectedly terminates.
Your job processor was too CPU-intensive and stalled the Node event loop, and as a result, Bull couldn't renew the job lock (see #488 for how we might better detect this). You can fix this by breaking your job processor into smaller parts so that no single part can block the Node event loop. Alternatively, you can pass a larger value for the lockDuration setting (with the tradeoff being that it will take longer to recognize a real stalled job).
As such, you should always listen for the stalled event and log this to your error monitoring system, as this means your jobs are likely getting double-processed.
As a safeguard so problematic jobs won't get restarted indefinitely (e.g. if the job processor always crashes its Node process), jobs will be recovered from a stalled state a maximum of maxStalledCount times (default: 1).
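For illustration, a minimal sketch of the monitoring the README recommends -- the logging call here is a stand-in for whatever error-tracking system you use:

queue.on('stalled', (job) => {
  // A stalled job lost its lock and will be reprocessed; this is your
  // signal that double processing may be happening.
  console.error(`job ${job.id} stalled and may be double processed`);
});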
Bull is designed for processing jobs concurrently with "at least once" semantics, although if the processors are working correctly, i.e. not stalling or crashing, it is in fact delivering "exactly once". However you can set the maximum stalled retries to 0 (maxStalledCount https://github.com/OptimalBits/bull/blob/develop/REFERENCE.md#queue) and then the semantics will be "at most once".
Having said that I will try to answer to the 2 questions asked by the poster:
What happens if one Node instance specifies a different concurrency value?
I will assume you mean "queue instance". If so, the concurrency is specified in the processor. If the concurrency is X, what happens is that at most X jobs will be processed concurrently by that given processor.
Can I be certain that jobs will not be processed by more than one Node instance?
Yes, as long as your job does not crash or your max stalled jobs setting is 0.
I spent a bunch of time digging into it as a result of facing a problem with too many processor threads.
The short story is that bull's concurrency is at a queue object level, not a queue level.
If you dig into the code the concurrency setting is invoked at the point in which you call .process on your queue object. This means that even within the same Node application if you create multiple queues and call .process multiple times they will add to the number of concurrent jobs that can be processed.
One contributor posted the following:
Yes, it was a little surprising for me too when I used Bull the first time. Queue options are never persisted in Redis. You can have as many Queue instances per application as you want, each can have different settings. The concurrency setting is set when you're registering a processor; it is in fact specific to each process() function call, not the Queue. If you use named processors, you can call process() multiple times. Each call will register N event loop handlers (with Node's process.nextTick()), by the amount of concurrency (default is 1).
So the answer to your question is: yes, your jobs WILL be processed by multiple Node instances if you register process handlers in multiple Node instances.
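To make that concrete, here is a hedged sketch (the queue name and job names are made up) of how concurrency accumulates per process() call within a single Node process:

const Queue = require('bull');
const queue = new Queue('taskqueue');

// Each process() registration gets its own concurrency budget.
queue.process('jobTypeA', 5, async (job) => { /* ... */ });
queue.process('jobTypeB', 5, async (job) => { /* ... */ });

// This one Node process can now run up to 10 jobs at once (5 of each type).
// A second process running the same code would add 10 more.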
Ah Welcome! This is a meta answer and probably not what you were hoping for but a general process for solving this:
Read the documentation ultra carefully to identify which guarantees your solution aims to provide:
You can specify a concurrency argument. Bull will then call your handler in parallel respecting this maximum value.
I personally don't really understand this or the guarantees that bull provides. Since it's not super clear:
Dive into source to better understand what is actually happening. I usually just trace the path to understand:
https://github.com/OptimalBits/bull/blob/f05e67724cc2e3845ed929e72fcf7fb6a0f92626/lib/queue.js#L629
https://github.com/OptimalBits/bull/blob/f05e67724cc2e3845ed929e72fcf7fb6a0f92626/lib/queue.js#L651
https://github.com/OptimalBits/bull/blob/f05e67724cc2e3845ed929e72fcf7fb6a0f92626/lib/queue.js#L658
... more this is pretty big :p
If the implementation and guarantees offered are still not clear, then create test cases to try and invalidate your assumptions. It sounds like:
Initialize processors for the same queue with 2 different concurrency values
Create a queue and two workers, set a concurrency level of 1, and give each worker a callback that logs the message and then times out; enqueue 2 events and observe whether both are processed concurrently or whether processing is limited to 1 at a time
IMO the biggest thing is:
Can I be certain that jobs will not be processed by more than one Node instance?
If exclusive message processing is an invariant and violating it would result in incorrectness for your application, then even with great documentation I would highly recommend performing due diligence on the library :p
Looking into it more, I think Bull doesn't handle being distributed across multiple Node instances at all, so the behavior is at best undefined.

What would happen if a variable were manipulated more than once at the exact same time? Is it possible? [duplicate]

Let's assume I run this piece of code.
var score = 0;
for (var i = 0; i < arbitrary_length; i++) {
    async_task(i, function() { score++; }); // increment callback function
}
In theory I understand that this presents a data race, and two threads trying to increment at the same time may result in a single increment; however, Node.js (and JavaScript) is known to be single threaded. Am I guaranteed that the final value of score will be equal to arbitrary_length?
Am I guaranteed that the final value of score will be equal to arbitrary_length?
Yes, as long as all async_task() calls call the callback once and only once, you are guaranteed that the final value of score will be equal to arbitrary_length.
It is the single-threaded nature of Javascript that guarantees that there are never two pieces of Javascript running at the exact same time. Instead, because of the event driven nature of Javascript in both browsers and node.js, one piece of JS runs to completion, then the next event is pulled from the event queue and that triggers a callback which will also run to completion.
There is no such thing as interrupt driven Javascript (where some callback might interrupt some other piece of Javascript that is currently running). Everything is serialized through the event queue. This is an enormous simplification and prevents a lot of sticky situations that would otherwise be a lot of work to program safely when you have either multiple threads running concurrently or interrupt driven code.
There still are some concurrency issues to be concerned about, but they have more to do with shared state that multiple asynchronous callbacks can all access. While only one will ever be accessing it at any given time, it is still possible that a piece of code that contains several asynchronous operations could leave some state in an "in between" state while it was in the middle of several async operations at a point where some other async operation could run and could attempt to access that data.
You can read more about the event driven nature of Javascript here: How does JavaScript handle AJAX responses in the background? and that answer also contains a number of other references.
And another similar answer that discusses the kind of shared data race conditions that are possible: Can this code cause a race condition in socket io?
Some other references:
how do I prevent event handlers to handle multiple events at once in javascript?
Do I need to be concerned with race conditions with asynchronous Javascript?
JavaScript - When exactly does the call stack become "empty"?
Node.js server with multiple concurrent requests, how does it work?
To give you an idea of the concurrency issues that can happen in Javascript (even without threads and without interrupts), here's an example from my own code.
I have a Raspberry Pi node.js server that controls the attic fans in my house. Every 10 seconds it checks two temperature probes, one inside the attic and one outside the house and decides how it should control the fans (via relays). It also records temperature data that can be presented in charts. Once an hour, it saves the latest temperature data that was collected in memory to some files for persistence in case of power outage or server crash. That saving operation involves a series of async file writes. Each one of those async writes yields control back to the system and then continues when the async callback is called signaling completion. Because this is a low memory system and the data can potentially occupy a significant portion of the available RAM, the data is not copied in memory before writing (that's simply not practical). So, I'm writing the live in-memory data to disk.
At any time during any of these async file I/O operations, while waiting for a callback to signify completion of the many file writes involved, one of my timers in the server could fire, I'd collect a new set of temperature data and that would attempt to modify the in-memory data set that I'm in the middle of writing. That's a concurrency issue waiting to happen. If it changes the data while I've written part of it and am waiting for that write to finish before writing the rest, then the data that gets written can easily end up corrupted because I will have written out one part of the data, the data will have gotten modified from underneath me and then I will attempt to write out more data without realizing it's been changed. That's a concurrency issue.
I actually have a console.log() statement that explicitly logs when this concurrency issue occurs on my server (and is handled safely by my code). It happens once every few days on my server. I know it's there and it's real.
There are many ways to work around those types of concurrency issues. The simplest would have been to just make a copy in memory of all the data and then write out the copy. Because there are not threads or interrupts, making a copy in memory would be safe from concurrency (there would be no yielding to async operations in the middle of the copy to create a concurrency issue). But, that wasn't practical in this case. So, I implemented a queue. Whenever I start writing, I set a flag on the object that manages the data. Then, anytime the system wants to add or modify data in the stored data while that flag is set, those changes just go into a queue. The actual data is not touched while that flag is set. When the data has been safely written to disk, the flag is reset and the queued items are processed. Any concurrency issue was safely avoided.
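A minimal sketch of that flag-plus-queue pattern -- the names here are illustrative, not the actual server code:

const data = [];           // the live in-memory dataset
const pendingChanges = []; // changes that arrive while a write is in flight
let writeInProgress = false;

function addReading(reading) {
  if (writeInProgress) {
    pendingChanges.push(reading); // defer: don't mutate data mid-write
  } else {
    data.push(reading);
  }
}

async function writeDataFiles(liveData) {
  // Stand-in for the real series of async file writes.
  await new Promise((resolve) => setTimeout(resolve, 100));
}

async function saveToDisk() {
  writeInProgress = true;
  try {
    await writeDataFiles(data);
  } finally {
    writeInProgress = false;
    // Now it's safe to apply everything that arrived mid-write.
    while (pendingChanges.length) data.push(pendingChanges.shift());
  }
}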
So, this is an example of concurrency issues that you do have to be concerned about. One great simplifying assumption with Javascript is that a piece of Javascript will run to completion without getting interrupted as long as it doesn't purposely return control back to the system. That makes handling concurrency issues like the one described above lots, lots easier because your code will never be interrupted except when you consciously yield control back to the system. This is why we don't need mutexes and semaphores and other things like that in our own Javascript. We can use simple flags (just a regular Javascript variable) like I described above if needed.
In any entirely synchronous piece of Javascript, you will never be interrupted by other Javascript. A synchronous piece of Javascript will run to completion before the next event in the event queue is processed. This is what is meant by Javascript being an "event-driven" language. As an example of this, if you had this code:
console.log("A");
// schedule timer for 500 ms from now
setTimeout(function() {
console.log("B");
}, 500);
console.log("C");
// spin for 1000ms
var start = Date.now();
while(Data.now() - start < 1000) {}
console.log("D");
You would get the following in the console:
A
C
D
B
The timer event cannot be processed until the current piece of Javascript runs to completion, even though it was likely added to the event queue sooner than that. The way the JS interpreter works is that it runs the current JS until it returns control back to the system and then (and only then), it fetches the next event from the event queue and calls the callback associated with that event.
Here's the sequence of events under the covers.
This JS starts running.
console.log("A") is output.
A timer event is scheduled for 500ms from now. The timer subsystem uses native code.
console.log("C") is output.
The code enters the spin loop.
At some point in time part-way through the spin loop the previously set timer is ready to fire. It is up to the interpreter implementation to decide exactly how this works, but the end result is that a timer event is inserted into the Javascript event queue.
The spin loop finishes.
console.log("D") is output.
This piece of Javascript finishes and returns control back to the system.
The Javascript interpreter sees that the current piece of Javascript is done so it checks the event queue to see if there are any pending events waiting to run. It finds the timer event and a callback associated with that event and calls that callback (starting a new block of JS execution). That code starts running and console.log("B") is output.
That setTimeout() callback finishes execution and the interpreter again checks the event queue to see if there are any other events that are ready to run.
Node uses an event loop. You can think of this as a queue. So we can assume that your for loop puts the function() { score++; } callback arbitrary_length times on this queue. After that the JS engine runs these one by one and increases score each time. So yes. The only exceptions are if a callback is not called or if the score variable is accessed from somewhere else.
Actually you can use this pattern to run tasks in parallel, collect the results, and call a single callback when every task is done.
var results = [];
for (var i = 0; i < arbitrary_length; i++) {
    async_task(i, function(result) {
        results.push(result);
        if (results.length == arbitrary_length)
            tasksDone(results);
    });
}
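For what it's worth, the same fan-out-and-collect pattern can be written with Promise.all in modern JavaScript; this sketch assumes the same async_task, arbitrary_length, and tasksDone from the snippet above:

const tasks = [];
for (let i = 0; i < arbitrary_length; i++) {
  // Wrap the callback-style task in a promise; resolve receives the result.
  tasks.push(new Promise((resolve) => async_task(i, resolve)));
}
Promise.all(tasks).then((results) => tasksDone(results));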
No two invocations of the function can happen at the same time (because Node is single threaded), so that will not be a problem. The only problem would be if, in some cases, async_task(..) drops the callback. But if, e.g., async_task(..) was just calling setTimeout(..) with the given function, then yes, each call will execute, they will never collide with each other, and score will have the value expected, arbitrary_length, at the end.
Of course, the 'arbitrary_length' can't be so great as to exhaust memory, or overflow whatever collection is holding these callbacks. There is no threading issue however.
I do think it's worth noting for others who view this: you have a common mistake in your code. For the variable i you either need to use let or reassign it to another variable before passing it into async_task(). The current implementation will result in each function getting the last value of i.

What's an event-loop and how is it different than using other models?

I have been looking into Node.js and all the documentation and blogs talk about how it uses an event loop rather than a thread-per-request model.
I am having some confusion understanding the difference. I feel like I am 80% there understanding it but not fully getting it yet.
A threaded model will spawn a new thread for every request. This means that you get quite some overhead in terms of computation and memory. An event loop runs in a single thread, which means you don't get the overhead.
The result of this is that you must change your programming model. Because all these different things are happening in the same thread, you cannot block. This means you cannot wait for something to happen because that would block the whole thread. Instead you define a callback that is called once the action is complete. This is usually referred to as non-blocking I/O.
Pseudo example for blocking I/O:
row = db_query('SELECT * FROM some_table');
print(row);
Pseudo example for non-blocking I/O:
db_query('SELECT * FROM some_table', function (row) {
    print(row);
});
This example uses lambdas (anonymous functions) as they are used in JavaScript all the time. JS makes heavy use of events, and that's exactly what callbacks are about. Once the action is complete, an event is fired which triggers the callback. This is why it is often referred to as an evented model or an asynchronous model.
The implementation of this model uses a loop that processes and fires these events. That's why it is called an event queue or event loop.
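Conceptually -- this is a toy sketch, not how any real engine is implemented -- the loop looks something like this:

// A toy event loop: pull the next event off the queue and run its callback.
const eventQueue = [];

function enqueueEvent(callback) {
  eventQueue.push(callback);
}

function runEventLoop() {
  // A real loop would also block here waiting for new I/O events;
  // this toy version just drains whatever is already queued.
  while (eventQueue.length > 0) {
    const callback = eventQueue.shift();
    callback(); // each callback runs to completion before the next one starts
  }
}

enqueueEvent(() => console.log('first event'));
enqueueEvent(() => console.log('second event'));
runEventLoop();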
Prominent examples of event queue frameworks include:
EventMachine (Ruby)
Tornado (Python)
node.js (V8 server-side JavaScript)
Think of incoming requests or callbacks as events, that are enqueued and processed.
That is exactly what is done in most GUI systems. The system can't know when a user will click a button or do some interaction. But when they do, the event is propagated to the event loop, which is basically a loop that checks for new events in the queue and processes them.
The advantage is that you don't have to wait for results yourself. Instead, you register callback functions that are executed when the event is triggered. This allows the framework to handle the I/O, and you can rely on its internal efficiency when dealing with long-running actions instead of blocking the process yourself.
In short, everything runs in parallel except your code. There will never be two fragments of callback functions running concurrently – the event loop is a single thread. The processes that execute stuff externally and finally propagate events, however, can be distributed across multiple threads/processes.
An event loop allows you to make use of the time it takes to talk to the hard drive or the network. Take this list of access times:
Source  | CPU Cycles
L1      | 3
L2      | 14
RAM     | 250
Disk    | 41,000,000
Network | 240,000,000
The time your program spends blocked on a curl call in PHP is just wasted CPU.
