I know messages come into the call stack from the queue when the call stack is empty. Wouldn't it be better, though, if the event loop could push messages from the queue directly onto the call stack without waiting? What reasons are behind this behavior? If the event loop pushed a message at an exact time, we could always rely on functions such as setTimeout etc.
setTimeout(() => console.log("I want to be logged for 10ms, but I will never be :("), 10);
// some blocking operations
for(let i = 0; i < 500000000; i++){
Math.random() * 2 + 2 - 3;
}
console.log("I'll be logged first lol");
It'll probably never be changed, for consistency reasons, but I'm still curious. Maybe I'm not seeing something, and there is a serious technical reason behind the concept of waiting for an empty stack. Do you have access to some articles about architectural decisions in JS, or maybe you know fundamental examples where this behavior is necessary? There are many articles about how JS works, but I couldn't find anything like "Why the event loop works exactly that way". Any help would be greatly appreciated.
V8 developer here. This question seems to be based on a misunderstanding of what "the call stack" is: it is not a data structure that anyone can just push things onto. Instead, it is a term for the current state of things when a bunch of functions have called each other. The only way to "push" another function onto the call stack is when the currently executing function calls it. If the event system inserted random calls at random places into your functions, that would lead to a pretty weird programming model.
You could design a programming environment that's conceptually similar, but rather than pushing anything onto the call stack, what it would do is interrupt and suspend whatever is currently executing, and execute a setTimeout-scheduled function (or event handler, etc) instead, and resume previous execution afterwards. One issue you'd have to solve is: what if this repeats, i.e. what if the scheduled function is interrupted by another scheduled function which is interrupted by another scheduled function, and so on? What if a scheduled function takes forever to finish: when does the previously executing code get to make progress again? Also, while this can be done in a single-threaded world, getting random interruptions is concurrency (which from a consistency point of view is equivalent to parallelism/multi-threading), so you'd need synchronization primitives like locks (essentially, have a way for functions to say "don't interrupt this section" -- which in turn means you can't actually guarantee the accuracy of scheduling requests). Don't underestimate the complexity cost all this would impose on programmers: when writing code, they'd have to keep in mind that anything could get interrupted at any time, and on the flip side that any data that one function might want to process might not be ready yet because another function that produced it hasn't finished running yet.
So in short, JavaScript's event loop system is what it is because the language avoids concurrency, and randomly interrupting functions to execute others is concurrency, even on a single-threaded system.
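To make the last point concrete, here is a small, hedged illustration (the bank-balance example is my own, not from the question): because each callback runs to completion, a read-modify-write sequence cannot be interrupted by another callback, so no lock is needed.

let balance = 100;

function withdraw(amount) {
  if (balance >= amount) {       // read
    // If other callbacks could interrupt here, two withdrawals might both
    // pass this check. With run-to-completion, that cannot happen.
    balance = balance - amount;  // modify and write
  }
}

setTimeout(() => withdraw(80), 0);
setTimeout(() => withdraw(80), 0);
// Only one of the two withdrawals succeeds; balance never goes negative.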
Related
Is there a set number of instructions/statements that get processed before checking the event queue/per tick/per loop (ways of saying the same thing, I think?)
Is there a set number of instructions that get processed before checking the event queue/per tick/per loop (ways of saying the same thing, I think?)
No, there is not.
In the node.js architecture, when an event is pulled from the event queue, it's tied to a callback. The interpreter calls that callback and that callback runs to completion. Only when it returns and the stack is again empty does it check to see if there is another event in the event queue to run.
So, it has absolutely nothing to do with a number of instructions. node.js runs your Javascript as single-threaded, so there is no time slicing between pieces of Javascript, which it sounds like your question was perhaps anticipating. Once a callback that corresponds to an event in the event queue is called, that callback runs until it finishes and returns control back to the interpreter.
So, it goes like this:
Pull event from the event queue
Call the Javascript callback associated with that event
Javascript callback runs until completion and then returns from the callback
node.js internals check event queue for next event. If an event is there, go to step 1 and repeat
If no event is there, go to sleep until an event is placed into the event queue.
In reality, this is a bit of a simplification because there are several different types of event queues with a priority order for which one gets to go first, but this describes the general process as it relates to your question.
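As a rough, runnable sketch (my own toy model for illustration, not Node's actual implementation, which lives in native code), the general process above can be pictured like this:

const eventQueue = [];

function enqueue(callback) {
  eventQueue.push(callback);
}

function runLoop() {
  while (eventQueue.length > 0) {
    const callback = eventQueue.shift(); // pull the next event from the queue
    callback();                          // runs to completion before the next pull
  }
  // A real event loop would now sleep until another event arrives.
}

enqueue(() => console.log("first event"));
enqueue(() => console.log("second event"));
runLoop(); // logs "first event", then "second event"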
There is no set number of instructions that get processed before checking the event queue. Each message is run to completion. From the Mozilla documentation (https://developer.mozilla.org/en-US/docs/Web/JavaScript/EventLoop):
Each message is processed completely before any other message is processed. This offers some nice properties when reasoning about your program, including the fact that whenever a function runs, it cannot be pre-empted and will run entirely before any other code runs (and can modify data the function manipulates). This differs from C, for instance, where if a function runs in a thread, it may be stopped at any point by the runtime system to run some other code in another thread.
A downside of this model is that if a message takes too long to complete, the web application is unable to process user interactions like click or scroll. The browser mitigates this with the "a script is taking too long to run" dialog. A good practice to follow is to make message processing short and if possible cut down one message into several messages.
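A minimal sketch of that practice (the array size and chunk size are arbitrary assumptions): split a long loop across several messages by rescheduling the remainder with setTimeout, so clicks and scrolls can be processed in between.

const items = new Array(1000000).fill(0);

function processChunk(start) {
  const end = Math.min(start + 10000, items.length);
  for (let i = start; i < end; i++) {
    // ... do some work on items[i] ...
  }
  if (end < items.length) {
    // Yield back to the event loop; the rest runs as a later message.
    setTimeout(() => processChunk(end), 0);
  }
}

processChunk(0);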
It feels like this must be a stupid question but I don't understand something fundamental about making a request in indexedDB.
Why are the requests made before the event handlers are defined? For example, the request = objectStore.add(data) is made before the request.onsuccess and request.onerror functions are declared. Is this correct? Is it possible that the request could complete before the event handlers are registered?
I'm comparing it to the creation of an image element, followed by declaring the event handlers for onload and onerror, all before setting the source attribute to the location of a file and attempting to load it. But a request "element" can't be created before a request is made; so, there's nothing to attach the events to until the request has already been made.
Please let me know what I'm missing here. I've been writing and retrieving data from indexedDB without issue, and think I've been coding it correctly; but I want to make sure this is right and will always work.
Thank you.
Duplicate Response
I read that question and answer sometime ago, when I first started reading about indexedDB, and completely forgot about it. Had I found it again before I wrote this question, I likely would not have submitted it and would've just accepted that the code ought to work out whether I understand it or not. Handling error events and transaction aborts is what got me to thinking about the statement order again.
However, after reading the answer again, I don't understand enough to be any further along than just accepting it and hoping it will always work. I'm not trying to be snide. In one sense, it is just confusing for my limited capacity to think of event loops and epochs and that everything sort of happens all at once.
At the end of the epoch (or the start of the next, whatever you think is easier to understand), the underlying JS engine goes back and looks at what is registered to execute, and then executes everything nearly all at once.
There has to be an order of execution or nothing makes sense, asynchronous or not. I understand that the interpreter does not wait for an asynchronous process to complete before starting to execute the next line of code. But aren't the synchronous statements processed completely in turn in the order they appear in the code, and the asynchronous ones started off in the order they appear in the code, such that if an asynchronous process errored quickly, the event could be missed if the event handlers weren't declared in advance? The event handlers are not hoisted like function declarations, are they? This is the part that I still find confusing.
In this article by Jake Archibald on promises, in the introduction he presents an example concerning the loading of images and writes:
Unfortunately, in the example above, it's possible that the events happened before we started listening for them, so we need to work around that using the "complete" property of images.
and
This doesn't catch images that error'd before we got a chance to listen for them; unfortunately the DOM doesn't give us a way to do that. Also, this is loading one image, things get even more complex if we want to know when a set of images have loaded.
That gives the impression that order is important, such that, in the case of images, when possible, the source should be assigned after declaring all the event handlers, in order to not miss hearing events. The important part for me was the fact that an event could take place before the event handler was declared/registered.
I tried to follow the same pattern of making the request after declaring the event handlers in indexedDB, but it doesn't appear possible because there's nothing to attach the events to until the request is made.
Even when all the statements are asynchronous, such as in this example in the MDN Web Docs on Using IndexedDB, some things are still rather confusing. The objectStore.transaction.oncomplete is an interesting statement. We're waiting for the objectStore to be created before attempting to write data to it. (I think that's considered bad practice, writing data in an onupgradeneeded event; so, we don't use that statement.) But what is confusing is why we don't worry about the objectStore being created before creating an index in it. Why isn't the createIndex statement started at the same time the createObjectStore statement started, if everything is processed all at once? If the createObjectStore statement doesn't complete before the createIndex statement begins, shouldn't an event handler be required or it would fail because objectStore wouldn't yet exist?
I know it works because I've been using the same code pattern, but I really don't understand it.
These two items--the potential to miss events and why an event handler isn't needed in this indexedDB example--are what I'd like to better understand. I don't know if this makes my question different or not, but the answer to the duplicate question doesn't answer these for me. Perhaps, I'd have to understand the JS engine better to understand the answer to these questions.
const dbName = "the_name";
var request = indexedDB.open(dbName, 2);
request.onerror = function(event) {
// Handle errors.
};
request.onupgradeneeded = function(event) {
var db = event.target.result;
// Create an objectStore to hold information about our customers. We're
// going to use "ssn" as our key path because it's guaranteed to be
// unique - or at least that's what I was told during the kickoff meeting.
var objectStore = db.createObjectStore("customers", { keyPath: "ssn" });
// Create an index to search customers by name. We may have duplicates
// so we can't use a unique index.
objectStore.createIndex("name", "name", { unique: false });
// Create an index to search customers by email. We want to ensure that
// no two customers have the same email, so use a unique index.
objectStore.createIndex("email", "email", { unique: true });
// Use transaction oncomplete to make sure the objectStore creation is
// finished before adding data into it.
objectStore.transaction.oncomplete = function(event) {
// Store values in the newly created objectStore.
var customerObjectStore = db.transaction("customers", "readwrite").objectStore("customers");
customerData.forEach(function(customer) {
customerObjectStore.add(customer);
});
};
};
Clarification/Response to Answer/Comments
Thank you for taking the time to respond to my question and for the added explanation.
First, by "before" I simply mean the order in which the statements appear in the script.
I think I follow your analogy and it is a good one. I'm just still not clear on why the employees never submit their work to the secretary until the next day, when it is guaranteed that a secretary will be there to receive it.
It sounds similar to the fact that the javascript interpreter, when it performs its equivalent of compiling the script, hoists the function declarations, such that a function can be invoked in the code before the function declaration has been made.
It would appear that you're saying, restated in my simple terms, that the JS engine, at some point before final execution, assigns the event handlers (the secretaries) to be registered in an earlier epoch than the one in which the requests (the employees) that ultimately trigger the events will complete. Therefore, it doesn't matter where in the code the request statements appear relative to the event handlers, that is, as long as they are defined within the same epoch.
The JS engine doesn't know when the request will complete, but only when the event handlers have been registered to start listening and when the request has commenced. As long as the JS engine has a process to order these steps properly independent of the order the statements appear in the code, such that an event cannot be missed, then it's no different to me than the hoisting of function declarations and I don't really have to think much about it any longer to accomplish my tasks.
However, I would still like to better understand what an epoch is, at least in terms of knowing that the statements are made within the same epoch. I don't see any mention of an epoch in the article on "Concurrency Model and Event Loop" in MDN Web Docs. Would you mind pointing me to any good resources you know of?
Thank you.
Final Notes
I came across these two items through a link, here, on stack overflow. This same question was asked eight years ago and was answered about the same way but with different terminology; that is, instead of epochs, it was that javascript code will "run to completion" or has run-to-completion semantics. This question refers you to this document which can be searched for "run to completion" to read two exchanges about why there is not a race condition in this set up of making a request before registering the event handlers. The older JavaScript book I have by David Flanagan, in discussing the execution of JS "programs," states that, because JS has single-threaded execution, one never has to worry about race conditions; but I don't know if he was referring to this situation exactly.
Thus, this question has been asked and answered multiple times in the past and I guess I'm just another newbie asking an old question as if I was the first to have thought of it, and without enough knowledge of how JS processes.
The article "Concurrency Model and Event Loop", linked above, has a brief "Run-to-completion" section; but I didn't understand its implications until after reading the last document linked above.
I now take it to mean that all the code in a function will run to completion before any other code can begin, which seems to have two interpretations.
One is that the asynchronous request on the database is queued when the statement is reached in the function code but won't actually commence until all the other statements in the function are run, including the event handlers that are declared afterward.
Or, according to that last linked document above, the asynchronous request may run and even complete before the event handlers are registered, but the notification of its completion will remain in the queue and won't execute until after the rest of the statements in the function are run and the event handlers have been registered.
Interpretation 2 appears to be the accurate one but, whichever is the actual case, it all makes adequate sense to me now and explains why the secretary will always be there before the employee submits the work, and why, even if the employee finishes the work in a nanosecond, the employee won't submit the work until the next day when a secretary is guaranteed to be present to receive it. The employee may place the work-complete notification in the queue, but the queue won't sound the notification for a secretary to hear until the next day.
Thanks, Josh, for the additional explanation about what is meant by epochs and how that terminology works out in operation. I accepted your answer and appreciate you taking the time to write it all out.
Now that I seem to understand why the event-handler declarations can be made later in the code than the making of the request, I still don't understand why we can create an object store and then immediately create an index on that object store without having to wait until we know the object store has been successfully created, unless it is synchronous or something else special takes place in a versionchange transaction / onupgradeneeded event. There are no events mentioned in the MDN Web Docs description of createObjectStore and no examples that attach any listeners to it, so I'll just assume it's never necessary.
Thanks again.
Why are the requests made before the event handlers are defined?
It does not matter.
For example, the request = objectStore.add(data) is made before the request.onsuccess and request.onerror functions are declared. Is this correct?
Yes it is correct because again it does not matter.
I would be careful about your use of the word before. Maybe it means something different to me than it does to you. I can't tell. But maybe this is what is tripping you up.
Is it possible that the request could complete before the event handlers are registered?
If you register the event handlers in the same epoch as when you make the request, then no. The request only completes in a later epoch.
Ok, here is my attempt at explaining by example (sorry if this is bad!). Personification is generally a good educational technique, and is less intimidating than using raw technical terms, so let's go with that.
Let's say you are a boss, and have employees. Let's say you ask an employee to do some work for you. Then, you ask that employee to report back to your secretary when they completed the work. Immediately after asking the employee to go do that other work, you carry on doing your own work, without waiting for that employee to finish their work and report back. You are both basically doing work at the same time.
Now, in this situation, what happens if you don't have a secretary at the time you hand the employee a request to do something? Well, no problem. You go and hire a secretary before that employee finishes their work and before that employee even knows who to report back to, which is fine because all the employee knows is they report to your secretary. The employee does not know whether your secretary exists or not at the time of being assigned work, and does not need to know that. The missing secretary did not prevent that employee from getting started, or understanding the work to be done. And by the time that employee completes their work, you have a secretary ready and waiting. Or, you don't, because you don't happen to care to even acknowledge whether the work was actually completed, you just made a command and trusted the employee to do their job, whatever. You really only care about having them report back to your secretary if you need to do some other work that has to wait until after the first project is done, and that is a different concern.
Let's pretend you already had a secretary at the time you assigned the employee the work. What is the difference between this situation of already having a secretary, and the situation where you go and hire one shortly after assigning the work, but before it is done? There is NO difference.
Now, let's try and really address your concern. What you're suggesting is that it seems impossible to reliably go out and hire that secretary before you know whether the employee finished their assignment. This I think is the critical misunderstanding. It is entirely possible to do this. Why is that? I suppose it is not the easiest thing to grasp.
I am going to stretch this metaphor a bit and impose a strange rule. No matter how simple the project you hand off to the employee, even if it is just to run and get you coffee in the morning, they will never ever get back to you the same day. They will always finish their work some later day, at the earliest tomorrow. They might even finish their work within one fleeting nanosecond of you telling them, but they will NEVER get back to you or your secretary right away, they will always be delayed until tomorrow at the earliest.
This means you have all day to go and hire that secretary that did not exist at the time you gave the employee the order. So long as you do it before tomorrow, you're good. That secretary will exist and be working for you by the time the employee responds tomorrow, and will be able to receive the message from the employee.
Edit response to your added comments:
Yep, hoisting is similar in many respects. Things can happen in a different order than written in code. Hoisting is of course synchronous, so it is not a perfect similarity, but the out-of-order aspect is still similar.
Epoch is just my own word I use for a single iteration of the event loop. Like in the case of a for loop using i for i from 0 to 2, there are 3 epochs: iteration 0, iteration 1, and iteration 2. I just call them epochs because it is like categories of time.
In a promise case, it might even be a microtask. In a js worker case, it might be thread-like (and workers are the new hotness over the old child-iframe technique). Basically these are all just ways to 'realize' doing more than one thing at a time. Node calls it a tick, and has things like nextTick() that defers code execution until the next tick of its loop. Within a single epoch, things happen in the order they are written (and notably hoisting is all in epoch 0). But some code may be async, and therefore happen across epochs, and therefore may run in a different order than it was written. Code written earlier may happen in a later epoch.
When you make a request, it says, start doing this thing, and get back to me at the earliest in the next epoch. You have up until the end of the current epoch to register handlers for the request.
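A minimal sketch of that point, using the standard indexedDB.open call (the database name and version here are made-up placeholders): the request is made on the first line, and the handlers are attached on the following lines, still within the same epoch, so they are guaranteed to be registered before the success or error event can be delivered.

var openRequest = indexedDB.open("example-db", 1); // placeholder name and version

// Still the same epoch: neither event has fired yet, and cannot fire
// until the currently running code finishes.
openRequest.onsuccess = function(event) {
  console.log("open succeeded", event.target.result);
};
openRequest.onerror = function(event) {
  console.log("open failed", openRequest.error);
};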
Some code, like in the case of an image preloader as mentioned in your example, has to take into account that it attaches the listeners too late (images are being preloaded in an alternate timeline and some may already be loaded and in some browsers this means load will not fire), so it wants to check imageElement.complete to catch that case. In other cases of event listener implementations, some dispatcher implementations will fire events to newly added listeners for events that already happened where the new listener was not listening at the time of the event. But that is not a universal characteristic of event listener implementations, just a characteristic of certain implementations.
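For the image case mentioned above, a hedged sketch of the usual workaround (assuming an existing img element on the page): check the complete property in case the load event already fired before the listener was attached.

var img = document.querySelector("img");

function onImageReady() {
  console.log("image is ready");
}

if (img.complete) {
  // Already finished loading before we got here; the load event will not fire again.
  onImageReady();
} else {
  img.addEventListener("load", onImageReady);
  img.addEventListener("error", function() { console.log("image failed to load"); });
}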
And in the case of the transaction.oncomplete thing from within onupgradeneeded, that is just not a great example. It is doing stuff it does not need to do.
This is the technical answer to your question:
https://html.spec.whatwg.org/multipage/webappapis.html#event-loops
The JS concurrency model is cooperative with "run-to-completion" semantics (no parallel processing of events in the same queue). This means that any async response will be posted as a message to the window event loop, and all the sequential code you see after the request is guaranteed to execute before the async response processing starts.
That said, from a usability point of view, the IndexedDB API is not affording the intent in the most expressive manner, and coming from other languages with preemptive threading you are excused for being confused :-)
Let's assume I run this piece of code.
var score = 0;
for (var i = 0; i < arbitrary_length; i++) {
  async_task(i, function() { score++; }); // increment callback function
}
In theory I understand that this presents a data race, and two threads trying to increment at the same time may result in a single increment; however, Node.js (and JavaScript) is known to be single-threaded. Am I guaranteed that the final value of score will be equal to arbitrary_length?
Am I guaranteed that the final value of score will be equal to arbitrary_length?
Yes, as long as all async_task() calls call the callback once and only once, you are guaranteed that the final value of score will be equal to arbitrary_length.
It is the single-threaded nature of Javascript that guarantees that there are never two pieces of Javascript running at the exact same time. Instead, because of the event driven nature of Javascript in both browsers and node.js, one piece of JS runs to completion, then the next event is pulled from the event queue and that triggers a callback which will also run to completion.
There is no such thing as interrupt driven Javascript (where some callback might interrupt some other piece of Javascript that is currently running). Everything is serialized through the event queue. This is an enormous simplification and prevents a lot of sticky situations that would otherwise be a lot of work to program safely when you have either multiple threads running concurrently or interrupt driven code.
There still are some concurrency issues to be concerned about, but they have more to do with shared state that multiple asynchronous callbacks can all access. While only one will ever be accessing it at any given time, it is still possible that a piece of code that contains several asynchronous operations could leave some state in an "in between" state while it was in the middle of several async operations at a point where some other async operation could run and could attempt to access that data.
You can read more about the event driven nature of Javascript here: How does JavaScript handle AJAX responses in the background? and that answer also contains a number of other references.
And another similar answer that discusses the kind of shared data race conditions that are possible: Can this code cause a race condition in socket io?
Some other references:
how do I prevent event handlers to handle multiple events at once in javascript?
Do I need to be concerned with race conditions with asynchronous Javascript?
JavaScript - When exactly does the call stack become "empty"?
Node.js server with multiple concurrent requests, how does it work?
To give you an idea of the concurrency issues that can happen in Javascript (even without threads and without interrupts), here's an example from my own code.
I have a Raspberry Pi node.js server that controls the attic fans in my house. Every 10 seconds it checks two temperature probes, one inside the attic and one outside the house and decides how it should control the fans (via relays). It also records temperature data that can be presented in charts. Once an hour, it saves the latest temperature data that was collected in memory to some files for persistence in case of power outage or server crash. That saving operation involves a series of async file writes. Each one of those async writes yields control back to the system and then continues when the async callback is called signaling completion. Because this is a low memory system and the data can potentially occupy a significant portion of the available RAM, the data is not copied in memory before writing (that's simply not practical). So, I'm writing the live in-memory data to disk.
At any time during any of these async file I/O operations, while waiting for a callback to signify completion of the many file writes involved, one of my timers in the server could fire, I'd collect a new set of temperature data and that would attempt to modify the in-memory data set that I'm in the middle of writing. That's a concurrency issue waiting to happen. If it changes the data while I've written part of it and am waiting for that write to finish before writing the rest, then the data that gets written can easily end up corrupted because I will have written out one part of the data, the data will have gotten modified from underneath me and then I will attempt to write out more data without realizing it's been changed. That's a concurrency issue.
I actually have a console.log() statement that explicitly logs when this concurrency issue occurs on my server (and is handled safely by my code). It happens once every few days on my server. I know it's there and it's real.
There are many ways to work around those types of concurrency issues. The simplest would have been to just make a copy in memory of all the data and then write out the copy. Because there are not threads or interrupts, making a copy in memory would be safe from concurrency (there would be no yielding to async operations in the middle of the copy to create a concurrency issue). But, that wasn't practical in this case. So, I implemented a queue. Whenever I start writing, I set a flag on the object that manages the data. Then, anytime the system wants to add or modify data in the stored data while that flag is set, those changes just go into a queue. The actual data is not touched while that flag is set. When the data has been safely written to disk, the flag is reset and the queued items are processed. Any concurrency issue was safely avoided.
So, this is an example of concurrency issues that you do have to be concerned about. One great simplifying assumption with Javascript is that a piece of Javascript will run to completion without any threat of getting interrupted as long as it doesn't purposely return control back to the system. That makes handling concurrency issues like described above lots, lots easier because your code will never be interrupted except when you consciously yield control back to the system. This is why we don't need mutexes and semaphores and other things like that in our own Javascript. We can use simple flags (just a regular Javascript variable) like I described above if needed.
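Here is a small sketch of the flag-and-queue idea described above (all names such as dataStore, addRecord and saveToDisk are hypothetical, and the async file writes are simulated with a timer):

const dataStore = {
  records: [],
  writing: false,      // set while an async save is in progress
  pendingChanges: []   // changes queued up while a save is in progress
};

function addRecord(record) {
  if (dataStore.writing) {
    // Don't touch the live data while it is being written out.
    dataStore.pendingChanges.push(record);
  } else {
    dataStore.records.push(record);
  }
}

function saveToDisk(done) {
  dataStore.writing = true;
  // Imagine several async file writes here; simulated with setTimeout.
  setTimeout(function() {
    dataStore.writing = false;
    // Apply everything that arrived while we were writing.
    dataStore.pendingChanges.forEach(function(r) { dataStore.records.push(r); });
    dataStore.pendingChanges.length = 0;
    done();
  }, 100);
}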
In any entirely synchronous piece of Javascript, you will never be interrupted by other Javascript. A synchronous piece of Javascript will run to completion before the next event in the event queue is processed. This is what is meant by Javascript being an "event-driven" language. As an example of this, if you had this code:
console.log("A");
// schedule timer for 500 ms from now
setTimeout(function() {
console.log("B");
}, 500);
console.log("C");
// spin for 1000ms
var start = Date.now();
while(Data.now() - start < 1000) {}
console.log("D");
You would get the following in the console:
A
C
D
B
The timer event cannot be processed until the current piece of Javascript runs to completion, even though it was likely added to the event queue sooner than that. The way the JS interpreter works is that it runs the current JS until it returns control back to the system and then (and only then), it fetches the next event from the event queue and calls the callback associated with that event.
Here's the sequence of events under the covers.
This JS starts running.
console.log("A") is output.
A timer event is scheduled for 500ms from now. The timer subsystem uses native code.
console.log("C") is output.
The code enters the spin loop.
At some point in time part-way through the spin loop the previously set timer is ready to fire. It is up to the interpreter implementation to decide exactly how this works, but the end result is that a timer event is inserted into the Javascript event queue.
The spin loop finishes.
console.log("D") is output.
This piece of Javascript finishes and returns control back to the system.
The Javascript interpreter sees that the current piece of Javascript is done so it checks the event queue to see if there are any pending events waiting to run. It finds the timer event and a callback associated with that event and calls that callback (starting a new block of JS execution). That code starts running and console.log("B") is output.
That setTimeout() callback finishes execution and the interpreter again checks the event queue to see if there are any other events that are ready to run.
Node uses an event loop. You can think of this as a queue. So we can assume that your for loop puts the function() { score++; } callback on this queue arbitrary_length times. After that the js engine runs these one by one and increases score each time. So yes. The only exceptions are if a callback is not called, or if the score variable is accessed from somewhere else.
Actually you can use this pattern to run tasks in parallel, collect the results and call a single callback when every task is done.
var results = [];
for (var i = 0; i < arbitrary_length; i++) {
  async_task(i, function(result) {
    results.push(result);
    if (results.length == arbitrary_length)
      tasksDone(results);
  });
}
No two invocations of the function can happen at the same time (b/c node is single threaded) so that will not be a problem. The only problem would be if in some cases async_task(..) drops the callback. But if, e.g., 'async_task(..)' was just calling setTimeout(..) with the given function, then yes, each call will execute, they will never collide with each other, and 'score' will have the value expected, 'arbitrary_length', at the end.
Of course, the 'arbitrary_length' can't be so great as to exhaust memory, or overflow whatever collection is holding these callbacks. There is no threading issue however.
I do think it’s worth noting for others that view this, you have a common mistake in your code. For the variable i you either need to use let or reassign to another variable before passing it into the async_task(). The current implementation will result in each function getting the last value of i.
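A quick, hedged demonstration of that closure issue (async_task here is a stand-in that simply defers the callback with setTimeout, an assumption rather than the asker's actual implementation):

function async_task(i, callback) {
  setTimeout(callback, 0);
}

for (var i = 0; i < 3; i++) {
  async_task(i, function() { console.log(i); }); // logs 3, 3, 3 - every callback sees the final i
}

for (let j = 0; j < 3; j++) {
  async_task(j, function() { console.log(j); }); // logs 0, 1, 2 - each iteration gets its own j
}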
From what I see, if an event in Node takes a "long time" to be dispatched, Node creates some kind of "queue of events", and they are triggered as soon as possible, one by one.
How long can this queue be?
While this may seem like a simple question, it is actually a rather complex problem; unfortunately, there's no simple number that anyone can give you.
First: wall time doesn't really play a part in anything here. All events are dispatched in the same fashion, whether or not things are taking "a long time." In other words, all events pass through a "queue."
Second: there is no single queue. There are many places where different kinds of events can be dispatched into JS. (The following assumes you know what a tick is.)
There are the things you (or the libraries you use) pass to process.nextTick(). They are called at the end of the current tick until the nextTick queue is empty.
There are the things you (or the libraries you use) pass to setImmediate(). They are called at the start of the next tick. (This means that nextTick tasks can add things to the current tick indefinitely, preventing other operations from happening whereas setImmediate tasks can only add things to the queue for the next tick.)
I/O events are handled by libuv via epoll/kqueue/IOCP on Linux/Mac/Windows respectively. When the OS notifies libuv that I/O has happened, it in turn invokes the appropriate handler in JS. A given tick of the event loop may process zero or more I/O events; if a tick takes a long time, I/O events will queue in an operating system queue.
Signals sent by the OS.
Native code (C/C++) executed on a separate thread may invoke JS functions. This is usually accomplished through the libuv work queue.
Since there are many places where work may be queued, it is not easy to answer "how many items are currently queued", much less what the absolute limit of those queues are. Essentially, the hard limit for the size of your task queues is available RAM.
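For instance, the ordering of the first two queues mentioned above can be observed directly in Node (a small example added here for illustration):

setImmediate(function() {
  console.log("setImmediate");     // runs on a later tick, printed last
});

process.nextTick(function() {
  console.log("process.nextTick"); // runs at the end of the current tick
});

console.log("synchronous code");   // printed first
// Output: synchronous code, process.nextTick, setImmediate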
In practice, your app will:
Hit V8 heap constraints
For I/O, max out the number of allowable open file descriptors.
...well before the size of any queue becomes problematic.
If you're just interested in whether or not your app is under heavy load, toobusy may be of interest -- it times each tick of the event loop to determine whether or not your app is spending an unusual amount of time processing each tick (which may indicate that your task queues are very large).
Handlers for a specific event are called synchronously (in the order they were added) as soon as the event is emitted, they are not delayed at all.
The total number of event handlers is limited only by v8 and/or the amount of available RAM.
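A short Node.js illustration of that statement (the event name and listeners are my own example): emit() calls every registered listener synchronously, in the order the listeners were added.

const EventEmitter = require("events");
const emitter = new EventEmitter();

emitter.on("ping", function() { console.log("first listener"); });
emitter.on("ping", function() { console.log("second listener"); });

console.log("before emit");
emitter.emit("ping"); // both listeners run right here, before the next line
console.log("after emit");
// Output: before emit, first listener, second listener, after emit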
I believe you're talking about operations that can take an undefined amount of time to complete, such as an http request or filesystem access.
Node gives you a method to complete these types of operations asynchronously, meaning that you can tell node, or a 3rd party library, to start an operation, and then call some code (a function that you define) to inform you when the operation is complete. This can be done through event listeners, or callback functions, both of which have their own limitations.
With event listeners the maximum amount of listeners you can have is dependent on the maximum array size of your environment. In the case of node.js the javascript engine is v8, but according to this post there is a maximum set out by the 5th ECMA standard of ~4 billion elements, which is a limit that you shouldn't ever exceed.
With callbacks the limitation you have is the max call stack size, meaning how deep your functions can call each other. For instance you can have a callback calling a callback calling a callback calling another callback, etc. The call stack size dictates how many callbacks calling callbacks you can have. Note that the call stack size can be a limitation with event listeners as well, as they're essentially callbacks that can be executed multiple times.
And these are the limitations with each.
Set Up:
Hi. I am trying to learn about creating/instantiating objects. I was thinking I should be able to create multiple objects that may have different amounts of similar work (like gathering news articles) and would "report completion" regardless of the order created. So far I am not clear on this, so here is a basic example followed by the stated question:
function test(count) {
  this.count = count;
  for (var i = 0; i < count; i++) {}
  console.log(i);
}
new test(1000);
new test(10);
Actual Question:
Based on the code above, I would expect the second instance to print first, but it does not. What would be the correct way to set this up so that whichever object finishes its work would print first?
* Modified question *
Sorry...what I am really intending to ask is how to set objects to have more of an asynchronous behavior. I am new to Stack, so please let me know if I should close/move this question.
In general, JavaScript uses a synchronous execution model built around the event queue. All calls are placed there, basically in the order they appear in your source code (respecting the scope they are in).
So if you start some function, nothing else is executed until that very function is finished. In your case you place both calls on the event queue, but the second call will only be executed, when the first one has finished.
There are, however, exceptions: Workers or AJAX requests.
Here the execution happens outside the usual event queue, and you use message handlers or callbacks to consume the results after the execution is finished. In most cases, however, you can't be sure in which order the calls are finished, as there are many circumstances affecting the ordering of execution (network delay, CPU usage, etc.).
In your case, it seems, you want to load external resources, anyway. So have a look at how AJAX works.
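As a hedged sketch of that suggestion (the URLs below are placeholders, and XMLHttpRequest is used just as a familiar example of an async API): hand the slow work to the browser and attach callbacks, so whichever request finishes first reports first, regardless of the order in which the requests were started.

function fetchArticles(url, done) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url);
  xhr.onload = function() { done(url, xhr.responseText); };
  xhr.onerror = function() { done(url, null); };
  xhr.send();
}

// The responses arrive in whatever order the network allows.
fetchArticles("/news/large-feed", function(url) { console.log("finished:", url); });
fetchArticles("/news/small-feed", function(url) { console.log("finished:", url); });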
You're not creating different parallel execution threads : the first test(1000) is wholly executed before the following line even starts.
In Javascript, when you're not using webworkers (which you should probably not use), all your code is always executed in a single thread. This thread is awaken by the browser on events and goes back to sleep when the function called by the browser returns.
Note that even with threads, even in naturally parallel languages, you hardly would have had any guarantee regarding what loop ends first.
The reason why the second instance does not print first is that the first call new test(1000) runs to completion before the next line new test(10) starts executing.
If you want the second call to run first, swap the two lines...
Edit:
By the way, any decent compiler will remove this line completely:
for (var i = 0; i < count; i++) {}
So you can't even expect the first call to take more time than the second...
The second instance gets instantiated only after the first object is instantiated. So it first prints 1000, and only prints 10 after it reaches new test(10), because it is a single-threaded program.