How is a Node.js server better than a thread-based server? - javascript

A Node.js server works on an event-based model where callback functions are supported. But I am not able to understand how it is better than a traditional thread-based server where threads wait for system IO. In the thread-based model, when a thread needs to wait for IO, it gets preempted, so it doesn't consume CPU cycles and hence doesn't contribute to wait time.
How does Node.js improve wait time?

when a thread needs to wait for IO, it gets preempted
Actually, it's not preempted. Preemption is something completely different. What happens is that the thread is blocked.
For an event-based model something similar happens. Event-based interpreters are basically state machines. Only, the state machine is abstracted away and is not visible to the user. When something is waiting for an event it passes control back to the interpreter. When the interpreter has nothing else to process it blocks itself waiting for I/O. Only, unlike traditional threaded code, the interpreter waits on multiple I/O operations at once.
What's happening at the C level is that the interpreter is using something like select(), poll(), epoll() and friends (depending on the OS and libraries installed) to do the blocking and waiting for I/O.
Now, why does a select()/poll() based mechanism generally perform better? Actually, 'generally' here depends on what you mean. A select() based server executes all code in a single process/thread. The biggest performance gain from this is that it avoids context switching - every time the OS transfers control from one thread to another it has to save all the relevant registers, memory map, stack pointers, FPU context etc. so that the other thread can resume execution where it left off. The overhead of doing this can be quite significant.
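To make this concrete, here is a minimal sketch (not from the original answer; the port number is illustrative) of what this buys you in Node.js - one process, one thread, any number of connections, with libuv doing the select()/epoll() style waiting underneath:

const net = require("net");

// Every connection is handled on the same single thread; the waiting
// for network activity is delegated to the OS via libuv.
const server = net.createServer((socket) => {
  // runs whenever the OS reports a new connection
  socket.on("data", (chunk) => socket.write(chunk)); // echo the data back
});

server.listen(8080, () => console.log("listening on 8080"));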
In fact, there is a historical example of how extreme the overhead can be. Back in the early 2000s someone started benchmarking web servers. To the surprise of everyone, tclhttpd outperformed Apache for serving static files. Now, tcl is not only an interpreted language, but back in 2000 it was a very slow interpreted language because it didn't have a separate compilation phase (it sort of does now). Tcl scripts are interpreted directly in string form, making them around 400x slower than C. Apache is obviously written in C, so what was making tclhttpd faster?
It turned out that tclhttpd is event based, running only on a single thread, while Apache was multithreaded. The overhead of constant thread switching turned out to give tclhttpd enough of an advantage to perform better than Apache.
Of course, there is always a compromise. A single-threaded server like tclhttpd or node.js cannot take advantage of multiple CPUs. Back in the early 2000s multiple CPUs were uncommon. These days they are almost the default. Not to mention that most CPUs are also hyperthreaded (hyperthreading adds hardware to the CPU to make context switching cheap).
The best servers these days have learned from history and are a combination of both. Apache2 and Nginx use thread pools: they are multithreaded, but each thread serves more than a single connection. This is a hybrid of the two approaches but is more complex to manage.
Read the following article for a more in-depth discussion on this topic: The C10K problem

Threads are relatively heavy-weight objects that have a resource footprint extending all the way into the kernel. When you park a thread in a blocking syscall or on a mutex or condition variable, you are tying up all those resources but doing nothing. Now the OS has to find more resources so your program can create another thread... Then you idle them too. It doesn't take long before the OS is struggling to scavenge more resources for your program to waste.
CPU time is just one small part of the bigger picture. :-)

Simply put:
In a threaded server, no matter how many threads you have, you can always end up with all of them waiting for IO.
In node, no matter how many IO operations are pending, you always have your event loop ready to do the next thing.
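A rough sketch of the difference (it reads this script's own file, purely for illustration):

const fs = require("fs");

// 10,000 reads in flight, yet the single JS thread never blocks;
// each callback simply runs when its result is ready.
for (let i = 0; i < 10000; i++) {
  fs.readFile(__filename, () => {}); // queued; does not block this thread
}

console.log("10,000 reads pending; the event loop is already free for the next thing");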

When you have a lot of threads you are going to have a lot of context switching, which is expensive. You won't have this overhead when using node.js's event loop.
Context Switch
A context switch is the computing process of storing and restoring state (context) of a CPU so that execution can be resumed from the same point at a later time.
Event loop
In computer science, the event loop, message dispatcher, message loop or message pump is a programming construct that waits for and dispatches events or messages in a program.

I think you are full of myths regarding threads and the cost of context switching.
Discover the truth for yourself.

Related

Javascript: How are multiple threads implemented in javascript?

I am building a small two-player gaming app. It is very important to send data from one player to another in real-time and for that sockets look promising.
In a few places I read that javascript doesn't support multithreading. Then what can be a possible solution for two-way communication, as two threads would seemingly be needed to manage C1->C2 and C2->C1 communication in parallel?
My high-level architecture looks like this (diagram omitted).
How can three threads be managed by javascript in a webpage? One for C1-to-C2 message transfer, a second for C2-to-C1 message transfer, and a third for the user interface?
A JavaScript program runs on a single thread of execution using a "run to completion" semantic.
Operations that would normally block in other languages are non-blocking, and simply handed off to the host (in this case, the browser), with your program notified of progress asynchronously via events.
When the host raises an event to be consumed by your program (eg. an inbound message), it puts a notification of that event in a queue as a "job". When that job reaches the front of the queue, and as soon as the call stack is empty (ie. the current script being run has run to completion), the JavaScript runtime dequeues the job and invokes the continuation function associated with it (ie. the part of your program configured to handle the event).
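A minimal sketch of this "run to completion" behaviour (the 100 ms busy-wait is just for illustration):

// The timer callback is queued as a job; it can only run once the
// currently running script has finished.
setTimeout(() => console.log("2: the queued job runs after the stack is empty"), 0);

const start = Date.now();
while (Date.now() - start < 100) { /* monopolize the only thread */ }

console.log("1: the current script runs to completion first");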
Your game will be sending messages over the network (eg via WebSocket). Your program will simply hand each message to the browser. This process is not computationally expensive or time consuming. The browser is multithreaded and will handle the low-level and time consuming networking concerns for you.
JavaScript is an event-based language. If you wish to be notified of future events related to a message you sent, then you can supply a callback (or use a promise) to be invoked by the runtime in the future at the appropriate time, rather than simply waiting for it. In this way the time available on the main thread of execution is used efficiently.
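For example, a hedged sketch using the browser's WebSocket API (the endpoint URL and message shape are made up for illustration):

const socket = new WebSocket("wss://example.com/game"); // hypothetical endpoint

socket.addEventListener("open", () => {
  // send() hands the message to the browser and returns immediately
  socket.send(JSON.stringify({ type: "move", x: 3, y: 4 }));
});

socket.addEventListener("message", (event) => {
  // invoked later by the runtime, when this job reaches the front of the queue
  console.log("opponent:", event.data);
});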
Your game loop will probably use requestAnimationFrame. That gives you about 16 milliseconds of computation per frame. Computation of game state might take a few milliseconds. Handling scheduled and time-based events might take another few milliseconds. Finally rendering needs some time too. In effect your program cooperatively multi-tasks on a single thread of execution.
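A rough sketch of such a loop (update and render are hypothetical stand-ins for your game logic):

function update(timestamp) { /* advance game state (a few ms) */ }
function render() { /* draw the frame (a few ms) */ }

function frame(timestamp) {
  update(timestamp);
  render();
  requestAnimationFrame(frame); // yield, and ask to be called back next frame
}
requestAnimationFrame(frame);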
For long-running, computationally expensive tasks you can use the Worker API to create new threads of execution with which you can communicate in a controlled way, but you will probably not need this here.
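If you did need it, a minimal sketch would look like this (worker.js is a hypothetical file name):

// main.js
const worker = new Worker("worker.js");
worker.onmessage = (e) => console.log("result:", e.data);
worker.postMessage(1e9); // hand the expensive job to another thread

// worker.js
onmessage = (e) => {
  let sum = 0;
  for (let i = 0; i < e.data; i++) sum += i; // heavy loop off the main thread
  postMessage(sum);
};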
There is plenty of information online already about this topic. Search for "how the event loop works".

Is there any other way to implement a "listening" function without an infinite while loop?

I've been thinking a lot about code and libraries like React that automatically, well, react to events as they happen, and was wondering about how all of that is implemented at the lower levels of C++ and machine code.
I can't seem to figure out any way something like an event listener could be implemented other than with a while loop running on another thread.
So is that all this is under the hood? Just while loops all the way down? Like, for example, RethinkDB, which advertises itself as a "realtime database" that has the repubsub library. Is the "subscribe" method just implemented using a while loop under the hood? I couldn't seem to find any information on that.
Like, sockets and stuff, too. When a computer is "listening" on a port for a socket connection, is that computer just running something like:
while (1) {
    if (connectionFound) {
        return True;
    }
}
Or is there something I'm missing?
I've written the answer to this question as an aside in another answer. Normally I'd close this question as a duplicate and point to that answer however this is a very different question. The other question asked about javascript performance. In order to answer that I had to first write the answer to this question.
As such I'm going to do something that's not normally supposed to be done: I'm going to copy part of my answer to another question. So here's my answer:
The actual events that javascript and node.js wait on require no looping at all. In fact they require 0% CPU time.
How asynchronous I/O works (in any programming language)
Hardware
If we really need to understand how node (or browser) internals work we must unfortunately first understand how computers work - from the hardware to the operating system. Yes, this is going to be a deep dive so bear with me..
It all began with the invention of interrupts..
It was a great invention, but also a Box of Pandora - Edsger Dijkstra
Yes, the quote above is from the same "Goto considered harmful" Dijkstra. From the very beginning introducing asynchronous operation to computer hardware was considered a very hard topic even for some of the legends in the industry.
Interrupts were introduced to speed up I/O operations. Rather than needing to poll some input with software in an infinite loop (taking CPU time away from useful work), the hardware sends a signal to the CPU to tell it an event has occurred. The CPU will then suspend the currently running program and execute another program to handle the interrupt - thus we call these functions interrupt handlers. And the word "handler" has stuck all the way up the stack to GUI libraries, which call callback functions "event handlers".
Wikipedia actually has a fairly nice article about interrupts if you're not familiar with it and want to know more: https://en.wikipedia.org/wiki/Interrupt.
If you've been paying attention you will notice that this concept of an interrupt handler is actually a callback. You configure the CPU to call a function at some later time when an event happens. So even callbacks are not a new concept - it's way older than C.
OS
Interrupts make modern operating systems possible. Without interrupts there would be no way for the CPU to temporarily stop your program to run the OS (well, there is cooperative multitasking, but let's ignore that for now). How an OS works is that it sets up a hardware timer in the CPU to trigger an interrupt and then it tells the CPU to execute your program. It is this periodic timer interrupt that runs your OS.
Apart from the timer, the OS (or rather device drivers) sets up interrupts for I/O. When an I/O event happens the OS will take over your CPU (or one of your CPUs in a multi-core system) and check its data structures for which process it needs to execute next to handle the I/O (this is called preemptive multitasking).
Everything from keyboard and mouse to storage to network cards uses interrupts to tell the system that there is data to be read. Without those interrupts, monitoring all those inputs would take a lot of CPU resources. Interrupts are so important that they are often designed into I/O standards like USB and PCI.
Processes
Now that we have a clear picture of this we can understand how node/javascript actually handle I/O and events.
For I/O, various OSes have various different APIs that provide asynchronous I/O - from overlapped I/O on Windows to poll/epoll on Linux to kqueue on BSD to the cross-platform select(). Node internally uses libuv as a high-level abstraction over these APIs.
How these APIs work is similar, though the details differ. Essentially they provide a function that, when called, will block your thread until the OS sends an event to it. So yes, even non-blocking I/O blocks your thread. The key here is that blocking I/O will block your thread in multiple places, but non-blocking I/O blocks your thread in only one place - where you wait for events.
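A conceptual sketch of that single blocking point, in plain JavaScript (waitForEvents is a hypothetical stand-in for epoll/kqueue/select, which blocks at 0% CPU until the OS reports activity):

const handlers = new Map(); // event source -> callback

function eventLoop() {
  while (handlers.size > 0) {
    // the ONE place the thread blocks; no CPU is consumed while waiting
    const events = waitForEvents([...handlers.keys()]); // hypothetical
    for (const { source, data } of events) {
      handlers.get(source)(data); // dispatch to the registered callback
    }
  }
}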
Check out my answer to this other question for a more concrete example of how this kind of API works at the C/C++ level: I know that callback function runs asynchronously, but why?
For GUI events like button clicks and mouse moves the OS just keeps track of your mouse and keyboard interrupts and then translates them into UI events. This frees your software from needing to know the positions of buttons, windows, icons etc.
What this allows you to do is design your program in an event-oriented manner. This is similar to how interrupts allow OS designers to implement multitasking. In effect, asynchronous I/O is to frameworks what interrupts are to OSes. It allows javascript to spend exactly 0% CPU time to process (wait for) I/O. This is what makes asynchronous code fast - it's not really faster but does not waste time waiting.
This answer is fairly long as is, so I'll leave links to my answers to other questions related to this topic:
Node js architecture and performance (Note: This answer provides a bit of insight on the relationship of events and threads - tldr: the OS implements threads on top of kernel events)
Does javascript process using an elastic racetrack algorithm
how node.js server is better than thread based server
"listeners" and "subscriptions" are just ideas. Everything can be abstracted with lambdas. Here is one possible implementation -
// implement the so-called "listener"
// (defined before use: const bindings aren't hoisted usably)
const listen = (responder) =>
  x => (responder(x), x)

// create a new "listener",
// send any data we "hear" to console.log
const logger =
  listen(console.log)

// run it synchronously
logger(1)
logger(2)

// or asynchronously
setTimeout(_ => logger(3), 2000)

// 1
// 2
// some time later...
// 3

Redis caching in nodejs

So I was looking at this module, and I cannot understand why it uses callbacks.
I thought that memory caching is supposed to be fast and that is also the purpose someone would use caching, because it's fast... like instant.
Adding callbacks implies that you need to wait for something.
But how long do you actually need to wait? If the result gets back to you very fast, aren't you slowing things down by wrapping everything in callbacks + promises on top (because as a user of this module you are forced to promisify those callbacks)?
By design, javascript is asynchronous for most of its external calls (http, 3rd-party libraries, ...).
As mentioned here:
Javascript is a single-threaded language. This means it has one call stack and one memory heap. As expected, it executes code in order and must finish executing a piece of code before moving on to the next. It's synchronous, but at times that can be harmful. For example, if a function takes a while to execute or has to wait on something, it freezes everything up in the meanwhile.
A synchronous function will block the thread and the execution of the script. To avoid any blocking (due to networking, file access, etc...), it is recommended to get this information asynchronously.
Most of the time, the redis caching will take a few ms. However, this guards against possible network lag and will keep your application up and running, with only a tiny number of connectivity errors for your customers.
TLDR: reading from memory is fast, reading from the network is slower and shouldn't block the process.
You're right. Reading from memory cache is very fast. It's as fast as accessing any variable (micro or nano seconds), so there is no good reason to implement this as a callback/promise as it will be significantly slower. But this is only true if you're reading from the nodejs process memory.
The problem with using redis from node is that the memory cache is stored on another machine (the redis server), or at least in another process. So even if redis reads the data very quickly, it still has to go through the network to return to your node server, which isn't always guaranteed to be fast (usually a few milliseconds at least). For example, if you're using a redis server which is not physically close to your nodejs server, or you have too many network requests, ... the request can take longer to reach redis and return back to your server. Now imagine if this was blocking by default: it would prevent your server from doing anything else until the request is complete, which would result in very poor performance as your server sits idle waiting for the network round trip. That's the reason why any I/O (disk, network, ...) operation in nodejs should be async by default.
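A short sketch, assuming the node-redis v4 promise API (the key name is illustrative):

const { createClient } = require("redis");

async function readFromCache() {
  const client = createClient(); // defaults to localhost:6379
  await client.connect();
  // The lookup is O(1) inside Redis, but the reply still has to cross a
  // socket or the network - hence a promise instead of a plain return value.
  const value = await client.get("session:42");
  console.log(value);
  await client.quit();
}

readFromCache();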
Alex, you remarked: "I thought that memory caching is supposed to be fast and that is also the purpose someone would use caching, because it's fast... like instant." And you're close to being wholly right.
Now, what does Redis actually mean?
It means REmote DIctionary Server.
~ FAQ - Redis
Yes, a dictionary usually performs in O(1) time. However, do note that this performance is only perceived that way from inside the process holding the dictionary. Access to the memory owned by the Redis process from another process is a chain of operations that is not O(1).
So, because Redis is a REmote DIctionary Server, asynchronous APIs are required to access its service.
As has already been answered here, your redis instance could be on your machine (and accessing redis RAM storage is nearly as fast as accessing a regular javascript variable), but it could also be on another machine/cloud/cluster/you name it. And in that case, network latency could be problematic; that's why the promises/callbacks syntax.
If you are 100% confident that your Redis instance will always live on the same machine as your code, and that waiting on those calls inline is fine, you could just use the async/await syntax to write them in a synchronous-looking style and avoid the callbacks or the promise chains :)
But I'm not sure it is worth it, in terms of coding habits and scalability. Then again, every project is different and that might suit you.

How does Node.js use fewer threads to handle multiple connections?

I've got no problem with events and callbacks, synchrony/asynchrony, the call stack and the queue.
However, as I understand it, other servers make a new thread for each connection, which contains both the blocking request and the handler for the response of that request, whereas in node this handler would be passed to the main thread as a callback. The ability of this kind of server to handle multiple requests is therefore limited by its ability to create and switch between multiple threads.
When Node receives a blocking request it sends it into asynchrony land while it carries on processing the main thread. What happens in asynchrony land? Doesn't a thread still need to be created to await the response for that request and then to send the event to the node event loop? If so, why isn't Node as limited by the server's ability to create and switch between threads? If not, what happens to the request?
I think there's some confusion over how the event loop actually works. NodeJS doesn't "receive a blocking request" and "send it into asynchrony land". It's asynchronous to begin with - unless you call a ...Sync() pattern function, EVERY call and EVERY operation is async. Confusingly, once you are inside your CODE, EVERY operation is synchronous.
It's a "cooperative multitasking" approach - all calls to the system are expected to "start the ball rolling" and return immediately, while your own code is suppose to do what it needs to do as quickly as possible and yield control back to the JSVM (by returning from your function).
To understand how this works when you're dealing with network communications, you need to go back in time to before threads really even existed. In the early days, if you had multiple network connections, your single-threaded process would have to put together a list of all the sockets it wanted information on (such as "has data arrived for me to read?"), and ask the OS if that was true by calling select(). This would a yes/no for each socket for each question. This was typically done in a while() loop that ran until the program was terminated. You would ask for a list of sockets with new data, read that data, do something with it, and then go back to sleep, over and over again.
NodeJS is far more sophisticated but this analogy works well for it. It has a main "event loop" that is constantly sleeping until there is work to do, then waking up and doing it.
Everything that you do comes from, or goes into, this channel. If you write data to a network socket, and ask to be notified (called back) when it's done, NodeJS passes your request to the operating system and then goes to sleep. You stop running. Your context is saved - all your local vars are saved. When the OS comes back and says "done!", NodeJS checks its list and sees you wanted to know about this, and calls your function, reloading your context so all your local vars are where you need them.
To be very clear, it is entirely possible that when the data is finished being written to the network, and the OS notification comes back for that, NodeJS is busy with other work! NodeJS won't "create a thread" to handle it - it'll ignore it completely until it gets some free time! It won't be lost... it just won't be handled "yet".
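A small sketch with Node's net module (the host and port are made up): write() hands the data off and returns immediately, and the callback runs later, when the event loop reaches the completion notification:

const net = require("net");

const socket = net.connect(8080, "localhost", () => { // hypothetical host/port
  socket.write("hello", () => {
    // runs later; the closure "reloads" our local variables for us
    console.log("data handed off to the OS");
  });
  console.log("write() already returned; nothing waited");
});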
This drives programmers used to threading models nuts - it seems illogical that this constant state of never immediately responding to an incoming event "until it has a chance" could possibly be efficient. But software architectures are often deceiving. Threading models actually have fairly high overhead. CPU core counts aren't infinite - the entire computer as a whole is doing plenty of work all the time. Threads aren't free - just because you make one doesn't mean the CPU itself has time to do anything with it. And the overhead of thread creation and management often means an efficiency loss.
Old-school event-loop models eliminate this overhead. When things go badly like you have an infinite loop in your code, they can behave very badly - often locking up completely. But when things are going well they can actually be a lot faster, and many benchmarks have shown that well-written NodeJS modules can perform as well as or even better than similar modules in other languages.
In summary, the most common confusion in NodeJS is what "async" really means. A good way to think of it is that in threading models, programmers are expected to be "bad"/simplistic (write blocking code and just wait for things to return) and the core VM or OS is expected to be "good"/smart (tolerate this by making threads to handle async work). In NodeJS, programmers are expected to be "good"/sophisticated (write well-structured async code), allowing the JSVM to focus on what it does best and not need as much magic to make things work well. Well-used, NodeJS puts a lot of power in your hands.

NodeJS: understanding nonblocking / event queue / single thread

I'm new to Node and am trying to understand the non-blocking nature of node.
In the image below I've created a high-level diagram of the request (diagram omitted).
As I understand, all processes from a single user for a single app run on a single thread.
What I would like to understand is the how the logic of the event loop fits in this diagram. Is the event loop the same as the processor pipeline where instructions are queued?
Imagine that we load an app page into RAM that creates a stream to read from by the program:
readstream.on('data', function(data) {});
There are instructions for creating the readstream and waiting for data to occur: does such an instruction "hang" in a register in the processor (waiting for the I/O to finish), whereas in a multithreaded environment the processor just doesn't take new instructions from RAM until the result of the previous I/O request has been returned to RAM?
Or is the way I see this entirely/partially wrong?
Just a supplementary (related, perhaps stupid) question: do different users run on different threads on the server, and isn't the single-threaded benefit only for a single user?
I'm new to this type of detail, so excuse me if this question doesn't entirely make sense to you. But understanding this seems essential for me before moving forward.
Event-driven non-blocking I/O relies on the fact that modern operating systems have a 'select' method that performs polling at the O/S level (not wasting CPU cycles). The select method allows you to register callbacks for certain I/O events. This tends to be much more efficient than the 'thread-per-connection' model commonly used in thread enabled languages. For more info, do a 'man select' on a Unix/Linux OS.
Threads and I/O have to do with operating system implementation and services, not CPU architecture.
Operations that involve input/output devices of any kind — mass storage, networks, serial ports, etc. — are structured as requests from the CPU to an external device that, by one of several possible mechanisms, are later satisfied.
On top of that reality, operating systems provide alternative programming models. In one model, the factual nature of input/output operations is essentially disguised, such that executing programs are given an API that appears to be synchronous. In a C program, a call to the write() system call will cause the entire process to delay until the operation has completed.
Another programming model more closely exposes the asynchronous reality of the system. That's what Node uses. Operating systems provide ways to initiate long-duration asynchronous operations, along with ways for a process to either check for results or to block and wait for results. In Node, the runtime system can juggle lots of separate operations because the entire model is based on code running in response to events. An event can be a synthetic thing (such as the "event" of a Node module being loaded and run initially), or it can be something that's a result of actual asynchronous external events. In the case of input/output operations, the Node runtime waits for operating system notification and translates that into an event that causes some JavaScript code to run.
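Returning to the readstream from the question, a runnable sketch (it reads this script's own file just for illustration):

const fs = require("fs");

const readstream = fs.createReadStream(__filename);
readstream.on("data", (chunk) => {
  // runs when the runtime turns an OS completion notification into a 'data' event
  console.log(`received ${chunk.length} bytes`);
});
readstream.on("end", () => console.log("done"));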
