So I was looking at this module, and I cannot understand why it uses callbacks.
I thought memory caching was supposed to be fast, and that's the whole reason someone would use caching in the first place: because it's fast... like instant.
Adding callbacks implies that you need to wait for something.
But how long do you actually need to wait? If the result gets back to you very quickly, aren't you slowing things down by wrapping everything in callbacks, plus promises on top (because as a user of this module you are forced to promisify those callbacks)?
By design, JavaScript is asynchronous for most of its external calls (HTTP, third-party libraries, ...).
As mentioned here:
JavaScript is a single-threaded language. This means it has one call stack and one memory heap. As expected, it executes code in order and must finish executing a piece of code before moving on to the next. It's synchronous, but at times that can be harmful. For example, if a function takes a while to execute or has to wait on something, it freezes everything up in the meanwhile.
A synchronous function will block the thread and halt execution of the script. To avoid any blocking (due to networking, file access, etc.), it is recommended to fetch this information asynchronously.
Most of the time, a Redis cache lookup will take a few milliseconds. Doing it asynchronously also guards against possible network lag and keeps your application up and running, with very few connectivity errors for your customers.
TL;DR: reading from memory is fast; reading from the network is slower and shouldn't block the process.
You're right: reading from a memory cache is very fast, as fast as accessing any variable (micro- or nanoseconds), so there would be no good reason to implement it as a callback/promise, which would make it significantly slower. But this is only true if you're reading from the Node.js process's own memory.
The problem with using Redis from Node is that the memory cache is stored on another machine (the Redis server), or at least in another process. So even if Redis reads the data very quickly, the result still has to travel over the network back to your Node server, which isn't always guaranteed to be fast (usually at least a few milliseconds). For example, if your Redis server is not physically close to your Node.js server, or there are too many concurrent network requests, the request can take longer to reach Redis and return to your server. Now imagine if this were blocking by default: it would prevent your server from doing anything else until the request completed, resulting in very poor performance, as your server would sit idle waiting for the network round trip. That's why any I/O operation (disk, network, ...) in Node.js should be async by default.
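To make that concrete, here is a minimal sketch of the callback-plus-promisify pattern the question describes; the `cache` client and its timing below are made up for illustration:

```js
const { promisify } = require('util');

// Hypothetical callback-style client standing in for a redis client;
// setTimeout simulates the few-millisecond network round trip.
const cache = {
  get(key, cb) {
    setTimeout(() => cb(null, `value-for-${key}`), 5);
  },
};

const getAsync = promisify(cache.get);

async function main() {
  // While we await, the event loop is free to serve other requests.
  const value = await getAsync('user:42');
  console.log(value);
}

main();
```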
Alex, you remarked: "I thought memory caching was supposed to be fast, and that's the whole reason someone would use caching in the first place: because it's fast... like instant." And you're very nearly right.
Now, what does Redis actually mean?
It means REmote DIctionary Server.
~ FAQ - Redis
Yes, a dictionary lookup usually runs in O(1) time. Note, however, that this performance holds only from the point of view of procedures running inside the process that holds the dictionary. Access to memory owned by the Redis process from another process goes through a channel of operations that is not O(1).
So, because Redis is a REmote DIctionary Server, asynchronous APIs are required to access its service.
As has already been answered here, your Redis instance could be on your machine (and accessing Redis's RAM storage is nearly as fast as accessing a regular JavaScript variable), but it could also be on another machine/cloud/cluster/you name it. In that case, network latency can be a problem; that's the reason for the promise/callback syntax.
If you are 100% confident that your Redis instance will always live on the same machine as your code, and that the occasional short wait on it is fine, you could use the async/await syntax to write the calls in a synchronous-looking style and avoid explicit callbacks or promise chains :)
I'm not sure it's worth it, though, in terms of coding habits and scalability. But every project is different, and it might suit yours.
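For illustration, a minimal sketch of that await style, using an in-memory Map as a stand-in for a promise-based Redis client (all names here are hypothetical):

```js
// Stand-in for a promise-based redis client (illustrative only).
const store = new Map();
const cache = {
  async get(key) { return store.has(key) ? store.get(key) : null; },
  async set(key, value) { store.set(key, value); },
};

async function getUser(id) {
  // Reads like blocking code, but only this function suspends at each
  // await; the event loop keeps serving other work in the meantime.
  const cached = await cache.get(`user:${id}`);
  if (cached !== null) return cached;
  const user = { id, name: `user-${id}` }; // pretend this is a DB lookup
  await cache.set(`user:${id}`, user);
  return user;
}

getUser(42).then(console.log);
```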
Related
This topic was on my mind for a long time.
Let's assume we have a typical web server, one in Node.js and the other in Java (or any other language with threads).
Why would Node perform better (handle more IO/network-based requests per second) than a Java server just because it uses async/await? Isn't it just syntactic sugar that utilizes the same threads Java/C#/C++ use behind the scenes?
There is no reason to expect Node to be faster than a server written in Java. Why do you think it might be?
It seems the other answers here (so far) are explaining the benefits of asynchronous programming in JS compared to single-threaded synchronous operations -- that's obvious, but not the question.
The key point everyone agrees on is that certain operations are inherently slow (e.g.: waiting for network requests, waiting for disk/database access), and it's efficient to let the CPU do something else while such operations are in flight. Using several threads in your application is one well-established way to do that; but of course that's only possible in languages that give you threads. Many traditional server implementations (in Java, C, C++, ...) use one thread per request (or, equivalently, a thread pool to distribute incoming requests over). These threads can block waiting for, say, the database -- that's okay, the operating system will put the waiting thread to sleep and let the CPU work on another thread (handling another request) in the meantime. The end result is fairly similar to what you get with Node.
JavaScript, of course, doesn't make threads available to the programmer. But instead, it has this concept of scheduling requests with the JavaScript engine and providing a callback to be invoked upon completion of the request. That's how the overall system behaves similarly to a traditional threaded programming language: user code can say "do this stuff, then schedule a database access, and when the result is available, continue with this [callback] code here", and while waiting for the database request, the CPU gets to execute some other code. What you want to avoid is the CPU sitting around idly waiting while there is other work waiting for the CPU to have time for it, and both approaches (Java threads and JavaScript callbacks) accomplish that.
Finally, async/await (just like promises) is indeed just syntactic sugar that makes it easier to write callback-based code. Code using async/await isn't any faster than old-style code using callbacks directly, just prettier and less error-prone. It also isn't any faster than a (well-written) Java-based server.
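To make the sugar point concrete, here is the same non-blocking file read written three ways in Node (a quick sketch; all three schedule the I/O and let the CPU run other code while the read is in flight):

```js
const { readFile } = require('fs');
const { readFile: readFileAsync } = require('fs/promises');

// 1. Plain callback style
readFile(__filename, 'utf8', (err, text) => {
  if (err) throw err;
  console.log('callback:', text.length);
});

// 2. Promise style
readFileAsync(__filename, 'utf8')
  .then((text) => console.log('promise:', text.length));

// 3. async/await: sugar over the promise form, same runtime behaviour
(async () => {
  const text = await readFileAsync(__filename, 'utf8');
  console.log('await:', text.length);
})();
```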
Node.js is popular because it's convenient to use the same language for the client and server parts of an app. From a performance point of view, it's not better than traditional alternatives, it's just also not worse (or at least not much; in practice how efficiently you design your app matters more than whether you implement it in Java or JavaScript). Don't fall for the hype :-)
Asynchrony (async/await) is essential for activities that are potentially blocking, such as when your application accesses the web. Access to a web resource is sometimes slow or delayed. If such an activity is blocked within a synchronous process, the entire application must wait.
In an asynchronous process (thread), the application can continue with other work that doesn't depend on the web resource until the potentially blocking task finishes.
See https://learn.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2012/hh191443(v=vs.110)?redirectedfrom=MSDN#threads, though you should understand the differences between threads and async/await programming.
In regards to asynchronous programming, utilization is the key. For instance, one request can be split up into smaller tasks (fetching intermediate results, reading, writing, establishing connections, etc.), and roughly half the time would otherwise be wasted waiting on dependent tasks. Asynchronous models use this time to handle other incoming requests: they keep a callback function, register it in a queue, save the state, and become available for the next request. Thus, they can handle more requests.
To understand more about handling requests, see: https://www.youtube.com/watch?v=cCOL7MC4Pl0
I've got no problem with events and callbacks, synchrony/asynchrony, the call stack and the queue.
However, as I understand it, other servers make a new thread for each connection, which contains both the blocking request and the handler for the response of that request, whereas in Node this handler would be passed to the main thread as a callback. The ability of such a server to handle multiple requests is therefore limited by its ability to create and switch between multiple threads.
When Node receives a blocking request, it sends it into asynchrony land while it carries on processing the main thread. What happens in asynchrony land: doesn't a thread still need to be created to await the response for that request and then send the event to the Node event loop? If so, why isn't Node just as limited by the server's ability to create and switch between threads? If not, what happens to the request?
I think there's some confusion over how the event loop actually works. NodeJS doesn't "receive a blocking request" and "send it into asynchrony land". It's asynchronous to begin with - unless you call a ...Sync() pattern function, EVERY call and EVERY operation is async. Confusingly, once you are inside your CODE, EVERY operation is synchronous.
It's a "cooperative multitasking" approach - all calls to the system are expected to "start the ball rolling" and return immediately, while your own code is suppose to do what it needs to do as quickly as possible and yield control back to the JSVM (by returning from your function).
To understand how this works when you're dealing with network communications, you need to go back in time to before threads really even existed. In the early days, if you had multiple network connections, your single-threaded process would have to put together a list of all the sockets it wanted information on (such as "has data arrived for me to read?") and ask the OS whether that was true by calling select(). This would return a yes/no for each socket for each question. This was typically done in a while() loop that ran until the program was terminated. You would ask for a list of sockets with new data, read that data, do something with it, and then go back to sleep, over and over again.
NodeJS is far more sophisticated but this analogy works well for it. It has a main "event loop" that is constantly sleeping until there is work to do, then waking up and doing it.
Everything that you do comes from, or goes into, this channel. If you write data to a network socket, and ask to be notified (called back) when it's done, NodeJS passes your request to the operating system and then goes to sleep. You stop running. Your context is saved - all your local vars are saved. When the OS comes back and says "done!", NodeJS checks its list and sees you wanted to know about this, and calls your function, reloading your context so all your local vars are where you need them.
To be very clear, it is entirely possible that when the data is finished being written to the network, and the OS notification comes back for that, NodeJS is busy with other work! NodeJS won't "create a thread" to handle it - it'll ignore it completely until it gets some free time! It won't be lost... it just won't be handled "yet".
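You can observe this "not yet" behaviour directly with a few lines of Node; here a 10 ms timer cannot fire while synchronous code hogs the one thread:

```js
const t0 = Date.now();
setTimeout(() => {
  // Prints ~100 ms, not 10: the callback had to wait for free time
  console.log(`timer fired after ${Date.now() - t0} ms`);
}, 10);
while (Date.now() - t0 < 100) {
  // busy-wait: the event loop gets no chance to run the callback
}
```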
This drives programmers used to threading models nuts - it seems illogical that this constant state of never immediately responding to an incoming event "until it has a chance" could possibly be efficient. But software architectures are often deceiving. Threading models actually have fairly high overhead. CPU core counts aren't infinite - the entire computer as a whole is doing plenty of work all the time. Threads aren't free - just because you make one doesn't mean the CPU itself has time to do anything with it. And the overhead of thread creation and management often means an efficiency loss.
Old-school event-loop models eliminate this overhead. When things go badly like you have an infinite loop in your code, they can behave very badly - often locking up completely. But when things are going well they can actually be a lot faster, and many benchmarks have shown that well-written NodeJS modules can perform as well as or even better than similar modules in other languages.
In summary, the most common confusion in NodeJS is what "async" really means. A good way to think of it is that in threading models, programmers are expected to be "bad"/simplistic (write blocking code and just wait for things to return) and the core VM or OS is expected to be "good"/smart (tolerate this by making threads to handle async work). In NodeJS, programmers are expected to be "good"/sophisticated (write well-structured async code), allowing the JSVM to focus on what it does best and not need as much magic to make things work well. Well-used, NodeJS puts a lot of power in your hands.
After reading about event loops and how async works in node.js, this is my understanding of node.js:
Node actually runs processes one at a time and not simultaneously.
Node really shines when multiple database I/O tasks are called.
It runs faster (than blocking I/O) because it doesn't wait for the response of one call before dealing with the next call. And while dealing with the other call, when the result of the first call arrives, it "gets back to it", basically going back and forth crossing calls and callbacks, without leaving the OS process idle, as opposed to what blocking I/O does. Please correct me if I'm wrong.
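(To illustrate the overlap I mean, here is a toy sketch where timers stand in for database calls:)

```js
// Toy sketch: timers stand in for database calls of ~100 ms each.
const fakeQuery = (ms, name) =>
  new Promise((resolve) => setTimeout(() => resolve(name), ms));

async function main() {
  const t0 = Date.now();
  // Three 100 ms "queries" in flight at once on a single thread:
  const results = await Promise.all([
    fakeQuery(100, 'users'),
    fakeQuery(100, 'orders'),
    fakeQuery(100, 'items'),
  ]);
  console.log(results, `took ~${Date.now() - t0} ms`); // ~100, not 300
}

main();
```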
But here's my question:
Non-blocking I/O seems to be faster than blocking I/O only if the entity (server/process/thread?) that handles the request sent by Node is not the Node server itself.
What about the cases when the server handling the request is the same server making the request? If my first bullet is correct, wouldn't blocking I/O that uses different threads for the task work faster than non-blocking I/O in this case?
Would file compression be an example of such an I/O task that works faster with multithreaded blocking I/O?
The main benefit of non-blocking operations is that a relatively heavyweight CPU thread is not kept busy while the server is waiting for something to happen elsewhere (networking, disk I/O, etc...). This means that many different requests can be "in-flight" with only the single CPU thread and no thread is stuck waiting for I/O. A burden is placed back on the developer to write async-friendly code and to use async I/O operations, but in a heavy I/O bound operation, there can be a real benefit to server scalability. The single thread model also really simplifies access to shared resources since there is far, far less opportunity for threading conflicts, deadlocks, etc... This can result in fewer hard-to-find thread synchronization bugs that tend to only nail your server at the worst time (e.g. when it's busy).
Yes, non-blocking I/O only really helps if the agent handling the I/O operation is not node.js itself, because the whole point of non-blocking I/O in Node is that Node is free to use its single thread to go do other things while the I/O operation is running; if Node itself is serving the I/O operation, that wouldn't be true.
Sorry, but I don't understand the part of your question about file compression. File compression takes a certain amount of CPU, no matter who handles it and there are a bunch of different considerations if you were trying to decide whether to handle it inside of node itself or in an outside process (running a different thread). That isn't a simple question. I'd probably start with using whatever code I already had for the compression (e.g. use node code if that's what you had or an external library/process if that's what you had) and only investigate a different option if you actually ran into a performance or scalability issue or knew you had an issue.
FYI, a simple mechanism for handling compression would be to spool the uncompressed data to files in a temporary directory from your node.js app and then have another process (which could be written in any system, even include node) that just looks for files in the temporary directory to which it applies the compression and then does something more permanent with the resulting compressed data.
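A rough sketch of what that compressing process might look like (the spool directory and file layout here are made up for illustration):

```js
// Hypothetical "other process": gzip every uncompressed file found in
// the spool directory, then remove the original.
const fs = require('fs');
const path = require('path');
const zlib = require('zlib');

const spoolDir = '/tmp/spool'; // made-up spool directory

for (const name of fs.readdirSync(spoolDir)) {
  if (name.endsWith('.gz')) continue; // already compressed
  const src = path.join(spoolDir, name);
  fs.createReadStream(src)
    .pipe(zlib.createGzip()) // the CPU-heavy work happens in this process
    .pipe(fs.createWriteStream(`${src}.gz`))
    .on('finish', () => fs.unlink(src, () => {})); // drop the original
}
```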
I'm working with a code that handles all AJAX requests using Web Workers (when available). These workers do almost nothing more than XMLHttpRequest object handling (no extra computations). All requests created by workers are asynchronous (request.open("get",url,true)).
Recently, I got a couple of issue reports regarding this code, and I started to wonder if I should spend time fixing it or just dump the whole solution.
My research so far suggests that this code may be actually hurting performance. However, I wasn't able to find any credible source supporting this. My only two findings are:
a two-year-old jQuery feature suggestion to use web workers for AJAX calls
this SO question that seems to ask about something a bit different (using synchronous requests in web workers vs AJAX calls)
Can someone point me to a reliable source discussing this issue? Or, are there any benchmarks that may dispel my doubts?
[EDIT] This question gets a little bit more interesting when WebWorker is also responsible for parsing the result (JSON.parse). Is asynchronous parsing improving performance?
I have created a proper benchmark for this on jsperf. Depending on the browser, the WebWorker approach is 85-95% slower than a raw AJAX call.
Notes:
since network response time can differ for each request, I'm testing only new XMLHttpRequest() and JSON.parse(jsonString). There are no real AJAX calls being made.
WebWorker setup and teardown operations are not being measured
note that I'm testing a single request; results for the WebWorker approach may be better for multiple simultaneous requests
Calvin Metcalf explained to me that comparing sync and async on jsperf won't give accurate results and he created another benchmark that eliminates async overhead. Results still show that WebWorker approach is significantly slower.
From the Reddit discussion I learned that data passed between the main page and a WebWorker is copied and has to be serialized in the process. Therefore, using a WebWorker for parsing only doesn't make much sense: the data will have to be serialized and deserialized anyway before you can use it on the main page.
The first thing to remember is that web workers rarely make things faster in the sense of taking less time; they make things faster in the sense that they offload computation to a background thread so that processing related to user interaction is not blocked. For instance, once you take into account transferring the data, a huge calculation might take 8 seconds instead of 4. But if it were done on the main thread, the entire page would be frozen for 4 seconds, which is likely unacceptable.
With that in mind, moving just the AJAX calls off the main thread won't gain you anything, as AJAX calls are non-blocking. But if you have to parse JSON or, even better, extract a small subset out of a large response, then a web worker can help you out.
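Here is a rough sketch of that pattern; the file names, endpoint, and payload shape are illustrative, and the point is that only a small result crosses the copy/serialization boundary discussed earlier:

```js
// worker.js -- parse the big payload off the main thread, return only
// the small subset the page actually needs.
self.onmessage = (e) => {
  const parsed = JSON.parse(e.data);           // heavy parse happens here
  self.postMessage(parsed.items.slice(0, 10)); // small result: cheap copy
};

// main.js
const worker = new Worker('worker.js');
worker.onmessage = (e) => console.log('first items:', e.data);

fetch('/big.json')                              // hypothetical endpoint
  .then((r) => r.text())
  .then((text) => worker.postMessage(text));    // hand raw text to the worker
```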
A caveat I've heard but not confirmed is that workers use a different cache than the main page, so if the same resources are being loaded in both the main thread and the worker, it could cause a large duplication of effort.
You are optimizing your code in the wrong place.
AJAX requests already run in a separate thread and return to the main event loop once they are fulfilled (calling the defined callback function).
Web workers are an interface to threads, meant for computationally expensive operations. Just like in classical desktop applications when you don't want to block the interface with computations that take a long time.
Asynchronous IO is an important concept in JavaScript.
First, your request is already asynchronous: the IO is non-blocking, and while the request is in flight you can run any other JavaScript code. Executing the callback in a worker is much more interesting than executing the request there.
Second, JavaScript engines execute all code in the same thread; if you create new threads, you need to handle data communication with the worker message API (see: semaphores).
In conclusion, the asynchronous and single-threaded nature of JavaScript is powerful. Use it as much as possible, and create workers only if you really need them, for example for a long-running JavaScript computation.
From my experience, Web Workers should not be used for the AJAX calls themselves. First of all, those calls are already asynchronous, meaning code will continue to run while you're waiting for the response to come back.
Now, handling the response is definitely something you could use a Web Worker for. Some examples:
Parsing the response to build a large model
Computing large amounts of data from the response
Using a Shared Web Worker with a template engine in conjunction with the AJAX response to build the HTML which will then be returned for appending to the DOM.
Edit: another good read would be: Opinion about synchronous requests in web workers
A Node.js server works on an event-based model with callback functions. But I am not able to understand how it is better than a traditional thread-based server where threads wait for system IO. In the thread-based model, when a thread needs to wait for IO, it gets preempted, so it doesn't consume CPU cycles and hence doesn't contribute to wait time.
How does Node.js improve wait time?
when a thread needs to wait for IO, it gets preempted
Actually, it's not preempted. Preemption is something completely different. What happens is that the thread is blocked.
For an event based model something similar happens. Event based interpreters are basically state machines. Only, the state machine is abstracted away and is not visible to the user. When something is waiting for an event it passes the control back to the interpreter. When the interpreter has nothing else to process it blocks itself waiting for I/O. Only, unlike traditional threading code the interpreter waits for multiple I/O.
What's happening at the C level is that the interpreter is using something like select(), poll(), epoll() and friends (depends on the OS and library installed) to do the blocking and waiting for I/O.
Now, why does a select()/poll() based mechanism generally perform better? Actually, 'generally' here depends on what you mean. A select() based server executes all code in a single process/thread. The biggest performance gain from this is that it avoids context switching - every time the OS transfers control over from one thread to another it has to save all the relevant registers, memory map, stack pointers, FPU context etc. so that the other thread can resume execution where it left off. The overhead of doing this can be quite significant.
In fact, there is a historical example of how extreme the overhead can be. Back in the early 2000s, someone started benchmarking web servers. To the surprise of everyone, tclhttpd outperformed Apache at serving static files. Now, Tcl is not only an interpreted language, but back in 2000 it was a very slow interpreted language, because it didn't have a separate compilation phase (it sort of does now). Tcl scripts were interpreted directly in string form, making them around 400x slower than C. Apache is obviously written in C, so what made tclhttpd faster?
It turned out that tclhttpd was event-based, running on only a single thread, while Apache was multithreaded. The overhead of constant thread switching gave tclhttpd enough of an advantage to outperform Apache.
Of course, there is always a compromise. A single threaded server like tclhttpd or node.js cannot take advantage of multiple CPUs. Back in the early 2000s multiple CPUs were uncommon. These days they are almost default. Not to mention that most CPUs are also hyperthreaded (hyperthreading adds hardware to the CPU to make context switching cheap).
The best servers these days have learned from history and combine both approaches. Apache2 and Nginx use thread pools: they are multithreaded, but each thread serves more than a single connection. This is a hybrid of the two approaches, but it is more complex to manage.
Read the following article for a more in-depth discussion on this topic: The C10K problem
Threads are relatively heavy-weight objects that have a resource footprint extending all the way into the kernel. When you park a thread in a blocking syscall or on a mutex or condition variable, you are tying up all those resources but doing nothing. Now the OS has to find more resources so your program can create another thread... Then you idle them too. It doesn't take long before the OS is struggling to scavenge more resources for your program to waste.
CPU time is just one small part of the bigger picture. :-)
Simply put:
In a threaded server, no matter how many threads you have, you can always have that many threads waiting for IO.
In node, no matter how many IO operations are pending, you always have your event loop ready to do the next thing.
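A toy illustration of that difference, with timers standing in for pending I/O:

```js
// 10,000 pending "waits" on one thread cost almost nothing: each is
// just an entry in the event loop's timer list, not a parked OS thread
// with its own stack and kernel resources.
for (let i = 0; i < 10000; i++) {
  setTimeout(() => { /* completion handler for operation i */ }, 1000);
}
console.log('scheduled 10,000 pending operations on a single thread');
```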
When you have a lot of threads, you are going to have a lot of context switching, which is expensive. You won't have this overhead when using Node.js's event loop.
Context Switch

A context switch is the computing process of storing and restoring the state (context) of a CPU so that execution can be resumed from the same point at a later time.

Event loop

In computer science, the event loop, message dispatcher, message loop or message pump is a programming construct that waits for and dispatches events or messages in a program.
I think you are full of myths regarding threads and the cost of context switching.
Discover the truth for yourself.