There is a lot of info about the application (Node) and renderer (Chromium) processes in Electron, and about the communication between these processes via data marshalling through IPC and separated contexts.
But there is no info about the event loop.
So here is the question: how many event loops are used in Electron? Are there several event loops (one per process: main and renderers), and if so, how does libuv work with that? Or is there a single event loop shared between these processes?
Related
I'm still trying to understand what worker threads are and how they differ from child processes, so please bear with me.
I'm currently building a desktop app with Node.js + Electron. The app works on several tasks at a time, some of which are CPU- and I/O-intensive.
The architecture currently has one main process and a number of child processes matching the host's CPU core count.
The main process handles the Electron instance, the renderer process, and the local database process, and manages the other child processes.
Meanwhile, the child processes do the other tasks that are CPU- and I/O-intensive in nature.
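Roughly, the current setup looks something like the sketch below (simplified; heavy-task.js and index.html are just placeholder names):

```
// main.js (Electron main process) -- simplified sketch of the current setup.
const { app, BrowserWindow } = require('electron');
const { fork } = require('child_process');
const os = require('os');

const children = [];

app.whenReady().then(() => {
  new BrowserWindow().loadFile('index.html');          // renderer process

  // One child process per CPU core for the heavy tasks.
  for (let i = 0; i < os.cpus().length; i++) {
    const child = fork('./heavy-task.js');             // placeholder job runner
    child.on('message', (result) => console.log(`child ${i} finished:`, result));
    children.push(child);
  }
});
```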
So far, I have 4 questions here:
In my case, is it more beneficial to use worker threads instead?
If a task requires several packages/libraries, will each worker thread have to require them again every time the task is run?
Currently, the child processes have no access to the Electron API, so only the main process handles it. Would using worker threads allow me to handle the Electron API?
In simple terms, what's the difference between worker threads and a thread pool? And should I use a thread pool instead of the other two (child processes and worker threads)?
When a Node.js process is spun up, the top command shows 7 threads attached to the process. What are all these threads doing? Also, as the load on the API increases, with the request handlers themselves asynchronously awaiting other upstream API calls, does Node spawn additional worker threads? I see in top that it does. But I thought this only happens for file I/O. Why does it need these additional worker threads?
libuv (the underlying cross-platform system library that node.js is built on) uses a thread pool for certain operations such as disk I/O and some crypto operations. By default, that thread pool contains 4 threads.
Plus, there is a thread for the execution of your Javascript, so that accounts for 5.
Then, it appears there is a thread used by the garbage collector for background marking of objects (per this reference from a V8 developer and this article). That would make 6.
I don't know for sure what the 7th one would be. It's possible there's a thread used by the event loop itself.
Then, starting some time around 2018, it appears that nodejs switched to a separate set of threads for DNS requests (separate from the file I/O thread pool). This was probably because of problems in node.js where 4 slow DNS requests could block all file I/O by taking over the thread pool. So, it now looks like node.js uses the c-ares library for DNS, which makes its own set of threads.
FYI, you can actually control the thread pool size with the UV_THREADPOOL_SIZE environment variable.
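To make that concrete, here is a small sketch (not part of the original answer): crypto.pbkdf2 is one of the operations offloaded to the libuv thread pool, so with the default pool size of 4 the fifth call below only finishes after a thread frees up, and raising UV_THREADPOOL_SIZE changes that behavior.

```
// threadpool-demo.js
// Run with:              node threadpool-demo.js
// Or with a bigger pool: UV_THREADPOOL_SIZE=8 node threadpool-demo.js
const crypto = require('crypto');

const start = Date.now();
for (let i = 1; i <= 5; i++) {
  // pbkdf2 runs on the libuv thread pool; with 4 threads, four hashes run in
  // parallel and the fifth waits for a free thread, so its timing jumps.
  crypto.pbkdf2('password', 'salt', 200000, 64, 'sha512', () => {
    console.log(`hash ${i} done after ${Date.now() - start} ms`);
  });
}
```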
And, of course, you can create your own Worker Threads that actually create new instances of the V8 Javascript execution engine (so they will likely end up creating more than one new thread).
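And a minimal worker_threads sketch (again, not from the original answer) showing a Worker as its own V8 instance exchanging messages with the main thread:

```
// worker-demo.js -- run with: node worker-demo.js
const { Worker, isMainThread, parentPort } = require('worker_threads');

if (isMainThread) {
  const worker = new Worker(__filename);   // spawns a new V8 instance + event loop
  worker.once('message', (msg) => {
    console.log('from worker:', msg);
    worker.terminate();                    // let the process exit cleanly
  });
  worker.postMessage('ping');
} else {
  parentPort.once('message', (msg) => {
    // CPU-heavy work could run here without blocking the main thread.
    parentPort.postMessage(msg + ' -> pong');
  });
}
```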
I'm new to the JavaScript/Node.js/Electron stack. I'm writing an Electron frontend for a C++ backend (a standalone executable), with ZeroMQ as the IPC vehicle in between.
With languages capable of threading, I'd create a thread for polling multiple ZMQ sockets, but it seems that threading is not a thing for JS client code. Should I resort to child processes instead? But while there are already main and renderer processes, having yet another process just for IPC seems a little crazy to me.
What would be the canonical way to achieve what I'm trying to do?
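For illustration, here is a rough sketch of polling two sockets with plain async/await on the single event loop, without any extra thread or process; it assumes the zeromq npm package's v6 async-iterator API, and the endpoints are placeholders:

```
// zmq-poll-sketch.js -- assumes the `zeromq` npm package v6; endpoints are placeholders.
const zmq = require('zeromq');

async function pollPull() {
  const sock = new zmq.Pull();
  sock.connect('tcp://127.0.0.1:5557');
  for await (const [msg] of sock) {          // suspends without blocking the loop
    console.log('pull:', msg.toString());
  }
}

async function pollSub() {
  const sock = new zmq.Subscriber();
  sock.connect('tcp://127.0.0.1:5558');
  sock.subscribe();                           // subscribe to everything
  for await (const frames of sock) {          // frames: array of Buffers per message
    console.log('sub:', frames.map((f) => f.toString()));
  }
}

// Both "pollers" interleave on the single event loop -- no extra thread needed.
pollPull();
pollSub();
```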
I am trying to use worker threads with a worker pool in my application, which is intended to run in 256 MB Docker containers.
My main thread takes around 30 MB of memory and one worker thread takes around 25 MB (counting the require of third-party node modules). Given this, I would only be able to create a pool of ~7 workers.
But my application requirements are such that it should be able to handle many jobs at a time by spinning up many workers and having them listen for jobs (around 20 or more).
Is there any way to share third-party modules (like lodash, request, etc.) across worker threads, to save the memory needed to require all the necessary modules in each one?
My initial thought was to try shared memory (SharedArrayBuffer), but that won't work, since it doesn't allow passing such complex object structures and functions.
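(For context, here is a minimal sketch of what SharedArrayBuffer does give you between worker threads: shared raw bytes and nothing more, which is why module exports and functions can't travel through it.)

```
// sab-demo.js -- SharedArrayBuffer shares raw bytes only, not objects or functions.
const { Worker, isMainThread, workerData } = require('worker_threads');

if (isMainThread) {
  const sab = new SharedArrayBuffer(4);            // 4 bytes of shared memory
  const counter = new Int32Array(sab);
  const worker = new Worker(__filename, { workerData: sab });
  worker.on('exit', () => {
    console.log('value written by worker:', Atomics.load(counter, 0)); // -> 42
  });
} else {
  const counter = new Int32Array(workerData);      // same memory, no copy
  Atomics.store(counter, 0, 42);                   // only numbers/bytes fit here
}
```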
Can anyone help me with a possible solution?
Thanks in advance!
In a tutorial, I've read that one should use Node's event-loop approach mainly for I/O-intensive tasks, like reading from the hard disk or using the network, but not for CPU-intensive tasks.
What's the concrete reason for that statement?
Or, asked the other way around:
What would happen if you occupied Node.js with CPU-intensive tasks?
Node uses a small number of threads to handle many clients. In Node there are two types of threads: one Event Loop (aka the main loop, main thread, event thread, etc.), and a pool of k Workers in a Worker Pool (aka the threadpool).
If a thread is taking a long time to execute a callback (Event Loop) or a task (Worker), we call it "blocked". While a thread is blocked working on behalf of one client, it cannot handle requests from any other clients.
You can read more about it in the official Node.js guide.
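As a quick illustration of blocking (a sketch, not taken from the guide): a synchronous CPU-bound loop keeps the Event Loop busy, so every other callback, including timers, has to wait:

```
// block-demo.js -- run with: node block-demo.js
setInterval(() => console.log('tick', new Date().toISOString()), 100); // ~10 ticks/sec

setTimeout(() => {
  const start = Date.now();
  while (Date.now() - start < 3000) { /* spin: simulate a CPU-intensive task */ }
  console.log('CPU task done');   // no ticks were printed during those 3 seconds
}, 500);
```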