Monitoring a MongoDB connection to a replica set in Node.js - javascript

I want to connect to a MongoDB replica set (only a single instance, so that change streams work) while being notified of connection loss and reconnection.
I followed what is described here:
const { MongoClient } = require("mongodb");

// Replace the following with your MongoDB deployment's connection
// string.
const uri =
  "mongodb+srv://<clusterUrl>/?replicaSet=rs&writeConcern=majority";

const client = new MongoClient(uri);

// Replace <event name> with the name of the event you are subscribing to.
const eventName = "<event name>";
client.on(eventName, event => {
  console.log(`received ${eventName}: ${JSON.stringify(event, null, 2)}`);
});

async function run() {
  try {
    await client.connect();
    // Establish and verify connection
    await client.db("admin").command({ ping: 1 });
    console.log("Connected successfully");
  } finally {
    // Ensures that the client will close when you finish/error
    await client.close();
  }
}
run().catch(console.dir);
I tried subscribing to events:
serverOpening works fine.
serverClosed does not work, and I can't understand why.
There is no "reconnect" event. Any solution?

You are mixing monitoring connections and application connections. Unfortunately the documentation you referenced doesn't talk about this and doesn't document CMAP events so the confusion is understandable. See the Ruby driver docs for a more in-depth explanation of the events that drivers publish (including the Node driver).
Monitoring connections are established by the driver to figure out what server(s) exist in the deployment it was instructed to work with. One (or two depending on driver and server version) such connection is established per known server. You don't control when these connections are established. These connections are NOT used for operations you initiate (inserts/finds etc.). They are only used internally by the driver.
The events published for monitoring connections are server opened, server closed and server heartbeat - the ones listed here. You are going to get these events when the client is instantiated (assuming a spec-compliant client, which the Node one is not in its default configuration as you are using it) without any operations being issued, such as creating a change stream.
Application connections are established by the driver to satisfy the application's operations like finds and inserts. One of these would be needed for your change stream. The events relevant to these connections are CMAP ones and start with "Connection" or "Pool", e.g. ConnectionCreated. These connections aren't established until you issue an operation, unless you have the min pool size on the client set to a value greater than zero.
If you want to "monitor connections", you can subscribe to either category of events or both.
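For illustration, here is a minimal sketch of subscribing to both categories, assuming a recent Node driver that emits SDAM events such as serverOpening/serverClosed and CMAP events such as connectionPoolCreated/connectionCreated on the MongoClient; verify the exact event names against the driver version you use:

const { MongoClient } = require("mongodb");

// Hypothetical URI; replace with your deployment's connection string.
const client = new MongoClient("mongodb+srv://<clusterUrl>/?replicaSet=rs");

// Monitoring (SDAM) events - emitted as soon as the client starts discovering servers.
["serverOpening", "serverClosed", "serverHeartbeatSucceeded"].forEach(name => {
  client.on(name, event => console.log(`[SDAM] ${name}`, event));
});

// Application connection (CMAP) events - emitted once operations create pooled connections.
["connectionPoolCreated", "connectionCreated", "connectionClosed"].forEach(name => {
  client.on(name, event => console.log(`[CMAP] ${name}`, event));
});

client.connect().catch(console.error);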
With that said, both types of connections are managed internally by the driver. You don't get a say in when they are created or destroyed (other than setting min pool size and idle timeouts). So if your goal is to have a working, continuously-running, resuming change stream, you don't need any of this and instead you should be using the proper change stream consumption patterns like the one described here in Ruby syntax although all spec-compliant drivers should provide the equivalent interface.
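As a rough sketch of that consumption pattern in Node (the collection argument and the retry loop are illustrative; the resume-token handling follows the driver's documented change stream API but should not be taken as the definitive implementation):

async function watchForever(collection) {
  let resumeToken;
  for (;;) {
    // Resume from the last seen token if we have one.
    const stream = collection.watch([], resumeToken ? { resumeAfter: resumeToken } : {});
    try {
      for await (const change of stream) {
        resumeToken = change._id; // remember where we are
        console.log("change:", change.operationType);
      }
    } catch (err) {
      console.error("change stream error, retrying:", err.message);
    } finally {
      await stream.close().catch(() => {});
    }
  }
}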
Lastly, there isn't a "reconnect" event defined in any driver specification. If you have a question specifically about this event you should reference the driver documentation where it is described and read that documentation carefully to ascertain the implemented behavior.

Related

How to reconnect socket after a passive network switch

So I have a Swift client, a Node.js server, and am using socket.io. I have an issue where, when a user passively switches from WiFi to LTE (if they turn off WiFi manually it works fine) while connected to the server, for some reason they don't reconnect to the server (they just hit a ping timeout). I've tried increasing the ping timeout to 50 seconds with no effect. My users interact with each other while connected to the same room, so this is a big issue.
My connection code on the client-side looks like this:
var socket: SocketIOClient?
fileprivate var manager: SocketManager?

func establishConnection(_ completion: (() -> Void)? = nil) {
    let socketUrlString: String = serverURL
    self.manager = SocketManager(socketURL: URL(string: socketUrlString)!,
                                 config: [.forceWebsockets(true), .log(false), .reconnects(true), .extraHeaders(["id": myDatabaseID])])
    self.socket = manager?.defaultSocket
    self.socket?.connect()
    //self.socket?.on events go here
}
On the server side, my connection code looks like:
const io = require('socket.io')(http, {
  pingTimeout: 10000
});

io.on('connection', onConnection);

function onConnection(socket) {
  let headerDatabaseID = socket.handshake.headers.id;
  // In the for loop below, I essentially disconnect any socket that has the same database ID
  // as the one that just connected (when a client is in a room with other clients and then
  // switches from WiFi to LTE, the client reconnects to the socket server and this removes
  // the old connection from the room it was in).
  for (let [id, connectedSocket] of io.sockets.sockets) {
    if (connectedSocket.databaseID == headerDatabaseID && socket.id != id) {
      connectedSocket.disconnect();
      break;
    }
  }
  //socket.on() events here
}
My issue is this--how do I go about reconnecting the client when it makes the passive network switch (WiFi -> LTE or vice versa)? I thought that just adding .reconnects(true) would work but for some reason, it's not...
Please let me know if I can be more detailed/helpful or if you'd like to see other codes.
I believe the solution to your problem can be either simple or complex; that depends on your requirements. I assume that each chat room has its own ID.
If you store that ID in memory on the device, then when the user reconnects you can have the socket reconnect to the room ID you had last, and they will re-join that room. This is insecure, though.
If rooms are protected and not public, someone may be able to connect to a room they are not allowed in if they know or can guess the room ID. To solve that problem, you'd need to implement some sort of authentication, or a server-side database that keeps track of that sort of thing; a sketch of the re-join flow is below.
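A minimal server-side sketch of that idea, assuming a hypothetical allowedMembers map that records which database IDs belong to which room (the event names rejoin/rejoined are illustrative, not part of socket.io):

// Hypothetical permission store: roomID -> set of database IDs allowed in that room.
const allowedMembers = new Map();

io.on('connection', socket => {
  const databaseID = socket.handshake.headers.id;

  // Client sends the room ID it remembered before the network switch.
  socket.on('rejoin', lastRoomID => {
    const members = allowedMembers.get(lastRoomID);
    if (members && members.has(databaseID)) {
      socket.join(lastRoomID);            // re-join only if this user belongs to the room
      socket.emit('rejoined', lastRoomID);
    } else {
      socket.emit('rejoin-denied', lastRoomID);
    }
  });
});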
Considering the behavior varies based on whether the handoff is manual or passive, it sounds like the issue is on the iOS client. I notice that you are using sockets - it seems to be some sort of custom sockets package, right? Is there a reason for using this? URLSession is a higher-level implementation and it manages things like handoff.
There is something called Wi-Fi Assist, developed by Apple, to manage handoff. It is part of the OS and manages this internally. According to Apple: "Using URLSession and the Network framework already gives us the new WiFi assist benefits." This was released in iOS 9, in September 2015.
But if you are using some other kind of sockets, whatever this "SocketIOClient" is - especially a package developed prior to September 2015 - you are probably bypassing Wi-Fi Assist. The latest version of the Socket.IO client I see was written in 2015, and it appears support for this package was discontinued when iOS 9 came out.
When the user manually changes the connection, this manually prompts the OS to tear down and re-establish the connection, whereas with a passive handoff it normally relies on Wi-Fi Assist.
You could try to programmatically tear down and re-establish the connection when you detect that a passive handoff has occurred, but I wouldn't recommend this... for starters, it will make your code much messier. It will probably degrade the user experience. But worse, this may not be the only problem you run into using this outdated Socket.IO package. There's really no telling what kind of maintenance problems you will wind up with. Better to just refactor your code to use the up-to-date networking mechanisms provided by iOS.
If .reconnects(true) isn't working, you can try to manually take care of the problem with Apple's Reachability. This may make it easier - it's the Reachability functionality "re-written in Swift with closures."
In your case, you might use it as such:
let reachability = try! Reachability()

reachability.whenReachable = { reachability in
    if reachability.connection == .wifi {
        print("Reachable via WiFi")
        self.socket?.disconnect()
        self.establishConnection() // this is your method defined in the question
    } else {
        print("Reachable via Cellular")
        self.socket?.disconnect()
        self.establishConnection() // this is your method defined in the question
    }
}
reachability.whenUnreachable = { _ in
    print("Not reachable")
}

do {
    try reachability.startNotifier()
} catch {
    print("Unable to start notifier")
}

Node JS: Will a Bidirectional gRPC Call Open Multiple HTTP/2 Connections?

Will a bidirectional RPC call ever open multiple http2 connections?
I'm writing a gRPC client that talks to a gRPC server I don't own/control. I'm using the @grpc/grpc-js package. I've been asked whether this library will open multiple HTTP/2 connections to the gRPC endpoint, and I'm not familiar enough with the source code to answer this question. My code for making a call and opening a stream looks like this:
const grpc = require('@grpc/grpc-js')
const protoLoader = require('@grpc/proto-loader')

const packageDefinition = protoLoader.loadSync(
  __dirname + '/path/to/v1.proto',
  {
    keepCase: true,
    longs: String,
    enums: String,
    defaults: true,
    oneofs: true
  })

const protoDescriptor = grpc.loadPackageDefinition(packageDefinition).com.foo.bar.v1

const client = new protoDescriptor.IngestService(
  'server.url.here.com:443',
  grpc.credentials.createSsl()
)

// metadata is constructed elsewhere (e.g. new grpc.Metadata())
const stream = client.doTheThing(metadata)
I've started to look into this myself and I see that it's the Subchannel objects that initiate the HTTP/2 connections, so it seems like it's one HTTP/2 connection per subchannel. However, the relationship between the call, the HTTP/2 call stream, the main channel, the subchannel(s?), load balancers, and filter stacks is unclear to me, and I can't reason about when (if at all) a second HTTP/2 connection would ever be opened.
Ideally, if someone can answer the question "Will a bidirectional RPC call open multiple HTTP/2 connections?" that would be great. If that's too complicated an answer, I'd settle for a theory of operation on what the relationship between those various objects is intended to be so I can reason about this myself, or anything else you might think would help.
No matter what streaming type the request is, gRPC will use each single connection to open multiple streams. This is one major reason HTTP/2 was chosen as the underlying protocol for gRPC: multiplexing streams onto connections is already part of that protocol.
Of the classes you mentioned, the Channel is the API-level abstraction over connections. A Channel represents any number of connections to backends referred to by the target string. It will automatically establish connections as needed to handle any requests that are initiated.
The Resolver, which you didn't mention, determines what backend addresses are associated with the target string. For example, the DnsResolver will look up DNS records.
A LoadBalancer determines which specific connections to establish and how to distribute requests among those connections. The default load balancing policy, "pick first", just sends all requests to whichever connection is successfully established first. There is also the "round robin" load balancing policy, which tries to establish connections to multiple backends and then cycles through them when starting calls.
A Subchannel represents a connection to a single backend, that can be reestablished if it drops.
The filter stack applies some transformations to requests between when they are initiated at the top-level API and when they are sent out on the network.
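To make the consequence of that layering concrete, here is a small sketch assuming the same IngestService client and metadata from the question (getChannel, getTarget and getConnectivityState are part of the grpc-js client/channel API, but treat the snippet as illustrative rather than definitive): multiple concurrent calls made on one client go through the same Channel, which multiplexes them as HTTP/2 streams over whatever connections its subchannels maintain.

// Both calls are initiated on the same client and therefore share one Channel.
// With the default "pick first" policy, that normally means a single HTTP/2
// connection carrying both RPCs as separate streams.
const streamA = client.doTheThing(metadata)
const streamB = client.doTheThing(metadata)

// The Channel is shared; a second connection is only established if the
// resolver/load balancer decides to open one (e.g. round robin with multiple
// backends) or if the existing connection drops and a subchannel reconnects.
console.log(client.getChannel().getTarget())
console.log(client.getChannel().getConnectivityState(false))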

Node Postgres Pub/Sub - remaining connection slots are reserved

I've built a notification system with node-postgres and socket.io. The system works alright, however, I am getting an error on startup.
remaining connection slots are reserved for non-replication superuser connections
I suspect this is due to not releasing the client back to the pool.
Pool.connect()
  .then(client => {
    return client.query('LISTEN "new_notification"')
      .then(result => {
        client.on('notification', data => {
          // Handle the notification
        });
        // Should release here
      })
      .catch(e => {
        // Should release here
        this.log(e.message, 'error');
      });
  })
  .catch(e => {
    this.log(e.message, 'error')
  });
However, even after adding client.release() in the locations marked // Should release here, I still receive the error message. The notification still works, though.
When the server starts it creates a single HeyListen object in which the Pool above is created as well.
Now, the above error usually happens when the server starts and is flooded with connections. There are 6 sites that handle connections with users. Each user spawns a new connection to the socket.io server, and when the disconnecting event is triggered via socket.io they are removed from a list of connected users. Each time a user connects, they trigger a query against Postgres to check whether they have outstanding notifications. Here's the query object:
const conn = this.getConnectionType(pool);
conn.connect()
  .then(client => {
    return client.query(query_string, params)
      .then(result => {
        client.release();
        callback(null, result);
      })
      .catch(error => {
        client.release();
        callback(error, null);
      });
  })
  .catch(error => {
    callback(error, null);
  });
If I run SELECT * FROM pg_stat_activity on my Postgres server, I can see the open connections (output omitted).
I thought using client.release() was supposed to remove these connections? Rerunning the query above shows different results, so some are being removed. Is this an issue of not having enough max_connections available? If so, is it a good idea to bump that number up for my use case?
While experimenting with my own pub/sub client functionality for Node.js and PostgreSQL, I've seen that error when I run out of usable connections. If I remember correctly, the problem you're running into is that with the node-postgres package, a connection being used with LISTEN is considered active and in-use by the underlying connection pool until the client stops listening with UNLISTEN.
However, if you stop listening on a given connection, you'll no longer receive those useful notification events. Bit of a dilemma.
If possible, I'd advise you to look into setting up a smaller number of dedicated connections specifically for listening to various channels through PostgreSQL in each of your six applications, and then use separate connections to execute your additional queries. I don't know how feasible or performant this would be with your current traffic load, but in theory it should reduce the likelihood of using all available connections from the pool.
Out of curiosity, what was your rationale for having each user's connection LISTEN for notifications all on its own? It's possible to pass a payload to listeners through NOTIFY, so if you have some means of identifying an individual user, you could send your six apps a payload that declares which user a notification is for, and each of your six apps could forward that information to the correct user connection.
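As a rough sketch of that separation, assuming node-postgres (the channel name new_notification comes from the question; the user_id payload convention and the notifications table/columns in the query are illustrative assumptions, not anything the library mandates): one long-lived client is dedicated to LISTEN, and the pool is reserved for ordinary queries.

const { Pool, Client } = require('pg');

const pool = new Pool();              // used for normal queries only
const listener = new Client();        // one dedicated connection for notifications

async function startListening(onNotification) {
  await listener.connect();
  await listener.query('LISTEN "new_notification"');
  listener.on('notification', msg => {
    // NOTIFY can carry a payload, e.g. NOTIFY "new_notification", '{"user_id": 42}'
    const payload = msg.payload ? JSON.parse(msg.payload) : {};
    onNotification(payload);          // route to the right socket.io connection here
  });
}

// Ordinary queries keep using the pool and release their client promptly.
async function getOutstanding(userId) {
  const { rows } = await pool.query(
    'SELECT * FROM notifications WHERE user_id = $1 AND read = false',
    [userId]
  );
  return rows;
}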

Node.js: Closing all Redis clients on shutdown

Today, I integrated Redis into my node.js application and am using it as a session store. Basically, upon successful authentication, I store the corresponding user object in Redis.
When I receive http requests after authentication, I attempt to retrieve the user object from Redis using a hash. If the retrieval was successful, that means the user is logged in and the request can be fulfilled.
The act of storing the user object in Redis and the retrieval happen in two different files, so I have one Redis client in each file.
Question 1:
Is it ok having two Redis clients, one in each file? Or should I instantiate only one client and use it across all areas of the application?
Question 2:
Does the node-redis library provide a method to show a list of connected clients? If it does, I will be able to iterate through the list, and call client.quit() for each of them when the server is shutting down.
By the way, this is how I'm implementing the "graceful shutdown" of the server:
//Gracefully shutdown and perform clean-up when kill signal is received
process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);

function cleanup() {
  server.stop(function() {
    //todo: quit all connected redis clients
    console.log('Server stopped.');
    //exit the process
    process.exit();
  });
}
In terms of design and performance, it's best to create one client and use it across your application. This is pretty easy to do in node. I'm assuming you're using the redis npm package.
First, create a file named redis.js with the following contents:
const redis = require('redis');

const RedisClient = (function() {
  return redis.createClient();
})();

module.exports = RedisClient;
Then, say in a file set.js, you would use it as so:
const client = require('./redis');
client.set('key', 'value');
Then, in your index.js file, you can import it and close the connection on exit:
const client = require('./redis');

process.on('SIGINT', cleanup);
process.on('SIGTERM', cleanup);

function cleanup() {
  client.quit(function() {
    console.log('Redis client stopped.');
    server.stop(function() {
      console.log('Server stopped.');
      process.exit();
    });
  });
}
Using multiple connections may be required by how the application uses Redis.
For instance, as soon as a connection is used for the purpose of listening to a pub/sub channel, it can only be used for this and nothing else. Per the documentation on SUBSCRIBE:
Once the client enters the subscribed state it is not supposed to issue any other commands, except for additional SUBSCRIBE, PSUBSCRIBE, UNSUBSCRIBE and PUNSUBSCRIBE commands.
So if your application needs to subscribe to channels and use Redis as general value cache, then it needs two clients at a minimum: one for subscribing to channels and one for using Redis as a cache.
There are also Redis commands that are blocking, like BLPOP. A busy web server normally replies to multiple requests at once. Suppose that to answer request A the server uses its Redis client to issue a blocking command. Then request B comes in and the server needs to issue a non-blocking command to Redis, but the client is still waiting for the blocking command issued for request A to finish. Now the response to request B is delayed by another request. This can be avoided by using a different client for the second request.
If you do not use any of the facilities that require more than one connection, then you can and should use just one connection.
If the way you use Redis is such that you need more than one connection, and you just need a list of connections but no sophisticated connection management, you could just create your own factory function: it would call redis.createClient() and save the client before returning it. Then at shutdown time, you could go over the list of saved clients and close them. Unfortunately, node-redis does not provide such functionality built-in.
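A minimal sketch of such a factory, assuming the callback-style redis package (v3.x); the module name redis-factory.js and the tracking array are just illustrative:

// redis-factory.js - creates clients on demand and remembers them for shutdown.
const redis = require('redis');

const clients = [];

function createClient(options) {
  const client = redis.createClient(options);
  clients.push(client);              // keep a reference so we can close it later
  return client;
}

function quitAll(done) {
  let remaining = clients.length;
  if (remaining === 0) return done();
  clients.forEach(client => {
    client.quit(() => {
      if (--remaining === 0) done(); // call back once every client has quit
    });
  });
}

module.exports = { createClient, quitAll };

At shutdown, the cleanup handler from the question would call quitAll() before stopping the server.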
If you need more sophisticated client management than the factory function described above, then the typical way to manage the multiple connections created is to use a connection pool but node-redis does not provide one. I usually access Redis through Python code so I don't have a recommendation for Node.js libraries, but an npm search shows quite a few candidates.

What's the most efficient node.js inter-process communication library/method?

We have a few Node.js processes that should be able to pass messages between them.
What's the most efficient way of doing that?
How about using node_redis pub/sub?
EDIT: the processes might run on different machines
If you want to send messages from one machine to another and do not care about callbacks then Redis pub/sub is the best solution. It's really easy to implement and Redis is really fast.
First you have to install Redis on one of your machines.
It's really easy to connect to Redis:
var client = require('redis').createClient(redis_port, redis_host);
But do not forget to open the Redis port in your firewall!
Then you have to subscribe each machine to some channel:
client.on('ready', function() {
  return client.subscribe('your_namespace:machine_name');
});

client.on('message', function(channel, json_message) {
  var message;
  message = JSON.parse(json_message);
  // do whatever you want with the message
});
You may skip your_namespace and use the global namespace, but you will regret it sooner or later.
It's really easy to send messages, too:
var send_message = function(machine_name, message) {
  return client.publish("your_namespace:" + machine_name, JSON.stringify(message));
};
If you want to send different kinds of messages, you can use pmessages instead of messages:
client.on('ready', function() {
  return client.psubscribe('your_namespace:machine_name:*');
});

client.on('pmessage', function(pattern, channel, json_message) {
  // pattern === 'your_namespace:machine_name:*'
  // channel === 'your_namespace:machine_name:'+message_type
  var message = JSON.parse(json_message);
  var message_type = channel.split(':')[2];
  // do whatever you want with the message and message_type
});

send_message = function(machine_name, message_type, message) {
  return client.publish([
    'your_namespace',
    machine_name,
    message_type
  ].join(':'), JSON.stringify(message));
};
The best practice is to name your processes (or machines) by their functionality (e.g. 'send_email'). In that case process (or machine) may be subscribed to more than one channel if it implements more than one functionality.
Actually, it's possible to build bi-directional communication using Redis. But it's more tricky, since it would require adding a unique callback channel name to each message in order to receive the callback without losing context.
So, my conclusion is this: Use Redis if you need "send and forget" communication, investigate another solutions if you need full-fledged bi-directional communication.
Why not use ZeroMQ/0mq for IPC? Redis (a database) is overkill for doing something as simple as IPC.
Quoting the guide:
ØMQ (ZeroMQ, 0MQ, zmq) looks like an embeddable networking library
but acts like a concurrency framework. It gives you sockets that carry
atomic messages across various transports like in-process,
inter-process, TCP, and multicast. You can connect sockets N-to-N with
patterns like fanout, pub-sub, task distribution, and request-reply.
It's fast enough to be the fabric for clustered products. Its
asynchronous I/O model gives you scalable multicore applications,
built as asynchronous message-processing tasks.
The advantage of using 0MQ (or even vanilla sockets via net library in Node core, minus all the features provided by a 0MQ socket) is that there is no master process. Its broker-less setup is best fit for the scenario you describe. If you are just pushing out messages to various nodes from one central process you can use PUB/SUB socket in 0mq (also supports IP multicast via PGM/EPGM). Apart from that, 0mq also provides for various different socket types (PUSH/PULL/XREP/XREQ/ROUTER/DEALER) with which you can create custom devices.
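As a rough sketch of the PUB/SUB pattern in Node, assuming the current zeromq npm package and its async (v6) API; the port and the "jobs" topic are arbitrary choices for illustration:

// publisher.js
const zmq = require('zeromq');

async function runPublisher() {
  const pub = new zmq.Publisher();
  await pub.bind('tcp://127.0.0.1:5555');
  setInterval(() => {
    // First frame is the topic, second frame is the payload.
    pub.send(['jobs', JSON.stringify({ hello: 'world' })]);
  }, 1000);
}

// subscriber.js
async function runSubscriber() {
  const sub = new zmq.Subscriber();
  sub.connect('tcp://127.0.0.1:5555');
  sub.subscribe('jobs');
  for await (const [topic, msg] of sub) {
    console.log(topic.toString(), JSON.parse(msg.toString()));
  }
}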
Start with this excellent guide:
http://zguide.zeromq.org/page:all
For 0MQ 2.x:
http://github.com/JustinTulloss/zeromq.node
For 0MQ 3.x (A fork of the above module. This supports PUBLISHER side filtering for PUBSUB):
http://github.com/shripadk/zeromq.node
More than 4 years after the question was asked, there is an inter-process communication module called node-ipc. It supports Unix/Windows sockets for communication on the same machine, as well as TCP, TLS and UDP, claiming that at least sockets, TCP and UDP are stable.
Here is a small example taken from the documentation from the github repository:
Server for Unix Sockets, Windows Sockets & TCP Sockets
var ipc = require('node-ipc');

ipc.config.id = 'world';
ipc.config.retry = 1500;

ipc.serve(
  function() {
    ipc.server.on(
      'message',
      function(data, socket) {
        ipc.log('got a message : '.debug, data);
        ipc.server.emit(
          socket,
          'message',
          data + ' world!'
        );
      }
    );
  }
);

ipc.server.start();
Client for Unix Sockets & TCP Sockets
var ipc = require('node-ipc');

ipc.config.id = 'hello';
ipc.config.retry = 1500;

ipc.connectTo(
  'world',
  function() {
    ipc.of.world.on(
      'connect',
      function() {
        ipc.log('## connected to world ##'.rainbow, ipc.config.delay);
        ipc.of.world.emit(
          'message',
          'hello'
        );
      }
    );
    ipc.of.world.on(
      'disconnect',
      function() {
        ipc.log('disconnected from world'.notice);
      }
    );
    ipc.of.world.on(
      'message',
      function(data) {
        ipc.log('got a message from world : '.debug, data);
      }
    );
  }
);
I'm currently evaluating this module as a replacement for an old stdin/stdout-based solution for local IPC (though it could become remote IPC in the future). Maybe I will expand my answer when I'm done to give some more information on how well this module works.
I would start with the built-in functionality that Node provides.
You can use process signalling like:
process.on('SIGINT', function () {
  console.log('Got SIGINT. Press Control-D to exit.');
});
About this signalling, the docs say:
Emitted when the process receives a signal. See sigaction(2) for a list of standard POSIX signal names such as SIGINT, SIGUSR1, etc.
Once you know about process, you can spawn a child process and hook it up to the message event to retrieve and send messages. When using child_process.fork() you can write to the child using child.send(message, [sendHandle]) and messages are received by a 'message' event on the child, as sketched below.
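A minimal sketch of that parent/child message passing (the file name child.js is just an example):

// parent.js
const { fork } = require('child_process');

const child = fork(__dirname + '/child.js');

child.on('message', msg => {
  console.log('parent got:', msg);
});
child.send({ hello: 'child' });

// child.js
process.on('message', msg => {
  console.log('child got:', msg);
  process.send({ hello: 'parent' });
});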
Also - you can use cluster. The cluster module allows you to easily create a network of processes that all share server ports.
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  // Fork workers.
  for (var i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', function(worker, code, signal) {
    console.log('worker ' + worker.process.pid + ' died');
  });
} else {
  // Workers can share any TCP connection
  // In this case it's an HTTP server
  http.createServer(function(req, res) {
    res.writeHead(200);
    res.end("hello world\n");
  }).listen(8000);
}
For 3rd party services you can check:
hook.io, signals and bean.
Take a look at node-messenger:
https://github.com/weixiyen/messenger.js
It will fit most needs easily (pub/sub, fire and forget, send/request) with an automatically maintained connection pool.
We are working on a multi-process Node app which is required to handle a large number of real-time cross-process messages.
We tried Redis pub/sub first, which failed to meet the requirements.
Then we tried TCP sockets, which were better, but still not the best.
So we switched to UDP datagrams, which are much faster.
Here is the code repo, just a few lines of code:
https://github.com/SGF-Games/node-udpcomm
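For reference, the same idea can be sketched with Node's built-in dgram module (the port and the message shape are arbitrary; this is a plain UDP round trip, not the linked library's API):

const dgram = require('dgram');

// Process A: listen for datagrams on a local port.
const receiver = dgram.createSocket('udp4');
receiver.on('message', (msg, rinfo) => {
  console.log(`got ${msg} from ${rinfo.address}:${rinfo.port}`);
});
receiver.bind(41234);

// Process B: fire-and-forget a message to process A.
const sender = dgram.createSocket('udp4');
sender.send(Buffer.from(JSON.stringify({ event: 'tick' })), 41234, '127.0.0.1', err => {
  if (err) console.error(err);
  sender.close();
});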
I needed IPC between web server processes in another language (Perl;) a couple years ago. After investigating IPC via shared memory, and via Unix signals (e.g. SIGINT and signal handlers), and other options, I finally settled on something quite simple which works quite well and is fast. It may not fit the bill if your processes do not all have access to the same file system, however.
The concept is to use the file system as the communication channel. In my world, I have an EVENTS dir, and under it sub dirs to direct the message to the appropriate process: e.g. /EVENTS/1234/player1 and /EVENTS/1234/player2 where 1234 is a particular game with two different players. If a process wants to be aware of all events happening in the game for a particular player, it can listen to /EVENTS/1234/player1 using (in Node.js):
fs.watch
(or fsPromises.watch)
If a process wanted to listen to all events for a particular game, simply watch /EVENTS/1234 with the 'recursive: true' option set for fs.watch. Or watch /EVENTS to see all messages -- the event produced by fs.watch will tell you which file path was modified.
For a more concrete example, in my world I have the web browser client of player1 listening for Server-Sent Events (SSE), and there is a loop running in one particular web server process to send those events. Now, a web server process servicing player2 wants to send a message (IPC) to the server process running the SSEs for player1, but doesn't know which process that might be; it simply writes (or modifies) a file in /EVENTS/1234/player1. That directory is being watched -- via fs.watch -- in the web server process handling SSEs for player1. I find this system very flexible and fast, and it can also be designed to leave a record of all messages sent. I use it so that one random web server process of many can communicate with one other particular web server process, but it could also be used in an N-to-1 or 1-to-N manner. A small sketch of the pattern is below.
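Here is a minimal sketch of that pattern in Node (the /EVENTS/1234/player1 path follows the directory layout described above; the JSON message format is an arbitrary choice for illustration):

const fs = require('fs');
const path = require('path');

const inbox = '/EVENTS/1234/player1';

// Receiver: watch the player's directory and read whichever file changed.
fs.watch(inbox, (eventType, filename) => {
  if (!filename) return;
  const file = path.join(inbox, filename);
  fs.readFile(file, 'utf8', (err, data) => {
    if (!err) console.log('IPC message:', data);
  });
});

// Sender (in another process): drop a file into the directory to deliver a message.
function sendMessage(message) {
  const file = path.join(inbox, `${Date.now()}.json`);
  fs.writeFile(file, JSON.stringify(message), err => {
    if (err) console.error('failed to send', err);
  });
}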
Hope this helps someone. You're basically letting the OS and the file system do the work for you. Here are a couple links on how this works in MacOS and Linux:
https://developer.apple.com/library/archive/documentation/Darwin/Conceptual/FSEvents_ProgGuide/Introduction/Introduction.html#//apple_ref/doc/uid/TP40005289
https://man7.org/linux/man-pages/man7/inotify.7.html
Any module you're using in whatever language is hooking into an API like one of these. It's been 30+ years since I've fiddled much with Windows, so I don't know how file system events work there, but I bet there's an equivalent.
EDIT (more info on different platforms from https://nodejs.org/dist/latest-v19.x/docs/api/fs.html#fswatchfilename-options-listener):
Caveats
The fs.watch API is not 100% consistent across platforms, and is unavailable in some situations.
On Windows, no events will be emitted if the watched directory is moved or renamed. An EPERM error is reported when the watched directory is deleted.
Availability
This feature depends on the underlying operating system providing a way to be notified of file system changes.
On Linux systems, this uses inotify(7).
On BSD systems, this uses kqueue(2).
On macOS, this uses kqueue(2) for files and FSEvents for directories.
On SunOS systems (including Solaris and SmartOS), this uses event ports.
On Windows systems, this feature depends on ReadDirectoryChangesW.
On AIX systems, this feature depends on AHAFS, which must be enabled.
On IBM i systems, this feature is not supported.
If the underlying functionality is not available for some reason, then fs.watch() will not be able to function and may throw an exception. For example, watching files or directories can be unreliable, and in some cases impossible, on network file systems (NFS, SMB, etc) or host file systems when using virtualization software such as Vagrant or Docker.
It is still possible to use fs.watchFile(), which uses stat polling, but this method is slower and less reliable.
EDIT2: https://www.npmjs.com/package/node-watch is a wrapper that may help on some platforms
Not everybody knows that pm2 has an API thanks to which you can communicate with its processes.
// pm2-call.js:
import pm2 from "pm2";

pm2.connect(() => {
  pm2.sendDataToProcessId(
    {
      type: "process:msg",
      data: {
        some: "data",
        hello: true,
      },
      id: 0,
      topic: "some topic",
    },
    (err, res) => {}
  );
});

pm2.launchBus((err, bus) => {
  bus.on("process:msg", (packet) => {
    packet.data.success.should.eql(true);
    packet.process.pm_id.should.eql(proc1.pm2_env.pm_id);
    done();
  });
});
// pm2-app.js:
process.on("message", (packet) => {
  process.send({
    type: "process:msg",
    data: {
      success: true,
    },
  });
});
