I want to create a RabbitMQ CLI running like foreverjs with Node. It should spawn a child_process, keep it running in the background, and be able to communicate with the child_process at any time. The problem I am facing is that when the main CLI program exits, the child_process seems to stop running as well. I tried to fork with detached: true and .unref(), but it doesn't work. How do I keep a child process running in the background even after the parent process has exited?
cli.js - parent
const { fork, spawn } = require('child_process');

const options = {
  stdio: ['pipe', 'pipe', 'pipe', 'ipc'],
  silent: true,
  detached: true
};
const child = fork('./rabbit.js', [], options);
child.on('message', message => {
  console.log('message from child:', message);
  child.send('Hi');
  // exit parent
  process.exit(0);
});
child.unref();
rabbit.js - child
// if it is up and running, 'i' should keep incrementing
var i = 0;
i++;
if (process.send) {
  process.send("Hello" + i);
}
process.on('message', message => {
  console.log('message from parent:', message);
});
I think fork doesn't have a detached option. Refer to the Node docs for fork.
If you use spawn, the child keeps running even if the parent exits. I have modified your code a bit to use spawn.
cli.js
const { spawn } = require('child_process');

const options = {
  silent: true,
  detached: true,
  stdio: [null, null, null, 'ipc']
};
const child = spawn('node', ['rabbit.js'], options);
child.on('message', (data) => {
  console.log(data);
  child.unref();
  process.exit(0);
});
rabbit.js
var i = 0;
i++;
process.send(i);

// this can be an http server or a connection to a rabbitmq queue; using setInterval for simplicity
setInterval(() => {
  console.log('yash');
}, 1000);
I think when you use fork, an IPC channel is established between the parent and the child process. You could try disconnecting the IPC channel gracefully before exiting the parent process. I'll try it out and update the answer if it works.
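For reference, a minimal sketch of that idea against the cli.js above (untested, in keeping with the caveat):

// cli.js - sketch: disconnect the IPC channel gracefully before exiting
const { fork } = require('child_process');

const child = fork('./rabbit.js', [], { detached: true, silent: true });
child.on('message', message => {
  console.log('message from child:', message);
  child.disconnect(); // close the IPC channel gracefully
  child.unref();      // drop the parent's reference to the child handle
  process.exit(0);
});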
Update:
I have updated cli.js and rabbit.js above to get it working as asked. The trick is to use the 'ipc' file descriptor in the stdio option. That way the child can communicate with the parent. The first three fds fall back to their default values when set to null. For more info, refer to the stdio options docs.
An old question, but for those picking up where I am today: fork does have a detached option. However, it also opens an IPC channel, which has to be explicitly closed with disconnect() if you want to break the relationship between the parent and the child.
In my case it was advantageous to use the channel until I had confirmation that the child process was ready to do its job, and then disconnect it:
const cp = require('child_process');

// Run in background
const handle = cp.fork('./service/app.js', {
  detached: true,
  stdio: 'ignore'
});

// Whenever you are ready to stop receiving IPC messages
// from the child
handle.unref();
handle.disconnect();
This allows my parent process to exit without killing the background process or being kept alive by a reference to it.
If you do establish any handle.on(...) handlers, it's a good idea to remove them with handle.off(...) as well when you are through with them. I used a handle.on('message', (data) => { ... }) handler to allow the child to tell the parent when it was ready for its duties after doing some async startup work, as sketched below.
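A minimal sketch of that handshake (the 'ready' message name is an illustrative convention of mine, not from the original answer):

const cp = require('child_process');

const handle = cp.fork('./service/app.js', {
  detached: true,
  stdio: ['ignore', 'ignore', 'ignore', 'ipc']
});

const onMessage = (data) => {
  if (data === 'ready') {             // 'ready' is just a made-up convention for this sketch
    handle.off('message', onMessage); // remove the handler once it has served its purpose
    handle.unref();
    handle.disconnect();              // close the IPC channel so the parent can exit
  }
};
handle.on('message', onMessage);

// In ./service/app.js, after the async startup work completes:
// if (process.send) process.send('ready');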
Both fork and spawn have the detached option.
However, when the parent process exits, the child may still try to write to the inherited standard output (via process.stdout.write, console.log, etc.).
If that standard output is no longer available (because the parent died), exceptions such as a broken pipe can be raised in the child process, and these exceptions may cause the child to fail unexpectedly.
If we instead point the child at an output that is always available (files, for example), it will no longer fail, as it can still write to a valid destination.
/**
 * This code, apart from the comments, is available on the Node website
 */
// We use fork, but spawn should also work
const { fork } = require('child_process');
const fs = require('fs');

let out = fs.openSync("/path/to/outfile", "a");
let err = fs.openSync("/path/to/errfile", "a");

// jsScriptPath is a placeholder for the path to your child script
const child = fork(jsScriptPath, ["--some", "arg"], {
  detached: true,
  stdio: ["pipe", out, err, "ipc"], // => ask the child to redirect its standard output and error messages to files
  // silent is overridden by stdio
});

// setTimeout here is only for illustration; you will want to use more appropriate logic
setTimeout(() => {
  child.unref();
  process.exit(0);
}, 1000);
Issue description
I have a child process spawned by NodeJS whose output stream (stdout) needs to be connected to a second NodeJS child process's input stream (stdin).
However, from time to time, the first process gets killed, in which case I want to restart that process and rewire its output stream to the same second process input, without having to restart the second process.
First try
I first tried to connect the stdout and stdin, which works fine until a kill signal is received by the first process:
const cp = require('child_process');

const firstProc = cp.spawn('/some/proc/path', [/* args */]);
const secondProc = cp.spawn('/ffmpeg/path', [/* args */]);

firstProc.stdout.pipe(secondProc.stdin);
But as soon as the first process receives a kill signal, it gets propagated to the second process, which terminates as well.
On the main NodeJS process I'm able to intercept a SIGINT signal, for example, but this does not seem to be available for child processes:
process.on('SIGINT', () => {
  /* do something upon SIGINT kill signal */
});
Question summary
So my question is: is it possible to intercept the kill signal on a child process before it gets transmitted to the second process, 'detach' the stream connection, start a new process, and pipe its output to the input stream of the second process?
Additional Notes
I've tried adding a duplex transform stream between the stdout and the stdin, but that doesn't seem to resolve my problem, as it also closes when its input gets closed.
I thought about creating some kind of socket connection between the two processes, but I've never done something like that and I'm a bit afraid of the added complexity.
If there is an easier way to handle my scenario, I'd be glad to know! Thanks for any ideas!
See https://nodejs.org/api/stream.html#readablepipedestination-options:

By default, stream.end() is called on the destination Writable stream when the source Readable stream emits 'end', so that the destination is no longer writable. To disable this default behavior, the end option can be passed as false, causing the destination stream to remain open.
So you're looking for something like:

const cp = require('child_process');

const secondProc = cp.spawn('/ffmpeg/path', [/* args */]);

function writeForever() {
  const firstProc = cp.spawn('/some/proc/path', [/* args */]);
  firstProc.stdout.pipe(secondProc.stdin, { end: false });
  firstProc.stdout.on('end', writeForever); // just spawn a new firstProc and continue…
}

writeForever();
Since I use the Angular framework, I am quite accustomed to RxJS, which makes this kind of streaming task very easy.
If you are manipulating a lot of streams, I would suggest using RXJS with RXJS-stream.
The resulting code would look like this:
import { concat, of } from 'rxjs';
import { rxToStream, streamToRx } from 'rxjs-stream';
import * as cp from 'child_process';

// Note: streamToRx expects a readable stream, so we pass each child's stdout,
// and concat takes the observables as separate arguments rather than an array.
const concatedStreams$ = concat(
  streamToRx(cp.spawn('/some/proc/path', [/* args */]).stdout),
  // of('End of first, start of second'), // Optional
  streamToRx(cp.spawn('/ffmpeg/path', [/* args */]).stdout),
);

rxToStream(concatedStreams$).pipe(process.stdout);
I am checking whether my application has an update by pinging a certain URL; pinging this URL returns whether I need an update or not.
Now, I have a PowerShell file which actually handles the update, so I'm trying to launch this PowerShell file from inside my application.
I have this working: I can spawn my updater file and it will run through, and everything is good. However, my application stays open the whole time, which means that once the updater is finished I will have two instances of it running.
The obvious solution to this in my mind is to close the application if an update is found (after spawning the updater).
Here is my code:
child = spawn("powershell.exe",['-ExecutionPolicy', 'ByPass', '-File', require("path").resolve(__dirname, '../../../../updater.ps1')]);
child.unref();
self.close();
However, when I try to make the application close, it seems like the updater is never launched. Or rather, I believe it is launched but gets closed when the main application closes.
I have the line child.unref(), which I thought was supposed to detach the spawned window from the main application, but the updater won't stay open.
I have also tried adding {detached: true} as the 3rd parameter of my spawn() command, but it didn't make a difference in the way it was running.
How can I spawn the updater completely separate from my application?
To start the updater separated from your application, I think you should use a script instead of an inline parameter. This will ensure that the OS creates a process separate from your Node app. For example:
var fs = require('fs');
var spawn = require('child_process').spawn;

var out = fs.openSync('./out.log', 'a');
var err = fs.openSync('./out.log', 'a');

var child = spawn('./myscript.sh', [], {
  detached: true,
  stdio: ['ignore', out, err]
});
child.unref();

setTimeout(function () {
  process.exit();
}, 1000);
The myscript.sh looks like this:
#!/bin/bash
sleep 5; ls >> out2.log
The code above will force node to exit (after 1 second), but just before that it starts a bash script (which waits 5 seconds before running the ls command). Running this code results in two output files (out.log and out2.log). The first one (out.log) is the output of the node app calling the child process, while the second (out2.log) is written by the detached script itself.
A better and more elegant approach is to use the on function. But this means that your main process will actually wait for the child process to complete its execution. For example:
var fs = require('fs');
var spawn = require('child_process').spawn;

var out = fs.openSync('./out.log', 'a');
var err = fs.openSync('./out.log', 'a');

var child = spawn('ls', [], {
  detached: true,
  stdio: ['ignore', out, err]
});
child.on('exit', (code) => {
  console.log(`Child exited with code ${code}`);
});
child.unref();
In the second example, the ls result will be saved in the out.log file, since the main process waits for the child to complete.
So it all depends on what you want to achieve. The first solution is not beautiful, but it starts something truly separate from your Node app.
Is it possible to use setTimeout in NodeJS to terminate a process even if the event loop is being occupied by something else?
For example, say I have code that looks like the following
setTimeout(async () => {
  // Some code I run to gracefully exit my process.
}, timeout);

while (true) {
  let r = 1;
}
The callback in my timeout will never be hit since the while loop will occupy the event loop. Is there some way that I can say: "Execute the following code after N seconds regardless of everything else?"
I'm writing Selenium tests, but for some reason every once in a while a test will get "stuck" and never terminate. I basically want to time my tests out after a certain amount of time so we don't end up with tests that run forever.
Thanks!
Since JavaScript is single threaded, what you want to do is create a worker using fork, which gives you something like multithreading. This actually just gives us two instances of node, each with its own event loop. The fork will run your endless loop, which you can then kill with your timeout.
main.js
const cp = require('child_process');
const path = require('path');

// Create the child
const child = cp.fork(path.join(__dirname, './worker.js'), []);

// Kill after "x" milliseconds
setTimeout(() => {
  process.exit();
}, 5000);

// Listen for messages from the child
child.on('message', data => console.log(data));
Next you will set up your worker:
worker.js
let i = 0;
while (true) {
  // Send the value of "i" to the parent
  process.send(i++);
}
The child can communicate info about itself to the parent using process.send(data).
The parent can listen for messages from the child using child.on('message', ...).
Another thing we can do is kill the child instead of the main process, if you need the main process to keep doing other work. In that case you would call child.kill() inside the setTimeout instead:
const cp = require('child_process');
const path = require('path');

// Create the child
let child = cp.fork(path.join(__dirname, './worker.js'), []);

// Kill after "x" milliseconds
setTimeout(() => {
  child.kill();
}, 5000);
If there are no more events in the event loop, the main process will automatically close itself, so we don't need to call process.exit().
How should I implement a PHP exec-like call to a system function with HapiJS? The user submits a processing job that needs to run in the background for some time.
I somehow need to return a job id / session id to the user, run the job asynchronously, allow the user to check back for completion, and reroute them when it is completed.
I bet there are existing solutions for this, but I'd highly welcome a pointer in the right direction.
Check out Node's child_process documentation.
To do what you are describing, I would spawn a process without a callback and then use a little trick: trying to send a signal to a process that isn't running throws an error, and signal 0 lets you test for the process's existence without actually affecting it.
const exec = require('child_process').exec;

// Launch the process
const child = exec('ls');
const pid = child.pid;

// Later, in another scope, when you want to check whether it is still running
try {
  process.kill(pid, 0);
} catch (e) {
  console.log("it's finished");
}
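To wire this into HapiJS, a rough sketch (assuming hapi v17+ with the @hapi/hapi package; the route paths and the sleep command are stand-ins of my own, not from the original answer):

const Hapi = require('@hapi/hapi');
const { exec } = require('child_process');

const init = async () => {
  const server = Hapi.server({ port: 3000 });

  // Submit a job; respond immediately with an id the client can poll.
  // Here the child's pid doubles as the job id, which is only safe for a sketch.
  server.route({
    method: 'POST',
    path: '/jobs',
    handler: () => {
      const child = exec('sleep 30'); // stand-in for the real processing job
      return { jobId: child.pid };
    }
  });

  // Poll for completion using the kill(pid, 0) trick from above
  server.route({
    method: 'GET',
    path: '/jobs/{id}',
    handler: (request) => {
      try {
        process.kill(Number(request.params.id), 0); // signal 0: existence check only
        return { done: false };
      } catch (e) {
        return { done: true };
      }
    }
  });

  await server.start();
};

init();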
I am learning nodejs and tried some examples about sending signals to a child process, such as the following code. Supposedly the "SIGINT" handler in the child should respond, but I did not get any output.
// parent.js
var spawn = require('child_process').spawn;

var child = spawn('node', ['child.js']);
child.stdout.on('data', function (data) {
  console.log('data from child: ' + data);
});

child.kill('SIGINT');
// child.js
console.log('child calling');

process.on('SIGINT', function () {
  console.log('Received SIGINT signal');
});
When I run
node parent.js
why is there no output, not even the "child calling" line from child.js?
I hope someone can help me. Thanks.
In addition, I am not clear on when child.js is actually executed. Is it at the time this statement runs?
var child = spawn('node', ['child.js']);
The first problem is that you spawn a child process and then immediately send it SIGINT. Since the child process (probably) hasn't run at all yet, it hasn't registered for that signal and is therefore killed by it. The easiest way to get this working is to put the kill in a setTimeout:
setTimeout(function () {
  child.kill('SIGINT');
}, 100);
Now you'll see
data from child: child calling
and then it will exit. But you're expecting the SIGINT message. The problem this time is that the child process logs, registers for the signal, and then is done. node.js will exit if it finishes and isn't waiting on anything else. Things it will wait on include open sockets, setTimeouts, etc. You need the child to be doing something, though. As a simple test, you can just add this to child.js:
setInterval(function () {
  console.log('hey!');
}, 1000);
Or something like that. Then you'll see the output you're expecting; a combined sketch follows below.
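Putting both fixes together, a sketch of the working pair (my own combination of the fragments above; the process.exit(0) in the handler is an addition, since a custom SIGINT handler replaces the default exit behavior and would otherwise keep the child alive):

// parent.js
var spawn = require('child_process').spawn;

var child = spawn('node', ['child.js']);
child.stdout.on('data', function (data) {
  console.log('data from child: ' + data);
});

// Give the child time to start and register its signal handler
setTimeout(function () {
  child.kill('SIGINT');
}, 100);

// child.js
console.log('child calling');

// Keep the event loop busy so the process stays alive
setInterval(function () {
  console.log('hey!');
}, 1000);

process.on('SIGINT', function () {
  console.log('Received SIGINT signal');
  process.exit(0); // my addition: exit after handling, or the child keeps running
});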