I check whether my application has an update by pinging a certain URL; the response tells me whether I need an update or not.
Now, I have a PowerShell file which actually handles the update, so I'm trying to launch this PowerShell file from inside my application.
I have this working: I can spawn my updater file and it will run through and everything is good. However, my application stays open the whole time, which means that once the updater is finished I will have two instances of it running.
The obvious solution to this in my mind is to close the application if an update is found (after spawning the updater).
Here is my code:
const { spawn } = require("child_process");

child = spawn("powershell.exe", ['-ExecutionPolicy', 'ByPass', '-File', require("path").resolve(__dirname, '../../../../updater.ps1')]);
child.unref();
self.close();
However, when I try to make the application close, it seems like the updater is never launched. Or rather, I believe it is launched but gets closed when the main application gets closed.
I have the line child.unref(), which I thought was supposed to detach the spawned process from the main application, but the updater won't stay open.
I have also tried adding {detached: true} as the third argument to my spawn() call, but it didn't make a difference in the way it was running.
How can I spawn the updater completely separate from my application?
To start the update separately from your application, I think you should use a script instead of an inline command. This will ensure that the OS creates a process that is truly separate from your Node app. For example:
var fs = require('fs');
var spawn = require('child_process').spawn;
var out = fs.openSync('./out.log', 'a');
var err = fs.openSync('./out.log', 'a');
var child = spawn('./myscript.sh', [], {
  detached: true,
  stdio: ['ignore', out, err]
});
child.unref();
setTimeout(function () {
  process.exit();
}, 1000);
The myscript.sh looks like this:
#!/bin/bash
sleep 5; ls >> out2.log
The code above forces Node to exit after 1 second, but just before that it starts a bash script (which waits 5 seconds before running the ls command). Running this code results in two output files (out.log and out2.log). The first one (out.log) is the output of the Node app's child process, while the second (out2.log) is written by the detached script on its own, after the Node process has already exited.
A more elegant approach is to use the on function, but this means that your main process will actually wait for the child process to complete its execution. For example:
var fs = require('fs');
var spawn = require('child_process').spawn;
var out = fs.openSync('./out.log', 'a');
var err = fs.openSync('./out.log', 'a');
var child = spawn('ls', [], {
  detached: true,
  stdio: ['ignore', out, err]
});
child.on('exit', (code) => {
  console.log(`Child exited with code ${code}`);
});
child.unref();
In the second example, the ls output is saved in the out.log file, since the main process waits for the child to complete.
So it all depends on what you want to achieve. The first solution is not pretty, but it starts something truly separate from your Node app.
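For the asker's specific PowerShell case, the same ingredients (detached: true, stdio redirected away from the parent's pipes, and unref()) would look roughly like the sketch below; the log file name and the final exit call are assumptions, not part of the original question:

const { spawn } = require('child_process');
const path = require('path');
const fs = require('fs');

// Redirect the updater's output to a file so it never depends on the parent's stdio.
const out = fs.openSync('./updater.log', 'a');
const err = fs.openSync('./updater.log', 'a');

const child = spawn(
  'powershell.exe',
  ['-ExecutionPolicy', 'ByPass', '-File', path.resolve(__dirname, '../../../../updater.ps1')],
  { detached: true, stdio: ['ignore', out, err] }
);

child.unref();
process.exit(0); // or app.quit() / window.close() in an Electron app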
Issue description
I have a child process spawned by Node.js whose output stream (stdout) needs to be connected to a second Node.js child process's input stream (stdin).
However, from time to time, the first process gets killed, in which case I want to restart that process and rewire its output stream to the same second process input, without having to restart the second process.
First try
I first tried to connect the stdout and stdin, which works fine until a kill signal is received by the first process:
const cp = require('child_process');

const firstProc = cp.spawn('/some/proc/path', [/* args */])
const secondProc = cp.spawn('/ffmpeg/path', [/* args */])
firstProc.stdout.pipe(secondProc.stdin)
But as soon as the first process receives a kill signal, it gets propagated to the second process which terminates as well.
On the main NodeJS process, I'm able to intercept a SIGINT signal for example, but this does not seem to be available for child processes:
process.on('SIGINT', () => {
/* do something upon SIGINT kill signal */
})
Question summary
So my question is: is it possible to intercept the kill signal on a child process before it gets transmitted to the second process, 'detach' the stream connection, start a new process and pipe its output to the input stream of the second process?
Additional Notes
I've tried to add a duplex transform stream between the stdout and stdin but that doesn't seem to resolve my problem as it closes as well when its input gets closed.
I thought about creating some kind of socket connection between the two processes but I've never done something like that and I'm a bit afraid of the added complexity.
If there is an easier way to handle my scenario, I'd be glad to know! Thanks for any idea!
See https://nodejs.org/api/stream.html#readablepipedestination-options:
By default, stream.end() is called on the destination Writable stream when the source Readable stream emits 'end', so that the destination is no longer writable. To disable this default behavior, the end option can be passed as false, causing the destination stream to remain open.
So you're looking for something like
const secondProc = cp.spawn('/ffmpeg/path', [/* args */]);

function writeForever() {
  const firstProc = cp.spawn('/some/proc/path', [/* args */]);
  firstProc.stdout.pipe(secondProc.stdin, { end: false });
  firstProc.stdout.on('end', writeForever); // just spawn a new firstProc and continue…
}

writeForever();
Since I use the Angular framework, I am quite accustomed to RxJS, which makes this kind of streaming task very easy.
If you are manipulating a lot of streams, I would suggest using RxJS with rxjs-stream.
The resulting code would look like this:
import * as cp from 'child_process';
import { concat, of } from 'rxjs';
import { rxToStream, streamToRx } from 'rxjs-stream';

const concatedStreams$ = concat(
  streamToRx(cp.spawn('/some/proc/path', [/* args */]).stdout),
  //of('End of first, start of second'), // Optional
  streamToRx(cp.spawn('/ffmpeg/path', [/* args */]).stdout)
);

rxToStream(concatedStreams$).pipe(process.stdout);
I want to create a RabbitMQ CLI that runs like foreverjs with Node. It should be able to spawn a child_process, keep it running in the background, and communicate with it at any time. The problem I am facing is that when the main CLI program exits, the child_process seems to stop running as well. I tried fork with detached: true and .unref(), but it doesn't work. How do I keep a child process running in the background even after the parent process has exited?
cli.js - parent
const { fork, spawn } = require('child_process');
const options = {
  stdio: ['pipe', 'pipe', 'pipe', 'ipc'],
  silent: true,
  detached: true
};
child = fork('./rabbit.js', [], options)
child.on('message', message => {
  console.log('message from child:', message);
  child.send('Hi');
  // exit parent
  process.exit(0);
});
child.unref()
rabbit.js - child
// if it is up and running, 'i' should keep incrementing
var i=0;
i++;
if (process.send) {
process.send("Hello"+i);
}
process.on('message', message => {
console.log('message from parent:', message);
});
I think fork doesn't have a detached option; refer to the Node docs for fork.
If you use spawn, the child keeps running even if the parent exits. I have modified your code a bit to use spawn.
cli.js
const { fork, spawn } = require('child_process');
const options = {
  silent: true,
  detached: true,
  stdio: [null, null, null, 'ipc']
};
child = spawn('node', ['rabbit.js'], options);
child.on('message', (data) => {
  console.log(data);
  child.unref();
  process.exit(0);
});
rabbit.js
var i=0;
i++;
process.send(i);
// this can be a http server or a connection to rabbitmq queue. Using setInterval for simplicity
setInterval(() => {
console.log('yash');
}, 1000);
I think when you use fork, an IPC channel is established between the parent and the child process. You could try disconnecting the IPC channel gracefully before exiting the parent process. I'll try it out and update the answer if it works.
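As a minimal sketch of that idea against the fork-based cli.js above (untested, in keeping with the note above):

child.on('message', message => {
  console.log('message from child:', message);
  child.send('Hi');
  // Close the IPC channel and drop the handle, so the parent can exit
  // without taking the child down with it.
  child.disconnect();
  child.unref();
  process.exit(0);
});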
Update:
I have updated cli.js and rabbit.js to get it working as asked. The trick is to use the 'ipc' file descriptor in the stdio option. That way the child can communicate with the parent. The first three fds fall back to their default values if marked as null. For more info, refer to the stdio options docs.
An old question, but for those picking up where I am today: fork does have a detached option. However, it also opens an IPC channel, which has to be explicitly closed with disconnect() if you want to break the relationship between the parent and the child.
In my case it was advantageous to use the channel until I had confirmation that the child process was ready to do its job, and then disconnect it:
// Run in background
const handle = cp.fork('./service/app.js', {
detached: true,
stdio: 'ignore'
});
// Whenever you are ready to stop receiving IPC messages
// from the child
handle.unref();
handle.disconnect();
This allows my parent process to exit without killing the background process or being kept alive by a reference to it.
If you do establish any handle.on(...) handlers, it's a good idea to disconnect them with handle.off(...) as well when you are through with them. I used a handle.on('message', (data) => { ... }) handler to allow the child to tell the parent when it was ready for its duties after doing some async startup work.
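A rough sketch of that pattern; the 'ready' message value is an assumption, and the service path is reused from the snippet above:

const cp = require('child_process');

const handle = cp.fork('./service/app.js', {
  detached: true,
  stdio: 'ignore'
});

// Wait until the child reports that its async startup work is done.
const onReady = (data) => {
  if (data === 'ready') {
    handle.off('message', onReady); // drop the handler once it has done its job
    handle.unref();
    handle.disconnect();            // sever the IPC channel so the parent can exit
  }
};
handle.on('message', onReady);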
Both fork and spawn have the detached option.
However, when the parent process exits, the child may still want to write to the original standard output (via process.stdout.write, console.log, etc.). That standard output may no longer be available (since the parent died), raising exceptions (for instance, a broken pipe) in the child process, which may cause the child to fail unexpectedly as well.
If we instead let the child write to an output that is always available (files, for example), it will no longer fail, since it can still write its information to a valid target.
/**
* This code apart from the comments is available on the Node website
*/
// We use fork, but spawn should also work
const {fork} = require('child_process');
const fs = require('fs');

let out = fs.openSync("/path/to/outfile", "a");
let err = fs.openSync("/path/to/errfile", "a");
const child = fork(jsScriptPath, ["--some", "arg"], {
  detached: true,
  stdio: ["pipe", out, err, "ipc"], // => ask the child to redirect its standard output and error messages to the files
  // silent is overridden by stdio
});
// setTimeout here is only for illustration; you will want to use something more robust
setTimeout(() => {
  child.unref();
  process.exit(0);
}, 1000);
How should I implement a PHP-exec-like call to a system function with HapiJS? The user submits a processing job that needs to run in the background for some time.
I somehow need to return a job id / session id to the user, run the job asynchronously, allow the user to check back for completion and reroute when completed...
I bet there are existing solutions for that, yet I'd highly welcome a pointer in the right direction.
Check out node's child process documentation: here
To do what you are describing, I would spawn a process without a callback and then use a little trick: trying to kill a process that isn't running causes an error (see here).
const exec = require('child_process').exec;
//Launch the process
const child = exec('ls');
const pid = child.pid;
//later in another scope when you are looking to see if it is running
try {
  process.kill(pid, 0); // signal 0 doesn't kill the process, it only checks that it exists
}
catch (e) {
  console.log("it's finished");
}
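If it helps as a starting point, here is a rough sketch of how that trick could be wired into hapi routes; the route paths, the in-memory job map, and the placeholder command are all assumptions, and it targets the @hapi/hapi v17+ handler style:

const Hapi = require('@hapi/hapi');
const { exec } = require('child_process');

const jobs = new Map(); // jobId -> pid

const init = async () => {
  const server = Hapi.server({ port: 3000 });

  // Submit a processing job and respond immediately with a job id.
  server.route({
    method: 'POST',
    path: '/jobs',
    handler: () => {
      const child = exec('long-running-command'); // replace with the real job
      const jobId = String(child.pid);
      jobs.set(jobId, child.pid);
      return { jobId };
    }
  });

  // Poll for completion using the process.kill(pid, 0) trick from above.
  server.route({
    method: 'GET',
    path: '/jobs/{id}',
    handler: (request) => {
      const pid = jobs.get(request.params.id);
      if (pid === undefined) {
        return { status: 'unknown' };
      }
      try {
        process.kill(pid, 0);
        return { status: 'running' };
      } catch (e) {
        return { status: 'finished' };
      }
    }
  });

  await server.start();
};

init();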
So in Node I can execute a JavaScript file using a command like:
$ node src/someFile.js
But is there a way to execute all of the JavaScript files in a given directory synchronously (one file executes, then after it has finished the next one executes, etc)? Basically, is there a single command that would have the effect of something like
$ node src/firstFile.js
$ node src/secondFile.js
$ node src/thirdFile.js
...
I've tried commands like
$ node src/*.js
but with no success.
If there exists no such command, what's the best way to go about doing something like this?
I am not sure if this is going to work for you, because this is a feature of the shell, not of the Node runtime, but...
for f in src/*.js; do node "$f"; done
Or in PowerShell:
Get-ChildItem .\*.js | Foreach-Object {
node $_
}
You could use spawn to run a Node process from Node, like:
(function() {
  var cp = require('child_process');
  var child = cp.spawn('node', ['src/firstFile.js']);
At this point you have to add some listeners:
  // Listen for an exit event:
  child.on('exit', function(exitCode) {
    console.log("Child exited with code: " + exitCode);
  });
  // Listen for stdout data
  child.stdout.on('data', function(data) {
    console.log(data.toString());
  });

  // child error
  child.stderr.on('data', function(data) {
    console.log('err data: ' + data);
    // on error, kill this child
    child.kill();
  });
}).call(this);
Of course you need to serialize execution here, but that's easy since child.on('exit') tells you that a process has ended, so you can start the next one.
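A minimal sketch of that serialization, assuming the scripts live in a src directory:

var cp = require('child_process');
var fs = require('fs');
var path = require('path');

var files = fs.readdirSync('src').filter(function (f) {
  return f.endsWith('.js');
});

function runNext(index) {
  if (index >= files.length) return;
  var child = cp.spawn('node', [path.join('src', files[index])], { stdio: 'inherit' });
  // Start the next file only after the current one has exited.
  child.on('exit', function (code) {
    console.log(files[index] + ' exited with code ' + code);
    runNext(index + 1);
  });
}

runNext(0);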
See Controlling Multiple Processes in Node for my working example that runs multiple processes in Node and waits for them to finish (join).
Using a POSIX shell:
$ for js in src/*.js; do node "$js"; done
If calling each one from the shell isn't a hard requirement, I would kick them all off with a single Node process from the shell (see the sketch after this list). This Node script would:
traverse the directory of modules
require the first one, which executes it, and pass a callback which the module will call on completion
When the complete callback is called, execute the next script in your directory.
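A minimal sketch of that idea, assuming each module in src exports a function that takes a done callback (the export contract is an assumption, not something Node enforces):

const fs = require('fs');
const path = require('path');

const dir = path.join(__dirname, 'src');
const files = fs.readdirSync(dir).filter((f) => f.endsWith('.js'));

function runNext(index) {
  if (index >= files.length) return;
  // Each module is assumed to export a function(done) and call done() when it has finished.
  const task = require(path.join(dir, files[index]));
  task(() => runNext(index + 1));
}

runNext(0);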
I'm automating running the ECMA-402 test suite against the Intl polyfill I wrote, and I've hit some problems. Currently, the tests are run against a fully-built version of the library, which means having to recompile every time a change is made before the tests can run. I'm trying to improve it by splitting the code up into separate modules and using require to run the tests.
The main problem comes into focus when I try to run the tests using the vm module. If I add the polyfill to the test's sandbox, some of the tests fail when checking native behaviour: the polyfill's objects don't inherit from the test context's Object.prototype, for example. Passing require to the tests will not work because the modules are still compiled and executed in the parent's context.
The easiest solution in my head was to spawn a new node process and write the code to the process's stdin, but the spawned node process doesn't execute the code written to it and just waits around forever. This is the code I tried:
function runTest(testPath, cb) {
var test,
err = '',
content = 'var IntlPolyfill = require("' + LIB_PATH + '");\n';
content += LIBS.fs.readFileSync(LIBS.path.resolve(TEST_DIR, testPath)).toString();
content += 'runner();';
test = LIBS.spawn(process.execPath, process.execArgv);
test.stdin.write(content, 'utf8');
// cb runs the next test
test.on('exit', cb);
}
Does anyone have any idea why Node.js doesn't execute the code written to its stdin stream, or if there's another way I can get the module to compile in the same context as the tests?
You must close stdin for the child process to consume the data and exit. Do this when you are done passing the code:
test.stdin.end();
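Applied to the runTest function above, that means ending stdin right after writing the code:

test = LIBS.spawn(process.execPath, process.execArgv);
test.stdin.write(content, 'utf8');
test.stdin.end(); // signal EOF so node starts executing the piped-in code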
In the end, I chose to use the -e command line switch to pass the code directly to the new node instance. It only took a slight modification to the code:
function runTest(testPath, cb) {
var test,
err = '',
content = 'var IntlPolyfill = require("' + LIB_PATH + '");\n';
content += LIBS.fs.readFileSync(LIBS.path.resolve(TEST_DIR, testPath)).toString();
content += 'runner();';
test = LIBS.spawn(process.execPath, process.execArgv.concat('-e', content));
// cb runs the next test
test.on('exit', cb);
}