Nodejs subprocess not waiting on finish while logging output - javascript

Well, I'm trying to get a Node.js process to launch a Python script. The Python script logs while it is busy, and I want to display that output in the console window used by the Node.js process as it is produced.
The Python script is really trivial:
from time import sleep

if __name__ == '__main__':
    print('small text testing')
    sleep(10)
    raise Exception('test')
It prints 'small text testing', sleeps for 10 seconds, and then raises an exception which is uncaught and therefore ends the script.
In Node I tried to get this to work with:
const { exec } = require('child_process');

const exec_str = '. BackgroundServer/BackgroundServer/bin/activate && python BackgroundServer/main.py 1';

const child = exec(exec_str, {
    // detachment and ignored stdin are the key here:
    detached: true,
    stdio: [ 'ignore', 1, 2 ]
});
child.unref();

child.stdout.on('data', function(data) {
    console.log(data.toString());
});
child.stderr.on('data', function(data) {
    console.error(data.toString());
});
However, this "fails" in the sense that it only prints after the Python process has finished running.
Now I know it is possible to run a script through spawn, but that would require me to create a temporary script, give that script execute permissions and then run it. Not optimal either.
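(As an aside: spawn can also be handed a whole shell command via its shell option, so a temporary script should not strictly be needed; a rough, untested sketch reusing the exec_str from above:)
const { spawn } = require('child_process');

// Same command string as above, but run through spawn with the shell option,
// so no temporary script file is required.
const child = spawn(exec_str, { shell: true });

child.stdout.on('data', data => console.log(data.toString()));
child.stderr.on('data', data => console.error(data.toString()));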

Not knowing much about JavaScript or Node.js, I am pretty sure your problem is that Python buffers its output when it is run as a subprocess.
To fix this, you can either manually ensure that Python flushes the buffer by adding calls to sys.stdout.flush(), as in
import sys
from time import sleep

if __name__ == '__main__':
    print('small text testing')
    sys.stdout.flush()
    sleep(10)
    raise Exception('test')
or you can force Python not to buffer its output when used as a subprocess by calling the interpreter with the -u argument, thus modifying exec_str to
const exec_str = '. BackgroundServer/BackgroundServer/bin/activate && \
python -u BackgroundServer/main.py 1';
The first solution always flushes the output, which may be what you want if you also use the script elsewhere, without having to think about the -u option. However, I would still recommend the second approach: it keeps the option of running buffered (which can sometimes be what you want), and with longer scripts you might otherwise have to insert quite a number of manual sys.stdout.flush() calls.
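For completeness, the same effect as -u can also be achieved by setting the PYTHONUNBUFFERED environment variable for the child process, leaving the command string untouched. A minimal sketch of that variant, reusing the exec_str from the question:
const { exec } = require('child_process');

const exec_str = '. BackgroundServer/BackgroundServer/bin/activate && python BackgroundServer/main.py 1';

// PYTHONUNBUFFERED=1 disables Python's output buffering, just like -u.
const child = exec(exec_str, {
    env: { ...process.env, PYTHONUNBUFFERED: '1' }
});

child.stdout.on('data', data => console.log(data.toString()));
child.stderr.on('data', data => console.error(data.toString()));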
Also, as a side note, there is no need to raise an exception in the Python script; it will end anyway when it reaches its last line.

Related

Running stateful commands in PowerShell through Node.js

Context: I have a JavaScript file that drives the .NET SpeechSynthesizer through PowerShell. The script receives a message and passes it through to PowerShell, where it is rendered as speech.
Problem: there is horrible latency (~5 sec) between execution and response. This is because the script creates an entirely new PowerShell session and SpeechSynthesizer object with every execution.
Objective: I want to change the script so that a single PowerShell session and SpeechSynthesizer object is persisted and reused across executions. I believe this will eliminate the latency completely.
Limiting Factor: this modification requires making the PowerShell execution stateful. Currently, I don't know how to issue stateful PowerShell commands from a JavaScript file.
Code:
const path = require('path');
const Max = require('max-api');
const { exec } = require('child_process');

// This will be printed directly to the Max console
Max.post(`Loaded the ${path.basename(__filename)} script`);

const execCommand = command => {
    // Max.post(`Running command: ${command}`);
    exec(command, {'shell':'powershell.exe'}, (err, stdout, stderr) => {
        if (err) {
            // node couldn't execute the command
            Max.error(stderr);
            Max.error(err);
            return;
        }
        // the *entire* stdout and stderr (buffered)
        Max.outletBang()
    });
}

// Use the 'outlet' function to send messages out of node.script's outlet
Max.addHandler("speak", (msg) => {
    let add = 'Add-Type -AssemblyName System.speech'
    let create = '\$speak = New-Object System.Speech.Synthesis.SpeechSynthesizer'
    let speak = `\$speak.Speak(\'${msg}\')`
    let command = ([add,create,speak]).join('; ')
    execCommand(command)
});
Objective, Re-stated: I want to move the add and create commands to a 'create' handler which will only be run once. The speak command will be run an arbitrary number of times afterward.
Attempted Solution: I've found one package (https://github.com/bitsofinfo/powershell-command-executor) that supposedly supports stateful PowerShell commands, but it's very complicated. Also, the author mentions a risk of command injection and other insecurities, of which I have no knowledge.
Any and all suggestions are welcome. Thanks!
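One way to keep a single session alive is to spawn powershell.exe once and write each command to its stdin (roughly what wrapper libraries such as node-powershell do). A rough, untested sketch, reusing the Max handler shape from the question:
const { spawn } = require('child_process');
const Max = require('max-api');

// One long-lived PowerShell process; with '-Command -' it reads commands from stdin.
const ps = spawn('powershell.exe', ['-NoLogo', '-NoProfile', '-Command', '-']);
ps.stdout.on('data', data => Max.post(data.toString()));
ps.stderr.on('data', data => Max.error(data.toString()));

// Run the setup commands once, at load time (or from a 'create' handler).
ps.stdin.write("Add-Type -AssemblyName System.speech\n");
ps.stdin.write("$speak = New-Object System.Speech.Synthesis.SpeechSynthesizer\n");

Max.addHandler("speak", (msg) => {
    // msg is interpolated into a PowerShell string: sanitize it if it can
    // contain quotes (the command-injection risk mentioned above).
    ps.stdin.write(`$speak.Speak('${msg}')\n`);
});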

WebSocket connection closes after calling child process

Here's the setup: I create a simple WebSocket server using the ws library. I then attach a listener for when the client sends me the URL of a PDF to transform. I download it locally, then call another command to transform it:
const download = require("download");

wss.on("connection", ws => {
    ws.onmessage = async msg => {
        await download(msg.data, destination, {
            filename: fileName
        });
        transformPDF(ws, msg.data);
    };
    // ...
});
After that, the transformPDF function calls the spawn command to execute a command line binary. I parse the percentage done from the stdout and then try to emit it to the client. But even before this, the connection has been closed and I'm not sure why:
const { spawn } = require("child_process");

const transformPDF = (ws, url) => {
    // ...
    const child = spawn("k2pdfopt", settings);
    child.stdout.on("data", data => {
        // ...
        ws.send(percentageDone); // <--- connection is broken before this is called
    });
};
I have tried making transformPDF return a promise and awaiting it. I have also tried adding the optional detached option to the spawned process. I'm not really sure why the connection is closing, since I've also successfully replaced the k2pdfopt command with something like a lengthy find, and that worked just fine (although it did batch all of the data in stdout before calling ws.send).
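(For reference, a promise wrapper of that kind might look roughly like this; it is only a sketch: settings comes from the question's elided setup, and parsePercentage stands in for whatever parsing is done on stdout.)
const { spawn } = require("child_process");

// Resolve when the child exits cleanly, reject on a spawn error or a
// non-zero exit code.
const transformPDF = (ws, url) =>
    new Promise((resolve, reject) => {
        const child = spawn("k2pdfopt", settings);
        child.stdout.on("data", data => {
            ws.send(parsePercentage(data.toString()));
        });
        child.on("error", reject);
        child.on("close", code =>
            code === 0 ? resolve() : reject(new Error(`k2pdfopt exited with code ${code}`))
        );
    });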
Any help or insight on why it's closing is much appreciated.
Turns out that when I was creating a child process, it was resetting the Visual Studio Code Live Server extension that was serving my index.html. That explains why I was also getting a status code of 1001, which I found out most likely means the client refreshed.
I fixed the issue by simply installing the live-server node package and serving my index.html from a different terminal.

Pepper naoqi 2.5 onConsoleMessage null

I am trying to print in Python the messages from the web console, using a callback on the onConsoleMessage event. Pepper (Edit: version 1.6) is running naoqi 2.5.5.5. I've modified the executeJS example as a test. The problem is I keep getting null for the message in the callback. Is it a bug that has been fixed in a newer version of naoqi? I've had a look at the release notes but I didn't find anything.
Here is the code I am using:
#! /usr/bin/env python
# -*- encoding: UTF-8 -*-
"""Example: Use executeJS Method"""

import qi
import argparse
import sys
import time
import signal


def signal_handler(signal, frame):
    print('Bye!')
    sys.exit(0)


def main(session):
    """
    This example uses the executeJS method.
    To test ALTabletService, you need to run the script ON the robot.
    """
    # Get the service ALTabletService.
    try:
        tabletService = session.service("ALTabletService")

        # Display a local web page located in the boot-config/html folder.
        # The IP of the robot seen from the tablet is 198.18.0.1.
        tabletService.showWebview("http://198.18.0.1/apps/boot-config/preloading_dialog.html")
        time.sleep(3)

        # Javascript to inject; ALTabletBinding is a javascript binding
        # injected into the web page displayed on the tablet.
        script = """
        console.log('A test message');
        """

        # Don't forget to disconnect the signal at the end
        signalID = 0

        # Function called when the onConsoleMessage signal is triggered
        def callback(message):
            print "[callback] received : ", message

        # Attach the callback function to the onConsoleMessage signal
        signalID = tabletService.onConsoleMessage.connect(callback)

        # Inject and execute the javascript in the currently displayed web page
        tabletService.executeJS(script)

        print("Waiting for Ctrl+C to disconnect")
        signal.pause()
    except Exception, e:
        print "Error was: ", e


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--ip", type=str, default="127.0.0.1",
                        help="Robot IP address. On robot or Local Naoqi: use '127.0.0.1'.")
    parser.add_argument("--port", type=int, default=9559,
                        help="Naoqi port number")
    args = parser.parse_args()

    session = qi.Session()
    try:
        session.connect("tcp://" + args.ip + ":" + str(args.port))
    except RuntimeError:
        print ("Can't connect to Naoqi at ip \"" + args.ip + "\" on port " + str(args.port) + ".\n"
               "Please check your script arguments. Run with -h option for help.")
        sys.exit(1)

    main(session)
Output:
python onConsoleMessage.py --ip=192.168.1.20
[W] 1515665783.618190 30615 qi.path.sdklayout: No Application was created, trying to deduce paths
Waiting for Ctrl+C to disconnect
[callback] received : null
Has anyone faced the same issue?
Thanks
I have the same issue. You can easily reproduce it by opening two ssh consoles on the robot, and on the first one executing
qicli watch ALTabletService.onConsoleMessage
and on the second
qicli call ALTabletService.showWebview
qicli call ALTabletService.executeJS "console.log('hello')"
... and instead of "hello", you will see "null" appear in your first console.
HOWEVER, if your goal is to test your web page effectively, what I usually do is just open the page on my computer and use the Chrome console (you can set Chrome up to act as if the page were on a tablet of the right size, 1280x800); you can do this while still connecting the page to Pepper, as if it were on her tablet, using the method described here. This is enough for 99% of cases; the remaining 1% is where Pepper's tablet actually behaves differently from Chrome.

How to use child process module on windows from PhantomJS/CasperJS

I'm using CasperJS to test my web app; the thing is that I need to access a DB to automate some necessary inputs for my tests.
I'm looking for alternatives for retrieving this data from the DB inside a CasperJS script, and I finally decided to use PhantomJS's child process module to call a Groovy script that connects to the DB, runs a select and prints the result to stdout, so I can pick it up from CasperJS. However, from the PhantomJS sample I cannot work out how to do it; based on the sample I made some attempts with spawn and execFile with no luck, e.g. I tried:
var process = require("child_process")
var spawn = process.spawn
var execFile = process.execFile

var child = spawn("groovy", ["script.groovy"])

child.stdout.on("data", function (data) {
    console.log("spawnSTDOUT:", JSON.stringify(data))
})

child.stderr.on("data", function (data) {
    console.log("spawnSTDERR:", JSON.stringify(data))
})

child.on("exit", function (code) {
    console.log("spawnEXIT:", code)
})
This doesn't work and produces no output. I also tried executing the dir command directly and again nothing happened.
I also tried on Linux and it doesn't work either; I tried creating a simple echo .sh and got nothing..., however on Linux, when I run the ls command, it works as expected.
After some tries I found a way to do it.
It seems that on Windows the only way to do it is to pass cmd.exe as the command and groovy script.groovy as the argument.
So I use
var child = spawn("cmd.exe", ["/k","groovy script.groovy"])
instead of:
var child = spawn("groovy", ["script.groovy"])
This way works correctly on Windows.
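(One detail worth noting, as an aside: the /k switch keeps cmd.exe alive after the command finishes, whereas /c exits once the command completes, which is usually what you want for a one-shot script:)
var child = spawn("cmd.exe", ["/c","groovy script.groovy"])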
I also found a way to run a shell script on Linux which executes the Groovy script. It's similar to the Windows solution: instead of invoking the .sh directly, I have to use the sh command:
var child = spawn("sh", ["script.sh"])
And script.sh executes the groovy script:
#!/bin/bash
groovy script.groovy

Listening for outside events. Bash to NodeJS bridge

Being inside of a NodeJS process, how can I listen for events from bash?
For example
NodeJS side
obj.on("something", function (data) {
    console.log(data);
});
Bash side
$ do-something 'Hello World'
Then the "Hello World" message should appear on the NodeJS process's stdout.
How can I do this?
I guess it's related to signal events.
The problem with using signals is that you can't pass arguments and most of them are reserved for system use already (I think SIGUSR2 is really the only safe one for node since SIGUSR1 starts the debugger and those are the only two that are supposed to be for user-defined conditions).
Instead, the best way that I've found to do this is by using UNIX sockets; they're designed for inter-process communication.
The easiest way to set up a UNIX socket in node is to create a standard net server with net.createServer() and then simply pass a file path to server.listen() to create the socket at the path you specified. Note: it's important that a file at that path doesn't already exist, otherwise you'll get an EADDRINUSE error.
Something like this:
var net = require('net');

var server = net.createServer(function(connection) {
    connection.on('data', function(data) {
        // data is a Buffer, so we'll .toString() it for this example
        console.log(data.toString());
    });
});

// This creates a UNIX socket in the current directory named "nodejs_bridge.sock"
server.listen('nodejs_bridge.sock');

// Make sure we close the server when the process exits so the file it created is removed
process.on('exit', function() {
    server.close();
});

// Call process.exit() explicitly on ctl-c so that we actually get that event
process.on('SIGINT', function() {
    process.exit();
});

// Resume stdin so that we don't just exit immediately
process.stdin.resume();
Then, to actually send something to that socket in bash, you can pipe to nc like this:
echo "Hello World" | nc -U nodejs_bridge.sock
What about using FIFOs?
NodeJS code:
process.stdin.on('readable', function() {
    var chunk = process.stdin.read();
    if (chunk !== null) {
        process.stdout.write('data: ' + chunk);
    }
});
NodeJS startup (the 3>/tmp/... is a trick to keep FIFO open):
mkfifo /tmp/nodeJsProcess.fifo
node myProgram.js </tmp/nodeJsProcess.fifo 3>/tmp/nodeJsProcess.fifo
Bash linkage:
echo Hello >/tmp/nodeJsProcess.fifo
The signals described on the page that you've linked are used to send specific "commands" to processes. This is called "Inter-Process Communication" (IPC). You can see here a first definition of IPC.
You can instruct your node.js code to react to a specific signal, as in this example:
// Start reading from stdin so we don't exit.
process.stdin.resume();

process.on('SIGUSR1', function() {
    console.log('Got SIGUSR1. Here you can do something.');
});
Please note that the signal is sent to the process, and not to a specific object in the code.
If you need to communicate with the node.js daemon in a more specific way, you can also listen on another port and use it to receive (and, if needed, send back) control commands.
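A minimal sketch of that approach (the port number 8125 and the message format are arbitrary illustrative choices, not anything prescribed above):
var net = require('net');

// Listen on a local TCP port for control commands instead of a UNIX socket.
var control = net.createServer(function(connection) {
    connection.on('data', function(data) {
        var command = data.toString().trim();
        console.log('control command received: ' + command);
        // ... react to the command here ...
        connection.write('ok\n'); // optionally answer back
    });
});

control.listen(8125, '127.0.0.1');
Then, from bash:
echo 'do-something Hello World' | nc 127.0.0.1 8125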
