Here's the setup: I create a simple WebSocket server using the ws library. I then attach a listener for when the client sends me the URL of a PDF to transform. I download the file locally, then call another command to transform it:
const download = require("download");

wss.on("connection", ws => {
  ws.onmessage = async msg => {
    await download(msg.data, destination, {
      filename: fileName
    });
    transformPDF(ws, msg.data);
  };
  // ...
});
After that, the transformPDF function calls spawn to execute a command-line binary. I parse the percentage done from its stdout and try to emit that to the client. But even before this happens, the connection has been closed, and I'm not sure why:
const { spawn } = require("child_process");

const transformPDF = (ws, url) => {
  // ...
  const child = spawn("k2pdfopt", settings);
  child.stdout.on("data", data => {
    // ...
    ws.send(percentageDone); // <--- connection is broken before this is called
  });
};
I have tried making transformPDF return a promise and awaiting it. I have also tried passing the detached option to the spawned process. I'm not really sure why the connection is closing, since I've also successfully replaced the k2pdfopt command with something like a lengthy find, and that worked just fine (although it did batch all of the data in stdout before calling ws.send).
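For reference, the promisified version I tried looked roughly like this (a sketch; settings and the percentage parsing are elided as in the snippet above):
const transformPDF = (ws, url) =>
  new Promise((resolve, reject) => {
    const child = spawn("k2pdfopt", settings);
    child.stdout.on("data", data => {
      // ... parse percentageDone from data ...
      ws.send(percentageDone);
    });
    // Settle the promise when the binary exits
    child.on("close", code =>
      code === 0 ? resolve() : reject(new Error(`k2pdfopt exited with ${code}`))
    );
    child.on("error", reject);
  });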
Any help or insight on why it's closing is much appreciated.
Turns out that when I was creating a child process, it was resetting the Visual Studio Code Live Server extension that I had serving my index.html. That also explains the close code of 1001 I was getting, which I found out most likely means the client refreshed the page.
I fixed the issue by simply installing the node package live-server and running my index.html from a different terminal.
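In other words, something like this (assuming a global install of the package):
npm install -g live-server
live-server --port=8080 .   # serves index.html from the current directory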
Related
Context: I have a JavaScript file that activates PowerShell's native SpeechSynthesizer module. The script receives a message and passes it through to PowerShell, where it is rendered as speech.
Problem: there is horrible latency (~5 sec) between execution and response. This is because the script creates an entirely new PowerShell session and SpeechSynthesizer object with every execution.
Objective: I want to change the script so that a single PowerShell session and SpeechSynthesizer object is persisted and reused across multiple executions. I believe this will eradicate the latency completely.
Limiting Factor: this modification requires making the PowerShell execution stateful. Currently, I don't know how to issue stateful commands to PowerShell from a JavaScript file.
Code:
const path = require('path');
const Max = require('max-api');
const { exec } = require('child_process');

// This will be printed directly to the Max console
Max.post(`Loaded the ${path.basename(__filename)} script`);

const execCommand = command => {
  // Max.post(`Running command: ${command}`);
  exec(command, { 'shell': 'powershell.exe' }, (err, stdout, stderr) => {
    if (err) {
      // node couldn't execute the command
      Max.error(stderr);
      Max.error(err);
      return;
    }
    // the *entire* stdout and stderr (buffered)
    Max.outletBang();
  });
};
// Use the 'outlet' function to send messages out of node.script's outlet
Max.addHandler("speak", (msg) => {
  let add = 'Add-Type -AssemblyName System.speech';
  let create = '$speak = New-Object System.Speech.Synthesis.SpeechSynthesizer';
  let speak = `$speak.Speak('${msg}')`;
  let command = [add, create, speak].join('; ');
  execCommand(command);
});
Objective, Re-stated: I want to move the add and create commands to a 'create' handler which will only be run once. The speak command will then be run an arbitrary number of times afterward.
Attempted Solution: I've found one package (https://github.com/bitsofinfo/powershell-command-executor) that supposedly supports stateful PowerShell commands, but it's very complicated. Also, the author mentions a risk of command injection and other insecurities, of which I have no knowledge.
Any and all suggestions are welcome. Thanks!
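The shape I'm imagining is something like the following (an untested sketch: it assumes powershell.exe's -Command - mode keeps reading command text from stdin, whose buffering behavior can vary between PowerShell versions, and it does no real sanitization of msg):
const { spawn } = require('child_process');

// One long-lived PowerShell session; '-Command -' tells PowerShell
// to read command text from standard input
const ps = spawn('powershell.exe', ['-NoLogo', '-NoProfile', '-Command', '-']);
ps.stdout.on('data', data => Max.post(data.toString()));
ps.stderr.on('data', data => Max.error(data.toString()));

// Run the expensive setup exactly once, when the session starts
ps.stdin.write('Add-Type -AssemblyName System.speech\n');
ps.stdin.write('$speak = New-Object System.Speech.Synthesis.SpeechSynthesizer\n');

Max.addHandler("speak", (msg) => {
  // Reuse the same session; doubling single quotes is PowerShell's
  // escape for a quote inside a single-quoted string
  ps.stdin.write(`$speak.Speak('${String(msg).replace(/'/g, "''")}')\n`);
});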
I know you can use navigator.onLine inside the renderer process, because it's rendered inside a browser. But what I'm trying to do is something like this in the main process:
if (navigator.onLine) {
  mainWindow.loadURL("https://google.com");
} else {
  mainWindow.loadFile(path.join(__dirname, 'index.html'));
}
So basically, if the user is offline, just load a local HTML file, and if they're online, take them to a webpage. But, as expected, I keep getting the error that 'navigator is not defined'. Does anyone know how I can achieve something like navigator.onLine in the main process? Thanks!
TL;DR: The easiest thing to do is to just ask Electron. You can do this via the net module from within the Main Process:
const { net } = require ("electron");
const isInternetAvailable = () => net.isOnline ();
// To check:
if (isInternetAvailable ()) { /* do something... */ }
See Electron's documentation on the method; specifically, this approach doesn't tell you whether your own service is reachable over the internet, only that some network connection exists (or possibly not even that, as the documentation mentions link-level connections which would not involve any HTTP request at all).
However, this is not a reliable measurement, and you might want to increase its hit rate by manually checking whether a certain connection can be made.
In order to check whether an internet connection is available, you'll have to make a connection yourself and see if it fails. This can be done from the Main Process using plain NodeJS:
// HTTP code basically from the NodeJS HTTP tutorial at
// https://nodejs.dev/learn/making-http-requests-with-nodejs/
const https = require('https');

const REMOTE_HOST = "google.com"; // Or your domain
const REMOTE_EP = "/"; // Or your endpoint
const REMOTE_PAGE = "https://" + REMOTE_HOST + REMOTE_EP;

function checkInternetAvailability () {
  return new Promise ((resolve, reject) => {
    const options = {
      hostname: REMOTE_HOST,
      port: 443,
      path: REMOTE_EP,
      method: 'GET',
      timeout: 5000, // Milliseconds; without this, the 'timeout' event below never fires
    };
    // Try to fetch the given page
    const req = https.request (options, res => {
      // Yup, that worked. Tell the depending code.
      resolve (true);
      req.destroy (); // This is no longer needed.
    });
    req.on ('error', error => {
      reject (error);
    });
    req.on ('timeout', () => {
      // No, connection timed out.
      resolve (false);
      req.destroy ();
    });
    req.end ();
  });
}
// ... Your window initialisation code ...
checkInternetAvailability ().then (
  internetAvailable => {
    if (internetAvailable) mainWindow.loadURL (REMOTE_PAGE);
    else mainWindow.loadFile (path.join (__dirname, 'index.html'));
    // Call any code needed to be executed after this here!
  }
).catch (error => {
  console.error ("Oops, couldn't initialise!", error);
  app.exit (1); // app.exit() accepts an exit code; app.quit() does not
});
Please note that this code might not be the most desirable, since it simply "crashes" your app with exit code 1 if there is any error other than a connection timeout.
This approach, however, makes your startup asynchronous, which means you need to pay attention to the execution order of your app's startup code. Also, startup may be really slow if the timeout is reached, so it may be worth tuning the timeout value (see NodeJS' https module documentation).
Also, it makes sense to try to retrieve the very page you want to load in the BrowserWindow (constants REMOTE_HOST and REMOTE_EP), because that also tells you whether your server is up. It does mean the page gets fetched twice (in the best case: once when the connection test succeeds and once when Electron loads it into the window), but that should not be a big problem, since no external assets (images, CSS, JS) are loaded during the test.
One last note: this is not a good metric of whether any internet connection is available; it just tells you whether your server answered within the timeout window. It might very well be that other services work, or that the connection is just very slow (i.e., expect false negatives). It should be "good enough" for your use case, though.
I've been working on a Vue project.
This Vue project uses a Node.js API I've created; simply put, they are two entirely different projects which are not located in the same directory and are launched separately.
The problem is that whenever I debug a route with node --inspect --debug-break event_type.controller.js, for example the one named:
"/eventtype/create"
exports.create = (req, res) => {
  const userId = jwt.getUserId(req.headers.authorization);
  if (userId == null) {
    res.status(401).send(Response.response401());
    return;
  }
  // Validate request
  if (!req.body.label || !req.body.calendarId) {
    res.status(400).send(Response.response400());
    return;
  }
  const calendarId = req.body.calendarId; // Calendar id
  // Save to database
  EventType.create({
    label: req.body.label,
  }).then((eventType) => {
    Calendar.findByPk(calendarId).then((calendar) => {
      eventType.addCalendar(calendar); // Add a Calendar
      res.status(201).send(eventType);
    }).catch((err) => {
      res.status(500).send(Response.response500(err.message));
    });
  }).catch((err) => {
    res.status(500).send(Response.response500(err.message));
  });
};
Even if I set a breakpoint on const userId = jwt.getUserId(req.headers.authorization);
and trigger the createEventType API call from my Vue app, my breakpoint is never hit.
Also, when I press F8 after the breakpoint on my first line in the debugger, my file closes automatically.
I use Vim rather than VS Code for coding, but I've heard that VS Code might offer a simpler way to debug Node.js applications.
NOTE: I use the V8 node debugger.
For newer NodeJS versions (> 7.0.0) you need to use
node --inspect-brk event_type.controller.js
instead of
node --inspect --debug-break event_type.controller.js
to break on the first line of the application code. See https://nodejs.org/api/debugger.html#debugger_advanced_usage for more information.
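For example (the inspector listens on its default port, 9229, and any compatible client can attach):
node --inspect-brk event_type.controller.js
# then open chrome://inspect in Chrome and click "inspect"
# under "Remote Target" to attach DevTools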
My workaround (even if it's not really a solution) was to add console.log calls to the lines I wanted to debug.
For around 3 weeks I've been working on an Electron app, and finally decided to get around to adding update checking. From my research, the standard way to do this in Electron (using Squirrel) requires the user to physically install the application onto their computer. I would rather not do this and keep everything as portable as possible. I then decided to try making my own update script by having the program download update.zip and extract it, overwriting the existing files. This works well, up until the very end. At the very end of the extraction, I receive an Invalid package error, and the actual app.asar file is missing, rendering the application useless.
I am using this to download and extract the updates:
// Assumed dependencies (adjust to match where these already live in your file):
const fs = require('fs');
const request = require('request');
const unzipper = require('unzipper');
const { dialog } = require('electron');

function downloadFile(url, target, fileName, cb) { // Downloads
  var req = request({
    method: 'GET',
    uri: url
  });
  var out = fs.createWriteStream(target + '/' + fileName);
  req.pipe(out);
  req.on('end', function() {
    unzip(target + '/' + fileName, target, function() {
      if (cb) {
        cb();
      }
    });
  });
}
function unzip(file, target, cb) { // Unzips
  var out = fs.createReadStream(file);
  out.pipe(unzipper.Extract({ path: target })).on('finish', function () {
    dialog.showMessageBox({
      type: 'question',
      message: 'Finished extracting to `' + target + '`'
    });
    if (cb) {
      cb();
    }
  });
}
And call it with:
downloadFile('http://example.com/update.zip', path.join(__dirname, './'), 'update.zip', function() { // http://example.com/update.zip is not the real source
  app.relaunch();
  app.quit();
});
And I use the unzipper NPM package (https://www.npmjs.com/package/unzipper).
The code works perfectly for all other zips, but it fails when trying to extract a zip containing an Electron app.
Anything I'm doing wrong, or maybe a different package that properly supports extracting zips with .asar files?
Edit 1
I just found https://www.npmjs.com/package/electron-basic-updater, which does not throw the same JavaScript error; however, it still does not extract the .asar files correctly and throws its own error. Since the .asar is still missing, the app is still useless after the "update".
Thanks to your link to electron-basic-updater, I have found this issue mentioned there: https://github.com/TamkeenLMS/electron-basic-updater/issues/4.
They refer to the issue in the electron app: https://github.com/electron/electron/issues/9304.
Finally, at the end of the second thread, there's a solution:
This is due to the electron fs module treating asar files as directories rather than files. To make the unzip process work you need to do one of two things:
Set process.noAsar = true
Use original-fs instead of fs
I have seen people working with original-fs, but it looked like too much trouble to me.
So I tried setting process.noAsar = true (and then process.noAsar = false after unzipping), and that worked like a charm.
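Applied to the unzip call inside the question's downloadFile, that looks roughly like this:
process.noAsar = true; // Make fs treat .asar archives as plain files during extraction
unzip(target + '/' + fileName, target, function() {
  process.noAsar = false; // Restore Electron's default asar handling
  if (cb) {
    cb();
  }
});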
From inside a NodeJS process, how can I listen for events from bash?
For example
NodeJS side
obj.on("something", function (data) {
  console.log(data);
});
Bash side
$ do-something 'Hello World'
Then the "Hello World" message should appear on the NodeJS process's stdout.
How can I do this?
I guess it's related to signal events.
The problem with using signals is that you can't pass arguments, and most of them are reserved for system use already (I think SIGUSR2 is really the only safe one for node, since SIGUSR1 starts the debugger, and those are the only two that are supposed to be for user-defined conditions).
Instead, the best way that I've found to do this is by using UNIX sockets; they're designed for inter-process communication.
The easiest way to set up a UNIX socket in node is by setting up a standard net server with net.createServer() and then simply passing a file path to server.listen() to create the socket at the path you specified. Note: it's important that a file at that path doesn't exist, otherwise you'll get an EADDRINUSE error.
Something like this:
var net = require('net');

var server = net.createServer(function(connection) {
  connection.on('data', function(data) {
    // data is a Buffer, so we'll .toString() it for this example
    console.log(data.toString());
  });
});

// This creates a UNIX socket in the current directory named "nodejs_bridge.sock"
server.listen('nodejs_bridge.sock');

// Make sure we close the server when the process exits so the file it created is removed
process.on('exit', function() {
  server.close();
});

// Call process.exit() explicitly on ctl-c so that we actually get that event
process.on('SIGINT', function() {
  process.exit();
});

// Resume stdin so that we don't just exit immediately
process.stdin.resume();
Then, to actually send something to that socket in bash, you can pipe to nc like this:
echo "Hello World" | nc -U nodejs_bridge.sock
What about using FIFOs?
NodeJS code:
process.stdin.on('readable', function() {
  var chunk = process.stdin.read();
  if (chunk !== null) {
    process.stdout.write('data: ' + chunk);
  }
});
NodeJS startup (the 3>/tmp/... redirection is a trick to keep the FIFO open):
mkfifo /tmp/nodeJsProcess.fifo
node myProgram.js </tmp/nodeJsProcess.fifo 3>/tmp/nodeJsProcess.fifo
Bash linkage:
echo Hello >/tmp/nodeJsProcess.fifo
The signals described in the page that you've linked are used to send specific "commands" to processes. This is called "Inter-Process Communication" (IPC).
You can instruct your node.js code to react to a specific signal, as in this example:
// Start reading from stdin so we don't exit.
process.stdin.resume();

process.on('SIGUSR1', function() {
  console.log('Got SIGUSR1. Here you can do something.');
});
Please note that the signal is sent to the process, and not to a specific object in the code.
If you need to communicate with the node.js daemon in a more specific way, you can also listen on another port, and use it to receive (and possibly send) control commands.
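For instance, a minimal sketch of such a control channel over TCP (the port number 8124 is arbitrary):
var net = require('net');

// Listen on a local TCP port for control commands
net.createServer(function(conn) {
  conn.on('data', function(data) {
    console.log('control command: ' + data.toString().trim());
  });
}).listen(8124, '127.0.0.1');
Then, from bash:
$ echo 'do-something Hello World' | nc 127.0.0.1 8124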