Multiple arguments in Gio.Subprocess - javascript

I'm developing my first gnome-shell-extension currently. In the extension, I want to execute a simple shell command and use the output afterwards, for which I use Gio.Subprocess like it is used in this wiki: https://wiki.gnome.org/AndyHolmes/Sandbox/SpawningProcesses
Currently, I have a command with some parameters, such as "ProgramXYZ -a -bc", which I pass as the argument vector argv as ['ProgramXYZ','-a','-bc']. This case works fine.
So let's say I would like to call two programs and combine their output with this approach, like: "ProgramXYZ -a -bc && ProgramB". The output is correct in a normal terminal, but I'm not sure how to pass this to Gio.Subprocess. Something like ['ProgramXYZ','-a','-bc','&&','ProgramB'] does not work. Is there a way to achieve that, or do I have to make two separate calls?

Sorry, I haven't managed to finish that page (that's why it's in my sandbox 😉).
Here is our Promise wrapper for running a subprocess:
function execCommand(argv, input = null, cancellable = null) {
    let flags = Gio.SubprocessFlags.STDOUT_PIPE;

    if (input !== null)
        flags |= Gio.SubprocessFlags.STDIN_PIPE;

    let proc = new Gio.Subprocess({
        argv: argv,
        flags: flags
    });
    proc.init(cancellable);

    return new Promise((resolve, reject) => {
        proc.communicate_utf8_async(input, cancellable, (proc, res) => {
            try {
                resolve(proc.communicate_utf8_finish(res)[1]);
            } catch (e) {
                reject(e);
            }
        });
    });
}
Now you have two reasonable choices, since you have a nice wrapper.
I would prefer this option myself, because if I'm launching sequential processes I probably want to know which failed, what the error was and so on. I really wouldn't worry about extra overhead, since the second process only executes if the first succeeds, at which point the first will have been garbage collected.
async function dualCall() {
    try {
        let stdout1 = await execCommand(['ProgramXYZ', '-a', '-bc']);
        let stdout2 = await execCommand(['ProgramB']);
    } catch (e) {
        logError(e);
    }
}
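If the goal is to combine the two outputs, as asked in the question, the results of the two awaits can simply be concatenated inside the try block above; a minimal addition, assuming plain string concatenation is what "combine" means here:
// e.g. right after the two awaits in dualCall() (log() is a global in GJS)
let combined = stdout1 + stdout2;
log(combined);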
On the other hand, there is nothing preventing you from launching a sub-shell if you really want to do shell stuff. Ultimately you're just offloading the same behaviour to a shell, though:
async function shellCall() {
    try {
        let stdout = await execCommand([
            '/bin/sh',
            '-c',
            'ProgramXYZ -a -bc && ProgramB'
        ]);
    } catch (e) {
        logError(e);
    }
}

Related

Can two browser windows cause race conditions in synchronous JavaScript code?

I have the following function that synchronizes some locally cached data with my online database.
In the process, existing values are retrieved from the database, added to the locally cached values, and then sent as an update to the database. Because of this summing process, this procedure must not run multiple times in parallel because it could result in invalid values (adding more than it should).
I implemented a locking mechanism using local storage. It seems to work well because JavaScript is single-threaded and runs all the locking code synchronously, even if this function is called rapidly in succession.
The problem is, how do I also make this thread-safe across browser tabs/windows? The way I see it, multiple windows behave like separate threads, taking away the benefit of JavaScript's single-threading approach.
So 1) Is my assumption about multiple browser windows behaving like multiple threads correct and 2) How do I make my particular code safe across these different threads/processes?
export async function mergeCachedGameStats(supabaseClient: SupabaseClient) {
  const lockKey = "merge_cached_game_stats_lock";
  const lock = localStorage.getItem(lockKey);
  if (lock && (Date.now() - parseInt(lock) > 15_000)) localStorage.removeItem(lockKey);
  if (lock) return;
  localStorage.setItem(lockKey, Date.now().toString());

  try {
    const { data, error } = await supabaseClient?.auth.getSession();
    if (error) throw error;
    const loggedInUser = data.session?.user;

    for (const key of Object.values(gameStatsKeys)) {
      if (!localStorage.getItem(key + "_cached")) continue;
      const gameStats = await loadCombinedGameStats(key, supabaseClient);

      if (loggedInUser) {
        const { error } = await supabaseClient
          .from("user_profiles")
          .update({ [key]: JSON.stringify(gameStats) })
          .eq("id", loggedInUser.id);
        if (error) throw error;
      } else {
        localStorage.setItem(key, JSON.stringify(gameStats));
      }

      localStorage.removeItem(key + "_cached");
    }
  } catch (error) {
    console.error(error);
  } finally {
    localStorage.removeItem(lockKey)
  }
}
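For the cross-tab part specifically (this is an illustration added here, not part of the question): one option is the Web Locks API, where the browser itself arbitrates the lock across tabs of the same origin, so no hand-rolled localStorage expiry is needed. A minimal sketch, assuming navigator.locks is available in the target browsers and reusing the function above:
// Sketch only: the lock below is granted to at most one tab at a time,
// so the merge callback never runs concurrently across tabs.
async function mergeCachedGameStatsAcrossTabs(supabaseClient) {
  if (!("locks" in navigator)) {
    // Fallback for browsers without the Web Locks API (e.g. keep the localStorage lock).
    return mergeCachedGameStats(supabaseClient);
  }
  await navigator.locks.request(
    "merge_cached_game_stats",
    { ifAvailable: true },          // skip instead of queueing behind another tab
    async (lock) => {
      if (!lock) return;            // another tab is already merging
      await mergeCachedGameStats(supabaseClient);
    }
  );
}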

Wait all promises in a map function

I want to wait for all my pictures to be read inside a map function.
I tried this
let buffer = [];
// Folder of the dataset.
const rootFolder = './dataset'
console.log("Entering in folder dataset");
fs.readdirSync(rootFolder);
// For each folders
const files = fs.readdirSync(rootFolder).map(dirName => {
    if (fs.lstatSync(path.join(rootFolder, dirName)).isDirectory()) {
        console.log(`Entering in folder ${path.join(rootFolder, dirName)}`);
        // For each files
        fs.readdirSync(path.join(rootFolder, dirName)).map(picture => {
            if (fs.lstatSync(path.join(rootFolder, dirName, picture)).isFile()) {
                if (picture.startsWith("norm")) {
                    return fileToTensor(path.join(rootFolder, dirName, picture)).then((img) => {
                        buffer.push(img);
                    }).catch((error) => { console.log(error) });
                }
            }
        });
    }
});
Promise.all(files);
console.log(buffer);

async function fileToTensor(path) {
    return await sharp(path)
        .removeAlpha()
        .raw()
        .toBuffer({ resolveWithObject: true });
}
But my buffer is still empty...
I know promises exist, but I don't know how I can include them in map(map()).
Thank you :)
I would refactor the above code to this:
let files = [];
// loop each dir.
fs.readdirSync(rootFolder).forEach(dirName => {
    // if it's a directory, proceed.
    if (fs.lstatSync(path.join(rootFolder, dirName)).isDirectory()) {
        console.log(`Entering in folder ${path.join(rootFolder, dirName)}`);
        fs.readdirSync(path.join(rootFolder, dirName)).forEach(picture => {
            if (fs.lstatSync(path.join(rootFolder, dirName, picture)).isFile()) {
                // If lstatSync says it's a file and if it starts with "norm"
                if (picture.startsWith("norm")) {
                    // push a new promise to the array.
                    files.push(new Promise((resolve, reject) => {
                        fileToTensor(path.join(rootFolder, dirName, picture)).then((img) => {
                            buffer.push(img);
                            resolve();
                        }).catch((error) => { console.log(error); reject(error); });
                    }));
                }
            }
        });
    }
});
// Resolve all promises.
Promise.all(files).then(() => {
    // Then do whatever you need to do.
    console.log(buffer);
}).catch((errors) => {
    console.log('one or more errors occurred', errors);
});
Basically, here is what I did:
Removed .map, since it's not necessary in this context; besides, in your case not every code path returned a result, so not every callback produced one.
Pushed each needed item to the files array, which is a Promise[].
Called Promise.all on the files array. Each resolved promise will push the result to the buffer array. I would've handled it in a different way, but still, this is the fastest I could think of.
Registered a callback on Promise.all, so that buffer will be defined.
As a side note, there are a lot of third-party libraries that help you avoid hand-rolling nested loops and promises over the file system. I've posted this just to give something that could actually work starting from the existing code, although a full refactor would be wise here, and a preliminary look at the available Node libraries would also help make the code easier to read and maintain.
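For what it's worth, the "different way" mentioned above could be, for example, to let each promise resolve with its image and read the results straight from Promise.all, instead of pushing into a shared buffer. A rough sketch under the same assumptions (rootFolder and fileToTensor as in the question):
const fs = require('fs');
const path = require('path');

// Collect promises that resolve with the images themselves,
// then take the result array from Promise.all.
const files = [];
fs.readdirSync(rootFolder).forEach(dirName => {
    const dirPath = path.join(rootFolder, dirName);
    if (!fs.lstatSync(dirPath).isDirectory()) return;
    fs.readdirSync(dirPath).forEach(picture => {
        const picPath = path.join(dirPath, picture);
        if (fs.lstatSync(picPath).isFile() && picture.startsWith("norm")) {
            files.push(fileToTensor(picPath));   // each promise resolves with an image
        }
    });
});
Promise.all(files)
    .then(buffer => console.log(buffer))
    .catch(err => console.log('one or more errors occurred', err));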
First of all, a few pieces of advice:
Don't use arrow functions for anything you cannot fit on a single line (they aren't intended for that, and it wrecks readability).
Check that each callback you pass to .map() actually returns something (the first one doesn't: it seems you missed a return before the inner fs.readdirSync(...).map(...)).
Try to name all your functions (except the arrow ones, where they're a good choice). That way, not only could I name them to better identify them in the previous point, but stack traces also become much more readable and useful (traceable).
That being said, you are reading directories (and subdirectories) synchronously only to end up returning promises (I understand fileToTensor() is expected to return a promise). It may not have a major impact on overall execution time, since the actual file processing is presumably much more expensive, BUT this is a bad pattern because you block the event loop during the tree scan (so, if your code runs in a server that needs to serve other requests, you are dragging performance down a bit). A non-blocking sketch follows below.
Finally, as others have already said, there are libraries, such as glob, that ease this task.
On the other hand, if you want to do it yourself (as a learning exercise), I implemented my own library for the same task before I knew about glob, and it could serve as a simpler example.
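A rough sketch of the non-blocking variant mentioned above (an illustration only, using Node's promise-based fs API and assuming the fileToTensor() from the question):
const fsp = require('fs').promises;
const path = require('path');

// Same tree walk as the question, but without blocking the event loop.
async function loadNormPictures(rootFolder) {
    const buffer = [];
    for (const dirName of await fsp.readdir(rootFolder)) {
        const dirPath = path.join(rootFolder, dirName);
        if (!(await fsp.lstat(dirPath)).isDirectory()) continue;
        for (const picture of await fsp.readdir(dirPath)) {
            const picPath = path.join(dirPath, picture);
            if ((await fsp.lstat(picPath)).isFile() && picture.startsWith("norm")) {
                buffer.push(await fileToTensor(picPath));   // loads pictures one by one
            }
        }
    }
    return buffer;
}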
Hey, I've updated your code a bit, please go through it once. It might be helpful :)
const Util = require('util');
const Fs = require('fs');
const path = require('path');

let fsReadDir = Util.promisify(Fs.readdir);
let fsStat = Util.promisify(Fs.stat);

let buffer = [];
let picturePromises = [];
let directories = await fsReadDir(rootFolder);
for (let dirIndex = 0; dirIndex < directories.length; dirIndex++) {
    let file = directories[dirIndex];
    let stat = await fsStat(path.join(rootFolder, file));
    if (stat.isDirectory()) {
        let pictures = await fsReadDir(path.join(rootFolder, file));
        for (let picIndex = 0; picIndex < pictures.length; picIndex++) {
            let stat = await fsStat(path.join(rootFolder, file, pictures[picIndex]));
            if (stat.isFile()) {
                if (pictures[picIndex].startsWith("norm")) {
                    let pTensor = fileToTensor(path.join(rootFolder, file, pictures[picIndex])).then((img) => {
                        buffer.push(img);
                    }).catch((error) => { console.log(error) });
                    picturePromises.push(pTensor);
                }
            }
        }
    }
}
// Wait for every picture to be loaded before reading the buffer.
await Promise.all(picturePromises);
console.log(buffer);

async function fileToTensor(path) {
    return await sharp(path)
        .removeAlpha()
        .raw()
        .toBuffer({ resolveWithObject: true });
}

Merged gulp tasks never fire `end` event

I've got a gulp task that loops through a folder looking for sub folders and outputs a JavaScript file based upon the contents of each folder. Below is a more visual example.
src
  assets
    scripts
      critical
        loadCSS.init.js
      legacy
        flexibility.init.js
        picturefill.init.js
      modern
        connectivity.js
        layzr.init.js
        menu_list.js
        scroll-hint.init.js
        slideout.init.js
        swiper.init.js
      service-worker
        service-worker.js
becomes:
dev
  assets
    scripts
      critical.js
      legacy.js
      modern.js
      service-worker.js
This is achieved by reading the contents of the src/assets/scripts directory, then running a loop against each folder (critical, legacy, modern, service-worker) and sending the contents of each folder to a Gulp task; the resulting streams get merged together with merge-stream.
All this works great, except that once the tasks are merged back together, I want to trigger a notification if the compilation succeeded. If I try to pipe anything to the merged streams, it doesn't work. It just returns the merged streams, and never continues on.
If I un-promisify my PROCESS_SCRIPTS function and don't use merge-stream (i.e. only processing one manually specified folder), it works fine, so I'm at a loss as to what's going on.
Here's my full task:
module.exports = {
    scripts(gulp, plugins, ran_tasks, on_error) {
        // task-specific plugins
        const ESLINT = require("gulp-eslint");
        const WEBPACK = require("webpack-stream");

        // process scripts
        const PROCESS_SCRIPTS = (js_directory, destination_file_name = "modern.js", compare_file_name = "modern.js", source = [global.settings.paths.src + "/assets/scripts/*.js"]) => {
            return new Promise((resolve, reject) => {
                const WEBPACK_CONFIG = {
                    mode: "development",
                };

                // update webpack config for the current target destination and file name
                WEBPACK_CONFIG.mode = plugins.argv.dist ? "production" : WEBPACK_CONFIG.mode;
                WEBPACK_CONFIG.output = {
                    filename: destination_file_name
                };

                const TASK = gulp.src(source)
                    // prevent breaking on error
                    .pipe(plugins.plumber({errorHandler: on_error}))
                    // check if source is newer than destination
                    .pipe(plugins.newer(js_directory + "/" + compare_file_name))
                    // lint all scripts
                    .pipe(ESLINT())
                    // print lint errors
                    .pipe(ESLINT.format())
                    // run webpack
                    .pipe(WEBPACK(WEBPACK_CONFIG))
                    // generate a hash and add it to the file name
                    .pipe(plugins.hash({template: "<%= name %>.<%= hash %><%= ext %>"}))
                    // output scripts to compiled directory
                    .pipe(gulp.dest(js_directory))
                    // generate a hash manifest
                    .pipe(plugins.hash.manifest(".hashmanifest-scripts", {
                        deleteOld: true,
                        sourceDir: js_directory
                    }))
                    // output hash manifest in root
                    .pipe(gulp.dest("."))
                    // reject after errors
                    .on("error", () => {
                        reject(TASK);
                    })
                    // resolve with the task after completion
                    .on("end", () => {
                        resolve(TASK);
                    });
            });
        };

        // scripts task, lints, concatenates, & compresses JS
        return new Promise((resolve) => {
            // set JS directory
            const JS_DIRECTORY = plugins.argv.dist ? global.settings.paths.dist + "/assets/scripts" : global.settings.paths.dev + "/assets/scripts";
            // set the source directory
            const SOURCE_DIRECTORY = global.settings.paths.src + "/assets/scripts";
            // set up an empty merged stream
            const MERGED_STREAMS = plugins.merge();
            // get the script source folder list
            const SCRIPT_FOLDERS = plugins.fs.readdirSync(SOURCE_DIRECTORY);
            // get the script destination file list
            const SCRIPT_FILES = plugins.fs.existsSync(JS_DIRECTORY) ? plugins.fs.readdirSync(JS_DIRECTORY) : false;

            // process all the script folders
            const PROCESS_SCRIPT_FOLDERS = () => {
                return Promise.resolve().then(() => {
                    // shift to the next folder
                    const FOLDER_NAME = SCRIPT_FOLDERS.shift();
                    // find the existing destination script file name
                    const FILE_NAME = SCRIPT_FILES ? SCRIPT_FILES.find((name) => {
                        return name.match(new RegExp(FOLDER_NAME + ".[a-z0-9]{8}.js"));
                    }) : FOLDER_NAME + ".js";
                    // process all scripts, update the stream
                    return PROCESS_SCRIPTS(JS_DIRECTORY, FOLDER_NAME + ".js", FILE_NAME, SOURCE_DIRECTORY + "/" + FOLDER_NAME + "/**/*").then((processed) => {
                        MERGED_STREAMS.add(processed);
                    });
                }).then(() => SCRIPT_FOLDERS.length > 0 ? PROCESS_SCRIPT_FOLDERS() : resolve());
            };

            PROCESS_SCRIPT_FOLDERS().then(() => {
                // wrap up
                return MERGED_STREAMS
                    // prevent breaking on error
                    .pipe(plugins.plumber({
                        errorHandler: on_error,
                    }))
                    // notify that task is complete, if not part of default or watch
                    .pipe(plugins.gulpif(gulp.seq.indexOf("scripts") > gulp.seq.indexOf("default"), plugins.notify({
                        title: "Success!",
                        message: "Scripts task complete!",
                        onLast: true,
                    })))
                    // push task to ran_tasks array
                    .on("data", () => {
                        if (ran_tasks.indexOf("scripts") < 0) {
                            ran_tasks.push("scripts");
                        }
                    })
                    // resolve the promise on end
                    .on("end", () => {
                        resolve();
                    });
            });
        });
    }
};
Also visible on my GitHub: https://github.com/JacobDB/new-site/blob/master/gulp-tasks/scripts.js
EDIT: I've tried a few things, I'll detail them here.
1. console.log("hello world") never fires after MERGED_STREAMS.on("data"), MERGED_STREAMS.on("error"), or MERGED_STREAMS.on("end").
2. Moving const MERGED_STREAMS = plugins.merge(); to a module-level variable (i.e. just after const WEBPACK = require("webpack-stream")) does not change the outcome.
3. Doing #2 and then using MERGED_STREAMS.add(gulp.src(source) ...) instead of adding the stream after the promise completes does not change the outcome, except when leaving in .pipe(gulp.dest(".")), which is required to output a .hashmanifest, and always marks the task as ran.
4. Disabling webpack, hash, or eslint, in any combination, does not make a difference.
5. Changing PROCESS_SCRIPTS from returning a promise to returning the stream, then processing each folder as an individual variable, then merging them manually does appear to correctly trigger the task as ran, but webpack can only be run once, so it only outputs one file – critical.hash.js. Note: I haven't tested this method in conjunction with disabling hash, which may be causing it to be marked as correctly ran if .hashmanifest is always being output.
6. Splitting the linting step and the webpack step into separate tasks kind of causes the task to be correctly marked as ran, but only if the lint task is not a promise, which results in unexpected end of stream errors in the console.
EDIT 2: Updated with a revised version of my task based on @Louis's advice.
There are many problems with the code above. One major issue that makes the code hard to follow and debug is that you use new Promise where you don't need it. Generally, if you have new Promise and the logic inside the promise executor will resolve or reject depending on the result of another promise, then you don't need to use new Promise.
Sometimes people have code like this:
function foo() {
    const a = getValueSynchronously();
    const b = getOtherValueSynchronously();
    return doSomethingAsynchronous(a, b).then(x => x.someMethod());
}
Suppose that doSomethingAsynchronous returns a promise. The problem with the foo function above is that if getValueSynchronously or getOtherValueSynchronously fails, then foo will raise an exception, but if doSomethingAsynchronous fails, then it will reject a promise. So code that uses foo has to handle synchronous exceptions and asynchronous rejections if it wants to handle all possible failures. Sometimes people feel they can fix the issue by causing all failures to be promise rejections:
function foo() {
    return new Promise((resolve, reject) => {
        const a = getValueSynchronously();
        const b = getOtherValueSynchronously();
        doSomethingAsynchronous(a, b).then(x => x.someMethod()).then(resolve, reject);
    });
}
In the code above, if getValueSynchronously or getOtherValueSynchronously fails, that's a promise rejection. But the problem with the code above is that it is easy to get it wrong. You can forget to call reject everywhere it is needed. (As a matter of fact, you do have this error in your code. You have nested promises whose rejection won't be propagated up. They are just lost, which means if an error occurs your code will just stop, without you knowing why.) Or you may be tempted to call resolve way down in a nested function, which makes the logic hard to follow.
You can just as well do:
function foo() {
    return Promise.resolve().then(() => {
        const a = getValueSynchronously();
        const b = getOtherValueSynchronously();
        return doSomethingAsynchronous(a, b);
    }).then(x => x.someMethod());
}
You can use Promise.resolve() to enter the promisified world (hmm... "the promised land?"). In the code above, you do not have to remember to call reject. If the code inside the callback to .then fails for any reason, you get a rejected promise.
I also noticed in a number of places you return a value from the executor function you pass to new Promise. Your code would behave exactly the same if you did not use return there. To illustrate, this code:
function foo() {
    return new Promise((resolve, reject) => {
        return doSomethingAsynchronous().then(resolve, reject);
    });
}
behaves exactly the same way as this code:
function foo() {
    return new Promise((resolve, reject) => {
        doSomethingAsynchronous().then(resolve, reject);
    });
}
The value returned from the executor is ignored. End of story. If you think the values you return from your executors are doing something, then that's incorrect.

Why does the Node.js amqplib consume callback behave like a closure?

I use the Node.js amqplib module to connect to RabbitMQ.
I found that the consume callback behaves like a closure, but I couldn't understand why; I didn't intend to use a closure.
My code is below. I found that corr in returnOK still holds the value from the first call. When I fire this function a second time, corr still has the first call's value.
I think that is odd. Could someone explain this?
const corr = new Date().getTime();
try {
    const params = JSON.stringify(req.body);
    console.log('corr first =', corr);
    await ch.sendToQueue(q, Buffer.from(params), {
        deliveryMode: true,
        correlationId: corr.toString(),
        replyTo: queue.queue,
    });
    const returnOK = (msg) => {
        if (msg.properties.correlationId === corr.toString()) {
            console.info('******* Proxy send message done *******');
            res.status(HTTPStatus.OK).json('Done');
        }
    };
    await ch.consume(queue.queue, returnOK, { noAck: true });
} catch (error) {
    res.status(HTTPStatus.INTERNAL_SERVER_ERROR).json(error);
}
It appears you're calling ch.consume on every request, in effect creating a new consumer every time. You should only do that once.
What is happening is that the first consumer is picking up the messages.
To fix this, you probably want to move ch.consume outside the request handler.
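A rough sketch of what that could look like (a hypothetical structure, assuming ch and queue are created once at startup and that replies are matched by correlationId):
// Sketch only: one consumer registered at startup, matching replies to
// requests via a Map instead of adding a consumer per request.
const pending = new Map();   // correlationId -> res

await ch.consume(queue.queue, (msg) => {
    const res = pending.get(msg.properties.correlationId);
    if (res) {
        pending.delete(msg.properties.correlationId);
        console.info('******* Proxy send message done *******');
        res.status(HTTPStatus.OK).json('Done');
    }
}, { noAck: true });

// Inside the request handler: only publish, and remember where to reply.
const corr = new Date().getTime().toString();
pending.set(corr, res);
await ch.sendToQueue(q, Buffer.from(JSON.stringify(req.body)), {
    deliveryMode: true,
    correlationId: corr,
    replyTo: queue.queue,
});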

Can multiple fs.write calls appending to the same file guarantee the order of execution?

Assume we have such a program:
// imagine the string1 to string1000 are very long strings, which will take a while to be written to file system
var arr = ["string1",...,"string1000"];
for (let i = 1; i < 1000; i++) {
    fs.write("./same/path/file.txt", arr[i], {flag: "a"});
}
My question is, will string1 to string1000 be guaranteed to be appended to the same file in order?
Since fs.write is an async function, I am not sure how each call to fs.write() is really executed. I assume the call for each string is put somewhere in another thread (like a call stack?) and that, once the previous call is done, the next call can be executed.
I'm not really sure if my understanding is accurate.
Edit 1
As noted in the comments and answers, I see that fs.write is not safe for multiple writes to the same file without waiting for the callback. But what about a write stream?
If I use the following code, would it guarantee the order of writing?
// imagine the string1 to string1000 are very long strings, which will take a while to be written to file system
var arr = ["string1",...,"string1000"];
var fileStream = fs.createWriteStream("./same/path/file.txt", { "flags": "a+" });
for (let i = 1; i < 1000; i++) {
    fileStream.write(arr[i]);
}
fileStream.on("error", () => { /* do something */ });
fileStream.on("finish", () => { /* do something */ });
fileStream.end();
Any comments or corrections will be helpful! Thanks!
The docs say that
Note that it is unsafe to use fs.write multiple times on the same file without waiting for the callback. For this scenario, fs.createWriteStream is strongly recommended.
Using a stream works because streams inherently guarantee that the order of strings being written to them is the same order that is read out of them.
var stream = fs.createWriteStream("./same/path/file.txt");
stream.on('error', console.error);
arr.forEach((str) => {
    stream.write(str + '\n');
});
stream.end();
Another way to still use fs.write but also make sure things happen in order is to use promises to maintain the sequential logic.
function writeToFilePromise(str) {
    return new Promise((resolve, reject) => {
        fs.write("./same/path/file.txt", str, {flag: "a"}, (err) => {
            if (err) return reject(err);
            resolve();
        });
    });
}

// for every string,
// write it to the file,
// then write the next one once that one is finished and so on
arr.reduce((chain, str) => {
    return chain
        .then(() => writeToFilePromise(str));
}, Promise.resolve());
You can synchronize access to the file using read/write locking for Node (the rwlock package); see the following example, and you can also read the package's documentation.
var ReadWriteLock = require('rwlock');

var lock = new ReadWriteLock();
lock.writeLock(function (release) {
    fs.appendFile(fileName, addToFile, function(err, data) {
        if (err)
            console.log("write error"); // logging error message
        else
            console.log("write ok");
        release(); // unlock
    });
});
I had the same problem and wrote an NPM package to solve it for my project. It works by buffering the data in an array, and waiting until the event loop turns over, to concatenate and write the data in a single call to fs.appendFile:
const SeqAppend = require('seqappend');
const writeLog = SeqAppend('log1.txt');
writeLog('Several...');
writeLog('...logged...');
writeLog('.......events');
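The package name and usage above are the author's; the internals aren't shown, but as a rough illustration of the buffering idea described (an assumption about the approach, not the package's actual source), a sequential appender could look something like this:
const fs = require('fs');

// Buffer strings and flush them with a single fs.appendFile call at a time,
// so appends reach the file in the order they were logged.
function makeSequentialAppender(fileName) {
    let pending = [];
    let writing = false;

    function flush() {
        if (writing || pending.length === 0) return;
        writing = true;
        const data = pending.join('');
        pending = [];
        fs.appendFile(fileName, data, (err) => {
            if (err) console.error(err);
            writing = false;
            flush();               // write anything buffered in the meantime
        });
    }

    return function append(str) {
        pending.push(str + '\n');
        setImmediate(flush);       // wait until the event loop turns over
    };
}

const writeLog = makeSequentialAppender('log1.txt');
writeLog('Several...');
writeLog('...logged...');
writeLog('.......events');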
