I am trying to run a command from gjs and read the output asynchronously.
Here is my synchronous code:
let [res, pid, in_fd, out_fd, err_fd] = GLib.spawn_async_with_pipes(null,
    ['/bin/ls'], null, 0, null);

let out_reader = new Gio.DataInputStream({
    base_stream: new Gio.UnixInputStream({fd: out_fd})
});

var out = out_reader.read_until("", null);
print(out);
This works fine, but if I try to do it asynchronously it doesn't work:
let [res, pid, in_fd, out_fd, err_fd] = GLib.spawn_async_with_pipes(null,
    ['/bin/ls'], null, 0, null);

let out_reader = new Gio.DataInputStream({
    base_stream: new Gio.UnixInputStream({fd: out_fd})
});

function _SocketRead(source_object, res, user_data){
    print("hi");
    let length;
    let out = out_reader.read_upto_finish(asyncResult, length);
    print("out" + out);
    print("length" + length);
}

var out = out_reader.read_upto_async("", 0, 0, null, _SocketRead, "");

while(true){
    i = 0;
}
The callback is not called at all.
First of all, thank you for the question. I had the same underlying question, namely your opening line "I am trying to run a command from gjs and read the output asynchronously", and the details in your question were exactly what I needed to find the solution!
In your example code, the major problem is these lines:
while(true){
i = 0;
}
You are correctly trying to keep the program from terminating before you get the output, but this solution doesn't work.
JavaScript is single threaded: while computations can run concurrently in the interleaved sense, two JavaScript computations can never run in parallel. There is no way to explicitly yield the thread, so the busy loop in the question just keeps spinning and the callback never gets any CPU time.
What you want instead is to enter an event loop. If you are developing a Gnome Shell extension, you are already running in one, but if you are just running a script with Gjs, you need to explicitly start one. I'm going to use Clutter, but some other event loop will do just as well. The following code segments constitute a full, working solution.
First of all, let's start by importing needed libraries:
const GLib = imports.gi.GLib;
const Gio = imports.gi.Gio;
const Clutter = imports.gi.Clutter;
Then add the spawning and file descriptor from the question:
const [res, pid, in_fd, out_fd, err_fd] = GLib.spawn_async_with_pipes(null, ['/bin/ls'], null, 0, null);
const out_reader = new Gio.DataInputStream({
    base_stream: new Gio.UnixInputStream({fd: out_fd})
});
Call the async reading function and give it a callback (defined below, usable here thanks to JavaScript function hoisting):
out_reader.read_upto_async("", 0, 0, null, _SocketRead, "");
And start the event loop:
Clutter.init(null);
Clutter.main();
There were a couple of errors in your callback, so here is a fixed version that also terminates the event loop once the command stops producing output:
function _SocketRead(source_object, res){
    const [out, length] = out_reader.read_upto_finish(res);
    if (length > 0) {
        print("out: " + out);
        print("length: " + length);
        out_reader.read_upto_async("", 0, 0, null, _SocketRead, "");
    } else {
        Clutter.main_quit();
    }
}
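As an aside, if you'd rather not pull in Clutter just for the event loop, a plain GLib main loop works just as well. A minimal sketch of that variant (my own adaptation of the code above, not part of the original solution):

// Create the loop before scheduling the first read.
const loop = GLib.MainLoop.new(null, false);

function _SocketRead(source_object, res){
    const [out, length] = out_reader.read_upto_finish(res);
    if (length > 0) {
        print("out: " + out);
        print("length: " + length);
        out_reader.read_upto_async("", 0, 0, null, _SocketRead, "");
    } else {
        // Quit the GLib loop instead of calling Clutter.main_quit().
        loop.quit();
    }
}

out_reader.read_upto_async("", 0, 0, null, _SocketRead, "");
// Run the GLib loop instead of Clutter.init(null); Clutter.main();
loop.run();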
For further reading there are Gjs native bindings docs at https://gjs-docs.gnome.org/.
Related
I am writing an Electron app. I have this very simple function which at the moment is just returning an empty array:
async GetStuff()
{
    var result:string[] = [];
    var bExists = false;

    var exec = require('child_process').exec;
    exec("tasklist", (err:any, stdout:any, stderr:any) => {
        err = err;
        stderr = stderr;
        bExists = stdout.toLowerCase().indexOf("unixsrv.exe") > -1;
    });

    return result
}
As it is, it correctly reports if a process named "unixsrv.exe" is running or not.
The thing is, the function as written will first hit the "return result" line and only later hit the "bExists = " line.
How can I modify the above so that it does not return until I have the answer as to whether or not the process exists?
More generically: How can I synchronously test whether a process is running or not?
Thanks for any help.
I have found the answer that works for me here:
node.js how to check a process is running by the process name?
It is the answer posted there by Christiaan Maks.
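For reference, the gist of that approach is to wrap exec in a Promise and await it, so the function doesn't return until the check has completed. A rough sketch along those lines (not a verbatim copy of the linked answer; the process name unixsrv.exe comes from the question):

const { promisify } = require('util');
const exec = promisify(require('child_process').exec);

async function isProcessRunning(name) {
    // tasklist is Windows-only; on Unix you would use something like `ps -A`
    const { stdout } = await exec('tasklist');
    return stdout.toLowerCase().includes(name.toLowerCase());
}

// usage: await the answer instead of returning before the check completes
// const bExists = await isProcessRunning('unixsrv.exe');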
I'm using the JavaScript fetch streams API to consume chunked JSON asynchronously like in this answer.
My application may be receiving up to 25 small JSON objects per second (one for each frame in a video) over the span of an hour.
When the incoming chunks are large (1000+ JSON objects per chunk), my code functions well - fast, minimal memory use - it can easily receive 1,000,000 JSON objects reliably.
When the incoming chunks are smaller (5 JSON objects per chunk), my code functions poorly - slow, lots of memory consumption. The browser dies at about 50,000 JSON objects.
After doing a lot of debugging in the Developer tools, it appears the problem lies in the recursive nature of the code.
I tried to remove the recursion, but it seems required because the API is reliant on my code returning a promise to chain?!
How do I remove this recursion, or should I use something other than fetch?
Code with recursion (works)
String.prototype.replaceAll = function(search, replacement) {
    var target = this;
    return target.replace(new RegExp(search, 'g'), replacement);
};

results = []

fetch('http://localhost:9999/').then(response => {
    const reader = response.body.getReader();
    td = new TextDecoder("utf-8");
    buffer = "";
    reader.read().then(function processText({ done, value }) {
        if (done) {
            console.log("Stream done.");
            return;
        }
        try {
            decoded = td.decode(value);
            buffer += decoded;
            if (decoded.length != 65536){
                toParse = "[" + buffer.trim().replaceAll("\n", ",") + "]";
                result = JSON.parse(toParse);
                results.push(...result);
                console.log("Received " + results.length.toString() + " objects")
                buffer = "";
            }
        }
        catch(e){
            // Doesn't need to be reported, because partial JSON result will be parsed next time around (from buffer).
            //console.log("EXCEPTION:"+e);
        }
        return reader.read().then(processText);
    })
});
Code without recursion (doesn't work)
String.prototype.replaceAll = function(search, replacement) {
    var target = this;
    return target.replace(new RegExp(search, 'g'), replacement);
};

results = []
finished = false

fetch('http://localhost:9999/').then(response => {
    const reader = response.body.getReader();
    td = new TextDecoder("utf-8");
    buffer = "";
    lastResultSize = -1

    while (!finished)
        if (lastResultSize < results.length)
        {
            lastResultSize = results.length;
            reader.read().then(function processText({ done, value }) {
                if (done) {
                    console.log("Stream done.");
                    finished = true;
                    return;
                }
                else
                    try {
                        decoded = td.decode(value);
                        //console.log("Received chunk " + decoded.length.toString() + " in length");
                        buffer += decoded;
                        if (decoded.length != 65536){
                            toParse = "[" + buffer.trim().replaceAll("\n", ",") + "]";
                            result = JSON.parse(toParse);
                            results.push(...result);
                            console.log("Received " + results.length.toString() + " objects")
                            buffer = "";
                            //console.log("Parsed chunk " + toParse.length.toString() + " in length");
                        }
                    }
                    catch(e) {
                        // Doesn't need to be reported, because partial JSON result will be parsed next time around (from buffer).
                        //console.log("EXCEPTION:"+e);
                    }
            })
        }
});
For completeness, here is the python code I'm using on the test server. Note the line containing sleep which changes chunking behavior:
import io
import urllib
import inspect
from http.server import HTTPServer, BaseHTTPRequestHandler
from time import sleep

class TestServer(BaseHTTPRequestHandler):
    def do_GET(self):
        args = urllib.parse.parse_qs(self.path[2:])
        args = {i: args[i][0] for i in args}
        response = ''
        self.send_response(200)
        self.send_header('Content-type', 'text/html')
        self.send_header('Access-Control-Allow-Origin', '*')
        self.send_header('Transfer-Encoding', 'chunked')
        self.end_headers()
        for i in range(1000000):
            self.wfile.write(bytes(f'{{"x":{i}, "text":"fred!"}}\n', 'utf-8'))
            sleep(0.001)  # Comment this out for bigger chunks sent to the client!

def main(server_port: "Port to serve on." = 9999, server_address: "Local server name." = ''):
    httpd = HTTPServer((server_address, server_port), TestServer)
    print(f'Serving on http://{httpd.server_name}:{httpd.server_port} ...')
    httpd.serve_forever()

if __name__ == '__main__':
    main()
The part you're missing is that the function passed to .then() is always called asynchronously, i.e. with an empty stack. So there is no actual recursion here. This is also why your 'without recursion' version doesn't work.
The simple solution to this is to use async functions and the await statement. If you call read() like this:
const {value, done} = await reader.read();
...then you can call it in a loop and it will work how you would expect.
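For illustration, a non-recursive version of the read loop could look roughly like this (a sketch based on the question's code; the line-based parsing is a simplification that assumes each JSON object is terminated by a newline, as in the test server):

async function readAllObjects(url) {
    const response = await fetch(url);
    const reader = response.body.getReader();
    const td = new TextDecoder("utf-8");
    let buffer = "";
    const results = [];

    while (true) {
        const { done, value } = await reader.read();
        if (done) break;

        buffer += td.decode(value, { stream: true });

        // parse only complete lines; keep any trailing partial line in the buffer
        const lines = buffer.split("\n");
        buffer = lines.pop();
        for (const line of lines) {
            if (line.trim()) results.push(JSON.parse(line));
        }
    }
    console.log("Stream done, received " + results.length + " objects");
    return results;
}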
I don't know specifically where your memory leak is, but your use of global variables looks like a problem. I recommend you always put 'use strict'; at the top of your code so the compiler will catch these problems for you. Then use let or const whenever you declare a variable.
I recommend you use TextDecoderStream to avoid problems when a character is split between multiple chunks. You will also have issues when a JSON object is split between multiple chunks.
See Append child writable stream demo for how to do this safely (but note that you need TextDecoderStream where that demo has "TextDecoder").
Note also the use of a WritableStream in that demo. Firefox doesn't support it yet AFAIK, but WritableStream provides much easier syntax to consume chunks without having to explicitly loop or recurse. You can find the web streams polyfill here.
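To give an idea of the shape, consuming the response with TextDecoderStream and a WritableStream might look something like this (a sketch only, assuming support for both or the polyfill; it reuses the results array from the question):

fetch('http://localhost:9999/').then(response => {
    let buffer = "";
    return response.body
        .pipeThrough(new TextDecoderStream())   // handles characters split across chunks
        .pipeTo(new WritableStream({
            write(chunk) {
                buffer += chunk;
                // parse complete lines; a partial JSON object stays in the buffer
                const lines = buffer.split("\n");
                buffer = lines.pop();
                for (const line of lines) {
                    if (line.trim()) results.push(JSON.parse(line));
                }
            },
            close() {
                console.log("Stream done.");
            }
        }));
});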
The main function file main.js has:
var nLastPingTime = 0,
    nLastPingNumber = 0;

module.exports = {
    compareData: function(nPingTime, nLastPingNumber){
        nLastPingTime = nPingTime;
        nLastPingNumber = nLastPingNumber;
    }
};
Now two other files dataGenOne.js and dataGenTwo.js look something like this:
const mainDataHolder = require('./main.js');
//Gets data from some API's here
mainDataHolder.compareData(nPingTime, nLastPingNumber);
Then to start we run:
node dataGenOne.js
and
node dataGenTwo.js
The problem is that the main.js file doesn't share nLastPingTime and nLastPingNumber mutually between both sets of data.
For example, when looking at nLastPingNumber, it's the number from dataGenOne.js specifically and not from dataGenTwo.js at all (or vice versa).
I believe this is because they are running on two separate threads.
Is there any way to achieve what I'm trying to do? The alternative would be to connect a database or write to a file, but if possible I would rather not do that.
To achieve what you are attempting, that is to have two node processes communicate, you are going to have to create a third process, let's call it spawn, that spawns both of the other processes (let's call them p1 & p2) and then handles communication between p1 & p2.
So spawn would be a very simple process that would just wire the events for p1 & p2 and then forward those events to the other process. I don't have a working example of this but if you take a look here you should be able to piece that together pretty quickly.
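To make that concrete, the coordinating process could be as small as this (a sketch of the idea using child_process.fork and its built-in IPC channel; the dataGen file names are taken from the question):

// spawn.js - relay messages between the two data generators
const { fork } = require('child_process');

const p1 = fork('./dataGenOne.js');
const p2 = fork('./dataGenTwo.js');

// forward every message from one child to the other
p1.on('message', msg => p2.send(msg));
p2.on('message', msg => p1.send(msg));

Inside dataGenOne.js and dataGenTwo.js you would then call process.send(...) with the ping values and listen with process.on('message', ...), instead of requiring main.js directly.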
Adam H pointed me in the right direction. The correct way to do this is in fact child_process.
Below are the code changes:
The main function file main.js now has:
var cp = require('child_process');
var childDataOne = cp.fork('./dataGenOne.js', [process.argv[2]], { silent: true });
var childDataTwo = cp.fork('./dataGenTwo.js', [process.argv[3]], { silent: true });

childDataOne.stdout.on('data', function(data) {
    console.log('parent: ' + data);
    compareData(data);
    //Here is where the output goes
});

childDataTwo.stdout.on('data', function(data) {
    console.log('parent: ' + data);
    compareData(data);
    //Here is where the output goes
});
Now two other files dataGenOne.js and dataGenTwo.js changed to something like this:
process.stdin.resume();
var passingString = nPingTime + "," + nLastPingNumber;
process.stdout.write(passingString);
To start running we only have to do:
node main.js param1 param2
Instead of running dataGenOne.js and dataGenTwo.js individually.
This correctly allows the child processes to pass data back to the parent process. main.js is listening with stdout.on and the two dataGen child processes are passing the data with stdout.write.
So to avoid the complexity of storing these variables somewhere, merge the processes, but reorganize your code to make it easier to navigate.
In main.js (the compare function?), remove the variables from the top, but make sure the compare function returns the latest ping values along with the comparison data, i.e.
return {
    data,
    lastPingTime,
    lastPingNumber
}
Move the API stuff into separate files so you can do this:
var dataSetOne = require('./dataOne');
var dataSetTwo = require('./dataTwo');
var datasets = [dataSetOne, dataSetTwo];

// initialize the values
var lastPingTime = 0;
var lastPingNumber = 0;

// loop through the datasets
for (var i = 0, len = datasets.length; i < len; i++) {
    let currentDataSet = datasets[i];
    const results = comparePrices(lastPingTime, lastPingNumber, aAsks, aBids);
    // update the ping info here
    lastPingTime = results.lastPingTime;
    lastPingNumber = results.lastPingNumber;
}
And if you have a lot of datasets, make an 'index.js' file that does all those requires and just returns the datasets array.
Hope that helps!
I'm trying to copy a sqlite database from the data folder in my extension directory, to the profile folder, in order to use it.
So for now, I'm trying this:
const {Cc, Ci, Cu} = require("chrome");
const {NetUtils} = Cu.import("resource://gre/modules/NetUtil.jsm");
const data = require('sdk/self').data;
Cu.import("resource://gre/modules/Services.jsm");
Cu.import("resource://gre/modules/FileUtils.jsm");

var file = Cc["@mozilla.org/file/directory_service;1"].
           getService(Ci.nsIProperties).
           get("TmpD", Ci.nsIFile);
file.append("searchEngines.sqlite");
file.createUnique(Ci.nsIFile.NORMAL_FILE_TYPE, 0666);

// Then, we need an output stream to our output file.
var ostream = Cc["@mozilla.org/network/file-output-stream;1"].createInstance(Ci.nsIFileOutputStream);
ostream.init(file, -1, -1, 0);

// Finally, we need an input stream to take data from.
var iStreamData = NetUtil.ioService.newChannel(data.url("searchEngines.sqlite"), null, null).open();
let istream = Cc["@mozilla.org/io/string-input-stream;1"].createInstance(Ci.nsIStringInputStream);
istream.setData(iStreamData, iStreamData.length);

NetUtil.asyncCopy(istream, ostream, function(aResult) {
    console.log(aResult); // return 0
})

console.log(FileUtils.getFile("ProfD", ["searchEngines.sqlite"]).exists()); // return false

let dbConn = Services.storage.openDatabase(file);
The file doesn't seem to exist, since the console.log(...exists()) returns false, and it is not populated (the console.log(aResult) returns 0).
Where is my mistake, and is there a better way to do that?
Besides that it uses sync I/O (opening the channel with .open instead of .asyncOpen), the NetUtil.asyncCopy operation is still async, meaning the code
NetUtil.asyncCopy(istream, ostream, function(aResult) {
    console.log(aResult); // return 0
})
console.log(FileUtils.getFile("ProfD", ["searchEngines.sqlite"]).exists()); // return false
let dbConn = Services.storage.openDatabase(file);
will try to open the file before the copy likely finishes!
However, file.exists() will likely be true, because you already opened the file for writing. It's just that the file is still blank because the data copy isn't done (or even started) yet. (Actually, it is false, because you're checking searchEngines.sqlite in ProfD and not TmpD, but if you corrected that, the previous statement would apply.)
You can only use the file when/after your callback to .asyncCopy is done, e.g.
NetUtil.asyncCopy(istream, ostream, function(aResult) {
    console.log(aResult);
    console.log(FileUtils.getFile("ProfD", ["searchEngines.sqlite"]).exists());
    let dbConn = Services.storage.openDatabase(file);
    // ...
});
PS: You might want to .asyncOpen the channel, then use NetUtil.asyncFetch and pass the resulting stream to .asyncCopy to be truly async for smallish files, since this caches the contents in memory first.
For large files you could create a variant of the NetUtil.asyncFetch implementation that feeds the .outputStream end directly to NetUtil.asyncCopy. That is a bit more complicated, so I won't be writing this up in detail until somebody is truly interested in this and asks the corresponding question.
Edit: so here is how I'd write it:
const data = require('sdk/self').data;
Cu.import("resource://gre/modules/Services.jsm");
Cu.import("resource://gre/modules/NetUtil.jsm");

function copyDataURLToFile(url, file, callback) {
    NetUtil.asyncFetch(url, function(istream) {
        var ostream = Cc["@mozilla.org/network/file-output-stream;1"].
                      createInstance(Ci.nsIFileOutputStream);
        ostream.init(file, -1, -1, Ci.nsIFileOutputStream.DEFER_OPEN);
        NetUtil.asyncCopy(istream, ostream, function(result) {
            callback && callback(file, result);
        });
    });
}

var file = Services.dirsvc.get("TmpD", Ci.nsIFile);
file.append("searchEngines.sqlite");

copyDataURLToFile(data.url("searchEngines.sqlite"), file, function(file, result) {
    console.log(result);
    console.log(file.exists());
    console.log(file.fileSize);
});
Try using OS.File; it's much more straightforward.
Cu.import("resource://gre/modules/FileUtils.jsm");
Cu.import("resource://gre/modules/osfile.jsm");

var fromPath = FileUtils.getFile("ProfD", ["searchEngines.sqlite"]).path;
var toPath = FileUtils.getFile("TmpD", ["searchEngines.sqlite"]).path;

var promise = OS.File.copy(fromPath, toPath);

var dbConn;
promise.then(
    function(aStat) {
        alert('success will now open connection');
        // openDatabase expects an nsIFile, so wrap the path
        dbConn = Services.storage.openDatabase(new FileUtils.File(toPath));
    },
    function(aReason) {
        console.log('promise rejected', aReason);
        alert('copy failed, see console for details');
    }
);
I am looking for a way of getting the process memory of any process running.
I am doing a web application. I have a server (through Nodejs), my file app.js, and an agent sending information to app.js through the server.
I would like to find a way to get the memory usage of any process (in order to then send this information to the agent).
Do you have any idea how I can do this? I have searched on Google but I haven't found my answer :/
Thank you
PS : I need a windows compatible solution :)
Windows
For Windows, use tasklist instead of ps (a Windows sketch follows the code below).
In the example below, I use the ps Unix program, so it's not Windows compatible.
Here, the %MEM is the 4th element of each finalProcess entry.
On Windows, the memory column is the 5th element.
var myFunction = function(processList) {
    // here, your code
};

var parseProcess = function(err, process, stderr) {
    var process = (process.split("\n")),
        finalProcess = [];

    // 1st line is a header row
    // if Windows, I should start at 2
    for (var i = 1; i < process.length; i++) {
        finalProcess.push(cleanArray(process[i].split(" ")));
    }

    console.log(finalProcess);

    // callback to another function
    myFunction(finalProcess);
};

var getProcessList = function() {
    var exec = require('child_process').exec;
    exec('ps aux', parseProcess.bind(this));
}

// thx http://stackoverflow.com/questions/281264/remove-empty-elements-from-an-array-in-javascript
function cleanArray(actual){
    var newArray = new Array();
    for(var i = 0; i < actual.length; i++){
        if (actual[i]){
            newArray.push(actual[i]);
        }
    }
    return newArray;
}

getProcessList();
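Since the question asks for a Windows-compatible solution, here is a rough sketch of the same idea using tasklist with CSV output (an assumption on my part, not tested against the asker's setup; with /fo csv the field order is Image Name, PID, Session Name, Session#, Mem Usage, so memory usage is the 5th field):

// Windows variant: parse `tasklist /fo csv /nh` (no header, quoted CSV fields)
var getWindowsProcessList = function(callback) {
    var exec = require('child_process').exec;
    exec('tasklist /fo csv /nh', function(err, stdout) {
        if (err) { return callback(err); }
        var finalProcess = stdout.trim().split('\r\n').map(function(line) {
            // strip the surrounding quotes and split on the quoted separators
            return line.replace(/^"|"$/g, '').split('","');
        });
        callback(null, finalProcess);
    });
};

getWindowsProcessList(function(err, finalProcess) {
    if (!err) {
        // finalProcess[i][4] is the "Mem Usage" column, e.g. "12,345 K"
        console.log(finalProcess);
    }
});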