Async functions in Node.js module - javascript

I'm kind of new to JavaScript/Node.js, so bear with me. Also, my English may not be that good.
I'm trying to write a Node.js module, module.js, with functions that do some long-running work. Kind of like this:
var exec = require('child_process').exec;

module.exports.myFunction1 = function(callback) {
    // this function runs for like 3 seconds
    exec('long running shell command', function(err, stdout, stderr) {
        callback(stdout);
    });
};

module.exports.myFunction2 = function(callback) {
    // this function runs for like 1 second
    exec('long running shell command', function(err, stdout, stderr) {
        callback(stdout);
    });
};
Now, I also have a main.js where I invoke these functions:
var module = require('./module.js');
var output1 = module.myFunction1();
var output2 = module.myFunction2();
My first problem is that my functions return undefined. I understand that this is because the exec function runs asynchronously, so my function returns before exec has finished. This is basically what I want, but how can I make my function invoke its callback only once exec has finished?
I also don't want the functions to block Node.js when I invoke them in my main.js. So basically, the desired output of the above code would be...
Output myFunction2: Output2
Output myFunction1: Output1
...because myFunction2() finishes faster than myFunction1().
I tried many, many solutions I found online but nothing seems to work properly.
Thank you very much in advance!
--- EDIT ---
OK, I now have a partially working solution. Right now my code looks like this:
module.js
var Q = require('q');
require('shelljs/global');

module.exports = {
    myFunction1: function () {
        var deferred = Q.defer();
        var result = exec('long running command', {silent: true}).output.toString();
        if (ok) {
            deferred.resolve(result);
        }
        else {
            deferred.reject('Error');
        }
        return deferred.promise;
    },
    myFunction2: function () {
        var deferred = Q.defer();
        var result = exec('long running command', {silent: true}).output.toString();
        if (ok) {
            deferred.resolve(result);
        }
        else {
            deferred.reject('Error');
        }
        return deferred.promise;
    }
};
My main.js looks like this now:
var module = require('./module');
module.myFunction1()
    .then(function (result) {
        console.log('Result 1: ' + result);
    })
    .fail(function (error) {
        console.log(error);
    });

module.myFunction2()
    .then(function (result) {
        console.log('Result 2: ' + result);
    })
    .fail(function (error) {
        console.log(error);
    });
And I get the expected output:
Result 1: Output that myFunction1() generated
Result 2: Output that myFunction2() generated
My problem now is that myFunction1() always logs before myFunction2(), even if myFunction2() finishes first. Did I misunderstand something about Promises? Shouldn't myFunction2()'s handler run as soon as it finishes?
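A likely explanation, for anyone hitting the same thing: shelljs's exec runs synchronously when no callback is passed, so each deferred here is only resolved after the command has already blocked and completed. A sketch of the asynchronous form, reusing the Q-based module above (the callback signature assumes a newer shelljs release):

var Q = require('q');
require('shelljs/global');

module.exports.myFunction1 = function () {
    var deferred = Q.defer();
    // passing a callback switches shelljs exec into asynchronous mode
    exec('long running command', {silent: true}, function (code, stdout, stderr) {
        if (code === 0) {
            deferred.resolve(stdout);
        } else {
            deferred.reject('Error: exit code ' + code);
        }
    });
    return deferred.promise;
};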

Your functions take callbacks. Those parameters are functions that are called on completion, which makes it easy to do:
var exec = require('child_process').exec;

module.exports.myFunction1 = function(callback) {
    // this function runs for like 3 seconds
    exec('long running shell command', function(err, stdout, stderr) {
        callback(stdout);
    });
};

module.myFunction1(function(stdout) {
    console.log("Output myFunction1: " + stdout);
});
Using a callback, in your case, is the simplest solution, but you should be aware that there are other patterns to deal with asynchronous execution. Here's a good overview. For example, a popular solution, especially interesting when you have to chain asynchronous continuations, is to use promises, which allow:
var exec = require('child_process').exec;

module.exports.myFunction1 = function() {
    return new Promise(function(resolve, reject) {
        // this function runs for like 3 seconds
        exec('long running shell command', function(err, stdout, stderr) {
            if (err) reject(err);
            else resolve(stdout); // a promise carries a single value
        });
    });
};

module.myFunction1()
    .then(function(stdout) {
        console.log("Output myFunction1: " + stdout);
    })
    .then(module.myFunction2)
    .then(function(stdout) {
        console.log("Output myFunction2: " + stdout);
    });

First, I would suggest handling errors (err, stderr) in your module. As you can see, each of your functions takes one argument, a callback. When the asynchronous work completes, the callback function is called. So you can use it like this:
module.myFunction1(function(stdout) {
    console.log("Output myFunction1: " + stdout);
    module.myFunction2(function(stdout2) {
        console.log("Output myFunction2: " + stdout2);
    });
});
The exec function also takes a callback (whose first argument is the error err; these are called error-first callbacks). There are other options for handling the control flow of asynchronous code (e.g. the async library). You can also learn about Promises, which are today's alternative to error-first callbacks.
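For instance, an error-first version of the module might look like this (a sketch; forwarding err unchanged is an assumption, and your module may want to inspect stderr too):

var exec = require('child_process').exec;

module.exports.myFunction1 = function(callback) {
    exec('long running shell command', function(err, stdout, stderr) {
        if (err) {
            return callback(err); // error comes first, by convention
        }
        callback(null, stdout); // a null error signals success
    });
};

// usage
module.myFunction1(function(err, stdout) {
    if (err) {
        return console.error(err);
    }
    console.log("Output myFunction1: " + stdout);
});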

Callback functions don't return values directly... what you need is to set up what should happen when the value is ready. Something like this:
my_function(what_should_happen_when_my_function_has_finished);
exactly:
myFunction1(function(data) { console.log('hello! I have finished, and received: ' + data); });

Related

Meteor WrapAsync working asynchronously

I am aware of this stackoverflow answer and I have been using it to help me.
However something weird happens when I apply the code to my situation.
It seems that the wrapAsync function, called execSync in my code, runs and outputs what it is supposed to; however, it still finishes last, as it did before I had the wrapAsync in place.
The code
Meteor.methods({
    'distinctSpecs'({}) {
        console.log("called");
        var json_categories_clean = [];
        var execSync =
            Meteor.wrapAsync(require("child_process").exec,
                             require("child_process"));
        var returned_data =
            execSync(
                "mongo products --eval \"var collection='laptops', outputFormat='json'\" variety.js",
                { cwd: "/home/jonathan/Documents/variety-master" },
                (err, stdout, stderr) => {
                    if (err) {
                        console.error(err);
                        console.log(stdout);
                        console.log(stderr);
                        return;
                    }
                    console.log("waited for this");
                    var json_categories =
                        JSON.parse(stdout.substring(
                            stdout.indexOf('[', stdout.indexOf('[') + 1),
                            stdout.lastIndexOf(']') + 1));
                    for (var x = 0; x < json_categories.length; x++) {
                        json_categories_clean.push(json_categories[x]["_id"]);
                    }
                    console.log("returning inner");
                    return json_categories_clean;
                });
        console.log("returning outer");
        return returned_data;
    }
});
The output
called
returning outer
waited for this
returning inner
After formatting your code it's pretty clear that you are invoking wrapAsync incorrectly:
Meteor.wrapAsync(require("child_process").exec,
                 require("child_process"))
you probably want:
const exec = Npm.require("child_process").exec;
const execSync = Meteor.wrapAsync(function(command, options, callback) {
    exec(command, options, function(err, stdout, stderr) {
        callback(err, stdout); // error-first, single result
    });
});
The last parameter to the function you wrap needs to be a function that takes an error and a result as parameters (and nothing else).
Also, once you have the wrapped function, you don't provide a callback anymore; you wait for the return value instead.
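A usage sketch under that assumption, reusing the command and cwd from the question:

Meteor.methods({
    'distinctSpecs'() {
        // no callback here: the wrapped call blocks this fiber and returns stdout
        var stdout = execSync(
            "mongo products --eval \"var collection='laptops', outputFormat='json'\" variety.js",
            { cwd: "/home/jonathan/Documents/variety-master" });
        return stdout;
    }
});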

How can I properly call a list of async functions in order? [duplicate]

This question already has an answer here:
How to sequentially run promises with Q in Javascript?
I am attempting to write a "robocopy /mir"-like function in Node.js and cannot seem to wrap my head around how to properly execute several async functions in order.
Some background:
The script is run on Windows, therefore, I needed to find some way to copy files while retaining modification time AND receiving progress notifications.
To solve this problem, I went ahead and wrote my copy function in .NET (calling it with Edge.js); this copy function simply calls back to a Node function reporting file copy progress. This piece works flawlessly.
To have the files copy in order, my first thought was to do something like the following:
var promises = [];

Object.keys(filesToCopy).forEach(function(key) {
    var deferred = q.defer();
    var payload = {
        sourcePath: key,
        destPath: filesToCopy[key],
        progressCallback: progressCallback
    };
    console.log('Copying %s...', key);
    // Edge.js called here
    copyFile(payload, deferred.makeNodeResolver());
    deferred.promise.then(function(result) {
        console.log('%s complete.', result);
    }, function(err) {
        console.error('Error: ', err.message);
    });
    promises.push(deferred.promise);
});
Unfortunately, this (as expected) begins copying each file as soon as the .NET function is called; therefore, I get progress notifications for all files at once, giving me output like:
1%
2%
1%
2%
3%
3%
It seems like I need a way to queue up the work to be done before firing it off all at once, with each item completing before the next proceeds. When all items are complete I would need to be notified. The solution seems simple enough but continues to elude me as every angle I try comes with another issue. Any help would be greatly appreciated, thank you!
EDIT: As stated in my comment, the answer Bergi provided was utilizing a function which did in fact return a promise whilst my Edge.js function did not. I was able to resolve my issue first by utilizing an array instead of an object for filesToCopy, then doing something like so:
return filesToCopy.reduce(function(prev, curr) {
    return prev.then(function() {
        var deferred = q.defer();
        copyFile(curr, function(err, result) {
            deferred.resolve(result);
            console.log('Completed %s', result);
        });
        return deferred.promise;
    });
}, q());
This may not be the best way to do this but it works for my uses.
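For reference, here is the same sequential reduce with native Promises, including the "notify me when all items are complete" step from the original question (a sketch; copyFile is assumed to take a node-style callback, as above):

filesToCopy.reduce(function(prev, curr) {
    return prev.then(function() {
        return new Promise(function(resolve, reject) {
            copyFile(curr, function(err, result) {
                if (err) return reject(err);
                console.log('Completed %s', result);
                resolve(result);
            });
        });
    });
}, Promise.resolve())
.then(function() {
    console.log('All copies finished in order.');
})
.catch(function(err) {
    console.error('Copy failed:', err.message);
});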
Maybe something like this will do the trick:
var $j = function(val, space) {
    return JSON.stringify(val, null, space || '');
};

var log = function(val) {
    document.body.insertAdjacentHTML('beforeend', '<div><pre>' + val + '</pre></div>');
};

var files = '12345'.split('').map(function(v) {
    return {
        name: 'file_' + v + '.js',
        load: function() {
            var cur = this;
            var pro = new Promise(function(resolve, reject) {
                log('loading : ' + cur.name);
                // we simulate the loading stuff
                setTimeout(function() {
                    resolve(cur.name);
                }, 1 * 1000);
            }).then(function(val) {
                // once loaded
                log('loaded : ' + val);
                return val;
            });
            return pro;
        }
    };
});
files.reduce(function(t, v) {
    t.promise = t.promise.then(function() {
        return v.load();
    });
    return t;
}, {
    promise: Promise.resolve(1)
});
Use async.eachSeries on arrays, or async.forEachOfSeries on objects.
Looping an object
var async = require('async');
var filesObject = {'file/path/1': {}, 'file/path/2': {}};

async.forEachOfSeries(filesObject, copyFileFromObj, allDone);

function copyFileFromObj(value, key, callback) {
    console.log('Copying file ' + key + '...');
    callback(); // when done
}

function allDone(err) {
    if (err) {
        console.error(err.message);
    }
    console.log('All done.');
}
Looping an array
var async = require('async');
var filesArray = ['file/path/1', 'file/path/2'];

async.eachSeries(filesArray, copyFile, allDone);

function copyFile(file, callback) {
    console.log('Copying file ' + file + '...');
    callback(); // when done
}

function allDone(err) {
    if (err) {
        console.error(err.message);
    }
    console.log('All done.');
}
Working example here: https://tonicdev.com/edinella/sync-loop-of-async-operations

Mocha test case - are nested it( ) functions kosher?

I have this case where I think I want to have nested it() test cases in a Mocha test. I am sure this is wrong, and I don't see any recommendations to do what I am doing, but I don't really know of a better way at the moment.
Basically, I have a "parent" test, and inside the parent test there's a forEach loop with all the "child" tests:
it('[test] enrichment', function (done) {
    var self = this;
    async.each(self.tests, function (json, cb) {
        //it('[test] ' + path.basename(json), function (done) {
        var jsonDataForEnrichment = require(json);
        jsonDataForEnrichment.customer.accountnum = "8497404620452729";
        jsonDataForEnrichment.customer.data.accountnum = "8497404620452729";
        var options = {
            url: self.serverURL + ':' + self.serverPort + '/event',
            json: true,
            body: jsonDataForEnrichment,
            method: 'POST'
        };
        request(options, function (err, response, body) {
            if (err) {
                return cb(err);
            }
            assert.equal(response.statusCode, 201, "Error: Response Code");
            cb(null);
        });
        //});
    }, function complete(err) {
        done(err);
    });
});
as you can see, two separate lines are commented out - I want to include them so that I can easily see the results of each separate test, but then I have this awkward situation of firing the callback for the test alongside the callback for async.each.
Has anyone seen this type of situation before, and do you know of a good solution that lets the tester easily see the results of each test in a loop?
Don't nest it calls. Call them synchronously.
Nested it calls are never okay in Mocha. Nor are it calls performed asynchronously. (The test can be asynchronous, but you cannot call it asynchronously.) Here's a simple test:
describe("level 1", function () {
describe("first level 2", function () {
it("foo", function () {
console.log("foo");
it("bar", function () {
console.log("bar");
});
});
setTimeout(function () {
it("created async", function () {
console.log("the asyncly created one");
});
}, 500);
});
describe("second level 2", function () {
// Give time to the setTimeout above to trigger.
it("delayed", function (done) {
setTimeout(done, 1000);
});
});
});
If you run this, the nested test bar is ignored, and the asynchronously created test (created async) is ignored as well.
Mocha has no defined semantics for these kinds of calls. When I ran my test with the latest version of Mocha at the time of writing (2.3.3), it just ignored them. I recall that an earlier version of Mocha would have recognized the tests but would have attached them to the wrong describe block.
I think the need for dynamic tests is relatively common (data-driven tests?), and there is a common use for dynamically generated it test cases.
I think it could be easier to manage test-case completion if the tests are generated synchronously and executed in series. This way you wouldn't have to worry about managing nested async done's. Since request is async (I'm assuming), your test cases will still mostly be executing concurrently.
describe('[test] enrichment', function () {
    var self = this;
    _.each(self.tests, function (json) {
        it('[test] ' + path.basename(json), function (done) {
            var jsonDataForEnrichment = require(json);
            jsonDataForEnrichment.customer.accountnum = "8497404620452729";
            jsonDataForEnrichment.customer.data.accountnum = "8497404620452729";
            var options = {
                url: self.serverURL + ':' + self.serverPort + '/event',
                json: true,
                body: jsonDataForEnrichment,
                method: 'POST'
            };
            request(options, function (error, response, body) {
                if (error) {
                    return done(error);
                }
                assert.equal(response.statusCode, 201, "Error: Response Code");
                done();
            });
        });
    });
});

Stubbing a function in sinon to return different value every time it is called

I have a function as shown below:
function test(parms) {
    var self = this;
    return this.test2(parms)
        .then(function (data) {
            if (data) {
                return;
            }
            else {
                return Bluebird.delay(1000)
                    .then(self.test.bind(self, parms));
            }
        }.bind(self));
}
I am trying to write unit tests for this function. I am using sinon.stub to mock the functionality of the function test2.
I wrote a test case where test2 returns true, and the test function therefore successfully completes execution. However, I want a test case where test2 returns false on the first call, the function waits for the delay, and then test2 returns true on the next call. For that I wrote my test case as below:
var clock;
var result;
var test2stub;
var count = 0;

before(function () {
    clock = sinon.useFakeTimers();
    // object is defined before
    test2stub = sinon.stub(object, "test2", function () {
        console.log("Count is: " + count);
        if (count === 0) {
            return (Bluebird.resolve(false));
        }
        else if (count === 1) {
            return (Bluebird.resolve(true));
        }
    });
    clock.tick(1000);
    object.test("xyz")
        .then(function (data) {
            result = data;
        });
    clock.tick(1000);
    count = count + 1;
    clock.tick(1000);
});

after(function () {
    test2stub.restore();
    clock.restore();
});

it("result should be undefined. Check if test2 returned false first & true next",
    function () {
        expect(result).to.be.undefined;
    });
In the logs, it shows that count only ever has the value 0.
The code of test is actually incorrect. It never returns data on success; it returns undefined. The function should return data on success, otherwise you won't be able to use it as a parameter for the next .then handler:
.then(function (data) {
    if (data) {
        return data;
    }
Next, you make wrong assumptions about the function test. It will NEVER return undefined. The function is rather dangerous: it will call itself forever in an endless chain of promises until it squeezes some non-null data out of test2.
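If that unbounded recursion is a concern, one option is to cap the retries (a sketch only; the attempts parameter and the cap of 5 are assumptions, not part of the original code):

function test(parms, attempts) {
    var self = this;
    attempts = attempts || 0;
    return this.test2(parms).then(function (data) {
        if (data) {
            return data;
        }
        if (attempts >= 5) { // hypothetical cap
            throw new Error('test2 produced no data after ' + attempts + ' retries');
        }
        return Bluebird.delay(1000)
            .then(self.test.bind(self, parms, attempts + 1));
    });
}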
One shouldn't launch the tested code in a before or beforeEach section. before and after are meant to prepare the environment, like faking timers and then restoring them.
One reason for calling the tested code in the it handler is that promises need special handling there. The handler should accept a parameter, which indicates that the test is asynchronous, and the test engine then gives it a timeout (usually 10 secs) to complete. The test is expected to either call done() to indicate the test is successful, or call done(error) if it failed and there is an error object (or expect threw an exception).
Also, you should advance the fake timer after the async operation has started. In your code, the first clock.tick is actually useless.
There is a trick to using fake timers: you can move time manually, but it doesn't move on its own. For the first tick this works well and the promise is executed. However, once the .delay(1000) promise is returned, there is no command to move time forward. So, to finish the test correctly (without modifying the tested code) you also have to stub Bluebird.delay.
I would change the stub implementations and do something like this:
describe("test2", function(){
beforeEach(function(){
clock = sinon.useFakeTimers();
test2stub = sinon.stub(object,"test2", function () {
console.log("Count is: " + count);
return (Bluebird.resolve((count++) > 0));
});
var _delay = Bluebird.delay.bind(Bluebird);
bluebirdDelayStub = sinon.stub(Bluebird,"delay", function (delay) {
var promise = _delay(delay);
clock.tick(1000);
return promise;
});
})
it("should eventually return true", function (done) {
object.test("xyz")
.then(function (data) {
expect(data).to.be.true;
expect(count).to.equal(2);
done();
})
.catch(function(err){
done(err);
});
clock.tick(1000);
});
after(function () {
test2stub.restore();
clock.restore();
bluebirdDelayStub.restore();
});
})
PS I verified this code under Node.js 0.10.35 and Bluebird 2.9.34
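As an aside, newer Sinon versions can express "different value per call" directly, which could replace the count-based stub above (this assumes a Sinon release with the onFirstCall/onSecondCall API):

test2stub = sinon.stub(object, "test2");
test2stub.onFirstCall().returns(Bluebird.resolve(false));
test2stub.onSecondCall().returns(Bluebird.resolve(true));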

Wrapping MongoDB calls within a Promise

I'm using Meteor (1.0.3) in general, but for one particular case I'm using a raw server-side route to render a file, so I'm outside a Meteor method.
I'm using node fs.writeFile/fs.readFile and exec commands to call out to Linux command-line utilities too.
My only point in bringing this up is that the node calls are async, of course. And so I opted to use the node Q library to manage the async callbacks.
This all worked until I added a line to call out to the MongoDB database.
A call like so:
var record_name = Mongo_Collection_Name.findOne({_personId: userId}, {fields: {'_id': 0}});
Produces the following error:
[Error: Can't wait without a fiber]
The error only occurs when I wrap the function in a Promise.
For example, something like this will throw:
getRecordExample = function () {
    var deferred = Q.defer();
    var record_name = Mongo_Collection_Name.findOne({_personId: userId}, {fields: {'_id': 0}});
    // do something
    // if no error
    deferred.resolve(record_name);
    return deferred.promise;
};
If I use the Meteor Fibers library I don't get the error:
getRecordExample = function () {
    var deferred = Q.defer();
    Fiber = Npm.require('fibers');
    var record_name;
    Fiber(function () {
        record_name = Mongo_Collection_Name.findOne({_personId: userId});
    }).run();
    // do something
    // if no error
    deferred.resolve(record_name);
    return deferred.promise;
};
but the record_name variable is undefined outside the fiber, so as far as I can tell there is no way to pass the variable out of the Fiber scope.
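One variation that sidesteps the scope problem is to resolve the deferred from inside the fiber, so no variable has to escape it (a sketch based on the code above; untested against the full route):

getRecordExample = function () {
    var deferred = Q.defer();
    var Fiber = Npm.require('fibers');
    Fiber(function () {
        var record_name = Mongo_Collection_Name.findOne({_personId: userId});
        // resolve while still inside the fiber
        deferred.resolve(record_name);
    }).run();
    return deferred.promise;
};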
A More Precise Example
This is a little long, so you have to scroll down to see it all. I'm basically building a workflow here so there are processes and subprocesses.
// both/routes.js
Router.route('/get-route', function(req, res) {
    // get the userId then start the workflow below
    // using Promises here because these were firing concurrently
    Q(userId)
        .then(process_1)
        .then(process_2)
        .done();
}, { name: 'server-side-ir-route', where: 'server' });

// server.js
process_1 = function (userId) {
    sub_process_1(userId);
    sub_process_2(userId);
    return userId;
};

process_2 = function (userId) {
    sub_process_3(userId);
    sub_process_4(userId);
    return userId;
};

sub_process_1 = function (userId) {
    var result = get_record_1(userId);
    // do stuff with result
    // using Q library to call out to async fs.writeFile, return Promise
    fs_writeFile_promise(result)
        .catch(function (error) {
            console.log('error in sub_process_1_write', error);
        })
        .done(function () {
            console.log('done with sub_process_1');
        });
    return userId;
}.future(); // <-- if no future() here, the exception is thrown.

sub_process_2 = function (userId) {
    var result = get_record_2(userId);
    // do stuff with result
    // using Q library to call out to async fs.writeFile, return Promise
    fs_writeFile_promise(result)
        .catch(function (error) {
            console.log('error in sub_process_2_write', error);
        })
        .done(function () {
            console.log('done with sub_process_2');
        });
    return userId;
}.future();

// async because of I/O operation (I think)
get_record_1 = function (userId) {
    var record_1 = Mongo_Collection_Name.findOne({'userId': userId});
    // do stuff
    return record_1;
};

get_record_2 = function (userId) {
    var record_2 = Mongo_Collection_Name.findOne({'userId': userId});
    // do stuff
    return record_2;
};

// async operation using Q library to return a Promise
fs_writeFile_promise = function (obj) {
    var deferred = Q.defer();
    fs.writeFile(obj.file, obj.datas, function (err, result) {
        if (err) deferred.reject(err);
        else deferred.resolve('write data completed');
    });
    return deferred.promise;
};
For now, let's assume that the process_2 function is exactly like process_1.
Also, we should assume I have console.log('step_start') and console.log('step_end') in each function. This is what it would look like on the command line:
start processes
end processes
start processes 1
end processes 1
start processes 2
start sub processes 1
getting record 1
start sub processes 2
getting record 2
returning record 1
end sub processes 1
called writeData in sub process 1
returning record 2
called writeData in sub process 2
end processes 2
ending sub processes 1
The reason I had to place a Fiber (.future()) on the sub_process_1() function was that when I placed the function process_1() in the Q chain at the top, I got [Error: Can't wait without a fiber].
If I remove process_1() from the Q chain at the top and remove the .future() from sub_process_1(), no exception is thrown.
Questions
Why does calling out to a Mongo collection within a Promise cause a fiber error within a Meteor application?
Does calling an async function within a sync function cause the sync function to become async?
How do I solve this problem?
The most common way to solve this is to wrap your asynchronous callbacks that use Meteor functions in Meteor.bindEnvironment().
If you are using the Meteor core WebApp package to handle your server side route, the code would be like this (also in meteorpad):
WebApp.connectHandlers.use(
    '/test',
    Meteor.bindEnvironment(function(req, res, next) {
        var someSyncData = Players.findOne();
        res.write(JSON.stringify(someSyncData));
        res.end();
    })
);
Working with fibers or promises yourself is unnecessary unless you are trying to get multiple async events to run concurrently.
To deal with file reading or other functions that are not already synchronous, Meteor also provides Meteor.wrapAsync() to make them synchronous.
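For example, a minimal sketch of wrapping fs.readFile (the file path is a placeholder; like all Meteor server code, this needs to run inside a fiber, e.g. in a method or publication):

var fs = Npm.require('fs');

// wrapAsync turns the (err, result) callback API into a blocking-style call
var readFileSync = Meteor.wrapAsync(fs.readFile, fs);

var contents = readFileSync('/tmp/some-file.txt', 'utf8'); // hypothetical path
console.log(contents);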
There are also packages and a help page that give you other high level alternatives.