I have multiple Meteor.call invocations, where each method depends on the response of the previous one.
Client
Meteor.call('methodOne', function (err, resOne) {
    if (!err) {
        Meteor.call('methodTwo', resOne, function (err, resTwo) {
            if (!err) {
                Meteor.call('methodThree', resTwo, function (err, resThree) {
                    if (err) {
                        console.log(err);
                    }
                });
            }
        });
    }
});
From Meteor's documentation I know
"Methods called on the client run asynchronously, so you need to pass a callback in order to observe the result of the call."
I know I can create yet another Meteor method on the server that executes 'methodOne', 'methodTwo', and 'methodThree' wrapped with Meteor.wrapAsync, or sequentially without callbacks altogether. But I am worried this path will cause my Meteor methods to get bloated and entangled, leading to spaghetti code. I would rather keep each Meteor method simple, with one job to do, and find a more elegant way of chaining the calls on the client. Any ideas? Is there any way to use Promises on the client?
Since the other answer suggests RSVP, this answer will suggest Bluebird, which is actually the fastest promise library in realistic benchmarks (as opposed to micro-benchmarks that don't really measure anything meaningful). That said, I'm not picking it for performance; I'm picking it because it's also the easiest to use and the one with the best debuggability.
Unlike the other answer, this one also does not suppress errors and the cost of making the function return a promise is marginal since no promise constructor is called.
var call = Promise.promisify(Meteor.call, Meteor);

var calls = call("methodOne")
    .then(call.bind(Meteor, "methodTwo"))
    .then(call.bind(Meteor, "methodThree"));

calls.then(function (resThree) {
    console.log("Got Response!", resThree);
}).catch(function (err) {
    console.log("Got Error", err);
});
Your approach on the client results in many more round trips between the browser and the server. I know you indicated that you were worried about spaghetti code on the server, and I don't have the visibility into your application that you do, but going by the example you provide, this seems like an ideal place to wrap all three calls in a single server method and make only one call from the client, IMHO.
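Such a server-side wrapper might look like the sketch below. The `methodChain` name and the tiny Meteor stand-in are hypothetical (the stub only exists so the sketch runs outside a Meteor app); inside a real Meteor method, Meteor.call runs synchronously in a Fiber, so no callbacks are needed:

```javascript
// Hypothetical stand-in for Meteor so the sketch runs outside a Meteor app:
var methodTable = {};
var Meteor = {
    methods: function (m) { Object.assign(methodTable, m); },
    call: function (name) {
        var args = Array.prototype.slice.call(arguments, 1);
        return methodTable[name].apply(null, args);
    }
};

Meteor.methods({
    methodOne: function () { return 1; },
    methodTwo: function (x) { return x + 1; },
    methodThree: function (x) { return x + 1; },
    // One server-side wrapper replaces three client round trips.
    // In a real Meteor method, Meteor.call is synchronous (runs in a Fiber).
    methodChain: function () {
        var resOne = Meteor.call('methodOne');
        var resTwo = Meteor.call('methodTwo', resOne);
        return Meteor.call('methodThree', resTwo);
    }
});

console.log(Meteor.call('methodChain')); // → 3
```

The client then makes a single Meteor.call('methodChain', ...) instead of three chained ones.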
EDIT: You're probably better off looking at Benjamin Gruenbaum's answer, which not only performs better but also results in much more concise code.
Promises - yes there is.
I like RSVP very much. Why? Simply because it's the fastest one (quick benchmark: jsperf).
Here's a quick rewrite of your code:
var promise = new RSVP.Promise(function (fulfill, reject) {
    Meteor.call('methodOne', '', function (err, resOne) {
        if (err) {
            return reject(err);
        }
        fulfill(resOne);
    });
});

promise.then(function (resOne) {
    return new RSVP.Promise(function (fulfill, reject) {
        Meteor.call('methodTwo', resOne, function (err, resTwo) {
            if (err) {
                return reject(err);
            }
            fulfill(resTwo);
        });
    });
}).then(function (resTwo) {
    return new RSVP.Promise(function (fulfill, reject) {
        Meteor.call('methodThree', resTwo, function (err, resThree) {
            if (err) {
                return reject(err);
            }
            fulfill(resThree);
        });
    });
}).then(function (resThree) {
    // resThree is available - continue as you like
    console.log(resThree);
}).catch(function (err) {
    console.log(err);
});
That's the way to prevent "the ever rightward drift" of your code.
Promises are cool, use them.
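To avoid writing a fresh promise wrapper around every single call, the callback-to-promise boilerplate can be factored into one helper. Below is a minimal sketch using native Promises (RSVP.Promise works the same way); the Meteor stand-in is hypothetical so the example runs anywhere, and in a real app you would use the global Meteor object instead:

```javascript
// Hypothetical stand-in for Meteor.call so the sketch runs anywhere:
var Meteor = {
    call: function (name) {
        var cb = arguments[arguments.length - 1];
        setTimeout(function () { cb(null, name + '-result'); }, 0);
    }
};

// Wrap Meteor.call once into a promise-returning helper:
function callP(name) {
    var args = Array.prototype.slice.call(arguments);
    return new Promise(function (fulfill, reject) {
        args.push(function (err, res) {
            if (err) return reject(err);
            fulfill(res);
        });
        Meteor.call.apply(Meteor, args);
    });
}

// The chain stays flat, and one catch covers every step:
callP('methodOne')
    .then(function (resOne) { return callP('methodTwo', resOne); })
    .then(function (resTwo) { return callP('methodThree', resTwo); })
    .then(function (resThree) { console.log(resThree); })
    .catch(function (err) { console.log(err); });
```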
Related
I have a feeling that I'm just trying to fit a square peg into a round hole, but I'm trying to apply some things with Angular2 and Typescript, and I'm banging my head against a wall.
I've written a Javascript module that acts as an API client library to an API I'm consuming. It just packages some convenience things like setting up the correct API keys, switching keys based on certain desired data, etc. It's basically just a convenience library.
Most of the methods follow a pattern where you provide a query term and then execute a callback.
So for example:
API.searchAutocomplete("angular", function (err, data) {
    // handle the data/error
});
Inside that method:
searchAutocomplete: function (query, callback) {
    // set up request with data payload, url, headers, etc.
    $.ajax(settings)
        .done(function (response) {
            // callback with success
        })
        .fail(function () {
            // callback with error
        });
}
I'm struggling with trying to understand how to run this function in Typescript in an Angular service with a Promise (square peg round hole). Or should I just pass a callback within the service and treat it like it's Javascript?
My attempt:
public getAutocomplete(query: string): Promise<any> {
    return new Promise((resolve, reject) => {
        API.searchAutocomplete(query, function (err, result) {
            if (err) {
                reject(err);
                return;
            }
            resolve(result);
        });
    });
}
Second, I've been able to load the library into my Angular app but I can't seem to actually make any of the requests. Even if I break in the console and access the library object it doesn't seem to actually make any network requests. Which I really don't understand.
Edit: I've sorted this part out.
When I made my service call return a promise, I had to subscribe to the promise otherwise I wouldn't execute it correctly. I think I still need to understand how to write my service call to return an observable and map the callback response.
As expected, I was trying to do more work than I should have.
This is pretty simple, just return an observable that calls the external library.
public autoCompleteResults(query: string): Observable<string[]> {
    return new Observable<string[]>(observer => {
        API.searchAutocomplete(query, function (err, result) {
            if (err) {
                console.log(err);
                observer.next([]);
                // OR
                observer.error(err);
                return;
            }
            observer.next(result);
        });
    });
}
I have the following mongoose queries, each producing the same result but with different Promise implementations. I've tested the scenario where mongooseModel throws an error and, again, each implementation produces the same result, i.e. the calling method catches the exception and returns the correct error message.
What I'm wondering is: which is the best approach? The fnCall version is more readable and makes more sense when reading - in my opinion, anyway. What's important here is a repeatable pattern that we can use across all our NodeJS modules.
First way:
ListService.prototype.getItems = function (queryParams, restrictFields, startIndex, pageSize) {
    return Promise.fnCall(function () {
        return mongooseModel().find(queryParams, restrictFields)
            .skip(startIndex)
            .limit(pageSize)
            .lean(true)
            .exec();
    })
    .then(function (items) { return items; })
    .catch(function (error) { return error; });
};
Second way:
ListService.prototype.getItems = function (queryParams, restrictFields, startIndex, pageSize) {
    return new Promise(function (resolve, reject) {
        mongooseModel().find(queryParams, restrictFields)
            .skip(startIndex)
            .limit(pageSize)
            .lean(true)
            .exec(function (err, items) {
                if (err) {
                    reject(err);
                } else {
                    resolve(items);
                }
            });
    });
};
The best approach depends on your needs and the environment you are working in. So far you have considered only syntax and compatibility, not performance or security. If you need speed, a low memory footprint, and you trust the code, then go with Bluebird. If you do not trust the code, i.e. you need security, do not mind a heavy memory footprint, and don't mind a nearly 10x slowdown, then go with Q.
Ref: https://github.com/petkaantonov/bluebird/issues/381
Good on you for testing the results for consistent behaviour. However, which is the 'best approach' depends on what you are seeking.
If you want the fastest result, you should test both scenarios yourself using your own code. You may find that one form slightly outperforms the other and consumes less memory. You may also want to look at defer(), if it is supported, as it is sometimes significantly faster.
If speed is not key and your promise library doesn't have a clear convention, then the best pattern is the one you find easiest to read. Some people prefer promisifying wrappers (like your fnCall()) over creating a new Promise() over and over, and a lot of people seem to dislike defer() for some reason. But without a convincing performance argument, there is really no good answer.
The last thing worth considering is whether to stray from JS's native implementation of Promises at all, particularly in Node, now that it is officially supported. In this case, you are pretty much restricted to new Promise().
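For comparison, here is a sketch of the question's "first way" using only native Promises; the mongooseModel stub below is hypothetical so the example runs standalone. One thing worth noting about the original code: its .catch handler returns the error, which turns a rejection into a fulfilled value - rethrowing (or omitting the catch) lets callers handle failures themselves:

```javascript
// Hypothetical stand-in for the mongoose query chain so the sketch runs standalone:
function mongooseModel() {
    var query = {
        find: function () { return query; },
        skip: function () { return query; },
        limit: function () { return query; },
        lean: function () { return query; },
        exec: function () { return Promise.resolve([{ id: 1 }]); }
    };
    return query;
}

// Native-Promise version of the "first way": Promise.resolve().then(...)
// converts a synchronous throw inside the callback into a rejection,
// much like fnCall does.
function getItems(queryParams, restrictFields, startIndex, pageSize) {
    return Promise.resolve().then(function () {
        return mongooseModel()
            .find(queryParams, restrictFields)
            .skip(startIndex)
            .limit(pageSize)
            .lean(true)
            .exec();
    });
}

getItems({}, {}, 0, 10).then(function (items) {
    console.log(items.length); // → 1
});
```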
Disclaimer: I know that synchronous code is to be avoided and that promises and callbacks are preferable, but I'm stuck writing a system that needs a small amount of backwards compatibility, and I need to write this as a temporary stopgap.
Writing an express.js app, I have a function that takes the request object from a .get or .post etc function, and confirms whether the session key is valid (after checking this with a remote API server). The main version of the function is like this:
module.exports.isValidSession = function (req, cb) {
    // REST API calls, callbacks, error handling etc. - all works fine.
};
I need a version of the above to be compatible with an older system (that will soon be phased out). I'm quite new to Node, although not to JS, so I'm wondering if there's a good convention on how to do this?
Additional Info
The main problem I hit is returning synchronously from some kind of 'watcher' - see below for one approach I considered, which doesn't work, but I figure maybe someone knows a way. (It's essentially polling the value of a variable until the async function sets it to indicate it's done.)
function isValidSession(req) {
    var running = 1, ret = -1;
    var looper = setInterval(function () {
        if (!running) {
            clearInterval(looper);
            return ret; // This is the problem bit - returns from the interval callback, not from isValidSession.
        }
    }, 500);
    request(requestOpts, function (error, response, body) {
        if (error) {
            ret = new Error('Some error went on');
            running = 0;
        }
        if (response.statusCode === 200) {
            ret = true;
            running = 0;
        }
    });
}
It might well not be possible, or more likely not viable, but it'd be really useful if I could include this for backwards compatibility for a week or so. It's a dev project at the moment, so if it's not good practice, it'll do as long as it doesn't compromise everything. I do realise, though, that it basically goes against the whole point of Node (although Node itself includes several functions that are available in both sync and async versions).
All help gratefully received.
Let's pretend there is a strong reason why the author desires this.
There is a rather popular (so, you are not alone) module called synchronize.js
Use case:
Add a callback to your isValidSession:
function isValidSession(req, cb) {
    request(requestOpts, function (error, response, body) {
        if (error) {
            return cb(new Error('Some error went on'));
        }
        if (response.statusCode === 200) {
            cb(null, true);
        }
    });
}
Use it with the sync module:
var ret = sync.await(isValidSession(req, sync.defer()));
P.S. Might require testing; refer to the documentation.
I am working with Node and I have a "class" that takes a directory as a parameter. It tries to create that directory and if it fails, then it throws an error:
function Config(dir) {
    fs.mkdir(dir, function (err) {
        if (err) throw new Error('Error', err);
    });
}
My question is, is this an approved way of doing this? If I were to use a callback, then the rest of my program would have to reside in that callback, which seems odd to me.
This issue manifested itself when I tried to write a test using mocha which won't work since the exception is thrown in an async call:
it('should throw an error on a bad directory', function () {
    var fn = function () {
        var badConfig = new Config('/asdf');
    };
    assert.throws(fn, Error);
});
I've investigated domains as a way to solve the unit test issue, but that didn't seem to solve my problem (or I didn't implement them correctly).
var d = domain.create().on('error', function (err) { throw err; });
d.run(function () {
    function Config(dir) {
        fs.mkdir(dir, function (err) {
            if (err) throw err;
        });
    }
});
Ultimately, I'm looking for a best practice that allows me to indicate to the application that something bad happened, and allows me to create tests for that solution.
You have three possibilities:
Using a synchronous call. As AsolBerg explained, your case is exactly why some fs functions have a synchronous equivalent. It's OK here because your whole application depends on one Config instance being loaded, but there are cases where asynchronous initialization is preferable.
Using a callback as a constructor argument.
If a constructor callback sounds too odd for you, put your initialization code into an init() method that takes a callback. It's a matter of personal preference, but I would rather use this technique.
Last option: you can return a Future from your init() method. There are several future libraries for NodeJS that provide an elegant alternative to a callback parameter. But you can't use one in your constructor, as a constructor's return value is the created object.
It sounds like in this case you might actually want to make a synchronous call (e.g. the rest of your application depends on this call being finished before proceeding). So although it's normally not the way you want to build your Node apps, you could use the synchronous version, mkdirSync().
http://nodejs.org/api/fs.html#fs_fs_mkdirsync_path_mode
Then if the call fails you can catch the error and return it and (probably) exit the app.
Callbacks are more and more a requirement in coding, especially given Node.js's non-blocking style of working. But writing a lot of nested callbacks quickly becomes difficult to read back.
For example, imagine something like this Pyramid Of Doom:
// This asynchronous coding style is really annoying. Anyone invented a better way yet?
// Count, remove, re-count (verify) and log.
col.count(queryFilter, function (err, countFiltered) {
    col.count(queryCached, function (err, countCached) {
        col.remove(query, function (err) {
            col.count(queryAll, function (err, countTotal) {
                util.log(util.format('MongoDB cleanup: %d filtered and %d cached records removed. %d last-minute records left.', countFiltered, countCached, countTotal));
            });
        });
    });
});
is something we see often and can easily become more complex.
When every function is more than a couple of lines long, it starts to make sense to separate the functions:
// Imagine something more complex
function mary(data, pictures) {
    // Do something drastic
}

// I want to do mary(), but I need to write how before actually starting.
function nana(callback, cbFinal) {
    // Get stuff from database or something
    var data = {};
    callback(nini, cbFinal, data);
}

function nene(callback, cbFinal, data) {
    // Do stuff with data
    callback(cbFinal, data);
}

function nini(cbFinal, data) {
    // Look up pictures of Jeff Atwood
    var pictures = [];
    cbFinal(data, pictures);
}

// I start here, so this story doesn't read like a book even if it's quite straightforward.
nana(nene, mary);
But there is a lot of passing of variables around happening all the time. With other functions written in between, this becomes hard to read. The functions themselves might be too insignificant on their own to justify giving them their own file.
Use an async flow control library like async. It provides a clean way to structure code that requires multiple async calls while maintaining whatever dependency is present between them (if any).
In your example, you'd do something like this:
async.series([
    function (callback) { col.count(queryFilter, callback); },
    function (callback) { col.count(queryCached, callback); },
    function (callback) { col.remove(query, callback); },
    function (callback) { col.count(queryAll, callback); }
], function (err, results) {
    if (!err) {
        util.log(util.format('MongoDB cleanup: %d filtered and %d cached records removed. %d last-minute records left.',
            results[0], results[1], results[3]));
    }
});
This would execute each of the functions in series; once the first one calls its callback the second one is invoked, and so on. But you can also use parallel or waterfall or whatever flow matches the flow you're looking for. I find it's much cleaner than using promises.
A different approach to callbacks is promises.
Example: jQuery Ajax. This one might look pretty familiar:
$.ajax({
    url: '/foo',
    success: function () {
        alert('bar');
    }
});
But $.ajax also returns a promise.
var request = $.ajax({
    url: '/foo'
});

request.done(function () {
    alert('bar');
});
A benefit is that you simulate synchronous behavior: you can use the returned promise instead of providing a callback to $.ajax.success, and a callback to that callback, and so on. Another advantage is that you can chain / aggregate promises, and have a single error handler for a whole promise aggregate if you like.
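The same chaining works without jQuery; any promise-returning function composes the same way. A minimal sketch (the get helper is hypothetical, standing in for a real request function):

```javascript
// Hypothetical request helper that resolves with a body string:
function get(url) {
    return Promise.resolve('body of ' + url);
}

get('/foo')
    .then(function (body) {
        // chain the next request off the first result
        return get('/bar');
    })
    .then(function (body) {
        console.log(body); // → "body of /bar"
    })
    .catch(function (err) {
        // one error handler covers every step of the chain
        console.error(err);
    });
```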
I found this article to be pretty useful.
It describes the pro and cons of callbacks, promises and other techniques.
A popular implementation (used by e.g. AngularJS iirc) is Q.
Combined answers and articles. Please edit this answer and add libraries/examples/doc-urls in a straightforward fashion for everyone's benefit.
Documentation on Promises
Asynchronous Control Flow with Promises
jQuery deferreds
Asynchronous Libraries
async.js
async.waterfall([
    function (callback) { /* ... */ },
    function (arg, callback) { /* ... */ }
], callback);
node fibers
step
Step(
    function func1() {
        // ...
        return value;
    },
    function func2(err, value) {
        // ...
        return value;
    },
    function funcFinal(err, value) {
        if (err) throw err;
        // ...
    }
);
Q
Q.fcall(func1)
    .then(func2)
    .then(func3)
    .then(funcSuccess, funcError);
API reference
More examples
More documentation