For an arbitrary promise implementation, the deferred pattern (not to be confused with the antipattern) may look like:
const deferred = new Deferred;
...
// scopes where `deferred` object reference was passed before promise settlement
deferred.promise.then((result) => { ... }, (error) => { ... });
...
deferred.resolve(...);
// doesn't affect promise state
deferred.reject();
...
// after promise settlement
deferred.promise.then((result) => { ... }, (error) => { ... });
The deferred object holds an unsettled promise that can be passed to other function scopes by reference. All promise chains will be executed on promise settlement; it doesn't matter whether deferred.promise was settled before chaining with then or after. The state of a promise cannot be changed once it has settled.
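For reference, here is a minimal sketch of such a Deferred (the class name and usage are illustrative, not tied to any particular library):
class Deferred {
  constructor() {
    this.promise = new Promise((resolve, reject) => {
      // expose the executor callbacks so the promise can be settled
      // from outside, after the reference has been passed around
      this.resolve = resolve;
      this.reject = reject;
    });
  }
}
const d = new Deferred();
d.promise.then(result => console.log('settled with', result));
d.resolve(42);
d.reject('ignored'); // no effect, the promise is already settled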
As the answer suggests, the initial choices are ReplaySubject and AsyncSubject.
For the given setup (a demo)
var subject = new Rx.AsyncSubject;
var deferred = subject.first();
deferred.subscribe(
console.log.bind(console, 'Early result'),
console.log.bind(console, 'Early error')
);
setTimeout(() => {
deferred.subscribe(
console.log.bind(console, 'Late result'),
console.log.bind(console, 'Late error')
);
});
This results in desirable behaviour:
subject.error('one');
subject.next('two');
Early error one
Late error one
This results in undesirable behaviour:
subject.error('one');
subject.next('two');
subject.complete();
Early error one
Late result two
This results in undesirable behaviour:
subject.next('two');
subject.complete();
subject.next('three');
Early result two
Late result three
The results from ReplaySubject differ but are still inconsistent with the expected results: next values and error notifications are treated separately, and complete doesn't prevent the observers from receiving new data. This may work for a single next/error; the problem is that next or error may be called multiple times unintentionally.
The reason first() is used is that the subscriptions are one-time subscriptions, and I would like to remove them to avoid leaks.
How should it be implemented with RxJS observables?
You are probably looking for an Rx.ReplaySubject(1) (or an Rx.AsyncSubject(), depending on your use case).
For a more detailed explanation of subjects, see What are the semantics of different RxJS subjects?.
Basically, a subject can be passed around by reference, like a deferred. You can emit values to it as long as you hold that reference (resolve would be a 'next' (Rxjs v5) or 'onNext' (Rxjs v4) followed by 'complete' or 'onCompleted()').
You can have any number of subscribers to a subject, similar to the then calls on a deferred. If you use a ReplaySubject(1), any subscriber will receive the last emitted value, which should cover your requirement that it doesn't matter whether deferred.promise was settled before chaining with then or after. In Rxjs v4, a ReplaySubject will emit its last value to a subscriber subscribing after it has completed. I am not sure about the behaviour in Rxjs v5.
https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/subjects/asyncsubject.md
https://github.com/Reactive-Extensions/RxJS/blob/master/doc/api/subjects/replaysubject.md
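For illustration, a rough sketch of using a ReplaySubject(1) like a deferred (Rxjs v4 style; the names and values are just for the example):
var subject = new Rx.ReplaySubject(1); // buffers the last value for late subscribers
// pass `subject` around by reference, like a deferred
subject.subscribe(
  function (value) { console.log('early subscriber', value); },
  function (err) { console.log('early error', err); }
);
// "resolve" it exactly once
subject.onNext('result');
subject.onCompleted();
// a late subscriber still receives the buffered value
subject.subscribe(function (value) { console.log('late subscriber', value); });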
Update
The following code executed with Rxjs v4:
var subject = new Rx.AsyncSubject();
var deferred = subject;
deferred.subscribe(
console.log.bind(console, 'First result'),
console.log.bind(console, 'First error')
);
setTimeout(() => {
deferred.subscribe(
console.log.bind(console, 'Second result'),
console.log.bind(console, 'Second error')
);
});
subject.onNext('one');
subject.onCompleted();
subject.onNext('two');
subject.onNext('three');
subject.onNext('four');
produces the following output:
First result one
Second result one
However, the same code executed with Rxjs v5 does not:
First result one
Second result four
So basically that means that subjects' semantics have changed in Rxjs v5! That really is a breaking change to be aware of. Anyway, you could consider moving back to Rxjs v4, or use the workaround suggested by artur grzesiak in his answer. You could also file an issue on the GitHub site. I would believe that the change is intentional, but in the event it is not, filing the issue might help clarify the situation. In any case, whatever behaviour is chosen should definitely be documented properly.
The question about subjects' semantics features a link showing the async subject in relation to multiple and late subscriptions.
As #user3743222 wrote, AsyncSubject may be used in a deferred implementation, but the thing is that it has to be private and guarded against multiple resolves / rejects.
Below is a possible implementation mirroring resolve-reject-promise structure:
const createDeferred = () => {
const pending = new Rx.AsyncSubject(); // caches last value / error
const end = (result) => {
if (pending.isStopped) {
console.warn('Deferred already resolved/rejected.'); // optionally throw
return;
}
if (result.isValue) {
pending.next(result.value);
pending.complete();
} else {
pending.error(result.error);
}
}
return {
resolve: (value) => end({isValue: true, value: value }),
reject: (error) => end({isValue: false, error: error }),
observable: pending.asObservable() // hide subject
};
}
// sync example
let def = createDeferred();
let obs = def.observable;
obs.subscribe(n => console.log('BEFORE-RESOLVE'));
def.resolve(1);
def.resolve(2); // warn - no action
def.reject('ERROR') // warn - no action
def.observable.subscribe(n => console.log('AFTER-RESOLVE'));
// async example
def = createDeferred();
def.observable.subscribe(() => console.log('ASYNC-BEFORE-RESOLVE'));
setTimeout(() => {
def.resolve(1);
setTimeout(() => {
def.observable.subscribe(() => console.log('ASYNC-AFTER-RESOLVE'));
def.resolve(2); // warn
def.reject('err'); // warn
}, 1000)
}, 1000);
// async error example
const def3 = createDeferred();
def3.observable.subscribe(
(n) => console.log(n, 'ERROR-BEFORE-REJECTED (I will not be called)'),
(err) => console.error('ERROR-BEFORE-REJECTED', err));
setTimeout(() => {
def3.reject('ERR');
setTimeout(() => {
def3.observable.subscribe(
(n) => console.log(n, 'ERROR-AFTER-REJECTED (I will not be called)'),
(err) => console.error('ERROR-AFTER-REJECTED', err));
def3.resolve(2); // warn
def3.reject('err'); // warn
}, 1000)
}, 3000);
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/5.0.0-beta.9/Rx.umd.js"></script>
Related
Coming from a heavy background in PHP, I am struggling with some aspects of Node/JS.
const ldap = require('ldapjs');
class LdapClient {
constructor({
url,
}) {
this.isBound = null;
this.client = ldap.createClient({ url });
}
authenticate(credentials) {
const _this = this;
return new Promise((resolve, reject) => {
return this.client.bind(credentials.username, credentials.password, (err, res) => {
if (err) {
this.client.unbind();
return reject(err);
}
_this.isBound = true;
return resolve(res);
});
});
}
}
const client = new LdapClient({url: ''})
const credentials = {
'username': '',
'password': ''
}
client.authenticate(credentials)
.then(() => {
console.log('authenticated');
console.log('race = ' + client.isBound); // SHOWS TRUE
})
.catch(e => {
console.log(e);
})
console.log(client.isBound); // SHOWS NULL... WANT TRUE (RACE ISSUE as consoles before PROMISE)
I am trying to access the isBound property outside of the promise return where it is set to true inside the authentication method on success.
However, as you can see, there appears to be a possible race condition.
Is there a way to handle this...
Thanks
It is not a race condition; it's working as expected. There are two console.log calls in your code: the first one is inside the promise handler and the other one is outside the promise.
Your call runs asynchronously, while the last console.log executes synchronously as the next statement in order; at that point the value of the variable is still null. Your variable only gets the correct value later, when the promise resolves.
If you have to perform further actions, you have to do it in the .then() portion of your Client method which will only execute when your Promise has resolved.
For example
client.authenticate(credentials).then(() => { /* all of your post-response related logic should be here */ })
So you're misunderstanding something about promises. They're meant to be used for asynchronous code, like so:
let p = new Promise(resolve => setTimeout(resolve, 1000, 'here'))
p.then(console.log)
//in one second 'here'
As you can see the then doesn't actually happen until AFTER the promise resolves. With asynchronous code that's whenever resolve gets called.
So what's happening in your code is as follows:
Create Promise -> set event loop callback
console.log(isBound) // null, but happens first because it's called sync
Promise resolves // true
So really, the promise resolution is the first place you're even going to be able to check it successfully. If you return the promise from the call, you can chain there and make sure the scope is continued later.
let a = 0
let p = Promise.resolve(a)
.then(a =>{
a += 2;
return a;
})
.then(a => console.log(a) || a)
console.log(a) // 0
console.log(p == p.then(a => {
a += 4;
return a;
})
.then(console.log)) // false because promises aren't equal and .then/.catch return new promise chains
// 2
// 6
The false comparison prints synchronously, while the 2 and 6 print later via the event loop; however, if you keep it all in the same lexical scope then you'll still have access to a or this within the confines of your promise chain.
Side note: you don't need _this versus this with arrow functions inside class methods. They scope lexically, so this will be bound to the enclosing instance. More information can be found in You Don't Know JS.
You're trying to set isBound when the promise is created, not when it's resolved.
Rather than returning the promise directly from the authenticate() method, you can store it in a variable, call .then() on it, and return the promise chain at that point.
authenticate(credentials) {
// create your promise and store it
let authPromise = new Promise((resolve, reject) => {
...
})
// handle the promise and set isBound before returning the chain
return authPromise.then(res => {
_this.isBound = true
return res
})
}
This can be written with fewer lines, but this is meant to illustrate promise chaining and interception before returning.
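For instance, a more compact sketch of the same idea (assuming arrow callbacks so this refers to the instance, as noted in the other answer) could be:
authenticate(credentials) {
  return new Promise((resolve, reject) => {
    this.client.bind(credentials.username, credentials.password, (err, res) => {
      if (err) {
        this.client.unbind();
        return reject(err);
      }
      return resolve(res);
    });
  }).then(res => {
    this.isBound = true; // set only after the bind has actually succeeded
    return res;
  });
}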
ADDITIONALLY: your final console.log() is outside of your promise handler (a .then()), so it's always going to be null, since that code runs synchronously, before the authenticate async function has had time to complete.
client.authenticate(credentials)
.then(res => {
// you MUST do all your async-dependant operations inside
// promise handlers like this
console.log(client.isBound);
})
I have an array of promise objects that must be resolved in the same sequence in which they are listed in the array, i.e. we cannot attempt resolving an element till the previous one has been resolved (as method Promise.all([...]) does).
And if one element is rejected, I need the chain to reject at once, without attempting to resolve the following element.
How can I implement this, or is there an existing implementation for such sequence pattern?
function sequence(arr) {
return new Promise(function (resolve, reject) {
// try resolving all elements in 'arr',
// but strictly one after another;
});
}
EDIT
The initial answers suggest we can only sequence the results of such array elements, not their execution, because execution is predefined in such an example.
But then how to generate an array of promises in such a way as to avoid early execution?
Here's a modified example:
function sequence(nextPromise) {
// while nextPromise() creates and returns another promise,
// continue resolving it;
}
I wouldn't want to make it into a separate question, because I believe it is part of the same problem.
SOLUTION
Some answers below and the discussions that followed went a bit astray, but the eventual solution that did exactly what I was looking for was implemented within the spex library, as method sequence. The method can iterate through a sequence of dynamic length and create promises as required by the business logic of your application.
Later on I turned it into a shared library for everyone to use.
Here are some simple examples for how you sequence through an array executing each async operation serially (one after the other).
Let's suppose you have an array of items:
var arr = [...];
And, you want to carry out a specific async operation on each item in the array, one at a time serially such that the next operation does not start until the previous one has finished.
And, let's suppose you have a promise returning function for processing one of the items in the array fn(item):
Manual Iteration
function processItem(item) {
// do async operation and process the result
// return a promise
}
Then, you can do something like this:
function processArray(array, fn) {
var index = 0;
function next() {
if (index < array.length) {
fn(array[index++]).then(next);
}
}
return next();
}
processArray(arr, processItem);
Manual Iteration Returning Promise
If you wanted a promise returned from processArray() so you'd know when it was done, you could add this to it:
function processArray(array, fn) {
var index = 0;
function next() {
if (index < array.length) {
return fn(array[index++]).then(function(value) {
// apply some logic to value
// you have three options here:
// 1) Call next() to continue processing the result of the array
// 2) throw err to stop processing and result in a rejected promise being returned
// 3) return value to stop processing and result in a resolved promise being returned
return next();
});
} else {
// return whatever you want to return when all processing is done
// this returned value will be the resolved value of the returned promise.
return "all done";
}
}
return next();
}
processArray(arr, processItem).then(function(result) {
// all done here
console.log(result);
}, function(err) {
// rejection happened
console.log(err);
});
Note: this will stop the chain on the first rejection and pass that reason back to the processArray returned promise.
Iteration with .reduce()
If you wanted to do more of the work with promises, you could chain all the promises:
function processArray(array, fn) {
return array.reduce(function(p, item) {
return p.then(function() {
return fn(item);
});
}, Promise.resolve());
}
processArray(arr, processItem).then(function(result) {
// all done here
}, function(reason) {
// rejection happened
});
Note: this will stop the chain on the first rejection and pass that reason back to the promise returned from processArray().
For a success scenario, the promise returned from processArray() will be resolved with the last resolved value of your fn callback. If you wanted to accumulate a list of results and resolve with that, you could collect the results in a closure array from fn and continue to return that array each time so the final resolve would be an array of results.
Iteration with .reduce() that Resolves With Array
And, since it now seems apparent that you want the final promise result to be an array of data (in order), here's a revision of the previous solution that produces that:
function processArray(array, fn) {
var results = [];
return array.reduce(function(p, item) {
return p.then(function() {
return fn(item).then(function(data) {
results.push(data);
return results;
});
});
}, Promise.resolve());
}
processArray(arr, processItem).then(function(result) {
// all done here
// array of data here in result
}, function(reason) {
// rejection happened
});
Working demo: http://jsfiddle.net/jfriend00/h3zaw8u8/
And a working demo that shows a rejection: http://jsfiddle.net/jfriend00/p0ffbpoc/
Iteration with .reduce() that Resolves With Array with delay
And, if you want to insert a small delay between operations:
function delay(t, v) {
return new Promise(function(resolve) {
setTimeout(resolve.bind(null, v), t);
});
}
function processArrayWithDelay(array, t, fn) {
var results = [];
return array.reduce(function(p, item) {
return p.then(function() {
return fn(item).then(function(data) {
results.push(data);
return delay(t, results);
});
});
}, Promise.resolve());
}
processArrayWithDelay(arr, 200, processItem).then(function(result) {
// all done here
// array of data here in result
}, function(reason) {
// rejection happened
});
Iteration with Bluebird Promise Library
The Bluebird promise library has a lot of concurrency controlling features built right in. For example, to sequence iteration through an array, you can use Promise.mapSeries().
Promise.mapSeries(arr, function(item) {
// process each individual item here, return a promise
return processItem(item);
}).then(function(results) {
// process final results here
}).catch(function(err) {
// handle the error here
});
Or to insert a delay between iterations:
Promise.mapSeries(arr, function(item) {
// process each individual item here, return a promise
return processItem(item).delay(100);
}).then(function(results) {
// process final results here
}).catch(function(err) {
// handle the error here
});
Using ES7 async/await
If you're coding in an environment that supports async/await, you can also just use a regular for loop and then await a promise in the loop and it will cause the for loop to pause until a promise is resolved before proceeding. This will effectively sequence your async operations so the next one doesn't start until the previous one is done.
async function processArray(array, fn) {
let results = [];
for (let i = 0; i < array.length; i++) {
let r = await fn(array[i]);
results.push(r);
}
return results; // will be resolved value of promise
}
// sample usage
processArray(arr, processItem).then(function(result) {
// all done here
// array of data here in result
}, function(reason) {
// rejection happened
});
FYI, I think my processArray() function here is very similar to Promise.map() in the Bluebird promise library which takes an array and a promise producing function and returns a promise that resolves with an array of resolved results.
#vitaly-t - Here are some more detailed comments on your approach. You are welcome to use whatever code seems best to you. When I first started using promises, I tended to use promises only for the simplest things they did and write a lot of the logic myself, when a more advanced use of promises could have done much more of it for me. You use only what you are fully comfortable with, and beyond that you'd rather see your own code that you intimately know. That's probably human nature.
I will suggest that as I understood more and more of what promises can do for me, I now like to write code that uses more of the advanced features of promises and it seems perfectly natural to me and I feel like I'm building on well tested infrastructure that has lots of useful features. I'd only ask that you keep your mind open as you learn more and more to potentially go that direction. It's my opinion that it's a useful and productive direction to migrate as your understanding improves.
Here are some specific points of feedback on your approach:
You create promises in seven places
As a contrast in styles, my code has only two places where I explicitly create a new promise - once in the factory function and once to initialize the .reduce() loop. Everywhere else, I'm just building on the promises already created by chaining to them or returning values within them or just returning them directly. Your code has seven unique places where you're creating a promise. Now, good coding isn't a contest to see how few places you can create a promise, but that might point out the difference in leverage the promises that are already created versus testing conditions and creating new promises.
Throw-safety is a very useful feature
Promises are throw-safe. That means that an exception thrown within a promise handler will automatically reject that promise. If you just want the exception to become a rejection, then this is a very useful feature to take advantage of. In fact, you will find that just throwing yourself is a useful way to reject from within a handler without creating yet another promise.
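A minimal illustration of that throw-safety (plain ES6 promises; the data shape is made up for the example):
var somePromise = Promise.resolve({ valid: false });
somePromise.then(function (data) {
  if (!data.valid) {
    // throwing inside a handler automatically rejects the chained promise;
    // no need to build a rejected promise by hand
    throw new Error("invalid data");
  }
  return data;
}).catch(function (err) {
  console.log("rejected with:", err.message); // "invalid data"
});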
Lots of Promise.resolve() or Promise.reject() is probably an opportunity for simplification
If you see code with lots of Promise.resolve() or Promise.reject() statements, then there are probably opportunities to leverage the existing promises better rather than creating all these new promises.
Cast to a Promise
If you don't know if something returned a promise, then you can cast it to a promise. The promise library will then do it's own checks whether it is a promise or not and even whether it's the kind of promise that matches the promise library you're using and, if not, wrap it into one. This can save rewriting a lot of this logic yourself.
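A quick sketch of such a cast using Promise.resolve() (the function here is made up for the example):
function mightReturnPromiseOrValue(flag) {
  return flag ? Promise.resolve("async result") : "plain value";
}
// Promise.resolve() wraps plain values and passes promises through,
// so the caller can treat both cases the same way
Promise.resolve(mightReturnPromiseOrValue(false)).then(function (value) {
  console.log(value); // "plain value"
});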
Contract to Return a Promise
In many cases these days, it's completely viable to have a contract for a function that may do something asynchronous to return a promise. If the function just wants to do something synchronous, then it can just return a resolved promise. You seem to feel like this is onerous, but it's definitely the way the wind is blowing and I already write lots of code that requires that and it feels very natural once you get familiar with promises. It abstracts away whether the operation is sync or async and the caller doesn't have to know or do anything special either way. This is a nice use of promises.
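A small sketch of that contract (function names are illustrative only):
function getConfig(useCache, cachedConfig) {
  if (useCache) {
    // synchronous path still honors the contract by returning a resolved promise
    return Promise.resolve(cachedConfig);
  }
  // asynchronous path returns the pending promise from the real operation
  return fetch("/config").then(function (res) { return res.json(); });
}
// the caller doesn't need to know which path was taken
getConfig(true, { retries: 3 }).then(function (cfg) { console.log(cfg); });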
The factory function can be written to create one promise only
The factory function can be written to create one promise only and then resolve or reject it. This style also makes it throw-safe, so any exception occurring in the factory function automatically becomes a reject. It also makes the contract to always return a promise automatic.
While I realize this factory function is a placeholder function (it doesn't even do anything async), hopefully you can see the style to consider it:
function factory(idx) {
// create the promise this way gives you automatic throw-safety
return new Promise(function(resolve, reject) {
switch (idx) {
case 0:
resolve("one");
break;
case 1:
resolve("two");
break;
case 2:
resolve("three");
break;
default:
resolve(null);
break;
}
});
}
If any of these operations were async, then they could just return their own promises which would automatically chain to the one central promise like this:
function factory(idx) {
// create the promise this way gives you automatic throw-safety
return new Promise(function(resolve, reject) {
switch (idx) {
case 0:
resolve($.ajax(...));
break;
case 1:
resolve($.ajax(...));
break;
case 2:
resolve("two");
break;
default:
resolve(null);
break;
}
});
}
Using a reject handler to just return promise.reject(reason) is not needed
When you have this body of code:
return obj.then(function (data) {
result.push(data);
return loop(++idx, result);
}, function (reason) {
return promise.reject(reason);
});
The reject handler is not adding any value. You can instead just do this:
return obj.then(function (data) {
result.push(data);
return loop(++idx, result);
});
You are already returning the result of obj.then(). If either obj rejects or if anything chained to obj or returned from then .then() handler rejects, then obj will reject. So you don't need to create a new promise with the reject. The simpler code without the reject handler does the same thing with less code.
Here's a version in the general architecture of your code that tries to incorporate most of these ideas:
function factory(idx) {
// create the promise this way gives you automatic throw-safety
return new Promise(function(resolve, reject) {
switch (idx) {
case 0:
resolve("zero");
break;
case 1:
resolve("one");
break;
case 2:
resolve("two");
break;
default:
// stop further processing
resolve(null);
break;
}
});
}
// Sequentially resolves dynamic promises returned by a factory;
function sequence(factory) {
function loop(idx, result) {
return Promise.resolve(factory(idx)).then(function(val) {
// if resolved value is not null, then store result and keep going
if (val !== null) {
result.push(val);
// return promise from next call to loop() which will automatically chain
return loop(++idx, result);
} else {
// if we got null, then we're done so return results
return result;
}
});
}
return loop(0, []);
}
sequence(factory).then(function(results) {
log("results: ", results);
}, function(reason) {
log("rejected: ", reason);
});
Working demo: http://jsfiddle.net/jfriend00/h3zaw8u8/
Some comments about this implementation:
Promise.resolve(factory(idx)) essentially casts the result of factory(idx) to a promise. If it was just a value, then it becomes a resolved promise with that return value as the resolve value. If it was already a promise, then it just chains to that promise. So, it replaces all your type checking code on the return value of the factory() function.
The factory function signals that it is done by returning either null or a promise whose resolved value ends up being null. The above cast maps those two conditions to the same resulting code.
The factory function catches exceptions automatically and turns them into rejects which are then handled automatically by the sequence() function. This is one significant advantage of letting promises do a lot of your error handling if you just want to abort processing and feed the error back on the first exception or rejection.
The factory function in this implementation can return either a promise or a static value (for a synchronous operation) and it will work just fine (per your design request).
I've tested it with a thrown exception in the promise callback in the factory function and it does indeed just reject and propagate that exception back to reject the sequence promise with the exception as the reason.
This uses a similar method as you (on purpose, trying to stay with your general architecture) for chaining multiple calls to loop().
Promises represent values of operations and not the operations themselves. The operations are already started so you can't make them wait for one another.
Instead, you can synchronize functions that return promises by invoking them in order (through a loop with promise chaining, for instance), or by using the .each method in Bluebird.
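With Bluebird, that could look roughly like this (Promise.each runs the iterator serially and resolves with the original array; the doSomethingAsync helper is just a stand-in):
var Promise = require("bluebird");
function doSomethingAsync(item) {
  // simulate an async operation; Promise.delay is a Bluebird helper
  return Promise.delay(100).then(function () { return item * 2; });
}
Promise.each([1, 2, 3], function (item) {
  // the next item isn't started until the promise returned here settles
  return doSomethingAsync(item);
}).then(function (items) {
  console.log("done, in order:", items); // resolves with the original array
}).catch(function (err) {
  console.log("stopped at the first rejection:", err);
});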
You can't simply run X async operations and then want them to be resolved in an order.
The correct way to do something like this is to run the new async operation only after the one before was resolved:
doSomethingAsync().then(function(){
doSomethingAsync2().then(function(){
doSomethingAsync3();
.......
});
});
Edit Seems like you want to wait for all promises and then invoke their callbacks in a specific order. Something like this:
var callbackArr = [];
var promiseArr = [];
promiseArr.push(doSomethingAsync());
callbackArr.push(doSomethingAsyncCallback);
promiseArr.push(doSomethingAsync1());
callbackArr.push(doSomethingAsync1Callback);
.........
promiseArr.push(doSomethingAsyncN());
callbackArr.push(doSomethingAsyncNCallback);
and then:
$.when.apply($, promiseArr).done(function(promise){
while(callbackArr.length > 0)
{
callbackArr.pop()(promise);
}
});
The problem that can occur with this is when one or more promises fail.
Although quite dense, here's another solution that will iterate a promise-returning function over an array of values and resolve with an array of results:
function processArray(arr, fn) {
return arr.reduce(
(p, v) => p.then((a) => fn(v).then(r => a.concat([r]))),
Promise.resolve([])
);
}
Usage:
const numbers = [0, 4, 20, 100];
const multiplyBy3 = (x) => new Promise(res => res(x * 3));
// Prints [ 0, 12, 60, 300 ]
processArray(numbers, multiplyBy3).then(console.log);
Note that, because we're reducing from one promise to the next, each item is processed in series.
It's functionally equivalent to the "Iteration with .reduce() that Resolves With Array" solution from #jfriend00 but a bit neater.
I suppose there are two approaches for handling this question:
Create multiple promises and use the allPromiseAsync function as follows:
let allPromiseAsync = (...PromisesList) => {
return new Promise(async resolve => {
let output = []
for (let promise of PromisesList) {
output.push(await promise.then(async resolvedData => await resolvedData))
if (output.length === PromisesList.length) resolve(output)
}
})
}
const prm1= Promise.resolve('first');
const prm2= new Promise((resolve, reject) => setTimeout(resolve, 2000, 'second'));
const prm3= Promise.resolve('third');
allPromiseAsync(prm1, prm2, prm3)
.then(resolvedData => {
console.log(resolvedData) // ['first', 'second', 'third']
});
Use the Promise.all function instead:
(async () => {
const promise1 = new Promise(resolve => {
setTimeout(() => { console.log('first');console.log(new Date());resolve() }, 1000)
})
const promise2 = new Promise(resolve => {
setTimeout(() => {console.log('second');console.log(new Date()); resolve() }, 3000)
})
const promise3 = new Promise(resolve => {
setTimeout(() => { console.log('third');console.log(new Date()); resolve() }, 7000)
})
const promises = [promise1, promise2, promise3]
await Promise.all(promises)
console.log('This line is shown after 7000ms')
})()
In my opinion, you should be using a for loop (yes, the only time I would recommend a for loop). The reason is that a for loop allows you to await each iteration of the loop, whereas using reduce, map or forEach will run all your promise iterations concurrently. Which, by the sounds of it, is not what you want: you want each promise to wait until the previous promise has resolved. So to do this you would do something like the following.
const ids = [0, 1, 2]
const accounts = ids.map(id => getId(id))
const accountData = async() => {
for await (const account of accounts) {
// account will equal the current iteration of the loop
// and each promise are now waiting on the previous promise to resolve!
}
}
// then invoke your function where ever needed
accountData()
And obviously, if you wanted to get really extreme you could do something like this:
const accountData = async(accounts) => {
for await (const account of accounts) {
// do something
}
}
accountData([0, 1, 2].map(id => getId(id)))
This is much more readable than any of the other examples, it is much less code, it reduces the number of lines needed for this functionality, it follows a more functional programming way of doing things, and it uses ES7 to its full potential!
Also, depending on your setup or when you are reading this, you may need to add the @babel/plugin-proposal-async-generator-functions plugin, or you may see the following error:
Add @babel/plugin-proposal-async-generator-functions (https://git.io/vb4yp) to the 'plugins' section of your Babel config to enable transformation.
I am creating a script in node.js (V8.1.3) which looks at similar JSON data from multiple API's and compares the values. To be more exact I am looking at different market prices of different stocks (actually cryptocurrencies).
Currently, I am using promise.all to wait for all responses from the respective APIs.
let fetchedJSON =
await Promise.all([getJSON(settings1), getJSON(settings2), getJSON(settings3) ... ]);
However, Promise.all throws an error if even just one promise rejects with an error. In the bluebird docs there is a function called Promise.some which is almost what I want. As I understand it, it takes an array of promises and resolves with the two fastest promises to resolve, or otherwise (if fewer than 2 promises resolve) throws an error.
The problem with this is that, firstly, I don't want only the fastest two resolved promises to be what it returns; I want any successful promises to be returned, as long as there are at least 2. This seems to be what Promise.any does, except with a min count of 1. (I require a minimum count of 2.)
Secondly, I don't know how many Promises I will be awaiting on (In other words, I don't know how many API's I will be requesting data from). It may only be 2 or it may be 30. This depends on user input.
Currently writing this it seems to me there is probably just a way to have a promise.any with a count of 2 and that would be the easiest solution. Is this possible?
Btw, not sure if the title really summarizes the question. Please suggest an edit for the title :)
EDIT: Another way I may write the script is that the first two APIs to load start getting computed and pushed to the browser, and then every subsequent JSON gets computed as it loads. This way I am not waiting for all promises to be fulfilled before I start computing the data and passing results to the front end. Would this be possible with a function that also works for the other circumstances?
What I mean kind of looks like this:
Requesting JSON in parallel...
|-----JSON1------|
|---JSON-FAILS---| > catch error > do something with error. Doesn't affect next results.
|-------JSON2-------| > Meets minimum of 2 results > computes JSON > to browser.
|-------JSON3---------| > computes JSON > to browser.
How about then-ing all the promises so none fail, passing that to Promise.all, and filtering the successful results in a final .then?
Something like this:
function some( promises, count = 1 ){
const wrapped = promises.map( promise => promise.then(value => ({ success: true, value }), () => ({ success: false })) );
return Promise.all( wrapped ).then(function(results){
const successful = results.filter(result => result.success);
if( successful.length < count )
throw new Error("Only " + successful.length + " resolved.")
return successful.map(result => result.value);
});
}
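For example, with the getJSON calls from the question (assuming each returns a promise), usage might look like:
const requests = [getJSON(settings1), getJSON(settings2), getJSON(settings3)];
some(requests, 2)
  .then((values) => {
    // at least 2 requests succeeded; `values` holds only the successful results
    console.log(values);
  })
  .catch((err) => {
    // fewer than 2 succeeded
    console.error(err.message);
  });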
This might be somewhat clunky, considering you're asking to implement an anti-pattern, but you can force each promise to resolve:
async function fetchAllJSON(settingsArray) {
let fetchedJSON = await Promise.all(
settingsArray.map((settings) => {
// force rejected ajax to always resolve
return getJSON(settings).then((data) => {
// initial processing
return { success: true, data }
}).catch((error) => {
// error handling
return { success: false, error }
})
})
).then((unfilteredArray) => {
// only keep successful promises
return unfilteredArray.filter(({ success }) => success)
})
// do the rest of your processing here
// with fetchedJSON containing array of data
}
You can use Promise.allSettled([]). The difference is that allSettled will return an array of objects after all the promises are settled, regardless of whether they were successful or failed. Then just find the successful ones or whatever you need.
let resArr = await Promise.allSettled(userNamesArr.map(user=>this.authenticateUserPassword(user,password)));
return resArr.find(e=>e.status!="rejected");
OR return resArr.find(e=>e.status=="fulfilled").
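Applied to the original question (requiring at least two successes), a sketch could look like this; settingsArray is illustrative, one settings object per API:
async function fetchAtLeastTwo(settingsArray) {
  const results = await Promise.allSettled(settingsArray.map(getJSON));
  const successful = results
    .filter(r => r.status === 'fulfilled')
    .map(r => r.value);
  if (successful.length < 2) {
    throw new Error('Fewer than 2 requests succeeded');
  }
  return successful;
}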
The other answers have the downside of having to wait for all the promises to resolve, whereas ideally .some would return as soon as any (N) promise(s) passes the predicate.
let anyNPromises = (promises, predicate = a => a, n = 1) => new Promise(async resolve => {
promises.forEach(async p => predicate(await p) && !--n && resolve(true));
await Promise.all(promises);
resolve(false);
});
let atLeast2NumbersGreaterThan5 = promises => anyNPromises(promises, a => a > 5, 2);
atLeast2NumbersGreaterThan5([
Promise.resolve(5),
Promise.resolve(3),
Promise.resolve(10),
Promise.resolve(11)]
).then(a => console.log('5, 3, 10, 11', a)); // true
atLeast2NumbersGreaterThan5([
Promise.resolve(5),
Promise.resolve(3),
Promise.resolve(10),
Promise.resolve(-43)]
).then(a => console.log('5, 3, 10, -43', a)); // false
atLeast2NumbersGreaterThan5([
Promise.resolve(5),
Promise.resolve(3),
new Promise(() => 'never resolved'),
Promise.resolve(10),
Promise.resolve(11)]
).then(a => console.log('5, 3, unresolved, 10, 11', a)); // true
I want to process an array of objects by moving them through a series of async/network operations (remote HTTP requests).
In some of these operations I would like to ensure no more than X items are being processed at the same time.
How can I achieve that?
Example code:
function someAsyncOp(item) {...} // returns a promise
var source = Rx.Observable.from([{item1},{item2},...])
source
.flatMap((item) => {
// I WANT THE FOLLOWING OPERATION TO BE EXECUTING
// ON AT MAX 10 ITEMS AT A TIME, NEXT ITEM SHOULD
// BE SUBMITTED ONLY WHEN A SLOT GETS FREED AS A
// RESULT OF THE PROMISE SUCCEEDING OR FAILING
return Rx.Observable.fromPromise(someAsyncOp(item))
})
.subscribe(
console.log,
console.error,
() => console.log('completed')
)
There is a sibling of flatMap called flatMapWithMaxConcurrent which takes a concurrency argument. It is functionally similar to map(fn).merge(n), which was suggested by Benjamin's answer.
function someAsyncOp(item) {...} // returns a promise
var source = Rx.Observable.from([{item1},{item2},...])
source
//Only allow a max of 10 items to be subscribed to at once
.flatMapWithMaxConcurrent(10, (item) => {
//Since a promise is eager you need to defer execution of the function
//that produces it until subscription. Defer will implicitly accept a promise
return Rx.Observable.defer(() => someAsyncOp(item))
//If you want the whole thing to continue regardless of exceptions you should also
//catch errors from the individual processes
.catch(Rx.Observable.empty())
})
.subscribe(
console.log,
console.error,
() => console.log('completed')
)
You can use merge with map instead of flatMap:
var concurrency = 10;
source.map(someAsyncOp).merge(concurrency).subscribe(x => console.log(x));
Note that since promises are eager and observables are lazy, fromPromise wouldn't cut it (and Rx can assimilate promises without it anyway). I recommend wrapping it in a create.
var delay = function(ms){ return new Promise(function(r){ setTimeout(r, 2000, ms) }); }
var log = function(msg){ document.body.innerHTML += msg + "<br />"; }
Rx.Observable.range(1000, 10).map(delay).merge(2).subscribe(log)
<script src="https://cdnjs.cloudflare.com/ajax/libs/rxjs/4.0.7/rx.all.js"></script>
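If you do want the promise creation itself to be lazy, as recommended above, a sketch using Rx.Observable.defer (Rxjs 4) around the someAsyncOp call from the question might look like:
var concurrency = 10;
source
  .map(function (item) {
    // defer the call so someAsyncOp(item) only runs when this inner
    // observable is subscribed, i.e. when a concurrency slot frees up
    return Rx.Observable.defer(function () { return someAsyncOp(item); });
  })
  .merge(concurrency)
  .subscribe(
    function (x) { console.log(x); },
    function (e) { console.error(e); },
    function () { console.log('completed'); }
  );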