Can yield be evaluated even if it is not followed by .next()?
I have a piece of code that saves data to Firebase. Since the code was producing data contention errors, I wrapped the generator function that writes to a collection in Promise.all(). It works as expected, it's faster than my previous code, and I haven't run into any data contention. The file I upload contains the same id (100 occurrences of the same id), so the same collection is updated repeatedly.
My question is: how does it happen that yield still works even though I never call .next()?
function* sampleGenerator(payload) {
    yield utils.toCollection(payload.id)
}

async function save() {
    for (let i = 0; i < data.length; i++) {
        // ... code here
        const payload = { id: "", data: "", others: "" }
        await Promise.all(sampleGenerator(payload))
    }
}
All data is saved in Firebase as expected here.
The Generator returns an iterable, and Promise.all accepts an iterable, so this works...but it might not be doing what you expect.
Generator functions, once called, return a Generator object, which is iterable, as described on MDN:
The Generator object is returned by a generator function and it conforms to both the iterable protocol and the iterator protocol.
Promise.all accepts an iterable object, as described on MDN:
The Promise.all() method takes an iterable of promises as an input, and returns a single Promise that resolves to an array of the results of the input promises.
However, you should be aware of these aspects:
Because Promise.all will internally and synchronously turn your generator-based iterable into an array, it's not really more useful than just having a simple function that returns a collection.
function * sampleGenerator(payload) currently yields your collection as a single yielded item; it does not use the yield* syntax, which would yield the items of your collection (generated here by utils.toCollection) one by one. If your collection contained Promises, your Promise.all would receive something like Promise.all([[promise1, ...]]) (i.e. the array Promise.all builds internally holds a single element, the collection you yield) and would not wait for the results. If you yield Promises one by one, or yield* a collection of Promises, you get the equivalent of Promise.all([promise1, ...]) and correctly get back [result1, ...]. See the sketch after these points.
One typical way of combining generators and promises is with async generators, which might have more sequential ordering than you want here, but is worth knowing for the sake of being able to answer "why not".
If your await Promise.all is being returned from save, and you don't await anything else inside save, then the async aspect of the function isn't getting you much aside from documentation (and putting save on the stack trace). It would be mostly equivalent just to return the Promise.all promise directly and remove the async keyword from the function.
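To make the yield vs. yield* point concrete, here is a minimal sketch; promiseA and promiseB are made-up stand-ins for whatever utils.toCollection would produce:

// Hypothetical promises standing in for the items of your collection.
const promiseA = Promise.resolve("a");
const promiseB = Promise.resolve("b");

function* yieldsWholeCollection() {
    // A single yielded item: the array itself (what the question's code does).
    yield [promiseA, promiseB];
}

function* yieldsItemsOneByOne() {
    // yield* delegates to the array, so each promise is its own yielded item.
    yield* [promiseA, promiseB];
}

Promise.all(yieldsWholeCollection()).then(r => console.log(r));
// logs [[Promise, Promise]] - the inner promises are not awaited

Promise.all(yieldsItemsOneByOne()).then(r => console.log(r));
// logs ["a", "b"] - each promise was awaited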
Related
I am using JavaScript.
My question comes from this scenario: I have a large array I am mapping through,
let myNewArray = myLargeArray.map(someFunction)
console.log(myNewArray)
Is it possible that the mapping might take too long and that undefined might be logged? So should I use async-await or is it only reserved for promises?
Is it possible that the mapping might take too long
"too long" is subjective. The time it takes won't have any impact on the value you end up with though.
that undefined might be logged?
map always returns an array, so no.
The array might contain undefined values though.
So should I use async-await or is it only reserved for promises?
You can only usefully await a promise.
map will return an array, so you can't usefully await it.
If someFunction returns a promise, then map will return an array of promises. You can wrap that with Promise.all, which returns a promise you could await if you wanted to log an array of resolved values instead of an array of promises.
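A short sketch of that last point, using a hypothetical async someFunction (the names here are illustrative only):

async function someFunction(item) {
    // pretend this does some asynchronous work
    return item * 2;
}

async function run(myLargeArray) {
    // map itself is synchronous: this is immediately an array of promises
    const promises = myLargeArray.map(someFunction);
    // Promise.all turns them into one promise for the array of resolved values
    const myNewArray = await Promise.all(promises);
    console.log(myNewArray); // e.g. [2, 4, 6] for input [1, 2, 3]
}

run([1, 2, 3]);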
Map is synchronous.
It accepts a callback function (someFunction) and always creates a new array.
The callback function (someFunction) is not event-driven.
It is applied once to every element of the array, in order.
You will always receive an array, never undefined.
The values inside the returned array, however, depend on the callback function you pass to map.
If someFunction returns nothing, you will get an array of undefined values.
If it returns a promise, you will get an array of promises, which you can then resolve with Promise.all.
I'm trying to understand how promises work. The general idea is quite clear, but currently I'm stuck on the all() method. I know it is used to make a promise for an array of other promises, which resolves when all promises in the array have resolved, or rejects as soon as any promise in the array rejects. Here is my code snippet:
var qu = require('q');
var proArr = [];

for (var i = 0; i < 4; i++) {
    var tmpDef = qu.defer();
    (function(index, tmpDef) {
        setTimeout(function() {
            console.log('Timeout ' + index + ' has triggered!');
            tmpDef.resolve();
        }, (i + 1) * 1000);
        proArr.push(tmpDef.promise);
    })(i, tmpDef);
}

qu.all(proArr).then(function() {
    console.log('All timeouts has passed with Q.all()!');
    return 'some result';
});

qu.defer().promise.all(proArr).then(function() {
    console.log('All timeouts has passed with promise.all()!');
    return 'some result';
});
For this snippet, the promise returned by qu.all() resolves once all the timeouts have triggered, but the promise returned by qu.defer().promise.all() stays in the pending state even after all the timeouts have triggered. So what is the method Promise.prototype.all() meant to be used for, and how does it differ from Q.all()?
I've also looked into the Q library source, and here is the code for the Promise.prototype.all() method:
Promise.prototype.all = function () {
    return all(this);
};
As far as I understand, this method calls Q.all() with the current promise instance as an argument, but why? Doesn't the Q.all() method have to accept an array of promises? I'd appreciate clarification of all these points.
Doesn't the Q.all() method have to accept an array of promises?
No, in fact the Q.all method can also take a promise for an array of promises. You can see that in the code as well: it calls Q.when on the input. This might seem a bit useless, but it's a more forgiving API and apparently simplifies the implementation of Promise.prototype.all.
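As a small illustration of that forgiving API (a sketch that assumes only the behaviour described above, namely that Q.all passes its input through Q.when first):

var Q = require('q');

var p1 = Q.delay(100).thenResolve(1);
var p2 = Q.delay(200).thenResolve(2);

// The usual case: a plain array of promises.
Q.all([p1, p2]).then(function (results) {
    console.log(results); // [1, 2]
});

// A promise *for* an array of promises is accepted as well.
Q.all(Q([p1, p2])).then(function (results) {
    console.log(results); // [1, 2]
});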
What is the difference between Q.all() and Promise.prototype.all()?
Let's get back to our simpler mental model. Q.all is a static function that takes an array of promises and returns you a promise for an array of all results.
The .all prototype method is simply a convenience. Instead of writing
….then(Q.all).…
you can use
….all().…
in a promise chain - these are exactly equivalent. Notice that the .all prototype method does not take any parameters - it gets the array from the promise it is called on.
a promise returned by the Q.defer().promise.all(proArr) method will stay in pending state even after all timeouts have triggered
Yes. That's for two reasons:
Q.defer().promise is a promise that never resolves (and since you've thrown away the deferred, you never can). The chain just doesn't even advance to the .all(…) invocation.
As established above, the prototype method you're calling here doesn't take any arguments. The proArr is simply ignored.
If you want to use it, you can use the following though:
Q(proArr).all().…
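So, wiring that back into the snippet from the question, the chain could read something like this (a sketch, not tied to a particular Q version):

Q(proArr).all().then(function () {
    console.log('All timeouts have passed with Q(proArr).all()!');
    return 'some result';
});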
I'm writing a nodejs thing, and trying the Pacta promise library for fun. Pacta's interface is "algebraic," but I don't have any experience with that paradigm.
I'd like to know what is the "Pacta way" to accomplish the same thing as
$.when.apply(undefined, arrayOfThings)
    .then(function onceAllThingsAreResolved(thing1Val, thing2Val, ...) {
        // code that executes once all things have been coerced to settled promises
        // and which receives ordered resolution values, either as
        // separate args or as a single array arg
    });
That is, given an array, an iterator function that returns a promise, and a callback function, I'd like to map the iterator onto the array and provide an array of the resolution values (or rejection reasons) to the callback once all the promises have been settled.
If there isn't an idiomatically algebraic way to express this, I'd be just as interested to know that.
EDIT: updated use of $.when to properly accommodate an array, per @Bergi.
Pacta's interface is "algebraic," but I don't have any experience with that paradigm.
ADTs (algebraic data types) are type-theory constructs that represent nested data types, like a Promise for Integer. They are heavily used in functional programming, a style in which you always know the types of your expressions and values. There are no opaque, implicit type coercions, only explicit ones.
This is completely contrary to jQuery's approach, where $.when() and .then() do completely different things based on the types (and number) of their arguments. Therefore, translating your code is a bit complicated. Admittedly, Pacta doesn't have the most useful implementation, so we have to use a few helper functions of our own to do this.
Assume you have an array of (multiple) promises, and your then callback takes the arguments and returns a non-promise value:
arrayOfPromises.reduce(function(arr, val) {
    return arr.append(val);
}, Promise.of([])).spread(function (…args) {
    // code that executes once all promises have been fulfilled
    // and which receives the resolution values as separate args
});
If your callback does not take multiple arguments, use map instead of spread:
arrayOfPromises.reduce(function(arrp, valp) {
    return arrp.append(valp);
}, Promise.of([])).map(function (arr) {
    // code that executes once all promises have been fulfilled
    // and which receives the resolution values as an array
});
If your callback does return a promise, use chain instead of map:
arrayOfPromises.reduce(function(arr, val) {
    return arr.append(val);
}, Promise.of([])).chain(function (arr) {
    // code that executes once all promises have been fulfilled
    // and which receives the resolution values as an array
});
If you don't know what it returns, use then instead of chain. If you don't know what it returns and want to get multiple arguments, use .spread(…).then(identity).
If your array contains promises mixed with plain values, use the following:
arrayOfThings.reduce(function(arrp, val) {
    var p = new Promise();
    Promise.resolve(p, val);
    return arrp.append(p);
}, Promise.of([])).…
If your array contains only a single or no (non-thenable) value, use
Promise.of(arrayOfThings[0]).…
If your array contains anything else, even $.when would not do what you expect.
Of course, promises that resolve with multiple values are not supported at all - use arrays instead. Also, your callback will only be called when all promises are fulfilled, not merely when they are settled, which is just how jQuery behaves.
I'm after a function that would return a resolved value of a promise. Failing gracefully is definitely a bonus, but it's an assumed precondition that when the function is called the promise is ready to be resolved.
While I'm working with webdriver.js promise implementation which allows the queue manipulations similar to below, I don't want to get too lost in semantics of queues/chains etc. For that reason alone, here is some pseudocode to cover what I'm trying to achieve:
var inputs = [...], outputs;

outputs = inputs.map(function(input) {
    // queue some async tasks to be performed with input
    queue.enqueue(...);
    // I can't return the *output* value here yet, naturally, so instead
    return promise;
});

// now I'll add another task to the same queue
// this means that by the time this task is run
// the async tasks above would have been executed
// and the promises would be "resolvable"... right?
queue.enqueue(function() {
    console.log(outputs); // > an array of promises
    console.log(doSomeMagic(outputs)); // > resolved values as needed <<<
});
NB: afaik Q.all() would not do what I'm after - it takes an array of promises and returns a promise of an array, not its resolved value. I'm only happy to be proved wrong.
For anyone else looking for an answer based on the title of the question: the following works in ES2017+ to take an array of promises and return an array of values:
var arrayOfValues = await Promise.all(arrayOfPromises)
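A minimal sketch of that in context; note that await only works inside an async function (or at module top level), and the promise array here is made up for the example:

async function main() {
    // Hypothetical promises, purely for illustration.
    const arrayOfPromises = [Promise.resolve(1), Promise.resolve(2), Promise.resolve(3)];

    // await unwraps the promise returned by Promise.all into an array of values.
    const arrayOfValues = await Promise.all(arrayOfPromises);
    console.log(arrayOfValues); // [1, 2, 3]
}

main();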
The only way to get the eventual value for a promise is with then. If a function performs work asynchronously, it must return a promise and under no circumstances can it return a plain value. To do so, it would necessarily have to block the thread of execution until the work completes, which is only possible with threads or fibers, which entrain the perils of deadlock and interleaving hazards.
As such, Q.all is in fact the method you need, except to follow up with then to get the eventual value.
Q.all(inputs.map(function (input) {
    return promiseForOutput; // however you go about this
}))
.then(function (outputs) {
    // at this event, outputs is an array of output values
});
There are ways to cheat, of course. promise.inspect() will return an object describing the state of the promise, like {state: "fulfilled", value: value} if it’s ready, or {state: "rejected", error} if it failed, or {state: "pending"}, if it is not yet ready. If, as you say, you are guaranteed that the outputs promise, returned by Q.all has been fulfilled, you can do this:
outputs = outputs.inspect().value
I don’t recommend this. The best way to know that a promise is resolved is to use then.
You could also just push values onto an outputs array of your making if you can also guarantee that all the outputs are ready through some external means.
var endResult = Q.defer();
var outputs = [];
inputs.forEach(function (input) {
    outputPromise.then(function (output) {
        outputs.push(output);
        check();
    }, endResult.reject);
});
check();
function check() {
    if (outputs.length === inputs.length) {
        // manipulate outputs directly, they are ready
        endResult.resolve();
    }
}
return endResult.promise;
The best means, however, is to just use Q.all(outputs).then to get an event that is guaranteed to be after all the outputs are ready.
Since you in general never know whether promises are resolved or not, you cannot simply transform them into a plain value. Q.all must return a promise since it cannot extract the values array from the async context. The only time you know that a promise has a value is in the success handler, and there you're getting the value anyway. You should not use another event system that tells you when a promise has settled - use the promise itself.
So instead of using queue.enqueue, just put Q.all(outputs).then(function(values){ /* do something */ }). However, if you cannot work around that, you might have a look at the promise inspect debugging method: _.pluck(_.invoke(outputs, "inspect"), "value"). But note that in that case it might be easier not to store the values in promises at all.
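A hedged sketch of that replacement, reusing outputs and doSomeMagic from the pseudocode above:

// Let Q.all deliver the settled values instead of hoping they are ready
// by the time a queued task runs.
Q.all(outputs).then(function (values) {
    console.log(values);              // the resolved values, in order
    console.log(doSomeMagic(values)); // work with plain values, not promises
});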
async vs. Q generally
I'm learning Node.js development, and trying to wrap my brain around strategies for managing asynchronous "callback hell". The two main strategies I've explored are Caolan McMahon's async module, and Kris Kowal's promise-based Q module.
Like many other people, I'm still struggling to understand when you should use one vs. the other. However, generally speaking I have found promises and Q-based code to be slightly more intuitive, so I have been moving in that direction.
Mapping/Concatenating collections generally
However, I'm still stuck using the async module's functions for managing collections. Coming from a Java and Python background, most of the time when I work with a collection, the logic looks like this:
Initialize a new empty collection, in which to store results.
Perform a for-each loop with the old collection, applying some logic to each element and pushing its result into the new empty collection.
When the for-each loop ends, proceed to use the new collection.
In client-side JavaScript, I've grown accustomed to using jQuery's map() function... passing in that step #2 logic, and getting the step #3 result as a return value. Feels like the same basic approach.
Mapping/Concatenating collections with async and Q
The Node-side async module has similar map and concat functions, but they don't return the concatenated result back at the original scope level. You must instead descend into the callback hell to use the result. Example:
var deferred = Q.defer();
...
var entries = [???]; // some array of objects with "id" attributes

async.concat(entries, function (entry, callback) {
    callback(null, entry.id);
}, function (err, ids) {
    // We now have the "ids" array, holding the "id" attributes of all items in the "entries" array.
    ...
    // Optionally, perhaps do some sorting or other post-processing on "ids".
    ...
    deferred.resolve(ids);
});
...
return deferred.promise;
Since my other functions are becoming promise-based, I have this code returning a promise object so it can be easily included in a then() chain.
Do I really need both?
The ultimate question that I'm struggling to articulate is: do I really need both async and Q in the code example above? I'm learning how to replace the async module's control flow with Q-style promise chains generally... but it hasn't yet "clicked" for me how to do mapping or concatenation of collections with a promise-based approach. Alternatively, I'd like to understand why you can't, or why it's not a good idea.
If async and Q are meant to work together as I am using them in the example above, then so be it. But I would prefer not to require the extra library dependency if I could cleanly use Q alone.
(Sorry if I'm missing something outrageously obvious. The asynchronous event-driven model is a very different world, and my head is still swimming.)
Do I really need both?
No. Mapping asynchronous iterators over a collection is quite simple with promises, but it requires two steps instead of one function call. First, the collection is mapped to an array of promises for the parallel iteration. Then, those promises are fed into Q.all to make one promise for the mapped collection. In contrast to async, the order of the result is guaranteed.
var entries = […]; // some array of objects with "id" attributes
var promises = entries.map(function(object) {
    return asyncPromiseReturningFunction(object);
}); // the anonymous wrapper might be omitted
return Q.all(promises);
For concat, you would have to append a
.then(function(results) {
    return Array.prototype.concat.apply([], results);
});
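Putting the two steps together, a sketch of how the earlier async.concat example could look with Q alone; getId here is a made-up stand-in for whatever promise-returning work produces each entry's id:

var Q = require('q');

// Hypothetical promise-returning function for the per-entry work.
function getId(entry) {
    return Q(entry.id);
}

function collectIds(entries) {
    // Map each entry to a promise, combine with Q.all,
    // then flatten the results as async.concat would.
    return Q.all(entries.map(getId))
        .then(function (results) {
            return Array.prototype.concat.apply([], results);
        });
}

collectIds([{ id: 1 }, { id: 2 }]).then(function (ids) {
    console.log(ids); // [1, 2]
});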