I am using JavaScript. My question comes from this scenario: I have a large array that I am mapping through:
let myNewArray = myLargeArray.map(someFunction)
console.log(myNewArray)
Is it possible that the mapping might take too long and that undefined might be logged? So should I use async/await, or is it only reserved for promises?
Is it possible that the mapping might take too long
"too long" is subjective. The time it takes won't have any impact on the value you end up with though.
that undefined might be logged?
map always returns an array, so no.
The array might contain undefined values though.
So should I use async/await, or is it only reserved for promises?
You can only usefully await a promise.
map will return an array, so you can't usefully await it.
If someFunction returns a promise, then map will return an array of promises. You could wrap that array with Promise.all, which returns a promise that you could await if you wanted to log an array of resolved values instead of an array of promises.
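If that is the situation, a minimal sketch might look like this (assuming someFunction is asynchronous, i.e. returns a promise):

// a minimal sketch; someFunction here is a stand-in async mapper
const someFunction = async (item) => item * 2;

async function logMapped(myLargeArray) {
    const promises = myLargeArray.map(someFunction); // an array of promises
    const myNewArray = await Promise.all(promises);  // an array of resolved values
    console.log(myNewArray);
}

logMapped([1, 2, 3]); // logs [2, 4, 6]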
Map is synchronous.
It accepts a callback function (someFunction) and always creates a new array.
The callback function (someFunction) is not event-driven.
It is applied once to every element in the array, in order.
You will always receive an array and not undefined.
But values inside the returned array will depend on the callback function you provide to map.
If someFunction returns nothing, you will get an array of undefined values.
If it returns a promise, you will get an array of promises, which can be resolved with Promise.all.
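A toy illustration of both cases:

// callback returns nothing -> an array of undefined values
console.log([1, 2, 3].map(function (n) {})); // [undefined, undefined, undefined]

// callback returns a promise -> an array of promises, resolvable with Promise.all
Promise.all([1, 2, 3].map(function (n) { return Promise.resolve(n * 2); }))
    .then(function (values) { console.log(values); }); // [2, 4, 6]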
Can yield be evaluated even if it is not followed by .next()?
I have a piece of code that saves data to Firebase. Since the code was generating data contention errors, I have wrapped the generator function that writes to a collection in Promise.all(). It works as expected, it's faster than the previous code I wrote, and I haven't encountered any data contention. The file I uploaded has the same id (100 of the same id), so it changes the same collection over time.
My question is: how does it happen that yield still works even though I did not call .next()?
function * sampleGenerator(payload) {
    yield utils.toCollection(payload.id)
}

async function save() {
    for (let i = 0; i < data.length; i++) {
        ... code here
        const payload = {id: "", data: "", others: ""}
        await Promise.all(sampleGenerator(payload))
    }
}
All data is saved in Firebase as expected here.
The Generator returns an iterable, and Promise.all accepts an iterable, so this works...but it might not be doing what you expect.
Generator functions, once called, return an iterable object, as described on MDN:
The Generator object is returned by a generator function and it conforms to both the iterable protocol and the iterator protocol.
Promise.all accepts an iterable object, as described on MDN:
The Promise.all() method takes an iterable of promises as an input, and returns a single Promise that resolves to an array of the results of the input promises.
However, you should be aware of these aspects:
Because Promise.all will internally and synchronously turn your generator-based iterable into an array, it's not really more useful than just having a simple function that returns a collection.
function * sampleGenerator(payload) currently yields your collection as a single yielded item, not the yield* syntax that would yield items from your collection (generated here by utils.toCollection) one by one. If your sampleGenerator collection contained Promises instead, your Promise.all would receive something like Promise.all([[promise1, ...]]) (i.e. an array Promise.all creates that holds a single element, the collection you yield) and not wait for the results. If you yield Promises one by one, or yield* a collection of Promises, you'd see Promise.all([promise1, ...]) and correctly get back [result1, ...].
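For illustration, a minimal sketch of that difference (the promises here are just placeholders):

function * yieldsWholeArray() {
    // yields ONE item: the array itself
    yield [Promise.resolve(1), Promise.resolve(2)];
}

function * yieldsEachPromise() {
    // yields the promises one by one
    yield* [Promise.resolve(1), Promise.resolve(2)];
}

Promise.all(yieldsWholeArray()).then(r => console.log(r));  // logs [[Promise, Promise]] -- nothing is awaited
Promise.all(yieldsEachPromise()).then(r => console.log(r)); // logs [1, 2]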
One typical way of combining generators and promises is with async generators, which might have more sequential ordering than you want here, but is worth knowing for the sake of being able to answer "why not".
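A rough sketch of that async-generator shape, for comparison (utils.toCollection is taken from the question; payloads and run are illustrative names):

async function* saveAll(payloads) {
    for (const payload of payloads) {
        // each write is awaited before the next one starts, so this is strictly sequential
        yield await utils.toCollection(payload.id);
    }
}

async function run(payloads) {
    // consume one result at a time with for await...of
    for await (const result of saveAll(payloads)) {
        console.log(result);
    }
}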
If your await Promise.all is being returned from save, and you don't await anything else inside save, then the async aspect of the function isn't getting you much aside from documentation (and putting save on the stack trace). It would be mostly equivalent just to return the Promise.all promise directly and remove the async keyword from the function.
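As a sketch of that equivalence for a single call (saveOne is an illustrative name; this does not apply if the loop needs to await each iteration before starting the next):

// if the only awaited thing is the Promise.all being returned,
// returning the promise directly is roughly equivalent
function saveOne(payload) {
    return Promise.all(sampleGenerator(payload)); // callers can still await this
}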
Please see both answers. (Thanks JLRishe and AngularInDepth.com)
AngularInDepth also added an interesting link about interceptors and HttpClient mechanics in his post:
https://blog.angularindepth.com/insiders-guide-into-interceptors-and-httpclient-mechanics-in-angular-103fbdb397bf
Using RxJS and HttpClientModule, when calling .get() of HttpClient, what are the return values in this chain?
constructor(private _http: HttpClient) {
    ...
    this._http.get(url).map(res => res.json()).toPromise() // how does this chaining work?
}
As I understand it, _http.get(url) returns an observable, but we are chaining a .map() onto the .get() method.
I am also guessing that the following .map() receives the http-response object, which we promptly convert to a JSON object. Finally, the whole observable is converted to a promise with .toPromise(), which doesn't seem to care what .map() is doing to the .get().
So far I understand that _http.get() returns an observable and will also provide an http-response object (to the next or callback function).
Is the .map() function acting just on the response object, without changing the return type of the whole _http.get() call (i.e. the return type remains an observable)? Is my assumption correct?
In general, if the above is true, how does JS figure out which kind of chaining function works on the callback data and which works on the return type? How do we implement our own custom function so that our chained function strictly works on the callback data (correct me if I am not using the right term) OR on the return type?
Can we add .map() to all async functions (promises, observables, callbacks) to work on their callback data? Likewise, does .toPromise() work on all async functions to convert them to promises?
So far I understand that _http.get() returns an observable and will also provide an http-response object (to the next or callback function)
_http.get() returns an observable of http-response objects. You can think of an observable as a window on a stream of values, and in this case that stream would be just a single value (the result)
Is the .map() function acting just on the response object, without changing the return type of the whole _http.get() call?
Yes. It creates a new observable in which the values from the original observable have been transformed in some way. In this case, that transformation is res => res.json().
In general, if the above is true, how does JS figure out which kind of chaining function works on the callback data and which works on the return type? How do we implement our own custom function so that our chained function strictly works on the callback data (correct me if I am not using the right term) OR on the return type?
I don't really understand this question. The function passed to map needs to take the observable's contained type as input, and produce some type as output.
Can we add .map() to all async functions (promises, observables, callbacks) to work on their callback data? Likewise, does .toPromise() work on all async functions to convert them to promises?
You can use .map() on any observable, and you can use .toPromise() on any observable. Bear in mind that .toPromise() creates a promise that resolves with the last value the observable emits before completing, so it's typically suited to observables that produce a single value and then complete (as this one does).
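A minimal sketch with a standalone observable (not HttpClient; note that the dot-chained .map() in the question comes from an older RxJS style, while current RxJS uses pipe()):

import { of } from 'rxjs';
import { map } from 'rxjs/operators';

const source$ = of({ body: '{"ok":true}' });                    // emits one value, then completes
const parsed$ = source$.pipe(map(res => JSON.parse(res.body)));  // still an observable

parsed$.toPromise().then(value => console.log(value));           // { ok: true }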
So far I understand that _http.get() returns an observable and will also provide an http-response object (to the next or callback function)
The signature for the request method (which is what is called by get internally) is:
request(...): Observable<any>
which means that it returns an observable that emits items of type any. One of the value types it emits is HttpResponse. However, depending on the observe parameter, it can return text (as in response.body) or parsed JSON (as in JSON.parse(response.body)).
Is the .map() function acting just on the response object, without changing the return type of the whole _http.get() call (i.e. the return type remains an observable)? Is my assumption correct?
map operates on the values emitted by the observable. The function signature doesn't explicitly specify these types; instead it uses the generic type any.
Can we add .map() to all async functions (promises, observables, callbacks)?
Here map is specific to the RxJS library. It's not Array.prototype.map.
In general, if the above is true, how does JS figure out which kind of chaining function works on the callback data and which works on the return type?
You need to read about RxJS operators and lettable operators here. RxJS will execute each operator one after the other. All operators work on the data emitted by an observable; in the case of HttpClient it's either HttpResponse, response.body, or JSON.
You can read more about how observable chain works in HttpClient in the article:
Insider’s guide into interceptors and HttpClient mechanics in Angular
I'm trying to understand how promises work, so the general idea is quite clear, but currently I'm stuck with the all() method. I know it is used to make a promise for an array of other promises, which will be resolved when all promises from the array have resolved, or rejected when any of the promises from the array is rejected. Here is my code snippet:
var qu = require('q');
var proArr = [];

for (var i = 0; i < 4; i++) {
    var tmpDef = qu.defer();
    (function(index, tmpDef) {
        setTimeout(function() {
            console.log('Timeout ' + index + ' has triggered!');
            tmpDef.resolve();
        }, (i + 1) * 1000);
        proArr.push(tmpDef.promise);
    })(i, tmpDef);
}

qu.all(proArr).then(function() {
    console.log('All timeouts has passed with Q.all()!');
    return 'some result';
});

qu.defer().promise.all(proArr).then(function() {
    console.log('All timeouts has passed with promise.all()!');
    return 'some result';
});
For this snippet, the promise returned by the qu.all() method will be resolved when all timeouts have triggered, but the promise returned by the qu.defer().promise.all() method will stay in the pending state even after all timeouts have triggered. So what is the method Promise.prototype.all() to be used for? And how does it differ from the Q.all() method?
Also, I've looked in the Q library sources, and here is the code for the Promise.prototype.all() method:
Promise.prototype.all = function () {
    return all(this);
};
As I understand, this method calls Q.all() with the current promise instance as an argument, but why? Doesn't the method Q.all() have to accept an array of promises? I'd be very appreciative of a clarification of all these points.
Doesn't the method Q.all() have to accept an array of promises?
No, in fact the Q.all method can also take a promise for an array of promises. You can see that in the code as well; it does call Q.when on the input. This might seem a bit useless, but it's a more forgiving API and apparently simplifies the implementation of Promise.prototype.all.
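A quick sketch of that more forgiving input handling (a toy example using the qu variable from the question; both calls should log the same thing):

var p1 = qu(1), p2 = qu(2);

qu.all([p1, p2]).then(function (values) { console.log(values); });     // [1, 2] -- plain array of promises
qu.all(qu([p1, p2])).then(function (values) { console.log(values); }); // [1, 2] -- promise for an array of promises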
What is the difference between Q.all() and Promise.prototype.all()?
Let's get back to our simpler mental model. Q.all is a static function that takes an array of promises and returns you a promise for an array of all results.
The .all prototype method is simply convenience. Instead of writing
….then(Q.all).…
you can use
….all().…
in a promise chain - these are exactly equivalent. Notice that the .all prototype method does not take any parameters - it gets the array from the promise it is called on.
the promise returned by the Q.defer().promise.all(proArr) method will stay in the pending state even after all timeouts have triggered
Yes. That's for two reasons:
Q.defer().promise is a promise that never resolves (and since you've thrown away the deferred, you never can). The chain just doesn't even advance to the .all(…) invocation.
As established above, the prototype method you're calling here doesn't take any arguments. The proArr is simply ignored.
If you want to use it, you can use the following though:
Q(proArr).all().…
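Put together, the two forms below should behave the same (a sketch reusing the question's qu and proArr):

// static form: Q.all takes the array directly
qu.all(proArr).then(function (results) {
    console.log('via Q.all:', results);
});

// prototype form: wrap the array in a promise, then call .all() with no arguments
qu(proArr).all().then(function (results) {
    console.log('via .all():', results);
});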
I'm writing a nodejs thing, and trying the Pacta promise library for fun. Pacta's interface is "algebraic," but I don't have any experience with that paradigm.
I'd like to know what is the "Pacta way" to accomplish the same thing as
$.when.apply(undefined, arrayOfThings)
    .then(function onceAllThingsAreResolved(thing1Val, thing2Val, ...) {
        // code that executes once all things have been coerced to settled promises
        // and which receives ordered resolution values, either as
        // separate args or as a single array arg
    });
That is, given an array, an iterator function that returns a promise, and a callback function, I'd like to map the iterator onto the array and provide an array of the resolution values (or rejection reasons) to the callback once all the promises have been settled.
If there isn't an idiomatically algebraic way to express this, I'd be just as interested to know that.
EDIT: updated use of $.when to properly accommodate an array, per #Bergi.
Pacta's interface is "algebraic," but I don't have any experience with that paradigm.
ADTs (algebraic data types) are type-theory constructs that represent nested data types, like a Promise for an Integer. They are heavily used in functional programming, a flavour where you always know the types of your expressions and values. There are no opaque, implicit type coercions, only explicit ones.
This is completely contrary to jQuery's approach, where $.when() and .then() do completely different things based on the types (and number) of its arguments. Therefore, translating your code is a bit complicated. Admittedly, Pacta doesn't have the most useful implementation, so we have to use some own helper functions to do this.
Assume you have an array of (multiple) promises, and your then callback takes the arguments and returns a non-promise value:
arrayOfPromises.reduce(function(arr, val) {
    return arr.append(val);
}, Promise.of([])).spread(function (…args) {
    // code that executes once all promises have been fulfilled
    // and which receives the resolution values as separate args
});
If your callback does not take multiple arguments, use map instead of spread:
arrayOfPromises.reduce(function(arrp, valp) {
    return arrp.append(valp);
}, Promise.of([])).map(function (arr) {
    // code that executes once all promises have been fulfilled
    // and which receives the resolution values as an array
});
If your callback does return a promise, use chain instead of map:
arrayOfPromises.reduce(function(arr, val) {
    return arr.append(val);
}, Promise.of([])).chain(function (arr) {
    // code that executes once all promises have been fulfilled
    // and which receives the resolution values as an array
});
If you don't know what it returns, use then instead of chain. If you don't know what it returns and want to get multiple arguments, use .spread(…).then(identity).
If your array contains promises mixed with plain values, use the following:
arrayOfThings.reduce(function(arrp, val) {
    var p = new Promise();
    Promise.resolve(p, val);
    return arrp.append(p);
}, Promise.of([])).…
If your array contains only a single or no (non-thenable) value, use
Promise.of(arrayOfThings[0]).…
If your array contains anything else, even $.when would not do what you expect.
Of course, promises that resolve with multiple values are not supported at all - use arrays instead. Also, your callback will only be called when all promises are fulfilled, not when they're settled, just as jQuery does.
I'm after a function that would return a resolved value of a promise. Failing gracefully is definitely a bonus, but it's an assumed precondition that when the function is called the promise is ready to be resolved.
While I'm working with the webdriver.js promise implementation, which allows queue manipulations similar to the below, I don't want to get too lost in the semantics of queues/chains etc. For that reason alone, here is some pseudocode to cover what I'm trying to achieve:
var inputs = [...], outputs;

outputs = inputs.map(function(input) {
    // queue some async tasks to be performed with input
    queue.enqueue(...);
    // I can't return the *output* value here yet, naturally, so instead
    return promise;
});

// now I'll add another task to the same queue
// this means that by the time this task is run
// the async tasks above would have been executed
// and the promises would be "resolvable"... right?
queue.enqueue(function() {
    console.log(outputs); //> an array of promises
    console.log(doSomeMagic(outputs)); //> resolved values as needed <<<
});
NB: afaik Q.all() would not do what I'm after - it takes an array of promises and returns a promise of an array, not its resolved value. I'm only happy to be proved wrong.
For anyone else looking for an answer based on the title of the question the following works with ES 2017+ to take an array of promises and return an array of values:
var arrayOfValues = await Promise.all(arrayOfPromises)
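For example, inside an async function (the function and promise names here are illustrative):

async function getValues(arrayOfPromises) {
    // rejects if any input promise rejects, otherwise resolves to an array of values
    var arrayOfValues = await Promise.all(arrayOfPromises);
    return arrayOfValues;
}

getValues([Promise.resolve(1), Promise.resolve(2)]).then(function (values) {
    console.log(values); // [1, 2]
});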
The only way to get the eventual value for a promise is with then. If a function performs work asynchronously, it must return a promise and under no circumstances can it return a plain value. To do so, it would necessarily have to block the thread of execution until the work completes, which is only possible with threads or fibers, which entrain the perils of deadlock and interleaving hazards.
As such, Q.all is in fact the method you need, except to follow up with then to get the eventual value.
Q.all(inputs.map(function (input) {
    return promiseForOutput; // however you go about this
}))
.then(function (outputs) {
    // at this event, outputs is an array of output values
});
There are ways to cheat, of course. promise.inspect() will return an object describing the state of the promise, like {state: "fulfilled", value: value} if it's ready, {state: "rejected", error: error} if it failed, or {state: "pending"} if it is not yet ready. If, as you say, you are guaranteed that the outputs promise returned by Q.all has been fulfilled, you can do this:
outputs = outputs.inspect().value
I don’t recommend this. The best way to know that a promise is resolved is to use then.
You could also just push values onto an outputs array of your making if you can also guarantee that all the outputs are ready through some external means.
var endResult = Q.defer();
var outputs = [];

inputs.forEach(function (input) {
    var outputPromise = getOutputPromise(input); // hypothetical: however you obtain a promise for this input's output
    outputPromise.then(function (output) {
        outputs.push(output);
        check();
    }, endResult.reject);
});
check();

function check() {
    if (outputs.length === inputs.length) {
        // manipulate outputs directly, they are ready
        endResult.resolve();
    }
}

return endResult.promise;
The best means, however, is to just use Q.all(outputs).then to get an event that is guaranteed to be after all the outputs are ready.
Since you in general never know whether promises are resolved or not, you cannot simply transform them into a plain value. Q.all must return a promise since it cannot extract the values array from the async context. The only time you know that a promise has a value is in the success handler, and there you're getting the value anyway. You should not use another event system that tells you when a promise has settled - use the promise itself.
So instead of using queue.enqueue, just put Q.all(outputs).then(function(values) { /* do something */ }). However, if you cannot work around that, you might have a look at the promise inspect debugging method: _.pluck(_.invoke(outputs, "inspect"), "value"). But notice that in that case it might be easier not to store the values in promises at all.