I realized when I do something like this:
for (const entity of someArr) {
    console.log('start now!')
    await doSomeAsycAction()
    console.log('waited X secs!')
}
It prints out:
start now!
waited X secs!
start now!
waited X secs!
...
But when I use map:
arr.map(async entity => {
    console.log('start now!')
    await doSomeAsycAction()
    console.log('waited X secs!')
})
It prints out:
start now!
start now!
start now!
...
waited X secs!
waited X secs!
waited X secs!
...
Can someone explain why this is the case?
The difference between the two flows is that the first one (using for, for..in or for..of) runs the loop iterations sequentially, while the other (using map, filter, reduce, forEach and so on) runs them concurrently (kind of), at least when an async callback is involved.
In for loops, the next iteration must wait for the previous one to finish. This lets you perform async operations in one iteration that the next iteration depends on.
In contrast, the array methods run each iteration independently, so you can't rely on other iterations inside the current one. These functions receive a function as an argument and execute it immediately for every item in the array.
Think of every iteration as an independent promise execution. When running an async function, the await keyword signals that this operation might take a while (i.e. I/O, DB calls, network operations...) and lets the code outside of the currently executing function keep going (it resumes later on, after the async call returns). The map function sees that the current iteration is busy and moves on to the next one. At some point in the future it resumes and executes console.log('waited X secs!').
You can simulate the same behavior of the async executions with a for loop this way (it may help demonstrate the difference):
for (const entity of someArr) {
    (async () => {
        console.log('start now!')
        await doSomeAsycAction()
        console.log('waited X secs!')
    })()
}
The async-await syntax works per function scope, and map defines a new function scope (the callback passed as its argument), just like the anonymous function that gets executed on each iteration in my example. Hope it helps.
One important thing to notice is that each iteration of the map doesn't return the mapped value you were expecting, but a promise that will resolve with that value. So if you try to rely on one of the mapped array values, you must add an await right before it, otherwise the value would still be a promise. Take a look at the following example:
let arr = [1];
arr = arr.map(async entity => incrementAsync(entity));
console.log(arr[0]) // would print an unresolved Promise object
console.log(await arr[0]) // would print 2
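If you need all of the resolved values at once, one option (a minimal sketch, reusing the incrementAsync from above) is to wrap the mapped array in Promise.all and await that:
const promises = [1, 2, 3].map(async entity => incrementAsync(entity));
// Promise.all resolves once every mapped promise has resolved,
// and it keeps the values in the original order
const results = await Promise.all(promises);
console.log(results); // [2, 3, 4]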
Basic array prototype loop functions like forEach, map, filter, find etc. don't wait between iterations; their basic behavior is to iterate without awaiting anything. If you want each iteration to wait for its async work, use a plain for...of loop instead, for example (shouldSkip and handleEvent stand in for your own logic):
for (const event of events) {
    if (shouldSkip(event)) {
        continue;
    }
    await handleEvent(event); // the next iteration only starts after this resolves
}
Whenever I use a method on WebDriverIO's browser object, it completely breaks out of whatever loop I'm in.
The debugger confirms this. It doesn't execute any other code in the rest of this loop. The loop should iterate 51 times, which it does, but it never executes the rest of the code after the browser method call.
Here is the code:
await redirects.map(async (obj) => {
    iterator7 += 1;
    console.log('In map method...');
    giveMeURLString = await browser.url(obj.Origin); // Why does this return / reloop????
    // Also why is giveMeURLString populated with undefined
    iterator8 += 1;
    console.log('I should be able to see this...')
})
You can also run the test yourself here:
https://github.com/DavidMLink/WebDriverIOBug
Why is it prematurely exiting my loops?
It was a problem with async/await. Using a simple "synchronous" loop fixes it: any type of for loop, like "for of", "for in" or a plain "for", will do the trick.
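As a rough sketch, the map above could be rewritten with for...of like this (same redirects array and browser object as in the question):
for (const obj of redirects) {
    console.log('In for...of loop...');
    // each navigation finishes before the next iteration starts
    await browser.url(obj.Origin);
    console.log('I should be able to see this...');
}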
I'm trying to make some checks before saving an array of objects (objects[]) to the DB (mongoDB using mongoose):
Those objects are already sorted by date, so objects[0].date is lower than objects[1].date.
Each object should check that the last related saved object has a different value (to avoid saving the same info two times). This means I have to query the DB before each save to make that check, AND each of these objects MUST be stored in order, so that the check is made against the right object. If objects are not stored in order, the last related saved object might not be the correct one.
In-depth explanation:
An HTTP request is sent to an API. It returns an array of objects (sorted by date) that I want to process and save in my Mongo DB (using mongoose). I have to iterate through all these objects and, for each:
Look for the previous related object stored in the DB (which COULD BE one from that same array).
Check some values between the 'already stored' and the object to save to evaluate if new object must be saved or could be discarded.
Save it or discard it, and then jump to next iteration.
It's important to wait each iteration to finish because:
Items in the array MUST be stored in the DB in order: first those with a lower date, because each could be modified by some object stored later with a higher date.
If the next iteration starts before the previous one has finished, the query that searches for the previous object might not find it, because it hasn't been stored yet.
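Roughly, the flow I'm after looks like this (just a sketch with a placeholder Entry model and check, not my real schema):
for (const obj of objects) {
    // find the last related object already stored (placeholder query)
    const previous = await Entry.findOne({ key: obj.key }).sort({ date: -1 });
    // only save when the value actually changed (placeholder check)
    if (!previous || previous.value !== obj.value) {
        await new Entry(obj).save();
    }
    // the next iteration should only start once this one has finished
}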
Already tried:
Using promises or async/await in forEach/for loops only makes each iteration async, but it still launches all iterations at once.
I've tried using async/await functions inside forEach/for loops, even creating my own asyncForEach function as shown below, but none of this has worked:
Array.prototype.asyncForEach = function(fn) {
    return this.reduce(
        (promise, n) => promise.then(() => fn(n)),
        Promise.resolve()
    );
};
Test function:
let testArray = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
testArray.asyncForEach(function(element) {
    setTimeout(() => {
        console.log(element);
    }, Math.random() * 500);
});
The provided example should show the numbers in order in every case. It's not a problem if the internal function (setTimeout in the example) has to return a promise.
What I think I need is a loop that waits some function/promise between iterations, and only starts the next iteration when the first is already finished.
How could I do that? Thank you in advance!
const myArray = ['a','b','c','d'];

async function wait(ms) { // comment 3
    return new Promise(resolve => setTimeout(resolve, ms));
}

async function doSomething() {
    await myArray.reduce(async (promise, item) => {
        await promise; // comment 2
        await wait(1000);
        // here we could await something else that is async like DB call
        document.getElementById('results').append(`${item} `);
    }, Promise.resolve()); // comment 1
}

setTimeout(() => doSomething(), 1000);

<div id="results">Starting in 1 second <br/></div>
You can also use reduce and async await which you already said you've tried.
Basically, if you read how reduce works you can see that it accepts 2 parameters, the first being the callback to execute for each step and the second an optional initial value.
In the callback, the first argument is an accumulator, which receives whatever the previous step returned (or the optional initial value on the first step).
1) You give an initial value of Promise.resolve() so that your first step can start.
2) Because of the await promise you will never go into the next step until the previous one has finished, since that is the accumulator value from the previous step, which is a promise because the callback is async. We are not resolving the promise per se here, but as soon as the previous step finishes, it resolves implicitly and we move on to the next step.
3) You can add, for example, await wait(30) to be sure that you are throttling the Ajax requests and not sending too many requests to 3rd party APIs, since then there is no way that you will send more than 1000/30 (roughly 33) requests per second, even if your code executes really fast on your machine.
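For example, point 3 in practice might look like this (inside an async function, reusing the wait helper from above; userIds and fetchUser are placeholders for your real data and API call):
await userIds.reduce(async (promise, id) => {
    await promise;       // wait for the previous iteration to finish
    await wait(30);      // throttle: at most ~33 requests per second
    await fetchUser(id); // placeholder for the real async request
}, Promise.resolve());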
Hm, ok, I am not 100% sure I understand your question the right way. But if you want to perform an array operation that awaits your async logic for each item, you can do it as follows:
async function loadAllUsers() {
    const test = [1, 2, 3, 4];
    const users = [];
    for (const index in test) {
        // make some magic, transform the data, or await a real async call here
        const user = await Promise.resolve(test[index]);
        users.push(user);
    }
    return users;
}
Then you can simply invoke this function with "await". I hope that helps you.
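For example (from inside another async function):
const users = await loadAllUsers();
console.log(users); // [1, 2, 3, 4]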
In your asyncForEach function the reduce chain waits on whatever fn(n) returns, but setTimeout doesn't return a Promise. So if you convert your setTimeout into a Promise and return it, it will work as expected.
Here is the modified code:
testArray.asyncForEach(function(element) {
    return new Promise((resolve, reject) => {
        setTimeout(() => {
            console.log(element);
            return resolve(element)
        }, Math.random() * 500);
    })
});
I constantly run into problems with this pattern with callbacks inside loops:
while(input.notEnd()) {
    input.next();
    checkInput(input, (success) => {
        if (success) {
            console.log(`Input ${input.combo} works!`);
        }
    });
}
The goal here is to check every possible value of input, and display the ones that pass an asynchronous test after confirmed. Assume the checkInput function performs this test, returning a boolean pass/fail, and is part of an external library and can't be modified.
Let's say input cycles through all combinations of a multi-code electronic jewelry safe, with .next incrementing the combination, .combo reading out the current combination, and checkInput asynchronously checking if the combination is correct. The correct combinations are 05-30-96, 18-22-09, 59-62-53, 68-82-01 and 85-55-85. What you'd expect to see as output is something like this:
Input 05-30-96 works!
Input 18-22-09 works!
Input 59-62-53 works!
Input 68-82-01 works!
Input 85-55-85 works!
Instead, because by the time the callback is called, input has already advanced an indeterminate number of times and the loop has likely already terminated, you're likely to see something like the following:
Input 99-99-99 works!
Input 99-99-99 works!
Input 99-99-99 works!
Input 99-99-99 works!
Input 99-99-99 works!
If the loop has terminated, at least it will be obvious something is wrong. If the checkInput function is particularly fast, or the loop particularly slow, you might get random outputs depending on where input happens to be at the moment the callback checks it.
This is a ridiculously difficult bug to track down if you find your output is completely random, and the hint for me tends to be that you always get the expected number of outputs, they're just wrong.
This is usually when I make up some convoluted solution to try to preserve or pass along the inputs, which works if there is a small number of them, but really doesn't when you have billions of inputs, of which a very small number are successful (hint, hint, combination locks are actually a great example here).
Is there a general purpose solution here, to pass the values into the callback as they were when the function with the callback first evaluated them?
If you want to iterate one async operation at a time, you cannot use a while loop. Asynchronous operations in Javascript are NOT blocking. So, what your while loop does is run through the entire loop calling checkInput() on every value and then, at some future time, each of the callbacks gets called. They may not even get called in the desired order.
So, you have two options here depending upon how you want it to work.
First, you could use a different kind of loop that only advances to the next iteration of the loop when the async operation completes.
Or, second, you could run them all in parallel like you were doing and capture the state of your object uniquely for each callback.
I'm assuming that what you probably want to do is to sequence your async operations (first option).
Sequencing async operations
Here's how you could do that (the recursive pattern itself works in ES5 too, though the snippet uses ES6 syntax):
function next() {
    if (input.notEnd()) {
        input.next();
        checkInput(input, success => {
            if (success) {
                // because we are still on the current iteration of the loop
                // the value of input is still valid
                console.log(`Input ${input.combo} works!`);
            }
            // do next iteration
            next();
        });
    }
}

next();
Run in parallel, save relevant properties in local scope in ES6
If you wanted to run them all in parallel like your original code was doing, but still be able to reference the right input.combo property in the callback, then you'd have to save that property in a closure (the 2nd option above). let makes this fairly easy because it is separately block scoped for each iteration of your while loop, so it retains its value for when the callback runs and is not overwritten by other iterations of the loop (this requires ES6 support for let):
while(input.notEnd()) {
    input.next();
    // let makes a block scoped variable that will be separate for each
    // iteration of the loop
    let combo = input.combo;
    checkInput(input, (success) => {
        if (success) {
            console.log(`Input ${combo} works!`);
        }
    });
}
Run in parallel, save relevant properties in local scope in ES5
In ES5, you could introduce a function scope to solve the same problem that let does in ES6 (make a new scope for each iteration of the loop):
while(input.notEnd()) {
    input.next();
    // create a function scope (IIFE) to save the value separately for each
    // iteration of the loop
    (function() {
        var combo = input.combo;
        checkInput(input, function(success) {
            if (success) {
                console.log('Input ' + combo + ' works!');
            }
        });
    })();
}
You could use the async/await feature for asynchronous calls; this would let you wait for the checkInput method to finish inside the loop.
You can read more about async await here
I believe the snippet below achieves what you are after; I created a MockInput function that mocks the behaviour of your input. Note the async and await keywords in the doAsyncThing function and keep an eye on the console when running it.
Hope this clarifies things.
function MockInput() {
    this.currentIndex = 0;
    this.list = ["05-30-96", "18-22-09", "59-62-53", "68-82-01", "85-55-85"];
    this.notEnd = function() {
        return this.currentIndex <= 4;
    };
    this.next = function() {
        this.currentIndex++;
    };
    this.combo = function() {
        return this.list[this.currentIndex];
    }
}

function checkInput(input) {
    return new Promise(resolve => {
        setTimeout(() => {
            var isValid = input.currentIndex % 2 > 0; // 'random' true or false
            resolve( `Input ${input.currentIndex} - ${input.combo()} ${isValid ? 'works!' : 'did not work'}` );
        }, 1000);
    });
}

async function doAsyncThing() {
    var input = new MockInput();
    while(input.notEnd()) {
        var result = await checkInput(input);
        console.log(result);
        input.next();
    }
    console.log('Done!');
}

doAsyncThing();
So there are some ways to stop a Generator in a for...of loop, but how does break send a signal to the Generator (in comparison with return in for...of)?
Please consider the code below.
As an example, the following code just increases a value from 1 to 10, pausing and resuming in between.
function *Generator() {
    try {
        var nextValue;
        while (true) {
            if (nextValue === undefined) {
                nextValue = 1;
            }
            else {
                nextValue++;
            }
            yield nextValue;
        }
    }
    // cleanup clause
    finally {
        console.log( "finally has been reached." );
    }
}
It loops over it 10 times using for...of:
var it = Generator(); // it gets Generator's iterator
for (var v of it) {
    console.log(v);
    if (v > 9) {
        //it.return("stop"); // this is another tool for stopping, but it doesn't stop immediately.
        break;
        console.log("it won't run"); // this line won't run
    }
}
When it.return() is used, by the way, the mechanism is clear (it is the main object and has the control), but what about break?
Iterable objects like your it generator object have a property with the key Symbol.iterator that is a function returning an iterator. Iterators are required to have a .next() method to advance from one item to the next. They can also optionally have a .return() method, which is called when you break, return, or throw, causing the for..of to stop before it runs to completion. So in your case, break; will automatically call it.return().
The other side of it is that on an ES6 generator, .next() makes it resume execution at the currently paused yield, and .return() makes it act like the yield is a return statement, so break inside the loop causes yield nextValue; to behave like return;, which exits the generator and triggers the finally block.
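To see the same mechanism without a generator, here is a small sketch (not from the question) of a hand-rolled iterable whose return() logs when the loop exits early:
const iterable = {
    [Symbol.iterator]() {
        let n = 0;
        return {
            next() {
                n++;
                return { value: n, done: false };
            },
            return() {
                // called automatically by break, return or throw inside for..of
                console.log('return() was called');
                return { done: true };
            }
        };
    }
};

for (const v of iterable) {
    if (v > 2) break; // logs "return() was called"
}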
how does break send a signal to the Generator?
The loop will call the IteratorClose operation, which basically amounts to invoking the iterator's .return() method with no arguments if the iterator object has such a method - which generators do.
This also happens when a throw or return statement in the loop body is evaluated.
When it.return() is used by the way, the implementation is clear
…but horrible. As you found out, it doesn't stop immediately. That's because the method call just advances the generator and gets you some result back from it, but it has nothing to do with your loop. The loop body will just continue to be executed until the loop condition is evaluated again, which then will check the iterator and notice that it's already done.
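A minimal sketch of that behaviour, reusing the Generator from the question: the statements after it.return() in the loop body still run, and the loop only stops at the next iteration check.
var it2 = Generator();
for (var v of it2) {
    console.log(v);
    if (v > 9) {
        it2.return("stop");        // the generator finishes here and its finally block runs
        console.log("still runs"); // the rest of the loop body still executes
    }
}                                  // the loop exits on the next call to .next(), which reports done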
The following example code confuses me...
"use strict";
var filesToLoad = [ 'fileA','fileB','fileC' ];
var promiseArray = [];
for( let i in filesToLoad ) {
    promiseArray.push(
        new Promise( function(resolve, reject ) {
            setTimeout( function() {
                resolve( filesToLoad[i] );
            }, Math.random() * 1000 );
        })
    );
}

Promise.all( promiseArray ).then(function(value) {
    console.log(value);
});
The reason I'm confused is that I was expecting a randomly ordered output on the console. But I always get the following...
[ 'fileA', 'fileB', 'fileC' ]
That confuses me a little to say the least, but what really gets me scratching my head is that when I change the let i to var i I get the following result....
[ 'fileC', 'fileC', 'fileC' ]
As someone who has only recently tried to fully understand Promises and not that long ago started using let, I'm really stumped.
Further reading...
After getting lots of great answers I have refactored the example to get rid of the loop and i. It will seem obvious to most, but it was fun for me...
"use strict";
var filesToLoad = [ 'fileA','fileB','fileC' ];
function start( name ) {
    return new Promise( function(resolve, reject ) {
        setTimeout( function() {
            resolve( name + '_done' );
        }, Math.random() * 1000 );
    });
}

Promise.all( filesToLoad.map(start) ).then(function(value) {
    console.log(value);
});
It is because of closures.
Also let is block scoped whereas var is function scoped.
In the case of using var i:
After the timeout, when the function is triggered, the loop has already completed and i was set to 2, so all of the setTimeout callbacks resolved with filesToLoad[2].
In the case of using let i:
Since it is block scoped, the callback remembers the value of i at the time the setTimeout was set up, so when the promise resolves it uses the correct value of i.
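The classic minimal demonstration of the difference (just a sketch, unrelated to the question's code):
for (var i = 0; i < 3; i++) {
    setTimeout(() => console.log('var:', i), 0); // prints 3, 3, 3
}
for (let j = 0; j < 3; j++) {
    setTimeout(() => console.log('let:', j), 0); // prints 0, 1, 2
}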
Regarding the order of the output when using let i:
Promise.all(Iterable<any>|Promise<Iterable<any>> input) -> Promise
Given an Iterable(arrays are Iterable), or a promise of an Iterable, which produces promises (or a mix of promises and values), iterate over all the values in the Iterable into an array and return a promise that is fulfilled when all the items in the array are fulfilled. The promise's fulfillment value is an array with fulfillment values at respective positions to the original array. If any promise in the array rejects, the returned promise is rejected with the rejection reason.
So irrespective of the order in which your promises get resolved, the result of Promise.all will always have the resolved values in the original order.
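A small sketch showing that the result order follows the input order rather than the resolution order:
const slow = new Promise(resolve => setTimeout(() => resolve('slow'), 500));
const fast = new Promise(resolve => setTimeout(() => resolve('fast'), 10));

Promise.all([slow, fast]).then(values => {
    console.log(values); // ['slow', 'fast'] - input order, even though fast resolved first
});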
Why using let and var produces different results:
The reason that using let produces the desired result over var is that when using let you declare a block-scoped variable, so when the loop moves to another iteration, the value of i seen by the contents of that iteration remains unaffected.
Defining a var variable in the header of a for-loop does not mean that it exists only for the life of the for-loop, as you will notice if you do the following:
for (var i = 0; i < 10; i++) { /*...*/ }
console.log(i); //=> 10
// `i` is already declared and its value will be 10
for (; i < 20; i++) { /*...*/ }
console.log(i); //=> 20
You can avoid this problem altogether if you use Array#forEach, which does the job of filesToLoad[i] for you by giving you the next value within a callback function on each iteration:
filesToLoad.forEach((file) => {
    promiseArray.push(
        new Promise( function(resolve, reject ) {
            setTimeout( function() {
                resolve( file );
            }, Math.random() * 1000 );
        })
    );
});
______
Does using either let or var affect the behaviour of Promise#all?
No. In your example, the position of the Promises in promiseArray defines in what order the values are added to the results array, not when each of those Promises is resolved. The fact that you resolve the Promises at random intervals does not move the position of the resolved value within promiseArray. What you have demonstrated is that Promise#all produces an array of values whose positions are mapped to the Promise that produced their value.
See this answer for more information about the behaviour of Promise#all:
All this means that the output is strictly ordered as the input as long as the input is strictly ordered (for example, an array).
It's because var's scope isn't a child of the for block, it's a sibling. So the loop runs once and sets i = 0. It then runs two more times and sets i = 1 and then i = 2. After all of this has happened, the timeouts run and each one calls the resolve function with resolve( filesToLoad[2] ). let works correctly because the value of let i is not overridden by the following loop iterations.
In short, the timeouts only run after the loop has already run 3 times and therefore they all pass in the same value. I've created a jsfiddle of a working version using var.
These are actually two different, unrelated problems.
Your first observation, that the output is in the original order, rather than a random order, is due to the behaviour of Promise.all. Promise.all returns a new promise that resolves to an array when all sub-promises have resolved, in the original order. You can think of it like Array.map, but for promises.
As stated in the Bluebird (a JS promise library) documentation:
The promise's fulfillment value is an array with fulfillment values at respective positions to the original array.
Your second problem, how changing let to var changes the output to have all of the same values, is due to variable scoping. Put simply, a variable defined with var exists outside of the for loop scope. This means that on each iteration you are actually modifying the value of the pre-existing variable, as opposed to creating a new variable. Since your code inside the Promise executes later, the variable will have the value from the last loop iteration.
let creates a variable that exists only inside that for iteration. This means that, later on in the Promise, you are accessing a variable defined in that iteration's scope, which isn't overridden by the next loop iteration.
That's the difference between let and var. Your issue is not directly related to Promises (besides the fact that they run async).
TL;DR
var declarations are moved to the beginning of the function by the interpreter, so by the time the timeouts execute, the loop has already run to the end of the array using the same variable. let keeps the variable scoped to the block, so each iteration of the loop uses a different variable.
Background:
JavaScript has a feature called hoisting, which means that a variable or function declaration is always moved to the beginning of its wrapping function. If the coder doesn't put it there, the interpreter moves it. Hence:
var i;
for (i in array)
....
and
for (var i in array)
....
are equivalent, even though, coming from another programming background, you would have good reason to expect i to be scoped to the loop alone in the second case.
That's why
alert(i);
...
var i=5;
alerts undefined instead of throwing an error. As far as the interpreter is concerned - there was a declaration. It's like
var i;
alert(i);
...
i=5;
Now:
ES6/ES2015 has introduced let, which behaves the way we first assumed in the second example above.
alert(i);
...
let i=5;
will actually throw an error. When reaching the alert there is no i to speak of.
In your case
With let you went through the loop with a new variable each time, since let is scoped to the loop block alone. Each iteration is its own scope, and each used a different variable (though all named i in their respective scopes).
When using var, you actually declared i before the loop
var i;
for( i in filesToLoad ) {
    promiseArray.push(
        new Promise( function(resolve, reject ) {
            setTimeout( function() {
                resolve( filesToLoad[i] );
            }, Math.random() * 1000 );
        })
    );
}
so it's the same variable on each iteration (or, in each scope), meaning that the loop had already assigned the last index to i before any of the timeouts fired. That's why the result was the last item three times.