I have an array of IDs. I need to iterate through them and, for each ID in the array, make an async call to retrieve a value from the DB, then sum all the values gathered. I did something like this:
let quantity = 0;
for (const id of [1,2,3,4]) {
const subQuantity = await getSubQuantityById(id);
quantity += subQuantity;
}
Is there a more elegant and concise way to write this for loop in JavaScript?
Your code is totally fine, because your case includes an async operation. Using a forEach instead is not possible here at all, since forEach does not wait for the promises its callback creates.
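To see why, here is a minimal sketch using the question's getSubQuantityById: forEach fires the async callbacks and returns immediately, so the total is read before any call settles.

let quantity = 0;
[1, 2, 3, 4].forEach(async (id) => {
  quantity += await getSubQuantityById(id); // runs after the log below
});
console.log(quantity); // 0, because the DB calls are still pending here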
Your for loop is perfectly clean. If you want to make it shorter, you could even do:
let totalQuantity = 0;
for (const id of arrayOfIds) {
totalQuantity += await getSubQuantityById(id);
}
Your original version, as-is, may even be clearer than using += await as above.
Naming could be improved as suggested.
I find the following one-liner, suggested in the comments, more cryptic/dirty:
(await Promise.all([1, 2, 3, 4].map(id => getSubQuantityById(id)))).reduce((p, c) => p + c, 0)
Edit: props to @vitaly-t, who points out that using Promise.all the way this one-liner does results in uncontrolled concurrency, which can lead to trouble in the context of a database.
I can't follow @vitaly-t's argument that concurrent database queries will cause "problems", at least not when we are talking about simple queries and a "moderate" number of them.
Here is my version of the summation. Obviously, the console.log in the last .then() needs to be replaced by the actual action that should happen with the calculated result.
// a placeholder function for testing:
function getSubQuantityById(i) {
  return fetch("https://jsonplaceholder.typicode.com/users/" + i)
    .then(r => r.json())
    .then(u => +u.address.geo.lat);
}
Promise.all([1, 2, 3, 4].map(id => getSubQuantityById(id)))
  .then(d => d.reduce((p, c) => p + c, 0))
  .then(console.log)
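If concurrency ever does become a concern, a chunked variant keeps at most N queries in flight at a time. A rough sketch (sumInChunks and the chunk size of 2 are my own made-up choices):

async function sumInChunks(ids, chunkSize = 2) {
  let total = 0;
  for (let i = 0; i < ids.length; i += chunkSize) {
    // resolve one chunk fully before starting the next one
    const values = await Promise.all(ids.slice(i, i + chunkSize).map(getSubQuantityById));
    total += values.reduce((p, c) => p + c, 0);
  }
  return total;
}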
Is there a more elegant and concise way to write this for loop in JavaScript?
Certainly, by processing your input as an iterable. The solution below uses the iter-ops library:
import {pipeAsync, map, wait, reduce} from 'iter-ops';
const i = pipeAsync(
[1, 2, 3, 4], // your list of id-s
map(getSubQuantityById), // remap ids into async requests
wait(), // resolve requests
reduce((a, c) => a + c) // calculate the sum
); //=> AsyncIterableExt<number>
Testing the iterable:
(async function () {
console.log(await i.first); //=> the sum
})();
It is elegant, because you can inject more processing logic right into the iteration pipeline, and the code remains very easy to read. It is also lazy-executing: it initiates only when iterated.
Perhaps even more importantly, such a solution lets you control concurrency, to avoid producing too many requests against the database. And you can fine-tune concurrency by replacing wait with waitRace.
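As a sketch of that variant (assuming waitRace takes the maximum number of pending requests as its argument; 3 is an arbitrary choice):

import {pipeAsync, map, waitRace, reduce} from 'iter-ops';

const i = pipeAsync(
    [1, 2, 3, 4],
    map(getSubQuantityById), // remap ids into async requests
    waitRace(3), // resolve requests, at most 3 pending at a time
    reduce((a, c) => a + c) // calculate the sum
);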
P.S. I'm the author of iter-ops.
We can use await in a for loop; however, I am trying to figure out if this is ever a good practice.
I read on MDN: "When an await is encountered in code (either in an async function or in a module), the awaited expression is executed, while all code that depends on the expression's value is paused and pushed into the microtask queue."
I would interpret that as meaning that //2, and everything below it that depends on the result of //1, would be "pushed into the microtask queue" in each iteration, if my interpretation is correct.
Has an authority on the subject (e.g., MDN) written about whether and when this is a good practice?
let zeros = new Array(10).fill(0);
(async () => {
  for (let zero of zeros) {
    var r = await new Promise((resolve) => setTimeout(resolve.bind(null, 1), 10)); //1
    console.log(zero);
  }
  console.log(r); //2
})();
It depends on what you want to do.
If, for example, you must send 3 HTTP requests, it is probably better to run them together like this:
await Promise.all([http_get(url_1), http_get(url_2), http_get(url_3)])
This way the whole duration of the operation is as long as the longest HTTP GET request, instead of being the sum of the duration of each request.
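To make the timing difference concrete (a sketch, assuming http_get returns a promise):

// sequential: total time ≈ t1 + t2 + t3
const r1 = await http_get(url_1);
const r2 = await http_get(url_2);
const r3 = await http_get(url_3);

// parallel: total time ≈ max(t1, t2, t3)
const [a, b, c] = await Promise.all([http_get(url_1), http_get(url_2), http_get(url_3)]);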
I'm trying to make some checks before saving an array of objects (objects[]) to the DB (MongoDB, using mongoose):
Those objects are already sorted by date, so objects[0].date is lower than objects[1].date.
Each object should check that the last related saved object has a different value (to avoid saving the same info twice). This means I have to query the DB before each save to make that check, AND each of these objects MUST be stored in order for the check to run against the right object. If the objects are not stored in order, the last related saved object might not be the correct one.
In-depth explanation:
An HTTP request is sent to an API. It returns an array of objects (sorted by date) that I want to process and save in my Mongo DB (using mongoose). I have to iterate through all these objects and, for each one:
Look for the previous related object stored in the DB (which COULD BE one from that same array).
Check some values between the already-stored object and the object to save, to evaluate whether the new object must be saved or can be discarded.
Save it or discard it, and then jump to the next iteration.
It's important to wait for each iteration to finish because:
Items in the array MUST be stored in the DB in order: first those with the lowest date, because each could be modified by some object stored later with a higher date.
If the next iteration starts before the previous one has finished, the query that searches for the previous object might not find it, because it hasn't been stored yet.
Already tried:
Using promises or async/await on forEach/for loops only makes each iteration async, but it keeps launching all iterations at once.
I've tried using async/await functions inside forEach/for loops, and even creating my own asyncForEach function as shown below, but none of this has worked:
Array.prototype.asyncForEach = function(fn) {
return this.reduce(
(promise, n) => promise.then(() => fn(n)),
Promise.resolve()
);
};
Test function:
let testArray = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
testArray.asyncForEach(function(element) {
setTimeout(() => {
console.log(element);
}, Math.random() * 500);
});
The provided example should show the numbers in order in every case. It's not a problem if the internal function (setTimeout in the example) has to return a promise.
What I think I need is a loop that waits for some function/promise between iterations, and only starts the next iteration when the previous one has already finished.
How could I do that? Thank you in advance!
const myArray = ['a','b','c','d'];
async function wait(ms) { // comment 3
return new Promise(resolve => setTimeout(resolve, ms));
}
async function doSomething() {
await myArray.reduce(async (promise, item) => {
await promise; // comment 2
await wait(1000);
// here we could await something else that is async like DB call
document.getElementById('results').append(`${item} `);
}, Promise.resolve()); // comment 1
}
setTimeout(() => doSomething(), 1000);
<div id="results">Starting in 1 second <br/></div>
You can also use reduce with async/await, which you already said you've tried.
Basically, if you read how reduce works, you can see that it accepts 2 parameters: the first is the callback to execute on each step, and the second is an optional initial value.
In the callback, the first argument is the accumulator, which receives whatever the previous step returned, or the optional initial value on the first step.
1) You pass an initial value of a resolved promise so that the first step can start.
2) Because of this await promise, you will never go into the next step until the previous one has finished, since the accumulator value from the previous step is a promise (we declared the callback async). We are not resolving the promise per se here; as soon as the previous step finishes, it resolves implicitly and we move on to the next step.
3) You can put, for example, await wait(30) to be sure that you are throttling the Ajax requests and not sending too many requests to third-party APIs, since then there is no way you will send more than 1000/30 requests per second, even if your code executes really fast on your machine.
Hm, OK, I am not 100% sure I understand your question correctly. But if you are trying to perform an async array operation that awaits your logic for each item, you can do it as follows:
async function loadAllUsers() {
  const test = [1, 2, 3, 4];
  const users = [];
  for (const item of test) {
    // make some magic or transform the data here; the await below stands in
    // for your real async call (fetchUser is a hypothetical placeholder)
    const user = await Promise.resolve(item); // e.g. await fetchUser(item)
    users.push(user);
  }
  return users;
}
Then you can simply invoke this function with await, as in the little usage sketch below. I hope that helps you.
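For example:

// usage sketch: call it from another async context
(async () => {
  const users = await loadAllUsers();
  console.log(users); //=> [1, 2, 3, 4]
})();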
In your asyncForEach function you are resolving a promise, but setTimeout doesn't return a promise. So if you wrap your setTimeout in a promise, it will work as expected.
Here is the modified code:
testArray.asyncForEach(function(element) {
return new Promise((resolve, reject) => {
setTimeout(() => {
console.log(element);
return resolve(element)
}, Math.random() * 500);
})
});
If I have a simple lodash chain that maps then filters an array:
lodash.chain(myarray)
  .map(item => {
    if (item === 'some-condition') return [item];
  })
  .filter(item => !!item)
  .value();
Obviously, this is a made-up example, but it relates to something simple I do all the time: basically, an array map where some mappings are not possible, so undefined is returned. I then filter out all the undefined values.
Since it is used quite a lot, it makes sense to mix it into my lodash.
So:
const lodash = _.runInContext();
function mapFilter(ary, iterator) {
return lodash.chain(ary)
.map(iterator)
.filter(item=>!!item)
.value()
}
lodash.mixin(lodash, { mapFilter }, { chain: true });
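With the mixin registered, standalone usage would look like this (a sketch with made-up values; mapped results that come back undefined get filtered out):

lodash.mapFilter(['x', 'some-condition'], item => item === 'some-condition' ? [item] : undefined);
//=> [['some-condition']]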
Obviously, we could just do the whole thing without lodash, but normally it might be part of a bigger chain. In theory, the lazy evaluation makes it quicker.
What I really want is to tap into the current chain (if there is one) in my mixed-in method. Otherwise, I lose the lazy evaluation by calling value() twice.
So, if I had a longer chain:
lodash.chain(myarray)
  .mapFilter( /* do something */ ) // my bespoke chainable method
  .map( /* do something else */ )
  .sort()
  .value();
I'd like to use the current chain (when there is one) in my bespoke method. Something like this:
// This is made-up and does not work!
const lodash = _.runInContext();
function mapFilter(ary, iterator) {
if (!!this.__currentChain) {
return this.__currentChain.map(iterator).filter(item=>!!item);
}
return lodash.chain(ary)
.map(iterator)
.filter(item=>!!item)
.value()
}
lodash.mixin(lodash, { mapFilter }, { chain: true });
Obviously, the above is made up, but hopefully it makes clear what I am trying to achieve. I could, of course, just drop my function and do a map() followed by a filter(), but since I am doing this a lot, I'd like less typing. Also, the real example could be longer, doing much more, while still wanting to tap into the current chain.
Is this possible? That is my question. I can think of a million and one alternative solutions, and I am fine with those. I am just looking for a lodash expert to say "no, not possible", or "yes, you do this".
I posted this as a comment, but I feel it is what you want, either as a drop-in, or as something whose source you check to see how it is done so you can code your own method, or take pieces from it as part of your mixin, etc.
The lodash _.tap method exists for the purpose of "tapping into" a method chain sequence in order to modify intermediate results, so that you do not have to call value(), etc. You can use it as a starting point.
Hope this helps.
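For reference, a minimal sketch of what _.tap gives you in a chain; the interceptor sees the intermediate value, and its return value is ignored (the chain keeps the original value):

_.chain([1, 2, 3, 4])
  .map(n => n * 2)
  .tap(arr => console.log('intermediate:', arr)) // logs [2, 4, 6, 8]
  .filter(n => n > 4)
  .value(); //=> [6, 8]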
One of the ways to check if a function is called in a chain is to check whether this is LodashWrapper object or not. Then, use the first argument as an iterator when it's in a chain.
const _ = require('lodash');
const lodash = _.runInContext();
function mapFilter(array, iterator) {
if (this.constructor.name === 'LodashWrapper') {
return this.map(array).filter(item => !!item);
}
else {
return lodash.chain(array).map(iterator).filter(item => !!item).value();
}
}
lodash.mixin({ mapFilter }, { chain: true });
const filter = x => x == 2 ? [x] : null;
console.log(lodash.mapFilter([1, 2, 3], filter));
console.log(lodash.chain([1, 2, 3]).mapFilter(filter).head().value());
console.log(lodash([1, 2, 3]).mapFilter(filter).head());
By the way, when you use explicit _.chain, lodash doesn't apply shortcut fusion as you might expect, so you may want to use implicit chaining. See "Explicit chaining with lodash doesn't apply shortcut fusion" for details.
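A sketch of the implicit style (hugeArray and expensiveTransform are placeholders, and the array-length threshold for fusion is a lodash implementation detail, roughly 200 elements):

// implicit chaining: wrap with lodash(...) instead of lodash.chain(...)
const result = lodash(hugeArray)
  .map(expensiveTransform)
  .filter(Boolean)
  .take(5)
  .value();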
Let's take an example where I have a huge array whose elements are stringified JSON. I want to iterate over this array and convert each string to JSON using JSON.parse (which blocks the event loop).
var arr = ["{...}", "{...}", ... ] //input array
Here is the first approach (it may keep the event loop blocked for some time):
var newArr = arr.map(function(val){
  try{
    return JSON.parse(val);
  }
  catch(err){ return {}; }
});
The second approach uses the async.map method (will this be more efficient compared to the first approach?):
var newArr = [];
async.map(arr,
function(val, done){
try{
var obj = JSON.parse(val);
done(null, obj);
}
catch(err){done(null, {});}
},
function(err, results){
if(!err)
newArr = results;
}
);
If the second approach is the same, or almost the same, then what is an efficient way of doing this in Node.js?
I also came across child processes; would that be a good approach for this problem?
I don't think async.map guarantees non-blocking handling of a sync function. Though it wraps your function with an asyncify wrapper, I can't find anything in that code that actually makes it non-blocking. It's one of the problems I've encountered with async in the past (but maybe it has improved now).
You could definitely handroll your own solution with child processes, but it might be easier to use something like https://github.com/audreyt/node-webworker-threads
Use async.map, but wrap the callback in setImmediate(done).
I find the async functions quite convenient but not very efficient; if the mapped computation is very fast, calling done via setImmediate only once every 10 iterations, and calling it directly otherwise, runs visibly faster. (setImmediate breaks up the call stack and yields to the event loop, but its overhead is non-negligible.)
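A sketch of that batching idea applied to the question's code:

let count = 0;
async.map(arr,
  function(val, done){
    var obj;
    try{ obj = JSON.parse(val); }
    catch(err){ obj = {}; }
    if (++count % 10 === 0)
      setImmediate(done, null, obj); // yield to the event loop
    else
      done(null, obj); // fast path: no scheduling overhead
  },
  function(err, results){
    // use results here
  }
);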
I know that Ramda.js provides a reduce function, but I am trying to learn how to use Ramda, and I thought a reducer would be a good example. Given the following code, what would be a more efficient and functional approach?
(function(){
// Some operators. Sum and multiplication.
const sum = (a, b) => a + b;
const mult = (a, b) => a * b;
// The reduce function
const reduce = R.curry((fn, accum, list) => {
const op = R.curry(fn);
while(list.length > 0){
accum = R.pipe(R.head, op(accum))(list);
list = R.drop(1, list);
}
return accum;
});
const reduceBySum = reduce(sum, 0);
const reduceByMult = reduce(mult, 1);
const data = [1, 2, 3, 4];
const result1 = reduceBySum(data);
const result2 = reduceByMult(data);
console.log(result1); // 1 + 2 + 3 + 4 => 10
console.log(result2); // 1 * 2 * 3 * 4 => 24
})();
Run this in the REPL: http://ramdajs.com/repl/
I'm assuming this is a learning exercise and not for real-world application. Correct?
There are certainly some efficiencies you could gain over that code. At the core of Ramda's implementation, when all the dispatching, transducing, etc. are stripped away, is something like:
const reduce = curry(function _reduce(fn, acc, list) {
var idx = 0;
while (idx < list.length) {
acc = fn(acc, list[idx]);
idx += 1;
}
return acc;
});
I haven't tested, but this probably gains on your version because it uses only the number of function calls strictly needed: one for each member of the list, and it does that with bare-bones iteration. Your version adds the call to curry and then, on each iteration, calls to pipe and head, to that curried op function, to the result of the pipe call, and to drop. So this one should be faster.
On the other hand, this code is as imperative as it gets. If you want to go with something more purely functional, you would need to use a recursive solution. Here's one version:
const reduce = curry(function _reduce(fn, acc, list) {
return (list.length) ? _reduce(fn, fn(acc, head(list)), tail(list)) : acc;
});
This sacrifices all the performance of the version above, mostly to the calls to tail. But it's clearly a more straightforward functional implementation. In many modern JS engines, however, it will fail to even work on larger lists due to the stack depth.
Because it is tail-recursive, it would be able to take advantage of the tail-call optimization specified by ES2015, but so far little implemented. Until then, it's mostly of academic interest. And even when that optimization is available, the head and, especially, tail calls in there will make it much slower than the imperative implementation above.
You might be interested to know that Ramda was the second attempt at the API it presents. Its original authors (disclaimer: I'm one of them) first built Eweda along the lines of this latter version. That experiment failed for exactly these reasons: JavaScript simply cannot handle this sort of recursion... yet.