How to approach multiple async calls on single array? - javascript

This is a bit more of a theoretical question. I originally intended to call this question "Is it possible to iterate over a map twice?", but just from the sound of it, it sounds like an anti-pattern. So I assume I'm just approaching this wrong.
Also note: the data here serves as an abstraction. I know that what I'm doing with the data provided here is unnecessary, but please don't get fixated on the data structure. It does not represent the real (more complex) data I'm working with, which is provided by the client and which I can't alter. Instead, please approach the problem as how to return structured async calls for each array item! :-)
My problem boils down to this:
I have an array of ids on which I need to execute two separate asynchronous calls
Both of these calls need to pass (for every id instance)
So as an example, imagine I have these two data-sets:
const dataMessages = [
  { id: "a", value: "hello" },
  { id: "b", value: "world" }
];
const dataNames = [
  { id: "a", name: "Samuel" },
  { id: "b", name: "John" },
  { id: "c", name: "Gustav" },
];
And an API-call mock-up:
const fetchFor = async (collection: Object[], id: string): Promise<Object> => {
  const user = collection.find(user => user.id === id);
  if (!user) {
    throw Error(`User ${id} not found`);
  }
  return user;
};
Now I need to call the fetchFor() function for both data-sets, for a predetermined array of ids, presumably inside a Promise.all, given that forEach is not asynchronous.
I was thinking of something akin to mapping a list of Promises for the Promise.all to execute. This works fine when you only need to map a single API call:
const run = async () => {
  const result = await Promise.all(
    ['a', 'b'].map(id => fetchFor(dataMessages, id))
  )
  console.log(result) // [{id: 'a', value: 'hello'}, {id: 'b', value: 'world'}]
}
However, I somehow need to return both promises, for
fetchFor(dataMessages, id)
fetchFor(dataNames, id)
inside the Promise.all array of Promises.
I guess I could always simply do a flatMap of two maps for both instances of API calls, but that sounds kinda dumb, given that:
I'd be doing array.map on the same array twice
My data structure would not be logically connected (two separate array items for the same user, which would not even be adjacent to each other)
So ideally I'd like to return data in the form of:
const result = await Promise.all([...])
console.log(result)
/* [
 *   {id: 'a', message: 'hello', name: 'Samuel'},
 *   {id: 'b', message: 'world', name: 'John'},
 * ]
 */
Or do I simply have to do a flatMap of promises and then merge the data into objects based on the id identifier inside a separate handler on the resolved Promise.all?
I've provided a working example of the single-api-call mockup here, so you don't have to copy-paste.
What would be the correct / common way of approaching such an issue?

You could nest Promise.all calls:
const [messages, names] = await Promise.all([
  Promise.all(
    ['a', 'b'].map(id => fetchFor(dataMessages, id))
  ),
  Promise.all(
    ['a', 'b', 'c'].map(id => fetchFor(dataNames, id))
  )
]);
If you then want to merge the results once they're retrieved, it's just a matter of standard data manipulation.
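For example, a minimal sketch of that merge, keyed by id so the two result arrays don't have to line up (this assumes, as in the question's data, that every merged id exists in both collections):

const ids = ['a', 'b'];
const [messages, names] = await Promise.all([
  Promise.all(ids.map(id => fetchFor(dataMessages, id))),
  Promise.all(ids.map(id => fetchFor(dataNames, id)))
]);
// build an id -> name lookup, then zip it onto the messages
const nameById = new Map(names.map(n => [n.id, n.name]));
const result = messages.map(m => ({
  id: m.id,
  message: m.value,
  name: nameById.get(m.id)
}));
// [{id: 'a', message: 'hello', name: 'Samuel'},
//  {id: 'b', message: 'world', name: 'John'}]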

Related

name an object in Array

I'm doing multiple fetches with Promise.all. So I receive data like this:
[
  0: {
    ...
  },
  1: {
    ...
  }
]
But I would like to name my objects, so I can do data.myObject instead of data[0].
I would like the index to be a string that I chose.
For example, I'd like to get:
[
  "home": {
    ...
  },
  "product": {
    ...
  }
]
Is it possible? Thanks
No, this is not possible. Promise.all works on iterables (like arrays), not on objects. Name your values after the Promise.all call:
const data = await Promise.all([…, …]);
const home = data[0], product = data[1];
becomes with destructuring
const [home, product] = await Promise.all([…, …]);
Promise.all works with an array. Arrays deal in numerically indexed (ordered) values.
You'd need to have a name stored somewhere for each promise, and then generate the object from them once the promises had resolved.
e.g.
const names = ['home', 'product'];
const promises = [fetchHome(), fetchProduct()];
const results = await Promise.all(promises);
const resultsByName = names.reduce((prev, curr, index) => {
  return {...prev, [curr]: results[index]};
}, {});
You could use a similar approach without the second array if the name was available in the resolved values of fetchHome() and fetchProduct().
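For instance, a sketch of that variant - assuming (hypothetically, it isn't shown in the question) that each resolved value carries its own name property:

const results = await Promise.all([fetchHome(), fetchProduct()]);
// assumes each result looks like { name: 'home', ... }
const resultsByName = results.reduce(
  (acc, result) => ({ ...acc, [result.name]: result }),
  {}
);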
You can destructure the array of Promise results in the Promise.all.
const [home, product, toolbar, nav] = await Promise.all([
  getHome(), getProduct(), getToolBar(), getNav()
]);
Since the results are an array like anything else you can destructure arrays and even use the ...rest syntax:
const [home, product, toolbar, nav, ...otherPromises] = await Promise.all([
  getHome(), getProduct(), getToolBar(), getNav(), getOtherThing1()
]);
// otherPromises will be an array that you'll have to access
// with numeric keys as before:
// e.g. otherPromises[0] might be the first non-named promise,
// the result of getOtherThing1()
Seems like you want to convert the array to an object so you can get data by calling a key.
const arr = [ { id: "home", val: "val1" }, { id: "product", val: "val2" }];
const convert = arr.reduce((a, b) => (a[b.id] ??= b, a), {});
console.log(convert);
console.log(convert.home);
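If the environment is new enough for ??=, it also has Object.fromEntries, which expresses the same conversion a little more directly (a sketch):

const arr = [{ id: "home", val: "val1" }, { id: "product", val: "val2" }];
const convert = Object.fromEntries(arr.map(item => [item.id, item]));
console.log(convert.home); // { id: "home", val: "val1" }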

Javascript memoize find array

I'm trying to improve my knowledge regarding memoization in JavaScript. I have created a memoization function (I think...).
I've got an array of changes (a change log) made to items. Each item in the array contains a reference id (employeeId) to whoever made the edit, looking something like this:
const changeLog = [
  {
    id: 1,
    employeeId: 1,
    field: 'someField',
    oldValue: '0',
    newValue: '100',
  },
  {
    id: 2,
    employeeId: 2,
    field: 'anotherField',
    oldValue: '20',
    newValue: '100',
  },
  ...
]
I've also got an array containing each employee, looking something like this:
const employees = [
  {
    name: 'Joel Abero',
    id: 1
  },
  {
    name: 'John Doe',
    id: 2
  },
  {
    name: 'Dear John',
    id: 3
  }
]
To find the employee who made the change, I map over each item in the changeLog and find the entry where employeeId equals id in the employees array.
Both of these arrays contain 500+ items; I've just pasted fragments.
Below I pasted my memoize helper.
1) How can I perform a test to see which of these two run the fastest?
2) Is this a proper way to use memoization?
3) Is there a rule of thumb when to use memoization? Or should I use it as often as I can?
const employees = [
  {
    name: 'Joel Abero',
    id: 1
  },
  {
    name: 'John Doe',
    id: 2
  },
  {
    name: 'Dear John',
    id: 3
  }
]
const changeLog = [
  {
    id: 1,
    employeeId: 1,
    field: 'someField',
    oldValue: '0',
    newValue: '100',
  },
  {
    id: 2,
    employeeId: 2,
    field: 'anotherField',
    oldValue: '0',
    newValue: '100',
  },
  {
    id: 3,
    employeeId: 3,
    field: 'someField',
    oldValue: '0',
    newValue: '100',
  },
  {
    id: 4,
    employeeId: 3,
    field: 'someField',
    oldValue: '0',
    newValue: '100',
  },
  {
    id: 5,
    employeeId: 3,
    field: 'someField',
    oldValue: '0',
    newValue: '100',
  }
]
function findEditedByEmployee (employeeId) {
  return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
  let employeesSavedInMemory = {}
  return function(employeeId) {
    if(employeeId in employeesSavedInMemory) {
      console.log("from memory")
      return employeesSavedInMemory[employeeId]
    }
    console.log("not from memory")
    const findEditedBy = findEditedByEmployee(employeeId)
    employeesSavedInMemory[findEditedBy.id] = {name: findEditedBy.name }
    return findEditedBy
  }
}
const memoizedEmployee = editedByWithMemoize();
// with memoization
const changeLogWithEmployeesMemoized = changeLog.map(log => {
  const employeeName = memoizedEmployee(log.employeeId);
  return {
    ...log,
    employeeName: employeeName.name
  }
})
// without memoization
const changeLogWithEmployees = changeLog.map(log => {
  const editedBy = findEditedByEmployee(log.employeeId);
  return {
    ...log,
    employeeName: editedBy.name
  }
})
console.log('memoized', changeLogWithEmployeesMemoized)
console.log('not memoized', changeLogWithEmployees)
I'll try to answer each of your questions:
1) How can I perform a test to see which of these two run the fastest?
The best way is just a simple for loop. Take for example a fake API request:
const fakeAPIRequest = id => new Promise(r => setTimeout(r, 100, {id}))
This will take 100ms to complete per request. Using memoization, we can avoid repeating this 100ms wait by checking whether we've made the same request before:
const cache = {}
const memoizedRequest = async (id) => {
  if (id in cache) return Promise.resolve(cache[id])
  return cache[id] = await fakeAPIRequest(id)
}
Here's a working example:
const fakeAPIRequest = id => new Promise(r => setTimeout(r, 100, {id}))
const cache = {}
const memoizedRequest = async (id) => {
  if (id in cache) return Promise.resolve(cache[id])
  return cache[id] = await fakeAPIRequest(id)
}
const request = async (id) => await fakeAPIRequest(id)
const test = async (name, cb) => {
  console.time(name)
  for (let i = 50; i--;) {
    await cb()
  }
  console.timeEnd(name)
}
test('memoized', async () => await memoizedRequest('test'))
test('normal', async () => await request('test'))
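With the 100ms fake request and 50 iterations, you'd expect the memoized run to finish in roughly the time of a single request (only the first call hits the fake API), while the normal run takes on the order of 50 × 100ms.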
2) Is this a proper way to use memoization?
I'm not entirely sure what you mean by this, but think of it as short-term caching.
Should your memoized call include an API request, it can be great for non-changing data, saving plenty of time; on the other hand, if the data is subject to change within a short period of time, memoization can be a bad idea, as the cached value will quickly become outdated.
If you are making many many calls to this function, it could eat up memory depending on how big the return data is, but this factor is down to implementation, not "a proper way".
3) Is there a rule of thumb when to use memoization? Or should I use it as often as I can?
More often than not, memoization is overkill - since computers are extremely fast, it often boils down to just saving milliseconds - and if you are only calling the function a few times, memoization provides little to no benefit. But I do keep emphasising API requests, which can take long periods of time. If you start using a memoized function, you should strive to use it everywhere possible. As mentioned before, though, it can eat up memory quickly depending on the return data.
One last point about memoization is that if the data is already client side, using a map like Nina suggested is definitely a much better and more efficient approach. Instead of looping each time to find the object, it loops once to transform the array into an object (or map), allowing you to access the data in O(1) time. Take an example, using find this time instead of the fakeAPI function I made earlier:
const data = [{"id":0},{"id":1},{"id":2},{"id":3},{"id":4},{"id":5},{"id":6},{"id":7},{"id":8},{"id":9},{"id":10},{"id":11},{"id":12},{"id":13},{"id":14},{"id":15},{"id":16},{"id":17},{"id":18},{"id":19},{"id":20},{"id":21},{"id":22},{"id":23},{"id":24},{"id":25},{"id":26},{"id":27},{"id":28},{"id":29},{"id":30},{"id":31},{"id":32},{"id":33},{"id":34},{"id":35},{"id":36},{"id":37},{"id":38},{"id":39},{"id":40},{"id":41},{"id":42},{"id":43},{"id":44},{"id":45},{"id":46},{"id":47},{"id":48},{"id":49},{"id":50},{"id":51},{"id":52},{"id":53},{"id":54},{"id":55},{"id":56},{"id":57},{"id":58},{"id":59},{"id":60},{"id":61},{"id":62},{"id":63},{"id":64},{"id":65},{"id":66},{"id":67},{"id":68},{"id":69},{"id":70},{"id":71},{"id":72},{"id":73},{"id":74},{"id":75},{"id":76},{"id":77},{"id":78},{"id":79},{"id":80},{"id":81},{"id":82},{"id":83},{"id":84},{"id":85},{"id":86},{"id":87},{"id":88},{"id":89},{"id":90},{"id":91},{"id":92},{"id":93},{"id":94},{"id":95},{"id":96},{"id":97},{"id":98},{"id":99}]
const cache = {}
const findObject = id => data.find(o => o.id === id)
const memoizedFindObject = id => id in cache ? cache[id] : cache[id] = findObject(id)
const map = new Map(data.map(o => [o.id, o]))
const findObjectByMap = id => map.get(id)
const list = Array(50000).fill(0).map(() => Math.floor(Math.random() * 100))
const test = (name, cb) => {
  console.time(name)
  for (let i = 50000; i--;) {
    cb(list[i])
  }
  console.timeEnd(name)
}
test('memoized', memoizedFindObject)
test('normal', findObject)
test('map', findObjectByMap)
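On a typical engine you'd expect the Map version to win comfortably across the 50,000 lookups, with the memoized find in between: each distinct id pays the O(n) find cost once, and every repeat is an O(1) cache hit.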
All in all, memoization is a great tool, very similar to caching. It provides a big speed up on heavy calculations or long network requests, but can prove ineffective if used infrequently.
I would create a Map in advance and get the object from the map for an update.
If the map does not contain a wanted id, create a new object and add it to employees and to the map.
const
  employees = [{ name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 }],
  changeLog = [{ id: 1, employeeId: 1, field: 'someField', oldValue: '0', newValue: '100' }, { id: 2, employeeId: 2, field: 'anotherField', oldValue: '20', newValue: '100' }],
  map = employees.reduce((map, o) => map.set(o.id, o), new Map);

for (const { employeeId, field, newValue } of changeLog) {
  let object = map.get(employeeId);
  if (object) {
    object[field] = newValue;
  } else {
    let temp = { id: employeeId, [field]: newValue };
    employees.push(temp);
    map.set(employeeId, temp);
  }
}
console.log(employees);
Your memoization process is faulty!
You don't return objects with the same shape
When you don't find an employee in the cache, then you look it up and return the entire object, however, you only memoize part of the object:
employeesSavedInMemory[findEditedBy.id] = {name: findEditedBy.name }
So, when you find the employee in cache, you return a cut-down version of the data:
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
  return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
  let employeesSavedInMemory = {}
  return function(employeeId) {
    if(employeeId in employeesSavedInMemory) {
      console.log("from memory")
      return employeesSavedInMemory[employeeId]
    }
    console.log("not from memory")
    const findEditedBy = findEditedByEmployee(employeeId)
    employeesSavedInMemory[findEditedBy.id] = {name: findEditedBy.name }
    return findEditedBy
  }
}
const memoizedEmployee = editedByWithMemoize();
const found = memoizedEmployee(1);
const fromCache = memoizedEmployee(1);
console.log("found:", found); //id + name
console.log("fromCache:", fromCache); //name
You get different data back when calling the same function with the same parameters.
You don't return the same objects
Another big problem is that you create a new object - even if you change to get a complete copy, the memoization is not transparent:
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
  return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
  let employeesSavedInMemory = {}
  return function(employeeId) {
    if(employeeId in employeesSavedInMemory) {
      console.log("from memory")
      return employeesSavedInMemory[employeeId]
    }
    console.log("not from memory")
    const findEditedBy = findEditedByEmployee(employeeId)
    employeesSavedInMemory[findEditedBy.id] = { ...findEditedBy } //make a copy of all object properties
    return findEditedBy
  }
}
const memoizedEmployee = editedByWithMemoize();
memoizedEmployee(1)
const found = memoizedEmployee(1);
const fromCache = memoizedEmployee(1);
console.log("found:", found); //id + name
console.log("fromCache:", fromCache); //id + name
console.log("found === fromCache :", found === fromCache); // false
The result is basically the same: you get "different" data, in that the objects are not the same one. So, for example, if you do:
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
  return employees.find(({ id }) => id === employeeId)
}
function editedByWithMemoize () {
  let employeesSavedInMemory = {}
  return function(employeeId) {
    if(employeeId in employeesSavedInMemory) {
      console.log("from memory")
      return employeesSavedInMemory[employeeId]
    }
    console.log("not from memory")
    const findEditedBy = findEditedByEmployee(employeeId)
    employeesSavedInMemory[findEditedBy.id] = { ...findEditedBy } //make a copy of all object properties
    return findEditedBy
  }
}
const memoizedEmployee = editedByWithMemoize();
const original = employees[0];
const found = memoizedEmployee(1);
found.foo = "hello";
console.log("found:", found); //id + name + foo
const fromCache = memoizedEmployee(1);
console.log("fromCache 1:", fromCache); //id + name
fromCache.bar = "world";
console.log("fromCache 2:", fromCache); //id + name + bar
console.log("original:", original); //id + name + foo
Compare with a normal implementation
I'll use memoize from Lodash but there are many other generic implementations and they still work the same way, so this is only for reference:
const { memoize } = _;
const employees = [ { name: 'Joel Abero', id: 1 }, { name: 'John Doe', id: 2 }, { name: 'Dear John', id: 3 } ]
function findEditedByEmployee (employeeId) {
  return employees.find(({ id }) => id === employeeId)
}
const memoizedEmployee = memoize(findEditedByEmployee);
const original = employees[0];
const found = memoizedEmployee(1);
console.log("found 1:", found); //id + name
found.foo = "hello";
console.log("found 2:", found); //id + name + foo
const fromCache = memoizedEmployee(1);
console.log("fromCache 1:", fromCache); //id + name + foo
fromCache.bar = "world";
console.log("fromCache 2:", fromCache); //id + name + foo + bar
console.log("original:", original); //id + name + foo + bar
console.log("found === fromCache :", found === fromCache); //true
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.15/lodash.min.js"></script>
Just a demonstration that the memoization is completely transparent and does not produce any odd or unusual behaviour. Using the memoized function is exactly the same as the normal function in terms of effects. The only difference is the caching but there is no impact on how the function behaves.
Onto the actual questions:
How can I perform a test to see which of these two run the fastest?
Honestly, and personally - you shouldn't. A correct implementation of memoization has known properties. A linear search also has known properties. So, testing for speed is testing two known properties of both algorithms.
Let's dip into pure logic here - we have two things to consider:
the implementation is correct (let's call this P)
properties of implementation are correct (let's call this Q)
We can definitely say that "If the implementation is correct, then properties of implementation are correct", transformable to "if P, then Q" or formally P -> Q. Were we to go in the opposite direction Q -> P and try to test the known properties to confirm the implementation is correct, then we are committing the fallacy of affirming the consequent.
We can indeed observe that testing the speed is not even testing the solution for correctness. You could have an incorrect implementation of memoization, yet it would exhibit the same speed property of O(n) lookup once and O(1) on repeat reads as correct memoization. So, the test Q -> P will fail.
Instead, you should test the implementation for correctness; if you can prove that, then you can deduce that you'd have constant speed on repeat reads. And O(1) access is going to be (in most cases, especially this one) faster than O(n) lookup.
Consequently, if you don't need to prove correctness, then you can take the rest for granted. And if you use a known implementation of memoization, then you don't need to test your library.
With all that said, there is something you might still need to be aware of. The caching during memoization relies on creating a correct key for the cached item. And this could potentially have a big, even if constant, overhead cost depending on how the key is derived. So, for example, a lookup for something near the start of the array might take 10ms, yet creating the key for the cache might take 15ms, which means the O(1) cache path would be slower than some lookups.
The correct test to verify speed would normally be to check how much time it takes (on average) to look up the first item in the array, the last item in the array, and something from the middle of the array, then check how much time it takes to fetch something from cache. Each of these has to be run several times to ensure you don't get a random spike of speed either up or down.
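A rough sketch of that kind of timing harness, reusing findEditedByEmployee and memoizedEmployee from the question (performance.now() is available in browsers and modern Node; absolute numbers will vary by engine and machine):

const timeIt = (label, fn, runs = 10000) => {
  const start = performance.now();
  for (let i = 0; i < runs; i++) fn();
  console.log(label, (performance.now() - start).toFixed(2), 'ms');
};
const firstId = employees[0].id;
const middleId = employees[Math.floor(employees.length / 2)].id;
const lastId = employees[employees.length - 1].id;
timeIt('find first', () => findEditedByEmployee(firstId));
timeIt('find middle', () => findEditedByEmployee(middleId));
timeIt('find last', () => findEditedByEmployee(lastId));
memoizedEmployee(lastId); // prime the cache once
timeIt('cached read', () => memoizedEmployee(lastId));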
But I'd have more to say later*
2) Is this a proper way to use memoization?
Yes. Again, assuming proper implementation, this is how you'd do it - memoize a function and then you get a lot of benefits for caching.
With that said, you can see from the Lodash implementation that you can just generalise the memoization implementation and apply it to any function, instead of writing a memoized version of each. This is quite a big benefit, since you only need to test one memoization function. Instead, if you have something like findEmployee(), findDepartment(), and findAddress() functions which you want to cache the results of, then you need to test each memoization implementation.
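For illustration, a minimal single-argument memoize helper along those lines (a sketch - generic library versions such as Lodash's additionally let you pass a resolver to control how the cache key is derived):

function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (cache.has(arg)) return cache.get(arg); // repeat read: O(1)
    const result = fn(arg);                    // first read: pay the full cost once
    cache.set(arg, result);
    return result;
  };
}
// one tested helper can wrap any of the lookups:
const memoizedFindEmployee = memoize(findEditedByEmployee);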
3) Is there a rule of thumb when to use memoization? Or should I use it as often as I can?
Yes, you should use it as often as you can* (with a huge asterisk)
* (huge asterisk)
This is the biggest asterisk I know how to make using markdown (outside just embedding images). I could go for a slightly bigger one but alas.
You have to determine when you can use it, in order to use it when you can. I'm not just saying this to be confusing - you shouldn't just be slapping memoized functions everywhere. There are situations when you cannot use them. And that's what I alluded to at the end of answering the first question - I wanted to talk about this in a single place:
You have to tread very carefully to verify what your actual usage is. If you have a million items in an array and only the first 10 are looked up faster than being fetched from cache, then there is 0.001% of items that would have no benefit from caching. In that case - you get a benefit from caching...or do you? If you only do a couple of reads per item, and you're only looking up less than a quarter of the items, then perhaps caching doesn't give you a good long term benefit. And if you look up each item exactly two times, then you're doubling your memory consumption for honestly quite trivial improvement of speed overall. Yet, what if you're not doing in-memory lookup from an array but something like a network request (e.g., database read)? In that case caching even for a single use could be very valuable.
You can see how a single detail can swing wildly whether you should use memoization or not. And often times it's not even that clear when you're initially writing the application, since you don't even know how often you might end up calling a function, what value you'd feed it, nor how often you'd call it with the same values over and over again. Even if you have an idea of what the typical usage might be, you still will need a real environment to test with, instead of just calling a non-memoized and a memoized version of a function in isolation.
Eric Lippert has an amazing piece on performance testing that mostly boils down to: when performance matters, try to test the real application with real data and real usage. Otherwise your benchmark might be off for all sorts of reasons.
Even if memoization is clearly "faster" you have to consider memory usage. Here is a slightly silly example to illustrate memoization eating up more memory than necessary:
const { memoize } = _;
//stupid recursive function that removes 1 from `b` and
//adds 1 to `a` until it finds the total sum of the two
function sum (a, b) {
  if(b)
    return sum(a + 1, b - 1)
  //only log once to avoid spamming the logs but show if it's called or not
  console.log("sum() finished");
  return a;
}
//memoize the function
sum = memoize(sum);
const result = sum(1, 999);
console.log("result:", result);
const resultFromCache1 = sum(1, 999); //no logs as it's cached
console.log("resultFromCache1:", resultFromCache1);
const resultFromCache2 = sum(999, 1); //no logs as it's cached
console.log("resultFromCache2:", resultFromCache2);
const resultFromCache3 = sum(450, 550); //no logs as it's cached
console.log("resultFromCache3:", resultFromCache3);
const resultFromCache4 = sum(42, 958); //no logs as it's cached
console.log("resultFromCache4:", resultFromCache4);
<script src="https://cdn.jsdelivr.net/npm/lodash@4.17.15/lodash.min.js"></script>
This will put one thousand cached results in memory. Yes, the memoized function is silly and does a lot of unnecessary calls, which means there is a lot of memory overhead. Yet at the same time, if we re-call it with any arguments that sum up to 1000, we immediately get the result without doing any recursion. (This works because Lodash keys the cache on the first argument only, and the recursive calls go through the memoized wrapper, so every intermediate value of a from 1 to 1000 gets cached. It also means a later call like sum(1, 5) would wrongly return the cached 1000 - another reason the example is silly.)
You can easily have similar real code, even if there is no recursion involved - you might end up calling some function a lot of times with a lot of different inputs. This will populate the cache with all results and yet whether that is useful or not is still up in the air.
So, if you can you should be using memoization. The biggest problem is finding out if you can.

Reduce number of operators in an RxJS mapping expression

I've created an HTTP request to get JSON data. Inside that JSON there is an object which has an array (I need that array).
fromDb$ = of({
  Result: {
    Countries: [{ // <-- the wanted array
      ISOCode: 1,
      Name: 'aaa'
    }, {
      ISOCode: 2,
      Name: 'bbb'
    }]
  }
});
But the data in the array has a different structure than I actually need.
I need to map (Name & ISOCode) to (name and value).
This is what I've tried:
Use pluck to extract the inner array
mergeMap the array to a stream of individual objects (using from())
use map to transform each item to the desired structure
use toArray to wrap it all back into an array (so I can bind it to a control)
Here is the actual code :
this.data = this.fromDb$.pipe(
  pluck<PtCountries, Array<Country>>('Result', 'Countries'),
  mergeMap(a => from(a)),
  map((c: Country) => ({
    name: c.Name,
    value: c.ISOCode,
  })),
  toArray());
The code does work and here is the online demo
Question
It looks like I've complicated it much more than it needs to be. Is there a better way of doing it?
This line: mergeMap(a => from(a)) does not make a lot of sense. It's almost as if you did [1,2,3].map(v => v). You can just remove it.
To simplify this you basically need to use Array.map inside Observable.map.
Try this:
this.data = this.fromDb$.pipe(
  pluck<PtCountries, Array<Country>>('Result', 'Countries'),
  map((countries: Country[]) => countries.map(country => ({
    name: country.Name,
    value: country.ISOCode,
  }))));
Live demo
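Alternatively, since mergeMap accepts a plain array as its return value (arrays are valid ObservableInputs), you can flatten and reshape without pluck or from at all: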
this.data = this.fromDb$.pipe(
  mergeMap(object => object.Result.Countries),
  map(country => ({ name: country.Name, value: country.ISOCode })),
  toArray()
);

Looping through an array of objects in TypeScript and getting a specific one fails using ES6 syntax

I have several objects and I would like to get one and check a specific property.
So I have:
data: [{is_guest: true}, {permission: 'is_allowed_ip'}]
Now when I check console.log(route.data) I'm getting
0: {is_guest: true},
1: {permission: 'is_allowed_ip'}
and typeof route.data is an object.
Now I would like to get the object with is_guest: true.
So I have tried:
const data = Object.keys(route.data).map((index) => {
  if (route.data[index].is_guest) {
    return route.data[index]
  }
});
console.log("route data is", data) // this still returns all the items
But the above fails, returning all the objects.
How do I loop through all the objects and retrieve only the one with the is_guest key set to true?
Sounds like you want Object.values, not Object.keys, and filter:
const data = Object.values(route.data).filter(e => e.is_guest);
Object.values is fairly new, but present on up-to-date Node, and entirely polyfillable.
Example:
const route = {
  data: [
    {is_guest: true},
    {permission: 'is_allowed_ip'}
  ]
};
const data = Object.values(route.data).filter(e => e.is_guest);
console.log(data);
Using ES6:
data.filter(o => o.is_guest)
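Since the goal is a single object rather than an array of matches, Array.prototype.find is arguably the more direct tool (a sketch):

const guest = route.data.find(o => o.is_guest === true);
console.log(guest); // {is_guest: true} - or undefined if nothing matches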
You can use the filter method.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/filter
I added some ids into your array just to make it easier to understand.
// added ids to exemplify
const data = [
  {id: 1, is_guest: true},
  {id: 2, permission: 'is_allowed_ip'},
  {id: 3, is_guest: true},
  {id: 4, is_guest: false},
]
// filter results
const filtered = data.filter(item => item.is_guest)
// just to show the result
document.querySelector('.debug').innerHTML = JSON.stringify(filtered, null, 2);
<pre><code class="debug"></code></pre>

javascript promises recursion

I have an async recursive function which returns a promise if there is more work to do, or returns the result array otherwise. In the case where no recursion is involved it correctly returns the array, but when recursion is there the array is undefined. The code is:
function foo(filepath) {
  var resultArr = [];
  function doo(file) {
    return asyncOperation(file).then(resp => {
      resultArr.push(resp.data);
      if (resp.pages) {
        var pages = resp.pages.split(',');
        pages.forEach(page => {
          return doo(page);
        });
      } else {
        return resultArr;
      }
    });
  }
  return doo(filepath);
}
And the way this is called
foo(abcfile).then(function(result){
  console.log(result);
});
If I pass an abcfile which has no resp.pages, I get the result array, but when there are resp.pages, the result array is undefined.
I think you're just missing a returned promise within the if (resp.pages) block
if (resp.pages) {
  return Promise.all(resp.pages.split(',').map(page => doo(page)))
    .then(pagesArr => {
      return resultArr.concat(...pagesArr)
    })
}
I'm thinking there may be an issue with scoping resultArr outside the doo function, so maybe try this:
function foo(filepath) {
  function doo(file) {
    return asyncOperation(file).then(resp => {
      const resultArr = [ resp.data ]
      if (resp.pages) {
        return Promise.all(resp.pages.split(',').map(page => doo(page)))
          .then(pagesArr => resultArr.concat(...pagesArr))
      } else {
        return resultArr
      }
    })
  }
  return doo(filepath)
}
To explain the use of the spread operator, look at it this way.
Say you have three pages for a file top - page1, page2 and page3 - and each of those resolves with a couple of sub-pages. The pagesArr would then look like:
[
['page1', 'page1a', 'page1b'],
['page2', 'page2a', 'page2b'],
['page3', 'page3a', 'page3b']
]
and resultArr so far looks like
['top']
If you use concat without the spread operator, you'd end up with
[
"top",
[
"page1",
"page1a",
"page1b"
],
[
"page2",
"page2a",
"page2b"
],
[
"page3",
"page3a",
"page3b"
]
]
But with the spread, you get
[
"top",
"page1",
"page1a",
"page1b",
"page2",
"page2a",
"page2b",
"page3",
"page3a",
"page3b"
]
To verify this works, I'll make a fake dataset, and a fakeAsyncOperation which reads data from the dataset asynchronously. To model your data closely, each query from the fake dataset returns a response with data and pages fields.
let fake = new Map([
  ['root', {data: 'root', pages: ['a', 'b', 'c', 'd']}],
  ['a', {data: 'a', pages: ['a/a', 'a/a']}],
  ['a/a', {data: 'a/a', pages: []}],
  ['a/b', {data: 'a/b', pages: ['a/b/a']}],
  ['a/b/a', {data: 'a/b/a', pages: []}],
  ['b', {data: 'b', pages: ['b/a']}],
  ['b/a', {data: 'b/a', pages: ['b/a/a']}],
  ['b/a/a', {data: 'b/a/a', pages: ['b/a/a/a']}],
  ['b/a/a/a', {data: 'b/a/a/a', pages: []}],
  ['c', {data: 'c', pages: ['c/a', 'c/b', 'c/c', 'c/d']}],
  ['c/a', {data: 'c/a', pages: []}],
  ['c/b', {data: 'c/b', pages: []}],
  ['c/c', {data: 'c/c', pages: []}],
  ['c/d', {data: 'c/d', pages: []}],
  ['d', {data: 'd', pages: []}]
]);
let fakeAsyncOperation = (page) => {
  return new Promise(resolve => {
    setTimeout(resolve, 100, fake.get(page))
  })
}
Next we have your foo function. I've renamed doo to enqueue because it works like a queue. It has two parameters: acc for keeping track of the accumulated data, and the destructured [x, ...xs], which holds the items in the queue.
I've used the new async/await syntax that makes it particularly nice for dealing with this. We don't have to manually construct any Promises or deal with any manual .then chaining.
I made liberal use of the spread syntax in the recursive call because I like its readability, but you could easily replace these with concat calls, acc.concat([data]) and xs.concat(pages), if you prefer - this is functional programming, so just pick an immutable operation you like and use that.
Lastly, unlike other answers that use Promise.all this will process each page in series. If a page were to have 50 subpages, Promise.all would attempt to make 50 requests in parallel and that may be undesired. Converting the program from parallel to serial is not necessarily straightforward, so that is the reason for providing this answer.
function foo (page) {
  async function enqueue (acc, [x, ...xs]) {
    if (x === undefined)
      return acc
    else {
      let {data, pages} = await fakeAsyncOperation(x)
      return enqueue([...acc, data], [...xs, ...pages])
    }
  }
  return enqueue([], [page])
}
foo('root').then(pages => console.log(pages))
Output
[ 'root',
'a',
'b',
'c',
'd',
'a/a',
'a/a',
'b/a',
'c/a',
'c/b',
'c/c',
'c/d',
'b/a/a',
'b/a/a/a' ]
Remarks
I'm happy that the foo function in my solution is not too far off from your original - I think you'll appreciate that. They both use an inner auxiliary function for looping and approach the problem in a similar way. async/await keeps the code nice and flat and highly readable (imo). Overall, I think this is an excellent solution for a somewhat complex problem.
Oh, and don't forget about circular references. There are no circular references in my dataset, but if page 'a' were to have pages: ['b'] and page 'b' had pages: ['a'], you can expect infinite recursion. Because this answer processes the pages serially, this would be very easy to fix (by checking the accumulated value acc for an existing page identifier). It is much trickier (and out of scope of this answer) to handle when processing the pages in parallel.
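A sketch of that serial cycle guard, leaning on the demo dataset above where each data value happens to equal its page id (with real data you'd track visited page ids separately):

async function enqueue (acc, [x, ...xs]) {
  if (x === undefined)
    return acc
  else if (acc.includes(x))
    // already visited this page: skip it instead of recursing forever
    return enqueue(acc, xs)
  else {
    let {data, pages} = await fakeAsyncOperation(x)
    return enqueue([...acc, data], [...xs, ...pages])
  }
}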
The issue here is mixing async/sync operations in the if (resp.pages) branch. Basically, you have to return a promise from the then callback if you want the promise chain to work as expected.
In addition to Phil's answer, if you want to execute the pages in order/sequence:
if (resp.pages) {
  var pages = resp.pages.split(',');
  return pages.reduce(function(p, page) {
    return p.then(function() {
      return doo(page);
    });
  }, Promise.resolve());
}
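The same in-series behaviour reads a little more plainly with async/await (a sketch, relying on the same shared resultArr as the original):

if (resp.pages) {
  const pages = resp.pages.split(',');
  return (async () => {
    for (const page of pages) {
      await doo(page); // wait for each page before starting the next
    }
    return resultArr; // resolve with the shared accumulator
  })();
}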
The issue is you don't return anything when there are pages.
function foo(filepath) {
  var resultArr = [];
  function doo(file, promises) {
    let promise = asyncOperation(file).then(resp => {
      resultArr.push(resp.data);
      if (resp.pages) {
        var pages = resp.pages.split(',');
        var pagePromises = [];
        pages.forEach(page => {
          doo(page, pagePromises);
        });
        // wait for every page promise, then resolve with the shared array
        return Promise.all(pagePromises).then(() => resultArr);
      } else {
        return resultArr;
      }
    });
    // recursive calls want their promise collected
    if (promises) { promises.push(promise); }
    // the top-level call just returns the promise
    else { return promise; }
  }
  return doo(filepath);
}
