I have a flat array of folders like this one:
const foldersArray = [{id: "1", parentId: null, name: "folder1"}, {id: "2", parentId: null, name: "folder2"}, {id: "1.1", parentId: 1, name: "folder1.1"}, {id: "1.1.1", parentId: "1.1", name: "folder1.1.1"},{id: "2.1", parentId: 2, name: "folder2.1"}]
I want to output an array of all the parents of a given folder, to generate a breadcrumb-like component for the folder path.
I presently have code that does what I need, but I'd like to write it better, in a more "functional" way, using reduce recursively.
If I do:
getFolderParents(folder) {
  return this.foldersArray.reduce((all, item) => {
    if (item.id === folder.parentId) {
      all.push(item.name)
      this.getFolderParents(item)
    }
    return all
  }, [])
}
and I log the output, I can see it successfully finds the first parent, then re-executes the code and outputs the parent's parent... but my accumulator array is logically reset to [] at each recursive call, and I can't find a way around that...
You could do this with a Map so you avoid iterating over the array each time you need to retrieve the next parent. This way you get an O(n) instead of an O(n²) time complexity:
const foldersArray = [{id: "1", parentId: null, name: "folder1"}, {id: "2", parentId: null, name: "folder2"}, {id: "1.1", parentId: "1", name: "folder1.1"}, {id: "1.1.1", parentId: "1.1", name: "folder1.1.1"},{id: "2.1", parentId: "2", name: "folder2.1"}];
const folderMap = new Map(foldersArray.map( o => [o.id, o] ));
const getFolderParents = folder =>
(folder.parentId ? getFolderParents(folderMap.get(folder.parentId)) : [])
.concat(folder.name);
// Example call:
console.log(getFolderParents(foldersArray[4]));
Just a minor remark: your parentId data type is not consistent: it should always be a string, just like the id property. If not, you need to cast it in your code, but it is really better to get the data type right from the start. You'll notice that I have defined parentId as a string consistently: this is needed for the above code to work. Alternatively, cast it to a string in the code with String(folder.parentId).
Secondly, the above code will prepend the parent folder name (as is done in file path notation). If you need to append the parent name after the child, then swap the concat subject and argument:
[folder.name].concat(folder.parentId ? getFolderParents(folderMap.get(folder.parentId)) : []);
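For reference, with the sample data above, here is roughly what each variant returns (the append variant assumes you wrap the swapped expression in the same getFolderParents arrow function):
console.log(getFolderParents(foldersArray[4]));
// prepend variant => ["folder2", "folder2.1"]
// append variant  => ["folder2.1", "folder2"]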
You can do what you're looking for with a rather ugly-looking while loop. It gets the job done though. Each loop iteration filters the array, looking for the current folder's parent. If that doesn't exist, it stops and exits. If it does exist, it pushes that parent into the tree array, sets folder to that parent to move up a level, then moves on to the next iteration.
const foldersArray = [{
id: "1",
parentId: null,
name: "folder1"
}, {
id: "2",
parentId: null,
name: "folder2"
}, {
id: "1.1",
parentId: 1,
name: "folder1.1"
}, {
id: "1.1.1",
parentId: "1.1",
name: "folder1.1.1"
}, {
id: "2.1",
parentId: 2,
name: "folder2.1"
}]
function getParents(folder) {
  const tree = [], storeFolder = folder
  let parentFolder
  while ((parentFolder = foldersArray.filter(t => t.id == folder.parentId)[0]) !== undefined) {
    tree.push(parentFolder)
    folder = parentFolder
  }
  console.log({ originalFolder: storeFolder, parentTree: tree })
}
getParents(foldersArray[3])
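As an aside, Array.prototype.find expresses the "first match or undefined" lookup a bit more directly than filter(...)[0]; a minimal variant of the loop with the same behaviour (find also returns undefined when nothing matches):
while ((parentFolder = foldersArray.find(t => t.id == folder.parentId)) !== undefined) {
  tree.push(parentFolder)
  folder = parentFolder
}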
You're thinking about it in a backwards way. You have a single folder as input and you wish to expand it into a breadcrumb list of many folders. This is actually the opposite of reduce, which takes many values as input and returns a single value.
Reduce is also known as fold, and the reverse of a fold is unfold. unfold accepts a looping function f and an init state. Our function is given two loop controllers: next, which adds a value to the output and specifies the next state, and done, which signals the end of the loop.
const unfold = (f, init) =>
  f ( (value, nextState) => [ value, ...unfold (f, nextState) ]
    , () => []
    , init
    )

const range = (m, n) =>
  unfold
    ( (next, done, state) =>
        state > n
          ? done ()
          : next ( state      // value to add to output
                 , state + 1  // next state
                 )
    , m                       // initial state
    )

console.log (range (3, 10))
// [ 3, 4, 5, 6, 7, 8, 9, 10 ]
Above, we start with an initial state of a number, m in this case. Just like the accumulator variable in reduce, you can specify any initial state to unfold. Below, we express your program using unfold. We add parent to make it easy to select a folder's parent.
const parent = ({ parentId }) =>
  data .find (f => f.id === String (parentId))

const breadcrumb = folder =>
  unfold
    ( (next, done, f) =>
        f == null
          ? done ()
          : next ( f           // add folder to output
                 , parent (f)  // loop with parent folder
                 )
    , folder                   // init state
    )

breadcrumb (data[3])
// [ { id: '1.1.1', parentId: '1.1', name: 'folder1.1.1' }
// , { id: '1.1', parentId: 1, name: 'folder1.1' }
// , { id: '1', parentId: null, name: 'folder1' } ]

breadcrumb (data[4])
// [ { id: '2.1', parentId: 2, name: 'folder2.1' }
// , { id: '2', parentId: null, name: 'folder2' } ]

breadcrumb (data[0])
// [ { id: '1', parentId: null, name: 'folder1' } ]
You can verify the results with the full program below:
const data =
  [ {id: "1", parentId: null, name: "folder1"}
  , {id: "2", parentId: null, name: "folder2"}
  , {id: "1.1", parentId: 1, name: "folder1.1"}
  , {id: "1.1.1", parentId: "1.1", name: "folder1.1.1"}
  , {id: "2.1", parentId: 2, name: "folder2.1"}
  ]

const unfold = (f, init) =>
  f ( (value, state) => [ value, ...unfold (f, state) ]
    , () => []
    , init
    )

const parent = ({ parentId }) =>
  data .find (f => f.id === String (parentId))

const breadcrumb = folder =>
  unfold
    ( (next, done, f) =>
        f == null
          ? done ()
          : next ( f           // add folder to output
                 , parent (f)  // loop with parent folder
                 )
    , folder                   // init state
    )

console.log (breadcrumb (data[3]))
// [ { id: '1.1.1', parentId: '1.1', name: 'folder1.1.1' }
// , { id: '1.1', parentId: 1, name: 'folder1.1' }
// , { id: '1', parentId: null, name: 'folder1' } ]

console.log (breadcrumb (data[4]))
// [ { id: '2.1', parentId: 2, name: 'folder2.1' }
// , { id: '2', parentId: null, name: 'folder2' } ]

console.log (breadcrumb (data[0]))
// [ { id: '1', parentId: null, name: 'folder1' } ]
If you trace the computation above, you see that find is called once per folder f added to the output in the unfolding process. This is an expensive operation, and if your data set is significantly large, it could be a problem for you.
A better solution would be to create an additional representation of your data with a structure better suited for this type of query. If all you do is create a Map of f.id -> f, you decrease each lookup from linear time to effectively constant time.
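A minimal sketch of that idea, keeping the unfold and breadcrumb from the snippet above and only swapping the lookup inside parent (folderById is just a name chosen here):
const folderById = new Map (data .map (f => [ f.id, f ]))

const parent = ({ parentId }) =>
  folderById .get (String (parentId)) // undefined when there is no parent, which ends the unfold

console.log (breadcrumb (data[3]))
// same result as before, but each parent lookup no longer scans the whole array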
unfold is really powerful and suited for a wide variety of problems. I have many other answers relying on it in various ways. There are even some dealing with asynchrony in there, too.
If you get stuck, don't hesitate to ask follow-up questions :D
I'm trying to achieve a specific use case but have come to a dead end.
Given a flat array of objects like this (copied from another similar post, as I found several similar posts but none matching my use case, or at least I haven't been smart enough to realise how to tweak their solutions to fit it):
const myArr = [
{
id: '1',
parentId: '0',
},
{
id: '2',
parentId: '1',
},
{
id: '3',
parentId: '2',
},
{
id: '4',
parentId: '2',
},
{
id: '5',
parentId: '2',
},
{
id: '6',
parentId: '2',
},
{
id: '7',
parentId: '6',
},
{
id: '8',
parentId: '7',
}
]
And then I have another array of IDs like so:
const idArr = [2, 4, 8]
So I need to filter the first array down to the elements with matching IDs, i.e. the elements with ID 2, ID 4 and ID 8.
And then, for each element in the filtered array, I need to find its ancestry up to the root level, and then build a tree.
The problem is that I have already achieved this, but in real life the array will be huge, with thousands of elements, and the code will most likely run a lot of times.
So I am looking for the most performant solution possible:
I'd say building the tree recursively is already done in a performant way, but somehow I am at a dead end with step 2, getting the full ancestry of certain elements.
Could anyone bring some light here?
Thanks a lot in advance
const myArr = [
{
id: '1',
parentId: '0',
},
{
id: '2',
parentId: '1',
},
{
id: '3',
parentId: '2',
},
{
id: '4',
parentId: '2',
},
{
id: '5',
parentId: '2',
},
{
id: '6',
parentId: '2',
},
{
id: '7',
parentId: '6',
},
{
id: '8',
parentId: '7',
}
]
const idArr = [2, 4, 8]
const elObj = {}
for (const el of myArr) {
  elObj[el.id] = { "parentId": el.parentId }
}

const getAncestory = (id, res) => {
  if (elObj[id].parentId === '0') {
    res.push('0')
    return res
  }
  res.push(elObj[id].parentId)
  return getAncestory(elObj[id].parentId, res)
}

const res = []
idArr.forEach(el => {
  res.push(getAncestory(el.toString(), [el.toString()]))
})

console.log(elObj)
console.log(res)
Here's how the above code works and performs in terms of time complexity:
Creating an object of objects, where elements can be accessed in constant time based on their ids, is a linear operation, both time- and space-wise. We could have skipped this step if there were a guarantee that, say, the element with id i is at index i of the initial array.
Creating each ancestry list takes O(m) time where m is the distance of the initial element to the root element of the tree. Note that we have assumed that all elements are eventually connected to the root element (our base case of parentId === '0'). If this assumption is not correct, we need to amend the code for that.
Assuming that there are n elements you need to build ancestry lists for (the length of idArr), the whole process takes O(n * m), since each individual lookup is a constant-time operation.
This algorithm can deteriorate into a quadratic one in terms of the number of nodes in the tree, in case the tree has the shape of a flat linked list and you want the ancestry of all of its elements. That's because we would need to list 1 + 2 + ... + (n-1) + n elements, where the closest element to the root takes 1 step and the farthest takes n steps. This leads to n * (n+1) / 2 steps, which is O(n^2) in Big O terms.
One way to amend this is to change the representation of the tree to use parent-to-child pointers. Then we can start from the root and traverse all the possible paths, saving those of interest (see the sketch after the note below). This approach could be better or worse than the proposed one depending on the data and the exact requirements for the output.
Note: If you have a few thousands of objects and are looking for the ancestry of a few hundreds of them, the above approach is fine (I'm making a lot of assumptions about the data). To make an educated guess one needs more details about the data and requirements.
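A minimal sketch of that alternative parent-to-child representation, assuming the same myArr shape and the '0' root sentinel (childrenOf is just a name chosen here):
// Build parent -> children pointers in one linear pass
const childrenOf = {}
for (const el of myArr) {
  if (!childrenOf[el.parentId]) childrenOf[el.parentId] = []
  childrenOf[el.parentId].push(el.id)
}

console.log(childrenOf)
// { '0': ['1'], '1': ['2'], '2': ['3', '4', '5', '6'], '6': ['7'], '7': ['8'] }
// From here you can walk down from the root, keeping the current path,
// and record it whenever you reach an id you are interested in.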
It's not entirely clear what you're trying to generate. This answer makes the guess that you want a tree that includes only the nodes whose ids are specified, as well as their ancestors. It returns a structure like this:
[
  {id: "1", children: [
    {id: "2", children: [
      {id: "4", children: []},
      {id: "6", children: [
        {id: "7", children: [
          {id: "8", children: []}
        ]}
      ]}
    ]}
  ]}
]
Note that this is not a tree but a forest. I see nothing to demonstrate that every lineage ends up at a single root node, so the output might have multiple roots. But if your data does enforce the single root, you can just take the first element.
This code will do that:
const unfold = (fn, init, res = []) =>
fn (init, (x, s) => unfold (fn, s, res .concat (x)), () => res)
const filterByIds = (xs, map = Object .fromEntries(xs .map (({id, parentId}) => [id, parentId]))) => (
ids,
selected = [...new Set (ids .map (String) .flatMap (
id => unfold ((i, next, stop) => i in map ? next (i, map [i]) : stop(), id)
))]
) => xs .filter (({id}) => selected .includes (id))
const makeForest = (xs, root = 0) =>
xs .filter (({parentId}) => parentId == root)
.map (({id, parentId, ...rest}) => ({
id,
...rest,
children: makeForest (xs, id)
})) // [0] /* if the data really forms a tree */
const extractForest = (xs, ids) =>
makeForest (filterByIds (xs) (ids))
const myArr = [{id: "1", parentId: "0"}, {id: "2", parentId: "1"}, {id: "3", parentId: "2"}, {id: "4", parentId: "2"}, {id: "5", parentId: "2"}, {id: "6", parentId: "2"}, {id: "7", parentId: "6"}, {id: "8", parentId: "7"}]
console .log (
extractForest (myArr, [2, 4, 8])
)
We start with the somewhat interesting unfold helper function. This lets you start with a seed value and turn it into an array of values, by repeatedly calling the function you supply with the current seed and two functions: one to pass along a new value and the next seed, the other to stop processing and return the list of values collected so far. We use this to track the lineage of each id. This is by no means the only way we could have done so; a while loop is probably more familiar. But unfold is a useful tool, and it doesn't involve any reassignment or mutable variables, which I really appreciate.
(An example of how unfold works might be a simple Fibonacci number generator:
const fibsTo = (n) =>
unfold (([a, b], next, stop) => b <= n ? next (b, [b, a + b]) : stop (), [0, 1])
fibsTo (100) //=> [1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
This calls next with the next Fibonacci number and a new seed containing the current value and the one that would come next, starting with the seed [0, 1]. When the value passes our target number, we instead call stop.)
Next we have the function filterByIds, which takes your input array and returns a function that accepts a list of ids and filters the array down to just those elements in the ancestry of one of those ids. We do this in three steps. First, we create an Object (map) mapping the ids of our input values to their parentIds. Second, we flatMap the ids with a function that retrieves the list of their ancestors; this uses our unfold above, but could be rewritten with a while loop. We use [...new Set (/* above */)] to collect the unique values from this list. Third, we filter the original list to include only the elements whose ids are in this new list (selected).
The function makeForest, like unfold, is fairly generic, taking a flat list of {id, parentId, ...more} nodes and nesting them recursively into an {id, ...more, children: [...]} structure. Uncomment the [0] on its last line if your data is singly rooted.
And finally we have our main function, extractForest, which calls makeForest on the result of filterByIds.
I would like to stress that unfold and makeForest are both quite generic. So the custom code here is mostly filterByIds, and the simple extractForest wrapper.
I have two arrays like so
data = [{id: 1, name: 'apple'},
        {id: 2, name: 'mango'},
        {id: 3, name: 'grapes'},
        {id: 4, name: 'banana'}]

data2 = [{id: 1, name: 'apple'},
         {id: 3, name: 'grapes'}]
My expected result would be:
[{id: 2, name: 'mango'},
 {id: 4, name: 'banana'}]
My code is
let finalData = [];
data.forEach(result => {
  data2.find(datum => {
    if (datum['id'] === result['id']) {
      finalData.push(result);
    }
  })
})
I am getting the wrong result. What is the simplest code or library that I can use?
Your sample data doesn't quite make sense, but assuming that all data items with matching IDs also have matching names, and that you want the set of all items whose IDs appear in both arrays, you could use a Set to keep track of which IDs are present in one array, then filter the other array down to those whose IDs are in the set:
const idsInFirst = new Set(data.map(d => d.id));
const intersection = data2.filter(d => idsInFirst.has(d.id));
The reason an intermediate Set is used is that it allows O(1) lookups after a one-time scan, which is more efficient than repeatedly scanning the first array over and over.
If instead you want the difference between the data sets (the items in data that are not in data2), you'd want to invert things a bit:
const idsToExclude = new Set(data2.map(d => d.id));
const difference = data.filter(d => !idsToExclude.has(d.id));
Edit
After your clarifying edit, it's that second block of code that you'll want.
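For reference, a quick check of the difference version against the sample arrays gives exactly the expected result from the question:
const idsToExclude = new Set(data2.map(d => d.id)); // ids present in data2: {1, 3}
const difference = data.filter(d => !idsToExclude.has(d.id));
console.log(difference);
// => [{ id: 2, name: 'mango' }, { id: 4, name: 'banana' }]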
I would say a good way to do that is to filter your longer array, keeping only the objects whose id is not also present in the shorter array. Check this example:
const data = [
{id: 1, name: 'apple'},
{id: 2, name: 'mango'},
{id: 3, name: 'grapes'},
{id: 4, name: 'banana'}
]
const data2 =[
{id: 1, name: 'apple' },
{id: 3, name: 'grapes' }
]
const longest = data.length > data2.length ? data : data2;
const shortest = data.length <= data2.length ? data : data2;
const finalData = longest.filter( obj => !shortest.find( o => o.id === obj.id ) )
console.log(finalData)
Good luck!
I'm doing some work that involves grabbing specific nodes in a tree-like structure. My colleague claims that my implementation, which is supposed to use a stack and a DFS algorithm, is not that.
Here is my implementation of using a stack to create a basic DFS algorithm:
const findMatchingElements = (node, name, result) => {
  for (const child of node.children) {
    findMatchingElements(child, name, result)
  }
  if (node.name === name) result.push(node)
  return result
}

const getElements = (tree, name) => {
  return findMatchingElements(tree, name, [])
}
getElements(obj, 'foo')
And a sample input:
const obj = {
id: 1,
name: 'foo',
children: [
{
id: 45,
name: 'bar',
children: [
{
id: 859,
name: 'bar',
children: []
}
]
},
{
id: 67,
name: 'foo',
children: [
{
id: 456,
name: 'bar',
children: []
},
{
id: 653,
name: 'foo',
children: []
}
]
}
]
}
I am getting my expected output:
[ { id: 653, name: 'foo', children: [] },
{ id: 67, name: 'foo', children: [ [Object], [Object] ] },
{ id: 1, name: 'foo', children: [ [Object], [Object] ] } ]
In the order I expect as well, but my colleague for some reason does not think this is a proper stack implementation. Am I missing something? Is it because of the way the final answer is printed out? To me, this feels like it is a stack.
I'm a bit confused about what you're disagreeing about here, but that output looks like a stack to me, if you agree that it's LIFO once the client starts using it.
Right now it's just a JavaScript array, but if you start pushing and popping on it, and only doing that, then it's a JavaScript implementation of a stack.
Because of the order of the lines in your recursive function, you are using a post-order traversal in your DFS. (In-order doesn't apply here, since this isn't a binary tree.) Your co-worker might have been expecting a pre-order DFS. To convert your algorithm to pre-order, just check the node before visiting its children:
const findMatchingElements = (node, name, result) => {
  if (node.name === name) result.push(node)
  for (const child of node.children) {
    findMatchingElements(child, name, result)
  }
  return result
}
You are using a stack implicitly via recursion (the call stack). I guess your colleague means that your implementation does not use an explicit stack, i.e. an iterative traversal managing its own stack instead of recursion.
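A minimal sketch of what such an explicit-stack version might look like, assuming the same node shape ({ name, children }) as above:
const findMatchingElementsIterative = (root, name) => {
  const result = []
  const stack = [root]           // explicit stack instead of the call stack
  while (stack.length > 0) {
    const node = stack.pop()     // LIFO: take the most recently pushed node
    if (node.name === name) result.push(node)
    for (const child of node.children) {
      stack.push(child)
    }
  }
  return result
}

// e.g. findMatchingElementsIterative(obj, 'foo') visits nodes depth-first,
// though not in the same order as the recursive post-order version above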
How can I display to the console all the values of an array that match a condition (e.g. enseigne === "McDonalds")?
I only managed to display one item, but I don't know how I can display all the matching values of my array.
public products: product[] = [
{ id: 1, name: "McFlurry", price: 2, enseigne:"McDonalds" },
{ id: 2, name: "Potatoes", price: 3, enseigne:"McDonalds" },
{ id: 3, name: "BigMac", price: 4, enseigne:"KFC" },
{ id: 4, name: "Nuggets", price: 3, enseigne:"KFC" }
];
searchEnseigne(){
let server = this.products.find(x => x.enseigne === "McDonalds");
console.log(server);
}
let server = this.products.filter(x => x.enseigne === "McDonalds");
console.log(server);
Use filter instead of find:
The filter() method creates a new array with all the elements that pass the test, while the find() method returns only the first element that passes it.
searchEnseigne(){
let server = this.products.filter(x => x.enseigne === "McDonalds");
console.log(server);
}
There is an equals function in Ramda.js which is totally awesome; it will provide the following:
// (1) true
R.equals({ id: 3}, { id: 3})
// (2) true
R.equals({ id: 3, name: 'freddy'}, { id: 3, name: 'freddy'})
// (3) false
R.equals({ id: 3, name: 'freddy'}, { id: 3, name: 'freddy', additional: 'item'});
How would I go about enhancing this function, or in some other way producing a true result for case (3)?
I would like to ignore all the properties of the rValue not present in the lValue, but faithfully compare the rest. I would prefer the recursive nature of equals to remain intact, if that's possible.
I made a simple fiddle that shows the results above.
In order to play nicely with the Fantasy Land spec, there's a constraint on equals that requires the symmetry equals(a, b) === equals(b, a) to hold, so to satisfy your case we'll need to get the objects into an equivalent shape for comparison.
We can achieve this by creating a new version of the second object that has had all properties removed that don't exist in the first object.
const intersectObj = (a, b) => pick(keys(a), b)
// or if you prefer the point-free edition
const intersectObj_ = useWith(pick, [keys, identity])
const a = { id: 3, name: 'freddy' },
b = { id: 3, name: 'freddy', additional: 'item'}
intersectObj(a, b) // {"id": 3, "name": "freddy"}
Using this, we can now compare both objects according to the properties that exist in the first object a.
const partialEq = (a, b) => equals(a, intersectObj(a, b))
// again, if you prefer it point-free
const partialEq_ = converge(equals, [identity, intersectObj])
partialEq({ id: 3, person: { name: 'freddy' } },
{ id: 3, person: { name: 'freddy' }, additional: 'item'})
//=> true
partialEq({ id: 3, person: { name: 'freddy' } },
{ id: 3, person: { age: 15 }, additional: 'item'})
//=> false
Use whereEq
From the docs: "Takes a spec object and a test object; returns true if the test satisfies the spec, false otherwise."
whereEq({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy', additional: 'item' })
Another way is to develop your own version. It boils down to:
if (is object):
check all keys - recursive
otherwise:
compare using `equals`
This is a recursive, point-free version that handles deep objects, arrays and non-object values.
const { equals, identity, ifElse, is, mapObjIndexed, useWith, where } = R
const partialEquals = ifElse(
  is(Object),
  useWith(where, [
    mapObjIndexed(x => partialEquals(x)),
    identity,
  ]),
  equals,
)
console.log(partialEquals({ id: 3 }, { id: 3 }))
console.log(partialEquals({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy' }))
console.log(partialEquals({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy', additional: 'item' }))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
I haven't used Ramda.js before, so if there's something wrong in my answer please feel free to point it out.
I looked through the source code of Ramda.js.
src/equals.js is where the function you use is defined:
var _curry2 = require('./internal/_curry2');
var _equals = require('./internal/_equals');
module.exports = _curry2(function equals(a, b) {
return _equals(a, b, [], []);
});
So it simply wraps the function (internally called _equals) in a curry helper.
Let's check out the internal _equals function; it checks the key lengths at lines 84-86:
if (keysA.length !== keys(b).length) {
return false;
}
If you comment out these lines, it will return true as you wish.
You can either 1) comment out these 3 lines in the distributed version of Ramda, or 2) add your own partialEquals function, then re-build and create your own version of Ramda (which is the option I'd recommend). If you need any help with that, don't hesitate to discuss it with me. :)
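For comparison, the same effect can be had in userland without patching the library, along the lines of the earlier answers (a minimal sketch that only ignores extra top-level properties, assuming R is loaded as in the snippet above):
const { equals, keys, pick } = R

// Compare a against a copy of b that keeps only the keys present in a
const partialEquals = (a, b) => equals(a, pick(keys(a), b))

console.log(partialEquals({ id: 3, name: 'freddy' },
                          { id: 3, name: 'freddy', additional: 'item' }))
//=> true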
This can also be accomplished with whereEq:
R.findIndex(R.whereEq({ id: 3 }))([{ id: 9 }, { id: 8 }, { id: 3 }, { id: 7 }]) //=> 2