I'm doing some work that involves grabbing specific nodes from a tree-like structure. We're trying to use a stack and a DFS algorithm, but my colleague claims that my implementation is not that.
Here is my implementation of a basic DFS algorithm using a stack:
const findMatchingElements = (node, name, result) => {
  for (const child of node.children) {
    findMatchingElements(child, name, result)
  }
  if (node.name === name) result.push(node)
  return result
}

const getElements = (tree, name) => {
  return findMatchingElements(tree, name, [])
}
getElements(obj, 'foo')
And a sample input:
const obj = {
id: 1,
name: 'foo',
children: [
{
id: 45,
name: 'bar',
children: [
{
id: 859,
name: 'bar',
children: []
}
]
},
{
id: 67,
name: 'foo',
children: [
{
id: 456,
name: 'bar',
children: []
},
{
id: 653,
name: 'foo',
children: []
}
]
}
]
}
I am getting my expected output:
[ { id: 653, name: 'foo', children: [] },
{ id: 67, name: 'foo', children: [ [Object], [Object] ] },
{ id: 1, name: 'foo', children: [ [Object], [Object] ] } ]
The results are in the order I expect as well, but my colleague for some reason does not think this is a proper stack implementation. Am I missing something? Is it because of the way the final answer is printed out? To me, this feels like a stack.
I'm a bit confused about what you're disagreeing about here, but that output looks like a stack to me, provided you agree that it's LIFO once the client starts using it.
Right now it's just a JavaScript array, but if you start pushing and popping on it, and only doing that, then it's a JavaScript implementation of a stack.
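For example (a tiny illustration only; the id 999 node is made up), as long as the client restricts itself to push and pop, the result behaves as a stack:

// Illustration only: using the returned array purely as a LIFO stack
const matches = getElements(obj, 'foo')
matches.push({ id: 999, name: 'foo', children: [] }) // made-up node, pushed on top
console.log(matches.pop().id) // 999 -- the last thing pushed is the first thing popped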
Because of the order of the lines in your recursive function, your DFS is a post-order traversal. (In-order doesn't apply here, since it's only defined for binary trees.) Your co-worker might have been expecting a pre-order DFS. To convert your algorithm to pre-order, just check the node before visiting its children:
const findMatchingElements = (node, name, result) => {
  if (node.name === name) result.push(node)
  for (const child of node.children) {
    findMatchingElements(child, name, result)
  }
  return result
}
You are using a stack implicitly, via recursion. I'd guess your colleague means your implementation does not use an explicit stack, i.e. one you manage yourself without recursion.
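For comparison, here is a rough sketch of what an explicit-stack version might look like (findMatchingElementsIterative is my own name, not something your colleague specified):

// Sketch: iterative DFS with an explicit stack instead of the call stack
const findMatchingElementsIterative = (root, name) => {
  const result = []
  const stack = [root]                 // the explicit stack
  while (stack.length > 0) {
    const node = stack.pop()           // LIFO: most recently pushed node first
    if (node.name === name) result.push(node)
    // Push children in reverse so they are popped left-to-right
    for (let i = node.children.length - 1; i >= 0; i--) {
      stack.push(node.children[i])
    }
  }
  return result
}

// findMatchingElementsIterative(obj, 'foo')
// Note: this visits nodes in pre-order, so the matches come out in a different
// order than your post-order recursive version.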
Related
Requirement: Is there an optimal or easy way to filter out the objects from an array that contain a specific property, based on its value, without recursion?
Problem statement: We can achieve this requirement using recursion, but since the data set (an array of objects) is very large and each object contains any number of nested objects, the recursive approach is causing performance issues.
Here is the sample mock data:
[{
children: [{
children: [{
children: [],
isWorking: 'yes'
}]
}]
}, {
children: [],
isWorking: 'no'
}, {
children: [{
children: [{
children: [],
isWorking: 'no'
}]
}]
}, {
children: [{
children: [],
isWorking: 'yes'
}]
}, ...]
I want to filter out the root objects from the array that contain a nested isWorking property with the value yes.
The isWorking property is only available on objects that do not contain children, i.e. children: [].
As I said earlier, I am able to achieve this with recursion, but I am looking for a more optimal solution that will not impact performance.
This is what I tried (Working solution):
const parent = [{
children: [{
children: [{
children: [],
isWorking: 'yes'
}]
}]
}, {
children: [],
isWorking: 'no'
}, {
children: [{
children: [{
children: [],
isWorking: 'no'
}]
}]
}, {
children: [{
children: [],
isWorking: 'yes'
}]
}];
const isWorkingFlagArr = [];
function checkForOccupation(arr) {
  arr.forEach(obj => {
    (!obj.children.length) ? isWorkingFlagArr.push(obj.isWorking === 'yes') : checkForOccupation(obj.children)
  })
}
checkForOccupation(parent);
const res = parent.filter((obj, index) => isWorkingFlagArr[index]);
console.log(res);
The following runs the same algorithm as yours, but makes the recursive calls asynchronously, so that each one resumes on a new microtask, thereby avoiding blowing the stack.
In the following code 👇
i. An outer async IIFE, because top-level await is not supported by StackOverflow snippets.
ii. async so that await can be used inside getFlags.
iii. A named async IIFE wrapping your algorithm.
iv. Your algorithm.
v. Suspend continuation of the for..of loop until the promise returned by the recursive call is resolved. This is akin to a .then(() => checkForOccupation(children)), meaning that the recursive call resumes with a fresh stack on a microtask, mitigating the problem of a deeply nested recursive call and the lack of tail-call optimization in JS. This brings with it a performance penalty.
vi. Invoke the inner async IIFE to kick things off.
vii. Invoke the outer async IIFE to compensate for the lack of top-level await support on StackOverflow.
(async () => { // i.
  const getFlags = async (arr) => { // ii.
    const flags = []
    await (async function checkForOccupation(arr) { // iii.
      for (const { children, isWorking } of arr) { // iv.
        !children.length
          ? flags.push(isWorking === 'yes')
          : await checkForOccupation(children) // v.
      }
    })(arr) // vi.
    return flags
  }

  const data = [{
    children: [{
      children: [{
        children: [],
        isWorking: 'yes'
      }]
    }]
  }, {
    children: [],
    isWorking: 'no'
  }, {
    children: [{
      children: [{
        children: [],
        isWorking: 'no'
      }]
    }]
  }, {
    children: [{
      children: [],
      isWorking: 'yes'
    }]
  }]

  const flags = await getFlags(data)
  console.log(data.filter((_, index) => flags[index]))
})() // vii.
The alternative approach would be to manage a stack of state explicitly, which would be a chore.
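For reference, a rough sketch of what that explicit-stack version could look like (getFlagsIteratively is a name I've made up; it keeps the same leaf order as the recursive checkForOccupation, so it can be used in its place):

// Explicit-stack version: no recursion, no async machinery
const getFlagsIteratively = (arr) => {
  const flags = []
  const stack = [...arr].reverse()          // reverse so items are popped left-to-right
  while (stack.length > 0) {
    const { children, isWorking } = stack.pop()
    if (!children.length) {
      flags.push(isWorking === 'yes')       // leaf: record the flag
    } else {
      // push children in reverse so they are popped in their original order
      for (let i = children.length - 1; i >= 0; i--) stack.push(children[i])
    }
  }
  return flags
}

// const flags = getFlagsIteratively(parent)
// const res = parent.filter((_, index) => flags[index])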
I have no idea if this will perform better or worse than your code, but I find it significantly simpler:
const checkForOccupation = (xs) => xs .flatMap (
  (x, _, __, kids = checkForOccupation (x .children || [])) =>
    x .isWorking == 'yes' || kids .length > 0 ? [{...x, children: kids}] : []
)
const data = [{children: [{children: [{children: [{children: [], isWorking: 'no'}], isWorking: 'yes'}]}]}, {children: [], isWorking: 'no'}, {children: [{children: [{children: [], isWorking: 'no'}]}]}, {children: [{children: [], isWorking: 'yes'}]}]
console .log (checkForOccupation (data))
We use flatMap to transform and filter in a single go. We first recur on the children array of our node (which will bottom out when that array is empty) and check whether the returned array has any members, or if our node has the magic isWorking value. If so, we return to flatMap an array containing a single object that has our own properties, with children replaced by the result of that recursive call. If it doesn't have that value or any children, we return to flatMap an empty array. flatMap then flattens out the collection of returned arrays.
Ben Aston -- in a comment to the question and in his own answer -- worried about blowing the stack. As far as I can tell, this can only happen if you have an object nested several thousand layers deep. If you do, I think this is likely the least of your problems.
I'm learning how to handle JS arrays, and I was wondering if it's possible to create new objects by splitting an attribute of an existing object.
I tried to do that with .map and .flatMap, but the output gives me combinations of objects that replicate the other values, while I'm looking for unique objects.
I think the code makes it clearer:
const array=[
{ names:['something1', 'something2'],
state:false,
features:['feature1','feature2']
},
{ names:['something3', 'something4'],
state:true,
features:['feature3','feature4']
},
]
array.flatMap(({names, state, features}) => {
  names.flatMap(name => {
    features.flatMap(feature => {
      console.log(({name, state, feature}));
    })
  })
})
So, with this code the output is:
{ name: 'something1', state: false, feature: 'feature1' }
{ name: 'something1', state: false, feature: 'feature2' }
{ name: 'something2', state: false, feature: 'feature1' }
{ name: 'something2', state: false, feature: 'feature2' }
{ name: 'something3', state: true, feature: 'feature3' }
{ name: 'something3', state: true, feature: 'feature4' }
{ name: 'something4', state: true, feature: 'feature3' }
{ name: 'something4', state: true, feature: 'feature4' }
But I want the output to be:
{ name: 'something1', state: false, feature: 'feature1' },
{ name: 'something2', state: false, feature: 'feature2' },
{ name: 'something3', state: true, feature: 'feature3' },
{ name: 'something4', state: true, feature: 'feature4' }
I'm a newbie at coding; sorry if my wording doesn't describe this problem precisely.
Thanks for your patience
You can use .flatMap() with an inner .map() function (instead of a .flatMap() like you are doing) to map each element in the names array to its own respective object with its associated feature.
See example below:
const array = [{
names: ['something1', 'something2'],
state: false,
features: ['feature1', 'feature2']
},
{
names: ['something3', 'something4'],
state: true,
features: ['feature3', 'feature4']
},
];
const res = array.flatMap(
({names, state, features}) => names.map((name, i) => ({name, state, feature: features[i]}))
);
console.log(res);
Here you go, a slight edit on the last one. Just map over names.
const arr = [{
names: ["test1", "test2"],
values: ["t1", "t2"]
},
{
names: ["test3", "test4"],
values: ["t3", "t4"]
}];
const flat = arr.reduce((a, {names, values}) => {
  names.map((name, i) => {
    a.push({ name, value: values[i]});
  });
  return a;
}, []).flat();
console.log(`Flat: ${JSON.stringify(flat)}`);
If you want to learn programming, the magic one-liners might not be the best pick. map(), reduce() and the like are algorithms themselves. The basics would be completing these tasks with simple loops (for, while) and sometimes recursion (like the general solution in the question this one originates from).
What you have at the moment is an array, which you can iterate over in an outer loop, and inside there are objects with parallel arrays, which you could iterate over also, in an inner loop:
const array=[
{ names:['something1', 'something2'],
state:false,
features:['feature1','feature2']
},
{ names:['something3', 'something4'],
state:true,
features:['feature3','feature4']
},
];
for (let outer = 0; outer < array.length; outer++) {
  let obj = array[outer];
  for (let inner = 0; inner < obj.names.length; inner++)
    console.log({
      name: obj.names[inner],
      state: obj.state,
      feature: obj.features[inner]
    });
}
Then yes, the outer loop does not need the index at all, so it could iterate directly over the elements (for (let obj of array) or array.forEach()), but the inner loop does need the index, so it is not so trivial to throw it away. Note the slight imbalance in the suggested map() variants: they have name and features[i], where of course name is really names[i] already extracted for you, which hides the fact that the two arrays are traversed in parallel.
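For completeness, here is a sketch of that variant: for...of on the outside, and a plain index only where the parallel traversal actually needs it (same array as above):

for (const obj of array) {                      // no index needed for the outer loop
  for (let i = 0; i < obj.names.length; i++) {  // index needed to walk names and features in parallel
    console.log({
      name: obj.names[i],
      state: obj.state,
      feature: obj.features[i]
    });
  }
}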
I have a flat array of folders like this one:
const foldersArray = [{id: "1", parentId: null, name: "folder1"}, {id: "2", parentId: null, name: "folder2"}, {id: "1.1", parentId: 1, name: "folder1.1"}, {id: "1.1.1", parentId: "1.1", name: "folder1.1.1"},{id: "2.1", parentId: 2, name: "folder2.1"}]
I want to output an array of all parents of a given folder to generate a breadcrumb-like component of the folder path.
I presently have code that does what I need, but I'd like to write it better, in a more "functional" way, using reduce recursively.
If I do :
getFolderParents(folder) {
  return this.foldersArray.reduce((all, item) => {
    if (item.id === folder.parentId) {
      all.push(item.name)
      this.getFolderParents(item)
    }
    return all
  }, [])
}
and log the output, I can see it successfully finds the first parent, then re-executes the code and outputs the parent's parent... but my accumulator array is logically reset to [] at each step... I can't find a way around that, though...
You could do this with a Map so you avoid iterating over the array each time you need to retrieve the next parent. This way you get an O(n) instead of an O(n²) time complexity:
const foldersArray = [{id: "1", parentId: null, name: "folder1"}, {id: "2", parentId: null, name: "folder2"}, {id: "1.1", parentId: "1", name: "folder1.1"}, {id: "1.1.1", parentId: "1.1", name: "folder1.1.1"},{id: "2.1", parentId: "2", name: "folder2.1"}];
const folderMap = new Map(foldersArray.map( o => [o.id, o] ));
const getFolderParents = folder =>
(folder.parentId ? getFolderParents(folderMap.get(folder.parentId)) : [])
.concat(folder.name);
// Example call:
console.log(getFolderParents(foldersArray[4]));
Just a minor remark: your parentId data type is not consistent: it had better always be a string, just like the data type of the id property. If not, you need to cast it in your code, but it is really better to get the data type right from the start. You'll notice that I have defined parentId consistently as a string: this is needed for the above code to work. Alternatively, cast it to a string in the code with String(folder.parentId).
Secondly, the above code prepends the parent folder name (as is done in file path notation). If you need to append the parent name after the child, swap the concat subject and argument:
[folder.name].concat(folder.parentId ? getFolderParents(folderMap.get(folder.parentId)) : []);
You can do what you're looking for with a rather ugly looking while loop. Gets the job done though. Each loop iteration filters, looking for an instance of a parent. If that doesn't exist, it stops and exits. If it does exist, it pushes that parent into the tree array, sets folder to its parent to move up a level, then moves on to the next iteration.
const foldersArray = [{
id: "1",
parentId: null,
name: "folder1"
}, {
id: "2",
parentId: null,
name: "folder2"
}, {
id: "1.1",
parentId: 1,
name: "folder1.1"
}, {
id: "1.1.1",
parentId: "1.1",
name: "folder1.1.1"
}, {
id: "2.1",
parentId: 2,
name: "folder2.1"
}]
function getParents(folder) {
  const tree = [], storeFolder = folder
  let parentFolder
  while ((parentFolder = foldersArray.filter(t => t.id == folder.parentId)[0]) !== undefined) {
    tree.push(parentFolder)
    folder = parentFolder
  }
  console.log({ originalFolder: storeFolder, parentTree: tree })
}
getParents(foldersArray[3])
You're thinking about it in a backwards way. You have a single folder as input and you wish to expand it to a breadcrumb list of many folders. This is actually the opposite of reduce which takes as input many values, and returns a single value.
Reduce is also known as fold, and the reverse of a fold is unfold. unfold accepts a looping function f and an init state. Our function is given two loop controllers: next, which adds a value to the output and specifies the next state, and done, which signals the end of the loop.
const unfold = (f, init) =>
  f ( (value, nextState) => [ value, ...unfold (f, nextState) ]
    , () => []
    , init
    )

const range = (m, n) =>
  unfold
    ( (next, done, state) =>
        state > n
          ? done ()
          : next ( state     // value to add to output
                 , state + 1 // next state
                 )
    , m                      // initial state
    )

console.log (range (3, 10))
// [ 3, 4, 5, 6, 7, 8, 9, 10 ]
Above, we start with an initial state of a number, m in this case. Just like the accumulator variable in reduce, you can specify any initial state to unfold. Below, we express your program using unfold. We add a parent helper to make it easy to select a folder's parent:
const parent = ({ parentId }) =>
  data .find (f => f.id === String (parentId))

const breadcrumb = folder =>
  unfold
    ( (next, done, f) =>
        f == null
          ? done ()
          : next ( f          // add folder to output
                 , parent (f) // loop with parent folder
                 )
    , folder                  // init state
    )
breadcrumb (data[3])
// [ { id: '1.1.1', parentId: '1.1', name: 'folder1.1.1' }
// , { id: '1.1', parentId: 1, name: 'folder1.1' }
// , { id: '1', parentId: null, name: 'folder1' } ]
breadcrumb (data[4])
// [ { id: '2.1', parentId: 2, name: 'folder2.1' }
// , { id: '2', parentId: null, name: 'folder2' } ]
breadcrumb (data[0])
// [ { id: '1', parentId: null, name: 'folder1' } ]
You can verify the results of the program below
const data =
[ {id: "1", parentId: null, name: "folder1"}
, {id: "2", parentId: null, name: "folder2"}
, {id: "1.1", parentId: 1, name: "folder1.1"}
, {id: "1.1.1", parentId: "1.1", name: "folder1.1.1"}
, {id: "2.1", parentId: 2, name: "folder2.1"}
]
const unfold = (f, init) =>
  f ( (value, state) => [ value, ...unfold (f, state) ]
    , () => []
    , init
    )

const parent = ({ parentId }) =>
  data .find (f => f.id === String (parentId))

const breadcrumb = folder =>
  unfold
    ( (next, done, f) =>
        f == null
          ? done ()
          : next ( f          // add folder to output
                 , parent (f) // loop with parent folder
                 )
    , folder                  // init state
    )
console.log (breadcrumb (data[3]))
// [ { id: '1.1.1', parentId: '1.1', name: 'folder1.1.1' }
// , { id: '1.1', parentId: 1, name: 'folder1.1' }
// , { id: '1', parentId: null, name: 'folder1' } ]
console.log (breadcrumb (data[4]))
// [ { id: '2.1', parentId: 2, name: 'folder2.1' }
// , { id: '2', parentId: null, name: 'folder2' } ]
console.log (breadcrumb (data[0]))
// [ { id: '1', parentId: null, name: 'folder1' } ]
If you trace the computation above, you see that find is called once per folder f added to the output in the unfolding process. This is an expensive operation, and if your data set is significantly large, it could be a problem for you.
A better solution would be to create an additional representation of your data that has a structure better suited for this type of query. If all you do is create a Map of f.id -> f, you can decrease lookup time from linear to effectively constant.
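For example (just a sketch; folderIndex is a name I've picked, not something from above), parent could be backed by a Map that is built once:

// Build the lookup table once: id -> folder
const folderIndex = new Map (data .map (f => [ f.id, f ]))

// parent is now a single Map lookup instead of a linear scan with find
const parent = ({ parentId }) =>
  parentId == null
    ? undefined
    : folderIndex .get (String (parentId))

breadcrumb stays exactly the same; only the lookup underneath it changes.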
unfold is really powerful and suited for a wide variety of problems. I have many other answers relying on it in various ways. There are even some dealing with asynchrony in there, too.
If you get stuck, don't hesitate to ask follow-up questions :D
There is an equals function in Ramda.js which is totally awesome; it will provide the following:
// (1) true
R.equals({ id: 3}, { id: 3})
// (2) true
R.equals({ id: 3, name: 'freddy'}, { id: 3, name: 'freddy'})
// (3) false
R.equals({ id: 3, name: 'freddy'}, { id: 3, name: 'freddy', additional: 'item'});
How would I go about enhancing this function, or in some other way producing a true result for number 3?
I would like to ignore all the properties of the rValue not present in the lValue, but faithfully compare the rest. I would prefer that the recursive nature of equals remain intact, if that's possible.
I made a simple fiddle that shows the results above.
In order to play nicely with the Fantasy Land spec, there's a constraint on equals that requires the symmetry equals(a, b) === equals(b, a) to hold, so to satisfy your case we'll need to get the objects into an equivalent shape for comparison.
We can achieve this by creating a new version of the second object that has had all properties removed that don't exist in the first object.
const { pick, keys, useWith, identity, equals, converge } = R // Ramda functions used below

const intersectObj = (a, b) => pick(keys(a), b)
// or if you prefer the point-free edition
const intersectObj_ = useWith(pick, [keys, identity])
const a = { id: 3, name: 'freddy' },
b = { id: 3, name: 'freddy', additional: 'item'}
intersectObj(a, b) // {"id": 3, "name": "freddy"}
Using this, we can now compare both objects according to the properties that exist in the first object a.
const partialEq = (a, b) => equals(a, intersectObj(a, b))
// again, if you prefer it point-free
const partialEq_ = converge(equals, [identity, intersectObj])
partialEq({ id: 3, person: { name: 'freddy' } },
{ id: 3, person: { name: 'freddy' }, additional: 'item'})
//=> true
partialEq({ id: 3, person: { name: 'freddy' } },
{ id: 3, person: { age: 15 }, additional: 'item'})
//=> false
Use whereEq
From the docs: "Takes a spec object and a test object; returns true if the test satisfies the spec, false otherwise."
whereEq({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy', additional: 'item' })
The other approach is to develop your own version. It boils down to:
if (is object):
    check all keys - recursive
otherwise:
    compare using `equals`
This is recursive point-free version that handles deep objects, arrays and non-object values.
const { equals, identity, ifElse, is, mapObjIndexed, useWith, where } = R
const partialEquals = ifElse(
  is(Object),
  useWith(where, [
    mapObjIndexed(x => partialEquals(x)),
    identity,
  ]),
  equals,
)
console.log(partialEquals({ id: 3 }, { id: 3 }))
console.log(partialEquals({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy' }))
console.log(partialEquals({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy', additional: 'item' }))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
I haven't used Ramda.js before, so if there's something wrong in my answer please feel free to point it out.
I looked into the source code of Ramda.js.
src/equals.js is where the function you use is defined:
var _curry2 = require('./internal/_curry2');
var _equals = require('./internal/_equals');
module.exports = _curry2(function equals(a, b) {
  return _equals(a, b, [], []);
});
So it simply puts the function equals (internally called _equals) into the "curry".
So let's check out the internal _equals function; it checks the key lengths at lines 84–86:
if (keysA.length !== keys(b).length) {
  return false;
}
Comment out these lines and it will return true as you wish.
You can 1) simply comment out these three lines in the distributed version of Ramda, or 2) add your own partialEquals function, then re-build and create your own version of Ramda (which I would recommend more). If you need any help with that, don't hesitate to discuss it with me. :)
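If you go with option 2, a rough sketch of such a partialEquals in plain JS (the name and shape are my own, not taken from the Ramda source) could be:

// Compare only the keys present in a; fall back to R.equals for non-objects
const partialEquals = (a, b) =>
  a !== null && typeof a === 'object'
    ? b !== null && typeof b === 'object' &&
      Object.keys(a).every(k => partialEquals(a[k], b[k]))
    : R.equals(a, b)

// partialEquals({ id: 3, name: 'freddy' }, { id: 3, name: 'freddy', additional: 'item' }) // => true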
This can also be accomplished with whereEq:
R.findIndex(R.whereEq({ id: 3 }))([{ id: 9 }, { id: 8 }, { id: 3 }, { id: 7 }]) // => 2
I have a tree in javascript which has multiple root elements and nested children.
Here's the object:
[{
  _id: '546d30905d7edd1d5169181d',
  name: 'first',
  children: []
}, {
  _id: '546d30905d7edd1d2169181d',
  name: 'second',
  children: []
}, {
  _id: '446d30905d7edd1d5169181d',
  name: 'third',
  children: [{
    _id: '446d30905d7e2d1d5169181d',
    name: '3child',
    children: []
  }, {
    _id: '446d30915d7e2d1d5169181d',
    name: '3child2',
    children: [{
      _id: '546d30905d7edd1d2569181d',
      name: 'second2',
      children: []
    }]
  }]
}, {
  _id: '546d30995d7edd1d5169181d',
  name: 'fourth',
  children: []
}]
This is a truncated document that's being stored in MongoDB using materialized path. The issue is that I need to add a 'sorting' ability, so nodes in the same root can be sorted.
I want to iterate this tree and apply a sort_value such as node['sort_value'] = 0, etc.
Each level will have its own sort order, starting at 0.
I can simply iterate the tree recursively:
function iterate(items) {
  _.each(items, function(page, key) {
    if (page.children.length > 0) {
      iterate(page.children);
    }
  });
}
However, I can't figure out how to keep track of the sort orders and also update the objects to include the sort_value field.
Any help would be greatly appreciated! Thank you
I did it by using the array key for sorting and "synchronizing" it with an object property (because I needed it saved to the DB and restored afterwards), and it works like a charm :)
So, something like this (pseudo-code):
var unsorted = [
  0: {"sort_key": "0", "data": "dataaa 0"},
  1: {"sort_key": "1", "data": "dataaa 1"},
  ...
  n: {"sort_key": "n", "data": "dataaa n"}
];

function_sort(unsorted){
  ...
  return sorted = [
    0: {"sort_key": "n", "data": "dataaa y"},
    1: {"sort_key": "44", "data": "dataaa x"},
    ...
    n: {"sort_key": "0", "data": "dataaa z"}
  ];
}

save = function_save(sorted){
  ...update sort_key as array key...
  return for_saving;
}