Is there a way to use a transducer to flatten a list and filter on unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
But here we loop over the list twice (plus n more times depending on nesting depth). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
if (isArray(c)) {
c.forEach(o => p = flattenDeepUniq(p, o));
}
else {
p = !p.includes(c) ? [...p, c] : p;
}
return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
We have one loop over the elements (plus n loops for the deeper layers), which seems better and more optimized.
Is it even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
// ^               ^               ^             ^
//  \               \               \            `------ Iteration 4
//   \               \               `------------------ Iteration 3
//    \               `--------------------------------- Iteration 2
//     `------------------------------------------------ Iteration 1
into something like this:
xs.reduce((res, x) => res.length >= 5 ? res : isOdd(x * 7 + 3) ? res.concat(x * 7 + 3) : res, [])
// ^
//  `--------------------------------------------------- Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
(a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
[]
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
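To see the shape of the problem, here is a minimal hand-rolled sketch (not Ramda's implementation) of uniq written transducer-style, as a wrapper around a reducing step; the closed-over Set is exactly the hidden state described above:

```javascript
// Transducer-style uniq: wraps a reducing step so values already seen
// are skipped. The Set is local mutable state closed over by the wrapper --
// the container with quick inserts and quick lookups mentioned above.
const uniqStep = (step) => {
  const seen = new Set();
  return (acc, x) => {
    if (seen.has(x)) return acc;
    seen.add(x);
    return step(acc, x);
  };
};

const appendStep = (acc, x) => acc.concat([x]);
const result = [1, 2, 2, 3, 1, 4].reduce(uniqStep(appendStep), []);
// result: [1, 2, 3, 4]
```

Note that each call to uniqStep creates fresh state, so the wrapped reducer can only be used for a single pass.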
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
(a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
[]
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
(s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
(s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
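Putting those pieces together into a single runnable sketch (with the built-in Array.isArray standing in for lodash's isArray):

```javascript
const isArray = Array.isArray;

// Add every element of xs into the Set, returning the Set.
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set);

// Flatten an arbitrarily nested array into a Set of unique values,
// then convert the Set back into an array.
const flattenUniq = (xs) => Array.from(xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
));

flattenUniq([1, 2, [2, 3], [1, [4, 5]]]); // => [1, 2, 3, 4, 5]
```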
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda, so you can use it directly. As for flatten, you can traverse the tree up front to produce a flat sequence of values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function*(list) {
for (const it of list) {
yield* Array.isArray(it) ? deepIterate(it) : [it];
}
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))
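The same generator also works without Ramda at all: a Set can consume the iterator directly, which keeps the single pass while dropping the transducer machinery (a plain-JS variant, not from the original answer):

```javascript
// Lazily yield every leaf value of an arbitrarily nested array.
const deepIterate = function* (list) {
  for (const it of list) {
    yield* Array.isArray(it) ? deepIterate(it) : [it];
  }
};

// new Set(iterable) walks the iterator once, deduplicating as it goes.
const flattenDeepUniq = (arr) => [...new Set(deepIterate(arr))];

flattenDeepUniq([1, 2, [2, 3], [1, [4, 5]]]); // => [1, 2, 3, 4, 5]
```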
Related
I have some data in the form:
const data = {
list: [1, 2, 3],
newItem: 5
}
I want to make a function that appends the value of newItem to list resulting in this new version of data:
{
list: [1,2,3,5],
newItem: 5,
}
(Ultimately, I'd remove newItem after it's been moved into the list, but I'm trying to simplify the problem for this question).
I'm trying to do it using pointfree style and Ramda.js, as a learning experience.
Here's where I am so far:
const addItem = R.assoc('list',
R.pipe(
R.prop('list'),
R.append(R.prop('newItem'))
)
)
The idea is to generate a function that accepts data, but in this example the call to R.append also needs a reference to data. I'm trying to avoid explicitly mentioning data in order to maintain pointfree style.
Is this possible to do without mentioning data?
If I understand correctly you want to go from {x:3, y:[1,2]} to [1,2,3]. Here's one way:
const fn = compose(apply(append), props(['x', 'y']))
fn({x:3, y:[1,2]});
//=> [1,2,3]
As the discussion on the answer from customcommander shows, there are two different possible interpretations.
If you want to just receive [1, 2, 3, 5], then you can do it as customcommander does, or the way I would choose:
const fn1 = lift (append) (prop ('newItem'), prop ('list'))
But if you wanted something like {list: [1, 2, 3, 5], newItem: 5}, then you might use the above inside applySpec and combine that with a merge, like this:
const fn2 = chain (mergeLeft, applySpec ({list: fn1}))
Here's a snippet:
const fn1 = lift (append) (prop ('newItem'), prop ('list'))
const fn2 = chain (mergeLeft, applySpec ({list: fn1}))
const data = {list: [1, 2, 3], newItem: 5}
console .log (fn1 (data)) //=> [1, 2, 3, 5]
console .log (fn2 (data)) //=> {list: [1, 2, 3, 5], newItem: 5}
.as-console-wrapper {max-height: 100% !important; top: 0}
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
<script> const {lift, append, prop, chain, mergeLeft, applySpec} = R </script>
This second one is a little unwieldy, once you inline fn1. It repeats the property name 'list' in two places, and that always bothers me. But I don't have a good solution at the moment.
I've several times wanted a combination of R.evolve and R.applySpec, which works on the outside like evolve, letting you specify only the properties which need to change, but whose transformation functions are given the whole input object, and not just the corresponding property.
With something like that, this might look like
const f3 = evolveSpec ({
list: ({list, newItem}) => [...list, newItem]
})
or using the above:
const f3 = evolveSpec ({
list: lift (append) (prop ('newItem'), prop ('list'))
})
I think this might be a useful candidate for inclusion in Ramda.
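For what it's worth, here is one possible sketch of that hypothetical evolveSpec in plain JS; the name and semantics are my own proposal, not an existing Ramda function:

```javascript
// Hypothetical evolveSpec: like evolve, it only rewrites the listed
// properties, but like applySpec, each transformation function receives
// the whole input object rather than just the matching property.
const evolveSpec = (spec) => (obj) => {
  const result = { ...obj };
  for (const key of Object.keys(spec)) {
    result[key] = spec[key](obj);
  }
  return result;
};

const f3 = evolveSpec({
  list: ({ list, newItem }) => [...list, newItem]
});

f3({ list: [1, 2, 3], newItem: 5 }); // => { list: [1, 2, 3, 5], newItem: 5 }
```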
const addItem = R.chain
( R.assoc('list') )
( R.converge(R.append, [R.prop('newItem'), R.prop('list')]) );
const data = {
list: [1, 2, 3],
newItem: 5
};
console.log(addItem(data));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
And here is why:
First we can have a look at what the current addItem is supposed to look like when not point free:
const addItem = x => R.assoc('list')
(
R.pipe(
R.prop('list'),
R.append(R.prop('newItem')(x))
)(x)
)(x);
console.log(addItem({ list: [1, 2, 3], newItem: 5 }));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
It takes some data and uses it in three places. We can refactor a bit:
const f = R.assoc('list');
const g = x => R.pipe(
R.prop('list'),
R.append(R.prop('newItem')(x))
)(x)
const addItem = x => f(g(x))(x);
console.log(addItem({ list: [1, 2, 3], newItem: 5 }));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
The x => f(g(x))(x) part might not be obvious immediately but looking at the list of common combinators in JavaScript it can be identified as S_:
Name  | #   | Haskell | Ramda  | Sanctuary | Signature
------|-----|---------|--------|-----------|------------------------------
chain | S_³ | (=<<)²  | chain² | chain²    | (a → b → c) → (b → a) → b → c
Thus x => f(g(x))(x) can be simplified pointfree to R.chain(f)(g).
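To make that concrete, here is the combinator written out in plain JS (a sketch of R.chain's behavior when both arguments are functions, with hand-rolled stand-ins for the Ramda helpers):

```javascript
// For functions, chain(f)(g) is the S_ combinator: x => f(g(x))(x).
const chainFn = (f) => (g) => (x) => f(g(x))(x);

// Minimal stand-ins for the Ramda functions used above.
const assoc = (key) => (val) => (obj) => ({ ...obj, [key]: val });
const g = (o) => [...o.list, o.newItem]; // builds the new list from the whole object

const addItem = chainFn(assoc('list'))(g);
addItem({ list: [1, 2, 3], newItem: 5 }); // => { list: [1, 2, 3, 5], newItem: 5 }
```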
This leaves the g which still takes one argument and uses it in two places. The ultimate goal is to extract two properties from an object and pass them to R.append(), this can be more easily (and pointfree) be expressed with R.converge() as:
const g = R.converge(R.append, [R.prop('newItem'), R.prop('list')]);
Substituting the f and g back gives
const addItem = R.chain
( R.assoc('list') )
( R.converge(R.append, [R.prop('newItem'), R.prop('list')]) );
newbie here... I'm trying to grasp the concept of functional programming in Javascript, but I got stuck.
I'm trying to apply a function to another function with recursion (higher-order function). Let's say I have an input that can be a variable or an array, for example:
const A = [5, 14, 23, 32, 41];
const B = 50;
My basic function should convert Fahrenheit to Celsius (but it could really be any function)
const convertF2C = x => (x - 32) / 1.8;
So the way I would normally solve it would be:
const result = array => array.map ? array.map(result) : convertF2C(array); // using recursion if the input is an array
The problem with the above is that if I would like to change the convertF2C in the "result" function, I would have to modify the code
So, thinking functionally I should be able to create a general function that takes a basic function, like this:
const arrayResult = apply2Array(convertF2C);
console.log(arrayResult(A)); // Output: [-15, -10, -5, 0, 5]
console.log(arrayResult(B)); // Output: 10
Where I'm guessing that the general function "apply2Array" should look something along the lines of:
const apply2Array = fn => (...args) => args.map ? args.map(apply2Array) : fn(...args); // does not work
I found a "sort of" similar question here, but it did not help me: Higher-order function of recursive functions?
Any guidance, help or pointing me in the right direction would be much appreciated.
I'm a bit confused by the answers here. I can't tell if they are responding to requirements I don't actually see, or if I'm missing something important.
But if you just want a decorator that converts a function on a scalar to one that operates on either a scalar or an array of scalars, it's quite simple, and you weren't far off. This should do it:
const apply2Array = (fn) => (arg) =>
Array .isArray (arg) ? arg .map (fn) : fn (arg)
const convertF2C = (t) => (t - 32) / 1.8
const A = [5, 14, 23, 32, 41]
const B = 50
const arrayResult = apply2Array(convertF2C);
console .log (arrayResult (A))
console .log (arrayResult (B))
.as-console-wrapper {max-height: 100% !important; top: 0}
I would suggest that you should use Array.isArray for the check and not the existence of a map property. A property named map might be something other than Array.prototype.map, perhaps something to do with cartography.
Other comments and answers suggest you also want to work the same on nested arrays, to convert something like [5, [[14, 23], 32], 41] into [-15, [[-10, -5], 0], 5]. That wouldn't be much harder. All you need to do, as Bergi suggests, is to wrap the recursively applied function in the same decorator:
const apply2Array = (fn) => (arg) =>
Array .isArray (arg) ? arg .map (apply2Array (fn)) : fn (arg)
// ^^^^^^^^^^^
const convertF2C = (t) => (t - 32) / 1.8
const A = [5, 14, 23, 32, 41]
const B = 50
const C = [5, [[14, 23], 32], 41]
const arrayResult = apply2Array(convertF2C);
console .log (arrayResult (A))
console .log (arrayResult (B))
console .log (arrayResult (C))
.as-console-wrapper {max-height: 100% !important; top: 0}
Don't do this
Still, I would suggest that this enterprise is fraught with potential pitfalls. Imagine, for instance, you had a sum function that operated on an array of numbers, and you wanted to use it to operate on either an array of numbers or on an array of arrays of numbers.
If you wrapped it up with either version of apply2Array, it wouldn't work properly. With the first version, the function will work as expected if you supply an array of arrays of numbers, but will fail if you simply supply an array of numbers. The second one will fail either way.
The trouble is that sometimes your basic function wants to operate on an array. Making a function that does multiple things based on the types of its inputs loses you some simplicity.
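To make that pitfall concrete, here is a sketch of what happens when a function that already expects an array (a simple sum, assumed for illustration) is wrapped by the first decorator:

```javascript
const apply2Array = (fn) => (arg) =>
  Array.isArray(arg) ? arg.map(fn) : fn(arg);

// sum already operates on an array of numbers.
const sum = (ns) => ns.reduce((a, b) => a + b, 0);
const wrapped = apply2Array(sum);

wrapped([[1, 2], [3, 4]]); // => [3, 7] -- fine for an array of arrays
// wrapped([1, 2, 3]);     // TypeError: the decorator maps sum over each
//                         // number instead of summing the whole array
```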
Instead, I would suggest that you create multiple functions to do the different things that you need. You can still use a decorator, but a more general one than the above.
Here we use one called map, which reifies Array.prototype.map:
const map = (fn) => (xs) =>
xs .map (x => fn (x))
const convertF2C = (t) => (t - 32) / 1.8
const convertAllF2C = map (convertF2C)
const A = [5, 14, 23, 32, 41]
const B = 50
console .log (convertAllF2C (A))
console .log (convertF2C (B))
.as-console-wrapper {max-height: 100% !important; top: 0}
And if you also wanted deep mapping, you might rename the decorator above, and do this:
const map = (fn) => (xs) =>
xs .map (x => fn(x))
const deepMap = (fn) => (arg) =>
Array .isArray (arg) ? arg .map (deepMap (fn)) : fn (arg)
const convertF2C = (t) => (t - 32) / 1.8
const convertAllF2C = map (convertF2C)
const deepConvertF2C = deepMap (convertF2C)
const A = [5, 14, 23, 32, 41]
const B = 50
const C = [5, [[14, 23], 32], 41]
const arrayResult = deepMap (convertF2C);
console .log (convertAllF2C (A))
console .log (convertF2C (B))
console .log (deepConvertF2C (C))
.as-console-wrapper {max-height: 100% !important; top: 0}
Having three separate functions to call for your three cases is generally simpler than one function that can be called with three different styles of input associated with three different styles of output. And as these are built from our base function with only some generic decorators, they are still easy to maintain.
But doesn't that contradict...?
Some people know me as a founder and primary author of Ramda. And Ramda has a map function related to this. But it seems to operate on multiple types, including arrays, objects, functions, and more. Isn't this a contradiction?
I'd say no. We just need to move up a layer of abstraction. FantasyLand specifies an abstract generic type, Functor (borrowed from abstract mathematics). These are types which in some way contain one or more values of another type, and to which we can create a similarly-structured container by mapping the function supplied to each of those values. There are certain simple laws that your map function must obey for it to be considered a Functor, but if you do, then Ramda's map will work just fine with your type. In other words, Ramda's map does not work on Arrays specifically, but on any Functor. Ramda itself supplies implementations for Arrays, Objects, and Functions, but delegates the call to other types to their own map methods.
The basic point, though, is that Ramda does not really impose additional complexity here, because the input type of Ramda's map is Functor instead of Array.
Simplicity
Functional programming is about many things. But one of the central topics has to be simplicity. If you haven't seen Rich Hickey's talk Simple Made Easy, I would highly recommend it. It explains an objective notion of simplicity and describes how you might achieve it.
If you give the inner function a name, it becomes easier to write the recursion:
const makeDeeplyMappable = fn => function deepMap(a) {
return Array.isArray(a) ? a.map(deepMap) : fn(a);
};
const convertF2C = x => (x - 32) / 1.8;
const deepF2C = makeDeeplyMappable(convertF2C);
console.log(deepF2C([[32, [0]]]));
console.log(deepF2C([[[[5]]], 32]));
genericRecursiveLoop should be the answer you are looking for.
const convertF2C = x => (x - 32) / 1.8;
// in case you really want to use recursion:
// args.map is only defined when args is an array, so nested arrays are
// handled by recursing into each element; plain values fall through to convertF2C
const recursiveLoop = (args) => args.map ? args.map(recursiveLoop) : convertF2C(args)
console.log('recursiveLoop')
console.log(recursiveLoop(5))
console.log(recursiveLoop(6))
console.log(recursiveLoop(5, 6)) // it will only show response for 5
console.log(recursiveLoop([5, 6]))
/* ANSWER */
const genericRecursiveLoop = (func) => (args) => args.map ? args.map(genericRecursiveLoop(func)) : func(args);
let innerRecursiveLoopFunc = genericRecursiveLoop(convertF2C)
console.log('genericRecursiveLoop')
console.log(innerRecursiveLoopFunc(5))
console.log(innerRecursiveLoopFunc(6))
console.log(innerRecursiveLoopFunc(5, 6)) // it will only show response for 5
console.log(innerRecursiveLoopFunc([5, 6]))
You could also consider Array#flatMap if your data structure isn't arbitrarily nested... this would probably help you preserve readability.
const A = [5, 14, 23, 32, 41];
const B = 50;
const toCelsius = (x) => []
.concat(x)
.flatMap((n) => (n - 32) / 1.8)
;
console.log('A', toCelsius(A));
console.log('B', toCelsius(B));
I tried this code and it produces the result I wanted:
const {
__,
compose,
converge,
divide,
identity,
length,
prop
} = require("ramda");
const div2 = divide(__, 2);
const lengthDiv2 = compose(Math.floor, div2, length);
const midElement = converge(prop, [lengthDiv2, identity]);
console.log(midElement([1, 5, 4])); // 5
But I don't know whether there is another way to get a property from an array, in particular some other implementation of the midElement function?
You can create midElement by chaining R.nth and lengthDiv2 because according to R.chain documentation (and #ScottSauyet):
If second argument is a function, chain(f, g)(x) is equivalent to
f(g(x), x).
In this case g is lengthDiv2, f is R.nth, and x is the array. So, the result would be R.nth(lengthDiv2(array), array), which will return the middle item.
const { compose, flip, divide, length, chain, nth } = R;
const div2 = flip(divide)(2); // create the function using flip
const lengthDiv2 = compose(Math.floor, div2, length);
const midElement = chain(nth, lengthDiv2); // chain R.nth and lengthDiv2
console.log(midElement([1, 5, 4])); //5
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>
Simplification
Yes, there is a somewhat easier way to write midElement. This feels a bit cleaner:
const div2 = divide (__, 2)
const lengthDiv2 = compose (floor, div2, length)
const midElement = chain (nth, lengthDiv2)
console.log (midElement ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midElement ([8, 6, 7, 5, 3, 0])) //=> 5
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script><script>
const {divide, __, compose, length, chain, nth} = R
const {floor} = Math </script>
(We choose nth over prop here only because it's semantically more correct. nth is specific to arrays and their indices. prop works only because of the coincidence that Javascript builds its arrays atop plain objects.)
chain is an interesting function. You can find many more details in its FantasyLand specification. But for our cases, the important point is how it works with functions.
chain (f, g) //=> (x) => f (g (x)) (x)
And that explains how (here at least) it's a simpler alternative to converge.
Note that this version -- like your original -- chooses the second of the two central values when the list has an even length. I usually find that we more naturally choose the first one. That is, for example, midpoint([3, 6, 9, 12]) would usually be 6. To alter that we could simply add a decrement operation before dividing:
const midpoint = chain(nth, compose(floor, divide(__, 2), dec, length))
But Why?
However, Ramda does not offer much of use here. Ramda (disclaimer: I'm one of its main authors) offers help with many problems. But it's a tool, and I would not suggest using it except when it makes your code cleaner.
And this version seems to me much easier to comprehend:
const midpoint = (xs) => xs[Math.floor ((xs.length / 2))]
console.log (midpoint ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midpoint ([8, 6, 7, 5, 3, 0])) //=> 5
Or this version if you want the decrement behavior above:
const midpoint = (xs) => xs[Math.floor (((xs.length - 1) / 2))]
console.log (midpoint ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midpoint ([8, 6, 7, 5, 3, 0])) //=> 7
Another Option
But there are so many different ways to write such a function. While I wouldn't really recommend it, since its performance cannot compare, a recursive solution is very elegant:
// choosing the first central option
const midpoint = (xs) => xs.length <= 2 ? xs[0] : midpoint (xs.slice(1, -1))
// choosing the second central option
const midpoint = (xs) => xs.length <= 2 ? xs[xs.length - 1] : midpoint (xs.slice(1, -1))
These simply take one of the two central elements if there are no more than two left and otherwise recursively takes the midpoint of the array remaining after removing the first and last elements.
What to remember
I'm a founder of Ramda, and proud of the library. But we need to remember that it's just a library. It should make a certain style of coding easier, but it should not dictate any particular style. Use it when it makes your code simpler, more maintainable, more consistent, or more performant. Never use it simply because you can.
I'm trying to get a better understanding of recursion as well as functional programming, I thought a good practice example for that would be to create permutations of a string with recursion and modern methods like reduce, filter and map.
I found this beautiful piece of code
const flatten = xs =>
xs.reduce((cum, next) => [...cum, ...next], []);
const without = (xs, x) =>
xs.filter(y => y !== x);
const permutations = xs =>
flatten(xs.map(x =>
xs.length < 2
? [xs]
: permutations(without(xs, x)).map(perm => [x, ...perm])
));
permutations([1,2,3])
// [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
from Permutations in JavaScript?
by Márton Sári
I've delimited it a bit in order to add some console logs to debug it and understand what's it doing behind the scenes
const flatten = xs => {
console.log(`input for flatten(${xs})`);
return xs.reduce((cum, next) => {
let res = [...cum, ...next];
console.log(`output from flatten(): ${res}`);
return res;
}, []);
}
const without = (xs, x) => {
console.log(`input for without(${xs},${x})`)
let res = xs.filter(y => y !== x);
console.log(`output from without: ${res}`);
return res;
}
const permutations = xs => {
console.log(`input for permutations(${xs})`);
let res = flatten(xs.map(x => {
if (xs.length < 2) {
return [xs]
} else {
return permutations(without(xs, x)).map(perm => [x, ...perm])
}
}));
console.log(`output for permutations: ${res}`)
return res;
}
permutations([1,2,3])
I think I have a good enough idea of what each method is doing, but I just can't seem to conceptualize how it all comes together to create [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]].
can somebody show me step by step what's going on under the hood?
To get all permuations we do the following:
We take one element of the array from left to right.
xs.map(x => // 1
For all the other elements we generate permutations recursively:
permutations(without(xs, x)) // [[2, 3], [3, 2]]
for every permutation we add the value we've taken out back at the beginning:
.map(perm => [x, ...perm]) // [[1, 2, 3], [1, 3, 2]]
now that is repeated for all the array's elements and it results in:
[
// 1
[[1, 2, 3], [1, 3, 2]],
// 2
[[2, 1, 3], [2, 3, 1]],
// 3
[[3, 1, 2], [3, 2, 1]]
]
now we just have to flatten(...) that array to get the desired result.
The whole thing could be expressed as a tree of recursive calls:
[1, 2, 3]
  - [2, 3] ->
    - [3] -> [1, 2, 3]
    - [2] -> [1, 3, 2]
  - [1, 3] ->
    - [1] -> [2, 3, 1]
    - [3] -> [2, 1, 3]
  - [1, 2] ->
    - [1] -> [3, 2, 1]
    - [2] -> [3, 1, 2]
I've delimited it a bit in order to add some console logs to debug it
This can help of course. However keep in mind that simple recursive definitions can often result in complex execution traces.
That is in fact one of reasons why recursion can be so useful. Because some algorithms that have complicated iterations, admit a simple recursive description. So your goal in understanding a recursive algorithm should be to figure out the inductive (not iterative) reasoning in its definition.
Let's forget about javascript and focus on the algorithm. Let's see how we can obtain the permutations of elements of a set A, which we will denote P(A).
Note: It's of no relevance that in the original algorithm the input is a list, since the original order does not matter at all. Likewise it's of no relevance that we will return a set of lists rather than a list of lists, since we don't care the order in which solutions are calculated.
Base Case:
The simplest case is the empty set. There is exactly one solution for the permutations of 0 elements, and that solution is the empty sequence []. So,
P(∅) = {[]}
Recursive Case:
In order to use recursion, you want to describe how to obtain P(A) from P(A') for some A' smaller than A in size.
Note: If you do that, it's finished. Operationally the program will work out via successive calls to P with smaller and smaller arguments until it reaches the base case, and then it will come back building longer results from shorter ones.
So here is one way to build a particular permutation of a set A with n+1 elements. You need to successively pick one element of A for each position:
  _   _  ...  _
 n+1  n       1
So you pick an x ∈ A for the first position:
  x   _  ...  _
      n       1
And then you need to choose a permutation in P(A\{x}).
This tells you one way to build all permutations of size n+1. Consider all possible choices of x in A (to use as first element), and for each choice put x in front of each solution of P(A\{x}). Finally take the union of all solutions you found for each choice of x.
Let's use the dot operator to represent putting x in front of a sequence s, and the diamond operator to represent putting x in front of every s ∈ S. That is,
x⋅s = [x, s1, s2, ..., sn]
x⟡S = {x⋅s : s ∈ S}
Then for a non-empty A
P(A) = ⋃ {x⟡P(A\{x}) : x ∈ A}
This expression together with the case base give you all the permutations of elements in a set A.
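That recursive definition translates almost line for line into JS (a sketch using arrays in place of sets, and flatMap for the union):

```javascript
const without = (xs, x) => xs.filter((y) => y !== x);

// P(A) = { [] }                           when A is empty
// P(A) = ⋃ { x⟡P(A\{x}) : x ∈ A }         otherwise
const permutations = (xs) =>
  xs.length === 0
    ? [[]]                               // base case: the one empty sequence
    : xs.flatMap((x) =>                  // union over every choice of x
        permutations(without(xs, x))     // P(A \ {x})
          .map((perm) => [x, ...perm])); // x⟡..., put x in front of each one

permutations([1, 2, 3]);
// => [[1,2,3], [1,3,2], [2,1,3], [2,3,1], [3,1,2], [3,2,1]]
```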
The javascript code
To understand how the code you've shown implements this algorithm, you need to consider the following:
That code handles the two base cases, 0 and 1 elements, together by writing xs.length < 2. Note that the check can't be lowered to xs.length < 1: mapping over an empty array produces no candidate solutions at all, so the singleton case must be caught before recursing.
The mapping corresponds to our operation x⟡S = {x⋅s : s ∈ S}
The without corresponds to P(A\{x})
The flatten corresponds to the ⋃ which joins all solutions.
I'm trying to understand recursion, and I have a somewhat decent understanding of how it intuitively works; however, the aggregation of the returned data is the bit I struggle with.
For instance, in javascript to flatten an array I came up with the following code:
var _flatten = function(arr){
if(!(arr instanceof Array)) return arr;
var g = [];
function flatten(arr){
for(var i = 0; i < arr.length;i++){
if(arr[i] instanceof Array){
flatten(arr[i]);
}else{
g.push(arr[i]);
}
}
}
flatten(arr);
return g;
}
Turning something like this
var list = [1,2,3,4,5,6,[1,2,3,4,5,[1,2,3],[1,2,3,4]]];
into this: [1, 2, 3, 4, 5, 6, 1, 2, 3, 4, 5, 1, 2, 3, 1, 2, 3, 4]
Which is fine and all, but the global variable g seems like some kind of cheap hack. I don't know how to think about the result returned when getting to the top of the stack and the return of the function propagating back down the stack. How would you implement this function, and how can I get a better grasp on this?
Thanks!
Instead of a global variable (to make it more properly recursive) you can send in g as an argument to the flatten function, and pass the modified g back out with a return statement.
var _flatten = function(arr) {
if (!(arr instanceof Array)) return arr;
function flatten(arr, g) {
for (var i = 0; i < arr.length; i++) {
if (arr[i] instanceof Array) {
flatten(arr[i], g);
} else {
g.push(arr[i]);
}
}
return g;
}
return flatten(arr, []);
}
There are many ways to write an array flattening procedure, but I understand your question is about understanding recursion in general.
The g isn't global in any sense of the word, but it is a symptom of the implementation choices. Mutation isn't necessarily bad so long as you keep it localized to your function – that g is never leaked outside of the function where someone could potentially observe the side effects.
Personally tho, I think it's better to break your problem into small generic procedures that make it much easier to describe your code.
You'll note that we don't have to setup temporary variables like g or handle incrementing array iterators like i – we don't even have to look at the .length property of the array. Not having to think about these things make it really nice to write our program in a declarative way.
// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => xs => xs.map(f).reduce((x,y) => x.concat(y), [])
// id :: a -> a
const id = x => x
// flatten :: [[a]] -> [a]
const flatten = concatMap (id)
// isArray :: a -> Bool
const isArray = Array.isArray
// deepFlatten :: [[a]] -> [a]
const deepFlatten = concatMap (x => isArray(x) ? deepFlatten(x) : x)
// your sample data
let data = [0, [1, [2, [3, [4, 5], 6]]], [7, [8]], 9]
console.log(deepFlatten(data))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ]
console.log(flatten(data))
// [ 0, 1, [ 2, [ 3, [ 4, 5 ], 6 ] ], 7, [ 8 ], 9 ]
First you'll see I made two different flattening procedures. flatten to flatten one level of nesting, and deepFlatten to flatten an arbitrarily deep array.
You'll also see I use Array.prototype.map and Array.prototype.reduce since these are provided by ECMAScript but that doesn't mean you're only limited to using procedures that you have. You can make your own procedures to fill the gaps. Here we made concatMap which is a useful generic provided by other languages such as Haskell.
Utilizing these generics, you can see that deepFlatten is an insanely simple procedure.
// deepFlatten :: [[a]] -> [a]
const deepFlatten = concatMap (x => isArray(x) ? deepFlatten(x) : x)
It's made up of a single expression including a lambda made up of a single if branch (by use of the ternary operator ?:)
Maybe it's a lot to take in, but hopefully it demonstrates that "writing a recursive procedure" isn't always about complicated setup of state variables and complex logic to control the recursion. In this case, it's a simple
if (condition) recurse else don't
If you have any questions, let me know. I'm happy to help you in any way I can.
In fact, recursive coding is very simple, and every aspect of it should be contained in the function body. Any info that needs to be passed along should be sent through arguments to the next recursion. Use of global variables is ugly and should be avoided. Accordingly, I would simply do the in-place array flattening job as follows:
var list = [1,2,3,4,5,6,[1,2,3,4,5,[1,2,3],[1,2,[9,8,7],3,4]]];
function flatArray(arr){
for (var i = 0, len = arr.length; i < len; i++)
Array.isArray(arr[i]) && (arr.splice(i,0,...flatArray(arr.splice(i,1)[0])), len = arr.length);
return arr;
}
console.log(flatArray(list));