Understanding a solution for finding permutations of a string - javascript

I'm trying to get a better understanding of recursion as well as functional programming. I thought a good practice exercise would be to create permutations of a string with recursion and modern methods like reduce, filter, and map.
I found this beautiful piece of code
const flatten = xs =>
  xs.reduce((cum, next) => [...cum, ...next], []);

const without = (xs, x) =>
  xs.filter(y => y !== x);

const permutations = xs =>
  flatten(xs.map(x =>
    xs.length < 2
      ? [xs]
      : permutations(without(xs, x)).map(perm => [x, ...perm])
  ));

permutations([1, 2, 3])
// [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]]
from Permutations in JavaScript?
by Márton Sári
I've delimited it a bit in order to add some console logs to debug it and understand what it's doing behind the scenes:
const flatten = xs => {
  console.log(`input for flatten(${xs})`);
  return xs.reduce((cum, next) => {
    let res = [...cum, ...next];
    console.log(`output from flatten(): ${res}`);
    return res;
  }, []);
}

const without = (xs, x) => {
  console.log(`input for without(${xs},${x})`);
  let res = xs.filter(y => y !== x);
  console.log(`output from without: ${res}`);
  return res;
}

const permutations = xs => {
  console.log(`input for permutations(${xs})`);
  let res = flatten(xs.map(x => {
    if (xs.length < 2) {
      return [xs]
    } else {
      return permutations(without(xs, x)).map(perm => [x, ...perm])
    }
  }));
  console.log(`output for permutations: ${res}`);
  return res;
}
permutations([1,2,3])
I think I have a good enough idea of what each method is doing, but I just can't seem to conceptualize how it all comes together to create [[1, 2, 3], [1, 3, 2], [2, 1, 3], [2, 3, 1], [3, 1, 2], [3, 2, 1]].
can somebody show me step by step what's going on under the hood?

To get all permutations we do the following:
We take one element of the array from left to right.
xs.map(x => // 1
For all the other elements we generate permutations recursively:
permutations(without(xs, x)) // [[2, 3], [3, 2]]
for every permutation we add the value we've taken out back at the beginning:
.map(perm => [x, ...perm]) // [[1, 2, 3], [1, 3, 2]]
now that is repeated for all the array's elements, and it results in:
[
// 1
[[1, 2, 3], [1, 3, 2]],
// 2
[[2, 1, 3], [2, 3, 1]],
// 3
[[3, 1, 2], [3, 2, 1]]
]
now we just have to flatten(...) that array to get the desired result.
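To make that final step concrete, here is a small sketch (reusing the flatten helper from the question) applied to the intermediate array shown above:

```javascript
// The flatten helper from the question, applied to the array of
// per-element results from the step above.
const flatten = xs => xs.reduce((cum, next) => [...cum, ...next], []);

console.log(flatten([
  [[1, 2, 3], [1, 3, 2]],
  [[2, 1, 3], [2, 3, 1]],
  [[3, 1, 2], [3, 2, 1]],
]));
// → [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
```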
The whole thing could be expressed as a tree of recursive calls:
[1, 2, 3]
  - [2, 3] ->
    - [3] -> [1, 2, 3]
    - [2] -> [1, 3, 2]
  - [1, 3] ->
    - [3] -> [2, 1, 3]
    - [1] -> [2, 3, 1]
  - [1, 2] ->
    - [2] -> [3, 1, 2]
    - [1] -> [3, 2, 1]

I've delimited it a bit in order to add some console logs to debug it
This can help, of course. However, keep in mind that simple recursive definitions often produce complex execution traces.
That is in fact one of the reasons recursion can be so useful: some algorithms that have complicated iterative forms admit a simple recursive description. So your goal in understanding a recursive algorithm should be to figure out the inductive (not iterative) reasoning in its definition.
Let's forget about JavaScript and focus on the algorithm. Let's see how we can obtain the permutations of the elements of a set A, which we will denote P(A).
Note: it's of no relevance that in the original algorithm the input is a list, since the original order does not matter at all. Likewise, it's of no relevance that we will return a set of lists rather than a list of lists, since we don't care about the order in which solutions are calculated.
Base Case:
The simplest case is the empty set ∅. There is exactly one solution for the permutations of 0 elements, and that solution is the empty sequence []. So,
P(∅) = {[]}
Recursive Case:
In order to use recursion, you want to describe how to obtain P(A) from P(A') for some A' smaller than A in size.
Note: if you do that, you're finished. Operationally, the program will work via successive calls to P with smaller and smaller arguments until it reaches the base case, and then it will come back, building longer results from shorter ones.
So here is one way to write down a particular permutation of a set A with n+1 elements. You need to successively pick one element of A for each position:

 _    _   ...   _
n+1   n         1

So you pick an x ∈ A for the first position:

 x    _   ...   _
      n         1
And then you need to choose a permutation in P(A\{x}).
This tells you one way to build all permutations of size n+1. Consider all possible choices of x in A (to use as the first element), and for each choice put x in front of each solution of P(A\{x}). Finally, take the union of all solutions you found for each choice of x.
Let's use the dot operator to represent putting x in front of a sequence s, and the diamond operator to represent putting x in front of every s ∈ S. That is,
x⋅s = [x, s1, s2, ..., sn]
x⟡S = {x⋅s : s ∈ S}
Then for a non-empty A
P(A) = ⋃ {x⟡P(A\{x}) : x ∈ A}
This expression, together with the base case, gives you all the permutations of the elements of a set A.
The javascript code
To understand how the code you've shown implements this algorithm, you need to consider the following:
That code handles two base cases, 0 or 1 elements, by writing xs.length < 2. (Note that you cannot simply change that 2 into a 1: the test lives inside the map callback, which never runs for an empty array, so permutations([]) would return [] rather than [[]] and every result would collapse to [].)
The mapping corresponds to our operation x⟡S = {x⋅s : s ∈ S}
The without corresponds to P(A\{x})
The flatten corresponds to the ⋃ which joins all solutions.
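As a sketch, the same algorithm can also be transcribed to mirror the math directly, with P(∅) = {[]} as the explicit base case (this version uses flatMap in place of the flatten/map pair; it is my own variant, not the code from the question):

```javascript
// Sketch: the set-theoretic definition transcribed into JavaScript.
const without = (xs, x) => xs.filter(y => y !== x);

const permutations = xs =>
  xs.length === 0
    ? [[]]                                 // base case: P(∅) = {[]}
    : xs.flatMap(x =>                      // union over all choices of x
        permutations(without(xs, x))       // P(A \ {x})
          .map(perm => [x, ...perm]));     // put x in front of each one

console.log(permutations([1, 2, 3]));
// → [[1,2,3],[1,3,2],[2,1,3],[2,3,1],[3,1,2],[3,2,1]]
```

Because the base case here sits outside the map, this version also answers permutations([]) correctly with [[]].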

Related

Array sort and value change at the same time

I have an array below; the first number in each sub-array denotes its order.
What I want to do is: whenever I change an order value, re-sort the array and re-index the orders to 2, 3, 4, 5.
const payments = [
  [2, 'paymentName1', '5%'],
  [3, 'paymentName2', '5%'],
  [4, 'paymentName3', '5%'],
  [5, 'paymentName4', '5%']
];
For example, if I change the first sub-array's order from 2 to 6, the array becomes the one below.
const payments = [
  [2, 'paymentName2', '5%'],
  [3, 'paymentName3', '5%'],
  [4, 'paymentName4', '5%'],
  [5, 'paymentName1', '5%'],
];
What I currently did was to sort it and then use a for loop to re-order it, and I want to do it in one loop if possible. Please help me with writing this algorithm.
Thanks in advance!
Edit:
payments.sort((a, b) => a[0] - b[0]);
for (const index in payments) {
  payments[index][0] = parseInt(index) + 2;
}
This is my current function. Would there be a better way to do?
thanks!
After you sort, just loop over the array and assign the new order values incrementally. There is no "better" here.
const payments = [
  [2, "paymentName1", '5%'],
  [3, "paymentName2", '5%'],
  [4, "paymentName3", '5%'],
  [5, "paymentName4", '5%']
];

function setOrder(index, newOrder) {
  payments[index][0] = newOrder;
  payments.sort(([a], [b]) => a - b);
  for (let i = 0; i < payments.length; i++) payments[i][0] = i + 2;
}

setOrder(0, 6);
console.log(payments);
The time complexity is determined by the call to sort: O(n log n).
Alternatively, you could use binary search to find the target index where the mutated element should go, and then rotate the array elements accordingly; the time complexity is then O(n). Although this has a better time complexity, the overhead of the extra JavaScript code means that for arrays of moderate size you'll get faster results with sort.
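A rough sketch of that alternative (the name setOrderLinear is mine, and I use a linear findIndex scan where a true binary search would go; splice does the rotation):

```javascript
// Hypothetical sketch of the O(n) alternative: move only the changed row
// into place instead of re-sorting the whole array.
const payments = [
  [2, 'paymentName1', '5%'],
  [3, 'paymentName2', '5%'],
  [4, 'paymentName3', '5%'],
  [5, 'paymentName4', '5%']
];

function setOrderLinear(index, newOrder) {
  const [row] = payments.splice(index, 1);  // take the changed row out
  row[0] = newOrder;
  // find where the row now belongs; the rest is still sorted
  // (a binary search would find this index in O(log n) comparisons)
  let target = payments.findIndex(([order]) => order > newOrder);
  if (target === -1) target = payments.length;
  payments.splice(target, 0, row);          // rotate it into place
  // re-index incrementally, starting at 2 as in the question
  payments.forEach((p, i) => { p[0] = i + 2; });
}

setOrderLinear(0, 6);
console.log(payments);
// → [[2,'paymentName2','5%'], [3,'paymentName3','5%'],
//    [4,'paymentName4','5%'], [5,'paymentName1','5%']]
```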

Generating an array of arrays that when added equals a given number

I'm working on a bigger problem but am a little stuck on a certain issue. Hopefully I can explain it clearly! I am looking to generate an array of arrays where each individual array holds elements that, when added together, equal a certain number. An example would be:
target = 4
solution : [[1,1,1,1], [1,1,2], [2,2], [1,3], [4]]
edit: to make the question more clear, the solution should contain every possible combination of positive integers that will equal the target
You could take a recursive approach: loop from the last found item (or from one) and call the function until no more value is left to disperse.
function x(count) {
  function iter(left, right) {
    // nothing left to distribute: `right` is a complete combination
    if (!left) return result.push(right);
    // start from the last value chosen, so combinations stay non-decreasing;
    // this is what prevents duplicates such as [1, 3] and [3, 1]
    for (var i = right[right.length - 1] || 1; i <= left; i++)
      iter(left - i, [...right, i]);
  }
  var result = [];
  iter(count, []);
  return result;
}

x(4).map(a => console.log(...a));
I'm not sure what language you are working in. Also, it's general Stack Overflow etiquette to show what you have already tried and what exact step you got stuck on. That said, here is some Python code that does what you want.
This problem is easy to solve as a recursive function. If you have some number n, the first number in a list of sums of n could be any number between 1 and n. Call that number i. Once it's picked, the rest of the list should sum to n - i. So just make a recursive function that adds up the results for all i's that are less than n and all the results for each of the solutions to n-i.
def get_sum_components(n):
    # ignore negatives
    if n <= 0:
        raise ValueError("n must be a positive int")
    # The only way to sum to 1 is [1]. This is the base case
    if n == 1:
        return [[1]]
    results = []
    # imagine that the first number in the list of sum components was i
    for i in range(1, n):
        remainder = n - i
        # get_sum_components(remainder) will return a list of solutions for n-i
        for result in get_sum_components(remainder):
            # don't forget to add i back to the beginning so they sum to n
            result = [i] + result
            results.append(result)
    # lastly, just the number itself is always an answer
    results.append([n])
    return results

print(get_sum_components(4))
# gives [[1, 1, 1, 1], [1, 1, 2], [1, 2, 1], [1, 3], [2, 1, 1], [2, 2], [3, 1], [4]]
If you don't care about order, this will create some duplicates (like [1, 3], [3, 1]). It should be easy to filter those out, though. (Just sort all the lists and then use a dict/set to remove duplicates)

I need some help to understand this combination of methods (reduce, concat and map)

I am studying ES6 and I am struggling with this line of code. I know I am missing something here and I just can't understand what is really happening.
The code:
const powerset = arr =>
  arr.reduce((a, v) =>
    a.concat(a.map(r =>
      [v].concat(r))), [[]]);

console.log(powerset([1, 2, 3]));
The output:
[[], [1], [2], [2, 1], [3], [3, 1], [3, 2], [3, 2, 1]]
What do I see here?
the first concat will concatenate inside the 'primary array' the return of each map, and this one will concatenate the value of r with the value of v, and I believe that r is equal to a.
Based on that what I understand is that (I know I am wrong, but I don't know why) it should work like this:
In the first 'level' a is an empty array and v is equal to 1, so the first value should be [1] and not [], since r is being concatenate with v; in the second 'level' a is equal to 1 and v to 2, what would return [2, 1] and in the third 'level' the return would be [3, 2, 1] since v is equal to 3 and a to [2, 1].
As I said before, I know I am wrong but I just can't see what I am losing here. I made my research, as well as a lot of experiments, but I didn't get it.
How is this code really working?
Let's first fix that formatting a bit:
arr.reduce(
(a, v) => a.concat(a.map(r => [v].concat(r))),
[[]]
)
So, reduce takes [[]] as the starting value, and the callback returns this list concatenated with something else. So far so good, makes sense that the return value is [[], ...] then, it's the starting value with additional values appended.
With three values being passed into powerset, there will be three iterations of this reduce process.
Now, what is being concatenated to that list each turn?
a.map(r => [v].concat(r))
a is that list that it starts with and that will be returned, v is the current value from arr, the list that was passed into powerset to begin with. r is each value currently in a.
So, on the first iteration, a is [[]], so r will be [] once, and v is 1:
[[]].map(_ => [1].concat([]))
→ [[]].map(_ => [1]) // [1].concat([]) is [1]
→ [[1]]
So this first map operation returns [[1]]:
(a, _) => a.concat([[1]])
→ (_, _) => [[], [1]]
So, you're indeed seeing the beginning of the output here. On the next iteration, a is [[], [1]] and v is 2.
a.map(r => [v].concat(r))
→ [[], [1]].map(r => [2].concat(r)) // two mappings here:
    [] → [2].concat([]) // [2]
    [1] → [2].concat([1]) // [2, 1]
→ [[2], [2, 1]]
So:
(a, _) => a.concat([[2], [2, 1]])
→ (_, _) => [[], [1], [2], [2, 1]]
And you can figure out the third iteration yourself.
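For completeness, running the whole expression confirms the trace; the third iteration maps [3] over the four subsets accumulated so far and appends the results:

```javascript
const powerset = arr =>
  arr.reduce((a, v) => a.concat(a.map(r => [v].concat(r))), [[]]);

// third iteration: a = [[], [1], [2], [2, 1]], v = 3
//   a.map(...) → [[3], [3, 1], [3, 2], [3, 2, 1]]
//   a.concat(...) → the final result
console.log(powerset([1, 2, 3]));
// → [[], [1], [2], [2, 1], [3], [3, 1], [3, 2], [3, 2, 1]]
```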

Can you give me an example of how to use Ramda lift?

I am reading the Ramda documentation:
const madd3 = R.lift((a, b, c) => a + b + c);
madd3([1,2,3], [1,2,3], [1]); //=> [3, 4, 5, 4, 5, 6, 5, 6, 7]
It looks like a really useful function, but I can't see what would be a use case for it.
Thanks
This function can only accept numbers:
const add3 = (a, b, c) => a + b + c;
add3(1, 2, 3); //=> 6
However what if these numbers were each contained in a functor? (i.e. a thing that contains a value; an array in the example below)
add3([1], [2], [3]); //=> "123"
That's obviously not what we want.
You can "lift" the function so that it can "extract" the value of each parameter/functor:
const add3Lifted = lift(add3);
add3Lifted([1], [2], [3]); //=> [6]
Arrays can obviously hold more than one value, and combined with a lifted function that knows how to extract the values of each functor, you can now do this:
add3Lifted([1, 10], [2, 20], [3, 30]);
//=> [6, 33, 24, 51, 15, 42, 33, 60]
Which is basically what you'd have got if you had done this:
[
add3(1, 2, 3), // 6
add3(1, 2, 30), // 33
add3(1, 20, 3), // 24
add3(1, 20, 30), // 51
add3(10, 2, 3), // 15
add3(10, 2, 30), // 42
add3(10, 20, 3), // 33
add3(10, 20, 30) // 60
]
Note that each array doesn't have to be of the same length:
add3Lifted([1, 10], [2], [3]);
//=> [6, 15]
So to answer your question: if you intend to run a function with different sets of values, lifting that function may be a useful thing to consider:
const results = [add3(1, 2, 3), add3(10, 2, 3)];
is the same as:
const results = add3Lifted([1, 10], [2], [3]);
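If it helps demystify things, here is a minimal hand-rolled sketch of what a lifted three-argument function does when the functor is an array (lift3 is my own name for illustration; R.lift also handles other Apply types):

```javascript
// Assumption: this hand-rolled lift3 only covers the Array case.
// It takes every combination of one value from each array, in order.
const lift3 = f => (as, bs, cs) =>
  as.flatMap(a => bs.flatMap(b => cs.map(c => f(a, b, c))));

const add3 = (a, b, c) => a + b + c;
const add3Lifted = lift3(add3);

console.log(add3Lifted([1, 10], [2], [3]));
// → [6, 15]
console.log(add3Lifted([1, 2, 3], [1, 2, 3], [1]));
// → [3, 4, 5, 4, 5, 6, 5, 6, 7]  (the example from the question)
```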
Functional programming is a long and mathematical topic, in particular the part dealing with monads and category theory in general. But it is worth taking a look at it; here is a funny introduction with pictures.
In short, lift is a function that takes an n-argument function and produces a function that takes n wrapped values and produces another, resulting wrapped value. A lift that takes a one-argument function has the following type signature:

-- f is a wrapped value: function -> wrapped value -> wrapped value
liftA :: Applicative f => (a -> b) -> f a -> f b
Wait... Wrapped-value?
I will introduce Haskell briefly, only to explain this. In Haskell, an easy example of a wrapped value is Maybe: it can be a wrapped value, or Nothing, which is also a wrapped value. The following example applies a function to a Maybe containing a value, and to an empty Maybe.
> liftA (+ 8) (Just 8)
Just 16
> liftA (+ 8) Nothing
Nothing
The list is also a wrapped value, and we can apply functions to it. In the second case, liftA2 applies a two-argument function to two lists.
> liftA (+ 8) [1,2,3]
[9,10,11]
> liftA2 (*) [1..3] [1..3]
[1,2,3,2,4,6,3,6,9]
This wrapped value is an Applicative Functor, so from now on I will call it an Applicative.
Maybe Maybe you are starting to lose interest from this point...
But someone before us got lost on this topic; he finally survived and published it as an answer to this question.
Let's look at what he saw...
...
He saw Fantasy Land
In fantasy-land, an object implements Apply spec when it has
an ap method defined (that object also has to implement
Functor spec by defining a map method).
Fantasy-land is a fancy name to a functional programming spec in
javascript. Ramda follows it.
Apply is our Applicative, a
Functor that implements also an ap method.
A Functor, is something that has the map method.
So, wait... the Array in javascript has a map...
[1,2,3].map((a) => a + 1) //=> [ 2, 3, 4 ]
Then the Array is a Functor, and map applies a function to all values of it, returning another Functor with the same number of values.
But what does the ap do?
ap applies a list of functions to a list of values.
Dispatches to the ap method of the second argument, if present. Also
treats curried functions as applicatives.
Let's try to do something with it.
const res = R.ap(
  [
    (a) => (-1 * a),
    (a) => (a > 1 ? 'greater than 1' : 'a low value')
  ],
  [1, 2, 3]);
//=> [ -1, -2, -3, "a low value", "greater than 1", "greater than 1" ]
console.log(res);
<script src="https://cdn.jsdelivr.net/npm/ramda@0.26.1/dist/ramda.min.js"></script>
The ap method takes an Array (or some other Applicative) of functions and applies it to an Applicative of values to produce another, flattened Applicative.
The signature of the method explains this
[a → b] → [a] → [b]
Apply f => f (a → b) → f a → f b
Finally, what does lift do?
lift takes a function with n arguments, and produces another function that takes n Applicatives and produces a flattened Applicative of the results.
In this case our Applicative is the Array.
const add2 = (a, b) => a + b;
const madd2 = R.lift(add2);

const res = madd2([1, 2, 3], [2, 3, 4]);
//=> [3, 4, 5, 4, 5, 6, 5, 6, 7]
console.log(res);

// Equivalent to lift using ap
const result2 = R.ap(R.ap(
  [R.curry(add2)], [1, 2, 3]),
  [2, 3, 4]
);
//=> [3, 4, 5, 4, 5, 6, 5, 6, 7]
console.log(result2);
<script src="https://cdn.jsdelivr.net/npm/ramda@0.26.1/dist/ramda.min.js"></script>
These wrappers (Applicatives, Functors, Monads) are interesting because they can be anything that implements these methods. In Haskell, this is used to wrap unsafe operations, such as input/output. It can also be an error wrapper or a tree, or indeed any data structure.
What hasn't been mentioned in the current answers is that functions like R.lift will work not only with arrays but with any well-behaved Apply1 data type.
For example, we can reuse the same function produced by R.lift:
const lifted = lift((a, b, c) => a + b - c)
With functions as the Apply type:
lifted(a => a * a,
       b => b + 5,
       c => c * 3)(4) //=> 13
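The function case can be puzzling at first. For functions, lifting effectively applies each argument function to the same input and combines the results with the lifted function; a hand-rolled sketch of just this case (liftedByHand is my own illustrative name, not part of Ramda):

```javascript
// Assumption: a specialized sketch of the function-as-Apply case only.
// Each of the three unary functions receives the same input x, and the
// lifted (a, b, c) => a + b - c combines their results.
const liftedByHand = (f, g, h) => x => f(x) + g(x) - h(x);

console.log(liftedByHand(a => a * a, b => b + 5, c => c * 3)(4));
// → 13, i.e. (4 * 4) + (4 + 5) - (4 * 3) = 16 + 9 - 12
```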
Optional types (dispatching to .ap):
const Just = val => ({
  map: f => Just(f(val)),
  ap: other => other.map(otherVal => val(otherVal)),
  getOr: _ => val
})

const Nothing = {
  map: f => Nothing,
  ap: other => Nothing,
  getOr: x => x
}

lifted(Just(4), Just(6), Just(8)).getOr(NaN) //=> 2
lifted(Just(4), Nothing, Just(8)).getOr(NaN) //=> NaN
Asynchronous types (dispatching to .ap):
const Asynchronous = fn => ({
  run: fn,
  map: f => Asynchronous(g => fn(a => g(f(a)))),
  ap: other => Asynchronous(fb => fn(f => other.run(a => fb(f(a)))))
})

const delay = (n, x) => Asynchronous(then => void(setTimeout(then, n, x)))

lifted(delay(2000, 4), delay(1000, 6), delay(500, 8)).run(console.log)
... and many more. The point here is that anything that can uphold the interface and laws expected of any Apply type can make use of generic functions such as R.lift.
1. The argument order of ap as listed in the fantasy-land spec is reversed from the order supported by name dispatching in Ramda, though is still supported when using the fantasy-land/ap namespaced method.
Basically it takes a cartesian product and applies a function to each resulting array.
const
  cartesian = (a, b) => a.reduce((r, v) => r.concat(b.map(w => [].concat(v, w))), []),
  fn = ([a, b, c]) => a + b + c,
  result = [[1, 2, 3], [1, 2, 3], [1]]
    .reduce(cartesian)
    .map(fn);

console.log(result); // [3, 4, 5, 4, 5, 6, 5, 6, 7]

Transducer flatten and uniq

I'm wondering if there is a way by using a transducer for flattening a list and filter on unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';

const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
But here we loop twice over the list (+ n by depth layer). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';

const arr = [1, 2, [2, 3], [1, [4, 5]]];

const flattenDeepUniq = (p, c) => {
  if (isArray(c)) {
    c.forEach(o => p = flattenDeepUniq(p, o));
  } else {
    p = !p.includes(c) ? [...p, c] : p;
  }
  return p;
};

arr.reduce(flattenDeepUniq, []); // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
We have one loop over the elements (+ n loop with deep depth layers) which seems better and more optimized.
Is this even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
//      ^              ^            ^            ^
//      `--------------+------------+------------+-- four separate iterations

into something like this:

xs.reduce(
  (r, x) => r.length >= 5 ? r : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3) : r,
  []
)
// ^-- just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)

foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)

into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one with quick inserts and quick lookups, most likely a Set or an Object. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
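Putting the pieces together as a self-contained sketch (substituting the native Array.isArray for lodash's isArray) and running it on the question's input:

```javascript
// Self-contained version of the flattenUniq sketch above.
const isArray = Array.isArray;
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set);

const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
));

console.log(flattenUniq([1, 2, [2, 3], [1, [4, 5]]])); // → [1, 2, 3, 4, 5]
```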
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
uniq is now a transducer in Ramda, so you can use it directly. As for flatten, you can traverse the tree up front to produce a stream of flat values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];

const deepIterate = function* (list) {
  for (const it of list) {
    yield* Array.isArray(it) ? deepIterate(it) : [it];
  }
}

R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))
