Why is the array not iterable in reduce? - JavaScript

I want to split an array into even and odd elements. This is my code:
A.reduce((a,v,i)=> v % 2 == 0 ? [...a[0],v] : [...a[1],v],[[],[]])
A is an array of numbers. I don't understand why I get the error
a[1] is not iterable
considering that this code works fine:
let arr = [[],[]];
console.log([...arr[1], 4]);

You are only returning a single array from the reduce() callback; you also need to return the second one.
On the first iteration a is [[],[]], but after that it becomes just a single flat array.
let A = [1,2,3,4]
const res= A.reduce((a,v,i)=> v % 2 == 0 ? [a[0],[...a[1],v]] : [[...a[0],v],a[1]],[[],[]])
console.log(res)
You could use a trick here. Since v % 2 returns 1 or 0, you can push() into that bucket and use the comma operator to return the original a without the spread operator.
let A = [1,2,3,4]
const res= A.reduce((a,v,i)=> (a[v % 2].push(v),a),[[],[]])
console.log(res)

You could also just filter twice:
const res = [A.filter(it => it % 2), A.filter(it => !(it % 2))];
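For example, with the same sample input as above, this produces the odd values first (a quick sketch):
const A = [1, 2, 3, 4];
const res = [A.filter(it => it % 2), A.filter(it => !(it % 2))];
console.log(res); // [ [ 1, 3 ], [ 2, 4 ] ]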

You can use destructuring assignment to make this a little easier -
const data =
  [ 1, 2, 3, 4 ]

const result =
  data.reduce
    ( ([ odd, even ], v) =>
        Boolean (v & 1)
          ? [ [...odd, v], even ]
          : [ odd, [...even, v] ]
    , [ [], [] ]
    )

console.log(result)
// [ [ 1, 3 ], [ 2, 4 ] ]
You can make a generic function, partition -
const partition = (p, a = []) =>
  a.reduce
    ( ([ t, f ], v) =>
        p (v)
          ? [ [...t, v], f ]
          : [ t, [...f, v] ]
    , [ [], [] ]
    )

const evenOdds =
  partition (v => Boolean (v & 1), [ 1, 2, 3, 4 ])

const lessThan2 =
  partition (v => v < 2, [ 1, 2, 3, 4 ])

console.log(evenOdds)
// [ [ 1, 3 ], [ 2, 4 ] ]

console.log(lessThan2)
// [ [ 1 ], [ 2, 3, 4 ] ]

The problem with your solution is that in the reduce callback you return one flat array of many elements (not one array containing two arrays). Try this instead (where B = [[],[]]; time complexity is O(n)):
A.forEach(x=> B[x%2].push(x) )
let A=[1,2,3,4,5,6,7], B=[ [],[] ];
A.forEach(x=> B[x%2].push(x) );
console.log(B);
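For the sample input above this logs [ [ 2, 4, 6 ], [ 1, 3, 5, 7 ] ] - evens at index 0, odds at index 1.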

Related

JavaScript how to find unique values in an array based on values in a nested array

I have a nested/multi-dimensional array like so:
[ [ 1, 1, a ], [ 1, 1 , b ], [ 2, 2, c ], [ 1 ,1, d ] ]
And I want to filter it so that it returns only unique values of the outer array based on the 1st value of each nested array.
So from the above array, it would return:
[ [1,1,a], [2,2,c] ]
I'm trying to do this in vanilla JavaScript if possible. Thanks for any input! =)
Here is my solution.
const dedup = arr.filter((item, idx) => arr.findIndex(x => x[0] == item[0]) == idx)
It looks simple, if also a little tricky.
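A quick usage sketch with the sample data from the question (assuming the input is named arr, as in the snippet):
const arr = [ [1, 1, 'a'], [1, 1, 'b'], [2, 2, 'c'], [1, 1, 'd'] ];
const dedup = arr.filter((item, idx) => arr.findIndex(x => x[0] == item[0]) == idx);
console.log(dedup); // [ [ 1, 1, 'a' ], [ 2, 2, 'c' ] ]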
I realize there are already three solutions, but I don't like them. My solution is:
- Generic: you can use unique with any selector function
- O(n): it uses a Set, so it doesn't run in O(n^2) time
So here it is:
/**
 * @param arr - The array to get the unique values of
 * @param uniqueBy - Takes the value and selects a criterion by which unique values should be taken
 *
 * @returns A new array containing the original values
 *
 * @example unique(["hello", "hElLo", "friend"], s => s.toLowerCase()) // ["hello", "friend"]
 */
function unique(arr, uniqueBy) {
  const temp = new Set()
  return arr.filter(v => {
    const computed = uniqueBy(v)
    const isContained = temp.has(computed)
    temp.add(computed)
    return !isContained
  })
}
const arr = [ [ 1, 1, 'a' ], [ 1, 1, 'b' ], [ 2, 2, 'c' ], [ 1, 1, 'd' ] ]
console.log(unique(arr, v => v[0]))
You could filter with a set and given index.
const
uniqueByIndex = (i, s = new Set) => array => !s.has(array[i]) && s.add(array[i]),
data = [[1, 1, 'a'], [1, 1, 'b'], [2, 2, 'c'], [1, 1, 'd']],
result = data.filter(uniqueByIndex(0));
console.log(result);
const input = [[1,1,'a'], [1,1,'b'], [2,2,'c'], [1,1,'d']]
const res = input.reduce((acc, e) => acc.find(x => x[0] === e[0])
? acc
: [...acc, e], [])
console.log(res)
Create an object keyed by the first element of each inner array. Iterate over the array and, if the first element of the inner array doesn't exist in the object yet, add that inner array under its key.
const nestedArr = [ [1,1,"a"], [1,1,"b"], [2,2,"c"], [1,1,"d"] ];
const output = {};
for(let arr of nestedArr) {
if(!output[arr[0]]) {
output[arr[0]] = arr;
}
}
console.log(Object.values(output));
Another solution would be to maintain a count of each first element and push into the final array only when the count equals 1.
const input = [ [1,1,"a"], [1,1,"b"], [2,2,"c"], [1,1,"d"] ],
count = {},
output = [];
input.forEach(arr => {
count[arr[0]] = (count[arr[0]] || 0) + 1;
if(count[arr[0]] === 1) {
output.push(arr);
}
})
console.log(output);

Take top X total items in a round robin from multiple arrays with Ramda

I have an array of arrays and want to write a function that returns the top x number of items, by taking items from each array in order.
Here is an example of what I'm after:
const input = [
["1a", "2a", "3a", "4a", "5a"],
["1b", "2b", "3b", "4b", "5b"],
["1c", "2c", "3c", "4c", "5c"],
["1d", "2d", "3d", "4d", "5d"]
];
const takeRoundRobin = count => arr => {
// implementation here
};
const actual = takeRoundRobin(5)(input);
const expected = [
"1a", "1b", "1c", "1d", "2a"
];
I saw a suggestion to a Scala question that solved this using zip but in Ramda you can only pass 2 lists to zip.
Here, Ramda's transpose can be your base. Add a dollop of unnest, a dash of take, and you get this:
const {take, unnest, transpose} = R
const takeRoundRobin = (n) => (vals) => take(n, unnest(transpose(vals)))
const input = [
['1a', '2a', '3a', '4a', '5a'],
['1b', '2b', '3b', '4b', '5b'],
['1c', '2c', '3c', '4c', '5c'],
['1d', '2d', '3d', '4d', '5d']
]
console.log(takeRoundRobin(5)(input))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
Note also that this can handle arrays of varying lengths:
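For instance (a sketch with an illustrative ragged input, reusing the same take/unnest/transpose destructure as above - R.transpose simply skips the missing slots):
const ragged = [
  ['1a', '2a', '3a'],
  ['1b'],
  ['1c', '2c']
]
console.log(takeRoundRobin(4)(ragged))
// [ '1a', '1b', '1c', '2a' ]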
If you want to be able to wrap around to the beginning and continue taking values, you could replace take with a recursiveTake like this:
const { take, unnest, transpose, concat } = R

// recursive take
const recursiveTake = (n) => (vals) => {
  const recur = (n, vals, result) =>
    (n <= 0)
      ? result
      : recur(n - vals.length, vals, result.concat(take(n, vals)))
  return recur(n, vals, []);
};

const takeRoundRobin = (n) => (vals) =>
  recursiveTake(n)(unnest(transpose(vals)));
const input = [
['1a', '2a', '3a', '4a'],
['1b'],
['1c', '2c', '3c', '4c', '5c'],
['1d', '2d']
]
console.log(takeRoundRobin(14)(input))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.js"></script>
Another version of that function, without the explicit recursion, would look like this (it additionally needs Ramda's times and always):
const { take, unnest, transpose, times, always } = R

const takeCyclic = (n) => (vals) => take(
  n,
  unnest(times(always(vals), Math.ceil(n / (vals.length || 1))))
)
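A usage sketch (takeRoundRobinCyclic is just an illustrative name; it wires takeCyclic up the same way as the recursive version):
const takeRoundRobinCyclic = (n) => (vals) =>
  takeCyclic(n)(unnest(transpose(vals)))
console.log(takeRoundRobinCyclic(14)(input))
// same 14 items as the recursive version above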
Here's one way you can do it using recursion –
const None =
  Symbol ()

const roundRobin = ([ a = None, ...rest ]) =>
  // base: no `a`
  a === None
    ? []
  // inductive: some `a`
  : isEmpty (a)
    ? roundRobin (rest)
  // inductive: some non-empty `a`
  : [ head (a), ...roundRobin ([ ...rest, tail (a) ]) ]
It works in a variety of cases –
const data =
[ [ 1 , 4 , 7 , 9 ]
, [ 2 , 5 ]
, [ 3 , 6 , 8 , 10 , 11 , 12 ]
]
console.log (roundRobin (data))
// => [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 ]
console.log (roundRobin ([ [ 1 , 2 , 3 ] ]))
// => [ 1 , 2 , 3 ]
console.log (roundRobin ([]))
// => []
Free variables are defined using prefix notation, which is more familiar in functional style –
const isEmpty = xs =>
xs.length === 0
const head = xs =>
xs [0]
const tail = xs =>
xs .slice (1)
Verify it works in your browser below –
const None =
Symbol ()
const roundRobin = ([ a = None, ...rest ]) =>
a === None
? []
: isEmpty (a)
? roundRobin (rest)
: [ head (a), ...roundRobin ([ ...rest, tail (a) ]) ]
const isEmpty = xs =>
xs.length === 0
const head = xs =>
xs [0]
const tail = xs =>
xs .slice (1)
const data =
[ [ 1 , 4 , 7 , 9 ]
, [ 2 , 5 ]
, [ 3 , 6 , 8 , 10 , 11 , 12 ]
]
console.log (roundRobin (data))
// => [ 1 , 2 , 3 , 4 , 5 , 6 , 7 , 8 , 9 , 10 , 11 , 12 ]
console.log (roundRobin ([ [ 1 , 2 , 3 ] ]))
// => [ 1 , 2 , 3 ]
console.log (roundRobin ([]))
// => []
Here's another way using a secondary parameter with default assignment –
const roundRobin = ([ a = None, ...rest ], acc = []) =>
  // no `a`
  a === None
    ? acc
  // some `a`
  : isEmpty (a)
    ? roundRobin (rest, acc)
  // some non-empty `a`
  : roundRobin
      ( append (rest, tail (a))
      , append (acc, head (a))
      )

const append = (xs, x) =>
  xs .concat ([ x ])
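It produces the same results as the first version; for example, with the data and helpers defined above:
console.log (roundRobin (data))
// => [ 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ]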
To demonstrate what you may have seen as the implementation in other languages, the applicative instance for a ZipList can be used to transpose the array. A ZipList applies the functions it contains pair-wise to the corresponding ZipList of values, unlike the standard permutative version of ap for lists.
const ZipList = xs => ({
  getZipList: xs,
  map: f => ZipList(R.map(f, xs)),
  ap: other => ZipList(R.zipWith(R.applyTo, other.getZipList, xs))
})

ZipList.of = x => ZipList(new Proxy([], {
  get: (target, prop) =>
    prop == 'length' ? Infinity : /\d+/.test(prop) ? x : target[prop]
}))
This has an interesting requirement which is somewhat clunky to represent in JS, where the of function to produce a "pure" value needs to produce a ZipList containing a repeating list of the "pure" value, implemented here using a Proxy instance of an array.
The transpose can then be formed via:
xs => R.unnest(R.traverse(ZipList.of, ZipList, xs).getZipList)
After all of this, we have just reinvented R.transpose as per the answer from @scott-sauyet.
It is nevertheless an interesting implementation to be aware of.
(full example below)
const ZipList = xs => ({
getZipList: xs,
map: f => ZipList(R.map(f, xs)),
ap: other => ZipList(R.zipWith(R.applyTo, other.getZipList, xs))
})
ZipList.of = x => ZipList(new Proxy([], {
get: (target, prop) =>
prop == 'length' ? Infinity : /\d+/.test(prop) ? x : target[prop]
}))
const fn = xs => R.unnest(R.traverse(ZipList.of, ZipList, xs).getZipList)
const input = [
["1a", "2a", "3a", "4a", "5a"],
["1b", "2b", "3b", "4b", "5b"],
["1c", "2c", "3c", "4c", "5c"],
["1d", "2d", "3d", "4d", "5d"]
];
const expected = [
"1a", "1b", "1c", "1d", "2a"
];
const actual = R.take(5, fn(input))
console.log(R.equals(expected, actual))
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
I'm not sure which Ramda functions to use for this particular problem, but here is an answer without Ramda that only works if all arrays are the same length:
const input = [
['1a', '2a', '3a', '4a', '5a'],
['1b', '2b', '3b', '4b', '5b'],
['1c', '2c', '3c', '4c', '5c'],
['1d', '2d', '3d', '4d', '5d'],
];
const takeRoundRobin = (count) => (arr) => {
  const recur = (arr, current, count, result) =>
    (current === count)
      ? result
      : recur(
          arr,
          current + 1,
          count,
          result.concat(
            arr[current % arr.length]                // x value
               [Math.floor(current / arr.length) %   // y value
                (arr.length + 1)]
          ),
        );
  return recur(arr, 0, count, []);
};
console.log(takeRoundRobin(22)(input));

Convert multiple recursive calls into tail-recursion

Just wondering if a function like this can be done tail-recursively. I find it quite difficult because it calls itself twice.
Here is my non-tail-recursive implementation in JavaScript. (Yes, I know most JavaScript engines don't support TCO, but this is just for theory.) The goal is to find all sublists of a certain length (size) of a given array (arr). Example: getSublistsWithFixedSize([1,2,3], 2) returns [[1,2], [1,3], [2,3]].
function getSublistsWithFixedSize(arr, size) {
  if (size === 0) {
    return [[]];
  }
  if (arr.length === 0) {
    return [];
  }
  let [head, ...tail] = arr;
  let sublists0 = getSublistsWithFixedSize(tail, size - 1);
  let sublists1 = getSublistsWithFixedSize(tail, size);
  let sublists2 = sublists0.map(x => {
    let y = x.slice();
    y.unshift(head);
    return y;
  });
  return sublists1.concat(sublists2);
}
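For example (the order of the sublists may differ from the listing in the question text, but the same sublists are produced):
console.log(getSublistsWithFixedSize([1, 2, 3], 2));
// [ [ 2, 3 ], [ 1, 3 ], [ 1, 2 ] ]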
One such way is to use continuation-passing style. In this technique, an additional parameter is added to your function to specify how to continue the computation.
Below we emphasize each tail call with /**/
function identity(x) {
  /**/ return x;
}

function getSublistsWithFixedSize(arr, size, cont = identity) {
  if (size === 0) {
    /**/ return cont([[]]);
  }
  if (arr.length === 0) {
    /**/ return cont([]);
  }
  let [head, ...tail] = arr;
  /**/ return getSublistsWithFixedSize(tail, size - 1, function (sublists0) {
    /**/ return getSublistsWithFixedSize(tail, size, function (sublists1) {
      let sublists2 = sublists0.map(x => {
        let y = x.slice();
        y.unshift(head);
        return y;
      });
      /**/ return cont(sublists1.concat(sublists2));
    });
  });
}
console.log(getSublistsWithFixedSize([1,2,3,4], 2))
// [ [ 3, 4 ], [ 2, 4 ], [ 2, 3 ], [ 1, 4 ], [ 1, 3 ], [ 1, 2 ] ]
You can think of the continuation almost like we invent our own return mechanism; only it's a function here, not a special syntax.
This is perhaps more apparent if we specify our own continuation at the call site
getSublistsWithFixedSize([1,2,3,4], 2, console.log)
// [ [ 3, 4 ], [ 2, 4 ], [ 2, 3 ], [ 1, 4 ], [ 1, 3 ], [ 1, 2 ] ]
Or even
getSublistsWithFixedSize([1,2,3,4], 2, sublists => sublists.length)
// 6
The pattern might be easier to see with a simpler function. Consider the famous fib
const fib = n =>
  n < 2
    ? n
    : fib (n - 1) + fib (n - 2)

console.log (fib (10))
// 55
Below we convert it to continuation-passing style
const identity = x =>
  x

const fib = (n, _return = identity) =>
  n < 2
    ? _return (n)
    : fib (n - 1, x =>
        fib (n - 2, y =>
          _return (x + y)))

fib (10, console.log)
// 55

console.log (fib (10))
// 55
I want to remark that the use of .slice and .unshift is unnecessary for this particular problem. I'll give you an opportunity to come up with some other solutions before sharing an alternative.
Edit
You did a good job rewriting your program, but as you identified, there are still areas in which it can be improved. One area where I think you're struggling the most is the use of array mutation operations like arr[0] = x, arr.push(x), arr.pop(), and arr.unshift(x). Of course you can use these operations to arrive at the intended result, but in a functional program we think about things in a different way. Instead of destroying an old value by overwriting it, we only read values and construct new ones.
We'll also avoid high-level operations like Array.fill or uniq (unsure which implementation you chose), as we can build the result naturally using recursion.
The inductive reasoning for your recursive function is perfect, so we don't need to adjust that:
- if the size is zero, return the empty result, [[]]
- if the input array is empty, return an empty set, []
- otherwise the size is at least one and we have at least one element x: get the sublists of one size smaller (r1), get the sublists of the same size (r2), and return the combined result of r1 and r2, prepending x to each result in r1
We can encode this in a straightforward way. Notice the similarity in structure compared to your original program.
const sublists = (size, [ x = None, ...rest ], _return = identity) =>
  size === 0
    ? _return ([[]])
  : x === None
    ? _return ([])
  : sublists            // get sublists of 1 size smaller, r1
      ( size - 1
      , rest
      , r1 =>
          sublists      // get sublists of same size, r2
            ( size
            , rest
            , r2 =>
                _return // return the combined result
                  ( concat
                      ( r1 .map (r => prepend (x, r)) // prepend x to each r1
                      , r2
                      )
                  )
            )
      )
We call it with a size and an input array
console.log (sublists (2, [1,2,3,4,5]))
// [ [ 1, 2 ]
// , [ 1, 3 ]
// , [ 1, 4 ]
// , [ 1, 5 ]
// , [ 2, 3 ]
// , [ 2, 4 ]
// , [ 2, 5 ]
// , [ 3, 4 ]
// , [ 3, 5 ]
// , [ 4, 5 ]
// ]
Lastly, we provide the dependencies identity, None, concat, and prepend. Below, concat is an example of providing a functional interface to an object's method; this is one of many techniques used to increase reuse of functions in your programs and improve readability at the same time.
const identity = x =>
  x

const None =
  {}

const concat = (xs, ys) =>
  xs .concat (ys)

const prepend = (value, arr) =>
  concat ([ value ], arr)
You can run the full program in your browser below
const identity = x =>
x
const None =
{}
const concat = (xs, ys) =>
xs .concat (ys)
const prepend = (value, arr) =>
concat ([ value ], arr)
const sublists = (size, [ x = None, ...rest ], _return = identity) =>
size === 0
? _return ([[]])
: x === None
? _return ([])
: sublists // get sublists of 1 size smaller, r1
( size - 1
, rest
, r1 =>
sublists // get sublists of same size, r2
( size
, rest
, r2 =>
_return // return the combined result
( concat
( r1 .map (r => prepend (x, r)) // prepend x to each r1
, r2
)
)
)
)
console.log (sublists (3, [1,2,3,4,5,6,7]))
// [ [ 1, 2, 3 ]
// , [ 1, 2, 4 ]
// , [ 1, 2, 5 ]
// , [ 1, 2, 6 ]
// , [ 1, 2, 7 ]
// , [ 1, 3, 4 ]
// , [ 1, 3, 5 ]
// , [ 1, 3, 6 ]
// , [ 1, 3, 7 ]
// , [ 1, 4, 5 ]
// , [ 1, 4, 6 ]
// , [ 1, 4, 7 ]
// , [ 1, 5, 6 ]
// , [ 1, 5, 7 ]
// , [ 1, 6, 7 ]
// , [ 2, 3, 4 ]
// , [ 2, 3, 5 ]
// , [ 2, 3, 6 ]
// , [ 2, 3, 7 ]
// , [ 2, 4, 5 ]
// , [ 2, 4, 6 ]
// , [ 2, 4, 7 ]
// , [ 2, 5, 6 ]
// , [ 2, 5, 7 ]
// , [ 2, 6, 7 ]
// , [ 3, 4, 5 ]
// , [ 3, 4, 6 ]
// , [ 3, 4, 7 ]
// , [ 3, 5, 6 ]
// , [ 3, 5, 7 ]
// , [ 3, 6, 7 ]
// , [ 4, 5, 6 ]
// , [ 4, 5, 7 ]
// , [ 4, 6, 7 ]
// , [ 5, 6, 7 ]
// ]
Here is my solution with the help of an accumulator. It's far from perfect but it works.
function getSublistsWithFixedSizeTailRecRun(arr, size) {
  let acc = new Array(size + 1).fill([]);
  acc[0] = [[]];
  return getSublistsWithFixedSizeTailRec(arr, acc);
}

function getSublistsWithFixedSizeTailRec(arr, acc) {
  if (arr.length === 0) {
    return acc[acc.length - 1];
  }
  let [head, ...tail] = arr;
  // add head to acc
  let accWithHead = acc.map(
    x => x.map(
      y => {
        let z = y.slice();
        z.push(head);
        return z;
      }
    )
  );
  accWithHead.pop();
  accWithHead.unshift([[]]);
  // zip accWithHead and acc
  acc = zipMerge(acc, accWithHead);
  return getSublistsWithFixedSizeTailRec(tail, acc);
}

function zipMerge(arr1, arr2) {
  let result = arr1.map(function(e, i) {
    return uniq(e.concat(arr2[i]));
  });
  return result;
}
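The zipMerge above relies on a uniq helper that isn't shown; a minimal sketch (deduplicating sublists by their JSON representation, which is enough for arrays of numbers) could be:
function uniq(arr) {
  const seen = new Set();
  return arr.filter(x => {
    const key = JSON.stringify(x); // use the serialized sublist as its identity
    if (seen.has(key)) return false;
    seen.add(key);
    return true;
  });
}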

How can I prevent a tail recursive function from reversing the order of a List?

I am experimenting with the functional List type and structural sharing. Since JavaScript doesn't have tail-recursion-modulo-cons optimization, we can't just write List combinators like this, because they are not stack safe:
const list =
  [1, [2, [3, [4, [5, []]]]]];

const take = n => ([head, tail]) =>
  n === 0 ? []
  : head === undefined ? []
  : [head, take(n - 1) (tail)];

console.log(
  take(3) (list) // [1, [2, [3, []]]]
);
Now I tried to implement take tail-recursively, so that I can either rely on TCO (still an unsettled Promise in ECMAScript) or use a trampoline (omitted in the example to keep things simple):
const list =
  [1, [2, [3, [4, [5, []]]]]];

const safeTake = n => list => {
  const aux = (n, acc, [head, tail]) =>
    n === 0 ? acc
    : head === undefined ? acc
    : aux(n - 1, [head, acc], tail);
  return aux(n, [], list);
};

console.log(
  safeTake(3) (list) // [3, [2, [1, []]]]
);
This works, but the newly created list is in reverse order. How can I solve this issue in a purely functional manner?
Laziness gives you tail recursion modulo cons for free. Hence, the obvious solution is to use thunks. However, we don't just want any kind of thunk. We want a thunk for an expression in weak head normal form. In JavaScript, we can implement this using lazy getters as follows:
const cons = (head, tail) => ({ head, tail });

const list = cons(1, cons(2, cons(3, cons(4, cons(5, null)))));

const take = n => n === 0 ? xs => null : xs => xs && {
  head: xs.head,
  get tail() {
    delete this.tail;
    return this.tail = take(n - 1)(xs.tail);
  }
};

console.log(take(3)(list));
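To inspect the fully forced result, a hypothetical toArray helper (not part of the answer above) can walk the lazy list:
const toArray = xs =>
  xs === null ? [] : [xs.head, ...toArray(xs.tail)];
console.log(toArray(take(3)(list))); // [ 1, 2, 3 ]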
There are lots of advantages to using lazy getters:
Normal properties and lazy properties are used in the same way.
You can use it to create infinite data structures.
You don't have to worry about blowing up the stack.
Hope that helps.
One way to prevent the list from reversing is to use continuation passing style. Now just put it on a trampoline of your choice...
const None =
  Symbol ()

const identity = x =>
  x

const safeTake = (n, [ head = None, tail ], cont = identity) =>
  head === None || n === 0
    ? cont ([])
    : safeTake (n - 1, tail, answer => cont ([ head, answer ]))

const list =
  [ 1, [ 2, [ 3, [ 4, [ 5, [] ] ] ] ] ]

console.log (safeTake (3, list))
// [ 1, [ 2, [ 3, [] ] ] ]
Here it is on a trampoline
const None =
  Symbol ()

const identity = x =>
  x

const call = (f, ...values) =>
  ({ tag: call, f, values })

const trampoline = acc =>
{
  while (acc && acc.tag === call)
    acc = acc.f (...acc.values)
  return acc
}

const safeTake = (n = 0, xs = []) =>
{
  const aux = (n, [ head = None, tail ], cont) =>
    head === None || n === 0
      ? call (cont, [])
      : call (aux, n - 1, tail, answer =>
          call (cont, [ head, answer ]))
  return trampoline (aux (n, xs, identity))
}

const list =
  [ 1, [ 2, [ 3, [ 4, [ 5, [] ] ] ] ] ]

console.log (safeTake (3, list))
// [ 1, [ 2, [ 3, [] ] ] ]

algorithm to merge two arrays into an array of all possible combinations

Example given in JavaScript:
Suppose we have two arrays, [0,0,0] and [1,1,1]. What's the algorithm to produce all the possible ways these two arrays can be combined? Example:
mergeEveryWayPossible([0,0,0],[1,1,1])
// [ [0,0,0],[1,0,0], [0,1,0], [0,0,1], [1,1,0], [0,1,1], [1,0,1], [1,1,1] ]
Merge the arrays into an array of all possible combinations. This is different from finding the Cartesian product.
I'm also not sure what this kind of combination is called. If the algorithm or technique has a name, please share.
You could transform the values into an array of this format
[
[0, 1],
[0, 1],
[0, 1]
]
and then build a new result set by iterating the outer and inner arrays.
var data = [[0, 0, 0], [1, 1, 1]],
values = data.reduce((r, a, i) => (a.forEach((b, j) => (r[j] = r[j] || [])[i] = b), r), []),
result = values.reduce((a, b) => a.reduce((r, v) => r.concat(b.map(w => [].concat(v, w))), []));
console.log(result);
continuations
Here's a solution involving delimited continuations – delimited continuations are sometimes called composable continuations because they have a return value, and thus can be composed with any other ordinary functions – additionally, they can be called multiple times which can produce extraordinary effects
// identity :: a -> a
const identity = x =>
  x

// concatMap :: (a -> [b]) -> [a] -> [b]
const concatMap = f => ([ x, ...xs ]) =>
  x === undefined
    ? []
    : f (x) .concat (concatMap (f) (xs))

// cont :: a -> cont a
const cont = x =>
  k => k (x)

// reset :: cont a -> (a -> b) -> b
const reset = m =>
  k => m (k)

// shift :: ((a -> b) -> cont a) -> cont b
const shift = f =>
  k => f (x => k (x) (identity))

// amb :: [a] -> cont [a]
const amb = xs =>
  shift (k => cont (concatMap (k) (xs)))

// demo
reset (amb (['J', 'Q', 'K', 'A']) (x =>
       amb (['♡', '♢', '♤', '♧']) (y =>
       cont ([[x, y]]))))
       (console.log)
// [ ['J','♡'], ['J','♢'], ['J','♤'], ['J','♧'], ['Q','♡'], ['Q','♢'], ['Q','♤'], ['Q','♧'], ['K','♡'], ['K','♢'], ['K','♤'], ['K','♧'], ['A','♡'], ['A','♢'], ['A','♤'], ['A','♧'] ]
Of course this works for any variety of inputs and any nesting limit (that doesn't blow the stack ^_^)
const choices =
[0,1]
reset (amb (choices) (x =>
amb (choices) (y =>
amb (choices) (z =>
cont ([[x, y, z]])))))
(console.log)
// [ [0,0,0], [0,0,1], [0,1,0], [0,1,1], [1,0,0], [1,0,1], [1,1,0], [1,1,1] ]
But you must be wondering how we can abstract the nesting of amb itself – for example, in the code above, we have 3 levels of nesting to generate permutations of length 3 – what if we wanted to permute our choices 4, 5, or N times ?
const permute = (n, choices) =>
{
  const loop = (acc, n) =>
    n === 0
      ? cont ([acc])
      : amb (choices) (x =>
          loop (acc.concat ([x]), n - 1))
  return loop ([], n)
}
permute (4, [true,false]) (console.log)
// [ [ true , true , true , true ],
// [ true , true , true , false ],
// [ true , true , false, true ],
// [ true , true , false, false ],
// [ true , false, true , true ],
// [ true , false, true , false ],
// [ true , false, false, true ],
// [ true , false, false, false ],
// [ false, true , true , true ],
// [ false, true , true , false ],
// [ false, true , false, true ],
// [ false, true , false, false ],
// [ false, false, true , true ],
// [ false, false, true , false ],
// [ false, false, false, true ],
// [ false, false, false, false ] ]
sounds german, or something
If I'm understanding your comment correctly, you want something that zips the input and permutes each pair – shall we call it, zippermute ?
const zippermute = (xs, ys) =>
{
  const loop = (acc, [ x, ...xs ], [ y, ...ys ]) =>
    x === undefined || y === undefined
      ? cont ([acc])
      : amb ([ x, y ]) (choice =>
          loop (acc.concat ([choice]), xs, ys))
  return loop ([], xs, ys)
}
zippermute (['a', 'b', 'c'], ['x', 'y', 'z']) (console.log)
// [ [ 'a', 'b', 'c' ],
// [ 'a', 'b', 'z' ],
// [ 'a', 'y', 'c' ],
// [ 'a', 'y', 'z' ],
// [ 'x', 'b', 'c' ],
// [ 'x', 'b', 'z' ],
// [ 'x', 'y', 'c' ],
// [ 'x', 'y', 'z' ] ]
List monad
Whoever wrote that long thing about delimited whatchamacallits is nuts – after the 3 hours I spent trying to figure it out, I'll forget everything about it in 30 seconds!
On a more serious note, when compared to this answer, the shift/reset is so unbelievably impractical, it's a joke. But, if I didn't share that answer first, we wouldn't have had the joy of turning our brains inside out ! So please, don't reach for shift/reset unless they're critical to the task at hand – and please forgive me if you feel cheated into learning something totally cool !
Let's not overlook a more straightforward solution, the List monad – lovingly implemented with Array.prototype.chain here – also, notice the structural similarities between this solution and the continuation solution.
// monads do not have to be intimidating
// here's one in 2 lines†
Array.prototype.chain = function chain (f)
{
  return this.reduce ((acc, x) =>
    acc.concat (f (x)), [])
};

const permute = (n, choices) =>
{
  const loop = (acc, n) =>
    n === 0
      ? [acc]
      : choices.chain (choice =>
          loop (acc.concat ([choice]), n - 1))
  return loop ([], n)
}
console.log (permute (3, [0,1]))
// [ [ 0, 0, 0 ],
// [ 0, 0, 1 ],
// [ 0, 1, 0 ],
// [ 0, 1, 1 ],
// [ 1, 0, 0 ],
// [ 1, 0, 1 ],
// [ 1, 1, 0 ],
// [ 1, 1, 1 ] ]
const zippermute = (xs, ys) =>
{
  const loop = (acc, [ x, ...xs ], [ y, ...ys ]) =>
    x === undefined || y === undefined
      ? [acc]
      : [ x, y ].chain (choice =>
          loop (acc.concat ([choice]), xs, ys))
  return loop ([], xs, ys)
}
console.log (zippermute (['a', 'b', 'c'], ['x', 'y', 'z']))
// [ [ 'a', 'b', 'c' ],
// [ 'a', 'b', 'z' ],
// [ 'a', 'y', 'c' ],
// [ 'a', 'y', 'z' ],
// [ 'x', 'b', 'c' ],
// [ 'x', 'b', 'z' ],
// [ 'x', 'y', 'c' ],
// [ 'x', 'y', 'z' ] ]
† a monad interface is made up of some unit (a -> Monad a) and bind (Monad a -> (a -> Monad b) -> Monad b) functions – chain is our bind here, and JavaScript's array literal syntax ([someValue]) provides our unit – and that's all there is to it
Oh, you can't touch native prototypes !!
OK, sometimes there's good reason not to touch native prototypes. Don't worry tho, just create a data constructor for Arrays; we'll call it List – now we have a place to define our intended behaviours
If you like this solution, you might find another answer I wrote useful; the program employs the list monad to fetch 1 or more values from a data source using a query path
const List = (xs = []) =>
  ({
    value:
      xs,
    chain: f =>
      List (xs.reduce ((acc, x) =>
        acc.concat (f (x) .value), []))
  })

const permute = (n, choices) =>
{
  const loop = (acc, n) =>
    n === 0
      ? List ([acc])
      : List (choices) .chain (choice =>
          loop (acc.concat ([choice]), n - 1))
  return loop ([], n) .value
}
console.log (permute (3, [0,1]))
// [ [ 0, 0, 0 ],
// [ 0, 0, 1 ],
// [ 0, 1, 0 ],
// [ 0, 1, 1 ],
// [ 1, 0, 0 ],
// [ 1, 0, 1 ],
// [ 1, 1, 0 ],
// [ 1, 1, 1 ] ]
const zippermute = (xs, ys) =>
{
  const loop = (acc, [ x, ...xs ], [ y, ...ys ]) =>
    x === undefined || y === undefined
      ? List ([acc])
      : List ([ x, y ]) .chain (choice =>
          loop (acc.concat ([choice]), xs, ys))
  return loop ([], xs, ys) .value
}
console.log (zippermute (['a', 'b', 'c'], ['x', 'y', 'z']))
// [ [ 'a', 'b', 'c' ],
// [ 'a', 'b', 'z' ],
// [ 'a', 'y', 'c' ],
// [ 'a', 'y', 'z' ],
// [ 'x', 'b', 'c' ],
// [ 'x', 'b', 'z' ],
// [ 'x', 'y', 'c' ],
// [ 'x', 'y', 'z' ] ]
You can use the lodash.combinations mixin; here's its implementation:
(function(_) {
  _.mixin({combinations: function(values, n) {
    values = _.values(values);
    var combinations = [];
    (function f(combination, index) {
      if (combination.length < n) {
        _.find(values, function(value, index) {
          f(_.concat(combination, [value]), index + 1);
        }, index);
      } else {
        combinations.push(combination);
      }
    })([], 0);
    return combinations;
  }});
})((function() {
  if (typeof module !== 'undefined' && typeof exports !== 'undefined' && this === exports) {
    return require('lodash');
  } else {
    return _;
  }
}).call(this));

console.log(_.combinations('111000', 3))
console.log(_.combinations('111000', 3).length + " combinations available");
This would log out the following:
[["1", "1", "1"], ["1", "1", "0"], ["1", "1", "0"], ["1", "1", "0"],
["1", "1", "0"], ["1", "1", "0"], ["1", "1", "0"], ["1", "0", "0"],
["1", "0", "0"], ["1", "0", "0"], ["1", "1", "0"], ["1", "1", "0"],
["1", "1", "0"], ["1", "0", "0"], ["1", "0", "0"], ["1", "0", "0"],
["1", "0", "0"], ["1", "0", "0"], ["1", "0", "0"], ["0", "0", "0"]]
"20 combinations available"
The library is at https://github.com/SeregPie/lodash.combinations
Note that for arrays of length N there are 2^N combinations. Every integer in the range 0..2^N-1 corresponds to one combination: if the k-th bit of the number is zero, take the k-th element of the result from the first array; otherwise, take it from the second.
P.S. Note that your example is equivalent to the binary representation of numbers.
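A minimal sketch of that bitmask idea (assuming both input arrays have the same length; the function name just mirrors the one from the question):
function mergeEveryWayPossible(a, b) {
  const n = a.length;
  const out = [];
  for (let mask = 0; mask < (1 << n); mask++) {
    const row = [];
    for (let k = 0; k < n; k++) {
      // k-th bit 0 -> take from the first array, 1 -> take from the second
      row.push((mask >> k) & 1 ? b[k] : a[k]);
    }
    out.push(row);
  }
  return out;
}
console.log(mergeEveryWayPossible([0, 0, 0], [1, 1, 1]));
// 2^3 = 8 rows, from [0,0,0] through [1,1,1]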
This was a fun problem that I encountered, and upon reading the answers listed here, I figured a more readable, less clever answer might be good for newcomers.
This treats each array that needs to be combined like an individual digit that increments up to that digit's highest value (the array length - 1).
With all of the digits representing the arrays, it is a matter of incrementing the set, carrying any ones, and eventually looping back around to all zeros in the counter to indicate that we have completed the set.
This also means we don't need any recursion; it can be done with a single while loop.
function allPermutationsOfArrays(arrayOfArrays) {
  const out = [];
  // this counter acts like an incrementing number
  const incrementalSet = arrayOfArrays.map(() => 0);
  // max values for each "digit" of the counter
  const maxValues = arrayOfArrays.map((a) => a.length - 1);
  while (1) {
    const outRow = [];
    // for the current counter incrementer, get the array values and
    // put them together for output
    for (let i = 0; i < incrementalSet.length; i++) {
      outRow[i] = arrayOfArrays[i][incrementalSet[i]];
    }
    out.push(outRow);
    // add one to incremental set - we are going right to left so it works like
    // normal numbers, but really that is arbitrary and we could go left to right
    let allZeros = true;
    for (let i = incrementalSet.length - 1; i >= 0; i--) {
      if (incrementalSet[i] + 1 > maxValues[i]) {
        incrementalSet[i] = 0;
        continue; // carry the one to the next slot
      } else {
        incrementalSet[i] += 1;
        allZeros = false;
        break; // nothing to carry over
      }
    }
    if (allZeros) {
      // we have done all combinations and are back to [0, 0, ...]
      break; // break the while(1) loop
    }
  }
  return out;
}

console.log(
  allPermutationsOfArrays([
    [0, 1, 2],
    ["a", "b"],
  ])
);
// [
// [ 0, 'a' ],
// [ 0, 'b' ],
// [ 1, 'a' ],
// [ 1, 'b' ],
// [ 2, 'a' ],
// [ 2, 'b' ]
// ]
