Is there an alternative way to get the middle element of an array using Ramda? - javascript

I tried this code and it produces the result I want:
const {
__,
compose,
converge,
divide,
identity,
length,
prop
} = require("ramda");
const div2 = divide(__, 2);
const lengthDiv2 = compose(Math.floor, div2, length);
const midElement = converge(prop, [lengthDiv2, identity]);
console.log(midElement([1, 5, 4])); //5
But I don't know whether there is another way to get a property from an array, in particular some other implementation of the midElement function?

You can create midElement by chaining R.nth and lengthDiv2 because, according to the R.chain documentation (and @ScottSauyet):
If second argument is a function, chain(f, g)(x) is equivalent to
f(g(x), x).
In this case g is lengthDiv2, f is R.nth, and x is the array. So, the result would be R.nth(lengthDiv2(array), array), which will return the middle item.
const { compose, flip, divide, length, chain, nth } = R;
const div2 = flip(divide)(2); // create the function using flip
const lengthDiv2 = compose(Math.floor, div2, length);
const midElement = chain(nth, lengthDiv2); // chain R.nth and lengthDiv2
console.log(midElement([1, 5, 4])); //5
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script>

Simplification
Yes, there is a somewhat easier way to write midElement. This feels a bit cleaner:
const div2 = divide (__, 2)
const lengthDiv2 = compose (floor, div2, length)
const midElement = chain (nth, lengthDiv2)
console.log (midElement ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midElement ([8, 6, 7, 5, 3, 0])) //=> 5
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.0/ramda.js"></script><script>
const {divide, __, compose, length, chain, nth} = R
const {floor} = Math </script>
(We choose nth over prop here only because it's semantically more correct. nth is specific to arrays and their indices. prop works only because of the coincidence that JavaScript builds its arrays atop plain objects.)
chain is an interesting function. You can find many more details in its FantasyLand specification. But for our cases, the important point is how it works with functions.
chain (f, g) //=> (x) => f (g (x)) (x)
And that explains how (here at least) it's a simpler alternative to converge.
Note that this version -- like your original -- chooses the second of the two central values when the list has an even length. I usually find that we more naturally choose the first one. That is, for example, midpoint([3, 6, 9, 12]) would usually be 6. To alter that we could simply add a decrement operation before dividing:
const midpoint = chain(nth, compose(floor, divide(__, 2), dec, length))
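For example, a quick check (assuming dec is destructured from R alongside the other imports above):
console.log (midpoint ([3, 6, 9, 12])) //=> 6
console.log (midpoint ([8, 6, 7, 5, 3, 0])) //=> 7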
But Why?
However, Ramda is not offering much of use here. Ramda (disclaimer: I'm one of its main authors) offers help with many problems. But it's a tool, and I would not suggest using it except when it makes your code cleaner.
And this version seems to me much easier to comprehend:
const midpoint = (xs) => xs[Math.floor ((xs.length / 2))]
console.log (midpoint ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midpoint ([8, 6, 7, 5, 3, 0])) //=> 5
Or this version if you want the decrement behavior above:
const midpoint = (xs) => xs[Math.floor (((xs.length - 1) / 2))]
console.log (midpoint ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midpoint ([8, 6, 7, 5, 3, 0])) //=> 7
Another Option
But there are so many different ways to write such a function. While I wouldn't really recommend it, since its performance cannot compare, a recursive solution is very elegant:
// choosing the first central option
const midpoint = (xs) => xs.length <= 2 ? xs[0] : midpoint (xs.slice(1, -1))
// choosing the second central option
const midpoint = (xs) => xs.length <= 2 ? xs[xs.length - 1] : midpoint (xs.slice(1, -1))
These simply take one of the two central elements if there are no more than two left, and otherwise recursively take the midpoint of the array that remains after removing the first and last elements.
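For example, with the first-central variant:
console.log (midpoint ([8, 6, 7, 5, 3, 0, 9])) //=> 5
console.log (midpoint ([8, 6, 7, 5, 3, 0])) //=> 7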
What to remember
I'm a founder of Ramda, and proud of the library. But we need to remember that it is just a library. It should make a certain style of coding easier, but it should not dictate any particular style. Use it when it makes your code simpler, more maintainable, more consistent, or more performant. Never use it simply because you can.

Related

What is the time/space complexity of this algorithm to get all the sub arrays of an array divided by a given number

I am writing a function that takes an array and an integer number and returns an array of subarrays. The number of subarrays is exactly the integer number passed to the function. And the subarrays have to be continuous, meaning the original order of items in the array has to be preserved. Also no subarray can be empty. They have to have at least one item in them. For example:
const array = [2,3,5,4]
const numOfSubarray = 3
const subarrays = getSubarrays(array, numOfSubarray)
In this case subarrays is this:
[
[[2, 3], [5], [4]],
[[2], [3, 5], [4]],
[[2], [3], [5, 4]],
]
Here is my attempt:
function getSubarrays(array, numOfSubarray) {
const results = []
const recurse = (index, subArrays) => {
if (index === array.length && subArrays.length === numOfSubarray) {
results.push([...subArrays])
return
}
if (index === array.length) return
// 1. push current item to the current subarray
// when the remaining items are more than the remaining sub arrays needed
if (array.length - index - 1 >= numOfSubarray - subArrays.length) {
recurse(
index + 1,
subArrays.slice(0, -1).concat([subArrays.at(-1).concat(array[index])])
)
}
// 2. start a new subarray when the current subarray is not empty
if (subArrays.at(-1).length !== 0)
recurse(index + 1, subArrays.concat([[array[index]]]))
}
recurse(0, [[]])
return results
}
Right now it seems to be working. But I wanted to know what is the time/space complexity of this algorithm. I think it is definitely slower than O(2^n). Is there any way to improve it? Or any other solutions we can use to improve the algorithm here?
You can't get an answer down to anything like 2^n, I'm afraid. This grows much faster than that, because the answer has to do with the binomial coefficients, whose definitions have fundamental factorial parts, and whose approximations involve terms like n^n.
Your solution seems likely to be worse than necessary, as noted by the exponential number of calls required to solve even the simplest case, when numOfSubarrays is 1 and you should just be able to return [array]. But as to a full analysis, I'm not certain.
As the first comment shows, the above analysis is dead wrong.
However, if you're interested in another approach, here's how I might do it, based on the same insight others have mentioned, that the way to do this is to find all sets of numOfSubarrays - 1 indices of the positions between your values, and then convert them to your final format:
const choose = (n, k) =>
k == 0
? [[]]
: n == 0
? []
: [... choose (n - 1, k), ... choose (n - 1, k - 1). map (xs => [...xs, n])]
const breakAt = (xs) => (ns) =>
[...ns, xs .length] .map ((n, i) => xs .slice (i == 0 ? 0 : ns [i - 1], n))
const subarrays = (xs, n) =>
choose (xs .length - 1, n - 1) .map (breakAt (xs))
console .log (subarrays ([2, 3, 5, 4], 3) // combine for easier demo
.map (xs => xs .map (ys => ys .join ('')) .join('-')) //=> ["23-5-4", "2-35-4", "2-3-54"]
)
console .log (subarrays ([2, 3, 5, 4], 3)) // pure result
Here, choose (n, k) finds all the possible ways to choose k elements from the numbers 1, 2, 3, ..., n. So, for instance, choose (4, 2) would yield [[1, 2], [1, 3], [1, 4], [2, 3], [2, 4], [3, 4]].
breakAt breaks an array into sub-arrays at a set of indices. So
breakAt ([8, 6, 7, 5, 3, 0, 9]) ([3, 5]) //=> [[8, 6, 7], [5, 3], [0, 9]]
And subarrays simply combines these, subtracting one from the array length, subtracting one from numOfSubarrays, calling choose with those two values, and then for each result, calling breakAt with the original array and this set of indices.
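For instance, tracing the question's example through these functions (my own worked trace, not part of the original answer):
choose (3, 2)                    //=> [[1, 2], [1, 3], [2, 3]]
breakAt ([2, 3, 5, 4]) ([1, 2])  //=> [[2], [3], [5, 4]]
breakAt ([2, 3, 5, 4]) ([1, 3])  //=> [[2], [3, 5], [4]]
breakAt ([2, 3, 5, 4]) ([2, 3])  //=> [[2, 3], [5], [4]]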
Even here I haven't tried to analyze the complexity, but since the output is factorial, the process will take a factorial amount of time.
If you want to completely split a list of n elements into k disjoint, contiguous sub-lists, this is like placing k-1 split points into the n-1 gaps between the elements:
2 | 3 | 5 4
2 | 3 5 | 4
2 3 | 5 | 4
In combinatorics this is taking k-1 from n-1. So I think the size of the output will be (n-1 take k-1) = (n-1)! / ((k-1)! * (n-k)!). Thus the complexity is something polynomial like O(n^(k-1)) for constant k. If you don't fix k but raise it with n, like k = n/2, the complexity will get exponential.
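For the example in the question, for instance, n = 4 and k = 3, so the count is (3 take 2) = 3! / (2! * 1!) = 3, which matches the three partitions listed above.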
I don't think you can improve on this, because the output's size grows with this complexity.
tl;dr
The number of solutions is bounded by (as @gimix mentioned) a binomial coefficient, so if I understand correctly it's pessimistically exponential:
https://en.wikipedia.org/wiki/Binomial_coefficient#Bounds_and_asymptotic_formulas.
If I'm not mistaken, that makes your algorithm this exponential * n (for each element of each solution) * n (because on nearly every step you copy an array whose length may depend on n).
Fix the second if - only call recurse if subArrays.length < numOfSubarray.
You are copying arrays a lot - slice, concat, the spread operator - all of those create new arrays. If for every solution (whose length may depend on n), on every step, you copy this solution (which I think is happening here), you multiply the complexity by n.
The space complexity is also exponential * n - you store an exponential number of solutions, possibly of length dependent on n. Using a generator and returning one solution at a time could greatly improve that (see the sketch below). As @gimix mentioned, the combinations might be the simplest way to do it. Combinations generator in Python: https://docs.python.org/3/library/itertools.html#itertools.combinations
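A minimal sketch of that generator idea (my own code, not from the original answers; the name partitions and its shape are made up for illustration):
function* partitions(array, numOfSubarray, start = 0) {
  if (numOfSubarray === 1) {
    // the last subarray takes everything that is left
    yield [array.slice(start)];
    return;
  }
  // leave at least one item for each of the remaining subarrays
  for (let end = start + 1; end <= array.length - numOfSubarray + 1; end += 1) {
    for (const rest of partitions(array, numOfSubarray - 1, end)) {
      yield [array.slice(start, end), ...rest];
    }
  }
}
console.log([...partitions([2, 3, 5, 4], 3)]);
// -> [[[2], [3], [5, 4]], [[2], [3, 5], [4]], [[2, 3], [5], [4]]]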
Dwelling on complexity:
I think you are right about the slower-than-exponential complexity, but - bear with me - how much do you know about the Fibonacci sequence? ;)
Let's consider input:
array = [1, 2, ..., n]
numOfSubarrays = 1
We can consider the recursive calls a binary tree with if 1. guarding the left child (first recurse call) and if 2. guarding the right child (second recurse call).
For each recurse call if 1. will be fulfilled - there are more items than sub arrays needed.
Second if will be true only if current sub array has some elements. It's a tricky condition - it fails if, and only if, it succeeded one frame higher - an empty array has been added at the very beginning (except for the root call - it has no parent). Speaking in terms of a tree, it means we are in the right child - the parent must have just added an empty sub array as a current. On the other hand, for the left child parent has just pushed (yet another?) element to the current sub array and we are sure the if 2. will succeed.
Okay, but what does it say about the complexity?
Well, we can count the number of nodes in the tree, multiply by the number of operations they perform - most of them a constant number - and we get the complexity. So how many are there?
I'll keep track of left and right nodes separately on every level. It's gonna be useful soon. For convenience I'll ignore root call (I could treat it as a right node - it has empty sub array - but it messes up the final effect) and start from level 1 - the left child of the root call.
r1 = 0
l1 = 1
As a left node (sub array isn't empty) it has two children:
r2 = 1
l2 = 1
Now, the left node always has two children (1. is always fulfilled; 2. is true because parent pushed element to current sub array) and the right node has only the left child:
r3 = l2 = 1
l3 = l2 + r2 = 1 + 1 = 2
We could continue. The results are:
l | r
1 | 0
1 | 1
2 | 1
3 | 2
5 | 3
well... it's oddly familiar, isn't it?
Okay, so apparently, the complexity is O(Σ(F(i) + F(i-1)) where 1 <= i <= n).
Alright, but what does it really mean?
There is a very cool proof that S(n) - the sum of the Fibonacci numbers from 0 to n - is equal to F(n+2) - 1. It simplifies the complexity to:
O(S(n) + S(n-1)) = O(F(n+2) - 1 + F(n+1) - 1) = O(F(n+3) - 2) = O(F(n+3))
We can forget about the +3 since F(n+3) < 2 * F(n+2) < 4 * F(n+1) < 8 * F(n).
The final question: is the Fibonacci sequence exponential? Yes and no, apparently.
There is no single number x that fulfils x^n = F(n) - the base oscillates between √2 and 2, because F(n+1) < 2 * F(n) < F(n+2).
It's proven, though, that lim(n->∞) F(n+1) / F(n) = φ - the golden ratio. It means that O(F(n)) = O(φ^n). (Actually, you copy arrays a lot, so it's more like O(φ^n * n).)
How to fix it? You could check that there aren't already too many subarrays before recursing in if 2.
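A sketch of that guard, spliced into the question's code (my guess at the exact placement, not part of the original answer):
// 2. start a new subarray only when the current subarray is not empty
//    and we still need more subarrays
if (subArrays.at(-1).length !== 0 && subArrays.length < numOfSubarray)
  recurse(index + 1, subArrays.concat([[array[index]]]))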
Other than that, just as @Markus mentioned, depending on the input, the number of solutions might be exponential, so the algorithm to get them also has to be exponential. But that's not true for every input, so let's keep those cases to a minimum :D
The problem can be solved by a different approach:
compute all the combinations of numOfSubarray - 1 numbers ranging from 1 to the length of array minus 1
each combination is a valid slicing of array. For instance, you want to slice the array in your example at positions (1, 2), (1, 3), (2, 3), yielding subarrays [[2],[3],[5,4]], [[2],[3,5],[4]], and [[2,3],[5],[4]]
Time complexity is, I believe, O(r(nCr)) where n is (length of array - 1) and r is (number of subarrays - 1).
To visualize how and why it works have a look at the stars and bars technique

Function Composition - Isn't looping over an array multiple times for multiple operations inefficient?

I am trying to understand the concepts and basics of functional programming. I am not going hard-core with Haskell or Clojure or Scala. Instead, I am using JavaScript.
So, if I have understood correctly, the idea behind Functional Programming is to compose a software application using Pure Functions - which take care of handling single responsibility/functionality in an application without any side-effects.
Composition takes place in such a way that the output of one function is piped in as an input to another (according to the logic).
I write 2 functions for doubling and incrementing respectively which take an integer as an argument. Followed by a utility function that composes the functions passed in as arguments.
{
// doubles the input
const double = x => x * 2
// increments the input
const increment = x => x + 1
// composes the functions
const compose = (...fns) => x => fns.reduceRight((x, f) => f(x), x)
// input of interest
const arr = [2,3,4,5,6]
// composed function
const doubleAndIncrement = compose(increment, double)
// only doubled
console.log(arr.map(double))
// only incremented
console.log(arr.map(increment))
// double and increment
console.log(arr.map(doubleAndIncrement))
}
The outputs are as follows:
[4, 6, 8, 10, 12] // double
[3, 4, 5, 6, 7] // increment
[5, 7, 9, 11, 13] // double and increment
So, my question is, the reduceRight function will be going through the array twice in this case for applying the 2 functions.
If the array gets larger in size, wouldn't this be inefficient?
Using a loop, it can be done in a single traversal with the two operations in the same loop.
How can this be prevented or is my understanding incorrect in any way?
It is map that traverses the array, and that happens only once. reduceRight is traversing the list of composed functions (2 in your example) and threading the current value of the array through that chain of functions. The equivalent inefficient version you describe would be:
const map = f => x => x.map(f)
const doubleAndIncrement = compose(map(increment), map(double))
// double and increment inefficient
console.log(doubleAndIncrement(arr))
This reveals one of the laws that map must satisfy, that:
map(compose(g, f)) is equivalent (isomorphic) to compose(map(g), map(f))
But as we now know, the latter can be made more efficient by simplifying it to the former, and it will traverse the input array only once.
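To see the law concretely with the question's definitions (my own quick check; it assumes arr, double, increment, and compose from the question are in scope):
const viaFused   = arr.map(compose(increment, double))  // one traversal of arr
const viaTwoMaps = arr.map(double).map(increment)       // two traversals of arr
console.log(viaFused)   // [5, 7, 9, 11, 13]
console.log(viaTwoMaps) // [5, 7, 9, 11, 13]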
Optimising time complexity while still having small and dedicated functions is a widely discussed topic in fp.
Let's take this simple case:
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
const result = [1, 2, 3, 4, 5, 6, 7, 8]
.map(double)
.map(square)
.map(increment)
.filter(isOdd);
console.log('result', result);
In linear composition, as well as chaining, this operation can be read as O(4n) time complexity... meaning that for each input we perform roughly 4 operations (e.g. in a list of 4 billion items we would perform 16 billion operations).
we could solve this issue (intermediate values + unnecessary number of operations) by embedding all the operations (double, square, increment, isOdd) into a single reduce function... however, this would result in a loss of readability.
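For instance, a hand-fused version might look like this (my own sketch of the trade-off, reusing the small functions defined above; fusedResult is a made-up name):
const fusedResult = [1, 2, 3, 4, 5, 6, 7, 8].reduce((acc, n) => {
  // one pass, but the pipeline is now hand-wired rather than composed
  const value = increment(square(double(n)));
  return isOdd(value) ? [...acc, value] : acc;
}, []);
console.log('fusedResult', fusedResult); // [5, 17, 37, 65, 101, 145, 197, 257]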
In FP you have the concept of a Transducer (read here), so that you can still keep the readability given by single-purpose functions and have the efficiency of performing as few operations as possible.
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
const transducer = R.into(
[],
R.compose(R.map(double), R.map(square), R.map(increment), R.filter(isOdd)),
);
const result = transducer([1, 2, 3, 4, 5, 6, 7, 8]);
console.log('result', result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js" integrity="sha256-xB25ljGZ7K2VXnq087unEnoVhvTosWWtqXB4tAtZmHU=" crossorigin="anonymous"></script>

Calculate the mathematical difference of each element between two arrays

Given two arrays of the same length, return an array containing the mathematical difference of each pair of elements between the two arrays.
Example:
a = [3, 4, 7]
b = [3, 9, 10 ]
results: c = [(3-3), (9-4), (10-7)] so that c = [0, 5, 3]
let difference = []
function calculateDifferenceArray(data_one, data_two){
let i = 0
for (i in data_duplicates) {
difference.push(data_two[i]-data_one[i])
}
console.log(difference)
return difference
}
calculateDifferenceArray((b, a))
It does work.
I am wondering if there is a more elegant way to achieve the same
Use map as follows:
const a = [3, 4, 7]
const b = [3, 9, 10]
const c = b.map((e, i) => e - a[i])
// [0, 5, 3]
for-in isn't a good tool for looping through arrays (more in my answer here).
"More elegant" is subjective, but it can be more concise and, to my eyes, clear if you use map:
function calculateDifferenceArray(data_one, data_two){
return data_one.map((v1, index) => v1 - data_two[index])
}
calculateDifferenceArray(b, a) // < Note just one set of () here
Live Example:
const a = [3, 4, 7];
const b = [3, 9, 10 ];
function calculateDifferenceArray(data_one, data_two){
return data_one.map((v1, index) => v1 - data_two[index]);
}
console.log(calculateDifferenceArray(b, a));
Or if you prefer it slightly more verbose for debugging etc.:
function calculateDifferenceArray(data_one, data_two){
return data_one.map((v1, index) => {
const v2 = data_two[index]
return v1 - v2
})
}
calculateDifferenceArray(b, a)
A couple of notes on the version of this in the question:
It seems to loop over something (data_duplicates?) unrelated to the two arrays passed into the method.
It pushes to an array declared outside the function. That means if you call the function twice, it'll push the second set of values into the array but leave the first set of values there. That declaration and initialization should be inside the function, not outside it.
You had two sets of () in the calculateDifferenceArray call. That meant you only passed one argument to the function, because the inner () wrapped an expression with the comma operator, which takes its second operand as its result (see the short example after this list).
You had the order of the subtraction operation backward.
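Here is the comma-operator behaviour in isolation (a quick illustration of the point above):
const a = [3, 4, 7];
const b = [3, 9, 10];
console.log((b, a)); // [3, 4, 7] - the comma operator discards b and yields a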
You could use the higher-order array method map. It would work something like this:
let a = [2,3,4];
let b = [3,5,7];
let difference = a.map((n,i)=>n-b[i]);
console.log(difference);
you can read more about map here

Is this the most efficient use of ES6 to find factors without a loop?

I am trying to find the least verbose way to find the factors for each number in an array without using loops. I have a snippet of ES6 code that I could use in a .map to avoid a loop I think, but I'm at a loss as to what it is doing in the second line.
I've looked at the .filter and .from methods on MDN, so we're shallow-copying an array instance from an iterable, seemingly empty, created by just calling Array(), but then I'm at a loss to describe it in English after that, which makes me feel uneasy.
let evens = [2,4,6,80,24,36];
Here's the ES6 snippet I'm trying to deconstruct/explain in English
const factor = number => Array
.from(Array(number), (_, i) => i)
.filter(i => number % i === 0)
so I dropped it into this .map like so
const factors = evens.map((number => {
return factors(number)
}))
console.log(factors)
I get an array of arrays of the factors as shown here
[ [ 1 ],
[ 1, 2 ],
[ 1, 2, 3 ],
[ 1, 2, 4, 5, 8, 10, 16, 20, 40 ],
[ 1, 2, 3, 4, 6, 8, 12 ],
[ 1, 2, 3, 4, 6, 9, 12, 18 ] ]
So...it works, but what is happening in that second line? I love that it is succinct, but when I try to reverse engineer it into non-ES6 I'm left wondering.
Thank you in advance, advanced ES6 folks.
There are a number of things to unpack here.
First of all, "without using loops." Can you explain your reason for that requirement? Not that I'm unsympathetic, as I usually avoid explicit loops, but you should really be able to explain why you want to do this. There are two fundamentally different ways to process an ordered collection: iterative looping and recursion. If you're not using recursive code, there's probably a loop hiding somewhere. It might be buried inside a map, filter, etc., which is most often an improvement, but that loop is still there.
Second, the layout of this snippet is fairly misleading:
const factor = number => Array
.from(Array(number), (_, i) => i)
.filter(i => number % i === 0)
Usually when a number of lines start .methodName(...) each of these methods operates on the data supplied by the previous line. But from here is just a static method of Array; separating them like this is confusing. Either of these would be better, as would many other layouts:
const factor = number =>
Array.from(Array(number), (_, i) => i)
.filter(i => number % i === 0)
const factor = number => Array.from(
Array(number),
(_, i) => i
).filter(i => number % i === 0)
Third, as comments and another answer have pointed out, Array.from accepts an iterable and a mapping function and returns an array, and Array(number) will give you an array with no values but which reports its length as number, so will serve as an appropriate iterable. There are a number of equivalent ways one might write this, for instance:
Array.from({length: number}, (_, i) => i)
[...Array(number)].map((_, i) => i)
Fourth, you mention this:
const factors = evens.map((number => {
return factor(number)
}))
(typo fixed)
While there's nothing exactly wrong with this, you might want to recognize that this can be written much more cleanly as
const factors = evens.map(factor)
Finally, that factoring code is missing a major performance tweak. You test every possible value up to n, when you really can find factors in pairs, testing only up to sqrt(n). That is a major difference. There is no known efficient factoring technique, which is probably a good thing as modern encryption depends upon this being a difficult problem. But you most likely don't want to make it much worse than it has to be.
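For what it's worth, here is a rough sketch of that pairwise idea (my own illustration, not part of the original answer; the names factor, small, and large are made up). It collects each divisor i together with its partner n / i, testing only up to sqrt(n), then drops n itself to match the proper-factor output shown in the question:
const factor = (n) => {
  const small = [];
  const large = [];
  for (let i = 1; i * i <= n; i += 1) {
    if (n % i === 0) {
      small.push(i);                          // divisor at or below sqrt(n)
      if (i !== n / i) large.unshift(n / i);  // its partner above sqrt(n)
    }
  }
  return [...small, ...large].filter(d => d !== n); // drop n itself, as in the question
};
console.log([2, 4, 6, 80, 24, 36].map(factor));
// [[1], [1, 2], [1, 2, 3], [1, 2, 4, 5, 8, 10, 16, 20, 40], [1, 2, 3, 4, 6, 8, 12], [1, 2, 3, 4, 6, 9, 12, 18]]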
Array(number) creates an empty array with the length number. Now as it is completely empty, the length is not really useful yet ... if you however call Array.from on it, it iterates over all indices (from 0 up to number - 1), calls the passed callback, and builds up a new array from the return values. (_, i) => i ignores the current value (which is always undefined) and returns the index as the value. Therefore you get the following results:
number | Array.from
0 | []
1 | [0]
2 | [0, 1]
5 | [0, 1, 2, 3, 4]
As you see, that generates all numbers from 0 up to number - 1. Now you just have to keep those that evenly divide number, which can easily be done by checking the result of a modulo operation against zero:
1 % 2 -> 1
2 % 2 -> 0
3 % 2 -> 1
4 % 2 -> 0
I would think a more efficient way would be to use generators.
A bonus is I think the code is easier to understand.
let evens = [2,4,6,80,24,36];
function* factor(n) {
for (let l = 1; l < n; l += 1) {
if (n % l === 0) yield l;
}
}
const factors = evens.map((number => {
return Array.from(factor(number)).join(", ");
}));
console.log(factors);

Transducer flatten and uniq

I'm wondering if there is a way, using a transducer, to flatten a list and filter it to unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
But here we loop over the list twice (plus once more per depth layer). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
if (isArray(c)) {
c.forEach(o => p = flattenDeepUniq(p, o));
}
else {
p = !p.includes(c) ? [...p, c] : p;
}
return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
We have one loop over the elements (plus extra loops for deeper layers), which seems better and more optimized.
Is it even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7)   // Iteration 1
  .map(x => x + 3)   // Iteration 2
  .filter(isOdd)     // Iteration 3
  .take(5)           // Iteration 4
into something like this:
xs.reduce((r, x) => r.length >= 5 ? r : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3) : r, [])
// Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
(a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
[]
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
(a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
[]
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
(s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
(s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda, so you can use it directly. And as for flatten, you can traverse the tree up front to produce a flat sequence of values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function*(list) {
for (const it of list) {
yield* Array.isArray(it) ? deepIterate(it) : [it];
}
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))
