Sequentially apply multiple functions to an object using different lenses - javascript

I would like to perform some updates to an array in an object, and then calculate another parameter based on this update. This is what I tried:
import * as R from 'ramda'
const obj = {
  arr: [2, 3],
  result: {
    sumOfDoubled: 0
  }
};

const double = a => {
  return a * 2;
}
const arrLens = R.lensProp('arr');
const res0sumOfDblLens = R.lensPath(['result','sumOfDoubled']);
const calc = R.pipe(
  R.over(arrLens, R.map(double)),
  R.view(arrLens),
  R.sum,
  R.set(res0sumOfDblLens)
);
const updatedObjA = calc(obj);
const updatedObjB = R.set(res0sumOfDblLens,R.sum(R.view(arrLens,R.over(arrLens,R.map(double),obj))),obj);
// what I want: {"arr":[4,6],"result":{"sumOfDoubled":10}}
console.log(JSON.stringify(obj)); //{"arr":[2,3],"result":{"sumOfDoubled":0}}, as expected
console.log(JSON.stringify(updatedObjA)); //undefined
console.log(JSON.stringify(updatedObjB)); //{"arr":[2,3],"result":{"sumOfDoubled":10}}, correct result but the array did not update
I realise that neither approach will work; approach A boils down to R.set(res0sumOfDblLens, 10), which makes no sense as it doesn't have a target object for the operation. Approach B, on the other hand, manipulates the base object twice rather than passing the result of the first manipulation as the input for the second.
How can I achieve this with a single function composition; i.e. apply the double() function to one part of the object, and then pass that updated object as the input for calculating sumOfDoubled?

As well as Ori Drori's converge solution, you could also use either of two other Ramda functions. I always prefer lift to converge when it works; it feels more like standard FP, whereas converge is very much a Ramda artifact. lift doesn't always do the job, because of some of the variadic features of converge, but it does here, and you could write:
const calc = pipe (
  over (arrLens, map (multiply (2))),
  lift (set (res0sumOfDblLens)) (
    pipe (view (arrLens), sum),
    identity
  )
)
But that identity in either of these solutions makes me wonder if there's something better. And there is. Ramda's chain, when applied to functions, is what's sometimes known as the starling combinator, :: (b -> a -> c) -> (a -> b) -> a -> c. Or, said a different way, chain (f, g) (x) //~> f (g (x)) (x). And that's just what we want to apply here. So with chain, this is simplified further:
const arrLens = lensProp('arr')
const res0sumOfDblLens = lensPath(['result', 'sumOfDoubled'])
const calc = pipe (
  over (arrLens, map (multiply (2))),
  chain (
    set (res0sumOfDblLens),
    pipe (view (arrLens), sum)
  )
)
const obj = { arr: [2, 3], result: { sumOfDoubled: 0 }}
console .log (calc (obj))
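// => { arr: [4, 6], result: { sumOfDoubled: 10 } }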
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
<script>const {lensProp, lensPath, pipe, over, map, multiply, chain, set, view, sum} = R </script>

To get both the new sum and the updated object, so you can set the sum on that object, you can use R.converge():
const arrLens = R.lensProp('arr');
const res0sumOfDblLens = R.lensPath(['result', 'sumOfDoubled']);
const calc = R.pipe(
  R.over(arrLens, R.map(R.multiply(2))),
  R.converge(R.set(res0sumOfDblLens), [
    R.pipe(R.view(arrLens), R.sum),
    R.identity
  ])
);
const obj = { arr: [2, 3], result: { sumOfDoubled: 0 }};
const result = calc(obj);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>

Maybe a variant without a lens would be a better fit for your case?
const doubleArr = pipe(
  path(['arr']),
  map(x => x * 2)
)

const newData = applySpec({
  arr: doubleArr,
  result: {
    sumOfDoubled: pipe(
      doubleArr,
      sum
    )
  }
})
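A quick sketch of how this would be used (assuming pipe, path, map, sum and applySpec are destructured from R, and obj is the object from the question):
// assuming: const { pipe, path, map, sum, applySpec } = R
newData(obj)
// => { arr: [4, 6], result: { sumOfDoubled: 10 } }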

Related

In Javascript, is there an easyish way to get a chainable Array prepend operation like the reverse of Array.concat?

I'm doing array manipulation in Javascript, and I want to be able to chain operations with multiple calls to map, concat, etc.
const someAmazingArrayOperation = (list) =>
  list
    .map(transformStuff)
    .sort(myAwesomeSortAlgorithm)
    .concat([someSuffixElement])
    .precat([newFirstElement])
    .filter(unique)
But the problem I've run into is that Array.precat doesn't exist. (Think of Array.concat, but the reverse.)
I don't want to modify Array.prototype in my own code, for reasons. (https://flaviocopes.com/javascript-why-not-modify-object-prototype/)
I could totally use Array.concat and concatenate my array to the end of the prefix array and carry on. But that doesn't chain with the other stuff, and it makes my code look clunky.
It's kind of a minor issue because I can easily write code to get the output I want. But it's kind of a big deal because I want my code to look clean and this seems like a missing piece of the Array prototype.
Is there a way to get what I want without modifying the prototype of a built-in type?
For more about the hypothetical Array.precat, see also:
concat, but prepend instead of append
You could use Array#reduce with a reducer function, using the initialValue as the array that gets prepended.
const
  precat = (a, b) => [...a, b],
  result = [1, 2, 3]
    .reduce(precat, [9, 8, 7]);
console.log(result)
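// => [9, 8, 7, 1, 2, 3]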
If you don't want to modify Array.prototype, you can consider extending Array instead:
class AmazingArray extends Array {
  precat(...args) {
    return new AmazingArray().concat(...args, this);
  }
}
const transformStuff = x => 2*x;
const myAwesomeSortAlgorithm = (a, b) => a - b;
const someSuffixElement = 19;
const newFirstElement = -1;
const unique = (x, i, arr) => arr.indexOf(x) === i;
const someAmazingArrayOperation = (list) =>
  new AmazingArray()
    .concat(list)
    .map(transformStuff)
    .sort(myAwesomeSortAlgorithm)
    .concat([someSuffixElement])
    .precat([newFirstElement])
    .filter(unique);
console.log(someAmazingArrayOperation([9, 2, 2, 3]));
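// => [-1, 4, 6, 18, 19] (an AmazingArray instance)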
I don't want to modify Array.prototype in my own code, for reasons.
These reasons are good, but you can sidestep them by using a collision-safe property - key it with a symbol, not a name:
const precat = Symbol('precatenate')

Array.prototype[precat] = function(...args) {
  return [].concat(...args, this);
};
const someAmazingArrayOperation = (list) =>
  list
    .map(transformStuff)
    .sort(myAwesomeCompareFunction)
    .concat([someSuffixElement])
    [precat]([newFirstElement])
    .filter(unique);

How do I apply a composed function to each object in a list using Ramda?

I'm building a simple app using RamdaJS that aims to take a list of objects that represent U.S. states, and for each state, it should calculate the number of electoral votes and add that value as a new property to each object, called electoralVotes. The basic gist of the calculation itself (as inaccurate as it may be) is to divide the population by 600000, round that number down, and if the rounded-down number is 0, round it up to 1.
For simplicity, the array of states only includes a state name and population for each state:
const states = [
  { state: 'Alabama', population: 4833722 },
  { state: 'Alaska', population: 735132 },
  { state: 'Arizona', population: 6626624 },
  // ... etc.
];
I created a function called getElectoralVotesForState that is created with nested levels of composition (A composed function that is built using another composed function). This function takes a state object, examines its population property, then calculates and returns the corresponding number of electoral votes.
const R = require('ramda');
// This might not be factually accurate, but it's a ballpark anyway
const POP_PER_ELECTORAL_VOTE = 600000;
const populationLens = R.lensProp("population");
// Take a number (population) and calculate the number of electoral votes
// If the rounded-down calculation is 0, round it up to 1
const getElectoralVotes = R.pipe(
  R.divide(R.__, POP_PER_ELECTORAL_VOTE),
  Math.floor,
  R.when(R.equals(0), R.always(1))
);

// Take a state object and return the electoral votes
const getElectoralVotesForState = R.pipe(
  R.view(populationLens),
  getElectoralVotes
);
If I want to pass in a single state to the getElectoralVotesForState function, it works fine:
const alabama = { state: 'Alabama', population: 4833722 };
const alabamaElectoralVotes = getElectoralVotesForState(alabama); // Resolves to 8
While this seems to work for a single object, I can't seem to get this to apply to an array of objects. My guess is that the solution might look something like this:
const statesWithElectoralVotes = R.map(
  R.assoc("electoralVotes", getElectoralVotesForState)
)(states);
This does add an electoralVotes property to each state, but it's a function and not a resolved value. I'm sure it's just a silly thing I'm getting wrong here, but I can't figure it out.
What am I missing?
To apply a function to an array of items, use R.map. Since you only want the value, you don't need R.assoc:
const POP_PER_ELECTORAL_VOTE = 600000;
const populationLens = R.lensProp("population");
const getElectoralVotes = R.pipe(
  R.divide(R.__, POP_PER_ELECTORAL_VOTE),
  Math.floor,
  R.when(R.equals(0), R.always(1))
);

const getElectoralVotesForState = R.pipe(
  R.view(populationLens),
  getElectoralVotes
);
const mapStates = R.map(getElectoralVotesForState);
const states = [{"state":"Alabama","population":4833722},{"state":"Alaska","population":735132},{"state":"Arizona","population":6626624}];
const result = mapStates(states);
console.log(result);
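// => [8, 1, 11]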
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
In addition, the lens is a bit redundant here; take the value of population using R.prop instead. I would also replace the R.when with R.max.
const POP_PER_ELECTORAL_VOTE = 600000;
const getElectoralVotesForState = R.pipe(
  R.prop('population'),
  R.divide(R.__, POP_PER_ELECTORAL_VOTE),
  Math.floor,
  R.max(1)
);
const mapStates = R.map(getElectoralVotesForState);
const states = [{"state":"Alabama","population":4833722},{"state":"Alaska","population":735132},{"state":"Arizona","population":6626624}];
const result = mapStates(states);
console.log(result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
To add the property to each object, you'll need to get the value from the object, and then add it as a property to that same object. This means that we need to use 2 functions - f (R.assoc) and g (getElectoralVotesForState) - and apply both of them to the object x, where one of them (R.assoc) also needs the result of the other.
In other words, you apply getElectoralVotesForState to the object to get the number (g(x)), and then add that result back onto the object (f(g(x), x)).
To add the electoralVotes you can use R.chain in conjunction with R.assoc. When R.chain is applied to functions, R.chain(f, g)(x) is equivalent to f(g(x), x). In your case:
f - assoc
g - getElectoralVotesForState
x - the object
The combined function - R.chain(R.assoc('electoralVotes'), getElectoralVotesForState) becomes assoc('electoralVotes')(getElectoralVotesForState(object), object).
Example:
const POP_PER_ELECTORAL_VOTE = 600000;
const getElectoralVotesForState = R.pipe(
  R.prop('population'),
  R.divide(R.__, POP_PER_ELECTORAL_VOTE),
  Math.floor,
  R.max(1)
);

const mapStates = R.map(
  R.chain(R.assoc("electoralVotes"), getElectoralVotesForState)
);
const states = [{"state":"Alabama","population":4833722},{"state":"Alaska","population":735132},{"state":"Arizona","population":6626624}];
const result = mapStates(states);
console.log(result);
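// => [{ state: "Alabama", population: 4833722, electoralVotes: 8 },
//     { state: "Alaska", population: 735132, electoralVotes: 1 },
//     { state: "Arizona", population: 6626624, electoralVotes: 11 }]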
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js" integrity="sha512-rZHvUXcc1zWKsxm7rJ8lVQuIr1oOmm7cShlvpV0gWf0RvbcJN6x96al/Rp2L2BI4a4ZkT2/YfVe/8YvB2UHzQw==" crossorigin="anonymous" referrerpolicy="no-referrer"></script>
I think Ori Drori answers your question well. I have no suggested improvements. But I want to show that it's not too hard to code the current apportionment method used for the U.S. Congress, the Huntington-Hill Method:
// Huntington-Hill apportionment method
const apportion = (total) => (pops) =>
  huntingtonHill (total - pops.length, pops .map (pop => ({...pop, seats: 1})))

// method of equal proportions
const huntingtonHill = (toFill, seats, state = nextSeat (seats)) =>
  toFill <= 0
    ? seats
    : huntingtonHill (toFill - 1, seats .map (s => s.state == state ? {...s, seats: s.seats + 1} : s))

// find state to assign the next seat
const nextSeat = (seats) =>
  seats
    .map (({state, population, seats}) => [state, population * Math.sqrt (1 / (seats * (seats + 1)))])
    .sort (([_, a], [_1, b]) => b - a)
    [0] [0] // ideally, use a better max implementation than sort/head, but low priority
// convert census data to expected object format
const restructure = (results) =>
  results
    .slice (1) // remove header
    .map (([population, state]) => ({state, population})) // make objects
    .filter (({state}) => ! ['District of Columbia', 'Puerto Rico'] .includes (state)) // remove non-states
    .sort (({state: s1}, {state: s2}) => s1 < s2 ? -1 : s1 > s2 ? 1 : 0) // alphabetize

fetch ('https://api.census.gov/data/2021/pep/population?get=POP_2021,NAME&for=state:*')
  .then (res => res.json ())
  .then (restructure)
  .then (apportion (435))
  .then (console .log)
  .catch (console .warn)
Here we call the U.S. Census API to fetch the populations of each state, remove Washington DC and Puerto Rico, reformat these results to your {state, population} input format, and then call apportion (435) with the array of values. (If you already have the data in that format, you can just call apportion (435) directly.) It assigns one seat to each state and then uses the Huntington-Hill method to assign the remaining seats.
It does this by continually calling nextSeat, which divides each state's population by the geometric mean of its current number of seats and the next higher number, then choosing the state with the largest value.
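To make that concrete, here is a minimal sketch of the priority value nextSeat computes, with made-up numbers (the priority helper below is only for illustration and is not part of the answer's code):
const priority = (population, seats) => population * Math.sqrt (1 / (seats * (seats + 1)))
priority (6000000, 1) // ≈ 4242640.69, i.e. population / sqrt(1 * 2)
priority (6000000, 2) // ≈ 2449489.74, i.e. population / sqrt(2 * 3)
// the state with the largest priority value receives the next seat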
This does not use Ramda for anything. Perhaps we could clean this up slightly with some Ramda functions (for example, replacing pop => ({...pop, seats: 1}) with assoc('seats', 1)), but it would not likely be a large gain. I saw this question because I pay attention to the Ramda tag. But the point here is that the actual current method of apportionment is not that difficult to implement, if you happen to be interested.
You can see how this technique is used to compare different sized houses in an old gist of mine.

Ramda.js transducers: average the resulting array of numbers

I'm currently learning about transducers with Ramda.js. (So fun, yay! 🎉)
I found this question that describes how to first filter an array and then sum up the values in it using a transducer.
I want to do something similar, but different. I have an array of objects that have a timestamp and I want to average out the timestamps. Something like this:
const createCheckin = ({
  timestamp = Date.now(), // default is now
  startStation = 'foo',
  endStation = 'bar'
} = {}) => ({ timestamp, startStation, endStation });

const checkins = [
  createCheckin(),
  createCheckin({ startStation: 'baz' }),
  createCheckin({ timestamp: Date.now() + 100 }), // offset of 100
];

const filterCheckins = R.filter(({ startStation }) => startStation === 'foo');
const mapTimestamps = R.map(({ timestamp }) => timestamp);

const transducer = R.compose(
  filterCheckins,
  mapTimestamps,
);

const average = R.converge(R.divide, [R.sum, R.length]);

R.transduce(transducer, average, 0, checkins);
// Should return something like Date.now() + 50, giving the 100 offset at the top.
Of course average as it stands above can't work because the transform function works like a reduce.
I found out that I can do it in a step after the transducer.
const timestamps = R.transduce(transducer, R.flip(R.append), [], checkins);
average(timestamps);
However, I think there must be a way to do this with the iterator function (the second argument of transduce). How could you achieve this? Or maybe average has to be part of the transducer (using compose)?
As a first step, you can create a simple type to allow averages to be combined. This requires keeping a running tally of the sum and number of items being averaged.
const Avg = (sum, count) => ({ sum, count })

// creates a new `Avg` from a given value, initialised with a count of 1
Avg.of = n => Avg(n, 1)

// takes two `Avg` types and combines them together
Avg.append = (avg1, avg2) =>
  Avg(avg1.sum + avg2.sum, avg1.count + avg2.count)
With this, we can turn our attention to creating the transformer that will combine the average values.
First, a simple helper function that allows values to be converted to our Avg type, and also wraps a reduce function to default to the first value it receives rather than requiring an initial value to be provided (a nice initial value doesn't exist for averages, so we'll just use the first of the values instead).
const mapReduce1 = (map, reduce) =>
  (acc, n) => acc == null ? map(n) : reduce(acc, map(n))
The transformer then just needs to combine the Avg values and pull the resulting average out of the result. N.B. the result needs to guard against null values in the case where the transformer is run over an empty list.
const avgXf = {
  '@@transducer/step': mapReduce1(Avg.of, Avg.append),
  '@@transducer/result': result =>
    result == null ? null : result.sum / result.count
}
You can then pass this as the accumulator function to transduce, which should produce the resulting average value.
transduce(transducer, avgXf, null, checkins)
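// ≈ Date.now() + 50 for the sample checkins above (averaging the two 'foo' check-ins)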
I'm afraid this strikes me as quite confused.
I think of transducers as a way of combining the steps of a composed function on sequences of values so that you can iterate the sequence only once.
average makes no sense here. To take an average you need the whole collection.
So you can transduce the filtering and mapping of the values. But you will absolutely need to then do the averaging separately. Note that filter then map is a common enough pattern that there are plenty of filterMap functions around. Ramda doesn't have one, but this would do fine:
const filterMap = (f, m) => (xs) =>
  xs .flatMap (x => f (x) ? [m (x)] : [])
which would then be used like this:
filterMap (
  propEq ('startStation', 'foo'),
  prop ('timestamp')
) (checkins)
But for more complex sequences of transformations, transducers can certainly fit the bill.
I would also suggest that when you can, you should use lift instead of converge. It's a more standard FP function, and works on a more abstract data type. Here const average = lift (divide) (sum, length) would work fine.
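Putting the two pieces together, a quick sketch of how the end-to-end version might read, reusing the filterMap helper above (assuming lift, divide, sum, length, propEq and prop are pulled from Ramda):
const average = lift (divide) (sum, length)

average (filterMap (propEq ('startStation', 'foo'), prop ('timestamp')) (checkins))
// ≈ Date.now() + 50 for the sample checkins in the question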

How to compose functions and then apply arguments in Lodash/FP

I am trying to learn more about using currying and composition in functional programming by using Lodash/FP to clean up some old code. However, I am repeatedly running into situations where I have a function and I want to pass it one or more functions. I then want to pass the values that will be used as the arguments to the functions that I passed to the original function.
I'm finding it difficult to explain exactly what I'm trying to do so I made a JS Fiddle that shows how I have been trying to approach this:
https://jsfiddle.net/harimau777/rqkLf1rg/2/
const foo = a => `${a}${a}`

// Desired Behavior: const ans1 = (a, b) => `${foo(a)}${foo(b)}`
const ans1 = _.compose(
  (a, b) => `${a}${b}`,
  foo,
  foo
)
// Desired Result: '1122'
console.log(ans1('1', '2'))
// Desired Behavior: const ans2 = a => a.map(a => a + 1)
const ans2 = _.compose(
  _.map,
  a => a + 1
)
//Desired Result: [2, 3, 4]
console.log(ans2([1, 2, 3]))
Based on Ori Drori's answer below I think that I can clarify my question (is this how people normally follow up on StackOverflow as opposed to asking a new question?):
Suppose that instead of applying the same sequence of functions to both inputs I wanted to apply a sequence of functions to the first input, a different sequence to the second input, and use both results as the input to the rest of the _.compose. I could do this using:
const f1 = _.compose(<Some sequence of functions>)
const f2 = _.compose(<Some sequence of functions>)
const f3 = <A function which takes two inputs>

const ans = _.compose(
  <More functions here>,
  f3
)(f1(a), f2(b))
console.log(ans)
However, I'm wondering if there is a way to handle this using a single compose or if there are any patterns that tend to be used in functional programming to handle situations like this.
Lodash/FP's _.compose() (_.flowRight() in lodash) works by applying the parameters to the rightmost (bottom in your code) function, then passing the result to the function to its left, and so on:
_.compose(a, b, c)(params) -> a(b(c(params)))
This means that every function except the rightmost one receives only a single parameter.
You can get the 1st example working by changing the methods a bit:
const ans1 = _.compose(
  arr => arr.join(''), // joins the params into one string
  (...args) => args.map((s) => `${s}${s}`) // returns an array of doubled params
)
// Desired Result: '1122'
console.log(ans1('1', '2'))
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash-fp/0.10.4/lodash-fp.min.js"></script>
In the 2nd example you want to create a new method that maps via a predefined callback. Since lodash/fp methods are auto-curried, you can supply the callback to _.map() and get a new method back. Compose won't work here since _.map() doesn't use the result of the method directly, but applies it to every item in the array:
const ans2 = _.map(a => a + 1)
//Desired Result: [2, 3, 4]
console.log(ans2([1, 2, 3]))
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash-fp/0.10.4/lodash-fp.min.js"></script>
The case you've presented in your clarification can be handled by _.useWith() (known as _.overArgs() in lodash):
Creates a function that invokes func with its arguments transformed.
However, its use is not recommended since it reduces readability.
Example:
const foo = a => `${a}${a}`

const ans1 = _.useWith(
  (a, b) => `${a}${b}`,
  [foo, foo]
)
const result = ans1(1, 2)
console.log(result)
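// => '1122'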
<script src='https://cdn.jsdelivr.net/g/lodash#4(lodash.min.js+lodash.fp.min.js)'></script>

Split array into two different arrays using functional JavaScript

I was wondering what would be the best way to split an array into two different arrays using JavaScript, but to keep it in the realms of functional programming.
Let's say that the two arrays should be created depending on some logic. For instance splitting one array should only contain strings with less than four characters and the other the rest.
const arr = ['horse', 'elephant', 'dog', 'crocodile', 'cat'];
I have thought about different methods:
Filter:
const lessThanFour = arr.filter((animal) => {
  return animal.length < 4;
});

const fourAndMore = arr.filter((animal) => {
  return animal.length >= 4;
});
The problem with this for me is that you have to go through your data twice, but it is very readable. Would there be a massive impact doing this twice if you have a rather large array?
Reduce:
const threeFourArr = arr.reduce((animArr, animal) => {
  if (animal.length < 4) {
    return [[...animArr[0], animal], animArr[1]];
  } else {
    return [animArr[0], [...animArr[1], animal]];
  }
}, [[], []]);
Where the array's 0 index contains the array of less than four and the 1 index contains the array of more than three.
I don't like this too much, because it seems that the data structure is going to give a bit of problems, seeing that it is an array of arrays. I've thought about building an object with the reduce, but I can't imagine that it would be better than the array within an array solution.
I've managed to look at similar questions online as well as Stack Overflow, but many of these break the idea of immutability by using push() or they have very unreadable implementations, which in my opinion breaks the expressiveness of functional programming.
Are there any other ways of doing this? (functional of course)
collateBy
I just shared a similar answer here
I like this solution better because it abstracts away the collation but allows you to control how items are collated using a higher-order function.
Notice how we don't say anything about animal.length or < 4 or animals[0].push inside collateBy. This procedure has no knowledge of the kind of data you might be collating.
// generic collation procedure
const collateBy = f => g => xs => {
  return xs.reduce((m, x) => {
    let v = f(x)
    return m.set(v, g(m.get(v), x))
  }, new Map())
}
// custom collator
// collate by length > 4 using array concatenation for like elements
// note i'm using `[]` as the "seed" value for the empty collation
const collateByStrLen4 =
  collateBy (x => x.length > 4) ((a = [], b) => [...a, b])
// sample data
const arr = ['horse','elephant','dog','crocodile','cat']
// get collation
let collation = collateByStrLen4 (arr)
// output specific collation keys
console.log('greater than 4', collation.get(true))
console.log('not greater than 4', collation.get(false))
// output entire collation
console.log('all entries', Array.from(collation.entries()))
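// => greater than 4: ["horse", "elephant", "crocodile"]
// => not greater than 4: ["dog", "cat"]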
Check out that other answer I posted to see other usage varieties. It's a pretty handy procedure.
bifilter
This is another solution that captures both outputs of a filter function, instead of throwing away filtered values like Array.prototype.filter does.
This is basically what your reduce implementation does but it is abstracted into a generic, parameterized procedure. It does not use Array.prototype.push but in the body of a closure, localized mutation is generally accepted as OK.
const bifilter = (f, xs) => {
  return xs.reduce(([T, F], x, i, arr) => {
    if (f(x, i, arr) === false)
      return [T, [...F, x]]
    else
      return [[...T, x], F]
  }, [[], []])
}
const arr = ['horse','elephant','dog','crocodile','cat']
let [truthy,falsy] = bifilter(x=> x.length > 4, arr)
console.log('greater than 4', truthy)
console.log('not greater than 4', falsy)
Though it might be a little more straightforward, it's not nearly as powerful as collateBy. Either way, pick whichever one you like, adapt it to meet your needs if necessary, and have fun !
If this is your own app, go nuts and add it to Array.prototype
// attach to Array.prototype if this is your own app
// do NOT do this if this is part of a lib that others will inherit
Array.prototype.bifilter = function(f) {
  return bifilter(f, this)
}
The function you are trying to build is usually known as partition and can be found under that name in many libraries, such as underscore.js. (As far as I know it's not a built-in method.)
var threeFourArr = _.partition(animals, function(x){ return x.length < 4 });
I don't like this too much, because it seems that the data structure is going to give a bit of problems, seeing that it is an array of arrays
Well, that is the only way to have a function in Javascript that returns two different values. It looks a bit better if you can use destructuring assignment (an ES6 feature):
var [smalls, bigs] = _.partition(animals, function(x){ return x.length < 4 });
Look at it as returning a pair of arrays instead of returning an array of arrays. "Array of arrays" suggests that you may have a variable number of arrays.
I've managed to look at similar questions online as well as Stack Overflow, but many of these break the idea of immutability by using push() or they have very unreadable implementations, which in my opinion breaks the expressiveness of functional programming.
Mutability is not a problem if you localize it inside a single function. From the outside it's just as immutable as before, and sometimes using some mutability will be more idiomatic than trying to do everything in a purely functional manner. If I had to code a partition function from scratch, I would write something along these lines:
function partition(xs, pred){
  var trues = [];
  var falses = [];
  xs.forEach(function(x){
    if(pred(x)){
      trues.push(x);
    }else{
      falses.push(x);
    }
  });
  return [trues, falses];
}
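Used with the arr from the question, a quick check of what this returns (output values worked out by hand):
const [shortNames, longNames] = partition(arr, x => x.length < 4);
// shortNames: ['dog', 'cat']
// longNames: ['horse', 'elephant', 'crocodile']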
A shorter .reduce() version would be:
const split = arr.reduce((animArr, animal) => {
  animArr[animal.length < 4 ? 0 : 1].push(animal);
  return animArr
}, [ [], [] ]);
Which might be combined with destructuring:
const [ lessThanFour, fourAndMore ] = arr.reduce(...)
If you are not opposed to using underscore there is a neat little function called groupBy that does exactly what you are looking for:
const arr = ['horse', 'elephant', 'dog', 'crocodile', 'cat'];
var results = _.groupBy(arr, function(cur) {
  return cur.length > 4;
});
const greaterThanFour = results.true;
const lessThanFour = results.false;
console.log(greaterThanFour); // ["horse", "elephant", "crocodile"]
console.log(lessThanFour); // ["dog", "cat"]
Kudos for the beautiful responses above. Here is an alternative using recursion:
const arr = ['horse', 'elephant', 'dog', 'crocodile', 'cat'];
const splitBy = predicate => {
  const go = (input, a, b) => {
    if (input.length > 0) {
      const value = input[0];
      const [z, y] = predicate(value) ? [[...a, value], b] : [a, [...b, value]];
      return go(input.slice(1), z, y);
    } else {
      return [a, b];
    }
  };
  return go;
}

const splitAt4 = splitBy(x => x.length < 4);
const [lessThan4, fourAndMore] = splitAt4(arr, [], []);

console.log(lessThan4, fourAndMore);
I don't think there could be another solution than returning an array of arrays or an object containing arrays. How else would a JavaScript function return multiple arrays after splitting them?
Write a function containing your push logic for readability.
var myArr = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var x = split(myArr, v => (v <= 5));
console.log(x);
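// => [[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]]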
function split(array, tester) {
  const result = [[], []];
  array.forEach((v, i, a) => {
    if (tester(v, i, a)) result[0].push(v);
    else result[1].push(v);
  });
  return result;
}
