Functional composition in JavaScript

I know this is quite possible since my Haskell friends seem to be able to do this kind of thing in their sleep, but I can't wrap my head around more complicated functional composition in JS.
Say, for example, you have these three functions:
const round = v => Math.round(v);
const clamp = v => v < 1.3 ? 1.3 : v;
const getScore = (iteration, factor) =>
  iteration < 2 ? 1 :
  iteration === 2 ? 6 :
  (getScore(iteration - 1, factor) * factor);
In this case, say iteration should be an integer, so we would want to apply round() to that argument. And imagine that factor must be at least 1.3, so we would want to apply clamp() to that argument.
If we break getScore into two functions, this seems easier to compose:
const getScore = iteration => factor =>
  iteration < 2 ? 1 :
  iteration === 2 ? 6 :
  (getScore(iteration - 1)(factor) * factor);
The code to do this probably looks something like this:
const getRoundedClampedScore = compose(round, clamp, getScore);
But what does the compose function look like? And how is getRoundedClampedScore invoked? Or is this horribly wrong?

The compose function should probably take the core function to be composed first, using rest parameters to put the other functions into an array, and then return a function that calls the ith function in the array with the ith argument:
const round = v => Math.round(v);
const clamp = v => v < 1.3 ? 1.3 : v;

// the uncurried getScore, so that both arguments arrive in a single call
const getScore = (iteration, factor) =>
  iteration < 2 ? 1 :
  iteration === 2 ? 6 :
  (getScore(iteration - 1, factor) * factor);

const compose = (fn, ...transformArgsFns) => (...args) => {
  const newArgs = transformArgsFns.map((transformArgFn, i) => transformArgFn(args[i]));
  return fn(...newArgs);
};

const getRoundedClampedScore = compose(getScore, round, clamp);
console.log(getRoundedClampedScore(1, 5));   // 1
console.log(getRoundedClampedScore(3.3, 5)); // 30
console.log(getRoundedClampedScore(3.3, 1)); // 7.8 (approximately): getScore(3, 1.3)

Haskell programmers can often simplify expressions in much the same way you'd simplify mathematical expressions. I will show you how to do so in this answer. First, let's look at the building blocks of your expression:
round :: Number -> Number
clamp :: Number -> Number
getScore :: Number -> Number -> Number
By composing these three functions we want to create the following function:
getRoundedClampedScore :: Number -> Number -> Number
getRoundedClampedScore iteration factor = getScore (round iteration) (clamp factor)
We can simplify this expression as follows:
getRoundedClampedScore iteration factor = getScore (round iteration) (clamp factor)
getRoundedClampedScore iteration = getScore (round iteration) . clamp
getRoundedClampedScore iteration = (getScore . round) iteration . clamp
getRoundedClampedScore iteration = (. clamp) ((getScore . round) iteration)
getRoundedClampedScore = (. clamp) . (getScore . round)
getRoundedClampedScore = (. clamp) . getScore . round
If you want to convert this directly into JavaScript then you could do so using reverse function composition:
const pipe = f => g => x => g(f(x));
const compose2 = (f, g, h) => pipe(g)(pipe(f)(pipe(h)));
const getRoundedClampedScore = compose2(getScore, round, clamp);
// You'd call it as follows:
getRoundedClampedScore(iteration)(factor);
That being said, the best solution would be to simply define it in pointful form:
const compose2 = (f, g, h) => x => y => f(g(x))(h(y));
const getRoundedClampedScore = compose2(getScore, round, clamp);
Pointfree style is often useful but sometimes pointless.

I think part of the trouble you're having is that compose isn't actually the function you're looking for, but rather something else. compose feeds a value through a series of functions, whereas you're looking to pre-process a series of arguments, and then feed those processed arguments into a final function.
Ramda has a utility function that's perfect for this, called converge. What converge does is produce a function that applies a series of functions to a series of arguments in 1-to-1 correspondence, and then feeds all of those transformed arguments into another function. In your case, using it would look like this:
var saferGetScore = R.converge(getScore, [round, clamp]);
If you don't want to get involved in a whole 3rd-party library just to use this converge function, you can easily define your own with a single line of code. It looks a lot like what CertainPerformance is using in their answer, but with one fewer ... (and you definitely shouldn't name it compose, because that's an entirely different concept):
const converge = (f, fs) => (...args) => f(...args.map((a, i) => fs[i](a)));
const saferGetScore = converge(getScore, [round, clamp]);
const score = saferGetScore(2.5, 0.3); // getScore(3, 1.3): round(2.5) → 3, clamp(0.3) → 1.3


I keep getting this question wrong. Counting Bits using javascript

This is the question.
Given an integer n, return an array ans of length n + 1 such that for each i (0 <= i <= n), ans[i] is the number of 1's in the binary representation of i.
https://leetcode.com/problems/counting-bits/
And this is my solution below.
If the input is 2, expected output should be [0,1,1] but I keep getting [0,2,2]. Why is that???
var countBits = function(n) {
  // n=3. [0,1,2,3]
  var arr = [0];
  for (var i = 1; i <= n; i++) {
    var sum = 0;
    var value = i;
    while (value != 0) {
      sum += value % 2;
      value /= 2;
    }
    arr.push(sum);
  }
  return arr;
};
console.log(countBits(3));
You're doing way too much work.
Suppose b is the largest power of 2 not exceeding i, i.e. the value of i's highest set bit. Evidently, i has exactly one more 1 in its binary representation than does i - b. But since you're generating the counts in order, you've already worked out how many 1s there are in i - b.
The only trick is how to figure out what b is. And to do that, we use another iterative technique: as you list numbers, b changes exactly at the moment that i becomes twice the previous value of b:
const countBits = function(n) {
  let arr = [0], bit = 1;
  for (let i = 1; i <= n; i++) {
    if (i == bit + bit) bit += bit; // i has reached the next power of two
    arr.push(arr[i - bit] + 1);
  }
  return arr;
};
console.log(countBits(20));
This technique is usually called "dynamic programming". In essence, it takes a recursive definition and computes it bottom-up: instead of starting at the desired argument and recursing down to the base case, it starts at the base and then computes each intermediate value which will be needed until it reaches the target. In this case, all intermediate values are needed, saving us from having to think about how to compute only the minimum number of intermediate values necessary.
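For contrast, the same recurrence can also be written top-down. Here is a minimal illustrative sketch (mine, not part of the answer), using bits(i) = bits(i >> 1) + (i & 1) with bits(0) = 0:

const countOnes = i => i === 0 ? 0 : countOnes(i >> 1) + (i & 1); // top-down recurrence
const countBits = n => Array.from({ length: n + 1 }, (_, i) => countOnes(i));
console.log(countBits(20));

The bottom-up version above does strictly less work, since each count is computed once and then read straight out of the array.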
Think of it this way: if you know how many ones are there in a number X, then you immediately know how many ones are there in X*2 (the same) and X*2+1 (one more). Since you're processing numbers in order, you can just push both derived counts to the result and skip to the next number:
const countBits = (N) => {
  const b = [0, 1];
  for (let i = 1; i <= N / 2; i++) {
    b.push(b[i]);     // count for 2*i is the same as for i
    b.push(b[i] + 1); // count for 2*i + 1 has one extra bit
  }
  if (b.length > N + 1) b.pop(); // see below: one extra entry for even N
  return b;
};
console.log(countBits(20));
Since we push two numbers at once, the result comes out one entry too long for even N, so you have to pop the last number afterwards, hence the final pop.
Use floor():

sum += Math.floor(value % 2);
value = Math.floor(value / 2);

I guess your algorithm works in some typed language where integer division results in an integer.
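A closely related variant (my illustration, not part of this answer) sidesteps the division issue entirely by using bitwise operators, which only ever produce integers:

var countBits = function(n) {
  var arr = [0];
  for (var i = 1; i <= n; i++) {
    var sum = 0;
    var value = i;
    while (value !== 0) {
      sum += value & 1; // low bit of value
      value >>>= 1;     // integer halving via unsigned right shift
    }
    arr.push(sum);
  }
  return arr;
};
console.log(countBits(3)); // [0, 1, 1, 2]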
Here's a very different approach, using the opposite of a fold (such as Array.prototype.reduce) typically called unfold. In this case, we start with a seed array, perform some operation on it to yield the next value, and recur, until we decide to stop.
We write a generic unfold and then use it with a callback which accepts the entire array we've found so far plus next and done callbacks, and then chooses whether to stop (if we've reached our limit) or continue. In either case, it calls one of the two callbacks.
It looks like this:
const _unfold = (fn, init) =>
  fn (init, (x) => _unfold (fn, [...init, x]), () => init)

// Number of 1's in the binary representation of each integer in [`0 ... n`]
const oneBits = (n) => _unfold (
  (xs, next, done) => xs .length <= n ? next (xs .length % 2 + xs [xs .length >> 1]) : done (),
  [0]
)

console .log (oneBits (20))
I have a GitHub Gist which shows a few more examples of this pattern.
An interesting possible extension would be to encapsulate the grow-the-array-until-it's-long-enough handling, and make this function trivial. That's not the only use of such an _unfold, but it's probably a common one. It could look like this:
const _unfold = (fn, init) =>
  fn (init, (x) => _unfold (fn, [...init, x]), () => init)

const unfoldN = (fn, init) => (n) => _unfold (
  (xs, next, done) => xs .length <= n ? next (fn (xs)) : done (),
  init
)

const oneBits = unfoldN (
  (xs) => xs .length % 2 + xs [xs .length >> 1],
  [0]
)

console .log (oneBits (20))
Here we have two helper functions that make oneBits quite trivial to write. And those helpers have many potential uses.

Ramda.js transducers: average the resulting array of numbers

I'm currently learning about transducers with Ramda.js. (So fun, yay! 🎉)
I found this question that describes how to first filter an array and then sum up the values in it using a transducer.
I want to do something similar, but different. I have an array of objects that have a timestamp and I want to average out the timestamps. Something like this:
const createCheckin = ({
  timestamp = Date.now(), // default is now
  startStation = 'foo',
  endStation = 'bar'
} = {}) => ({ timestamp, startStation, endStation });

const checkins = [
  createCheckin(),
  createCheckin({ startStation: 'baz' }),
  createCheckin({ timestamp: Date.now() + 100 }), // offset of 100
];

const filterCheckins = R.filter(({ startStation }) => startStation === 'foo');
const mapTimestamps = R.map(({ timestamp }) => timestamp);

const transducer = R.compose(
  filterCheckins,
  mapTimestamps,
);

const average = R.converge(R.divide, [R.sum, R.length]);

R.transduce(transducer, average, 0, checkins);
// Should return something like Date.now() + 50, given the 100 offset at the top.
Of course average as it stands above can't work because the transform function works like a reduce.
I found out that I can do it in a step after the transducer.
const timestamps = R.transduce(transducer, R.flip(R.append), [], checkins);
average(timestamps);
However, I think there must be a way to do this with the iterator function (second argument of the transducer). How could you achieve this? Or maybe average has to be part of the transducer (using compose)?
As a first step, you can create a simple type to allow averages to be combined. This requires keeping a running tally of the sum and number of items being averaged.
const Avg = (sum, count) => ({ sum, count })

// creates a new `Avg` from a given value, initialised with a count of 1
Avg.of = n => Avg(n, 1)

// takes two `Avg` types and combines them together
Avg.append = (avg1, avg2) =>
  Avg(avg1.sum + avg2.sum, avg1.count + avg2.count)
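For example, combining two singleton averages (an illustrative check, not from the original answer):

console.log(Avg.append(Avg.of(1), Avg.of(3))); // { sum: 4, count: 2 }, i.e. an average of 2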
With this, we can turn our attention to creating the transformer that will combine the average values.
First, a simple helper function that allows values to be converted to our Avg type. It also wraps a reduce function so that it defaults to the first value it receives rather than requiring an initial value to be provided (a nice initial value doesn't exist for averages, so we'll just use the first of the values instead).
const mapReduce1 = (map, reduce) =>
  (acc, n) => acc == null ? map(n) : reduce(acc, map(n))
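For instance, folding a plain array with it (an illustrative check, not from the original answer):

console.log([1, 2, 3].reduce(mapReduce1(Avg.of, Avg.append), null)); // { sum: 6, count: 3 }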
The transformer then just needs to combine the Avg values and pull the resulting average out of the result. N.B. the result needs to guard against null in the case where the transformer is run over an empty list.
const avgXf = {
  '@@transducer/step': mapReduce1(Avg.of, Avg.append),
  '@@transducer/result': result =>
    result == null ? null : result.sum / result.count
}
You can then pass this as the accumulator function to transduce, which should produce the resulting average value.
transduce(transducer, avgXf, null, checkins)
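With the sample checkins from the question, this works out to the mean of the two 'foo' timestamps (a usage sketch):

const avgTimestamp = transduce(transducer, avgXf, null, checkins);
console.log(avgTimestamp - checkins[0].timestamp); // ≈ 50, halfway into the 100ms offset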
I'm afraid this strikes me as quite confused.
I think of transducers as a way of combining the steps of a composed function on sequences of values so that you can iterate the sequence only once.
average makes no sense here. To take an average you need the whole collection.
So you can transduce the filtering and mapping of the values. But you will absolutely need to then do the averaging separately. Note that filter then map is a common enough pattern that there are plenty of filterMap functions around. Ramda doesn't have one, but this would do fine:
const filterMap = (f, m) => (xs) =>
  xs .flatMap (x => f (x) ? [m (x)] : [])
which would then be used like this:
filterMap (
  propEq ('startStation', 'foo'),
  prop ('timestamp')
) (checkins)
But for more complex sequences of transformations, transducers can certainly fit the bill.
I would also suggest that when you can, you should use lift instead of converge. It's a more standard FP function, and works on a more abstract data type. Here const average = lift (divide) (sum, length) would work fine.
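For instance (an illustrative check, with Ramda in scope as R):

const average = R.lift (R.divide) (R.sum, R.length)
console.log (average ([2, 4, 6])) // 4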

Functional programming style pattern matching in JavaScript

I'm writing a compiler from a functional language of mine to JS. The compiler will run in the browser. I need to implement a pattern-matching mechanism in JS, because the source language has one. I've found Sparkler and Z. Sparkler can't be executed in the browser as far as I know, and Z doesn't have all the capabilities I need.
So my language has semantics like this:
count x [] <- 0
count x [ x : xs ] <- 1 + count x xs
count x [ y : xs ] <- count x xs
This is what happens in this snippet:
The first line defines a function which takes two parameters, some variable x and the empty list, and returns zero.
The second line defines a function which also takes two parameters, some variable x and a list starting with that same x, and returns 1 + count(x, xs).
The third line defines a function which takes some variable x and a list starting with any other value y, and returns count(x, xs).
For this example I want to generate code like this:
const count = (x, list) => {
  match(x, list) => (
    (x, []) => {...}
    (x, [ x : xs ]) => {...}
    (x, [ y : xs ]) => {...}
  )
}
How do I properly unfold this kind of pattern matching into ifs and ors?
General case
There is a proposal for Pattern Matching in ECMAScript, but as of 2018 it's in a very early stage.
Currently, the Implementations section only lists:
Babel Plugin
Sweet.js macro (NOTE: this isn't based on the proposal, this proposal is partially based on it!)
List case
Use destructuring assignment, like:
const count = list => {
  const [x, ...xs] = list;
  if (x === undefined) {
    return 0;
  } else if (xs.length === 0) { // rest destructuring always yields an array, never undefined
    return 1;
  } else {
    return 1 + count(xs);
  }
}
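For the original two-argument count with all three clauses, the unfolding into plain ifs could look like this (a sketch of mine, not part of the answer above):

const count = (x, list) => {
  if (list.length === 0) return 0;          // count x [] <- 0
  const [head, ...xs] = list;
  if (head === x) return 1 + count(x, xs);  // count x [ x : xs ] <- 1 + count x xs
  return count(x, xs);                      // count x [ y : xs ] <- count x xs
};
console.log(count(2, [1, 2, 2, 3])); // 2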
Using ex-patterns, you could write your example as follows. You need to use the placeholder names that come with the package (_, A, B, C, ... Z) but you can rename matched variables in the callback function with destructuring (an object containing all named matches is passed in as the first argument to the callback function).
import { when, then, Y, _, tail, end } from 'ex-patterns';

const count = list => (
  when(list)
    ([], then(() => 0))                                 // match empty array
    ([_], then(() => 1))                                // match array with (any) 1 element
    ([_, tail(Y)], then(({ Y: xs }) => 1 + count(xs))) // match array and capture tail
  (end)
);
This also covers the case where list = [undefined, 'foo', 'bar'], which I don't think would be covered by the accepted answer.
To make the code more efficient, you can call count with an Immutable.js List instead of an array (no changes required). In that case, the tail portion of the array doesn't need to be sliced and copied into a new array on every loop.
As with the packages you mentioned, this doesn't run in the browser natively, but I guess that's not a major obstacle with modern bundling tools.
Here are the docs: https://moritzploss.github.io/ex-patterns/
Disclaimer: I'm the author of ex-patterns :)
I had a need for pattern matching and made something that works for me.
const { patroon, _ } = require('patroon') // assuming patroon's named exports

const count = patroon(
  [_], ([, ...xs]) => 1 + count(xs),
  [], 0
)

count([0, 1, 2, 3]) // => 4
See readme for more usage examples.
https://github.com/bas080/patroon
https://www.npmjs.com/package/patroon

Lodash fp (functional programming) reduce not working how I expect it to work

Using normal lodash without fp, you'd do something like
chain(array).map(..).reduce(...).value()
With fp, you'd do
compose(reduce(...), map(...))(array)
I can make it work for many methods (flatten, sort, map), except reduce.
You'd expect it (lodash/fp/reduce) to work like
reduce((a,b)=>a+b, 0)([1,2,3])
But the fp version still requires 3 arguments, which doesn't make sense to me. All the other functions work like this for me, except reduce
func(...)(array)
How can I make fp's reduce work like the other fp functions in this manner:
compose(reduce(...), map(...), flatten(...))(array)
reduce takes 3 arguments in total regardless of whether a functional interface is used. lodash/fp just changes the parameter order and allows you to partially apply functions:
const fp = require ('lodash/fp')
const sum = fp.reduce (fp.add, 0)
const sq = x => x * x
const main = fp.compose (sum, fp.map (sq))
console.log (main ([1,2,3,4]))
// => 30
// [1,2,3,4] => [1,4,9,16] => 0 + 1 + 4 + 9 + 16 => 30
Or as an inline composition
fp.compose (fp.reduce (fp.add, 0), fp.map (x => x * x)) ([1,2,3,4])
// => 30
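To see the currying explicitly (a quick illustrative check):

fp.reduce (fp.add) (0) ([1, 2, 3])
// => 6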

How to avoid intermediate results when performing array iterations?

When working with arrays, intermediate representations are needed regularly - particularly in connection with functional programming, in which data is often treated as immutable:
const square = x => x * x;
const odd = x => (x & 1) === 1;
let xs = [1,2,3,4,5,6,7,8,9];
// unnecessary intermediate array:
xs.map(square).filter(odd); // [1,4,9,16,25,36,49,64,81] => [1,9,25,49,81]
// even worse:
xs.map(square).filter(odd).slice(0, 2); // [1,9]
How can I avoid this behavior in Javascript/Ecmascript 2015 to obtain more efficient iterative algorithms?
Transducers are one possible way to avoid intermediate results within iterative algorithms. In order to understand them better, you have to realize that transducers by themselves are rather pointless:
// map transducer
let map = tf => rf => acc => x => rf(acc)(tf(x));
Why should we pass a reducing function to map for each invocation when that required function is always the same, namely concat?
The answer to this question is located in the official transducer definition:
Transducers are composable algorithmic transformations.
Transducers develop their expressive power only in conjunction with function composition:
const comp = f => g => x => f(g(x));
let xf = comp(filter(gt3))(map(inc));
foldL(xf(append))([])(xs);
comp combines the transducers (filter and map) into a single transducer xf, which is then passed a single reduction function (append) as its final argument. The result is a transformation sequence that requires no intermediate arrays. Each array element passes through the entire sequence before the next element is in line.
At this point, the definition of the map transducer is understandable: Composability requires matching function signatures.
Note that the order of evaluation of the transducer stack goes from left to right and is thus opposed to the normal order of function composition.
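To make that concrete, here is the snippet above completed into a runnable (non-CPS) sketch; the simple filter, foldL and append definitions are my assumptions, since the answer only defines their CPS counterparts below:

const comp = f => g => x => f(g(x));
const map = tf => rf => acc => x => rf(acc)(tf(x));
const filter = pred => rf => acc => x => pred(x) ? rf(acc)(x) : acc;
const foldL = rf => acc => xs => xs.reduce((a, x) => rf(a)(x), acc);
const append = xs => ys => xs.concat(ys);

const inc = x => x + 1;
const gt3 = x => x > 3;

let xs = [1,3,5,7,9,11];
let xf = comp(filter(gt3))(map(inc));
console.log(foldL(xf(append))([])(xs)); // [6,8,10,12] (filter runs first, then map)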
An important property of transducers is their ability to exit iterative processes early. In the chosen implementation, this behavior is achieved by implementing both transducers and foldL in continuation passing style. An alternative would be lazy evaluation. Here is the CPS implementation:
const foldL = rf => acc => xs => {
  return xs.length
    ? rf(acc)(xs[0])(acc_ => foldL(rf)(acc_)(xs.slice(1)))
    : acc;
};

// transducers
const map = tf => rf => acc => x => cont => rf(acc)(tf(x))(cont);
const filter = pred => rf => acc => x => cont => pred(x) ? rf(acc)(x)(cont) : cont(acc);
const takeN = n => rf => acc => x => cont =>
  acc.length < n - 1 ? rf(acc)(x)(cont) : rf(acc)(x)(id);

// reducer
const append = xs => ys => xs.concat(ys);

// transformers
const inc = x => ++x;
const gt3 = x => x > 3;

const comp = f => g => x => f(g(x));
const liftC2 = f => x => y => cont => cont(f(x)(y));
const id = x => x;

let xs = [1,3,5,7,9,11];

let xf = comp(filter(gt3))(map(inc));
foldL(xf(liftC2(append)))([])(xs); // [6,8,10,12]

xf = comp(comp(filter(gt3))(map(inc)))(takeN(2));
foldL(xf(liftC2(append)))([])(xs); // [6,8]
Please note that this implementation is more of a proof of concept than a full-blown solution. The obvious benefits of transducers are:
no intermediate representations
purely functional and concise solution
genericity (work with any aggregate/collection, not just arrays)
extraordinary code reusability (reducers/transformers are common functions with their usual signatures)
Theoretically, CPS is as fast as imperative loops, at least per the Ecmascript 2015 spec, since all tail calls have the same return point and can thereby share the same stack frame (TCO). In practice, though, most engines never shipped proper tail calls, so deep CPS recursion can still exhaust the stack.
It is considered controversial whether this approach is idiomatic enough for a Javascript solution. I prefer this functional style. However, the most common transducer libraries are implemented in object style and should look more familiar to OO developers.
