Is this a valid monad transformer in Javascript?

In order to better understand monad transformers I implemented one. Since Javascript is dynamically typed, I don't mimic type or data constructors but declare only plain old Javascript objects, which hold the corresponding static functions that form a specific monad/transformer. The underlying idea is to apply these functions to values in a container type; types and containers are separated, so to speak.
Arrays can contain any number of elements, and it is trivial to extend them so that they implement the monad interface. Arrays can also represent the two variants of the maybe type: an empty Array corresponds to nothing, and an Array with a single element corresponds to just(a). Consequently I will use Arrays as my container type. Please note that this is a quick and dirty implementation just for learning:
const array = {
  of: x => Array.of(x),
  map: f => ftor => ftor.map(f),
  ap: ftor => gtor => array.flatten(array.map(f => array.map(f) (gtor)) (ftor)),
  flatten: ftor => ftor.reduce((xs, y) => xs.concat(y), []),
  chain: mf => ftor => array.flatten(array.map(mf) (ftor))
};
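A quick sanity check of the array monad in isolation (the example values are mine):

array.chain(x => [x, x + 1]) ([10, 20]); // [10, 11, 20, 21]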
const maybe = {
  of: array.of,
  empty: () => [],
  throw: ftor => {
    if (ftor.length > 1) throw Error("indeterministic value");
    return ftor;
  },
  map: f => ftor => maybe.throw(ftor).map(f),
  ap: ftor => gtor => maybe.flatten(maybe.map(f => maybe.map(f) (gtor)) (ftor)),
  flatten: array.flatten,
  chain: mf => ftor => maybe.flatten(maybe.map(mf) (ftor)),
  T: M => {
    return {
      of: x => M.of(maybe.of(x)),
      empty: () => M.of(maybe.empty()),
      map: f => ftor => M.map(gtor => maybe.map(f) (gtor)) (ftor),
      ap: ftor => gtor => M.flatten(M.map(htor => M.map(itor => maybe.ap(htor) (itor)) (gtor)) (ftor)),
      flatten: maybe.flatten,
      chain: mf => ftor => M.chain(gtor => maybe.chain(mf) (gtor)) (ftor)
    };
  }
};
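And the maybe functions in isolation, where an empty Array plays the role of nothing (example values are mine):

maybe.chain(x => x > 0 ? maybe.of(x * 2) : maybe.empty()) ([5]); // [10] (just)
maybe.chain(x => x > 0 ? maybe.of(x * 2) : maybe.empty()) ([]);  // [] (nothing)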
Now I combine a maybe transformer with the monadic array to get a monad that can handle arrays of maybes.
const arraym = maybe.T(array);
const add = x => y => x + y;
const addm = x => y => [x + y];
const arrayOfMaybes = [[1],[],[3]];
When I treat arraym as an applicative functor, everything works as expected:
// yields: [[11],[],[13]] as expected
arraym.ap(arraym.map(add) (arrayOfMaybes)) (arraym.of(10));
However, when I apply chain something goes wrong:
// yields: [11,13] but [[11],[13]] expected
arraym.chain(x => arraym.chain(y => addm(x) (y)) (arrayOfMaybes)) ([[10]])
Is the cause of this problem

- that this isn't a valid monad transformer?
- that the way I apply chain is wrong?
- that my expectation regarding the result is wrong?

Is the cause of this problem that the way I apply chain is wrong?
Yes. You need to pass an mf that returns an arraym, not an array like addm does. You could use
const addmm = x => y => array.map(maybe.of)(addm(x)(y))
arraym.chain(x => arraym.chain( addmm(x) )(arrayOfMaybes))([[10]])
To help with this, you also might consider implementing lift for every monad transformer.
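For illustration, here is a minimal sketch of such a lift for the maybe transformer (the name liftMaybe is mine; it embeds a computation of the underlying monad by wrapping each of its values in a just, i.e. a singleton array):

// lift :: m a -> t m a
const liftMaybe = M => mx => M.map(maybe.of) (mx);

// addm returns a plain array; lifting it yields an arraym-returning function:
const addT = x => y => liftMaybe(array) (addm(x) (y));

arraym.chain(x => arraym.chain(addT(x)) (arrayOfMaybes)) ([[10]]);
// yields: [[11],[13]] as expected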

Related

Function composition of pluck

I'm learning functional programming in JS and I'm trying to write my own pluck.
const curry = (f, arr = []) => (...args) =>
  (a => (a.length === f.length ? f(...a) : curry(f, a)))([...arr, ...args]);
const map = curry((fn, arr) => arr.map(fn));
const pipe = (...fns) => x => fns.reduce((y, f) => f(y), x);
const prop = curry((key, obj) => obj[key]);
const pluck = pipe(prop, map);
But for some reason, pluck doesn't work. As far as I thought, this pluck would:
1. Call prop with the key I invoke pluck with.
2. prop, partially applied with the key, gets put as the callback into map, which is returned from pipe.
3. Then if I pass it an array, it should map over the array, applying prop with the key.
But instead of the plucked values I get an array containing a function:
pluck('foo')([{ foo: 'bar' }]); // [ƒ]
What am I doing wrong?
Because the built-in .map() passes three arguments (value, index, array) to its callback, curry collects more arguments than prop expects, the a.length === f.length check never succeeds, and you get back another curried function instead of a value. It's easy to fix by forwarding only the value:
const map = curry((fn, arr) => arr.map(v => fn(v)));
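With the one-argument wrapper in place, the curried prop receives exactly one argument per element and resolves to a value:

pluck('foo')([{ foo: 'bar' }]); // ['bar']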

Mimic Immutability by Generating Data Structures of the Same/Similar Shape (not the Same Type)

Lately I frequently end up with different data structures of the same shape just to avoid altering the existing ones, that is, in order to mimic immutability. To make it easier to distinguish the layers of nested data structures and to keep them consistent with each other, I wrap them in types that merely serve as "semantic wrappers". Here is the simplified pattern:
const ListX = xs =>
  ({runListX: xs, [Symbol.toStringTag]: "ListX"});

const ListY = xs =>
  ({runListY: xs, [Symbol.toStringTag]: "ListY"});

const foo = [ListX([ListY([1, 2, 3]), ListY([4, 5, 6])])];

// later I gather more data that is related to foo but instead of
// altering foo I just create a new data structure
const bar = [ListX([ListY(["a", "b", "c"]), ListY(["d", "e", "f"])])];

const fold = f => init => xs =>
  xs.reduce((acc, x, i) => f(acc) (x, i), init);

const combining = foo => bar =>
  fold(acc1 => (tx, i) =>
    fold(acc2 => (ty, j) =>
      fold(acc3 => (x, k) => {
        const y = bar[i].runListX[j].runListY[k];
        return (acc3.push(x + y), acc3);
      }) (acc2) (ty.runListY)) (acc1) (tx.runListX)) ([]) (foo);

console.log(
  combining(foo) (bar)); // ["1a","2b","3c","4d","5e","6f"]
It works for me in programs of, say, 1000 lines of code. However, I wonder whether it would still work in a larger code base. What are the long-term drawbacks of this approach (compared, say, to real immutability as provided by libraries like immutable.js)? Does this approach/pattern even have a name?
Please note that I am aware that this kind of question is probably too broad for SO. Still, it's all over my head.

foldr that doesn't crash and works reasonably well

I want a foldr that's similar to the foldr in Haskell or Lisp. My foldr causes a stack overflow on large arrays, probably because a large number of pending operations on the stack can't be reduced until it hits the base case. How would you optimize my foldr so that it works reasonably well for large arrays?
const foldr = (f, acc, [x, ...xs]) =>
  (typeof x === 'undefined')
    ? acc
    : f (x, foldr (f, acc, xs));

foldr((x, acc) => x + acc, 0, [...Array(100000).keys()]);
foldr is pretty nearly reduceRight:
const flip = f => (a, b) => f(b, a);

const foldr = (f, acc, arr) =>
  arr.reduceRight(flip(f), acc);
Replace arr.reduceRight with [...arr].reduceRight if you’d like to keep the support for arbitrary iterables that [x, ...xs] unpacking gives you.
console.log(foldr((x, acc) => x + acc, 0, [...Array(100000).keys()])); // 4999950000
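For example, to fold a generator you would spread it first (foldrIterable is an illustrative name):

function* nums() { yield 1; yield 2; yield 3; }

const foldrIterable = (f, acc, it) => [...it].reduceRight(flip(f), acc);

foldrIterable((x, acc) => x + acc, 0, nums()); // 6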
The problem is that the default list-like structure in JavaScript is the mutable array (not a true C-like array; it may internally be implemented as a tree), while functional languages like Haskell or Lisp use linked lists. With a linked list you can get the first element and the rest of the list without mutation in constant time. If you want to do the same in JavaScript without mutation, you have to create (allocate) a new array for the rest of the array.
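For comparison, here is a minimal cons-list sketch (the names cons, fromArray and foldrList are illustrative) in which head and tail are available in constant time, without copying:

// a linked list as nested pairs; nil marks the empty list
const cons = (head, tail) => ({ head, tail });
const nil = null;

const fromArray = arr =>
  arr.reduceRight((tail, head) => cons(head, tail), nil);

const foldrList = (f, acc, list) =>
  list === nil ? acc : f(list.head, foldrList(f, acc, list.tail));

foldrList((x, acc) => x + acc, 0, fromArray([1, 2, 3])); // 6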
However, foldr itself can be implemented with internal mutation, so that the function performs no mutation observable from the outside:
const foldr = (f, initialValue, arr) => {
  let value = initialValue;
  for (let i = arr.length - 1; i >= 0; i--) {
    value = f(arr[i], value);
  }
  return value;
};
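A quick check against the example from the question (the result is the sum 0 + 1 + ... + 99999):

foldr((x, acc) => x + acc, 0, [...Array(100000).keys()]); // 4999950000, no stack overflow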

How do I take a value out of a Maybe monad in ramda-fantasy?

I want to have a pipe that performs some operations on a Maybe and finally returns its value. Currently I am doing:
const data = Maybe(5);

pipe(
  map(add(1)),
  // ... other operations
  y => y.getOrElse([])
)(data);
Is there any cleaner way out?
The only improvement would be to create a pointfree helper function
const getOrElse = (defaultValue) => (m) => m.getOrElse(defaultValue);
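With that helper, the pipeline from the question becomes pointfree throughout (the elided operations stay elided):

pipe(
  map(add(1)),
  // ... other operations
  getOrElse([])
)(data);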

How to avoid intermediate results when performing array iterations?

When working with arrays, intermediate representations regularly arise, particularly in functional programming, where data is often treated as immutable:
const square = x => x * x;
const odd = x => (x & 1) === 1;

let xs = [1,2,3,4,5,6,7,8,9];

// unnecessary intermediate array:
xs.map(square).filter(odd); // [1,4,9,16,25,36,49,64,81] => [1,9,25,49,81]

// even worse:
xs.map(square).filter(odd).slice(0, 2); // [1,9]
How can I avoid this behavior in Javascript/Ecmascript 2015 to obtain more efficient iterative algorithms?
Transducers are one possible way to avoid intermediate results in iterative algorithms. In order to understand them better, you have to realize that transducers by themselves are rather pointless:
// map transducer
let map = tf => rf => acc => x => rf(acc)(tf(x));
Why should we pass a reducing function to map for each invocation when that required function is always the same, namely concat?
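To see this, fix the reducing function to a concat-style reducer (concatR is an illustrative name) and the map transducer collapses into the ordinary map:

const concatR = acc => x => acc.concat([x]);

// threading concatR through the transducer for every element:
[1, 2, 3].reduce((acc, x) => map(y => y * 2)(concatR)(acc)(x), []); // [2, 4, 6]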
The answer to this question lies in the official transducer definition:
Transducers are composable algorithmic transformations.
Transducers develop their expressive power only in conjunction with function composition:
const comp = f => g => x => f(g(x));
let xf = comp(filter(gt3))(map(inc));
foldL(xf(append))([])(xs);
comp composes the transducers (filter and map; nesting comp allows composing more), and the resulting transducer xf is applied to a single reduction function (append). From that, a transformation sequence is built that requires no intermediate arrays: each array element passes through the entire sequence before the next element is in line.
At this point, the definition of the map transducer is understandable: Composability requires matching function signatures.
Note that the order of evaluation of the transducer stack goes from left to right and is thus opposed to the normal order of function composition.
An important property of transducers is their ability to exit iterative processes early. In the chosen implementation, this behavior is achieved by implementing both transducers and foldL in continuation passing style. An alternative would be lazy evaluation. Here is the CPS implementation:
const foldL = rf => acc => xs => {
  return xs.length
    ? rf(acc)(xs[0])(acc_ => foldL(rf)(acc_)(xs.slice(1)))
    : acc;
};

// transducers
const map = tf => rf => acc => x => cont => rf(acc)(tf(x))(cont);

const filter = pred => rf => acc => x => cont =>
  pred(x) ? rf(acc)(x)(cont) : cont(acc);

const takeN = n => rf => acc => x => cont =>
  acc.length < n - 1 ? rf(acc)(x)(cont) : rf(acc)(x)(id);

// reducer
const append = xs => ys => xs.concat(ys);

// transformers
const inc = x => ++x;
const gt3 = x => x > 3;

const comp = f => g => x => f(g(x));
const liftC2 = f => x => y => cont => cont(f(x)(y));
const id = x => x;

let xs = [1,3,5,7,9,11];

let xf = comp(filter(gt3))(map(inc));
foldL(xf(liftC2(append)))([])(xs); // [6,8,10,12]

xf = comp(comp(filter(gt3))(map(inc)))(takeN(2));
foldL(xf(liftC2(append)))([])(xs); // [6,8]
Please note that this implementation is more a proof of concept than a full-blown solution. The obvious benefits of transducers are:

- no intermediate representations
- a purely functional and concise solution
- genericity (they work with any aggregate/collection, not just arrays)
- extraordinary code reusability (reducers/transformers are ordinary functions with their usual signatures)
Theoretically, CPS is as fast as imperative loops, at least under Ecmascript 2015, whose specification requires proper tail calls: all tail calls share the same return point and can thereby reuse the same stack frame (TCO). In practice, note that engine support for proper tail calls is still limited.
It is controversial whether this approach is idiomatic Javascript. I prefer this functional style. However, the most common transducer libraries are implemented in an object style and should look more familiar to OO developers.
