How to work with JavaScript Map without mutations

I'm working in a functional way in my JS project.
That also means I do not mutate object or array entities. Instead I always create a new instance and replace the old one.
e.g.
let obj = {a: 'aa', b: 'bb'}
obj = {...obj, b: 'some_new_value'}
The question is:
How to work in a functional (immutable) way with javascript Maps?
I guess I can use the following code to add values:
let map = new Map()
...
map = new Map(map).set(something)
But what about delete items?
I cannot do new Map(map).delete(something), because the result of .delete is a boolean.
P.S. I'm aware of the existence of ImmutableJS, but I don't want to use it because you are never 100% sure whether you are working with a plain JS object or with an ImmutableJS object (especially with nested structures). And because of its poor TypeScript support, btw.

I cannot do new Map(map).delete(something);, because the result of .delete is a boolean.
You can use an interstitial variable. You can farm it out to a function if you like:
function map_delete(old_map, key_to_delete) {
const new_map = new Map(old_map);
new_map.delete(key_to_delete);
return new_map;
}
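For example, a quick check of my own (not part of the original answer) that the copy-then-delete approach leaves the source untouched:
const old_map = new Map([["a", 1], ["b", 2]]);
const new_map = map_delete(old_map, "b");
console.log(old_map.size); // 2 (the original keeps its entry)
console.log(new_map.size); // 1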
Or you can get the entries in the map, filter them, and create a new one from the result:
const new_map = new Map( Array.from(old_map.entries()).filter( ([key, value]) => key !== something ) );

If you don't want to use a persistent map data structure, then you cannot get around mutations without conducting insanely inefficient shallow copies. Please note that mutations themselves aren't harmful, but only in conjunction with sharing the underlying mutable values.
If we are able to limit the way mutable values can be accessed, we can get safe mutable data types. They come at a cost, though: you cannot just use them as usual, and as a matter of fact they take some time to get familiar with. It's a trade-off.
Here is an example with the native Map:
// MUTABLE
const Mutable = clone => refType => // strict variant
record(Mutable, app(([o, initialCall, refType]) => {
o.mutable = {
run: k => {
o.mutable.run = _ => {
throw new TypeError("illegal subsequent inspection");
};
o.mutable.set = _ => {
throw new TypeError("illegal subsequent mutation");
};
return k(refType);
},
set: k => {
if (initialCall) {
initialCall = false;
refType = clone(refType);
}
k(refType);
return o;
}
}
return o;
}) ([{}, true, refType]));
const mutRun = k => o =>
o.mutable.run(k);
const mutSet = k => o =>
o.mutable.set(k);
// MAP
const mapClone = m => new Map(m);
const mapDelx = k => m => // safe-in-place-update variant
mutSet(m_ =>
m_.has(k)
? m_.delete(k)
: m_) (m);
const mapGet = k => m =>
m.get(k);
const mapSetx = k => v => // safe-in-place-update variant
mutSet(m_ => m_.set(k, v));
const mapUpdx = k => f => // safe-in-place-update variant
mutSet(m_ => m_.set(k, f(m_.get(k))));
const MutableMap = Mutable(mapClone);
// auxiliary functions
const record = (type, o) => (
o[Symbol.toStringTag] = type.name || type, o);
const app = f => x => f(x);
const id = x => x;
// MAIN
const m = MutableMap(new Map([[1, "foo"], [2, "bar"], [3, "baz"]]));
mapDelx(2) (m);
mapUpdx(3) (s => s.toUpperCase()) (m);
const m_ = mutRun(Array.from) (m);
console.log(m_); // [[1, "foo"], [3, "BAZ"]]
try {mapSetx(4) ("bat") (m)} // illegal subsequent mutation
catch (e) {console.log(e.message)}
try {mutRun(mapGet(1)) (m)} // illegal subsequent inspection
catch (e) {console.log(e.message)}
If you take a closer look at Mutable you see that it creates a shallow copy as well, but only once, initially. You can then conduct as many mutations as you want, until you inspect the mutable value for the first time.
You can find an implementation with several instances in my scriptum library. Here is a post with some more background information on the concept.
I borrowed the concept from Rust, where it is called ownership. The type-theoretical background is affine types, which are subsumed under linear types, in case you are interested.

roll your own data structure
Another option is to write your own map module that does not depend on JavaScript's native Map. This completely frees us from its mutable behaviours and prevents making full copies each time we wish to set, update, or del. This solution gives you full control and effectively demonstrates how to implement any data structure of your imagination -
// main.js
import { fromEntries, set, del, toString } from "./map.js"
const a =
[["d",3],["e",4],["g",6],["j",9],["b",1],["a",0],["i",8],["c",2],["h",7],["f",5]]
const m =
fromEntries(a)
console.log(1, toString(m))
console.log(2, toString(del(m, "b")))
console.log(3, toString(set(m, "c", "#")))
console.log(4, toString(m))
We wish for the expected output -
map m, the result of fromEntries(a)
derivative of map m with key "b" deleted
derivative of map m with key "c" updated to "#"
map m, unmodified from the above operations
1 (a, 0)->(b, 1)->(c, 2)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
2 (a, 0)->(c, 2)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
3 (a, 0)->(b, 1)->(c, #)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
4 (a, 0)->(b, 1)->(c, 2)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
Time to fulfill our wishes and implement our map module. We'll start by defining what it means to be an empty map -
// map.js
const empty =
Symbol("map.empty")
const isEmpty = t =>
t === empty
Next we need a way to insert our entries into the map. This calls into existence fromEntries, set, update, and node -
// map.js (continued)
const fromEntries = a =>
a.reduce((t, [k, v]) => set(t, k, v), empty)
const set = (t, k, v) =>
update(t, k, _ => v)
const update = (t, k, f) =>
isEmpty(t)
? node(k, f())
: k < t.key
? node(t.key, t.value, update(t.left, k, f), t.right)
: k > t.key
? node(t.key, t.value, t.left, update(t.right, k, f))
: node(k, f(t.value), t.left, t.right)
const node = (key, value, left = empty, right = empty) =>
({ key, value, left, right })
Next we'll define a way to get a value from our map -
// map.js (continued)
const get = (t, k) =>
isEmpty(t)
? undefined
: k < t.key
? get(t.left, k)
: k > t.key
? get(t.right, k)
: t.value
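Note that update doubles as insert-or-modify: when the key is absent, f is called with no argument, so a default parameter can supply a seed value. A small sketch of my own (bump is not part of the module) -
// count occurrences; f receives no argument for a missing key
const bump = (t, k) => update(t, k, (n = 0) => n + 1)
const hits = bump(bump(bump(empty, "a"), "b"), "a")
console.log(get(hits, "a")) // 2
console.log(get(hits, "b")) // 1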
And now we'll define a way to delete an entry from our map, which also calls into existence concat -
// map.js (continued)
const del = (t, k) =>
isEmpty(t)
? t
: k < t.key
? node(t.key, t.value, del(t.left, k), t.right)
: k > t.key
? node(t.key, t.value, t.left, del(t.right, k))
: concat(t.left, t.right)
const concat = (l, r) =>
isEmpty(l)
? r
: isEmpty(r)
? l
: r.key < l.key
? node(l.key, l.value, concat(l.left, r), l.right)
: r.key > l.key
? node(l.key, l.value, l.left, concat(l.right, r))
: r
Finally we provide a way to visualize the map using toString, which calls into existence inorder. As a bonus, we'll throw in toArray -
// map.js (continued)
const toString = (t) =>
Array.from(inorder(t), ([ k, v ]) => `(${k}, ${v})`).join("->")
function* inorder(t)
{ if (isEmpty(t)) return
yield* inorder(t.left)
yield [ t.key, t.value ]
yield* inorder(t.right)
}
const toArray = (t) =>
Array.from(inorder(t))
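A quick check of my own that deleting an interior key splices its subtrees back together via concat -
const t = fromEntries([["b", 1], ["a", 0], ["c", 2]])
console.log(toString(del(t, "b"))) // (a, 0)->(c, 2)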
Export the module's features -
// map.js (continued)
export { empty, isEmpty, fromEntries, get, set, update, del, concat, inorder, toArray, toString }
low hanging fruit
Your Map module is finished but there are some valuable features we can add without requiring much effort. Below we implement preorder and postorder map traversals. Additionally we add a second parameter to toString and toArray that allows you to choose which traversal to use. inorder is used by default -
// map.js (continued)
function* preorder(t)
{ if (isEmpty(t)) return
yield [ t.key, t.value ]
yield* preorder(t.left)
yield* preorder(t.right)
}
function* postorder(t)
{ if (isEmpty(t)) return
yield* postorder(t.left)
yield* postorder(t.right)
yield [ t.key, t.value ]
}
const toArray = (t, f = inorder) =>
Array.from(f(t))
const toString = (t, f = inorder) =>
Array.from(f(t), ([ k, v ]) => `(${k}, ${v})`).join("->")
export { ..., preorder, postorder }
And we can extend fromEntries to accept any iterable, not just arrays. This matches the functionality of Object.fromEntries and Array.from -
// map.js (continued)
function fromEntries(it)
{ let r = empty
for (const [k, v] of it)
r = set(r, k, v)
return r
}
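For example (my own quick check), any iterable of [key, value] pairs now works, including a native Map -
const t = fromEntries(new Map([["b", 2], ["a", 1]]))
console.log(toString(t)) // (a, 1)->(b, 2)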
Like we did above, we can add a second parameter which allows us to specify how the entries are added into the map. Now it works just like Array.from. Why Object.fromEntries doesn't have this behaviour is puzzling to me. Array.from is smart. Be like Array.from -
// map.js (continued)
function fromEntries(it, f = v => v)
{ let r = empty
let k, v
for (const e of it)
( [k, v] = f(e)
, r = set(r, k, v)
)
return r
}
// main.js
import { fromEntries, toString } from "./map.js"
const a =
[["d",3],["e",4],["g",6],["j",9],["b",1],["a",0],["i",8],["c",2],["h",7],["f",5]]
const z =
fromEntries(a, ([ k, v ]) => [ k.toUpperCase(), v * v ])
console.log(toString(z))
(A, 0)->(B, 1)->(C, 4)->(D, 9)->(E, 16)->(F, 25)->(G, 36)->(H, 49)->(I, 64)->(J, 81)
demo
Expand the snippet below to verify the results of our Map module in your own browser -
// map.js
const empty =
Symbol("map.empty")
const isEmpty = t =>
t === empty
const node = (key, value, left = empty, right = empty) =>
({ key, value, left, right })
const fromEntries = a =>
a.reduce((t, [k, v]) => set(t, k, v), empty)
const get = (t, k) =>
isEmpty(t)
? undefined
: k < t.key
? get(t.left, k)
: k > t.key
? get(t.right, k)
: t.value
const set = (t, k, v) =>
update(t, k, _ => v)
const update = (t, k, f) =>
isEmpty(t)
? node(k, f())
: k < t.key
? node(t.key, t.value, update(t.left, k, f), t.right)
: k > t.key
? node(t.key, t.value, t.left, update(t.right, k, f))
: node(k, f(t.value), t.left, t.right)
const del = (t, k) =>
isEmpty(t)
? t
: k < t.key
? node(t.key, t.value, del(t.left, k), t.right)
: k > t.key
? node(t.key, t.value, t.left, del(t.right, k))
: concat(t.left, t.right)
const concat = (l, r) =>
isEmpty(l)
? r
: isEmpty(r)
? l
: r.key < l.key
? node(l.key, l.value, concat(l.left, r), l.right)
: r.key > l.key
? node(l.key, l.value, l.left, concat(l.right, r))
: r
function* inorder(t)
{ if (isEmpty(t)) return
yield* inorder(t.left)
yield [ t.key, t.value ]
yield* inorder(t.right)
}
const toArray = (t) =>
Array.from(inorder(t))
const toString = (t) =>
Array.from(inorder(t), ([ k, v ]) => `(${k}, ${v})`).join("->")
// main.js
const a =
[["d",3],["e",4],["g",6],["j",9],["b",1],["a",0],["i",8],["c",2],["h",7],["f",5]]
const m =
fromEntries(a)
console.log(1, toString(m))
console.log(2, toString(del(m, "b")))
console.log(3, toString(set(m, "c", "#")))
console.log(4, toString(m))
console.log(5, get(set(m, "z", "!"), "z"))

functional module
Here's my little implementation of a persistent map module -
// map.js
import { effect } from "./func.js"
const empty = _ =>
new Map
const update = (t, k, f) =>
fromEntries(t).set(k, f(get(t, k)))
const set = (t, k, v) =>
update(t, k, _ => v)
const get = (t, k) =>
t.get(k)
const del = (t, k) =>
effect(t => t.delete(k))(fromEntries(t))
const fromEntries = a =>
new Map(a)
export { del, empty, fromEntries, get, set, update }
// func.js
const effect = f => x =>
(f(x), x)
// ...
export { effect, ... }
// main.js
import { fromEntries, del, set } from "./map.js"
const q =
fromEntries([["a",1], ["b",2]])
console.log(1, q)
console.log(2, del(q, "b"))
console.log(3, set(q, "c", 3))
console.log(4, q)
Expand the snippet below to verify the results in your own browser -
const effect = f => x =>
(f(x), x)
const empty = _ =>
new Map
const update = (t, k, f) =>
fromEntries(t).set(k, f(get(t, k)))
const set = (t, k, v) =>
update(t, k, _ => v)
const get = (t, k) =>
t.get(k)
const del = (t, k) =>
effect(t => t.delete(k))(fromEntries(t))
const fromEntries = a =>
new Map(a)
const q =
fromEntries([["a", 1], ["b", 2]])
console.log(1, q)
console.log(2, del(q, "b"))
console.log(3, set(q, "c", 3))
console.log(4, q)
1 Map(2) {a => 1, b => 2}
2 Map(1) {a => 1}
3 Map(3) {a => 1, b => 2, c => 3}
4 Map(2) {a => 1, b => 2}
object-oriented interface
If you want to use it in an object-oriented way, you can add a class wrapper around our plain functions. Here we call it Mapping because we don't want to clobber the native Map -
// map.js (continued)
class Mapping
{ constructor(t) { this.t = t }
update(k,f) { return new Mapping(update(this.t, k, f)) }
set(k,v) { return new Mapping(set(this.t, k, v)) }
get(k) { return get(this.t, k) }
del(k) { return new Mapping(del(this.t, k)) }
static empty () { return new Mapping(empty()) }
static fromEntries(a) { return new Mapping(fromEntries(a)) }
}
export default Mapping
// main.js
import Mapping from "./map"
const q =
Mapping.fromEntries([["a", 1], ["b", 2]]) // <- OOP class method
console.log(1, q)
console.log(2, q.del("b")) // <- OOP instance method
console.log(3, q.set("c", 3)) // <- OOP instance method
console.log(4, q)
Even though we're calling through an OOP interface, our data structure still behaves persistently. No mutable state is used -
1 Mapping { t: Map(2) {a => 1, b => 2} }
2 Mapping { t: Map(1) {a => 1} }
3 Mapping { t: Map(3) {a => 1, b => 2, c => 3} }
4 Mapping { t: Map(2) {a => 1, b => 2} }
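And since every method returns a fresh Mapping, calls chain naturally. A quick check of my own -
const r = Mapping.fromEntries([["a", 1]]).set("b", 2).del("a")
console.log(r) // Mapping { t: Map(1) {b => 2} }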

Related

Emulating lisp list operations in js

What would be the proper way to define the cons, car, and cdr functions if we were to use javascript and functional programming? So far I have:
// CAR = λxy.x
// CDR = λxy.y
// CONS = λxy => ?
const car = a => b => a;
const cdr = a => b => b;
const cons = a => b => f => f(a)(b);
let pair = cons(3)(4);
console.log(pair(car));
console.log(pair(cdr));
But my issue is that pair(car) seems like an odd way to invoke the function. It seems like car(pair) would be a better way, but I'm not sure how to save the pair so it can be passed like that. How would that be done?
You can write them using continuations -
const cons = (a,b) => k => k(a,b)
const car = k => k((a,b) => a)
const cdr = k => k((a,b) => b)
const l = cons(1, cons(2, cons(3, null)))
console.log(car(l), car(cdr(l)), car(cdr(cdr(l))), cdr(cdr(cdr(l))))
// 1 2 3 null
Define more like list, toString, map, and filter -
const cons = (a,b) => k => k(a,b)
const car = k => k((a,b) => a)
const cdr = k => k((a,b) => b)
const list = (v, ...vs) => v == null ? null : cons(v, list(...vs))
const toString = l => l == null ? "Ø" : car(l) + "->" + toString(cdr(l))
console.log(toString(list(1,2,3,4,5)))
// 1->2->3->4->5->Ø
const square = x => x * x
const map = (f, l) =>
l == null
? null
: cons(f(car(l)), map(f, cdr(l)))
console.log(toString(map(square, list(1,2,3,4,5))))
// 1->4->9->16->25->Ø
const isOdd = x => x & 1
const filter = (f, l) =>
l == null
? null
: Boolean(f(car(l)))
? cons(car(l), filter(f, cdr(l)))
: filter(f, cdr(l))
console.log(toString(filter(isOdd, map(square, list(1,2,3,4,5)))))
// 1->9->25->Ø
Note you could just as easily write cons, car, and cdr abstractions using an array. Lambda is more fun but anything that fulfills the contract is acceptable. Being able to change the underlying representation like this and not require changes in other parts of your program is what makes data abstraction a powerful technique.
const cons = (a,b) => [a,b]
const car = k => k[0]
const cdr = k => k[1]
const l = cons(1, cons(2, cons(3, null)))
console.log(car(l), car(cdr(l)), car(cdr(cdr(l))), cdr(cdr(cdr(l))))
// 1 2 3 null
The following would be one way to define it, where car and cdr take one argument (a pair) instead of two:
// CAR = λxy.x or λp.p[0] where p is xy and p[0] means the first argument (x)
// CDR = λxy.y or λp.p[1] where p is xy and p[1] means the second argument (y)
// CONS = λxyf.xyf -- basically just saving x and y in a closure and deferring a function call
const first = a => b => a;
const second = a => b => b;
const getFirstFromPair = p => p(first), car = getFirstFromPair;
const getSecondFromPair = p => p(second), cdr = getSecondFromPair;
const cons = a => b => f => f(a)(b);
let pair = cons(3)(4);
console.log(car(pair));
console.log(cdr(pair));
Or, if we allow a pair as a single argument, such as (1,2):
const cons = (a,b) => f => f(a,b); // f is like a fake function call to do encapsulation
const car = f => f((a,b) => a); // and we undo the deferred call here by passing it a selector for the first argument
const cdr = f => f((a,b) => b);
console.log(cdr(cons(1,2)));

How to encode corecursion/codata in a strictly evaluated setting?

Corecursion means calling oneself on data at each iteration that is greater than or equal to what one had before. Corecursion works on codata, which are recursively defined values. Unfortunately, value recursion is not possible in strictly evaluated languages. We can work with explicit thunks though:
const Defer = thunk =>
({get runDefer() {return thunk()}})
const app = f => x => f(x);
const fibs = app(x_ => y_ => {
const go = x => y =>
Defer(() =>
[x, go(y) (x + y)]);
return go(x_) (y_).runDefer;
}) (1) (1);
const take = n => codata => {
const go = ([x, tx], acc, i) =>
i === n
? acc
: go(tx.runDefer, acc.concat(x), i + 1);
return go(codata, [], 0);
};
console.log(
take(10) (fibs));
While this works as expected the approach seems awkward. Especially the hideous pair tuple bugs me. Is there a more natural way to deal with corecursion/codata in JS?
I would encode the thunk within the data constructor itself. For example, consider:
// whnf :: Object -> Object
const whnf = obj => {
for (const [key, val] of Object.entries(obj)) {
if (typeof val === "function" && val.length === 0) {
Object.defineProperty(obj, key, {
get: () => Object.defineProperty(obj, key, {
value: val()
})[key]
});
}
}
return obj;
};
// empty :: List a
const empty = null;
// cons :: (a, List a) -> List a
const cons = (head, tail) => whnf({ head, tail });
// fibs :: List Int
const fibs = cons(0, cons(1, () => next(fibs, fibs.tail)));
// next :: (List Int, List Int) -> List Int
const next = (xs, ys) => cons(xs.head + ys.head, () => next(xs.tail, ys.tail));
// take :: (Int, List a) -> List a
const take = (n, xs) => n === 0 ? empty : cons(xs.head, () => take(n - 1, xs.tail));
// toArray :: List a -> [a]
const toArray = xs => xs === empty ? [] : [ xs.head, ...toArray(xs.tail) ];
// [0,1,1,2,3,5,8,13,21,34]
console.log(toArray(take(10, fibs)));
This way, we can encode laziness in weak head normal form. The advantage is that the consumer has no idea whether a particular field of the given data structure is lazy or strict, and it doesn't need to care.
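One more property worth noting (my own check, not part of the answer above): the getter installed by whnf replaces itself with a plain data property on first access, so each thunk runs at most once -
let calls = 0;
const xs = cons(0, () => (calls += 1, cons(1, empty)));
xs.tail; // forces the thunk and memoizes the result
xs.tail; // reads the memoized data property
console.log(calls); // 1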

Similar curry functions producing different results

I am learning functional JavaScript and I came across two different implementations of the curry function. I am trying to understand the difference between the two: they seem similar, yet one works incorrectly for some cases and correctly for others.
I have tried interchanging the functions. The one defined using the es6 'const'
works for simple cases, but when using 'filter' to filter strings the results are incorrect, while with integers it produces the desired results.
//es6
//Does not work well with filter when filtering strings
//but works correctly with numbers
const curry = (fn, initialArgs=[]) => (
(...args) => (
a => a.length === fn.length ? fn(...a) : curry(fn, a)
)([...initialArgs, ...args])
);
//Regular js
//Works well for all cases
function curry(fn) {
const arity = fn.length;
return function $curry(...args) {
if (args.length < arity) {
return $curry.bind(null, ...args);
}
return fn.call(null, ...args);
};
}
const match = curry((pattern, s) => s.match(pattern));
const filter = curry((f, xs) => xs.filter(f));
const hasQs = match(/q/i);
const filterWithQs = filter(hasQs);
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]));
//Output:
//es6:
[ 'hello', 'quick', 'sand', 'qwerty', 'quack' ]
//regular:
[ 'quick', 'qwerty', 'quack' ]
If you change filter to use xs.filter(x => f(x)) instead of xs.filter(f) it will work -
const filter = curry((f, xs) => xs.filter(x => f(x)))
// ...
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]))
// => [ 'quick', 'qwerty', 'quack' ]
The reason for this is because Array.prototype.filter passes three (3) arguments to the "callback" function,
callback - Function is a predicate, to test each element of the array. Return true to keep the element, false otherwise. It accepts three arguments:
element - The current element being processed in the array.
index (Optional) - The index of the current element being processed in the array.
array (Optional) - The array filter was called upon.
The f you are using in filter is match(/q/i), and so when it is called by Array.prototype.filter, you are getting three (3) extra arguments instead of the expected one (1). In the context of curry, that means a.length will be four (4), and since 4 === fn.length is false (where fn.length is 2), the returned value is curry(fn, a), which is another function. Since all functions are considered truthy values in JavaScript, the filter call returns all of the input strings.
// your original code:
xs.filter(f)
// is equivalent to:
xs.filter((elem, index, arr) => f(elem, index, arr))
By changing filter to use ...filter(x => f(x)), we only allow one (1) argument to be passed to the callback, and so curry will evaluate 2 === 2, which is true, and the return value is the result of evaluating match, which returns the expected true or false.
// the updated code:
xs.filter(x => f(x))
// is equivalent to:
xs.filter((elem, index, arr) => f(elem))
An alternative, and probably better option, is to change the === to >= in your "es6" curry -
const curry = (fn, initialArgs=[]) => (
(...args) => (
a => a.length >= fn.length ? fn(...a) : curry(fn, a)
)([...initialArgs, ...args])
)
// ...
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]))
// => [ 'quick', 'qwerty', 'quack' ]
This allows you to "overflow" function parameters "normally", which JavaScript has no problem with -
const foo = (a, b, c) => // has only three (3) parameters
console.log(a + b + c)
foo(1,2,3,4,5) // called with five (5) args
// still works
// => 6
Lastly, here are some other ways I've written curry in the past. I've tested that each of them produces the correct output for your problem -
by auxiliary loop -
const curry = f => {
const aux = (n, xs) =>
n === 0 ? f (...xs) : x => aux (n - 1, [...xs, x])
return aux (f.length, [])
}
versatile curryN, works with variadic functions -
const curryN = n => f => {
const aux = (n, xs) =>
n === 0 ? f (...xs) : x => aux (n - 1, [...xs, x])
return aux (n, [])
};
// curry derived from curryN
const curry = f => curryN (f.length) (f)
spreads for days -
const curry = (f, ...xs) => (...ys) =>
f.length > xs.length + ys.length
? curry (f, ...xs, ...ys)
: f (...xs, ...ys)
homage to the lambda calculus and Haskell Curry's fixed-point Y-combinator -
const U =
f => f (f)
const Y =
U (h => f => f (x => U (h) (f) (x)))
const curryN =
Y (h => xs => n => f =>
n === 0
? f (...xs)
: x => h ([...xs, x]) (n - 1) (f)
) ([])
const curry = f =>
curryN (f.length) (f)
and my personal favourites -
// for binary (2-arity) functions
const curry2 = f => x => y => f (x, y)
// for ternary (3-arity) functions
const curry3 = f => x => y => z => f (x, y, z)
// for arbitrary arity
const partial = (f, ...xs) => (...ys) => f (...xs, ...ys)
Lastly, a fun twist on @Donat's answer that enables anonymous recursion -
const U =
f => f (f)
const curry = fn =>
U (r => (...args) =>
args.length < fn.length
? U (r) .bind (null, ...args)
: fn (...args)
)
The main difference here is not the es6 syntax but how the arguments are partially applied to the function.
First version: curry(fn, a)
Second version: $curry.bind(null, ...args)
The first (es6) version works for only one step of currying (as needed in your example) if you change curry(fn, a) to fn.bind(null, ...args)
The representation of the "Regular js" version in es6 syntax would look like this (you need the constant to have a name for the function in the recursive call):
const curry = (fn) => {
const c = (...args) => (
args.length < fn.length ? c.bind(null, ...args) : fn(...args)
);
return c;
}
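A quick check of my own that this version handles the original filter example -
const match = curry((pattern, s) => s.match(pattern));
const filter = curry((f, xs) => xs.filter(f));
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]));
// => [ 'quick', 'qwerty', 'quack' ]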

Extracting data from a function chain without arrays

This is an advanced topic of
How to store data of a functional chain of Monoidal List?
I am pretty sure we can somehow extract data from a function chain without using an array storing data.
The basic structure is :
L = a => L
very simple, but this structure generates a list:
L(1)(2)(3)(4)(5)(6)()
This may be related to
What is a DList?
, but this structure strictly depends on function chain only.
So, what is the way to pull out the whole values?
My current achievement merely pulls out the head and tail, and I don't know how to fix this.
EDIT:
I forgot to mention that what I am trying to do is a
List.fold(f) / reduce(f)
operation.
So, if one chooses f as Array.concat, that means you can extract the data as an array, but fold is not limited to array concatenation: f can be add/sum etc.
So, currently, to visualize the internal behavior, I write log as f.
EDIT2
I must clarify more. The specification can be presented:
const f = (a) => (b) => a + b;//binary operation
A(a)(b)(f) = f(a)(b) // a + b
A(a)(b)(c)(f) = f(f(a)(b))(c) // a + b + c
So this is exactly
[a, b, c].reduce(f)
thing, and when
f = (a) => (b) => a.concat(b)
The result would be [a, b, c].
Array.concat is merely a member of generalized binary operations f.
At first this challenge looked easy for my skill level, but it turned out hard, and I felt it better to ask smarter coders.
Thanks.
const A = a => {
const B = b => (b === undefined)
? (() => {
log("a " + a);
return A();
})()
: c => (c === undefined)
? (() => {
log("b " + b);
return B()();
})()
: B;
return B;
};
A(1)(2)(3)(4)(5)(6)()
function log(m) {
console.log((m)); //IO
return m;
};
result:
b 6
a 1
a undefined
Quite the series of questions you have here. Here's my take on it:
We start with a way to construct lists
nil is a constant which represents the empty list
cons (x, list) constructs a new list with x added to the front of list
// nil : List a
const nil =
(c, n) => n
// cons : (a, List a) -> List a
const cons = (x, y = nil) =>
(c, n) => c (y (c, n), x)
// list : List Number
const myList =
cons (1, cons (2, cons (3, cons (4, nil))))
console.log (myList ((x, y) => x + y, 0))
// 10
And to satisfy your golfy variadic curried interface, here is autoCons
const autoCons = (init, n) =>
{ const loop = acc => (x, n) =>
isFunction (x)
? acc (x, n)
: loop (cons (x, acc))
return loop (nil) (init, n)
}
const isFunction = f =>
f != null && f.constructor === Function && f.length === 2
const nil =
(c, n) => n
const cons = (x, y = nil) =>
(c, n) => c (y (c, n), x)
console.log
( autoCons (1) ((x,y) => x + y, 0) // 1
, autoCons (1) (2) ((x,y) => x + y, 0) // 3
, autoCons (1) (2) (3) ((x,y) => x + y, 0) // 6
, autoCons (1) (2) (3) (4) ((x,y) => x + y, 0) // 10
)
Our encoding makes it possible to write other generic list functions, like isNil
// isNil : List a -> Bool
const isNil = l =>
l ((acc, _) => false, true)
console.log
( isNil (autoCons (1)) // false
, isNil (autoCons (1) (2)) // false
, isNil (nil) // true
)
Or like length
// length : List a -> Int
const length = l =>
l ((acc, _) => acc + 1, 0)
console.log
( length (nil) // 0
, length (autoCons (1)) // 1
, length (autoCons (1) (2)) // 2
, length (autoCons (1) (2) (3)) // 3
)
Or nth which fetches the nth item in the list
// nth : Int -> List a -> a
const nth = n => l =>
l ( ([ i, res ], x) =>
i === n
? [ i + 1, x ]
: [ i + 1, res]
, [ 0, undefined ]
) [1]
console.log
( nth (0) (autoCons ("A") ("B") ("C")) // "A"
, nth (1) (autoCons ("A") ("B") ("C")) // "B"
, nth (2) (autoCons ("A") ("B") ("C")) // "C"
, nth (3) (autoCons ("A") ("B") ("C")) // undefined
)
We can implement functions like map and filter for our list
// map : (a -> b) -> List a -> List b
const map = f => l =>
l ( (acc, x) => cons (f (x), acc)
, nil
)
// filter : (a -> Bool) -> List a -> List a
const filter = f => l =>
l ( (acc, x) => f (x) ? cons (x, acc) : acc
, nil
)
We can even make a program using our list which takes a list as an argument
// rcomp : (a -> b) -> (b -> c) -> a -> c
const rcomp = (f, g) =>
x => g (f (x))
// main : List String -> String
const main = letters =>
autoCons (map (x => x + x))
(filter (x => x !== "dd"))
(map (x => x.toUpperCase()))
(rcomp, x => x)
(letters)
((x, y) => x + y, "")
main (autoCons ("a") ("b") ("c") ("d") ("e"))
// AABBCCEE
Run the program in your browser below
const nil =
(c, n) => n
const cons = (x, y = nil) =>
(c, n) => c (y (c, n), x)
const isFunction = f =>
f != null && f.constructor === Function && f.length === 2
const autoCons = (init, n) =>
{ const loop = acc => (x, n) =>
isFunction (x)
? acc (x, n)
: loop (cons (x, acc))
return loop (nil) (init, n)
}
const map = f => l =>
l ( (acc, x) => cons (f (x), acc)
, nil
)
const filter = f => l =>
l ( (acc, x) => f (x) ? cons (x, acc) : acc
, nil
)
const rcomp = (f, g) =>
x => g (f (x))
const main = letters =>
autoCons (map (x => x + x))
(filter (x => x !== "dd"))
(map (x => x.toUpperCase()))
(rcomp, x => x)
(letters)
((x, y) => x + y, "")
console.log (main (autoCons ("a") ("b") ("c") ("d") ("e")))
// AABBCCEE
Sorry, my bad
Let's rewind and look at our initial List example
// list : List Number
const myList =
cons (1, cons (2, cons (3, cons (4, nil))))
console.log
( myList ((x, y) => x + y, 0) // 10
)
We conceptualize myList as a list of numbers, but we contradict ourselves by calling myList (...) like a function.
This is my fault. In trying to simplify the example, I crossed the barrier of abstraction. Let's look at the types of nil and cons –
// nil : List a
// cons : (a, List a) -> List a
Given a list of type List a, how do we get a value of type a out? In the example above (repeated below) we cheat by calling myList as a function. This is internal knowledge that only the implementer of nil and cons should know
// myList is a list, not a function... this is confusing...
console.log
( myList ((x, y) => x + y, 0) // 10
)
If you look back at our original implementation of List,
// nil : List a
const nil =
(c, n) => n
// cons : (a, List a) -> List a
const cons = (x, y = nil) =>
(c, n) => c (y (c, n), x)
I also cheated by giving you simplified type annotations like List a. What is List, exactly?
We're going to address all of this and it starts with our implementation of List
List, take 2
Below nil and cons have the exact same implementation. I've only fixed the type annotations. Most importantly, I added reduce which provides a way to get values "out" of our list container.
The type annotation for List is updated to List a r – this can be understood as "a list containing values of type a that when reduced, will produce a value of type r."
// type List a r = (r, a) -> r
// nil : List a r
const nil =
(c, n) => n
// cons : (a, List a r) -> List a r
const cons = (x, y = nil) =>
(c, n) => c (y (c, n), x)
// reduce : ((r, a) -> r, r) -> List a r -> r
const reduce = (f, init) => l =>
l (f, init)
Now we can maintain List as a sane type, and push all the wonky behavior you want into the autoCons function. Below we update autoCons to work with our list acc using our new reduce function
const autoCons = (init, n) =>
{ const loop = acc => (x, n) =>
isFunction (x)
// extract the value using our list module's reduce, instead of
// calling acc (x, n) and breaking the abstraction barrier
? reduce (x, n) (acc)
: loop (cons (x, acc))
return loop (nil) (init, n)
}
So speaking of types, let's examine the type of autoCons –
autoCons (1) // "lambda (x,n) => isFunction (x) ...
autoCons (1) (2) // "lambda (x,n) => isFunction (x) ...
autoCons (1) (2) (3) // "lambda (x,n) => isFunction (x) ...
autoCons (1) (2) (3) (add, 0) // 6
Well autoCons always returns a lambda, but that lambda has a type that we cannot determine – sometimes it returns another lambda of its same kind, other times it returns a completely different result; in this case some number, 6
Because of this, we cannot easily mix and combine autoCons expressions with other parts of our program. If you drop this perverse drive to create variadic curried interfaces, you can make an autoCons that is type-able
// autoCons : (...a) -> List a r
const autoCons = (...xs) =>
{ const loop = (acc, x = nil, ...xs) =>
x === nil
? acc
: loop (cons (x, acc), ...xs)
return loop (nil, ...xs)
}
Because autoCons now returns a known List (instead of the mystery unknown type caused by variadic currying), we can plug an autoCons list into the various other functions provided by our List module.
const c =
autoCons (1, 2, 3)
const d =
autoCons (4, 5, 6)
console.log
( toArray (c) // [ 1, 2, 3 ]
, toArray (map (x => x * x) (d)) // [ 16, 25, 36 ]
, toArray (filter (x => x != 5) (d)) // [ 4, 6 ]
, toArray (append (c, d)) // [ 1, 2, 3, 4, 5, 6 ]
)
This kind of mix-and-combine expression is not possible when autoCons returns a type we cannot rely upon. Another important thing to notice is that the List module gives us a place to expand its functionality. We can easily add functions used above like map, filter, append, and toArray - you lose this flexibility when trying to shove everything through the variadic curried interface
Let's look at those additions to the List module now – as you can see, each function is well-typed and has behavior we can rely upon
// type List a r = (r, a) -> r
// nil : List a r
// cons : (a, List a r) -> List a r
// reduce : ((r, a) -> r, r) -> List a r -> r
// length : List a r -> Int
const length =
reduce
( (acc, _) => acc + 1
, 0
)
// map : (a -> b) -> List a r -> List b r
const map = f =>
reduce
( (acc, x) => cons (f (x), acc)
, nil
)
// filter : (a -> Bool) -> List a r -> List a r
const filter = f =>
reduce
( (acc,x) => f (x) ? cons (x, acc) : acc
, nil
)
// append : (List a r, List a r) -> List a r
const append = (l1, l2) =>
(c, n) =>
l2 (c, l1 (c, n))
// toArray : List a r -> Array a
const toArray =
reduce
( (acc, x) => [ ...acc, x ]
, []
)
Even autoCons makes sense as part of our module now
// autoCons : (...a) -> List a r
const autoCons = (...xs) =>
{ const loop = (acc, x = nil, ...xs) =>
x === nil
? acc
: loop (cons (x, acc), ...xs)
return loop (nil, ...xs)
}
Add any other functions to the List module
// nth: Int -> List a r -> a
// isNil : List a r -> Bool
// first : List a r -> a
// rest : List a r -> List a r
// ...
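For instance, first can be derived from reduce alone. This is a sketch of mine, with the caveat that it cannot distinguish an empty list from one whose first element is undefined -
// first : List a r -> a
// keeps the earliest element the fold sees
const first =
reduce
( (acc, x) => acc === undefined ? x : acc
, undefined
)
console.log (first (autoCons (7, 8, 9))) // 7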
Given an expression like A(a)(b)(f) where f is a function, it's impossible to know whether f is supposed to be added to the list or whether it's the reducing function. Hence, I'm going to describe how to write expressions like A(a)(b)(f, x) which is equivalent to [a, b].reduce(f, x). This allows us to distinguish when the list ends depending upon how many arguments you provide:
const L = g => function (x, a) {
switch (arguments.length) {
case 1: return L(k => g((f, a) => k(f, f(a, x))));
case 2: return g((f, a) => a)(x, a);
}
};
const A = L(x => x);
const xs = A(1)(2)(3)(4)(5);
console.log(xs((x, y) => x + y, 0)); // 15
console.log(xs((x, y) => x * y, 1)); // 120
console.log(xs((a, x) => a.concat(x), [])); // [1,2,3,4,5]
It works due to continuations. Every time we add a new element, we accumulate a CPS function. Each CPS function calls the previous CPS function, thereby creating a CPS function chain. When we give this CPS function chain a base function, it unrolls the chain and allows us to reduce it. It's the same idea behind transducers and lenses.
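To see the unrolling concretely, here is my own annotation of the snippet above: a three-element chain collapses into a left fold -
// A(1)(2)(3)(f, s) unrolls to f(f(f(s, 1), 2), 3)
const add = (x, y) => x + y;
console.log(A(1)(2)(3)(add, 0) === add(add(add(0, 1), 2), 3)); // true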
Edit: user633183's solution is brilliant. It uses the Church encoding of lists using right folds to alleviate the need for continuations, resulting in simpler code which is easy to understand. Here's her solution, modified to make foldr seem like foldl:
const L = g => function (x, a) {
switch (arguments.length) {
case 1: return L((f, a) => f(g(f, a), x));
case 2: return g(x, a);
}
};
const A = L((f, a) => a);
const xs = A(1)(2)(3)(4)(5);
console.log(xs((x, y) => x + y, 0)); // 15
console.log(xs((x, y) => x * y, 1)); // 120
console.log(xs((a, x) => a.concat(x), [])); // [1,2,3,4,5]
Here g is the Church encoded list accumulated so far. Initially, it's the empty list. Calling g folds it from the right. However, we also build the list from the right. Hence, it seems like we're building the list and folding it from the left because of the way we write it.
If all these functions are confusing you, what user633183 is really doing is:
const L = g => function (x, a) {
switch (arguments.length) {
case 1: return L([x].concat(g));
case 2: return g.reduceRight(x, a);
}
};
const A = L([]);
const xs = A(1)(2)(3)(4)(5);
console.log(xs((x, y) => x + y, 0)); // 15
console.log(xs((x, y) => x * y, 1)); // 120
console.log(xs((a, x) => a.concat(x), [])); // [1,2,3,4,5]
As you can see, she is building the list backwards and then using reduceRight to fold the backwards list backwards. Hence, it looks like you're building and folding the list forwards.
I'll have to admit I haven't read through your linked questions and I'm mainly here for the fun puzzle... but does this help in any way?
I figured you want to differentiate between adding an element (calling with a new value) and running a function on the list (calling with a function). Since I had to somehow pass the function to run, I couldn't get the (1) vs () syntax to work.
This uses an interface that returns an object with concat to extend the list, and fold to run a reducer on the list. Again, not sure if it's a complete answer, but it might help you explore other directions.
const Empty = Symbol();
const L = (x, y = Empty) => ({
concat: z => L(z, L(x, y)),
fold: (f, seed) => f(x, y === Empty ? seed : y.fold(f, seed))
});
const sum = (a, b) => a + b;
console.log(
L(1)
.concat(2).concat(3).concat(4).concat(5).concat(6)
.fold(sum, 0)
)
Work in progress
Thanks to the stunning contribution by @user3297291, I somehow managed to refactor the code to fit my specification, but it is not working because I lost track of the concept during the implementation :(
The point is that the whole thing must be curried, and no object.method is involved.
Can anyone "debug" it, please :)
The initial value is set to the first element, in this example 1
I think this is almost done.
const isFunction = f => (typeof f === 'function');
const Empty = Symbol();
const L = (x = Empty) => (y = Empty) => z => isFunction(z)
? (() => {
const fold = f => seed => f(x)(y) === Empty
? seed
: (L)(y)(f);
return fold(z)(x);
})()
: L(z)(L(x)(y));
const sum = a => b => a + b;
console.log(
L(1)(2)(3)(4)(5)(6)(sum)
);
Output
z => isFunction(z)
? (() => {
const fold = f => seed => f(x)(y) === Empty
? seed
: (L)(y)(f);
return fold(z)(x);
})()
: L(z)(L(x)(y))
I've gone through the various questions you have but I'm still not sure I entirely understand what you're looking for. On the off chance you're simply looking to represent a linked list, here is a "dumb" representation that does not use clever tricks like overloaded arguments or default parameter values:
const List = (() => {
const nil = Symbol()
// ADT
const Nil = nil
const Cons = x => xs => ({ x, xs })
const match = ({ Nil, Cons }) => l => l === nil ? Nil : Cons(l.x)(l.xs)
// Functor
const map = f => match({
Nil,
Cons: x => xs => Cons(f(x))(map(f)(xs))
})
// Foldable
const foldr = f => z => match({
Nil: z,
Cons: x => xs => f(x)(foldr(f)(z)(xs)) // danger of stack overflow!
// https://wiki.haskell.org/Foldr_Foldl_Foldl%27
})
return { Nil, Cons, match, map, foldr }
})()
const { Nil, Cons, match, map, foldr } = List
const toArray = foldr(x => xs => [x, ...xs])([])
const l = Cons(1)(Cons(2)(Cons(3)(Nil)))
const l2 = map(x => x * 2)(l)
const l3 = map(x => x * 3)(l2)
const a = toArray(l3)
console.log(a) // => [6, 12, 18]

How to chain map and filter functions in the correct order

I really like chaining Array.prototype.map, filter and reduce to define a data transformation. Unfortunately, in a recent project that involved large log files, I could no longer get away with looping through my data multiple times...
My goal:
I want to create a function that chains .filter and .map methods by, instead of mapping over an array immediately, composing a function that loops over the data once. I.e.:
const DataTransformation = () => ({
map: fn => (/* ... */),
filter: fn => (/* ... */),
run: arr => (/* ... */)
});
const someTransformation = DataTransformation()
.map(x => x + 1)
.filter(x => x > 3)
.map(x => x / 2);
// returns [ 2, 2.5 ] without creating [ 2, 3, 4, 5] and [4, 5] in between
const myData = someTransformation.run([ 1, 2, 3, 4]);
My attempt:
Inspired by this answer and this blogpost I started writing a Transduce function.
const filterer = pred => reducer => (acc, x) =>
pred(x) ? reducer(acc, x) : acc;
const mapper = map => reducer => (acc, x) =>
reducer(acc, map(x));
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
map: map => Transduce(mapper(map)(reducer)),
filter: pred => Transduce(filterer(pred)(reducer)),
run: arr => arr.reduce(reducer, [])
});
The problem:
The problem with the Transduce snippet above, is that it runs “backwards”... The last method I chain is the first to be executed:
const someTransformation = Transduce()
.map(x => x + 1)
.filter(x => x > 3)
.map(x => x / 2);
// Instead of [ 2, 2.5 ] this returns []
// starts with (x / 2) -> [0.5, 1, 1.5, 2]
// then filters (x > 3) -> []
const myData = someTransformation.run([ 1, 2, 3, 4]);
Or, in more abstract terms:
Go from:
Transducer(concat).map(f).map(g) == (acc, x) => concat(acc, f(g(x)))
To:
Transducer(concat).map(f).map(g) == (acc, x) => concat(acc, g(f(x)))
Which is similar to:
mapper(f) (mapper(g) (concat))
I think I understand why it happens, but I can't figure out how to fix it without changing the “interface” of my function.
The question:
How can I make my Transduce method chain filter and map operations in the correct order?
Notes:
I'm only just learning about the naming of some of the things I'm trying to do. Please let me know if I've incorrectly used the Transduce term or if there are better ways to describe the problem.
I'm aware I can do the same using a nested for loop:
const push = (acc, x) => (acc.push(x), acc);
const ActionChain = (actions = []) => {
const run = arr =>
arr.reduce((acc, x) => {
for (let i = 0, action; i < actions.length; i += 1) {
action = actions[i];
if (action.type === "FILTER") {
if (action.fn(x)) {
continue;
}
return acc;
} else if (action.type === "MAP") {
x = action.fn(x);
}
}
acc.push(x);
return acc;
}, []);
const addAction = type => fn =>
ActionChain(push(actions, { type, fn }));
return {
map: addAction("MAP"),
filter: addAction("FILTER"),
run
};
};
// Compare to regular chain to check if
// there's a performance gain
// Admittedly, in this example, it's quite small...
const naiveApproach = {
run: arr =>
arr
.map(x => x + 3)
.filter(x => x % 3 === 0)
.map(x => x / 3)
.filter(x => x < 40)
};
const actionChain = ActionChain()
.map(x => x + 3)
.filter(x => x % 3 === 0)
.map(x => x / 3)
.filter(x => x < 40)
const testData = Array.from(Array(100000), (x, i) => i);
console.time("naive");
const result1 = naiveApproach.run(testData);
console.timeEnd("naive");
console.time("chain");
const result2 = actionChain.run(testData);
console.timeEnd("chain");
console.log("equal:", JSON.stringify(result1) === JSON.stringify(result2));
Here's my attempt in a stack snippet:
const filterer = pred => reducer => (acc, x) =>
pred(x) ? reducer(acc, x) : acc;
const mapper = map => reducer => (acc, x) => reducer(acc, map(x));
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
map: map => Transduce(mapper(map)(reducer)),
filter: pred => Transduce(filterer(pred)(reducer)),
run: arr => arr.reduce(reducer, [])
});
const sameDataTransformation = Transduce()
.map(x => x + 5)
.filter(x => x % 2 === 0)
.map(x => x / 2)
.filter(x => x < 4);
// It's backwards:
// [-1, 0, 1, 2, 3]
// [-0.5, 0, 0.5, 1, 1.5]
// [0]
// [5]
console.log(sameDataTransformation.run([-1, 0, 1, 2, 3, 4, 5]));
before we know better
I really like chaining ...
I see that, and I'll appease you, but you'll come to understand that forcing your program through a chaining API is unnatural, and more trouble than it's worth in most cases.
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
map: map => Transduce(mapper(map)(reducer)),
filter: pred => Transduce(filterer(pred)(reducer)),
run: arr => arr.reduce(reducer, [])
});
I think I understand why it happens, but I can't figure out how to fix it without changing the “interface” of my function.
The problem is indeed with your Transduce constructor. Your map and filter methods are stacking map and pred on the outside of the transducer chain, instead of nesting them inside.
Below, I've implemented your Transduce API that evaluates the maps and filters in correct order. I've also added a log method so that we can see how Transduce is behaving
const Transduce = (f = k => k) => ({
map: g =>
Transduce(k =>
f ((acc, x) => k(acc, g(x)))),
filter: g =>
Transduce(k =>
f ((acc, x) => g(x) ? k(acc, x) : acc)),
log: s =>
Transduce(k =>
f ((acc, x) => (console.log(s, x), k(acc, x)))),
run: xs =>
xs.reduce(f((acc, x) => acc.concat(x)), [])
})
const foo = nums => {
return Transduce()
.log('greater than 2?')
.filter(x => x > 2)
.log('\tsquare:')
.map(x => x * x)
.log('\t\tless than 30?')
.filter(x => x < 30)
.log('\t\t\tpass')
.run(nums)
}
// keep square(n), forall n of nums
// where n > 2
// where square(n) < 30
console.log(foo([1,2,3,4,5,6,7]))
// => [ 9, 16, 25 ]
untapped potential
Inspired by this answer ...
In reading that answer, which I wrote, you overlook the generic quality of Trans as it was written there. Here, our Transduce only attempts to work with Arrays, but really it can work with any type that has an empty value ([]) and a concat method. These two properties make up a category called Monoids, and we'd be doing ourselves a disservice if we didn't take advantage of a transducer's ability to work with any type in this category.
Above, we hard-coded the initial accumulator [] in the run method, but this should probably be supplied as an argument – much like we do with iterable.reduce(reducer, initialAcc)
Aside from that, both implementations are essentially equivalent. The biggest difference is that in the Trans implementation provided in the linked answer, Trans itself is a monoid, whereas Transduce here is not. Trans neatly implements composition of transducers in the concat method, whereas Transduce (above) has composition mixed within each method. Making it a monoid allows us to rationalize Trans the same way we do all other monoids, instead of having to understand it as some specialized chaining interface with unique map, filter, and run methods.
I would advise building from Trans instead of making your own custom API
have your cake and eat it too
So we learned the valuable lesson of uniform interfaces and we understand that Trans is inherently simple. But, you still want that sweet chaining API. OK, ok...
We're going to implement Transduce one more time, but this time we'll do so using the Trans monoid. Here, Transduce holds a Trans value instead of a continuation (Function).
Everything else stays the same – foo takes 1 tiny change and produces an identical output.
// generic transducers
const mapper = f =>
Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
const logger = label =>
Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))
// magic chaining api made with Trans monoid
const Transduce = (t = Trans.empty()) => ({
map: f =>
Transduce(t.concat(mapper(f))),
filter: f =>
Transduce(t.concat(filterer(f))),
log: s =>
Transduce(t.concat(logger(s))),
run: (m, xs) =>
transduce(t, m, xs)
})
// when we run, we must specify the type to transduce
// .run(Array, nums)
// instead of
// .run(nums)
Expand this code snippet to see the final implementation – of course you could skip defining a separate mapper, filterer, and logger, and instead define those directly on Transduce. I think this reads nicer tho.
// Trans monoid
const Trans = f => ({
runTrans: f,
concat: ({runTrans: g}) =>
Trans(k => f(g(k)))
})
Trans.empty = () =>
Trans(k => k)
const transduce = (t, m, xs) =>
xs.reduce(t.runTrans((acc, x) => acc.concat(x)), m.empty())
// complete Array monoid implementation
Array.empty = () => []
// generic transducers
const mapper = f =>
Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
const logger = label =>
Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))
// now implemented with Trans monoid
const Transduce = (t = Trans.empty()) => ({
map: f =>
Transduce(t.concat(mapper(f))),
filter: f =>
Transduce(t.concat(filterer(f))),
log: s =>
Transduce(t.concat(logger(s))),
run: (m, xs) =>
transduce(t, m, xs)
})
// this stays exactly the same
const foo = nums => {
return Transduce()
.log('greater than 2?')
.filter(x => x > 2)
.log('\tsquare:')
.map(x => x * x)
.log('\t\tless than 30?')
.filter(x => x < 30)
.log('\t\t\tpass')
.run(Array, nums)
}
// output is exactly the same
console.log(foo([1,2,3,4,5,6,7]))
// => [ 9, 16, 25 ]
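And because run only assumes the monoid contract, nothing ties us to Array. Here is a sketch of mine (not part of the answer above) that reduces into a String instead, after patching on a matching empty -
// String satisfies the same contract: "" is the empty value,
// and String.prototype.concat already exists
String.empty = () => ""
const bar = nums =>
Transduce()
.filter(x => x > 2)
.map(x => String(x * x))
.run(String, nums)
console.log(bar([1, 2, 3, 4])) // "916"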
wrap up
So we started with a mess of lambdas and then made things simpler using a monoid. The Trans monoid provides distinct advantages in that the monoid interface is known and the generic implementation is extremely simple. But we're stubborn or maybe we have goals to fulfill that are not set by us – we decide to build the magic Transduce chaining API, but we do so using our rock-solid Trans monoid which gives us all the power of Trans but also keeps complexity nicely compartmentalised.
dot chaining fetishists anonymous
Here are a couple of other recent answers I wrote about method chaining
Is there any way to make a functions return accessible via a property?
Chaining functions and using an anonymous function
Pass result of functional chain to function
I think you need to change the order of your implementations:
const filterer = pred => reducer => (x) => pred(reducer(x)) ? x : undefined;
const mapper = map => reducer => (x) => map(reducer(x));
Then you need to change the run command to:
run: arr => arr.reduce((a,b)=>a.concat([reducer(b)]), []);
And the default reducer must be
x=>x
However, this way the filter won't work. You may throw undefined in the filter function and catch it in the run function:
run: arr => arr.reduce((a,b)=>{
try{
a.push(reducer(b));
}catch(e){}
return a;
}, []);
const filterer = pred => reducer => (x) => {
if (!pred(reducer(x))) {
throw undefined;
}
return x;
};
However, all in all I think the for loop is much more elegant in this case...
