How to chain map and filter functions in the correct order - javascript

I really like chaining Array.prototype.map, filter and reduce to define a data transformation. Unfortunately, in a recent project that involved large log files, I could no longer get away with looping through my data multiple times...
My goal:
I want to create a function that chains .filter and .map methods and, instead of mapping over the array immediately, composes a function that loops over the data once. I.e.:
const DataTransformation = () => ({
map: fn => (/* ... */),
filter: fn => (/* ... */),
run: arr => (/* ... */)
});
const someTransformation = DataTransformation()
.map(x => x + 1)
.filter(x => x > 3)
.map(x => x / 2);
// returns [ 2, 2.5 ] without creating [ 2, 3, 4, 5] and [4, 5] in between
const myData = someTransformation.run([ 1, 2, 3, 4]);
My attempt:
Inspired by this answer and this blogpost I started writing a Transduce function.
const filterer = pred => reducer => (acc, x) =>
pred(x) ? reducer(acc, x) : acc;
const mapper = map => reducer => (acc, x) =>
reducer(acc, map(x));
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
map: map => Transduce(mapper(map)(reducer)),
filter: pred => Transduce(filterer(pred)(reducer)),
run: arr => arr.reduce(reducer, [])
});
The problem:
The problem with the Transduce snippet above, is that it runs “backwards”... The last method I chain is the first to be executed:
const someTransformation = Transduce()
.map(x => x + 1)
.filter(x => x > 3)
.map(x => x / 2);
// Instead of [ 2, 2.5 ] this returns []
// starts with (x / 2) -> [0.5, 1, 1.5, 2]
// then filters (x > 3) -> []
const myData = someTransformation.run([ 1, 2, 3, 4]);
Or, in more abstract terms:
Go from:
Transducer(concat).map(f).map(g) == (acc, x) => concat(acc, f(g(x)))
To:
Transducer(concat).map(f).map(g) == (acc, x) => concat(acc, g(f(x)))
Which is similar to:
mapper(f) (mapper(g) (concat))
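Expanding that with the mapper definition above shows the intended order:
// mapper(f) (mapper(g) (concat))
//   = (acc, x) => mapper(g)(concat)(acc, f(x))   // apply mapper(f)
//   = (acc, x) => concat(acc, g(f(x)))           // f runs before g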
I think I understand why it happens, but I can't figure out how to fix it without changing the “interface” of my function.
The question:
How can I make my Transduce method chain filter and map operations in the correct order?
Notes:
I'm only just learning about the naming of some of the things I'm trying to do. Please let me know if I've incorrectly used the Transduce term or if there are better ways to describe the problem.
I'm aware I can do the same using a nested for loop:
const push = (acc, x) => (acc.push(x), acc);
const ActionChain = (actions = []) => {
const run = arr =>
arr.reduce((acc, x) => {
for (let i = 0, action; i < actions.length; i += 1) {
action = actions[i];
if (action.type === "FILTER") {
if (action.fn(x)) {
continue;
}
return acc;
} else if (action.type === "MAP") {
x = action.fn(x);
}
}
acc.push(x);
return acc;
}, []);
const addAction = type => fn =>
ActionChain(push(actions, { type, fn }));
return {
map: addAction("MAP"),
filter: addAction("FILTER"),
run
};
};
// Compare to regular chain to check if
// there's a performance gain
// Admittedly, in this example, it's quite small...
const naiveApproach = {
run: arr =>
arr
.map(x => x + 3)
.filter(x => x % 3 === 0)
.map(x => x / 3)
.filter(x => x < 40)
};
const actionChain = ActionChain()
.map(x => x + 3)
.filter(x => x % 3 === 0)
.map(x => x / 3)
.filter(x => x < 40)
const testData = Array.from(Array(100000), (x, i) => i);
console.time("naive");
const result1 = naiveApproach.run(testData);
console.timeEnd("naive");
console.time("chain");
const result2 = actionChain.run(testData);
console.timeEnd("chain");
console.log("equal:", JSON.stringify(result1) === JSON.stringify(result2));
Here's my attempt in a stack snippet:
const filterer = pred => reducer => (acc, x) =>
pred(x) ? reducer(acc, x) : acc;
const mapper = map => reducer => (acc, x) => reducer(acc, map(x));
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
map: map => Transduce(mapper(map)(reducer)),
filter: pred => Transduce(filterer(pred)(reducer)),
run: arr => arr.reduce(reducer, [])
});
const sameDataTransformation = Transduce()
.map(x => x + 5)
.filter(x => x % 2 === 0)
.map(x => x / 2)
.filter(x => x < 4);
// It's backwards:
// [-1, 0, 1, 2, 3]
// [-0.5, 0, 0.5, 1, 1.5]
// [0]
// [5]
console.log(sameDataTransformation.run([-1, 0, 1, 2, 3, 4, 5]));

before we know better
I really like chaining ...
I see that, and I'll appease you, but you'll come to understand that forcing your program through a chaining API is unnatural, and more trouble than it's worth in most cases.
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
map: map => Transduce(mapper(map)(reducer)),
filter: pred => Transduce(filterer(pred)(reducer)),
run: arr => arr.reduce(reducer, [])
});
I think I understand why it happens, but I can't figure out how to fix it without changing the “interface” of my function.
The problem is indeed with your Transduce constructor. Your map and filter methods are stacking map and pred on the outside of the transducer chain, instead of nesting them inside.
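You can see the stacking by expanding a short chain with the original definitions:
// Transduce().map(f).map(g) builds:
// r0 = (acc, x) => (acc.push(x), acc)      // default reducer
// r1 = mapper(f)(r0)                       // after .map(f)
// r2 = mapper(g)(r1)                       // after .map(g)
//    = (acc, x) => r1(acc, g(x))
//    = (acc, x) => r0(acc, f(g(x)))        // g runs first: backwards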
Below, I've implemented your Transduce API that evaluates the maps and filters in correct order. I've also added a log method so that we can see how Transduce is behaving
const Transduce = (f = k => k) => ({
map: g =>
Transduce(k =>
f ((acc, x) => k(acc, g(x)))),
filter: g =>
Transduce(k =>
f ((acc, x) => g(x) ? k(acc, x) : acc)),
log: s =>
Transduce(k =>
f ((acc, x) => (console.log(s, x), k(acc, x)))),
run: xs =>
xs.reduce(f((acc, x) => acc.concat(x)), [])
})
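To see why this evaluates in the correct order, expand a two-step chain:
// Transduce().map(f).map(g) builds:
// t1 = k => (acc, x) => k(acc, f(x))         // after .map(f)
// t2 = k => t1((acc, x) => k(acc, g(x)))     // after .map(g)
//    = k => (acc, x) => k(acc, g(f(x)))      // f runs before g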
const foo = nums => {
return Transduce()
.log('greater than 2?')
.filter(x => x > 2)
.log('\tsquare:')
.map(x => x * x)
.log('\t\tless than 30?')
.filter(x => x < 30)
.log('\t\t\tpass')
.run(nums)
}
// keep square(n), forall n of nums
// where n > 2
// where square(n) < 30
console.log(foo([1,2,3,4,5,6,7]))
// => [ 9, 16, 25 ]
untapped potential
Inspired by this answer ...
In reading that answer (which I wrote), you overlook the generic quality of Trans as it was written there. Here, our Transduce only attempts to work with Arrays, but really it can work with any type that has an empty value ([]) and a concat method. These two properties make up a category called Monoids, and we'd be doing ourselves a disservice if we didn't take advantage of a transducer's ability to work with any type in this category.
Above, we hard-coded the initial accumulator [] in the run method, but this should probably be supplied as an argument – much like we do with iterable.reduce(reducer, initialAcc)
Aside from that, both implementations are essentially equivalent. The biggest difference is that in the Trans implementation provided in the linked answer, Trans itself is a monoid, whereas Transduce here is not. Trans neatly implements composition of transducers in the concat method, whereas Transduce (above) has composition mixed within each method. Making it a monoid allows us to rationalize Trans the same way we do all other monoids, instead of having to understand it as some specialized chaining interface with unique map, filter, and run methods.
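Concretely, saying Trans is a monoid just means it has an identity element and an associative concat, so the usual monoid laws apply:
// identity:      t.concat(Trans.empty())  behaves the same as  t  and  Trans.empty().concat(t)
// associativity: a.concat(b).concat(c)    behaves the same as  a.concat(b.concat(c))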
I would advise building from Trans instead of making your own custom API
have your cake and eat it too
So we learned the valuable lesson of uniform interfaces and we understand that Trans is inherently simple. But, you still want that sweet chaining API. OK, ok...
We're going to implement Transduce one more time, but this time we'll do so using the Trans monoid. Here, Transduce holds a Trans value instead of a continuation (Function).
Everything else stays the same – foo takes 1 tiny change and produces an identical output.
// generic transducers
const mapper = f =>
Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
const logger = label =>
Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))
// magic chaining api made with Trans monoid
const Transduce = (t = Trans.empty()) => ({
map: f =>
Transduce(t.concat(mapper(f))),
filter: f =>
Transduce(t.concat(filterer(f))),
log: s =>
Transduce(t.concat(logger(s))),
run: (m, xs) =>
transduce(t, m, xs)
})
// when we run, we must specify the type to transduce
// .run(Array, nums)
// instead of
// .run(nums)
Expand this code snippet to see the final implementation – of course you could skip defining a separate mapper, filterer, and logger, and instead define those directly on Transduce. I think this reads nicer tho.
// Trans monoid
const Trans = f => ({
runTrans: f,
concat: ({runTrans: g}) =>
Trans(k => f(g(k)))
})
Trans.empty = () =>
Trans(k => k)
const transduce = (t, m, xs) =>
xs.reduce(t.runTrans((acc, x) => acc.concat(x)), m.empty())
// complete Array monoid implementation
Array.empty = () => []
// generic transducers
const mapper = f =>
Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
const logger = label =>
Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))
// now implemented with Trans monoid
const Transduce = (t = Trans.empty()) => ({
map: f =>
Transduce(t.concat(mapper(f))),
filter: f =>
Transduce(t.concat(filterer(f))),
log: s =>
Transduce(t.concat(logger(s))),
run: (m, xs) =>
transduce(t, m, xs)
})
// this stays exactly the same
const foo = nums => {
return Transduce()
.log('greater than 2?')
.filter(x => x > 2)
.log('\tsquare:')
.map(x => x * x)
.log('\t\tless than 30?')
.filter(x => x < 30)
.log('\t\t\tpass')
.run(Array, nums)
}
// output is exactly the same
console.log(foo([1,2,3,4,5,6,7]))
// => [ 9, 16, 25 ]
wrap up
So we started with a mess of lambdas and then made things simpler using a monoid. The Trans monoid provides distinct advantages in that the monoid interface is known and the generic implementation is extremely simple. But we're stubborn or maybe we have goals to fulfill that are not set by us – we decide to build the magic Transduce chaining API, but we do so using our rock-solid Trans monoid which gives us all the power of Trans but also keeps complexity nicely compartmentalised.
dot chaining fetishists anonymous
Here are a couple of other recent answers I wrote about method chaining
Is there any way to make a functions return accessible via a property?
Chaining functions and using an anonymous function
Pass result of functional chain to function

I think you need to change the order of your implementations:
const filterer = pred => reducer => x => {
  const a = reducer(x);
  return pred(a) ? a : undefined;
};
const mapper = map => reducer => x => map(reducer(x));
Then you need to change the run command to:
run: arr => arr.reduce((a,b)=>a.concat([reducer(b)]), []);
And the default reducer must be
x=>x
However, this way the filter won't work, because the undefined values still end up in the result. You may throw undefined in the filter function and catch it in the run function:
run: arr => arr.reduce((a,b)=>{
try{
a.push(reducer(b));
}catch(e){}
return a;
}, []);
const filterer = pred => reducer => x => {
  const a = reducer(x);
  if (!pred(a)) {
    throw undefined;
  }
  return a;
};
However, all in all I think the for loop is much more elegant in this case...
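For completeness, here's a runnable sketch assembling those fragments (note that the intermediate value a must be returned from the filter, or earlier maps get lost):
const mapper = map => reducer => x => map(reducer(x));
const filterer = pred => reducer => x => {
  const a = reducer(x);
  if (!pred(a)) throw undefined; // signal: drop this element
  return a;
};
const Transduce = (reducer = x => x) => ({
  map: map => Transduce(mapper(map)(reducer)),
  filter: pred => Transduce(filterer(pred)(reducer)),
  run: arr => arr.reduce((acc, x) => {
    try { acc.push(reducer(x)); } catch (e) {}
    return acc;
  }, [])
});
// [2, 2.5]
console.log(Transduce().map(x => x + 1).filter(x => x > 3).map(x => x / 2).run([1, 2, 3, 4]));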

Related

How to work with javascript Map without mutations

I'm working in a functional way in my JS project.
That also means I do not mutate object or array entities. Instead I always create a new instance and replace the old one.
e.g.
let obj = {a: 'aa', b: 'bb'}
obj = {...obj, b: 'some_new_value'}
The question is:
How to work in a functional (immutable) way with javascript Maps?
I guess I can use the following code to add values:
let map = new Map()
...
map = new Map(map).set(something)
But what about deleting items?
I cannot do new Map(map).delete(something), because the result of .delete is a boolean.
P.S. I'm aware of the existence of ImmutableJS, but I don't want to use it because you're never 100% sure whether you're working with a plain JS object or an ImmutableJS object (especially with nested structures), and because of its poor TypeScript support, btw.
I cannot do new Map(map).delete(something);, because the result of .delete is a boolean.
You can use an intermediate variable. You can farm it out to a function if you like:
function map_delete(old_map, key_to_delete) {
const new_map = new Map(old_map);
new_map.delete(key_to_delete);
return new_map;
}
Or you can get the entries of the map, filter them, and create a new one from the result:
const new_map = new Map( Array.from(old_map.entries()).filter( ([key, value]) => key !== something ) );
If you don't want to use a persistent map data structure, then you either cannot get around mutations or you have to conduct insanely inefficient shallow copies. Please note that mutations themselves aren't harmful, but only in conjunction with sharing the underlying mutable values.
If we are able to limit the way mutable values can be accessed, we can get safe mutable data types. They come at a cost, though. You cannot just use them as usual. As a matter of fact using them takes some time to get familiar with. It's a trade-off.
Here is an example with the native Map:
// MUTABLE
const Mutable = clone => refType => // strict variant
record(Mutable, app(([o, initialCall, refType]) => {
o.mutable = {
run: k => {
o.mutable.run = _ => {
throw new TypeError("illegal subsequent inspection");
};
o.mutable.set = _ => {
throw new TypeError("illegal subsequent mutation");
};
return k(refType);
},
set: k => {
if (initialCall) {
initialCall = false;
refType = clone(refType);
}
k(refType);
return o;
}
}
return o;
}) ([{}, true, refType]));
const mutRun = k => o =>
o.mutable.run(k);
const mutSet = k => o =>
o.mutable.set(k);
// MAP
const mapClone = m => new Map(m);
const mapDelx = k => m => // safe-in-place-update variant
mutSet(m_ =>
m_.has(k)
? m_.delete(k)
: m_) (m);
const mapGet = k => m =>
m.get(k);
const mapSetx = k => v => // safe-in-place-update variant
mutSet(m_ => m_.set(k, v));
const mapUpdx = k => f => // safe-in-place-update variant
mutSet(m_ => m_.set(k, f(m_.get(k))));
const MutableMap = Mutable(mapClone);
// auxiliary functions
const record = (type, o) => (
o[Symbol.toStringTag] = type.name || type, o);
const app = f => x => f(x);
const id = x => x;
// MAIN
const m = MutableMap(new Map([[1, "foo"], [2, "bar"], [3, "baz"]]));
mapDelx(2) (m);
mapUpdx(3) (s => s.toUpperCase()) (m);
const m_ = mutRun(Array.from) (m);
console.log(m_); // [[1, "foo"], [3, "BAZ"]]
try {mapSetx(4) ("bat") (m)} // illegal subsequent mutation
catch (e) {console.log(e.message)}
try {mutRun(mapGet(1)) (m)} // illegal subsequent inspection
catch (e) {console.log(e.message)}
If you take a closer look at Mutable you see it creates a shallow copy as well, but only once, initially. You can then conduct as many mutations as you want, until you inspect the mutable value for the first time.
You can find an implementation with several instances in my scriptum library. Here is a post with some more background information on the concept.
I borrowed the concept from Rust where it is called ownership. The type theoretical background are affine types, which are subsumed under linear types, in case you are interested.
roll your own data structure
Another option is to write your own map module that does not depend on JavaScript's native Map. This completely frees us from its mutable behaviours and prevents making full copies each time we wish to set, update, or del. This solution gives you full control and effectively demonstrates how to implement any data structure of your imagination -
// main.js
import { fromEntries, set, del, toString } from "./map.js"
const a =
[["d",3],["e",4],["g",6],["j",9],["b",1],["a",0],["i",8],["c",2],["h",7],["f",5]]
const m =
fromEntries(a)
console.log(1, toString(m))
console.log(2, toString(del(m, "b")))
console.log(3, toString(set(m, "c", "#")))
console.log(4, toString(m))
We wish for the expected output -
map m, the result of fromEntries(a)
derivative of map m with key "b" deleted
derivative of map m with key "c" updated to "#"
map m, unmodified from the above operations
1 (a, 0)->(b, 1)->(c, 2)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
2 (a, 0)->(c, 2)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
3 (a, 0)->(b, 1)->(c, #)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
4 (a, 0)->(b, 1)->(c, 2)->(d, 3)->(e, 4)->(f, 5)->(g, 6)->(h, 7)->(i, 8)->(j, 9)
Time to fulfill our wishes and implement our map module. We'll start by defining what it means to be an empty map -
// map.js
const empty =
Symbol("map.empty")
const isEmpty = t =>
t === empty
Next we need a way to insert our entries into the map. This calls into existence fromEntries, set, update, and node -
// map.js (continued)
const fromEntries = a =>
a.reduce((t, [k, v]) => set(t, k, v), empty)
const set = (t, k, v) =>
update(t, k, _ => v)
const update = (t, k, f) =>
isEmpty(t)
? node(k, f())
: k < t.key
? node(t.key, t.value, update(t.left, k, f), t.right)
: k > t.key
? node(t.key, t.value, t.left, update(t.right, k, f))
: node(k, f(t.value), t.left, t.right)
const node = (key, value, left = empty, right = empty) =>
({ key, value, left, right })
Next we'll define a way to get a value from our map -
// map.js (continued)
const get = (t, k) =>
isEmpty(t)
? undefined
: k < t.key
? get(t.left, k)
: k > t.key
? get(t.right, k)
: t.value
And now we'll define a way to delete an entry from our map, which also calls into existence concat -
// map.js (continued)
const del = (t, k) =>
isEmpty(t)
? t
: k < t.key
? node(t.key, t.value, del(t.left, k), t.right)
: k > t.key
? node(t.key, t.value, t.left, del(t.right, k))
: concat(t.left, t.right)
const concat = (l, r) =>
isEmpty(l)
? r
: isEmpty(r)
? l
: r.key < l.key
? node(l.key, l.value, concat(l.left, r), l.right)
: r.key > l.key
? node(l.key, l.value, l.left, concat(l.right, r))
: r
Finally we provide a way to visualize the map using toString, which calls into existence inorder. As a bonus, we'll throw in toArray -
const toString = (t) =>
Array.from(inorder(t), ([ k, v ]) => `(${k}, ${v})`).join("->")
function* inorder(t)
{ if (isEmpty(t)) return
yield* inorder(t.left)
yield [ t.key, t.value ]
yield* inorder(t.right)
}
const toArray = (t) =>
Array.from(inorder(t))
Export the module's features -
// map.js (continued)
export { empty, isEmpty, fromEntries, get, set, update, del, concat, inorder, toArray, toString }
low hanging fruit
Your Map module is finished but there are some valuable features we can add without requiring much effort. Below we implement preorder and postorder map traversals. Additionally we add a second parameter to toString and toArray that allows you to choose which traversal to use. inorder is used by default -
// map.js (continued)
function* preorder(t)
{ if (isEmpty(t)) return
yield [ t.key, t.value ]
yield* preorder(t.left)
yield* preorder(t.right)
}
function* postorder(t)
{ if (isEmpty(t)) return
yield* postorder(t.left)
yield* postorder(t.right)
yield [ t.key, t.value ]
}
const toArray = (t, f = inorder) =>
Array.from(f(t))
const toString = (t, f = inorder) =>
Array.from(f(t), ([ k, v ]) => `(${k}, ${v})`).join("->")
export { ..., preorder, postorder }
And we can extend fromEntries to accept any iterable, not just arrays. This matches the functionality of Object.fromEntries and Array.from -
// map.js (continued)
function fromEntries(it)
{ let r = empty
for (const [k, v] of it)
r = set(r, k, v)
return r
}
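For instance, a native Map (or any other iterable of [key, value] pairs) can now be converted directly - a quick check using the functions above:
const m2 = fromEntries(new Map([["b", 1], ["a", 0]]))
console.log(toString(m2)) // (a, 0)->(b, 1)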
Like we did above, we can add a second parameter which allows us to specify how the entries are added into the map. Now it works just like Array.from. Why Object.fromEntries doesn't have this behaviour is puzzling to me. Array.from is smart. Be like Array.from -
// map.js (continued)
function fromEntries(it, f = v => v)
{ let r = empty
let k, v
for (const e of it)
( [k, v] = f(e)
, r = set(r, k, v)
)
return r
}
// main.js
import { fromEntries, toString } from "./map.js"
const a =
[["d",3],["e",4],["g",6],["j",9],["b",1],["a",0],["i",8],["c",2],["h",7],["f",5]]
const z =
fromEntries(a, ([ k, v ]) => [ k.toUpperCase(), v * v ])
console.log(toString(z))
(A, 0)->(B, 1)->(C, 4)->(D, 9)->(E, 16)->(F, 25)->(G, 36)->(H, 49)->(I, 64)->(J, 81)
demo
Expand the snippet below to verify the results of our Map module in your own browser -
// map.js
const empty =
Symbol("map.empty")
const isEmpty = t =>
t === empty
const node = (key, value, left = empty, right = empty) =>
({ key, value, left, right })
const fromEntries = a =>
a.reduce((t, [k, v]) => set(t, k, v), empty)
const get = (t, k) =>
isEmpty(t)
? undefined
: k < t.key
? get(t.left, k)
: k > t.key
? get(t.right, k)
: t.value
const set = (t, k, v) =>
update(t, k, _ => v)
const update = (t, k, f) =>
isEmpty(t)
? node(k, f())
: k < t.key
? node(t.key, t.value, update(t.left, k, f), t.right)
: k > t.key
? node(t.key, t.value, t.left, update(t.right, k, f))
: node(k, f(t.value), t.left, t.right)
const del = (t, k) =>
isEmpty(t)
? t
: k < t.key
? node(t.key, t.value, del(t.left, k), t.right)
: k > t.key
? node(t.key, t.value, t.left, del(t.right, k))
: concat(t.left, t.right)
const concat = (l, r) =>
isEmpty(l)
? r
: isEmpty(r)
? l
: r.key < l.key
? node(l.key, l.value, concat(l.left, r), l.right)
: r.key > l.key
? node(l.key, l.value, l.left, concat(l.right, r))
: r
function* inorder(t)
{ if (isEmpty(t)) return
yield* inorder(t.left)
yield [ t.key, t.value ]
yield* inorder(t.right)
}
const toArray = (t) =>
Array.from(inorder(t))
const toString = (t) =>
Array.from(inorder(t), ([ k, v ]) => `(${k}, ${v})`).join("->")
// main.js
const a =
[["d",3],["e",4],["g",6],["j",9],["b",1],["a",0],["i",8],["c",2],["h",7],["f",5]]
const m =
fromEntries(a)
console.log(1, toString(m))
console.log(2, toString(del(m, "b")))
console.log(3, toString(set(m, "c", "#")))
console.log(4, toString(m))
console.log(5, get(set(m, "z", "!"), "z"))
functional module
Here's my little implementation of a persistent map module -
// map.js
import { effect } from "./func.js"
const empty = _ =>
new Map
const update = (t, k, f) =>
fromEntries(t).set(k, f(get(t, k)))
const set = (t, k, v) =>
update(t, k, _ => v)
const get = (t, k) =>
t.get(k)
const del = (t, k) =>
effect(t => t.delete(k))(fromEntries(t))
const fromEntries = a =>
new Map(a)
export { del, empty, fromEntries, get, set, update }
// func.js
const effect = f => x =>
(f(x), x)
// ...
export { effect, ... }
// main.js
import { fromEntries, del, set } from "./map.js"
const q =
fromEntries([["a",1], ["b",2]])
console.log(1, q)
console.log(2, del(q, "b"))
console.log(3, set(q, "c", 3))
console.log(4, q)
Expand the snippet below to verify the results in your own browser -
const effect = f => x =>
(f(x), x)
const empty = _ =>
new Map
const update = (t, k, f) =>
fromEntries(t).set(k, f(get(t, k)))
const set = (t, k, v) =>
update(t, k, _ => v)
const get = (t, k) =>
t.get(k)
const del = (t, k) =>
effect(t => t.delete(k))(fromEntries(t))
const fromEntries = a =>
new Map(a)
const q =
fromEntries([["a", 1], ["b", 2]])
console.log(1, q)
console.log(2, del(q, "b"))
console.log(3, set(q, "c", 3))
console.log(4, q)
1 Map(2) {a => 1, b => 2}
2 Map(1) {a => 1}
3 Map(3) {a => 1, b => 2, c => 3}
4 Map(2) {a => 1, b => 2}
object-oriented interface
If you want to use it in an object-oriented way, you can add a class wrapper around our plain functions. Here we call it Mapping because we don't want to clobber the native Map -
// map.js (continued)
class Mapping
{ constructor(t) { this.t = t }
update(k,f) { return new Mapping(update(this.t, k, f)) }
set(k,v) { return new Mapping(set(this.t, k, v)) }
get(k) { return get(this.t, k) }
del(k) { return new Mapping(del(this.t, k)) }
static empty () { return new Mapping(empty()) }
static fromEntries(a) { return new Mapping(fromEntries(a)) }
}
export default Mapping
// main.js
import Mapping from "./map"
const q =
Mapping.fromEntries([["a", 1], ["b", 2]]) // <- OOP class method
console.log(1, q)
console.log(2, q.del("b")) // <- OOP instance method
console.log(3, q.set("c", 3)) // <- OOP instance method
console.log(4, q)
Even though we're calling through the OOP interface, our data structure still behaves persistently. No mutable state is used -
1 Mapping { t: Map(2) {a => 1, b => 2} }
2 Mapping { t: Map(1) {a => 1} }
3 Mapping { t: Map(3) {a => 1, b => 2, c => 3} }
4 Mapping { t: Map(2) {a => 1, b => 2} }

Usage of Promise.all in recursion doesn't seem to be working

Actual doSomething function posts ele to a remote API to do some calculations.
My calc function supposed to get the summation of the remote API's calculation for each element, It should run for every element without affecting how nested they are located.
However, Currently, I can't get this to work. How do I fix this?
const doSomething = (ele) => new Promise(resolve => {
console.log(ele);
resolve(ele * 2);//for example
})
const calc = (arr) => new Promise(
async(resolve) => {
console.log(arr.filter(ele => !Array.isArray(ele)));
let sum = 0;
const out = await Promise.all(arr.filter(ele => !Array.isArray(ele))
.map(ele => doSomething(ele)));
sum += out.reduce((a, b) => a + b, 0);
const out2 = await Promise.all(arr.filter(ele => Array.isArray(ele))
.map(ele => calc(ele)));
sum += out2.reduce((a, b) => a + b, 0);
resolve(sum);
}
)
const process = async () => {
console.log('processing..');
const arr = [1, 2, 3, 4, 5, [6,7], 1, [8,[10,11]]];
const out = await calc(arr);
console.log(out);
}
process();
While it may look like I've addressed issues that are non-existent - the original code in the question had ALL the flaws I address in this answer, including Second and Third below
yes, the code in the question now works! But it clearly was flawed
First: no need for Promise constructor in calc function, since you use Promise.all which returns a promise, if you make calc async, just use await
Second: dosomething !== doSomething
Third: out2 is an array, so sum += out2 is going to mess you up
Fourth: .map(ele => doSomething(ele)) can be written .map(doSomething) - and the same for the calc(ele) map
So, working code becomes:
const doSomething = (ele) => new Promise(resolve => {
resolve(ele * 2); //for example
})
const calc = async(arr) => {
const out = await Promise.all(arr.filter(ele => !Array.isArray(ele)).map(doSomething));
let sum = out.reduce((a, b) => a + b, 0);
const out2 = await Promise.all(arr.filter(ele => Array.isArray(ele)).map(calc));
sum += out2.reduce((a, b) => a + b, 0);
return sum;
}
const process = async() => {
console.log('processing..');
const arr = [1, 2, 3, 4, 5, [6, 7], 1, [8, [10, 11]]];
const out = await calc(arr);
console.log(out);
}
process();
Can I suggest a slightly different breakdown of the problem?
We can write one function that recursively applies your function to all (nested) elements of your array, and another to recursively total the results.
Then we await the result of the first call and pass it to the second.
I think these functions are simpler, and they are also reusable.
const doSomething = async (ele) => new Promise(resolve => {
setTimeout(() => resolve(ele * 2), 1000);
})
const recursiveCall = async (proc, arr) =>
Promise .all (arr .map (ele =>
Array .isArray (ele) ? recursiveCall (proc, ele) : proc (ele)
))
const recursiveAdd = (ns) =>
ns .reduce ((total, n) => total + (Array .isArray (n) ? recursiveAdd (n) : n), 0)
const process = async() => {
console.log('processing..');
const arr = [1, 2, 3, 4, 5, [6, 7], 1, [8, [10, 11]]];
const processedArr = await recursiveCall (doSomething, arr);
const out = recursiveAdd (processedArr)
console.log(out);
}
process();
I think a generic deepReduce solves this problem well. Notice it's written in synchronous form -
const deepReduce = (f, init = null, xs = []) =>
xs.reduce
( (r, x) =>
Array.isArray(x)
? deepReduce(f, r, x)
: f(r, x)
, init
)
Still, we can use deepReduce asynchronously by initialising with a promise and reducing with an async function -
deepReduce
( async (r, x) =>
await r + await doSomething(x)
, Promise.resolve(0)
, input
)
.then(console.log, console.error)
See the code in action here -
const deepReduce = (f, init = null, xs = []) =>
xs.reduce
( (r, x) =>
Array.isArray(x)
? deepReduce(f, r, x)
: f(r, x)
, init
)
const doSomething = x =>
new Promise(r => setTimeout(r, 200, x * 2))
const input =
[1, 2, 3, 4, 5, [6,7], 1, [8,[10,11]]]
deepReduce
( async (r, x) =>
await r + await doSomething(x)
, Promise.resolve(0)
, input
)
.then(console.log, console.error)
// 2 + 4 + 6 + 8 + (10 + 14) + 2 + (16 + (20 + 22))
// => 116
console.log("doing something. please wait...")
further generalisation
Above we are hand-encoding a summing function, (+), with the empty sum 0. In reality, this function could be more complex and maybe we want a more general pattern so we can construct our program piecewise. Below we take synchronous add and convert it to an asynchronous function using liftAsync2(add) -
const add = (x = 0, y = 0) =>
x + y // <-- synchronous
const main =
pipe
( deepMap(doSomething) // <-- first do something for every item
, deepReduce(liftAsync2(add), Promise.resolve(0)) // <-- then reduce
)
main([1, 2, 3, 4, 5, [6,7], 1, [8,[10,11]]])
.then(console.log, console.error)
// 2 + 4 + 6 + 8 + (10 + 14) + 2 + (16 + (20 + 22))
// => 116
Here are the deepMap and deepReduce generics. These are in curried form so they can plug directly into pipe, but that is only a matter of style -
const deepReduce = (f = identity, init = null) => (xs = []) =>
xs.reduce
( (r, x) =>
Array.isArray(x)
? deepReduce(f, r)(x)
: f(r, x)
, init
)
const deepMap = (f = identity) => (xs = []) =>
xs.map
( x =>
Array.isArray(x)
? deepMap(f)(x)
: f(x)
)
liftAsync2 takes a common binary (two-parameter) function and "lifts" it into the asynchronous context. pipe and identity are commonly available in most functional libs or easy to write yourself -
const identity = x =>
x
const pipe = (...fs) =>
x => fs.reduce((r, f) => f(r), x)
const liftAsync2 = f =>
async (x, y) => f (await x, await y)
Here's all of the code in a demo you can run yourself. Notice because deepMap synchronously applies doSomething to all nested elements, all promises are run in parallel. This is in direct contrast to the serial behaviour in the first program. This may or may not be desirable so it's important to understand the difference in how these run -
const identity = x =>
x
const pipe = (...fs) =>
x => fs.reduce((r, f) => f(r), x)
const liftAsync2 = f =>
async (x, y) => f (await x, await y)
const deepReduce = (f = identity, init = null) => (xs = []) =>
xs.reduce
( (r, x) =>
Array.isArray(x)
? deepReduce(f, r)(x)
: f(r, x)
, init
)
const deepMap = (f = identity) => (xs = []) =>
xs.map
( x =>
Array.isArray(x)
? deepMap(f)(x)
: f(x)
)
const doSomething = x =>
new Promise(r => setTimeout(r, 200, x * 2))
const add =
(x, y) => x + y
const main =
pipe
( deepMap(doSomething)
, deepReduce(liftAsync2(add), Promise.resolve(0))
)
main([1, 2, 3, 4, 5, [6,7], 1, [8,[10,11]]])
.then(console.log, console.error)
// 2 + 4 + 6 + 8 + (10 + 14) + 2 + (16 + (20 + 22))
// => 116
console.log("doing something. please wait...")

How to encode corecursion/codata in a strictly evaluated setting?

Corecursion means calling oneself on data at each iteration that is greater than or equal to what one had before. Corecursion works on codata, which are recursively defined values. Unfortunately, value recursion is not possible in strictly evaluated languages. We can work with explicit thunks though:
const Defer = thunk =>
({get runDefer() {return thunk()}})
const app = f => x => f(x);
const fibs = app(x_ => y_ => {
const go = x => y =>
Defer(() =>
[x, go(y) (x + y)]);
return go(x_) (y_).runDefer;
}) (1) (1);
const take = n => codata => {
const go = ([x, tx], acc, i) =>
i === n
? acc
: go(tx.runDefer, acc.concat(x), i + 1);
return go(codata, [], 0);
};
console.log(
take(10) (fibs));
While this works as expected, the approach seems awkward. Especially the hideous pair tuple bugs me. Is there a more natural way to deal with corecursion/codata in JS?
I would encode the thunk within the data constructor itself. For example, consider:
// whnf :: Object -> Object
const whnf = obj => {
for (const [key, val] of Object.entries(obj)) {
if (typeof val === "function" && val.length === 0) {
Object.defineProperty(obj, key, {
get: () => Object.defineProperty(obj, key, {
value: val()
})[key]
});
}
}
return obj;
};
// empty :: List a
const empty = null;
// cons :: (a, List a) -> List a
const cons = (head, tail) => whnf({ head, tail });
// fibs :: List Int
const fibs = cons(0, cons(1, () => next(fibs, fibs.tail)));
// next :: (List Int, List Int) -> List Int
const next = (xs, ys) => cons(xs.head + ys.head, () => next(xs.tail, ys.tail));
// take :: (Int, List a) -> List a
const take = (n, xs) => n === 0 ? empty : cons(xs.head, () => take(n - 1, xs.tail));
// toArray :: List a -> [a]
const toArray = xs => xs === empty ? [] : [ xs.head, ...toArray(xs.tail) ];
// [0,1,1,2,3,5,8,13,21,34]
console.log(toArray(take(10, fibs)));
This way, we can encode laziness in weak head normal form. The advantage is that the consumer has no idea whether a particular field of the given data structure is lazy or strict, and it doesn't need to care.
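For example, with the definitions above, strict and lazy fields are accessed identically:
console.log(fibs.head); // 0 - stored strictly
console.log(fibs.tail.tail.head); // 1 - forced on first access, then cached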

Similar curry functions producing different results

I am learning functional JavaScript and I came across two different implementations of the curry function. I am trying to understand the difference between the two: they seem similar, yet one works incorrectly for some cases and correctly for others.
I have tried interchanging the functions. The one defined using ES6 const works for simple cases, but when using filter to filter strings the results are incorrect, whereas with integers it produces the desired results.
//es6
//Does not work well with filter when filtering strings
//but works correctly with numbers
const curry = (fn, initialArgs=[]) => (
(...args) => (
a => a.length === fn.length ? fn(...a) : curry(fn, a)
)([...initialArgs, ...args])
);
//Regular js
//Works well for all cases
function curry(fn) {
const arity = fn.length;
return function $curry(...args) {
if (args.length < arity) {
return $curry.bind(null, ...args);
}
return fn.call(null, ...args);
};
}
const match = curry((pattern, s) => s.match(pattern));
const filter = curry((f, xs) => xs.filter(f));
const hasQs = match(/q/i);
const filterWithQs = filter(hasQs);
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]));
//Output:
//es6:
[ 'hello', 'quick', 'sand', 'qwerty', 'quack' ]
//regular:
[ 'quick', 'qwerty', 'quack' ]
If you change filter to use xs.filter(x => f(x)) instead of xs.filter(f) it will work -
const filter = curry((f, xs) => xs.filter(x => f(x)))
// ...
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]))
// => [ 'quick', 'qwerty', 'quack' ]
The reason for this is because Array.prototype.filter passes three (3) arguments to the "callback" function,
callback - Function is a predicate, to test each element of the array. Return true to keep the element, false otherwise. It accepts three arguments:
element - The current element being processed in the array.
index (Optional) - The index of the current element being processed in the array.
array (Optional) - The array filter was called upon.
The f you are using in filter is match(/q/i), and so when it is called by Array.prototype.filter, it receives three (3) arguments instead of the expected one (1). In the context of curry, that means a.length will be four (4), and since 4 === fn.length is false (where fn.length is 2), the returned value is curry(fn, a), which is another function. Since all functions are considered truthy values in JavaScript, the filter call returns all of the input strings.
// your original code:
xs.filter(f)
// is equivalent to:
xs.filter((elem, index, arr) => f(elem, index, arr))
By changing filter to use ...filter(x => f(x)), we only allow one (1) argument to be passed to the callback, and so curry will evaluate 2 === 2, which is true, and the return value is the result of evaluating match, which returns the expected true or false.
// the updated code:
xs.filter(x => f(x))
// is equivalent to:
xs.filter((elem, index, arr) => f(elem))
An alternative, and probably better option, is to change the === to >= in your "es6" curry -
const curry = (fn, initialArgs=[]) => (
(...args) => (
a => a.length >= fn.length ? fn(...a) : curry(fn, a)
)([...initialArgs, ...args])
)
// ...
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]))
// => [ 'quick', 'qwerty', 'quack' ]
This allows you to "overflow" function parameters "normally", which JavaScript has no problem with -
const foo = (a, b, c) => // has only three (3) parameters
console.log(a + b + c)
foo(1,2,3,4,5) // called with five (5) args
// still works
// => 6
Lastly, here are some other ways I've written curry in the past. I've tested that each of them produces the correct output for your problem -
by auxiliary loop -
const curry = f => {
const aux = (n, xs) =>
n === 0 ? f (...xs) : x => aux (n - 1, [...xs, x])
return aux (f.length, [])
}
versatile curryN, works with variadic functions -
const curryN = n => f => {
const aux = (n, xs) =>
n === 0 ? f (...xs) : x => aux (n - 1, [...xs, x])
return aux (n, [])
};
// curry derived from curryN
const curry = f => curryN (f.length) (f)
spreads for days -
const curry = (f, ...xs) => (...ys) =>
f.length > xs.length + ys.length
? curry (f, ...xs, ...ys)
: f (...xs, ...ys)
homage to the lambda calculus and Haskell Curry's fixed-point Y-combinator -
const U =
f => f (f)
const Y =
U (h => f => f (x => U (h) (f) (x)))
const curryN =
Y (h => xs => n => f =>
n === 0
? f (...xs)
: x => h ([...xs, x]) (n - 1) (f)
) ([])
const curry = f =>
curryN (f.length) (f)
and my personal favourites -
// for binary (2-arity) functions
const curry2 = f => x => y => f (x, y)
// for ternary (3-arity) functions
const curry3 = f => x => y => z => f (x, y, z)
// for arbitrary arity
const partial = (f, ...xs) => (...ys) => f (...xs, ...ys)
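A quick sanity check of those helpers:
const add3 = (x, y, z) => x + y + z
console.log (curry3 (add3) (1) (2) (3)) // => 6
console.log (partial (add3, 1) (2, 3)) // => 6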
Finally, a fun twist on @Donat's answer that enables anonymous recursion -
const U =
f => f (f)
const curry = fn =>
U (r => (...args) =>
args.length < fn.length
? U (r) .bind (null, ...args)
: fn (...args)
)
The main difference here is not the es6 syntax but how the arguments are partially applied to the function.
First version: curry(fn, a)
Second version: $curry.bind(null, ...args)
It works for only one step of currying (as needed in your example) if you change first version (es6) to fn.bind(null, ...args)
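That is, something like this sketch of the one-step variant (curryOnce is a hypothetical name for it):
// partially applies once, then returns a plain bound function
const curryOnce = (fn, initialArgs=[]) => (
  (...args) => (
    a => a.length === fn.length ? fn(...a) : fn.bind(null, ...a)
  )([...initialArgs, ...args])
);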
The representation of the "Regular js" version in es6 syntax would look like this (you need the constant to have a name for the function in the recursive call):
curry = (fn) => {
const c = (...args) => (
args.length < fn.length ? c.bind(null, ...args) : fn(...args)
);
return c;
}

function composition with rest operator, reducer and mapper

I'm following an article about Transducers in JavaScript, and in particular I have defined the following functions
const reducer = (acc, val) => acc.concat([val]);
const reduceWith = (reducer, seed, iterable) => {
let accumulation = seed;
for (const value of iterable) {
accumulation = reducer(accumulation, value);
}
return accumulation;
}
const map =
fn =>
reducer =>
(acc, val) => reducer(acc, fn(val));
const sumOf = (acc, val) => acc + val;
const power =
(base, exponent) => Math.pow(base, exponent);
const squares = map(x => power(x, 2));
const one2ten = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
res1 = reduceWith(squares(sumOf), 0, one2ten);
const divtwo = map(x => x / 2);
Now I want to define a composition operator
const more = (f, g) => (...args) => f(g(...args));
and I see that it is working in the following cases
res2 = reduceWith(more(squares,divtwo)(sumOf), 0, one2ten);
res3 = reduceWith(more(divtwo,squares)(sumOf), 0, one2ten);
which are equivalent to
res2 = reduceWith(squares(divtwo(sumOf)), 0, one2ten);
res3 = reduceWith(divtwo(squares(sumOf)), 0, one2ten);
The whole script is online.
I don't understand why I can't also compose the last function (sumOf) using the composition operator (more). Ideally I'd like to write
res2 = reduceWith(more(squares,divtwo,sumOf), 0, one2ten);
res3 = reduceWith(more(divtwo,squares,sumOf), 0, one2ten);
but it doesn't work.
Edit
It is clear that my initial attempt was wrong, but even if I define the composition as
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);
I still can't replace compose(divtwo,squares)(sumOf) with compose(divtwo,squares,sumOf)
Finally I've found a way to implement the composition that seems to work fine
const more = (f, ...g) => {
if (g.length === 0) return f;
if (g.length === 1) return f(g[0]);
return f(more(...g));
}
Better solution
Here it is another solution with a reducer and no recursion
const compose = (...fns) => (...x) => fns.reduceRight((v, fn) => fn(v), ...x);
const more = (...args) => compose(...args)();
usage:
res2 = reduceWith(more(squares,divtwo,sumOf), 0, one2ten);
res3 = reduceWith(more(divtwo,squares,sumOf), 0, one2ten);
full script online
Your more operates on only 2 functions. And the problem is that with more(squares,divtwo)(sumOf) you execute a function, while with more(squares,divtwo,sumOf) you return a function which expects another call (for example const f = more(squares,divtwo,sumOf); f(args)).
In order to compose a variable number of functions you can define a different more for function composition. The regular way of composing any number of functions is a compose or pipe function (the difference is argument order: pipe takes functions left-to-right in execution order, compose the opposite).
The regular way of defining pipe and compose:
const pipe = (...fns) => x => fns.reduce((v, fn) => fn(v), x);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);
You can change x to (...args) to match your more definition.
Now you can execute any number of functions one by one:
const pipe = (...fns) => x => fns.reduce((v, fn) => fn(v), x);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);
const inc = x => x + 1;
const triple = x => x * 3;
const log = x => { console.log(x); return x; } // log x, then return x for further processing
// left to right application
const pipe_ex = pipe(inc, log, triple, log)(10);
// right to left application
const compose_ex = compose(log, inc, log, triple)(10);
I still can't replace compose(divtwo,squares)(sumOf) with compose(divtwo,squares,sumOf)
Yes, they are not equivalent. And you shouldn't try anyway! Notice that divtwo and squares are transducers, while sumOf is a reducer. They have different types. Don't build a more function that mixes them up.
If you insist on using a dynamic number of transducers, put them in an array:
[divtwo, squares].reduceRight((r, t) => t(r), sumOf)
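Used with the definitions from the question, this produces the same reducer as the manual nesting:
res3 = reduceWith([divtwo, squares].reduceRight((r, t) => t(r), sumOf), 0, one2ten);
// same as reduceWith(divtwo(squares(sumOf)), 0, one2ten)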
