Chain methods and call function one by one - javascript

So basically I need to implement lazy evaluation. .add() takes a function as its first parameter, plus any number of additional arbitrary arguments. The function passed to .add() is run later (when evaluate is called), with those extra arguments (if any) as parameters.
My issue is that when I run .evaluate(), which takes an array as a parameter, the functions that were passed to add() are not called one at a time, each returning a result that is fed as a parameter to the next.
class Lazy {
  constructor() {
    this.functions = [];
    this.counter = 0;
    this.resultedArray = [];
  }
  add(func, ...args) {
    if (args.length > 0) {
      this.counter++;
      const argsArrayNumeric = args.filter((item) => typeof item === "number");
      this.functions.push({
        func: func,
        args: args,
      });
    } else {
      if (func.length <= args.length + 1 || func.length === args.length) {
        this.counter++;
        this.functions.push({ func: func });
      } else {
        console.log("Missing parameters for " + func.name + " function");
      }
    }
    return this;
  }
  evaluate(array) {
    if (this.counter > 0) {
      let result;
      this.functions.map((obj, index) => {
        array.map((item, index) => {
          console.log(obj);
          if (obj.func && obj.args) {
            result = obj.func(item, ...obj.args);
          } else {
            result = obj.func(item);
          }
          this.resultedArray.push(result);
        });
      });
      console.log(this.resultedArray);
    } else {
      console.log(array);
    }
  }
}
const s = new Lazy();
const timesTwo = (a) => {
  return a * 2;
};
const plus = (a, b) => {
  return a + b;
};
s.add(timesTwo).add(plus, 1).evaluate([1, 2, 3]);
//correct result is [3,5,7] but I have [ 2, 4, 6, 2, 3, 4 ]

There are a few problems in evaluate:
push will be executed too many times when you have more than one function.
Don't use map when you don't really map. .map() returns a result.
From the expected output (in the original version of the question) it seems you need to apply the functions from right to left, not from left to right.
this.counter does not really play a role. The length of the this.functions array should be all you need.
the result should not be printed in the method, but returned. It is the caller's responsibility to print it or do something else with it.
All this can be dealt with using reduce or reduceRight (depending on the expected order) like this:
evaluate(array) {
  return this.functions.reduceRight((result, obj) =>
    result.map((item) => obj.func(item, ...(obj.args || [])))
  , array);
}
And in the main program, print the return value:
console.log(s.add(plus, 1).add(timesTwo).evaluate([1, 2, 3]));
The add method has also some strange logic, like:
When the first if condition is false, then the else block kicks in with args.length == 0. It is then strange to see conditions on args.length... it really is 0!
If the first condition in func.length <= args.length + 1 || func.length === args.length is false, then surely the second will always be false also. It should not need to be there.
argsArrayNumeric is never used.
All in all, it seems the code could be reduced to this snippet:
class Lazy {
  constructor() {
    this.functions = [];
  }
  add(func, ...args) {
    if (func.length > args.length + 1) {
      throw new Error(`Missing parameters for ${func.name} function`);
    }
    this.functions.push({ func, args });
    return this;
  }
  evaluate(array) {
    return this.functions.reduceRight((result, obj) =>
      result.map((item) => obj.func(item, ...(obj.args || [])))
    , array);
  }
}
const timesTwo = (a) => a * 2;
const plus = (a, b) => a + b;
const s = new Lazy();
console.log(s.add(plus, 1).add(timesTwo).evaluate([1, 2, 3])); // [3, 5, 7]

I think you're working too hard at this. I believe this does almost the same thing (except it doesn't report arity errors; that's easy enough to include but doesn't add anything to the discussion):
class Lazy {
  constructor () {
    this .fns = []
  }
  add (fn, ...args) {
    this .fns .push ({fn, args});
    return this
  }
  evaluate (xs) {
    return xs .map (
      x => this .fns .reduce ((a, {fn, args}) => fn (...args, a), x)
    )
  }
}
const timesTwo = (a) => a * 2
const plus = (a, b) => a + b
const s = new Lazy () .add (timesTwo) .add (plus, 1)
console .log (s .evaluate ([1, 2, 3]))
But I don't actually see much of a reason for the class here. A plain function will do much the same:
const lazy = (fns = [], laze = {
  add: (fn, ...args) => lazy (fns .concat ({fn, args})),
  evaluate: (xs) => xs .map (
    x => fns .reduce ((a, {fn, args}) => fn (...args, a), x)
  )
}) => laze
const timesTwo = (a) => a * 2
const plus = (a, b) => a + b
const s = lazy () .add (timesTwo) .add (plus, 1)
console .log (s .evaluate ([1, 2, 3]))
This is slightly different in that s is not mutated on each add, but a new object is returned. While we could change that and mutate and return the original object easily enough, the functional programming purist in me would actually consider moving more in that direction. In fact, all we're maintaining here is a list of {fn, args} objects. It might make the most sense to take that on directly, like this:
const add = (lazy, fn, ...args) => lazy .concat ({fn, args})
const evaluate = (lazy, xs) => xs .map (
x => lazy .reduce ((a, {fn, args}) => fn (...args, a), x)
)
const timesTwo = (a) => a * 2
const plus = (a, b) => a + b
const q = []
const r = add (q, timesTwo)
const s = add (r, plus, 1)
// or just
// const s = add (add ([], timesTwo), plus, 1)
console .log (evaluate (s, [1, 2, 3]))
In this version, we don't need a Lazy constructor or a lazy factory function, as our data structure is a plain array. While this is the format I prefer, any of these should do something similar to what you're trying.
Update
Based on comments, I include one more step on my journey, between the first and second snippets above:
const lazy = () => {
  const fns = []
  const laze = {
    add: (fn, ...args) => fns .push ({fn, args}) && laze,
    evaluate: (xs) => xs .map (
      x => fns .reduce((a, {fn, args}) => fn (...args, a), x)
    )
  }
  return laze
}
const timesTwo = (a) => a * 2
const plus = (a, b) => a + b
const s = lazy () .add (timesTwo) .add (plus, 1)
console .log (s .evaluate ([1, 2, 3]))
This version may be more familiar than the second snippet. It stores fns in a closure, hiding implementation details in a way we often don't with JS classes. But that list is still mutable through the add method of the object we return. Further ones proceed down the road to immutability.

How to encode corecursion/codata in a strictly evaluated setting?

Corecursion means calling oneself on data at each iteration that is greater than or equal to what one had before. Corecursion works on codata, which are recursively defined values. Unfortunately, value recursion is not possible in strictly evaluated languages. We can work with explicit thunks though:
const Defer = thunk =>
  ({get runDefer() {return thunk()}})

const app = f => x => f(x);

const fibs = app(x_ => y_ => {
  const go = x => y =>
    Defer(() =>
      [x, go(y) (x + y)]);
  return go(x_) (y_).runDefer;
}) (1) (1);

const take = n => codata => {
  const go = ([x, tx], acc, i) =>
    i === n
      ? acc
      : go(tx.runDefer, acc.concat(x), i + 1);
  return go(codata, [], 0);
};

console.log(
  take(10) (fibs));
While this works as expected, the approach seems awkward. Especially the hideous pair tuple bugs me. Is there a more natural way to deal with corecursion/codata in JS?
I would encode the thunk within the data constructor itself. For example, consider the following.
// whnf :: Object -> Object
const whnf = obj => {
  for (const [key, val] of Object.entries(obj)) {
    if (typeof val === "function" && val.length === 0) {
      Object.defineProperty(obj, key, {
        get: () => Object.defineProperty(obj, key, {
          value: val()
        })[key]
      });
    }
  }
  return obj;
};
// empty :: List a
const empty = null;
// cons :: (a, List a) -> List a
const cons = (head, tail) => whnf({ head, tail });
// fibs :: List Int
const fibs = cons(0, cons(1, () => next(fibs, fibs.tail)));
// next :: (List Int, List Int) -> List Int
const next = (xs, ys) => cons(xs.head + ys.head, () => next(xs.tail, ys.tail));
// take :: (Int, List a) -> List a
const take = (n, xs) => n === 0 ? empty : cons(xs.head, () => take(n - 1, xs.tail));
// toArray :: List a -> [a]
const toArray = xs => xs === empty ? [] : [ xs.head, ...toArray(xs.tail) ];
// [0,1,1,2,3,5,8,13,21,34]
console.log(toArray(take(10, fibs)));
This way, we can encode laziness in weak head normal form. The advantage is that the consumer has no idea whether a particular field of the given data structure is lazy or strict, and it doesn't need to care.
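To make the memoization concrete, here is a small demo of my own (not from the original answer): the getter that whnf installs replaces itself with the computed value on first access, so each thunk runs at most once.

```javascript
// whnf as defined above: turn each zero-argument function property
// into a getter that memoizes the thunk's result on first access.
const whnf = obj => {
  for (const [key, val] of Object.entries(obj)) {
    if (typeof val === "function" && val.length === 0) {
      Object.defineProperty(obj, key, {
        get: () => Object.defineProperty(obj, key, {
          value: val()
        })[key]
      });
    }
  }
  return obj;
};

let calls = 0;
const cell = whnf({ head: 1, tail: () => (calls++, "computed") });

cell.tail; // forces the thunk; the getter redefines itself as a plain value
cell.tail; // served from the memoized value
console.log(calls); // 1 — the thunk ran exactly once
```

Note that this self-replacement works because properties created by an object literal are configurable, so the accessor installed by defineProperty stays configurable and may later be redefined as a data property.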

Similar curry functions producing different results

I am learning functional JavaScript and I came across two different implementations of the curry function. I am trying to understand the difference between the two: they seem similar, yet one works incorrectly for some cases and correctly for others.
I have tried interchanging the functions. The one defined using the ES6 const syntax
works for simple cases, but when using filter to filter strings the results are incorrect, while with integers it produces the desired results.
//es6
//Does not work well with filter when filtering strings
//but works correctly with numbers
const curry = (fn, initialArgs=[]) => (
  (...args) => (
    a => a.length === fn.length ? fn(...a) : curry(fn, a)
  )([...initialArgs, ...args])
);

//Regular js
//Works well for all cases
function curry(fn) {
  const arity = fn.length;
  return function $curry(...args) {
    if (args.length < arity) {
      return $curry.bind(null, ...args);
    }
    return fn.call(null, ...args);
  };
}
const match = curry((pattern, s) => s.match(pattern));
const filter = curry((f, xs) => xs.filter(f));
const hasQs = match(/q/i);
const filterWithQs = filter(hasQs);
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]));
//Output:
//es6:
// [ 'hello', 'quick', 'sand', 'qwerty', 'quack' ]
//regular:
// [ 'quick', 'qwerty', 'quack' ]
If you change filter to use xs.filter(x => f(x)) instead of xs.filter(f) it will work -
const filter = curry((f, xs) => xs.filter(x => f(x)))
// ...
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]))
// => [ 'quick', 'qwerty', 'quack' ]
The reason for this is because Array.prototype.filter passes three (3) arguments to the "callback" function,
callback - Function is a predicate, to test each element of the array. Return true to keep the element, false otherwise. It accepts three arguments:
element - The current element being processed in the array.
index (Optional) - The index of the current element being processed in the array.
array (Optional) - The array filter was called upon.
The f you are using in filter is match(/q/i), and so when it is called by Array.prototype.filter, you are getting three (3) extra arguments instead of the expected one (1). In the context of curry, that means a.length will be four (4), and since 4 === fn.length is false (where fn.length is 2), the returned value is curry(fn, a), which is another function. Since all functions are considered truthy values in JavaScript, the filter call returns all of the input strings.
// your original code:
xs.filter(f)
// is equivalent to:
xs.filter((elem, index, arr) => f(elem, index, arr))
By changing filter to use ...filter(x => f(x)), we only allow one (1) argument to be passed to the callback, and so curry will evaluate 2 === 2, which is true, and the return value is the result of evaluating match, which returns the expected true or false.
// the updated code:
xs.filter(x => f(x))
// is equivalent to:
xs.filter((elem, index, arr) => f(elem))
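To see that in action, here is a hypothetical trace of my own (reusing the question's definitions): call the curried predicate with the three arguments that Array.prototype.filter supplies and inspect what comes back.

```javascript
// The "es6" curry from the question
const curry = (fn, initialArgs = []) => (
  (...args) => (
    a => a.length === fn.length ? fn(...a) : curry(fn, a)
  )([...initialArgs, ...args])
);

const match = curry((pattern, s) => s.match(pattern));
const hasQs = match(/q/i);

// filter calls its callback as callback(element, index, array),
// so the accumulated argument list has length 4, never exactly 2:
const result = hasQs("hello", 0, ["hello"]);
console.log(typeof result); // "function" — truthy, so every element passes
```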
An alternative, and probably better option, is to change the === to >= in your "es6" curry -
const curry = (fn, initialArgs=[]) => (
  (...args) => (
    a => a.length >= fn.length ? fn(...a) : curry(fn, a)
  )([...initialArgs, ...args])
)
// ...
console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]))
// => [ 'quick', 'qwerty', 'quack' ]
This allows you to "overflow" function parameters "normally", which JavaScript has no problem with -
const foo = (a, b, c) => // has only three (3) parameters
console.log(a + b + c)
foo(1,2,3,4,5) // called with five (5) args
// still works
// => 6
Lastly here's some other ways I've written curry over the past. I've tested that each of them produce the correct output for your problem -
by auxiliary loop -
const curry = f => {
  const aux = (n, xs) =>
    n === 0 ? f (...xs) : x => aux (n - 1, [...xs, x])
  return aux (f.length, [])
}
versatile curryN, works with variadic functions -
const curryN = n => f => {
  const aux = (n, xs) =>
    n === 0 ? f (...xs) : x => aux (n - 1, [...xs, x])
  return aux (n, [])
};
// curry derived from curryN
const curry = f => curryN (f.length) (f)
spreads for days -
const curry = (f, ...xs) => (...ys) =>
  f.length > xs.length + ys.length
    ? curry (f, ...xs, ...ys)
    : f (...xs, ...ys)
homage to the lambda calculus and Haskell Curry's fixed-point Y-combinator -
const U =
  f => f (f)

const Y =
  U (h => f => f (x => U (h) (f) (x)))

const curryN =
  Y (h => xs => n => f =>
    n === 0
      ? f (...xs)
      : x => h ([...xs, x]) (n - 1) (f)
  ) ([])

const curry = f =>
  curryN (f.length) (f)
and my personal favourites -
// for binary (2-arity) functions
const curry2 = f => x => y => f (x, y)
// for ternary (3-arity) functions
const curry3 = f => x => y => z => f (x, y, z)
// for arbitrary arity
const partial = (f, ...xs) => (...ys) => f (...xs, ...ys)
Lastly, a fun twist on @Donat's answer that enables anonymous recursion -
const U =
  f => f (f)

const curry = fn =>
  U (r => (...args) =>
    args.length < fn.length
      ? U (r) .bind (null, ...args)
      : fn (...args)
  )
The main difference here is not the ES6 syntax but how the arguments are partially applied to the function.
First version: curry(fn, a)
Second version: $curry.bind(null, ...args)
It works for only one step of currying (as needed in your example) if you change the first version (ES6) to fn.bind(null, ...args)
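In code, that one-step change might look like this (my sketch of the suggestion, not verbatim from the answer): the recursive curry(fn, a) call is replaced by fn.bind(null, ...a), so the partially applied result is the raw bound function, which simply ignores the extra arguments filter passes.

```javascript
// "es6" curry with the recursion swapped for a single bind step
const curry = (fn, initialArgs = []) => (
  (...args) => (
    a => a.length === fn.length ? fn(...a) : fn.bind(null, ...a)
  )([...initialArgs, ...args])
);

const match = curry((pattern, s) => s.match(pattern));
const filter = curry((f, xs) => xs.filter(f));

const hasQs = match(/q/i);          // one step: pattern bound, raw fn returned
const filterWithQs = filter(hasQs); // one step: predicate bound

console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]));
// [ 'quick', 'qwerty', 'quack' ]
```

Because hasQs is now a plain bound arrow function rather than another curry wrapper, the index and array arguments that filter supplies are silently dropped, and the match result is returned directly. Currying beyond one step, however, no longer works with this variant.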
The representation of the "Regular js" version in es6 syntax would look like this (you need the constant to have a name for the function in the recursive call):
const curry = (fn) => {
  const c = (...args) => (
    args.length < fn.length ? c.bind(null, ...args) : fn(...args)
  );
  return c;
}
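As a quick sanity check (hypothetical usage, not part of the original answer), this named-constant version handles the filter example the same way the "regular js" curry does, because the extra callback arguments simply overflow into fn(...args).

```javascript
const curry = (fn) => {
  const c = (...args) => (
    args.length < fn.length ? c.bind(null, ...args) : fn(...args)
  );
  return c;
};

const match = curry((pattern, s) => s.match(pattern));
const filter = curry((f, xs) => xs.filter(f));

const hasQs = match(/q/i);
const filterWithQs = filter(hasQs);

console.log(filterWithQs(["hello", "quick", "sand", "qwerty", "quack"]));
// [ 'quick', 'qwerty', 'quack' ]
```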

Trampoline recursion stack overflow

I have this recursive function sum which computes sum of all numbers that were passed to it.
function sum(num1, num2, ...nums) {
  if (nums.length === 0) { return num1 + num2; }
  return sum(num1 + num2, ...nums);
}

let xs = [];
for (let i = 0; i < 100; i++) { xs.push(i); }
console.log(sum(...xs));

xs = [];
for (let i = 0; i < 10000; i++) { xs.push(i); }
console.log(sum(...xs));
It works fine if only 'few' numbers are passed to it but overflows call stack otherwise. So I have tried to modify it a bit and use trampoline so that it can accept more arguments.
function _sum(num1, num2, ...nums) {
  if (nums.length === 0) { return num1 + num2; }
  return () => _sum(num1 + num2, ...nums);
}

const trampoline = fn => (...args) => {
  let res = fn(...args);
  while (typeof res === 'function') { res = res(); }
  return res;
}

const sum = trampoline(_sum);

let xs = [];
for (let i = 0; i < 10000; i++) { xs.push(i); }
console.log(sum(...xs));

xs = [];
for (let i = 0; i < 100000; i++) { xs.push(i); }
console.log(sum(...xs));
While the first version isn't able to handle 10000 numbers, the second is. But if I pass 100000 numbers to the second version I'm getting call stack overflow error again.
I would say that 100000 is not really that big of a number (might be wrong here), and I don't see any runaway closures that might have caused the memory leak.
Does anyone know what is wrong with it?
The other answer points out the limitation on the number of function arguments, but I wanted to remark on your trampoline implementation. The long computation we're running may want to return a function. If you use typeof res === 'function', it's no longer possible to compute a function as a return value!
Instead, encode your trampoline variants with some sort of unique identifiers
const bounce = (f, ...args) =>
  ({ tag: bounce, f: f, args: args })

const done = (value) =>
  ({ tag: done, value: value })

const trampoline = t =>
  { while (t && t.tag === bounce)
      t = t.f (...t.args)
    if (t && t.tag === done)
      return t.value
    else
      throw Error (`unsupported trampoline type: ${t.tag}`)
  }
Before we hop on, let's first get an example function to fix
const none =
  Symbol ()

const badsum = ([ n1, n2 = none, ...rest ]) =>
  n2 === none
    ? n1
    : badsum ([ n1 + n2, ...rest ])
We'll throw a range of numbers at it to see it work
const range = n =>
  Array.from
    ( Array (n + 1)
    , (_, n) => n
    )
console.log (range (10))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
console.log (badsum (range (10)))
// 55
But can it handle the big leagues?
console.log (badsum (range (1000)))
// 500500
console.log (badsum (range (20000)))
// RangeError: Maximum call stack size exceeded
See the results in your browser so far
const none =
  Symbol ()

const badsum = ([ n1, n2 = none, ...rest ]) =>
  n2 === none
    ? n1
    : badsum ([ n1 + n2, ...rest ])

const range = n =>
  Array.from
    ( Array (n + 1)
    , (_, n) => n
    )

console.log (range (10))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
console.log (badsum (range (1000)))
// 500500
console.log (badsum (range (20000)))
// RangeError: Maximum call stack size exceeded
Somewhere between 10000 and 20000 our badsum function unsurprisingly causes a stack overflow.
Besides renaming the function to goodsum we only have to encode the return types using our trampoline's variants
const goodsum = ([ n1, n2 = none, ...rest ]) =>
  n2 === none
    ? done (n1)
    : bounce (goodsum, [ n1 + n2, ...rest ])

console.log (trampoline (goodsum (range (1000))))
// 500500
console.log (trampoline (goodsum (range (20000))))
// 200010000
// No more stack overflow!
You can see the results of this program in your browser here. Now we can see that neither recursion nor the trampoline are at fault for this program being slow. Don't worry though, we'll fix that later.
const bounce = (f, ...args) =>
  ({ tag: bounce, f: f, args: args })

const done = (value) =>
  ({ tag: done, value: value })

const trampoline = t =>
  { while (t && t.tag === bounce)
      t = t.f (...t.args)
    if (t && t.tag === done)
      return t.value
    else
      throw Error (`unsupported trampoline type: ${t.tag}`)
  }

const none =
  Symbol ()

const range = n =>
  Array.from
    ( Array (n + 1)
    , (_, n) => n
    )

const goodsum = ([ n1, n2 = none, ...rest ]) =>
  n2 === none
    ? done (n1)
    : bounce (goodsum, [ n1 + n2, ...rest ])

console.log (trampoline (goodsum (range (1000))))
// 500500
console.log (trampoline (goodsum (range (20000))))
// 200010000
// No more stack overflow!
The extra call to trampoline can get annoying, and when you look at goodsum alone, it's not immediately apparent what done and bounce are doing there, unless maybe this was a very common convention in many of your programs.
We can better encode our looping intentions with a generic loop function. A loop is given a function that is recalled whenever the function calls recur. It looks like a recursive call, but really recur is constructing a value that loop handles in a stack-safe way.
The function we give to loop can have any number of parameters, and with default values. This is also convenient because we can now avoid the expensive ... destructuring and spreading by simply using an index parameter i initialized to 0. The caller of the function does not have the ability to access these variables outside of the loop call
The last advantage here is that the reader of goodsum can clearly see the loop encoding and the explicit done tag is no longer necessary. The user of the function does not need to worry about calling trampoline either as it's already taken care of for us in loop
const goodsum = (ns = []) =>
  loop ((sum = 0, i = 0) =>
    i >= ns.length
      ? sum
      : recur (sum + ns[i], i + 1))

console.log (goodsum (range (1000)))
// 500500
console.log (goodsum (range (20000)))
// 200010000
console.log (goodsum (range (999999)))
// 499999500000
Here's our loop and recur pair now. This time we expand upon our { tag: ... } convention using a tagging module
const recur = (...values) =>
  tag (recur, { values })

const loop = f =>
  { let acc = f ()
    while (is (recur, acc))
      acc = f (...acc.values)
    return acc
  }

const T =
  Symbol ()

const tag = (t, x) =>
  Object.assign (x, { [T]: t })

const is = (t, x) =>
  t && x[T] === t
Run it in your browser to verify the results
const T =
  Symbol ()

const tag = (t, x) =>
  Object.assign (x, { [T]: t })

const is = (t, x) =>
  t && x[T] === t

const recur = (...values) =>
  tag (recur, { values })

const loop = f =>
  { let acc = f ()
    while (is (recur, acc))
      acc = f (...acc.values)
    return acc
  }

const range = n =>
  Array.from
    ( Array (n + 1)
    , (_, n) => n
    )

const goodsum = (ns = []) =>
  loop ((sum = 0, i = 0) =>
    i >= ns.length
      ? sum
      : recur (sum + ns[i], i + 1))

console.log (goodsum (range (1000)))
// 500500
console.log (goodsum (range (20000)))
// 200010000
console.log (goodsum (range (999999)))
// 499999500000
extra
My brain has been stuck in anamorphism gear for a few months and I was curious if it was possible to implement a stack-safe unfold using the loop function introduced above
Below, we look at an example program which generates the entire sequence of sums up to n. Think of it as showing the work to arrive at the answer for the goodsum program above. The total sum up to n is the last element in the array.
This is a good use case for unfold. We could write this using loop directly, but the point of this was to stretch the limits of unfold so here goes
const sumseq = (n = 0) =>
  unfold
    ( (loop, done, [ m, sum ]) =>
        m > n
          ? done ()
          : loop (sum, [ m + 1, sum + m ])
    , [ 1, 0 ]
    )

console.log (sumseq (10))
// [ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45 ]
// +1 ↗ +2 ↗ +3 ↗ +4 ↗ +5 ↗ +6 ↗ +7 ↗ +8 ↗ +9 ↗ ...
If we used an unsafe unfold implementation, we could blow the stack
// direct recursion, stack-unsafe!
const unfold = (f, initState) =>
  f ( (x, nextState) => [ x, ...unfold (f, nextState) ]
    , () => []
    , initState
    )

console.log (sumseq (20000))
// RangeError: Maximum call stack size exceeded
After playing with it a little bit, it is indeed possible to encode unfold using our stack-safe loop. Cleaning up the ... spread syntax using a push effect makes things a lot quicker too
const push = (xs, x) =>
  (xs .push (x), xs)

const unfold = (f, init) =>
  loop ((acc = [], state = init) =>
    f ( (x, nextState) => recur (push (acc, x), nextState)
      , () => acc
      , state
      ))
With a stack-safe unfold, our sumseq function works a treat now
console.time ('sumseq')
const result = sumseq (20000)
console.timeEnd ('sumseq')
console.log (result)
// sumseq: 23 ms
// [ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45, ..., 199990000 ]
Verify the result in your browser below
const recur = (...values) =>
  tag (recur, { values })

const loop = f =>
  { let acc = f ()
    while (is (recur, acc))
      acc = f (...acc.values)
    return acc
  }

const T =
  Symbol ()

const tag = (t, x) =>
  Object.assign (x, { [T]: t })

const is = (t, x) =>
  t && x[T] === t

const push = (xs, x) =>
  (xs .push (x), xs)

const unfold = (f, init) =>
  loop ((acc = [], state = init) =>
    f ( (x, nextState) => recur (push (acc, x), nextState)
      , () => acc
      , state
      ))

const sumseq = (n = 0) =>
  unfold
    ( (loop, done, [ m, sum ]) =>
        m > n
          ? done ()
          : loop (sum, [ m + 1, sum + m ])
    , [ 1, 0 ]
    )

console.time ('sumseq')
const result = sumseq (20000)
console.timeEnd ('sumseq')
console.log (result)
// sumseq: 23 ms
// [ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45, ..., 199990000 ]
Browsers have practical limits on the number of arguments a function can take
You can change the sum signature to accept an array rather than a varying number of arguments, and use destructuring to keep the syntax/readability similar to what you have. This "fixes" the stack overflow error, but is incredibly slow :D
function _sum([num1, num2, ...nums]) { /* ... */ }
I.e., if you're running into problems with maximum argument counts, your recursive/trampoline approach is probably going to be too slow to work with...
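Filled out, that sketch might look like the following (my completion, reusing the trampoline from the question): the recursive step now spreads into an array literal rather than an argument list, so no argument-count limit applies, but each step still copies the whole remaining array, which is why it is so slow.

```javascript
// Array-destructuring variant of _sum; the thunk defers the next step
function _sum([num1, num2, ...nums]) {
  if (nums.length === 0) { return num1 + num2; }
  return () => _sum([num1 + num2, ...nums]); // copies the rest each step
}

const trampoline = fn => (...args) => {
  let res = fn(...args);
  while (typeof res === 'function') { res = res(); }
  return res;
};

const sum = trampoline(_sum);

const xs = [];
for (let i = 0; i < 1000; i++) { xs.push(i); }
console.log(sum(xs)); // 499500 — one array argument, no spread into the call
```

Note the O(n) copy per step makes the whole run quadratic, which matches the answer's warning about speed.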
The other answer already explained the issue with your code. This answer demonstrates that trampolines are sufficiently fast for most array based computations and offer a higher level of abstraction:
// trampoline
const loop = f => {
  let acc = f();
  while (acc && acc.type === recur)
    acc = f(...acc.args);
  return acc;
};

const recur = (...args) =>
  ({type: recur, args});

// sum
const sum = xs => {
  const len = xs.length;
  return loop(
    (acc = 0, i = 0) =>
      i === len
        ? acc
        : recur(acc + xs[i], i + 1));
};

// and run...
const xs = Array(1e5)
  .fill(0)
  .map((x, i) => i);

console.log(sum(xs));
If a trampoline-based computation causes performance problems, you can still replace it with a bare loop.
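For example (a hypothetical drop-in, not from the answer), the loop/recur encoding of sum above collapses mechanically into a plain loop:

```javascript
// The same fold as the trampolined sum, as a bare loop
const sum = xs => {
  let acc = 0;
  for (let i = 0; i < xs.length; i++) {
    acc += xs[i]; // corresponds to recur(acc + xs[i], i + 1)
  }
  return acc;
};

const xs = Array(1e5).fill(0).map((x, i) => i);
console.log(sum(xs)); // 4999950000
```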

Convert array to rest argument

How do I make chain use a spread-parameter (...chain) instead of an array?
let newData,
  newArgs,
  chain = function (data, myargs) {
    if (myargs.length === 0) {
      return;
    }
    newData = myargs[0](data);
    myargs.splice(0, 1);
    newArgs = myargs;
    // newargs === array
    chain(newData, newArgs);
  },
  add1 = (num) => {
    return num + 1;
  };
When I call the function, I have to do it like this: chain(5, [add1, add1, add1]).
But I want: chain(5, add1, add1, add1)
The problem is that newArgs (the function parameter inside of the recursion) is converted into an array after the first function call.
https://jsfiddle.net/btx5sfwh/
You probably want to use rest arguments, then spread the array when calling the function recursively
let newData,
  newArgs,
  chain = (data, ...myargs) => {
    if (myargs.length === 0) {
      return data;
    } else {
      newData = myargs[0](data);
      myargs.splice(0, 1);
      newArgs = myargs;
      chain(newData, ...newArgs);
    }
  },
  add1 = (num) => {
    return num + 1;
  };

chain(5, add1, add1, add1);
console.log(newData); // 8
You can write chain in a much nicer way
const chain = (x, f, ...fs) =>
  f === undefined
    ? x
    : chain (f (x), ...fs)

const add1 = x =>
  x + 1

console.log (chain (5, add1, add1, add1)) // 8
console.log (chain (5)) // 5
Rest parameters and spread arguments are somewhat expensive though - we generally want to limit our dependency on these, but we don't want to sacrifice the API we want either.
Don't worry, there's an even higher-level way to think about this problem. We can preserve the variadic API but avoid the costly spread argument.
const chain = (x, ...fs) =>
  fs.reduce((x, f) => f (x), x)

const add1 = x =>
  x + 1

console.log (chain (5, add1, add1, add1)) // 8
console.log (chain (5)) // 5

How to chain map and filter functions in the correct order

I really like chaining Array.prototype.map, filter and reduce to define a data transformation. Unfortunately, in a recent project that involved large log files, I could no longer get away with looping through my data multiple times...
My goal:
I want to create a function that chains .filter and .map methods by, instead of mapping over an array immediately, composing a function that loops over the data once. I.e.:
const DataTransformation = () => ({
  map: fn => (/* ... */),
  filter: fn => (/* ... */),
  run: arr => (/* ... */)
});

const someTransformation = DataTransformation()
  .map(x => x + 1)
  .filter(x => x > 3)
  .map(x => x / 2);

// returns [ 2, 2.5 ] without creating [ 2, 3, 4, 5] and [4, 5] in between
const myData = someTransformation.run([ 1, 2, 3, 4]);
My attempt:
Inspired by this answer and this blogpost I started writing a Transduce function.
const filterer = pred => reducer => (acc, x) =>
  pred(x) ? reducer(acc, x) : acc;

const mapper = map => reducer => (acc, x) =>
  reducer(acc, map(x));

const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
  map: map => Transduce(mapper(map)(reducer)),
  filter: pred => Transduce(filterer(pred)(reducer)),
  run: arr => arr.reduce(reducer, [])
});
The problem:
The problem with the Transduce snippet above is that it runs “backwards”... The last method I chain is the first to be executed:
const someTransformation = Transduce()
  .map(x => x + 1)
  .filter(x => x > 3)
  .map(x => x / 2);

// Instead of [ 2, 2.5 ] this returns []
// starts with (x / 2) -> [0.5, 1, 1.5, 2]
// then filters (x > 3) -> []
const myData = someTransformation.run([ 1, 2, 3, 4]);
Or, in more abstract terms:
Go from:
Transducer(concat).map(f).map(g) == (acc, x) => concat(acc, f(g(x)))
To:
Transducer(concat).map(f).map(g) == (acc, x) => concat(acc, g(f(x)))
Which is similar to:
mapper(f) (mapper(g) (concat))
I think I understand why it happens, but I can't figure out how to fix it without changing the “interface” of my function.
The question:
How can I make my Transduce method chain filter and map operations in the correct order?
Notes:
I'm only just learning about the naming of some of the things I'm trying to do. Please let me know if I've incorrectly used the Transduce term or if there are better ways to describe the problem.
I'm aware I can do the same using a nested for loop:
const push = (acc, x) => (acc.push(x), acc);

const ActionChain = (actions = []) => {
  const run = arr =>
    arr.reduce((acc, x) => {
      for (let i = 0, action; i < actions.length; i += 1) {
        action = actions[i];
        if (action.type === "FILTER") {
          if (action.fn(x)) {
            continue;
          }
          return acc;
        } else if (action.type === "MAP") {
          x = action.fn(x);
        }
      }
      acc.push(x);
      return acc;
    }, []);

  const addAction = type => fn =>
    ActionChain(push(actions, { type, fn }));

  return {
    map: addAction("MAP"),
    filter: addAction("FILTER"),
    run
  };
};
// Compare to regular chain to check if
// there's a performance gain
// Admittedly, in this example, it's quite small...
const naiveApproach = {
  run: arr =>
    arr
      .map(x => x + 3)
      .filter(x => x % 3 === 0)
      .map(x => x / 3)
      .filter(x => x < 40)
};

const actionChain = ActionChain()
  .map(x => x + 3)
  .filter(x => x % 3 === 0)
  .map(x => x / 3)
  .filter(x => x < 40)

const testData = Array.from(Array(100000), (x, i) => i);

console.time("naive");
const result1 = naiveApproach.run(testData);
console.timeEnd("naive");

console.time("chain");
const result2 = actionChain.run(testData);
console.timeEnd("chain");

console.log("equal:", JSON.stringify(result1) === JSON.stringify(result2));
Here's my attempt in a stack snippet:
const filterer = pred => reducer => (acc, x) =>
  pred(x) ? reducer(acc, x) : acc;

const mapper = map => reducer => (acc, x) => reducer(acc, map(x));

const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
  map: map => Transduce(mapper(map)(reducer)),
  filter: pred => Transduce(filterer(pred)(reducer)),
  run: arr => arr.reduce(reducer, [])
});

const sameDataTransformation = Transduce()
  .map(x => x + 5)
  .filter(x => x % 2 === 0)
  .map(x => x / 2)
  .filter(x => x < 4);

// It's backwards:
// [-1, 0, 1, 2, 3]
// [-0.5, 0, 0.5, 1, 1.5]
// [0]
// [5]
console.log(sameDataTransformation.run([-1, 0, 1, 2, 3, 4, 5]));
before we know better
I really like chaining ...
I see that, and I'll appease you, but you'll come to understand that forcing your program through a chaining API is unnatural, and more trouble than it's worth in most cases.
const Transduce = (reducer = (acc, x) => (acc.push(x), acc)) => ({
  map: map => Transduce(mapper(map)(reducer)),
  filter: pred => Transduce(filterer(pred)(reducer)),
  run: arr => arr.reduce(reducer, [])
});
I think I understand why it happens, but I can't figure out how to fix it without changing the “interface” of my function.
The problem is indeed with your Transduce constructor. Your map and filter methods are stacking map and pred on the outside of the transducer chain, instead of nesting them inside.
Below, I've implemented your Transduce API that evaluates the maps and filters in correct order. I've also added a log method so that we can see how Transduce is behaving
const Transduce = (f = k => k) => ({
  map: g =>
    Transduce(k =>
      f ((acc, x) => k(acc, g(x)))),
  filter: g =>
    Transduce(k =>
      f ((acc, x) => g(x) ? k(acc, x) : acc)),
  log: s =>
    Transduce(k =>
      f ((acc, x) => (console.log(s, x), k(acc, x)))),
  run: xs =>
    xs.reduce(f((acc, x) => acc.concat(x)), [])
})
const foo = nums => {
  return Transduce()
    .log('greater than 2?')
    .filter(x => x > 2)
    .log('\tsquare:')
    .map(x => x * x)
    .log('\t\tless than 30?')
    .filter(x => x < 30)
    .log('\t\t\tpass')
    .run(nums)
}

// keep square(n), forall n of nums
// where n > 2
// where square(n) < 30
console.log(foo([1,2,3,4,5,6,7]))
// => [ 9, 16, 25 ]
untapped potential
Inspired by this answer ...
In reading that answer, you might overlook the generic quality of Trans as it was written there. Here, our Transduce only attempts to work with Arrays, but really it can work with any type that has an empty value ([]) and a concat method. These two properties make up a category called Monoids, and we'd be doing ourselves a disservice if we didn't take advantage of a transducer's ability to work with any type in this category.
Above, we hard-coded the initial accumulator [] in the run method, but this should probably be supplied as an argument – much like we do with iterable.reduce(reducer, initialAcc).
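For illustration, here's a minimal sketch of the corrected continuation-based Transduce where run takes the initial accumulator as a hypothetical extra parameter (only map is shown, to keep it short):

```javascript
const Transduce = (f = k => k) => ({
  map: g =>
    Transduce(k =>
      f ((acc, x) => k(acc, g(x)))),
  // hypothetical signature: the caller supplies the initial accumulator,
  // just like iterable.reduce(reducer, initialAcc)
  run: (init, xs) =>
    xs.reduce(f((acc, x) => acc.concat(x)), init)
})

console.log(Transduce().map(x => x + 1).run([], [1, 2, 3]))
// => [ 2, 3, 4 ]
console.log(Transduce().map(x => x * 2).run([0], [1, 2]))
// => [ 0, 2, 4 ]
```

The transformation itself is unchanged; only the starting accumulator is now under the caller's control.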
Aside from that, both implementations are essentially equivalent. The biggest difference is that in the Trans implementation provided in the linked answer, Trans itself is a monoid, but Transduce here is not. Trans neatly implements composition of transducers in the concat method, whereas Transduce (above) has composition mixed within each method. Making it a monoid allows us to reason about Trans the same way we do all other monoids, instead of having to understand it as some specialized chaining interface with unique map, filter, and run methods.
I would advise building from Trans instead of making your own custom API.
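To make that advice concrete, here's a sketch of working directly from Trans: transducers are plain values composed with concat, and no chaining API is needed (the Trans definitions are repeated from the final snippet later in this answer):

```javascript
// Trans monoid and helpers, as defined in the final snippet below
const Trans = f => ({
  runTrans: f,
  concat: ({runTrans: g}) =>
    Trans(k => f(g(k)))
})
Trans.empty = () =>
  Trans(k => k)
Array.empty = () => []
const transduce = (t, m, xs) =>
  xs.reduce(t.runTrans((acc, x) => acc.concat(x)), m.empty())
const mapper = f =>
  Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
  Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)

// compose transducers directly with concat – left-to-right order
const t = filterer(x => x > 2)
  .concat(mapper(x => x * x))
  .concat(filterer(x => x < 30))

console.log(transduce(t, Array, [1, 2, 3, 4, 5, 6, 7]))
// => [ 9, 16, 25 ]
```

This is the same pipeline as foo above, minus the log steps, and it produces the same result.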
have your cake and eat it too
So we learned the valuable lesson of uniform interfaces and we understand that Trans is inherently simple. But, you still want that sweet chaining API. OK, ok...
We're going to implement Transduce one more time, but this time we'll do so using the Trans monoid. Here, Transduce holds a Trans value instead of a continuation (Function).
Everything else stays the same – foo takes 1 tiny change and produces an identical output.
// generic transducers
const mapper = f =>
Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
const logger = label =>
Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))
// magic chaining api made with Trans monoid
const Transduce = (t = Trans.empty()) => ({
map: f =>
Transduce(t.concat(mapper(f))),
filter: f =>
Transduce(t.concat(filterer(f))),
log: s =>
Transduce(t.concat(logger(s))),
run: (m, xs) =>
transduce(t, m, xs)
})
// when we run, we must specify the type to transduce
// .run(Array, nums)
// instead of
// .run(nums)
The snippet below shows the final implementation – of course you could skip defining a separate mapper, filterer, and logger, and instead define those directly on Transduce. I think this reads nicer, though.
// Trans monoid
const Trans = f => ({
runTrans: f,
concat: ({runTrans: g}) =>
Trans(k => f(g(k)))
})
Trans.empty = () =>
Trans(k => k)
const transduce = (t, m, xs) =>
xs.reduce(t.runTrans((acc, x) => acc.concat(x)), m.empty())
// complete Array monoid implementation
Array.empty = () => []
// generic transducers
const mapper = f =>
Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)
const logger = label =>
Trans(k => (acc, x) => (console.log(label, x), k(acc, x)))
// now implemented with Trans monoid
const Transduce = (t = Trans.empty()) => ({
map: f =>
Transduce(t.concat(mapper(f))),
filter: f =>
Transduce(t.concat(filterer(f))),
log: s =>
Transduce(t.concat(logger(s))),
run: (m, xs) =>
transduce(t, m, xs)
})
// this stays exactly the same
const foo = nums => {
return Transduce()
.log('greater than 2?')
.filter(x => x > 2)
.log('\tsquare:')
.map(x => x * x)
.log('\t\tless than 30?')
.filter(x => x < 30)
.log('\t\t\tpass')
.run(Array, nums)
}
// output is exactly the same
console.log(foo([1,2,3,4,5,6,7]))
// => [ 9, 16, 25 ]
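As a sanity check on the claim that Trans works with any type that has an empty value and a concat method, here's the same machinery transducing into a String. Strings already have concat; String.empty is our own addition (not a standard property), mirroring Array.empty above:

```javascript
// Trans monoid and helpers, repeated from the snippet above
const Trans = f => ({
  runTrans: f,
  concat: ({runTrans: g}) =>
    Trans(k => f(g(k)))
})
Trans.empty = () =>
  Trans(k => k)
const transduce = (t, m, xs) =>
  xs.reduce(t.runTrans((acc, x) => acc.concat(x)), m.empty())
const mapper = f =>
  Trans(k => (acc, x) => k(acc, f(x)))
const filterer = f =>
  Trans(k => (acc, x) => f(x) ? k(acc, x) : acc)

// String is a monoid too: '' is the empty value, concat is built in
String.empty = () => ''

const t = mapper(s => s.toUpperCase())
  .concat(filterer(s => s !== ' '))

console.log(transduce(t, String, [...'so cool']))
// => SOCOOL
```

Nothing in Trans or transduce changed; only the monoid supplied to run is different.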
wrap up
So we started with a mess of lambdas and then made things simpler using a monoid. The Trans monoid provides distinct advantages in that the monoid interface is known and the generic implementation is extremely simple. But we're stubborn or maybe we have goals to fulfill that are not set by us – we decide to build the magic Transduce chaining API, but we do so using our rock-solid Trans monoid which gives us all the power of Trans but also keeps complexity nicely compartmentalised.
dot chaining fetishists anonymous
Here are a couple of other recent answers I wrote about method chaining:
Is there any way to make a functions return accessible via a property?
Chaining functions and using an anonymous function
Pass result of functional chain to function
I think you need to change the order of your implementations:
const filterer = pred => reducer => x => {
  const a = reducer(x);
  return pred(a) ? a : undefined;
};
const mapper = map => reducer => (x) => map(reducer(x));
Then you need to change the run command to:
run: arr => arr.reduce((a, b) => a.concat([reducer(b)]), [])
And the default reducer must be
x => x
However, this way the filter won't work. You may throw undefined in the filter function and catch it in the run function:
run: arr => arr.reduce((a, b) => {
  try {
    a.push(reducer(b));
  } catch (e) {}
  return a;
}, [])
const filterer = pred => reducer => x => {
  const a = reducer(x);
  if (!pred(a)) {
    throw undefined;
  }
  return a;
};
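Putting the pieces of this answer together into one runnable sketch, the same kind of pipeline that ran backwards at the top of the page now evaluates left to right. Throwing undefined as control flow is hacky (as noted above), but it does work:

```javascript
const mapper = map => reducer => x => map(reducer(x));
const filterer = pred => reducer => x => {
  const a = reducer(x);
  if (!pred(a)) {
    throw undefined; // hacky control flow: signals "filtered out"
  }
  return a;
};

const Transduce = (reducer = x => x) => ({
  map: map => Transduce(mapper(map)(reducer)),
  filter: pred => Transduce(filterer(pred)(reducer)),
  run: arr => arr.reduce((a, b) => {
    try {
      a.push(reducer(b));
    } catch (e) {}
    return a;
  }, [])
});

console.log(
  Transduce()
    .map(x => x + 5)
    .filter(x => x % 2 === 0)
    .map(x => x / 2)
    .filter(x => x < 4)
    .run([-1, 0, 1, 2, 3, 4, 5])
)
// => [ 2, 3 ]
```

Each method wraps the previous reducer on the inside, so the first method added runs first, which is what fixes the ordering.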
However, all in all, I think a for loop is much more elegant in this case...
