I find myself writing code like this over and over again, and thinking: there must be a known pattern for this. But plowing through the documentation of different functional libraries like Ramda, I can't quite find a match. What should I use?
var arrayOfPersons = [{ firstName: 'Jesper', lastName: 'Jensen', income: 120000, member: true }/* ... a whole lot of persons */];
function createPredicateBuilder(config) {
return {
build() {
var fnPredicate = (p) => true;
if (typeof config.minIncome == 'number') {
const prev = fnPredicate; // capture the previous predicate; referencing fnPredicate directly would recurse into itself
fnPredicate = (p) => prev(p) && config.minIncome <= p.income;
}
if (typeof config.member == 'boolean') {
const prev = fnPredicate;
fnPredicate = (p) => prev(p) && config.member === p.member;
}
// .. continue to support more predicate parts.
return fnPredicate; // without this return, .build() yields undefined
},
map(newConfig) {
return createPredicateBuilder({ ...config, ...newConfig });
}
};
}
var predicateBuilder = createPredicateBuilder({});
// We collect predicates
predicateBuilder = predicateBuilder.map({ minIncome: 200000 });
// ...
predicateBuilder = predicateBuilder.map({ member: false });
// Now we want to query...
console.log(arrayOfPersons.filter(predicateBuilder.build()));
I create a builder instance, and each call to map creates a new instance, returning an object with build/map methods. The state is captured in the function's scope.
Sometime in the future, I want to get my collected function (or result).
I think this is FP, but what is this pattern called, and are there any libs that make it easier?
Is my oop-inspired naming of things (builder/build) blinding me?
You could use the where function in Ramda to test against a spec object describing your predicates. Your code could then build the spec object dynamically according to the passed config.
https://ramdajs.com/docs/#where
Example from the Ramda docs:
// pred :: Object -> Boolean
const pred = R.where({
a: R.equals('foo'),
b: R.complement(R.equals('bar')),
x: R.gt(R.__, 10),
y: R.lt(R.__, 20)
});
pred({a: 'foo', b: 'xxx', x: 11, y: 19}); //=> true
pred({a: 'xxx', b: 'xxx', x: 11, y: 19}); //=> false
pred({a: 'foo', b: 'bar', x: 11, y: 19}); //=> false
pred({a: 'foo', b: 'xxx', x: 10, y: 19}); //=> false
pred({a: 'foo', b: 'xxx', x: 11, y: 20}); //=> false
To elaborate, you could "build" the spec object by having a set of functions that return a new spec with an additional predicate, e.g.:
function setMinIncome(oldSpec, minIncome) {
return R.merge(oldSpec, {income: R.gt(R.__, minIncome)})
}
Functional programming is less about patterns and more about laws. Laws allow the programmer to reason about their programs like a mathematician can reason about an equation.
Let's look at adding numbers. Adding is a binary operation (it takes two numbers) and always produces another number.
1 + 2 = 3
2 + 1 = 3
1 + (2 + 3) = 6
(1 + 2) + 3 = 6
((1 + 2) + 3) + 4 = 10
(1 + 2) + (3 + 4) = 10
1 + (2 + 3) + 4 = 10
1 + (2 + (3 + 4)) = 10
We can group the additions however we like and still get the same result. This property is associativity and it forms the basis of the associative law. (The first two lines, where the operands swap order, additionally show commutativity, but it is associativity we need here.)
Adding zero is somewhat interesting, or taken for granted.
1 + 0 = 1
0 + 1 = 1
3 + 0 = 3
0 + 3 = 3
Adding zero to any number will not change that number. Zero is known as the identity element.
These two things, (1) an associative binary operation and (2) an identity element, make up a monoid.
If we can ...
encode your predicates as elements of a domain
create a binary operation for the elements
determine the identity element
... then we receive the benefits of belonging to the monoid category, allowing us to reason about our program in an equational way. There's no pattern to learn, only laws to uphold.
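As a warm-up, both laws are easy to verify for numbers in code; the predicate monoid built below follows exactly the same shape. This check is my own addition, not part of the original answer.

```javascript
// The (Number, +, 0) monoid: an associative binary operation plus an identity element.
const add = (a, b) => a + b;
const zero = 0;

// Associativity: grouping doesn't matter.
console.log(add(add(1, 2), 3) === add(1, add(2, 3))); // true

// Identity: adding zero changes nothing.
console.log(add(7, zero) === 7 && add(zero, 7) === 7); // true

// Because we have a monoid, folding any list of numbers is well-defined,
// even the empty list, whose fold is simply the identity element.
console.log([1, 2, 3, 4].reduce(add, zero)); // 10
console.log([].reduce(add, zero));           // 0
```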
1. Making a domain
Getting your data right is tricky, even more so in a multi-paradigm language like JavaScript. This question is about functional programming though so functions would be a good go-to.
In your program ...
build() {
var fnPredicate = (p) => true;
if (typeof config.minIncome == 'number') {
const prev = fnPredicate;
fnPredicate = (p) => prev(p) && config.minIncome <= p.income;
}
if (typeof config.member == 'boolean') {
const prev = fnPredicate;
fnPredicate = (p) => prev(p) && config.member === p.member;
}
// .. continue to support more predicate parts.
return fnPredicate;
},
... we see a mixture of the program level and the data level. The program is hard-coded to understand only inputs with these specific keys (minIncome, member) and their respective types (number and boolean), as well as the comparison operation used for each predicate.
Let's keep it really simple. Let's take a static predicate
item.name === "Sally"
If I wanted this same predicate but compared using a different item, I would wrap this expression in a function and make item a parameter of the function.
const nameIsSally = item =>
item.name === "Sally"
console .log
( nameIsSally ({ name: "Alice" }) // false
, nameIsSally ({ name: "Sally" }) // true
, nameIsSally ({ name: "NotSally" }) // false
, nameIsSally ({}) // false
)
This predicate is easy to use, but it only works to check for the name Sally. We repeat the process by wrapping the expression in a function and make name a parameter of the function. This general technique is called abstraction and it's used all the time in functional programming.
const nameIs = name => item =>
item.name === name
const nameIsSally =
nameIs ("Sally")
const nameIsAlice =
nameIs ("Alice")
console .log
( nameIsSally ({ name: "Alice" }) // false
, nameIsSally ({ name: "Sally" }) // true
, nameIsAlice ({ name: "Alice" }) // true
, nameIsAlice ({ name: "Sally" }) // false
)
As you can see, it doesn't matter that the expression we wrapped was already a function. JavaScript has first-class support for functions, which means they can be treated as values. Programs that return a function or receive a function as input are called higher-order functions.
Above, our predicates are represented as functions which take a value of any type (a) and produce a boolean. We will denote this as a -> Boolean. So each predicate is an element of our domain, and that domain is all functions a -> Boolean.
2. The Binary Operation
We'll do the exercise of abstraction one more time. Let's take a static combined predicate expression.
p1 (item) && p2 (item)
I can re-use this expression for other items by wrapping it in a function and making item a parameter of the function.
const bothPredicates = item =>
p1 (item) && p2 (item)
But we want to be able to combine any predicates. Again, we wrap the expression we want to reuse in a function and assign parameters for the variables, this time p1 and p2.
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
Before we move on, let's check our domain and ensure our binary operation and is correct. The binary operation must:
take as input two (2) elements from our domain (a -> Boolean)
return as output an element of our domain
the operation must be associative: f(f(a, b), c) == f(a, f(b, c))
Indeed, and accepts two elements of our domain p1 and p2. The return value is item => ... which is a function receiving an item and returns p1 (item) && p2 (item). Each is a predicate that accepts a single value and returns a Boolean. This simplifies to Boolean && Boolean which we know is another Boolean. To summarize, and takes two predicates and returns a new predicate, which is precisely what the binary operation must do.
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
const nameIs = x => item =>
item.name === x
const minIncome = x => item =>
x <= item.income
const query =
and
( nameIs ("Alice")
, minIncome (5)
)
console .log
( query ({ name: "Sally", income: 3}) // false
, query ({ name: "Alice", income: 3 }) // false
, query ({ name: "Alice", income: 7 }) // true
)
3. The Identity Element
The identity element, when combined with any other element, must not change that element. So for any predicate p and the identity predicate empty, the following must hold
and (p, empty) == p
and (empty, p) == p
We can represent the empty predicate as a function that takes any element and always returns true.
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
const empty = item =>
true
const p = x =>
x > 5
console .log
( and (p, empty) (3) === p (3) // true
, and (empty, p) (3) === p (3) // true
)
Power of Laws
Now that we have a binary operation and an identity element, we can combine an arbitrary amount of predicates. We define sum which plugs our monoid directly into reduce.
// --- predicate monoid ---
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
const empty = item =>
true
const sum = (...predicates) =>
predicates .reduce (and, empty) // [1,2,3,4] .reduce (add, 0)
// --- individual predicates ---
const nameIs = x => item =>
item.name === x
const minIncome = x => item =>
x <= item.income
const isTeenager = item =>
item.age > 12 && item.age < 20
// --- demo ---
const query =
sum
( nameIs ("Alice")
, minIncome (5)
, isTeenager
)
console .log
( query ({ name: "Sally", income: 8, age: 14 }) // false
, query ({ name: "Alice", income: 3, age: 21 }) // false
, query ({ name: "Alice", income: 7, age: 29 }) // false
, query ({ name: "Alice", income: 9, age: 17 }) // true
)
The empty sum predicate still returns a valid result. This is like the empty query that matches all results.
const query =
sum ()
console .log
( query ({ foo: "bar" }) // true
)
Free Convenience
Using functions to encode our predicates makes them useful in other ways too. If you have an array of items, you could use a predicate p directly in .find or .filter. Of course this is true for predicates created using and and sum too.
const p =
sum (pred1, pred2, pred3, ...)
const items =
[ { name: "Alice" ... }
, { name: "Sally" ... }
]
const firstMatch =
items .find (p)
const allMatches =
items .filter (p)
Make it a Module
You don't want to define globals like and, empty, and sum. When you package this code, use a module of some sort.
// Predicate.js
const and = ...
const empty = ...
const sum = ...
export { and, empty, sum }
When you use it
import { sum } from './Predicate'
const query =
sum (...)
const result =
arrayOfPersons .filter (query)
Quiz
Notice the similarity between our predicate identity element and the identity element for &&
T && ? == T
? && T == T
F && ? == F
? && F == F
We can replace all ? above with T and the equations will hold. Below, what do you think the identity element is for ||?
T || ? == T
? || T == T
F || ? == F
? || F == F
What's the identity element for *, binary multiplication?
n * ? = n
? * n = n
How about the identity element for arrays or lists?
concat (l, ?) == l
concat (?, l) == l
Having Fun?
I think you'll enjoy contravariant functors. In the same arena, transducers. There's a demo showing how to build a higher-level API around these low-level modules too.
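To give a taste of the contravariant idea: predicates are the textbook contravariant functor, supporting a contramap that adapts a predicate to a new input type by transforming the input first. A minimal sketch; the names here are illustrative, not from a library.

```javascript
// contramap :: (b -> a) -> (a -> Boolean) -> (b -> Boolean)
// Transform the input *before* it reaches the predicate.
const contramap = f => pred => x => pred(f(x));

// A plain numeric predicate.
const atLeast5 = x => x >= 5;

// Reuse the numeric predicate on person objects by focusing on .income.
const richEnough = contramap(p => p.income)(atLeast5);

console.log(richEnough({ income: 7 })); // true
console.log(richEnough({ income: 3 })); // false
```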
This is the Builder design pattern. Although it has been adapted to a more functional approach, the premise stays the same: you have an entity that collects information via .map() (more traditionally .withX() methods, which correspond to setters) and then produces a new object from all the collected data via .build().
To make this more recognisable, here is a more Object Oriented approach that still does the same thing:
class Person {
constructor(firstName, lastName, age) {
this.firstName = firstName;
this.lastName = lastName;
this.age = age;
}
toString() {
return `I am ${this.firstName} ${this.lastName} and I am ${this.age} years old`;
}
}
class PersonBuilder {
withFirstName(firstName) {
this.firstName = firstName;
return this;
}
withLastName(lastName) {
this.lastName = lastName;
return this;
}
withAge(age) {
this.age = age;
return this;
}
build() {
return new Person(this.firstName, this.lastName, this.age);
}
}
//make builder
const builder = new PersonBuilder();
//collect data for the object construction
builder
.withFirstName("Fred")
.withLastName("Bloggs")
.withAge(42);
//build the object with the collected data
const person = builder.build();
console.log(person.toString())
I'd stick to a simple array of (composed) predicate functions and a reducer of either
And (f => g => x => f(x) && g(x)), seeded with True (_ => true).
Or (f => g => x => f(x) || g(x)), seeded with False (_ => false).
For example:
const True = _ => true;
const False = _ => false;
const Or = (f, g) => x => f(x) || g(x);
Or.seed = False;
const And = (f, g) => x => f(x) && g(x);
And.seed = True;
const Filter = (fs, operator) => fs.reduce(operator, operator.seed);
const oneOrTwo =
Filter([x => x === 1, x => x === 2], Or);
const evenAndBelowTen =
Filter([x => x % 2 === 0, x => x < 10], And);
const oneToHundred = Array.from(Array(100), (_, i) => i);
console.log(
"One or two",
oneToHundred.filter(oneOrTwo),
"Even and below 10",
oneToHundred.filter(evenAndBelowTen)
);
You can even create complicated filter logic by nesting And/Or structures:
const True = _ => true;
const False = _ => false;
const Or = (f, g) => x => f(x) || g(x);
Or.seed = False;
const And = (f, g) => x => f(x) && g(x);
And.seed = True;
const Filter = (fs, operator) => fs.reduce(operator, operator.seed);
const mod = x => y => y % x === 0;
const oneToHundred = Array.from(Array(100), (_, i) => i);
console.log(
"Divisible by (3 and 5), or (3 and 7)",
oneToHundred.filter(
Filter(
[
Filter([mod(3), mod(5)], And),
Filter([mod(3), mod(7)], And)
],
Or
)
)
);
Or, with your own example situation:
const comp = (f, g) => x => f(g(x));
const gt = x => y => y > x;
const eq = x => y => x === y;
const prop = k => o => o[k];
const And = (f, g) => x => f(x) && g(x);
const True = _ => true;
const Filter = (fs) => fs.reduce(And, True);
const richMemberFilter = Filter(
[
comp(gt(200000), prop("income")),
comp(eq(true), prop("member"))
]
);
console.log(
"Rich members:",
data().filter(richMemberFilter).map(prop("firstName"))
);
function data() {
return [
{ firstName: 'Jesper', lastName: 'Jensen', income: 120000, member: true },
{ firstName: 'Jane', lastName: 'Johnson', income: 230000, member: true },
{ firstName: 'John', lastName: 'Jackson', income: 230000, member: false }
];
};
Here are examples with simple questions:
Example 1: Find the maximum depth of a binary tree.
I got the right answer but don't know why my original wrong answer is wrong.
Right answer:
var maxDepth = function(root) {
if (root === null) return 0;
var maxDepth = 1;
maxDepth = maxDepthHelper(root, 1, maxDepth);
return maxDepth;
};
function maxDepthHelper(tree, depth, maxDepth) {
if (tree.left === null && tree.right === null) {
maxDepth = depth > maxDepth ? depth : maxDepth;
return maxDepth;
}
if (tree.left) {
maxDepth = maxDepthHelper(tree.left, depth + 1, maxDepth);
}
if (tree.right) {
maxDepth = maxDepthHelper(tree.right, depth + 1, maxDepth);
}
return maxDepth;
}
Wrong answer:
var maxDepth = function(root) {
if (root === null) return 0;
var maxDepth = 1;
maxDepthHelper(root, 1, maxDepth);
return maxDepth;
};
function maxDepthHelper(tree, depth, maxDepth) {
if (tree.left === null && tree.right === null) {
maxDepth = depth > maxDepth ? depth : maxDepth;
return;
}
if (tree.left) {
maxDepthHelper(tree.left, depth + 1, maxDepth);
}
if (tree.right) {
maxDepthHelper(tree.right, depth + 1, maxDepth);
}
}
It has something to do with my thinking that maxDepth should be changed by the helper function, so that when I return it afterwards the changed value comes back. But it doesn't; it just returns 1, the original value I assigned. Yet in the example below, I am able to change a variable from the parent inside the helper function, so what am I missing here?
Example 2: Given a binary search tree, write a function kthSmallest to find the kth smallest element in it.
Solution:
var kthSmallest = function(root, k) {
let smallestArr = [];
kthSmallestHelper(root, k, smallestArr);
return smallestArr.pop()
};
function kthSmallestHelper(bst, k, array) {
if (bst === null) return;
kthSmallestHelper(bst.left, k, array);
if (array.length === k) return;
array.push(bst.val);
kthSmallestHelper(bst.right, k, array);
}
The variable maxDepth (let's call this the outer maxDepth) in the function maxDepth (unfortunate naming) stores a value (the number 1). When you call maxDepthHelper(root, 1, maxDepth), the value 1 is passed in and stored in the local variable maxDepth inside maxDepthHelper. We can assign anything to the local maxDepth, but it won't affect the value stored in the outer maxDepth, because they are two different variables.
The variable smallestArr in the function kthSmallest stores a value; that value is a pointer (the memory location) to an empty array. When kthSmallestHelper(root, k, smallestArr) is called, just like before, that value (the pointer) is passed in and stored inside the local variable array in kthSmallestHelper. Effectively, now array and smallestArr both store a pointer (the memory location) to the same empty array. If we now do any assignment to array, like array = ['some new arr'], the variable smallestArr won't be affected. But when you call a mutation method, like array.push(bst.val), the JavaScript engine looks at the pointer stored in array and modifies the array stored at that memory location. Because smallestArr stores the pointer to this modified array, if you call smallestArr.pop(), the JavaScript engine will pop the last item of the modified array.
The important thing to remember is anytime you write an expression like let x = /* some array or object */, an array/object is created, then a pointer to that array/object is stored in the variable. If you write let x = /* some primitive value (like 3)*/, the value 3 is directly stored in the variable.
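The distinction can be demonstrated in a few lines: reassigning a parameter inside a function never affects the caller's variable, but mutating a shared array does.

```javascript
function reassign(x) {
  x = 99;        // rebinds the local variable only
}

function mutate(arr) {
  arr.push(99);  // modifies the array both variables point to
}

let n = 1;
reassign(n);
console.log(n);  // 1 -- the caller's number is untouched

let xs = [1];
reassign(xs);    // even with an array, reassignment changes nothing
console.log(xs); // [1]

mutate(xs);
console.log(xs); // [1, 99] -- mutation through the shared pointer is visible
```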
In the second program, maxDepth is a number and is passed by value (as a copy), not by reference. The recursive calls are effectively no-ops, and their return values are immediately discarded. This is a common mistake for beginners who are learning how different kinds of values are passed from one function to another.
That said, recursion is a functional heritage and so using it with functional style yields the best results. This means avoiding side effects like mutation and variable reassignment. You can simplify your depth program a lot -
function depth(tree)
{ if (tree == null)
return 0
else
return 1 + max(depth(tree.left), depth(tree.right))
}
function max (a, b)
{ if (a > b)
return a
else
return b
}
An expression-based syntax is often preferred because expressions evaluate to values, whereas statements (like if and return) do not -
const depth = tree =>
tree == null
? 0
: 1 + max(depth(tree.left), depth(tree.right))
const max = (a, b) =>
a > b
? a
: b
Your kthSmallest program is more difficult, but JavaScript's imperative-style generators make quick work of the problem. The mutation k-- is used but cannot be observed from outside of the function -
function *inorder (tree)
{ if (tree == null) return
yield* inorder(tree.left)
yield tree.val
yield* inorder(tree.right)
}
function kthSmallest (tree, k)
{ for (const v of inorder(tree))
if (k-- == 0)
return v
}
The pure expression form of this program is slightly different -
const inorder = tree =>
tree == null
? []
: [ ...inorder(tree.left), tree.val, ...inorder(tree.right) ]
const kthSmallest = (tree, k) =>
inorder(tree)[k]
Here's a functioning demonstration -
import { depth, fromArray, inorder, kthSmallest } from "./Tree"
const rand = _ =>
Math.random() * 100 >> 0
const t =
fromArray(Array.from(Array(10), rand))
console.log("inorder:", Array.from(inorder(t)))
console.log("depth:", depth(t))
console.log("0th:", kthSmallest(t, 0))
console.log("1st:", kthSmallest(t, 1))
console.log("2nd:", kthSmallest(t, 2))
console.log("99th:", kthSmallest(t, 99))
Output -
inorder: [ 12, 14, 25, 44, 47, 53, 67, 70, 85, 91 ]
depth: 5
0th: 12
1st: 14
2nd: 25
99th: undefined
Writing modules like Tree below is a good practice for separating concerns and organising your code -
// Tree.js
const empty =
null
const node = (val, left = empty, right = empty) =>
({ val, left, right })
const fromArray = (a = []) =>
a.length < 1
? empty
: insert(fromArray(a.slice(1)), a[0])
const insert = (t = empty, v = null) =>
t === empty
? node(v)
: v < t.val
? node(t.val, insert(t.left, v), t.right)
: v > t.val
? node(t.val, t.left, insert(t.right, v))
: t
const depth = (tree = empty) => ...
const inorder = (tree = empty) => ...
const kthSmallest = (tree = empty, k = 0) => ...
export { depth, empty, fromArray, inorder, kthSmallest, node }
Expand the snippet below to verify the results in your own browser -
const empty =
null
const node = (val, left = empty, right = empty) =>
({ val, left, right })
const fromArray = (a = []) =>
a.length < 1
? empty
: insert(fromArray(a.slice(1)), a[0])
const insert = (t = empty, v) =>
t === empty
? node(v)
: v < t.val
? node(t.val, insert(t.left, v), t.right)
: v > t.val
? node(t.val, t.left, insert(t.right, v))
: t
const inorder = (tree = empty) =>
tree === empty
? []
: [ ...inorder(tree.left), tree.val, ...inorder(tree.right) ]
const kthSmallest = (tree = empty, k = 0) =>
inorder(tree)[k]
const depth = (tree = empty) =>
tree == null
? 0
: 1 + Math.max(depth(tree.left), depth(tree.right))
const rand = _ =>
Math.random() * 100 >> 0
const t =
fromArray(Array.from(Array(10), rand))
console.log("inorder:", JSON.stringify(Array.from(inorder(t))))
console.log("depth:", depth(t))
console.log("0th:", kthSmallest(t, 0))
console.log("1st:", kthSmallest(t, 1))
console.log("2nd:", kthSmallest(t, 2))
console.log("99th:", kthSmallest(t, 99))
I have an array of objects to be transformed and merged into a single string with array.reduce.
const arr = [{id: 7, task: "foo"}, {id: 22, task: "bar"}]
The result should be 7. foo, 22. bar
If I write this code, it will work but produce , 7. foo, 22. bar (with a leading comma):
arr.reduce((pre,cur)=> pre + `, ${cur.id}. ${cur.task}`, '')
How can I properly do this without the extra comma, preferably only in FP?
Is reduce a requirement? Map is easier to understand and read.
arr.map(o => `${o.id}. ${o.task}`).join(', ')
You can resolve this easily by checking whether pre is a falsy value, using the ternary operator at the start:
`${pre ? ', ' : ''}`
const arr = [{id: 7, task: "foo"}, {id: 22, task: "bar"}]
const res = arr.reduce((pre,cur)=> pre + `${pre ? ', ' : ''}${cur.id}. ${cur.task}`, '')
console.log(res)
functional 101
The idiomatic JavaScript solution is to map-join. But functional programming is about breaking the program into reusable modules and creating barriers of abstraction -
// Task.js
const empty =
{ id: 0, task: "" }
const task = (id = 0, task = "") =>
({ id, task })
const toString = (t = empty) =>
`${t.id}. ${t.task}`
const toStringAll = ([ first, ...rest ]) =>
rest.reduce // <-- reduce
( (r, x) => r + ", " + toString(x)
, toString(first)
)
export { empty, task, toString, toStringAll }
So there's a possible and sensible implementation using reduce. Readability of this program is good because each part of the module is small and does just one thing.
Now we close the imaginary lid on our module and forget all of the complexity within. What remains is a clean interface that clearly communicates the capabilities of the module -
// Main.js
import { task, toStringAll } from './Task'
const data =
[ task(7, "foo")
, task(22, "bar")
, task(33, "qux")
]
console.log(toStringAll(data))
// 7. foo, 22. bar, 33. qux
console.log(toStringAll(data.slice(0,2)))
// 7. foo, 22. bar
console.log(toStringAll(data.slice(0,1)))
// 7. foo
console.log(toStringAll(data.slice(0,0)))
// 0.
Expand the snippet below to verify the result in your browser -
const empty =
{ id: 0, task: "" }
const task = (id = 0, task = "") =>
({ id, task })
const toString = (t = empty) =>
`${t.id}. ${t.task}`
const toStringAll = ([ first, ...rest ]) =>
rest.reduce
( (r, x) => r + ", " + toString(x)
, toString(first)
)
const data =
[ task(7, "foo")
, task(22, "bar")
, task(33, "qux")
]
console.log(toStringAll(data))
// 7. foo, 22. bar, 33. qux
console.log(toStringAll(data.slice(0,2)))
// 7. foo, 22. bar
console.log(toStringAll(data.slice(0,1)))
// 7. foo
console.log(toStringAll(data.slice(0,0)))
// 0.
the barrier of abstraction
Main.js is separated from Task by a barrier of abstraction. Now we can choose any implementation for toString and toStringAll without requiring Main to concern itself with how Task operates under the lid.
Let's practice making a change and update how an empty task is represented. In the program above we see 0. but we will make it say Empty. instead. And just for fun, let's try a new implementation of toStringAll -
// Task.js
const empty = //
const task = //
const toString = (t = empty) =>
t === empty
? `Empty.` // <-- custom representation for empty
: `${t.id}. ${t.task}`
const toStringAll = ([ t = empty, ...more ]) =>
more.length === 0
? toString(t)
: toString(t) + ", " + toStringAll(more) // <-- recursive
//
export { empty, task, toString, toStringAll }
Main doesn't need to do anything differently -
// Main.js
import { task, toStringAll } from './Task'
const data = //
console.log(toStringAll(data))
// 7. foo, 22. bar, 33. qux
console.log(toStringAll(data.slice(0,2)))
// 7. foo, 22. bar
console.log(toStringAll(data.slice(0,1)))
// 7. foo
console.log(toStringAll(data.slice(0,0)))
// Empty.
Expand the snippet below to verify the result in your browser -
const empty =
{ id: 0, task: "" }
const task = (id = 0, task = "") =>
({ id, task })
const toString = (t = empty) =>
t === empty
? `Empty.`
: `${t.id}. ${t.task}`
const toStringAll = ([ t = empty, ...more ]) =>
more.length === 0
? toString(t)
: toString(t) + ", " + toStringAll(more)
const data =
[ task(7, "foo")
, task(22, "bar")
, task(33, "qux")
]
console.log(toStringAll(data))
// 7. foo, 22. bar, 33. qux
console.log(toStringAll(data.slice(0,2)))
// 7. foo, 22. bar
console.log(toStringAll(data.slice(0,1)))
// 7. foo
console.log(toStringAll(data.slice(0,0)))
// Empty.
You may add a check for the first array element:
arr.reduce((pre, cur, index) => {
if (index === 0) {
return `${cur.id}. ${cur.task}`;
}
return pre + `, ${cur.id}. ${cur.task}`;
}, '');
You can use the index argument of the reducer:
const format = object => `${object.id}. ${object.task}`;
const result = arr.reduce((pre, object, index) => (index ? pre + ', ' : '') + format(object), '');
Corecursion means calling oneself on data that, at each iteration, is greater than or equal to what one had before. Corecursion works on codata, which are recursively defined values. Unfortunately, value recursion is not possible in strictly evaluated languages, but we can work with explicit thunks:
const Defer = thunk =>
({get runDefer() {return thunk()}})
const app = f => x => f(x);
const fibs = app(x_ => y_ => {
const go = x => y =>
Defer(() =>
[x, go(y) (x + y)]);
return go(x_) (y_).runDefer;
}) (1) (1);
const take = n => codata => {
const go = ([x, tx], acc, i) =>
i === n
? acc
: go(tx.runDefer, acc.concat(x), i + 1);
return go(codata, [], 0);
};
console.log(
take(10) (fibs));
While this works as expected the approach seems awkward. Especially the hideous pair tuple bugs me. Is there a more natural way to deal with corecursion/codata in JS?
I would encode the thunk within the data constructor itself. For example, consider:
// whnf :: Object -> Object
const whnf = obj => {
for (const [key, val] of Object.entries(obj)) {
if (typeof val === "function" && val.length === 0) {
Object.defineProperty(obj, key, {
get: () => Object.defineProperty(obj, key, {
value: val()
})[key]
});
}
}
return obj;
};
// empty :: List a
const empty = null;
// cons :: (a, List a) -> List a
const cons = (head, tail) => whnf({ head, tail });
// fibs :: List Int
const fibs = cons(0, cons(1, () => next(fibs, fibs.tail)));
// next :: (List Int, List Int) -> List Int
const next = (xs, ys) => cons(xs.head + ys.head, () => next(xs.tail, ys.tail));
// take :: (Int, List a) -> List a
const take = (n, xs) => n === 0 ? empty : cons(xs.head, () => take(n - 1, xs.tail));
// toArray :: List a -> [a]
const toArray = xs => xs === empty ? [] : [ xs.head, ...toArray(xs.tail) ];
// [0,1,1,2,3,5,8,13,21,34]
console.log(toArray(take(10, fibs)));
This way, we can encode laziness in weak head normal form. The advantage is that the consumer has no idea whether a particular field of the given data structure is lazy or strict, and it doesn't need to care.
This is an advanced topic of my prior question here:
How to store data of a functional chain?
The brief idea is
A simple function below:
const L = a => L;
forms
L
L(1)
L(1)(2)
...
This seems to form a list, but the actual data is not stored at all. So if it's required to store the data, such as [1,2], what is the smartest practice to get the task done?
One of the prominent ideas is from @user633183, which I marked as the accepted answer (see the question link), and another version of the curried function was also provided by @Matías Fidemraizer.
So here goes:
const L = a => {
const m = list => x => !x
? list
: m([...list, x]);
return m([])(a);
};
const list1 = (L)(1)(2)(3); //lazy : no data evaluation here
const list2 = (L)(4)(5)(6);
console.log(list1()) // now evaluated by the tail ()
console.log(list2())
What I really like is it turns out lazy evaluation.
Although the given approach satisfies what I mentioned, this function has lost the outer structure, or I should mention:
Algebraic structure
const L = a => L;
which forms list and more fundamentally gives us an algebraic structure of identity element, potentially along with Monoid or Magma.
Left and Right identity
Some of the easiest examples of monoids and identity elements in JavaScript are numbers, "Strings", and [Array]s.
0 + a === a === a + 0
1 * a === a === a * 1
In Strings, the empty quote "" is the identity element.
"" + "Hello world" === "Hello world" === "Hello world" + ""
The same goes for [Array].
The same goes for L:
(L)(a) === (a) === (a)(L)
const L = a => L;
const a = L(5); // number is wrapped or "lift" to Type:L
// Similarity of String and Array
// "5" [5]
//left identity
console.log(
(L)(a) === (a) //true
);
//right identity
console.log(
(a) === (a)(L) //true
);
and the obvious identity immutability:
const L = a => L;
console.log(
(L)(L) === (L) //true
);
console.log(
(L)(L)(L) === (L) //true
);
console.log(
(L)(L)(L)(L) === (L) //true
);
Also the below:
const L = a => L;
const a = (L)(1)(2)(3);
const b = (L)(1)(L)(2)(3)(L);
console.log(
(a) === (b) //true
);
Questions
What is the smartest or most elegant way (very functional, and with no mutations (no Array.push either)) to implement L so that it satisfies these 3 requirements:
Requirement 0 - Identity
A simple function:
const L = a => L;
already satisfies the identity law as we already have seen.
Requirement 1 - eval() method
Although L satisfies the identity law, there is no method to access to the listed/accumulated data.
(The answers provided to my previous question give the data-accumulation ability, but break the identity law.)
Lazy evaluation seems the correct approach, so here is a clearer specification:
provide eval method of L
const L = a => L; // needs to enhance to satisfy the requirements
const a = (L)(1)(2)(3);
const b = (L)(1)(L)(2)(3)(L);
console.log(
(a) === (b) //true
);
console.log(
(a).eval() //[1, 2, 3]
);
console.log(
(b).eval() //[1, 2, 3]
);
Requirement 2 - Monoid Associative law
In addition to the prominent identity structure, monoids also satisfy the associative law
(a * b) * c === a * b * c === a * (b * c)
This simply means "flatten the list", in other words, the structure does not contain nested lists.
[a, [b, c]] is no good.
Sample:
const L = a => L; // needs to enhance to satisfy the requirements
const a = (L)(1)(2);
const b = (L)(3)(4);
const c = (L)(99);
const ab = (a)(b);
const bc = (b)(c);
const abc1 = (ab)(c);
const abc2 = (a)(bc);
console.log(
abc1 === abc2 // true for Associative
);
console.log(
(ab).eval() //[1, 2, 3, 4]
);
console.log(
(abc1).eval() //[1, 2, 3, 4, 99]
);
console.log(
(abc2).eval() //[1, 2, 3, 4, 99]
);
That is all for 3 requirements to implement L as a monoid.
This is a great functional programming challenge for me, and I actually tried it by myself for a while; but as with my previous questions, it's very good practice to share my own challenge and hear from people and read their elegant code.
Thank you.
Your data type is inconsistent!
So, you want to create a monoid. Consider the structure of a monoid:
class Monoid m where
empty :: m -- identity element
(<>) :: m -> m -> m -- binary operation
-- It satisfies the following laws:
empty <> x = x = x <> empty -- identity law
(x <> y) <> z = x <> (y <> z) -- associativity law
Now, consider the structure of your data type:
(L)(a) = (a) = (a)(L) // identity law
((a)(b))(c) = (a)((b)(c)) // associativity law
Hence, according to your design, the identity element is L and the binary operation is function application. However:
(L)(1) // This is supposed to be a valid expression.
(L)(1) != (1) != (1)(L) // But it breaks the identity law.
// (1)(L) is not even a valid expression. It throws an error. Therefore:
((L)(1))(L) // This is supposed to be a valid expression.
((L)(1))(L) != (L)((1)(L)) // But it breaks the associativity law.
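Both violations can be checked directly; here is a quick sketch using the naive L from the question (the try/catch is only there to observe the TypeError):

```javascript
const L = a => L;

// L swallows its argument and returns itself, so (L)(1) is L, not "1":
console.log((L)(1) === L); // true, hence (L)(1) != (1)

// (1)(L) is not a valid expression: numbers are not callable.
try {
  (1)(L);
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```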
The problem is that you are conflating the binary operation with the reverse list constructor:
// First, you're using function application as a reverse cons (a.k.a. snoc):
// cons :: a -> [a] -> [a]
// snoc :: [a] -> a -> [a] -- arguments flipped
const xs = (L)(1)(2); // [1,2]
const ys = (L)(3)(4); // [3,4]
// Later, you're using function application as the binary operator (a.k.a. append):
// append :: [a] -> [a] -> [a]
const zs = (xs)(ys); // [1,2,3,4]
If you're using function application as snoc then you can't use it for append as well:
snoc :: [a] -> a -> [a]
append :: [a] -> [a] -> [a]
Notice that the types don't match, but even if they did you still don't want one operation to do two things.
What you want are difference lists.
A difference list is a function that takes a list and prepends another list to it. For example:
const concat = xs => ys => xs.concat(ys); // This creates a difference list.
const f = concat([1,2,3]); // This is a difference list.
console.log(f([])); // You can get its value by applying it to the empty array.
console.log(f([4,5,6])); // You can also apply it to any other array.
The cool thing about difference lists is that they form a monoid, because they are just endofunctions:
const id = x => x; // The identity element is just the id function.
const compose = (f, g) => x => f(g(x)); // The binary operation is composition.
compose(id, f) = f = compose(f, id); // identity law
compose(compose(f, g), h) = compose(f, compose(g, h)); // associativity law
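These two laws can be verified directly with the definitions above:

```javascript
const concat = xs => ys => xs.concat(ys); // creates a difference list
const id = x => x;
const compose = (f, g) => x => f(g(x));

const f = concat([1, 2]);
const g = concat([3]);
const h = concat([4, 5]);

// identity law: composing with id changes nothing
console.log(compose(id, f)([9])); // [1, 2, 9]
console.log(compose(f, id)([9])); // [1, 2, 9]

// associativity law: both groupings prepend the same list
console.log(compose(compose(f, g), h)([])); // [1, 2, 3, 4, 5]
console.log(compose(f, compose(g, h))([])); // [1, 2, 3, 4, 5]
```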
Even better, you can package them into a neat little class where function composition is the dot operator:
class DList {
constructor(f) {
this.f = f;
this.id = this;
}
cons(x) {
return new DList(ys => this.f([x].concat(ys)));
}
concat(xs) {
return new DList(ys => this.f(xs.concat(ys)));
}
apply(xs) {
return this.f(xs);
}
}
const id = new DList(x => x);
const cons = x => new DList(ys => [x].concat(ys)); // Construct DList from value.
const concat = xs => new DList(ys => xs.concat(ys)); // Construct DList from array.
id . concat([1, 2, 3]) = concat([1, 2, 3]) = concat([1, 2, 3]) . id // identity law
concat([1, 2]) . cons(3) = cons(1) . concat([2, 3]) // associativity law
You can use the apply method to retrieve the value of the DList as follows:
class DList {
constructor(f) {
this.f = f;
this.id = this;
}
cons(x) {
return new DList(ys => this.f([x].concat(ys)));
}
concat(xs) {
return new DList(ys => this.f(xs.concat(ys)));
}
apply(xs) {
return this.f(xs);
}
}
const id = new DList(x => x);
const cons = x => new DList(ys => [x].concat(ys));
const concat = xs => new DList(ys => xs.concat(ys));
const identityLeft = id . concat([1, 2, 3]);
const identityRight = concat([1, 2, 3]) . id;
const associativityLeft = concat([1, 2]) . cons(3);
const associativityRight = cons(1) . concat([2, 3]);
console.log(identityLeft.apply([])); // [1,2,3]
console.log(identityRight.apply([])); // [1,2,3]
console.log(associativityLeft.apply([])); // [1,2,3]
console.log(associativityRight.apply([])); // [1,2,3]
An advantage of using difference lists over regular lists (functional lists, not JavaScript arrays) is that concatenation is more efficient, because the lists are concatenated from right to left. Hence it doesn't copy the same values over and over again when you're concatenating multiple lists.
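As an illustration only (makeL and its shape are my assumption, not part of the DList class above), here is a hypothetical sketch of wiring a difference list to the question's L-style interface: each application snocs a value by composing endofunctions, and eval applies the composite to the empty array:

```javascript
// Hypothetical sketch: an L-like collector built on a difference list.
const makeL = (f = ys => ys) => {
  const g = x => makeL(ys => f([x, ...ys])); // snoc by composing endofunctions
  g.eval = () => f([]);                      // apply to [] to realize the array
  return g;
};

const L = makeL();
console.log(L(1)(2)(3).eval()); // [1, 2, 3]
```

Note this sketch only covers accumulation and eval; it does not absorb L itself as the question's identity requirement demands.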
mirror test
To make L self-aware we have to somehow tag the values it creates. This is a generic trait and we can encode it using a pair of functions. We set an expectation of the behavior –
is (Foo, 1) // false 1 is not a Foo
is (Foo, tag (Foo, 1)) // true tag (Foo, 1) is a Foo
Below we implement is and tag. We want to design them such that we can put in any value and we can reliably determine the value's tag at a later time. We make exceptions for null and undefined.
const Tag =
Symbol ()
const tag = (t, x) =>
x == null
? x
: Object.assign (x, { [Tag]: t })
const is = (t, x) =>
x == null
? false
: x[Tag] === t
const Foo = x =>
tag (Foo, x)
console.log
( is (Foo, 1) // false
, is (Foo, []) // false
, is (Foo, {}) // false
, is (Foo, x => x) // false
, is (Foo, true) // false
, is (Foo, undefined) // false
, is (Foo, null) // false
)
console.log
( is (Foo, Foo (1)) // true we can tag primitives
, is (Foo, Foo ([])) // true we can tag arrays
, is (Foo, Foo ({})) // true we can tag objects
, is (Foo, Foo (x => x)) // true we can even tag functions
, is (Foo, Foo (true)) // true and booleans too
, is (Foo, Foo (undefined)) // false but! we cannot tag undefined
, is (Foo, Foo (null)) // false or null
)
We now have a function Foo which is capable of distinguishing values it produced. Foo becomes self-aware –
const Foo = x =>
is (Foo, x)
? x // x is already a Foo
: tag (Foo, x) // tag x as Foo
const f =
Foo (1)
Foo (f) === f // true
L of higher consciousness
Using is and tag we can make L self-aware. If given an L-tagged value as input, L can respond per your design specification.
const None =
Symbol ()
const L = init =>
{ const loop = (acc, x = None) =>
// x is empty: return the internal array
x === None
? acc
// x is a List: concat the two internal arrays and loop
: is (L, x)
? tag (L, y => loop (acc .concat (x ()), y))
// x is a value: append and loop
: tag (L, y => loop ([ ...acc, x ], y))
return loop ([], init)
}
We try it out using your test data –
const a =
L (1) (2)
const b =
L (3) (4)
const c =
L (99)
console.log
( (a) (b) (c) () // [ 1, 2, 3, 4, 99 ]
, (a (b)) (c) () // [ 1, 2, 3, 4, 99 ]
, (a) (b (c)) () // [ 1, 2, 3, 4, 99 ]
)
It's worth comparing this implementation to the last one –
// previous implementation
const L = init =>
{ const loop = (acc, x) =>
x === undefined // don't use !x, read more below
? acc
: y => loop ([...acc, x], y)
return loop ([], init)
}
In our revision, a new branch is added for is (L, x) that defines the new monoidal behavior. Most importantly, any returned value is wrapped in tag (L, ...) so that it can later be identified as an L-tagged value. The other change is the explicit use of a None symbol; additional remarks on this have been added at the end of this post.
equality of L values
To determine equality of L(x) and L(y) we face another problem. Compound data in JavaScript are represented with objects, which cannot simply be compared with the === operator:
console.log
( { a: 1 } === { a: 1 } ) // false
We can write an equality function for L, perhaps called Lequal
const l1 =
L (1) (2) (3)
const l2 =
L (1) (2) (3)
const l3 =
L (0)
console.log
( Lequal (l1, l2) // true
, Lequal (l1, l3) // false
, Lequal (l2, l3) // false
)
But I won't go into how to do that in this post. If you're interested, I covered that topic in this Q&A.
// Hint:
const Lequal = (l1, l2) =>
arrayEqual // compare two arrays
( l1 () // get actual array of l1
, l2 () // get actual array of l2
)
tagging in depth
The tagging technique I used here is one I use in other answers. It is accompanied by a more extensive example here.
other remarks
Don't use !x to test for an empty value because it will return true for any "falsy" x. For example, if you wanted to make a list of L (1) (0) (3) ... It will stop after 1 because !0 is true. Falsy values include 0, "" (empty string), null, undefined, NaN, and of course false itself. It's for this reason we use an explicit None symbol to more precisely identify when the list terminates. All other inputs are appended to the internal array.
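A two-line comparison (the function names here are made up for illustration) shows the difference between the two emptiness tests:

```javascript
// Using !x: 0 is falsy, so it is wrongly treated as "no value".
const isEmptyFalsy = x => !x;
console.log(isEmptyFalsy(0)); // true, but 0 is a real list element!

// Using a unique Symbol sentinel: only the sentinel itself matches.
const None = Symbol();
const isEmptyNone = (x = None) => x === None;
console.log(isEmptyNone(0)); // false, 0 is kept
console.log(isEmptyNone());  // true, genuinely no argument
```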
And don't rely on hacks like JSON.stringify to test for object equality. Structural traversal is absolutely required.
const x = { a: 1, b: 2 }
const y = { b: 2, a: 1 }
console.log
(JSON.stringify (x) === JSON.stringify (y)) // false
console.log
(Lequal (L (x), L (y))) // should be true!
For advice on how to solve this problem, see this Q&A
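For a flavor of what structural traversal means (this is a minimal illustrative sketch, not the linked Q&A's solution; it handles plain objects, arrays, and primitives, but not cycles, Dates, or other special object types):

```javascript
// Minimal structural equality: recurse through own enumerable keys.
const deepEqual = (a, b) => {
  if (a === b) return true;
  if (a === null || b === null || typeof a !== 'object' || typeof b !== 'object')
    return false;
  const ka = Object.keys(a), kb = Object.keys(b);
  return ka.length === kb.length && ka.every(k => deepEqual(a[k], b[k]));
};

console.log(deepEqual({ a: 1, b: 2 }, { b: 2, a: 1 })); // true, key order ignored
console.log(deepEqual([1, [2]], [1, [2]]));             // true, nested arrays
console.log(deepEqual({ a: 1 }, { a: 2 }));             // false
```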
I have this recursive function sum which computes the sum of all the numbers passed to it.
function sum(num1, num2, ...nums) {
if (nums.length === 0) { return num1 + num2; }
return sum(num1 + num2, ...nums);
}
let xs = [];
for (let i = 0; i < 100; i++) { xs.push(i); }
console.log(sum(...xs));
xs = [];
for (let i = 0; i < 10000; i++) { xs.push(i); }
console.log(sum(...xs));
It works fine if only a few numbers are passed to it, but otherwise it overflows the call stack. So I modified it a bit and used a trampoline so that it can accept more arguments.
function _sum(num1, num2, ...nums) {
if (nums.length === 0) { return num1 + num2; }
return () => _sum(num1 + num2, ...nums);
}
const trampoline = fn => (...args) => {
let res = fn(...args);
while (typeof res === 'function') { res = res(); }
return res;
}
const sum = trampoline(_sum);
let xs = [];
for (let i = 0; i < 10000; i++) { xs.push(i); }
console.log(sum(...xs));
xs = [];
for (let i = 0; i < 100000; i++) { xs.push(i); }
console.log(sum(...xs));
While the first version isn't able to handle 10000 numbers, the second is. But if I pass 100000 numbers to the second version I'm getting call stack overflow error again.
I would say that 100000 is not really that big a number (I might be wrong here), and I don't see any runaway closures that might have caused the memory leak.
Does anyone know what is wrong with it?
The other answer points out the limitation on the number of function arguments, but I wanted to remark on your trampoline implementation. The long computation we're running may want to return a function. If you use typeof res === 'function', it's no longer possible to compute a function as a return value!
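To see the failure concretely, here is a hypothetical computation whose final result is meant to be a function; the typeof test cannot tell that function apart from a bounce thunk, so the trampoline unwraps it too:

```javascript
const trampoline = fn => (...args) => {
  let res = fn(...args);
  while (typeof res === 'function') { res = res(); }
  return res;
};

// A computation that bounces once and then "returns" the function () => x:
const compute = x => () => () => x;

// We wanted () => 42 back, but the trampoline keeps calling:
console.log(typeof trampoline(compute)(42)); // "number", not "function"
```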
Instead, encode your trampoline variants with some sort of unique identifier:
const bounce = (f, ...args) =>
({ tag: bounce, f: f, args: args })
const done = (value) =>
({ tag: done, value: value })
const trampoline = t =>
{ while (t && t.tag === bounce)
t = t.f (...t.args)
if (t && t.tag === done)
return t.value
else
throw Error (`unsupported trampoline type: ${t.tag}`)
}
Before we hop on, let's first get an example function to fix
const none =
Symbol ()
const badsum = ([ n1, n2 = none, ...rest ]) =>
n2 === none
? n1
: badsum ([ n1 + n2, ...rest ])
We'll throw a range of numbers at it to see it work
const range = n =>
Array.from
( Array (n + 1)
, (_, n) => n
)
console.log (range (10))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
console.log (badsum (range (10)))
// 55
But can it handle the big leagues?
console.log (badsum (range (1000)))
// 500500
console.log (badsum (range (20000)))
// RangeError: Maximum call stack size exceeded
See the results in your browser so far
const none =
Symbol ()
const badsum = ([ n1, n2 = none, ...rest ]) =>
n2 === none
? n1
: badsum ([ n1 + n2, ...rest ])
const range = n =>
Array.from
( Array (n + 1)
, (_, n) => n
)
console.log (range (10))
// [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ]
console.log (badsum (range (1000)))
// 500500
console.log (badsum (range (20000)))
// RangeError: Maximum call stack size exceeded
Somewhere between 10000 and 20000 elements, our badsum function unsurprisingly causes a stack overflow.
Besides renaming the function to goodsum we only have to encode the return types using our trampoline's variants
const goodsum = ([ n1, n2 = none, ...rest ]) =>
n2 === none
? done (n1)
: bounce (goodsum, [ n1 + n2, ...rest ])
console.log (trampoline (goodsum (range (1000))))
// 500500
console.log (trampoline (goodsum (range (20000))))
// 200010000
// No more stack overflow!
You can see the results of this program in your browser here. Now we can see that neither recursion nor the trampoline is at fault for this program being slow. Don't worry though, we'll fix that later.
const bounce = (f, ...args) =>
({ tag: bounce, f: f, args: args })
const done = (value) =>
({ tag: done, value: value })
const trampoline = t =>
{ while (t && t.tag === bounce)
t = t.f (...t.args)
if (t && t.tag === done)
return t.value
else
throw Error (`unsupported trampoline type: ${t.tag}`)
}
const none =
Symbol ()
const range = n =>
Array.from
( Array (n + 1)
, (_, n) => n
)
const goodsum = ([ n1, n2 = none, ...rest ]) =>
n2 === none
? done (n1)
: bounce (goodsum, [ n1 + n2, ...rest ])
console.log (trampoline (goodsum (range (1000))))
// 500500
console.log (trampoline (goodsum (range (20000))))
// 200010000
// No more stack overflow!
The extra call to trampoline can get annoying, and when you look at goodsum alone, it's not immediately apparent what done and bounce are doing there, unless maybe this was a very common convention in many of your programs.
We can better encode our looping intentions with a generic loop function. A loop is given a function that is recalled whenever the function calls recur. It looks like a recursive call, but really recur is constructing a value that loop handles in a stack-safe way.
The function we give to loop can have any number of parameters, each with a default value. This is also convenient because we can now avoid the expensive ... destructuring and spreading by simply using an index parameter i initialized to 0. The caller of the function has no way to access these variables outside of the loop call.
The last advantage here is that the reader of goodsum can clearly see the loop encoding and the explicit done tag is no longer necessary. The user of the function does not need to worry about calling trampoline either as it's already taken care of for us in loop
const goodsum = (ns = []) =>
loop ((sum = 0, i = 0) =>
i >= ns.length
? sum
: recur (sum + ns[i], i + 1))
console.log (goodsum (range (1000)))
// 500500
console.log (goodsum (range (20000)))
// 200010000
console.log (goodsum (range (999999)))
// 499999500000
Here's our loop and recur pair now. This time we expand upon our { tag: ... } convention using a tagging module
const recur = (...values) =>
tag (recur, { values })
const loop = f =>
{ let acc = f ()
while (is (recur, acc))
acc = f (...acc.values)
return acc
}
const T =
Symbol ()
const tag = (t, x) =>
Object.assign (x, { [T]: t })
const is = (t, x) =>
t && x[T] === t
Run it in your browser to verify the results
const T =
Symbol ()
const tag = (t, x) =>
Object.assign (x, { [T]: t })
const is = (t, x) =>
t && x[T] === t
const recur = (...values) =>
tag (recur, { values })
const loop = f =>
{ let acc = f ()
while (is (recur, acc))
acc = f (...acc.values)
return acc
}
const range = n =>
Array.from
( Array (n + 1)
, (_, n) => n
)
const goodsum = (ns = []) =>
loop ((sum = 0, i = 0) =>
i >= ns.length
? sum
: recur (sum + ns[i], i + 1))
console.log (goodsum (range (1000)))
// 500500
console.log (goodsum (range (20000)))
// 200010000
console.log (goodsum (range (999999)))
// 499999500000
extra
My brain has been stuck in anamorphism gear for a few months and I was curious if it was possible to implement a stack-safe unfold using the loop function introduced above
Below, we look at an example program which generates the entire sequence of sums up to n. Think of it as showing the work to arrive at the answer for the goodsum program above. The total sum up to n is the last element in the array.
This is a good use case for unfold. We could write this using loop directly, but the point of this was to stretch the limits of unfold so here goes
const sumseq = (n = 0) =>
unfold
( (loop, done, [ m, sum ]) =>
m > n
? done ()
: loop (sum, [ m + 1, sum + m ])
, [ 1, 0 ]
)
console.log (sumseq (10))
// [ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45 ]
// +1 ↗ +2 ↗ +3 ↗ +4 ↗ +5 ↗ +6 ↗ +7 ↗ +8 ↗ +9 ↗ ...
If we used an unsafe unfold implementation, we could blow the stack
// direct recursion, stack-unsafe!
const unfold = (f, initState) =>
f ( (x, nextState) => [ x, ...unfold (f, nextState) ]
, () => []
, initState
)
console.log (sumseq (20000))
// RangeError: Maximum call stack size exceeded
After playing with it a little bit, it is indeed possible to encode unfold using our stack-safe loop. Cleaning up the ... spread syntax using a push effect makes things a lot quicker too
const push = (xs, x) =>
(xs .push (x), xs)
const unfold = (f, init) =>
loop ((acc = [], state = init) =>
f ( (x, nextState) => recur (push (acc, x), nextState)
, () => acc
, state
))
With a stack-safe unfold, our sumseq function works a treat now
console.time ('sumseq')
const result = sumseq (20000)
console.timeEnd ('sumseq')
console.log (result)
// sumseq: 23 ms
// [ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45, ..., 199990000 ]
Verify the result in your browser below
const recur = (...values) =>
tag (recur, { values })
const loop = f =>
{ let acc = f ()
while (is (recur, acc))
acc = f (...acc.values)
return acc
}
const T =
Symbol ()
const tag = (t, x) =>
Object.assign (x, { [T]: t })
const is = (t, x) =>
t && x[T] === t
const push = (xs, x) =>
(xs .push (x), xs)
const unfold = (f, init) =>
loop ((acc = [], state = init) =>
f ( (x, nextState) => recur (push (acc, x), nextState)
, () => acc
, state
))
const sumseq = (n = 0) =>
unfold
( (loop, done, [ m, sum ]) =>
m > n
? done ()
: loop (sum, [ m + 1, sum + m ])
, [ 1, 0 ]
)
console.time ('sumseq')
const result = sumseq (20000)
console.timeEnd ('sumseq')
console.log (result)
// sumseq: 23 ms
// [ 0, 1, 3, 6, 10, 15, 21, 28, 36, 45, ..., 199990000 ]
Browsers have practical limits on the number of arguments a function can take
You can change the sum signature to accept an array rather than a varying number of arguments, and use destructuring to keep the syntax/readability similar to what you have. This "fixes" the stack overflow error, but is incredibly slow :D
function _sum([num1, num2, ...nums]) { /* ... */ }
That is, if you're running into problems with maximum argument counts, your recursive/trampoline approach is probably going to be too slow to work with anyway...
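Putting both remarks together, here is a runnable sketch of the array-taking, trampolined variant (kept to a small input here; the repeated ...nums spread copies the array on every bounce, which is what makes it so slow at scale):

```javascript
function _sum([num1, num2, ...nums]) {
  if (nums.length === 0) { return num1 + num2; }
  return () => _sum([num1 + num2, ...nums]); // bounce: return a thunk, not a call
}

const trampoline = fn => (...args) => {
  let res = fn(...args);
  while (typeof res === 'function') { res = res(); }
  return res;
};

const sum = trampoline(_sum);
const xs = Array.from({ length: 2000 }, (_, i) => i); // 0..1999
console.log(sum(xs)); // 1999000
```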
The other answer already explained the issue with your code. This answer demonstrates that trampolines are sufficiently fast for most array-based computations and offer a higher level of abstraction:
// trampoline
const loop = f => {
let acc = f();
while (acc && acc.type === recur)
acc = f(...acc.args);
return acc;
};
const recur = (...args) =>
({type: recur, args});
// sum
const sum = xs => {
const len = xs.length;
return loop(
(acc = 0, i = 0) =>
i === len
? acc
: recur(acc + xs[i], i + 1));
};
// and run...
const xs = Array(1e5)
.fill(0)
.map((x, i) => i);
console.log(sum(xs));
If a trampoline based computation causes performance problems, then you can still replace it with a bare loop.
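For comparison, the bare-loop replacement reads like this:

```javascript
// Same sum as an explicit loop: no thunks, no tag checks, just an accumulator.
const sum = xs => {
  let acc = 0;
  for (let i = 0; i < xs.length; i++) acc += xs[i];
  return acc;
};

const xs = Array(1e5)
  .fill(0)
  .map((x, i) => i);

console.log(sum(xs)); // 4999950000
```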