Related
I have two mutually dependent functions that call each other, so one or the other has to be declared first, which fires an ESLint no-use-before-define error. I know I can disable the rule, but is there a better way to do this?
Simplified example:
const a = number => {
  if (number === 0) {
    return b(number);
  }
  c(number);
}
const b = number => a(number + 1);
a(0);
I can't merge a and b as they both need to be called separately somewhere else in the code.
You may use a callback or higher-order function. Hope that will help.
Try this code and let me know if it works with your linting rules.
const a = (number, call) => {
  if (number === 0) {
    return call(number);
  }
  c(number);
}
const b = number => a(number + 1, b);
const c = number => console.log(1);
a(0, b);
I sometimes use array destructuring combined with IIFE to define both functions simultaneously:
const [a, b] = (() => [
  (number) => (number === 0 ? b(number) : c(number)),
  (number) => a(number + 1),
])();
a(0);
Also works with object destructuring if you prefer named components:
const { a, b } = (() => ({
  a: (number) => (number === 0 ? b(number) : c(number)),
  b: (number) => a(number + 1),
}))();
a(0);
I find myself writing code like this over and over again and thinking there must be a known pattern for it, but after plowing through the documentation of different functional libraries like Ramda, I can't quite find a match. What should I use?
var arrayOfPersons = [{ firstName: 'Jesper', lastName: 'Jensen', income: 120000, member: true }/* ... a whole lot of persons */];
function createPredicateBuilder(config) {
  return {
    build() {
      var fnPredicate = (p) => true;
      if (typeof config.minIncome == 'number') {
        var prev1 = fnPredicate; // capture the previous predicate to avoid self-recursion
        fnPredicate = (p) => prev1(p) && config.minIncome <= p.income;
      }
      if (typeof config.member == 'boolean') {
        var prev2 = fnPredicate;
        fnPredicate = (p) => prev2(p) && config.member === p.member;
      }
      // .. continue to support more predicate parts.
      return fnPredicate;
    },
    map(newConfig) {
      return createPredicateBuilder({ ...config, ...newConfig });
    }
  };
}
var predicateBuilder = createPredicateBuilder({});
// We collect predicates
predicateBuilder = predicateBuilder.map({ minIncome: 200000 });
// ...
predicateBuilder = predicateBuilder.map({ member: false });
// Now we want to query...
console.log(arrayOfPersons.filter(predicateBuilder.build()));
I create a builder instance, and each call to map creates a new instance, returning an object with build/map methods. The state is captured in the functions' scope.
Sometime in the future, I want to get my collected function (or result).
I think this is FP, but what is this pattern, and are there any libs that make it easier?
Is my OOP-inspired naming of things (builder/build) blinding me?
You could use the where function in Ramda to test against a spec object describing your predicates. Your code could then build the spec object dynamically according to the passed config.
https://ramdajs.com/docs/#where
Example from the Ramda docs:
// pred :: Object -> Boolean
const pred = R.where({
  a: R.equals('foo'),
  b: R.complement(R.equals('bar')),
  x: R.gt(R.__, 10),
  y: R.lt(R.__, 20)
});
pred({a: 'foo', b: 'xxx', x: 11, y: 19}); //=> true
pred({a: 'xxx', b: 'xxx', x: 11, y: 19}); //=> false
pred({a: 'foo', b: 'bar', x: 11, y: 19}); //=> false
pred({a: 'foo', b: 'xxx', x: 10, y: 19}); //=> false
pred({a: 'foo', b: 'xxx', x: 11, y: 20}); //=> false
To elaborate, you could "build" the spec object by having a set of functions that return a new spec with an additional predicate, e.g.:
function setMinIncome(oldSpec, minIncome) {
  return R.merge(oldSpec, { income: R.gt(R.__, minIncome) });
}
Functional programming is less about patterns and more about laws. Laws allow the programmer to reason about their programs like a mathematician can reason about an equation.
Let's look at adding numbers. Adding is a binary operation (it takes two numbers) and always produces another number.
1 + 2 = 3
2 + 1 = 3
1 + (2 + 3) = 6
(1 + 2) + 3 = 6
((1 + 2) + 3) + 4 = 10
(1 + 2) + (3 + 4) = 10
1 + (2 + 3) + 4 = 10
1 + (2 + (3 + 4)) = 10
We can group the additions however we like and still get the same result. This property is associativity, and it forms the basis of the associative law. (The first two lines, where swapping the operands gives the same result, show a separate property: commutativity.)
Adding zero is somewhat interesting, or taken for granted.
1 + 0 = 1
0 + 1 = 1
3 + 0 = 3
0 + 3 = 3
Adding zero to any number will not change the number. This is known as the identity element.
These two things, (1) an associative binary operation and (2) an identity element, make up a monoid.
If we can ...
encode our predicates as elements of a domain
create a binary operation for the elements
determine the identity element
... then we receive the benefits of belonging to the monoid category, allowing us to reason about our program in an equational way. There's no pattern to learn, only laws to uphold.
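For example, the familiar (+, 0) monoid already plugs straight into reduce, which is exactly the payoff we are after for predicates:

```javascript
// Addition is an associative binary operation with identity element 0,
// so (add, 0) forms a monoid and folds safely over any array.
const add = (a, b) => a + b;
const identity = 0;

console.log([1, 2, 3, 4].reduce(add, identity)); // 10
console.log([].reduce(add, identity));           // 0 (the identity element)
```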
1. Making a domain
Getting your data right is tricky, even more so in a multi-paradigm language like JavaScript. This question is about functional programming though so functions would be a good go-to.
In your program ...
build() {
  var fnPredicate = (p) => true;
  if (typeof config.minIncome == 'number') {
    var prev1 = fnPredicate;
    fnPredicate = (p) => prev1(p) && config.minIncome <= p.income;
  }
  if (typeof config.member == 'boolean') {
    var prev2 = fnPredicate;
    fnPredicate = (p) => prev2(p) && config.member === p.member;
  }
  // .. continue to support more predicate parts.
  return fnPredicate;
},
... we see a mixture of the program level and the data level. This program is hard-coded to understand only an input that may have these specific keys (minIncome, member) and their respective types (number and boolean), as well as the comparison operation used to determine the predicate.
Let's keep it really simple. Let's take a static predicate
item.name === "Sally"
If I wanted this same predicate but compared using a different item, I would wrap this expression in a function and make item a parameter of the function.
const nameIsSally = item =>
item.name === "Sally"
console .log
( nameIsSally ({ name: "Alice" }) // false
, nameIsSally ({ name: "Sally" }) // true
, nameIsSally ({ name: "NotSally" }) // false
, nameIsSally ({}) // false
)
This predicate is easy to use, but it only works to check for the name Sally. We repeat the process by wrapping the expression in a function and make name a parameter of the function. This general technique is called abstraction and it's used all the time in functional programming.
const nameIs = name => item =>
item.name === name
const nameIsSally =
nameIs ("Sally")
const nameIsAlice =
nameIs ("Alice")
console .log
( nameIsSally ({ name: "Alice" }) // false
, nameIsSally ({ name: "Sally" }) // true
, nameIsAlice ({ name: "Alice" }) // true
, nameIsAlice ({ name: "Sally" }) // false
)
As you can see, it doesn't matter that the expression we wrapped was already a function. JavaScript has first-class support for functions, which means they can be treated as values. Programs that return a function or receive a function as input are called higher-order functions.
Above, our predicates are represented as functions which take a value of any type (a) and produce a boolean. We will denote this as a -> Boolean. So each predicate is an element of our domain, and that domain is all functions a -> Boolean.
2. The Binary Operation
We'll do the exercise of abstraction one more time. Let's take a static combined predicate expression.
p1 (item) && p2 (item)
I can re-use this expression for other items by wrapping it in a function and making item a parameter of the function.
const bothPredicates = item =>
p1 (item) && p2 (item)
But we want to be able to combine any predicates. Again, we wrap the expression we want to re-use in a function, then assign parameter(s) for the variable(s), this time for p1 and p2.
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
Before we move on, let's check our domain and ensure our binary operation and is correct. The binary operation must:
take as input two (2) elements from our domain (a -> Boolean)
return as output an element of our domain
the operation must be associative: f(f(a, b), c) == f(a, f(b, c))
Indeed, and accepts two elements of our domain p1 and p2. The return value is item => ... which is a function receiving an item and returns p1 (item) && p2 (item). Each is a predicate that accepts a single value and returns a Boolean. This simplifies to Boolean && Boolean which we know is another Boolean. To summarize, and takes two predicates and returns a new predicate, which is precisely what the binary operation must do.
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
const nameIs = x => item =>
item.name === x
const minIncome = x => item =>
x <= item.income
const query =
and
( nameIs ("Alice")
, minIncome (5)
)
console .log
( query ({ name: "Sally", income: 3}) // false
, query ({ name: "Alice", income: 3 }) // false
, query ({ name: "Alice", income: 7 }) // true
)
3. The Identity Element
The identity element, when added to any other element, must not change the element. So for any predicate p and the predicate identity element empty, the following must hold
and (p, empty) == p
and (empty, p) == p
We can represent the empty predicate as a function that takes any element and always returns true.
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
const empty = item =>
true
const p = x =>
x > 5
console .log
( and (p, empty) (3) === p (3) // true
, and (empty, p) (3) === p (3) // true
)
Power of Laws
Now that we have a binary operation and an identity element, we can combine an arbitrary amount of predicates. We define sum which plugs our monoid directly into reduce.
// --- predicate monoid ---
const and = (p1, p2) => item =>
p1 (item) && p2 (item)
const empty = item =>
true
const sum = (...predicates) =>
predicates .reduce (and, empty) // [1,2,3,4] .reduce (add, 0)
// --- individual predicates ---
const nameIs = x => item =>
item.name === x
const minIncome = x => item =>
x <= item.income
const isTeenager = item =>
item.age > 12 && item.age < 20
// --- demo ---
const query =
sum
( nameIs ("Alice")
, minIncome (5)
, isTeenager
)
console .log
( query ({ name: "Sally", income: 8, age: 14 }) // false
, query ({ name: "Alice", income: 3, age: 21 }) // false
, query ({ name: "Alice", income: 7, age: 29 }) // false
, query ({ name: "Alice", income: 9, age: 17 }) // true
)
The empty sum predicate still returns a valid result. This is like the empty query that matches all results.
const query =
sum ()
console .log
( query ({ foo: "bar" }) // true
)
Free Convenience
Using functions to encode our predicates makes them useful in other ways too. If you have an array of items, you could use a predicate p directly in .find or .filter. Of course this is true for predicates created using and and sum too.
const p =
sum (pred1, pred2, pred3, ...)
const items =
[ { name: "Alice" ... }
, { name: "Sally" ... }
]
const firstMatch =
items .find (p)
const allMatches =
items .filter (p)
Make it a Module
You don't want to define globals like and, sum, and empty. When you package this code, use a module of some sort.
// Predicate.js
const and = ...
const empty = ...
const sum = ...
const Predicate =
  { and, empty, sum }
export { and, empty, sum }
export default Predicate
When you use it
import { sum } from './Predicate'
const query =
sum (...)
const result =
arrayOfPersons .filter (query)
Quiz
Notice the similarity between our predicate identity element and the identity element for &&
T && ? == T
? && T == T
F && ? == F
? && F == F
We can replace all ? above with T and the equations will hold. Below, what do you think the identity element is for ||?
T || ? == T
? || T == T
F || ? == F
? || F == F
What's the identity element for *, binary multiplication?
n * ? = n
? * n = n
How about the identity element for arrays or lists?
concat (l, ?) == l
concat (?, l) == l
Having Fun?
I think you'll enjoy contravariant functors. In the same arena, transducers. There's a demo showing how to build a higher-level API around these low-level modules too.
This is the Builder design pattern. Although it's expressed here in a more functional style, the premise stays the same: you have an entity that collects information via .map() (more traditionally it's .withX(), corresponding to setters) and then uses all the collected data to produce a new object via .build().
To make this more recognisable, here is a more Object Oriented approach that still does the same thing:
class Person {
  constructor(firstName, lastName, age) {
    this.firstName = firstName;
    this.lastName = lastName;
    this.age = age;
  }
  toString() {
    return `I am ${this.firstName} ${this.lastName} and I am ${this.age} years old`;
  }
}

class PersonBuilder {
  withFirstName(firstName) {
    this.firstName = firstName;
    return this;
  }
  withLastName(lastName) {
    this.lastName = lastName;
    return this;
  }
  withAge(age) {
    this.age = age;
    return this;
  }
  build() {
    return new Person(this.firstName, this.lastName, this.age);
  }
}

// make builder
const builder = new PersonBuilder();
// collect data for the object construction
builder
  .withFirstName("Fred")
  .withLastName("Bloggs")
  .withAge(42);
// build the object with the collected data
const person = builder.build();
console.log(person.toString());
I'd stick to a simple array of (composed) predicate functions and a reducer of either:
And ((f, g) => x => f(x) && g(x)), seeded with True (_ => true), or
Or ((f, g) => x => f(x) || g(x)), seeded with False (_ => false).
For example:
const True = _ => true;
const False = _ => false;
const Or = (f, g) => x => f(x) || g(x);
Or.seed = False;
const And = (f, g) => x => f(x) && g(x);
And.seed = True;
const Filter = (fs, operator) => fs.reduce(operator, operator.seed);
const oneOrTwo =
Filter([x => x === 1, x => x === 2], Or);
const evenAndBelowTen =
Filter([x => x % 2 === 0, x => x < 10], And);
const oneToHundred = Array.from(Array(100), (_, i) => i);
console.log(
  "One or two",
  oneToHundred.filter(oneOrTwo),
  "Even and below 10",
  oneToHundred.filter(evenAndBelowTen)
);
You can even create complicated filter logic by nesting And/Or structures:
const True = _ => true;
const False = _ => false;
const Or = (f, g) => x => f(x) || g(x);
Or.seed = False;
const And = (f, g) => x => f(x) && g(x);
And.seed = True;
const Filter = (fs, operator) => fs.reduce(operator, operator.seed);
const mod = x => y => y % x === 0;
const oneToHundred = Array.from(Array(100), (_, i) => i);
console.log(
  "Divisible by (3 and 5), or (3 and 7)",
  oneToHundred.filter(
    Filter(
      [
        Filter([mod(3), mod(5)], And),
        Filter([mod(3), mod(7)], And)
      ],
      Or
    )
  )
);
Or, with your own example situation:
const comp = (f, g) => x => f(g(x));
const gt = x => y => y > x;
const eq = x => y => x === y;
const prop = k => o => o[k];
const And = (f, g) => x => f(x) && g(x);
const True = _ => true;
const Filter = (fs) => fs.reduce(And, True);
const richMemberFilter = Filter(
  [
    comp(gt(200000), prop("income")),
    comp(eq(true), prop("member"))
  ]
);

console.log(
  "Rich members:",
  data().filter(richMemberFilter).map(prop("firstName"))
);

function data() {
  return [
    { firstName: 'Jesper', lastName: 'Jensen', income: 120000, member: true },
    { firstName: 'Jane', lastName: 'Johnson', income: 230000, member: true },
    { firstName: 'John', lastName: 'Jackson', income: 230000, member: false }
  ];
}
This question already has answers here:
Javascript compare 3 values
(7 answers)
Closed 4 years ago.
I need a way to compare 3 values in a short way like this:
'aaa'=='aaa'=='aaa'
false
but as you can see, it doesn't work. Why?
With 2 values it does work obviously:
'aaa'=='aaa'
true
Comparing the first two values evaluates to true, and then that true is compared with "aaa", which evaluates to false.
To make it correct you can write:
const a = 'aaa';
const b = 'aaa';
const c = 'aaa';
console.log(a === b && b === c); //true
if you have those strings stored in variables you can do
let a = 'aaa', b = 'aaa', c = 'aaa'
console.log(a === b && b === c) // true
The expression 'aaa'=='aaa'=='aaa' is evaluated as ('aaa'=='aaa')=='aaa'.
The sub-expression in parentheses evaluates to true and it becomes true=='aaa', which is false because when JavaScript compares two values of different types, it first converts them to a common type. It converts the boolean true to the number 1 and the string 'aaa' to the number NaN, and 1 == NaN is false because NaN is not equal to anything, not even itself.
What you need is
console.log('aaa'=='aaa' && 'aaa'=='aaa')
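The coercion chain can be observed step by step:

```javascript
console.log('aaa' == 'aaa' == 'aaa'); // false
// Evaluated left to right:
console.log('aaa' == 'aaa');          // true
console.log(true == 'aaa');           // false: true coerces to 1, 'aaa' to NaN
console.log(Number('aaa'));           // NaN
console.log(1 == NaN);                // false: NaN is not equal to anything
```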
You can put all values in an Array and then use Array.prototype.every() function to test if all satisfy the condition defined in the callback you pass to it:
let a = 'aaa', b = 'aaa', c = 'aaa'
let arr = [a, b, c]
let arr2 = [1, 2, 1]
console.log(arr.every(i => [arr[0]].includes(i)))
console.log(arr2.every(i => [arr2[0]].includes(i)))
Also, you may collect the unique values of a given sequence; if you end up with a single element, it's because all the values are equal:
const same = xs => new Set (xs).size === 1
const x = 'aaa'
const y = 'aaa'
const z = 'aaa'
const areSame = same ([ x, y, z ])
console.log(areSame)
const x_ = 'aaa'
const y_ = 'bbb'
const z_ = 'aaa'
const areSame_ = same ([ x_, y_, z_ ])
console.log (areSame_)
With variadic arguments
const same = (...xs) => new Set (xs).size === 1
const x = 'aaa'
const y = 'aaa'
const z = 'aaa'
const areSame = same (x, y, z)
console.log(areSame)
const x_ = 'aaa'
const y_ = 'bbb'
const z_ = 'aaa'
const areSame_ = same (x_, y_, z_)
console.log (areSame_)
This is an advanced topic of my prior question here:
How to store data of a functional chain?
The brief idea is
A simple function below:
const L = a => L;
forms
L
L(1)
L(1)(2)
...
This seems to form a list, but the actual data is not stored at all. So if the data needs to be stored, such as [1,2], what is the smartest practice to get the task done?
One of the prominent ideas is from @user633183, which I marked as the accepted answer (see the question link), and another version of the curried function is also provided by @Matías Fidemraizer.
So here goes:
const L = a => {
  const m = list => x => !x
    ? list
    : m([...list, x]);
  return m([])(a);
};
const list1 = (L)(1)(2)(3); //lazy : no data evaluation here
const list2 = (L)(4)(5)(6);
console.log(list1()) // now evaluated by the tail ()
console.log(list2())
What I really like is it turns out lazy evaluation.
Although the given approach satisfies what I mentioned, this function has lost the outer structure, or I should mention:
Algebraic structure
const L = a => L;
which forms a list and, more fundamentally, gives us an algebraic structure with an identity element, potentially forming a Monoid or at least a Magma.
Left and right identity
Some of the easiest examples of monoids and identity elements in JavaScript are numbers, strings, and arrays.
0 + a === a === a + 0
1 * a === a === a * 1
For strings, the empty string "" is the identity element.
"" + "Hello world" === "Hello world" === "Hello world" + ""
Same goes for arrays.
And the same goes for L:
(L)(a) === (a) === (a)(L)
const L = a => L;
const a = L(5); // the number is wrapped, or "lifted", into type L
// Similarity of String and Array
// "5" [5]
//left identity
console.log(
(L)(a) === (a) //true
);
//right identity
console.log(
(a) === (a)(L) //true
);
and the obvious identity immutability:
const L = a => L;
console.log(
(L)(L) === (L) //true
);
console.log(
(L)(L)(L) === (L) //true
);
console.log(
(L)(L)(L)(L) === (L) //true
);
Also the below:
const L = a => L;
const a = (L)(1)(2)(3);
const b = (L)(1)(L)(2)(3)(L);
console.log(
(a) === (b) //true
);
Questions
What is the smartest or most elegant way (purely functional, with no mutations, and no Array.push either) to implement L so that it satisfies these three requirements:
Requirement 0 - Identity
A simple function:
const L = a => L;
already satisfies the identity law as we already have seen.
Requirement 1 - eval() method
Although L satisfies the identity law, there is no method to access to the listed/accumulated data.
(Answers provided in my previous question provide the data-accumulation ability, but break the identity law.)
Lazy evaluation seems the correct approach, so providing a clearer specification:
provide eval method of L
const L = a => L; // needs to enhance to satisfy the requirements
const a = (L)(1)(2)(3);
const b = (L)(1)(L)(2)(3)(L);
console.log(
(a) === (b) //true
);
console.log(
(a).eval() //[1, 2, 3]
);
console.log(
(b).eval() //[1, 2, 3]
);
Requirement 2 - Monoid Associative law
In addition to the prominent identity structure, monoids also satisfy the associative law
(a * b) * c === a * b * c === a * (b * c)
This simply means "flatten the list", in other words, the structure does not contain nested lists.
[a, [b, c]] is no good.
Sample:
const L = a => L; // needs to enhance to satisfy the requirements
const a = (L)(1)(2);
const b = (L)(3)(4);
const c = (L)(99);
const ab = (a)(b);
const bc = (b)(c);
const abc1 = (ab)(c);
const abc2 = (a)(bc);
console.log(
abc1 === abc2 // true for Associative
);
console.log(
(ab).eval() //[1, 2, 3, 4]
);
console.log(
(abc1).eval() //[1, 2, 3, 4, 99]
);
console.log(
(abc2).eval() //[1, 2, 3, 4, 99]
);
That is all for the three requirements to implement L as a monoid.
This is a great functional-programming challenge for me, and I actually tried it by myself for a while; but as with my previous questions, it's very good practice to share my own challenge, hear from people, and read their elegant code.
Thank you.
Your data type is inconsistent!
So, you want to create a monoid. Consider the structure of a monoid:
class Monoid m where
  mempty :: m           -- identity element
  (<>)   :: m -> m -> m -- binary operation
-- It satisfies the following laws:
mempty <> x = x = x <> mempty    -- identity law
(x <> y) <> z = x <> (y <> z)    -- associativity law
Now, consider the structure of your data type:
(L)(a) = (a) = (a)(L) // identity law
((a)(b))(c) = (a)((b)(c)) // associativity law
Hence, according to you the identity element is L and the binary operation is function application. However:
(L)(1) // This is supposed to be a valid expression.
(L)(1) != (1) != (1)(L) // But it breaks the identity law.
// (1)(L) is not even a valid expression. It throws an error. Therefore:
((L)(1))(L) // This is supposed to be a valid expression.
((L)(1))(L) != (L)((1)(L)) // But it breaks the associativity law.
The problem is that you are conflating the binary operation with the reverse list constructor:
// First, you're using function application as a reverse cons (a.k.a. snoc):
// cons :: a -> [a] -> [a]
// snoc :: [a] -> a -> [a] -- arguments flipped
const xs = (L)(1)(2); // [1,2]
const ys = (L)(3)(4); // [3,4]
// Later, you're using function application as the binary operator (a.k.a. append):
// append :: [a] -> [a] -> [a]
const zs = (xs)(ys); // [1,2,3,4]
If you're using function application as snoc then you can't use it for append as well:
snoc :: [a] -> a -> [a]
append :: [a] -> [a] -> [a]
Notice that the types don't match, but even if they did you still don't want one operation to do two things.
What you want are difference lists.
A difference list is a function that takes a list and prepends another list to it. For example:
const concat = xs => ys => xs.concat(ys); // This creates a difference list.
const f = concat([1,2,3]); // This is a difference list.
console.log(f([])); // You can get its value by applying it to the empty array.
console.log(f([4,5,6])); // You can also apply it to any other array.
The cool thing about difference lists is that they form a monoid, because they are just endofunctions:
const id = x => x; // The identity element is just the id function.
const compose = (f, g) => x => f(g(x)); // The binary operation is composition.
compose(id, f) = f = compose(f, id); // identity law
compose(compose(f, g), h) = compose(f, compose(g, h)); // associativity law
Even better, you can package them into a neat little class where function composition is the dot operator:
class DList {
  constructor(f) {
    this.f = f;
    this.id = this;
  }
  cons(x) {
    return new DList(ys => this.f([x].concat(ys)));
  }
  concat(xs) {
    return new DList(ys => this.f(xs.concat(ys)));
  }
  apply(xs) {
    return this.f(xs);
  }
}
const id = new DList(x => x);
const cons = x => new DList(ys => [x].concat(ys)); // Construct DList from value.
const concat = xs => new DList(ys => xs.concat(ys)); // Construct DList from array.
id . concat([1, 2, 3]) = concat([1, 2, 3]) = concat([1, 2, 3]) . id // identity law
concat([1, 2]) . cons(3) = cons(1) . concat([2, 3]) // associativity law
You can use the apply method to retrieve the value of the DList as follows:
class DList {
  constructor(f) {
    this.f = f;
    this.id = this;
  }
  cons(x) {
    return new DList(ys => this.f([x].concat(ys)));
  }
  concat(xs) {
    return new DList(ys => this.f(xs.concat(ys)));
  }
  apply(xs) {
    return this.f(xs);
  }
}
const id = new DList(x => x);
const cons = x => new DList(ys => [x].concat(ys));
const concat = xs => new DList(ys => xs.concat(ys));
const identityLeft = id . concat([1, 2, 3]);
const identityRight = concat([1, 2, 3]) . id;
const associativityLeft = concat([1, 2]) . cons(3);
const associativityRight = cons(1) . concat([2, 3]);
console.log(identityLeft.apply([])); // [1,2,3]
console.log(identityRight.apply([])); // [1,2,3]
console.log(associativityLeft.apply([])); // [1,2,3]
console.log(associativityRight.apply([])); // [1,2,3]
An advantage of using difference lists over regular lists (functional lists, not JavaScript arrays) is that concatenation is more efficient because the lists are concatenated from right to left. Hence, it doesn't copy the same values over and over again if you're concatenating multiple lists.
mirror test
To make L self-aware we have to somehow tag the values it creates. This is a generic trait and we can encode it using a pair of functions. We set an expectation of the behavior –
is (Foo, 1) // false 1 is not a Foo
is (Foo, tag (Foo, 1)) // true tag (Foo, 1) is a Foo
Below we implement is and tag. We want to design them such that we can put in any value and we can reliably determine the value's tag at a later time. We make exceptions for null and undefined.
const Tag =
Symbol ()
const tag = (t, x) =>
x == null
? x
: Object.assign (x, { [Tag]: t })
const is = (t, x) =>
x == null
? false
: x[Tag] === t
const Foo = x =>
tag (Foo, x)
console.log
( is (Foo, 1) // false
, is (Foo, []) // false
, is (Foo, {}) // false
, is (Foo, x => x) // false
, is (Foo, true) // false
, is (Foo, undefined) // false
, is (Foo, null) // false
)
console.log
( is (Foo, Foo (1)) // true we can tag primitives
, is (Foo, Foo ([])) // true we can tag arrays
, is (Foo, Foo ({})) // true we can tag objects
, is (Foo, Foo (x => x)) // true we can even tag functions
, is (Foo, Foo (true)) // true and booleans too
, is (Foo, Foo (undefined)) // false but! we cannot tag undefined
, is (Foo, Foo (null)) // false or null
)
We now have a function Foo which is capable of distinguishing values it produced. Foo becomes self-aware –
const Foo = x =>
is (Foo, x)
? x // x is already a Foo
: tag (Foo, x) // tag x as Foo
const f =
Foo (1)
Foo (f) === f // true
L of higher consciousness
Using is and tag we can make List self-aware. If given an input of a List-tagged value, List can respond per your design specification.
const None =
Symbol ()
const L = init =>
{ const loop = (acc, x = None) =>
// x is empty: return the internal array
x === None
? acc
// x is a List: concat the two internal arrays and loop
: is (L, x)
? tag (L, y => loop (acc .concat (x ()), y))
// x is a value: append and loop
: tag (L, y => loop ([ ...acc, x ], y))
return loop ([], init)
}
We try it out using your test data –
const a =
L (1) (2)
const b =
L (3) (4)
const c =
L (99)
console.log
( (a) (b) (c) () // [ 1, 2, 3, 4, 99 ]
, (a (b)) (c) () // [ 1, 2, 3, 4, 99 ]
, (a) (b (c)) () // [ 1, 2, 3, 4, 99 ]
)
It's worth comparing this implementation to the last one –
// previous implementation
const L = init =>
{ const loop = (acc, x) =>
x === undefined // don't use !x, read more below
? acc
: y => loop ([...acc, x], y)
return loop ([], init)
}
In our revision, a new branch is added for is (L, x) that defines the new monoidal behavior. Most importantly, any returned value is wrapped in tag (L, ...) so that it can later be identified as an L-tagged value. The other change is the explicit use of a None symbol; additional remarks on this have been added at the end of this post.
equality of L values
To determine equality of L(x) and L(y) we are faced with another problem. Compound data in JavaScript are represented with Objects which cannot be simply compared with the === operator
console.log
( { a: 1 } === { a: 1 } ) // false
We can write an equality function for L, perhaps called Lequal
const l1 =
L (1) (2) (3)
const l2 =
L (1) (2) (3)
const l3 =
L (0)
console.log
( Lequal (l1, l2) // true
, Lequal (l1, l3) // false
, Lequal (l2, l3) // false
)
But I won't go into how to do that in this post. If you're interested, I covered that topic in this Q&A.
// Hint:
const Lequal = (l1, l2) =>
arrayEqual // compare two arrays
( l1 () // get actual array of l1
, l2 () // get actual array of l2
)
tagging in depth
The tagging technique I used here is one I use in other answers. It is accompanied by a more extensive example here.
other remarks
Don't use !x to test for an empty value because it will return true for any "falsy" x. For example, if you wanted to make a list of L (1) (0) (3) ... It will stop after 1 because !0 is true. Falsy values include 0, "" (empty string), null, undefined, NaN, and of course false itself. It's for this reason we use an explicit None symbol to more precisely identify when the list terminates. All other inputs are appended to the internal array.
And don't rely on hacks like JSON.stringify to test for object equality. Structural traversal is absolutely required.
const x = { a: 1, b: 2 }
const y = { b: 2, a: 1 }
console.log
(JSON.stringify (x) === JSON.stringify (y)) // false
console.log
(Lequal (L (x), L (y))) // should be true!
For advice on how to solve this problem, see this Q&A
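For illustration, here is a minimal structural-equality sketch (a hypothetical deepEqual that handles only primitives, arrays, and plain objects, with no cycle detection):

```javascript
// Recursively compare two values by structure rather than by reference.
const deepEqual = (a, b) => {
  if (a === b) return true;
  if (typeof a !== 'object' || typeof b !== 'object' || a === null || b === null)
    return false;
  const ka = Object.keys(a);
  const kb = Object.keys(b);
  return ka.length === kb.length &&
    ka.every((k) => deepEqual(a[k], b[k]));
};

console.log(deepEqual({ a: 1, b: 2 }, { b: 2, a: 1 })); // true: key order is irrelevant
console.log(deepEqual({ a: 1 }, { a: 2 }));             // false
```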
Say I want to sum a.x for each element in arr.
arr = [ { x: 1 }, { x: 2 }, { x: 4 } ];
arr.reduce(function(a, b){ return a.x + b.x; }); // => NaN
I have cause to believe that a.x is undefined at some point.
The following works fine
arr = [ 1, 2, 4 ];
arr.reduce(function(a, b){ return a + b; }); // => 7
What am I doing wrong in the first example?
A cleaner way to accomplish this is by providing an initial value as the second argument to reduce:
var arr = [{x:1}, {x:2}, {x:4}];
var result = arr.reduce(function (acc, obj) { return acc + obj.x; }, 0);
console.log(result); // 7
The first time the anonymous function is called, it gets called with (0, {x: 1}) and returns 0 + 1 = 1. The next time, it gets called with (1, {x: 2}) and returns 1 + 2 = 3. It's then called with (3, {x: 4}), finally returning 7.
This also handles the case where the array is empty, returning 0.
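A small sketch showing both behaviors side by side (sumX is just an illustrative name):

```javascript
const sumX = (arr) => arr.reduce((acc, obj) => acc + obj.x, 0);

console.log(sumX([{ x: 1 }, { x: 2 }, { x: 4 }])); // 7
console.log(sumX([]));                             // 0

// Without an initial value, reducing an empty array throws a TypeError:
try {
  [].reduce((a, b) => a + b);
} catch (e) {
  console.log(e instanceof TypeError); // true
}
```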
After the first iteration you're returning a number and then trying to get its property x to add to the next object; that property is undefined, and maths involving undefined results in NaN.
Try returning an object containing an x property with the sum of the x properties of the parameters:
var arr = [{x:1},{x:2},{x:4}];
arr.reduce(function (a, b) {
return {x: a.x + b.x}; // returns object with property x
})
// ES6
arr.reduce((a, b) => ({x: a.x + b.x}));
// -> {x: 7}
Explanation added from comments:
The return value of each iteration of [].reduce is used as the a variable in the next iteration.
Iteration 1: a = {x:1}, b = {x:2}, {x: 3} assigned to a in Iteration 2
Iteration 2: a = {x:3}, b = {x:4}.
The problem with your example is that you're returning a number literal.
function (a, b) {
return a.x + b.x; // returns number literal
}
Iteration 1: a = {x:1}, b = {x:2}, // returns 3 as a in next iteration
Iteration 2: a = 3, b = {x:4} returns NaN
A number literal 3 does not (typically) have a property called x, so a.x is undefined, undefined + b.x returns NaN, and NaN + <anything> is always NaN
Clarification: I prefer my method over the other top answer in this thread as I disagree with the idea that passing an optional parameter to reduce with a magic number to get out a number primitive is cleaner. It may result in fewer lines written but imo it is less readable.
TL;DR, set the initial value
Using destructuring
arr.reduce( ( sum, { x } ) => sum + x , 0)
Without destructuring
arr.reduce( ( sum , cur ) => sum + cur.x , 0)
With Typescript
arr.reduce( ( sum, { x } : { x: number } ) => sum + x , 0)
Let's try the destructuring method:
const arr = [ { x: 1 }, { x: 2 }, { x: 4 } ]
const result = arr.reduce( ( sum, { x } ) => sum + x , 0)
console.log( result ) // 7
The key to this is setting the initial value. The return value becomes the first parameter of the next iteration.
The technique used in the top answer is not idiomatic
The accepted answer proposes NOT passing the "optional" value. This is wrong, as the idiomatic way is for the second parameter always to be included. Why? Three reasons:
1. Dangerous
-- Not passing in the initial value is dangerous and can create side-effects and mutations if the callback function is careless.
Behold
const badCallback = (a,i) => Object.assign(a,i)
const foo = [ { a: 1 }, { b: 2 }, { c: 3 } ]
const bar = foo.reduce( badCallback ) // bad use of Object.assign
// Look, we've tampered with the original array
foo // [ { a: 1, b: 2, c: 3 }, { b: 2 }, { c: 3 } ]
If however we had done it this way, with the initial value:
const bar = foo.reduce( badCallback, {})
// foo is still OK
foo // [ { a: 1 }, { b: 2 }, { c: 3 } ]
For the record, unless you intend to mutate the original object, set the first parameter of Object.assign to an empty object. Like this: Object.assign({}, a, b, c).
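Applying that advice to the reducer above, a non-mutating variant looks like this: passing {} as the target of Object.assign (and as reduce's initial value) leaves the source array intact.

```javascript
// Non-mutating reducer: each call builds a fresh object instead of
// writing into the first array element.
const safeCallback = (acc, item) => Object.assign({}, acc, item);
const foo = [{ a: 1 }, { b: 2 }, { c: 3 }];
const bar = foo.reduce(safeCallback, {});
console.log(bar); // { a: 1, b: 2, c: 3 }
console.log(foo); // [ { a: 1 }, { b: 2 }, { c: 3 } ] (untouched)
```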
2 - Better Type Inference
--When using a tool like TypeScript or an editor like VS Code, you get the benefit of telling the compiler the initial value, and it can catch errors if you're doing it wrong. If you don't set the initial value, in many situations the compiler might not be able to infer the type, and you could end up with creepy runtime errors.
3 - Respect the Functors
-- JavaScript shines best when its inner functional child is unleashed. In the functional world, there is a standard on how you "fold" or reduce an array. When you fold or apply a catamorphism to the array, you take the values of that array to construct a new type. You need to communicate the resulting type--you should do this even if the final type is that of the values in the array, another array, or any other type.
Let's think about it another way. In JavaScript, functions can be passed around like data; this is how callbacks work. What is the result of the following code?
[1,2,3].reduce(callback)
Will it return a number? An object? This makes it clearer:
[1,2,3].reduce(callback,0)
Read more on the functional programming spec here: https://github.com/fantasyland/fantasy-land#foldable
Some more background
The reduce method takes two parameters,
Array.prototype.reduce( callback, initialItem )
The callback function takes the following parameters
(accumulator, itemInArray, indexInArray, entireArray) => { /* do stuff */ }
For the first iteration,
If initialItem is provided, the reduce function passes the initialItem as the accumulator and the first item of the array as the itemInArray.
If initialItem is not provided, the reduce function passes the first item in the array as the accumulator and the second item in the array as itemInArray, which can be confusing behavior.
I teach and recommend always setting the initial value of reduce.
You can check out the documentation at:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/Reduce
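The first-iteration behavior described above is easy to verify with a small probe that records which arguments reduce passes on each call, with and without an initial value (example inputs assumed):

```javascript
// Without an initial value: the first element becomes the accumulator,
// so the callback only runs for the remaining elements.
const calls = [];
[10, 20, 30].reduce((acc, item) => { calls.push([acc, item]); return acc + item; });
console.log(calls); // [[10, 20], [30, 30]]

// With an initial value: the callback runs once per element,
// starting from the initial value.
const callsWithInit = [];
[10, 20, 30].reduce((acc, item) => { callsWithInit.push([acc, item]); return acc + item; }, 0);
console.log(callsWithInit); // [[0, 10], [10, 20], [30, 30]]
```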
Others have answered this question, but I thought I'd toss in another approach. Rather than go directly to summing a.x, you can combine a map (from a.x to x) and reduce (to add the x's):
arr = [{x:1},{x:2},{x:4}]
arr.map(function(a) {return a.x;})
.reduce(function(a,b) {return a + b;});
Admittedly, it's probably going to be slightly slower, but I thought it worth mentioning as an option.
To formalize what has been pointed out, a reducer is a catamorphism which takes two arguments which may be the same type by coincidence, and returns a type which matches the first argument.
function reducer (accumulator: X, currentValue: Y): X { }
That means that the body of the reducer needs to be about converting currentValue and the current value of the accumulator to the value of the new accumulator.
This works in a straightforward way, when adding, because the accumulator and the element values both happen to be the same type (but serve different purposes).
[1, 2, 3].reduce((x, y) => x + y);
This just works because they're all numbers.
[{ age: 5 }, { age: 2 }, { age: 8 }]
.reduce((total, thing) => total + thing.age, 0);
Now we're giving a starting value to the aggregator. The starting value should be the type that you expect the aggregator to be (the type you expect to come out as the final value), in the vast majority of cases.
While you aren't forced to do this (and shouldn't be), it's important to keep in mind.
Once you know that, you can write meaningful reductions for other n:1 relationship problems.
Removing repeated words:
const skipIfAlreadyFound = (words, word) => words.includes(word)
? words
: words.concat(word);
const deduplicatedWords = aBunchOfWords.reduce(skipIfAlreadyFound, []);
Providing a count of all words found:
const incrementWordCount = (counts, word) => {
counts[word] = (counts[word] || 0) + 1;
return counts;
};
const wordCounts = words.reduce(incrementWordCount, { });
Reducing an array of arrays, to a single flat array:
const concat = (a, b) => a.concat(b);
const numbers = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
].reduce(concat, []);
Any time you're looking to go from an array of things, to a single value that doesn't match a 1:1, reduce is something you might consider.
In fact, map and filter can both be implemented as reductions:
const map = (transform, array) =>
array.reduce((list, el) => list.concat(transform(el)), []);
const filter = (predicate, array) => array.reduce(
(list, el) => predicate(el) ? list.concat(el) : list,
[]
);
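For example, here is how those reduce-based map and filter behave (definitions repeated so the snippet is self-contained; inputs assumed for illustration):

```javascript
// map and filter expressed as reductions, as defined above.
const map = (transform, array) =>
  array.reduce((list, el) => list.concat(transform(el)), []);
const filter = (predicate, array) =>
  array.reduce((list, el) => (predicate(el) ? list.concat(el) : list), []);

const doubled = map(x => x * 2, [1, 2, 3]);
const evens = filter(x => x % 2 === 0, [1, 2, 3, 4]);
console.log(doubled); // [2, 4, 6]
console.log(evens);   // [2, 4]
```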
I hope this provides some further context for how to use reduce.
The one addition to this, which I haven't broken into yet, is when there is an expectation that the input and output types are specifically meant to be dynamic, because the array elements are functions:
const compose = (...fns) => x =>
fns.reduceRight((x, f) => f(x), x);
const hgfx = h(g(f(x)));
const hgf = compose(h, g, f);
const hgfy = hgf(y);
const hgfz = hgf(z);
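A concrete run of that compose helper, with inc and double as example functions assumed for illustration:

```javascript
// compose applies the function list right-to-left via reduceRight.
const compose = (...fns) => x => fns.reduceRight((x, f) => f(x), x);

const inc = x => x + 1;
const double = x => x * 2;
const incThenDouble = compose(double, inc); // rightmost function runs first
console.log(incThenDouble(3)); // 8, i.e. double(inc(3))
```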
For the first iteration 'a' will be the first object in the array, hence a.x + b.x will return 1 + 2, i.e. 3.
In the next iteration the returned 3 is assigned to a, so a is now a number, and calling a.x gives undefined, making the sum NaN.
Simple solution is first mapping the numbers in array and then reducing them as below:
arr.map(a=>a.x).reduce(function(a,b){return a+b})
Here arr.map(a=>a.x) provides an array of numbers, [1,2,4]; using .reduce(function(a,b){return a+b}) then simply adds these numbers without any hassle.
Another simple solution is to provide an initial sum of zero by passing 0 as the second argument of reduce, as below:
arr.reduce(function(a,b){return a + b.x},0)
At each step of your reduce, you aren't returning a new {x:???} object. So you either need to do:
arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(a,b){return a + b.x})
or you need to do
arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(a,b){return {x: a.x + b.x}; })
If you have a complex object with a lot of data, like an array of objects, you can take a step by step approach to solve this.
For example:
const myArray = [{ id: 1, value: 10}, { id: 2, value: 20}];
First, you should map your array into a new array of your interest; in this example, a new array of values.
const values = myArray.map(obj => obj.value);
This callback function returns a new array containing only the values from the original array, and we store it in the values const. Now your values const is an array like this:
values = [10, 20];
And now you are ready to perform your reduce:
const sum = values.reduce((accumulator, currentValue) => { return accumulator + currentValue; } , 0);
As you can see, the reduce method executes the callback function multiple times. Each time, it takes the current value of the item in the array and adds it to the accumulator. So to properly sum the values, you need to set the initial value of your accumulator as the second argument of the reduce method.
Now you have your new const sum with the value of 30.
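The two steps above can also be chained into a single expression (same myArray as in the example):

```javascript
const myArray = [{ id: 1, value: 10 }, { id: 2, value: 20 }];

// map extracts the values, reduce sums them, starting from 0.
const sum = myArray
  .map(obj => obj.value)
  .reduce((accumulator, currentValue) => accumulator + currentValue, 0);
console.log(sum); // 30
```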
I did it in ES6 with a little improvement:
arr.reduce((a, b) => ({x: a.x + b.x})).x // returns a number
In the first step it will work fine, as the value of a.x will be 1 and that of b.x will be 2, and 1 + 2 is returned. But in the next step, the value of a will be the return value from step 1, i.e. 3, so a.x will be undefined... and undefined + anyNumber is NaN, which is why you are getting that result.
Instead, you can try this by giving an initial value of zero, i.e.:
arr.reduce(function(a,b){return a + b.x},0);
I used to encounter this in my development; what I do is wrap my solution in a function to make it reusable in my environment, like this:
const sumArrayOfObject =(array, prop)=>array.reduce((sum, n)=>{return sum + n[prop]}, 0)
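Example usage of that helper with the question's array (helper repeated here so the snippet is self-contained):

```javascript
// Generic sum over a named numeric property of each object.
const sumArrayOfObject = (array, prop) =>
  array.reduce((sum, n) => sum + n[prop], 0);

const arr = [{ x: 1 }, { x: 2 }, { x: 4 }];
console.log(sumArrayOfObject(arr, 'x')); // 7
```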
Just my 2 cents on setting a default value with object literal.
let arr = [{
duration: 1
}, {
duration: 3
}, {
duration: 5
}, {
duration: 6
}];
const out = arr.reduce((a, b) => {
return {
duration: a.duration + b.duration
};
}, {
duration: 0
});
console.log(out);
let temp =[{x:1},
{x:2},
{x:3},
{x:4}];
let sum = temp.map(element => element.x).reduce((a, b) => a+ b , 0)
console.log(sum);
We can use this approach to sum the x values.
Output: 10
reduce function iterates over a collection
arr = [{x:1},{x:2},{x:4}] // is a collection
arr.reduce(function(a,b){return a.x + b.x})
translates to:
arr.reduce(
  // for each index in the collection, this callback function is called
  function (
    a,            // accumulator: on each callback, the current accumulator value is passed in as "a"
    b,            // currentValue: for example, {x:1} in the 1st callback
    currentIndex, // index of the current element
    array         // the array being reduced
  ) {
    return a.x + b.x; // the value returned here becomes the accumulator for the next callback
  },
  initialValue    // optional second argument: the accumulator for the first callback
);
During each callback, the current value of the accumulator is passed into the "a" parameter of the callback function. If we don't pass an initial value, the first element of the array is used as the accumulator, so from the second callback onwards "a" is a number, a.x is undefined, and the result becomes NaN.
To solve this, initialize the accumulator with the value 0, as Casey's answer showed above.
To understand the ins and outs of the reduce function, I would suggest you look at the source code of an implementation. The Lodash library has a reduce function which works exactly the same as Array.prototype.reduce.
Here is the link :
reduce source code
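As a rough sketch of those internals, reduce behaves roughly like this hand-rolled version. This is a simplification for illustration, not the real spec-compliant source (for instance, it treats an explicit undefined initial value as "not provided"):

```javascript
// Simplified reimplementation of Array.prototype.reduce's behavior.
function myReduce(array, callback, initialValue) {
  let acc = initialValue;
  let start = 0;
  if (acc === undefined) {
    // No initial value: use the first element and start from the second.
    if (array.length === 0) {
      throw new TypeError('Reduce of empty array with no initial value');
    }
    acc = array[0];
    start = 1;
  }
  for (let i = start; i < array.length; i++) {
    acc = callback(acc, array[i], i, array);
  }
  return acc;
}

console.log(myReduce([{ x: 1 }, { x: 2 }, { x: 4 }], (a, b) => a + b.x, 0)); // 7
```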
To return a sum of all the x props:
arr.reduce(
(a,b) => (a.x || a) + b.x
)
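Why this works: on the first call, a is the first object, so a.x is used; on later calls, a is the running number, a.x is undefined, and || a falls through to the number itself. A quick check with the question's array:

```javascript
const arr = [{ x: 1 }, { x: 2 }, { x: 4 }];

// First call: a = {x:1}, so (a.x || a) is 1. Later calls: a is a number,
// a.x is undefined, so (a.x || a) is the number itself.
const total = arr.reduce((a, b) => (a.x || a) + b.x);
console.log(total); // 7
```

One caveat: if the first element's x is 0 (or otherwise falsy), a.x || a falls back to the object itself and the result is wrong, so passing an explicit initial value is safer in general.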
You can use the reduce method as below. If you change the 0 (zero) to 1 or another number, that number is added to the total. For example, this example gives a total of 31, but if we change 0 to 1, the total will be 32.
const batteryBatches = [4, 5, 3, 4, 4, 6, 5];
let totalBatteries= batteryBatches.reduce((acc,val) => acc + val ,0)
function aggregateObjectArrayByProperty(arr, propReader, aggregator, initialValue) {
const reducer = (a, c) => {
return aggregator(a, propReader(c));
};
return arr.reduce(reducer, initialValue);
}
const data = [{a: 'A', b: 2}, {a: 'A', b: 2}, {a: 'A', b: 3}];
let sum = aggregateObjectArrayByProperty(data, function(x) { return x.b; }, function(x, y) { return x + y; }, 0);
console.log(`Sum = ${sum}`);
console.log(`Average = ${sum / data.length}`);
let product = aggregateObjectArrayByProperty(data, function(x) { return x.b; }, function(x, y) { return x * y; }, 1);
console.log(`Product = ${product}`);
Just wrote a generic function based on the previously given solutions. I am a Java developer, so apologies for any mistakes or non-JavaScript idioms :-)
A generic TypeScript function:
const sum = <T>(array: T[], predicate: (value: T, index: number, array: T[]) => number) => {
return array.reduce((acc, value, index, array) => {
return acc + predicate(value, index, array);
}, 0);
};
Example:
const s = sum(arr, (e) => e.x);
var arr = [{x:1}, {x:2}, {x:3}];
arr.map(function(a) {return a.x;})
   .reduce(function(a, b) {return a + b});
console.log(arr);
// I tried the code above and logged arr, and the result is still the original
// data array, because the return value of the map/reduce chain was never stored:
// result = [{x:1}, {x:2}, {x:3}]

var arr2 = [{x:1}, {x:2}, {x:3}]
  .reduce((total, thing) => total + thing.x, 0);
console.log(arr2);
// After changing the code to store the reduced value, it worked.
// result = 6
We can use the array reduce method to create a new object, and we can use this approach to sum or filter.
const FRUITS = ["apple", "orange"]
const fruitBasket = {banana: {qty: 10, kg:3}, apple: {qty: 30, kg:10}, orange: {qty: 1, kg:3}}
const newFruitBasket = FRUITS.reduce((acc, fruit) => ({ ...acc, [fruit]: fruitBasket[fruit]}), {})
console.log(newFruitBasket)
The reduce function takes two parameters: a callback and an optional initialValue. The callback receives the accumulator and the current value.
If no initialValue is provided, the first element of the array is used as the initial accumulator, and iteration starts from the second element.
Let's see this in code.
var arr = [1, 2, 4];
arr.reduce((acc, currVal) => acc + currVal);
// (no initialValue, so the first element, 1, becomes the accumulator)
// first iteration:  1 + 2 => now accumulator = 3
// second iteration: 3 + 4 => now accumulator = 7
No more array elements, so the loop ends.
// solution = 7
Now same example with initial Value :
var initialValue = 10;
var arr =[1,2,4] ;
arr.reduce((acc,currVal) => acc + currVal,initialValue ) ;
// (this time the initialValue, 10, becomes the accumulator)
// first iteration:  10 + 1 => now accumulator = 11
// second iteration: 11 + 2 => now accumulator = 13
// third iteration:  13 + 4 => now accumulator = 17
No more array elements, so the loop ends.
// solution = 17
The same applies for arrays of objects (the current Stack Overflow question), as long as you also pass the initial value:
var arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(acc,currVal){return acc + currVal.x}, 0)
Here currVal is an object with all of that element's properties, so currVal.x gives its number:
// first iteration:  0 + 1 => now accumulator = 1
// second iteration: 1 + 2 => now accumulator = 3
// third iteration:  3 + 4 => now accumulator = 7
No more array elements, so the loop ends.
// solution = 7
ONE THING TO BEAR IN MIND is that initialValue is optional and can be anything: a number, {} or [].
// fill creates an array with n (empty) elements
// the callback's third parameter (here "c") is the current index
var fibonacci = (n) => Array(n).fill().reduce((a, b, c) => {
return a.concat(c < 2 ? c : a[c - 1] + a[c - 2])
}, [])
console.log(fibonacci(8))
You should not use a.x for the accumulator. Instead, you can do it like this (note the initial value and the return statement):
`arr = [{x:1},{x:2},{x:4}]
arr.reduce(function(a,b){return a + b.x},0)`