Are arrays monads in modern JavaScript?

I'm fairly new to functional programming and I want to figure out whether arrays in modern JavaScript are monads. Arrays now have a flatMap method (added to the spec recently: https://tc39.es/ecma262/#sec-array.prototype.flatmap), and using it I was able to satisfy all of the monad laws. Now I want to confirm that I'm actually correct, but I haven't been able to find a resource that validates this claim. I did find a statement that arrays are almost, but not quite, monads (https://stackoverflow.com/a/50478169/11083823), but it was made before flatMap was added.
These are the validations of the monad laws:
left identity (satisfied):
const value = 10
const array = [value]
const twice = (value) => [value, value]
array.flatMap(twice) === twice(value) // both [10, 10]
right identity (satisfied):
const array = [10]
const wrap = (value) => [value]
array.flatMap(wrap) === array // both [10]
associativity (satisfied):
const array = [10]
const twice = (value) => [value, value]
const doubled = (value) => [value * 2]
array.flatMap(twice).flatMap(doubled) === array.flatMap((value) => twice(value).flatMap(doubled)) // both [20, 20]
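Note that === on arrays compares references, so the expressions above are meant as value comparisons (as the comments suggest) and would evaluate to false if run literally. A quick structural-equality sketch, assuming JSON-serializable values:
const sameArray = (a, b) => JSON.stringify(a) === JSON.stringify(b)
sameArray([10].flatMap(twice), twice(10)) // true -- left identity, checked by value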

Yes, arrays are monads.
In Haskell, we can use a monadic bind on lists like so:
λ> [1, 2, 3] >>= \a -> [a, 0 - a]
[1,-1,2,-2,3,-3]
Here's the Haskell Monad instance for lists: https://hackage.haskell.org/package/base-4.14.1.0/docs/src/GHC.Base.html#line-1133
Here's a resource that explains the list monad: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/List
P.S. Monads are a mathematical formalism and are language-agnostic.
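For comparison, the same computation as the Haskell bind above, translated directly to Array.prototype.flatMap:
[1, 2, 3].flatMap(a => [a, -a]) // [1, -1, 2, -2, 3, -3]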

Related

Higher-order function with recursion in Javascript

newbie here... I'm trying to grasp the concept of functional programming in Javascript, but I got stuck.
I'm trying to apply a function to another function with recursion (higher-order function). Let's say I have an input that can be a variable or an array, for example:
const A = [5, 14, 23, 32, 41];
const B = 50;
My basic function should convert Fahrenheit to Celsius (but it could really be any function)
const convertF2C = x => (x - 32) / 1.8;
So the way I would normally solve it would be:
const result = array => array.map ? array.map(result) : convertF2C(array); // using recursion if the input is an array
The problem with the above is that if I wanted to change convertF2C, I would have to modify the code of the "result" function itself.
So, thinking functionally I should be able to create a general function that takes a basic function, like this:
const arrayResult = apply2Array(convertF2C);
console.log(arrayResult(A)); // Output: [-15, -10, -5, 0, 5]
console.log(arrayResult(B)); // Output: 10
Where I'm guessing that the general function apply2Array should look something along the lines of:
const apply2Array = fn => (...args) => args.map ? args.map(apply2Array) : fn(...args); // does not work
I found a “sort of” similar question here, but it did not help me: Higher-order function of recursive functions?
Any guidance, help or pointing me in the right direction would be much appreciated.
I'm a bit confused by the answers here. I can't tell if they are responding to requirements I don't actually see, or if I'm missing something important.
But if you just want a decorator that converts a function on a scalar to one that operates on either a scalar or an array of scalars, it's quite simple, and you weren't far off. This should do it:
const apply2Array = (fn) => (arg) =>
  Array .isArray (arg) ? arg .map (fn) : fn (arg)
const convertF2C = (t) => (t - 32) / 1.8
const A = [5, 14, 23, 32, 41]
const B = 50
const arrayResult = apply2Array(convertF2C);
console .log (arrayResult (A))
console .log (arrayResult (B))
I would suggest that you should use Array.isArray for the check and not the existence of a map property. A property named map might be something other than Array.prototype.map, perhaps something to do with cartography.
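For instance, a contrived sketch of how the duck-type check misfires (atlas is hypothetical):
const atlas = { map: (region) => `a chart of ${region}` } // a map property unrelated to Array.prototype.map
atlas.map ? 'treated as array' : 'treated as scalar' // 'treated as array' -- wrong
Array.isArray(atlas) // false -- correct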
Other comments and answers suggest you also want to work the same on nested arrays, to convert something like [5, [[14, 23], 32], 41] into [-15, [[-10, -5], 0], 5]. That wouldn't be much harder. All you need to do, as Bergi suggests, is to wrap the recursively applied function in the same decorator:
const apply2Array = (fn) => (arg) =>
  Array .isArray (arg) ? arg .map (apply2Array (fn)) : fn (arg)
  //                               ^^^^^^^^^^^^^^^^
const convertF2C = (t) => (t - 32) / 1.8
const A = [5, 14, 23, 32, 41]
const B = 50
const C = [5, [[14, 23], 32], 41]
const arrayResult = apply2Array(convertF2C);
console .log (arrayResult (A))
console .log (arrayResult (B))
console .log (arrayResult (C))
Don't do this
Still, I would suggest that this enterprise is fraught with potential pitfalls. Imagine, for instance, that you had a sum function that operated on an array of numbers, and you wanted to use it to operate on either an array of numbers or an array of arrays of numbers.
If you wrapped it up with either version of apply2Array, it wouldn't work properly. With the first version, the function will work as expected if you supply an array of arrays of numbers, but will fail if you simply supply an array of numbers, as sketched below. The second one will fail either way.
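A quick sketch of that failure (sum here is hypothetical):
const sum = (ns) => ns.reduce((total, n) => total + n, 0)
const wrappedSum = apply2Array(sum) // first version of the decorator
wrappedSum([[1, 2], [3, 4]]) // [3, 7] -- works: sum is mapped over the inner arrays
wrappedSum([1, 2, 3]) // TypeError -- sum is mapped over plain numbers, and (1).reduce is not a function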
The trouble is that sometimes your basic function wants to operate on an array. Making a function that does multiple things based on the types of its inputs loses you some simplicity.
Instead, I would suggest that you create multiple functions to do the different things that you need. You can still use a decorator, but a more general one than the above.
Here we use one called map, which reifies Array.prototype.map:
const map = (fn) => (xs) =>
  xs .map (x => fn (x))
const convertF2C = (t) => (t - 32) / 1.8
const convertAllF2C = map (convertF2C)
const A = [5, 14, 23, 32, 41]
const B = 50
console .log (convertAllF2C (A))
console .log (convertF2C (B))
And if you also wanted deep mapping, you might rename the decorator above, and do this:
const map = (fn) => (xs) =>
  xs .map (x => fn (x))
const deepMap = (fn) => (arg) =>
  Array .isArray (arg) ? arg .map (deepMap (fn)) : fn (arg)
const convertF2C = (t) => (t - 32) / 1.8
const convertAllF2C = map (convertF2C)
const deepConvertF2C = deepMap (convertF2C)
const A = [5, 14, 23, 32, 41]
const B = 50
const C = [5, [[14, 23], 32], 41]
console .log (convertAllF2C (A))
console .log (convertF2C (B))
console .log (deepConvertF2C (C))
Having three separate functions to call for your three cases is generally simpler than one function that can be called with three different styles of input associated with three different styles of output. And as these are built from our base function with only some generic decorators, they are still easy to maintain.
But doesn't that contradict...?
Some people know me as a founder and primary author of Ramda. And Ramda has a map function related to this. But it seems to operate on multiple types, including arrays, objects, functions, and more. Isn't this a contradiction?
I'd say no. We just need to move up a layer of abstraction. FantasyLand specifies an abstract generic type, Functor (borrowed from abstract mathematics). These are types which in some way contain one or more values of another type, and for which we can create a similarly-structured container by mapping a supplied function over each of those values. There are certain simple laws that your map function must obey for your type to be considered a Functor, but if it does obey them, then Ramda's map will work just fine with your type. In other words, Ramda's map does not work on Arrays specifically, but on any Functor. Ramda itself supplies implementations for Arrays, Objects, and Functions, and delegates the call on other types to their own map methods.
The basic point, though, is that Ramda does not really impose additional complexity here, because the input type of Ramda's map is Functor instead of Array.
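A minimal sketch of such a type (Box is hypothetical, not part of Ramda or FantasyLand):
const Box = (x) => ({
  map: (f) => Box(f(x)), // returns a similarly-structured container, so the Functor laws can hold
  value: x
})
Box(41).map(n => n + 1).value // 42 -- and Ramda's map would delegate to this map method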
Simplicity
Functional programming is about many things. But one of the central topics has to be simplicity. If you haven't seen Rich Hickey's talk Simple Made Easy, I would highly recommend it. It explains an objective notion of simplicity and describes how you might achieve it.
If you give the inner function a name, it becomes easier to write the recursion:
const makeDeeplyMappable = fn => function deepMap(a) {
  return Array.isArray(a) ? a.map(deepMap) : fn(a);
};
const convertF2C = x => (x - 32) / 1.8;
const deepF2C = makeDeeplyMappable(convertF2C);
console.log(deepF2C([[32, [0]]]));
console.log(deepF2C([[[[5]]], 32]));
genericRecursiveLoop should be the answer you are looking for.
const convertF2C = x => (x - 32) / 1.8;
// in case you really want to use recursion:
// args.map exists only when args is an array (e.g. [[1,2,3]].map), so nested
// arrays recurse while plain numbers fall through to convertF2C
const recursiveLoop = (args) => args.map ? args.map(recursiveLoop) : convertF2C(args)
console.log('recursiveLoop')
console.log(recursiveLoop(5))
console.log(recursiveLoop(6))
console.log(recursiveLoop(5, 6)) // extra arguments are ignored; it will only show the response for 5
console.log(recursiveLoop([5, 6]))
/* ANSWER */
const genericRecursiveLoop = (func) => (args) => args.map ? args.map(genericRecursiveLoop(func)) : func(args);
let innerRecursiveLoopFunc = genericRecursiveLoop(convertF2C)
console.log('genericRecursiveLoop')
console.log(innerRecursiveLoopFunc(5))
console.log(innerRecursiveLoopFunc(6))
console.log(innerRecursiveLoopFunc(5, 6)) // extra arguments are ignored; it will only show the response for 5
console.log(innerRecursiveLoopFunc([5, 6]))
You could also consider Array#flatMap if your data structure isn't arbitrarily nested... this would probably help you preserve readability.
const A = [5, 14, 23, 32, 41];
const B = 50;
const toCelsius = (x) => []
  .concat(x)
  .flatMap((n) => (n - 32) / 1.8);
console.log('A', toCelsius(A));
console.log('B', toCelsius(B));

flattening an array? [].concat(...[array])?

I understand that [].concat(...array) will flatten an array of arrays by one level, but I've been taking a webpack course, and in the code to load presets it uses the syntax [].concat(...[array]).
My understanding of it is:
const array = [[1,2,3], [4,5,6], [7,8,9]];
const result = [].concat(...array); // [1,2,3,4,5,6,7,8,9]
const result2 = [].concat(...[array]); // [[1,2,3], [4,5,6], [7,8,9]]
It's definitely confusing me because the course code (below) does work, but I can't see what [].concat(...[array]) achieves?
const webpackMerge = require("webpack-merge");
const applyPresets = (env = {presets: []}) => {
  const presets = env.presets || [];
  /** @type {string[]} */
  const mergedPresets = [].concat(...[presets]);
  const mergedConfigs = mergedPresets.map(presetName =>
    require(`./presets/webpack.${presetName}`)(env)
  );
  return webpackMerge({}, ...mergedConfigs);
};
};
module.exports = applyPresets;
Can anyone give me a vague idea please?
This is a bit odd.
The concat() method takes each element from multiple arrays and adds them to a new array. So
[].concat([1], [2], [3]) // [1, 2, 3]
Now the [somevariable] syntax places somevariable into an array.
let arr1 = [1, 2, 3, 4]
let arr2 = [arr1]
console.log(arr2) // prints [[1, 2, 3, 4]]
And finally, the ... syntax (called spread syntax) essentially disassembles an array, so its elements can be accessed directly
function myFunc(...[x, y, z]) {
  return x * y * z; // without the dots, we'd have to say arr[0] * arr[1] * arr[2]
}
Thus, the [].concat(...[array]) expression you're confused about indeed accomplishes nothing; it places array into another array with the [] syntax, then immediately disassembles it back to how it was with the ... syntax. An equivalent expression is [].concat(array), which doesn't accomplish much either, since it has a single argument and the concat() method is called on an empty array.
Let me start off by saying that I have no clue why the spread operator is used. [].concat(...[presets]) is equivalent to [].concat(presets) as far as I know.
However, [].concat(presets) is probably used to normalize presets. If presets is already an array, this does nothing other than create a shallow copy.
If presets is not concat-spreadable (it is not an array and has no truthy Symbol.isConcatSpreadable property), like "foo", then it is wrapped into an array with a single element and the output will be ["foo"].
If there is a custom data type that is concat-spreadable (its Symbol.isConcatSpreadable property is truthy) but doesn't have all the array methods, it can be converted into a real array of size presets.length using this method.
Since the output is always an array, methods like map (mergedPresets.map) can be used without worrying about the type of presets.
const normalize = presets => [].concat(presets);
console.log(normalize(["foo"]));
console.log(normalize("foo"));
console.log(normalize({
  0: "foo",
  length: 1,
  [Symbol.isConcatSpreadable]: true
}));

Why can't we use the spread operator within Array.map() and what are the alternative to flatten array of arrays? [duplicate]

This question already has answers here: Using spread operator multiple times in javascript? (4 answers)
Here is what I tried:
let a = [[1,2], [3,4]];
a.map(val => ...val)
// => SyntaxError: expected expression, got '...'
// Expected output: [1,2,3,4]
I tried with an explicit return statement and with surrounding the value in parentheses, but neither worked...
I just wonder if there is a simple way to return "spreaded array" ?
Edit: I have now seen this SO question, which gives details on how the spread operator works, although it doesn't really answer how to "flatten" an array (I modified the title of the question).
This isn't valid syntax; for this to work, you need to spread ("unpack" the contents of) your array into a container of some sort (such as an array). However, if you were to do something like:
a.map(val => [...val])
you would not be doing much with your array; you would just end up with copies of the inner arrays. Thus, you can use methods other than .map, such as .reduce or .flatMap/.flat, to achieve your desired output:
Using .reduce with the spread syntax:
let a = [[1,2], [3,4]];
let res = a.reduce((acc, val) => [...acc, ...val], []);
console.log(res)
Using .flatMap():
let a = [[1,2], [3,4]];
let res = a.flatMap(val => val); // maps each element to itself, then flattens one level: [[1, 2], [3, 4]] -> [1, 2, 3, 4]
console.log(res)
The mapping step is redundant here, though, so using .flat() alone would suffice:
let a = [[1,2], [3,4]];
let res = a.flat();
console.log(res)
Try a.flatMap(x => x) to flatten the array while mapping elements, or a.flat() to flatten with no mapping:
let a = [[1,2], [3,4]];
let r=a.flat();
console.log(r);
flat takes a depth argument that sets how many levels to flatten; passing Infinity flattens arbitrarily nested arrays:
let a = [[1,2,[3,4,[5,[6]]]], [[7,[8]],9]];
let r=a.flat(Infinity);
console.log(r);
As written in comments, functions can only return one value.
But there is a simple trick that you can use:
let a = [[1,2], [3,4]];
a.reduce((a,b) => a.concat(b),[])
// Expected output: [1,2,3,4]

Transducer flatten and uniq

I'm wondering if there is a way, using a transducer, to flatten a list and filter on unique values.
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
But here we loop over the list twice (plus once more per depth layer). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
  if (isArray(c)) {
    c.forEach(o => p = flattenDeepUniq(p, o));
  }
  else {
    p = !p.includes(c) ? [...p, c] : p;
  }
  return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
We have one loop over the elements (plus recursion into deeper layers), which seems better and more optimized.
Is it even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
// ^               ^               ^             ^
// |               |               |             `------ Iteration 4
// |               |               `-------------------- Iteration 3
// |               `------------------------------------ Iteration 2
// `---------------------------------------------------- Iteration 1
into something like this:
xs.reduce((r, x) => r.length >= 5 ? r : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3) : r, [])
// ^
// `-- Just one iteration
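For reference, a minimal sketch of the shape such a transducer takes in plain JavaScript (not Ramda's internals): a transducer transforms one reducing step into another, so map and filter fuse into a single reduce:
const mapT = (f) => (step) => (acc, x) => step(acc, f(x));
const filterT = (p) => (step) => (acc, x) => p(x) ? step(acc, x) : acc;
const xform = mapT(x => x * 7)(filterT(x => x % 2 === 1)((acc, x) => acc.concat(x)));
[1, 2, 3, 4].reduce(xform, []); // [7, 21] -- mapped and filtered in one pass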
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
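To see the order reversal with plain functions, a minimal sketch (pipe2/compose2 are two-argument stand-ins for Ramda's variadic pipe and compose):
const pipe2 = (f, g) => (x) => g(f(x)); // left-to-right
const compose2 = (f, g) => (x) => f(g(x)); // right-to-left
pipe2(n => n + 1, n => n * 2)(3); // 8
compose2(n => n + 1, n => n * 2)(3); // 7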
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
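A sketch of the stateful idea behind uniq (a plain stateful filter predicate, hypothetical, not an actual transducer):
const uniqStep = () => {
  const seen = new Set(); // quick inserts and lookups, as described above
  return (x) => !seen.has(x) && (seen.add(x), true); // keep only first occurrences
};
[1, 2, 2, 3, 1].filter(uniqStep()); // [1, 2, 3]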
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda, so you can use it directly. As for flatten, you can traverse the tree up front to produce a stream of flat values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function* (list) {
  for (const it of list) {
    yield* Array.isArray(it) ? deepIterate(it) : [it];
  }
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))

How can I replicate Python's dict.items() in Javascript?

In Javascript I have a JSON object from which I want to process just the items:
var json = {
  itema: {stuff: 'stuff'},
  itemb: {stuff: 'stuff'},
  itemc: {stuff: 'stuff'},
  itemd: {stuff: 'stuff'}
}
In Python I could do
print json.items()
[{stuff: 'stuff'},{stuff: 'stuff'},{stuff: 'stuff'},{stuff: 'stuff'}]
Can I do this in js?
You cannot do this the same way as in python without extending Object.prototype, which you don't want to do, because it is the path to misery.
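A quick sketch of the misery, for illustration only:
Object.prototype.items = function () { /* ... */ }; // don't do this
for (var k in {a: 1}) console.log(k); // logs "a" and then "items" -- every for...in loop in the program is now polluted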
You could create a helper function easily that could loop over the object and put the value into an array however, like this:
function items(obj) {
  var i, arr = [];
  for(i in obj) {
    arr.push(obj[i]);
  }
  return arr;
}
P.S.: JSON is a data format; what you have is an object literal.
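For what it's worth, modern JavaScript has since added Object.values, which covers this helper directly:
Object.values({itema: 1, itemb: 2}) // [1, 2]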
In python dict.items returns a list of tuples containing both the keys and the values of the dict. Javascript doesn't have tuples, so it would have to be a nested array.
If you'll excuse me a little python code to show the difference.
>>> {1:2, 2:3}.items()
[(1, 2), (2, 3)]
>>> {1:2, 2:3}.values()
[2, 3]
I see the accepted answer returns an array of the object's values, which is the equivalent of the python function dict.values. What is asked for is dict.items. To do this, just loop and build up a nested array of 2-element arrays.
function items(obj){
  var ret = [];
  for(var v in obj){
    ret.push(Object.freeze([v, obj[v]]));
  }
  return Object.freeze(ret);
}
I put the Object.freeze in to be pedantic and enforce that the returned value shouldn't be altered, to emulate the immutability of python tuples. Obviously it still works if you take it out.
It should be noted that doing this somewhat defeats the purpose of items in that it is used when iterating over the object in a linear rather than associative fashion and it avoids calculating the hash value to look up each element in the associative array. For small objects who cares but for large ones it might slow you down and there might be a more idiomatic way to do what you want in javascript.
Another, newer way to do it is to use Object.entries(), which will do exactly what you want.
Object.entries({1:1, 2:2, 3:3})
  .forEach(function(v){
    console.log(v)
  });
The support is limited to those browser versions mentioned in the documentation.
Thanks to recent updates to JavaScript, we can solve this now:
function items(iterable) {
  return {
    [Symbol.iterator]: function* () {
      for (const key in iterable) {
        yield [key, iterable[key]];
      }
    }
  };
}
for (const [key, val] of items({"a": 3, "b": 4, "c": 5})) {
  console.log(key, val);
}
// a 3
// b 4
// c 5
for (const [key, val] of items(["a", "b", "c"])) {
  console.log(key, val);
}
// 0 a
// 1 b
// 2 c
ubershmekel's answer makes use of lazy evaluation, compared to my answer below which uses eager evaluation. Lazy evaluation has many benefits which make it much more appropriate for performance reasons in some cases, but the transparency of eager evaluation can be a development speed boon that may make it preferable in other cases.
const keys = Object.keys;
const values = object =>
  keys(object).map(key => object[key]);
const items = object =>
  keys(object).map(key => [key, object[key]]);
const obj = {a: 10, b: 20, c: 30};
keys(obj) // ["a", "b", "c"]
values(obj) // [10, 20, 30]
items(obj) // [["a", 10], ["b", 20], ["c", 30]]
items(obj).forEach(([k, v]) => console.log(k, v))
// a 10
// b 20
// c 30
Not sure what you want to do, but I guess JSON.stringify will do something like that. See http://www.json.org/js.html
