Higher-order function with recursion in JavaScript

newbie here... I'm trying to grasp the concept of functional programming in JavaScript, but I got stuck.
I'm trying to apply a function to another function with recursion (higher-order function). Let's say I have an input that can be a variable or an array, for example:
const A = [5, 14, 23, 32, 41];
const B = 50;
My basic function should convert Fahrenheit to Celsius (but it could really be any function)
const convertF2C = x => (x - 32) / 1.8;
So the way I would normally solve it would be:
const result = array => array.map ? array.map(result) : convertF2C(array); // using recursion if the input is an array
The problem with the above is that if I wanted to change convertF2C in the "result" function, I would have to modify the code.
So, thinking functionally I should be able to create a general function that takes a basic function, like this:
const arrayResult = apply2Array(convertF2C);
console.log(arrayResult(A)); // Output: [-15, -10, -5, 0, 5]
console.log(arrayResult(B)); // Output: 10
Where I'm guessing that the general function "apply2Array" should look something along the lines of:
const apply2Array = fn => (...args) => args.map ? args.map(apply2Array) : fn(...args); // does not work
I found a “sort of” similar question here, but it did not help me: Higher-order function of recursive functions?
Any guidance, help or pointing me in the right direction would be much appreciated.

I'm a bit confused by the answers here. I can't tell if they are responding to requirements I don't actually see, or if I'm missing something important.
But if you just want a decorator that converts a function on a scalar to one that operates on either a scalar or an array of scalars, it's quite simple, and you weren't far off. This should do it:
const apply2Array = (fn) => (arg) =>
  Array .isArray (arg) ? arg .map (fn) : fn (arg)
const convertF2C = (t) => (t - 32) / 1.8
const A = [5, 14, 23, 32, 41]
const B = 50
const arrayResult = apply2Array(convertF2C);
console .log (arrayResult (A))
console .log (arrayResult (B))
I would suggest that you use Array.isArray for the check and not the existence of a map property. A property named map might be something other than Array.prototype.map, perhaps something to do with cartography.
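For instance, a duck-typed check can be fooled by any object that happens to have a map property (atlas below is a hypothetical object invented just to illustrate the pitfall):

const atlas = {region: 'Iceland', map: 'https://example.com/iceland.png'}
const unsafeApply = (fn) => (arg) => arg.map ? arg.map(fn) : fn(arg)
// unsafeApply(convertF2C)(atlas) // TypeError: arg.map is not a function
Array.isArray(atlas) // false, so the isArray version falls through to fn(atlas) instead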
Other comments and answers suggest you also want this to work the same way on nested arrays, to convert something like [5, [[14, 23], 32], 41] into [-15, [[-10, -5], 0], 5]. That wouldn't be much harder. All you need to do, as Bergi suggests, is to wrap the recursively applied function in the same decorator:
const apply2Array = (fn) => (arg) =>
  Array .isArray (arg) ? arg .map (apply2Array (fn)) : fn (arg)
//                                 ^^^^^^^^^^^
const convertF2C = (t) => (t - 32) / 1.8
const A = [5, 14, 23, 32, 41]
const B = 50
const C = [5, [[14, 23], 32], 41]
const arrayResult = apply2Array(convertF2C);
console .log (arrayResult (A))
console .log (arrayResult (B))
console .log (arrayResult (C))
Don't do this
Still, I would suggest that this enterprise is fraught with potential pitfalls. Imagine, for instance, that you had a sum function that operated on an array of numbers, and you wanted to use it to operate on either an array of numbers or on an array of arrays of numbers.
If you wrapped it up with either version of apply2Array, it wouldn't work properly. With the first version, the function will work as expected if you supply an array of arrays of numbers, but will fail if you simply supply an array of numbers. The second one will fail either way.
The trouble is that sometimes your basic function wants to operate on an array. Making a function that does multiple things based on the types of its inputs loses you some simplicity.
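A quick sketch of that sum pitfall (sum here is my own illustrative helper, not code from the question):

const sum = (ns) => ns.reduce((a, b) => a + b, 0)
const shallowSum = apply2Array(sum) // the first (shallow) version
shallowSum([[1, 2], [3, 4]]) // [3, 7], as expected
// shallowSum([1, 2, 3]) // TypeError: it maps sum over plain numbers
// The deep version fails either way: recursion reaches the individual
// numbers and calls sum(1), and 1.reduce is not a function.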
Instead, I would suggest that you create multiple functions to do the different things that you need. You can still use a decorator, but a more general one than the above.
Here we use one called map, which reifies Array.prototype.map:
const map = (fn) => (xs) =>
  xs .map (x => fn (x))
const convertF2C = (t) => (t - 32) / 1.8
const convertAllF2C = map (convertF2C)
const A = [5, 14, 23, 32, 41]
const B = 50
console .log (convertAllF2C (A))
console .log (convertF2C (B))
And if you also wanted deep mapping, you might rename the decorator above, and do this:
const map = (fn) => (xs) =>
  xs .map (x => fn (x))
const deepMap = (fn) => (arg) =>
  Array .isArray (arg) ? arg .map (deepMap (fn)) : fn (arg)
const convertF2C = (t) => (t - 32) / 1.8
const convertAllF2C = map (convertF2C)
const deepConvertF2C = deepMap (convertF2C)
const A = [5, 14, 23, 32, 41]
const B = 50
const C = [5, [[14, 23], 32], 41]
console .log (convertAllF2C (A))
console .log (convertF2C (B))
console .log (deepConvertF2C (C))
Having three separate functions to call for your three cases is generally simpler than one function that can be called with three different styles of input associated with three different styles of output. And as these are built from our base function with only some generic decorators, they are still easy to maintain.
But doesn't that contradict...?
Some people know me as a founder and primary author of Ramda. And Ramda has a map function related to this. But it seems to operate on multiple types, including arrays, objects, functions, and more. Isn't this a contradiction?
I'd say no. We just need to move up a layer of abstraction. FantasyLand specifies an abstract generic type, Functor (borrowed from abstract mathematics). These are types which in some way contain one or more values of another type, and from which we can create a similarly-structured container by applying a supplied function to each of those values. There are certain simple laws that your map function must obey for your type to be considered a Functor, but if it obeys them, then Ramda's map will work just fine with your type. In other words, Ramda's map does not work on Arrays specifically, but on any Functor. Ramda itself supplies implementations for Arrays, Objects, and Functions, but delegates the call for other types to their own map methods.
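Those laws are easy to state in code. A minimal check for Arrays (using JSON.stringify for deep equality purely for brevity):

const id = (x) => x
const f = (x) => x + 3
const g = (x) => x * 2
const eq = (a, b) => JSON.stringify(a) === JSON.stringify(b)
const xs = [1, 2, 3]
eq(xs.map(id), xs) // true: mapping the identity function changes nothing
eq(xs.map(f).map(g), xs.map(x => g(f(x)))) // true: mapping f then g equals mapping their composition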
The basic point, though, is that Ramda does not really impose additional complexity here, because the input type of Ramda's map is Functor instead of Array.
Simplicity
Functional programming is about many things. But one of the central topics has to be simplicity. If you haven't seen Rich Hickey's talk Simple Made Easy, I would highly recommend it. It explains an objective notion of simplicity and describes how you might achieve it.

If you give the inner function a name, it becomes easier to write the recursion:
const makeDeeplyMappable = fn => function deepMap(a) {
  return Array.isArray(a) ? a.map(deepMap) : fn(a);
};
const convertF2C = x => (x - 32) / 1.8;
const deepF2C = makeDeeplyMappable(convertF2C);
console.log(deepF2C([[32, [0]]]));
console.log(deepF2C([[[[5]]], 32]));

genericRecursiveLoop should be the answer you are looking for.
const convertF2C = x => (x - 32) / 1.8;
// in case you really want to use recursion:
// if args is an array (even a nested one like [[1, 2, 3]]), args.map is truthy,
// so we recurse into each element; a plain number has no .map, so it is converted directly
const recursiveLoop = (args) => args.map ? args.map(recursiveLoop) : convertF2C(args)
console.log('recursiveLoop')
console.log(recursiveLoop(5))
console.log(recursiveLoop(6))
console.log(recursiveLoop(5, 6)) // only 5 is converted: the extra argument is ignored
console.log(recursiveLoop([5, 6]))
/* ANSWER */
const genericRecursiveLoop = (func) => (args) => args.map ? args.map(genericRecursiveLoop(func)) : func(args);
let innerRecursiveLoopFunc = genericRecursiveLoop(convertF2C)
console.log('genericRecursiveLoop')
console.log(innerRecursiveLoopFunc(5))
console.log(innerRecursiveLoopFunc(6))
console.log(innerRecursiveLoopFunc(5, 6)) // only 5 is converted: the extra argument is ignored
console.log(innerRecursiveLoopFunc([5, 6]))

You could also consider Array#flatMap if your data structure isn't arbitrarily nested... this would probably help you preserve readability.
const A = [5, 14, 23, 32, 41];
const B = 50;
const toCelsius = (x) => []
  .concat(x)
  .flatMap((n) => (n - 32) / 1.8);
console.log('A', toCelsius(A));
console.log('B', toCelsius(B));
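One caveat worth noting (my own addition, not part of the answer above): Array#flatMap flattens only one level, which is why this approach suits a scalar or a flat array but not arbitrary nesting:

[5, [[14, 23], 32], 41].flatMap((n) => n) // [5, [14, 23], 32, 41]: still nested one level down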

Related

Are arrays monads in modern JavaScript?

I am kinda new to functional programming and I now want to figure out if arrays in modern JavaScript are monads. Arrays in modern JavaScript now have the flatMap method (this method was added recently: https://tc39.es/ecma262/#sec-array.prototype.flatmap). I was able to satisfy all of the monad laws using this method. Now I want to figure out if I'm actually correct, but I have not been able to find a resource that validates this claim. I have found a statement that arrays are almost monads, but not quite, though this statement was made before flatMap was added. (https://stackoverflow.com/a/50478169/11083823)
These are the validations of the monad laws:
left identity (satisfied):
const value = 10
const array = [value]
const twice = (value) => [value, value]
array.flatMap(twice) === twice(value) // both [10, 10]
right identity (satisfied):
const array = [10]
const wrap = (value) => [value]
array.flatMap(wrap) === array // both [10]
associativity (satisfied):
const array = [10]
const twice = (value) => [value, value]
const doubled = (value) => [value * 2]
array.flatMap(twice).flatMap(doubled) === array.flatMap((value) => twice(value).flatMap(doubled)) // both [20, 20]
Yes, arrays are monads.
In Haskell, we can use a monadic bind on lists like so:
λ> [1, 2, 3] >>= \a -> [a, 0 - a]
[1,-1,2,-2,3,-3]
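For comparison, the same bind expressed with JavaScript's flatMap:

[1, 2, 3].flatMap(a => [a, 0 - a]) // [1, -1, 2, -2, 3, -3]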
Here's the Haskell Monad instance for a list: https://hackage.haskell.org/package/base-4.14.1.0/docs/src/GHC.Base.html#line-1133
Here's a resource that explains the list monad: https://en.wikibooks.org/wiki/Haskell/Understanding_monads/List
PS. Monads are a mathematical formalism, and are language agnostic.

Best way to check if some coordinate is contained in an array (JavaScript)

When dealing with arrays of coordinates, seen as arrays of length 2, it is necessary to check whether some coordinate is contained in such an array. However, JavaScript cannot really do that directly (here I use the ES2016 method Array.includes, but the same issue appears with the more classical Array.indexOf):
const a = [[1,2],[5,6]];
const find = a.includes([5,6]);
console.log(find);
This returns false. This has always bothered me. Can someone explain to me why it returns false? To solve this issue, I usually define a helper function:
function hasElement(arr, el) {
  return arr.some(x => x[0] === el[0] && x[1] === el[1])
}
The inner condition here could also be replaced by x.toString() === el.toString(). Then in the example above hasElement(a,[5,6]) returns true.
Is there a more elegant way to check the inclusion, preferably without writing helper functions?
You can use the JSON.stringify method to convert a JavaScript object or value to a JSON string, do the same for the array you want to search for, and then just check whether the stringified main array includes the stringified target array.
const a = [[1,2],[5,6]], array = [5,6];
const find = JSON.stringify(a).includes(JSON.stringify(array));
console.log(find);
The reason is that in JavaScript, arrays are just objects and cannot be compared like primitive values. Two distinct object instances are never equal, so even though they look identical to the eye, they are completely different objects and will therefore always compare as unequal. See:
console.log([5, 6] === [5, 6]) // false
The JavaScript Array class is a global object that is used in the construction of arrays, which are high-level, list-like objects.
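To make the reference-equality point concrete: comparison succeeds only when both sides are the very same object:

const p = [5, 6]
console.log(p === p) // true: same reference
console.log(p === [5, 6]) // false: equal contents, but a different object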
You can use find and destructuring to simplify:
const a = [
  [1, 2],
  [5, 6]
];
const target = [5, 6];
// Method 1
const find = a.find(x => x.every(y => target.includes(y)));
// Method 2
const [xt, yt] = target;
const find2 = a.find(([x, y]) => xt === x && yt === y);
console.log(find, find2);
Referring back to Siva K V's answer: if you want to find the index of the first occurrence in the array where this is true, just replace find with findIndex (https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/findIndex).
const a = [
  [1, 2],
  [5, 6]
];
const target = [5, 6];
// Method 1
const find = a.findIndex(x => x.every(y => target.includes(y)));
// Method 2
const [xt, yt] = target;
const find2 = a.findIndex(([x, y]) => xt === x && yt === y);
console.log(find, find2);

Function Composition - Isn't looping over an array multiple times for multiple operations inefficient?

I am trying to understand the concepts and basics of functional programming. I am not going hard-core with Haskell or Clojure or Scala. Instead, I am using JavaScript.
So, if I have understood correctly, the idea behind functional programming is to compose a software application out of pure functions, each handling a single responsibility/functionality in the application without any side effects.
Composition takes place in such a way that the output of one function is piped in as an input to another (according to the logic).
I write two functions, for doubling and incrementing respectively, each taking an integer as an argument, followed by a utility function that composes the functions passed in as arguments.
{
  // doubles the input
  const double = x => x * 2
  // increments the input
  const increment = x => x + 1
  // composes the functions
  const compose = (...fns) => x => fns.reduceRight((x, f) => f(x), x)
  // input of interest
  const arr = [2, 3, 4, 5, 6]
  // composed function
  const doubleAndIncrement = compose(increment, double)
  // only doubled
  console.log(arr.map(double))
  // only incremented
  console.log(arr.map(increment))
  // double and increment
  console.log(arr.map(doubleAndIncrement))
}
The outputs are as follows:
[4, 6, 8, 10, 12] // double
[3, 4, 5, 6, 7] // increment
[5, 7, 9, 11, 13] // double and increment
So, my question is: won't the reduceRight function be going through the array twice in this case, once for each of the 2 functions?
If the array gets larger in size, wouldn't this be inefficient?
Using a loop, it can be done in a single traversal with the two operations in the same loop.
How can this be prevented or is my understanding incorrect in any way?
It is map that traverses the array, and that happens only once. reduceRight is traversing the list of composed functions (2 in your example) and threading the current value of the array through that chain of functions. The equivalent inefficient version you describe would be:
const map = f => x => x.map(f)
const doubleAndIncrement = compose(map(increment), map(double))
// double and increment inefficient
console.log(doubleAndIncrement(arr))
This reveals one of the laws that map must satisfy, that:
map(compose(g, f)) is equivalent (isomorphic) to compose(map(g), map(f))
But as we now know, the latter can be made more efficient by simplifying it to the former, and it will traverse the input array only once.
Optimising time complexity while still having small, dedicated functions is a widely discussed topic in FP.
Let's take this simple case:
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
const result = [1, 2, 3, 4, 5, 6, 7, 8]
  .map(double)
  .map(square)
  .map(increment)
  .filter(isOdd);
console.log('result', result);
In linear composition, as well as chaining, this operation can be read as O(4n) time complexity... meaning that for each input we perform roughly 4 operations (e.g. in a list of 4 billion items we would perform 16 billion operations).
We could solve this issue (intermediate arrays + an unnecessary number of operations) by embedding all the operations (double, square, increment, isOdd) into a single reduce function, as sketched below... however, this would result in a loss of readability.
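Such a single-pass reduce might look like this (a sketch, reusing the functions defined above):

const singlePass = [1, 2, 3, 4, 5, 6, 7, 8].reduce((acc, n) => {
  const value = increment(square(double(n)));
  return isOdd(value) ? [...acc, value] : acc;
}, []);
console.log('singlePass', singlePass); // same result, one traversal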
In FP there is the concept of a transducer, so that you can keep the readability of single-purpose functions while performing as few operations as possible.
const double = n => 2 * n;
const square = n => n * n;
const increment = n => 1 + n;
const isOdd = n => n % 2;
const transducer = R.into(
  [],
  R.compose(R.map(double), R.map(square), R.map(increment), R.filter(isOdd)),
);
const result = transducer([1, 2, 3, 4, 5, 6, 7, 8]);
console.log('result', result);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js" integrity="sha256-xB25ljGZ7K2VXnq087unEnoVhvTosWWtqXB4tAtZmHU=" crossorigin="anonymous"></script>

Calculate the mathematical difference of each element between two arrays

Given two arrays of the same length, return an array containing the mathematical difference of each element between the two arrays.
Example:
a = [3, 4, 7]
b = [3, 9, 10 ]
results: c = [(3-3), (9-4), (10-7)] so that c = [0, 5, 3]
let difference = []
function calculateDifferenceArray(data_one, data_two){
  let i = 0
  for (i in data_duplicates) {
    difference.push(data_two[i]-data_one[i])
  }
  console.log(difference)
  return difference
}
calculateDifferenceArray((b, a))
It does work.
I am wondering if there is a more elegant way to achieve the same.
Use map as follows:
const a = [3, 4, 7]
const b = [3, 9, 10]
const c = b.map((e, i) => e - a[i])
// [0, 5, 3]
for-in isn't a good tool for looping through arrays (more in my answer here).
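A quick illustration of why (my own example): for-in iterates property keys as strings, and picks up any extra enumerable properties:

const arr = [10, 20]
arr.extra = 'surprise'
for (const key in arr) console.log(typeof key, key)
// string 0, string 1, string extra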
"More elegant" is subjective, but it can be more concise and, to my eyes, clear if you use map:
function calculateDifferenceArray(data_one, data_two){
  return data_one.map((v1, index) => v1 - data_two[index])
}
calculateDifferenceArray(b, a) // < Note just one set of () here
Live Example:
const a = [3, 4, 7];
const b = [3, 9, 10];
function calculateDifferenceArray(data_one, data_two){
  return data_one.map((v1, index) => v1 - data_two[index]);
}
console.log(calculateDifferenceArray(b, a));
or if you prefer it slightly more verbose for debugging and the like:
function calculateDifferenceArray(data_one, data_two){
  return data_one.map((v1, index) => {
    const v2 = data_two[index]
    return v1 - v2
  })
}
calculateDifferenceArray(b, a)
A couple of notes on the version of this in the question:
It seems to loop over something (data_duplicates?) unrelated to the two arrays passed into the method.
It pushes to an array declared outside the function. That means if you call the function twice, it'll push the second set of values into the array but leave the first set of values there. That declaration and initialization should be inside the function, not outside it.
You had two sets of () in the calculateDifferenceArray call. That meant you only passed one argument to the function, because the inner () wrapped an expression with the comma operator, which takes its second operand as its result (see the demonstration after this list).
You had the order of the subtraction operation backward.
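A quick demonstration of that comma-operator point:

console.log((1, 2)) // 2: the comma operator evaluates both operands and yields the second
// so calculateDifferenceArray((b, a)) is equivalent to calculateDifferenceArray(a)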
You could use the higher-order array method map. It would work something like this:
let a = [2,3,4];
let b = [3,5,7];
let difference = a.map((n,i)=>n-b[i]);
console.log(difference);
you can read more about map here

Transducer flatten and uniq

I'm wondering if there is a way, using a transducer, to flatten a list and filter it to unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
But here we loop over the list twice (plus once more per layer of depth). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
  if (isArray(c)) {
    c.forEach(o => p = flattenDeepUniq(p, o));
  }
  else {
    p = !p.includes(c) ? [...p, c] : p;
  }
  return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
We have a single loop over the elements (plus nested loops for deeper layers), which seems better and more optimized.
Is this even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
// ^ ^ ^ ^
// \ \ \ `------ Iteration 4
// \ \ `--------------------- Iteration 3
// \ `-------------------------------------- Iteration 2
// `----------------------------------------------------- Iteration 1
into something like this:
xs.reduce((res, x) => res.length >= 5 ? res : isOdd(x * 7 + 3) ? res.concat(x * 7 + 3) : res, [])
// ^
// `------------------------------------------------------- Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
))
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda, so you can use it directly. And as for flatten, you can traverse the tree up front to produce a stream of flat values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function*(list) {
  for (const it of list) {
    yield* Array.isArray(it) ? deepIterate(it) : [it];
  }
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))
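(For the last snippet, isOdd is assumed to be defined elsewhere, e.g. const isOdd = n => n % 2 === 1.)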
