Default parameter value undefined; is this a JavaScript bug?

Below is a syntactically valid JavaScript program – only it doesn't behave quite the way we're expecting. The title of the question should help your eyes zoom in on The Problem Area.
const recur = (...args) =>
  ({ type: recur, args })

const loop = f =>
  {
    let acc = f ()
    while (acc.type === recur)
      acc = f (...acc.args)
    return acc
  }

const repeat = n => f => x =>
  loop ((n = n, f = f, x = x) => // The Problem Area
    n === 0
      ? x
      : recur (n - 1, f, f (x)))

console.time ('loop/recur')
console.log (repeat (1e6) (x => x + 1) (0))
console.timeEnd ('loop/recur')
// Error: Uncaught ReferenceError: n is not defined
If instead I use unique identifiers, the program works perfectly
const recur = (...args) =>
  ({ type: recur, args })

const loop = f =>
  {
    let acc = f ()
    while (acc.type === recur)
      acc = f (...acc.args)
    return acc
  }

const repeat = $n => $f => $x =>
  loop ((n = $n, f = $f, x = $x) =>
    n === 0
      ? x
      : recur (n - 1, f, f (x)))

console.time ('loop/recur')
console.log (repeat (1e6) (x => x + 1) (0)) // 1000000
console.timeEnd ('loop/recur') // 24 ms
Only this doesn't make sense. Let's talk about the original code that doesn't use the $-prefixes now.
When the lambda for loop is being evaluated, n, as received by repeat, is available in the lambda's environment. Setting the inner n to the outer n's value should effectively shadow the outer n. But instead, JavaScript sees this as some kind of problem and the inner n results in an assignment of undefined.
This seems like a bug to me but I suck at reading the spec so I'm unsure.
Is this a bug?

I guess you already figured out why your code doesn't work. Default arguments behave like recursive let bindings. Hence, when you write n = n you're initializing the newly declared (but not yet initialized) variable n with itself. Personally, I think this makes perfect sense.
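To see that behavior in isolation, here's a minimal sketch (the names are illustrative): the parameter list forms its own scope, so the right-hand n in n = n refers to the parameter being declared, not to any outer binding.
const n = 5

const broken = (n = n) => n // calling broken () throws a ReferenceError:
                            // the default reads the parameter n before it is initialized
const works = (m = n) => m  // fine: a distinct name reads the outer n

console.log (works ()) // 5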
So, you mentioned Racket in your comments and remarked on how Racket allows programmers to choose between let and letrec. I like to compare these bindings to the Chomsky hierarchy. The let binding is akin to regular languages. It isn't very powerful but allows variable shadowing. The letrec binding is akin to recursively enumerable languages. It can do everything but doesn't allow variable shadowing.
Since letrec can do everything that let can do, you don't really need let at all. A prime example of this is Haskell which only has recursive let bindings (unfortunately called let instead of letrec). The question now arises as to whether languages like Haskell should also have let bindings. To answer this question, let's look at the following example:
-- Inserts value into slot1 or slot2
insert :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
insert (slot1, slot2, value) =
    let (slot1', value')  = (slot1 || value,  slot1 && value)
        (slot2', value'') = (slot2 || value', slot2 && value')
    in  (slot1', slot2', value'')
If let in Haskell wasn't recursive then we could write this code as:
-- Inserts value into slot1 or slot2
insert :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
insert (slot1, slot2, value) =
    let (slot1, value) = (slot1 || value, slot1 && value)
        (slot2, value) = (slot2 || value, slot2 && value)
    in  (slot1, slot2, value)
So why doesn't Haskell have non-recursive let bindings? Well, there's definitely some merit to using distinct names. As a compiler writer, I notice that this style of programming is similar to static single assignment (SSA) form, in which every variable is assigned exactly once. By assigning a variable name only once, the program becomes easier for a compiler to analyze.
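For illustration, here's a rough sketch of SSA-style naming (the placeholder functions are mine, not from the answer above):
// Instead of reassigning one name (x = f(x); x = g(x);),
// SSA-style code assigns every binding exactly once:
const f = v => v + 1; // placeholder function
const g = v => v * 2; // placeholder function
const x0 = 5;
const x1 = f(x0);
const x2 = g(x1);
console.log(x2); // 12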
I think this applies to humans as well. Using distinct names helps people reading your code to understand it. For a person writing the code it might be more desirable to reuse existing names. However, for a person reading the code using distinct names prevents any confusion that might arise due to everything looking the same. In fact, Douglas Crockford (oft-touted JavaScript guru) advocates context coloring to solve a similar problem.
Anyway, back to the question at hand. There are two possible ways that I can think of to solve your immediate problem. The first solution is to simply use different names, which is what you did. The second solution is to emulate non-recursive let expressions. Note that in Racket, let is just a macro which expands to a left-left-lambda expression. For example, consider the following code:
(let ([x 5])
  (* x x))
This let expression would be macro expanded to the following left-left-lambda expression:
((lambda (x) (* x x)) 5)
In fact, we can do the same thing in Haskell using the reverse application operator (&):
import Data.Function ((&))

-- Inserts value into slot1 or slot2
insert :: (Bool, Bool, Bool) -> (Bool, Bool, Bool)
insert (slot1, slot2, value) =
    (slot1 || value, slot1 && value) & \(slot1, value) ->
    (slot2 || value, slot2 && value) & \(slot2, value) ->
    (slot1, slot2, value)
In the same spirit, we can solve your problem by manually "macro expanding" the let expression:
const recur = (...args) => ({ type: recur, args });

const loop = (args, f) => {
    let acc = f(...args);
    while (acc.type === recur)
        acc = f(...acc.args);
    return acc;
};

const repeat = n => f => x =>
    loop([n, f, x], (n, f, x) =>
        n === 0 ? x : recur(n - 1, f, f(x)));

console.time('loop/recur');
console.log(repeat(1e6)(x => x + 1)(0)); // 1000000
console.timeEnd('loop/recur');
Here, instead of using default parameters for the initial loop state I'm passing them directly to loop instead. You can think of loop as the (&) operator in Haskell which also does recursion. In fact, this code can be directly transliterated into Haskell:
import Prelude hiding (repeat)

data Recur r a = Recur r | Return a

loop :: r -> (r -> Recur r a) -> a
loop r f = case f r of
    Recur r  -> loop r f
    Return a -> a

repeat :: Int -> (a -> a) -> a -> a
repeat n f x = loop (n, f, x) (\(n, f, x) ->
    if n == 0 then Return x else Recur (n - 1, f, f x))

main :: IO ()
main = print $ repeat 1000000 (+1) 0
As you can see you don't really need let at all. Everything that can be done by let can also be done by letrec and if you really want variable shadowing then you can just manually perform the macro expansion. In Haskell, you could even go one step further and make your code prettier using The Mother of all Monads.

Related

How does compose function work with multiple parameters?

Here's a 'compose' function which I need to improve:
const compose = (fns) => (...args) => fns.reduceRight((args, fn) => [fn(...args)], args)[0];
Here's a practical implementation of one:
const compose = (fns) => (...args) => fns.reduceRight((args, fn) => [fn(...args)], args)[0];
const fn = compose([
    (x) => x - 8,
    (x) => x ** 2,
    (x, y) => (y > 0 ? x + 3 : x - 3),
]);

console.log(fn("3", 1));  // 1081
console.log(fn("3", -1)); // -8
And here's an improvement my mentor came to.
const compose = (fns) => (arg, ...restArgs) => fns.reduceRight((acc, func) => func(acc, ...restArgs), arg);
If we pass an arguments list like func(x, [y]) on the first iteration, I still don't understand how we make the function work with the unpacked array [y]?
Let's analyse what the improved compose does
compose = (fns) =>
    (arg, ...restArgs) =>
        fns.reduceRight((acc, func) => func(acc, ...restArgs), arg);
When you feed compose with a number of functions, you get back... a function. In your case you give it a name, fn.
What does this fn function look like? By simple substitution you can think of it as this:
(arg, ...restArgs) => fns.reduceRight((acc, func) => func(acc, ...restArgs), arg);
where fns === [(x) => x - 8, (x) => x ** 2, (x, y) => (y > 0 ? x + 3 : x - 3)].
So you can feed this function fn with some arguments, which will be "pattern-matched" against (arg, ...restArgs); in your example, when you call fn("3", 1), arg is "3" and restArgs is [1] (so ...restArgs expands to just 1 after the comma), and you see that fn("3", 1) reduces to
fns.reduceRight((acc, func) => func(acc, 1), "3");
From this you see that
the rightmost function, (x, y) => (y > 0 ? x + 3 : x - 3) is called with the two arguments "3" (the initial value of acc) and 1,
the result will be passed as the first argument to the middle function with the following call to func,
and so on,
but the point is that the second argument to func, namely 1, is only used by the rightmost function, whereas it is passed to but ignored by the other two functions!
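To make that concrete, you can unroll the reduceRight of fn("3", 1) by hand; each step below follows from the three functions above:
let acc = "3";                                     // the initial value
acc = ((x, y) => (y > 0 ? x + 3 : x - 3))(acc, 1); // "33" (string concatenation)
acc = ((x) => x ** 2)(acc, 1);                     // 1089 (the extra 1 is ignored)
acc = ((x) => x - 8)(acc, 1);                      // 1081 (the extra 1 is ignored)
console.log(acc);                                  // 1081, same as fn("3", 1)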
Conclusion
Function composition is a thing between unary functions.¹ Using it with functions with higher-than-1 arity leads to confusion.²
For instance consider these two functions
square = (x) => x**2; // unary
plus = (x,y) => x + y; // binary
can you compose them? Well, you can compose them into a function like this
sum_and_square = (x,y) => square(plus(x,y));
the compose function that you've got at the bottom of your question would go well:
sum_and_square = compose([square, plus]);
But what if your two functions were these?
apply_twice = (f) => ((x) => f(f(x))); // still unary, technically
plus = (x,y) => x + y; // still binary
Your compose would not work.
However, if the function plus were curried, e.g. if it were defined as
plus = (x) => (y) => x + y
then one could think of composing them in a function that acts like this:
f = (x,y) => apply_twice(plus(x))(y)
which would predictably produce f(3,4) === 10.
You can get a curried version of it as f = compose([apply_twice, plus]), which you would call as f(3)(4).
A cosmetic improvement
Additionally, I would suggest a "cosmetic" change: make compose accept ...fns instead of fns,
compose = (...fns) /* I've only added the three dots on this line */ =>
    (arg, ...restArgs) =>
        fns.reduceRight((acc, func) => func(acc, ...restArgs), arg);
and you'll be able to call it without grouping the functions to be composed in an array, e.g. you'd write compose(apply_twice, plus) instead of compose([apply_twice, plus]).
Btw, there's lodash
There are two functions in that library that can handle function composition:
_.flow
_.flowRight, which is aliased to _.compose in lodash/fp
(¹) This is Haskell's choice (. is the composition operator in Haskell). If you apply f . g . h to more than one argument, the first argument will be passed through the whole pipeline; that intermediate result will be applied to the second argument; that further intermediate result will be applied to the third argument, and so on. In other words, if you had haskellCompose in JavaScript, and if f was binary and g and h unary, haskellCompose(f, g, h)(x, y) would be equal to f(g(h(x)), y).
(²) Clojure's comp instead takes another choice. It saturates the rightmost function and then passes the result over to the others. So if you had clojureCompose in JavaScript, and f and g were unary while h was binary, then clojureCompose(f, g, h)(x, y) would be equal to f(g(h(x,y))).
It might be because I'm used to Haskell's automatically curried functions, but I prefer Haskell's choice.
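To make the two footnotes concrete, here is a hedged sketch of both choices in JavaScript (haskellCompose and clojureCompose are the hypothetical names from the footnotes, not real library functions):
const haskellCompose = (...fns) => (x, ...rest) => {
    const [outer, ...inner] = fns;
    // pipe x through the inner functions, then apply the leftmost
    // function to that result together with the remaining arguments
    return outer(inner.reduceRight((acc, fn) => fn(acc), x), ...rest);
};

const clojureCompose = (...fns) => (...args) => {
    const [rightmost, ...others] = [...fns].reverse();
    // saturate the rightmost function, then pipe its result through the rest
    return others.reduce((acc, fn) => fn(acc), rightmost(...args));
};

const f = (x, y) => [x, y]; // binary
const g = (x) => x + 1;     // unary
const h = (x) => x * 2;     // unary
console.log(haskellCompose(f, g, h)(2, 3)); // f(g(h(2)), 3) => [5, 3]

const h2 = (x, y) => x * y; // binary
console.log(clojureCompose(g, h2)(2, 3));   // g(h2(2, 3)) => 7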

Difficulty understanding functional programming composition, functors and monads example

I've kind of grasped some knowledge of functional programming, but I can't really wrap my head around this block of code. I didn't really know where to ask something like this, so I'm asking it here. I would really appreciate it if someone could help me understand what this higher-order function (a monads example) is doing.
P.S. This code is from the Composing Software book by Eric Elliott.
const f = n => n+1;
const g = n => n*2;
Is this composeM function made to compose and map over a number of functions? I know reduce, but I really have no idea how this function is supposed to work.
const composeM = (...mps) => mps.reduce((f, g) => x => g(x).map(f));
const h = composeM(f,g);
h(20)
Then, the function composeM was more generalized by doing:
const compose = method => (...mps) => mps.reduce((f, g) => x => g(x)[method](f));
Then, I could create composedPromises or composedMaps like
const composePromises = compose("then")(f,g);
How does g(x)[method](f) even work? It should be g(x).then(f).
Update: the map-based composeM above does not work for promises, so here it is rewritten with then:
const f = n => Promise.resolve( n+1 );
const g = n => Promise.resolve( n*2 );
const composePromises = (...mps) => mps.reduce((f, g) => x => g(x).then(f))
const h = composePromises(f, g)
h(20)
Consider function composition, which has the following type signature.
// compose :: (b -> c) -- The 1st argument is a function from b to c.
//         -> (a -> b) -- The 2nd argument is a function from a to b.
//         -> (a -> c) -- The final result is a function from a to c.
//
//               +-------------- b -> c
//               |  +----------- a -> b
//               |  |     +----- a
//               |  |     |
const compose = (f, g) => x => f(g(x));
//                              |____|
//                                 |
//                                 c
The composeP function is similar to the compose function, except it composes functions that return promises.
// composeP :: (b -> Promise c) -- The 1st argument is a function from b to promise of c.
//          -> (a -> Promise b) -- The 2nd argument is a function from a to promise of b.
//          -> (a -> Promise c) -- The final result is a function from a to promise of c.
//
//                +--------------- b -> Promise c
//                |  +------------ a -> Promise b
//                |  |     +------ a
//                |  |     |
const composeP = (f, g) => x => g(x).then(f);
//                               |__________|
//                                     |
//                                Promise c
Remember that the then method applies the callback function to the value of the promise. If we replace .then with [method] where method is the name of the bind function of a specific monad, then we can compose functions that produce values of that monad.
For example, .flatMap is the bind function of arrays. Hence, we can compose functions that return arrays as follows.
// composeA :: (b -> Array c) -- The 1st argument is a function from b to array of c.
//          -> (a -> Array b) -- The 2nd argument is a function from a to array of b.
//          -> (a -> Array c) -- The final result is a function from a to array of c.
const composeA = (f, g) => x => g(x).flatMap(f);

// f :: Bool -> Array String
const f = x => x ? ["yes"] : ["no"];

// g :: Int -> Array Bool
const g = n => [n <= 0, n >= 0];

// h :: Int -> Array String
const h = composeA(f, g);

console.log(h(-1)); // ["yes", "no"]
console.log(h(0));  // ["yes", "yes"]
console.log(h(1));  // ["no", "yes"]
That was a very contrived example but it got the point across.
Anyway, the generic compose function composes monads.
const compose = method => (f, g) => x => g(x)[method](f);
const composeP = compose("then");
const composeA = compose("flatMap");
Finally, the various monad compose functions only compose two functions at a time. We can use reduce to compose several of them at once. Hope that helps.
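For instance, here is a hedged sketch of that reduce-based, variadic version (reusing the promise-returning f and g from the question's update):
const composeM = method => (...fns) =>
    fns.reduce((f, g) => x => g(x)[method](f));

const composePromises = composeM("then");

const f = n => Promise.resolve(n + 1);
const g = n => Promise.resolve(n * 2);

composePromises(f, g)(20).then(console.log); // 41, i.e. f applied to the result of g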

Is explicit type passing not equivalent to type inference (in terms of expressiveness)?

I'm trying to translate traverse/sequenceA to JavaScript. Now the following behavior of the Haskell implementation gives me trouble:
traverse (\x -> x) Nothing    -- yields Nothing
sequenceA Nothing             -- yields Nothing
traverse (\x -> x) (Just [7]) -- yields [Just 7]
sequenceA (Just [7])          -- yields [Just 7]
As a Haskell newbie I wonder why the first expression works at all:
instance Traversable Maybe where
    traverse _ Nothing  = pure Nothing
    traverse f (Just x) = Just <$> f x
pure Nothing shouldn't work in this case, since there is no minimal applicative context in which the value could be put. It seems as if the compiler checks the type of this expression lazily, and since mapping the id function over Nothing is a no-op, it simply "overlooks" the type error, so to speak.
Here is my Javascript translation:
(Please note that since JavaScript's prototype system is insufficient for a couple of type classes and there is no strict type checking anyway, I work with Church encoding and pass type constraints to functions explicitly.)
// minimal type system realized with `Symbol`s
const $tag = Symbol.for("ftor/tag");
const $Option = Symbol.for("ftor/Option");

const Option = {};

// value constructors (Church encoded)
const Some = x => {
    const Some = r => {
        const Some = f => f(x);
        return Some[$tag] = "Some", Some[$Option] = true, Some;
    };
    return Some[$tag] = "Some", Some[$Option] = true, Some;
};

const None = r => {
    const None = f => r;
    return None[$tag] = "None", None[$Option] = true, None;
};

None[$tag] = "None";
None[$Option] = true;

// Traversable
// of/map are explicit arguments of traverse to imitate type inference
// tx[$Option] is just duck typing to enforce the Option type
// of == pure in Javascript
Option.traverse = (of, map) => ft => tx =>
    tx[$Option] && tx(of(None)) (x => map(Some) (ft(x)));

// (partial) Array instance of Applicative
const map = f => xs => xs.map(f);
const of = x => [x];

// helpers
const I = x => x;

// applying
Option.traverse(of, map) (I) (None)      // ~ [None]
Option.traverse(of, map) (I) (Some([7])) // ~ [Some(7)]
Obviously, this translation deviates from the Haskell implementation, because I get a [None] where I should get a None. Honestly, this behavior corresponds precisely to my intuition, but I guess intuition isn't that helpful in functional programming. Now my question is
did I merely make a rookie mistake?
or is explicit type passing not equivalent to type inference (in terms of expressiveness)?
GHCi does not overlook any type error. It defaults an unconstrained Applicative to IO, but you only get this behavior in a GHCi prompt (and not a .hs source file). You can check
> :t pure Nothing
pure Nothing :: Applicative f => f (Maybe b)
But still have
> pure Nothing
Nothing
Your JavaScript implementation is fine; you passed in an Applicative instance for arrays and got what is expected.
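To see the correspondence in the question's own code (a small sketch reusing its definitions): pinning Haskell's ambient Applicative to lists, as in pure Nothing :: [Maybe a], yields [Nothing], which is exactly the shape the JavaScript version computes with the array instance passed in.
// the array instance (of/map) is explicit here, so this call is the
// analogue of `pure Nothing :: [Maybe a]`, i.e. [Nothing]
console.log(Option.traverse(of, map)(I)(None)); // ~ [None]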

How can the lack of return type polymorphism in untyped languages be alleviated?

I've been working with Church encoding recently and when I look at a typical type
newtype ChurchMaybe a =
    ChurchMaybe { runChurchMaybe :: forall r . r -> (a -> r) -> r }
it looks as if functions with an existential type (runChurchMaybe) might behave similarly to functions that are polymorphic in their return type. I haven't fully understood the logic behind existential types though. So I am probably wrong.
Now I've often read that monads are less useful in untyped languages like JavaScript, also because of the lack of return type polymorphism. So I wondered whether I can ease this shortcoming:
// JS version of Haskell's read
// read :: String -> forall r . (String -> r) -> r
const read = x => cons => cons(x);

// option type
const Some = x => r => f => f(x);
const None = r => f => r;

console.log(
    read(prompt("any string")) (Array.of) // [a]
);

console.log(
    read(prompt("any string")) (Some) // Some(a)
);

console.log(
    read(prompt("number string")) (x => Number(x)) // Number
);
const append_ = x => y => append => append(x) (y);

const all = x => y => x && y;
const any = x => y => x || y;
const add = x => y => x + y;

const semigroup = append_(true) (false);

semigroup(all); // false
semigroup(any); // true
semigroup(add); // 1
Obviously, read isn't polymorphic in its return type, because it always returns a lambda. However, this lambda can serve as a proxy of the actual return value and the context is now able to determine what type this proxy actually produces by passing a suitable constructor.
And while read can produce any type, append_ is limited to types that have a semigroup constraint.
Of course there is now a little noise in the context of such functions, since they return a proxy instead of the actual result.
Is this essentially the mechanism behind the term "return type polymorphism"? This subject seems to be quite complex and so I guess I am missing something. Any help is appreciated.
In a comment I made an assertion without justifying myself: I said that return type polymorphism isn't a meaningful concept in an untyped language. That was rude of me and I apologise for being so brusque. What I meant was something rather more subtle than what I said, so please allow me to attempt to make amends for my poor communication by explaining in more detail what I was trying to get at. (I hope this answer doesn't come across as condescending; I don't know your base level of knowledge so I'm going to start at the beginning.)
When Haskellers say "return type polymorphism", they're referring to one particular effect of the type class dispatch mechanism. It comes about as the interplay between dictionary passing and bidirectional type inference. (I'm going to ignore polymorphic _|_s like undefined :: forall a. a or let x = x in x :: forall a. a. They don't really count.)
First, note that type class instances in Haskell are syntactic sugar for explicit dictionary passing. By the time GHC translates your program into its Core intermediate representation, all the type classes are gone. They are replaced with "dictionary" records and passed around as regular explicit arguments; => is represented at runtime as ->. So code like
class Eq a where
    (==) :: a -> a -> Bool

instance Eq Bool where
    True == True = True
    False == False = True
    _ == _ = False

headEq :: Eq a => a -> [a] -> Bool
headEq _ [] = False
headEq x (y:_) = x == y

main = print $ headEq True [False]
is translated into something like
-- The class declaration is translated into a regular record type. (D for Dictionary)
data EqD a = EqD { eq :: a -> a -> Bool }

-- The instance is translated into a top-level value of that type
eqDBool :: EqD Bool
eqDBool = EqD { eq = eq }
    where eq True True = True
          eq False False = True
          eq _ _ = False

-- the Eq constraint is translated into a regular dictionary argument
headEq :: EqD a -> a -> [a] -> Bool
headEq _ _ [] = False
headEq eqD x (y:_) = eq eqD x y

-- the elaborator sees you're calling headEq with a ~ Bool and passes in Bool's instance dictionary
main = print $ headEq eqDBool True [False]
It works because of instance coherence: every constraint has at most one "best" matching instance (unless you switch on IncoherentInstances, which is usually a bad idea). At the call site of an overloaded function, the elaborator looks at the constraint's type parameter, searches for an instance matching that constraint - either a top-level instance or a constraint that's in scope - and passes in the single corresponding dictionary as an argument. (For more on the notion of instance coherence I recommend this talk by Ed Kmett. It's quite advanced - it took me a couple of watches to grasp his point - but it's full of insight.)
Much of the time, as in headEq, the constraint's type parameters can be determined by looking only at the types of the overloaded function's arguments, but in the case of polymorphic return values (such as read :: Read a => String -> a or mempty :: Monoid m => m) the typing information has to come from the call site's context. This works via the usual mechanism of bidirectional type inference: GHC looks at the return value's usages, generates and solves unification constraints to figure out its type, and then uses that type to search for an instance. It makes for a kinda magical developer experience: you write mempty and the machine figures out from the context which mempty you meant!
(Incidentally, that's why show . read :: String -> String is forbidden. show and read are type class methods, whose concrete implementation isn't known without any clues about the type at which they're being used. The intermediate type in show . read - the one you're reading into and then showing from - is ambiguous, so GHC doesn't know how to choose an instance dictionary in order to generate runtime code.)
So "return type polymorphism" is actually a slightly misleading term. It's really a by-word for a particular kind of type-directed code generation; its Core representation is just as a regular function whose return type can be determined from the type of its (dictionary) argument. In a language without type classes (or a language without types at all, like JS), you have to simulate type classes with explicit dictionary parameters that are manually passed around by the programmer, as #4Castle has demonstrated in another answer. You can't do type-directed code generation without types to be directed by!
If I understand your question correctly, you would like to know how to implement functions which need access to the methods of a type class so that they can be polymorphic.
One way to think about type classes is as lookup tables between types and implementations. For example, Show would be a mapping of types to functions which return strings. This article explains this in more detail and also gives some alternative ways to implement type classes.
In a language that doesn't support types at all, you will have to implement the types as some kind of unique value which you can pass to polymorphic functions — such as a string, symbol, or object reference. I prefer an object reference because it means I can implement my types as functions and gain the ability to implement parameterized types.
Here's an example of how you could implement Read for Maybe and Int:
// MACROS
const TYPE = (constructor, ...args) => Object.freeze({ constructor, args });

const TYPECLASS = (name, defaultMethods = {}) => {
    const id = Symbol(name);
    const typeClass = ({ constructor, args }) => {
        return Object.assign({}, defaultMethods, constructor[id](...args));
    };
    typeClass._instance_ = (constructor, implementation) => {
        constructor[id] = implementation;
    };
    return Object.freeze(typeClass);
};

// TYPES
const Int = () => TYPE(Int);
const Maybe = a => TYPE(Maybe, a);

// DATA CONSTRUCTORS
const Just = x => r => f => f(x);
const Nothing = r => f => r;

// TYPE CLASSES and INSTANCES
const Read = TYPECLASS('Read');

Read._instance_(Maybe, A => ({
    read: str =>
        str.slice(0, 5) === "Just "
            ? Just (Read(A).read(str.slice(5)))
            : str === "Nothing"
                ? Nothing
                : undefined
}));

Read._instance_(Int, () => ({
    read: str => {
        const num = parseInt(str);
        return isNaN(num) ? undefined : num;
    }
}));

// FUNCTIONS
const error = msg => { throw new Error(msg); };
const maybe = x => f => m => m(x)(f);
const isJust = maybe (false) (_ => true);
const fromJust = maybe (undefined) (x => x);

const read = A => str => {
    const x = Read(A).read(str);
    return x === undefined ? error ("read: no parse") : x;
};

const readMaybe = A => str => {
    try { return Just (read (A) (str)); }
    catch (e) { return Nothing; }
};

// TESTS
console.log([
    fromJust (read (Maybe(Int())) ("Just 123")), // 123
    read (Int()) ("123"),                        // 123
    fromJust (readMaybe (Int()) ("abc"))         // undefined
]);

Recurrence in arrow functions (JS Harmony) [duplicate]

Arrow functions in ES6 do not have their own arguments binding, so arguments.callee will not work (and it would not work in strict mode anyway, even if a plain anonymous function expression were being used).
Arrow functions cannot be named, so the named function expression trick cannot be used.
So... how does one write a recursive arrow function? That is, an arrow function that recursively calls itself based on certain conditions and so on, of course?
Writing a recursive function without naming it is a problem that is as old as computer science itself (even older, actually, since λ-calculus predates computer science), since in λ-calculus all functions are anonymous, and yet you still need recursion.
The solution is to use a fixpoint combinator, usually the Y combinator. This looks something like this:
(y =>
    y(
        givenFact =>
            n =>
                n < 2 ? 1 : n * givenFact(n - 1)
    )(5)
)(le =>
    (f =>
        f(f)
    )(f =>
        le(x => (f(f))(x))
    )
);
This will compute the factorial of 5 recursively.
Note: the code is heavily based on this: The Y Combinator explained with JavaScript. All credit should go to the original author. I mostly just "harmonized" (is that what you call refactoring old code with new features from ES/Harmony?) it.
It looks like you can assign arrow functions to a variable and use it to call the function recursively.
var complex = (a, b) => {
    if (a > b) {
        return a;
    } else {
        return complex(a + 1, b); // return the recursive call, and make progress so it terminates
    }
};
Claus Reinke has given an answer to your question in a discussion on the esdiscuss.org website.
In ES6 you have to define what he calls a recursion combinator.
let rec = (f) => (...args) => f((...args) => rec(f)(...args), ...args);
If you want to call a recursive arrow function, you have to call the recursion combinator with the arrow function as parameter, the first parameter of the arrow function is a recursive function and the rest are the parameters. The name of the recursive function has no importance as it would not be used outside the recursive combinator. You can then call the anonymous arrow function. Here we compute the factorial of 6.
rec( (f,n) => (n>1 ? n*f(n-1) : n) )(6)
If you want to test it in Firefox you need to use the ES5 translation of the recursion combinator:
function rec(f) {
    return function() {
        return f.apply(this, [
            function() {
                return rec(f).apply(this, arguments);
            }
        ].concat(Array.prototype.slice.call(arguments)));
    };
}
TL;DR:
const rec = f => f((...xs) => rec(f)(...xs));
There are many answers here with variations on a proper Y -- but that's a bit redundant... The thing is that the usual way Y is explained is "what if there is no recursion", so Y itself cannot refer to itself. But since the goal here is a practical combinator, there's no reason to do that. There's this answer that defines rec using itself, but it's complicated and kind of ugly since it adds an argument instead of currying.
The simple recursively-defined Y is
const rec = f => f(rec(f));
but since JS isn't lazy, the above adds the necessary wrapping.
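For example, a usage sketch with the eager rec above:
const rec = f => f((...xs) => rec(f)(...xs));

const fact = rec(self => n => n < 2 ? 1 : n * self(n - 1));
console.log(fact(6)); // 720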
Use a variable to which you assign the function, e.g.
const fac = (n) => n>0 ? n*fac(n-1) : 1;
If you really need it anonymous, use the Y combinator, like this:
const Y = (f) => ((x)=>f((v)=>x(x)(v)))((x)=>f((v)=>x(x)(v)))
… Y((fac)=>(n)=> n>0 ? n*fac(n-1) : 1) …
(ugly, isn't it?)
A general purpose combinator for recursive function definitions of any number of arguments (without using the variable inside itself) would be:
const rec = (le => ((f => f(f))(f => (le((...x) => f(f)(...x))))));
This could be used for example to define factorial:
const factorial = rec( fact => (n => n < 2 ? 1 : n * fact(n - 1)) );
//factorial(5): 120
or string reverse:
const reverse = rec(
    rev => (
        (w, start) => typeof(start) === "string"
            ? (!w ? start : rev(w.substring(1), w[0] + start))
            : rev(w, '')
    )
);
//reverse("olleh"): "hello"
or in-order tree traversal:
const inorder = rec(go => ((node, visit) =>
    !!(node && [go(node.left, visit), visit(node), go(node.right, visit)])));
//inorder({left:{value:3},value:4,right:{value:5}}, function(n) {console.log(n.value)})
// calls console.log(3)
// calls console.log(4)
// calls console.log(5)
// returns true
I found the provided solutions really complicated, and honestly couldn't understand any of them, so I thought out a simpler solution myself (I'm sure it's already known, but here goes my thinking process):
So you're making a factorial function
x => x < 2 ? x : x * (???)
the (???) is where the function is supposed to call itself, but since you can't name it, the obvious solution is to pass it as an argument to itself
f => x => x < 2 ? x : x * f(x-1)
This won't work, though, because when we call f(x-1) we're calling this function itself, and we just defined its arguments as 1) f: the function itself, again, and 2) x: the value. Well, we do have the function itself, f, remember? So just pass it first:
f => x => x < 2 ? x : x * f(f)(x-1)
                          ^^^^ the new bit
And that's it. We just made a function that takes itself as the first argument, producing the Factorial function! Just literally pass it to itself:
(f => x => x < 2 ? x : x * f(f)(x-1))(f => x => x < 2 ? x : x * f(f)(x-1))(5)
>120
Instead of writing it twice, you can make another function that passes its argument to itself:
y => y(y)
and pass your factorial making function to it:
(y => y(y))(f => x => x < 2 ? x : x * f(f)(x-1))(5)
>120
Boom. Here's a little formula:
(y => y(y))(f => x => endCondition(x) ? default(x) : operation(x)(f(f)(nextStep(x))))
For a basic function that adds numbers from 0 to x, endCondition is when you need to stop recurring, so x => x == 0. default is the last value you give once endCondition is met, so x => x. operation is simply the operation you're doing on every recursion, like multiplying in Factorial or adding in Fibonacci: x1 => x2 => x1 + x2. and lastly nextStep is the next value to pass to the function, which is usually the current value minus one: x => x - 1. Apply:
(y => y(y))(f => x => x == 0 ? x : x + f(f)(x - 1))(5)
>15
var rec = () => {rec()};
rec();
Would that be an option?
Since arguments.callee is a bad option (it's deprecated and doesn't work in strict mode), and since doing something like var func = () => {} is also bad, a hack like the one described in this answer is probably your only option:
javascript: recursive anonymous function?
This is a version of this answer, https://stackoverflow.com/a/3903334/689223, with arrow functions.
You can use the U or the Y combinator. Y combinator being the simplest to use.
U combinator, with this you have to keep passing the function:
const U = f => f(f)
U(selfFn => arg => selfFn(selfFn)('to infinity and beyond'))
Y combinator, with this you don't have to keep passing the function:
const Y = gen => U(f => gen((...args) => f(f)(...args)))
Y(selfFn => arg => selfFn('to infinity and beyond'))
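As written, both example calls recurse forever ('to infinity and beyond'); for a terminating sketch using the same combinators:
const U = f => f(f);
const Y = gen => U(f => gen((...args) => f(f)(...args)));

const fact = Y(self => n => n < 2 ? 1 : n * self(n - 1));
console.log(fact(5)); // 120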
You can assign your function to a variable inside an iife
var countdown = (n, f) => (f = a => {
    console.log(a)
    if (a > 0) f(--a)
})(n) // the arrow is assigned to the spare parameter f, then invoked immediately with n

countdown(3)
//3
//2
//1
//0
I think the simplest solution is looking at the only thing that you don't have, which is a reference to the function itself. Because if you have that, then recursion is trivial.
Amazingly, that is possible through a higher-order function.
let generateTheNeededValue = (f, ...args) => f(f, ...args);
This function, as the name suggests, will generate the reference that we'll need. Now we only need to apply this to our function:
(generateTheNeededValue)(ourFunction, ourFunctionArgs)
But the problem with using this thing is that our function definition needs to expect a very special first argument:
let ourFunction = (me, ...ourArgs) => {...}
I like to call this special value 'me'. And now, every time we need recursion, we do it like this:
me(me, ...argsOnRecursion);
With all of that, we can now create a simple factorial function.
((f, ...args) => f(f, ...args))((me, x) => {
    if (x < 2) {
        return 1;
    } else {
        return x * me(me, x - 1);
    }
}, 4)
-> 24
I also like to look at the one-liner version of this:
((f, ...args) => f(f, ...args))((me, x) => (x < 2) ? 1 : (x * me(me, x - 1)), 4)
Here is an example of a recursive arrow function in ES6:
let filterGroups = [
    { name: 'Filter Group 1' }
];

const generateGroupName = (nextNumber) => {
    let gN = `Filter Group ${nextNumber}`;
    let exists = filterGroups.find((g) => g.name === gN);
    return exists === undefined ? gN : generateGroupName(++nextNumber); // Important
};

let fg = generateGroupName(filterGroups.length);
filterGroups.push({ name: fg });
