Function Composition With Monads...not working - javascript

I have some ugly data, that requires a lot of ugly null checks. My goal is to write a suite of functions to access/modify it in a point-free, declarative style, using the Maybe monad to keep null checks to a minimum. Ideally I would be able to use Ramda with the monads, but it's not working out so great.
This works:
const Maybe = require('maybe');
const R = require('ramda');
const curry = fn => (...args) => fn.bind(null, ...args);
const map = curry((fn, monad) => (monad.isNothing()) ? monad : Maybe(fn(monad.value())));
const pipe = (...fns) => acc => fns.reduce((m, f) => map(f)(m), acc);
const getOrElse = curry((opt, monad) => monad.isNothing() ? opt : monad.value());
const Either = (val, d) => val ? val : d;
const fullName = (person, alternative, index) => R.pipe(
  map(R.prop('names')),
  map(R.nth(Either(index, 0))),
  map(R.prop('value')),
  map(R.split('/')),
  map(R.join('')),
  getOrElse(Either(alternative, ''))
)(Maybe(person));
However, having to type out 'map()' a billion times doesn't seem very DRY, nor does it look very nice. I'd rather have a special pipe/compose function that wraps each function in a map().
Notice how I'm using R.pipe() instead of my custom pipe()? My custom implementation always throws an error, 'isNothing() is not a function,' upon executing the last function passed to it.
I'm not sure what went wrong here or if there is a better way of doing this, but any suggestions are appreciated!

first things first
that Maybe implementation (link) is pretty much junk - you might want to consider picking an implementation that doesn't require you to implement the Functor interface (like you did with map) – I might suggest Data.Maybe from folktale. Or since you're clearly not afraid of implementing things on your own, make your own Maybe ^_^
Your map implementation is not suitably generic to work on any functor that implements the functor interface. I.e., yours only works with Maybe, but map should be generic enough to work with any mappable, if there is such a word.
No worries tho, Ramda includes map in the box – just use that along with a Maybe that implements the .map method (eg Data.Maybe referenced above)
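For instance, a generic map only needs to delegate to the functor's own .map method (which is roughly how R.map dispatches for types that implement it); a sketch:

```javascript
// A sketch of a generic map: delegate to the functor's own .map,
// so one helper works for any type implementing the functor interface.
const map = f => functor => functor.map(f);

map(x => x + 1)([1, 2, 3]);            // [2, 3, 4] – Arrays are functors
map(s => s.toUpperCase())(['a', 'b']); // ['A', 'B'] – any .map-bearing type works
```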
Your curry implementation doesn't curry functions quite right. It only works for functions with an arity of 2 – curry should work for any function length.
// given, f
const f = (a,b,c) => a + b + c
// what yours does
curry (f) (1) (2) (3) // => Error: curry(...)(...)(...) is not a function
// because
curry (f) (1) (2) // => NaN
// what it should do
curry (f) (1) (2) (3) // => 6
There's really no reason for you to implement curry on your own if you're already using Ramda, as it already includes curry
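For reference, a variadic curry can be sketched in a few lines (though again, R.curry is the one to actually use):

```javascript
// A minimal variadic curry sketch: apply when we have enough arguments,
// otherwise bind what we have and wait for the rest.
const curry = fn => (...args) =>
  args.length >= fn.length
    ? fn(...args)
    : curry(fn.bind(null, ...args));

const f = (a, b, c) => a + b + c;
curry(f)(1)(2)(3);   // 6
curry(f)(1, 2)(3);   // 6
curry(f)(1, 2, 3);   // 6
```

This works because `fn.bind(null, x)` produces a function whose `.length` is reduced by the number of bound arguments.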
Your pipe implementation is mixing concerns of function composition and mapping functors (via its use of map). I would recommend reserving pipe specifically for function composition. (Incidentally, that mixing is likely the source of your error: because your pipe wraps every function in map, the partially applied getOrElse ends up being called inside map with the unwrapped value – i.e. fn(monad.value()) – and then tries to call .isNothing() on a plain value.)
Again, not sure why you're using Ramda then reinventing a lot of it. Ramda already includes pipe
Another thing I noticed
// you're doing
R.pipe (a,b,c) (Maybe(x))
// but that's the same as
R.pipe (Maybe,a,b,c) (x)
That Either you made is probably not the Either functor/monad you're thinking of. See Data.Either (from folktale) for a more complete implementation
Not a single monad was observed – your question is about function composition with monads, but you're only using functor interfaces in your code. Some of the confusion here might be coming from the fact that Maybe implements Functor and Monad, so it can behave as both (and like any other interface it implements)! The same is true for Either, in this case.
You might want to see Kleisli category for monadic function composition, though it's probably not relevant to you for this particular problem.
functional interfaces are governed by laws
Your question is born out of a lack of exposure to the functor laws – what these mean is that if your data type adheres to these laws, only then can it be said that your type is a functor. Under all other circumstances, you might be dealing with something like a functor, but not actually a functor.
functor laws
where map :: Functor f => (a -> b) -> f a -> f b, id is the identity function a -> a, and f :: b -> c and g :: a -> b
// identity
map(id) == id
// composition
compose(map(f), map(g)) == map(compose(f, g))
What this says to us is that we can either compose multiple calls to map with each function individually, or we can compose all the functions first, and then map once. – Note on the left-hand side of the composition law how we call .map twice to apply two functions, but on the right-hand side .map was only called once. The result of each expression is identical.
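You can observe the composition law directly with Array, which is a lawful functor in JavaScript:

```javascript
// The composition law, demonstrated with Array:
const compose2 = (f, g) => x => f(g(x));
const f = x => x + 4;
const g = x => x * 3;

[1, 2, 3].map(g).map(f);       // [7, 10, 13] – map twice
[1, 2, 3].map(compose2(f, g)); // [7, 10, 13] – compose first, map once
```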
monad laws
While we're at it, we can cover the monad laws too – again, if your data type obeys these laws, only then can it be called a monad.
where mreturn :: Monad m => a -> m a, mbind :: Monad m => m a -> (a -> m b) -> m b
// left identity
mbind(mreturn(x), f) == f(x)
// right identity
mbind(m, mreturn) == m
// associativity
mbind(mbind(m, f), g) == mbind(m, x => mbind(f(x), g))
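These laws can be checked concretely with Array, using flatMap as mbind and x => [x] as mreturn:

```javascript
// Checking the monad laws with the Array monad:
const mreturn = x => [x];
const f = x => [x, x + 1];
const g = x => [x * 2];

// left identity: mbind(mreturn(x), f) == f(x)
mreturn(3).flatMap(f);             // [3, 4] – same as f(3)
// right identity: mbind(m, mreturn) == m
[1, 2].flatMap(mreturn);           // [1, 2]
// associativity
[1].flatMap(f).flatMap(g);         // [2, 4]
[1].flatMap(x => f(x).flatMap(g)); // [2, 4]
```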
It's maybe even a little easier to see the laws using the Kleisli composition function, composek – now it's obvious that Monads truly obey the associativity law
monad laws defined using Kleisli composition
where composek :: Monad m => (a -> m b) -> (b -> m c) -> (a -> m c)
// kleisli left identity
composek(mreturn, f) == f
// kleisli right identity
composek(f, mreturn) == f
// kleisli associativity
composek(composek(f, g), h) == composek(f, composek(g, h))
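For Array-returning functions, composek is a one-liner sketch, and the identity laws fall out immediately:

```javascript
// Kleisli composition for Array-returning functions:
const composek = (f, g) => x => f(x).flatMap(g);
const mreturn = x => [x];
const f = x => [x, x + 1];

composek(mreturn, f)(5); // [5, 6] – same as f(5)  (left identity)
composek(f, mreturn)(5); // [5, 6] – same as f(5)  (right identity)
```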
finding a solution
So what does all of this mean for you? In short, you're doing more work than you have to – especially implementing a lot of things that already come with your chosen library, Ramda. Now, there's nothing wrong with rolling your own (in fact, I'm a huge proponent of it, as many of my other answers on the site will attest), but it can be a source of confusion if you get some of the implementations wrong.
Since you seem mostly hung up on the map aspect, I will help you see a simple transformation. This takes advantage of the Functor composition law illustrated above:
Note, this uses R.pipe which composes left-to-right instead of right-to-left like R.compose. While I prefer right-to-left composition, the choice to use pipe vs compose is up to you – it's just a notation difference; either way, the laws are fulfilled.
// this
R.pipe(map(f), map(g), map(h), map(i)) (Maybe(x))
// is the same as
Maybe(x).map(R.pipe(f,g,h,i))
I'd like to help more, but I'm not 100% sure what your function is actually trying to do.
starting with Maybe(person)
read person.names property
get the first index of person.names – is it an array or something? or the first letter of the name?
read the .value property?? Were you expecting a monad here? (look at .chain compared to .map in the Maybe and Either implementations I linked from folktale)
split the value on /
join the values with ''
if we have a value, return it, otherwise return some alternative
That's my best guess at what's going on, but I can't picture your data here or make sense of the computation you're trying to do. If you provide more concrete data examples and expected output, I might be able to help you develop a more concrete answer.
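That said, here's a sketch of my best guess, applying the composition law above with a single map (a hand-rolled Maybe stands in for folktale's Data.Maybe, and the person data shape is assumed):

```javascript
// A hand-rolled, null-propagating Maybe – a stand-in for Data.Maybe:
const Maybe = x => ({
  map: f => (x == null ? Maybe(x) : Maybe(f(x))),
  getOrElse: d => (x == null ? d : x),
});
const pipe = (...fns) => x => fns.reduce((acc, f) => f(acc), x);

// Compose all the steps first, then map once:
const fullName = (person, alternative = '', index = 0) =>
  Maybe(person)
    .map(pipe(
      p => p.names,
      ns => ns[index],
      n => n.value,
      s => s.split('/'),
      parts => parts.join(''),
    ))
    .getOrElse(alternative);

fullName({ names: [{ value: 'John/Doe' }] }); // 'JohnDoe'
fullName(null, 'anonymous');                  // 'anonymous'
```

Note that composing first means a missing .names inside a non-null person would still throw; mapping each step individually with this null-propagating Maybe would guard every access.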
remarks
I too was in your boat a couple of years ago; just getting into functional programming, I mean. I wondered how all the little pieces could fit together and actually produce a human-readable program.
The majority of benefits that functional programming provides can only be observed when functional techniques are applied to an entire system. At first, it will feel like you had to introduce tons of dependencies just to rewrite one function in a "functional way". But once you have those dependencies in play in more places in your program, you can start slashing complexity left and right. It's really cool to see, but it takes a while to get your program (and your head) there.
In hindsight, this might not be a great answer, but I hope this helped you in some capacity. It's a very interesting topic to me and I'm happy to assist in answering any other questions you have ^_^

Related

How would a functional language actually define/translate primitives to hardware?

Let's say I have a few primitives defined, here using javascript:
const TRUE = x => y => x;
const FALSE = x => y => y;
const ZERO = f => a => a;
const ONE = f => a => f(a);
const TWO = f => a => f(f(a));
If a language is purely functional, how would it translate these primitives to something physical? For example, usually I see something like a function that is not a pure function, such as:
const TWO = f => a => f(f(a));
const inc = x => x+1;
console.log(TWO(inc)(0));
// 2
But again this is sort of a 'trick' to print something, in this case a number. But how is the pure-functional stuff translated into something that can actually do something?
A function is pure if its result (the return value) only depends on the inputs you give to it.
A language is purely functional if all its functions are pure¹.
Therefore it's clear that "utilities" like getchar, which are fairly common functions in many ordinary, non-functional languages, pose a problem in functional languages, because they take no input², and still they give different outputs every time.
It looks like a functional language needs to give up on purity at least for doing I/O, doesn't it?
Not quite. If a language wants to be purely functional, it can't ever break function purity, not even for doing I/O. Still it needs to be useful. You do need to get things done with it, or, as you say, you need
something that can actually do something
If that's the case, how can a purely functional language, like Haskell, stay pure and yet provide you with utilities to interact with keyboard, terminal, and so on? Or, in other words, how can purely functional languages provide you with entities that have the same "read the user input" role of those impure functions of ordinary languages?
The trick is that those functions are secretly (and platonically) taking one more argument, in addition to the 0 or more arguments they'd have in other languages, and spitting out an additional return value: these two guys are the "real world" before and after the "action" that function performs. It's a bit like saying that the signatures of getchar and putchar are not
char getchar()
void putchar(char)
but
[char, newWorld] = getchar(oldWorld)
[newWorld] = putchar(char, oldWorld)
This way you can give to your program the "initial" world, and all those functions which are impure in ordinary languages will, in functional languages, pass the evolving world to each other, like Olympic torch.
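A toy sketch of this world-passing in JavaScript (the input/output fields are hypothetical stand-ins for the state of the real world):

```javascript
// Each "impure" function takes a world and returns a new one:
const getchar = world =>
  [world.input[0], { ...world, input: world.input.slice(1) }];
const putchar = (ch, world) =>
  [{ ...world, output: world.output + ch }];

const world0 = { input: 'hi', output: '' };
const [c, world1] = getchar(world0);   // c === 'h'
const [world2] = putchar(c, world1);   // world2.output === 'h'
// calling getchar with the *new* world yields the next char:
const [c2] = getchar(world1);          // c2 === 'i'
```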
Now you could ask: what's the advantage of doing so?
The point of a pure functional language like Haskell is that it abstracts this mechanism away from you, hiding the *World stuff from you, so that you can't do silly things like the following
[firstChar, newWorld] = getchar(oldWorld)
[secondChar, newerWorld] = getchar(oldWorld) // oops, I'm mistakenly passing
// oldWorld instead of newWorld
The language just doesn't give you tools to put your hands on the "real world". If it did, you'd have the same degrees of freedom you have in languages like C, and you'd end up with the type of bugs which are common in those languages.
A purely functional language, instead, hides that stuff from you. It basically constrains and limits your freedom inside a smaller set than non-functional languages allow, letting the runtime machinery that actually runs the program (and over which you have no control whatsoever) take care of the plumbing on your behalf.
(A good reference)
¹ Haskell is such a language (and there isn't any other mainstream purely functional language around, as far as I know); JavaScript is not, even if it provides several tools for doing some functional programming (think of arrow functions, which are essentially lambdas, and the Lodash library).
² No, what you enter from the keyboard is not an input to the getchar function; you call getchar without arguments, and assign its return value to some variable, e.g. char c = getchar() or let c = getchar(), or whatever the language syntax is.

What's so special about Monads in Kleisli category?

The related Question is
What is so special about Monads?
bind can be composed of fmap and join, so do we have to use monadic functions a -> m b?
In the first question:
What is so special about Monads?
A monad is a mathematical structure which is heavily used in (pure) functional programming, basically Haskell. However, there are many other mathematical structures available, like for example applicative functors, strong monads, or monoids. Some have more specific, some are more generic. Yet, monads are much more popular. Why is that?
The comment to reply the question:
As far as I recall, monads were popularised by Wadler, and at the time the idea of doing IO without tedious CPS and parsing without explicit state passing were huge selling points; it was a hugely exciting time. A.F.A.I.R., Haskell didn't do constructor classes, but Gofer (father of Hugs) did. Wadler proposed overloading list comprehension for monads, so the do notation came later. Once IO was monadic, monads became a big deal for beginners, cementing them as a major thing to grok. Applicatives are much nicer when you can, and Arrows more general, but they came later, and IO sells monads hard. – AndrewC May 9 '13 at 1:34
The answer by @Conal is:
I suspect that the disproportionately large attention given to this one particular type class (Monad) over the many others is mainly a historical fluke. People often associate IO with Monad, although the two are independently useful ideas (as are list reversal and bananas). Because IO is magical (having an implementation but no denotation) and Monad is often associated with IO, it's easy to fall into magical thinking about Monad.
First of all, I agree with them, and I think the usefulness of Monads mostly arises from Functors, in that we can embed many functions within the structure; Monads are a small extension that makes function composition more robust via join : M(M(X)) -> M(X), avoiding nested types.
In the 2nd Question:
do we have to use monadic functions a -> m b?
so many tutorials around the web still insist on using monadic functions, since that is the Kleisli triple and the monad laws.
and many answers like
I like to think of such an m as meaning "plan-to-get", where "plans" involve some sort of additional interaction beyond pure computation.
or
In situations where Monad isn't necessary, it is often simpler to use Applicative, Functor, or just basic pure functions. In these cases, these things should be (and generally are) used in place of a Monad. For example:
ws <- getLine >>= return . words -- Monad
ws <- words <$> getLine -- Functor (much nicer)
To be clear: If it's possible without a monad, and it's simpler and more readable without a monad, then you should do it without a monad! If a monad makes the code more complex or confusing than it needs to be, don't use a monad! Haskell has monads for the sole purpose of making certain complex computations simpler, easier to read, and easier to reason about. If that's not happening, you shouldn't be using a monad.
Reading their answers, I suppose their special feeling about Monad arises from the historical accident that the Haskell community happened to choose Monads in the Kleisli category to solve its problems (IO etc.).
So, again, I think the usefulness of Monads mostly arises from Functors, in that we can embed many functions within the structure; Monads are a small extension that makes function composition more robust via join : M(M(X)) -> M(X), avoiding nested types.
In fact, in JavaScript I implemented it as below.
Functor
console.log("Functor");
{
  const unit = (val) => ({
    // contextValue: () => val,
    fmap: (f) => unit((() => {
      // you can do pretty much anything here
      const newVal = f(val);
      // console.log(newVal); // IO in the functional context
      return newVal;
    })()),
  });

  const a = unit(3)
    .fmap(x => x * 2) // 6
    .fmap(x => x + 1); // 7
}
The point is we can implement whatever we like in the Functor structure, and in this case, I simply made it IO/console.log the value.
Another point is, to do this, Monads are absolutely unnecessary.
Monad
Now, based on the Functor implementation above, I add an extra join : M(M(X)) => M(X) feature to avoid the nested structure, which should be helpful for the robustness of complex functional composition.
The functionality is exactly identical to the Functor above, and please note the usage is also identical to the Functor's fmap. This does not require a "monadic function" to bind (Kleisli composition of monads).
console.log("Monad");
{
  const unit = (val) => ({
    contextValue: () => val,
    bind: (f) => {
      // fmap value operation
      const result = (() => {
        // you can do pretty much anything here
        const newVal = f(val);
        console.log(newVal);
        return newVal;
      })();
      // join: MMX => MX
      return (result.contextValue !== undefined) // result is MX
        ? result        // return MX
        : unit(result); // result is X, so re-wrap and return MX
    }
  });

  // the usage is identical to the Functor fmap.
  const a = unit(3)
    .bind(x => x * 2) // 6
    .bind(x => x + 1); // 7
}
Monad Laws
Just in case, this implementation of the Monad satisfies the monad laws, and the Functor above does not.
console.log("Monad laws");
{
  const unit = (val) => ({
    contextValue: () => val,
    bind: (f) => {
      // fmap value operation
      const result = (() => {
        // you can do pretty much anything here
        const newVal = f(val);
        // console.log(newVal);
        return newVal;
      })();
      // join: MMX => MX
      return (result.contextValue !== undefined)
        ? result
        : unit(result);
    }
  });

  const M = unit;
  const a = 1;
  const f = a => (a * 2);
  const g = a => (a + 1);
  const log = m => console.log(m.contextValue()) && m;

  log(
    M(f(a)) // == m, and f is not monadic
  ); // 2

  console.log("Left Identity");
  log(
    M(a).bind(f)
  ); // 2

  console.log("Right Identity");
  log(
    M(f(a)) // m
      .bind(M) // m.bind(M)
  ); // 2

  console.log("Associativity");
  log(
    M(5).bind(f).bind(g)
  ); // 11
  log(
    M(5).bind(x => M(x).bind(f).bind(g))
  ); // 11
}
So, here is my question.
I may be wrong.
Is there any counter example that Functors cannot do what Monads can do, except for the robustness of functional composition gained by flattening the nested structure?
What's so special about Monads in the Kleisli category? It seems fairly possible to implement Monads with a little expansion to avoid the nested structure of Functor, and without the monadic functions a -> m b that are the entities in the Kleisli category.
Thanks.
edit(2018-11-01)
Reading the answers, I agree it's not appropriate to perform console.log inside the IdentityFunctor, which should satisfy the Functor laws, so I commented it out, as in the Monad code.
So, eliminating that problem, my question still holds:
Is there any counter example that Functors cannot do what Monads can do, except for the robustness of functional composition gained by flattening the nested structure?
What's so special about Monads in the Kleisli category? It seems fairly possible to implement Monads with a little expansion to avoid the nested structure of Functor, and without the monadic functions a -> m b that are the entities in the Kleisli category.
An answer from @DarthFennec is:
"Avoiding the nested type" is not in fact the purpose of join, it's just a neat side-effect. The way you put it makes it sound like join just strips the outer type, but the monad's value is unchanged.
I believe "Avoiding the nested type" is not just a neat side-effect, but a definition of "join" of Monad in category theory,
the multiplication natural transformation μ:T∘T⇒T of the monad provides for each object X a morphism μX:T(T(X))→T(X)
monad (in computer science): Relation to monads in category theory
and that's exactly what my code does.
On the other hand,
This is not the case. join is the heart of a monad, and it's what allows the monad to do things.
I know many people implement monads in Haskell in this manner, but the fact is, there is a Maybe functor in Haskell that does not have join, and there is the Free monad, where join is embedded into the defined structure from the start. They are objects with which users define Functors to do things.
Therefore,
You can think of a functor as basically a container. There's an arbitrary inner type, and around it an outer structure that allows some variance, some extra values to "decorate" your inner value. fmap allows you to work on the things inside the container, the way you would work on them normally. This is basically the limit of what you can do with a functor.
A monad is a functor with a special power: where fmap allows you to work on an inner value, bind allows you to combine outer values in a consistent way. This is much more powerful than a simple functor.
These observations do not fit the fact of the existence of the Maybe functor and the Free monad.
Is there any counter example that Functors cannot do what Monads can do, except for the robustness of functional composition gained by flattening the nested structure?
I think this is the important point:
Monads is a little expansion for robustness of function composition by join : M(M(X)) -> M(X) to avoid the nested type.
"Avoiding the nested type" is not in fact the purpose of join, it's just a neat side-effect. The way you put it makes it sound like join just strips the outer type, but the monad's value is unchanged. This is not the case. join is the heart of a monad, and it's what allows the monad to do things.
You can think of a functor as basically a container. There's an arbitrary inner type, and around it an outer structure that allows some variance, some extra values to "decorate" your inner value. fmap allows you to work on the things inside the container, the way you would work on them normally. This is basically the limit of what you can do with a functor.
A monad is a functor with a special power: where fmap allows you to work on an inner value, bind allows you to combine outer values in a consistent way. This is much more powerful than a simple functor.
The point is we can implement whatever we like in the Functor structure, and in this case, I simply made it IO/console.log the value.
This is incorrect, actually. The only reason you were able to do IO here is because you're using Javascript, and you can do IO anywhere. In a purely functional language like Haskell, IO cannot be done in a functor like this.
This is a gross generalization, but for the most part it's useful to describe IO as a glorified State monad. Each IO action takes an extra hidden parameter called RealWorld (which represents the state of the real world), maybe reads from it or modifies it, and then sends it on to the next IO action. This RealWorld parameter is threaded through the chain. If something is written to the screen, that's RealWorld being copied, modified, and passed along. But how does the "passing along" work? The answer is join.
Say we want to read a line from the user, and print it back to the screen:
getLine :: IO String
putStrLn :: String -> IO ()
main :: IO ()
main = -- ?
Let's assume IO is a functor. How do we implement this?
main :: IO (IO ())
main = fmap putStrLn getLine
Here we've lifted putStrLn to IO, to get fmap putStrLn :: IO String -> IO (IO ()). If you recall, putStrLn takes a String and a hidden RealWorld and returns a modified RealWorld, where the String parameter is printed to the screen. We've lifted this function with fmap, so that it now takes an IO (which is an action that takes a hidden RealWorld, and returns a modified RealWorld and a String), and returns the same io action, just wrapped around a different value (a completely separate action that also takes a separate hidden RealWorld and returns a RealWorld). Even after applying getLine to this function, nothing actually happens or gets printed.
We now have a main :: IO (IO ()). This is an action that takes a hidden RealWorld, and returns a modified RealWorld and a separate action. This second action takes a different RealWorld and returns another modified RealWorld. This on its own is pointless, it doesn't get you anything and it doesn't print anything to the screen. What needs to happen is, the two IO actions need to be connected together, so that one action's returned RealWorld gets fed in as the other action's RealWorld parameter. This way it becomes one continuous chain of RealWorlds that mutate as time goes on. This "connection" or "chaining" happens when the two IO actions are merged with join.
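Here's a toy model of that chaining in JavaScript, treating an IO a as a function world => [a, world] (purely illustrative; real Haskell IO is opaque):

```javascript
// fmap transforms the value an action returns, without chaining worlds:
const fmap = f => io => world => {
  const [a, w1] = io(world);
  return [f(a), w1];
};
// join runs the outer action, then feeds its returned world to the inner
// action – this is the "connection" between the two RealWorlds:
const join = ioio => world => {
  const [inner, w1] = ioio(world);
  return inner(w1);
};

const getLine = world => [world.input, world];
const putStrLn = s => world => [null, { ...world, output: world.output + s + '\n' }];

const main = join(fmap(putStrLn)(getLine));
main({ input: 'hello', output: '' }); // [null, { input: 'hello', output: 'hello\n' }]
```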
Of course, join does different things depending on which monad you're working with, but for IO and State-type monads, this is more or less what's happening under the hood. There are plenty of situations where you're doing something very simple that doesn't require join, and in those cases it's easy to treat the monad as a functor or applicative functor. But usually that isn't enough, and in those cases we use monads.
EDIT: Responses to comments and edited question:
I don't see any definition of Monads in category theory that explains this. All I read about join is still MMX => MX, and that is exactly what my code does.
Can you also tell exactly what a function String -> String does? Might it not return the input verbatim, reverse it, filter it, append to it, ignore it and return a completely different value, or anything else that results in a String? A type does not determine what a function does, it restricts what a function can do. Since join in general is only defined by its type, any particular monad can do anything allowed by that type. This might just be stripping the outer layer, or it might be some extremely complex method of combining the two layers into one. As long as you start with two layers and end up with one layer, it doesn't matter. The type allows for a number of possibilities, which is part of what makes monads so powerful to begin with.
There is MaybeFunctor in Haskell. There's no "join" or "bind" there, and I wonder from where the power come. What is the difference between MaybeFunctor and MaybeMonad?
Every monad is also a functor: a monad is nothing more than a functor that also has a join function. If you use join or bind with a Maybe, you're using it as a monad, and it has the full power of a monad. If you do not use join or bind, but only use fmap and pure, you're using it as a functor, and it becomes limited to doing the things a functor can do. If there's no join or bind, there is no extra monad power.
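The difference is easy to see with a hand-rolled Maybe sketch: with fmap alone, a Maybe-returning function leaves you with a nested Just(Just(...)), and only join gets you back to one layer:

```javascript
// A minimal Maybe (illustrative, not Haskell's):
const Just = x => ({ tag: 'Just', x });
const Nothing = { tag: 'Nothing' };
const fmap = f => m => (m.tag === 'Just' ? Just(f(m.x)) : Nothing);
const join = mm => (mm.tag === 'Just' ? mm.x : Nothing);

const safeHead = xs => (xs.length ? Just(xs[0]) : Nothing);

const nested = fmap(safeHead)(Just([1, 2])); // Just(Just(1)) – stuck, as a functor
const flattened = join(nested);              // Just(1) – the monad's extra power
```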
I believe "Avoiding the nested type" is not just a neat side-effect, but a definition of "join" of Monad in category theory
The definition of join is a transformation from a nested monad to a non-nested monad. Again, this could imply anything. Saying the purpose of join is "to avoid the nested type" is like saying the purpose of + is to avoid pairs of numbers. Most operations combine things in some way, but very few of those operations exist simply for the sake of having a combination of things. The important thing is how the combining happens.
there is Maybe functor in Haskell, that does not has join, or there is Free monad that join is embedded from the first place into the defined structure. They are objects that users define Functors to do things.
I've already discussed Maybe, and how when you use it as a functor only, it can't do the things it can do if you use it as a monad. Free is weird, in that it's one of the few monads that doesn't actually do anything.
Free can be used to turn any functor into a monad, which allows you to use do notation and other conveniences. However, the conceit of Free is that join does not combine your actions the way other monads do, instead it keeps them separate, inserting them into a list-like structure; the idea being that this structure is later processed and the actions are combined by separate code. An equivalent approach would be to move that processing code into join itself, but that would turn the functor into a monad and there would be no point in using Free. So the only reason Free works is because it delegates the actual "doing things" part of the monad elsewhere; its join opts to defer action to code running outside the monad. This is like a + operator that, instead of adding the numbers, returns an abstract syntax tree; one could then process that tree later in whatever way is needed.
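A minimal sketch of that deferral idea (the names Pure, Bind, and interpret are my own, not Haskell's Free API):

```javascript
// Bind records steps instead of combining them; a separate interpreter
// does the actual "doing things" part later:
const Pure = x => ({ tag: 'Pure', x });
const Bind = (m, f) => ({ tag: 'Bind', m, f });

const interpret = m =>
  m.tag === 'Pure' ? m.x : interpret(m.f(interpret(m.m)));

const prog = Bind(Pure(2), x => Bind(Pure(x * 3), y => Pure(y + 1)));
interpret(prog); // 7 – nothing "happened" until interpret ran
```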
These observation does not fit the fact of the existence of Maybe functor and Free monad.
You are incorrect. As explained, Maybe and Free fit perfectly into my previous observations:
The Maybe functor simply does not have the same expressiveness as the Maybe monad.
The Free monad transforms functors into monads in the only way it possibly can: by not implementing a monadic behavior, and instead simply deferring it to some assumed processing code.
The point is we can implement whatever we like in the Functor structure, and in this case, I simply made it IO/console.log the value.
Another point is, to do this Monads is absolutely unnecessary.
The problem is that once you do that your functor is no longer a functor. Functors should preserve identities and composition. For Haskell Functors, those requirements amount to:
fmap id = id
fmap (g . f) = fmap g . fmap f
Those laws are a guarantee that all fmap does is using the supplied function to modify values -- it doesn't do funny stuff behind your back. In the case of your code, fmap(x => x) should do nothing; instead, it prints to the console.
Note that all of the above applies to the IO functor: if a is an IO action, executing fmap f a will have no I/O effects other than those a already had. One stab at writing something similar in spirit to your code might be...
applyAndPrint :: Show b => (a -> b) -> a -> IO b
applyAndPrint f x = let y = f x in fmap (const y) (print y)
pseudoFmap :: Show b => (a -> b) -> IO a -> IO b
pseudoFmap f a = a >>= applyAndPrint f
... but that makes use of Monad already, as we have an effect (printing a result) which depends on the result of a previous computation.
It goes without saying that if you are so inclined (and your type system allows it) you can write code that disregards all of those distinctions. There is, however, a trade-off: the decreased power of Functor with respect to Monad comes with extra guarantees on what functions using the interface can and cannot do -- that is what makes the distinctions useful in the first place.
Your “functor” is very manifestly not a functor, violating both the identity and composition law:
console.log("Functor");
{
  const unit = (val) => ({
    // contextValue: () => val,
    fmap: (f) => unit((() => {
      // you can do pretty much anything here
      const newVal = f(val);
      console.log(newVal); // IO in the functional context
      return newVal;
    })()),
  });

  console.log("fmap(id) ...");
  const a0 = unit(3)
    .fmap(x => x); // prints something

  console.log(" ≡ id?");
  const a1 = (x => x)(unit(3)); // prints nothing

  console.log("fmap(f) ∘ fmap(g) ...");
  const b0 = unit(3)
    .fmap(x => 3*x)
    .fmap(x => 4+x); // prints twice

  console.log(" ≡ fmap(f∘g)?");
  const b1 = unit(3)
    .fmap(x => 4+(3*x)); // prints once
}
overly long comment:
I would suggest forgetting about Kleisli categories for now; I don't believe they have anything to do with your confusion.
Also while I still don't fully understand your question and assertions, some context that might be useful: category theory is extremely general and abstract; the concepts like Monad and Functor as they exist in haskell are (necessarily) somewhat less general and less abstract (e.g. the notion of the category of "Hask").
As a general rule the more concrete (the less abstract) a thing becomes, the more power you have: if I tell you you have a vehicle then you know you have a thing that can take you from one place to another, but you don't know how fast, you don't know whether it can go on land, etc. If I tell you you have a speed boat then there opens up a whole larger world of things you can do and reason about (you can use it to catch fish, you know that it won't get you from NYC to Denver).
When you say:
What's so special about Monads in Kleisli category?
...I believe you're making the mistake of suspecting that the conception of Monad and Functor in Haskell is in some way more restrictive relative to category theory, but, as I try to explain by analogy above, the opposite is true.
Your code reflects the same sort of flawed thinking: you model a speedboat (which is a vehicle) and claim it shows that all vehicles are fast and travel on water.

Monads not with "flatMap" but "flatUnit"? [closed]

Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 4 years ago.
Monads in category theory are defined by triples ⟨T, unit, flat⟩.
class Monad t where
  map  :: (a -> b) -> (t a -> t b) -- functorial action
  unit :: a -> t a
  flat :: t (t a) -> t a

class KleisliTriple t where
  unit    :: a -> t a
  flatMap :: t a -> (a -> t b) -> t b
KleisliTriple flattens the structure with the operator flatMap (bind in Haskell), which is the composition of map and flat.
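That relationship can be sketched concretely in JavaScript, using plain arrays as the monad (`Array.prototype.map` as the functorial action, `Array.prototype.flat` as flat):

```javascript
// Sketch: deriving flatMap from map and flat, with arrays as the monad.
const unit = (x) => [x];              // a -> t a
const flat = (xss) => xss.flat();     // t (t a) -> t a
const map = (f) => (xs) => xs.map(f); // (a -> b) -> t a -> t b

// flatMap is just flat composed with map:
const flatMap = (m, f) => flat(map(f)(m));

const result = flatMap([1, 2, 3], (x) => [x, x * 10]);
console.log(result); // [1, 10, 2, 20, 3, 30]
```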
However, I have always thought it much simpler, both to understand and to implement, to express the Monad concept in functional programming by flattening the structure with an operator such as flatUnit, the composition of unit and flat.
In this case, flatUnit(flatUnit(x)) = flatUnit(x). I actually implemented it in this manner in JavaScript, and with flatUnit and map (just a legacy functor operator), all the benefits of Monad seem to be obtained.
So, here's my question.
I have kept looking for documents about this kind of flatUnit formalization in functional programming, but have never found any. I understand the historical context: Eugenio Moggi first discovered the relevance of monads to functional programming, and his paper happened to use the Kleisli-triple presentation. But since monads are not limited to the Kleisli category, and considering the simplicity of flatUnit, this seems very strange to me.
Why is that, and what am I missing?
EDIT: code removed.
In this answer, I won't dwell on flatUnit. As others have pointed out, join . return = id for any monad (it is one of the monad laws), and so there isn't much to talk about it in and of itself. Instead, I will discuss some of the surrounding themes raised in the discussion here.
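As a concrete illustration of join . return = id, here is a small sketch using the array monad in JavaScript:

```javascript
// join . return = id, demonstrated with arrays:
const unit = (x) => [x];          // return: wrap a value in a singleton array
const join = (xss) => xss.flat(); // join: flatten one layer of nesting

const m = [1, 2, 3];
console.log(join(unit(m))); // [1, 2, 3] -- wrapping then flattening is the identity
```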
Quoting a comment:
in other words, functor with a flat structure, it's a monad.
This, I believe, is the heart of the question. A monad need not be a functor with a flat structure, but a functor whose values can be flattened (with join) in a way that follows certain laws ("a monoid in the category of endofunctors", as the saying goes). It isn't required for the flattening to be a lossless operation (i.e. for join to be an isomorphism).
Monads whose join is an isomorphism are called, in category theory parlance, idempotent monads¹. For a Haskell Monad to be idempotent, though, the monadic values must have no extra structure. That means most monads of immediate interest to a programmer won't be idempotent (in fact, I'm having trouble thinking of idempotent Haskell Monads that aren't Identity or identity-like). One example already raised in the comments was that of lists:
join [[1,2],[3,4,5]] = [1,2,3,4,5] -- Grouping information discarded
The function/reader monad gives what I'd say is an even more dramatic illustration:
join (+) = \x -> x + x
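Both of these joins translate directly into JavaScript; the reader-monad version below is a transcription of the Haskell definition, and `joinArray`/`joinReader` are our own names for the sketch:

```javascript
// join for the array monad: grouping information is discarded.
const joinArray = (xss) => xss.flat();
console.log(joinArray([[1, 2], [3, 4, 5]])); // [1, 2, 3, 4, 5]

// join for the function (reader) monad: the shared environment is passed twice.
const joinReader = (m) => (x) => m(x)(x);
const add = (a) => (b) => a + b; // curried (+)
const double = joinReader(add);  // equivalent to x => x + x
console.log(double(7)); // 14
```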
This recent question gives an interesting illustration involving Maybe. The OP there had a function with signature...
appFunc :: Integer -> Integer -> Bool -> Maybe (Integer,Integer)
... and used it like this...
appFunc <$> u <*> v <*> w
... thus obtaining a Maybe (Maybe (Integer, Integer)) result. The two layers of Maybe correspond to two different ways of failing: if u, v or w are Nothing, we get Nothing; if the three of them are Just-values but appFunc results in Nothing, we get Just Nothing; finally, if everything succeeds we get a Just-value within a Just. Now, it might be the case that we, like the author of that question, didn't care about which layer of Maybe led to the failure; in that case, we would discard that information, either by using join on the result or by rewriting it as u >>= \x -> v >>= \y -> w >>= \b -> appFunc x y b. In any case, the information is there for us to use or discard.
Note 1: In Combining Monads by King and Wadler (one of Wadler's papers about monads), the authors introduce a different, and largely unrelated, meaning for "idempotent monad". In their sense, an idempotent monad is one for which (in applicative notation) f <$> u <*> u = (\x -> f x x) <$> u -- one example would be Maybe.

Is Underscore.js functional programming a fake?

According to my understanding of functional programming, you should be able to chain multiple functions and then execute the whole chain by going through the input data once.
In other words, when I do the following (pseudo-code):
list = [1, 2, 3];
sum_squares = list
  .map(function(item) { return item * item; })
  .reduce(function(total, item) { return total + item; }, 0);
I expect that the list will be traversed once, when each value will be squared and then everything will be added up (hence, the map operation would be called as needed by the reduce operation).
However, when I look at the source code of Underscore.js, I see that all the "functional programming" functions actually produce intermediate collections like, for example, so:
// Return the results of applying the iteratee to each element.
_.map = _.collect = function(obj, iteratee, context) {
  iteratee = cb(iteratee, context);
  var keys = !isArrayLike(obj) && _.keys(obj),
      length = (keys || obj).length,
      results = Array(length);
  for (var index = 0; index < length; index++) {
    var currentKey = keys ? keys[index] : index;
    results[index] = iteratee(obj[currentKey], currentKey, obj);
  }
  return results;
};
So the question is, as stated in the title, are we fooling ourselves that we do functional programming when we use Underscore.js?
What we actually do is make the program look like functional programming without it actually being functional programming. Imagine I build a long chain of K filter() calls on a list of length N; in Underscore.js my computational complexity will then be O(K*N) instead of the O(N) that would be expected in functional programming.
P.S. I've heard a lot about functional programming in JavaScript, and I was expecting to see some functions, generators, binding... Am I missing something?
Is Underscore.js functional programming a fake?
No, Underscore does have lots of useful functional helper functions. But yes, they're doing it wrong. You may want to have a look at Ramda instead.
I expect that the list will be traversed once
Yes, list will only be traversed once. It won't be mutated, and it won't be held in memory (if you didn't keep a variable reference to it). What reduce traverses is a different list, the one produced by map.
All the functions actually produce intermediate collections
Yes, that's the simplest way to implement this in a language like JavaScript. Many people rely on map executing all its callbacks before reduce is called, as they use side effects. JS does not enforce pure functions, and library authors don't want to confuse people.
Notice that even in pure languages like Haskell an intermediate structure is built[1], though it would be consumed lazily so that it is never allocated as a whole.
There are libraries that implement this kind of optimisation in strict languages using the concept of transducers, as known from Clojure. Examples in JS are transduce, transducers-js, transducers.js or underarm. Underscore and Ramda have been looking into them[2] too.
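To make the transducer idea concrete, here is a minimal sketch (our own names, not the API of any of the libraries above): the mapping step is expressed as a transformation of the reducing function, so a single reduce traverses the list once with no intermediate array:

```javascript
// A mapping transducer: transforms a reducer instead of building an array.
const mapping = (f) => (reducer) => (acc, x) => reducer(acc, f(x));

const sum = (acc, x) => acc + x;
const square = (x) => x * x;

// One traversal: squaring happens inside the single reduce.
const sumSquares = [1, 2, 3].reduce(mapping(square)(sum), 0);
console.log(sumSquares); // 14
```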
I was expecting to see some […] generators
Yes, generators/iterators that can be consumed lazily are another choice. You'll want to have a look at Lazy.js, highland, or immutable-js.
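A minimal generator-based sketch (not taken from any of those libraries): values flow through the pipeline one at a time, and no intermediate array is allocated:

```javascript
// Lazy map over any iterable: yields transformed values one at a time.
function* lazyMap(f, iterable) {
  for (const x of iterable) yield f(x);
}

// Consume the pipeline in a single pass.
let total = 0;
for (const sq of lazyMap((x) => x * x, [1, 2, 3])) total += sq;
console.log(total); // 14
```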
[1]: Well, not really - it's too easy an optimisation
[2]: https://github.com/jashkenas/underscore/issues/1896, https://github.com/ramda/ramda/pull/865
Functional programming has nothing to do with traversing a sequence once; even Haskell, which is as pure as you're going to get, will traverse the length of a strict list twice if you ask it to filter pred (map f x).
Functional programming is a simpler model of computation where the only things that are allowed to happen do not include side effects. For example, in Haskell basically only the following things are allowed to happen:
You can apply a value f to another value x, producing a new value f x with no side-effects. The first value f is called a "function". It must be the case that any time you apply the same f to the same x you get the same answer for f x.
You can give a name to a value, which might be a function or a simple value or whatever.
You can define a new structure for data with a new type signature, and/or structure some data with those "constructors."
You can define a new type-class or show how an existing data structure instantiates a type-class.
You can "pattern match" a data structure, which is a combination of a case dispatch with naming the parts of the data structure for the rest of your project.
Notice how "print something to the console" is not doable in Haskell, nor is "alter an existing data structure." To print something to the console, you construct a value which represents the action of printing something to the console, and then give it a special name, main. (When you're compiling Haskell, you compute the action named main and then write it to disk as an executable program; when you run the program, that action is actually completed.) If there is already a main program, you figure out where you want to include the new action in the existing actions of that program, then use a function to sequence the console logging with the existing actions. The Haskell program never does anything; it just represents doing something.
That is the essence of functional programming. It is weaker than normal programming languages where the language itself does stuff, like JavaScript's console.log() function which immediately performs its side effect whenever the JS interpreter runs through it. In particular, there are some things which are (or seem to be) O(1) or O(log(log(n))) in normal programs where our best functional equivalent is O(log(n)).

Is there a reason why `this` is nullified in Crockford's `curry` method?

In Douglas Crockford's book "Javascript: The Good Parts" he provides code for a curry method which takes a function and arguments and returns that function with the arguments already added (apparently, this is not really what "curry" means, but is an example of "partial application"). Here's the code, which I have modified so that it works without some other custom code he made:
Function.prototype.curry = function () {
  var slice = Array.prototype.slice,
      args = slice.apply(arguments),
      that = this;
  return function () {
    // context set to null, which will cause `this` to refer to the window
    return that.apply(null, args.concat(slice.apply(arguments)));
  };
};
So if you have an add function:
var add = function (num1, num2) {
  return num1 + num2;
};
add(2, 4); // returns 6
You can make a new function that already has one argument:
var add1 = add.curry(1);
add1(2); // returns 3
That works fine. But what I want to know is why does he set this to null? Wouldn't the expected behavior be that the curried method is the same as the original, including the same this?
My version of curry would look like this:
Function.prototype.myCurry = function () {
  var slice = [].slice,
      args = slice.apply(arguments),
      that = this;
  return function () {
    // context set to whatever `this` is when the curried function is called
    return that.apply(this, args.concat(slice.apply(arguments)));
  };
};
Example
(Here is a jsfiddle of the example)
var calculator = {
  history: [],
  multiply: function (num1, num2) {
    this.history = this.history.concat([num1 + " * " + num2]);
    return num1 * num2;
  },
  back: function () {
    return this.history.pop();
  }
};
var myCalc = Object.create(calculator);
myCalc.multiply(2, 3); // returns 6
myCalc.back(); // returns "2 * 3"
If I try to do it Douglas Crockford's way:
myCalc.multiplyPi = myCalc.multiply.curry(Math.PI);
myCalc.multiplyPi(1); // TypeError: Cannot call method 'concat' of undefined
If I do it my way:
myCalc.multiplyPi = myCalc.multiply.myCurry(Math.PI);
myCalc.multiplyPi(1); // returns 3.141592653589793
myCalc.back(); // returns "3.141592653589793 * 1"
However, I feel like if Douglas Crockford did it his way, he probably has a good reason. What am I missing?
Reader beware, you're in for a scare.
There's a lot to talk about when it comes to currying, functions, partial application and object-orientation in JavaScript. I'll try to keep this answer as short as possible but there's a lot to discuss. Hence I have structured my article into several sections and at the end of each I have summarized each section for those of you who are too impatient to read it all.
1. To curry or not to curry
Let's talk about Haskell. In Haskell every function is curried by default. For example we could create an add function in Haskell as follows:
add :: Int -> Int -> Int
add a b = a + b
Notice the type signature Int -> Int -> Int? It means that add takes an Int and returns a function of type Int -> Int which in turn takes an Int and returns an Int. This allows you to partially apply functions in Haskell easily:
add2 :: Int -> Int
add2 = add 2
The same function in JavaScript would look ugly:
function add(a) {
  return function (b) {
    return a + b;
  };
}
var add2 = add(2);
The problem here is that functions in JavaScript are not curried by default. You need to manually curry them and that's a pain. Hence we use partial application (aka bind) instead.
Lesson 1: Currying is used to make it easier to partially apply functions. However it's only effective in languages in which functions are curried by default (e.g. Haskell). If you have to manually curry functions then it's better to use partial application instead.
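For what it's worth, a small helper can simulate curried-by-default behaviour in JavaScript. The sketch below (the name `curry` is ours) collects arguments until the function's declared arity is reached:

```javascript
// Collect arguments until fn.length of them have arrived, then call fn.
const curry = (fn) => {
  const collect = (args) =>
    args.length >= fn.length
      ? fn(...args)
      : (...more) => collect(args.concat(more));
  return (...args) => collect(args);
};

const add = curry((a, b) => a + b);
console.log(add(2)(3)); // 5 -- curried style
console.log(add(2, 3)); // 5 -- uncurried style still works
```

Note that this relies on `fn.length`, which ignores rest and default parameters, so it is only a sketch of the idea.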
2. The structure of a function
Uncurried functions also exist in Haskell. They look like functions in "normal" programming languages:
main = print $ add(2, 3)
add :: (Int, Int) -> Int
add(a, b) = a + b
You can convert a function in its curried form to its uncurried form and vice versa using the uncurry and curry functions in Haskell respectively. An uncurried function in Haskell still takes only one argument. However that argument is a product of multiple values (i.e. a product type).
In the same vein functions in JavaScript also take only a single argument (it just doesn't know it yet). That argument is a product type. The arguments value inside a function is a manifestation of that product type. This is exemplified by the apply method in JavaScript which takes a product type and applies a function to it. For example:
print(add.apply(null, [2, 3]));
Can you see the similarity between the above line in JavaScript and the following line in Haskell?
main = print $ add(2, 3)
Ignore the assignment to main if you don't know what it's for. It's irrelevant to the topic at hand. The important thing is that the tuple (2, 3) in Haskell is isomorphic to the array [2, 3] in JavaScript. What do we learn from this?
The apply function in JavaScript is the same as function application (or $) in Haskell:
($) :: (a -> b) -> a -> b
f $ a = f a
We take a function of type a -> b and apply it to a value of type a to get a value of type b. However since all functions in JavaScript are uncurried by default the apply function always takes a product type (i.e. an array) as its second argument. That is to say that the value of type a is actually a product type in JavaScript.
Lesson 2: All functions in JavaScript only take a single argument which is a product type (i.e. the arguments value). Whether this was intended or happenstance is a matter of speculation. However the important point is that you understand that mathematically every function only takes a single argument.
Mathematically a function is defined as a morphism: a -> b. It takes a value of type a and returns a value of type b. A morphism can only have one argument. If you want multiple arguments then you could either:
Return another morphism (i.e. b is another morphism). This is currying. Haskell does this.
Define a to be a product of multiple types (i.e. a is a product type). JavaScript does this.
Out of the two I prefer curried functions as they make partial application trivial. Partial application of "uncurried" functions is more complicated. Not difficult, mind you, but just more complicated. This is one of the reasons why I like Haskell more than JavaScript: functions are curried by default.
3. Why OOP matters not
Let's take a look at some object-oriented code in JavaScript. For example:
var oddities = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].filter(odd).length;
function odd(n) {
  return n % 2 !== 0;
}
Now you might wonder how is this object-oriented. It looks more like functional code. After all you could do the same thing in Haskell:
oddities = length . filter odd $ [0..9]
Nevertheless the above code is object-oriented. The array literal is an object which has a method filter which returns a new array object. Then we simply access the length of the new array object.
What do we learn from this? Chaining operations in object-oriented languages is the same as composing functions in functional languages. The only difference is that the functional code reads backwards. Let's see why.
In JavaScript the this parameter is special. It's separate from the formal parameters of the function which is why you need to specify a value for it separately in the apply method. Because this comes before the formal parameters, methods are chained from left-to-right.
add.apply(null, [2, 3]); // this comes before the formal parameters
If this were to come after the formal parameters the above code would probably read as:
var oddities = length.filter(odd).[0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
apply([2, 3], null).add; // this comes after the formal parameters
Not very nice is it? Then why do functions in Haskell read backwards? The answer is currying. You see functions in Haskell also have a "this" parameter. However unlike in JavaScript the this parameter in Haskell is not special. In addition it comes at the end of the argument list. For example:
filter :: (a -> Bool) -> [a] -> [a]
The filter function takes a predicate function and a this list and returns a new list with only the filtered elements. So why is the this parameter last? It makes partial application easier. For example:
filterOdd = filter odd
oddities = length . filterOdd $ [0..9]
In JavaScript you would write:
Array.prototype.filterOdd = [].filter.myCurry(odd);
var oddities = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9].filterOdd().length;
Now which one would you choose? If you're still complaining about reading backwards then I have news for you. You can make Haskell code read forwards using "backward application" and "backward composition" as follows:
($>) :: a -> (a -> b) -> b
a $> f = f a
(>>>) :: (a -> b) -> (b -> c) -> (a -> c)
f >>> g = g . f
oddities = [0..9] $> filter odd >>> length
Now you have the best of both worlds. Your code reads forwards and you get all the benefits of currying.
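The same backward-application and backward-composition combinators can be sketched in JavaScript (`applyTo` and `then` are hypothetical names, mirroring `$>` and `>>>`):

```javascript
// Backward application ($>) and backward composition (>>>), in JavaScript.
const applyTo = (a) => (f) => f(a);        // a $> f  =  f a
const then = (f) => (g) => (x) => g(f(x)); // f >>> g =  g . f

const odd = (n) => n % 2 !== 0;
const filterOdd = (xs) => xs.filter(odd);
const length = (xs) => xs.length;

// Reads forwards: start with the data, then filter, then take the length.
const oddities = applyTo([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])(then(filterOdd)(length));
console.log(oddities); // 5
```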
There are a lot of problems with this that don't occur in functional languages:
The this parameter is specialized. Unlike other parameters you can't simply set it to an arbitrary object. Hence you need to use call to specify a different value for this.
If you want to partially apply functions in JavaScript then you need to specify null as the first parameter of bind. Similarly for call and apply.
Object-oriented programming has nothing to do with this. In fact you can write object-oriented code in Haskell as well. I would go as far as to say that Haskell is in fact an object-oriented programming language, and a far better one at that than Java or C++.
Lesson 3: Functional programming languages are more object-oriented than most mainstream object-oriented programming languages. In fact object-oriented code in JavaScript would be better (although admittedly less readable) if written in a functional style.
The problem with object-oriented code in JavaScript is the this parameter. In my humble opinion the this parameter shouldn't be treated any differently than formal parameters (Lua got this right). The problem with this is that:
There's no way to set this like other formal parameters. You have to use call instead.
You have to set this to null in bind if you wish to only partially apply a function.
On a side note I just realized that every section of this article is becoming longer than the preceding section. Hence I promise to keep the next (and final) section as short as possible.
4. In defense of Douglas Crockford
By now you must have picked up that I think that most of JavaScript is broken and that you should shift to Haskell instead. I like to believe that Douglas Crockford is a functional programmer too and that he is trying to fix JavaScript.
How do I know that he's a functional programmer? He's the guy that:
Popularized the functional equivalent of the new keyword (a.k.a. Object.create). If you don't already, then you should stop using the new keyword.
Attempted to explain the concept of monads and gonads to the JavaScript community.
Anyway, I think Crockford nullified this in the curry function because he knows how bad this is. It would be sacrilege to set it to anything other than null in a book entitled "JavaScript: The Good Parts". I think he's making the world a better place one feature at a time.
By nullifying this Crockford is forcing you to stop relying on it.
Edit: As Bergi requested I'll describe a more functional way to write your object-oriented Calculator code. We will use Crockford's curry method. Let's start with the multiply and back functions:
function multiply(a, b, history) {
  return [a * b, [a + " * " + b].concat(history)];
}

function back(history) {
  return [history[0], history.slice(1)];
}
As you can see, the multiply and back functions don't belong to any object, so you can use them on any array. In particular, your Calculator class is just a wrapper around a list of strings, which means you don't even need a separate data type for it:
var myCalc = [];
Now you can use Crockford's curry method for partial application:
var multiplyPi = multiply.curry(Math.PI);
Next we'll create a test function that applies multiplyPi to one and then goes back to the previous state:
var test = bindState(multiplyPi.curry(1), function (prod) {
  alert(prod);
  return back;
});
If you don't like the syntax then you could switch to LiveScript:
test = do
  prod <- bindState multiplyPi.curry 1
  alert prod
  back
The bindState function is the bind function of the state monad. It's defined as follows:
function bindState(g, f) {
  return function (s) {
    var a = g(s);
    return f(a[0])(a[1]);
  };
}
So let's put it to the test:
alert(test(myCalc)[0]);
See the demo here: http://jsfiddle.net/5h5R9/
BTW this entire program would have been more succinct if written in LiveScript as follows:
multiply = (a, b, history) --> [a * b, [a + " * " + b] ++ history]
back = ([top, ...history]) -> [top, history]

myCalc = []
multiplyPi = multiply Math.PI

bindState = (g, f, s) -->
  [a, t] = g s
  (f a) t

test = do
  prod <- bindState multiplyPi 1
  alert prod
  back

alert (test myCalc .0)
See the demo of the compiled LiveScript code: http://jsfiddle.net/5h5R9/1/
So how is this code object oriented? Wikipedia defines object-oriented programming as:
Object-oriented programming (OOP) is a programming paradigm that represents concepts as "objects" that have data fields (attributes that describe the object) and associated procedures known as methods. Objects, which are usually instances of classes, are used to interact with one another to design applications and computer programs.
According to this definition functional programming languages like Haskell are object-oriented because:
In Haskell we represent concepts as algebraic data types which are essentially "objects on steroids". An ADT has one or more constructors which may have zero or more data fields.
ADTs in Haskell have associated functions. However unlike in mainstream object-oriented programming languages ADTs don't own the functions. Instead the functions specialize upon the ADTs. This is actually a good thing as ADTs are open to adding more methods. In traditional OOP languages like Java and C++ they are closed.
ADTs can be made instances of typeclasses which are similar to interfaces in Java. Hence you still do have inheritance, variance and subtype polymorphism but in a much less intrusive form. For example Functor is a superclass of Applicative.
The above code is also object-oriented. The object in this case is myCalc which is simply an array. It has two functions associated with it: multiply and back. However it doesn't own these functions. As you can see the "functional" object-oriented code has the following advantages:
Objects don't own methods. Hence it's easy to associate new functions to objects.
Partial application is made simple via currying.
It promotes generic programming.
So I hope that helped.
Reason 1 - not easy to provide a general solution
The problem is that your solution is not general. If the caller doesn't assign the new function to any object, or assigns it to a completely different object, your multiplyPi function will stop working:
var multiplyPi = myCalc.multiply.myCurry(Math.PI);
multiplyPi(1); // TypeError: this.history.concat is not a function
So, neither Crockford's nor your solution can ensure that the function will be used correctly. It may then be easier to say that the curry function works only on "functions", not "methods", and set this to null to enforce that. We can only speculate, though, since Crockford doesn't mention this in the book.
Reason 2 - functions are being explained
If you asking "why Crockford didn't use this or that" - the very likely answer is: "It wasn't important in regard to the demonstrated matter." Crockford uses this example in the chapter Functions. The purpose of the sub-chapter curry was:
to show that functions are objects you can create and manipulate
to demonstrate another usage of closures
to show how arguments can be manipulated.
Fine-tuning this for general usage with objects was not the purpose of this chapter. As that is problematic, if not impossible (see Reason 1), it was more educational to simply put null there instead of something that could raise questions about whether it actually works (it didn't help in your case, though :-)).
Conclusion
That said, I think you can be perfectly confident in your solution! There's no particular reason in your case to follow Crockford's decision to reset this to null. You must be aware, though, that your solution only works under certain circumstances and is not 100% clean. The clean "object-oriented" solution would be to ask the object to create a clone of its method inside itself, ensuring that the resulting method stays within the same object.
But what I want to know is why does he set this to null?
There is not really a reason. Probably he wanted to simplify, and most functions that make sense to be curried or partially applied are not OOP-methods that use this. In a more functional style the history array that is appended to would be another parameter of the function (and maybe even a return value).
Wouldn't the expected behavior be that the curried method is the same as the original, including the same this?
Yes, your implementation makes much more sense, however one might not expect that a partially applied function still needs to be called in the correct context (as you do by re-assigning it to your object) if it uses one.
For those, you might have a look at the bind method of Function objects for partial application including a specific this-value.
From MDN:
thisArg The value of this provided for the call to fun. Note that this
may not be the actual value seen by the method: if the method is a
function in non-strict mode code, null and undefined will be replaced
with the global object, and primitive values will be boxed.
Hence, if the method is in non-strict mode and the first argument is null or undefined, this inside of that method will reference Window. In strict mode, this is null or undefined. I've added a live example on this Fiddle.
Furthermore, passing in null or undefined does no harm if the function does not reference this at all. That's probably why Crockford used null in his example, to not overcomplicate things.
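As a concrete sketch of that bind-based approach, using the calculator from the question: bind fixes this and prepends arguments in one step:

```javascript
// bind fixes `this` and partially applies arguments at the same time.
const calculator = {
  history: [],
  multiply: function (num1, num2) {
    this.history = this.history.concat([num1 + " * " + num2]);
    return num1 * num2;
  },
};

const multiplyPi = calculator.multiply.bind(calculator, Math.PI);
console.log(multiplyPi(2));         // 6.283185307179586
console.log(calculator.history[0]); // "3.141592653589793 * 2"
```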
