How would a functional language actually define/translate primitives to hardware? - javascript

Let's say I have a few primitives defined, here using javascript:
const TRUE = x => y => x;
const FALSE = x => y => y;
const ZERO = f => a => a;
const ONE = f => a => f(a);
const TWO = f => a => f(f(a));
If a language is purely functional, how would it translate these primitives into something physical? For example, usually I see something like a function that is not a pure function, such as:
const TWO = f => a => f(f(a));
const inc = x => x+1;
console.log(TWO(inc)(0));
// 2
But again this is sort of a 'trick' to print something, in this case a number. But how is the pure-functional stuff translated into something that can actually do something?

A function is pure if its result (the return value) only depends on the inputs you give to it.
A language is purely functional if all its functions are pure¹.
Therefore it's clear that "utilities" like getchar, which are fairly common functions in many ordinary, non-functional languages, pose a problem in functional languages, because they take no input², and still they give different outputs every time.
It looks like a functional language needs to give up on purity at least for doing I/O, doesn't it?
Not quite. If a language wants to be purely functional, it can't ever break function purity, not even for doing I/O. Still it needs to be useful. You do need to get things done with it, or, as you say, you need
something that can actually do something
If that's the case, how can a purely functional language, like Haskell, stay pure and yet provide you with utilities to interact with keyboard, terminal, and so on? Or, in other words, how can purely functional languages provide you with entities that have the same "read the user input" role of those impure functions of ordinary languages?
The trick is that those functions are secretly (and platonically) taking one more argument, in addition to the 0 or more arguments they'd have in other languages, and spitting out an additional return value: these two guys are the "real world" before and after the "action" that function performs. It's a bit like saying that the signatures of getchar and putchar are not
char getchar()
void putchar(char)
but
[char, newWorld] = getchar(oldWorld)
[newWorld] = putchar(char, oldWorld)
This way you can give your program the "initial" world, and all those functions which are impure in ordinary languages will, in functional languages, pass the evolving world to each other, like an Olympic torch.
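To make this concrete, here is a rough JavaScript sketch of the same world-passing idea (the world value and the readChar/printChar/echo helpers are made up for illustration, not a real API):
// "world" is just an ordinary value that every effectful function
// takes and returns; here it carries the pending input and the output so far.
const readChar = (oldWorld) => {
  const [c, ...rest] = oldWorld.input;
  return [c, { ...oldWorld, input: rest }];
};
const printChar = (c, oldWorld) =>
  [{ ...oldWorld, output: oldWorld.output + c }];

// echo one character: the world has to be threaded through by hand
const echo = (world0) => {
  const [c, world1] = readChar(world0);
  const [world2] = printChar(c, world1);
  return world2;
};

console.log(echo({ input: ['h', 'i'], output: '' }));
// { input: ['i'], output: 'h' }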
Now you could ask: what's the advantage of doing so?
The point of a pure functional language like Haskell is that it abstracts this mechanism away from you, hiding that *World stuff from you, so that you can't do silly things like the following
[firstChar, newWorld] = getchar(oldWorld)
[secondChar, newerWorld] = getchar(oldWorld) // oops, I'm mistakenly passing
// oldWorld instead of newWorld
The language just doesn't give you the tools to put your hands on the "real world". If it did, you'd have the same degrees of freedom you have in languages like C, and you'd end up with the type of bugs which are common in those languages.
A purely functional language, instead, hides that stuff from you. It basically constrains and limits your freedom to a smaller set than non-functional languages allow, letting the runtime machinery that actually runs the program (and over which you have no control whatsoever) take care of the plumbing on your behalf.
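To give a feel for what that hidden plumbing might look like, here is a minimal JavaScript sketch in which an "IO action" wraps a function world => [result, newWorld] and chain does the threading, so user code never touches the world directly (IO, getChar, putChar are illustrative names, not a real API):
const IO = (run) => ({
  run,
  chain: (f) => IO((world) => {
    const [a, world1] = run(world);
    return f(a).run(world1); // the fresh world is passed along for you
  }),
});

const getChar = IO((world) =>
  [world.input[0], { ...world, input: world.input.slice(1) }]);
const putChar = (c) => IO((world) =>
  [undefined, { ...world, output: world.output + c }]);

// user code only composes actions; it never sees oldWorld/newWorld
const echoOne = getChar.chain((c) => putChar(c));

console.log(echoOne.run({ input: ['h', 'i'], output: '' }));
// [ undefined, { input: ['i'], output: 'h' } ]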
(A good reference)
¹ Haskell is such a language (and there isn't any other mainstream purely functional language around, as far as I know); JavaScript is not, even if it provides several tools for doing some functional programming (think of arrow functions, which are essentially lambdas, and the Lodash library).
² No, what you enter from the keyboard is not an input to the getchar function; you call getchar without arguments, and assign its return value to some variable, e.g. char c = getchar() or let c = getchar(), or whatever the language syntax is.

Related

How is injecting an impure function different from calling it?

I am reading a book where it says one way to handle impure functions is to inject them into the function instead of calling it like the example below.
normal function call:
const getRandomFileName = (fileExtension = "") => {
  ...
  for (let i = 0; i < NAME_LENGTH; i++) {
    namePart[i] = getRandomLetter();
  }
  ...
};
inject and then function call:
const getRandomFileName2 = (fileExtension = "", randomLetterFunc = getRandomLetter) => {
  const NAME_LENGTH = 12;
  let namePart = new Array(NAME_LENGTH);
  for (let i = 0; i < NAME_LENGTH; i++) {
    namePart[i] = randomLetterFunc();
  }
  return namePart.join("") + fileExtension;
};
The author says such injections could be helpful when we are trying to test the function, as we can pass a function we know the result of, to the original function to get a more predictable solution.
Is there any difference between the above two functions in terms of being pure as I understand the second function is still impure even after getting injected?
An impure function is just a function that contains one or more side effects, or whose result cannot be derived from the given inputs alone.
That is, if it mutates data outside of its scope or does not predictably produce the same output for the same input.
In the first example NAME_LENGTH is defined outside the scope of the function - so if that value changes the behaviour of getRandomFileName also changes - even if we supply the same fileExtension each time. Likewise, getRandomLetter is defined outside the scope - and almost certainly produces random output - so would be inherently impure.
In the second example everything is referenced in the scope of the function, or is passed to it, or is defined in it. This means that it could be pure - but isn't necessarily. Again this is because some functions are inherently impure - so it would depend on how randomLetterFunc is defined.
If we called it with
getRandomFileName2('test', () => 'a');
...then it would be pure - because every time we called it we would get the same result.
On the other hand if we called it with
getRandomFileName2(
  'test',
  () => 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'.charAt(Math.floor(25 * Math.random()))
);
It would be impure, because calling it each time would give a different result.
There's more than one thing at stake here. At one level, as Fraser's answer explains, assuming that getRandomLetter is impure (by being nondeterministic), then getRandomFileName also is.
By making getRandomFileName2 a higher-order function, you at least give it the opportunity to be a pure function. Assuming that getRandomFileName2 performs no other impure action, if you pass it a pure function, it will, itself, transitively, be pure.
If you pass it an impure function, it will also, transitively, be impure.
Giving a function an opportunity to be pure can be useful for testing, but doesn't imply that the design is functional. You can also use dependency injection and Test Doubles to make objects deterministic, but that doesn't make them functional.
In most languages, including JavaScript, you can't get any guarantees from functions as first-class values. A function of a particular 'shape' (type) can be pure or impure, and you can check this neither at compile time nor at run time.
In Haskell, on the other hand, you can explicitly declare whether a function is pure or impure. In order to even have the option of calling an impure action, a function must itself be declared impure.
Thus, the opportunity to be impure must be declared at compile time. Even if you pass a pure 'implementation' of an impure type, the receiving, higher-order function still looks impure.
While something like what is described in the OP would be technically possible in Haskell, it would make everything impure, so it wouldn't be the way you go about it.
What you do instead depends on circumstances and requirements. In the OP, it looks as though you need exactly 12 random values. Instead of passing an impure action as an argument, you might instead generate 12 random values in the 'impure shell' of the program, and pass those values to a function that can then remain pure.
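For instance, a rough JavaScript sketch of that "impure shell, pure core" split (the helper names here are made up for illustration):
// Impure shell: gather the randomness up front...
const randomLetters = (n) =>
  Array.from({ length: n }, () =>
    'abcdefghijklmnopqrstuvwxyz'.charAt(Math.floor(26 * Math.random())));

// ...pure core: given the letters, the result is fully determined.
const makeFileName = (letters, fileExtension = '') =>
  letters.join('') + fileExtension;

// composition happens at the edge of the program
console.log(makeFileName(randomLetters(12), '.txt'));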
There's more at stake than just testing. While testability is nice, the design suggested in the OP will most certainly be impure 'in production' (i.e. when composed with a proper random value generator).
Impure actions are harder to understand, and their interactions can be surprising. Pure functions, on the other hand, are referentially transparent, and referential transparency fits in your head.
It'd be a good idea to aim for pure functions whenever possible. The proposed getRandomFileName2 is unlikely to be pure when composed with a 'real' random value generator, so a more functional design is warranted.
Anything that contains random (or Date, or stuff like that) will be considered impure and hard to test, because what it returns doesn't strictly depend on its inputs (it is always different). However, if the random part of the function is injected, the function can be made "pure" in the test suite by replacing whatever injected randomness with something predictable.
function getRandomFileName(fileExtension = "", randomLetterFunc = getRandomLetter) {}
can be tested by calling it with a predictable "getLetter" function instead of a random one:
getRandomFileName("", predictableLetterFunc)

Is it correct to refer to a higher-order function as a "template"?

Minimal example: Say I have the higher-order function
const my_fn = (a) => (b) => a + b
that, when called like so:
my_fn(42)
returns the function (b) => 42 + b.
Would it be correct to refer to my_fn as a "template function"?
I know that, in languages such as C++, the word "template" has a very specific technical meaning.
But in JavaScript, there is (AFAIK) no built-in template syntax in the way that there is in C++.
I don't want to abuse terminology.
Is it correct to refer to higher-order functions in JS as template functions, and vice-versa?
(related, optional question: is there a difference between skillfully using higher-order functions in JS and doing "generic programming" in this language?)
I wouldn't call this a “template” function, though I might call this function a “function factory.” To me, the word “template” in languages like C++ implies the specific goal of applying one function to a range of different data types.
A function like the example you gave doesn’t really accomplish anything new in terms of “generic programming” because it's not designed to produce functions that operate on different types of values. Javascript is not strict about types, so you can pass a value of any type to your function and the language will do its best to work with it. You don't have to do anything additional to make a function apply to different types of objects; every function accepts every kind of object unless the programmer adds explicit typechecking logic.

What's so special about Monads in Kleisli category?

The related Question is
What is so special about Monads?
bind can be composed of fmap and join, so do we have to use monadic functions a -> m b?
In the first question:
What is so special about Monads?
A monad is a mathematical structure which is heavily used in (pure) functional programming, basically Haskell. However, there are many other mathematical structures available, like for example applicative functors, strong monads, or monoids. Some have more specific, some are more generic. Yet, monads are much more popular. Why is that?
The comment to reply the question:
As far as I recall, monads were popularised by Wadler, and at the time the idea of doing IO without tedious CPS and parsing without explicit state passing were huge selling points; it was a hugely exciting time. A.F.A.I.R., Haskell didn't do constructor classes, but Gofer (father of Hugs) did. Wadler proposed overloading list comprehension for monads, so the do notation came later. Once IO was monadic, monads became a big deal for beginners, cementing them as a major thing to grok. Applicatives are much nicer when you can, and Arrows more general, but they came later, and IO sells monads hard. – AndrewC May 9 '13 at 1:34
The answer by #Conal is:
I suspect that the disproportionately large attention given to this one particular type class (Monad) over the many others is mainly a historical fluke. People often associate IO with Monad, although the two are independently useful ideas (as are list reversal and bananas). Because IO is magical (having an implementation but no denotation) and Monad is often associated with IO, it's easy to fall into magical thinking about Monad.
First of all, I agree with them, and I think the usefulness of Monads mostly arises from Functors, in which we can embed many functions within the structure; Monads are a small extension for the robustness of function composition, adding join : M(M(X)) -> M(X) to avoid nested types.
In the 2nd Question:
do we have to use monadic functions a -> m b?
Many tutorials around the web still insist on using monadic functions, since that is the Kleisli triple and the monad laws.
and many answers like
I like to think of such an m as meaning "plan-to-get", where "plans" involve some sort of additional interaction beyond pure computation.
or
In situations where Monad isn't necessary, it is often simpler to use Applicative, Functor, or just basic pure functions. In these cases, these things should be (and generally are) used in place of a Monad. For example:
ws <- getLine >>= return . words -- Monad
ws <- words <$> getLine -- Functor (much nicer)
To be clear: If it's possible without a monad, and it's simpler and more readable without a monad, then you should do it without a monad! If a monad makes the code more complex or confusing than it needs to be, don't use a monad! Haskell has monads for the sole purpose of making certain complex computations simpler, easier to read, and easier to reason about. If that's not happening, you shouldn't be using a monad.
Reading their answers, I suppose their special feeling about Monad arises from the historical accident that the Haskell community happened to choose Monads in the Kleisli category to solve their problems (IO etc.).
So, again, I think the usefulness of Monads mostly arises from Functors, in which we can embed many functions within the structure; Monads are a small extension for the robustness of function composition, adding join : M(M(X)) -> M(X) to avoid nested types.
In fact, in JavaScript I implemented it as below.
Functor
console.log("Functor");
{
const unit = (val) => ({
// contextValue: () => val,
fmap: (f) => unit((() => {
//you can do pretty much anything here
const newVal = f(val);
// console.log(newVal); //IO in the functional context
return newVal;
})()),
});
const a = unit(3)
.fmap(x => x * 2) //6
.fmap(x => x + 1); //7
}
The point is we can implement whatever we like in the Functor structure, and in this case, I simply made it IO/console.log the value.
Another point is that, to do this, Monads are absolutely unnecessary.
Monad
Now, based on the Functor implementation above, I add an extra join: MMX => MX feature to avoid the nested structure, which should be helpful for the robustness of complex functional composition.
The functionality is exactly identical to the Functor above, and please note the usage is also identical to the Functor fmap. This does not require a "monadic function" to bind (Kleisli composition of monads).
console.log("Monad");
{
const unit = (val) => ({
contextValue: () => val,
bind: (f) => {
//fmap value operation
const result = (() => {
//you can do pretty much anything here
const newVal = f(val);
console.log(newVal);
return newVal;
})();
//join: MMX => MX
return (result.contextValue !== undefined)//result is MX
? result //return MX
: unit(result) //result is X, so re-wrap and return MX
}
});
//the usage is identical to the Functor fmap.
const a = unit(3)
.bind(x => x * 2) //6
.bind(x => x + 1); //7
}
Monad Laws
Just in case, this implementation of the Monad satisfies the monad laws, and the Functor above does not.
console.log("Monad laws");
{
const unit = (val) => ({
contextValue: () => val,
bind: (f) => {
//fmap value operation
const result = (() => {
//you can do pretty much anything here
const newVal = f(val);
//console.log(newVal);
return newVal;
})();
//join: MMX => MX
return (result.contextValue !== undefined)
? result
: unit(result)
}
});
const M = unit;
const a = 1;
const f = a => (a * 2);
const g = a => (a + 1);
const log = m => console.log(m.contextValue()) && m;
log(
M(f(a))//==m , and f is not monadic
);//2
console.log("Left Identity");
log(
M(a).bind(f)
);//2
console.log("Right Identity");
log(
M(f(a))//m
.bind(M)// m.bind(M)
);//2
console.log("Associativity");
log(
M(5).bind(f).bind(g)
);//11
log(
M(5).bind(x => M(x).bind(f).bind(g))
);//11
}
So, here is my question.
I may be wrong.
Is there any counter example that Functors cannot do what Monads can do, except for the robustness of functional composition by flattening the nested structure?
What's so special about Monads in the Kleisli category? It seems fairly possible to implement Monads with a small extension to avoid the nested structure of a Functor, and without the monadic functions a -> m b that are the entities of the Kleisli category.
Thanks.
edit(2018-11-01)
Reading the answers, I agree it's not appropriate to perform console.log inside the IdentityFunctor, which should satisfy the Functor laws, so I commented it out, as in the Monad code.
So, eliminating that problem, my question still holds:
Is there any counter example that Functors cannot do what Monads can do, except for the robustness of functional composition by flattening the nested structure?
What's so special about Monads in the Kleisli category? It seems fairly possible to implement Monads with a small extension to avoid the nested structure of a Functor, and without the monadic functions a -> m b that are the entities of the Kleisli category.
An answer from #DarthFennec is:
"Avoiding the nested type" is not in fact the purpose of join, it's just a neat side-effect. The way you put it makes it sound like join just strips the outer type, but the monad's value is unchanged.
I believe "Avoiding the nested type" is not just a neat side-effect, but a definition of "join" of Monad in category theory,
the multiplication natural transformation μ:T∘T⇒T of the monad provides for each object X a morphism μX:T(T(X))→T(X)
monad (in computer science): Relation to monads in category theory
and that's exactly what my code does.
On the other hand,
This is not the case. join is the heart of a monad, and it's what allows the monad to do things.
I know many people implement monads in Haskell in this manner, but the fact is, there is the Maybe functor in Haskell, which does not have join, and there is the Free monad, where join is embedded into the defined structure from the start. They are objects for which users define Functors to do things.
Therefore,
You can think of a functor as basically a container. There's an arbitrary inner type, and around it an outer structure that allows some variance, some extra values to "decorate" your inner value. fmap allows you to work on the things inside the container, the way you would work on them normally. This is basically the limit of what you can do with a functor.
A monad is a functor with a special power: where fmap allows you to work on an inner value, bind allows you to combine outer values in a consistent way. This is much more powerful than a simple functor.
These observations do not fit the fact of the existence of the Maybe functor and the Free monad.
Is there any counter example that Functors cannot do what Monads can do, except for the robustness of functional composition by flattening the nested structure?
I think this is the important point:
Monads are a small extension for the robustness of function composition, adding join : M(M(X)) -> M(X) to avoid nested types.
"Avoiding the nested type" is not in fact the purpose of join, it's just a neat side-effect. The way you put it makes it sound like join just strips the outer type, but the monad's value is unchanged. This is not the case. join is the heart of a monad, and it's what allows the monad to do things.
You can think of a functor as basically a container. There's an arbitrary inner type, and around it an outer structure that allows some variance, some extra values to "decorate" your inner value. fmap allows you to work on the things inside the container, the way you would work on them normally. This is basically the limit of what you can do with a functor.
A monad is a functor with a special power: where fmap allows you to work on an inner value, bind allows you to combine outer values in a consistent way. This is much more powerful than a simple functor.
The point is we can implement whatever we like in the Functor structure, and in this case, I simply made it IO/console.log the value.
This is incorrect, actually. The only reason you were able to do IO here is because you're using Javascript, and you can do IO anywhere. In a purely functional language like Haskell, IO cannot be done in a functor like this.
This is a gross generalization, but for the most part it's useful to describe IO as a glorified State monad. Each IO action takes an extra hidden parameter called RealWorld (which represents the state of the real world), maybe reads from it or modifies it, and then sends it on to the next IO action. This RealWorld parameter is threaded through the chain. If something is written to the screen, that's RealWorld being copied, modified, and passed along. But how does the "passing along" work? The answer is join.
Say we want to read a line from the user, and print it back to the screen:
getLine :: IO String
putStrLn :: String -> IO ()
main :: IO ()
main = -- ?
Let's assume IO is a functor. How do we implement this?
main :: IO (IO ())
main = fmap putStrLn getLine
Here we've lifted putStrLn to IO, to get fmap putStrLn :: IO String -> IO (IO ()). If you recall, putStrLn takes a String and a hidden RealWorld and returns a modified RealWorld, where the String parameter is printed to the screen. We've lifted this function with fmap, so that it now takes an IO (which is an action that takes a hidden RealWorld, and returns a modified RealWorld and a String), and returns the same kind of IO action, just wrapped around a different value (a completely separate action that also takes a separate hidden RealWorld and returns a RealWorld). Even after applying getLine to this function, nothing actually happens or gets printed.
We now have a main :: IO (IO ()). This is an action that takes a hidden RealWorld, and returns a modified RealWorld and a separate action. This second action takes a different RealWorld and returns another modified RealWorld. This on its own is pointless, it doesn't get you anything and it doesn't print anything to the screen. What needs to happen is, the two IO actions need to be connected together, so that one action's returned RealWorld gets fed in as the other action's RealWorld parameter. This way it becomes one continuous chain of RealWorlds that mutate as time goes on. This "connection" or "chaining" happens when the two IO actions are merged with join.
Of course, join does different things depending on which monad you're working with, but for IO and State-type monads, this is more or less what's happening under the hood. There are plenty of situations where you're doing something very simple that doesn't require join, and in those cases it's easy to treat the monad as a functor or applicative functor. But usually that isn't enough, and in those cases we use monads.
EDIT: Responses to comments and edited question:
I don't see any definition of Monads in category theory that explains this. All I read about join is still MMX => MX, and that is exactly what my code does.
Can you also tell exactly what a function String -> String does? Might it not return the input verbatim, reverse it, filter it, append to it, ignore it and return a completely different value, or anything else that results in a String? A type does not determine what a function does, it restricts what a function can do. Since join in general is only defined by its type, any particular monad can do anything allowed by that type. This might just be stripping the outer layer, or it might be some extremely complex method of combining the two layers into one. As long as you start with two layers and end up with one layer, it doesn't matter. The type allows for a number of possibilities, which is part of what makes monads so powerful to begin with.
There is MaybeFunctor in Haskell. There's no "join" or "bind" there, and I wonder from where the power come. What is the difference between MaybeFunctor and MaybeMonad?
Every monad is also a functor: a monad is nothing more than a functor that also has a join function. If you use join or bind with a Maybe, you're using it as a monad, and it has the full power of a monad. If you do not use join or bind, but only use fmap and pure, you're using it as a functor, and it becomes limited to doing the things a functor can do. If there's no join or bind, there is no extra monad power.
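To make that concrete, here is a small JavaScript sketch (Just, Nothing, fromNullable, findUser, findPet are illustrative names, not from the question's code) showing what bind adds over fmap: with map alone, chaining functions that themselves return a Maybe leaves you with nested Maybes, while chain flattens as it goes so the composition can continue.
// Illustrative Maybe with both a functor interface (map) and a monad
// interface (chain = map + join).
const Just = (x) => ({
  map:   (f) => Just(f(x)),
  chain: (f) => f(x),
  toString: () => `Just(${x})`,
});
const Nothing = {
  map:   (_) => Nothing,
  chain: (_) => Nothing,
  toString: () => 'Nothing',
};
const fromNullable = (x) => (x == null ? Nothing : Just(x));

const users = { alice: { petId: 7 } };
const pets  = { 7: 'cat' };

const findUser = (name) => fromNullable(users[name]); // a -> Maybe b
const findPet  = (user) => fromNullable(pets[user.petId]);

// Functor only: the result is a nested Maybe(Maybe(...)), and there is
// no way to keep composing Maybe-returning functions with map alone.
console.log(String(findUser('alice').map(findPet)));   // Just(Just(cat))

// Monad: chain flattens as it goes, so composition stays in Maybe.
console.log(String(findUser('alice').chain(findPet))); // Just(cat)
console.log(String(findUser('bob').chain(findPet)));   // Nothing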
I believe "Avoiding the nested type" is not just a neat side-effect, but a definition of "join" of Monad in category theory
The definition of join is a transformation from a nested monad to a non-nested monad. Again, this could imply anything. Saying the purpose of join is "to avoid the nested type" is like saying the purpose of + is to avoid pairs of numbers. Most operations combine things in some way, but very few of those operations exist simply for the sake of having a combination of things. The important thing is how the combining happens.
there is the Maybe functor in Haskell, which does not have join, and there is the Free monad, where join is embedded into the defined structure from the start. They are objects for which users define Functors to do things.
I've already discussed Maybe, and how when you use it as a functor only, it can't do the things it can do if you use it as a monad. Free is weird, in that it's one of the few monads that doesn't actually do anything.
Free can be used to turn any functor into a monad, which allows you to use do notation and other conveniences. However, the conceit of Free is that join does not combine your actions the way other monads do, instead it keeps them separate, inserting them into a list-like structure; the idea being that this structure is later processed and the actions are combined by separate code. An equivalent approach would be to move that processing code into join itself, but that would turn the functor into a monad and there would be no point in using Free. So the only reason Free works is because it delegates the actual "doing things" part of the monad elsewhere; its join opts to defer action to code running outside the monad. This is like a + operator that, instead of adding the numbers, returns an abstract syntax tree; one could then process that tree later in whatever way is needed.
These observations do not fit the fact of the existence of the Maybe functor and the Free monad.
You are incorrect. As explained, Maybe and Free fit perfectly into my previous observations:
The Maybe functor simply does not have the same expressiveness as the Maybe monad.
The Free monad transforms functors into monads in the only way it possibly can: by not implementing a monadic behavior, and instead simply deferring it to some assumed processing code.
The point is we can implement whatever we like in the Functor structure, and in this case, I simply made it IO/console.log the value.
Another point is that, to do this, Monads are absolutely unnecessary.
The problem is that once you do that your functor is no longer a functor. Functors should preserve identities and composition. For Haskell Functors, those requirements amount to:
fmap id = id
fmap (g . f) = fmap g . fmap f
Those laws are a guarantee that all fmap does is using the supplied function to modify values -- it doesn't do funny stuff behind your back. In the case of your code, fmap(x => x) should do nothing; instead, it prints to the console.
Note that all of the above applies to the IO functor: if a is an IO action, executing fmap f a will have no I/O effects other than those a already had. One stab at writing something similar in spirit to your code might be...
applyAndPrint :: Show b => (a -> b) -> a -> IO b
applyAndPrint f x = let y = f x in fmap (const y) (print y)
pseudoFmap :: Show b => (a -> b) -> IO a -> IO b
pseudoFmap f a = a >>= applyAndPrint f
... but that makes use of Monad already, as we have an effect (printing a result) which depends on the result of a previous computation.
It goes without saying that if you are so inclined (and your type system allows it) you can write code that disregards all of those distinctions. There is, however, a trade-off: the decreased power of Functor with respect to Monad comes with extra guarantees on what functions using the interface can and cannot do -- that is what makes the distinctions useful in the first place.
Your “functor” is very manifestly not a functor, violating both the identity and composition law:
console.log("Functor");
{
const unit = (val) => ({
// contextValue: () => val,
fmap: (f) => unit((() => {
//you can do pretty much anything here
const newVal = f(val);
console.log(newVal); //IO in the functional context
return newVal;
})()),
});
console.log("fmap(id) ...");
const a0 = unit(3)
.fmap(x => x); // prints something
console.log(" ≡ id?");
const a1 = (x => x)(unit(3)); // prints nothing
console.log("fmap(f) ∘ fmap(g) ...");
const b0 = unit(3)
.fmap(x => 3*x)
.fmap(x => 4+x); // prints twice
console.log(" ≡ fmap(f∘g)?");
const b1 = unit(3)
.fmap(x => 4+(3*x)); // prints once
}
overly long comment:
I would suggest forgetting about Kleisli categories for now; I don't believe they have anything to do with your confusion.
Also, while I still don't fully understand your question and assertions, some context that might be useful: category theory is extremely general and abstract; the concepts like Monad and Functor as they exist in Haskell are (necessarily) somewhat less general and less abstract (e.g. the notion of the category of "Hask").
As a general rule the more concrete (the less abstract) a thing becomes, the more power you have: if I tell you you have a vehicle then you know you have a thing that can take you from one place to another, but you don't know how fast, you don't know whether it can go on land, etc. If I tell you you have a speed boat then there opens up a whole larger world of things you can do and reason about (you can use it to catch fish, you know that it won't get you from NYC to Denver).
When you say:
What's so special about Monads in Kleisli category?
...I believe you're making the mistake of suspecting that the conception of Monad and Functor in Haskell is in some way more restrictive relative to category theory but, as I try to explain by analogy above, the opposite is true.
Your code is the same sort of flawed thinking: you model a speedboat (which is a vehicle) and claim it shows that all vehicles are fast and travel on water.

Is Underscore.js functional programming a fake?

According to my understanding of functional programming, you should be able to chain multiple functions and then execute the whole chain by going through the input data once.
In other words, when I do the following (pseudo-code):
list = [1, 2, 3];
sum_squares = list
  .map(function(item) { return item * item; })
  .reduce(function(total, item) { return total + item; }, 0);
I expect that the list will be traversed once, when each value will be squared and then everything will be added up (hence, the map operation would be called as needed by the reduce operation).
However, when I look at the source code of Underscore.js, I see that all the "functional programming" functions actually produce intermediate collections like, for example, so:
// Return the results of applying the iteratee to each element.
_.map = _.collect = function(obj, iteratee, context) {
  iteratee = cb(iteratee, context);
  var keys = !isArrayLike(obj) && _.keys(obj),
      length = (keys || obj).length,
      results = Array(length);
  for (var index = 0; index < length; index++) {
    var currentKey = keys ? keys[index] : index;
    results[index] = iteratee(obj[currentKey], currentKey, obj);
  }
  return results;
};
So the question is, as stated in the title, are we fooling ourselves that we do functional programming when we use Underscore.js?
What we actually do is make the program look like functional programming, without it actually being functional programming in fact. Imagine I build a long chain of K filter() functions on a list of length N; then in Underscore.js my computational complexity will be O(K*N) instead of O(N), as would be expected in functional programming.
P.S. I've heard a lot about functional programming in JavaScript, and I was expecting to see some functions, generators, binding... Am I missing something?
Is Underscore.js functional programming a fake?
No, Underscore does have lots of useful functional helper functions. But yes, they're doing it wrong. You may want to have a look at Ramda instead.
I expect that the list will be traversed once
Yes, list will only be traversed once. It won't be mutated, it won't be held in memory (if you had not a variable reference to it). What reduce traverses is a different list, the one produced by map.
All the functions actually produce intermediate collections
Yes, that's the simplest way to implement this in a language like JavaScript. Many people rely on map executing all its callbacks before reduce is called, as they use side effects. JS does not enforce pure functions, and library authors don't want to confuse people.
Notice that even in pure languages like Haskell an intermediate structure is built [1], though it would be consumed lazily so that it never is allocated as a whole.
There are libraries that implement this kind of optimisation in strict languages with the concept of transducers as known from Clojure. Examples in JS are transduce, transducers-js, transducers.js or underarm. Underscore and Ramda have been looking into them [2] too.
I was expecting to see some […] generators
Yes, generators/iterators that can be consumed lazily are another choice. You'll want to have a look at Lazy.js, highland, or immutable-js.
[1]: Well, not really - it's too easy an optimisation
[2]: https://github.com/jashkenas/underscore/issues/1896, https://github.com/ramda/ramda/pull/865
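As a rough sketch of the generator approach mentioned above, in plain JavaScript (mapIter, filterIter and reduceIter are made-up helpers, not taken from any of those libraries), each value flows through the whole pipeline one at a time, so no intermediate arrays are built:
// Lazy map/filter over any iterable: nothing runs until the result is consumed.
function* mapIter(f, iterable) {
  for (const x of iterable) yield f(x);
}
function* filterIter(pred, iterable) {
  for (const x of iterable) if (pred(x)) yield x;
}
const reduceIter = (f, acc, iterable) => {
  for (const x of iterable) acc = f(acc, x);
  return acc;
};

const list = [1, 2, 3, 4];
const sumOfEvenSquares = reduceIter(
  (total, x) => total + x,
  0,
  filterIter((x) => x % 2 === 0, mapIter((x) => x * x, list))
);
console.log(sumOfEvenSquares); // 4 + 16 = 20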
Functional programming has nothing to do with traversing a sequence once; even Haskell, which is as pure as you're going to get, will traverse the length of a strict list twice if you ask it to filter pred (map f x).
Functional programming is a simpler model of computation where the only things that are allowed to happen do not include side effects. For example, in Haskell basically only the following things are allowed to happen:
You can apply a value f to another value x, producing a new value f x with no side-effects. The first value f is called a "function". It must be the case that any time you apply the same f to the same x you get the same answer for f x.
You can give a name to a value, which might be a function or a simple value or whatever.
You can define a new structure for data with a new type signature, and/or structure some data with those "constructors."
You can define a new type-class or show how an existing data structure instantiates a type-class.
You can "pattern match" a data structure, which is a combination of a case dispatch with naming the parts of the data structure for the rest of your project.
Notice how "print something to the console" is not doable in Haskell, nor is "alter an existing data structure." To print something to the console, you construct a value which represents the action of printing something to the console, and then give it a special name, main. (When you're compiling Haskell, you compute the action named main and then write it to disk as an executable program; when you run the program, that action is actually completed.) If there is already a main program, you figure out where you want to include the new action in the existing actions of that program, then use a function to sequence the console logging with the existing actions. The Haskell program never does anything; it just represents doing something.
That is the essence of functional programming. It is weaker than normal programming languages where the language itself does stuff, like JavaScript's console.log() function which immediately performs its side effect whenever the JS interpreter runs through it. In particular, there are some things which are (or seem to be) O(1) or O(log(log(n))) in normal programs where our best functional equivalent is O(log(n)).

How is a functional programming-based JavaScript app laid out?

I've been working with node.js for a while on a chat app (I know, very original, but I figured it'd be a good learning project). Underscore.js provides a lot of functional programming concepts which look interesting, so I'd like to understand how a functional program in JavaScript would be setup.
From my understanding of functional programming (which may be wrong), the whole idea is to avoid side effects, which basically means having a function that updates another variable outside of the function, so something like
var external;
function foo() {
  external = 'bar';
}
foo();
would be creating a side effect, correct? So as a general rule, you want to avoid disturbing variables in the global scope.
Ok, so how does that work when you're dealing with objects and what not? For example, a lot of times, I'll have a constructor and an init method that initializes the object, like so:
var Foo = function(initVars) {
  this.init(initVars);
}
Foo.prototype.init = function(initVars) {
  this.bar1 = initVars['bar1'];
  this.bar2 = initVars['bar2'];
  //....
}
var myFoo = new Foo({'bar1': '1', 'bar2': '2'});
So my init method is intentionally causing side effects, but what would be a functional way to handle the same sort of situation?
Also, if anyone could point me to either a Python or JavaScript source code of a program that tries to be as functional as possible, that would also be much appreciated. I feel like I'm close to "getting it", but I'm just not quite there. Mainly I'm interested in how functional programming works with traditional OOP classes concept (or does away with it for something different if that's the case).
You should read this question:
Javascript as a functional language
There are lots of useful links, including:
Use functional programming techniques to write elegant JavaScript
The Little JavaScripter
Higher-Order JavaScript
Eloquent JavaScript, Chapter 6: Functional Programming
Now, for my opinion. A lot of people misunderstand JavaScript, possibly because its syntax looks like most other programming languages (where Lisp/Haskell/OCaml look completely different). JavaScript is not object-oriented, it is actually a prototype-based language. It doesn't have classes or classical inheritance so shouldn't really be compared to Java or C++.
JavaScript can be better compared to a Lisp; it has closures and first-class functions. Using them you can create other functional programming techniques, such as partial application (currying).
Let's take an example (using sys.puts from node.js):
var external;
function foo() {
  external = Math.random() * 1000;
}
foo();
sys.puts(external);
To get rid of global side effects, we can wrap it in a closure:
(function() {
  var external;
  function foo() {
    external = Math.random() * 1000;
  }
  foo();
  sys.puts(external);
})();
Notice that we can't actually do anything with external or foo outside of the scope. They're completely wrapped up in their own closure, untouchable.
Now, to get rid of the external side-effect:
(function() {
  function foo() {
    return Math.random() * 1000;
  }
  sys.puts(foo());
})();
In the end, the example is not purely-functional because it can't be. Using a random number reads from the global state (to get a seed) and printing to the console is a side-effect.
I also want to point out that mixing functional programming with objects is perfectly fine. Take this for example:
var Square = function(x, y, w, h) {
  this.x = x;
  this.y = y;
  this.w = w;
  this.h = h;
};

function getArea(square) {
  return square.w * square.h;
}

function sum(values) {
  var total = 0;
  values.forEach(function(value) {
    total += value;
  });
  return total;
}

sys.puts(sum([new Square(0, 0, 10, 10), new Square(5, 2, 30, 50), new Square(100, 40, 20, 19)].map(function(square) {
  return getArea(square);
})));
As you can see, using objects in a functional language can be just fine. Some Lisps even have things called property lists which can be thought of as objects.
The real trick to using objects in a functional style is to make sure that you don't rely on their side effects but instead treat them as immutable. An easy way is whenever you want to change a property, just create a new object with the new details and pass that one along, instead (this is the approach often used in Clojure and Haskell).
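In JavaScript, that "copy with the change" style might look roughly like this (a sketch using modern object spread, not code from the original answer):
// Treat the object as immutable: to "move" a square, build a new one
// that reuses the unchanged fields instead of mutating in place.
const square = { x: 0, y: 0, w: 10, h: 10 };

const moveSquare = (sq, dx, dy) => ({ ...sq, x: sq.x + dx, y: sq.y + dy });

const moved = moveSquare(square, 5, 3);
console.log(moved);  // { x: 5, y: 3, w: 10, h: 10 }
console.log(square); // unchanged: { x: 0, y: 0, w: 10, h: 10 }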
I strongly believe that functional aspects can be very useful in JavaScript but ultimately, you should use whatever makes the code more readable and what works for you.
You have to understand that functional programming and object oriented programming are somewhat antithetical to each other. It's not possible to both be purely functional and purely object oriented.
Functional programming is all about stateless computations. Object-oriented programming is all about state transitions. (Paraphrasing this. Hopefully not too badly.)
JavaScript is more object-oriented than it is functional. Which means that if you want to program in a purely functional style, you have to forego large parts of the language. Specifically, all the object-oriented parts.
If you are willing to be more pragmatic about it, there are some inspirations from the purely functional world that you could use.
I try to adhere to the following rules:
Functions that perform computations should not alter state. And functions that alter state should not perform computations. Also, functions that alter state should alter as little state as possible. The goal is to have lots of little functions that only do one thing. Then, if you need to do anything big, you compose a bunch of little functions to do what you need.
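A tiny sketch of that separation (the names are illustrative, roughly in the spirit of the chat app mentioned in the question):
// Pure: computes the next state, alters nothing.
const addMessage = (chatLog, message) => chatLog.concat([message]);

// Impure: alters state, but does no computation of its own.
let currentLog = [];
const commitLog = (newLog) => { currentLog = newLog; };

// Composing them: the logic lives in the pure part, the mutation is one line.
commitLog(addMessage(currentLog, 'hello'));
console.log(currentLog); // ['hello']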
There are a number of benefits to be gained from following these rules:
Ease of reuse. The longer and more complex a function is, the more specialized it also is, and therefore the less likely it is that it can be reused. The reverse implication is that shorter functions tend to be more generic and therefore easier to reuse.
Reliability of code. It is easier to reason about correctness of the code if it is less complex.
It is easier to test functions when they do only one thing. That way there are fewer special cases to test.
I think http://documentcloud.github.com/underscore/ should be a nice fit for what you need - it provides the most important higher-order functions for functional programming and does not have client-side functions for DOM manipulation, which you don't need server side. Though I don't have experience with it.
As a side note: IMHO the primary feature of functional programming is referential transparency of a function - the function's result depends only on its parameters; the function does not depend on changes to other objects and does not introduce any change except its result value. It makes it easy to reason about a program's correctness and is very valuable for implementing predictable multi-threading (if relevant). Though JavaScript is not the best language for FP - I expect immutable data structures to be very expensive performance-wise to use.
So, two things to point out:
In your first example, your variable is not leaking into the global area, and that is the way it should be done; try never to use variables without declaring them, i.e. test = 'data' would cause data to leak into the global area.
Your second example is correct as well; bar1 and bar2 would only be declared on the Foo object.
Things to keep in mind: try not to overuse prototyping, since it applies to every object that you create; this could be extremely memory intensive depending on how complex your objects are.
If you are looking for an app development framework, have a look at ExtJs. Personally, I think it would fit perfectly into the model you are trying to develop against. Just keep in mind how their licensing model works before getting heavily invested in it.
