I've been trying to learn the Ramda library and get my head around functional programming. This is mostly academic, but I was trying to create a nice logging function that I could use to log values to the console from within pipe or compose.
Here's what I noticed: once you've curried a function with Ramda, invoking the function without any parameters returns the same function.
f() returns f
but
f(undefined) and f(null)
do not.
I've created a utility function that brings these calls into alignment so that
f() equals f(null) even if f is curried.
// Returns true if x is a function
const isFunction = x =>
  Object.prototype.toString.call(x) == '[object Function]';

// Converts a curried function so that it always takes at least one argument
const neverZeroArgs = fn => (...args) => {
  let ret = args.length > 0 ?
    fn(...args) :
    fn(null)
  return isFunction(ret) ?
    neverZeroArgs(ret) :
    ret
}
const minNullCurry = compose(neverZeroArgs, curry);
Here it is in use:
const logMsg = minNullCurry((msg, val) => {
  if(isNil(msg) || msg.length < 1) console.log(val);
  else if(isNil(val)) console.log(msg);
  else console.log(`${msg}: `, val);
});
const logWithoutMsg = logMsg();
logWithoutMsg({Arrr: "Matey"})
Then if I want to use it in Ramda pipes or composition, I could do this:
// Same as logMsg, but always return the value you're given
const tapLog = pipe(logMsg, tap);
pipe(
  prop('length'),
  tapLog() // -> "5"
)([1,2,3,4,5]);

pipe(
  prop('length'),
  tapLog('I have a thing of length') // -> "I have a thing of length: 5"
)([1,2,3,4,5]);

pipe(
  always(null),
  tapLog('test') // -> "test"
)([1,2,3,4,5]);
I've just started with Ramda and was wondering if it comes with anything that might make this a bit easier/cleaner. I do realise that I could just do this:
const logMsg = msg => val => {
  if(isNil(msg)) console.log(val);
  else if(isNil(val)) console.log(msg);
  else console.log(`${msg}: `, val);
};
and I'm done, but now I have to forever apply each argument 1 at a time.
Which is fine, but I'm here to learn if there are any fun alternatives. How can I transform a curried function so that f() returns f(null)? Or is it a code smell to even want to do that?
(Ramda founder and maintainer here).
Once you've curried a function with Ramda, invoking a function without any parameters returns the same function
f() returns f
but
f(undefined) and f(null)
do not.
Quite true. This is by design. In Ramda, for i < n, where n is the function length, calling a function with i arguments and then with j arguments should have the same behavior as if we'd called it originally with i + j arguments. There is no exception if i is zero. There has been some controversy about this over the years. The other co-founder disagreed with me on this, but our third collaborator agreed with me, and it's been like this ever since. And note that the other founder didn't want to treat it as though you'd supplied undefined/null, but to throw an error. There is a lot to be said for consistency.
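For readers who want to see that rule in action without pulling in Ramda, here is a hand-rolled sketch. This is not Ramda's actual implementation (placeholders and `this` handling are omitted, and an empty call returns an equivalent function rather than the identical one), but it exhibits the i + j behavior described above:

```javascript
// Minimal sketch of a curry obeying the i + j rule described above.
// Not Ramda's real implementation: placeholders and `this` are ignored.
const curryN = (n, fn, collected = []) => (...args) => {
  const all = [...collected, ...args];
  // Fewer than n arguments so far: return a function awaiting the rest.
  // A zero-argument call lands here too, so f() behaves just like f.
  return all.length >= n ? fn(...all) : curryN(n, fn, all);
};

const add3 = curryN(3, (a, b, c) => a + b + c);
console.log(add3(1, 2, 3));     // 6
console.log(add3(1)(2)(3));     // 6
console.log(add3()(1, 2)()(3)); // 6
```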
I'm here to learn if there are any fun alternatives. How can I transform a curried function so that f() returns f(null) or is it a code smell to even want to do that?
It is not a code smell, not at all. Ramda does not supply this to you, and probably never will, as it doesn't really match the rest of the library. Ramda needs to be able to distinguish an empty call from one with a nil input, because for some users that might be important. But no one ever said that all your composition tools had to come from a particular library.
I see nothing wrong with what you've done.
If you are interested in a different API, something like this might possibly be interesting:
const {pipe, prop, always} = R
const tapLog = Object .assign (
  (...val) => console .log (...val) || val,
  {
    msg: (msg) => (...val) => console .log (`${msg}:`, ...val) || val,
    val: (...val) => (_) => console .log (...val) || _
  }
)
tapLog ({Arrr: "Matey"})
pipe(
  prop('length'),
  tapLog // -> "5"
)([1,2,3,4,5]);

pipe(
  prop('length'),
  tapLog.msg('I have a thing of length') // -> "I have a thing of length: 5"
)([1,2,3,4,5]);

pipe(
  always(null),
  tapLog.val('test') // -> "test"
)([1,2,3,4,5]);
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>
Related
I am currently reading the source code of Ramda and don't understand the point of _curry1().
function _curry1(fn) {
  return function f1(a) {
    if (arguments.length === 0 || _isPlaceholder(a)) {
      return f1;
    } else {
      return fn.apply(this, arguments);
    }
  };
}
Functions that are curried with this function can be called without arguments.
const inc = (n) => n + 1
const curriedInc = _curry1(inc)
// now it can be called like this
curriedInc(1) // => 2
curriedInc()(1) // => 2
// but it works only with no arguments provided
const add = (a, b) => a + b
const curriedAdd = _curry1(add)
curriedAdd(1) // => NaN
curriedAdd(1)(1) // => TypeError: curriedAdd(...) is not a function
curriedAdd() // => function f1(a)
Question:
It seems to me that you can't chain arguments with this function at all. What are use cases for this function? How different is curriedInc()(1) from inc(1)?
_curry1, _curry2, and _curry3 are performance-boosting tricks that can't be used in the public curry function. Ramda (disclaimer: I'm an author) uses them to perform several tasks: most obviously the currying itself, but also checking for placeholders so that we can properly do partial application. The other magic thing about Ramda's currying is the way that you can apply all, some, or even none of the required arguments, and so long as we're not complete, you get back another function looking for the remaining ones.
Obviously _curry1 theoretically shouldn't be necessary, but it makes it easier to add this sort of consistency:
const a = curry ((a, b, c) => ...)
a (1, 2, 3) == a (1, 2) (3) == a (1) (2, 3) == a (1) (2) (3) == a () () (1, 2) () (3)
Notice that last one. When you call a curried function still looking for arguments with no arguments at all, you get back the same function.
_curry1 is what makes that happen; it adds consistency. But that's all it's for.
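To make the hand-off concrete, here is a stripped-down sketch (placeholder and `this` handling omitted, so this is not Ramda's actual source) of how a two-argument curry can delegate the final argument to _curry1, which is what keeps those empty calls in the chain working:

```javascript
// Simplified sketch: _curry2 returns itself on an empty call and wraps
// the remaining argument in _curry1, so chains like add()(1)()(2) work.
const _curry1 = fn => function f1(a) {
  return arguments.length === 0 ? f1 : fn(a);
};

const _curry2 = fn => function f2(a, b) {
  switch (arguments.length) {
    case 0: return f2;                        // empty call: same function back
    case 1: return _curry1(_b => fn(a, _b));  // partial: defer via _curry1
    default: return fn(a, b);
  }
};

const add = _curry2((a, b) => a + b);
console.log(add(1, 2));     // 3
console.log(add()(1)()(2)); // 3
```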
Currently, I have the following code (which works):
const double = R.multiply(2);

const piped = R.pipe(
  (obj) => R.assoc('b', double(obj.a))(obj),
  (obj) => R.assoc('c', double(obj.b))(obj)
);

console.log(
  piped({ a: 1 })
);
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
However, that trailing (obj) at the end of each pipe function makes me think I could refactor this into something better in the "Ramda world".
I'm still new to this library, so I yet don't know all the methods and tricks.
Is there a better way to do so using Ramda?
My "real" code is this:
function getScripts() {
  const tryRequire = tryCatch((path) => require(path).run, always(null));

  const addPathAndRunProps = pipe(
    // Note: The `scriptsPath` function is a bound path.join function.
    // It just returns a string.
    (dir) => assoc('path', scriptsPath(dir.name, 'index.js'))(dir),
    (dir) => assoc('run', tryRequire(dir.path))(dir)
  );

  const addModuleRunAndFilterInvalid = pipe(
    map(addPathAndRunProps),
    filter((dir) => typeof dir.run === 'function')
  );

  return addModuleRunAndFilterInvalid(
    fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true })
  );
}
I think you might be over-using Ramda here. The code is a bit confusing. This would likely be easier to read in the future and more maintainable, while still being functional:
function getScripts() {
  const tryRequire = tryCatch((path) => require(path).run, always(null));

  const addPathAndRunProps = dir => {
    const path = scriptsPath(dir.name, 'index.js')
    return {
      ...dir,
      path,
      run: tryRequire(path),
    }
  }

  return pipe(
    map(addPathAndRunProps),
    filter(x => typeof x.run === 'function'),
  )(fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true }))
}
Or, if you really want to keep those setters, try splitting your addPathAndRunProps function into two setters:
function getScripts() {
  const tryRequire = tryCatch((path) => require(path).run, always(null));
  const addPathProp = x => assoc('path', scriptsPath(x.name, 'index.js'), x)
  const addRunProp = x => assoc('run', tryRequire(x.path), x)

  return pipe(
    map(addPathProp),
    map(addRunProp),
    filter(x => typeof x.run === 'function'),
  )(fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true }))
}
In both cases, I got rid of your addModuleRunAndFilterInvalid function. It doesn't add any clarity to your function to have addModuleRunAndFilterInvalid split out into its own function, and returning the result of the pipe clarifies the purpose of the getScripts function itself.
Also, in your code, you keep calling the object you're operating on dir. This is confusing since it implies the object has the same structure on each function call. However, the variable passed to addRunProp does not have the same structure as what is passed to addPathProp (the one passed to addRunProp requires an extra path prop). Either come up with a descriptive name, or just use x. You can think of x as the thing your function is operating on. To figure out what x is, look at the function name (e.g. addRunProp means that x is something that will have a run property added to it).
One other potentially useful tip: I've settled on the naming convention of aug (short for "augment") for adding a property or bit of info to an object. So I'd rename your addPathProp function augPath and rename your addRunProp function augRun. Since I use it consistently, I know that when I see aug at the beginning of a function, it's adding a property.
I agree with Cully's answer -- there might not be any good reason to try to use Ramda's functions here.
But, if you're interested, there are some Ramda functions which you might choose to use.
chain and ap are fairly generic functions operating on two different abstract types. But when used with functions, they have some fairly useful behavior as combinators:
chain (f, g) (x) //=> f (g (x)) (x)
ap (f, g) (x) //=> f (x) (g (x))
That means that you could write your function like this:
const piped = R.pipe(
  chain (assoc ('b'), pipe (prop ('a'), double)),
  chain (assoc ('c'), pipe (prop ('b'), double)),
)
I don't think this version improves on the original; the repetition involved in those internal pipe calls is too complex.
However with a helper function, this might be more reasonable:
const doubleProp = curry (pipe (prop, double))
// or doubleProp = (prop) => (obj) => 2 * obj[prop]

const piped = R.pipe(
  chain (assoc ('b'), doubleProp ('a')),
  chain (assoc ('c'), doubleProp ('b')),
);
This is now, to my mind, pretty readable code. Of course it requires an understanding of chain and how it applies to functions, but with that, I think it's actually an improvement on the original.
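For anyone who wants to run the combinator version without Ramda, here is a self-contained sketch with hand-rolled stand-ins for chain, assoc, doubleProp, and pipe. These are simplified to the function-as-functor case, so their signatures are narrower than the real Ramda functions:

```javascript
// chain specialized to functions: chain(f, g)(x) == f(g(x))(x)
const chain = (f, g) => x => f(g(x))(x);
// curried stand-ins for the Ramda functions used above
const assoc = k => v => obj => ({ ...obj, [k]: v });
const doubleProp = k => obj => 2 * obj[k];
const pipe = (...fns) => x => fns.reduce((acc, f) => f(acc), x);

const piped = pipe(
  chain(assoc('b'), doubleProp('a')),
  chain(assoc('c'), doubleProp('b')),
);

console.log(piped({ a: 1 })); // { a: 1, b: 2, c: 4 }
```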
I frequently make the point that point-free code is a useful tool only when it makes our code more readable. When it doesn't, pointed code is no less functional than point-free code.
By the way, I just want to note that I'm impressed with the quality of your question. It's really nice to read questions that are well-thought out and well-presented. Thank you!
I'm trying to translate traverse/sequenceA to Javascript. Now the following behavior of the Haskell implementation gives me trouble:
traverse (\x -> x) Nothing -- yields Nothing
sequenceA Nothing -- yields Nothing
traverse (\x -> x) (Just [7]) -- yields [Just 7]
sequenceA (Just [7]) -- yields [Just 7]
As a Haskell newbie I wonder why the first expression works at all:
instance Traversable Maybe where
  traverse _ Nothing = pure Nothing
  traverse f (Just x) = Just <$> f x
pure Nothing shouldn't work in this case, since there is no minimal applicative context in which the value could be put. It seems as if the compiler checks the type of this expression lazily, and since mapping the id function over Nothing is a no-op, it simply "overlooks" the type error, so to speak.
Here is my Javascript translation:
(Please note that since Javascript's prototype system is insufficient for a couple of type classes and there is no strict type checking anyway, I work with Church encoding and pass type constraints to functions explicitly.)
// minimal type system realized with `Symbol`s
const $tag = Symbol.for("ftor/tag");
const $Option = Symbol.for("ftor/Option");
const Option = {};

// value constructors (Church encoded)
const Some = x => {
  const Some = r => {
    const Some = f => f(x);
    return Some[$tag] = "Some", Some[$Option] = true, Some;
  };
  return Some[$tag] = "Some", Some[$Option] = true, Some;
};

const None = r => {
  const None = f => r;
  return None[$tag] = "None", None[$Option] = true, None;
};

None[$tag] = "None";
None[$Option] = true;

// Traversable
// of/map are explicit arguments of traverse to imitate type inference
// tx[$Option] is just duck typing to enforce the Option type
// of == pure in Javascript
Option.traverse = (of, map) => ft => tx =>
  tx[$Option] && tx(of(None)) (x => map(Some) (ft(x)));

// (partial) Array instance of Applicative
const map = f => xs => xs.map(f);
const of = x => [x];

// helpers
const I = x => x;

// applying
Option.traverse(of, map) (I) (None) // ~ [None]
Option.traverse(of, map) (I) (Some([7])) // ~ [Some(7)]
Obviously, this translation deviates from the Haskell implementation, because I get a [None] where I should get a None. Honestly, this behavior corresponds precisely to my intuition, but I guess intuition isn't that helpful in functional programming. Now my question is
did I merely make a rookie mistake?
or is explicit type passing not equivalent to type inference (in terms of expressiveness)?
GHCi does not overlook any type error. It defaults an unconstrained Applicative to IO, but you only get this behavior in a GHCi prompt (and not a .hs source file). You can check
> :t pure Nothing
pure Nothing :: Applicative f => f (Maybe b)
But still have
> pure Nothing
Nothing
Your javascript implementation is fine; you passed in an Applicative instance for arrays and got what is expected.
I've been working with Church encoding recently and when I look at a typical type
newtype ChurchMaybe a =
  ChurchMaybe { runChurchMaybe :: forall r . r -> (a -> r) -> r }
it looks as if functions with an existential type (runChurchMaybe) might behave similarly to functions that are polymorphic in their return type. I haven't fully understood the logic behind existential types though. So I am probably wrong.
Now I've often read that monads are less useful in untyped languages like Javascript, also because of the lack of return type polymorphism. So I wondered if I can ease this shortcoming:
// JS version of Haskell's read
// read :: String -> forall r . (String -> r) -> r
const read = x => cons => cons(x);
// option type
const Some = x => r => f => f(x);
const None = r => f => r;
console.log(
read(prompt("any string")) (Array.of) // [a]
);
console.log(
read(prompt("any string")) (Some) // Some(a)
);
console.log(
read(prompt("number string")) (x => Number(x)) // Number
);
const append_ = x => y => append => append(x) (y);
const all = x => y => x && y;
const any = x => y => x || y;
const add = x => y => x + y;
const semigroup = append_(true) (false)
semigroup(all); // false
semigroup(any); // true
semigroup(add); // 1
Obviously, read isn't polymorphic in its return type, because it always returns a lambda. However, this lambda can serve as a proxy of the actual return value and the context is now able to determine what type this proxy actually produces by passing a suitable constructor.
And while read can produce any type, append_ is limited to types that have a semigroup constraint.
Of course there is now a little noise in the context of such functions, since they return a proxy instead of the actual result.
Is this essentially the mechanism behind the term "return type polymorphism"? This subject seems to be quite complex and so I guess I am missing something. Any help is appreciated.
In a comment I made an assertion without justifying myself: I said that return type polymorphism isn't a meaningful concept in an untyped language. That was rude of me and I apologise for being so brusque. What I meant was something rather more subtle than what I said, so please allow me to attempt to make amends for my poor communication by explaining in more detail what I was trying to get at. (I hope this answer doesn't come across as condescending; I don't know your base level of knowledge so I'm going to start at the beginning.)
When Haskellers say "return type polymorphism", they're referring to one particular effect of the type class dispatch mechanism. It comes about as the interplay between dictionary passing and bidirectional type inference. (I'm going to ignore polymorphic _|_s like undefined :: forall a. a or let x = x in x :: forall a. a. They don't really count.)
First, note that type class instances in Haskell are syntactic sugar for explicit dictionary passing. By the time GHC translates your program into its Core intermediate representation, all the type classes are gone. They are replaced with "dictionary" records and passed around as regular explicit arguments; => is represented at runtime as ->. So code like
class Eq a where
  (==) :: a -> a -> Bool

instance Eq Bool where
  True == True = True
  False == False = True
  _ == _ = False

headEq :: Eq a => a -> [a] -> Bool
headEq _ [] = False
headEq x (y:_) = x == y

main = print $ headEq True [False]
is translated into something like
-- The class declaration is translated into a regular record type. (D for Dictionary)
data EqD a = EqD { eq :: a -> a -> Bool }

-- The instance is translated into a top-level value of that type
eqDBool :: EqD Bool
eqDBool = EqD { eq = eq }
  where eq True True = True
        eq False False = True
        eq _ _ = False

-- the Eq constraint is translated into a regular dictionary argument
headEq :: EqD a -> a -> [a] -> Bool
headEq _ _ [] = False
headEq eqD x (y:_) = eq eqD x y

-- the elaborator sees you're calling headEq with a ~ Bool and passes in Bool's instance dictionary
main = print $ headEq eqDBool True [False]
It works because of instance coherence: every constraint has at most one "best" matching instance (unless you switch on IncoherentInstances, which is usually a bad idea). At the call site of an overloaded function, the elaborator looks at the constraint's type parameter, searches for an instance matching that constraint - either a top-level instance or a constraint that's in scope - and passes in the single corresponding dictionary as an argument. (For more on the notion of instance coherence I recommend this talk by Ed Kmett. It's quite advanced - it took me a couple of watches to grasp his point - but it's full of insight.)
Much of the time, as in headEq, the constraint's type parameters can be determined by looking only at the types of the overloaded function's arguments, but in the case of polymorphic return values (such as read :: Read a => String -> a or mempty :: Monoid m => m) the typing information has to come from the call site's context. This works via the usual mechanism of bidirectional type inference: GHC looks at the return value's usages, generates and solves unification constraints to figure out its type, and then uses that type to search for an instance. It makes for a kinda magical developer experience: you write mempty and the machine figures out from the context which mempty you meant!
(Incidentally, that's why show . read :: String -> String is forbidden. show and read are type class methods, whose concrete implementation isn't known without any clues about the type at which they're being used. The intermediate type in show . read - the one you're reading into and then showing from - is ambiguous, so GHC doesn't know how to choose an instance dictionary in order to generate runtime code.)
So "return type polymorphism" is actually a slightly misleading term. It's really a by-word for a particular kind of type-directed code generation; its Core representation is just as a regular function whose return type can be determined from the type of its (dictionary) argument. In a language without type classes (or a language without types at all, like JS), you have to simulate type classes with explicit dictionary parameters that are manually passed around by the programmer, as #4Castle has demonstrated in another answer. You can't do type-directed code generation without types to be directed by!
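To tie this back to JavaScript: the Core translation above can be transcribed almost word for word, with the dictionary as a plain object that the caller supplies by hand. Choosing the right dictionary at each call site is exactly the work the elaborator does for you in Haskell:

```javascript
// The Eq "dictionary" is just a record of methods (cf. EqD above).
const eqBool = { eq: (a, b) => a === b }; // instance Eq Bool

// The Eq constraint becomes an explicit first argument.
const headEq = (eqD, x, xs) =>
  xs.length === 0 ? false : eqD.eq(x, xs[0]);

// The caller plays the role of the elaborator and picks the dictionary.
console.log(headEq(eqBool, true, [false])); // false
console.log(headEq(eqBool, true, [true]));  // true
console.log(headEq(eqBool, true, []));      // false
```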
If I understand your question correctly, you would like to know how to implement functions which need access to the methods of a type class so that they can be polymorphic.
One way to think about type classes is as lookup tables between types and implementations. For example, Show would be a mapping of types to functions which return strings. This article explains this in more detail and also gives some alternative ways to implement type classes.
In a language that doesn't support types at all, you will have to implement the types as some kind of unique value which you can pass to polymorphic functions — such as a string, symbol, or object reference. I prefer an object reference because it means I can implement my types as functions and gain the ability to implement parameterized types.
Here's an example of how you could implement Read for Maybe and Int:
// MACROS
const TYPE = (constructor, ...args) => Object.freeze({ constructor, args });

const TYPECLASS = (name, defaultMethods = {}) => {
  const id = Symbol(name);
  const typeClass = ({ constructor, args }) => {
    return Object.assign({}, defaultMethods, constructor[id](...args));
  };
  typeClass._instance_ = (constructor, implementation) => {
    constructor[id] = implementation;
  };
  return Object.freeze(typeClass);
};

// TYPES
const Int = () => TYPE(Int);
const Maybe = a => TYPE(Maybe, a);

// DATA CONSTRUCTORS
const Just = x => r => f => f(x);
const Nothing = r => f => r;

// TYPE CLASSES and INSTANCES
const Read = TYPECLASS('Read');

Read._instance_(Maybe, A => ({
  read: str =>
    str.slice(0, 5) === "Just "
      ? Just (Read(A).read(str.slice(5)))
      : str === "Nothing"
        ? Nothing
        : undefined
}));

Read._instance_(Int, () => ({
  read: str => {
    const num = parseInt(str);
    return isNaN(num) ? undefined : num;
  }
}));

// FUNCTIONS
const error = msg => { throw new Error(msg); };
const maybe = x => f => m => m(x)(f);
const isJust = maybe (false) (_ => true);
const fromJust = maybe (undefined) (x => x);

const read = A => str => {
  const x = Read(A).read(str);
  return x === undefined ? error ("read: no parse") : x;
};

const readMaybe = A => str => {
  try { return Just (read (A) (str)); }
  catch (e) { return Nothing; }
};

// TESTS
console.log([
  fromJust (read (Maybe(Int())) ("Just 123")), // 123
  read (Int()) ("123"), // 123
  fromJust (readMaybe (Int()) ("abc")) // undefined
]);
I guess this is two questions. I am still having trouble with the reduce method; I get the simple way of using it:
reduce([1,2,3], function(a, b) {
  return a + b;
}, 0);
//6
Using it with anything other than numbers really confuses me. So how would I build a contains function using reduce in place of the for loop? Comments would be appreciated. Thank you all.
function contains(collection, target) {
  for(var i=0; i < collection.length; i++){
    if(collection[i] === target){
      return true;
    }
  }
  return false;
}

contains([1, 2, 3, 4, 5], 4);
//true
This is what you need:
function contains(collection, target) {
  return collection.reduce( function(acc, elem) {
    return acc || elem == target;
  }, false)
};
As adaneo says, there is probably an easier way for this particular question, but you tagged this 'functional programming' so I guess you want to get better at this way of tackling problems, which I totally endorse.
Here is a ES2015 solution:
const contains = (x, xs) => xs.some(y => x === y);
let collection = [1,2,3,4,5];
console.log(contains(4, collection)); // true;
The big advantage of Array.prototype.some compared to Array.prototype.reduce is that the former exits the iteration as soon as the condition is true, whereas the latter always traverses the whole array. That means contains(4, xs) stops its iteration at the fourth element of xs.
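One quick way to see that difference concretely is to count how many elements each method actually visits (a throwaway sketch, nothing library-specific):

```javascript
const xs = [1, 2, 3, 4, 5];

// some short-circuits at the first match
let visitsSome = 0;
xs.some(y => { visitsSome += 1; return y === 4; });

// reduce always walks the whole array
let visitsReduce = 0;
xs.reduce((acc, y) => { visitsReduce += 1; return acc || y === 4; }, false);

console.log(visitsSome);   // 4: some stopped at the match
console.log(visitsReduce); // 5: reduce traversed everything
```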
How do I do X using Y ?
Generally I approach all of these questions the same way: programming languages aren't meant to be magic wands. If your language doesn't come with a built-in feature or behaviour, you should be able to write it on your own. From there, if you later learn that your language does offer a built-in for such a behaviour (or it is added), then you can refactor your code if you wish. But by all means, do not sit around waiting for the magic wand to be waved and for your code to magically work.
You can use Array.prototype.reduce if you want, but given the way it works, it will always iterate through the entire contents of the array — even if a match is found in the first element. So this means you shouldn't use Array.prototype.reduce for your function.
However, you can use a reduce solution if you write a reduce which supports an early exit. Below is reducek, which passes a continuation to the callback. Applying the continuation will continue reducing, but returning a value will perform an early exit. Sounds like exactly what the doctor ordered...
This answer is meant to accompany LUH3417's answer to show you that before you might be aware of Array.prototype.some, you shouldn't sit around waiting for ECMAScript to implement behaviour that you need. This answer demonstrates you can use a reducing procedure and still have early exit behaviour.
const reducek = f=> y=> ([x,...xs])=>
  x === undefined ? y : f (y) (x) (y=> reducek (f) (y) (xs))

const contains = x=>
  reducek (b=> y=> k=> y === x ? true : k(b)) (false)

console.log(contains (4) ([1,2,3,4,5])) // true
console.log(contains (4) ([1,2,3,5])) // false
console.log(contains (4) ([])) // false
Seeing reducek here and the example contains function, it should be somewhat obvious that contains could be generalized, which is exactly what Array.prototype.some is.
Again, programming isn't magic, so I'll show you how you could do this if Array.prototype.some didn't already exist.
const reducek = f=> y=> ([x,...xs])=>
  x === undefined ? y : f (y) (x) (y=> reducek (f) (y) (xs))

const some = f=>
  reducek (b=> x=> k=> f(x) ? true : k(b)) (false)

const contains = x=> some (y=> y === x)

console.log(contains (4) ([1,2,3,4,5])) // true
console.log(contains (4) ([1,2,3,5])) // false
console.log(contains (4) ([])) // false