Ramda.js pipe that sets a property based on a previous parameter - javascript

Currently, I have the following code (which works):
const double = R.multiply(2);
const piped = R.pipe(
(obj) => R.assoc('b', double(obj.a))(obj),
(obj) => R.assoc('c', double(obj.b))(obj)
);
console.log(
piped({ a: 1 })
);
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
However, because of that trailing (obj) at the end of each pipe function, I suspect this could be refactored into something better in the "Ramda world".
I'm still new to this library, so I don't yet know all its methods and tricks.
Is there a better way to do so using Ramda?
My "real" code is this:
function getScripts() {
const tryRequire = tryCatch((path) => require(path).run, always(null));
const addPathAndRunProps = pipe(
// Note: The `scriptsPath` function is a bound path.join function.
// It just returns a string.
(dir) => assoc('path', scriptsPath(dir.name, 'index.js'))(dir),
(dir) => assoc('run', tryRequire(dir.path))(dir)
);
const addModuleRunAndFilterInvalid = pipe(
map(addPathAndRunProps),
filter((dir) => typeof dir.run === 'function')
);
return addModuleRunAndFilterInvalid(
fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true })
);
}

I think you might be over-using Ramda here. The code is a bit confusing. This would likely be easier to read in the future and more maintainable, while still being functional:
function getScripts() {
const tryRequire = tryCatch((path) => require(path).run, always(null));
const addPathAndRunProps = dir => {
const path = scriptsPath(dir.name, 'index.js')
return {
...dir,
path,
run: tryRequire(path),
}
}
return pipe(
map(addPathAndRunProps),
filter(x => typeof x.run === 'function'),
)(fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true }))
}
Or, if you really want to keep those setters, try splitting your addPathAndRunProps function into two setters:
function getScripts() {
const tryRequire = tryCatch((path) => require(path).run, always(null));
const addPathProp = x => assoc('path', scriptsPath(x.name, 'index.js'), x)
const addRunProp = x => assoc('run', tryRequire(x.path), x)
return pipe(
map(addPathProp),
map(addRunProp),
filter(x => typeof x.run === 'function'),
)(fs.readdirSync(SCRIPTS_PATH, { withFileTypes: true }))
}
In both cases, I got rid of your addModuleRunAndFilterInvalid function. It doesn't add any clarity to your function to have addModuleRunAndFilterInvalid split out into its own function, and returning the result of the pipe clarifies the purpose of the getScripts function itself.
Also, in your code, you keep calling the object you're operating on dir. This is confusing since it implies the object has the same structure on each function call. However the variable passed to addRunProp does not have the same structure as what is passed to addPathProp (the one passed to addRunProp requires an extra path prop). Either come up with a descriptive name, or just use x. You can think of x as the thing your function is operating on. To figure out what x is, look at the function name (e.g. addRunProp means that x is something that will have a run property added to it).
One other potentially useful tip: I've settled on the naming convention of aug (short for "augment") for adding a property or bit of info to an object. So I'd rename your addPathProp function augPath and rename your addRunProp function augRun. Since I use it consistently, I know that when I see aug at the beginning of a function, it's adding a property.

I agree with Cully's answer -- there might not be any good reason to try to use Ramda's functions here.
But, if you're interested, there are some Ramda functions which you might choose to use.
chain and ap are fairly generic functions operating on two different abstract types. But when used with functions, they have some fairly useful behavior as combinators:
chain (f, g) (x) //=> f (g (x)) (x)
ap (f, g) (x) //=> f (x) (g (x))
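For instance, with a couple of plain arithmetic functions (a small illustration; subtract and multiply are ordinary curried Ramda functions):
chain (subtract, multiply (2)) (10) //=> subtract (20, 10) => 10
ap (subtract, multiply (2)) (10)    //=> subtract (10, 20) => -10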
That means that you could write your function like this:
const piped = R.pipe(
chain (assoc ('b'), pipe (prop ('a'), double)),
chain (assoc ('c'), pipe (prop ('b'), double)),
)
I don't think this version improves on the original; the repetition in those internal pipe calls makes it too complex.
However with a helper function, this might be more reasonable:
const doubleProp = curry (pipe (prop, double))
// or doubleProp = (prop) => (obj) => 2 * obj[prop]
const piped = R.pipe(
chain (assoc ('b'), doubleProp ('a')),
chain (assoc ('c'), doubleProp ('b')),
);
This is now, to my mind, pretty readable code. Of course it requires an understanding of chain and how it applies to functions, but with that, I think it's actually an improvement on the original.
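As a quick sanity check, this version produces the same result as the pipeline in the question:
piped ({ a: 1 }) //=> { a: 1, b: 2, c: 4 }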
I frequently make the point that point-free code is a useful tool only when it makes our code more readable. When it doesn't, pointed code is no less functional than point-free code.
By the way, I just want to note that I'm impressed with the quality of your question. It's really nice to read questions that are well-thought out and well-presented. Thank you!

Related

How to duplicate value in functional js?

I have a couple of functions, the first of which is an "expensive" getter:
function getter() {
return {
a: "foo",
b: "bar",
c: "should be intentionally skipped"
}
}
The second is a transformer, which is required to stay in strictly functional form:
const transformer = x => [getter().a+x, getter().b+x]
The issue is that there are 2 expensive getter calls here.
How can I call getter only once, keeping it in fp-form syntax (I particularly mean - without using var, const, let and return inside transformer)?
In other words, what is js fp equivalent of transformer function:
const transformer = (x) => {
const cached = getter()
return [cached.a+x, cached.b+x]
}
console.log(transformer("_test"))
output:
[ 'foo_test', 'bar_test' ]
keeping it in fp-form syntax - I particularly mean, without using var, const, let and return inside transformer
That is not what functional programming means, not even purely functional programming. You should avoid side effects and keep functions pure so that you gain referential transparency to help understanding the code. It does not mean that you should avoid introducing names in your program. Using const is totally fine! You even use it to declare const transformer.
If you absolutely want to avoid such statements and basically emulate let expressions, you can do
const transformer = x =>
(cached =>
[cached.a+x, cached.b+x]
)( getter() );
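If you find the inline IIFE hard to read, the same trick can be given a name (letIn here is just an illustrative helper, not a built-in or Ramda function):
const letIn = (value, f) => f(value);
const transformer = x => letIn(getter(), cached => [cached.a + x, cached.b + x]);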
And of course, if getter is a pure function, there's no reason to run it every time transformer is called. So just hoist it outside the function body:
const cached = getter();
const transformer = x => [cached.a+x, cached.b+x];
Edit: the question has been amended to operate on a subset of keys in the computationally expensive object.
This amended answer uses Object.entries() to gather keys and values. Before transforming values, the entries are filtered to include only the desired keys...
function getter() {
return {
a: "foo",
b: "bar",
c: "should be intentionally skipped"
}
}
const transformer = x => {
return Object.entries(getter())
.filter(([k, v]) => ['a', 'b'].includes(k))
.map(([k, v]) => v + x);
}
console.log(transformer(" plus x"));
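With the getter above, this logs [ 'foo plus x', 'bar plus x' ], since the c entry is filtered out before the values are transformed.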

Ramda Curry with Implicit Null

I've been trying to learn the Ramda library and get my head around functional programming. This is mostly academic, but I was trying to create a nice logging function that I could use to log values to the console from within pipe or compose
The thing I noticed
Once you've curried a function with Ramda, invoking a function without any parameters returns the same function
f() returns f
but
f(undefined) and f(null)
do not.
I've created a utility function that brings these calls into alignment so that
f() equals f(null) even if f is curried.
// Returns true if x is a function
const isFunction = x =>
Object.prototype.toString.call(x) == '[object Function]';
// Converts a curried function so that it always takes at least one argument
const neverZeroArgs = fn => (...args) => {
let ret = args.length > 0 ?
fn(...args) :
fn(null)
return isFunction(ret) ?
neverZeroArgs(ret) :
ret
}
const minNullCurry = compose(neverZeroArgs, curry);
Here it is in use:
const logMsg = minNullCurry((msg, val) => {
if(isNil(msg) || msg.length < 1) console.log(val);
else if(isNil(val)) console.log(msg);
else console.log(`${msg}: `, val);
});
const logWithoutMsg = logMsg();
logWithoutMsg({Arrr: "Matey"})
Then if I want to use it in Ramda pipes or composition, I could do this:
// Same as logMsg, but always return the value you're given
const tapLog = pipe(logMsg, tap);
pipe(
prop('length'),
tapLog() // -> "5"
)([1,2,3,4,5]);
pipe(
prop('length'),
tapLog('I have an thing of length') // -> "I have an thing of length: 5"
)([1,2,3,4,5]);
pipe(
always(null),
tapLog('test') // -> "test"
)([1,2,3,4,5]);
I've just started with Ramda and was wondering if it comes with anything that might make this a bit easier/cleaner. I do realise that I could just do this:
const logMsg = msg => val => {
if(isNil(msg)) console.log(val);
else if(isNil(val)) console.log(msg);
else console.log(`${msg}: `, val);
};
and I'm done, but now I have to forever apply each argument 1 at a time.
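For instance, with the hand-rolled version:
logMsg('Count')(5)  // fine: logs the message and the value
logMsg('Count', 5)  // not fine: the 5 is ignored and the inner val => ... function is returned instead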
That's fine, but I'm here to learn whether there are any fun alternatives. How can I transform a curried function so that f() returns f(null)? Or is it a code smell to even want to do that?
(Ramda founder and maintainer here).
Once you've curried a function with Ramda, invoking a function without any parameters returns the same function
f() returns f
but
f(undefined) and f(null)
do not.
Quite true. This is by design. In Ramda, for i < n, where n is the function length, calling a function with i arguments and then with j arguments should have the same behavior as if we'd called it originally with i + j arguments. There is no exception if i is zero. There has been some controversy about this over the years. The other co-founder disagreed with me on this, but our third collaborator agreed with me, and it's been like this ever since. And note that the other founder didn't want to treat it as though you'd supplied undefined/null, but to throw an error. There is a lot to be said for consistency.
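A small, hypothetical illustration of that consistency rule:
const add3 = R.curry((a, b, c) => a + b + c);
add3(1, 2)(3)         //=> 6
add3(1)(2)(3)         //=> 6
add3()(1)(2, 3)       //=> 6, because the zero-argument call just returns an equivalent curried function
add3(undefined, 2)(3) //=> NaN, because undefined is treated as a real argument and consumes a slot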
I'm here to learn if there are any fun alternatives. How can I transform a curried function so that f() returns f(null) or is it a code smell to even want to do that?
It is not a code smell, not at all. Ramda does not supply this to you, and probably never will, as it doesn't really match the rest of the library. Ramda needs to be able to distinguish an empty call from one with a nil input, because for some users that might be important. But no one ever said that all your composition tools had to come from a particular library.
I see nothing wrong with what you've done.
If you are interested in a different API, something like this might possibly be interesting:
const {pipe, prop, always} = R
const tapLog = Object .assign (
(...val) => console .log (...val) || val,
{
msg: (msg) => (...val) => console .log (`${msg}:`, ...val) || val,
val: (...val) => (_) => console .log (...val) || _
}
)
tapLog ({Arrr: "Matey"})
pipe(
prop('length'),
tapLog // -> "5"
)([1,2,3,4,5]);
pipe(
prop('length'),
tapLog.msg('I have an thing of length') // -> "I have an thing of length: 5"
)([1,2,3,4,5]);
pipe(
always(null),
tapLog.val('test') // -> "test"
)([1,2,3,4,5]);
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.27.1/ramda.min.js"></script>

Functional Programming - then() between chained filter/map calls

I am parsing data like this:
getData()
.filter(fn)
.filter(fn2)
.filter(fn3)
.map(fn4)
in which the filters are conceptually separated and do different operations.
For debugging purposes, is there a JavaScript library or a way to wrap promises such that I can do this:
getData()
.filter(fn)
.then((result) => { log(result.count); return result })
.filter(fn2)
.then(debugFn) // extra chained debug step (not iterating through arr)
.filter(fn3)
.map(fn4)
Or is this an anti-pattern?
EDIT
After some thoughts I'm convinced that the best answer to this question has been given by V-for-Vaggelis: just use breakpoints.
If you do proper function composition, then inserting a few tap calls in your pipeline is cheap, easy and non-intrusive, but it won't give you as much information as a breakpoint (and knowing how to use a debugger to step through your code) would.
Applying a function to x and returning x as is, no matter what, already has a name: tap. In libraries like Ramda.js, it is described as follows:
Runs the given function with the supplied object, then returns the object.
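A minimal sketch of the idea (Ramda's real tap is a curried binary function, but it amounts to this):
const tap = fn => x => {
fn(x);    // run fn purely for its side effect
return x; // pass the original value through untouched
};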
Since filter, map, ... all return new instances, you probably have no other choice than to extend the prototype.
We can find ways to do it in a controlled manner though. This is what I'd suggest:
const debug = (xs) => {
Array.prototype.tap = function (fn) {
fn(this);
return this;
};
Array.prototype.debugEnd = function () {
delete Array.prototype.tap;
delete Array.prototype.debugEnd;
return this;
};
return xs;
};
const a = [1, 2, 3];
const b =
debug(a)
.tap(x => console.log('Step 1', x))
.filter(x => x % 2 === 0)
.tap(x => console.log('Step 2', x))
.map(x => x * 10)
.tap(x => console.log('Step 3', x))
.debugEnd();
console.log(b);
try {
b.tap(x => console.log('WAT?!'));
} catch (e) {
console.log('Array prototype is "clean"');
}
If you can afford a library like Ramda, the safest way (IMHO) would be to introduce tap in your pipeline.
const a = [1, 2, 3];
const transform =
pipe(
tap(x => console.log('Step 1', x))
, filter(x => x % 2 === 0)
, tap(x => console.log('Step 2', x))
, map(x => x * 10)
, tap(x => console.log('Step 3', x))
);
console.log(transform(a));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.min.js"></script>
<script>const {pipe, filter, map, tap} = R;</script>
Adding functions to built-in object prototypes is controversial, so many people might advise against it. However, if you really want to be able to do what you're asking, that's probably the only option:
Object.defineProperty(Array.prototype, "examine", {
value: function(callback) {
callback.call(this, this);
return this;
}
});
Then you can put .examine(debugFn) calls in the chain of .filter() calls, as you described.
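For example, with some stand-in data (getData isn't shown in the question, so a plain array is used here):
[1, 2, 3, 4, 5]
.filter(n => n % 2 === 0)
.examine(arr => console.log('after filter:', arr)) // logs [ 2, 4 ]
.map(n => n * 10);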
You could monkey-patch Array.prototype, but it's not recommended.
As long as you only use it for debugging:
Array.prototype.debug = function (fn) {
fn(this);
return this;
};
// example usage
[1, 2, 3].map(n => n * 2).debug(console.log).map(n => n * 3);
It's not a promise - you probably don't need async - but gives you .then-like behaviour.
The main issue here is that you're trying to use the chaining pattern that doesn't scale very well.
a.method().method() only lets you apply functions (methods) that are supported by the prototype of the given context (a in this case).
I'd suggest you take a look at function composition (pipe vs. compose) instead. This design pattern doesn't depend on a specific context, so you can provide behaviour externally.
const asyncPipe = R.pipeWith(R.then);
const fetchWarriors = (length) => Promise.resolve(
Array.from({ length }, (_, n) => n),
);
const battle = asyncPipe([
fetchWarriors,
R.filter(n => n % 2 === 0),
R.filter(n => n / 5 < 30),
R.map(n => n ** n),
R.take(4),
R.tap(list => console.log('survivors are', list)),
]);
/* const survivors = await */ battle(100);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.26.1/ramda.js"></script>
As you can see from the snippet above, the Array type doesn't really need to implement everything itself...
I believe one could use breakpoints to debug something like this.
If you don't want to overwrite the prototype, you could write a wrapper function that takes a promise and gives you a modified promise with the additional features you want. However, the problem here is that you would need to import all the methods that may be used, which is bad for tree-shaking.
The pipeline operator proposal (still working its way through TC39) tries to address this problem.
Until then, things like lodash's _.flow (aliased as pipe in lodash/fp) remain, which allow you to do this:
_.pipe([
_.filter(fn),
_.filter(fn2),
])(data);
Now you basically want this in an async way. This should be pretty easy to accomplish with tools like Ramda.
You can do what you want pretty easily with rubico, a functional (programming) promise library
import { pipe, tap, map, filter, transform } from 'rubico'
const pipeline = pipe([
getData,
filter(fn),
tap((result) => { log(result.count) }),
filter(fn2),
debugFn,
filter(fn3),
map(fn4),
])
you can use the above pipeline as a transducer (without debugFn for now, since I am not sure of the exact nature of what it does) using rubico's transform
transform(pipeline, [])
you are left with an efficient transformation pipeline based on transduction.

Pass a list of functions to pipe or compose in Ramda.js

This is about functions that take multiple arguments, in particular "pipe" and "compose".
They take multiple functions as arguments, but I want to pass them a list of functions instead.
In Ramda.js, normally:
const piped = R.pipe(R.inc, R.negate);
I want to write something like this:
const funcs = [R.inc, R.negate];
const piped = R.pipe(funcs);
I'm also thinking about passing a list of partially applied functions:
const funcs = [R.add (1), R.pow (2)];
The functions in these lists have no name property, so I wondered whether a solution could be found by binding these Ramda functions and partially applied functions to variables, but that didn't seem very clean.
This is my first time asking in English and on Stack Overflow, and I apologise for the rough English; it is machine-translated.
How can I solve this problem? Thank you.
The simplest way to expand an array into multiple arguments is the spread syntax. Add "..." in front of the array:
const funcs = [R.inc, R.negate];
const piped = R.pipe(...funcs);
More about destructuring here https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Destructuring_assignment
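The resulting piped behaves exactly like R.pipe(R.inc, R.negate) written out by hand:
piped(1) //=> -2 (1 is incremented to 2, then negated)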
What you actually want is to fold a list of functions with the reverse function composition combinator (aka the contravariant functor):
const inc = x => x + 1;
const sqr = x => x * x;
const reduce1 = f => xs =>
xs.reduce(f);
const contra = (g, f) => x =>
f(g(x));
console.log(
reduce1(contra) ([inc, inc, inc, sqr]) (1)); // 16
This works only for non-empty arrays. We need a fold with an accumulator to make the partially applied fold reduce1(contra) a total function:
const inc = x => x + 1;
const sqr = x => x * x;
const id = x => x;
const reduce = f => acc => xs =>
xs.reduce(f, acc);
const contra = (g, f) => x =>
f(g(x));
const pipen = reduce(contra) (id);
console.log(
pipen([inc, inc, inc, sqr]) (1)); // 16
console.log(
pipen([]) (1)); // 1
In Ramda though, using R.apply is totally fine. But note that this function is specific to Ramda.
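For example, with the funcs array from the question:
const funcs = [R.inc, R.negate];
const piped = R.apply(R.pipe, funcs); // same as R.pipe(R.inc, R.negate)
piped(1); //=> -2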
Ramda.js functions are normal javascript functions so Function.call and Function.apply methods are available.
So the solution to your problem is to use the .apply method to apply multiple arguments passed as a list:
example:
const funcs = [R.inc, R.negate];
const piped = R.pipe.apply(R, funcs);
There is also an R.apply (which you can check in the Ramda documentation), but the native Function.apply method solves the problem elegantly and natively. (Note: the documentation for pipe makes no mention of relying on a specific context, as it does for some other functions in Ramda.)
Answer to comment:
R.apply has exactly the same issues with a bound context as the native apply method (according to the Ramda documentation). So the question of apply's context is mostly irrelevant here and of minor importance.

Increasing performance of a function which processes an array using destructuring and recursion

I wanted to create a function which takes another function and an array as parameters and calls that function for each three consecutive elements of the array (e.g. first, second and third; second, third and fourth). I implemented it using destructuring and recursion. However, I found out that it has terrible performance – it takes about 100 ms to process a 1000-element array, and uses a lot of memory. Here's a code snippet:
const eachThree = fn => ([first, second, third, ...rest]) => {
fn(first, second, third);
if (rest.length !== 0) {
eachThree(fn)([second, third, ...rest]);
}
};
const noop = () => {};
const arr = Array(1000).fill(undefined);
console.time('eachThree');
eachThree(noop)(arr);
console.timeEnd('eachThree');
I'm aware that to increase performance I could just use a regular for loop, but is it possible to modify this function somehow to make it faster while keeping destructuring and recursion?
Also, are there any plans to optimize JavaScript engines to make code like this run faster? Would tail-call optimization solve this?
it takes about 100 ms to process a 1000-element array, and uses a lot of memory.
The reason for that is obviously the destructuring and spread syntax, which creates 2000 arrays with an average size of 500 elements. That ain't cheap.
I'm aware that to increase performance I could just use a regular for loop, but is it possible to modify this function somehow to make it faster while keeping destructuring and recursion?
Giving up the destructuring (in favour of an index, like in #Sylwester's answer) would be the best solution, but there are indeed a few other things you can optimise:
Use return to allow for tail call optimisation (if the engine supports it)
Cache the inner function instead of re-creating the closure with the same fn over and over again.
const eachThree = fn => {
const eachThreeFn = ([first, second, third, ...rest]) => {
fn(first, second, third);
if (rest.length !== 0) {
return eachThreeFn([second, third, ...rest]);
}
};
return eachThreeFn;
};
So I hope for your and our sake you really need to shave off those 200ms since it doesn't seem like a real life problem to me. Know that in the rules of optimization:
Don't
Don't
Profile so you know where to optimize.
When it comes to optimization neatness and style goes out of the window. That means you can throw out recursion. In this case however, it's not the recursion that is your biggest problem but the 2n arrays you are making.
Here is a slightly faster version:
const eachThree = fn => arr => {
const maxLen = arr.length-3;
const recur = (n) => {
fn(arr[n], arr[n+1], arr[n+2]);
if (n < maxLen) {
recur(n+1);
}
};
recur(0);
};
Now ES6 has proper tail calls and Node6 has implemented this. It runs about 400 times faster than your original code when I test it there. Not that you will notice the change IRL.
Since you seem open to refactoring the code, here's a pretty dramatic rewrite that performs the same task about 10x faster (using your means of benchmarking)
I think you'll appreciate that a very functional style has been maintained
const drop = (n,xs) =>
xs.slice(n)
const take = (n,xs) =>
xs.slice(0,n)
const slide = (x,y) => xs =>
x > xs.length
? []
: [take (x,xs)] .concat (slide (x,y) (drop (y,xs)))
const eachThree = f => xs =>
slide (3,1) (xs) .forEach (([a,b,c]) => f (a,b,c))
const noop = () => {}
const arr = Array(1000).fill(undefined)
console.time('eachThree')
eachThree(noop)(arr)
console.timeEnd('eachThree')
Using a trampoline solves the memory usage problem, but it doesn't speed up the code. Here's a code snippet:
const trampoline = fn => {
do {
fn = fn();
} while (typeof fn === 'function');
};
const eachThree = fn => ([first, second, third, ...rest]) => () => {
fn(first, second, third);
if (rest.length !== 0) {
return eachThree(fn)([second, third, ...rest]);
}
};
const noop = () => {};
const arr = Array(5000).fill(undefined);
console.time('eachThree');
trampoline(eachThree(noop)(arr));
console.timeEnd('eachThree');
