Why is my code not working? Do I need to install any library? - javascript

I recently started functional programming, and all the explanations of pipe and compose using reduce that I have seen are very sketchy.
const x = 4
const add2 = x + 2
const multiplyBy5 = x * 5
const subtract1 = x - 1
pipe = (...functions) =>
(x) => functions.reduce((v, function) => function(v), x)
const result = pipe(add2, multiplyBy5, subtract1)(4)
console.log(result)

There were 2 errors.
The first one was that x, add2, multiplyBy5 and subtract1 were not functions, but mere value definitions.
The other was that naming a parameter after a "reserved" word such as function breaks the syntax parser.
const x = (x) => x
const add2 = (x) => x+2
const multiplyBy5 = (x) => x*5
const subtract1 = (x) => x-1
const pipe = (...functions) => (x) => functions.reduce((v,fn)=>fn(v),x)
const result = pipe(
  add2,
  multiplyBy5,
  subtract1,
)(4);
console.log(result)

I think it should be done like this:
const add2 = (x) => x+2
const multiplyBy5 = (x) => x*5
const subtract1 = (x) => x-1
const pipe = (...functions) => (x) => functions.reduce((v, functionA) => functionA(v), x)
const result = pipe(add2, multiplyBy5, subtract1)(4)
console.log(result)
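A quick sanity check of the left-to-right evaluation order (my own addition, not from the original question):
const check = subtract1(multiplyBy5(add2(4))); // ((4 + 2) * 5) - 1
console.log(check); // 29 - the same result as pipe(add2, multiplyBy5, subtract1)(4)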

Related

How to share intermediate results of continuations?

Please note that even though the example in this question is encoded in Javascript, the underlying concepts are common in Haskell, and while I prefer to express myself in Javascript, I also appreciate answers in Haskell.
In Javascript I use CPS to handle asynchronous computations according to monadic principles. For the sake of simplicity, however, I will use the normal continuation monad for this question.
As soon as my continuation compositions grow, I keep finding myself in situations where I need access to intermediate results of these compositions. Since Javascript is imperative, it is easy to store such results in variables and access them later. But since we're talking about continuations, accessing intermediate results means calling functions, and accessing them several times means a lot of re-evaluation.
This seems to be well suited for memoization. But how can I memoize a function's return value if that very function doesn't return anything but merely calls its continuation? (And, as I mentioned before, I use asynchronous functions that also don't return anything in the current cycle of Javascript's event loop.)
It seems as if I have to extract the right continuation. Maybe this is possible with delimited continuations through shift/reset, but I don't know how to apply these combinators. This issue is probably not that hard to solve and I'm just confused by the magical land of continuation passing style...so please be indulgent with me.
Here is a simplified example of Cont without memoization in Javascript:
const taggedLog = tag => s =>
  (console.log(tag, s), s);
const id = x => x;
const Cont = k => ({
  runCont: k,
  [Symbol.toStringTag]: "Cont"
});
const contAp = tf => tx =>
  Cont(k => tf.runCont(f => tx.runCont(x => k(f(x)))));
const contLiftA2 = f => tx => ty =>
  contAp(contMap(f) (tx)) (ty);
const contOf = x => Cont(k => k(x));
const contMap = f => tx =>
  Cont(k => tx.runCont(x => k(f(x))));
const contReset = tx => // delimited continuations
  contOf(tx.runCont(id));
const contShift = f => // delimited continuations
  Cont(k => f(k).runCont(id));
const inc = contMap(x => taggedLog("eval inc") (x + 1));
const inc2 = inc(contOf(2));
const inc3 = inc(contOf(3));
const add = contLiftA2(x => y => taggedLog("eval add") (x + y));
const mul = contLiftA2(x => y => taggedLog("eval mul") (x * y));
const intermediateResult = add(inc2) (inc3);
mul(intermediateResult) (intermediateResult).runCont(id);
/*
should only log four lines:
eval inc 3
eval inc 4
eval add 7
eval mul 49
*/
Your problem seems to be that your Cont has no monad implementation yet. With that, it's totally simple to access previous results - they're just in scope (as constants) of the nested continuation callbacks:
const contChain = tx => f =>
  Cont(k => tx.runCont(r => f(r).runCont(k)));

contChain(add(inc2) (inc3)) (intermediateResult => {
  const intermediateCont = contOf(intermediateResult);
  return mul(intermediateCont) (intermediateCont);
}).runCont(id);
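Traced through the definitions above (my reading of the code, not output copied from a run), this should now log exactly the four desired lines, because intermediateResult is a plain value in scope and the add chain runs only once:
// eval inc 3
// eval inc 4
// eval add 7
// eval mul 49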
(Of course it's a little weird that all your functions are already lifted and take Cont values as arguments - they shouldn't do that and should simply be functions that return Cont values.)
Your code in Haskell:
import Control.Monad.Cont
import Control.Applicative

let inc = liftA (+1)
let inc2 = inc $ return 2
let inc3 = inc $ return 3
let add = liftA2 (+)
let mul = liftA2 (*)

(`runCont` id) $ add inc2 inc3 >>= \intermediateResult ->
  let intermediateCont = return intermediateResult
  in mul intermediateCont intermediateCont
-- 49

{- or with do notation: -}
(`runCont` id) $ do
  intermediateResult <- add inc2 inc3
  let intermediateCont = return intermediateResult
  mul intermediateCont intermediateCont
-- 49
(I haven't used monad transformers to make a taggedLog side effect)
It seems that I can't avoid getting impure to obtain the desired behavior. The impurity is only local though, because I just replace the continuation chain with its result value. I can do this without changing the behavior of my program, because this is exactly what referential transparency guarantees us.
Here is the transformation of the Cont constructor:
const Cont = k => ({
  runCont: k,
  [Symbol.toStringTag]: "Cont"
});

// becomes

const Cont = k => thisify(o => {      // A
  o.runCont = (res, rej) => k(x => {  // B
    o.runCont = l => l(x);            // C
    return res(x);                    // D
  }, rej);                            // E
  o[Symbol.toStringTag] = "Cont";
  return o;
});
thisify in line A merely mimics a this context, so that the object to be constructed is aware of itself.
Line B is the decisive change: instead of just passing res to the continuation k, I construct another lambda that stores the result x, wrapped in a continuation, under the runCont property of the current Cont object (C), before it calls res with x (D).
In case of an error, rej is just applied to x, as usual (E).
Here is the running example from above, now working as expected:
const taggedLog = pre => s =>
  (console.log(pre, s), s);
const id = x => x;
const thisify = f => f({}); // mimics this context
const Cont = k => thisify(o => {
  o.runCont = (res, rej) => k(x => {
    o.runCont = l => l(x);
    return res(x);
  }, rej);
  o[Symbol.toStringTag] = "Cont";
  return o;
});
const contAp = tf => tx =>
  Cont(k => tf.runCont(f => tx.runCont(x => k(f(x)))));
const contLiftA2 = f => tx => ty =>
  contAp(contMap(f) (tx)) (ty);
const contOf = x => Cont(k => k(x));
const contMap = f => tx =>
  Cont(k => tx.runCont(x => k(f(x))));
const inc = contMap(x => taggedLog("eval inc") (x + 1));
const inc2 = inc(contOf(2));
const inc3 = inc(contOf(3));
const add = contLiftA2(x => y => taggedLog("eval add") (x + y));
const mul = contLiftA2(x => y => taggedLog("eval mul") (x * y));
const intermediateResult = add(inc2) (inc3);
mul(intermediateResult) (intermediateResult).runCont(id);
/* should merely log
eval inc 3
eval inc 4
eval add 7
eval mul 49
*/
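As a quick check of the caching (a sketch of my own, not part of the original post): after the run above, the add chain has replaced its runCont with a continuation that just hands over the stored value, so asking for the intermediate result again triggers no further "eval" logging:
console.log(intermediateResult.runCont(id)); // 7 - served from the stored value, nothing is re-evaluated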

How to implement a coroutine for applicative computations?

Here is a coroutine that avoids nested patterns like chain(m) (chain(...)) for monadic computations:
const some = x => none => some => some(x);
const none = none => some => none;
const option = none => some => tx => tx(none) (some);
const id = x => x;
const of = some;
const chain = fm => m => none => some => m(none) (x => fm(x) (none) (some));
const doM = (chain, of) => gf => {
  const it = gf();
  const loop = ({done, value}) =>
    done
      ? of(value)
      : chain(x => loop(it.next(x))) (value);
  return loop(it.next());
};
const tx = some(4),
      ty = some(5),
      tz = none;
const data = doM(chain, of) (function*() {
  const x = yield tx,
        y = yield ty,
        z = yield tz;
  return x + y + z;
});
console.log(
  option(0) (id) (data)); // 0
But I'm not able to implement an equivalent coroutine for applicative computations:
const some = x => none => some => some(x);
const none = none => some => none;
const option = none => some => tx => tx(none) (some);
const id = x => x;
const of = some;
const map = f => t => none => some => t(none) (x => some(f(x)));
const ap = tf => t => none => some => tf(none) (f => t(none) (x => some(f(x))));
const doA = (ap, of) => gf => {
  const it = gf();
  const loop = ({done, value}, initial) =>
    done
      ? value
      : ap(of(x => loop(it.next(x)))) (value);
  return loop(it.next());
};
const tx = some(4),
      ty = some(5),
      tz = none;
const data = doA(ap, of) (function*() {
  const x = yield tx,
        y = yield ty,
        z = yield tz;
  return x + y + z;
});
console.log(
  option(0) (id) (data)); // none => some => ...
This should work, but it doesn't. Where does the additional functorial wrapping come from? I guess I am a bit lost in recursion here.
By the way I know that this only works for deterministic functors/monads.
I'm not able to implement an equivalent coroutine for applicative computations
Yes, because generator functions are monadic, not just applicative. The operand of a yield expression can depend on the result of the previous yield expression - that's characteristic of a monad.
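For illustration (a sketch of mine that reuses doM, chain, of, option, id, tx, ty and tz from the first snippet; the condition is made up), a generator can pick the next yielded value based on the previous result, which ap alone cannot express:
const data2 = doM(chain, of) (function*() {
  const x = yield tx;                // x = 4
  const y = yield (x > 3 ? ty : tz); // the operand depends on x
  return x + y;
});
console.log(
  option(0) (id) (data2)); // 9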
Where does the additional functorial wrapping come from? I guess I am a bit lost here.
You are doing ap(of(…)) (…) - this is equivalent to map(…) (…) according to the Applicative laws. Compared to the chain call in the first snippet, this does not do any unwrapping of the result, so you get a nested maybe type (which, in your implementation, is encoded as a function).
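To make that concrete, here is a small sketch using the some/none/option/map/ap/chain/of definitions from the snippets above (my own illustration, not from the original answer):
const inc = x => x + 1;
console.log(
  option(0) (id) (ap(of(inc)) (some(4))));           // 5
console.log(
  option(0) (id) (map(inc) (some(4))));              // 5 - same result, no unwrapping needed
console.log(
  option(0) (id) (chain(x => of(x + 1)) (some(4)))); // 5 - chain flattens the extra layer
console.log(
  option(0) (id) (map(x => of(x + 1)) (some(4))));   // none => some => ... - nested, as in doA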

Compose method in recompose Library

I was looking at the compose function in the recompose library by @acdlite, used to compose boundary conditions for Higher Order Components, and this is what the compose function looks like:
const compose = (...funcs) => funcs.reduce((a, b) => (...args) => a(b(...args)), arg => arg);
However, I tried Eric Elliott's one-liner approach to compose, from https://medium.com/javascript-scene/reduce-composing-software-fe22f0c39a1d, specifically this piece of code:
const compose = (...fns) => x => fns.reduceRight((v, f) => f(v), x);
I tried using both of these variants in my React component, like so:
const ListWithConditionalRendering = compose(
  withLoadingIndicator,
  withDataNull,
  withListEmpty
)(Users);
and they both seem to work fine. I am unable to understand whether there is any difference in the way the above functions work, and if so, what it is.
There are a few differences for very niche scenarios that might be helpful to be aware of.
The first one precomposes a function, which means it calls reduce() when it is composed rather than when it will be called. In contrast, the second approach returns a scoped function that calls reduceRight() when it is called, rather than when it was composed.
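One way to see this difference is to log from inside the reducers (a throwaway sketch; the probe names are mine):
const probe1 = (...funcs) =>
  funcs.reduce((a, b) => (console.log('reducing at composition time'), (...args) => a(b(...args))), arg => arg);
const probe2 = (...fns) => x =>
  fns.reduceRight((v, f) => (console.log('reducing at call time'), f(v)), x);
const inc = x => x + 1;
const g1 = probe1(inc, inc); // logs twice, immediately
const g2 = probe2(inc, inc); // logs nothing yet
g2(0);                       // logs twice only now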
The first method accepts multiple arguments to the last function in the array, while the second method only accepts one argument:
const compose1 = (...funcs) => funcs.reduce((a, b) => (...args) => a(b(...args)), arg => arg);
const compose2 = (...fns) => x => fns.reduceRight((v, f) => f(v), x);
const f = s => (...args) => (console.log('function', s, 'length', args.length), args);
compose1(f(1), f(2), f(3))(1, 2, 3);
compose2(f(4), f(5), f(6))(1, 2, 3);
The first method may result in a stack overflow if the array of functions is very large because it is pre-composed, whereas the second method is (relatively)† stack safe:
const compose1 = (...funcs) => funcs.reduce((a, b) => (...args) => a(b(...args)), arg => arg);
const compose2 = (...fns) => x => fns.reduceRight((v, f) => f(v), x);
const f = v => v;
try {
  compose1.apply(null, Array.from({ length: 1e5 }, () => f))();
  console.log('1 is safe');
} catch (e) {
  console.log('1 failed');
}
try {
  compose2.apply(null, Array.from({ length: 1e5 }, () => f))();
  console.log('2 is safe');
} catch (e) {
  console.log('2 failed');
}
† The second method will still result in a stack overflow if ...fns is too large because arguments are also allocated on the stack.
If you are interested in what structure the reduce-composition actually builds, you can visualize it as follows:
/* original:
const compose = (...funcs) =>
  funcs.reduce((a, b) => (...args) => a(b(...args)), arg => arg);
*/
const compose = (...funcs) =>
  funcs.reduce((a, b) => `((...args) => ${a}(${b}(...args)))`, $_("id"));
const $_ = name =>
  `${name}`;
const id = x => x;
const inc = x => x + 1;
const sqr = x => x * x;
const neg = x => -x;
const computation = compose($_("inc"), $_("sqr"), $_("neg"));
console.log(computation);
/* yields:
((...args) => ((...args) => ((...args) =>
id(inc(...args))) (sqr(...args))) (neg(...args)))
*/
console.log(eval(computation) (2)); // 5 (= id(inc(sqr(neg(2)))))
So what is going on here? I replaced the inner function (...args) => a(b(...args)) with a template string and arg => arg with the $_ helper function. Then I wrapped the template string in parentheses, so that the resulting string represents an IIFE. Last but not least, I pass $_ helper functions with proper names to compose.
$_ is a bit odd but it is really helpful to visualize unapplied/partially applied functions.
You can see from the computational structure that the reduce-composition builds a nested structure of anonymous functions and rest/spread operations are scattered throughout the code.
Visualizing and interpreting partially applied functions is hard. We can simplify it by omitting the inner anonymous function:
const compose = (...funcs) =>
  funcs.reduce($xy("reducer"), $_("id"));
const $_ = name =>
  `${name}`;
const $xy = name => (x, y) =>
  `${name}(${x}, ${y})`;
const id = x => x;
const inc = x => x + 1;
const sqr = x => x * x;
const neg = x => -x;
console.log(
  compose($_("inc"), $_("sqr"), $_("neg"))
  // reducer(reducer(reducer(id, inc), sqr), neg)
);
We can further simplify by actually running the composition:
const compose = (...funcs) =>
  funcs.reduce((a, b) => (...args) => a(b(...args)), $x("id"));
const $x = name => x =>
  `${name}(${x})`;
console.log(
  compose($x("inc"), $x("sqr"), $x("neg")) (2) // id(inc(sqr(neg(2))))
);
I believe that the visualization of complex computations like this is a powerful technique to comprehend them correctly and to gain a better understanding of nested/recursive computational structures.
Implementation show and tell? Okay -
const identity = x =>
  x
const compose = (f = identity, ...fs) => x =>
  f === identity
    ? x
    : compose (...fs) (f (x))
const add1 = x =>
  x + 1
console .log
  ( compose () (0)                   // 0
  , compose (add1) (0)               // 1
  , compose (add1, add1) (0)         // 2
  , compose (add1, add1, add1) (0)   // 3
  )
Or instead of using compose in-line ...
const ListWithConditionalRendering = compose(
  withLoadingIndicator,
  withDataNull,
  withListEmpty
)(Users);
You could make a sort of "forward composition" function where the argument comes first -
const $ = x => k =>
  $ (k (x))
const add1 = x =>
  x + 1
const double = x =>
  x * 2
$ (0) (add1) (console.log)
// 1
$ (2) (double) (double) (double) (console.log)
// 16
$ (2) (double) (add1) (double) (console.log)
// 10
$ is useful when you can maintain a pattern of -
$ (value) (pureFunc) (pureFunc) (pureFunc) (...) (effect)
Above, $ puts a value into a sort of "pipeline", but there's no way to take the value out. A small adjustment allows us to write very flexible variadic expressions. Below, we use $ as a way of delimiting the beginning and ending of a pipeline expression.
const $ = x => k =>
  k === $
    ? x
    : $ (k (x))
const double = x =>
  x * 2
const a =
  $ (2) (double) ($)
const b =
  $ (3) (double) (double) (double) ($)
console .log (a, b)
// 4 24
This variadic interface gives you the ability to write expressions similar to the coveted |> operator found in other more function-oriented languages -
value
|> pureFunc
|> pureFunc
|> ...
|> pureFunc
5 |> add1
|> double
|> double
// 24
Using $, that translates to -
$ (value) (pureFunc) (pureFunc) (...) (pureFunc) ($)
$ (5) (add1) (double) (double) ($) // 24
The technique also mixes nicely with curried functions -
const $ = x => k =>
  $ (k (x))
const add = x => y =>
  x + y
const mult = x => y =>
  x * y
$ (1) (add (2)) (mult (3)) (console.log)
// 9
Or with a slightly more interesting example -
const $ = x => k =>
  $ (k (x))
const flatMap = f => xs =>
  xs .flatMap (f)
const join = y => xs =>
  xs .join (y)
const twice = x =>
  [ x, x ]
$ ('mississippi')
  (([...chars]) => chars)
  (flatMap (twice))
  (join (''))
  (console.log)
// 'mmiissssiissssiippppii'

How does reduce work in this scenario?

I was looking into pipe functions and I came across this reduce call, which takes the _pipe function as its parameter. The _pipe function has two params, a and b, and returns another function.
How does reduce work here?
add3 = x => x + 3;
add5 = x => x + 5;
const _pipe = (a, b) => (arg) => b(a(arg))
const pipe = (...ops) => ops.reduce(_pipe)
const add = pipe(add3, add5)
add(10)
output
18
Look at the pipe function definition:
const pipe = (...ops) => ops.reduce(_pipe)
It receives an array of functions called ops (collected with rest parameters).
Then we call reduce on the ops array. The reducer function has 2 params: the accumulator and the current value.
Here is our _pipe written with human-readable variable names:
const _pipe = (accumulator, currentValue) => (arg) => currentValue(accumulator(arg));
So the result of reducing the ops array with _pipe is a single combined function.
If it's [add3, add5] then the result will be (arg) => add5(add3(arg)).
If it's [add3, add5, add2] then the result is (arg) => add2(accumulator(arg)), where accumulator is (arg) => add5(add3(arg)).
You simply compose all functions from the array using reduce. Then you pass the initial value, which is 10.
It's like: add5(add3(10)) = add5(13) = 18
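Spelled out step by step (my own sketch, reusing add3, add5 and _pipe from the question):
// ops = [add3, add5]; reduce has no initial value, so add3 becomes the first accumulator
const composed = _pipe(add3, add5); // arg => add5(add3(arg))
console.log(composed(10));          // add5(add3(10)) = add5(13) = 18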
The reduce function is equivalent to:
add3 = x => x + 3;
add5 = x => x + 5;
const _pipe = (a, b) => (arg) => b(a(arg))
const pipe = (...ops) => {
  // create a base function
  let sum = x => x;
  // for every operation
  for (let op of ops) {
    // call _pipe with the accumulated function and the current operation,
    // equivalent to sum = x => op(previousSum(x))
    sum = _pipe(sum, op)
  }
  // return the resulting function
  return x => sum(x)
}
const add = pipe(add3, add5)
console.log(add(10))

function composition with rest operator, reducer and mapper

I'm following an article about Transducers in JavaScript, and in particular I have defined the following functions
const reducer = (acc, val) => acc.concat([val]);
const reduceWith = (reducer, seed, iterable) => {
  let accumulation = seed;
  for (const value of iterable) {
    accumulation = reducer(accumulation, value);
  }
  return accumulation;
}
const map =
  fn =>
    reducer =>
      (acc, val) => reducer(acc, fn(val));
const sumOf = (acc, val) => acc + val;
const power =
  (base, exponent) => Math.pow(base, exponent);
const squares = map(x => power(x, 2));
const one2ten = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
res1 = reduceWith(squares(sumOf), 0, one2ten);
const divtwo = map(x => x / 2);
Now I want to define a composition operator
const more = (f, g) => (...args) => f(g(...args));
and I see that it is working in the following cases
res2 = reduceWith(more(squares,divtwo)(sumOf), 0, one2ten);
res3 = reduceWith(more(divtwo,squares)(sumOf), 0, one2ten);
which are equivalent to
res2 = reduceWith(squares(divtwo(sumOf)), 0, one2ten);
res3 = reduceWith(divtwo(squares(sumOf)), 0, one2ten);
The whole script is online.
I don't understand why I can't also compose the last function (sumOf) using the composition operator (more). Ideally I'd like to write
res2 = reduceWith(more(squares,divtwo,sumOf), 0, one2ten);
res3 = reduceWith(more(divtwo,squares,sumOf), 0, one2ten);
but it doesn't work.
Edit
It is clear that my initial attempt was wrong, but even if I define the composition as
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);
I still can't replace compose(divtwo,squares)(sumOf) with compose(divtwo,squares,sumOf)
Finally, I've found a way to implement the composition that seems to work fine:
const more = (f, ...g) => {
  if (g.length === 0) return f;
  if (g.length === 1) return f(g[0]);
  return f(more(...g));
}
Better solution
Here is another solution, with a reducer and no recursion:
const compose = (...fns) => (...x) => fns.reduceRight((v, fn) => fn(v), ...x);
const more = (...args) => compose(...args)();
usage:
res2 = reduceWith(more(squares,divtwo,sumOf), 0, one2ten);
res3 = reduceWith(more(divtwo,squares,sumOf), 0, one2ten);
full script online
Your more operates on only 2 functions. And the problem is that here, more(squares,divtwo)(sumOf), you execute a function, while here, more(squares,divtwo, sumOf), you return a function which expects another call (for example const f = more(squares,divtwo, sumOf); f(args)).
In order to have a variable number of composable functions, you can define a different more for function composition. The regular way of composing any number of functions is a compose or pipe function (the difference is the argument order: pipe takes functions left-to-right in execution order, compose the opposite).
The regular way of defining pipe or compose:
const pipe = (...fns) => x => fns.reduce((v, fn) => fn(v), x);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);
You can change x to (...args) to match your more definition.
Now you can execute any number of functions one by one:
const pipe = (...fns) => x => fns.reduce((v, fn) => fn(v), x);
const compose = (...fns) => x => fns.reduceRight((v, fn) => fn(v), x);
const inc = x => x + 1;
const triple = x => x * 3;
const log = x => { console.log(x); return x; } // log x, then return x for further processing
// left to right application
const pipe_ex = pipe(inc, log, triple, log)(10);
// right to left application
const compose_ex = compose(log, inc, log, triple)(10);
I still can't replace compose(divtwo,squares)(sumOf) with compose(divtwo,squares,sumOf)
Yes, they are not equivalent. And you shouldn't try anyway! Notice that divtwo and squares are transducers, while sumOf is a reducer. They have different types. Don't build a more function that mixes them up.
If you insist on using a dynamic number of transducers, put them in an array:
[divtwo, squares].reduceRight((r, t) => t(r), sumOf)
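For example, with the definitions from the question (a usage sketch; the 96.25 is my own arithmetic):
const xf = [divtwo, squares].reduceRight((r, t) => t(r), sumOf); // divtwo(squares(sumOf))
console.log(reduceWith(xf, 0, one2ten)); // 96.25 - the same as res3 above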
