I have some data in the form:
const data = {
list: [1, 2, 3],
newItem: 5
}
I want to make a function that appends the value of newItem to list resulting in this new version of data:
{
list: [1,2,3,5],
newItem: 5,
}
(Ultimately, I'd remove newItem after it's been moved into the list, but I'm trying to simplify the problem for this question).
I'm trying to do it using pointfree style and Ramda.js, as a learning experience.
Here's where I am so far:
const addItem = R.assoc('list',
R.pipe(
R.prop('list'),
R.append(R.prop('newItem'))
)
)
The idea is to generate a function that accepts data, but in this example the call to R.append also needs a reference to data. I'm trying to avoid explicitly mentioning data in order to maintain pointfree style.
Is this possible to do without mentioning data?
If I understand correctly you want to go from {x:3, y:[1,2]} to [1,2,3]. Here's one way:
const fn = compose(apply(append), props(['x', 'y']))
fn({x:3, y:[1,2]});
//=> [1,2,3]
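To see why this works, it can help to trace the two steps (my own annotation; it assumes the Ramda functions are in scope):
props(['x', 'y'])({x: 3, y: [1, 2]}) //=> [3, [1, 2]]
apply(append)([3, [1, 2]])           //=> same as append(3, [1, 2]), i.e. [1, 2, 3]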
As the discussion on the answer from customcommander shows, there are two different possible interpretations.
If you want to just receive [1, 2, 3, 5], then you can do it as customcommander does, or the way I would choose:
const fn1 = lift (append) (prop ('newItem'), prop ('list'))
But if you wanted something like {list: [1, 2, 3, 5], newItem: 5}, then you might use the above inside applySpec and combine that with a merge, like this:
const fn2 = chain (mergeLeft, applySpec ({list: fn1}))
Here's a snippet:
const fn1 = lift (append) (prop ('newItem'), prop ('list'))
const fn2 = chain (mergeLeft, applySpec ({list: fn1}))
const data = {list: [1, 2, 3], newItem: 5}
console .log (fn1 (data)) //=> [1, 2, 3, 5]
console .log (fn2 (data)) //=> {list: [1, 2, 3, 5], newItem: 5}
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
<script> const {lift, append, prop, chain, mergeLeft, applySpec} = R </script>
This second one is a little unwieldy once you inline fn1. It repeats the property name list in two places, and that always bothers me. But I don't have a good solution at the moment.
I've several times wanted a combination of R.evolve and R.applySpec, which works on the outside like evolve, letting you specify only the properties which need to change, but whose transformation functions are given the whole input object, and not just the corresponding property.
With something like that, this might look like:
const f3 = evolveSpec ({
list: ({list, newItem}) => [...list, newItem]
})
or using the above:
const f3 = evolveSpec ({
list: lift (append) (prop ('newItem'), prop ('list'))
})
I think this might be a useful candidate for inclusion in Ramda.
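For what it's worth, here is a minimal sketch of what such an evolveSpec helper could look like (hypothetical code, not part of Ramda):
const evolveSpec = (spec) => (obj) =>
  Object.fromEntries(
    Object.entries(obj).map(([k, v]) =>
      // run the spec's function, if any, on the whole object
      [k, typeof spec[k] === 'function' ? spec[k](obj) : v]
    )
  );
evolveSpec({list: ({list, newItem}) => [...list, newItem]})({list: [1, 2, 3], newItem: 5})
//=> {list: [1, 2, 3, 5], newItem: 5}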
const addItem = R.chain
( R.assoc('list') )
( R.converge(R.append, [R.prop('newItem'), R.prop('list')]) );
const data = {
list: [1, 2, 3],
newItem: 5
};
console.log(addItem(data));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
And here is why:
First we can have a look at what the current addItem is supposed to look like when not point free:
const addItem = x => R.assoc('list')
(
R.pipe(
R.prop('list'),
R.append(R.prop('newItem')(x))
)(x)
)(x);
console.log(addItem({ list: [1, 2, 3], newItem: 5 }));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
It takes some data and uses it in three places. We can refactor a bit:
const f = R.assoc('list');
const g = x => R.pipe(
R.prop('list'),
R.append(R.prop('newItem')(x))
)(x)
const addItem = x => f(g(x))(x);
console.log(addItem({ list: [1, 2, 3], newItem: 5 }));
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.28.0/ramda.min.js"></script>
The x => f(g(x))(x) part might not be obvious immediately, but looking at the list of common combinators in JavaScript it can be identified as S_:
Name  | #  | Haskell | Ramda | Sanctuary | Signature
----- | -- | ------- | ----- | --------- | ------------------------------
chain | S_ | (=<<)   | chain | chain     | (a → b → c) → (b → a) → b → c
Thus x => f(g(x))(x) can be simplified pointfree to R.chain(f)(g).
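(When chain is applied to functions, Ramda treats them as the function monad, where chain(f)(g)(x) is f(g(x))(x). A quick check, assuming R is in scope:)
const h = a => b => [a, b];
const k = x => x * 2;
R.chain(h)(k)(10);      //=> [20, 10]
(x => h(k(x))(x))(10);  //=> [20, 10]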
This leaves g, which still takes one argument and uses it in two places. The ultimate goal is to extract two properties from an object and pass them to R.append(); this can be expressed more easily (and pointfree) with R.converge() as:
const g = R.converge(R.append, [R.prop('newItem'), R.prop('list')]);
Substituting the f and g back gives
const addItem = R.chain
( R.assoc('list') )
( R.converge(R.append, [R.prop('newItem'), R.prop('list')]) );
Related
Two days ago, I announced a preview release of Underscore that integrates with the new Node.js way of natively supporting ES modules.¹ Yesterday, somebody responded on Twitter with the following question:
Can you do Ramda-style data last functions?
He or she was referring to one of the main differences between Underscore and Ramda. In Underscore, functions typically take the data to be operated on as the first parameter, while in Ramda they take it as the last:
import _ from 'underscore';
import * as R from 'ramda';
const square = x => x * x;
// Underscore
_.map([1, 2, 3], square); // [1, 4, 9]
// Ramda
R.map(square, [1, 2, 3]); // [1, 4, 9]
The idea behind the data-last order in Ramda is that when doing partial application, the data argument is often supplied last. Taking the data as the last parameter removes the need for a placeholder in such cases:
// Let's create a function that maps `square` over its argument.
// Underscore
const mapSquare = _.partial(_.map, _, square);
// Ramda with explicit partial application
const mapSquare = R.partial(R.map, [square]);
// Ramda, shorter notation through automatic currying
const mapSquare = R.map(square);
// Ramda with currying and placeholder if it were data-first
const mapSquare = R.map(R.__, square)
// Behavior in all cases
mapSquare([1, 2, 3]); // [1, 4, 9]
mapSquare([4, 5, 6]); // [16, 25, 36]
As the example shows, it is especially the curried notation that makes data-last attractive for such scenarios.
Why doesn't Underscore do this? There are several reasons for that, which I put in a footnote.² Nevertheless, making Underscore behave like Ramda is an interesting exercise in functional programming. In my answer below, I'll show how you can do this in just a few lines of code.
¹ At the time of writing, if you want to try it, I recommend installing underscore#preview from NPM. This ensures that you get the latest preview version. I just published a fix that bumped the version to 1.13.0-1. I will release 1.13.0 as underscore#latest some time in the near future.
² Reasons for Underscore not to implement data-last or currying:
Underscore was born when Jeremy Ashkenas factored out common patterns from DocumentCloud (together with Backbone). As it happens, neither data-last partial application nor currying were common patterns in that application.
Changing Underscore from data-first to data-last would break a lot of code.
It is not a universal rule that data are supplied last in partial application; supplying the data first is equally imaginable. Thus, data-last isn't fundamentally better, it's just making a different tradeoff.
While currying is nice, it also has some disadvantages: it adds overhead and it fixes the arity of a function (unless you make the function lazy, which adds more overhead). Underscore works more with optional and variadic arguments than Ramda, and also prefers making features that add overhead opt-in instead of enabling them by default.
Taking the question very literally, let's just start with a function that transforms a data-first function into a data-last function:
const dataLast = f => _.restArguments(function(args) {
args.unshift(args.pop());
return f.apply(this, args);
});
const dataLastMap = dataLast(_.map);
dataLastMap(square, [1, 2, 3]); // [1, 4, 9]
We could map dataLast over Underscore to get a data-last version of the entire library:
const L = _.mapObject(_, dataLast);
const isOdd = x => x % 2;
L.map(square, [1, 2, 3]); // [1, 4, 9]
L.filter(isOdd, [1, 2, 3]); // [1, 3]
However, we can do better. Ramda-style currying is not too hard to implement, either:
const isPlaceholder = x => x === _;
function curry(f, arity = f.length, preArgs = []) {
const applied = _.partial.apply(null, [f].concat(preArgs));
return _.restArguments(function(args) {
const supplied = _.countBy(args, isPlaceholder)['false'];
if (supplied < arity) {
return curry(applied, arity - supplied, args);
} else {
return applied.apply(null, args);
}
});
}
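A quick sanity check of this curry, using _ itself as the placeholder (which is what _.partial expects by default):
const add3 = (a, b, c) => a + b + c;
const curriedAdd3 = curry(add3);
curriedAdd3(1)(2)(3);    // 6
curriedAdd3(1, 2)(3);    // 6
curriedAdd3(_, 2)(1, 3); // 6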
With just a little bit of extra sophistication, we can even correctly support this-bindings:
function curry(f, arity = f.length, preArgs = [], thisArg) {
if (!_.isUndefined(thisArg)) f = f.bind(thisArg);
const applied = _.partial.apply(null, [f].concat(preArgs));
return _.restArguments(function(args) {
const supplied = _.countBy(args, isPlaceholder)['false'];
if (supplied < arity) {
return curry(applied, arity - supplied, args, this);
} else {
return applied.apply(this, args);
}
});
}
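For example, here is a contrived illustration of the this handling (my own example, using the fourth parameter to pre-bind the context):
const counter = {
  factor: 2,
  scale(x) { return x * this.factor; }
};
// bind `this` to counter up front via the thisArg parameter
const curriedScale = curry(counter.scale, 1, [], counter);
curriedScale(5); // 10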
Currying by itself is independent of whether you do data-first or data-last. Here's a curried version of _.map that is still data-first:
const curriedMap = curry(_.map);
curriedMap([1, 2, 3], square, null);
curriedMap([1, 2, 3])(square, null);
curriedMap([1, 2, 3])(square)(null);
curriedMap([1, 2, 3], square)(null);
curriedMap([1, 2, 3], _, null)(square);
curriedMap(_, _, null)([1, 2, 3], square);
curriedMap(_, _, null)(_, square)([1, 2, 3]);
curriedMap(_, square, _)(_, null)([1, 2, 3]);
// all [1, 4, 9]
Note that I have to pass null every time, because _.map takes an optional third argument that lets you bind the callback to a context. This eager style of currying forces you to pass a fixed number of arguments. In the Variation section below, I'll show how this can be avoided with a lazy variant of curry.
The Ramda library omits the optional context parameter instead, so you need to pass exactly two instead of exactly three arguments to R.map. We can write a function that composes dataLast and curry and that optionally adjusts the arity, in order to make an Underscore function behave exactly like its Ramda counterpart:
const ramdaLike = (f, arity = f.length) => curry(dataLast(f), arity);
const ramdaMap = ramdaLike(_.map, 2);
ramdaMap(square, [1, 2, 3]);
ramdaMap(square)([1, 2, 3]);
ramdaMap(_, [1, 2, 3])(square);
// all [1, 4, 9]
Mapping this over the entire library requires some administration in order to get a satisfying result, but the result is a surprisingly faithful imitation of Ramda:
const arityOverrides = {
map: 2,
filter: 2,
reduce: 3,
extend: 2,
defaults: 2,
// etcetera, as desired
};
const R_ = _.extend(
// start with just passing everything through `ramdaLike`
_.mapObject(_, f => ramdaLike(f)),
// then replace a subset with arity overrides
_.mapObject(arityOverrides, (arity, name) => ramdaLike(_[name], arity)),
);
R_.identity(1); // 1
R_.map(square)([1, 2, 3]); // [1, 4, 9]
R_.filter(isOdd)([1, 2, 3]); // [1, 3]
const add = (a, b) => a + b;
const sum = R_.reduce(add, 0);
sum([1, 2, 3]); // 6
Variation
At the cost of introducing laziness, we can avoid having to fix the arity of a function. This lets us preserve all the optional and variadic parameters from the original Underscore functions, without always needing to supply them, and removes the need for per-function administration when mapping the library. We start with a variant of curry that returns a lazy function instead of an eager one:
function curryLazy(f, preArgs = [], thisArg) {
if (!_.isUndefined(thisArg)) f = f.bind(thisArg);
const applied = _.partial.apply(null, [f].concat(preArgs));
return _.restArguments(function(args) {
if (args.length > 0) {
return curryLazy(applied, args, this);
} else {
return applied.call(this);
}
});
}
This is basically R.curry with a built-in R.thunkify on top. Note that this implementation is actually a bit simpler than the eager variant. On top of that, creating a lazy, Ramda-like port of Underscore is reduced to an elegant one-liner:
const LR_ = _.mapObject(_, _.compose(curryLazy, dataLast));
We can now pass as many or as few arguments to each function as we want. We just have to append an extra call without arguments in order to force evaluation:
LR_.identity(1)(); // 1
LR_.map([1, 2, 3])(); // [1, 2, 3]
LR_.map(square)([1, 2, 3])(); // [1, 4, 9]
LR_.map(_, [1, 2, 3])(square)(); // [1, 4, 9]
LR_.map(Math.sqrt)(Math)([1, 4, 9])(); // [1, 2, 3]
LR_.filter([1, false, , '', 'yes'])(); // [1, 'yes']
LR_.filter(isOdd)([1, 2, 3])(); // [1, 3]
LR_.filter(_, [1, 2, 3])(isOdd)(); // [1, 3]
LR_.filter(window.confirm)(window)([1, 2, 3])(); // depends on user
LR_.extend({a: 1})({a: 2, b: 3})();
// {a: 1, b: 3}
LR_.extend({a: 1})({a: 2, b: 3})({a: 4})({b: 5, c: 6})();
// {a: 4, b: 3, c: 6}
This trades some faithfulness to Ramda for faithfulness to Underscore. In my opinion, it is the best of both worlds: data-last currying as in Ramda, with all the parametric flexibility of Underscore.
References:
Underscore documentation
Ramda documentation
Given two arrays of the same length, return an array containing the mathematical difference between each pair of corresponding elements.
Example:
a = [3, 4, 7]
b = [3, 9, 10]
results: c = [(3-3), (9-4), (10-7)] so that c = [0, 5, 3]
let difference = []
function calculateDifferenceArray(data_one, data_two){
let i = 0
for (i in data_duplicates) {
difference.push(data_two[i]-data_one[i])
}
console.log(difference)
return difference
}
calculateDifferenceArray((b, a))
It does work.
I am wondering if there is a more elegant way to achieve the same result.
Use map as follows:
const a = [3, 4, 7]
const b = [3, 9, 10]
const c = b.map((e, i) => e - a[i])
// [0, 5, 3]
for-in isn't a good tool for looping through arrays (more in my answer here).
"More elegant" is subjective, but it can be more concise and, to my eyes, clear if you use map:
function calculateDifferenceArray(data_one, data_two){
return data_one.map((v1, index) => v1 - data_two[index])
}
calculateDifferenceArray(b, a) // < Note just one set of () here
Live Example:
const a = [3, 4, 7];
const b = [3, 9, 10 ];
function calculateDifferenceArray(data_one, data_two){
return data_one.map((v1, index) => v1 - data_two[index]);
}
console.log(calculateDifferenceArray(b, a));
or if you prefer it slightly more verbose for debugging and the like:
function calculateDifferenceArray(data_one, data_two){
return data_one.map((v1, index) => {
const v2 = data_two[index]
return v1 - v2
})
}
calculateDifferenceArray(b, a)
A couple of notes on the version of this in the question:
It seems to loop over something (data_duplicates?) unrelated to the two arrays passed into the method.
It pushes to an array declared outside the function. That means if you call the function twice, it'll push the second set of values into the array but leave the first set of values there. That declaration and initialization should be inside the function, not outside it.
You had two sets of () in the calculateDifferenceArray call. That meant you only passed one argument to the function, because the inner () wrapped an expression with the comma operator, which takes its second operand as its result.
You had the order of the subtraction operation backward.
You could use the higher-order array method map. It would work something like this:
let a = [2,3,4];
let b = [3,5,7];
let difference = a.map((n,i)=>n-b[i]);
console.log(difference);
You can read more about map here.
JavaScript's array.sort method takes an optional compare function as an argument, which takes two arguments and decides which one of them is smaller than the other.
However, sometimes it would be more convenient to customize the sort order with a key function, which is a function that takes one value as an argument and assigns it a sort key. For example:
function keyFunc(value){
return Math.abs(value);
}
myArr = [1, 3, -2];
myArr.sort(keyFunc);
// the result should be [1, -2, 3]
Does JavaScript have support for this, or is there no way around writing a full-blown comparison function?
There's no support for exactly what you describe, but it's quite trivial to write a standard .sort compare function that achieves the same thing with minimal code: just return the difference between calling keyFunc on the two arguments to sort:
function keyFunc(value){
// complicated custom logic here, if desired
return Math.abs(value);
}
myArr = [1, 3, -2];
myArr.sort((a, b) => keyFunc(a) - keyFunc(b));
console.log(myArr);
// the result should be [1, -2, 3]
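One caveat worth adding (my addition, not part of the answer above): subtracting keys only works when keyFunc returns numbers. If the key may be a string or another non-numeric value, a comparison-based wrapper is safer. A small sketch, with compareBy as a hypothetical helper name:
const compareBy = keyFunc => (a, b) => {
  const ka = keyFunc(a), kb = keyFunc(b);
  // compare with < and > instead of subtracting
  return ka < kb ? -1 : ka > kb ? 1 : 0;
};
['banana', 'fig', 'apple'].sort(compareBy(s => s.length));
//=> ['fig', 'apple', 'banana']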
If the key function is complicated and you don't want to run it more than necessary, then it would be pretty simple to create a lookup table of previous results, consulting it whenever keyFunc has already been called with that value:
const keyValues = new Map();
function keyFunc(value){
const previous = keyValues.get(value);
if (previous !== undefined) return previous
console.log('running expensive operations for ' + value);
// complicated custom logic here, if desired
const result = Math.abs(value);
keyValues.set(value, result);
return result;
}
myArr = [1, 3, -2];
myArr.sort((a, b) => keyFunc(a) - keyFunc(b));
console.log(myArr);
// the result should be [1, -2, 3]
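An alternative that also calls keyFunc exactly once per element, without a shared cache, is the classic decorate-sort-undecorate pattern; sortByKey below is a hypothetical helper, sketched for numeric keys:
const sortByKey = (arr, keyFunc) =>
  arr.map(value => [keyFunc(value), value])  // decorate with the key
     .sort(([ka], [kb]) => ka - kb)          // sort on the precomputed key
     .map(([, value]) => value);             // undecorate
sortByKey([1, 3, -2], Math.abs); //=> [1, -2, 3]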
As stated already, you have to write that functionality yourself or extend the built-in array sort method, etc.
Another approach is if you are using lodash and its orderBy method ... then this becomes:
myArr=[1, 3, -2];
const result = _.orderBy(myArr, Math.abs)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.11/lodash.min.js"></script>
You could use a closure over the wanted function.
const
keyFunc = value => Math.abs(value),
sortBy = fn => (a, b) => fn(a) - fn(b),
array = [1, 3, -2];
array.sort(sortBy(keyFunc));
console.log(array); // [1, -2, 3]
You can easily subtract the "keys" of the two elements:
myArr.sort((a, b) => keyFunc(a) - keyFunc(b));
You could also monkey patch sort:
{
const { sort } = Array.prototype;
Array.prototype.sort = function(sorter) {
if(sorter.length === 2) {
sort.call(this, sorter);
} else {
sort.call(this, (a, b) => sorter(a) - sorter(b));
}
};
}
So then:
myArr.sort(keyFunc);
works.
I'm wondering if there is a way, using a transducer, to flatten a list and filter on unique values.
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
But here we loop over the list twice (plus n more times per depth layer). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
if (isArray(c)) {
c.forEach(o => p = flattenDeepUniq(p, o));
}
else {
p = !p.includes(c) ? [...p, c] : p;
}
return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
We have one loop over the elements (plus n loops for the deeper layers), which seems better and more optimized.
Is it even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
// ^               ^               ^             ^
// |               |               |             `-- Iteration 4
// |               |               `---------------- Iteration 3
// |               `-------------------------------- Iteration 2
// `------------------------------------------------ Iteration 1
into something like this:
xs.reduce((res, x) => res.length >= 5 ? res : isOdd(x * 7 + 3) ? res.concat(x * 7 + 3) : res, [])
// ^
// `-- Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
map(multiply(7)),
map(add(3)),
filter(isOdd),
take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
  (a, x) => a.concat(Array.isArray(x) ? flatten(x) : [x]),
  []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
  (a, x) => a.concat(Array.isArray(x) ? flatten(x) : [x]),
  []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => [...xs].reduce((s, x) => s.add(x), set) // spread so xs may be an array or a Set
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
  (s, x) => addAll(s, Array.isArray(x) ? flattenUniq(x) : [x]),
  new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, Array.isArray(x) ? flattenUniq(x) : [x]),
  new Set()
))
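Applied to the question's data:
flattenUniq([1, 2, [2, 3], [1, [4, 5]]]); //=> [1, 2, 3, 4, 5]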
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda, so you can use it directly. As for flatten, you can traverse the tree up front to produce a flat stream of values:
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function*(list) {
for (const it of list) {
yield* Array.isArray(it) ? deepIterate(it) : [it];
}
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
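(For clarity, the generator alone yields the flattened values, duplicates included; uniq then takes care of the rest:)
[...deepIterate(arr)]; //=> [1, 2, 2, 3, 1, 4, 5]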
This lets you compose additional transducers (isOdd is assumed here, e.g. const isOdd = x => x % 2 === 1):
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))
I'm working on better understanding functional programming in javascript, but I'm a bit confused by what I've seen fed to map functions. Take the example below:
const f = x => (a, b, c) => b + a;
const arr = [1, 2, 3, 4, 5, 6];
const m = arr.map(f(1));
document.write(m);
When f returns a it will print each value, as expected. If it returns b it seems to return the index, and c will return the entire array for each value. Is there a reason why this function works this way?
The Array.prototype.map() callback function is invoked with three arguments:
The current element of the iteration
The index of the current element of the iteration
The array that .map() function was called upon
f returns a function, which is set as the callback of .map().
See also Array.from()
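A quick demonstration of all three parameters:
[10, 20].map((value, index, array) => {
  console.log(value, index, array);
  return value * 2;
});
// logs: 10 0 [10, 20]
//       20 1 [10, 20]
//=> [20, 40]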
In your example, you are invoking map with a 3-ary callback where:
a -> current element
b -> current index
c -> original array
and returning c. Therefore, your result will be a new array containing a reference to the original array for every element iterated over.
Since you aren't doing anything with x, there is no need for a nested function here. A better example of how you can use this concept would be something like:
const add = a => b => a + b
const arr = [1, 2, 3, 4]
const newArr = arr.map(add(3))
// [4, 5, 6, 7]
console.log(newArr)