What's the difference between transform and reduce in lodash - javascript

Other than stating "transform is a more powerful alternative to reduce", I can find no documentation of what the actual differences are. What are the differences between transform and reduce in lodash (other than transform being about 25% slower)?

I like to dive into the source code before I pull in utilities. For lodash this can be difficult, as there is a ton of abstracted internal functionality shared across the utilities.
transform source
reduce source
So the obvious differences are:
If you don't specify the accumulator (commonly referred to as memo if you're used to Underscore), _.transform will guess whether you want an array or an object, while reduce will make the accumulator the initial item of the collection.
Example, array or object map via transform:
_.transform([1, 2, 3], function(memo, val, idx) {
  memo[idx] = val + 5;
});
// => [6, 7, 8]
Versus reduce (note, you have to know the input type!)
_.reduce([1, 2, 3], function(memo, val, idx) {
  memo[idx] = val + 5;
  return memo;
}, []);
So while with reduce the code below will compute the sum of an array, this wouldn't be possible with transform, as the accumulator a would be initialized to an array.
var sum = _.reduce([1, 2, 3, 4], function(a, b) {
  return a + b;
});
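If you really did want a sum out of _.transform, one workaround (a sketch of mine, not from the original answer) is to accumulate into a mutable container, since the accumulator itself never changes identity:
var box = _.transform([1, 2, 3, 4], function(acc, n) {
  acc.sum = (acc.sum || 0) + n; // mutate a property instead of replacing the accumulator
}, {});
box.sum; // => 10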
Another big difference is that you don't have to return the accumulator with transform, and the accumulator can't change identity (i.e. it will always be the same array or object). I'd say this is the biggest advantage of the function, as I've often forgotten to return the accumulator when using reduce.
For example, if you wanted to convert an array to a dictionary of values with reduce, you would write code like the following:
_.reduce([1, 2, 3, 4, 5], function(memo, val) {
  memo[val] = true;
  return memo;
}, {});
Whereas with transform we can write this very nicely, without needing to return the accumulator as in reduce:
_.transform([1, 2, 3, 4, 5], function(memo, val) {
  memo[val] = true;
}, {});
Another distinction is that we can exit the transform iteration early by returning false from the iteratee:
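For instance (a sketch of mine, assuming lodash is loaded), push values until we hit 3, then stop — something _.reduce cannot do:
_.transform([1, 2, 3, 4, 5], function(memo, val) {
  memo.push(val);
  return val < 3; // returning false exits the iteration early
});
// => [1, 2, 3]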
Overall, I'd say reduce is a more flexible method, as you have more control over the accumulator, but transform can be used to write equivalent code for some cases in a simpler style.

transform works on objects as well as arrays, defaulting the accumulator appropriately; note that lodash's _.reduce also iterates objects, but defaults the accumulator to the first value rather than to a new object.
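For example (a quick sketch assuming lodash is loaded), _.transform defaults the accumulator to an object when given a plain object:
_.transform({ a: 1, b: 2, c: 3 }, function(result, value, key) {
  result[key] = value * 2; // iteratee is invoked with (accumulator, value, key)
});
// => { a: 2, b: 4, c: 6 }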

Related

Why does map or reduce keep running without any condition given?

const array = [7, 2, 4, 1, 10, 6, 5, 11]
const max = array.reduce((acc, val) => {
  console.log(val, acc)
  return val > acc ? val : acc
}, 0)
console.log(max)
I was looking at this code using the reduce array method, and one thing I couldn't understand at all is how the reducer function goes to the next iteration. There is no condition that forces the reducer function to move on to the next element in the array. In the first iteration, val is 7, the first element of the array, and acc is 0, so the reducer function returns 7 as per the condition written.
My question is: how is the number 7, as the new accumulator, passed to the next call of the reducer function? I thought the normal procedure is that you have to meet some kind of condition to iterate over and over. Is there something written in the reduce method itself? Can you explain, please?
Note that array.reduce, per the reduce spec, "calls the callback, as a function, once for each element after the first element present in the array, in ascending order."
You can think of reduce as being like array.map, but its goal is to reduce the array to a single output.
It will loop over the whole array, the same as forEach/map/...
Check the example below: even though you don't do anything (no return or anything else) in the array.reduce callback, it will still iterate over the whole array.
You could check here for more
But of course, if you don't use return in the array.reduce callback, there is no benefit to using array.reduce.
const array = [7, 2, 4, 1, 10, 6, 5, 11]
const max = array.reduce((acc, val) => {
  console.log(val)
}, 0)
console.log(max)
As per the docs here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/reduce
The reduce() method executes a user-supplied “reducer” callback function on each element of the array,
The reduce method's implementation resembles something like this:
Array.prototype.reduce = function(callback, initialValue) {
  var acc = initialValue;
  for (var i = 0; i < this.length; i++) {
    acc = callback(acc, this[i], i);
  }
  return acc;
}
The reduce method's iteration condition is the array length.
The same goes for map.
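For illustration only, here is a similarly simplified sketch of map (myMap is a hypothetical name to avoid clobbering the built-in; the real spec algorithm also handles holes and more):
Array.prototype.myMap = function(callback) {
  var result = [];
  for (var i = 0; i < this.length; i++) {
    result.push(callback(this[i], i, this)); // the callback runs once per index
  }
  return result;
};
[1, 2, 3].myMap(function(x) { return x * 2; }); // => [2, 4, 6]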

Use reduce and recursion in javascript to flatten nested array

I found this interesting code example, which flattens an array using recursion and the reduce function instead of flat. I get what it does, except for the following:
acc = acc.concat(flatWithRec(item)); — why is the accumulator being reassigned? How is that possible in a reduce function? And why use concat here?
return acc; — why is acc being returned? Is it a new flat array each time the function is called?
Is there a way to still use recursion and make it easier for the reader to understand?
Please clarify.
function flatWithRec(arr) {
  const flatArray = arr.reduce((acc, item) => {
    if (Array.isArray(item)) {
      acc = acc.concat(flatWithRec(item));
    } else {
      acc.push(item);
    }
    return acc;
  }, []);
  return flatArray;
}
console.log(flatWithRec([1, [2, 3, [4], [5, 6, [7]]]]))
// output: [1, 2, 3, 4, 5, 6, 7]
The accumulator is an array. You reassign it to give it the new array containing the items you had at the beginning of the iteration plus the items to add. As said in the comments, acc.concat returns a new array containing the items of the arrays passed as parameters. See https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/concat
You need to return the accumulator at the end of each iteration so that the new value is taken into account in the next one.
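A minimal illustration of what goes wrong when you forget (my own example; expect it to throw):
[1, 2, 3].reduce((acc, item) => {
  acc.concat(item); // a new array is created but discarded; nothing is returned
}, []);
// => TypeError on the second iteration, because acc is undefined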
acc = acc.concat(flatWithRec(item)); — why is the accumulator being reassigned? How is that possible in a reduce function? And why use concat here?
The accumulator (acc) is a function argument, and can be re-assigned, although it's not good practice to do so. concat combines two items (arrays or otherwise) and returns a new array, so you need to assign the result back to acc as the result of the current iteration.
return acc; — why is acc being returned? Is it a new flat array each time the function is called?
The accumulator holds the current state of the reduced items, in your case the current flat array after each iteration. You need to return it so the next iteration can continue to accumulate.
Is there a way to still use recursion and make it easier for the reader to understand?
My take on flatWithRec - always concat, but if it's an array call flatWithRec on it before concatenating:
function flatWithRec(arr) {
return arr.reduce((acc, item) =>
acc.concat(
Array.isArray(item)
? flatWithRec(item)
: item
), []);
}
const result = flatWithRec([1, [2, 3, [4], [5, 6, [7]]]])
console.log(result) // output: [1, 2, 3, 4, 5, 6, 7]
So, the callback for the reduce method runs for every item in the array, and whatever is returned from iteration x is passed as the first argument to iteration x+1. So it is essential to make sure that every iteration returns the correct state.
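A quick way to make that hand-off visible (my own illustration, not part of the original question):
[1, 2, 3].reduce((acc, x, i) => {
  console.log(`iteration ${i}: acc = ${acc}, x = ${x}`);
  return acc + x; // this return value becomes acc in the next iteration
}, 0);
// iteration 0: acc = 0, x = 1
// iteration 1: acc = 1, x = 2
// iteration 2: acc = 3, x = 3
// => 6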
acc = acc.concat(flatWithRec(item)) — Why is the accumulator being reassigned? How is it possible in a reduce function? And why use concat here?
We are assigning acc the return value of concat because concat does not change the original array; it returns a fresh array. The accumulator is just like any other parameter, so you can reassign it. It is not at all necessary to use concat (see my solution at the end using push and spread).
return acc; — Why is acc being returned? Is it a new flat array each time the function is called?
As already mentioned, you need to return the correct state from the callback. And yes, during each iteration a new array is created (which is not that performant; see my solution at the end).
Is there a way to still use recursion and make it easier for the reader to understand?
Easier for the reader I can't tell, but here's my solution using arrow functions and spread.
const flatWithRec = (arr) =>
  arr.reduce((acc, item) => (
    Array.isArray(item) ? acc.push(...flatWithRec(item)) : acc.push(item), acc
  ), []);
console.log(flatWithRec([1, [2, 3, [4], [5, 6, [7]]]]));
Yes, there is a way to use recursion and make it easier to understand:
function f(A, i = 0) {
  return i == A.length
    ? []
    : (Array.isArray(A[i]) ? f(A[i]) : [A[i]]).concat(f(A, i + 1));
}
var A = [1, [2, 3, [4], [5, 6, [7]]]];
console.log(JSON.stringify(A));
console.log(JSON.stringify(f(A)));

Does JS support sorting with a key function, rather than a comparator?

JavaScript's array.sort method takes an optional compare function as argument, which takes two arguments and decides which one of them is smaller than the other.
However, sometimes it would be more convenient to customize the sort order with a key function, which is a function that takes one value as an argument and assigns it a sort key. For example:
function keyFunc(value){
  return Math.abs(value);
}
myArr = [1, 3, -2];
myArr.sort(keyFunc);
// the result should be [1, -2, 3]
Does JavaScript have support for this, or is there no way around writing a full-blown comparison function?
There's no built-in support for exactly what you describe, but it's quite trivial to achieve the same thing with a standard .sort comparator and minimal code: just return the difference between calling keyFunc on the two arguments to sort:
function keyFunc(value){
  // complicated custom logic here, if desired
  return Math.abs(value);
}
myArr = [1, 3, -2];
myArr.sort((a, b) => keyFunc(a) - keyFunc(b));
console.log(myArr);
// the result should be [1, -2, 3]
If the key function is complicated and you don't want to run it more than necessary, then it would be pretty simple to create a lookup table for each input, accessing the lookup table if keyFunc has been called with that value before:
const keyValues = new Map();
function keyFunc(value){
  const previous = keyValues.get(value);
  if (previous !== undefined) return previous;
  console.log('running expensive operations for ' + value);
  // complicated custom logic here, if desired
  const result = Math.abs(value);
  keyValues.set(value, result);
  return result;
}
myArr = [1, 3, -2];
myArr.sort((a, b) => keyFunc(a) - keyFunc(b));
console.log(myArr);
// the result should be [1, -2, 3]
As stated already, you have to write that functionality yourself or extend the current array sort method, etc.
Another approach: if you are using lodash, with its orderBy method this becomes:
myArr = [1, 3, -2];
const result = _.orderBy(myArr, Math.abs)
console.log(result)
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.11/lodash.min.js"></script>
You could use a closure over the wanted function.
const
  keyFunc = value => Math.abs(value),
  sortBy = fn => (a, b) => fn(a) - fn(b),
  array = [1, 3, -2];

array.sort(sortBy(keyFunc));
console.log(array); // [1, -2, 3]
You can easily subtract the "keys" from the two elements:
myArr.sort((a, b) => keyFunc(a) - keyFunc(b));
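Note that the subtraction trick only works when keyFunc returns numbers. For string keys or other ordered values, a comparison-based wrapper is safer (a sketch; byKey is a name I'm introducing):
const byKey = keyFn => (a, b) => {
  const ka = keyFn(a), kb = keyFn(b);
  return ka < kb ? -1 : ka > kb ? 1 : 0; // compare keys without arithmetic
};
['banana', 'fig', 'apple'].sort(byKey(s => s.length)); // => ['fig', 'apple', 'banana']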
You could also monkey patch sort:
{
  const { sort } = Array.prototype;
  Array.prototype.sort = function(sorter) {
    if (typeof sorter !== 'function' || sorter.length === 2) {
      // no argument, or a regular two-argument comparator: defer to the native sort
      return sort.call(this, sorter);
    }
    // a one-argument key function: wrap it in a comparator
    return sort.call(this, (a, b) => sorter(a) - sorter(b));
  };
}
So then:
myArr.sort(keyFunc);
works.
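For example (a sketch reusing the question's keyFunc; note that patching built-ins like this is generally discouraged):
[1, 3, -2].sort(keyFunc);         // arity 1, treated as a key function => [1, -2, 3]
[1, 3, -2].sort((a, b) => a - b); // arity 2, a regular comparator => [-2, 1, 3]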

Transducer flatten and uniq

I'm wondering if there is a way by using a transducer for flattening a list and filter on unique values?
By chaining, it is very easy:
import {uniq, flattenDeep} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
uniq(flattenDeep(arr)); // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
But here we loop over the list twice (plus once more per depth level). Not ideal.
What I'm trying to achieve is to use a transducer for this case.
I've read Ramda documentation about it https://ramdajs.com/docs/#transduce, but I still can't find a way to write it correctly.
Currently, I use a reduce function with a recursive function inside it:
import {isArray} from 'lodash';
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const flattenDeepUniq = (p, c) => {
  if (isArray(c)) {
    c.forEach(o => p = flattenDeepUniq(p, o));
  } else {
    p = !p.includes(c) ? [...p, c] : p;
  }
  return p;
};
arr.reduce(flattenDeepUniq, []) // -> [1, 2, 3, 4, 5]
<script src="https://cdnjs.cloudflare.com/ajax/libs/lodash.js/4.17.10/lodash.core.min.js"></script>
We have one loop over the elements (plus the nested passes for deeper layers), which seems better and more optimized.
Is this even possible to use a transducer and an iterator in this case?
For more information about Ramda transduce function: https://gist.github.com/craigdallimore/8b5b9d9e445bfa1e383c569e458c3e26
Transducers don't make much sense here. Your data structure is recursive. The best code to deal with recursive structures usually requires recursive algorithms.
How transducers work
(Roman Liutikov wrote a nice introduction to transducers.)
Transducers are all about replacing multiple trips through the same data with a single one, combining the atomic operations of the steps into a single operation.
A transducer would be a good fit to turn this code:
xs.map(x => x * 7).map(x => x + 3).filter(isOdd).take(5)
// ^               ^               ^             ^
// |               |               |             `------ Iteration 4
// |               |               `-------------------- Iteration 3
// |               `------------------------------------ Iteration 2
// `---------------------------------------------------- Iteration 1
into something like this:
xs.reduce((r, x) => r.length >= 5 ? r : isOdd(x * 7 + 3) ? r.concat(x * 7 + 3) : r, [])
//        ^
//        `--------------------------------------------- Just one iteration
In Ramda, because map, filter, and take are transducer-enabled, we can convert
const foo = pipe(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)
foo([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
(which iterates four times through the data) into
const bar = compose(
  map(multiply(7)),
  map(add(3)),
  filter(isOdd),
  take(3)
)
into([], bar, [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]) //=> [17, 31, 45]
which only iterates it once. (Note the switch from pipe to compose. Transducers compose in an order opposite that of plain functions.)
Note the key point of such transducers is that they all operate similarly. map converts a list to another list, as do filter and take. While you could have transducers that operate on different types, and map and filter might also work on such types polymorphically, they will only work together if you're combining functions which operate on the same type.
Flatten is a weak fit for transducers
Your structure is more complex. While we could certainly create a function that will crawl it in some manner (preorder, postorder), and could thus probably start off a transducer pipeline with it, the logical way to deal with a recursive structure is with a recursive algorithm.
A simple way to flatten such a nested structure is something like this:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
(For various technical reasons, Ramda's code is significantly more complex.)
This recursive version, though, is not well-suited to work with transducers, which essentially have to work step-by-step.
Uniq poorly suited for transducers
uniq, on the other hand, makes less sense with such transducers. The problem is that the container used by uniq, if you're going to get any benefit from transducers, has to be one which has quick inserts and quick lookups, a Set or an Object most likely. Let's say we use a Set. Then we have a problem, since our flatten operates on lists.
A different approach
Since we can't easily fold existing functions into one that does what you're looking for, we probably need to write a one-off.
The structure of the earlier solution makes it fairly easy to add the uniqueness constraint. Again, that was:
const flatten = xs => xs.reduce(
  (a, x) => concat(a, isArray(x) ? flatten(x) : [x]),
  []
);
With a helper function for adding all elements to a Set:
const addAll = (set, xs) => xs.reduce((s, x) => s.add(x), set)
We can write a function that flattens, keeping only the unique values:
const flattenUniq = xs => xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
)
Note that this has much the structure of the above, switching only to use a Set and therefore switching from concat to our addAll.
Of course you might want an array, at the end. We can do that just by wrapping this with a Set -> Array function, like this:
const flattenUniq = xs => Array.from(xs.reduce(
  (s, x) => addAll(s, isArray(x) ? flattenUniq(x) : [x]),
  new Set()
))
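A quick usage sketch (assuming isArray is Array.isArray or lodash's isArray, and the addAll helper above is in scope):
const nested = [1, 2, [2, 3], [1, [4, 5]]];
flattenUniq(nested); // => [1, 2, 3, 4, 5]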
You also might consider keeping this result as a Set. If you really want a collection of unique values, a Set is the logical choice.
Such a function does not have the elegance of a points-free transduced function, but it works, and the exposed plumbing makes the relationships with the original data structure and with the plain flatten function much more clear.
I guess you can think of this entire long answer as just a long-winded way of pointing out what user633183 said in the comments: "neither flatten nor uniq are good use cases for transducers."
Uniq is now a transducer in Ramda so you can use it directly. And as for flatten you can traverse the tree up front to produce a bunch of flat values
const arr = [1, 2, [2, 3], [1, [4, 5]]];
const deepIterate = function*(list) {
  for (const it of list) {
    yield* Array.isArray(it) ? deepIterate(it) : [it];
  }
}
R.into([], R.uniq(), deepIterate(arr)) // -> [1, 2, 3, 4, 5]
This lets you compose additional transducers
R.into([], R.compose(R.uniq(), R.filter(isOdd), R.take(5)), deepIterate(arr))

Flattening Array In JavaScript - Explanation needed

I'm reading a book called Eloquent JavaScript. There's an exercise in it that requires one to flatten a heterogeneous array, and after trying for so long and failing to get the answer, I looked up the solution online and couldn't understand the code. I'm hoping someone will be kind enough to explain, especially the argument "flat" and how it's supposed to work. The code is below:
var arrays = [[1, 2, 3], [4, 5], [6]];
console.log(arrays.reduce(function(flat, current) {
  return flat.concat(current);
}, []));
The reduce function defined in the book is:
function reduce(array, combine, start) {
  var current = start;
  for (var i = 0; i < array.length; i++)
    current = combine(current, array[i]);
  return current;
}
and as a method of an array,
arr.reduce(combine, start);
Let's look at each part of the reduce method. The book describes it as "folding up the array, one element at a time." The first argument for reduce is the "combiner function", that accepts two arguments, the "current" value and the "next" item in the array.
Now, the initial "current" value is given as the second argument of the reduce function, and in the solution of flattening arrays, it is the empty array, []. Note that in the beginning, the "next" item in the array is the 0th item.
Quoting the book to observe: "If your array contains at least one element, you are allowed to leave off the start argument."
It may also be confusing that in the flattening solution, current is the second parameter of the combiner function, whereas in the reduce definition above, current is used to hold the cumulative, folded value. In the flattening solution, current refers to the "next" item of arrays (an individual array of integers).
Now, at each step of the reduction, the "current" value plus the next array item is fed to the (anonymous) combiner, and the return value becomes the updated "current" value. That is, we consumed an element of the array and continue with the next item.
flat is merely the name given to the accumulated result. Because we wish to return a flat array, it is an appropriate name. Because an array has the concat function, the first step of the reduce function is, (pretending that I can assign the internal variables)
flat = []; // (assignment by being the second argument to reduce)
Now, walk through the reduction as iterating over arrays, by going through the steps shown above in reduce's definition
for (var i = 0; i < arrays.length; i++)
  flat = combine(flat, arrays[i]);
Calling combine gives [].concat([1, 2, 3]) // => [1, 2, 3]
Then,
flat = [1, 2, 3].concat([4, 5]) // => [1, 2, 3, 4, 5]
and again for the next iteration of the reduction. The final return value of the reduce function is then the final value of flat.
This would be the solution I came with with ES6 format:
const reduced = arrays.reduce((result,array) => result.concat(array),[]);
console.log(reduced);
I have implemented this solution, and it seems to work for nested arrays as well.
function flattenArray(arr) {
  for (var i = 0; i < arr.length; i++) {
    if (arr[i] instanceof Array) {
      // splice the nested array's elements in place of the array itself
      Array.prototype.splice.apply(arr, [i, 1].concat(arr[i]));
      i--; // re-check this index, in case a spliced-in element is itself an array
    }
  }
  return arr;
}
There is an easy way to do these exercises: those functions are already built into JavaScript, so you can use them directly.
But the whole joy of this exercise is to create those functions yourself:
Create a reduce function. The reduce function should add up all array elements. You can use a higher-order function or just a normal one. Here is an example using a higher-order function:
function reduce(array, calculate){
  let sumOfElements = 0;
  for(let element of array){
    sumOfElements = calculate(sumOfElements, element)
  }
  return sumOfElements
}
The next step is to create a concat function. Since we need to return those reduced values in a new array, we will just return them. (Warning: you must use a rest parameter.)
function concat(...arr){
  return arr
}
And lastly, just display it (you can use any example):
console.log(concat(reduce([1, 2, 3, 4], (a, b) => a + b), reduce([5, 6], (a, b) => a + b)))
The reduce method acts like a for loop, iterating over each element in an array. The solution takes each array element and concatenates it onto the accumulated result, which flattens the array.
var arr = [[1, 2], [3, 4], [5, 6]]
function flatten(arr) {
  const flat = arr.reduce((accumulator, currentValue) => {
    return accumulator.concat(currentValue)
  })
  return flat
}
console.log(flatten(arr))
// Output: [1, 2, 3, 4, 5, 6]
