Implementing a custom reducer using ramda - javascript

I know that Ramda.js provides a reduce function, but I am trying to learn how to use Ramda, and I thought a reducer would be a good example. Given the following code, what would be a more efficient and functional approach?
(function () {
  // Some operators: sum and multiplication.
  const sum = (a, b) => a + b;
  const mult = (a, b) => a * b;

  // The reduce function
  const reduce = R.curry((fn, accum, list) => {
    const op = R.curry(fn);
    while (list.length > 0) {
      accum = R.pipe(R.head, op(accum))(list);
      list = R.drop(1, list);
    }
    return accum;
  });

  const reduceBySum = reduce(sum, 0);
  const reduceByMult = reduce(mult, 1);

  const data = [1, 2, 3, 4];
  const result1 = reduceBySum(data);
  const result2 = reduceByMult(data);

  console.log(result1); // 1 + 2 + 3 + 4 => 10
  console.log(result2); // 1 * 2 * 3 * 4 => 24
})();
Run this on the REPL: http://ramdajs.com/repl/

I'm assuming this is a learning exercise and not for real-world application. Correct?
There are certainly some efficiencies you could gain over that code. At the core of Ramda's implementation, when all the dispatching, transducing, etc. are stripped away, is something like:
const reduce = curry(function _reduce(fn, acc, list) {
  var idx = 0;
  while (idx < list.length) {
    acc = fn(acc, list[idx]);
    idx += 1;
  }
  return acc;
});
I haven't tested, but this probably gains on your version because it uses only the number of function calls strictly needed (one for each member of the list) and does that with bare-bones iteration. Your version adds the call to curry and then, on each iteration, calls to pipe and head, to that curried op function, to the result of the pipe call, and to drop. So this one should be faster.
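If you want to verify that claim, a rough console.time comparison is enough; a sketch (absolute numbers vary by engine and warm-up, and you would swap in each implementation in turn):

const big = Array.from({ length: 1e6 }, (_, i) => i);
console.time('reduce');
reduce((a, b) => a + b, 0, big); // the implementation under test
console.timeEnd('reduce');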
On the other hand, this code is as imperative as it gets. If you want to go with something more purely functional, you would need to use a recursive solution. Here's one version:
const reduce = curry(function _reduce(fn, acc, list) {
  return list.length ? _reduce(fn, fn(acc, head(list)), tail(list)) : acc;
});
This sacrifices the performance of the version above, largely to the repeated calls to tail, but it is clearly a more straightforward functional implementation. In many modern JS engines, however, it will fail outright on larger lists because of the stack depth.
Because it is tail-recursive, it would be able to take advantage of the tail-call optimization specified by ES2015 but so far little implemented. Until then, it's mostly of academic interest. And even when that optimization is available, the head and (especially) tail calls in there will make it much slower than the imperative implementation above.
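Until that lands, one standard workaround is a trampoline: the recursive step returns a thunk instead of calling itself, and a plain loop unwinds the thunks, keeping the stack flat. A sketch (reduceT is a hypothetical name, this is not how Ramda does it, and it assumes the accumulator is never itself a function):

const reduceT = curry(function _reduceT(fn, acc, list) {
  // Each step returns a thunk instead of recursing.
  const step = (acc, list) =>
    list.length ? () => step(fn(acc, head(list)), tail(list)) : acc;
  let result = step(acc, list);
  while (typeof result === 'function') result = result(); // unwind thunks iteratively
  return result;
});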
You might be interested to know that Ramda was the second attempt at this API. Its original authors (disclaimer: I'm one of them) first built Eweda along the lines of this latter version. That experiment failed for exactly these reasons. JavaScript simply cannot handle this sort of recursion... yet.


Cycling through a list with async call inside

I have an array of ids. I need to iterate through all the ids, and for each id of the array make an async call to retrieve a value from the DB, then sum all the values gathered. I did something like this:
let quantity = 0;
for (const id of [1, 2, 3, 4]) {
  const subQuantity = await getSubQuantityById(id);
  quantity += subQuantity;
}
Is there a more elegant and concise way to write this for loop in JavaScript?
It is totally fine, because your case includes an async operation. Using forEach instead is not possible here at all.
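To see why, here is a sketch of the forEach version and how it goes wrong: forEach neither awaits the promises returned by its async callback nor propagates them, so the total is read before any value arrives.

// Broken sketch: forEach fires the async callbacks and moves on.
let quantity = 0;
[1, 2, 3, 4].forEach(async (id) => {
  quantity += await getSubQuantityById(id); // runs after the log below
});
console.log(quantity); // 0, logged before any awaited value arrives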
Your for loop is perfectly clean. If you want to make it shorter you could even do:
let totalQuantity = 0;
for (const id of arrayOfIds) {
  totalQuantity += await getSubQuantityById(id);
}
As written, with its named subQuantity variable, your original loop may even be clearer than the += await version above.
Naming could be improved as suggested.
I find the following one-liner suggested in the comments more cryptic/dirty:
(await Promise.all([1, 2, 3, 4].map(id => getSubQuantityById(id)))).reduce((p, c) => p + c, 0)
Edit: props to #vitaly-t, who points out that using Promise.all the way this one-liner does results in uncontrolled concurrency and can lead to trouble in the context of a database.
I can't follow #vitaly-t's argument that concurrent database queries will cause "problems", at least not when we are talking about simple queries and a "moderate" number of them.
Here is my version of doing the summation. Obviously, the console.log in the last .then() needs to be replaced by the actual action that needs to happen with the calculated result.
// A placeholder function for testing:
function getSubQuantityById(i) {
  return fetch("https://jsonplaceholder.typicode.com/users/" + i)
    .then(r => r.json())
    .then(u => +u.address.geo.lat);
}

Promise.all([1, 2, 3, 4].map(id => getSubQuantityById(id)))
  .then(d => d.reduce((p, c) => p + c, 0))
  .then(console.log);
Is there a more elegant and concise way to write this for loop in JavaScript?
Certainly, by processing your input as an iterable. The solution below uses the iter-ops library:
import {pipeAsync, map, wait, reduce} from 'iter-ops';

const i = pipeAsync(
  [1, 2, 3, 4],            // your list of id-s
  map(getSubQuantityById), // remap ids into async requests
  wait(),                  // resolve requests
  reduce((a, c) => a + c)  // calculate the sum
); //=> AsyncIterableExt<number>
Testing the iterable:
(async function () {
  console.log(await i.first); //=> the sum
})();
It is elegant because you can inject more processing logic right into the iteration pipeline, and the code remains very easy to read. It is also lazy, executing only when iterated.
Perhaps even more importantly, such a solution lets you control concurrency, to avoid firing too many requests against the database. And you can fine-tune concurrency by replacing wait with waitRace.
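For example, a sketch of the same pipeline with bounded concurrency (the cache size of 3 is an arbitrary illustration, and the single numeric argument to waitRace is my assumption here):

import {pipeAsync, map, waitRace, reduce} from 'iter-ops';

const j = pipeAsync(
  [1, 2, 3, 4],
  map(getSubQuantityById),
  waitRace(3), // resolve up to 3 requests concurrently
  reduce((a, c) => a + c)
);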
P.S. I'm the author of iter-ops.

Performance difference between function initialization locations [JavaScript]

I am curious about the performance difference between initializing a function outside a loop vs inline:
Outside the loop:
const reducer = (acc, val) => {
  // work
};
largeArray.reduce(reducer);
Inline:
largeArray.reduce((acc, val) => {
  // work
});
I encounter this kind of situation regularly, and, unless I'm going to reuse the function, it seems useful to avoid introducing another variable into my scope by using the inline version.
Is there a performance difference in these two examples or does the JS engine optimize them identically?
For example: is the inline function being created every time the loop runs and then garbage collected? And if so:
What kind of effect does this have on performance, and
Does the size of the function affect this? For example, a function that is 200 vs 30_000 unicode characters.
Are there any other differences or things I'm not considering?
Hopefully you understand my train of thought and can provide some insight about this. I realize that I can read all of the docs and source code for V8 or other engines, and I would get my answer, but that seems like an overwhelming task to understand this concept.
I ran tests on JSBen:
SET 1 (reducer used twice): http://jsben.ch/8Dukx
SET 2 (reducer used once): http://jsben.ch/SnvxV
Setup
const arr = [ ...Array(100).keys() ];
const reducer = (acc, cur) => (acc + cur);
Test 1
let sumInline = arr.reduce((acc, cur) => (acc + cur), 0);
let sumInlineHalf = arr.slice(0, 50).reduce((acc, cur) => (acc + cur), 0);
console.log(sumInline, sumInlineHalf);
Test 2
let sumOutline = arr.reduce(reducer, 0);
let sumOutlineHalf = arr.slice(0, 50).reduce(reducer, 0);
console.log(sumOutline, sumOutlineHalf);
Be surprised
What kind of effect does this have on performance, and
None.
Does the size of the function affect this? For example, a function that is 200 vs 30_000 unicode characters.
Functions aren't executed as "unicode characters". It doesn't matter how "long" the code is.
Are there any other differences or things I'm not considering?
A very important one: code is written for humans, not computers. Benchmark it yourself if you need hard numbers.
is the inline function being created every time the loop runs and then garbage collected?
That would be unnecessary and slow. So probably: no.
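One way to convince yourself is to count evaluations of the function expression directly; a quick sketch:

// Sketch: the factory runs once per reduce() call, not once per element,
// because reduce receives a single function reference.
let evaluations = 0;
const makeReducer = () => {
  evaluations += 1;
  return (acc, val) => acc + val;
};
const largeArray = Array.from({ length: 1000 }, (_, i) => i);
largeArray.reduce(makeReducer(), 0);
console.log(evaluations); // 1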

Lambda function return a function with different memory location?

So here is some code I am trying to work with:
const someFunc = (a) => (b) => a + b;
const someArray = [1, 2];
const firstOrder = someArray.map(a => someFunc(a));
firstOrder[0] === firstOrder[1]; // returns false
I am not sure why these end up as functions at different memory locations.
I was expecting to accomplish similar functionality wherein
firstOrder[0] === firstOrder[1]; // should return true
I am not sure if something like this is even possible.
The primary motivation here is to reduce the memory footprint.
I guess I could use some help here.
Thanks in advance.
As said in the comments, each call to someFunc creates a new function object, and two distinct function objects are never === to each other.
The memory overhead of a simple function is next to nothing, especially on modern hardware and modern JS engines, so before spending effort on this, make sure this is not a case of premature optimization - run a performance test, and make sure this is actually a bottleneck first.
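If reference equality for repeated a values were ever genuinely required, one option is to memoize the outer function; a sketch only (memoizedFunc is a hypothetical name), and note that in the question's example the mapped values 1 and 2 differ, so the two entries would still not be equal:

const memoizedFunc = (() => {
  const cache = new Map(); // reuse the closure created for each distinct `a`
  return (a) => {
    if (!cache.has(a)) cache.set(a, (b) => a + b);
    return cache.get(a);
  };
})();
console.log(memoizedFunc(1) === memoizedFunc(1)); // true: same closure
console.log(memoizedFunc(1) === memoizedFunc(2)); // false: different `a`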
You're currently passing around an array of functions, presumably so they can be iterated through and called by something later. Consider passing around just someArray and a someFunc that takes 2 arguments and returns a number instead; an array of primitives takes less memory than an array of functions. For example, the following code takes up ~1,400 MB of memory on Chrome for me:
const someFunc = (a) => (b) => a + b;
const arrayOfFunctions = Array.from({ length: 1e7 }, (_, i) => someFunc(i));
// eventually use arrayOfFunctions
But if you just store your someArray, and call the function only when you need access to the final number it returns, the memory footprint is much lighter:
const someFunc = (a, b) => a + b;
const someArray = Array.from({ length: 1e7 }, (_, i) => i);
// eventually, once you need access to the final numbers, iterate through someArray and call someFunc with it:
// ...
const theBArgument = 5;
const result = someArray.map(a => someFunc(a, theBArgument));
Until that final map runs, this uses only ~120 MB of memory on Chrome for me.

TypeScript transpile - for loop vs Array slice

In ES6 we can use a rest parameter, effectively creating an Array of arguments. TypeScript transpiles this to ES5 using a for loop. I was wondering: are there any scenarios where using the for loop approach is a better option than using Array.prototype.slice? Maybe there are edge cases that the slice option does not cover?
// Written in TypeScript
/*
const namesJoinTS = function (firstName, ...args) {
  return [firstName, ...args].join(' ');
};
const res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res);
*/

// TypeScript above transpiles to this:
var namesJoinTS = function (firstName) {
  var args = [];
  for (var _i = 1; _i < arguments.length; _i++) {
    args[_i - 1] = arguments[_i];
  }
  return [firstName].concat(args).join(' ');
};
var res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith

// Vanilla JS
var namesJoinJS = function (firstName) {
  var args = [].slice.call(arguments, 1);
  return [firstName].concat(args).join(' ');
};
var res = namesJoinJS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
This weird transpilation is a side effect of the biased optimization older versions of V8 had (and might still have). They optimized certain patterns greatly but did not care about overall performance, so some strange patterns (like a for loop to copy arguments into an array *) ran way faster. The maintainers of libraries and transpilers therefore started optimizing their code accordingly, as their code runs on millions of devices and every millisecond counts. Now that the optimizations in V8 have matured and focus on average performance, most of these tricks don't work anymore; it is a matter of time till they get refactored out of the codebases.
Additionally, JavaScript is moving towards a language that can be optimized more easily: older features like arguments are being replaced with newer ones (rest parameters) that are stricter and therefore more performant. Use them to achieve good performance with good-looking code; arguments is a mistake of the past.
I was wondering: are there any scenarios where using the for loop approach is a better option than using Array.prototype.slice?
Well, it is faster on older V8 versions; whether that is still the case has to be tested. If you are writing the code for your own project, I would always choose the more elegant solution; the millisecond you might theoretically lose doesn't matter in 99% of cases.
Maybe there are edge cases that the slice option does not cover?
No (AFAIK).
* You might ask "why is it faster, though?" Well, that's because arguments itself is hard to optimize:
1) it can be reassigned (arguments = 3), and
2) it has to be "live": changing a named parameter gets reflected in arguments, and vice versa.
Therefore it can only be optimized if you access it directly, as the compiler might then replace the array-like accessor with a variable reference:
function slow(a) {
  console.log(arguments[0]);
}

// can be turned into this by the engine:
function fast(a) {
  console.log(a);
}
This also works for loops if you inline them and fall back to another (maybe slower) version if the number of arguments changes:
function slow() {
  for (let i = 0; i < arguments.length; i++) {
    console.log(arguments[i]);
  }
}
slow(1, 2, 3);
slow(4, 5, 6);
slow("what?");

// can be optimized to:
function fast(a, b, c) {
  console.log(a);
  console.log(b);
  console.log(c);
}
function fast2(a) {
  console.log(a);
}
fast(1, 2, 3);
fast(4, 5, 6);
fast2("what?");
Now if, however, you call another function and pass arguments into it, things get really complicated:
var leaking;
function cantBeOptimized(a) {
  leak(arguments); // uurgh
  a = 1; // this has to be reflected to "leaking" ...
}
function leak(stuff) { leaking = stuff; }

cantBeOptimized(0);
console.log(leaking[0]); // has to be 1
This can't really be optimized; it is a performance nightmare. Therefore, calling another function and passing arguments to it is a bad idea performance-wise.
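For completeness, the modern alternative sidesteps all of this, since a rest parameter produces a real array with no live arguments binding; a minimal sketch:

// Rest parameters give a real array; there is no `arguments` object to deoptimize.
const namesJoin = (firstName, ...rest) => [firstName, ...rest].join(' ');
console.log(namesJoin('Dave', 'B', 'Smith')); // Dave B Smith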

What are the advantages of Javascript's reduce() function? (and map())

I'm trying to decide whether to use the reduce() method in JavaScript for a function I need to write, which is something like this:
var x = [some array], y = {};
for (var i = 0; i < x.length; i++) {
  someOperation(x[i]);
  y[x[i]] = "some other value";
}
Now this can obviously be written as a reduce() function in the following manner:
x.reduce(function (prev, current, index, arr) {
  someOperation(current);
  prev[current] = "some other value";
  return prev;
}, {});
Or something like that. Is there any performance or other difference between the two? Or some other reason (like browser support, for instance) due to which one should be favoured over the other in a web programming environment? Thanks.
Even though I prefer these operations (reduce, map, filter, etc.), it's still not feasible to use them because certain browsers do not support them in their implementations. Sure, you can "patch" it by extending the Array prototype, but that's opening a can of worms too.
I don't think there's anything inherently wrong with these functions, and I think they make for better code, but for now it's best not to use them. Once a higher percentage of the population uses a browser that supports these functions, I think they'll be fair game.
As far as performance goes, these will probably be slower than hand-written for loops because of the overhead of function calls.
map and filter and reduce and forEach and ... (more info: https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Array#Iteration_methods) are far better than normal loops because:
They are more elegant.
They encourage functional programming (see the benefits of functional programming).
You will need to write functions anyway and pass your iteration variables into them as parameters, because JavaScript has no block scope. Functions like map and reduce make your job so much easier because they automatically set up your iteration variable and pass it into your function for you.
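A quick sketch of that scoping point: with var and a plain for loop, every callback created in the loop closes over the same i, while forEach hands each call its own value:

var fns = [];
for (var i = 0; i < 3; i++) {
  fns.push(function () { return i; }); // all three close over the same i
}
console.log(fns.map(function (f) { return f(); })); // [3, 3, 3]

var fns2 = [];
[0, 1, 2].forEach(function (i) {
  fns2.push(function () { return i; }); // each call gets its own i
});
console.log(fns2.map(function (f) { return f(); })); // [0, 1, 2]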
IE9 claims to support these, and they're in the official JavaScript/ECMAScript spec. If you care about people who are using IE8, that is your prerogative. If you really care, you can hack it by overriding Array.prototype for ONLY IE8 and older, to "fix" IE8 and older.
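For illustration, a minimal sketch of such a fallback; the production polyfill on MDN also handles sparse arrays, missing callbacks, and empty arrays without a seed, all of which this omits:

if (!Array.prototype.reduce) {
  Array.prototype.reduce = function (fn, initial) {
    var acc = initial;
    var i = 0;
    if (arguments.length < 2) { // no seed given: start from the first element
      acc = this[0];
      i = 1;
    }
    for (; i < this.length; i++) {
      acc = fn(acc, this[i], i, this);
    }
    return acc;
  };
}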
reduce is used to return one value from an array, built up by sequentially processing each element together with the result from the previous elements.
reduceRight does the same, but starts at the end and works backwards.
map is used to return an array whose members have all been passed through a function.
Neither method affects the array itself.
var A1 = ['1', '2', '3', '4', '5', '6', '7', '8'];
// This use of map returns a new array of the original elements, converted to numbers:
A1 = A1.map(Number); // each of A1's elements converted to a number
// This reduce totals the array elements:
var A1sum = A1.reduce(function (a, b) { return a + b; });
// A1sum >> returned value: (Number) 36
They are not supported in older browsers, so you'll need to provide a substitute for them. Not worth it if all you are doing can be replicated in a simple loop.
Figuring the standard deviation of a population is an example where both map and reduce can be used effectively:
Math.mean = function (array) {
  return array.reduce(function (a, b) { return a + b; }) / array.length;
};
Math.stDeviation = function (array) {
  var mean = Math.mean(array);
  var dev = array.map(function (itm) { return (itm - mean) * (itm - mean); });
  return Math.sqrt(dev.reduce(function (a, b) { return a + b; }) / array.length);
};
var A2 = [6.2, 5, 4.5, 6, 6, 6.9, 6.4, 7.5];
alert('mean: ' + Math.mean(A2) + '; deviation: ' + Math.stDeviation(A2));
kennebec - good going, but your stDeviation function calls reduce twice and map once when it only needs a single call to reduce (which makes it a lot faster):
Math.stDev = function (a) {
  var n = a.length;
  var v = a.reduce(function (v, x) {
    v[0] += x * x;
    v[1] += x;
    return v;
  }, [0, 0]);
  return Math.sqrt((v[0] - v[1] * v[1] / n) / n);
};
You should do a conversion to number when accumulating into v[1] to make sure string numbers don't mess with the result, and the divisor in the last line should probably be (n - 1) in most cases, but that's up to the OP. :-)
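A sketch applying both of those suggestions, with numeric coercion and the sample divisor (the name Math.sampleStDev and coercing in both accumulator slots are my own choices here):

Math.sampleStDev = function (a) {
  var n = a.length;
  var v = a.reduce(function (acc, x) {
    var num = Number(x); // coerce string numbers before accumulating
    acc[0] += num * num;
    acc[1] += num;
    return acc;
  }, [0, 0]);
  return Math.sqrt((v[0] - v[1] * v[1] / n) / (n - 1)); // sample variance divisor
};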
