Efficiency of JavaScript functions - javascript

Q1. I would like to confirm whether Version 1 of the code below is more efficient than Version 2. I'd like to know for future reference, so that I can write code in the style of either V1 or V2.
Q2. How does one measure the efficiency of code? (It doesn't have to be in depth; I just want a rough idea.)
Version 1:
function average(array) {
  return array.reduce(function(a, b) { return a + b; }) / array.length;
}
Version 2:
function average(array) {
  function plus(a, b) { return a + b; }
  return array.reduce(plus) / array.length;
}
Edit: assume that at a later stage I will be writing much more complex code, and I would like to get into the habit of writing efficient code now. I know that for simple one-liners there's no appreciable difference.

These functions are equally efficient from a big-O perspective. They are the same because both pass a function into reduce(); the way the function is declared differs, but it's the same underlying structure and thus the same efficiency. The functions are otherwise identical. If I were you, I'd opt for the second version, as it is probably easier to maintain.
If you want, you can run a speed test, but honestly it's a waste of your time, as these two approaches are effectively identical.
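If you do want a rough measurement, a simple timing harness is enough. Here's a minimal sketch using console.time, with the two versions renamed average1/average2 so they can coexist (absolute numbers will vary by engine and run):
function average1(array) {
  return array.reduce(function(a, b) { return a + b; }) / array.length;
}
function average2(array) {
  function plus(a, b) { return a + b; }
  return array.reduce(plus) / array.length;
}
var data = [];
for (var i = 0; i < 1e6; i++) data.push(i);
console.time('version 1');
for (var run = 0; run < 100; run++) average1(data);
console.timeEnd('version 1');
console.time('version 2');
for (var run = 0; run < 100; run++) average2(data);
console.timeEnd('version 2');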

A sequential loop is far faster:
http://jsperf.com/anonymous-vs-named-function-passing/2
function average3(array) {
  var sum = 0;
  for (var i = 0, len = array.length; i < len; i++) {
    sum += array[i];
  }
  return sum / array.length;
}

You will get the same result, or the difference will be negligible; it depends on the JS engine and how good its optimizer is. As others suggested, you can use jsperf.com for speed tests.
But if you really care about performance, check the third case here:
http://jsperf.com/anonymous-vs-named-function-passing/3

They are the same... but you could optimize by avoiding the creation of a new closure over the sum function on every call.
function sum(a, b) {
  return a + b;
}
function average(array) {
  return array.reduce(sum) / array.length;
}
This way sum won't hold a reference to array in its context, and a new instance of sum with that context won't be necessary. Remember that a closure holds a reference to the variables of the function that contains it, even if you don't use them.
This means that a new sum function won't be instantiated every time you call average.
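To see this concretely, here is a small sketch (the helper names are made up for illustration): a function declared inside another function is a new object on every call, while a top-level function is not:
function makeAdder() {
  // a fresh `plus` function object is created on every call
  function plus(a, b) { return a + b; }
  return plus;
}
console.log(makeAdder() === makeAdder()); // false: two distinct objects

function sum(a, b) { return a + b; }
function getSum() {
  return sum; // always the same top-level function object
}
console.log(getSum() === getSum()); // true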

Related

TypeScript transpile - for loop vs Array slice

In ES6 we can use a rest parameter, effectively creating an Array of the arguments. TypeScript transpiles this to ES5 using a for loop. I was wondering: are there any scenarios where the for loop approach is a better option than using Array.prototype.slice? Maybe there are edge cases that the slice option does not cover?
// Written in TypeScript
/*
const namesJoinTS = function (firstName, ...args) {
  return [firstName, ...args].join(' ');
}
const res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res)
*/
// TypeScript above transpiles to this:
var namesJoinTS = function (firstName) {
  var args = [];
  for (var _i = 1; _i < arguments.length; _i++) {
    args[_i - 1] = arguments[_i];
  }
  return [firstName].concat(args).join(' ');
};
var res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
// Vanilla JS
var namesJoinJS = function (firstName) {
  var args = [].slice.call(arguments, 1);
  return [firstName].concat(args).join(' ');
};
var res = namesJoinJS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
This weird transpilation is a side effect of the biased optimization that older versions of V8 had (and might still have). They optimized certain patterns greatly but did not care about overall performance; therefore some strange patterns (like a for loop to copy arguments into an array *) ran far faster. The maintainers of libraries and transpilers therefore started optimizing their code accordingly, as their code runs on millions of devices and every millisecond counts. Now that the optimizations in V8 have matured and focus on average performance, most of these tricks don't work anymore. It is only a matter of time until they get refactored out of the codebase.
Additionally, JavaScript is moving towards being a language that can be optimized more easily; older features like arguments are being replaced with newer ones (rest parameters) that are stricter, and therefore more performant. Use them to achieve good performance with good-looking code; arguments is a mistake of the past.
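As a quick illustration of that advice (a sketch, not from the original question): with a rest parameter the engine hands you a real Array up front, so no copy loop or slice call is needed:
function namesJoinRest(firstName, ...rest) {
  // `rest` is a genuine Array, so Array methods work on it directly
  return [firstName, ...rest].join(' ');
}
console.log(namesJoinRest('Dave', 'B', 'Smith')); // Dave B Smith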
I was wondering: are there any scenarios where using the for loop approach is a better option than using Array.prototype.slice?
Well, it was faster on older V8 versions; whether that is still the case would have to be tested. If you are writing the code for your own project, I would always choose the more elegant solution; the millisecond you might theoretically lose doesn't matter in 99% of cases.
Maybe there are edge cases that the slice option does not cover?
No (AFAIK).
* You might ask "why is it faster though?" Well, that's because arguments itself is hard to optimize:
1) it can be reassigned (arguments = 3)
2) it has to be "live": changes to a named parameter are reflected in arguments, and vice versa
Therefore it can only be optimized if you access it directly, as the compiler may then replace the array-like accessor with a variable reference:
function slow(a) {
  console.log(arguments[0]);
}
// can be turned into this by the engine:
function fast(a) {
  console.log(a);
}
This also works for loops if you inline them and fall back to another (maybe slower) version if the number of arguments changes:
function slow() {
  for (let i = 0; i < arguments.length; i++) {
    console.log(arguments[i]);
  }
}
slow(1, 2, 3);
slow(4, 5, 6);
slow("what?");
// can be optimized to:
function fast(a, b, c) {
  console.log(a);
  console.log(b);
  console.log(c);
}
function fast2(a) {
  console.log(a);
}
fast(1, 2, 3);
fast(4, 5, 6);
fast2("what?");
However, if you call another function and pass arguments into it, things get really complicated:
var leaking;
function cantBeOptimized(a) {
  leak(arguments); // uurgh
  a = 1; // this has to be reflected to "leaking" ....
}
function leak(stuff) { leaking = stuff; }
cantBeOptimized(0);
console.log(leaking[0]); // has to be 1
This can't really be optimized; it is a performance nightmare.
Therefore, calling a function and passing arguments into it is a bad idea performance-wise.

Is this a valid pattern, and what is it called?

I find myself writing the following JavaScript more and more, and I would like to know if this is a common pattern and, if so, what it is called.
Part of the code and pattern:
var fruits = ["pear", "apple", "banana"];
var getNextFruit = function() {
  var _index = 0,
      _numberOfFruits = fruits.length;
  getNextFruit = function() {
    render(fruits[_index]);
    _index = (_index + 1) % _numberOfFruits;
  };
  getNextFruit();
};
I have a function that takes no parameters; inside it, I redefine the function and immediately call it. In a functional language this might be a function being returned; JavaScript just makes it easier because you can reuse the name of the function. Thus you are able to extend the functionality without having to change your implementation.
I can also imagine this pattern being very useful for memoization, where the "cache" is the state we wrap around.
I even sometimes implement this with a get or a set method on the function, so I can read the state where it's meaningful. The added fiddle shows an example of this.
Because this is a primarily JavaScript oriented question: The obligatory fiddle
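For illustration, here is a minimal sketch of the memoization variant mentioned above; expensiveComputation is a made-up stand-in for some costly work:
function expensiveComputation() {
  var total = 0;
  for (var i = 0; i < 1e6; i++) total += i;
  return total;
}
var getValue = function() {
  var cached = expensiveComputation(); // runs only on the first call
  getValue = function() {
    return cached; // later calls just return the cached result
  };
  return getValue();
};
console.log(getValue()); // computes and caches
console.log(getValue()); // served from the cache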
I have a function which takes no parameters; inside the function I redefine the function and immediately call it.
Is this a valid pattern, and what is it called?
A function redefining itself is usually an antipattern, as it complicates things a lot. Yes, it can sometimes be more efficient to swap out the whole function than to put an if (alreadyInitialised) condition inside it, but it's very rarely worth it. When you need to optimise performance, you can benchmark both approaches; otherwise, the advice is to keep it as simple as you can.
The "initialises itself on the first call" pattern is known as laziness for pure computations (in functional programming) and as a singleton for objects (in OOP).
However, most of the time there's no reason to defer the initialisation of the object/function/module/whatever until it is used for the first time. The resources it takes (both time and memory) are insignificant, especially when you are sure that you will need it in your program at least once. For that, use an IIFE in JavaScript, which is also known as the module pattern when creating an object.
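A minimal sketch of that IIFE alternative, reusing the fruits example from the question (render is assumed to exist, as in the original code):
var getNextFruit = (function() {
  // initialised once, up front, rather than lazily on the first call
  var fruits = ["pear", "apple", "banana"];
  var index = 0;
  return function() {
    render(fruits[index]);
    index = (index + 1) % fruits.length;
  };
})();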
Creating a function via a closure is a pretty common pattern in JavaScript. I would personally do it differently:
var fruits = ["pear", "apple", "banana"];
var getNextFruit = function(fruits) {
  var index = 0,
      numberOfFruits = fruits.length;
  function getNextFruit() {
    render(fruits[index]);
    index = (index + 1) % numberOfFruits;
  }
  return getNextFruit;
}(fruits);
There's no good reason (in my opinion) to clutter up the variable names with leading underscores, because they're private to the closure anyway. The above also does not couple the workings of the closure to the external variable name. My version can be made into a reusable service:
function fruitGetter(fruits) {
  var index = 0, numberOfFruits = fruits.length;
  function getNextFruit() {
    render(fruits[index]);
    index = (index + 1) % numberOfFruits;
  }
  return getNextFruit;
}
// ...
var getNextFruit = fruitGetter(someFruits);
var otherFruits = fruitGetter(["kumquat", "lychee", "mango"]);

Why is recursion faster than a flat for loop for a summation function on JavaScript?

I'm working in a language that translates to JavaScript. In order to avoid some stack overflows, I'm applying tail call optimization by converting certain functions to for loops. What is surprising is that the converted version is not faster than the recursive one.
http://jsperf.com/sldjf-lajf-lkajf-lkfadsj-f/5
Recursive version:
(function recur(a0, s0) {
  return a0 == 0 ? s0 : recur(a0 - 1, a0 + s0);
})(10000, 0)
After tail call optimization:
ret3 = void 0;
a1 = 10000;
s2 = 0;
(function() {
  while (!ret3) {
    a1 == 0
      ? ret3 = s2
      : (a1_tmp$ = a1 - 1,
         s2_tmp$ = a1 + s2,
         a1 = a1_tmp$,
         s2 = s2_tmp$);
  }
})();
ret3;
After some cleanup using Google Closure Compiler:
ret3 = 0;
a1 = 1E4;
for (s2 = 0; ret3 == 0;)
  0 == a1
    ? ret3 = s2
    : (a1_tmp$ = a1 - 1,
       s2_tmp$ = a1 + s2,
       a1 = a1_tmp$,
       s2 = s2_tmp$);
c = ret3;
The recursive version is faster than the "optimized" ones! How can this be possible, if the recursive version has to handle thousands of context changes?
There's more to optimising than tail-call optimisation.
For instance, I notice you're using two temporary variables, when all you need is:
s2 += a1;
a1--;
This alone practically reduces the number of operations by a third, resulting in a performance increase of about 50%.
In the long run, it's important to optimise what operations are being performed before trying to optimise the operations themselves.
EDIT: Here's an updated jsperf
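Putting that together, the whole computation collapses to something like this (a sketch of the idea, not the exact jsperf test case):
var a1 = 10000, s2 = 0;
while (a1 != 0) {
  s2 += a1; // add the current value to the running sum
  a1--;     // then decrement, matching the recursive version's order
}
console.log(s2); // 50005000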
As Kolink says, all your piece of code does is add n to the total, decrement n by 1, and loop until n reaches 0.
So just do that:
n = 10000, o = 0; while(n) o += n--;
It's faster and more readable than the recursive version, and of course outputs the same result.
There are not as many context changes inside the recursive version as you might expect, since the named function recur is contained in the scope of recur itself; they share the same scope. This has to do with the way JavaScript engines evaluate scope, and there are plenty of websites that explain this topic, so I will not do it here. On a second look you will notice that recur is also a so-called "pure" function, which basically means it never has to leave its own scope while it executes (simply put: until it returns a value). These two facts are what make it fast. I also want to mention that the first example is the only tail-call-optimized one of the three; such an optimization can only be done on recursive functions, and this is the only recursive one.
However, a second look at the second example (no pun intended) reveals that the "optimizer" made things worse for you, since it introduced scopes into the formerly pure function by splitting the operation into:
variables instead of arguments
a while loop
an IIFE (immediately invoked function expression) that separates the introduced inner and outer variables
This leads to poorer performance, since the engine now has to handle 10000 context changes.
To tell you the truth, I do not know why the third example performs worse than the recursive one, so maybe it has to do with:
the browser you use (have you ever tried another one and compared the results?)
the number of variables
stack frames created by for loops (never heard of that, though), which would relate to the first example: JS engines interpret a pure recursive function until they find a return statement; if the last thing following that statement is a function call, they evaluate any expressions and variables to pass as arguments, call the function, and throw away the frame
something only the browser vendors can truly tell you :)

Why can't I assign a for loop to a variable?

So I am just wondering why the following code doesn't work. I am looking for a similar strategy to put the for loop in a variable.
var whatever = for (i = 1; i < 6; i++) {
  console.log(i)
};
Thanks!
Because a for loop is a statement and in JavaScript statements don't have values. It's simply not something provided for in the syntax and semantics of the language.
In some languages, every statement is treated as an expression (Erlang for example). In others, that's not the case. JavaScript is in the latter category.
It's kind-of like asking why horses have long stringy tails and no wings.
Edit — look into things like the Underscore library or the "modern" additions to the Array prototype for map, reduce, and forEach functionality. Those allow iterative operations in an expression-evaluation context (at a cost, of course).
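For example (a small sketch assuming ES5 array methods), a map call is an expression, so its result can be assigned to a variable while still iterating:
var values = [1, 2, 3, 4, 5].map(function(i) {
  console.log(i); // side effect per element, like a loop body
  return i;
});
// `values` is [1, 2, 3, 4, 5]; the whole map call had a value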
I suppose what you're looking for is a function:
var whatever = function(min, max) {
  for (var i = min; i < max; ++i) {
    console.log(i);
  }
};
... and later ...
whatever(1, 6);
This approach allows you to encapsulate the loop (or any other code, even declarations of other functions) within a variable.
Your issue is that for loops do not return values. You could construct an array with enough elements to hold all the iterations of your loop, then assign to it within the loop:
arry[j++] = i;
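Spelled out fully, that idea might look like this (a sketch matching the question's loop bounds):
var arry = [];
var j = 0;
for (var i = 1; i < 6; i++) {
  console.log(i);
  arry[j++] = i; // capture each iteration's value
}
console.log(arry); // [1, 2, 3, 4, 5]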
You can do this, but it seems that you might want to check out anonymous functions. With an anonymous function you could do this:
var whatever = function() {
  for (var i = 1; i < 6; i++) {
    console.log(i);
  }
};
and then
whatever(); // runs the loop, logging 1 through 5

javascript functions and arguments object, is there a cost involved

It is commonplace to see code like this around the web and in frameworks:
var args = Array.prototype.slice.call(arguments);
In doing so, you convert the arguments object into a real Array (as much as JS has real arrays, anyway), which allows whatever array methods you have on the Array prototype to be applied to it, and so on.
I remember reading somewhere that accessing the arguments object directly can be significantly slower than an Array clone or the obvious choice of named arguments. Is there any truth to that, and under what circumstances/browsers does it incur a performance penalty? Any articles on the subject you know of?
Update: an interesting find from http://bonsaiden.github.com/JavaScript-Garden/#function.arguments invalidates what I read previously... Hoping the question gets some more answers from the likes of @Ivo Wetzel, who wrote this.
At the bottom of that section it says:
Performance myths and truths
The arguments object is always created, with the only two exceptions being the cases where it is declared as a name inside of a function or as one of its formal parameters. It does not matter whether it is used or not.
This conflicts with http://www.jspatterns.com/arguments-considered-harmful/, which states:
However, it's not a good idea to use arguments, for reasons of:
performance
security
The arguments object is not automatically created every time the function is called; the JavaScript engine will only create it on demand, if it's used. And that creation is not free in terms of performance. The difference between using arguments vs. not using it could be anywhere between 1.5 times to 4 times slower, depending on the browser.
Clearly, both can't be correct, so which one is it?
ECMA die-hard Dmitry Soshnikov said:
Which exactly "JavaScript engine" is meant? Where did you get this exact info? Although it can be true in some implementations (yep, it's a good optimization, as all needed info about the context is available when parsing the code, so there's no need to create the arguments object if it was not found during parsing), as you know, ECMA-262-3 states that the arguments object is created each time on entering the execution context.
Here's some quick-and-dirty testing. Using predefined arguments seems to be the fastest, but it's not always feasible. If the arity of the function is unknown beforehand (that is, if a function can or must receive a variable number of arguments), I think calling Array.prototype.slice once is the most efficient way, because in that case the performance loss from using the arguments object is minimal.
The arguments object has two problems: one is that it's not a real array, and the other is that it always includes all of the arguments, including the ones that were explicitly declared. So, for example:
function f(x, y) {
  // arguments also include x and y
}
This is probably the most common problem: you want the rest of the arguments, without the ones that you already have in x and y, so you would like to write something like:
var rest = arguments.slice(2);
but you can't, because arguments doesn't have a slice method, so you have to apply Array.prototype.slice manually.
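The usual workaround is to borrow the method from Array.prototype; a minimal sketch:
function f(x, y) {
  // borrow slice to skip the two named parameters
  var rest = Array.prototype.slice.call(arguments, 2);
  return rest;
}
console.log(f(1, 2, 3, 4)); // [3, 4]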
I must say that I haven't seen the arguments object converted to a real array just for the sake of performance, only as a convenience for calling Array methods. I'd have to do some profiling to know what is actually faster (and it may also depend on what it's faster for), but my guess is that there's not much of a difference if you don't want to call Array methods, in which case you have no choice but to convert it to a real array or apply the methods manually using call or apply.
The good news is that in newer versions of ECMAScript (Harmony?) we'll be able to write just this:
function f(x, y, ...rest) {
  // ...
}
and we'll be able to forget all of those ugly workarounds.
No one has tested this in a while, and all the links are dead. Here are some fresh results:
function loop() {
  var res = []
  for (var i = 0, l = arguments.length; i < l; i++) {
    res.push(arguments[i])
  }
  return res
}
function loop_variable() {
  var res = []
  var args = arguments
  for (var i = 0, l = args.length; i < l; i++) {
    res.push(args[i])
  }
  return res
}
function slice() {
  return Array.prototype.slice.call(arguments);
}
function spread() {
  return [...arguments];
}
function do_return() {
  return arguments;
}
function literal_spread() {
  return [arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6], arguments[7], arguments[8], arguments[9]];
}
function spread_args(...args) {
  return args;
}
I tested these here: https://jsben.ch/bB11y, as do_return(0,1,2,3,4,5,6,7,8,9) and so on. Here are my results on my Ryzen 2700X, on Linux 5.13:
                 Firefox 90.0   Chromium 92.0
do_return        89%            100%
loop_variable    74%            77%
spread           63%            29%
loop             73%            94%
literal_spread   86%            100%
slice            68%            81%
spread_args      100%           98%
I would argue against the accepted answer.
I edited the tests, see here: http://jsperf.com/arguments-performance/6
I added a test for the slice method and a test for memory copy to a preallocated array. The latter is multiple times more efficient on my computer.
As you can see, the first two memory-copy methods on that performance test page are slow not because of the loops, but because of the push call.
In conclusion, slice seems to be almost the worst method for working with arguments (not counting the push methods, since they are not even much shorter in code than the far more efficient preallocation method).
It might also be of interest that the apply function behaves quite well and does not carry much of a performance hit by itself.
First existing test:
function f1() {
  // res is assumed to be a global array, as in the jsperf test setup
  for (var i = 0, l = arguments.length; i < l; i++) {
    res.push(arguments[i])
  }
}
Added tests:
function f3() {
  var len = arguments.length;
  res = new Array(len);
  for (var i = 0; i < len; i++)
    res[i] = arguments[i];
}
function f4() {
  res = Array.prototype.slice.call(arguments);
}
function f5_helper() {
  res = arguments;
}
function f5() {
  f5_helper.apply(null, arguments);
}
function f6_helper(a, b, c, d) {
  res = [a, b, c, d];
}
function f6() {
  f6_helper.apply(null, arguments);
}
