Is not having local functions a micro optimisation? - javascript

Would moving the inner function outside of this one, so that it's not created every time the function is called, be a micro-optimisation?
In this particular case the doMoreStuff function is only used inside doStuff. Should I worry about having local functions like these?
function doStuff() {
    var doMoreStuff = function(val) {
        // do some stuff
    }
    // do something
    for (var i = 0; i < list.length; i++) {
        doMoreStuff(list[i]);
        for (var j = 0; j < list[i].children.length; j++) {
            doMoreStuff(list[i].children[j]);
        }
    }
    // do some other stuff
}
An actual example would be, say:
function sendDataToServer(data) {
    var callback = function(incoming) {
        // handle incoming
    }
    ajaxCall("url", data, callback);
}

Not sure if this falls under the category "micro-optimization". I would say no.
But it depends on how often you call doStuff. If you call it often, then creating the function over and over again is just unnecessary and will definitely add overhead.
If you don't want to have the "helper function" in the global scope but want to avoid recreating it, you can wrap it like so:
var doStuff = (function() {
    var doMoreStuff = function(val) {
        // do some stuff
    }
    return function() {
        // do something
        for (var i = 0; i < list.length; i++) {
            doMoreStuff(list[i]);
        }
        // do some other stuff
    }
}());
As the function which is returned is a closure, it has access to doMoreStuff. Note that the outer function is immediately executed ( (function(){...}()) ).
Or you create an object that holds references to the functions:
var stuff = {
doMoreStuff: function() {...},
doStuff: function() {...}
};
More information about encapsulation, object creation patterns and other concepts can be found in the book JavaScript Patterns.

For optimal speed with a nested function (a function defined within the internal scope of an outer function), I suspect you should use declarations, not expressions.
The question asks about "local functions" and optimization, but doesn't specify how the local functions are created. It should, because the answer probably differs depending on the technique by which the "inner function" is created.
Looking at the answer and test results by @cleong, I suspect that only his answer uses the optimal technique for function creation. There are three ways to create a function, and @cleong shows us the one that provides fast execution. The three techniques are:
constructor
declaration
expression
Constructor isn't used much; it requires a string containing the text of the function body. It would be useful in reflective programming, where you call toString() to get the function body, modify it, then construct a new function. And that, of course, is more or less never done.
Declaration is used, but mostly for outer functions, not inner functions (by "inner function" I mean a function nested within another). Yet, based upon @cleong's tests, it seems to be very fast; just as fast as an outer function.
Expressions are what everyone uses. This might not be the best idea; but it's what everyone does.
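For reference, here is a minimal sketch of the three creation techniques; the function names are made up purely for illustration:
// 1. Constructor: the body is supplied as a string (rarely used)
var byConstructor = new Function("a", "b", "return a + b;");

// 2. Declaration: a statement, hoisted together with its body
function byDeclaration(a, b) {
    return a + b;
}

// 3. Expression: a value, created when this line actually runs
var byExpression = function(a, b) {
    return a + b;
};

console.log(byConstructor(1, 2), byDeclaration(1, 2), byExpression(1, 2)); // 3 3 3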
One major difference between function declarations and function expressions is that declarations are subject to hoisting. Everyone knows that "var" declarations are hoisted; but so are "function" declarations. For things that are hoisted, the work is done at compile time to determine the memory space that will be needed for them. Presumably, one would expect that an inner function declaration is compiled up front and can run much as a compiled outer function would.
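A quick sketch of the hoisting difference being described (not from the original answer):
hoisted(); // works: the declaration, body and all, is hoisted to the top of the scope
function hoisted() {
    console.log("declaration is usable before the line it appears on");
}

notHoisted(); // TypeError: only the var is hoisted, the function value is assigned later
var notHoisted = function() {
    console.log("expression is only usable after the assignment runs");
};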
I have a copy of Flanagan's "The Definitive Guide" from about six years ago, and I remember reading the reverse of what I just wrote here. He said something like: expressions are compiled, and declarations are not. While he is the world's "definitive guide" to JavaScript, I have always suspected he might have gotten this one mixed up and backwards. I suspect that inner function declarations are more "ready to go" than function expressions. The test results on this Stack Overflow page seem to confirm my long-held suspicions.
Looking at the @cleong test results, it just seems that declaration, not expression, is the way to go for inner functions, if optimal execution speed is a concern.

The original question was asked in 2011. Given the rise of Node.js since then, I thought it worth revisiting the issue. In a server environment, a few milliseconds here and there can matter a lot. It could be the difference between remaining responsive under load or not.
While inner functions are nice conceptually, they can pose problems for the JavaScript engine's code optimizer. The following example illustrates this:
function a1(n) {
    return n + 2;
}
function a2(n) {
    return 2 - n;
}

function a() {
    var k = 5;
    for (var i = 0; i < 100000000; i++) {
        k = a1(k) + a2(k);
    }
    return k;
}

function b() {
    function b1(n) {
        return n + 2;
    }
    function b2(n) {
        return 2 - n;
    }

    var k = 5;
    for (var i = 0; i < 100000000; i++) {
        k = b1(k) + b2(k);
    }
    return k;
}

function measure(label, fn) {
    var s = new Date();
    var r = fn();
    var e = new Date();
    console.log(label, e - s);
}

for (var i = 0; i < 4; i++) {
    measure('A', a);
    measure('B', b);
}
The command for running the code:
node --trace_deopt test.js
The output:
[deoptimize global object # 0x2431b35106e9]
A 128
B 130
A 132
[deoptimizing (DEOPT eager): begin 0x3ee3d709a821 b (opt #5) #4, FP to SP delta: 72]
translating b => node=36, height=32
0x7fffb88a9960: [top + 64] <- 0x2431b3504121 ; rdi 0x2431b3504121 <undefined>
0x7fffb88a9958: [top + 56] <- 0x17210dea8376 ; caller's pc
0x7fffb88a9950: [top + 48] <- 0x7fffb88a9998 ; caller's fp
0x7fffb88a9948: [top + 40] <- 0x3ee3d709a709; context
0x7fffb88a9940: [top + 32] <- 0x3ee3d709a821; function
0x7fffb88a9938: [top + 24] <- 0x3ee3d70efa71 ; rcx 0x3ee3d70efa71 <JS Function b1 (SharedFunctionInfo 0x361602434ae1)>
0x7fffb88a9930: [top + 16] <- 0x3ee3d70efab9 ; rdx 0x3ee3d70efab9 <JS Function b2 (SharedFunctionInfo 0x361602434b71)>
0x7fffb88a9928: [top + 8] <- 5 ; rbx (smi)
0x7fffb88a9920: [top + 0] <- 0 ; rax (smi)
[deoptimizing (eager): end 0x3ee3d709a821 b #4 => node=36, pc=0x17210dec9129, state=NO_REGISTERS, alignment=no padding, took 0.203 ms]
[removing optimized code for: b]
B 1000
A 125
B 1032
A 132
B 1033
As you can see, functions A and B ran at the same speed initially. Then for some reason a deoptimization event occurred. From then on B is nearly an order of magnitude slower.
If you're writing code where performance is important, it's best to avoid inner functions.

It completely depends on how often the function is called. If it's an OnUpdate function that is called 10 times per second, it is a decent optimisation. If it's called three times per page, it is a micro-optimisation.
Though handy, nested function definitions are never needed (they can be replaced by extra arguments for the function).
Example with nested function:
function somefunc() {
    var localvar = 5;
    var otherfunc = function() {
        alert(localvar);
    };
    otherfunc();
}
Same thing, now with an argument instead:
function otherfunc(localvar) {
    alert(localvar);
}

function somefunc() {
    var localvar = 5;
    otherfunc(localvar);
}

It is absolutely a micro-optimization. The whole reason for having functions in the first place is so that you make your code cleaner, more maintainable and more readable. Functions add a semantic boundary to sections of code. Each function should only do one thing, and it should do it cleanly. So if you find your functions performing multiple things at the same time, you've got a candidate for refactoring it into multiple routines.
Only optimize when you've got something working that's too slow (If it's not working yet, it's too early to optimize. Period). Remember, nobody ever paid extra for a program that was faster than their needs/requirements...
Edit: Considering that the program isn't finished yet, it's also a premature optimization. Why is that bad? Well, first you're spending time working on something that may not matter in the long run. Second, you don't have a baseline to see if your optimizations improved anything in a realistic sense. Third, you're reducing maintainability and readability before you've even got it running, so it'll be harder to get running than if you went with clean concise code. Fourth, you don't know if you'll need doMoreStuff somewhere else in the program until you've finished it and understand all your needs (perhaps a longshot depending on the exact details, but not outside the realm of possibility).
There's a reason that Donald Knuth said "Premature optimization is the root of all evil"...

A quick "benchmark" run on an average PC (I know there are lots of unaccounted-for variables, so don't comment on the obvious, but it's interesting in any case):
count = 0;
t1 = +new Date();
while (count < 1000000) {
    p = function(){};
    ++count;
}
t2 = +new Date();
console.log(t2 - t1); // milliseconds
It could be optimised by moving the increment into the condition, for example (that brings the running time down by about 100 milliseconds, although it doesn't affect the difference between runs with and without function creation, so it isn't really relevant).
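For reference, that tweak would look something like this (note that the timings listed below are for the original loop above, not for this variant):
count = 0;
t1 = +new Date();
while (count++ < 1000000) {
    p = function(){};
}
t2 = +new Date();
console.log(t2 - t1); // milliseconds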
Running 3 times gave:
913
878
890
Then comment out the function creation line, 3 runs gave:
462
458
464
So purely on 1,000,000 empty function creations you add about half a second. Even assuming your original code is running 10 times a second on a handheld device (let's say that device's overall performance is 1/100 of this laptop, which is exaggerated - it's probably closer to 1/10, although it will provide a nice upper bound), that's equivalent to 1,000 function creations/sec on this computer, which happens in 1/2000 of a second. So every second the handheld device is adding overhead of 1/2000 of a second of processing... half a millisecond every second isn't very much.
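Spelled out, that back-of-the-envelope arithmetic looks like this (the slowdown factor and the call rate are the assumptions stated above, not measurements):
var secondsPerMillionCreations = 0.5;                // measured above: ~0.5 s per 1,000,000 creations
var perCreation = secondsPerMillionCreations / 1e6;  // ~0.5 microseconds per creation on this laptop
var deviceSlowdown = 100;                            // assumption: the handheld is 100x slower
var creationsPerSecond = 10;                         // assumption: doStuff() runs 10 times a second

var overheadPerSecond = perCreation * deviceSlowdown * creationsPerSecond;
console.log(overheadPerSecond * 1000 + " ms");       // 0.5 ms of overhead per second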
From this primitive test I would conclude that on a PC this is definitely a micro-optimisation, and if you're developing for weaker devices, it almost certainly is as well.

Related

TypeScript transpile - for loop vs Array slice

In ES6 we can use a rest parameter, effectively creating an Array of arguments. TypeScript transpiles this to ES5 using a for loop. I was wondering: are there any scenarios where using the for loop approach is a better option than using Array.prototype.slice? Maybe there are edge cases that the slice option does not cover?
// Written in TypeScript
/*
const namesJoinTS = function (firstName, ...args) {
    return [firstName, ...args].join(' ');
}
const res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res)
*/

// TypeScript above transpiles to this:
var namesJoinTS = function (firstName) {
    var args = [];
    for (var _i = 1; _i < arguments.length; _i++) {
        args[_i - 1] = arguments[_i];
    }
    return [firstName].concat(args).join(' ');
};
var res = namesJoinTS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith

// Vanilla JS
var namesJoinJS = function (firstName) {
    var args = [].slice.call(arguments, 1);
    return [firstName].concat(args).join(' ');
};
var res = namesJoinJS('Dave', 'B', 'Smith');
console.log(res); // Dave B Smith
This weird transpilation is a side effect of the biased optimization older versions of V8 had (and might still have). They optimize(d) certain patterns greatly but did not care about overall performance, therefore some strange patterns (like a for loop to copy arguments into an array *) ran way faster. Therefore the maintainers of libraries & transpilers started searching for ways to optimize their code according to that, as their code runs on millions of devices and every millisecond counts. Now that the optimizations in V8 have become more mature and are focused on average performance, most of these tricks don't work anymore. It is a matter of time until they get refactored out of the codebase.
Additionally, JavaScript is moving towards being a language that can be optimized more easily; older features like arguments are being replaced with newer ones (rest parameters) that are more strict and therefore more performant. Use them to achieve good performance with good-looking code; arguments is a mistake of the past.
I was wondering: are there any scenarios where using the for loop approach is a better option than using Array.prototype.slice?
Well, it is faster on older V8 versions; whether that is still the case has to be tested. If you write the code for your project, I would always choose the more elegant solution; the millisecond you might theoretically lose doesn't matter in 99% of the cases.
Maybe there are edge cases that the slice option does not cover?
No (AFAIK).
*You might ask "why is it faster, though?" Well, that's because:
arguments itself is hard to optimize, as
1) it can be reassigned (arguments = 3)
2) it has to be "live": changing an argument will get reflected in arguments
Therefore it can only be optimized if you directly access it, as the compiler then might replace the array-like accessor with a variable reference:
function slow(a) {
    console.log(arguments[0]);
}

// can be turned into this by the engine:
function fast(a) {
    console.log(a);
}
This also works for loops if you inline them and fall back to another (maybe slower) version if the number of arguments changes:
function slow() {
    for (let i = 0; i < arguments.length; i++) {
        console.log(arguments[i]);
    }
}

slow(1, 2, 3);
slow(4, 5, 6);
slow("what?");

// can be optimized to:
function fast(a, b, c) {
    console.log(a);
    console.log(b);
    console.log(c);
}

function fast2(a) {
    console.log(a);
}

fast(1, 2, 3);
fast(4, 5, 6);
fast2("what?");
Now, however, if you call another function and pass in arguments, things get really complicated:
var leaking;

function cantBeOptimized(a) {
    leak(arguments); // uurgh
    a = 1; // this has to be reflected to "leaking" ....
}

function leak(stuff) { leaking = stuff; }

cantBeOptimized(0);
console.log(leaking[0]); // has to be 1
This can't really be optimized; it is a performance nightmare.
Therefore, calling another function and passing arguments along is a bad idea performance-wise.
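One common workaround, sketched here rather than taken from the answer, is to copy the values into a plain array (the same pattern the transpiled TypeScript above uses) before handing them to the other function, so the arguments object itself never escapes:
function someOtherFunction(args) { // hypothetical callee, just for illustration
    console.log(args.length);
}

function canBeOptimized() {
    var args = [];
    for (var i = 0; i < arguments.length; i++) {
        args[i] = arguments[i]; // copy into a real array; arguments never leaks
    }
    someOtherFunction(args);
}

canBeOptimized(1, 2, 3); // logs 3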

Efficiency of Javascript functions

Q1. I would like to confirm whether Version 1 of the code below is more efficient than Version 2. I'd like to know for future reference, so I can write code in the style of either V1 or V2.
Q2. How does one measure efficiency of the code? (Doesn't have to be in depth, I just want to have a rough idea)
Version 1:
function average(array) {
    return array.reduce(function(a, b) { return a + b; }) / array.length;
}
Version 2:
function average(array) {
    function plus(a, b) { return a + b; }
    return array.reduce(plus) / array.length;
}
Edit: assuming that at a later stage I would be writing much more complex code and I would like to get into the habit of writing efficient code now. I know that for simple one-liners there's no explicit difference.
These functions are equally efficient from a big-O perspective. The reason they are the same is that they both pass a function into reduce() (the way in which the function is declared is different, but it's the same underlying structure and thus the same efficiency). The functions are otherwise the same. If I were you, I'd opt for the second case, as it is probably easier to maintain.
If you want, you can use a speed test, but honestly it's a waste of your time as these two approaches are identical.
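If you do want rough numbers anyway, a minimal timing sketch could look like the following (the two versions are renamed average1 and average2 here so they can coexist; absolute numbers vary by engine):
// the two versions from the question, renamed so they can coexist
function average1(array) {
    return array.reduce(function(a, b) { return a + b; }) / array.length;
}
function average2(array) {
    function plus(a, b) { return a + b; }
    return array.reduce(plus) / array.length;
}

var data = [];
for (var i = 0; i < 100000; i++) data.push(i);

console.time("version 1");
for (var run = 0; run < 1000; run++) average1(data);
console.timeEnd("version 1");

console.time("version 2");
for (var run = 0; run < 1000; run++) average2(data);
console.timeEnd("version 2");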
With a sequential loop, it's far faster:
http://jsperf.com/anonymous-vs-named-function-passing/2
function average3(array) {
    var sum = 0;
    for (var i = 0, len = array.length; i < len; i++)
        sum += array[i];
    return sum / array.length;
}
You will have the same result, or the difference will be negligible; it depends on the JS engine and how good its optimizer is.
As others suggested you can use jsperf.com for speed tests.
But if you really care about performance then check third case here
http://jsperf.com/anonymous-vs-named-function-passing/3
They are the same... but you could optimize by avoiding creating a new closure for the sum function.
function sum(a, b) {
    return a + b;
}

function average(array) {
    return array.reduce(sum) / array.length;
}
This way, sum won't hold a reference to array in its context, and a new instance of sum with that context won't be necessary. Remember that a closure will hold a reference to the arguments of the function that contains it, even if you don't use those arguments.
This means that a new function sum won't be instantiated every time that you call average.

Why is recursion faster than a flat for loop for a summation function on JavaScript?

I'm working in a language that translates to JavaScript. In order to avoid some stack overflows, I'm applying tail call optimization by converting certain functions to for loops. What is surprising is that the conversion is not faster than the recursive version.
http://jsperf.com/sldjf-lajf-lkajf-lkfadsj-f/5
Recursive version:
(function recur(a0, s0){
    return a0 == 0 ? s0 : recur(a0 - 1, a0 + s0)
})(10000, 0)
After tail call optimization:
ret3 = void 0;
a1 = 10000;
s2 = 0;
(function(){
    while (!ret3) {
        a1 == 0
            ? ret3 = s2
            : (a1_tmp$ = a1 - 1,
               s2_tmp$ = a1 + s2,
               a1 = a1_tmp$,
               s2 = s2_tmp$);
    }
})();
ret3;
After some cleanup using Google Closure Compiler:
ret3 = 0;
a1 = 1E4;
for (s2 = 0; ret3 == 0;)
    0 == a1
        ? ret3 = s2
        : (a1_tmp$ = a1 - 1,
           s2_tmp$ = a1 + s2,
           a1 = a1_tmp$,
           s2 = s2_tmp$);
c = ret3;
The recursive version is faster than the "optimized" ones! How can this be possible, if the recursive version has to handle thousands of context changes?
There's more to optimising than tail-call optimisation.
For instance, I notice you're using two temporary variables, when all you need is:
s2 += a1;
a1--;
This alone practically reduces the number of operations by a third, resulting in a performance increase of 50%
In the long run, it's important to optimise what operations are being performed before trying to optimise the operations themselves.
EDIT: Here's an updated jsperf
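In context, the simplified loop being suggested might look something like this (a sketch based on the "optimized" version above, not the answer's actual jsperf code):
ret3 = void 0;
a1 = 10000;
s2 = 0;
while (!ret3) {
    if (a1 == 0) {
        ret3 = s2;
    } else {
        s2 += a1; // add the current value to the running total
        a1--;     // then count down
    }
}
ret3; // 50005000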
As Kolink says, what your piece of code does is simply add n to the total, reduce n by 1, and loop until n reaches 0.
So just do that:
n = 10000, o = 0; while(n) o += n--;
It's faster and more legible than the recursive version, and of course outputs the same result.
There are not as many context changes inside the recursive version as you expect, since the named function recur is contained in the scope of recur itself/they share the same scope. The reason for that has to do with the way JavaScript engines evaluate scope, and there are plenty of websites out there which explain this topic, so I will not do it here. At a second look you will notice that recur is also a so-called "pure" function, which basically means it never has to leave its own scope as long as the internal execution runs (simply put: until it returns a value). These two facts make it basically fast. I just want to mention here that the first example is the only tail-call-optimized one of all three; a TC optimization can only be done in recursive functions, and this is the only recursive one.
However, a second look at the second example (no pun intended) reveals that the "optimizer" made things worse for you, since it introduced scopes into the former pure function by splitting the operation into:
variables instead of arguments
a while loop
an IIFE (immediately invoked function expression) that separates the introduced inner and outer variables
Which leads to poorer performance since now the engine has to handle 10000 context changes.
To tell you the truth, I do not know why the third example performs worse than the recursive one, so maybe it has to do with:
the browser you use (ever tried another one and compared the results?)
the number of variables
stack frames created by for loops (never heard of that, though), which would have to do with the first example: the JS engine interprets a pure recursive function until it finds a return statement; if the last thing following the statement is a function call, it evaluates any expressions (if any) and variables to pass as arguments, calls the function, and throws away the frame
something, only the browser-vendors can truly tell you :)

Readability of anonymous closures in nested loops

A friend of mine got bitten by the all-too-famous 'anonymous functions in a loop' JavaScript issue. (It's been explained to death on SO, and I'm actually expecting someone to mark my question as a duplicate, which would probably be fair game.)
The issue amounts to what John Resig explained in this tutorial:
http://ejohn.org/apps/learn/#62
var count = 0;
for ( var i = 0; i < 4; i++ ) {
    setTimeout(function(){
        assert( i == count++, "Check the value of i." );
    }, i * 200);
}
To a new user it looks like it should work, but indeed "i always has the same value", they say; hence crying and gnashing of teeth.
I explained the issue with a lot of hand waving and some stuff about scopes, and pointed him to some solutions provided on SO or other sites (really, when you know it is such a common issue, google is your friend).
Of course the real answer is that in JS the scope is at the function level. So when the anonymous functions run, 'i' is not defined in the scope of any of them, but it is defined in the global scope, and it has the value of the end of the loop, 4.
Since we've all most likely been trained in languages that use block-level scope, it's yet-another-thing-that-js-does-a-little-bit-differently-than-the-rest-of-the-world (the meaning of "this", anyone?).
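(As an aside that isn't part of the original discussion: in engines that support ES6, declaring the counter with let gives exactly the block-level behaviour described here, because each loop iteration gets its own binding of i:)
var count = 0;
for (let i = 0; i < 4; i++) {
    setTimeout(function(){
        console.log(i === count++); // logs true four times: each callback sees its own i
    }, i * 200);
}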
What bugs me is that the common answer, actually even the one provided by John himself, is the following:
var count = 0;
for ( var i = 0; i < 4; i++ ) (function(i){
    setTimeout(function(){
        assert( i == count++, "Check the value of i." );
    }, i * 200);
})(i);
Which obviously works and demonstrates mastery of the language and a taste for nested parentheses that suspiciously makes you look like a LISP-er.
And yet I can't help thinking that other solutions would be much more readable and easier to explain.
This one just pushes the anonymous closure a bit closer to the setTimeout (or closer to the addEventListener in 99.9% of the cases where it bites someone):
var count = 0;
for (var i = 0; i < 4; i++) {
    setTimeout((function (index) {
        return function () {
            assert( index == count++, "Check the value of i." );
        };
    })(i), i * 200);
}
This one is about explaining what we're doing, with an explicit function factory :
var count = 0;

function makeHandler(index) {
    return function() {
        assert(index == count++);
    };
}

for (var i = 0; i < 4; i++) {
    setTimeout(makeHandler(i), i * 200);
}
Finally, there is another solution that removes the problem altogether, and would seem even more natural to me (although I agree that it somehow sidesteps the problem 'by accident')
var count = 0;

function prepareTimeout(index) {
    setTimeout(function () {
        assert(index == count++);
    }, index * 200);
}

for (var k = 0; k < 4; k++) {
    prepareTimeout(k);
}
Are those solutions entirely equivalent, in terms of memory usage, number of scope created, possible leaking ?
Sorry if this is really a FAQ or subjective or whatever.
Solution #1
for (var i = 0; i < 4; i++) (function (i) {
    // scope #1
    setTimeout(function () {
        // scope #2
    }, i * 200);
})(i);
is actually quite nice as it reduces indentation and therefore perceived code complexity. On the other hand the for loop does not have its own block which is something jsLint (rightfully, in my opinion) would complain about.
Solution #2
for (var i = 0; i < 4; i++) {
    setTimeout((function (i) {
        // scope #1
        return function () {
            // scope #2
        };
    })(i), i * 200);
}
Is the way I would do it most of the time. The for loop has an actual block, which I find increases readability and falls more into place with what you'd expect from a conventional loop (as in "for x do y", as opposed to the "for x create an anonymous function and execute it right away" from #1). But that's in the eye of the beholder, really; the more experience you have, the more these approaches start to look the same to you.
Solution #3
function makeHandler(index) {
    // scope #1
    return function() {
        // scope #2
    };
}

for (var i = 0; i < 4; i++) {
    setTimeout(makeHandler(i), i * 200);
}
as you say makes things clearer in the for loop itself. The human mind can adapt easier to named blocks that do something predefined than to a bunch of nested anonymous functions.
function prepareTimeout(index) {
    // scope #1
    setTimeout(function () {
        // scope #2
    }, index * 200);
}

for (var k = 0; k < 4; k++) {
    prepareTimeout(k);
}
is absolutely the same thing. I don't see any "issue sidestepping" here; it's just equivalent to #3.
As I see it, the approaches do not differ in the least bit, semantically - only syntactically. Sometimes there are reasons to prefer one over the other (the "function factory" approach is very re-usable, for example), but that doesn't apply to the standard situation you describe.
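For instance, here is a sketch of that reusability argument (not from the original answer; buttons is a hypothetical list of DOM elements): the same factory can feed both a timer and the addEventListener case mentioned in the question.
function makeHandler(index) {
    return function () {
        console.log("handler for index", index);
    };
}

for (var i = 0; i < 4; i++) {
    setTimeout(makeHandler(i), i * 200);
    // the very same factory works for event listeners:
    // buttons[i].addEventListener("click", makeHandler(i));
}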
In any case, there are three concepts that a new user of JavaScript must get his head around:
functions are objects and can be passed around like integers (and therefore they don't need a name)
function scope and preservation of scope (how closures work)
how asynchronicity works
Once these concepts have sunken in, there will be a point at which you no longer see such a big difference in those approaches. They are just different ways to "put it". Until then you simply choose the one you are most comfortable with.
EDIT: You could argue that that point is when you transition from a "you must do it like this" attitude to a "you could do it like this, or like this, or like this" attitude. This applies to programming languages, cooking and pretty much anything else.
To say something about the implicit question in the title: Readability is also in the eye of the beholder. Ask any Perl programmer. Or someone comfortable with regular expressions.
In my opinion the last pattern (using a predefined named function) is the most readable, most 'debuggable' and, on top of that, the most usable in my editors (KomodoEdit or Visual Studio in combination with ReSharper 6.0), where it's easy to jump to the function definition from the function call. It just requires a bit more discipline in coding.

javascript functions and arguments object, is there a cost involved

It is commonplace to see code like this around the web and in frameworks:
var args = Array.prototype.slice.call(arguments);
In doing so, you convert the arguments object into a real Array (as much as JS has real arrays anyway), which allows whatever array methods you have on the Array prototype to be applied to it, etc.
I remember reading somewhere that accessing the arguments object directly can be significantly slower than an Array clone or than the obvious choice of named arguments. Is there any truth to that, and under what circumstances/browsers does it incur a performance penalty? Any articles on the subject you know of?
Update: an interesting find from http://bonsaiden.github.com/JavaScript-Garden/#function.arguments invalidates what I read previously... Hoping the question gets some more answers from the likes of @Ivo Wetzel, who wrote this.
At the bottom of that section it says:
Performance myths and truths
The arguments object is always created, with the only two exceptions being the cases where it is declared as a name inside of a function or one of its formal parameters. It does not matter whether it is used or not.
This goes in conflict with http://www.jspatterns.com/arguments-considered-harmful/, which states:
However, it's not a good idea to use arguments, for reasons of:
performance
security
The arguments object is not automatically created every time the function is called; the JavaScript engine will only create it on demand, if it's used. And that creation is not free in terms of performance. The difference between using arguments vs. not using it could be anywhere between 1.5 times to 4 times slower, depending on the browser.
Clearly, both can't be correct, so which one is it?
ECMA die-hard Dmitry Soshnikov said:
Which exactly "JavaScript engine" is meant? Where did you get this exact info? Although, it can be true in some implementations (yep, it's a good optimization, as all the needed info about the context is available on parsing the code, so there's no need to create the arguments object if it was not found on parsing), but as you know, ECMA-262-3 states that the arguments object is created each time on entering the execution context.
Here's some quick-and-dirty testing. Using predefined arguments seems to be the fastest, but it's not always feasible to do this. If the arity of the function is unknown beforehand (so, if a function can or must receive a variable number of arguments), I think calling Array.prototype.slice once would be the most efficient way, because in that case the performance loss of using the arguments object is at its smallest.
The arguments object has two problems: one is that it's not a real array. The second is that it always includes all of the arguments, including the ones that were explicitly declared. So, for example:
function f(x, y) {
    // arguments also includes x and y
}
This is probably the most common problem: you want to have the rest of the arguments, without the ones you already have in x and y, so you would like to have something like this:
var rest = arguments.slice(2);
but you can't, because arguments doesn't have the slice method, so you have to apply Array.prototype.slice manually.
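The manual version would look something like this (the same pattern as the snippet at the top of the question, just starting the copy at index 2):
function f(x, y) {
    // a real array containing everything after the two named parameters
    var rest = Array.prototype.slice.call(arguments, 2);
    return rest;
}

f(1, 2, 3, 4); // [3, 4]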
I must say that I haven't seen people convert all of the arguments to a real array just for the sake of performance, only as a convenience to call Array methods. I'd have to do some profiling to know what is actually faster, and it may also depend on what you need it to be faster for, but my guess would be that there's not much of a difference if you don't want to call Array methods; if you do, you have no choice but to convert it to a real array or apply the methods manually using call or apply.
The good news is that in new versions of ECMAScript (Harmony?) we'll be able to write just this:
function f(x, y, ...rest) {
    // ...
}
and we'll be able to forget all of those ugly workarounds.
No one's done testing on this in a while, and all the links are dead. Here's some fresh results:
function loop() {
    var res = []
    for (var i = 0, l = arguments.length; i < l; i++) {
        res.push(arguments[i])
    }
    return res
}

function loop_variable() {
    var res = []
    var args = arguments
    for (var i = 0, l = args.length; i < l; i++) {
        res.push(args[i])
    }
    return res
}

function slice() {
    return Array.prototype.slice.call(arguments);
}

function spread() {
    return [...arguments];
}

function do_return() {
    return arguments;
}

function literal_spread() {
    return [arguments[0], arguments[1], arguments[2], arguments[3], arguments[4], arguments[5], arguments[6], arguments[7], arguments[8], arguments[9]];
}

function spread_args(...args) {
    return args;
}
I tested these here: https://jsben.ch/bB11y, as do_return(0,1,2,3,4,5,6,7,8,9) and so on. Here are my results on my Ryzen 2700X, on Linux 5.13:
                  Firefox 90.0   Chromium 92.0
do_return         89%            100%
loop_variable     74%            77%
spread            63%            29%
loop              73%            94%
literal_spread    86%            100%
slice             68%            81%
spread_args       100%           98%
I would argue against the accepted answer.
I edited the tests, see here: http://jsperf.com/arguments-performance/6
I added a test for the slice method and a test for memory copy to a preallocated array. The latter is multiple times more efficient on my computer.
As you can see, the first two memory copy methods in that performance test page are slow not due to the loops, but due to the push call.
In conclusion, slice seems to be almost the worst method for working with arguments (not counting the push methods, since they are not even much shorter in code than the much more efficient preallocation method).
Also, it might be of interest that apply behaves quite well and does not have much of a performance hit by itself.
First existing test:
function f1() {
    for (var i = 0, l = arguments.length; i < l; i++) {
        res.push(arguments[i])
    }
}
Added tests:
function f3() {
    var len = arguments.length;
    res = new Array(len);
    for (var i = 0; i < len; i++)
        res[i] = arguments[i];
}

function f4() {
    res = Array.prototype.slice.call(arguments);
}

function f5_helper() {
    res = arguments;
}
function f5() {
    f5_helper.apply(null, arguments);
}

function f6_helper(a, b, c, d) {
    res = [a, b, c, d];
}
function f6() {
    f6_helper.apply(null, arguments);
}
