Reducing JavaScript Function Call Overhead

I have seen JavaScript code that claims to reduce function call overhead, like this:
function foo() {
  // do something
}
function myFunc() {
  var fastFoo = foo; // this caches function foo locally for faster lookups
  for (var i = 0; i < 1000; i++) {
    fastFoo();
  }
}
I do not see how this can reduce JavaScript function call overhead, since it seems to me that it is just a memory lookup either way, either at the top of the current stack (for fastFoo) or somewhere else (I am not sure where the global context is stored... anyone?).
Is this a relic of ancient browsers, a complete myth, or a genuine improvement?

It all depends on scope. Accessing the local scope is always faster than accessing a parent scope, so if the function is defined in a parent scope you will often see a speedup from making a local reference.
Whether this speedup is significant depends on many things, and only testing your particular case will show whether it is worth doing.
The difference in speed depends on the difference in scope.
Calling a.b.c.d.e.f.g.h(); from the scope of x.y.z is slower than calling a.b(); from the scope of a.b.c (not the prettiest or most correct example, but it should serve its purpose :)
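To make the idea concrete, here is a minimal sketch of the pattern (the app.utils.math.square name is made up for illustration, it is not from the question): the nested reference is resolved once and cached in a local variable, so the loop body only touches the local.
var app = { utils: { math: { square: function (n) { return n * n; } } } };
function processAll(items) {
  // Cache the nested lookup in a local once instead of resolving
  // app.utils.math.square on every iteration.
  var square = app.utils.math.square;
  var results = [];
  for (var i = 0; i < items.length; i++) {
    results.push(square(items[i]));
  }
  return results;
}
console.log(processAll([1, 2, 3])); // [1, 4, 9]
Whether this measurably helps depends entirely on the engine and the workload, so treat it mostly as a readability choice unless profiling says otherwise.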

This will result in an infinitesimal performance gain.
Don't do it.

Related

Should closures be used to structure code even if they do not depend on lexical scope? How about redeclaration inside loops?

2 questions:
Closures have the advantage of being able to access the outer scopes and therefore are a great tool in our toolbox.
Is it frowned upon just using them to structure the program if scoping is not needed?
foo = () => {
  closure = (_) => {
    ...
  }
  if (...) {
    closure(bar);
  } else {
    closure(baz);
  }
}
In this case the function does not depend on the scope and could
be moved one level higher without change in functionality. Semantically it makes sense to place it there since it will only be used inside foo.
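To make that concrete, a rough sketch of what the hoisted version might look like (formatLabel and its arguments are invented for illustration, not taken from the question):
// Helper moved one level up; it is still only used by foo.
const formatLabel = (value) => 'label: ' + value;
const foo = (useBar) => {
  if (useBar) {
    return formatLabel('bar');
  } else {
    return formatLabel('baz');
  }
};
console.log(foo(true)); // "label: bar"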
How do closures behave if they are declared inside loops? Does redeclaration hurt performance?
foo.forEach(x => {
  closure = () => ...
})
Is it frowned upon just using them to structure the program if scoping is not needed?
There is no one way to write JavaScript code (or any other code). This part of the question calls for opinion, which is off-topic for SO. :-)
There are a couple of objective observations that can be made about doing that:
It keeps the functions private, they can only be used in the function they're created in (assuming you don't return them out of it or assign them to variables declared in an outer scope). That could be argued as being good (for encapsulation) and as bad (limited reuse).
Modules probably reduce the desire to do this a bit (though not entirely).
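As a rough sketch of the module alternative (the file names and helper are hypothetical): the helper lives in its own module and is imported where it is needed, instead of being nested inside foo for privacy.
// helpers.js (hypothetical file)
export const closure = (value) => {
  // helper logic lives in its own module instead of being nested in foo
  return value * 2;
};
// main.js (hypothetical file)
import { closure } from './helpers.js';
const foo = (useBar) => (useBar ? closure(1) : closure(2));
console.log(foo(true)); // 2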
How do closures behave if they are declared inside loops?
A couple of things I need to call out about your example relative to the question you've asked there:
You haven't declared a function at all. You've created one, via a function expression, but you haven't declared one. (This matters to the answer; we'll come back to it in a moment.)
Your example doesn't create a function in a loop, it creates it inside another function — forEach's callback. That function is called several times, but it isn't a loop per se.
This code creates a function in a loop:
for (const value of something) {
  closure = () => {
    // ...
  };
}
It works just like creating a function anywhere else: A new function object is created each time, closing over the environment where it was created (in this case, the environment for each iteration of the loop). This can be handy if it's using something specific to the loop iteration (like value above).
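A small illustration of that per-iteration capture (the array and callbacks are made up):
const callbacks = [];
for (const value of [1, 2, 3]) {
  // Each iteration creates a new function that closes over that
  // iteration's own `value` binding.
  callbacks.push(() => console.log(value));
}
callbacks.forEach(cb => cb()); // logs 1, then 2, then 3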
Declaring a function in a loop looks like this:
for (const value of something) {
  function foo() {
    // ...
  }
}
Never do that in loose-mode code, only do it in strict mode (or better yet, avoid doing it entirely). The loose-mode semantics for it aren't pretty because it wasn't specified behavior for a long time (but was an "allowed extension") and different implementations handled it in different ways. When TC39 specified the behavior, they could only specify a subset of situations that happened to be handled the same way across all major implementations.
The strict mode semantics for it are fairly reasonable: A new function object is created every time and the function's identifier exists only in the environment of the loop iteration. Like all function declarations, it's hoisted (to the top of the block, not the scope enclosing the loop):
"use strict";
const something = [1, 2, 3];
console.log(typeof foo); // Doesn't exist here
for (const value of something) {
foo();
function foo() {
console.log(value);
}
}
console.log(typeof foo); // Doesn't exist here
Does redeclaration hurt performance?
Not really. The JavaScript engine only has to parse the code once, creating the bytecode or machine code for the function, and then can reuse that bytecode or machine code when creating each of the function objects and attaching them to the environment they close over. Modern engines are very good at that. If you're creating millions of temporary objects, that might cause memory churn, but only worry about it if and when you have a performance problem that you've traced to a place where you've done it.
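If profiling ever did point at a hot loop like that, one possible restructuring (sketched here with made-up values) is to create the function once outside the loop, which only works when it no longer needs per-iteration state:
// Created fresh on every iteration: fine in almost all cases, but it
// allocates a new function object each time around the loop.
for (const value of [1, 2, 3]) {
  const log = () => console.log(value);
  log();
}
// Created once and reused: possible only because this version does not
// close over anything specific to a single iteration.
const logValue = (value) => console.log(value);
for (const value of [1, 2, 3]) {
  logValue(value);
}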

JS Variable Scope Lookup vs Allocation

If I am applying a function to a JavaScript array, i.e.
var a = [1, 2, 3, 4];
var x;
a.forEach(function(val) {
  x = val + 1;
  // do some stuff with x
});
Is it better to leave the variable declaration in the outer scope or to put the declaration inside the function? i.e.
var a = [1, 2, 3, 4];
a.forEach(function(val) {
  var x = val + 1;
  // do some stuff with x
});
What I do not know is whether the memory allocation for x is more expensive than the variable lookup.
If you do not need the variable outside the forEach callback, put it inside. It is simply cleaner that way. Memory is not an issue.
It's better to put it inside the loop. There is no performance difference. However, using scope correctly is a good starting point for catching errors:
var something = 5;
a.forEach(el => something = el);
console.log(something); // what is something? Why is it set?
What I do not know is if the memory allocation for x is more expensive than the variable lookup process
It depends. It depends on the exact code, the size of the array, how frequently the code is run, how many scopes there are between the code accessing the variable and the scope where the variable is declared, what optimizations the JavaScript engine on which the code is running does, etc.
So: Write the clearest, simplest code you can, and if you run into a performance problem related to that code, profile changes to see how to address it. Anything else is premature optimization and a waste of your time.
However: With the example given, if you declare it within the callback, x will be a local stack variable, and those are extremely cheap to allocate (in terms of execution time). I can't see that it would really matter either way (if you declare it outside, it's just one scope away), but it happens that in this case, the simplest, cleanest code (declaring it within the callback) is at least likely not to be worse than the alternative, and I'd say almost certainly better. But again: If it's an issue, profile the real code.
Practically, this choice controls two things: scope and initialization.
Scope: if you need to use the variable outside the loop, you have to declare it outside; otherwise you can declare it inside, where it is scoped to each iteration, which probably saves a little memory in the outer scope.
Initialization: in large programs initialization time can matter; each time the loop body runs, the variable is declared again, which wastes a little time.

JavaScript variable resolution

I am pondering this:
function outer()
{
  var myVar = 1;
  function inner()
  {
    alert(myVar);
  }
}
Now, as I understand it, this will result in two lookups for the variable - one to check the local variables in the inner function and one in the outer function, at which point the variable is found.
The question is - will this be a particularly large drain on performance when compared to this:
function myFunc ()
{
  var myVar = 1;
  alert(myVar);
}
Which would only require the one lookup for the variable - it's then found as a local variable.
In older JS engines, scope lookups could have some effect on performance.
However, even years back it was a very very minor difference - not really something you had to worry about.
Today's engines are most likely capable of optimizing lookups like this, and in general their performance is much much better. Unless you're writing something completely crazy or targeting a device with very poor performance, this is not something you need to worry about.

Are there downsides to using var more than once on the same variable in JavaScript

Before asking my question, let me give a disclaimer. I know what var does, I know about block scope, and I know about variable hoisting. I'm not looking for answers on those topics.
I'm simply wondering if there is a functional, memory, or performance cost to using a variable declaration on the same variable more than once within a function.
Here is an example:
function foo() {
  var i = 0;
  while (i++ < 10) {
    var j = i * i;
  }
}
The previous could just as easily have been written with the j variable declared at the top:
function foo() {
  var i = 0, j;
  while (i++ < 10) {
    j = i * i;
  }
}
I'm wondering if there is any actual difference between these two methods. In other words, does the var keyword do anything other than establish scope?
Reasons I've heard to prefer the second method:
The first method gives the appearance of block scope when it's actually function scoped.
Variable declarations are hoisted to the top of the scope, so that's where they should be defined.
I consider these reasons to be good but primarily stylistic. Are there other reasons that have more to do with functionality, memory allocation, performance, etc.?
In JavaScript: The Good Parts, Douglas Crockford suggests that by using the second method and declaring your variables at the top of their scope you will more easily avoid scope bugs.
These are often caused by for loops and can be extremely difficult to track down, as no errors will be raised. For example:
function example() {
  for (var i = 0; i < 10; i++) {
    // do something 10 times
    for (var i = 0; i < 5; i++) {
      // do something 5 times
    }
  }
}
When the declarations are hoisted we end up with only one i, and thus the inner loop keeps resetting the outer loop's counter, giving us an endless loop.
You can also get some bizarre results when dealing with function hoisting. Take this example:
(function() {
  var condition = true;
  if (condition) {
    function f() { console.log('A'); };
  } else {
    function f() { console.log('B'); };
  }
  f(); // prints 'B' in older engines, 'A' in engines that follow the ES2015 semantics
})();
In older engines both function declarations were hoisted, so the second overwrote the first and this printed 'B'. Modern engines only assign f when the branch containing its declaration actually runs, so they print 'A' instead. Either way the behavior is confusing and has historically varied between implementations.
Because searching for bugs like this is hard and regardless of any performance issues (I rarely care about a couple of microseconds), I always declare my variables at the top of the scope.
There will not be any difference during execution. There might be an imperceptibly small difference in interpretation/compilation time, but that of course is implementation dependent. There might also be a few bytes of difference in the size of the file, which could affect download time. I don't think either of these is worth being bothered about.
As you already know, any variable declaration will be hoisted to the top of the function. The important thing to note is that this occurs during the interpretation/compilation process, not during execution.
Before a function is executed, the function must be parsed. After each function is parsed, they will both have all of the variable declarations moved to the top, which means that they will be identical and there will be no execution time cost incurred.
For the same reason, there are no memory cost differences. After parsing, there will be no differences at all.
Since you are not asking about style I am not telling you which I think is better. But I will say that the only reason you should prefer one over the other is style.
Style
Subjective. I prefer the approach that keeps the var close to the usage site, but I always keep the scoping rules in mind. I also avoid combining multiple declarations into a single var and prefer multiple var statements.
Memory allocations
Variables are not objects: not applicable. Due to the hoisting rules, the variable "slot" has the same lifetime in all cases.
Performance
No. There should be no difference in terms of performance. While an implementation could technically really mess this up - they don't.
The only way to answer this (besides looking at every implementation in minutia) is to use a benchmark.
Result: noise differences on modern browsers
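A rough sketch of such a benchmark, using the two versions of foo from the question (console.time numbers are noisy and engine-dependent, so treat the output as indicative at best):
function declaredInLoop() {
  var i = 0;
  while (i++ < 10) {
    var j = i * i;
  }
  return j;
}
function declaredAtTop() {
  var i = 0, j;
  while (i++ < 10) {
    j = i * i;
  }
  return j;
}
console.time('var in loop');
for (var n = 0; n < 1e6; n++) declaredInLoop();
console.timeEnd('var in loop');
console.time('var at top');
for (var m = 0; m < 1e6; m++) declaredAtTop();
console.timeEnd('var at top');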
The second way saves 4 characters in the JavaScript file, but in terms of actual code generation or execution speed I don't believe there is any difference at all.
Your example will function exactly the same either way. Using 'var' inside a block can be even less clear than your example, though. Say, for instance, you want to conditionally either update a variable from an outer scope (by omitting 'var') or use a local 'var' instead. Even if that condition is never true, 'j' still becomes local to the function. It turns out not to be as trivial as it looks.
var j = 1;
function foo () {
  j = 2;
  if (false) {
    var j = 3; // makes `j = 2` local simply for being in the same function
  }
  console.log(j);
}
foo(); // outputs 2
console.log(j); // outputs 1
That's one tricky case that may not work as you expect just by looking at the code.
There are no downsides to declaring every 'var' on top, only up-sides.

Can nsiTimer cause overflow or memory issues upon repeated use?

In the mozilla docs it says:
initWithCallback(): Initialize a timer to fire after the given millisecond interval. This version takes a function to call and a closure to pass to that function.
In this code example:
setupTimer: function() {
  var waitPeriod = getNewWaitPeriod();
  myTimer.initWithCallback({
      notify: function(t) {
        foo();
        setupTimer();
      }
    },
    waitPeriod,
    Components.interfaces.nsITimer.TYPE_ONE_SHOT);
}
How much is actually included in the closure that's passed to the function? Does the closure keep a copy of the entire stack? Is this code sample at risk of a stack overflow or ever-increasing memory usage?
In theory the closure keeps everything that's in scope for the closure (so in this case the local variables in setupTimer plus whatever variables setupTimer itself closes over). Note that this is different from the callstack: closure scope in JS is lexical, not dynamic, so it doesn't matter how you reached your function, only what the source of the function looks like.
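To illustrate the lexical-versus-dynamic point with a small, hypothetical example (plain setTimeout standing in for nsITimer): what the callback can see is determined by where it is written, not by the call stack that eventually schedules or invokes it.
function makeCallback() {
  var localDetail = 'visible to the callback'; // lexically in scope for the callback
  return function() {
    console.log(localDetail); // resolved through the closure, not the caller's stack
  };
}
function scheduler() {
  var schedulerSecret = 'not captured'; // only on the caller's stack, not in the closure
  setTimeout(makeCallback(), 0); // the callback cannot see schedulerSecret
}
scheduler(); // logs "visible to the callback"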
In practice JS engines optimize closures heavily to speed up access to barewords in closures, so the set of things the closure actually keeps alive might be smaller than the theoretical set I describe above. But I wouldn't depend on that.
