Effect of function location on overall performance - javascript

I have a rather large javascript application and am trying to optimize performance. If I have a loop that will execute a small function thousands of times, does putting the small function far away, code-wise, from the calling function have any performance implications? Thank You.

There is no performance difference between calling a function on each iteration and doing the work inline, and it doesn't matter where in the file the function is declared. Each function and variable, according to its size, gets its own space in RAM at a specific location. JavaScript knows where that function or variable is located in memory because we assign it a name like foo.
for(var i = 0; i < 1000; i++) foo(i);

// ... your 300 lines of other code ...

function foo(i) {
    document.body.innerHTML += i+"<br />";
}
or
for(var i = 0; i < 1000; i++) {
    document.body.innerHTML += i+"<br />";
}
You can use whichever style you like. Once declared, a function has a fixed position in memory and can therefore be called from anywhere. You could even call it from Europe, if that's where it happened to live.
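One way to see that source position is irrelevant: function declarations are hoisted, so the call site can even appear before the declaration in the file. A minimal sketch:

// foo is hoisted, so the call can come first:
for (var i = 0; i < 3; i++) foo(i);

// ...hundreds of lines later...

function foo(i) {
    console.log(i);
}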

Related

For loop skipping iterations

I have a for loop which will iterate through all choices and set a value for them.
For some reason, when I run the code it does the first iteration, then skips to the last and does that one 3 times or so, according to the console.
code:
for (i = 0; i < 4; i += 1) {
    console.log(i)
    var generated = word
    while (generated == word) {
        generated = wordsJson.characters[Math.floor(Math.random() * wordsJson.characters.length)]
    }
    choices[i].innerHTML = translate(generated)
}
What I get in console:
0
(3) 3
This is my first time asking something on stackoverflow. If you need more information, please ask.
It appears that the variable i is getting modified outside of the for loop.
Typically you would want to declare your iterator variable (in this case i) so that it's scoped to the loop, which would look like:
for (let i = 0; i < 4; i += 1) { ... }
Note, specifically, the addition of let. Since you haven't done that, it means that either i is already explicitly declared somewhere or, if not, that you've created a new global variable i.
Since you've got code you haven't shown that also seems to relate to i (choices[i]), and methods whose exact behaviour we don't know (translate()), it's hard to say for certain, but that would be the first place to look.
If not, posting some additional code so we can see the other functionality would be helpful.
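For illustration, here is a minimal sketch of that failure mode, assuming (hypothetically) that translate(), or something it calls, also uses an undeclared i:

var choices = [];

// Hypothetical stand-in for translate(): its loop also uses an undeclared i,
// so it shares (and overwrites) the caller's global i.
function translate(word) {
    for (i = 0; i < word.length; i += 1) { /* ... */ }
    return word.toUpperCase();
}

for (i = 0; i < 4; i += 1) {
    console.log(i);                 // logs 0, then the loop exits early
    choices[i] = translate("abcd"); // leaves the shared global i at 4
}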

Optimizing stack memory read/writes in JavaScript?

Let's say we have the following function with a loop that writes some memory variable within each iteration of the loop.
function myFunction() {
    for (var i = 0; i < 10000; i++) {
        let myStackVariable = i;
        // do something with myStackVariable
    }
}
But let's say we re-write this loop to declare the variable only once outside of the loop and then re-assign it during the loop.
function myFunction() {
    let myStackVariable;
    for (var i = 0; i < 10000; i++) {
        myStackVariable = i;
        // do something with myStackVariable
    }
}
I'm pretty sure that any decent C compiler will optimize both versions down to a single variable on the stack and consistently use that memory location.
Will JavaScript likely do the same? Is there any performance benefit to the latter? Is it worth writing it one way rather than the other? I know, I know - premature optimization is the root of all evil - but I am more curious than anything.
I tried running tests with n = 100000000 and the results were the same, but I'm unsure if my example was just too simple since there are no recursions, etc.
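A rough sketch of such a test (timings are indicative at best, since the JIT is free to compile both forms to the same machine code):

function declaredInside() {
    var sum = 0;
    for (var i = 0; i < 100000000; i++) {
        let myStackVariable = i; // fresh binding each iteration
        sum += myStackVariable;
    }
    return sum;
}

function declaredOutside() {
    var sum = 0;
    let myStackVariable;
    for (var i = 0; i < 100000000; i++) {
        myStackVariable = i;     // single binding, reassigned
        sum += myStackVariable;
    }
    return sum;
}

for (var run = 0; run < 3; run++) {
    var t = Date.now();
    declaredInside();
    console.log("inside ", Date.now() - t);
    t = Date.now();
    declaredOutside();
    console.log("outside", Date.now() - t);
}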

Javascript, counters and real-time applications

I'm currently working on a game and I have decided to go with JavaScript to make a prototype. During development I noticed that I use a lot of counters in this way:
function update() {
    for(var i = 0; i < n; i++) {
        // do stuff
    }
    for(var i = 0; i < m; i++) {
        // do other stuff
    }
}
Keeping in mind that this is a real-time application, so the update function is executed almost 60 times per second, we can say that I'm creating a lot of variables. I was wondering how that piece of code affects performance (does the JavaScript engine make some optimization here?) and how the garbage collector behaves in this situation (I don't even know how the GC manages primitive types...).
For now I changed the code to look like this:
var counters = {};

function update() {
    for(counters['taskA'] = 0; counters['taskA'] < n; counters['taskA']++) {
        // do stuff
    }
    for(counters['taskB'] = 0; counters['taskB'] < m; counters['taskB']++) {
        // do other stuff
    }
}
Does this code make any difference?
There shouldn't be any significant performance difference. However, the counters variable will not be garbage collected if it's in the global scope. It only gets GCed when it goes out of scope, so if it's within another function that will be mostly fine.
In your first example, the i variables definitely get GCed, as they're within the update function.
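Worth noting: with var there is only ever one i per function anyway, because var declarations are hoisted to function scope. A minimal illustration:

function update() {
    // Both "var i" declarations below share a single hoisted binding.
    for (var i = 0; i < 3; i++) { /* do stuff */ }
    console.log(i); // 3 - i is still in scope after the first loop
    for (var i = 0; i < 5; i++) { /* do other stuff */ }
    console.log(i); // 5 - same variable, not a new one
}
update();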

Iterator scope not local?

I was just writing two JavaScript functions, one of which took in a long string, looped over it until it hit a space, then called the other function to print the input before the space into the DOM. The first function would then continue on with input after the space, hit a space, call the print function, etc.
In the process, I kept hitting infinite loops, but only if the string contained a space. I couldn't figure out why, since all the looping seemed to be set up properly. I ultimately figured out that my iterator variable was jumping scope out of my second function printMe and back into the first, readAndFeed, and because of the way the functions were set up, it would always come back as a lower number than the terminating value if there was a space involved.
The first function's loop looked like this:
function readAndFeed(content){
    var output = "";
    var len = content.length;
    for(i = 0; i < len; i++)
    {
        console.log(i+" r and f increment")
        if(content[i] == (" "))
        {
            printMe(output);
            output = "";
        }
        else if(i==len-1){
            output += content[i];
            printMe(output)
        }
        else
        {
            output += content[i]
        }
    }
}
The second function is printMe(), and it looped over the string, broke it into three bits, looped over each of them separately (not in a nested fashion), and printed them to the DOM. I used similar loops in it, and I also used i as an iterator.
This would loop over strings with no spaces just fine, but if I threw a space in there, the browser would crash. I tried a bunch of different stuff, but ultimately (by logging the iterator values) realized something was up with i. What worked was changing the i in the printMe function to a j.
I'm confused; this doesn't seem like how I understand variable scope. The functions are defined separately, so it seems like the iterators should be local to those functions and not able to jump out of one into the other.
Here's a jsfiddle
Uncomment the "is an example" part at the bottom to crash your browser. Again, changing the i variables to j in the printMe function completely solved this, but whaaa?
When you don't declare a variable, it is implicitly global. Since you've not declared your loop iteration index i, it is global. If you do that in multiple functions, those globals will collide and one function will accidentally modify the other's variable.
The solution is to make SURE your local variables are declared with var as in:
for (var i = 0; i < len; i++) {
In your case, you need to fix both readAndFeed() and printMe() as they both have the same issue and thus they both try to use the global i. When you call one from the other, it trounces the original's use of i. Here's a fixed version of readAndFeed():
function readAndFeed(content) {
    var output = "";
    var len = content.length;
    // add var here before i
    for (var i = 0; i < len; i++) {
        console.log(i + " r and f increment")
        if (content[i] == (" ")) {
            printMe(output);
            output = "";
        } else if (i == len - 1) {
            output += content[i];
            printMe(output)
        } else {
            output += content[i]
        }
    }
}
If you run your JavaScript code in strict mode, then trying to use an undeclared variable actually causes an error (rather than implicitly making it a global), so you can't accidentally shoot yourself in the foot like this.
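A minimal illustration:

"use strict";

function readAndFeed(content) {
    for (i = 0; i < content.length; i++) { // ReferenceError: i is not defined
        // ...
    }
}

readAndFeed("abc");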
In your example, i is in fact a global variable. Any assignment without var statement to an undeclared variable declares an implicit global.
To make i local, just include var:
for (var i = 0; i < len; i++)

Is not having local functions a micro optimisation?

Would moving the inner function outside of this one, so that it's not created every time the function is called, be a micro-optimisation?
In this particular case the doMoreStuff function is only used inside doStuff. Should I worry about having local functions like these?
function doStuff() {
    var doMoreStuff = function(val) {
        // do some stuff
    }
    // do something
    for (var i = 0; i < list.length; i++) {
        doMoreStuff(list[i]);
        for (var j = 0; j < list[i].children.length; j++) {
            doMoreStuff(list[i].children[j]);
        }
    }
    // do some other stuff
}
An actual example would be, say:
function sendDataToServer(data) {
    var callback = function(incoming) {
        // handle incoming
    }
    ajaxCall("url", data, callback);
}
Not sure if this falls under the category "micro-optimization". I would say no.
But it depends on how often you call doStuff. If you call it often, then creating the function over and over again is just unnecessary and will definitely add overhead.
If you don't want to have the "helper function" in global scope but avoid recreating it, you can wrap it like so:
var doStuff = (function() {
    var doMoreStuff = function(val) {
        // do some stuff
    }
    return function() {
        // do something
        for (var i = 0; i < list.length; i++) {
            doMoreStuff(list[i]);
        }
        // do some other stuff
    }
}());
As the function which is returned is a closure, it has access to doMoreStuff. Note that the outer function is immediately executed ( (function(){...}()) ).
Or you create an object that holds references to the functions:
var stuff = {
    doMoreStuff: function() {...},
    doStuff: function() {...}
};
More information about encapsulation, object creation patterns and other concepts can be found in the book JavaScript Patterns.
For optimal speed with a nested function (a function within the internal scope of an outer function), I suspect you should use declarations, not expressions.
The question asks about "local functions" and optimization, but doesn't specify how the local functions are created. It should, because the answer probably differs between the techniques by which the "inner function" can be created.
Looking at the answer and test results by #cleong, I suspect that only his answer uses the optimal technique for function creation. There are three ways to create a function, and #cleong is showing us the one that provides fast execution. The three techniques are:
constructor
declaration
expression
The constructor isn't used much; it requires a string containing the text of the function body. It would be useful in reflective programming, where you do a toString() to get the function body, modify it, then construct a new function. And that, of course, is more-or-less never done.
Declaration is used, but mostly for outer functions, not inner functions (by "inner function" I mean a function nested within another). Yet, based upon #cleong's tests, it seems to be very fast; just as fast as an outer function.
Expressions are what everyone uses. This might not be the best idea; but it's what everyone does.
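For reference, the three look something like this (a minimal sketch):

// 1. Constructor: the body is supplied as a string
var byConstructor = new Function("n", "return n + 2;");

// 2. Declaration: a statement, hoisted together with its body
function byDeclaration(n) { return n + 2; }

// 3. Expression: a value, assigned like any other
var byExpression = function(n) { return n + 2; };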
One major difference between function declarations and function expressions is that the declarations are subject to hoisting. Everyone knows that "var" declarations are hoisted; but so are "function" declarations. For things that are hoisted, computations are performed at compile time to determine the memory space that will be needed for the thing. Presumably, one would expect that the inner function is compiled at compile time, and can run much as would a compiled outer function.
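A quick sketch of that difference:

hoisted();    // works: the declaration is hoisted together with its body
function hoisted() { console.log("declaration"); }

notHoisted(); // TypeError: notHoisted is not a function
              // (the var is hoisted, but its value is assigned only below)
var notHoisted = function() { console.log("expression"); };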
I have a copy of Flanagan's "The Definitive Guide" from about six years ago, and I remember reading the reverse of what I just wrote here. He said something like: expressions are compiled, and declarations are not. While he is the world's "definitive guide" to JavaScript, I have always suspected he might have gotten this one mixed up and backwards. I suspect that inner function declarations are more "ready to go" than function expressions. The test results on this Stack Overflow page seem to confirm my long-held suspicions.
Looking at the #cleong test results, it just seems that declaration, not expression, is the way to go for inner functions, if optimal execution speed is a concern.
The original question was asked in 2011. Given the rise of Node.js since then, I thought it worth revisiting the issue. In a server environment, a few milliseconds here and there can matter a lot. It could be the difference between remaining responsive under load or not.
While inner functions are nice conceptually, they can pose problems for the JavaScript engine's code optimizer. The following example illustrates this:
function a1(n) {
    return n + 2;
}
function a2(n) {
    return 2 - n;
}
function a() {
    var k = 5;
    for (var i = 0; i < 100000000; i++) {
        k = a1(k) + a2(k);
    }
    return k;
}
function b() {
    function b1(n) {
        return n + 2;
    }
    function b2(n) {
        return 2 - n;
    }
    var k = 5;
    for (var i = 0; i < 100000000; i++) {
        k = b1(k) + b2(k);
    }
    return k;
}
function measure(label, fn) {
    var s = new Date();
    var r = fn();
    var e = new Date();
    console.log(label, e - s);
}
for (var i = 0; i < 4; i++) {
    measure('A', a);
    measure('B', b);
}
The command for running the code:
node --trace_deopt test.js
The output:
[deoptimize global object # 0x2431b35106e9]
A 128
B 130
A 132
[deoptimizing (DEOPT eager): begin 0x3ee3d709a821 b (opt #5) #4, FP to SP delta: 72]
translating b => node=36, height=32
0x7fffb88a9960: [top + 64] <- 0x2431b3504121 ; rdi 0x2431b3504121 <undefined>
0x7fffb88a9958: [top + 56] <- 0x17210dea8376 ; caller's pc
0x7fffb88a9950: [top + 48] <- 0x7fffb88a9998 ; caller's fp
0x7fffb88a9948: [top + 40] <- 0x3ee3d709a709; context
0x7fffb88a9940: [top + 32] <- 0x3ee3d709a821; function
0x7fffb88a9938: [top + 24] <- 0x3ee3d70efa71 ; rcx 0x3ee3d70efa71 <JS Function b1 (SharedFunctionInfo 0x361602434ae1)>
0x7fffb88a9930: [top + 16] <- 0x3ee3d70efab9 ; rdx 0x3ee3d70efab9 <JS Function b2 (SharedFunctionInfo 0x361602434b71)>
0x7fffb88a9928: [top + 8] <- 5 ; rbx (smi)
0x7fffb88a9920: [top + 0] <- 0 ; rax (smi)
[deoptimizing (eager): end 0x3ee3d709a821 b #4 => node=36, pc=0x17210dec9129, state=NO_REGISTERS, alignment=no padding, took 0.203 ms]
[removing optimized code for: b]
B 1000
A 125
B 1032
A 132
B 1033
As you can see, functions A and B ran at the same speed initially. Then for some reason a deoptimization event occurred. From then on B is nearly an order of magnitude slower.
If you're writing code where performance is important, it's best to avoid inner functions.
It completely depends on how often the function is called. If it's an OnUpdate function that is called 10 times per second, it is a decent optimisation. If it's called three times per page, it is a micro-optimisation.
Though handy, nested function definitions are never needed (they can be replaced by extra arguments for the function).
Example with nested function:
function somefunc() {
    var localvar = 5
    var otherfunc = function() {
        alert(localvar);
    }
    otherfunc();
}
Same thing, now with argument instead:
function otherfunc(localvar) {
    alert(localvar);
}
function somefunc() {
    var localvar = 5
    otherfunc(localvar);
}
It is absolutely a micro-optimization. The whole reason for having functions in the first place is so that you make your code cleaner, more maintainable and more readable. Functions add a semantic boundary to sections of code. Each function should only do one thing, and it should do it cleanly. So if you find your functions performing multiple things at the same time, you've got a candidate for refactoring it into multiple routines.
Only optimize when you've got something working that's too slow (If it's not working yet, it's too early to optimize. Period). Remember, nobody ever paid extra for a program that was faster than their needs/requirements...
Edit: Considering that the program isn't finished yet, it's also a premature optimization. Why is that bad? Well, first you're spending time working on something that may not matter in the long run. Second, you don't have a baseline to see if your optimizations improved anything in a realistic sense. Third, you're reducing maintainability and readability before you've even got it running, so it'll be harder to get running than if you went with clean concise code. Fourth, you don't know if you'll need doMoreStuff somewhere else in the program until you've finished it and understand all your needs (perhaps a longshot depending on the exact details, but not outside the realm of possibility).
There's a reason that Donald Knuth said "premature optimization is the root of all evil"...
A quick "benchmark" run on an average PC (I know there are lots of unaccounted-for variables, so don't comment on the obvious, but it's interesting in any case):
count = 0;
t1 = +new Date();
while (count < 1000000) {
    p = function(){};
    ++count;
}
t2 = +new Date();
console.log(t2 - t1); // milliseconds
It could be optimised by moving the increment into the condition, for example (that brings the running time down by about 100 milliseconds, although it doesn't affect the difference between runs with and without function creation, so it isn't really relevant).
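That tweak would look something like this (a sketch of the same benchmark with the increment folded into the condition):

var count = 0, t1 = +new Date(), p;
while (count++ < 1000000) {
    p = function(){};
}
console.log(+new Date() - t1); // milliseconds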
Running the original version 3 times gave:
913
878
890
Then comment out the function creation line, 3 runs gave:
462
458
464
So purely on 1,000,000 empty function creations you add about half a second. Even assuming your original code runs 10 times a second on a handheld device, and assuming that device's overall performance is 1/100 of this laptop (which is exaggerated; it's probably closer to 1/10, but 1/100 gives a nice upper bound), that's equivalent to 1000 function creations per second on this computer, which takes 1/2000 of a second. So every second, the handheld device adds an overhead of 1/2000 of a second of processing... half a millisecond every second isn't very much.
From this primitive test I would conclude that on a PC this is definitely a micro-optimisation, and if you're developing for weaker devices, it almost certainly is as well.
