I am not very well versed in JavaScript, but would you call the following a micro-optimization?
for(var j=0;j < document.getElementsByTagName('a').length; j++)
{
//...
}
var Elements = document.getElementsByTagName('a');
var ElementsLength = Elements.length;
for(var j=0;j < ElementsLength ; j++)
{
//...
}
var Elements = document.getElementsByTagName('a');
for(var j=0;j < Elements.length; j++)
{
//...
}
does document.getElementsByTagName really get called in every loop cycle in the first case?
do browsers try to optimize the JavaScript we write?
is there any difference between the second and third case, considering that the collection will never change?
does document.getElementsByTagName really get called in every loop cycle in the first case?
Yes.
do browsers try to optimize the JavaScript we write?
Not in a way that would change what the code does. There's a difference between calling a function once and calling it on every iteration of a loop; perhaps you meant to call the function every time. "Optimising" that away would actually change what the program does.
is there any difference between the second and third case, considering that the collection will never change?
Not functionally, but performance-wise, yes. Accessing the length attribute is a little more overhead than reading the number from a simple variable. Probably not so much that you'd really notice, but it's there. It's also a case that cannot be optimised away, since Elements.length may change on each iteration, which would make the program behave differently. A good optimiser may be able to detect whether the attribute ever changes and optimise when it's certain it won't; but I don't know how many implementations really do that, because this can become quite complex.
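To make that concrete, here is a minimal sketch (assuming a page with at least one a element) of why the engine cannot simply hoist the length read: getElementsByTagName returns a live collection, so the loop body itself can change its length.
var anchors = document.getElementsByTagName('a'); // live HTMLCollection
for (var j = 0; j < anchors.length; j++) {
    if (j === 0) {
        // appending another <a> grows the live collection,
        // so anchors.length is larger on the next iteration
        document.body.appendChild(document.createElement('a'));
    }
}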
My question is more one of curiosity about for-loop style.
While reading some old code by others, I encountered a style I haven't seen before.
var declaredEarlier = [/* ... an array populated earlier ... */];
for(var i=0, length=declaredEarlier.length; i < length; i++) {
//stuff
}
I had never seen the length declared before use like that, and since this is an old app, was this kind of style a C/C++/old-Java carryover, or was this developer unique?
Is there any benefit to declaring length that way instead of doing what I normally do:
for(var i=0; i < declaredEarlier.length; i++) {
//stuff
}
If this has been asked before, I could not find it. And if it's not applicable to Stack Overflow, which forum would be better to ask on?
There are two reasons you might grab the length up-front like that:
In case the length may change (remember, JavaScript arrays aren't really arrays* and their length is not fixed) and you want to use the original length to limit the loop, or
To avoid repeatedly looking up the length on the object, since you know it's not going to change
The first is obviously substantive; the second is style and/or micro-optimisation.
* Disclosure: That's a link to a post on my anemic little blog.
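To illustrate the first reason with a made-up example: if the loop body grows the array, the cached length keeps the loop bounded to the original elements.
var declaredEarlier = ['a', 'b', 'c']; // hypothetical data
for (var i = 0, length = declaredEarlier.length; i < length; i++) {
    // pushing inside the loop grows the array,
    // but the loop still stops after the original three elements
    declaredEarlier.push(declaredEarlier[i] + '!');
}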
With the first style, the value of declaredEarlier can change, so you need to capture it up front like that: length = declaredEarlier.length. With the second style, you already know the value, so you don't need to calculate it again.
I know that in the browser it is more optimal to write a for loop along the lines of
for(var i=0, l=arr.length; i<l; i++){ }
instead of
for(var i=0; i<arr.length; i++){ }
But is this true in NodeJS or does the V8 engine optimize it?
I know that in ECMA-262 5.1, sec. 15.4, array length is defined as follows:
The value of the length property is numerically greater than the name of every property whose name is an array index; whenever a property of an Array object is created or changed, other properties are adjusted as necessary to maintain this invariant.
Thus, if the length doesn't change, the only reason this method would be slower is that you have to access the property. What I'm looking for is a reasonable example/explanation that would show whether or not the V8 engine (which is used in NodeJS) suffers in performance when accessing this property.
If arr is a pure local variable and the loop doesn't touch it in any way, then yes. However, even if the optimization fails, loading the same field over and over again doesn't really cost anything, due to the CPU cache.
I would always use the first if applicable because it informs both the interpreter and any future reader of the code that the loop does not modify the length of the array.
Even if it's not faster (although http://jsperf.com/array-length-vs-cached suggests it actually is), it's simply good practice to factor constant expressions out of a loop.
The problem is the length calculation. With the for(var i=0; i<arr.length; i++){ } statement, arr.length is calculated on every loop iteration, but with for(var i=0, l=arr.length; i<l; i++){ } the value is read without any recalculation. Simply getting a value is faster than calculating the length of the array.
Getting the length can't be optimised away by the compiler because it could change, so it's calculated each iteration.
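One rough way to check this yourself in Node (just a sketch; the array size, the placeholder summing work, and the use of console.time are my own assumptions, and the numbers will vary by V8 version):
var arr = new Array(1e7).fill(0); // hypothetical test data

console.time('length read each iteration');
var sum1 = 0;
for (var i = 0; i < arr.length; i++) { sum1 += arr[i]; }
console.timeEnd('length read each iteration');

console.time('length cached');
var sum2 = 0;
for (var j = 0, l = arr.length; j < l; j++) { sum2 += arr[j]; }
console.timeEnd('length cached');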
I've put together a jsperf test that compares for loops that iterate over an array, with and without caching the array length used in the loop condition. I thought that caching the length in a variable beforehand, to avoid recalculating the array length on each iteration, would be faster, but the jsperf test says otherwise. Can someone explain why this is? I also thought that including the length variable definition (when cached) within the for loop's initialization would reduce the time, since the parser doesn't need to look up the var keyword twice, but that also doesn't appear to be the case.
example without caching:
for(var i = 0; i < testArray.length; i++){
//
}
example with caching:
var len = testArray.length;
for(var i = 0; i < len; i++){
//
}
example with caching, variable defined in for loop initialization:
for(var i = 0, len=testArray.length; i < len; i++){
//
}
http://jsperf.com/for-loop-condition-caching
Can someone explain why this is?
This optimization and case is extremely common so modern JavaScript engines will automatically perform this optimization for you.
Some notes:
This is not the case when iterating a NodeList (such as the result of querySelectorAll).
This is an overkill micro-optimization for most code paths anyway; usually the body of the loop takes more time than this comparison.
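To illustrate the NodeList point, caching the length by hand can still pay off there; a minimal sketch (the selector and class name are hypothetical):
var items = document.querySelectorAll('.item'); // a NodeList, not a plain array
for (var i = 0, n = items.length; i < n; i++) {
    // items.length is a property lookup on the NodeList,
    // so reading it once up front avoids repeating it every iteration
    items[i].classList.add('processed');
}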
The performance of the scenarios that you posted depends on how smart the JS engine's optimizer is. An engine with a very dumb optimizer (or no optimizer at all) will likely be faster when you are using the variable, but you can't even rely on that. After all, length's type is well known, while a variable can be anything and may require additional checks.
Given a perfect optimizer, all three examples should have the same performance. And as your examples are pretty simple, modern engines should reach that point soon.
In my quest to optimize my game engine, I have discovered that the optimizations I have been doing affect each browser differently, in a lot of cases making one browser worse and the other better!
Currently I'm trying to optimize looping, as I do a lot of it, and the way this is done can have a big effect on the performance of my engine.
Based on the results here http://jsperf.com/for-vs-while-loop-iterating/3
It seems a reverse for loop in Chrome is the fastest:
var l = foo.length;
for (var i = l; i--;) {
}
And in Firefox a forward for loop is fastest:
var l = foo.length;
for (var i = 0; i < l; i++) {
}
Now, in order to use the correct one per browser, I'm doing something like this:
function foreach(func, iterations){
var browser = $.browser;
var i;
if (browser.webkit)
{
for(i=iterations;i--;)
{
func(i);
}
}
else
{
for (i = 0; i < iterations; i++)
{
func(i);
}
}
}
but it seems there may be a lot of overhead here that may hurt performance.
If you were to provide different ways of looping for different browsers what would you do?
EDIT: It seems there was a bug in my testing where I was doing one too many iterations in the forward loop, and now Chrome seems to be the fastest there also. I may not need to optimize the loops, but it may still be worthwhile, as mentioned in another comment, in case browser versions change performance again.
Unfortunately, if your goal is the best performance loop on each browser, the very last thing you want to do is introduce function calls into it. There's no way you can define your foreach function such that it will be anything like as fast as the straight for loop. Calling the iteration function will wash out any gains you might get.
A quick jsperf can confirm this easily enough. For instance, run this one on Chrome or a recent version of Firefox or Opera. It compares looping forward with for, backward with for, or using the browser's built-in Array#forEach function (part of ECMAScript5). (I think we can assume any foreach function you build will be slower than the built-in one.) As you can see, the forEach version is dramatically slower than either of the for loops, which makes sense: Calling functions isn't very expensive, but it isn't free.
Mind you, what you're doing in the loop probably washes out the difference between counting up and counting down. What I'd do is figure out what's fastest on the slower browsers, and use that. You said, for instance, that a reverse loop is faster in Chrome but a forward loop is faster in Firefox. As Chrome's V8 is dramatically faster than Firefox's SpiderMonkey (at the moment; these things are constantly in flux), pick the forward loop, as it's faster on the slower engine.
Otherwise, you're into needing to do preprocessing on your code and churning out a different version tailored to each browser.
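To make the comparison concrete, the three loop shapes being compared look roughly like this (the array and the summing body are placeholder work, not from the original test):
var foo = [1, 2, 3, 4, 5]; // hypothetical data
var total = 0;

// forward for loop
for (var i = 0, l = foo.length; i < l; i++) { total += foo[i]; }

// reverse for loop
for (var j = foo.length; j--;) { total += foo[j]; }

// built-in Array#forEach: one function call per element
foo.forEach(function (value) { total += value; });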
I don't think your overhead is as big as you feel, but what you can do is do your test only once:
var foreach;
if (forwardIsFaster) {
    foreach = function (func, iterations) {
        // loop forwards
        for (var i = 0; i < iterations; i++) {
            func(i);
        }
    };
} else {
    foreach = function (func, iterations) {
        // loop backwards
        for (var i = iterations; i--;) {
            func(i);
        }
    };
}
That said, I'm not sure using browser sniffing is the best solution here; maybe instead do a test loop on startup, measure which solution is faster, and choose the one that turns out to be faster.
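A sketch of that measure-once-at-startup idea (the iteration count and the use of Date.now are assumptions on my part, not something from the original answer):
// Time both directions once, then keep whichever variant wins.
function measure(loop) {
    var start = Date.now();
    loop(function () {}, 100000);
    return Date.now() - start;
}

var forwards = function (func, iterations) {
    for (var i = 0; i < iterations; i++) { func(i); }
};
var backwards = function (func, iterations) {
    for (var i = iterations; i--;) { func(i); }
};

var foreach = measure(forwards) <= measure(backwards) ? forwards : backwards;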
When you start doing something significant inside the loop (even just printing the variable), the order of iteration doesn't matter; see:
http://jsperf.com/for-vs-while-loop-iterating/4
So stop caring about this and if (and only if) your code is slow in any place, just profile it and optimize that part.
You could make foreach return a function, which would be an iterator. So you would have, once in your script, var iterator = foreach($.browser.webkit);. From then on, you would just use iterator(iterations, callback), which would no longer be executing any conditionals.
The key, basically, is that the user's browser won't change, so the result of the execution of that conditional needs to be evaluated only once per script.
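A sketch of that factory idea (the isWebkit flag below just stands in for whatever browser check you already use):
// Build the iterator once; the conditional is evaluated a single time.
function foreach(isWebkit) {
    if (isWebkit) {
        return function (iterations, callback) {
            for (var i = iterations; i--;) { callback(i); }
        };
    }
    return function (iterations, callback) {
        for (var i = 0; i < iterations; i++) { callback(i); }
    };
}

var iterator = foreach($.browser.webkit); // once per script
iterator(10, function (i) { /* stuff */ });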
Given some JS code like this:
for (var i = 0; i < document.getElementsByName('scale_select').length; i++) {
document.getElementsByName('scale_select')[i].onclick = vSetScale;
}
Would the code be faster if we put the result of getElementsByName into a variable before the loop and then use the variable after that?
I am not sure how large the effect is in real life, with the result from getElementsByName typically having < 10 items. I'd like to understand the underlying mechanics anyway.
Also, if there's anything else noteworthy about the two options, please tell me.
Definitely. The memory required to store that would only be a pointer to a DOM object and that's significantly less painful than doing a DOM search each time you need to use something!
Idealish code:
var scale_select = document.getElementsByName('scale_select');
for (var i = 0; i < scale_select.length; i++)
scale_select[i].onclick = vSetScale;
Caching the property lookup might help some, but caching the length of the array before starting the loop has proven to be faster.
So declaring a variable in the loop that holds the value of scale_select.length would speed up the entire loop some.
var scale_select = document.getElementsByName('scale_select');
for (var i = 0, al=scale_select.length; i < al; i++)
scale_select[i].onclick = vSetScale;
A smart implementation of DOM would do its own caching, invalidating the cache when something changes. But not all DOMs today can be counted on to be this smart (cough IE cough) so it's best if you do this yourself.
In principle, would the code be faster if we put the result of getElementsByName into a variable before the loop and then use the variable after that?
Yes.
Use variables. They're not very expensive in JavaScript and function calls are definitely slower. If you loop at least 5 times over document.getElementById() use a variable. The idea here is not only the function call is slow but this specific function is very slow as it tries to locate the element with the given id in the DOM.
There's no point storing scaleSelect.length in a separate variable; it's effectively already in one: scaleSelect.length is just a property of the scaleSelect collection, and as such it's as quick to access as any other variable.
I think so. Every time it loops, the engine needs to re-evaluate the document.getElementsByName statement.
On the other hand, if the value is saved in a variable, then it already has the value.
@Oli:
Caching the length property of the elements fetched in a variable is also a good idea:
var scaleSelect = document.getElementsByName('scale_select');
var scaleSelectLength = scaleSelect.length;
for (var i = 0; i < scaleSelectLength; i += 1)
{
// scaleSelect[i]
}