array.push(element) vs array[array.length] = element [duplicate] - javascript
This question already has answers here:
How to append something to an array?
(30 answers)
Why is array.push sometimes faster than array[n] = value?
(5 answers)
Closed 9 years ago.
I was wondering if there is a reason to choose
array.push(element)
over
array[array.length] = element
or vice-versa.
Here's a simple example where I have an array of numbers and I want to make a new array of those numbers multiplied by 2:
var numbers = [5, 7, 20, 3, 13];
var arr1 = [];
var len = numbers.length;
for(var i = 0; i < len; i++){
arr1.push(numbers[i] * 2);
}
alert(arr1);
var arr2 = [];
for(var i = 0; i < len; i++){
arr2[arr2.length] = numbers[i] * 2;
}
alert(arr2);
The fastest way to do it with current JavaScript technology, while also using minimal code, is to store the last element first, thereby allocating the full set of array indices, and then counting backwards to 0 while storing the elements, thereby taking advantage of nearby memory storage positions and minimizing cache misses.
var arr3 = [];
for (var i = len; i>0;){
i--;
arr3[i] = numbers[i] * 2;
}
alert(arr3);
Note that if the number of elements being stored is "big enough" in the view of the JavaScript engine, then the array will be created as a "sparse" array and never converted to a regular flat array.
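A quick sketch of what "sparse" means in observable terms (illustrative only; the engine-internal threshold itself isn't visible from script): assigning only a high index leaves holes, even though length reports the full size.

var sparse = [];
sparse[99999] = 1;             // one real element, the rest are holes
console.log(sparse.length);    // 100000
console.log(0 in sparse);      // false - index 0 is a hole, not an undefined value
var dense = new Array(100000); // preallocated length, but still all holes until assigned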
Yes, I can back this up. The only problem is that JavaScript optimizers are extremely aggressive in throwing away calculations that aren't used. So in order for the results to be calculated fairly, all the results have to be stored (temporarily). One further optimization that I believed to be obsolete, but actually improves the speed even further, is to pre-initialize the array using new Array(length). That's an old-hat trick that for a while made no difference, but now, in the days of extreme JavaScript engine optimizations, it appears to make a difference again.
<script>
function arrayFwd(set) {
var x = [];
for (var i = 0; i<set.length; i++)
x[x.length] = set[i];
return x;
}
function arrayRev(set) {
var x = new Array(set.length);
for (var i = set.length; i>0;) {
i--;
x[i] = set[i];
}
return x;
}
function arrayPush(set) {
var x = [];
for (var i = 0; i<set.length; i++)
x.push(set[i]);
return x;
}
results = []; /* we'll store the results so that
optimizers don't realize the results are not used
and thus skip the function's work completely */
function timer(f, n) {
return function(x) {
var n1 = new Date(), i = n;
do { results.push(f(x)); } while (i-- > 0); // do something here
return (new Date() - n1)/n;
};
}
set = [];
for (i=0; i<4096; i++)
set[i] = (i)*(i+1)/2;
timers = {
forward: timer(arrayFwd, 500),
backward: timer(arrayRev, 500),
push: timer(arrayPush, 500)
};
for (k in timers) {
document.write(k, ' = ', timers[k](set), ' ms<br />');
}
</script>
Opera 12.15:
forward = 0.12 ms
backward = 0.04 ms
push = 0.09 ms
Chrome (latest, v27):
forward = 0.07 ms
backward = 0.022 ms
push = 0.064 ms
(for comparison, when results are not stored, Chrome produces these numbers:
forward = 0.032 ms
backward = 0.008 ms
push = 0.022 ms
This is almost four times faster versus doing the array forwards, and almost three times faster versus doing push.)
IE 10:
forward = 0.028 ms
backward = 0.012 ms
push = 0.038 ms
Strangely, Firefox still shows push as faster. There must be some code re-writing going on under the hood with Firefox when push is used, because accessing a property and invoking a function are both slower than using an array index in terms of pure, un-enhanced JavaScript performance.
Related
In V8 why does a preallocated array consume less memory?
Consider the following two alternatives:

const mb_before = process.memoryUsage().heapUsed / 1024 / 1024;
const n = 15849;
const o = 115;
const entries = [];
for (var i = 0; i < n; i++) {
  const subarr = [];
  for (var j = 0; j < o; j++) {
    subarr.push(Math.random());
  }
  entries.push(subarr);
}
const mb_after = process.memoryUsage().heapUsed / 1024 / 1024;
console.log('arr using ' + (mb_after - mb_before) + ' megabyte');
// arr using 15.110992431640625 megabyte

and

const mb_before = process.memoryUsage().heapUsed / 1024 / 1024;
const n = 15849;
const o = 115;
const entries = new Array(n);
for (var i = 0; i < n; i++) {
  const subarr = new Array(o);
  for (var j = 0; j < o; j++) {
    subarr[j] = Math.random();
  }
  entries[i] = subarr;
}
const mb_after = process.memoryUsage().heapUsed / 1024 / 1024;
console.log('arr using ' + (mb_after - mb_before) + ' megabyte');
// arr using 12.118911743164062 megabyte

From my understanding the two arrays' size should be identical, only the way they were instantiated differs. How can it be explained that the resulting memory usage is consistently different?
I believe this has to do with the way array memory is allocated. When you instantiate an array giving it a specific size as you are in the second example, it will allocate that memory. When you grow the array it will allocate a small amount of extra space to handle growth and then as you grow the array the additional memory allocations will get bigger. This results in extra free space in the first example.
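As a rough sketch of that growth idea (the growth rule below is an assumption for illustration; V8's actual policy differs in detail), capacity is increased in chunks rather than one slot at a time, so the last proactive allocation usually leaves unused slack:

function simulateGrowth(finalLength) {
  var capacity = 0;
  for (var length = 1; length <= finalLength; length++) {
    if (length > capacity) {
      capacity = Math.max(4, Math.floor(capacity * 1.5) + 1); // assumed ~1.5x growth rule
    }
  }
  return { length: finalLength, capacity: capacity, slack: capacity - finalLength };
}
console.log(simulateGrowth(115)); // some slack remains in each grown sub-array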
I don't find this surprising at all. Although standard arrays aren't really arrays at all*, JavaScript engines default to optimization: Treating them as though they were really arrays when they can. In your first example, V8 doesn't know how big each of the arrays is going to get — it just keeps growing, and in order to treat it as an optimized array (rather than an object with special properties), V8 has to keep reallocating and copying to make it bigger periodically. So it's not surprising that the most recent proactive allocation left a lot of extra room in case it kept growing. In your second example, you've given V8 a big old clue in advance of how big you intend to make the array. So it's reasonable that V8 would use that information to optimize the allocation it does for the underlying true array. * (that's a post on my anemic little blog)
Efficiently find every combination of assigning smaller bins to larger bins
Let's say I have 7 small bins, each bin has the following number of marbles in it:

var smallBins = [1, 5, 10, 20, 30, 4, 10];

I assign these small bins to 2 large bins, each with the following maximum capacity:

var largeBins = [40, 50];

I want to find EVERY combination of how the small bins can be distributed across the big bins without exceeding capacity (eg put small bins #4,#5 in large bin #2, the rest in #1).

Constraints:
Each small bin must be assigned to a large bin.
A large bin can be left empty.

This problem is easy to solve in O(2^n) time (see below): just try every combination and if capacity is not exceeded, save the solution. I'd like something faster, that can handle a variable number of bins. What obscure graph theory algorithm can I use to reduce the search space?

//Brute force
var smallBins = [1, 5, 10, 20, 30, 4, 10];
var largeBins = [40, 50];

function getLegitCombos(smallBins, largeBins) {
  var legitCombos = [];
  var assignmentArr = new Uint32Array(smallBins.length);
  var i = smallBins.length - 1;
  while (true) {
    var isValid = validate(assignmentArr, smallBins, largeBins);
    if (isValid) legitCombos.push(new Uint32Array(assignmentArr));
    var allDone = increment(assignmentArr, largeBins.length, i);
    if (allDone === true) break;
  }
  return legitCombos;
}

function increment(assignmentArr, max, i) {
  while (i >= 0) {
    if (++assignmentArr[i] >= max) {
      assignmentArr[i] = 0;
      i--;
    } else {
      return i;
    }
  }
  return true;
}

function validate(assignmentArr, smallBins, largeBins) {
  var totals = new Uint32Array(largeBins.length);
  for (var i = 0; i < smallBins.length; i++) {
    var assignedBin = assignmentArr[i];
    totals[assignedBin] += smallBins[i];
    if (totals[assignedBin] > largeBins[assignedBin]) {
      return false;
    }
  }
  return true;
}

getLegitCombos(smallBins, largeBins);
Here's my cumbersome recursive attempt to avoid duplicates and exit early from too-large sums. The function assumes duplicate elements as well as bin sizes are presented grouped and counted in the input. Rather than place each element in each bin, each element is placed in only one of the duplicate bins; and each element with duplicates is partitioned distinctly. For example, in my results, the combination [[[1,10,20]],[[4,5,10,30]]] appears once; while in the SAS example in Leo's answer, twice: once as IN[1]={1,3,4} IN[2]={2,5,6,7} and again as IN[1]={1,4,7} IN[2]={2,3,5,6}. Can't vouch for efficiency or smooth running, however, as it is hardly tested. Perhaps stacking the calls rather than recursing could weigh lighter on the browser.

JavaScript code:

function f (as,bs){
  // i is the current element index, c its count;
  // l is the lower-bound index of partitioned element
  function _f(i,c,l,sums,res){
    for (var j=l; j<sums.length; j++){
      // find next available duplicate bin to place the element in
      var k=0;
      while (sums[j][k] + as[i][0] > bs[j][0]){
        k++;
      }
      // a place for the element was found
      if (sums[j][k] !== undefined){
        var temp = JSON.stringify(sums),
            _sums = JSON.parse(temp);
        _sums[j][k] += as[i][0];
        temp = JSON.stringify(res);
        var _res = JSON.parse(temp);
        _res[j][k].push(as[i][0]);
        // all elements were placed
        if (i == as.length - 1 && c == 1){
          result.push(_res);
          return;
        // duplicate elements were partitioned, continue to next element
        } else if (c == 1){
          _f(i + 1,as[i + 1][1],0,_sums,_res);
        // otherwise, continue partitioning the same element with duplicates
        } else {
          _f(i,c - 1,j,_sums,_res);
        }
      }
    }
  }
  // initiate variables for the recursion
  var sums = [], res = [], result = [];
  for (var i=0; i<bs.length; i++){
    sums[i] = [];
    res[i] = [];
    for (var j=0; j<bs[i][1]; j++){
      sums[i][j] = 0;
      res[i][j] = [];
    }
  }
  _f(0,as[0][1],0,sums,res);
  return result;
}

Output:

console.log(JSON.stringify(f([[1,1],[4,1],[5,1],[10,2],[20,1],[30,1]], [[40,1],[50,1]])));
/*
[[[[1,4,5,10,10]],[[20,30]]],[[[1,4,5,10,20]],[[10,30]]],[[[1,4,5,20]],[[10,10,30]]]
,[[[1,4,5,30]],[[10,10,20]]],[[[1,4,10,20]],[[5,10,30]]],[[[1,4,30]],[[5,10,10,20]]]
,[[[1,5,10,20]],[[4,10,30]]],[[[1,5,30]],[[4,10,10,20]]],[[[1,10,20]],[[4,5,10,30]]]
,[[[1,30]],[[4,5,10,10,20]]],[[[4,5,10,20]],[[1,10,30]]],[[[4,5,30]],[[1,10,10,20]]]
,[[[4,10,20]],[[1,5,10,30]]],[[[4,30]],[[1,5,10,10,20]]],[[[5,10,20]],[[1,4,10,30]]]
,[[[5,30]],[[1,4,10,10,20]]],[[[10,10,20]],[[1,4,5,30]]],[[[10,20]],[[1,4,5,10,30]]]
,[[[10,30]],[[1,4,5,10,20]]],[[[30]],[[1,4,5,10,10,20]]]]
*/

console.log(JSON.stringify(f([[1,1],[4,1],[5,1],[10,2],[20,1],[30,1]], [[20,2],[50,1]])));
/*
[[[[1,4,5,10],[10]],[[20,30]]],[[[1,4,5,10],[20]],[[10,30]]],[[[1,4,5],[20]],[[10,10,30]]]
,[[[1,4,10],[20]],[[5,10,30]]],[[[1,5,10],[20]],[[4,10,30]]],[[[1,10],[20]],[[4,5,10,30]]]
,[[[4,5,10],[20]],[[1,10,30]]],[[[4,10],[20]],[[1,5,10,30]]],[[[5,10],[20]],[[1,4,10,30]]]
,[[[10,10],[20]],[[1,4,5,30]]],[[[10],[20]],[[1,4,5,10,30]]]]
*/

Here's a second, simpler version that only attempts to terminate the thread when an element cannot be placed:

function f (as,bs){
  var stack = [], sums = [], res = [], result = [];
  for (var i=0; i<bs.length; i++){
    res[i] = [];
    sums[i] = 0;
  }
  stack.push([0,sums,res]);
  while (stack[0] !== undefined){
    var params = stack.pop(),
        i = params[0],
        sums = params[1],
        res = params[2];
    for (var j=0; j<sums.length; j++){
      if (sums[j] + as[i] <= bs[j]){
        var _sums = sums.slice();
        _sums[j] += as[i];
        var temp = JSON.stringify(res);
        var _res = JSON.parse(temp);
        _res[j].push(i);
        if (i == as.length - 1){
          result.push(_res);
        } else {
          stack.push([i + 1,_sums,_res]);
        }
      }
    }
  }
  return result;
}

Output:

var r = f([1,5,10,20,30,4,10,3,4,5,1,1,2],[40,50,30]);
console.log(r.length)
console.log(JSON.stringify(f([1,4,5,10,10,20,30], [40,50])));

162137
[[[30],[1,4,5,10,10,20]],[[10,30],[1,4,5,10,20]],[[10,20],[1,4,5,10,30]]
,[[10,30],[1,4,5,10,20]],[[10,20],[1,4,5,10,30]],[[10,10,20],[1,4,5,30]]
,[[5,30],[1,4,10,10,20]],[[5,10,20],[1,4,10,30]],[[5,10,20],[1,4,10,30]]
,[[4,30],[1,5,10,10,20]],[[4,10,20],[1,5,10,30]],[[4,10,20],[1,5,10,30]]
,[[4,5,30],[1,10,10,20]],[[4,5,10,20],[1,10,30]],[[4,5,10,20],[1,10,30]]
,[[1,30],[4,5,10,10,20]],[[1,10,20],[4,5,10,30]],[[1,10,20],[4,5,10,30]]
,[[1,5,30],[4,10,10,20]],[[1,5,10,20],[4,10,30]],[[1,5,10,20],[4,10,30]]
,[[1,4,30],[5,10,10,20]],[[1,4,10,20],[5,10,30]],[[1,4,10,20],[5,10,30]]
,[[1,4,5,30],[10,10,20]],[[1,4,5,20],[10,10,30]],[[1,4,5,10,20],[10,30]]
,[[1,4,5,10,20],[10,30]],[[1,4,5,10,10],[20,30]]]
This problem is seen often enough that most Constraint Logic Programming systems include a predicate to model it explicitly. In OPTMODEL and CLP, we call it pack:

proc optmodel;
  set SMALL init 1 .. 7, LARGE init 1 .. 2;
  num size {SMALL} init [1 5 10 20 30 4 10];
  num capacity{LARGE} init [40 50];
  var WhichBin {i in SMALL} integer >= 1 <= card(LARGE);
  var SpaceUsed{i in LARGE} integer >= 0 <= capacity[i];
  con pack( WhichBin, size, SpaceUsed );
  solve with clp / findall;
  num soli;
  set IN{li in LARGE} = {si in SMALL: WhichBin[si].sol[soli] = li};
  do soli = 1 .. _nsol_;
    put IN[*]=;
  end;
quit;

This code produces all the solutions in 0.06 seconds on my laptop:

IN[1]={1,2,3,4,6} IN[2]={5,7}
IN[1]={1,2,3,4} IN[2]={5,6,7}
IN[1]={1,2,3,6,7} IN[2]={4,5}
IN[1]={1,2,5,6} IN[2]={3,4,7}
IN[1]={1,2,5} IN[2]={3,4,6,7}
IN[1]={1,2,4,6,7} IN[2]={3,5}
IN[1]={1,2,4,7} IN[2]={3,5,6}
IN[1]={1,2,4,6} IN[2]={3,5,7}
IN[1]={1,3,4,6} IN[2]={2,5,7}
IN[1]={1,3,4} IN[2]={2,5,6,7}
IN[1]={1,5,6} IN[2]={2,3,4,7}
IN[1]={1,5} IN[2]={2,3,4,6,7}
IN[1]={1,4,6,7} IN[2]={2,3,5}
IN[1]={1,4,7} IN[2]={2,3,5,6}
IN[1]={2,3,4,6} IN[2]={1,5,7}
IN[1]={2,3,4} IN[2]={1,5,6,7}
IN[1]={2,5,6} IN[2]={1,3,4,7}
IN[1]={2,5} IN[2]={1,3,4,6,7}
IN[1]={2,4,6,7} IN[2]={1,3,5}
IN[1]={2,4,7} IN[2]={1,3,5,6}
IN[1]={3,5} IN[2]={1,2,4,6,7}
IN[1]={3,4,7} IN[2]={1,2,5,6}
IN[1]={3,4,6} IN[2]={1,2,5,7}
IN[1]={3,4} IN[2]={1,2,5,6,7}
IN[1]={5,7} IN[2]={1,2,3,4,6}
IN[1]={5,6} IN[2]={1,2,3,4,7}
IN[1]={5} IN[2]={1,2,3,4,6,7}
IN[1]={4,6,7} IN[2]={1,2,3,5}
IN[1]={4,7} IN[2]={1,2,3,5,6}

Just change the first 3 lines to solve for other instances. However, as others have pointed out, this problem is NP-Hard. So it can switch from very fast to very slow suddenly. You could also solve the version where not every small item needs to be assigned to a large bin by creating a dummy large bin with enough capacity to fit the entire collection of small items. As you can see from the "Details" section in the manual, the algorithms that solve practical problems quickly are not simple, and their implementation details make a big difference. I am unaware of any CLP libraries written in Javascript. Your best bet may be to wrap CLP in a web service and invoke that service from your Javascript code.
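A minimal sketch of that last suggestion (the endpoint and response shape are hypothetical; it assumes you have wrapped a CLP solver behind an HTTP API):

fetch('/api/pack', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ smallBins: [1, 5, 10, 20, 30, 4, 10], largeBins: [40, 50] })
}).then(function (res) {
  return res.json();
}).then(function (solutions) {
  // "solutions" is whatever format the hypothetical wrapper returns, e.g. one array per large bin
  console.log(solutions.length + ' feasible assignments');
});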
Javascript Typed array vs simple array: performance
What I'm basically trying to do is to map an array of data points into a WebGL vertex buffer (Float32Array) in realtime (working on animated parametric surfaces). I've assumed that representing data points with Float32Arrays (either one Float32Array per component: [xx...x, yy...y] or interleave them: xyxy...xy) should be faster than storing them in an array of points: [[x, y], [x, y],.. [x, y]] since that'd actually be a nested hash and all. However, to my surprise, that leads to a slowdown of about 15% in all the major browsers (not counting array creation time). Here's a little test I've set up:

var points = 250000, iters = 100;

function map_2a(x, y) { return Math.sin(x) + y; }

var output = new Float32Array(3 * points);

// generate data
var data = [];
for (var i = 0; i < points; i++)
  data[i] = [Math.random(), Math.random()];

// run
console.time('native');
(function() {
  for (var iter = 0; iter < iters; iter++)
    for (var i = 0, to = 0; i < points; i++, to += 3) {
      output[to] = data[i][0];
      output[to + 1] = data[i][1];
      output[to + 2] = map_2a(data[i][0], data[i][1]);
    }
}());
console.timeEnd('native');

// generate data
var data = [new Float32Array(points), new Float32Array(points)];
for (var i = 0; i < points; i++) {
  data[0][i] = Math.random();
  data[1][i] = Math.random();
}

// run
console.time('typed');
(function() {
  for (var iter = 0; iter < iters; iter++)
    for (var i = 0, to = 0; i < points; i++, to += 3) {
      output[to] = data[0][i];
      output[to + 1] = data[1][i];
      output[to + 2] = map_2a(data[0][i], data[1][i]);
    }
}());
console.timeEnd('typed');

Is there anything I'm doing wrong?
I think your problem is that you are not comparing the same code. In the first example, you have one large array filled with very small arrays. In the second example, you have two very large arrays, and both of them need to be indexed. The profile is different. If I structure the first example to be more like the second (two large generic arrays), then the Float32Array implementation far outperforms the generic array implementation. Here is a jsPerf profile to show it.
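A minimal sketch of that restructuring (illustrative, not the exact jsPerf code): both variants now index two large flat containers, one pair of plain arrays and one pair of Float32Arrays.

var points = 250000;
var plain = [new Array(points), new Array(points)];
var typed = [new Float32Array(points), new Float32Array(points)];
for (var i = 0; i < points; i++) {
  plain[0][i] = typed[0][i] = Math.random();
  plain[1][i] = typed[1][i] = Math.random();
}
var out = new Float32Array(3 * points);
function fill(src) {
  for (var i = 0, to = 0; i < points; i++, to += 3) {
    out[to] = src[0][i];
    out[to + 1] = src[1][i];
    out[to + 2] = Math.sin(src[0][i]) + src[1][i];
  }
}
console.time('plain arrays'); fill(plain); console.timeEnd('plain arrays');
console.time('typed arrays'); fill(typed); console.timeEnd('typed arrays');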
In V8, values can have SMI (int31/int32), double, and pointer types. So I guess when you operate with floats they have to be converted to the double type. If you use regular arrays, the values are stored as doubles already.
How to output every number from 1 to 10 using random numbers in JavaScript [duplicate]
This question already has answers here:
How to randomize (shuffle) a JavaScript array?
(69 answers)
Closed 9 years ago.
I am trying to make a script which outputs every number from 1-10, using a random number generator, in JavaScript. I want every number to be unique. Here is an example of what I would like the script to output:
5 9 7 6 1 3 4 8 2 10
This is my attempt:

var test = [];
var amountOfNumbers = 10;
var inArray = false;
var useNumbers = [];

for(var i=0; useNumbers.length<=amountOfNumbers; i++){
  var rng = Math.floor((Math.random()*amountOfNumbers)+1);
  for(var a=0; a<=test.length; a++){
    if(rng == test[a]){
      inArray == true;
    }
  }
  if(!inArray){
    document.write(rng);
    test.push(rng);
    useNumbers.push(rng);
  }
}

Hope you can help. For the record, I am not interested in jQuery or any other library :)
1) How to fix your code

You have a few errors, among them the fact you don't reset inArray to false and that you don't iterate over the whole test array (use <, not <=). But using a loop to see if you already have the number isn't efficient, it's better to use an object as a map:

var test = [];
var amountOfNumbers = 10;
var useNumbers = {};
for(var i=0; test.length<amountOfNumbers; i++){
  var rng = Math.floor((Math.random()*amountOfNumbers)+1);
  if(!useNumbers[rng]){
    document.write(rng);
    test.push(rng);
    useNumbers[rng] = true;
  }
}

2) How to do it properly

Your algorithm will loop until it is lucky enough to find the remaining numbers. This isn't efficient and isn't predictable. The normal reliable practice is to generate the array [1..10] and then shuffle it.

Generating an array of the integers from 1 to N can be done with a simple loop or in a fancier way:

var arr = Array.apply(0,new Array(N)).map(function(_,i){ return i+1 });

Shuffling an array is usually done with the Fisher-Yates algorithm, for which you'll easily find JS implementations (it's easy to write anyway). A fast (theoretically not guaranteed to work with all future sort implementations) alternative is this one:

arr = arr.sort(function(a,b){ return Math.random()>0.5 });

The whole program
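A minimal sketch of such a program (the shuffle here is a plain Fisher-Yates; variable names and output format are illustrative):

var N = 10;
var arr = Array.apply(0, new Array(N)).map(function(_, i){ return i + 1; });
// Fisher-Yates shuffle: swap each element with a random earlier (or same) position
for (var i = arr.length - 1; i > 0; i--) {
  var j = Math.floor(Math.random() * (i + 1));
  var tmp = arr[i];
  arr[i] = arr[j];
  arr[j] = tmp;
}
console.log(arr.join(' ')); // e.g. 5 9 7 6 1 3 4 8 2 10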
Your approach means checking over the whole array in each step, looking whether your random number is already inside the array, which wastes a lot of time. A better approach is disordering an ordered array. In each loop we generate a random number (in the example, a number between 0 and 1) and with a 50% probability we swap the item in the current position with an item in a random position (between 0 and the length of the array). Hope it helps.

function disorder(arg) {
  for (var i = 0; i < arg.length; i++) {
    if (Math.random() < 0.5) {
      var aux = arg[i];
      var rndPos = Math.floor(Math.random() * arg.length);
      arg[i] = arg[rndPos];
      arg[rndPos] = aux;
    }
  }
  return arg;
}

var myArray = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10];
var myNewArray = disorder(myArray);
myNewArray.forEach(function(item) {
  console.log(item);
});
For-loop performance: storing array length in a variable
Consider two versions of the same loop iteration:

for (var i = 0; i < nodes.length; i++) {
  ...
}

and

var len = nodes.length;
for (var i = 0; i < len; i++) {
  ...
}

Is the latter version anyhow faster than the former one?
The accepted answer is not right, because any decent engine should be able to hoist the property load out of the loop with such simple loop bodies. See this jsperf - at least in V8 it is interesting to see how actually storing it in a variable changes the register allocation - in the code where the variable is used, the sum variable is stored on the stack, whereas with the array.length-in-a-loop code it is stored in a register. I assume something similar is happening in SpiderMonkey and Opera too. According to the author, JSPerf is used incorrectly 70% of the time. The broken jsperfs given in all the answers here produce misleading results, and people draw wrong conclusions from them. Some red flags are putting code in the test cases instead of in functions, not testing the result for correctness or otherwise defeating dead-code elimination, and defining functions in the setup or test cases instead of globally. For consistency you will want to warm up the test functions before any benchmark too, so that compiling doesn't happen in the timed section.
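A small sketch of that hygiene (names are illustrative): put the work in a global function, check and keep the result so it can't be dead-code-eliminated, and warm the function up before the timed section.

function sumWithCachedLength(arr) {
  var total = 0;
  for (var i = 0, len = arr.length; i < len; i++) total += arr[i];
  return total;
}

function bench(fn, arg, runs) {
  for (var w = 0; w < 50; w++) fn(arg);       // warm-up: compile before timing
  var result, start = Date.now();
  for (var i = 0; i < runs; i++) result = fn(arg);
  var perRun = (Date.now() - start) / runs;
  if (typeof result !== 'number') throw new Error('unexpected result'); // keep the result live
  return perRun;
}

var data = [];
for (var i = 0; i < 1e6; i++) data.push(i);
console.log(bench(sumWithCachedLength, data, 100) + ' ms per run');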
Update: 16/12/2015

As this answer still seems to get a lot of views, I wanted to re-examine the problem, since browsers and JS engines continue to evolve. Rather than using JSPerf I've put together some code to loop through arrays using both methods mentioned in the original question. I've put the code into functions to break down the functionality as would hopefully be done in a real world application:

function getTestArray(numEntries) {
  var testArray = [];
  for (var i = 0; i < numEntries; i++) {
    testArray.push(Math.random());
  }
  return testArray;
}

function testInLoop(testArray) {
  for (var i = 0; i < testArray.length; i++) {
    doSomethingAwesome(testArray[i]);
  }
}

function testInVariable(testArray) {
  var len = testArray.length;
  for (var i = 0; i < len; i++) {
    doSomethingAwesome(testArray[i]);
  }
}

function doSomethingAwesome(i) {
  return i + 2;
}

function runAndAverageTest(testToRun, testArray, numTimesToRun) {
  var totalTime = 0;
  for (var i = 0; i < numTimesToRun; i++) {
    var start = new Date();
    testToRun(testArray);
    var end = new Date();
    totalTime += (end - start);
  }
  return totalTime / numTimesToRun;
}

function runTests() {
  var smallTestArray = getTestArray(10000);
  var largeTestArray = getTestArray(10000000);
  var smallTestInLoop = runAndAverageTest(testInLoop, smallTestArray, 5);
  var largeTestInLoop = runAndAverageTest(testInLoop, largeTestArray, 5);
  var smallTestVariable = runAndAverageTest(testInVariable, smallTestArray, 5);
  var largeTestVariable = runAndAverageTest(testInVariable, largeTestArray, 5);
  console.log("Length in for statement (small array): " + smallTestInLoop + "ms");
  console.log("Length in for statement (large array): " + largeTestInLoop + "ms");
  console.log("Length in variable (small array): " + smallTestVariable + "ms");
  console.log("Length in variable (large array): " + largeTestVariable + "ms");
}

console.log("Iteration 1");
runTests();
console.log("Iteration 2");
runTests();
console.log("Iteration 3");
runTests();

In order to achieve as fair a test as possible, each test is run 5 times and the results averaged. I've also run the entire test, including generation of the arrays, 3 times. Testing on Chrome on my machine indicated that the time taken by each method was almost identical.

It's important to remember that this is a bit of a toy example; in fact most examples taken out of the context of your application are likely to yield unreliable information, because the other things your code is doing may be affecting performance directly or indirectly.

The bottom line

The best way to determine what performs best for your application is to test it yourself! JS engines, browser technology and CPU technology are constantly evolving, so it's imperative that you always test performance for yourself within the context of your application. It's also worth asking yourself whether you have a performance problem at all; if you don't, then time spent making micro-optimizations that are imperceptible to the user could be better spent fixing bugs and adding features, leading to happier users :).

Original Answer:

The latter one would be slightly faster. The length property does not iterate over the array to check the number of elements, but every time it is called on the array, that array must be dereferenced. By storing the length in a variable, the array dereference is not necessary on each iteration of the loop.

If you're interested in the performance of different ways of looping through an array in JavaScript, take a look at this jsperf.
According to the w3schools "Reduce Activity in Loops" page, the following is considered bad code:

for (i = 0; i < arr.length; i++) {

And the following is considered good code:

var arrLength = arr.length;
for (i = 0; i < arrLength; i++) {

Since accessing the DOM is slow, the following was written to test the theory:

<!doctype html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>my test scripts</title>
</head>
<body>
  <button onclick="initArray()">Init Large Array</button>
  <button onclick="iterateArraySlowly()">Iterate Large Array Slowly</button>
  <button onclick="iterateArrayQuickly()">Iterate Large Array Quickly</button>
  <p id="slow">Slow Time: </p>
  <p id="fast">Fast Time: </p>
  <p id="access"></p>
  <script>
    var myArray = [];

    function initArray(){
      var length = 1e6;
      var i;
      for(i = 0; i < length; i++) {
        myArray[i] = i;
      }
      console.log("array size: " + myArray.length);
    }

    function iterateArraySlowly() {
      var t0 = new Date().getTime();
      var slowText = "Slow Time: ";
      var i, t;
      var elm = document.getElementById("slow");
      for (i = 0; i < myArray.length; i++) {
        document.getElementById("access").innerHTML = "Value: " + i;
      }
      t = new Date().getTime() - t0;
      elm.innerHTML = slowText + t + "ms";
    }

    function iterateArrayQuickly() {
      var t0 = new Date().getTime();
      var fastText = "Fast Time: ";
      var i, t;
      var elm = document.getElementById("fast");
      var length = myArray.length;
      for (i = 0; i < length; i++) {
        document.getElementById("access").innerHTML = "Value: " + i;
      }
      t = new Date().getTime() - t0;
      elm.innerHTML = fastText + t + "ms";
    }
  </script>
</body>
</html>

The interesting thing is that the iteration executed first always seems to win out over the other. But what is considered "bad code" seems to win the majority of the time after each has been executed a few times. Perhaps someone smarter than myself can explain why. But for now, syntax-wise I'm sticking to what is more legible for me:

for (i = 0; i < arr.length; i++) {
If nodes is a DOM NodeList, then the second loop will be much, much faster, because in the first loop you look up the DOM (very costly) at each iteration. jsperf
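A sketch of the difference (getElementsByTagName returns a live collection, so reading its length can involve DOM work; the selector and loop bodies are illustrative):

var nodes = document.getElementsByTagName('div'); // live HTMLCollection
// re-reads the collection's length on every iteration:
for (var i = 0; i < nodes.length; i++) { /* ... */ }
// caches the length once, or snapshots the collection into a plain array:
for (var i = 0, len = nodes.length; i < len; i++) { /* ... */ }
var snapshot = Array.prototype.slice.call(nodes);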
This has always been the most performant on any benchmark test that I've used.

for (i = 0, val; val = nodes[i]; i++) {
  doSomethingAwesome(val);
}
I believe that nodes.length is already defined and is not being recalculated on each use. So the first example would be faster because it defines one less variable. Though the difference would be unnoticeable.