javascript array traversal - efficiency

So, I have seen this piece of code in a lot of places:
for (var i = 0, len = myArray.length; i < len; i++) {
}
I am aware that this caches the array's length.
Today I saw this:
var len = myArray.length;
var i = 0;
while(i++ < len)
Efficiency-wise, both would be the same, right? Any input would be appreciated.

If you have a "normal" loop, you can also change i < len to i !== len. This makes the loop a lot faster, because the check for inequality is very fast. The caching of the variable is not so important, but it does no harm.
So a fast loop in JavaScript can be written as follows:
for (var i = 0, len = myArray.length; i !== len; i++) {
}
UPDATE
I made some performance tests a while ago and this was what I found out. But nowadays the browsers don't show the same behaviour; in fact it's the opposite (< is faster than !==). Here is a test I made just now: http://jsperf.com/loop-inequality-check
So forget about the posting above ;)

Set up a jsperf test case here:
http://jsperf.com/javascript-array-length
// Test case 1: recompute arr.length on every iteration
for (i = 0; i < arr.length; i++) {
    // nothing
}
// Test case 2: cache the length in a variable first
var arrlength = arr.length;
for (i = 0; i < arrlength; i++) {
    // nothing
}
// Test case 3: while loop with a post-incremented counter
var arrlength = arr.length,
    i = 0;
while (arrlength > i++) {
    // nothing
}
// Test case 4: while loop counting down over the cached length
var arrlength = arr.length;
while (arrlength--) {
    // nothing
}
If the test cases can be improved upon, please let me know in the comments. With a small number of tests, it seems IE11 is better optimized for the while cases, while Chrome 31 appears to prefer the second for loop (which is fairly similar to the while cases).
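One caveat on comparing these (my note, not part of the original test setup): the two while variants do not walk the indexes the same way as the for loops, so they are not drop-in replacements once the loop body actually uses the index. A small sketch:
var arr = ['a', 'b', 'c'];

// The post-increment while variant: the body sees i = 1, 2, 3,
// so you would have to index with arr[i - 1].
var arrlength = arr.length, i = 0;
while (arrlength > i++) {
    console.log(i - 1, arr[i - 1]); // 0 'a', 1 'b', 2 'c'
}

// The decrementing while variant walks the array backwards.
arrlength = arr.length;
while (arrlength--) {
    console.log(arrlength, arr[arrlength]); // 2 'c', 1 'b', 0 'a'
}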

Related

Why won't my function work when I use splice?

I am trying to write a function which calculates all prime numbers up to an input parameter and returns them. I am doing this for practice.
I have written this function in a few ways, but I was trying to find new approaches for more practice and better performance. The last thing I tried is the code below:
function primes(num) {
    let s = []; // sieve
    for (let i = 2; i <= num; i++) {
        s.push(i);
    }
    for (let i = 0; i < s.length; i++) {
        for (let j = s[i] * s[i]; j <= num;) {
            //console.log(j);
            if (s.indexOf(j) != -1) {
                s.splice(s.indexOf(j), 1, 0);
            }
            j += s[i];
        }
    }
    s = s.filter(a => a != 0);
    return s;
}
console.log(primes(10));
The problem is that when I run this in a browser, it keeps calculating and won't stop, and I don't know why.
Note: when I comment out the splice and uncomment console.log(j), everything works as expected and the logs are what they should be, but with splice the browser keeps calculating and won't stop.
I am using the latest version of Chrome but I don't think that can have anything to do with the problem.
Your problem lies in this line:
s.splice(s.indexOf(j), 1, 0);
The splice function's third argument (and any after it) contains the elements to be added in place of the removed ones. That means that instead of removing elements, you are replacing their values with 0. Once the outer loop reaches an index whose value is 0, j starts at 0 * 0 = 0 and j += s[i] adds 0 each time, so your j-loop freezes.
To fix it, simply omit the third parameter.
function primes(num) {
    let s = []; // sieve
    for (let i = 2; i <= num; i++) {
        s.push(i);
    }
    for (let i = 0; i < s.length; i++) {
        for (let j = s[i] * s[i]; j <= num;) {
            //console.log(j);
            if (s.indexOf(j) != -1) {
                s.splice(s.indexOf(j), 1);
            }
            j += s[i];
        }
    }
    return s;
}
console.log(primes(10));
Your problem is in this loop:
for(let j = s[i]*s[i]; j <= num;)
This for loop runs forever because j always stays less than or equal to num in whatever case you're testing. It is very difficult to determine exactly when the code starts looping infinitely, because you are modifying the list as you loop.
In effect, though, the splice call ends up setting some of the entries in s to 0, which means that j += s[i] will no longer get you out of the loop.
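To make that concrete, here is a small hand trace (my own illustration, assuming the call primes(10)) of the state at the moment the inner loop locks up:
// After the outer loop has processed s[0] = 2 and s[1] = 3, the sieve looks like:
var s = [2, 3, 0, 5, 0, 7, 0, 0, 0];
// When i reaches index 2, whose value has been replaced with 0:
var i = 2;
var j = s[i] * s[i]; // j = 0, and 0 <= 10 is always true
// Every pass then does j += s[i], i.e. j += 0, so j never reaches num and the loop never ends.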

Complexity of nested for loops starting from the last iteration

function solution(A) {
    var len = A.length;
    cnt = 0;
    for (i = 0; i < len - 1; i++) {
        for (a = i + 1; a < len; a++) {
            if (A[i] == A[a]) { cnt++; }
            else { continue; }
        }
        if (cnt > 1000000000) { return 1000000000; }
    }
    return cnt;
}
So this is code for counting identical pairs in an array. I know that two nested for loops give a time complexity of O(n²). Is that always the case, even if each subsequent iteration only goes through the remaining part of the array?
Yes, this will be roughly n²/2 operations, but since constant factors aren't important in big-O notation it still ends up being O(n²).
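To spell out where the n²/2 comes from: the inner loop body runs (n-1) + (n-2) + ... + 1 = n(n-1)/2 times. A small sketch (my own, just counting iterations):
// Count how many times the inner comparison runs for an array of length n.
function pairComparisons(n) {
    var count = 0;
    for (var i = 0; i < n - 1; i++) {
        for (var a = i + 1; a < n; a++) {
            count++;
        }
    }
    return count;
}
console.log(pairComparisons(10));   // 45
console.log(10 * (10 - 1) / 2);     // 45, i.e. n(n-1)/2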

let vs var performance in nodejs and chrome

When I test following code in chrome and nodejs, I get following:
Chrome:
for loop with VAR: 24.058ms
for loop with LET: 8.402ms
NodeJS:
for loop with VAR: 4.329ms
for loop with LET: 8.727ms
As per my understanding, LET is faster in Chrome because of block scoping. But can someone help me understand why it is the opposite in Node.js? Or am I missing something?
"use strict";
console.time("for loop with VAR");
for (var i = 0; i < 1000000; i += 1) {
// Do nothing
}
console.timeEnd("for loop with VAR");
console.time("for loop with LET");
for (let i = 0; i < 1000000; i += 1) {
// Do nothing
}
console.timeEnd("for loop with LET");`
PS: I'm not sure whether this is the ideal way to test performance.
The V8 version shipped with Node.js 5.10 doesn't support the temporal dead zone (TDZ) for let bindings.
Chrome, instead, uses V8 5.0, which does support it... but since the VM is not yet optimized to handle the TDZ, it's normal that for now it's slower (I remember reading people asserting that replacing var with let made code about 27% slower).
When you do
for (let i = 0; i < 1000000; i += 1) { }
the i value in each loop cycle is a separate binding, which is useful when using i inside an asynchronous callback. This is slower, but it can still be faster than the alternatives for that use case.
When instead you use
let j;
for (j = 0; j < 1000000; ++j) { }
you will only have one value reference, and it will be just as fast as with var.
Try the following code
console.time("let i");
for (let i = 0; i < 10000000; ++i) { }
console.timeEnd("let i");
console.time("let j");
let j;
for (j = 0; j < 10000000; ++j) { }
console.timeEnd("let j");
console.time("var k");
for (var k = 0; k < 10000000; ++k) { }
console.timeEnd("var k");
this will give results like
let i: 91ms
let j: 25ms
var k: 27ms
where clearly let is equally fast to var when used correctly.
Also to see the difference in asynchronous behaviour, try
for (let i = 0; i < 3; ++i) {
    setImmediate(() => { console.log(i) });
}
let j;
for (j = 0; j < 3; ++j) {
    setImmediate(() => { console.log(j) });
}
for (var k = 0; k < 3; ++k) {
    setImmediate(() => { console.log(k) });
}
which will output
0
1
2
3
3
3
3
3
3
as in each cycle of the let i loop the i value is a fresh binding, which is what causes the slight overhead, whereas in the other two loops every callback sees the same variable.
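For comparison, the classic pre-ES6 workaround (my addition, not part of the original answer) is to create a new scope per iteration yourself, which is roughly what let does for you:
for (var k = 0; k < 3; ++k) {
    // An IIFE captures the current value of k, so each callback logs its own copy.
    (function (current) {
        setImmediate(() => { console.log(current) }); // 0, 1, 2
    })(k);
}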
I can't tell you more, but as mentioned in this video (very good), you need smarter code to test this.
https://www.youtube.com/watch?v=65-RbBwZQdU
The compiler will do magic things with your code and might even erase the loop entirely if you don't use i and the loop body is empty.
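A rough sketch of what "smarter code" might look like here (my own illustration, not from the video): give the loop a result that is actually used, so the engine cannot simply drop it as dead code.
console.time("let with a used result");
let total = 0;
for (let i = 0; i < 10000000; ++i) {
    total += i; // the loop now does observable work
}
console.timeEnd("let with a used result");
console.log(total); // printing the result keeps the work from being optimized away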

Javascript for loop without recalculating array's length

In my Javascript reference book, for loops are optimized in the following way:
for (var i = 0, len = keys.length; i < len; i++) { BODY }
Apparently, doing "len = keys.length" prevents the computer from recalculating keys.length each time it goes through the for loop.
I don't understand why the book doesn't write "var len = keys.length" instead of "len = keys.length"? Isn't the book making "len" a global variable, which isn't good if you're trying to nest two for-loops that loop through two arrays?
E.g.
for (var i = 0, len = keys.length; i < len; i++) {
    for (var i = 0, len = array2.length; i < len; i++) {
    }
}
Source: Flanagan, David (2011-04-18). JavaScript: The Definitive Guide: Activate Your Web Pages (Definitive Guides) (Kindle Locations 6992-6998). O'Reilly Media. Kindle Edition.
You can chain variable declarations like this
var i = 0, foo="bar", hello = "world"
Which is close to the same thing as
var i = 0;
var foo = "bar";
var hello = "world";
So this
for(var i = 0, len = keys.length; i < len; i++)
Is the same as
var len = keys.length;
for(var i = 0; i < len; i++)
I like to avoid .length altogether and use something like
var keys = /* array or nodelist */, key, i;
for(i = 0; key = keys[i]; i++)
This will cause the for loop to end as soon as key resolves to undefined.
Now you can use key instead of keys[i]
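One caveat with that truthy-check pattern (my note, not part of the answer above): it stops at the first falsy element, so it is only safe when the array cannot contain values like 0, "", null, or false.
// Illustrative sketch: the condition key = keys[i] ends the loop on any falsy value.
var keys = ["a", "", "b"];
for (var i = 0, key; key = keys[i]; i++) {
    console.log(key); // logs "a" only -- the empty string at index 1 stops the loop
}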
As a side note, your second example would never work as all of the variables defined in the first for statement would be overwritten by the second, yielding unexpected results. You can nest for loops, but you have to use different variable names.
As Ankit correctly mentioned in the comments on the question, it is shorthand. Moreover, var is function-scoped, not block-scoped, so you would be overwriting the len declared in the outer for loop when re-declaring it in the inner one.
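To make that concrete, here is a small sketch (my own example, with hypothetical keys and array2 parameters) showing that a var declared in a for header belongs to the whole function, so the inner "re-declaration" reuses the same len:
function demo(keys, array2) {
    for (var i = 0, len = keys.length; i < len; i++) {
        // This does not create a new variable; it overwrites the len from the
        // outer loop header, because var is function-scoped.
        for (var j = 0, len = array2.length; j < len; j++) {
            // ...
        }
    }
    console.log(typeof len); // "number" -- len is still visible here, after both loops
}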
In JavaScript, some consider each separate declaration statement a small extra cost, so using one var with the comma shorthand to create multiple variables is often seen as tidier and marginally more efficient.
Your example uses only one var to create the variables.
for (var i = 0, len = array2.length; i < len; i++) {
//loop
}
The alternative is to var up the len variable outside the loop.
var len = array2.length;
for (var i = 0; i < len; i++) {
//loop
}
Now you have two separate "var" instances.
I personally like to declare my var at the beginning.
var len = array2.length,
i = 0;
for(i = 0; i < len; i++){
//First loop
}
//So if I use more than one for loop in a routine I can just reuse i
for(i = 0; i < 10; i++){
//Second loop
}
Hope that helps explain.

Why is this backwards loop much slower than going forward?

I posted an answer to another user's question here: How to count string occurrence in string?
So I was playing with algorithms, and after benchmarking some functions I was wondering why a backwards loop was significantly slower than a forward one.
Benchmark test here
NOTE: The code below does not work as it is supposed to; there are
other versions that do work (that's not the point of this question). Be aware
of that before copying and pasting it.
Forward
function occurrences(string, substring) {
    var n = 0;
    var c = 0;
    var l = substring.length;
    for (var i = 0, len = string.length; i < len; i++) {
        if (string.charAt(i) == substring.charAt(c)) {
            c++;
        } else {
            c = 0;
        }
        if (c == l) {
            c = 0;
            n++;
        }
    }
    return n;
}
Backwards
function occurrences(string, substring) {
    var n = 0;
    var l = substring.length - 1;
    var c = l;
    for (i = string.length; i > 1; i--) {
        if (string.charAt(i) == substring.charAt(c)) {
            c--;
        } else {
            c = l;
        }
        if (c < 0) {
            c = l;
            n++;
        }
    }
    return n;
}
I think the backwards test has a bug:
for (i = string.length; i > 1; i--) {
should be
for (i = string.length - 1; i >= 0; i--) {
When i is string.length, string.charAt(i) returns an empty string (there is no character at that index), so it can never match. Do this several thousand times, and it could yield a substantial difference.
Here's a modified test that seems to yield much closer to identical performances.
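For reference, a quick check of that out-of-range behaviour (my own snippet, not part of the original answer):
var s = "abc";
console.log(s.charAt(2)); // "c" -- the last valid index is length - 1
console.log(s.charAt(3)); // ""  -- out of range, so it can never equal a substring character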
I found the bottle-neck myself.
when I did this
for (i = string.length; i > 1; i--) {
I accidentally deleted the "var" from var i, so I made i global.
After fixing it I got the expected results.
for (var i = string.length; i > 1; i--) {
I never thought that this could make a HUGE difference, so pay attention, guys.
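As an aside (my addition, just an illustrative sketch): strict mode catches exactly this kind of mistake, because assigning to an undeclared variable throws instead of silently creating a global.
"use strict";
function countDown(string) {
    // Without the var, this assignment throws
    // "ReferenceError: i is not defined" under strict mode,
    // instead of silently creating a global i.
    for (i = string.length - 1; i >= 0; i--) {
        // ...
    }
}
countDown("abc"); // ReferenceError in strict mode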
Fixed benchmark test here
(Before and after benchmark screenshots.)
PS: for practical use, do NOT use these functions; the indexOf version is much faster.
What data are you testing with? If your data has lots of matching prefixes but not many false matches the other way round, that might affect it.
Also, won't that search have a bug with cases like "aaabbaaa" when trying to find "aab"? It will match "aa", then fail, then continue from the third "a" and fail.
Because they are not completely mirrored functions. Add console.log()s inside all the ifs and elses of both functions and compare the results; you will see that the tests aren't fair.
You did something wrong. I suggest making sure that they both work as expected before even starting the tests.
