Efficient way to access members of Objects in JavaScript

My question somewhat relates to this, but it still involves some key differences.
So here it is; I have the following code:
for(var i = 0; i < someObj.children[1].listItems.length; i++)
{
    doSomething(someObj.children[1].listItems[i]);
    console.log(someObj.children[1].listItems[i]);
}
vs.
var i = 0,
    itemLength = someObj.children[1].listItems.length,
    item;
for(; i < itemLength; i++)
{
    item = someObj.children[1].listItems[i];
    doSomething(item);
    console.log(item);
}
Now, this is a very small, representative part of the code I deal with in an enterprise web app built with ExtJS. In the code above, the second example is clearly more readable and cleaner than the first one.
But is there any performance gain when I reduce the number of object lookups in this way?
I'm asking for a scenario where there will be a lot more code inside the loop accessing members deep within the object, the iteration itself will happen ~1000 times, and the browsers vary from IE8 to the latest Chrome.

There won't be a noticeable difference, but for performance and readability, and because it looks like it could be a live NodeList, it should probably be iterated in reverse if you're going to modify it:
var elems = someObj.children[1].listItems;
for(var i = elems.length; i--;) {
    doSomething(elems[i]);
    console.log(elems[i]);
}

Performance gain will depend on how large the list is.
Caching the length is typically better (your second case), because someObj.children[1].listItems.length is not evaluated every time through the loop, as it is in your first case.
If order doesn't matter, I like to loop like this:
var i;
for( i = array.length; --i >= 0; ){
    // do stuff
}

Caching object property lookups will result in a performance gain, but the extent of it depends on the number of iterations and the depth of the lookups. When your JS engine evaluates something like object.a.b.c.d, there is more work involved than just evaluating d. You can make your second case more efficient by caching additional property lookups outside the loop:
var i = 0,
    items = someObj.children[1].listItems,
    itemLength = items.length,
    item;
for(; i < itemLength; i++) {
    item = items[i];
    doSomething(item);
    console.log(item);
}
The best way to tell, of course, is to run a jsPerf benchmark.
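If you want a rough local check without setting up a jsPerf page, a minimal sketch along these lines can give a ballpark comparison (console.time is standard; doSomething and someObj are the placeholders from the question, and the numbers will vary a lot between engines):
console.time('repeated lookups');
for (var i = 0; i < someObj.children[1].listItems.length; i++) {
    doSomething(someObj.children[1].listItems[i]);
}
console.timeEnd('repeated lookups');

console.time('cached lookups');
var items = someObj.children[1].listItems,
    len = items.length;
for (var j = 0; j < len; j++) {
    doSomething(items[j]);
}
console.timeEnd('cached lookups');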

Related

Array contains anything other than 0

I have an array of numbers with 64 indexes (it's canvas image data).
I want to know if my array contains only zero's or anything other than zero.
We can return a boolean upon the first encounter of any number greater than zero (even if the very last index is non-zero and all the others are zero, we should return true).
What is the most efficient way to determine this?
Of course, we could loop over our array (focus on the testImageData function):
// Setup
var imgData = {
    data: new Array(64)
};
imgData.data.fill(0);

// Set last pixel to black
imgData.data[imgData.data.length - 1] = 255;

// The part in question...
function testImageData(img_data) {
    var retval = false;
    for (var i = 0; i < img_data.data.length; i++) {
        if (img_data.data[i] > 0) {
            retval = true;
            break;
        }
    }
    return retval;
}

var result = testImageData(imgData);
...but this could take a while if my array were bigger.
Is there a more efficient way to test if any index in the array is greater than zero?
I am open to answers using lodash, though I am not using lodash in this project. I would rather the answer be native JavaScript, either ES5 or ES6. I'm going to ignore any jQuery answers, just saying...
Update
I set up a test for various ways to check for a non-zero value in an array, and the results were interesting.
Here is the JSPerf Link
Note that the Array.some test was much slower than using for (index) and even for-in. The fastest, of course, was the indexed for loop: for (let i = 0; i < arr.length; i++).
You should note that I also tested a Regex solution, just to see how it compared. If you run the tests, you will find that the Regex solution is much, much slower (not surprising), but still very interesting.
I would like to see if there is a solution that could be accomplished using bitwise operators. If you feel up to it, I would like to see your approach.
Your for loop is the fastest option on Chrome 64 on Windows 10.
I've tested it against two other options; here is the link to the test so you can run them in your environment.
My results are:
// 10776 operations per second (the best)
for (let i = 0; i < arr.length; i++) {
    if (arr[i] !== 0) {
        break
    }
}

// 4131 operations per second
for (const n of arr) {
    if (n !== 0) {
        break
    }
}

// 821 operations per second (the worst)
arr.some(x => x)
There is no faster way than looping through every element in the array. Logically, in the worst-case scenario the last pixel in your array is black, so you have to check all of them. The best algorithm can therefore only have an O(n) runtime. The best thing you can do is write a loop that breaks early upon finding a non-white pixel.
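Since the question also asks about bitwise operators, here is a hedged sketch of that idea: OR every value into an accumulator and test the result at the end. It cannot break early, so it is usually slower than the early-exit loop, but it shows what such a solution would look like (testImageDataBitwise is a made-up name, and it assumes the values fit in 32-bit integers, which canvas data does):
function testImageDataBitwise(img_data) {
    var acc = 0;
    for (var i = 0; i < img_data.data.length; i++) {
        acc |= img_data.data[i]; // any non-zero value sets at least one bit in acc
    }
    return acc !== 0;
}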

Efficiently transferring elements between arrays

I have several JavaScript arrays, each containing a list of pointers to objects. When an object meets a certain condition, its pointer must be removed from its current containing array and placed into a different array.
My current (naive) solution is to splice out the exiting array elements and concatenate them onto the array they are entering. This is a slow method and seems to fragment memory over time.
Can anyone offer advice (general or JS-specific) on a better way to do this?
Demonstration code:
// Definitions
TestObject = function() {
    this.shouldSwitch = function() {
        return (Math.random() > 0.9);
    }
}

A = [];
B = [];
while (A.length < 500) {
    A.push(new TestObject());
}

// Transfer loop
doTransfers = function() {
    var A_pending = [];
    var B_pending = [];
    for (var i = 0; i < A.length; i++) {
        if (A[i].shouldSwitch()) {
            B_pending.push(A[i]);
            A.splice(i, 1);
            i--;
        }
    }
    for (var i = 0; i < B.length; i++) {
        if (B[i].shouldSwitch()) {
            A_pending.push(B[i]);
            B.splice(i, 1);
            i--;
        }
    }
    A = A.concat(A_pending);
    B = B.concat(B_pending);
}

setInterval(doTransfers, 10);
Thanks!
For a language-independent take on this problem: when you're transferring elements from one contiguous sequence (array) to another, it's not appending elements to the back of the new array that will be the bottleneck (constant time complexity); it's the removal of elements from the middle of your existing container (linear time complexity).
So the biggest benefit you can get is to replace that linear-time operation of removing from the middle of the array with a constant-time operation that still uses that cache-friendly, contiguous array representation.
One of the easiest ways to do this is to simply create two new arrays instead of one: a new array to append the elements you want to keep and a new array to append the elements you want to transfer. When you're done, you can swap out the new array of elements you want to keep (not transfer) with the old array you had.
In such a case, we're exchanging linear-time removals from the middle of a container with amortized constant-time insertions to the back of a new one. While insertion to the end of a container still has a worst-case complexity of O(N) for reallocations, it occurs infrequently enough and is still generally far better than paying for an operation that has an average complexity of O(N) every time you transfer a single element by constantly removing from the middle.
Another way to solve this problem, which can be even more efficient (especially for cases like really small arrays, since it only creates one new array), is this:
When you transfer an element, first append a copy of it (possibly just a shallow copy) to the new container. Then overwrite the element at that index in the old container with the element from the back of the old container. Finally, pop off the element at the back of the old container. So we have one push, one assignment, and one pop.
In this case, we're exchanging a linear-time removal from the middle of a container for a single assignment (store/move instruction) and a constant-time pop from the back of the container (often basic arithmetic). This can work extremely well if the order of the elements in the old array is allowed to be shuffled around a little, and it is an often-overlooked way to turn that linear-time removal from the middle of the array into a constant-time removal from the back.
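A minimal JavaScript sketch of that swap-and-pop idea, assuming the order of elements in the source array is allowed to change (transferAt is a made-up helper name):
function transferAt(source, dest, i) {
    dest.push(source[i]);                   // one push: append to the destination
    source[i] = source[source.length - 1];  // one assignment: overwrite with the last element
    source.pop();                           // one pop: constant-time removal from the back
}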
splice is pretty harmful for performance in a loop. But you don't seem to need to mutate the input arrays anyway - you are creating new ones and overwriting the previous values.
Just do
function doTransfers() {
    var A_pending = [];
    var B2_pending = [];
    for (var i = 0; i < A.length; i++) {
        if (A[i].shouldSwitch())
            B2_pending.push(A[i]);
        else
            A_pending.push(A[i]);
    }
    var B1_pending = [];
    for (var i = 0; i < B.length; i++) {
        if (B[i].shouldSwitch())
            A_pending.push(B[i]);
        else
            B1_pending.push(B[i]);
    }
    A = A_pending;
    B = B1_pending.concat(B2_pending);
}

++ or -- Incrementer or Decrementer Speed Comparison [duplicate]

In his book Even Faster Web Sites, Steve Souders writes that a simple way to improve the performance of a loop is to decrement the iterator toward 0 rather than incrementing toward the total length (actually the chapter was written by Nicholas C. Zakas). This change can result in savings of up to 50% of the original execution time, depending on the complexity of each iteration. For example:
var values = [1, 2, 3, 4, 5];
var length = values.length;

for (var i = length; i--;) {
    process(values[i]);
}
This is nearly identical for the for loop, the do-while loop, and the while loop.
I'm wondering: what's the reason for this? Why is decrementing the iterator so much faster? (I'm interested in the technical background of this and not in benchmarks proving the claim.)
EDIT: At first sight the loop syntax used here looks wrong. There is no length-1 or i>=0, so let's clarify (I was confused too).
Here is the general for loop syntax:
for ([initial-expression]; [condition]; [final-expression])
statement
initial-expression - var i=length
This variable declaration is evaluated first.
condition - i--
This expression is evaluated before each loop iteration. It decrements the variable, including before the first pass through the loop. If this expression evaluates to false the loop ends. In JavaScript, 0 == false, so when i finally reaches 0 it is interpreted as false and the loop ends.
final-expression
This expression is evaluated at the end of each loop iteration (before the next evaluation of condition). It's not needed here and is empty. All three expressions are optional in a for loop.
The for loop syntax is not part of the question, but because it's a little bit uncommon I think it's interesting to clarify it. And maybe one reason it's faster is that it uses fewer expressions (the 0 == false "trick").
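To make that evaluation order explicit, the decrementing for loop above can be expanded into an equivalent while loop (same behaviour, using the process and values names from the example):
var i = length;             // initial-expression
while (i--) {               // condition: the old value of i is tested, then i is decremented
    process(values[i]);     // first pass: i === length - 1; last pass: i === 0
}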
I'm not sure about Javascript, and under modern compilers it probably doesn't matter, but in the "olden days" this code:
for (i = 0; i < n; i++) {
    .. body ..
}
would generate
    move register, 0
L1:
    compare register, n
    jump-if-greater-or-equal L2
    .. body ..
    increment register
    jump L1
L2:
while the backward-counting code
for (i = n; --i >= 0;) {
    .. body ..
}
would generate
    move register, n
L1:
    decrement-and-jump-if-negative register, L2
    .. body ..
    jump L1
L2:
so inside the loop it's only doing two extra instructions instead of four.
I believe the reason is that you're comparing the loop end point against 0, which is faster than comparing against length (or another JS variable).
It is because the ordinal operators <, <=, >, >= are polymorphic, so these operators require type checks on both the left and right sides to determine what comparison behaviour should be used.
There are some very good benchmarks available here:
What's the Fastest Way to Code a Loop in JavaScript
It is easy to see that an iteration can use fewer instructions. Let's just compare these two:
for (var i = 0; i < length; i++) {
}

for (var i = length; i--;) {
}
When you count each variable access and each operator as one instruction, the former for loop uses 5 instructions (read i, read length, evaluate i<length, test (i<length) == true, increment i) while the latter uses just 3 instructions (read i, test i == true, decrement i). That is a ratio of 5:3.
What about using a reverse while loop then:
var values = [1, 2, 3, 4, 5];
var i = values.length;

/* i is evaluated first and then decremented; when i is 1, the code inside the loop
   is processed for the last time, with i = 0. */
while (i--) {
    // the first time in here, i is (length - 1), so it's ok!
    process(values[i]);
}
IMO this one at least is more readable code than for(i=length; i--;).
for increment vs. decrement in 2017
In modern JS engines, incrementing in for loops is generally faster than decrementing (based on personal Benchmark.js tests), and it is also more conventional:
for (let i = 0; i < array.length; i++) { ... }
Whether length = array.length has any considerable positive effect depends on the platform and the array length, but usually it doesn't:
for (let i = 0, length = array.length; i < length; i++) { ... }
Recent V8 versions (Chrome, Node) have optimizations for array.length, so length = array.length can be efficiently omitted there in any case.
There is an even more "performant" version of this.
Since each expression in a for loop header is optional, you can skip even the first one.
var array = [...];
var i = array.length;

for (; i--;) {
    do_teh_magic();
}
With this you skip even the [initial-expression], so you end up with just one expression left in the loop header.
I've been exploring loop speed as well, and was interested to find this tidbit about decrementing being faster than incrementing. However, I have yet to find a test that demonstrates this. There are lots of loop benchmarks on jsperf. Here is one that tests decrementing:
http://jsperf.com/array-length-vs-cached/6
Caching your array length, however (also recommended in Steve Souders' book), does seem to be a winning optimization.
In modern JS engines, the difference between forward and reverse loops is almost non-existent anymore. But the performance difference comes down to two things:
a) an extra lookup of the length property on every cycle
//example:
for(var i = 0; src.length > i; i++)
//vs
for(var i = 0, len = src.length; len > i; i++)
This is the biggest performance gain of a reverse loop, and it can obviously be applied to forward loops as well.
b) an extra variable assignment
The smaller gain of a reverse loop is that it only requires one variable assignment instead of two:
//example:
var i = src.length; while(i--)
I've run a benchmark in C# and C++ (similar syntax). There, the performance actually does differ noticeably in for loops, as compared to do-while or while. In C++, performance was greater when incrementing. It may also depend on the compiler.
In JavaScript, I reckon, it all depends on the browser (JavaScript engine), but this behaviour is to be expected. JavaScript is optimized for working with the DOM. So imagine you loop through a collection of DOM elements, incrementing a counter at each iteration, and you have to remove them as you go: you remove element 0, then element 1, but then you skip the one that has shifted into position 0. When looping backwards, that problem disappears. I know the example given isn't quite the right one, but I have encountered situations where I had to delete items from an ever-changing object collection.
Because backward looping is more often unavoidable than forward looping, I'm guessing that JS engines are optimized for it.
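A small sketch of the removal problem described above, using a plain array to stand in for the DOM collection (hypothetical data, purely to illustrate the index shifting):
var items = ['a', 'b', 'c', 'd'];

// Forward removal skips elements: after removing index 0, 'b' shifts into
// position 0, and the next iteration (i = 1) examines 'c' instead of 'b'.
for (var i = 0; i < items.length; i++) {
    items.splice(i, 1);
}
console.log(items); // ['b', 'd'] - half the elements were skipped

// Looping backwards removes everything, because only already-visited
// positions shift.
var items2 = ['a', 'b', 'c', 'd'];
for (var j = items2.length - 1; j >= 0; j--) {
    items2.splice(j, 1);
}
console.log(items2); // []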
Have you timed it yourself? Mr. Souders might be wrong with regard to modern interpreters. This is precisely the sort of optimization in which a good compiler writer can make a big difference.
I am not sure if it's faster, but one reason I see is that when you iterate over a large array using increment you tend to write:
for (var i = 0; i < array.length; i++) {
    ...
}
You are essentially accessing the length property of the array N (number of elements) times.
Whereas when you decrement, you access it only once. That could be a reason.
But you can also write the incrementing loop as follows:
for (var i = 0, len = array.length; i < len; i++) {
    ...
}
It's not faster (at least in modern browsers):
// Double loops to check the initialization performance too
const repeats = 1e3;
const length = 1e5;

console.time('Forward');
for (let j = 0; j < repeats; j++) {
    for (let i = 0; i < length; i++) {}
}
console.timeEnd('Forward'); // 58ms

console.time('Backward');
for (let j = repeats; j--;) {
    for (let i = length; i--;) {}
}
console.timeEnd('Backward'); // 64ms
The difference is even bigger in the case of array iteration:
const repeats = 1e3;
const array = [...Array(1e5)];

console.time('Forward');
for (let j = 0; j < repeats; j++) {
    for (let i = 0; i < array.length; i++) {}
}
console.timeEnd('Forward'); // 34ms

console.time('Backward');
for (let j = 0; j < repeats; j++) {
    for (let i = array.length; i--;) {}
}
console.timeEnd('Backward'); // 64ms

Is it possible to have quadratic time complexity without nested loops?

It was going so well. I thought I had my head around time complexity. I was having a play on Codility and used the following algorithm to solve one of their problems. I am aware there are better solutions to this problem (a permutation check) - but I simply don't understand how something without nested loops could have a time complexity of O(N^2). I was under the impression that associative arrays in JavaScript are like hashes and are very quick, and wouldn't be implemented with time-consuming loops.
Here is the example code
function solution(A) {
    // write your code in JavaScript (Node.js)
    var dict = {};
    for (var i = 1; i < A.length + 1; i++) {
        dict[i] = 1;
    }
    for (var j = 0; j < A.length; j++) {
        delete dict[A[j]];
    }
    var keyslength = Object.keys(dict).length;
    return keyslength === 0 ? 1 : 0;
}
and here is the verdict: Codility reported a detected time complexity of O(N^2).
There must be a bug in their tool that you should report: this code has a complexity of O(n).
Believe me I am someone on the Internet.
On my machine:
console.time(1000);
solution(new Array(1000));
console.timeEnd(1000);
//about 0.4ms
console.time(10000);
solution(new Array(10000));
console.timeEnd(10000);
// about 4ms
Update: To be pedantic (sic), I still need a third data point to show it's linear
console.time(100000);
solution(new Array(100000));
console.timeEnd(100000);
// about 45ms, well let's say 40ms, that is not a proof anyway
Is it possible to have quadratic time complexity without nested loops? Yes. Consider this:
function getTheLengthOfAListSquared(list) {
    for (var i = 0; i < list.length * list.length; i++) { }
    return i;
}
As for that particular code sample, it does seem to be O(n), as #floribon says, given that JavaScript object lookups should be constant time.
Remember that making an algorithm that takes an arbitrary function and determines whether that function will complete at all is provably impossible (halting problem), let alone determining complexity. Writing a tool to statically determine the complexity of anything but the most simple programs would be extremely difficult and this tool's result demonstrates that.

Why are these JavaScript for loops significantly slower on Firefox than Chrome / Safari?

I was messing around with the benchmark site jsPerf and created my own benchmark at http://jsperf.com/prefix-or-postfix-increment/9.
The benchmarks are variations of JavaScript for loops, using prefix and postfix increment operators and the Crockford JSLint style of not using an in-place increment operator.
for (var index = 0, len = data.length; index < len; ++index) {
    data[index] = data[index] * 2;
}

for (var index = 0, len = data.length; index < len; index++) {
    data[index] = data[index] * 2;
}

for (var index = 0, len = data.length; index < len; index += 1) {
    data[index] = data[index] * 2;
}
After getting the numbers from a couple of runs of the benchmark, I noticed that Firefox is doing about 15 operations per second on average and Chrome is doing around 300.
I thought JaegerMonkey and V8 were fairly comparable in terms of speed? Are my benchmarks flawed somehow, is Firefox doing some kind of throttling here, or is the gap between the JavaScript engines' performance really that large?
UPDATE: Thanks to jfriend00, I've concluded that the difference in performance is not entirely due to the loop iteration, as seen in this version of the test case. As you can see, Firefox is slower, but not by as wide a gap as in the initial test case.
So why is the statement
data[index] = data[index] * 2;
so much slower on Firefox?
Arrays are tricky in JavaScript. The way you create them, how you fill them (and with what values) can all affect their performance.
There are two basic implementations that engines use. The simplest, most obvious one is a contiguous block of memory (just like a C array, with some metadata, like the length). It's the fastest way, and ideally the implementation you want in most cases.
The problem is, arrays in JavaScript can grow very large just by assigning to an arbitrary index, leaving "holes". For example, if you have a small array:
var array = [1,2,3];
and you assign a value to a large index:
array[1000000] = 4;
you'll end up with an array like this:
[1, 2, 3, undefined, undefined, undefined, ..., undefined, 4]
To save memory, most runtimes will convert such an array into a "sparse" array: basically a hash table, just like regular JS objects. Once that happens, reading from or writing to an index goes from simple pointer arithmetic to a much more complicated algorithm, possibly with dynamic memory allocation.
Of course, different runtimes use different heuristics to decide when to convert from one implementation to another, so in some cases, optimizing for Chrome, for example, can hurt performance in Firefox.
In your case, my best guess is that filling the array backwards is causing Firefox to use a sparse array, making it slower.
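A hedged illustration of that guess: writing the highest index first (filling backwards) can push an engine into the sparse, dictionary-like representation, while filling forwards keeps the array packed. Engine heuristics differ and change over time, so treat this as a sketch rather than a guarantee:
var len = 100000;

var packed = [];
for (var i = 0; i < len; i++) {
    packed[i] = i * 2;      // grows contiguously; likely stays a "fast" dense array
}

var maybeSparse = [];
for (var j = len - 1; j >= 0; j--) {
    maybeSparse[j] = j * 2; // the very first write leaves a huge hole at indexes 0..len-2
}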
I hate to give you such a simple answer, but pretty simply: instruction branching: http://igoro.com/archive/fast-and-slow-if-statements-branch-prediction-in-modern-processors/
From what I get from the benchmark, there's something under the hood in these engines that is giving the instruction prediction features of the processor hell.
