EDIT: As noted by kennytm below, and after investigating myself: according to the ECMA spec, when two objects compare as equal in a custom sort, JavaScript is not required to leave those two objects in the same relative order. Chrome and Opera are the only two major browsers with non-stable sorts, but others include Netscape 8 & 9, Kazehakase, IceApe and a few more. The Chromium team has marked this bug as "Working as intended", so it will not be "fixed". If you need your arrays to stay in their original order when values are equal, you will need to employ some additional mechanism (such as the one below). Returning 0 when sorting objects is effectively meaningless, so don't bother. Alternatively, use a library that supports stable sorting, such as Underscore/Lodash.
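For example, a minimal sketch assuming Lodash is loaded as _ (the items array here is made up for illustration):

var items = [
    { a: 1, b: 2 },
    { a: 1, b: 3 },
    { a: 0, b: 7 }
];

// _.sortBy is documented as a stable sort, so items with equal .a
// keep their original relative order.
var sorted = _.sortBy(items, function (item) { return item.a; });
// sorted: [{a: 0, b: 7}, {a: 1, b: 2}, {a: 1, b: 3}]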
I just got a report that some code I wrote is breaking on Chrome. I've tracked it down to a custom method I'm using to sort an array of objects. I am really tempted to call this a bug, but I'm not sure it is.
In all other browsers when you sort an array of objects, if two objects resolve to the same value their order in the updated array is left unchanged. In Chrome, their order is seemingly randomized. Run the code below in Chrome and any other browser you want. You should see what I mean.
I have two questions:
First, was I right in assuming that when your custom sorter returns 0, the two compared items should remain in their original order? (I have a feeling I was wrong.)
Second, is there any good way of fixing this? The only thing I can think of is adding an auto-incrementing number as an attribute to each member of the array before sorting, and then using that value whenever the two items being compared resolve to the same value. In other words, never return 0.
Here's the sample code:
var x = [
    {'a':2,'b':1},
    {'a':1,'b':2},
    {'a':1,'b':3},
    {'a':1,'b':4},
    {'a':1,'b':5},
    {'a':1,'b':6},
    {'a':0,'b':7}
];

var customSort = function(a, b) {
    if (a.a === b.a) return 0;
    if (a.a > b.a) return 1;
    return -1;
};

console.log("before sorting");
for (var i = 0; i < x.length; i++) {
    console.log(x[i].b);
}

x.sort(customSort);

console.log("after sorting");
for (var i = 0; i < x.length; i++) {
    console.log(x[i].b);
}
In all other browsers, what I see is that only the first and last members of the array get moved (I see 7, 2, 3, 4, 5, 6, 1), but in Chrome the inner numbers are seemingly randomized.
[EDIT] Thank you very much to everyone who answered. I guess 'inconsistent' doesn't necessarily mean it's a bug. Also, I just wanted to point out that my b property was just an example. In fact, I'm sorting some relatively wide objects on any one of about 20 keys according to user input. Even keeping track of what the user last sorted by still won't solve the randomness I'm seeing. My work-around will probably be a close variation of this (new code is highlighted):
var x = [
    {'a':2,'b':1},
    {'a':1,'b':2},
    {'a':1,'b':3},
    {'a':1,'b':4},
    {'a':1,'b':5},
    {'a':1,'b':6},
    {'a':0,'b':7}
];
var i;

var customSort = function(a, b) {
    if (a.a === b.a) return a.customSortKey > b.customSortKey ? 1 : -1; /*NEW CODE*/
    if (a.a > b.a) return 1;
    return -1;
};

console.log("before sorting");
for (i = 0; i < x.length; i++) { console.log(x[i].b); }

for (i = 0; i < x.length; i++) { /*NEW CODE*/
    x[i].customSortKey = i;      /*NEW CODE*/
}                                /*NEW CODE*/

x.sort(customSort);

console.log("after sorting");
for (i = 0; i < x.length; i++) { console.log(x[i].b); }
The ECMAScript standard does not guarantee Array.sort is a stable sort. Chrome (the V8 engine) uses in-place QuickSort internally (for arrays of size ≥ 22, else insertion sort) which is fast but not stable.
To fix it, make customSort compare .b as well, eliminating the need for a stable sorting algorithm.
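For example, a minimal sketch of that tie-breaker, using the same a/b fields as the question's sample data:

var customSort = function(a, b) {
    if (a.a !== b.a) return a.a > b.a ? 1 : -1;
    // Tie on .a: fall back to .b so equal items still get a deterministic order
    // (this assumes .b values are unique, as in the sample data).
    return a.b > b.b ? 1 : -1;
};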
Maybe you know it already, but you can use an array to sort on multiple columns and avoid this bug:
var customSort = function(a, b) {
    // Note: comparing arrays with > coerces both sides to strings (e.g. "2,1" > "1,2"),
    // so this works for the single-digit sample data but can misorder multi-digit numbers.
    return [a.a, a.b] > [b.a, b.b] ? 1 : -1;
};
The V8 sort is not stable, unfortunately. I'll see if I can dig up the Chromium bug about this.
V8 sort is now stable!
I have an array of numbers with 64 indexes (it's canvas image data).
I want to know whether my array contains only zeros or anything other than zero.
We can return a boolean upon the first encounter of any number greater than zero (even if the very last index is non-zero and all the others are zero, we should return true).
What is the most efficient way to determine this?
Of course, we could loop over our array (focus on the testImageData function):
// Setup
var imgData = {
    data: new Array(64)
};
imgData.data.fill(0);

// Set last pixel to black
imgData.data[imgData.data.length - 1] = 255;

// The part in question...
function testImageData(img_data) {
    var retval = false;
    for (var i = 0; i < img_data.data.length; i++) {
        if (img_data.data[i] > 0) {
            retval = true;
            break;
        }
    }
    return retval;
}

var result = testImageData(imgData);
...but this could take a while if my array were bigger.
Is there a more efficient way to test if any index in the array is greater than zero?
I am open to answers using lodash, though I am not using lodash in this project. I would rather the answer be native JavaScript, either ES5 or ES6. I'm going to ignore any jQuery answers, just saying...
Update
I set up a test for various ways to check for a non-zero value in an array, and the results were interesting.
Here is the JSPerf Link
Note that the Array.some test was much slower than using for (index) and even for-in. The fastest, of course, was the plain indexed loop: for (let i = 0; i < arr.length; i++)....
You should note that I also tested a Regex solution, just to see how it compared. If you run the tests, you will find that the Regex solution is much, much slower (not surprising), but still very interesting.
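For reference, a hedged guess at what such a regex check might look like (this is purely illustrative; the exact test in the linked jsPerf may differ):

function hasNonZeroRegex(data) {
    // Join the values into one string and look for any non-zero digit.
    return /[1-9]/.test(data.join(""));
}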
I would like to see if there is a solution that could be accomplished using bitwise operators. If you feel up to it, I would like to see your approach.
Your for loop is the fastest way on Chrome 64 with Windows 10.
I've tested it against two other options; here is the link to the test so you can run them in your environment.
My results are:
// 10776 operations per second (the best)
for (let i = 0; i < arr.length; i++) {
    if (arr[i] !== 0) {
        break;
    }
}

// 4131 operations per second
for (const n of arr) {
    if (n !== 0) {
        break;
    }
}

// 821 operations per second (the worst)
arr.some(x => x);
There is no faster way than looping through every element in the array. Logically, in the worst-case scenario the last pixel in your array is black, so you have to check all of them. The best algorithm can therefore only have an O(n) runtime. The best thing you can do is write a loop that breaks early upon finding a non-zero pixel.
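A minimal sketch of such an early-exit check (the hasNonZero name is just for illustration):

function hasNonZero(data) {
    for (var i = 0; i < data.length; i++) {
        if (data[i] !== 0) return true; // stop at the first non-zero value
    }
    return false; // every value was zero
}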
Is it safe to assume that array.indexOf() does a linear search from the beginning of the array?
So if I often search for the biggest value, will sorting the array before calling indexOf make it run even faster?
Notes:
I want to sort only once and search many times.
"biggest value" is actually the most popular search key which is a string.
Yes, indexOf searches linearly from the first element to the last. Whether sorting first and then reading the first entry afterwards pays off depends on the performance of the sorting algorithm: normally O(N log N) for a quicksort versus O(N) for a single linear search. I would suggest you write a simple test with a random number of values and see how the performance behaves.
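As a rough illustration (the array contents and the choice of "date" as the most popular key are made up): if the most-searched key ends up at the front after sorting, indexOf returns after a single comparison.

var keys = ["cherry", "banana", "apple", "date"];

// Sort descending so the "biggest" key comes first.
keys.sort(function (a, b) {
    return a < b ? 1 : a > b ? -1 : 0;
});

console.log(keys.indexOf("date"));  // 0, found immediately
console.log(keys.indexOf("apple")); // 3, still a linear scan for other keys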
Of course, it depends on your data structure. Java's ArrayList, for example, implements indexOf like this:
public int indexOf(Object o) {
    if (o == null) {
        for (int i = 0; i < size; i++)
            if (elementData[i] == null)
                return i;
    } else {
        for (int i = 0; i < size; i++)
            if (o.equals(elementData[i]))
                return i;
    }
    return -1;
}
Perhaps a TreeSet can also help you, since it stays ordered all the time.
In the end I would say: it depends on your data container, on how the ordering or searching is implemented, and on where in the whole process the performance is needed.
As a comment points out, with plain sorted arrays you can use binary search, which costs only O(log N) per lookup once the array is sorted.
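A minimal sketch of binary search over an ascending-sorted array (the binarySearch name and comparisons are illustrative, not from the original answer):

function binarySearch(sortedArr, target) {
    var lo = 0, hi = sortedArr.length - 1;
    while (lo <= hi) {
        var mid = (lo + hi) >> 1;
        if (sortedArr[mid] === target) return mid; // found
        if (sortedArr[mid] < target) lo = mid + 1; // search the right half
        else hi = mid - 1;                         // search the left half
    }
    return -1; // not found
}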
So in the end it is not only a question of the algorithm's performance, but also of when in the process you need that performance. For example, if you have plenty of time when adding values but very little when retrieving the value you need, a sorted structure such as a tree collection can improve things considerably. If you need more performance when adding items, it is likely the other way around.
player.obj.speed = 20;
computer1.obj.speed = 20;
computer2.obj.speed = 20;
into
*.obj.speed = 20; // once and for all!
I think my question here is clear enough to be understood.
I will install automation packages if needed... but I think there is a simpler way that will make me facepalm...
anyway thanks!
// Array of the objects (not their speeds; numbers don't get passed by reference, but objects do)
var objectArray = [
    player.obj,
    computer1.obj,
    computer2.obj
];

function setAll(X) {
    // Go over all the objects and update each one's speed
    for (var i = 0; i < objectArray.length; i++) {
        objectArray[i].speed = X;
    }
}
This is pretty straightforward: make an array of the objects (which WILL update the originals, as opposed to an array of integers or floats, etc.). Then, when you want to change them, just go over the array and set each one's speed property.
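For example (assuming player, computer1 and computer2 already exist as in the question):

setAll(20); // every object's speed is now 20 in one call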
My question somehow relates to this, but it still involves some key differences.
So here it is; I have the following code:
for (var i = 0; i < someObj.children[1].listItems.length; i++) {
    doSomething(someObj.children[1].listItems[i]);
    console.log(someObj.children[1].listItems[i]);
}
vs.
var i = 0,
    itemLength = someObj.children[1].listItems.length,
    item;

for (; i < itemLength; i++) {
    item = someObj.children[1].listItems[i];
    doSomething(item);
    console.log(item);
}
Now, this is a very small, representative piece of the code I deal with in an enterprise web app built with ExtJS. In the code above, the second example is clearly more readable and cleaner than the first one.
But is there any performance gain when I reduce the number of object lookups in this way?
I'm asking about a scenario where there will be a lot more code within the loop accessing members deep within the object, the iteration itself will happen ~1000 times, and the browser varies from IE8 to the latest Chrome.
There won't be a noticeable difference, but for performance and readability (and the fact that it does look like a live NodeList), it should probably be iterated in reverse if you're going to change it:
var elems = someObj.children[1].listItems;

for (var i = elems.length; i--;) {
    doSomething(elems[i]);
    console.log(elems[i]);
}
Performance gain will depend on how large the list is.
Caching the length is typically better (your second case), because someObj.children[1].listItems.length is not evaluated every time through the loop, as it is in your first case.
If order doesn't matter, I like to loop like this:
var i;
for (i = array.length; --i >= 0; ) {
    // do stuff
}
Caching object property lookups will result in a performance gain, but the extent of it depends on the number of iterations and the depth of the lookups. When your JS engine evaluates something like object.a.b.c.d, there is more work involved than just evaluating d. You can make your second case more efficient by caching additional property lookups outside the loop:
var i = 0,
    items = someObj.children[1].listItems,
    itemLength = items.length,
    item;

for (; i < itemLength; i++) {
    item = items[i];
    doSomething(item);
    console.log(item);
}
The best way to tell, of course, is a jsPerf test.
I was messing around with the benchmark site jsPerf and created my own benchmark at http://jsperf.com/prefix-or-postfix-increment/9.
The benchmarks are variations of JavaScript for loops, using prefix and postfix increment operators and the Crockford JSLint style of not using an in-place increment operator.
for (var index = 0, len = data.length; index < len; ++index) {
    data[index] = data[index] * 2;
}

for (var index = 0, len = data.length; index < len; index++) {
    data[index] = data[index] * 2;
}

for (var index = 0, len = data.length; index < len; index += 1) {
    data[index] = data[index] * 2;
}
After getting the numbers from a couple of runs of the benchmark, I noticed that Firefox is doing about 15 operations per second on average and Chrome is doing around 300.
I thought JaegerMonkey and V8 were fairly comparable in terms of speed? Are my benchmarks flawed somehow, is Firefox doing some kind of throttling here, or is the gap between the JavaScript engines really that large?
UPDATE: Thanks to jfriend00, I've concluded the difference in performance is not entirely due to the loop iteration, as seen in this version of the test case. As you can see Firefox is slower, but not as much of a gap as we see in the initial test case.
So why is the statement
data[index] = data[index] * 2;
so much slower on Firefox?
Arrays are tricky in JavaScript. The way you create them, how you fill them (and with what values) can all affect their performance.
There are two basic implementations that engines use. The simplest, most obvious one is a contiguous block of memory (just like a C array, with some metadata, like the length). It's the fastest way, and ideally the implementation you want in most cases.
The problem is, arrays in JavaScript can grow very large just by assigning to an arbitrary index, leaving "holes". For example, if you have a small array:
var array = [1,2,3];
and you assign a value to a large index:
array[1000000] = 4;
you'll end up with an array like this:
[1, 2, 3, undefined, undefined, undefined, ..., undefined, 4]
To save memory, most runtimes will convert the array into a "sparse" array: basically a hash table, just like regular JS objects. Once that happens, reading or writing an index goes from simple pointer arithmetic to a much more complicated algorithm, possibly with dynamic memory allocation.
Of course, different runtimes use different heuristics to decide when to convert from one implementation to another, so in some cases, optimizing for Chrome, for example, can hurt performance in Firefox.
In your case, my best guess is that filling the array backwards is causing Firefox to use a sparse array, making it slower.
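A minimal sketch of that guess (the sizes are arbitrary; whether an engine actually switches representations depends on its internal heuristics):

// Filling backwards: the very first write is to a huge index,
// which may push the engine toward a sparse (hash-table) representation.
var backwards = [];
for (var i = 999999; i >= 0; i--) {
    backwards[i] = i;
}

// Filling forwards: indexes stay contiguous from 0,
// so the engine can keep a dense (C-like) array.
var forwards = [];
for (var j = 0; j <= 999999; j++) {
    forwards[j] = j;
}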
I hate to give you such a simple answer, but quite simply: instruction branching: http://igoro.com/archive/fast-and-slow-if-statements-branch-prediction-in-modern-processors/
From what I can tell from the benchmark, there's something under the hood in these engines that is giving the processor's branch prediction features hell.