player.obj.speed = 20;
computer1.obj.speed = 20;
computer2.obj.speed = 20;
into something like:
*.obj.speed = 20; // once and for all!
I think my question here is clear enough to be understood.
I will install automation packages if needed... but I think there is a simpler way that is making me facepalm...
Anyway, thanks!
objectArray = [ // Array of the objects (not their speeds: numbers are copied by value, but objects are shared by reference)
    player.obj,
    computer1.obj,
    computer2.obj
];

function setAll(x) {
    // Go over all the objects and set each one's speed
    for (var i = 0; i < objectArray.length; i++) {
        objectArray[i].speed = x;
    }
}
This is pretty straightforward: make an array of the objects (changes through which WILL update the originals, as opposed to an array of integers or floats, etc.). Then, when you want to change them, just go over the array and set each one's speed property.
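For example, the three separate assignments from the question then collapse into a single call (a tiny usage sketch):
setAll(20); // player.obj, computer1.obj and computer2.obj now all have speed 20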
I have an array of numbers with 64 indexes (it's canvas image data).
I want to know if my array contains only zeros or anything other than zero.
We can return a boolean upon the first encounter of any number greater than zero (even if the very last index is non-zero and all the others are zero, we should return true).
What is the most efficient way to determine this?
Of course, we could loop over our array (focus on the testImageData function):
// Setup
var imgData = {
    data: new Array(64)
};
imgData.data.fill(0);

// Set last pixel to black
imgData.data[imgData.data.length - 1] = 255;

// The part in question...
function testImageData(img_data) {
    var retval = false;
    for (var i = 0; i < img_data.data.length; i++) {
        if (img_data.data[i] > 0) {
            retval = true;
            break;
        }
    }
    return retval;
}

var result = testImageData(imgData);
...but this could take a while if my array were bigger.
Is there a more efficient way to test if any index in the array is greater than zero?
I am open to answers using lodash, though I am not using lodash in this project. I would rather the answer be native JavaScript, either ES5 or ES6. I'm going to ignore any jQuery answers, just saying...
Update
I set up a test comparing various ways to check for a non-zero value in an array, and the results were interesting.
Here is the JSPerf Link
Note that the Array.some test was much slower than using for (index) and even for-in. The fastest, of course, was the indexed for loop: for (let i = 0; i < arr.length; i++).
You should note that I also tested a Regex solution, just to see how it compared. If you run the tests, you will find that the Regex solution is much, much slower (not surprising), but still very interesting.
I would like to see if there is a solution that could be accomplished using bitwise operators. If you feel up to it, I would like to see your approach.
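For illustration, one bitwise-style sketch would be to OR every value into an accumulator, which ends up non-zero iff any element is non-zero. Note that it cannot break early, so it is unlikely to beat the indexed loop:
function anyNonZero(arr) {
    var acc = 0;
    for (var i = 0; i < arr.length; i++) {
        acc |= arr[i]; // fold every element's bits into the accumulator
    }
    return acc !== 0; // non-zero iff at least one element was non-zero
}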
Your for loop is the fastest way on Chrome 64 with Windows 10.
I've tested it against two other options; here is the link to the test so you can run them in your environment.
My results are:
// 10776 operations per second (the best)
for (let i = 0; i < arr.length; i++) {
    if (arr[i] !== 0) {
        break
    }
}

// 4131 operations per second
for (const n of arr) {
    if (n !== 0) {
        break
    }
}

// 821 operations per second (the worst)
arr.some(x => x)
There is no faster way than looping through every element in the array. Logically, in the worst-case scenario the last pixel in your array is black, so you have to check all of them. The best algorithm therefore can only have an O(n) runtime. The best thing you can do is write a loop that breaks early upon finding a non-zero pixel.
I have several JavaScript arrays, each containing a list of pointers to objects. When an object meets a certain condition, its pointer must be removed from its current containing array and placed into a different array.
My current (naive) solution is to splice out the exiting array elements and concatenate them onto the array they are entering. This is a slow method and seems to fragment memory over time.
Can anyone offer advice (general or JS-specific) on a better way to do this?
Demonstration code:
// Definitions
TestObject = function() {
    this.shouldSwitch = function() {
        return (Math.random() > 0.9);
    };
};

A = [];
B = [];
while (A.length < 500) {
    A.push(new TestObject());
}

// Transfer loop
doTransfers = function() {
    var A_pending = [];
    var B_pending = [];
    for (var i = 0; i < A.length; i++) {
        if (A[i].shouldSwitch()) {
            B_pending.push(A[i]);
            A.splice(i, 1);
            i--;
        }
    }
    for (var i = 0; i < B.length; i++) {
        if (B[i].shouldSwitch()) {
            A_pending.push(B[i]);
            B.splice(i, 1);
            i--;
        }
    }
    A = A.concat(A_pending);
    B = B.concat(B_pending);
};

setInterval(doTransfers, 10);
Thanks!
For a language-independent take on this problem: when you're transferring elements from one contiguous sequence (array) to another, it's not appending elements to the back of the new array that will be the bottleneck (amortized constant time); it's the removal of elements from the middle of your existing container (linear time).
So the biggest benefit you can get is to replace that linear-time removal from the middle of the array with a constant-time operation that still uses that cache-friendly, contiguous array representation.
One of the easiest ways to do this is to simply create two new arrays instead of one: a new array to append the elements you want to keep and a new array to append the elements you want to transfer. When you're done, you can swap out the new array of elements you want to keep (not transfer) with the old array you had.
In such a case, we're exchanging linear-time removals from the middle of a container with amortized constant-time insertions to the back of a new one. While insertion to the end of a container still has a worst-case complexity of O(N) for reallocations, it occurs infrequently enough and is still generally far better than paying for an operation that has an average complexity of O(N) every time you transfer a single element by constantly removing from the middle.
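As a minimal sketch of that idea for a single source array (reusing the shouldSwitch method from the question; the transfer array would then be concatenated onto its destination):
var keep = [];
var transfer = [];
for (var i = 0; i < A.length; i++) {
    // Partition: every element lands in exactly one of the two new arrays
    (A[i].shouldSwitch() ? transfer : keep).push(A[i]);
}
A = keep; // the kept elements replace the old array in one swap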
Another way to solve this problem, which can be even more efficient in certain cases (like really small arrays, since it only creates one new array), is this:
... when you transfer an element, first append a copy of it (possibly just a shallow copy) to the new container. Then overwrite the element at that index in the old container with the element from the back of the old container. Now simply pop off the element at the back of the old container. So we have one push, one assignment, and one pop.
In this case, we're exchanging a linear-time removal from the middle of a container with a single assignment (store/move instruction) and a constant-time pop from the back of the container (often basic arithmetic). This can work extremely well if the order of the elements in the old array can be shuffled around a little bit, and is often an overlooked solution for getting that linear-time removal from the middle of the array into one with constant-time complexity from the back of the array.
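A minimal sketch of that swap-and-pop transfer, assuming the order of the source array is allowed to change (the helper name transferAt is mine):
function transferAt(from, to, i) {
    to.push(from[i]);                // 1. push a (shallow) copy onto the new array
    from[i] = from[from.length - 1]; // 2. overwrite slot i with the last element
    from.pop();                      // 3. pop the now-duplicated last element
}

// While scanning, don't advance past a transferred slot: the element
// swapped into position i still needs to be checked itself.
for (var i = 0; i < A.length; ) {
    if (A[i].shouldSwitch()) {
        transferAt(A, B_pending, i);
    } else {
        i++;
    }
}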
splice is pretty harmful for performance in a loop. But you don't seem to need to mutate the input arrays anyway - you are creating new ones and overwriting the previous values.
Just do
function doTransfers() {
    var A_pending = [];
    var B2_pending = [];
    for (var i = 0; i < A.length; i++) {
        if (A[i].shouldSwitch())
            B2_pending.push(A[i]);
        else
            A_pending.push(A[i]);
    }
    var B1_pending = [];
    for (var i = 0; i < B.length; i++) {
        if (B[i].shouldSwitch())
            A_pending.push(B[i]);
        else
            B1_pending.push(B[i]);
    }
    A = A_pending;
    B = B1_pending.concat(B2_pending);
}
My question is somewhat related to this one, but it involves some key differences.
So here it is. I have the following code:
for (var i = 0; i < someObj.children[1].listItems.length; i++) {
    doSomething(someObj.children[1].listItems[i]);
    console.log(someObj.children[1].listItems[i]);
}
vs.
var i = 0,
    itemLength = someObj.children[1].listItems.length,
    item;

for (; i < itemLength; i++) {
    item = someObj.children[1].listItems[i];
    doSomething(item);
    console.log(item);
}
Now, this is a very small example of the kind of code I deal with in an enterprise web app made in ExtJS. In the code above, the second version is clearly more readable and cleaner than the first.
But is there any performance gain when I reduce the number of object lookups this way?
I'm asking for a scenario where there is a lot more code within the loop accessing members deep within the object, the iteration happens ~1000 times, and the browser varies from IE8 to the latest Chrome.
There won't be a noticeable difference, but for performance and readability (and because it does look like a live nodeList), it should probably be iterated in reverse if you're going to change it:
var elems = someObj.children[1].listItems;

for (var i = elems.length; i--;) {
    doSomething(elems[i]);
    console.log(elems[i]);
}
Performance gain will depend on how large the list is.
Caching the length is typically better (your second case), because someObj.children[1].listItems.length is not evaluated every time through the loop, as it is in your first case.
If order doesn't matter, I like to loop like this:
var i;
for (i = array.length; --i >= 0;) {
    // do stuff
}
Caching object property lookup will result in a performance gain, but the extent of it is based on iterations and depth of the lookups. When your JS engine evaluates something like object.a.b.c.d, there is more work involved than just evaluating d. You can make your second case more efficient by caching additional property lookups outside the loop:
var i = 0,
    items = someObj.children[1].listItems,
    itemLength = items.length,
    item;

for (; i < itemLength; i++) {
    item = items[i];
    doSomething(item);
    console.log(item);
}
The best way to tell, of course, is to run a jsperf test.
EDIT: As noted by kennytm below, and after investigating myself: according to the ECMA spec, when two objects are determined to be equal in a custom sort, JavaScript is not required to leave those two objects in the same order. Chrome and Opera are the only two major browsers that chose a non-stable sort, but others include Netscape 8 & 9, Kazehakase, IceApe and a few more. The Chromium team has marked this bug as "Working as intended", so it will not be "fixed". If you need your arrays to stay in their original order when values are equal, you will need to employ some additional mechanism (such as the one above). Returning 0 when sorting objects is effectively meaningless, so don't bother. Or use a library that supports stable sort, such as Underscore/Lodash.
I just got a report that some code I wrote is breaking on Chrome. I've tracked it down to a custom method I'm using to sort an array of objects. I am really tempted to call this a bug, but I'm not sure it is.
In all other browsers when you sort an array of objects, if two objects resolve to the same value their order in the updated array is left unchanged. In Chrome, their order is seemingly randomized. Run the code below in Chrome and any other browser you want. You should see what I mean.
I have two questions:
First, was I right in assuming that when your custom sorter returns 0, the two compared items should remain in their original order? (I have a feeling I was wrong.)
Second, is there any good way of fixing this? The only thing I can think of is adding an auto-incrementing number as an attribute to each member of the array before sorting, and then using that value when the two items being compared resolve to the same value. In other words, never return 0.
Here's the sample code:
var x = [
    {'a': 2, 'b': 1},
    {'a': 1, 'b': 2},
    {'a': 1, 'b': 3},
    {'a': 1, 'b': 4},
    {'a': 1, 'b': 5},
    {'a': 1, 'b': 6},
    {'a': 0, 'b': 7}
];

var customSort = function(a, b) {
    if (a.a === b.a) return 0;
    if (a.a > b.a) return 1;
    return -1;
};

console.log("before sorting");
for (var i = 0; i < x.length; i++) {
    console.log(x[i].b);
}

x.sort(customSort);

console.log("after sorting");
for (var i = 0; i < x.length; i++) {
    console.log(x[i].b);
}
In all other browsers, what I see is that only the first member and the last member of the array get moved (I see 7, 2, 3, 4, 5, 6, 1), but in Chrome the inner numbers are seemingly randomized.
[EDIT] Thank you very much to everyone who answered. I guess that 'inconsistent' doesn't necessarily mean it's a bug. Also, I just wanted to point out that my b property was just an example. In fact, I'm sorting some relatively wide objects on any one of about 20 keys according to user input. Even keeping track of what the user last sorted by won't solve the randomness I'm seeing. My work-around will probably be a close variation of this (new code is highlighted):
var x = [
    {'a': 2, 'b': 1},
    {'a': 1, 'b': 2},
    {'a': 1, 'b': 3},
    {'a': 1, 'b': 4},
    {'a': 1, 'b': 5},
    {'a': 1, 'b': 6},
    {'a': 0, 'b': 7}
];
var i;

var customSort = function(a, b) {
    if (a.a === b.a) return a.customSortKey > b.customSortKey ? 1 : -1; /*NEW CODE*/
    if (a.a > b.a) return 1;
    return -1;
};

console.log("before sorting");
for (i = 0; i < x.length; i++) { console.log(x[i].b); }

for (i = 0; i < x.length; i++) { /*NEW CODE*/
    x[i].customSortKey = i;      /*NEW CODE*/
}                                /*NEW CODE*/

x.sort(customSort);

console.log("after sorting");
for (i = 0; i < x.length; i++) { console.log(x[i].b); }
The ECMAScript standard does not guarantee that Array.sort is a stable sort. Chrome (the V8 engine) internally uses an in-place QuickSort (for arrays of size ≥ 22; insertion sort otherwise), which is fast but not stable.
To fix it, make customSort compare on .b as well, eliminating the need for stability in the sorting algorithm.
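A minimal sketch of that tiebreaker (this relies on the .b values being distinct, as they are in the question, so the comparator never returns 0):
var customSort = function(a, b) {
    if (a.a !== b.a) return a.a > b.a ? 1 : -1;
    return a.b > b.b ? 1 : -1; // tiebreak on .b so equal .a values get a deterministic order
};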
Maybe you know it already, but you can use arrays to sort on multiple columns and avoid this issue:
var customSort = function(a, b) {
    // Note: > on arrays compares their string forms (e.g. "1,2" > "1,3"),
    // so this only works while string order matches numeric order.
    return [a.a, a.b] > [b.a, b.b] ? 1 : -1;
};
The V8 sort is not stable, unfortunately. I'll see if I can dig up the Chromium bug about this.
V8 sort is now stable! (As of V8 7.0 / Chrome 70, Array.prototype.sort uses a stable TimSort.)
Say I have two arrays, items and removeItems, and I want any values found in removeItems to be removed from items.
The brute force mechanism would probably be:
var animals = ["cow", "dog", "frog", "cat", "whale", "salmon", "zebra", "tuna"];
var nonMammals = ["salmon", "frog", "tuna", "spider"];

var mammals = [];
var isMammal;

for (var i = 0; i < animals.length; i++) {
    isMammal = true;
    for (var j = 0; j < nonMammals.length; j++) {
        if (nonMammals[j] === animals[i]) {
            isMammal = false;
            break;
        }
    }
    if (isMammal) {
        mammals.push(animals[i]);
    }
}
This is what? O(N^2)? Is there a more efficient way?
Basically what you want to do is efficiently compute the set difference S \ T. The (asymptotically) fastest way I know is to put T into a hashmap (which takes |T| steps) and go over each s in S (which takes |S| steps), checking whether s is in T (which is O(1)). So you get O(|T| + |S|) steps.
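A minimal sketch of that approach, using a plain object as the hashmap and the arrays from the question (a Set would do the same job in modern JavaScript):
var removeSet = {};
for (var j = 0; j < nonMammals.length; j++) {
    removeSet[nonMammals[j]] = true; // |T| steps to build the lookup table
}

var mammals = [];
for (var i = 0; i < animals.length; i++) {
    if (!removeSet[animals[i]]) { // O(1) membership test
        mammals.push(animals[i]);
    }
}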
That's actually O(M * N).
You could probably do better by sorting the animals array first and then doing a binary search. You'll be able to reduce it to O(N log N) - well, that's if log N < M, anyway.
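One way to read that suggestion, as a sketch (here sorting the remove-list once and binary-searching it for each animal; binarySearchContains is my own helper):
function binarySearchContains(sorted, value) {
    var lo = 0, hi = sorted.length - 1;
    while (lo <= hi) {
        var mid = (lo + hi) >> 1; // integer midpoint
        if (sorted[mid] === value) return true;
        if (sorted[mid] < value) lo = mid + 1;
        else hi = mid - 1;
    }
    return false;
}

var sortedNonMammals = nonMammals.slice().sort(); // sort a copy, once
var mammals = animals.filter(function(animal) {
    return !binarySearchContains(sortedNonMammals, animal);
});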
Anyway, if you're working with JS that runs client-side, just try to keep the amount of data to a minimum, or your users' browsers will yell at you with every request.
With jQuery it's pretty easy:
function is_mammal(animal) {
    return $.inArray(animal, nonMammals) == -1;
}

mammals = $.grep(animals, is_mammal);
See docs for $.grep and $.inArray.