I'm programming physics for a small 2D game, and I need to check every object on the screen against every other object on the screen. That's O(N^2), and I don't really like that.
What I had in mind:
for (var i = 0; i < objects.length; i++)
    for (var j = 0; j < objects.length; j++)
        if (collide(objects[i], objects[j])) doStuff(objects[i], objects[j]);
This does unnecessary work: I will check the same pair of objects against each other multiple times. How can I avoid that? I thought of having a matrix, which would be n*n (given that n is the number of objects), and then each time I visit a pair of objects I would do this:
visited[i][j] = 1;
visited[j][i] = 1;
And then, I would always know which pair of objects I visited.
This would work; however, I would again need to touch all of those cells, n*n of them, just to set them all to 0 at the start! Maybe I could initialize everything to [], but this still doesn't seem like a viable solution to me. Is there something better?
Obviously, my language of choice is JavaScript, but I know C, C++ and Python relatively fluently, so you can answer in them (although JavaScript, C and C++ have almost the same syntax).
You won't avoid O(n^2), but you can cut it down by half:
for (var i = 0; i < objects.length; i++)
    for (var j = i; j < objects.length; j++)
        if (collide(objects[i], objects[j])) doStuff(objects[i], objects[j]);
This assumes collision is symmetric. If it's also reflexive, and testing for collision is expensive, you can move doStuff(objects[i], objects[i]) out of the inner loop to avoid the collision test, and start the inner loop from i + 1.
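A minimal sketch of that variant (assuming doStuff is safe to call on an object paired with itself):
for (var i = 0; i < objects.length; i++) {
    doStuff(objects[i], objects[i]); // reflexive case: no collision test needed
    for (var j = i + 1; j < objects.length; j++) {
        if (collide(objects[i], objects[j])) doStuff(objects[i], objects[j]);
    }
}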
If your sprites have a maximum size, you could also sort your array by vertical position and skip comparing against anything that is more than that maximum size lower on the screen...
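A rough sketch of that pruning idea, assuming each object has a y coordinate and maxSize is the largest sprite extent (both names are made up for illustration):
objects.sort(function (a, b) { return a.y - b.y; }); // sort by vertical position
for (var i = 0; i < objects.length; i++) {
    for (var j = i + 1; j < objects.length; j++) {
        // The array is sorted by y, so every later j is even farther away.
        if (objects[j].y - objects[i].y > maxSize) break;
        if (collide(objects[i], objects[j])) doStuff(objects[i], objects[j]);
    }
}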
Related
I have been going through a tutorial on The Odin Project and I keep coming across this line of code or a variation of the code:
(i = 0; i < fLen; i++)
What exactly is happening here? I don't understand why it is used for multiple programs. If I do not understand what it is doing, then I have a hard time using it.
Example:
var fruits, text, fLen, i;
fruits = ["Banana", "Orange", "Apple", "Mango"];
fLen = fruits.length;
text = "<ul>";
for (i = 0; i < fLen; i++) {
text += "<li>" + fruits[i] + "</li>";
}
In short, it's a for loop that's meant to iterate a set number of times. In that example, it's iterating based upon the length of the fruits array, so it's going to run 4 times. The i++ at the end just increments i after every iteration.
The whole point of that code is that it's creating an unordered list <ul> and then adding the four list items <li>, one for each index of the fruits array.
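As a side note, the snippet as quoted never closes the list; presumably the tutorial appends the closing tag afterwards, something like:
text += "</ul>";
// text is now:
// "<ul><li>Banana</li><li>Orange</li><li>Apple</li><li>Mango</li></ul>"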
It's pretty simple once you get it; there are three pieces to (i = 0; i < 3; i++):
Start with 0
If i < 3 run the code inside of the braces {}
Add +1 to i
The trick is to realize the code doesn't run when i = 3 since it's no longer < 3.
You can do variations like (i = 3; i > 0; i--), which is the same concept backwards, as the sketch below shows.
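A quick sketch of the backwards version:
// Counting down: logs 3, 2, 1, then stops once i > 0 is false.
for (var i = 3; i > 0; i--) {
    console.log(i);
}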
Agreed with JohnPete22: it is a for loop. Here are some examples:
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for
If you are used to some other programming languages, you could consider some alternatives here that might make a little more sense to you:
for in - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/for...in
for each - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Array/forEach
while - https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Statements/while
That's a for loop. It runs the code inside its block (the { }) multiple times, based on what's inside the parentheses.
The parentheses have three "clauses", separated by semicolons. The first clause is the "initializer", which is only run once at the beginning. The second clause is the "condition", which is checked at the beginning of each time the block is run. If it evaluates to true (or anything "truthy"), the block is run again; otherwise, the loop exits. Finally, the third clause is the "final expression", which is run after the block each time.
Put together, you can make a loop run, say, ten times like so:
for (let i = 0; i < 10; i++) { /* … */ }
This initially sets i to zero, increments i each time, and exits when i hits 10. In your example above, the loop is being used to iterate over each element in the fruits list and collect them into an unordered list.
In his book Even Faster Web Sites, Steve Souders writes that a simple way to improve the performance of a loop is to decrement the iterator toward 0 rather than incrementing toward the total length (actually the chapter was written by Nicholas C. Zakas). This change can result in savings of up to 50% off the original execution time, depending on the complexity of each iteration. For example:
var values = [1,2,3,4,5];
var length = values.length;
for (var i=length; i--;) {
process(values[i]);
}
This is nearly identical for the for loop, the do-while loop, and the while loop.
I'm wondering: what's the reason for this? Why is decrementing the iterator so much faster? (I'm interested in the technical background of this and not in benchmarks proving this claim.)
EDIT: At first sight the loop syntax used here looks wrong. There is no length-1 or i>=0, so let's clarify (I was confused too).
Here is the general for loop syntax:
for ([initial-expression]; [condition]; [final-expression])
statement
initial-expression - var i=length
This variable declaration is evaluated first.
condition - i--
This expression is evaluated before each loop iteration, so it decrements the variable before the first pass through the loop. If this expression evaluates to false the loop ends. In JavaScript, 0 == false, so when i finally reaches 0 it is interpreted as false and the loop ends.
final-expression
This expression is evaluated at the end of each loop iteration (before the next evaluation of condition). It's not needed here and is empty. All three expressions are optional in a for loop.
The for loop syntax is not part of the question, but because it's a little bit uncommon I think it's interesting to clarify it. And maybe one reason it's faster is because it uses fewer expressions (the 0 == false "trick").
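To make the evaluation order concrete, here is a small sketch of the pattern:
// Logs 2 'c', 1 'b', 0 'a'. The condition i-- evaluates i first and
// then decrements it, so the body sees length-1 down to 0; the loop
// stops once i-- evaluates to 0, which is falsy.
var values = ['a', 'b', 'c'];
for (var i = values.length; i--;) {
    console.log(i, values[i]);
}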
I'm not sure about JavaScript, and under modern compilers it probably doesn't matter, but in the "olden days" this code:
for (i = 0; i < n; i++){
.. body..
}
would generate
move register, 0
L1:
compare register, n
jump-if-greater-or-equal L2
.. body ..
increment register
jump L1
L2:
while the backward-counting code
for (i = n; --i>=0;){
.. body ..
}
would generate
move register, n
L1:
decrement-and-jump-if-negative register, L2
.. body ..
jump L1
L2:
so inside the loop it's only doing two extra instructions instead of four.
I believe the reason is that you're comparing the loop end point against 0, which is faster than comparing against length (or any other JS variable).
It is because the ordinal operators <, <=, >, >= are polymorphic, so these operators require type checks on both left and right sides of the operator to determine what comparison behaviour should be used.
There's some very good benchmarks available here:
What's the Fastest Way to Code a Loop in JavaScript
It is easy to say that an iteration can have fewer instructions. Let's just compare these two:
for (var i=0; i<length; i++) {
}
for (var i=length; i--;) {
}
When you count each variable access and each operator as one instruction, the former for loop uses 5 instructions (read i, read length, evaluate i<length, test (i<length) == true, increment i) while the latter uses just 3 instructions (read i, test i == true, decrement i). That is a ratio of 5:3.
What about using a reverse while loop then:
var values = [1,2,3,4,5];
var i = values.length;
/* i is evaluated first and then decremented; when i is 1, the code inside
   the loop is processed for the last time, with i = 0. */
while(i--)
{
// first time in here, i is (length - 1), so it's ok!
process(values[i]);
}
IMO this one at least is more readable code than for(i=length; i--;).
for increment vs. decrement in 2017
In modern JS engines, incrementing in for loops is generally faster than decrementing (based on personal Benchmark.js tests), and it is also more conventional:
for (let i = 0; i < array.length; i++) { ... }
Whether length = array.length has any considerable positive effect depends on the platform and the array length, but usually it doesn't:
for (let i = 0, length = array.length; i < length; i++) { ... }
Recent V8 versions (Chrome, Node) have optimizations for array.length, so length = array.length can be efficiently omitted there in any case.
There is an even more "performant" version of this.
Since each argument is optional in for loops you can skip even the first one.
var array = [...];
var i = array.length;
for(;i--;) {
do_teh_magic();
}
With this you skip even the [initial-expression], so you end up with just one operation left per iteration: the condition.
I've been exploring loop speed as well, and was interested to find this tidbit about decrementing being faster than incrementing. However, I have yet to find a test that demonstrates this. There are lots of loop benchmarks on jsperf. Here is one that tests decrementing:
http://jsperf.com/array-length-vs-cached/6
Caching your array length, however (also recommended in Steve Souders' book), does seem to be a winning optimization.
In modern JS engines, the difference between forward and reverse loops is almost non-existent anymore. What performance difference remains comes down to 2 things:
a) extra lookup of the length property every cycle
//example:
for(var i = 0; src.length > i; i++)
//vs
for(var i = 0, len = src.length; len > i; i++)
This is the biggest performance gain of a reverse loop, and it can obviously be applied to forward loops as well.
b) extra variable assignment
The smaller gain of a reverse loop is that it only requires one variable assignment instead of 2:
//example:
var i = src.length;
while (i--) { /* body sees i from src.length - 1 down to 0 */ }
I've conducted a benchmark in C# and C++ (which have similar syntax). There, the performance actually differs noticeably in for loops, as compared to do-while or while. In C++, performance was greater when incrementing. It may also depend on the compiler.
In JavaScript, I reckon, it all depends on the browser (the JavaScript engine), but this behavior is to be expected. JavaScript is optimized for working with the DOM. So imagine you loop through a collection of DOM elements and remove some of them as you go, while incrementing a counter: you remove element 0, move on to index 1, and thereby skip the element that shifted into 0's place. When looping backwards, that problem disappears. I know the example given isn't exactly the right one, but I did encounter situations where I had to delete items from an ever-changing object collection.
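A minimal sketch of that situation, with a plain array standing in for the collection:
// Looping backward while removing: splicing at index i only shifts
// the elements at higher indices, which have already been visited.
var items = [1, 2, 3, 4, 5, 6];
for (var i = items.length - 1; i >= 0; i--) {
    if (items[i] % 2 === 0) {
        items.splice(i, 1);
    }
}
console.log(items); // [1, 3, 5]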
Because backward looping is more often inevitable than forward looping, I am guessing that the JS engine is optimized just for that.
Have you timed it yourself? Mr. Souders might be wrong with regard to modern interpreters. This is precisely the sort of optimization in which a good compiler writer can make a big difference.
I am not sure if it's faster, but one reason I see is that when you iterate over an array of large elements using increment, you tend to write:
for(var i = 0; i < array.length; i++) {
...
}
You are essentially accessing the length property of the array N (number of elements) times.
Whereas when you decrement, you access it only once. That could be a reason.
But you can also write incrementing loop as follows:
for(var i = 0, len = array.length; i < len; i++) {
...
}
It's not faster (at least in modern browsers):
// Double loops to check the initialization performance too
const repeats = 1e3;
const length = 1e5;
console.time('Forward');
for (let j = 0; j < repeats; j++) {
for (let i = 0; i < length; i++) {}
}
console.timeEnd('Forward'); // 58ms
console.time('Backward');
for (let j = repeats; j--;) {
for (let i = length; i--;) {}
}
console.timeEnd('Backward'); // 64ms
The difference is even bigger in case of an array iteration:
const repeats = 1e3;
const array = [...Array(1e5)];
console.time('Forward');
for (let j = 0; j < repeats; j++) {
for (let i = 0; i < array.length; i++) {}
}
console.timeEnd('Forward'); // 34ms
console.time('Backward');
for (let j = 0; j < repeats; j++) {
for (let i = array.length; i--;) {}
}
console.timeEnd('Backward'); // 64ms
In the first example I created an empty array of length 1000:
var arr = new Array(1000);
for (var i = 0; i < arr.length; i++)
arr[i] = i;
In the second example I created an empty array of length 0:
var arr = [];
for (var i = 0; i < 1000; i++)
arr.push(i);
Testing in Chrome 41.0.2272.118 on OS X 10.10.3, the first block runs faster. Why? Because the JavaScript engine knows the array size in advance?
The benchmark is here: http://jsperf.com/poerttest/2.
If you don't specify the array size it will have to keep allocating more space. But if you specify the size at the beginning, it only allocates once.
Yes. When you allocate the size up front, the interpreter knows it only has to allocate memory for 1000 elements, so inserting an element is just one operation. But when you declare a dynamic array (your second scenario), the interpreter has to grow the array and then push the element. That's 2 operations!
Another possibility could have been that push() is more expensive than assigning to a fixed position. But tests show it is not the case.
What happens is that empty arrays get a relatively small starting capacity (either hash pool or actual array), and increasing that pool is expensive. You can see that by trying with lower sizes: at 100 elements, the performance difference between Array(100) and [] disappears.
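A quick sketch one could use to check this (the iteration counts are arbitrary and the timings will vary by engine):
// Fill an array of n elements, either preallocated or grown dynamically.
function fill(n, preallocate) {
    var arr = preallocate ? new Array(n) : [];
    for (var i = 0; i < n; i++) arr[i] = i;
    return arr;
}

console.time('preallocated');
for (var r = 0; r < 1e4; r++) fill(1000, true);
console.timeEnd('preallocated');

console.time('dynamic');
for (var s = 0; s < 1e4; s++) fill(1000, false);
console.timeEnd('dynamic');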
My question somehow relates to this, but it still involves some key differences.
So here it is, I have the following code:
for(var i = 0; i < someObj.children[1].listItems.length; i++)
{
doSomething(someObj.children[1].listItems[i]);
console.log(someObj.children[1].listItems[i]);
}
vs.
var i = 0,
itemLength = someObj.children[1].listItems.length,
item;
for(; i < itemLength; i++)
{
item = someObj.children[1].listItems[i];
doSomething(item);
console.log(item);
}
Now, this is a very small example of the kind of code I deal with in an enterprise web app made in ExtJS. In the code above, the second example is clearly more readable and clean compared to the first one.
But is there any performance gain involved when I reduce number of object lookups in similar way?
I'm asking this for a scenario where there will be a lot more code within the loop accessing members deep within the object, the iteration itself would happen ~1000 times, and the browsers vary from IE8 to the latest Chrome.
There won't be a noticeable difference, but for performance and readability, and given that it looks like a live NodeList, it should probably be iterated in reverse if you're going to change it:
var elems = someObj.children[1].listItems;
for(var i = elems.length; i--;) {
doSomething(elems[i]);
console.log(elems[i]);
}
Performance gain will depend on how large the list is.
Caching the length is typically better (your second case), because someObj.children[1].listItems.length is not evaluated every time through the loop, as it is in your first case.
If order doesn't matter, I like to loop like this:
var i;
for( i = array.length; --i >= 0; ){
//do stuff
}
Caching object property lookup will result in a performance gain, but the extent of it is based on iterations and depth of the lookups. When your JS engine evaluates something like object.a.b.c.d, there is more work involved than just evaluating d. You can make your second case more efficient by caching additional property lookups outside the loop:
var i = 0,
items = someObj.children[1].listItems,
itemLength = items.length,
item;
for(; i < itemLength; i++) {
item = items[i];
doSomething(item);
console.log(item);
}
The best way to tell, of course, is a jsperf.
I'd like to know which of the below methods could be considered more efficient.
The first one is quite simple it uses two for loops. The last one is my personal favorite, because it only uses one. I'm not quite sure about the pros and cons of each method though, as they are both pretty fast.
They are meant to be used with a CanvasPixelArray or ones that are structured in a similar way.
w and h stand for the width and height of the 2D matrices.
for (var y = 0; y < h; y++) {
for (var x = 0; x < w; x++) {
// ...
}
}
for (var i = 0, l = w*h; i < l; i++) {
var x = i%w;
var y = Math.floor(i/w);
// ...
}
The first will be more efficient. Think about the number of operations you are doing in the first method: one single incrementation per pixel, plus an additional incrementation per row. The other method saves the extra incrementation per row but replaces it with something far more complex: in addition to the Math.floor and the %, which are expensive, the y value is recalculated each and every time, whereas it is only calculated once per row in method 1.
In short, the extra incrementation per row will be far quicker than adding all these supplementary operations per pixel. This isn't to say that you shouldn't use the second method, but given the code you have posted, the first will perform better.
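For reference, the two traversals visit the same indices; the single loop just inverts the row-major index formula i = y*w + x. A tiny sketch to confirm the equivalence:
// Both loops produce the same (x, y, i) triples for a row-major layout.
var w = 3, h = 2;
for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
        var i = y * w + x;
        console.log(i % w === x, Math.floor(i / w) === y); // always true, true
    }
}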