Matrix Iteration - Which method is more efficient? - javascript

I'd like to know which of the below methods could be considered more efficient.
The first one is quite simple: it uses two for loops. The second one is my personal favorite, because it only uses one. I'm not quite sure about the pros and cons of each method, though, as they are both pretty fast.
They are meant to be used with a CanvasPixelArray or arrays that are structured in a similar way.
w and h stand for the width and height of the 2D matrix.
for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++) {
        // ...
    }
}

for (var i = 0, l = w * h; i < l; i++) {
    var x = i % w;
    var y = Math.floor(i / w);
    // ...
}

The first will be more efficient. Think about the number of operations you are doing in the first method: one increment per pixel plus one additional increment per row. The other method saves the extra increment per row but replaces it with something far more complex: in addition to the Math.floor and % operations, which are relatively expensive, the y value is recalculated for every single pixel, whereas method 1 only updates it once per row. In short, the extra increment per row will be far cheaper than adding all of these extra operations per pixel. This isn't to say that you shouldn't use the second method, but given the code you have posted, the first will perform better.
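If you need both the (x, y) coordinates and a flat index into the buffer (as you typically do with a CanvasPixelArray, which stores 4 bytes per pixel), a common middle ground is to keep the nested loops and carry the flat offset along, so there is no division or modulo per pixel. A rough sketch, assuming ctx is a 2D canvas context (that variable is not part of the question):

// Nested loops that also maintain the flat offset into the pixel buffer,
// avoiding any per-pixel division or modulo. Assumes the RGBA layout of
// canvas image data (4 bytes per pixel).
var image = ctx.getImageData(0, 0, w, h);
var data = image.data;
var offset = 0;
for (var y = 0; y < h; y++) {
    for (var x = 0; x < w; x++, offset += 4) {
        data[offset]     = 255 - data[offset];     // R
        data[offset + 1] = 255 - data[offset + 1]; // G
        data[offset + 2] = 255 - data[offset + 2]; // B
        // data[offset + 3] is the alpha channel, left untouched
    }
}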

Related

Undefined error message when accessing node [duplicate]

What is an off-by-one error? If I have one, how do I fix it?
An off-by-one error occurs, for example, when you intend to perform a loop n times and write something like:
for (int i = 1; i < n; ++i) { ... }
or:
for (int i = 0; i <= n; ++i) { ... }
In the first case the loop will be executed (n - 1) times and in the second case (n + 1) times, giving the name off-by-one. Other variations are possible but in general the loop is executed one too many or one too few times due to an error in the initial value of the loop variable or in the end condition of the loop.
The loop can be written correctly as:
for (int i = 0; i < n; ++i) { ... }
A for loop is just a special case of a while loop. The same kind of error can be made in while loops.
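For instance, the same mistake in a while loop (shown here in JavaScript; doSomething and n are just placeholders):

// Intended to run n times, but the <= condition makes it run n + 1 times.
var i = 0;
while (i <= n) {    // off by one: should be i < n
    doSomething(i); // placeholder for the loop body
    i++;
}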
An off-by-one error is when you expect something to have the value N, but it turns out to be N-1 or N+1. For example, you expected the program to perform an operation 10 times, but it ends up performing 9 or 11 times (one too few or one too many). In programming this most commonly happens with for loops.
This error happens because of a misjudgement: the number you use to keep track of your counting is not necessarily the same as the number of things you are counting. Nothing obligates the two to be the same. Count out loud from 0 to 10 and you say 11 numbers in total, even though the final number you say is 10.
One way to prevent the problem is to realize that our brain has a tendency (maybe a cognitive bias) to make that error. Keeping that in mind may help you identify and prevent future situations. But I guess that the best thing you can do to prevent this error is to write unit tests. The tests will help you make sure that your code is running as it should.
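For example, a minimal check (plain console.assert, no test framework assumed) that would catch a loop running the wrong number of times:

// Counts how many times the loop body runs; the assertion fails if the
// condition is accidentally written as <= instead of <.
function runLoop(n) {
    var count = 0;
    for (var i = 0; i < n; i++) {
        count++;
    }
    return count;
}

console.assert(runLoop(10) === 10, "expected the loop body to run 10 times");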
Say you have the following code featuring an array and a for loop:
char exampleArray[] = { 'H', 'e', 'l', 'l', 'o', ' ', 'W', 'o', 'r', 'l', 'd' };
for (int i = 0; i <= 11; i++)
{
    print(exampleArray[i]);
}
See the issue here? Because I counted my array to have eleven characters in it, I set my loop's condition to i <= 11. However, arrays are zero-indexed in most languages, so the valid indices here run from 0 to 10, meaning that when my code goes to print
exampleArray[11]
I will get an index out of bounds error, because the array in the example has no value at index eleven.
In this case, I can fix it easily by simply telling my loop to iterate one fewer time, for example by changing the condition to i < 11 (or, better, comparing against the array's length).
The easiest way to debug this issue is to print out your upper and lower bounds, see which value triggers the index out of bounds error, and then adjust that bound by one so that every index touched during the iteration is valid.
Of course, this assumes the error is caused by a loop going one past (or stopping one short of) the bounds of an array. There are other situations where an index out of bounds error can occur, but this is the most common case. An index out of bounds always means you tried to access data at a position that lies outside the boundaries of the data.
A common off-by-one confusion arises because some languages enumerate vectors from zero (C, for example) and other languages from one (R, for example). Thus, a vector x of size n has members running from x[0] to x[n-1] in C but from x[1] to x[n] in R.
You are also confronted with the off-by-one challenge when coding the common idiom for cyclic incrementation:
In C:
i = (i+1)%n
In R:
i <- (i-1)%%n + 1
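In JavaScript, which indexes from zero like C, the same idiom might look like this (the array contents are just an illustration):

// Cycle an index through a zero-indexed array without ever stepping
// past the last element: the modulo wraps it back to 0 after n - 1.
var items = ["a", "b", "c"];
var n = items.length;
var i = 0;
for (var step = 0; step < 7; step++) {
    console.log(items[i]); // prints a b c a b c a
    i = (i + 1) % n;
}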
Off-by-one errors (sometimes called OBOE) crop up when you're trying to target a specific index of a string or array (to slice or access a segment), or when looping over their indices.
If we take JavaScript as an example language, indexing starts at zero, not one, which means the last index is always one less than the length of the item. If you try to access an index equal to the length, JavaScript will quietly give you undefined, while many other languages raise an "index out of range" error instead.
When you use string or array methods that take index ranges as arguments, it helps to read the documentation and understand whether the end of the range is inclusive (the item at the given index is part of what's returned) or not; see the slice example after the loops below. Here are some examples of off-by-one errors:
let alphabet = "abcdefghijklmnopqrstuvwxyz";
let len = alphabet.length;

for (let i = 0; i <= len; i++) {
    // loops one too many times at the end
    console.log(alphabet[i]);
}

for (let j = 1; j < len; j++) {
    // loops one too few times and misses the first character at index 0
    console.log(alphabet[j]);
}

for (let k = 0; k < len; k++) {
    // Goldilocks approves - this is just right
    console.log(alphabet[k]);
}
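As an example of the inclusive/exclusive point above, String.prototype.slice treats its end index as exclusive, which is easy to get wrong by one:

let word = alphabet.slice(0, 3);  // "abc" - the character at index 3 is NOT included
let word2 = alphabet.slice(0, 4); // "abcd" - to include index 3, the end argument must be 4
console.log(word, word2);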
A simple rule of thumb:

int i = 0; // if i is initialized to 0, always use < or > in the condition
while (i < 10)
    System.out.printf("%d ", i++);

int i = 1; // if i is initialized to 1, always use <= or >= in the condition
while (i <= 10)
    System.out.printf("%d ", i++);

Why JavaScript array length return undefined when loop in matrix (2D array) [duplicate]


Why does javascript process an array of structures faster than a structure of arrays?

I've been looking for an efficient way to process large lists of vectors in javascript. I created a suite of performance tests that perform in-place scalar-vector multiplication using different data structures:
AoS implementation:
var vectors = [];
// ...
var vector;
for (var i = 0, li = vectors.length; i < li; ++i) {
    vector = vectors[i];
    vector.x = 2 * vector.x;
    vector.y = 2 * vector.y;
    vector.z = 2 * vector.z;
}
SoA implementation:
var x = new Float32Array(N);
var y = new Float32Array(N);
var z = new Float32Array(N);
for (var i = 0, li = x.length; i < li; ++i) {
    x[i] = 2 * x[i];
    y[i] = 2 * y[i];
    z[i] = 2 * z[i];
}
The AoS implementation is at least 5 times faster. This caught me by surprise. The AoS implementation uses one more index lookup per iteration than the SoA implementation, and the engine has to work without a guaranteed data type.
Why does this occur? Is this due to browser optimization? Cache misses?
On a side note, SoA is still slightly more efficient when performing addition over a list of vectors:
AoS:
var AoS1 = [];
var AoS2 = [];
var AoS3 = [];
// code for populating arrays
for (var i = 0; i < N; ++i) {
    AoS3[i].x = AoS1[i].x + AoS2[i].x;
}
SoA:
var x1 = new Float32Array(N);
var x2 = new Float32Array(N);
var x3 = new Float32Array(N);
for (var i = 0; i < N; ++i) {
    x3[i] = x1[i] + x2[i];
}
Is there a way I can tell when an operation is going to be more/less efficient for a given data structure?
EDIT: I failed to emphasize the SoA implementation made use of typed arrays, which is why this performance behavior struck me as odd. Despite having the guarantee of data type provided by typed arrays, the plain-old array of associative arrays is faster. I have yet to see a duplicate of this question.
EDIT2: I've discovered the behavior no longer occurs when the declaration of vector is moved into the preparation code. AoS is ostensibly faster only when vector is declared right next to the for loop. This makes little sense to me, particularly since the engine should just hoist the var declaration to the top of the scope anyway. I'm not going to question this further since I suspect an issue with the testing framework.
EDIT3: I got a response from the developers of the testing platform, and they've confirmed the performance difference is due to outer scope lookup. SoA is still the most efficient, as expected.
The tests used for benchmarking seem to have overlapped each other, causing undefined or undesired behavior. A cleaner test (https://www.measurethat.net/Benchmarks/Show/474/0/soa-vs-aos) shows little difference between the two, with SoA executing slightly (about 30%) faster.
However, none of this matters much for the bottom line when it comes to performance. This is an exercise in micro-optimization: you are essentially comparing O(n) to O(n), with only constant-factor nuance involved. The small percentage difference will not have a meaningful effect overall, as O(n) is an acceptable time complexity either way.
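If you want to sanity-check this outside a benchmarking site, a rough harness like the one below keeps both variants in the same scope, so outer-scope lookups don't skew one side. It relies on performance.now(), which is available in browsers and modern Node; the array size is an arbitrary choice:

// Rough AoS vs SoA timing harness; N is an arbitrary size.
var N = 1000000;

var aos = [];
for (var i = 0; i < N; i++) {
    aos.push({ x: 1, y: 2, z: 3 });
}

var sx = new Float32Array(N);
var sy = new Float32Array(N);
var sz = new Float32Array(N);

function time(label, fn) {
    var t0 = performance.now();
    fn();
    console.log(label + ": " + (performance.now() - t0).toFixed(2) + " ms");
}

time("AoS scale", function () {
    for (var i = 0; i < N; i++) {
        var v = aos[i];
        v.x *= 2; v.y *= 2; v.z *= 2;
    }
});

time("SoA scale", function () {
    for (var i = 0; i < N; i++) {
        sx[i] *= 2; sy[i] *= 2; sz[i] *= 2;
    }
});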

Searching/Sorting Multidimensional Arrays

I've created a multi-dimensional array based on the x/y coords of the perimeter of a circle. An object can be dragged along the arc (in javascript) and then 'dropped' anywhere on it. The problem is, I need to find the closest x and y coordinate to where the object is 'dropped.'
My current solution involves looping through an array and finding the closest value to x, and then looping again to find the y coordinate, but it doesn't seem very clean and there are problems with it.
Does anyone have any suggestions?
Thanks!
So, let's see. We assume a predefined set of (x, y) coordinates. You are given another point and have to find the nearest element of the array to that given point. I am going to assume "nearest" means the smallest Pythagorean or Euclidean distance from the given point to each of the other points.
The simplest algorithm is probably the best (if you want to look at others in Wikipedia, have at it). Since you didn't give us any code for the structure, I'm going to assume an array of objects, each object having an x and a y property, ditto for the given point.
var findNearestPoint = function (p, points) {
    var minDist = Number.POSITIVE_INFINITY,
        minPoint = -1,
        i,
        l,
        curDist,
        sqr = function (x) { return x * x; };
    for (i = 0, l = points.length; i < l; i++) {
        // compare squared distances; the square root isn't needed for ordering
        curDist = sqr(p.x - points[i].x) + sqr(p.y - points[i].y);
        if (curDist < minDist) {
            minDist = curDist;
            minPoint = i;
        }
    }
    return points[minPoint];
};
(Untested, but you get the idea.)
If your arrays are created in sorted order (that is, from smallest to greatest or greatest to smallest), you could use a binary search algorithm:
1. Get the middle element of the x array.
2. If it equals your value, stop and look for y; otherwise:
3. If your value is lower, search the lower half of the array (starting from step 1).
4. If your value is higher, search the upper half of the array (starting from step 1).
Then use the same approach on y. You might have to adapt the algorithm a bit so that it returns the closest matching element rather than an exact match. Having not seen your array, I can't offer code for the exact problem, but a rough sketch follows.
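As a rough illustration (assuming the x values live in a plain, non-empty array of numbers sorted in ascending order; closestIndex is just a name for this example), a binary search for the closest value might look like this:

// Returns the index of the element of a sorted (ascending), non-empty
// array arr that is closest to target.
function closestIndex(arr, target) {
    var lo = 0, hi = arr.length - 1;
    while (lo < hi) {
        var mid = Math.floor((lo + hi) / 2);
        if (arr[mid] < target) {
            lo = mid + 1;
        } else {
            hi = mid;
        }
    }
    // lo now points at the first element >= target (or the last element);
    // the closest value is either this one or its left neighbour.
    if (lo > 0 && Math.abs(arr[lo - 1] - target) < Math.abs(arr[lo] - target)) {
        return lo - 1;
    }
    return lo;
}

The same function can then be applied to the y values.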

Creating a random number generator in jscript and prevent duplicates

We are trying to create a random number generator to create serial numbers for products on a virtual assembly line.
We got the random numbers to generate; however, since they are serial numbers, we don't want any duplicates.
Is there a way to check whether a generated number has already been produced and, if it is a duplicate, generate a new one, repeating this process until we have a "unique" number?
The point of serial numbers is that they're NOT random. Serial, by definition, means that something is arranged in a series. Why not just use an incrementing number?
The easiest way to fix this problem is to avoid it. Use something that is monotonically increasing (like time) to form part of your serial number. To that you can prepend some fixed value that identifies the line or something.
So your serial number format could be NNNNYYYYMMDDHHMMSS, where NNNN is a 4-digit line number and YYYY is the 4 digit year, MM is a 2 digit month, ...
If you can produce multiple things per second per line, then add date components until you get to the point where only one per unit time is possible -- or simply add the count of items produced this day to the YYYYMMDD component (e.g., NNNNYYYYMMDDCCCCCC).
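A minimal sketch of that NNNNYYYYMMDDHHMMSS format (makeSerial and pad are illustrative names, and lineNumber is assumed to be your numeric line identifier):

// Left-pads a value with zeros to the given width.
function pad(value, width) {
    var s = String(value);
    while (s.length < width) s = "0" + s;
    return s;
}

// Builds a serial number of the form NNNNYYYYMMDDHHMMSS.
function makeSerial(lineNumber) {
    var now = new Date();
    return pad(lineNumber, 4) +
           now.getFullYear() +
           pad(now.getMonth() + 1, 2) +
           pad(now.getDate(), 2) +
           pad(now.getHours(), 2) +
           pad(now.getMinutes(), 2) +
           pad(now.getSeconds(), 2);
}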
With truly random numbers you would have to store the entire collection and check it for every new number. Obviously this means generation becomes slower and slower the more keys you generate, since it has to retry more often and compare against a larger dataset.
This is exactly why truly random numbers are essentially never used for this purpose. For serial numbers the standard is always a sequential number - is there any real reason for them to be random?
Unique IDs are generally not just random values - GUIDs and the like were traditionally based on the system time and (most often) the MAC address. They're globally unique because of the algorithm used and the machine specifics - not because of the size of the value or any level of randomness.
Personally I would do everything I could to either use a sequential value (perhaps with a unique prefix if you have multiple channels) or, better, use a real GUID for your purpose.
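If you do want random numbers with a uniqueness check (the approach the question asks about), a sketch using a plain object as a lookup table might look like the following; note that, as described above, it retries more and more often as the pool fills up, and it would loop forever once every value below max has been issued:

// Check-and-retry: remember every issued number and regenerate on duplicates.
var issued = {};

function randomSerial(max) {
    var candidate;
    do {
        candidate = Math.floor(Math.random() * max);
    } while (issued[candidate]); // duplicate: try again
    issued[candidate] = true;
    return candidate;
}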
is this what you are looking for?
var rArray;

// Fill rArray with every value from 0 to range - 1.
function fillArray(range) {
    rArray = new Array();
    for (var x = 0; x < range; x++)
        rArray[x] = x;
}

// Return a random value from rArray without ever repeating one:
// the chosen element is removed from the pool before it is returned.
function randomND(range) {
    if (rArray == null || rArray.length < 1)
        fillArray(range);

    var pos = Math.floor(Math.random() * rArray.length);
    var ran = rArray[pos];

    // Shift everything after pos one slot to the left...
    for (var x = pos; x < rArray.length; x++)
        rArray[x] = rArray[x + 1];

    // ...then drop the now-unused last slot by copying into a shorter array.
    // (rArray.splice(pos, 1) would achieve the same in a single call.)
    var tempArray = new Array(rArray.length - 1);
    for (var x = 0; x < tempArray.length; x++)
        tempArray[x] = rArray[x];
    rArray = tempArray;

    return ran;
}
