Fastest way to reset a multidimensional array? - javascript

Say I have a two dimensional array: vectors[x][y], and the initial array structure looks like this:
vectors = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0]
];
After some calculations, the data in the array is randomized. What is the fastest and most efficient way to return the array to its initial state?
I know that I could just hardcode the above zeroed array and set vectors equal to it again, but I also know that an algorithm such as:
for (var x = 0; x < vectors.length; x++) {
    for (var y = 0; y < vectors[x].length; y++) {
        vectors[x][y] = 0;
    }
}
is O(x * y).
So which is the better way? And is there a better, even faster/more efficient way to solve this?
And for the general case of zeroing a multi-dimensional array of any length, which is the best way? (I'm working in JavaScript if it matters)

Here are my two cents:
I'd go with keeping a clean copy of your original array for the fastest performance. You can either keep a hard-coded reference copy
var vectorsOrig = [
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0],
    [0, 0, 0, 0, 0]
];
or do a dynamic clean clone of the initial array using slice (recursively, in your case, for a deep copy):
var clonedVectors = [0, 0, 0, 0, 0].slice(0);
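For the two-dimensional case, a deep clone along those lines might look like this (a minimal sketch; cloneVectors is a hypothetical name):
function cloneVectors(source) {
    var copy = [];
    for (var i = 0; i < source.length; i++) {
        copy.push(source[i].slice(0)); // slice(0) copies each row
    }
    return copy;
}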
Regardless, taking the approach of resetting your vector reference to an original copy will be faster than cycling through and resetting each node. If your old vector array object isn't referenced any more, JavaScript will garbage collect it.
With that said, the question becomes one of obtaining a clean copy each and every time. Having one hard-coded instance will give you a single clean copy, and you'll have to clone it thereafter. Nor do you want to fall back into dynamic generation via for loops similar to the reset option. My advice is to write a clone function that simply returns a new hard-coded or initialized array:
function newVector() {
    return [
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]
    ];
}
var vector = newVector();
vector[1][2] = 11;
console.dir(vector);
vector = newVector(); // the old array will be garbage-collected once nothing else references it
console.dir(vector);
Ideally, it's best to benchmark the various approaches.
EDIT
Thanks to Vega's input, I've modified his test to cover three approaches. In Chrome and IE9, this solution seems to be the fastest; in FF (15.0.1), manual iteration seems faster (memory allocation/management is possibly slower in FF). http://jsperf.com/array-zero-test/2

So far, it sounds like we have 2 possible choices.
Overwrite the whole thing with zeroes. (Your solution)
Keep a record of all modified elements and only reset those: record[0].x = 3; record[0].y = 5; and so on.
However, you'll still need to loop once through the record. To explain further: each time an element in the array is set to a value, record that element's position. Then, with a single loop over the record, you can visit each modified element and set it back to 0. So if you had a sparsely modified array, this would be more efficient.
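A minimal sketch of that record-keeping approach (setCell and resetDirty are hypothetical names; it assumes every write goes through the helper):
var dirty = [];

function setCell(x, y, value) {
    vectors[x][y] = value;
    dirty.push({x: x, y: y}); // remember which cell was touched
}

function resetDirty() {
    for (var i = 0; i < dirty.length; i++) {
        vectors[dirty[i].x][dirty[i].y] = 0;
    }
    dirty.length = 0; // empty the record for the next round
}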
Depending on the implementation, I can see why you would want #2 instead of #1...but if you seriously have a large enough matrix that you need to worry about analyzing the algorithm, you might consider doing some kind of server pre-processing.

Another different way of looking at the problem is to use a linear array and calculate the linear index from the x, y indices when you do your updates.
The initialization is then just a single flat for loop. (Note that the work is still proportional to x * y elements; the gain is one loop with no nested indexing, not a lower asymptotic bound.)
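A minimal sketch of the linear-index idea (width, grid, and the helper names are hypothetical):
var width = 5, height = 5;
var grid = new Array(width * height);

function setVector(x, y, value) {
    grid[y * width + x] = value; // row-major linear index
}

function resetGrid() {
    for (var i = 0; i < grid.length; i++) {
        grid[i] = 0;
    }
}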

I'll go out on a limb and say that the fastest way of assigning the same value to all elements is by calling Array.map().
But, there is a catch here. Note that this will have incredibly fast performance on browsers that have natively implemented that method, and will have just the usual performance in other browsers. Also note that .map() isn't available in some old browsers, so you'll need to use Underscore.js or any other library that provides that method.
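For the 2D case in the question, a map()-based reset might look like this (a sketch; note that map() builds new arrays rather than mutating in place):
vectors = vectors.map(function (row) {
    return row.map(function () {
        return 0; // every element becomes 0
    });
});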

Related

Javascript Map does not contain array of numbers as key

I'm trying to create a map keyed by certain three-element arrays. For example,
const rule30Map = new Map();
rule30Map.set([1, 0, 0], 1);
rule30Map.set([0, 1, 1], 1);
rule30Map.set([0, 1, 0], 1);
rule30Map.set([0, 0, 1], 1);
When I try getting a value based on values in an array, the console returns undefined,
console.log(rule30Map.get([1, 0, 0])); // prints undefined
but the expected output would be 1. Can somebody explain why my logic was misunderstood?
The keys are compared with === comparison. Two separate arrays that look like [1, 0, 0] are still two separate arrays that are not equal to each other.
Working out some automated way of keeping track of key objects by their characteristics would probably be more complicated than using a plain object for storage. Because JavaScript does not provide a generic way for a class of objects to supply hashing and comparison overrides, Map and Set in JavaScript are somewhat limited in usefulness.
You could do:
const a = [1,0,0];
const map = new Map();
map.set(a.toString(), "value");
map.get([1,0,0].toString());
I assume you are computing the [1,0,0] part.
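To keep set and get symmetric, you could wrap the serialization in a small helper (key is a hypothetical name):
function key(cells) {
    return cells.join(','); // e.g. [1, 0, 0] -> "1,0,0"
}

const rule30Map = new Map();
rule30Map.set(key([1, 0, 0]), 1);
console.log(rule30Map.get(key([1, 0, 0]))); // 1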

Javascript - Creating constructor class within object literal

I'm creating a tic-tac-toe game in Javascript for an online course. I'm trying to set up various properties within an object literal I'm calling "Game". However, I know that when I start writing the AI functions, I will need to create several instances of a state class that holds board state, score & player info. My question is - how do I create a constructor like this from within my Game object? As an example:
var Game = {
    board: [0, 1, 2,
            3, 4, 5,
            6, 7, 8],
    // 0 = Empty, 1 = X, 2 = O
    boardState: [0, 0, 0,
                 0, 0, 0,
                 0, 0, 0],
    empty: this.availableMoves(this.board),
    // either "human" or "AI"
    whoseTurn: "",
    // etc.. other initial properties...
    state: {
        turn: this.startingPlayer,
        boardState: this.boardState,
        value: 0
    }
};
As I understand it right now, "state" is just an object literal within Game that holds static values for the current state of the board. If I wanted to create new instances of state for the purposes of iterating through possible states, though, how might I go about doing that without affecting the current state? Something like:
function changeState(state) {
    var state1 = new state();
    // edit state1 object without affecting current state.
}
I was originally thinking of creating an entirely different "State" object for this outside of "Game", but to me that didn't seem to be the right way of going about it since states are part of the game. Any advice would be appreciated!
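One minimal sketch of that idea (an assumption on my part, not from the original thread): store a constructor function as a property of Game, so fresh state instances can be created without touching the live one:
var Game = {
    boardState: [0, 0, 0, 0, 0, 0, 0, 0, 0],
    whoseTurn: "",
    // a constructor kept as a property of the object literal
    State: function (turn, boardState) {
        this.turn = turn;
        this.boardState = boardState.slice(); // copy, don't share
        this.value = 0;
    }
};

var trial = new Game.State("AI", Game.boardState);
trial.boardState[4] = 1;         // explore a hypothetical move
console.log(Game.boardState[4]); // 0 -- the live state is untouched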

Getting NaN inconsistently mapping parseInt

Was playing around with some code to create an array of 0's and found that for only one value NaN was returned instead of 0. I get this in Chrome, Node, and Firefox.
What's causing the second value to be NaN?
var arr = new Array(32).join(0).split('').map(parseInt)
// prints [0, NaN, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
console.dir(arr)
That's because the function passed to map will be called with three arguments:
The current array item
The index of that item
The whole array
In this case, that function is parseInt, which only uses two arguments:
The string to be parsed
The radix used to parse the string
Additional arguments will be ignored.
Therefore, for the 2nd item in the array (i.e. index 1), the call will be
parseInt("0", 1, ignoredArray)
When the radix is 1, NaN is returned:
Let R = ToInt32(radix).
If R ≠ 0, then
If R < 2 or R > 36, then return NaN.
Also note that if you used a bigger number like new Array(99), you would have seen NaNs starting at index 37.
The .map() function passes three arguments to the callback of which two are used by parseInt(): the value, and the index (the third is the array itself). When the index is 1, any string of digits will be an invalid value. The parseInt() function ignores the second argument when it's 0.
To elaborate: for the first element, parseInt() is called like this:
parseInt("0", 0)
because the array value is zero and the index is zero. On the second call, it's this:
parseInt("0", 1)
and that returns NaN.
Note that if you're not too picky about the results being all integers, you can do what you're trying to do with the Number constructor:
var arr = new Array(32).join(0).split('').map(Number);
In ES2015 (the newest ratified standard for JavaScript, which if we are all very good will be fully implemented in all browsers as our Christmas present) you can use the .fill() function:
var arr = new Array(31).fill(0);
The mapping passes three arguments to its function:
the element;
the index of that element within the array; and
the array itself.
The parseInt function will look at the first two of those, treating the first correctly as the string but treating the second as the base to use. When the base is neither zero nor in the inclusive range 2..36, it returns NaN (1). If you had an array with forty elements, you'd also see a bunch of NaN values at the end:
0, NaN, 0, 0, 0, ... 0, 0, 0, NaN, NaN
You'd also get some pretty strange results if the number strings were anything other than zero, since the array index would dictate what base was used to interpret it.
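A quick illustration of that effect, using "10" instead of "0":
["10", "10", "10", "10"].map(parseInt);
// parseInt("10", 0) -> 10   (radix 0 falls back to base 10)
// parseInt("10", 1) -> NaN  (radix 1 is invalid)
// parseInt("10", 2) -> 2    ("10" read as binary)
// parseInt("10", 3) -> 3    ("10" read as ternary)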
To actually fix this, you can just provide a function that will translate what map gives you to what parseInt expects (a map map, I guess you could call it):
function myParseInt(s,r,a) { return parseInt(s,10); }
var arr = new Array(32).join(0).split('').map(myParseInt)
alert(arr)
You might also want to have a look at the way you're creating this array, it will actually end up as an array of size 31 rather than 32. If you just want an array of '0' characters, you can just use:
var arr = new Array(32).fill('0')
assuming you have a browser that supports ECMAScript 2015, which is Safari, Firefox and Chrome-desktop as of the time of this answer.
(1) A base of zero is the default case for handling things like hex prefixes.
A base of one makes no sense in a purely positional system (where each digit is multiplied by a power of the base and accumulated), since the only allowed digit there would be 0, so each place value would be that zero multiplied by 1^n. In other words, the only number possible in a base-one system would be zero.
Bases from 2 to 36 are therefore more sensible.

TypedArray Set vs. Unrolled Loop (Javascript)

In attempting to build a WebGL 3D library for myself (learning purposes mostly), I followed documentation from various sources stating that the TypedArray method set() (specifically for Float32Array) is supposed to be "as fast as" memcpy in C (obviously tongue in cheek), literally the fastest according to html5rocks. On appearances that seemed to be correct (no loop setup in JavaScript, disappearing into some uberfast typed-array pure C nonsense, etc.).
I took a gander at glMatrix (good job on it btw!), and noticed that the author stated that he unrolled all of the loops for speed. This is obviously something a JavaScript guru would normally do for as much speed as possible, but, based on my previous reading, I thought I had a 1-up on this library: he created his lib to work with both arrays and typed arrays, so I figured I would get more speed by using set(), since I was only interested in staying in TypedArray types.
To test my theory I set up this jsperf. Not only does set() comparatively lack speed, every other technique I tried (in the jsperf) beats it. It is the slowest by far.
Finally, my question: why? I can theoretically understand loop unrolling becoming highly optimized in the SpiderMonkey or Chrome V8 JS engines, but losing out to a plain for loop seems ridiculous (copy2 in the jsperf), especially if set()'s intent is theoretically to speed up copies of raw, contiguous-in-memory data types (TypedArray). Either way, it feels like the set() function is broken.
Is it my code? My browser? (I am using Firefox 24.) Or am I missing some other theory of optimization? Any help in understanding this result, contrary to my thoughts and understandings, would be incredibly helpful.
This is an old question, but there is a reason to use TypedArrays if you have a specific need to optimize some poorly performing code. The important thing to understand about TypedArray objects in JavaScript is that they are views which represent a range of bytes inside of an ArrayBuffer. The underlying ArrayBuffer actually represents the contiguous block of binary data to operate on, but we need a view in order to access and manipulate a window of that binary data.
Separate (or even overlapping) ranges in the same ArrayBuffer can be viewed by multiple different TypedArray objects. When you have two TypedArray objects that share the same ArrayBuffer, the set operation is extremely fast. This is because the machine is working with a contiguous block of memory.
Here's an example. We'll create an ArrayBuffer of 32 bytes, one length-16 Uint8Array to represent the first 16 bytes of the buffer, and another length-16 Uint8Array to represent the last 16 bytes:
var buffer = new ArrayBuffer(32);
var array1 = new Uint8Array(buffer, 0, 16);
var array2 = new Uint8Array(buffer, 16, 16);
Now we can initialize some values in the first half of the buffer:
for (var i = 0; i < 16; i++) array1[i] = i;
console.log(array1); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
console.log(array2); // [0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
And then very efficiently copy those 16 bytes into the second half of the buffer:
array2.set(array1);
console.log(array1); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
console.log(array2); // [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
We can confirm that the two arrays actually share the same buffer by looking at the buffer with another view. For example, we could use a length-8 Uint32Array that spans the entire 32 bytes of the buffer:
var array3 = new Uint32Array(buffer)
console.log(array3); // [50462976, 117835012, 185207048, 252579084,
// 50462976, 117835012, 185207048, 252579084]
I modified a JSPerf test I found to demonstrate the huge performance boost of a copy on the same buffer:
http://jsperf.com/typedarray-set-vs-loop/3
We get an order of magnitude better performance in Chrome and Firefox, and it's even much faster than taking a normal array of double length and copying the first half to the second half. But we have to consider the cycles/memory tradeoff here: as long as we have a reference to any single view of an ArrayBuffer, the rest of the buffer's data cannot be garbage collected. An ArrayBuffer.transfer function is proposed for ES7 Harmony, which would solve this problem by giving us the ability to explicitly release memory without waiting for the garbage collector, as well as the ability to grow ArrayBuffers dynamically without necessarily copying.
Well, set() doesn't exactly have simple semantics like that. In V8, after some figuring out of what should be done, it will essentially arrive at exactly the same loop that the other methods are running directly in the first place.
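Conceptually, something like this (a sketch of the copy loop it falls back to, not V8's actual implementation):
var target = new Float32Array(16);
var source = new Float32Array([1, 2, 3, 4]);
var offset = 4;

// What target.set(source, offset) boils down to:
for (var i = 0; i < source.length; i++) {
    target[offset + i] = source[i];
}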
Note that JavaScript is compiled into highly optimized machine code if you play your cards right (all the tests do that) so there should be no "worshipping" of some methods just because they are "native".
I've also been exploring how set() performs and I have to say that for smaller blocks (such as the 16 indices used by the original poster), set() is still around 5x slower than the comparable unrolled loop, even when operating on a contiguous block of memory.
I've adapted the original jsperf test here. I think it's fair to say that for small block transfers such as this, set() simply can't compete with unrolled index assignment. For larger block transfers (as seen in sbking's test), set() does perform better, but then it is competing with literally 1 million array index operations, so it would seem bonkers for it not to overcome those with a single instruction.
The contiguous-buffer set() in my test does perform slightly better than the separate-buffer set(), but again, at this size of transfer the performance benefit is marginal.

IE9 javascript sort order ... why?

I noticed that IE9's sort changes the order of elements when the comparison function returns 0.
See:
var myarray = [
    {id: 1, val: 0},
    {id: 2, val: 0},
    {id: 3, val: 7},
    {id: 4, val: 41}
];
myarray.sort(function(a, b) { return a.val - b.val; });
for (var i in myarray) {
    console.log(myarray[i].id);
}
Current stable versions of Chrome, Firefox, Opera and Safari give the following output: 1 2 3 4.
Same output for IE7 and IE8.
IE9 output is: 2 1 3 4
Why? Is that normal?
Don't use for...in on an array if you're trying to iterate over the numeric properties, for two reasons:
You will also get methods and properties added to Array.prototype showing up;
The iteration order is defined in the ECMAScript spec as being implementation-dependent, meaning it could in theory be anything.
Both points also apply to Objects. Chrome in fact does not conform to the most common browser behaviour, leading to heated debate in a Chrome bug report.
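A plain index-based loop avoids both problems:
for (var i = 0; i < myarray.length; i++) {
    console.log(myarray[i].id);
}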
From MDC (emphasis mine):
If compareFunction(a, b) returns 0, leave a and b unchanged with respect to each other, but sorted with respect to all different elements. [Note: the ECMAscript standard does not guarantee this behaviour], and thus not all browsers (e.g. Mozilla versions dating back to at least 2003) respect this.
In my experience, only Chrome/Firefox get this right. Opera 11's behavior for me is... not well defined.
E.g., using sort to move all zeroes to the top of the array:
[1, 0, 3, 0, 5, 0, 2].sort(function (a, b) { return b === 0 && 1 || 0;});
Chromium 10: [0, 0, 0, 1, 3, 5, 2]
Firefox 4: [0, 0, 0, 1, 3, 5, 2]
Opera 11: [0, 0, 0, 2, 1, 5, 3] <- does not maintain order of non-zeroes
Based on your sort function, both of those elements are equal and it shouldn't matter which order they appear in. It is up to the browser to either leave the order as it is or switch the order as it sees appropriate...neither is a guarantee.
If the two aren't equal, then your sort function is incorrect and should take the other items into account as well.
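If you need a deterministic order across browsers, a minimal sketch is to add an explicit tiebreaker, e.g. falling back to id when the values are equal:
myarray.sort(function (a, b) {
    return (a.val - b.val) || (a.id - b.id); // tiebreak on id
});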
