I have an array of arrays for drawing a tilemap on the screen (basically an array of columns, where each column is an array of tiles). I have tried to speed up the drawing process by not setting the array indexes that contain empty tiles, but it is not any faster.
var a1 = [];
a1[0] = 1;
a1[100] = 1;
a1[200] = 1;
a1[300] = 1;
var a2 = [];
for (var i = 0; i <= 300; i++) {
    a2[i] = 1;
}
When I compared the time taken to loop through these two arrays 100,000 times, a2 was slightly faster. When I tried using for (var x in y) instead, both with an array and an object, it was 4 to 12 times slower.
If looping through an object is a lot slower, and removing 99% of the array (not just from the end) is not making it any faster, is there any way one could actually make it faster?
Do not have holes in your arrays; fill them completely (and pre-allocate to avoid dynamic resizing):
var a1 = new Array(301);
for (var i = 0; i < a1.length; ++i) a1[i] = 0;
a1[0] = 1;
a1[100] = 1;
a1[200] = 1;
a1[300] = 1;
Loop normally (never use for..in; use Object.keys if you need to iterate over keys):
for (var i = 0; i < a1.length; ++i) {
    if (a1[i] !== 0) {
        // Non-empty
    }
}
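If you do need to visit only the indexes that are actually set (on a sparse array or a plain object), Object.keys gives you just those keys. A minimal sketch, using a hypothetical sparse tiles object:
// Hypothetical sparse column: only a few tile slots are set
var tiles = {};
tiles[0] = 1;
tiles[100] = 1;
tiles[200] = 1;

// Object.keys returns only the keys that exist, so empty slots are never visited
var keys = Object.keys(tiles);
for (var k = 0; k < keys.length; ++k) {
    var tile = tiles[keys[k]];
    // draw tile...
}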
I'm learning to analyze space complexity, but I'm confused about analyzing an array vs an object in JS, so I'd like to get some help here.
ex1. array []
int[] table = new int[26];
for (int i = 0; i < s.length(); i++) {
    table[s.charAt(i) - 'a']++;
}
ex1. is from an example online, and it says the space complexity is O(1) because the table's size stays constant.
ex2. object {}
let nums = [0, 1, 2, 3, 4, 5], map = {};
for (let i = 0; i < nums.length; i++) {
    map[nums[i]] = i;
}
I think ex2. uses O(n) because the map object is accessed 6 times. However, if I apply the concept learned from ex1., shouldn't the space complexity be O(1)? Any idea where I went wrong?
From the complexity-analysis point of view, in ex1. the space complexity is O(1) because the array size never increases: you initialize the table to a fixed size of 26 (it looks like you are counting the characters in a string).
See the example below, which keeps track of the counts of alphabet characters in a string (lowercase letters only, for clarity). In this case the length of the array that tracks the counts never changes, even when the length of the string changes.
function getCharacterCount(s) {
    const array = new Int8Array(26);
    for (let i = 0; i < s.length; i++) {
        array[s.charCodeAt(i) - 97]++;
    }
    return array;
}
Now let's change the implementation to a map instead. Here the size of the map increases as and when a new character is encountered in the string, so theoretically speaking the space complexity is O(n).
But in practice we start with a map of 0 keys and it never grows beyond 26 keys. If the string doesn't contain all the characters, the space taken can be much less than with the array in the previous implementation.
function getCharacterCountMap(s) {
    const map = {};
    for (let i = 0; i < s.length; i++) {
        const charCode = s.charCodeAt(i) - 97;
        if (map[charCode]) {
            map[charCode] = map[charCode] + 1;
        } else {
            map[charCode] = 1; // a newly seen character starts its count at 1
        }
    }
    return map;
}
console.log(getCharacterCount("abcdefabcdef"));
console.log(getCharacterCountMap("abcdefabcdef"));
Sometimes we need to use a temporary array in a loop to store data, for example when we need to deal with two-dimensional arrays. But I am not sure whether it is bad practice to create a new array in a loop, especially if I need to do it often, such as in an animation.
for (let i = 0; i < 10000; i++) {
    const temp = [];
    for (let j = 0; j < 10; j++) {
        temp.push(j);
    }
    arr.push(temp);
}
If this were a plain value I could simply use one outer variable and reassign it. So I have tried using a global array and clearing it with temp.length = 0, but because the same array reference is stored each time, every entry ends up holding the last pushed values. I have also tried a global const temp = new Set(), but then pushing it into the arr array becomes arr.push([...temp]), which creates a new array anyway. So is it inevitable to create a new array in this situation?
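To make the described aliasing problem concrete, here is a minimal sketch (variable names are illustrative): reusing one array reference makes every slot of arr point at the same object, while pushing a copy keeps each iteration's data separate.
var arr = [];
var temp = [];

// Reusing the same array: every element of arr is the SAME object,
// so after the loop they all show the values from the last iteration.
for (var i = 0; i < 3; i++) {
    temp.length = 0;
    temp.push(i, i + 1);
    arr.push(temp);
}
console.log(arr); // [[2, 3], [2, 3], [2, 3]]

// Pushing a shallow copy instead keeps each iteration's snapshot intact.
arr = [];
for (var i = 0; i < 3; i++) {
    temp.length = 0;
    temp.push(i, i + 1);
    arr.push(temp.slice()); // slice() allocates a new array anyway
}
console.log(arr); // [[0, 1], [1, 2], [2, 3]]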
Typed arrays
TypedArrays should always be the first option when you need an array. They offer massive performance and memory benefits over standard arrays.
The quickest way to create an array of numbers is
var i, j, b, a = [];
for (i = 0; i < 100000; i += 1) {
    a[i] = b = new Float64Array(10);
    for (j = 0; j < 10; j += 1) {
        b[j] = j;
    }
}
which benchmarked at 634, a huge 5.5 times quicker than the fastest standard Array method:
var i, j, b, a = [];
for (i = 0; i < 100000; i += 1) {
    a[i] = b = [];
    for (j = 0; j < 10; j += 1) {
        b[j] = j;
    }
}
which benchmarked at 3664, in turn another 3.4 times quicker than
const arr = [...Array(10000)];
for (let i = 0; i < arr.length; i++) {
    arr[i] = [...Array(10).keys()];
}
which benchmarks at a sad 12352.
Typed arrays are fixed size, and when created they come pre-filled with zeros. They will never become sparse and, if used well, can eliminate GC overheads.
With Atomics you can share them with workers via SharedArrayBuffer (true sharing: the shared array has only one address in memory).
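A minimal sketch of that sharing, main-thread side only (worker.js is a hypothetical worker script, not part of the original answer; note that browsers require cross-origin isolation to enable SharedArrayBuffer):
var shared = new SharedArrayBuffer(1024 * 4);   // room for 1024 Int32 values
var view = new Int32Array(shared);

var worker = new Worker("worker.js");           // hypothetical worker file
worker.postMessage(shared);                     // the worker sees the SAME memory, not a copy

Atomics.store(view, 0, 42);                     // race-free write
console.log(Atomics.load(view, 0));             // 42, visible to both sides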
Though they are limited to doubles, floats, and signed and unsigned integers, they can hold any type of data (computers are just binary processing machines, after all) and will give noticeable performance increases for any type of array work.
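As an illustration of holding mixed, non-numeric data in one fixed block of binary memory (a sketch, not from the original answer; the record layout is made up), a DataView over an ArrayBuffer lets you pack several fields per record:
// Hypothetical layout: 2-byte tile id, 1-byte visibility flag, 32-bit float height = 8 bytes/record
var RECORD_SIZE = 8;
var buffer = new ArrayBuffer(RECORD_SIZE * 100); // room for 100 records
var view = new DataView(buffer);

function writeRecord(index, tileId, visible, height) {
    var offset = index * RECORD_SIZE;
    view.setUint16(offset, tileId);
    view.setUint8(offset + 2, visible ? 1 : 0);
    view.setFloat32(offset + 4, height);
}

function readRecord(index) {
    var offset = index * RECORD_SIZE;
    return {
        tileId: view.getUint16(offset),
        visible: view.getUint8(offset + 2) === 1,
        height: view.getFloat32(offset + 4)
    };
}

writeRecord(0, 42, true, 1.5);
console.log(readRecord(0)); // { tileId: 42, visible: true, height: 1.5 }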
Is creating arrays in loops bad?
There is nothing wrong with creating arrays in loops.
I've written a script that, instead of giving the real average of a set of data, returns the window that contains the most data points. Let me show some code:
time.tic();
var selectedAverage = 0;
var highestPointCount = 0;
for (var i = 1; (i * step) <= maxValue; i++) {
    var dataPointCount = 0;
    for (var j = 0; j < myArray.length; j++) {
        if (myArray[j] >= minValue + (i - 1) * step && myArray[j] <= minValue + i * step) {
            dataPointCount++;
        }
    }
    if (dataPointCount > highestPointCount) {
        highestPointCount = dataPointCount;
        selectedAverage = (minValue + (i - 1) * step) + Math.round(0.5 * step);
    }
}
console.log(time.toct().ms);
return selectedAverage;
First the step value is calculated by subtracting the minimum value from the maximum value and then dividing by 10, so there are 10 'horizontal' windows. The script then counts the number of data points within each window and returns an appropriate average.
It appears, however, that the script slows down extremely (sometimes by more than a factor of 200) when an array of larger numbers is passed in (1,000,000 for example). Array lengths are roughly 200 and always the same, so it must be related to the actual values. Any idea where it is going wrong?
EDIT:
The code to get the step value:
var minValue = myArray.min();
var maxValue = myArray.max();
var step = Math.round((maxValue - minValue) / 10);
if (step === 0) {
    step = 1;
}
The .min() and .max() are methods attached to Array.prototype. But this all runs very fast; I've measured every step, and it is the for loop that slows down.
If I understood your algorithm correctly, this should remove all the unnecessary calculations and be much faster:
var arr = [];
var maxQty = 0;
var wantedAverage = 0;
for (var j = 0; j < 11; j++) {
    arr[j] = 0;
}
for (var j = 0; j < myArray.length; j++) {
    var stepIndex = Math.floor((myArray[j] - minValue) / step);
    arr[stepIndex] += 1;
    if (arr[stepIndex] > maxQty) {
        wantedAverage = minValue + stepIndex * step + Math.round(0.5 * step);
        maxQty = arr[stepIndex];
    }
}
console.log(maxQty, wantedAverage);
We iterate over each element of the array only once, calculate the index of the window it belongs to, and add one to that slot of the counts array. Then we update wantedAverage whenever the window we just incremented holds more points than any seen so far.
There are two things I notice about your issue:
Remove unnecessary / repeated calculations
Inside your nested loop you compute minValue+(i-1)*step and minValue+i*step on every iteration, even though minValue, i and step do not change there.
You should pull it up before the 2nd for-loop where it becomes:
var dataPointCount = 0;
var lowerLimit = minValue + (i - 1) * step;
var higherLimit = minValue + i * step;
for (var j = 0; j < myArray.length; j++) {
    if (myArray[j] >= lowerLimit && myArray[j] <= higherLimit) {
        dataPointCount++;
    }
}
The severe performance hit you get when handling a big data array is likely caused by memory swapping. From your point of view you are dealing with a single array instance, but with such a big array it is unlikely that the JavaScript VM has a consecutive memory block large enough to hold all the values. It very likely has to juggle multiple memory blocks obtained from the operating system and spend extra effort working out which value lives where during reads and writes.
I have a very big array which looks similar to this
var counts = ["gfdg 34243","jhfj 543554",....] //55268 elements long
this is my current loop
var replace = "";
var scored = 0;
var qgram = "";
var score1 = 0;
var len = counts.length;

function score(pplaintext1) {
    qgram = pplaintext1;
    for (var x = 0; x < qgram.length; x++) {
        for (var a = 0, len = counts.length; a < len; a++) {
            if (qgram.substring(x, x + 4) === counts[a].substring(0, 4)) {
                replace = parseInt(counts[a].replace(/[^1-9]/g, ""));
                scored += Math.log(replace / len) * Math.LOG10E;
            } else {
                scored += Math.log(1 / len) * Math.LOG10E;
            }
        }
    }
    score1 = scored;
    scored = 0;
} // need to call the function roughly 1000 times
I have to loop through this array several times and my code runs slowly. My question is: what is the fastest way to loop through this array, so that I can save as much time as possible?
Your counts array appears to be a list of unique strings and values associated with them. Use an object instead, keyed on the unique strings, e.g.:
var counts = { gfdg: 34243, jhfj: 543554, ... };
This will massively improve the performance by removing the need for the O(n) inner loop by replacing it with an O(1) object key lookup.
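If the data starts out in the string format shown in the question, a one-time conversion pass builds that lookup object (a sketch; it assumes each entry is a 4-character key, a space, then the count):
var countsArray = ["gfdg 34243", "jhfj 543554"]; // ... 55268 elements in the real data
var counts = {};
for (var i = 0; i < countsArray.length; i++) {
    var parts = countsArray[i].split(" ");
    counts[parts[0]] = parseInt(parts[1], 10);
}
// counts["gfdg"] === 34243, and every lookup is now O(1)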
Also, avoid divisions - log(1/n) = -log(n) - and move loop invariants outside the loops. Your Math.log(1 / len) * Math.LOG10E is actually a constant added in every pass; in the matching branch you additionally need to factor in Math.log(replace), which in log arithmetic means adding it.
P.S. Avoid using outer-scoped state variables for the score, too! I think the code below replicates your scoring algorithm correctly:
var len = Object.keys(counts).length;

function score(text) {
    var result = 0;
    var factor = -Math.log(len) * Math.LOG10E; // log10(1 / len), computed once
    for (var x = 0, n = text.length - 4; x < n; ++x) {
        var qgram = text.substring(x, x + 4);
        var replace = counts[qgram];
        if (replace) {
            result += Math.log(replace) * Math.LOG10E + factor; // log10(replace / len)
        } else {
            result += len * factor; // once for each ngram
        }
    }
    return result;
}
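As a usage sketch (the plaintext string is made up), the roughly 1000 calls mentioned in the question now each walk the text only once instead of once per dictionary entry:
console.time("score");
for (var i = 0; i < 1000; i++) {
    var s = score("gfdgjhfjgfdgjhfj"); // any candidate plaintext
}
console.timeEnd("score");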
I have two Javascript ArrayBuffers; each contains 512 bits of data.
I would like to do an xor comparison of the two arrays and store the results in a third array.
Currently, I am looping over the elements in the buffers. In the code below 'distance' is an integer and feat_a1 and feat_b1 are ArrayBuffers that are 512 bits in length.
for (var d1 = 0; d1 < 512; d1++) {
    distance += feat_b1[d1] ^ feat_a1[d1];
}
Is there a more efficient way of doing the pairwise comparison of these two arrays?
As I understand it, you can't use arrayBuffer[i] directly; you have to wrap the buffer in a typed-array view (like Int8Array). I have made the following example, http://jsfiddle.net/mLurz/ , tried the different typed-array views, and Uint32Array showed the best performance:
var i;
var dist = 0;
var max = Math.pow(2, 32);

var buf1 = new ArrayBuffer(1024);
var x = new Uint32Array(buf1);
for (i = 0; i < 256; ++i) {
    x[i] = Math.random() * max;
}

var buf2 = new ArrayBuffer(1024);
var y = new Uint32Array(buf2);
for (i = 0; i < 256; ++i) {
    y[i] = Math.random() * max;
}

console.time('Uint32Array');
for (var j = 0; j < 1000000; ++j) {
    for (i = 0; i < 256; ++i) {
        dist += y[i] ^ x[i];
    }
}
console.timeEnd('Uint32Array');
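Applied to the question's own 512-bit buffers (a sketch; it assumes feat_a1 and feat_b1 are 64-byte ArrayBuffers and mirrors the answer's word-wise accumulation, so the total differs numerically from a byte-wise sum):
var a = new Uint32Array(feat_a1);   // 512 bits = 16 words of 32 bits
var b = new Uint32Array(feat_b1);
var distance = 0;
for (var i = 0; i < a.length; i++) {
    distance += a[i] ^ b[i];        // 32 bits per iteration instead of one element at a time
}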