I implemented a simple algorithm in two ways: one using indexOf and the other using a hash table.
The problem:
Given an arbitrary ransom note string and another string containing letters from all the magazines, write a function that will return true if the ransom note can be constructed from the magazines; otherwise, it will return false.
First one. Is its time complexity O(N^2), because I have N letters in the ransomNote and each indexOf call can scan up to N elements?
var canConstruct = function(ransomNote, magazine) {
    if (magazine.length < ransomNote.length) return false;
    const arr = magazine.split("");
    for (let i = 0; i < ransomNote.length; i++) {
        if (arr.indexOf(ransomNote[i]) < 0)
            return false;
        const index = arr.indexOf(ransomNote[i]);
        arr.splice(index, 1);
    }
    return true;
};
Second one. What is the time complexity? Does the hash table make it O(N)?
var canConstruct = function(ransomNote, magazine) {
    if (magazine.length < ransomNote.length) return false;
    const map = new Map();
    for (let i = 0; i < magazine.length; i++) {
        if (map.has(magazine[i]))
            map.set(magazine[i], map.get(magazine[i]) + 1);
        else
            map.set(magazine[i], 1);
    }
    for (let i = 0; i < ransomNote.length; i++) {
        if (!map.has(ransomNote[i]))
            return false;
        else {
            const x = map.get(ransomNote[i]) - 1;
            if (x > 0)
                map.set(ransomNote[i], x);
            else
                map.delete(ransomNote[i]);
        }
    }
    return true;
};
Thanks
FIRST solution
Well, one thing you have to take into consideration, especially in the first solution, is that the split, splice and indexOf methods all have their own time complexity.
Let's say you have m letters in magazine. When you split it into an array, you already spend O(m) time there (and of course O(m) space, since you store it all in a new array of size m).
Now you enter a for loop that runs n times (where n is the number of letters in ransomNote). Inside each iteration, indexOf is called twice (the way the code is written) and splice once, and each of those operations is O(m) because it has to walk or shift the magazine array.
So you do O(m) work n times, roughly O(3 * m * n), which simplifies to O(m * n) time complexity. I would strongly suggest not calling indexOf twice with the same argument; call it once and store the result somewhere. You will either have an index or -1, meaning the letter wasn't found.
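For example, a minimal sketch of that suggestion (same logic as your first solution, but indexOf is called only once per letter):
var canConstruct = function(ransomNote, magazine) {
    if (magazine.length < ransomNote.length) return false;
    const arr = magazine.split("");
    for (let i = 0; i < ransomNote.length; i++) {
        const index = arr.indexOf(ransomNote[i]); // call indexOf once and reuse the result
        if (index < 0) return false;              // letter not available in the magazine
        arr.splice(index, 1);                     // consume the letter
    }
    return true;
};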
SECOND solution
Much better. Here you populate a hash map (so some extra memory is used, but considering you also used split in the first solution and stored the result, it should roughly be the same).
Then you simply make one pass over ransomNote and look each letter up in the hash map. Finding a letter in a Map is O(1) time, so it's really useful for this type of algorithm.
I would argue the complexity ends up as O(n + m) (one pass over the magazine to build the map, one pass over the note), so much better than the first one.
Hope I made sense to you. If you want, we can dive a bit deeper into the space analysis in the comments.
The version with indexOf() is O(N*M), where N is the length of the ransom note and M is the length of the magazine. You perform up to N searches, and each search is linear through the magazine array, which has M characters. Also, array.splice() is O(M) because it has to copy all the array elements after index down to the lower index.
Hash table access and updates are generally considered to be O(1). The second version performs M insertions to build the map and then N lookups and updates, so the overall complexity is O(N + M).
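For completeness (not part of the original answers): if the alphabet is limited to lowercase letters, the same counting idea also works with a fixed-size array instead of a Map. A minimal sketch, assuming only 'a'-'z' appear in both strings:
var canConstructCounts = function(ransomNote, magazine) { // hypothetical variant, O(N + M) time
    const counts = new Array(26).fill(0);
    for (const ch of magazine) counts[ch.charCodeAt(0) - 97]++;  // count every magazine letter
    for (const ch of ransomNote) {
        if (--counts[ch.charCodeAt(0) - 97] < 0) return false;   // ran out of this letter
    }
    return true;
};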
Related
This is one of the LeetCode problems. In this problem we are to build an array from a permutation, and the challenge is to solve it in O(1) space complexity, so does this solution fulfill that criterion or not? One more thing: if we are manipulating the same array but increasing its length by appending elements at the end, does that mean we are allocating new space, and hence causing O(N) space complexity during the execution of the program?
var buildArray = function(nums) {
    let len = nums.length;
    for (let i = 0; i < len; i++) {
        nums.push(nums[nums[i]]);
    }
    nums = nums.splice(len, nums.length);
    return nums;
};
This is O(n) space complexity: you can't "cheat" by pushing your data onto the input array, because you are using extra space anyway.
This code would be the equivalent of storing your data in a new array and then returning that.
I would guess the point of this space complexity limitation is for you to reach a solution that mutates the input array in place.
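For illustration (not from the original answer), one common in-place approach encodes two values per slot with modular arithmetic, which works because every value satisfies 0 <= nums[i] < nums.length. A minimal sketch under that assumption:
var buildArrayInPlace = function(nums) { // hypothetical name, O(1) extra space
    const n = nums.length;
    // First pass: add n * (new value) on top of each old value.
    // The old value stays recoverable as nums[i] % n.
    for (let i = 0; i < n; i++) {
        nums[i] += n * (nums[nums[i]] % n);
    }
    // Second pass: keep only the new value.
    for (let i = 0; i < n; i++) {
        nums[i] = Math.floor(nums[i] / n);
    }
    return nums;
};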
The space complexity is still O(n). The input array has length n. When you push onto the array, it basically allocates one more memory location and then updates the array.
By pushing the elements onto the array you are still using an extra n space.
In C++, resizing the array would be written roughly as:
int size = 10;
int* arr = new int[size];
// ... once the allocated space is full ...
int* resized_arr = new int[size * 2];   // allocate a bigger block
for (int i = 0; i < size; i++)
    resized_arr[i] = arr[i];            // copy the old elements over
delete[] arr;                           // free the old block
arr = resized_arr;
size *= 2;
After using all the allocated space, to add more elements you need to create a new array and then copy the elements over.
All of these steps happen behind a single push/append call in JavaScript or Python; it does not mean you are not using more space.
Since for any given nums you loop over the entire array at least once and then do O(1) operations (index lookup & push), it is safe to say this is an O(n) time solution.
Here's a simple JavaScript performance test:
const iterations = new Array(10 ** 7);
var x = 0;
var i = iterations.length + 1;

console.time('negative');
while (--i) {
    x += iterations[-i];
}
console.timeEnd('negative');

var y = 0;
var j = iterations.length;

console.time('positive');
while (j--) {
    y += iterations[j];
}
console.timeEnd('positive');
The first loop counts from 10,000,000 down to 1 and accesses an array with a length of 10 million using a negative index on each iteration. So it goes through the array from beginning to end.
The second loop counts from 9,999,999 down to 0 and accesses the same array using a positive index on each iteration. So it goes through the array in reverse.
On my PC, the first loop takes longer than 6 seconds to complete, but the second one only takes ~400ms.
Why is the second loop faster than the first?
Because iterations[-i] will evaluate to undefined (which is slow, as the engine has to go up the whole prototype chain and can't take a fast path). Adding undefined to a number also yields NaN, and doing math with NaN will always be slow, as it is the uncommon case.
Also, initializing iterations with actual numbers would make the whole test more useful.
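For example (just a sketch of that suggestion), this fills the array with real numbers so both loops do actual arithmetic:
const iterations = Array.from({ length: 10 ** 7 }, () => 1); // every element is a number, no holes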
Pro tip: if you want to compare the performance of two pieces of code, they should both compute the same result in the end ...
Some general words about performance tests:
Performance is the compiler's job these days; code the compiler has optimized will almost always be faster than code you try to optimize through some "tricks". Therefore you should write code that the compiler is likely to optimize, and that is, in every case, the code that everyone else writes (your coworkers will also love you for it). Optimizing that code is the most beneficial thing from the engine's point of view. Therefore I'd write:
let acc = 0;
for(const value of array) acc += value;
// or
const acc = array.reduce((a, b) => a + b, 0);
However, in the end it's just a loop; you won't waste much time if the loop performs badly, but you will if the whole algorithm performs badly (time complexity of O(n²) or more). Focus on the important things, not the loops.
To elaborate on Jonas Wilms' answer, JavaScript does not support negative indices (unlike languages like Python).
iterations[-1] is equal to iterations["-1"], which looks for the property named "-1" on the array object. That's why iterations[-1] evaluates to undefined.
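A quick illustration (a small sketch, not from the original answer):
const arr = [10, 20, 30];
console.log(arr[-1]);      // undefined: there is no property named "-1"
arr[-1] = 99;              // creates an ordinary property, not an array element
console.log(arr.length);   // 3: the length is unaffected
console.log(arr["-1"]);    // 99: the same property, accessed via its string key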
Can someone tell me what type of sorting this is? Is it still O(n) even after adding the filter?
const sortMe = arr => {
    let sorted = new Array(arr.length).fill(0);
    for (let i = 0; i < sorted.length; i++) {
        if (sorted.indexOf(arr[i]) === -1)
            sorted[arr[i] - 1] = arr[i];
        else
            sorted.splice(sorted.indexOf(arr[i]), 0, arr[i]);
    }
    return sorted.filter(x => x !== 0);
}

console.log(sortMe([3,15,2,1,18,2,5,6,7,12,3,1,2,3]))
Sorry for asking this, I'm not a computer science graduate.
That's O(n^2).
First, you are going through the loop:
for(let i = 0; i < sorted.length; i++){
That makes it O(n) automatically without adding any processing. Then, in addition to that, you're also using indexOf (in both branches).
sorted.splice(sorted.indexOf(arr[i]), 0, arr[i]);
indexOf starts with the first element and then moves to each element stepwise until it reaches the sought element. So, for each of the n steps of the outer loop, you are calling a function which iterates over the entire list; this means you are doing O(n) work n times, making it O(n*n), or O(n^2).
Note: you might not think it is O(n^2) because you are processing a subset of the list for indexOf. However, the average case is still looking at a list of length n/2.
You also asked about adding a filter function. This might make it seem like it should be O(n^2 + n).
For big-O notation, you drop constants and lower-order terms: O-notation is about how the cost grows as inputs get larger. Therefore it's either "constant, logarithmic, linear, quadratic, exponential, factorial, etc.", not "factorial + logarithmic".
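As an aside (not part of the original answer): if the goal is simply a sorted result, the built-in sort already gives you O(n log n). A minimal sketch, assuming numeric input and ignoring sortMe's zero-filtering behavior:
const sortedCopy = arr => arr.slice().sort((a, b) => a - b); // hypothetical helper: non-mutating numeric sort
console.log(sortedCopy([3,15,2,1,18,2,5,6,7,12,3,1,2,3]));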
I have an array of numbers with 64 indexes (it's canvas image data).
I want to know if my array contains only zeros or anything other than zero.
We can return a boolean upon the first encounter of any number greater than zero (even if the very last index is non-zero and all the others are zero, we should return true).
What is the most efficient way to determine this?
Of course, we could loop over our array (focus on the testImageData function):
// Setup
var imgData = {
    data: new Array(64)
};
imgData.data.fill(0);

// Set last pixel to black
imgData.data[imgData.data.length - 1] = 255;

// The part in question...
function testImageData(img_data) {
    var retval = false;
    for (var i = 0; i < img_data.data.length; i++) {
        if (img_data.data[i] > 0) {
            retval = true;
            break;
        }
    }
    return retval;
}

var result = testImageData(imgData);
...but this could take a while if my array were bigger.
Is there a more efficient way to test if any index in the array is greater than zero?
I am open to answers using lodash, though I am not using lodash in this project. I would rather the answer be native JavaScript, either ES5 or ES6. I'm going to ignore any jQuery answers, just saying...
Update
I setup a test for various ways to check for a non-zero value in an array, and the results were interesting.
Here is the JSPerf Link
Note: the Array.some test was much slower than using for (index) and even for-in. The fastest, of course, was for (index): for (let i = 0; i < arr.length; i++)....
You should note that I also tested a Regex solution, just to see how it compared. If you run the tests, you will find that the Regex solution is much, much slower (not surprising), but still very interesting.
I would like to see if there is a solution that could be accomplished using bitwise operators. If you feel up to it, I would like to see your approach.
Your for loop is the fastest way on Chrome 64 with Windows 10.
I've tested it against two other options; here is the link to the test so you can run it in your environment.
My results are:
// 10776 operations per second (the best)
for (let i = 0; i < arr.length; i++) {
    if (arr[i] !== 0) {
        break
    }
}

// 4131 operations per second
for (const n of arr) {
    if (n !== 0) {
        break
    }
}

// 821 operations per second (the worst)
arr.some(x => x)
There is no faster way than looping through every element in the array. Logically, in the worst-case scenario the last pixel in your array is black, so you have to check all of them. The best algorithm therefore can only have an O(n) runtime. The best thing you can do is write a loop that breaks early upon finding a non-white pixel.
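For example, a compact version of that early-exit loop (just a sketch of the same idea):
function hasNonZero(img_data) {
    for (let i = 0; i < img_data.data.length; i++) {
        if (img_data.data[i] > 0) return true; // stop at the first non-zero value
    }
    return false;
}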
Is it safe to assume that array.indexOf() does a linear search from the beginning of the array?
So if I often search for the biggest value, will sorting the array before calling indexOf make it run even faster?
Notes:
I want to sort only once and search many times.
"biggest value" is actually the most popular search key which is a string.
Yes, indexOf searches from the first element to the last. Sorting the array so that the entry you ask for most often sits near the front can make a difference, but whether it pays off depends on the cost of the sorting algorithm (normally O(N log N) for quicksort) versus the O(N) linear searches you save. I would suggest making a simple test with random values and seeing how the performance behaves.
Of course it depends on your DataObject:
ArrayList:
public int indexOf(Object o) {
if (o == null) {
for (int i = 0; i < size; i++)
if (elementData[i]==null)
return i;
} else {
for (int i = 0; i < size; i++)
if (o.equals(elementData[i]))
return i;
}
return -1;
}
Perhaps a TreeSet can also help you, since it is ordered all the time.
In the end I would say: "It depends on your data container, on how the ordering or search is implemented, and on where in the whole process the performance is needed."
As a comment says, with plain arrays you can use binary search, which complements the sorting step: after the O(n log n) sort, each search costs only O(log n).
So in the end it's not only a question of the algorithm's performance but also of where in the process you need the performance. For example, if you have lots of time when adding values and no time when getting the value you need, a sorted structure can improve things very well, like the tree collections. If you need more performance when adding things, it's perhaps the other way around.
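To make the binary-search idea concrete, here is a minimal JavaScript sketch (not from the original answer); it assumes the array has already been sorted with the same ordering:
function binarySearch(sortedArr, target) {
    let lo = 0, hi = sortedArr.length - 1;
    while (lo <= hi) {
        const mid = (lo + hi) >> 1;
        if (sortedArr[mid] === target) return mid; // found
        if (sortedArr[mid] < target) lo = mid + 1; // look in the upper half
        else hi = mid - 1;                         // look in the lower half
    }
    return -1; // not found, same convention as indexOf
}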