Why Hasn't My Time Complexity Improved on This LeetCode Challenge? - javascript

I am taking a course on algorithms and big O on Udemy.
I learned that nested loops are bad for performance. I had solved a LeetCode challenge before starting this course, and I wanted to try it again using some of the things I learned. I expected it to be much faster than last time, but it was the same speed. Can someone explain where I'm going wrong and why there's no improvement in this function's performance?
Challenge: given an array of integers and a target integer, return the indices of the two integers in the array whose sum equals the target.
New code: Time: 212ms
var twoSum = function(nums, target) {
    let right = nums.length - 1;
    let left = 0;
    // try every pair: for each left, walk right down to left + 1,
    // then advance left and reset right to the end
    while (left < nums.length) {
        if (nums[left] + nums[right] === target) {
            return [right, left];
        }
        if (right > left + 1) {
            right--;
        } else {
            left++;
            right = nums.length - 1;
        }
    }
};
Old code: Time: 204ms
var twoSum = function(nums, target) {
    for (let i = 0; i < nums.length; i++) {
        for (let ii = 0; ii < nums.length; ii++) {
            if (i !== ii && nums[i] + nums[ii] === target) {
                return [i, ii];
            }
        }
    }
};

Big-O is purely theoretical, while LeetCode's benchmark is an empirical measurement, and a noisy, unreliable one at that; you can safely ignore those timings. More to the point, your new code hasn't actually changed the complexity: without sorting the array first, the two-pointer scan still ends up checking every pair, so it is still O(n²), just like the nested loops. To genuinely improve the asymptotic complexity, use a hash map and do it in a single O(n) pass:
var twoSum = function(nums, target) {
    let numsMap = {};
    for (let index = 0; index < nums.length; index++) {
        const num = nums[index];
        // have we already seen this number's complement?
        if (numsMap[target - num] !== undefined) {
            return [numsMap[target - num], index];
        }
        numsMap[num] = index;
    }
    return [];
}
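For example, with the classic Two Sum sample input:

console.log(twoSum([2, 7, 11, 15], 9)); // [0, 1], since nums[0] + nums[1] === 9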
References
For additional details, see the Discussion Board. There are plenty of accepted solutions there, in a variety of languages and with explanations, efficient algorithms, and asymptotic time/space complexity analysis.
If you are preparing for interviews:
We would want to write bug-free, clean code that follows the standards and conventions of the language (C, C++, Java, C#, Python, JavaScript, Go, Rust, etc.). Overall, we would like to avoid anything that might become controversial in an interview.
There are also other similar platforms that you might need to become familiar with, in case you interview with companies that use them.
If you are practicing for contests:
Just code as fast as you can; almost everything else is secondary.
For easy questions, brute-force algorithms usually get accepted. For interviews, brute force is less desirable, especially if the question is easy level.
For medium and hard questions, about 90% of the time brute-force algorithms fail, mostly with Time Limit Exceeded (TLE) and occasionally Memory Limit Exceeded (MLE) errors.
Contestants are ranked based on an algorithm explained here.

Related

How to determine memory and time complexity of function?

I'm not good at determining time and memory complexities and would appreciate it if someone could help me out.
I have an algorithm that returns data from a cache, or fetches the data if it's not in the cache, and I am not sure what its time and memory complexities are.
What am I trying to figure out?
What its time and memory complexity are, and why.
What have I done before posting this question on SO?
I have read this, this, this and many more links.
What have I done so far?
As I understand from the articles and questions I've read, all my loop operations are linear. I have 3 loops, so that is N+N+N, which we can write as just N; I think the complexity of getData is O(n). Space complexity is trickier. As I understand it, it's often equal to the time complexity for simple data structures, so I think the space complexity is also N, but I have a cache object (a hash table) that saves every response from fetchData, and I don't understand how to account for it in the space complexity.
Function
https://jsfiddle.net/30k42hrf/9/
or
const cache = {};

const fetchData = (key, arrayOfKeys) => {
    const result = [];
    for (let i = 0; i < arrayOfKeys.length; i++) {
        result.push({
            isin: arrayOfKeys[i],
            data: Math.random()
        });
    }
    return result;
}

const getData = (key, arrayOfKeys) => {
    if (arrayOfKeys.length < 1) return null;
    const result = [];
    const keysToFetch = [];
    for (let i = 0; i < arrayOfKeys.length; i++) {
        const isin = arrayOfKeys[i];
        if (key in cache && isin in cache[key]) {
            result.push({
                isin,
                data: cache[key][isin]
            });
        } else {
            keysToFetch.push(isin);
        }
    }
    if (keysToFetch.length > 0) {
        const response = fetchData(key, keysToFetch);
        for (let i = 0; i < response.length; i++) {
            const { isin, data } = response[i];
            if (cache[key]) {
                cache[key][isin] = data;
            } else {
                cache[key] = { [isin]: data }
            }
        }
        return [...result, ...response];
    }
    return result;
}

// getData('123', ['a', 'b'])
Thanks
Time/space complexity describes how much more time/space the iterations take as the input grows. An intuitive way to see it: imagine the input size is 10 and think about the time/space it takes; then think about it again for input size 20, and then 100.
I am not entirely clear about your code, but for cache logic in general the average time complexity is O(1): once an item is in the cache, retrieving it is constant time. Think of a case where you retrieve the same item a million times but only have to store it once.
The average space complexity is O(n), because you essentially end up storing everything, which is N entries.
In the extreme worst case, the time complexity can be worse for the first retrieval of an item, since that one has to do the actual fetch.
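A minimal sketch of that amortized behaviour (the names here are illustrative, not from the question's code):

const cache = {};

function cachedFetch(key, fetchFn) {
    if (key in cache) return cache[key]; // hit: O(1) average hash lookup
    const value = fetchFn(key);          // miss: pay the fetch cost once per key
    cache[key] = value;                  // O(1) insert; O(m) space overall for m distinct keys
    return value;
}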

Is there a limit to how many levels you can nest in JavaScript?

Say you have this really complex algorithm that requires dozens of for loops.
Does JavaScript have a limit to how deep loops can be nested or is there no limit?
What is the best practice for deep nested for loops?
I tried searching on MDN but couldn't find what I was looking for
Edit
I'm asking whether there is a built-in limit. For example, if you had something like this:
for (a = 1; a < 3; a++) {
    for (b = 1; b < 3; b++) {
        ...
        for (cd = 1; cd < 3; cd++) {
Would this actually be possible, or would JS throw an error?
Edit: Here's a theoretical example of when you might need this.
You want to find whether any 500 numbers in an array sum to another number. You'd need about 500 nested loops to push the combinations into a combos array and then filter them to compare their sums against the third number.
Would there even be enough space in the universe to store that much data?
There's no limit in the specification. There will probably be a limit in any implementation due to memory/stack overflows...
For example, this works fine:
var s = 0;
var is = new Array(11);
for (is[0] = 0; is[0] < 2; is[0]++) {
    for (is[1] = 0; is[1] < 2; is[1]++) {
        for (is[2] = 0; is[2] < 2; is[2]++) {
            for (is[3] = 0; is[3] < 2; is[3]++) {
                for (is[4] = 0; is[4] < 2; is[4]++) {
                    for (is[5] = 0; is[5] < 2; is[5]++) {
                        for (is[6] = 0; is[6] < 2; is[6]++) {
                            for (is[7] = 0; is[7] < 2; is[7]++) {
                                for (is[8] = 0; is[8] < 2; is[8]++) {
                                    for (is[9] = 0; is[9] < 2; is[9]++) {
                                        for (is[10] = 0; is[10] < 2; is[10]++) {
                                            s++;
                                        }
                                    }
                                }
                            }
                        }
                    }
                }
            }
        }
    }
}
document.write(s);
Just to kick in with a short test that I've since modified not so much to count loops as to find the memory limit... This code (please don't use it, your machine will hate you):
function x() {
    function newLoop(index) {
        var y = [];
        console.log("index");
        for (i = index; i < index + 1000; i++) {
            y.push(i);
            if (i == index + 999) {
                console.log(i);
                newLoop(i);
            }
        }
    }
    newLoop(0);
}
x();
has stopped at logging 499500 to the console. That is probably hitting some safety switch or the memory limit.
That's 500 nested loops.
In an earlier test with a lighter version of this code I got up to 999 nested loops within the first second, with the code clogging up my browser for another few seconds (but not displaying the rest because of a "too many messages per second to the console" error).
I didn't care enough to bother with more details after that, nor do I see the benefit of a more detailed description here, but (in my project) I'm traversing HTML with a whole lot of nested loops in badly laid-out pages, and these results far exceed my needs.
TL;DR: memory plays a bigger role than the number of loops, but I've gotten above 1,000 nested loops. Please don't ever use that many, though :)
PS: this was run in Edge; for the version, check the date of my post :)
There is no maximum nesting level that you should worry about when you are writing code that a human is supposed to read and maintain. You can nest hundreds of loops without problems.
However, you should avoid it wherever possible! At some point someone will have to understand your code (most likely it's you, when you are debugging!) and will curse you. It should be possible to extract inner loops into a separate function with a meaningful name.
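To make that suggestion concrete: N literal nested loops can usually be replaced by one recursive helper, so the nesting depth becomes a parameter instead of source code. A minimal sketch (the names are illustrative):

function nestedLoops(depth, bound, indices, visit) {
    if (depth === 0) {
        visit(indices); // innermost "body" of the loops
        return;
    }
    for (var i = 0; i < bound; i++) {
        indices.push(i);
        nestedLoops(depth - 1, bound, indices, visit);
        indices.pop();
    }
}

// Equivalent to the 11 nested loops above: counts 2^11 iterations.
var s = 0;
nestedLoops(11, 2, [], function () { s++; });
console.log(s); // 2048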

Speed differences in JavaScript functions finding the most common element in an array

I'm studying for an interview and have been working through some practice questions. The question is:
Find the most repeated integer in an array.
Here is the function I created and the one they created. They are appropriately named.
var arr = [3, 6, 6, 1, 5, 8, 9, 6, 6]

function mine(arr) {
    arr.sort()
    var count = 0;
    var integer = 0;
    var tempCount = 1;
    var tempInteger = 0;
    var prevInt = null
    for (var i = 0; i < arr.length; i++) {
        tempInteger = arr[i]
        if (i > 0) {
            prevInt = arr[i - 1]
        }
        if (prevInt == arr[i]) {
            tempCount += 1
            if (tempCount > count) {
                count = tempCount
                integer = tempInteger
            }
        } else {
            tempCount = 1
        }
    }
    console.log("most repeated is: " + integer)
}

function theirs(a) {
    var count = 1,
        tempCount;
    var popular = a[0];
    var temp = 0;
    for (var i = 0; i < (a.length - 1); i++) {
        temp = a[i];
        tempCount = 0;
        for (var j = 1; j < a.length; j++) {
            if (temp == a[j])
                tempCount++;
        }
        if (tempCount > count) {
            popular = temp;
            count = tempCount;
        }
    }
    console.log("most repeated is: " + popular)
}

console.time("mine")
mine(arr)
console.timeEnd("mine")
console.time("theirs")
theirs(arr)
console.timeEnd("theirs")
These are the results:
most repeated is: 6
mine: 16.929ms
most repeated is: 6
theirs: 0.760ms
What makes my function slower than theirs?
My test results
I get the following results when I test (JSFiddle) it for a random array with 50 000 elements:
mine: 28.18 ms
theirs: 5374.69 ms
In other words, your algorithm seems to be much faster. That is expected.
Why is your algorithm faster?
You sort the array first, and then loop through it once. Firefox uses merge sort and Chrome uses a variant of quicksort (according to this question). Both take O(n*log(n)) time on average. Then you loop through the array, taking O(n) time. In total you get O(n*log(n)) + O(n), which simplifies to just O(n*log(n)).
Their solution, on the other hand, has a nested loop where both the outer and inner loops iterate over all the elements. That takes O(n^2). In other words, it is slower.
Why do your test results differ?
So why do your test results differ from mine? I see a number of possibilities:
You used too small a sample. If you just used the nine numbers in your code, that is definitely the case. When you use short arrays in the test, overheads (like running console.log, as suggested by Gundy in the comments) dominate the time it takes. This can make the results appear completely random.
neuronaut suggests it is related to the fact that their code operates on an array already sorted by your code. While that is a bad way of testing, I fail to see how it would affect the result.
Browser differences of some kind.
A note on .sort()
A further note: you should not use .sort() with no arguments for sorting numbers, since it sorts them lexicographically, as strings. Instead, use .sort(function(a, b){return a-b}). Read more here.
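For example:

console.log([10, 2, 1].sort());                                  // [1, 10, 2] (compared as strings)
console.log([10, 2, 1].sort(function (a, b) { return a - b; })); // [1, 2, 10]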
A further note on the further note: in this particular case, just using .sort() might actually be smarter. Since you do not care about the ordering, only the grouping, it doesn't matter that it sorts the numbers "wrong"; it will still group elements with equal values together. If it is faster without the comparison function (I suspect it is), then it makes sense to sort without one.
An even faster algorithm
You solved the problem in O(n*log(n)), but you can do it in just O(n). The algorithm to do that is quite intuitive. Loop through the array, and keep track of how many times each number appears. Then pick the number that appears the most times.
Let's say there are m different numbers in the array. Looping through the array takes O(n) and finding the max takes O(m). That gives you O(n) + O(m), which simplifies to O(n) since m ≤ n.
This is the code:
function anders(arr) {
    // Instead of an array we use an object and its properties.
    // It works like a dictionary in other languages.
    var counts = new Object();

    // Count how many of each number there is.
    for (var i = 0; i < arr.length; i++) {
        // Make sure the property is defined.
        if (typeof counts[arr[i]] === 'undefined')
            counts[arr[i]] = 0;
        // Increase the counter.
        counts[arr[i]]++;
    }

    var max;            // The number with the largest count.
    var max_count = -1; // The largest count.

    // Iterate through all of the properties of the counts object
    // to find the number with the largest count.
    for (var num in counts) {
        if (counts.hasOwnProperty(num)) {
            if (counts[num] > max_count) {
                max_count = counts[num];
                max = num;
            }
        }
    }

    // Return the result (note: property names are strings,
    // so max comes back as a string here).
    return max;
}
Running this on a random array with 50 000 elements between 0 and 49 takes just 3.99 ms on my computer. In other words, it is the fastest. The downside is that you need O(m) memory to store how many times each number appears.
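For completeness, the same O(n) counting idea can be written with a Map in more modern JavaScript; a minimal sketch (unlike the object version above, a Map preserves the number type of its keys):

function mostFrequent(arr) {
    const counts = new Map();
    // Count occurrences of each value.
    for (const x of arr) {
        counts.set(x, (counts.get(x) || 0) + 1);
    }
    // Scan the counts for the largest one.
    let best, bestCount = -1;
    for (const [num, c] of counts) {
        if (c > bestCount) {
            bestCount = c;
            best = num;
        }
    }
    return best;
}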
It looks like this isn't a fair test. When you run your function first, it sorts the array. This means their function ends up using already sorted data but doesn't suffer the time cost of performing the sort. I tried swapping the order in which the tests were run and got nearly identical timings:
console.time("theirs")
theirs(arr)
console.timeEnd("theirs")
console.time("mine")
mine(arr)
console.timeEnd("mine")
most repeated is: 6
theirs: 0.307ms
most repeated is: 6
mine: 0.366ms
Also, if you use two separate arrays you'll see that your function and theirs run in the same amount of time, approximately.
Lastly, see Anders' answer -- it demonstrates that larger data sets reveal your function's O(n*log(n)) + O(n) performance vs their function's O(n^2) performance.
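One way to keep the comparison fair is to hand each function its own fresh, unsorted copy of the data; a minimal sketch:

var base = [3, 6, 6, 1, 5, 8, 9, 6, 6];

console.time("mine");
mine(base.slice());   // slice() copies, so mine()'s sort can't leak into the next test
console.timeEnd("mine");

console.time("theirs");
theirs(base.slice());
console.timeEnd("theirs");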
Other answers here already do a great job of explaining why theirs appears faster, and also how to optimize yours; yours is actually better with large datasets (see Anders' answer). I managed to optimize the theirs solution; maybe there's something useful here.
I can get consistently faster results by employing some basic JS micro-optimizations. These optimizations can also be applied to your original function, but I applied them to theirs.
Pre-incrementing can be slightly faster than post-incrementing, because the old value does not need to be kept around.
Reverse while loops are massively faster (on my machine) than anything else I've tried, because guaranteeing the counter stays >= 0 maps to a very cheap check in the generated opcodes. For this test, my computer scored 514,271,438 ops/sec with a reverse while loop, while the next-fastest approach scored 198,959,074.
Caching the result of length matters too; for larger arrays, it makes better noticeably faster than theirs.
Code:
function better(a) {
    var top = a[0],
        count = 0,
        len = a.length - 1, // cache the length once
        i = len;
    while (i--) {
        var j = len,
            temp = 0;
        while (j--) {
            if (a[j] == a[i]) ++temp;
        }
        if (temp > count) {
            count = temp;
            top = a[i];
        }
    }
    console.log("most repeated is " + top);
}
[fiddle]
It's very similar, if not identical, to theirs, but with the micro-optimizations above.
Here are the results for running each function 500 times. The array is pre-sorted before any function is run, and the sort is removed from mine().
mine: 44.076ms
theirs: 35.473ms
better: 32.016ms

Why is Selection Sort so fast in Javascript?

I'm studying for a technical interview right now and writing quick JavaScript implementations of different sorts. The random-array benchmark results for most of the elementary sorts make sense, but the selection sort is freakishly fast, and I don't know why.
Here is my implementation of the Selection Sort:
Array.prototype.selectionSort = function () {
    for (var target = 0; target < this.length - 1; target++) {
        var min = target;
        for (var j = target + 1; j < this.length - 1; j++) {
            if (this[min] > this[j]) {
                min = j;
            }
        }
        if (min !== target) {
            this.swap(min, target);
        }
    }
}
Here are the results of the same randomly generated array with 10000 elements:
BubbleSort => 148ms
InsertionSort => 94ms
SelectionSort => 91ms
MergeSort => 45ms
All the sorts use the same swap method. So why is selection sort faster? My only guess is that JavaScript is really fast at array traversal but slow at value mutation; since selection sort does the least value mutation, it's faster.
** For Reference **
Here is my Bubble Sort implementation
Array.prototype.bubbleSort = function () {
    for (var i = this.length - 1; i > 1; i--) {
        var swapped = false;
        for (var j = 0; j < i; j++) {
            if (this[j + 1] < this[j]) {
                this.swap(j, j + 1);
                swapped = true;
            }
        }
        if (!swapped) {
            return;
        }
    }
}
Here is the swap Implementation
Array.prototype.swap = function (index1, index2) {
    var val1 = this[index1],
        val2 = this[index2];
    this[index1] = val2;
    this[index2] = val1;
};
First let me point out two flaws:
The code for your selection sort is faulty. The inner loop needs to be
for (var j = target + 1; j < this.length; j++) {
otherwise the last element is never selected.
Your jsperf tests sort, as you say, the "same randomly generated array" every time. That means that the successive runs in each test loop will try to sort an already sorted array, which would favour algorithms like bubblesort that have a linear best-case performance.
Luckily, your test array is so freakishly large that jsperf runs only a single iteration of its test loop at once, calling the setup code that initialises the array before every run. This would haunt you for smaller arrays, though. You need to shuffle the array inside the "timed code" itself.
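For reference, shuffling inside the timed code can be done with a standard Fisher-Yates pass (a sketch, not part of the original test):

function shuffle(a) {
    // Fisher-Yates: swap each element with a random earlier (or same) position.
    for (var i = a.length - 1; i > 0; i--) {
        var j = Math.floor(Math.random() * (i + 1));
        var tmp = a[i];
        a[i] = a[j];
        a[j] = tmp;
    }
    return a;
}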
Why is Selection Sort faster? My only guess is that Javascript is really fast at array traversal but slow at value mutation.
Yes. Writes are generally slower than reads, and they have negative effects on CPU caching as well.
SelectionSort uses the least in value mutation
Yes, and that is quite significant. Both selection sort and bubble sort have O(n²) runtime, which means that for n = 10000 both execute about 100,000,000 loop-condition checks, index increments, and comparisons of two array elements.
However, while selection sort does only O(n) swaps, bubble sort does O(n²) of them. That means not only mutating the array, but also paying the overhead of a method call, and doing so much more often than selection sort does. Here are some example counts:
> swaps in .selectionSort() of 10000 element arrays
9989
9986
9992
9990
9987
9995
9989
9990
9988
9991
> swaps in .bubbleSort() of 10000 element arrays
24994720
25246566
24759007
24912175
24937357
25078458
24918266
24789670
25209063
24894328
Ooops.
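Counts like these are easy to reproduce by wrapping the shared swap method from the question with a counter; a sketch (arr stands for any freshly shuffled 10000-element array):

var swapCount = 0;
var originalSwap = Array.prototype.swap;
Array.prototype.swap = function (index1, index2) {
    swapCount++;
    return originalSwap.call(this, index1, index2);
};

swapCount = 0;
arr.selectionSort(); // or arr.bubbleSort()
console.log("swaps: " + swapCount);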

Alternatives to javascript function-based iteration (e.g. jQuery.each())

I've been watching Google Tech Talks' Speed Up Your Javascript and, in talking about loops, the speaker says to stay away from function-based iteration such as jQuery.each() (among others; at about 24:05 in the video). He briefly explains why, which makes sense, but admittedly I don't quite understand what the alternative would be. Say I want to iterate through a column of table cells and use each value to manipulate the adjacent cell's value (just a quick example). Can anyone explain and give an example of an alternative to function-based iteration?
Just a simple for loop should be quicker if you need to loop.
var l = collection.length;
for (var i = 0; i < l; i++) {
    // do stuff
}
But just because it's quicker doesn't mean it always matters.
This runs on the client, not the server, so you don't need to worry about scaling with the number of users; if it's quick with .each(), leave it. But if it's slow, a for loop could speed it up.
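For the table example from the question, the jQuery.each() version translates to a plain loop like this; a sketch where the selector and the transformation are made-up placeholders:

// Hypothetical markup: each source cell's value doubles into the cell next to it.
var cells = document.querySelectorAll("#myTable td.source"); // placeholder selector
for (var i = 0, n = cells.length; i < n; i++) {
    var value = parseFloat(cells[i].textContent) || 0;
    cells[i].nextElementSibling.textContent = value * 2; // write the adjacent cell
}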
Ye olde for-loop
It seems to me that function-based iteration would be slightly slower because of 1) the overhead of the function call itself, 2) the overhead of the callback being created and executed N times, and 3) the extra depth in the scope chain. However, I thought I'd do a quick benchmark just for kicks. Turns out, at least in my simple test case, that function-based iteration was faster. Here's the code and the findings.
Test benchmark code
// Function-based iteration method
var forEach = function(_a, callback) {
    for (var _i = 0; _i < _a.length; _i++) {
        callback(_a[_i], _i);
    }
}

// Generate a big array with the numbers 0..N
var a = [], LENGTH = 1024 * 10;
for (var i = 0; i < LENGTH; i++) { a.push(i); }
console.log("Array length: %d", LENGTH);

// Test 1: function-based iteration
console.info("function-based iteration");
var end1 = 0, start1 = new Date().getTime();
var sum1 = 0;
forEach(a, function(value, index) { sum1 += value; });
end1 = new Date().getTime();
console.log("Time: %sms; Sum: %d", end1 - start1, sum1);

// Test 2: normal for-loop iteration
console.info("Normal for-loop");
var end2 = 0, start2 = new Date().getTime();
var sum2 = 0;
for (var j = 0; j < a.length; j++) { sum2 += a[j]; }
end2 = new Date().getTime();
console.log("Time: %sms; Sum: %d", end2 - start2, sum2);
Each test just sums the array, which is simplistic, but something that could realistically be seen in a real-life scenario.
Results for FF 3.5
Array length: 10240
function-based iteration
Time: 9ms; Sum: 52423680
Normal for-loop
Time: 22ms; Sum: 52423680
Turns out that the function-based iteration was faster in this test case. I haven't watched the video yet, but I'll give it a look and see where he's differing in a way that would make function-based iteration slower.
Edit: This is by no means the end-all, be-all, and is only the result of one engine and one test case. I fully expected the results to be the other way around (function-based iteration being slower), but it is interesting to see how certain browsers have made optimizations (which may or may not be specifically aimed at this style of JavaScript) so that the opposite is true.
The fastest possible way to iterate is to cut down on the work you do within the loop. Hoist work out of the iteration and minimise lookups and increments within the loop, e.g.
var i = arr.length;
while (i--) {
    console.log("Item no " + i + " is " + arr[i]);
}
NB! Testing on the latest Safari (with WebKit nightly), Chrome, and Firefox shows that it really doesn't matter which kind of loop you choose, as long as it isn't for each or for in (or, even worse, any derived functions built upon them).
Also, it turns out that the following for loop is very slightly faster than the option above:
var l = arr.length;
for (var i = l; i--;) {
    console.log("Item no " + i + " is " + arr[i]);
}
If the order of looping doesn't matter, the following should be fastest, as you only need a single local variable; also, decrementing the counter and bounds-checking are done with a single statement:
var i = foo.length;
if (i) do { // i runs from foo.length down to 1, so index with i - 1
    // do stuff with `foo[i - 1]`
} while (--i);
What I normally use is the following:
for (var i = foo.length; i--; ) {
    // do stuff with `foo[i]`
}
It's potentially slower than the previous version (post- vs pre-decrement, for vs while), but more readable.
