The Combination Sum IV problem is a well-known coding interview problem. It states the following:
Given an array of distinct integers nums and a target integer target, return the number of possible combinations that add up to target. You may assume that you have an infinite number of each kind of coin.
This problem can be solved optimally through dynamic programming, and that is explained very well in the solution that LeetCode provides.
However, this problem can also be solved recursively (and through backtracking). I am trying to solve this problem with backtracking to further cement my understanding of backtracking. I have solved the problem recursively with a loop; however, I don't understand why my conversion of it to using indexes instead of looping is broken. Is it not possible to solve this problem by advancing indexes manually?
// This one works
function combinationSum4(nums, target, currTotal = [0], nbWays = [0]) {
  if (currTotal[0] === target) {
    nbWays[0]++;
    return nbWays[0];
  }
  if (currTotal[0] > target) {
    return nbWays[0];
  }
  for (const num of nums) {
    currTotal[0] += num;
    combinationSum4(nums, target, currTotal, nbWays);
    currTotal[0] -= num;
  }
  return nbWays[0];
}
Below, I want to use two indices, a "fast" index and a "slow" index. For example, if we had nums=[1,2,3] and target=4, then we'd have something like:
              0
         i=0 /
            /
          1[1]
     i=0 /    \ i=1
        /      \
   2[1,1]    **3[1,2]**
  i=0 /         / i=0
     /         /
 3[1,1,1]  4[1,2,1]
    ...
Notice how when we get to i=1, we loop back and start again from index 0. I'm not really sure how to do this in code without a loop (that's why I use i in this diagram). Is this not possible?
// This one is broken
function combinationSum4(nums, target, currTotal = [0], nbWays = [0], fastIndex = 0, slowIndex = 0) {
  if (currTotal[0] === target) {
    nbWays[0]++;
    return nbWays[0];
  }
  if (currTotal[0] > target || slowIndex >= nums.length || fastIndex >= nums.length) {
    return nbWays[0];
  }
  const num = nums[fastIndex];
  currTotal[0] += num;
  combinationSum4(nums, target, currTotal, nbWays, fastIndex, slowIndex);
  currTotal[0] -= num;
  combinationSum4(nums, target, currTotal, nbWays, fastIndex + 1, slowIndex);
  combinationSum4(nums, target, currTotal, nbWays, 0, slowIndex + 1);
  return nbWays[0];
}
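For reference, here is a minimal sketch (mine, not from the original post) of one way the loop can be replaced by an index parameter: at each call you either take nums[index] and descend one level with the index reset to 0, or skip to index + 1 on the same level. It also drops the shared-array accumulators in favour of plain return values to keep the sketch short:

// Hypothetical sketch: the for-loop becomes a binary take/skip choice.
function combinationSum4Indexed(nums, target, currTotal = 0, index = 0) {
  if (currTotal === target) return 1;                       // one valid combination found
  if (currTotal > target || index >= nums.length) return 0;
  // "take" nums[index]: go one level deeper and restart the choice at index 0
  const take = combinationSum4Indexed(nums, target, currTotal + nums[index], 0);
  // "skip" nums[index]: stay on this level and try the next candidate
  const skip = combinationSum4Indexed(nums, target, currTotal, index + 1);
  return take + skip;
}

console.log(combinationSum4Indexed([1, 2, 3], 4)); // 7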
I encountered this question during an interview. It gives you an array and a threshold, and the function should return the length of the shortest non-empty contiguous subarray of that array whose sum is at least the threshold.
So if the array is [2,-1,2] and the threshold is 3, then it should return 3.
Here is my attempt, written in JavaScript. I was taking a typical sliding window approach, where once the sum is bigger than the threshold I decrease the window size, and I keep track of the window size during each iteration.
function(array, threshold) {
  let minWindowSize = Infinity
  let sum = 0
  for (let start = 0, end = 0; end < array.length; end++) {
    sum += array[end]
    if (sum >= threshold) minWindowSize = Math.min(minWindowSize, end - start + 1)
    while (sum > threshold) {
      sum -= array[start++]
    }
  }
  return minWindowSize === Infinity ? -1 : minWindowSize
};
However my solution is buggy for cases like
array = [17,85,93,-45,-21]
threshold = 150
I wanted to know what differentiates this question from a typical sliding window question, and whether there is a way to implement a sliding window approach to solve it. It seems pretty straightforward, but it turned out to be a hard question on LeetCode.
As David indicates, you can't use the sliding/stretching window technique when there are negative numbers in the array, because the sum doesn't grow monotonically with the window size.
You can still solve this in O(n log n) time, though, using a technique that is usually used for the "sliding window minimum/maximum" problem.
First, transform your array into a prefix sum array, by replacing each element with the sum of that element and all the previous ones. Now your problem changes to "find the closest pair of elements with difference >= X" (assuming array[-1]==0).
As you iterate through the array, you need to find, for each i, the latest index j such that j < i and array[j] <= array[i]-x.
In order to do that quickly, first note that array[j] will always be less than all the following elements up to i, because otherwise there would be a closer element to choose.
So, as you iterate through the array, maintain a stack of the indexes of all elements that are smaller than all the later elements you've seen. This is easy and takes O(n) time overall -- after processing each i, you just pop all the indexes with >= values, and then push i.
Then for each i, you can do a binary search in this stack to find the latest index with a small enough value. The binary search works because the values at the indexes in the stack increase monotonically -- each element must be less than all the following ones.
With the binary search, the total time increases to O(n log n).
In JavaScript, it looks like this:
var shortestSubarray = function(A, K) {
  // transform to prefix sum array
  let sum = 0;
  const sums = A.map(val => {
    sum += val;
    return sum;
  });
  const stack = [];
  let bestlen = -1;
  for (let i = 0; i < A.length; ++i) {
    const targetVal = sums[i] - K;
    // binary search to find the shortest acceptable span --
    // we will find the position in the stack *after* the target value
    let minpos = 0;
    let maxpos = stack.length;
    while (minpos < maxpos) {
      const testpos = Math.floor(minpos + (maxpos - minpos) / 2);
      if (sums[stack[testpos]] <= targetVal) {
        // value is acceptable
        minpos = testpos + 1;
      } else {
        // need a smaller value - go left
        maxpos = testpos;
      }
    }
    if (minpos > 0) {
      // found a span
      const spanlen = i - stack[minpos - 1];
      if (bestlen < 0 || spanlen < bestlen) {
        bestlen = spanlen;
      }
    } else if (bestlen < 0 && targetVal >= 0) {
      // the whole prefix is a valid span
      bestlen = i + 1;
    }
    // add i to the stack
    while (stack.length && sums[stack[stack.length - 1]] >= sums[i]) {
      stack.pop();
    }
    stack.push(i);
  }
  return bestlen;
};
Leetcode says:
SUCCESS:
Runtime: 216 ms, faster than 100.00% of JavaScript online submissions
for Shortest Subarray with Sum at Least K.
Memory Usage: 50.1 MB, less than 37.37% of JavaScript online
submissions for Shortest Subarray with Sum at Least K.
I guess most people used a slower algorithm.
You could use a sliding window if all of the array elements were nonnegative. The problem is that, with negative elements, it's possible for one subarray to be both shorter than another and have a greater sum.
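For reference, a minimal sketch of that nonnegative-only shrinking-window pattern (hypothetical function name; it assumes every element is nonnegative and the threshold is positive):

function shortestSubarrayNonNegative(array, threshold) {
  let minWindowSize = Infinity;
  let sum = 0;
  for (let start = 0, end = 0; end < array.length; end++) {
    sum += array[end];
    // while the window is still valid, record its size and shrink from the left
    while (sum >= threshold) {
      minWindowSize = Math.min(minWindowSize, end - start + 1);
      sum -= array[start++];
    }
  }
  return minWindowSize === Infinity ? -1 : minWindowSize;
}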
I don't know how to solve this problem with a sliding window. The approach that I have in mind would be to loop over the prefix sums, inserting each into a segment tree after searching the segment tree for the most recent sum that was at least threshold smaller.
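A rough sketch of that segment-tree idea follows (my own code, not part of the answer above; the names are made up): coordinate-compress the prefix sums, store in a max segment tree the latest index at which each prefix-sum value was seen, and for each end position query the prefix of values <= current sum minus the threshold. Overall O(n log n).

// Hypothetical sketch of the segment-tree approach described above.
class MaxSegTree {
  constructor(n) {
    this.n = n;
    this.t = new Array(2 * n).fill(-1);
  }
  // raise the value stored at leaf `pos` to at least `val`
  update(pos, val) {
    for (let i = pos + this.n; i > 0; i >>= 1) this.t[i] = Math.max(this.t[i], val);
  }
  // maximum over leaves [0, r] inclusive
  queryPrefix(r) {
    let res = -1;
    let l = this.n, rr = r + 1 + this.n;
    while (l < rr) {
      if (l & 1) res = Math.max(res, this.t[l++]);
      if (rr & 1) res = Math.max(res, this.t[--rr]);
      l >>= 1; rr >>= 1;
    }
    return res;
  }
}

function shortestSubarraySegTree(A, K) {
  const n = A.length;
  const P = new Array(n + 1).fill(0);                    // P[i] = sum of A[0..i-1], P[0] = 0
  for (let i = 0; i < n; i++) P[i + 1] = P[i] + A[i];

  const sorted = [...new Set(P)].sort((a, b) => a - b);  // coordinate compression
  const rank = new Map(sorted.map((v, idx) => [v, idx]));

  const tree = new MaxSegTree(sorted.length);
  tree.update(rank.get(P[0]), 0);                        // insert the empty prefix
  let best = -1;
  for (let e = 1; e <= n; e++) {
    // binary search: largest compressed value <= P[e] - K
    let lo = 0, hi = sorted.length - 1, r = -1;
    while (lo <= hi) {
      const mid = (lo + hi) >> 1;
      if (sorted[mid] <= P[e] - K) { r = mid; lo = mid + 1; } else { hi = mid - 1; }
    }
    if (r >= 0) {
      const j = tree.queryPrefix(r);                     // most recent j with P[j] <= P[e] - K
      if (j >= 0 && (best < 0 || e - j < best)) best = e - j;
    }
    tree.update(rank.get(P[e]), e);                      // make P[e] available for later ends
  }
  return best;
}

console.log(shortestSubarraySegTree([2, -1, 2], 3));               // 3
console.log(shortestSubarraySegTree([17, 85, 93, -45, -21], 150)); // 2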
I want to partition an array (e.g. [1,2,3,4,5,6,7,8]): the first partition should keep the even values, the second the odd values (example result: [2,4,6,8,1,3,5,7]).
I managed to solve this problem twice with built-in Array.prototype methods. The first solution uses map and sort, the second only sort.
I would like to make a third solution that uses a sorting algorithm, but I don't know which algorithms are used to partition lists. I'm thinking about bubble sort, but I think it is already used in my second solution (array.sort((el1, el2) => (el1 % 2 - el2 % 2)))... I looked at quicksort, but I don't know where to apply the check for whether an integer is even or odd...
What is the best algorithm (scaling linearly as the array grows) to perform such a task in place while keeping the order of the elements?
You can do this in-place in O(n) time pretty easily. Start the even index at the front, and the odd index at the back. Then, go through the array, skipping over the first block of even numbers.
When you hit an odd number, move backwards from the end to find the first even number. Then swap the even and odd numbers.
The code looks something like this:
// assumes `arr` is the array to partition
var n = arr.length;
var i;
var odd = n - 1;
for (i = 0; i < odd; i++)
{
  if (arr[i] % 2 == 1)
  {
    // move the odd index backwards until you find the first even number.
    while (odd > i && arr[odd] % 2 == 1)
    {
      odd--;
    }
    if (odd > i)
    {
      var temp = arr[i];
      arr[i] = arr[odd];
      arr[odd] = temp;
    }
  }
}
Pardon any syntax errors. Javascript isn't my strong suit.
Note that this won't keep the same relative order. That is, if you gave it the array [1,2,7,3,6,8], then the result would be [8,2,6,3,7,1]. The array is partitioned, but the odd numbers aren't in the same relative order as in the original array.
If you are insisting on an in-place approach instead of the trivial standard return [arr.filter(predicate), arr.filter(notPredicate)] approach, that can be easily and efficiently achieved using two indices, running from both sides of the array and swapping where necessary:
function partitionInplace(arr, predicate) {
  var i = 0, j = arr.length;
  while (i < j) {
    while (predicate(arr[i]) && ++i < j);
    if (i == j) break;
    while (i < --j && !predicate(arr[j]));
    if (i == j) break;
    [arr[i], arr[j]] = [arr[j], arr[i]];
    i++;
  }
  return i; // the index of the first element not to fulfil the predicate
}
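For example (hypothetical usage with the even/odd predicate from the question):

const numbers = [1, 2, 3, 4, 5, 6, 7, 8];
const firstOddIndex = partitionInplace(numbers, n => n % 2 === 0);
console.log(numbers);       // [8, 2, 6, 4, 5, 3, 7, 1] (evens first, then odds; relative order not preserved)
console.log(firstOddIndex); // 4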
let evens = arr.filter(i=> i%2==0);
let odds = arr.filter(i=> i%2==1);
let result = evens.concat(odds);
I believe that's O(n). Have fun.
EDIT:
Or if you really care about efficiency:
let evens = [], odds = [];
arr.forEach(i => {
  if (i % 2 == 0) evens.push(i); else odds.push(i);
});
let result = evens.concat(odds);
Array.prototype.getEvenOdd = function(arr) {
  var result = { even: [], odd: [] };
  if (arr.length) {
    for (var i = 0; i < arr.length; i++) {
      if (arr[i] % 2 === 0)
        result.even.push(arr[i]);
      else
        result.odd.push(arr[i]);
    }
  }
  return result;
};
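A hypothetical usage of this helper (note that it takes the array as a parameter even though it is attached to Array.prototype):

const { even, odd } = [].getEvenOdd([1, 2, 3, 4, 5]);
console.log(even); // [2, 4]
console.log(odd);  // [1, 3, 5]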
I am trying to optimize a function. I believe this nested for loop is quadratic, but I'm not positive. I have recreated the function below
const bucket = [["e","f"],[],["j"],[],["p","q"]]
let totalLettersIWantBack = 4;
//I'm starting at the end of the bucket
function produceLetterArray(bucket, limit) {
  let result = [];
  let countOfLettersAccumulated = 0;
  let i = bucket.length - 1;
  while (i > 0) {
    if (bucket[i].length > 0) {
      bucket[i].forEach((letter) => {
        if (countOfLettersAccumulated === totalLettersIWantBack) {
          return;
        }
        result.push(letter);
        countOfLettersAccumulated++;
      });
    }
    i--;
  }
  return result;
}
console.log(produceLetterArray(bucket, totalLettersIWantBack));
Here is a trick for such questions. For the code whose complexity you want to analyze, just write the time that it would take to execute each statement in the worst case, assuming no other statement exists. Note the comments beginning with #operations worst case:
For the given code:
while (i > 0) {                                                  // #operations worst case: bucket.length
  if (bucket[i].length > 0) {                                    // #operations worst case: 1
    bucket[i].forEach((letter) => {                              // #operations worst case: max(len(bucket[i])) for all i
      if (countOfLettersAccumulated === totalLettersIWantBack) { // #operations worst case: 1
        return;
      }
      result.push(letter);                                       // #operations worst case: 1
      countOfLettersAccumulated++;                               // #operations worst case: 1
    })
  }
  i--;                                                           // #operations worst case: 1
}
We can now multiply all the worst case times (since they all can be achieved in the worst case, you can always set totalLettersIWantBack = 10^9) to get the O complexity of the snippet:
Complexity = O(bucket.length * 1 * max(len(bucket[i])) * 1 * 1 * 1 * 1)
= O(bucket.length * max(len(bucket[i])))
If the length of each of the bucket[i] was a constant, K, then your complexity reduces to:
O(K * bucket.length ) = O(bucket.length)
Note that the complexity of the push operation may not remain constant as the number of elements grow (ultimately, the runtime will need to allocate space for the added elements, and all the existing elements may have to be moved).
Whether or not this is quadratic depends on what you consider N and how bucket is organized. If N is the total number of letters, then the runtime is bound either by the number of bins in your bucket, if that is larger than N, or by the number of letters in the bucket, if N is larger. In either case, the search time increases linearly with the larger bound; if one dominates the other, the time complexity is O(N). This is effectively a linear search with "turns" in it: scrunching a linear search up or spacing it out does not change the time complexity. The existence of multiple loops in a piece of code does not alone make it non-linear. Take the linear search example again: we search a list until we've found the largest element.
// 12 elements
var array = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11];
var rows = 3;
var cols = 4;
var largest = -1;
for (var i = 0; i < rows; ++i) {
  for (var j = 0; j < cols; ++j) {
    var checked = array[(i * cols) + j];
    if (checked > largest) {
      largest = checked;
    }
  }
}
console.log("found largest number (eleven): " + largest.toString());
Despite this using two loops instead of one, the runtime complexity is still O(N), where N is the number of elements in the input. Scrunching this down so each index is actually an array of multiple elements, or separating relevant elements by empty bins, doesn't change the fact that the runtime complexity is bounded linearly.
This is technically linear, with n being the total number of elements in your matrix. This is because the exit condition is the length of bucket, and for each array in bucket you check whether countOfLettersAccumulated is equal to totalLettersIWantBack, continually looking at values.
It gets a lot more complicated if you are looking for an answer matching the dimensions of your matrix because it looks like the dimensions of bucket are not fixed.
You can turn this bit of work into constant time once the limit is reached by adding an additional check outside the forEach over bucket[i]: if countOfLettersAccumulated is equal to totalLettersIWantBack, then break out of the while loop.
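A minimal sketch of that early exit (my variant, keeping the original's while (i > 0) bound and overall shape):

function produceLetterArrayEarlyExit(bucket, limit) {
  let result = [];
  let countOfLettersAccumulated = 0;
  let i = bucket.length - 1;
  while (i > 0) {
    if (countOfLettersAccumulated === limit) break;   // stop scanning further bins once done
    for (const letter of bucket[i]) {
      if (countOfLettersAccumulated === limit) break; // a real break, unlike `return` inside forEach
      result.push(letter);
      countOfLettersAccumulated++;
    }
    i--;
  }
  return result;
}

console.log(produceLetterArrayEarlyExit([["e","f"],[],["j"],[],["p","q"]], 4));
// ["p", "q", "j"] -- bucket[0] is never reached because of the i > 0 bound, as in the original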
I like #axiom's explanation of the complexity analysis.
I would just like to add a possible optimized solution.
UPD: .push (amortized O(1) per element) is faster than building the result with .concat in a loop (O(n^2) overall).
There is also a test here: Array push vs. concat.
const bucket = [["e","f"],[],["j", 'm', 'b'],[],["p","q"]]
let totalLettersIWantBack = 4;
//I'm starting at the end of the bucket
function produceLetterArray(bucket, limit) {
  let result = [];
  for (let i = bucket.length - 1; i > 0 && result.length < totalLettersIWantBack; i--) {
    // previous version
    // result = result.concat(bucket[i].slice(0, totalLettersIWantBack - result.length));
    // faster version of merging arrays
    Array.prototype.push.apply(result, bucket[i].slice(0, totalLettersIWantBack - result.length));
  }
  return result;
}
console.log(produceLetterArray(bucket, totalLettersIWantBack));
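For the UPD point above, a rough micro-benchmark sketch (illustrative only; absolute timings depend on the engine, and the gap widens as N grows):

const N = 20000;

console.time('concat in a loop');
let viaConcat = [];
for (let i = 0; i < N; i++) viaConcat = viaConcat.concat([i]); // allocates a new array every iteration
console.timeEnd('concat in a loop');

console.time('push in a loop');
const viaPush = [];
for (let i = 0; i < N; i++) viaPush.push(i);                   // appends in place, amortized O(1)
console.timeEnd('push in a loop');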
I'm trying to implement a quicksort function in JavaScript:
function partition(l, low, high) {
  l[0] = l[low];
  var pivotkey = l[low];
  while (low < high) {
    while (low < high && pivotkey <= l[high]) {
      --high;
    }
    l[low] = l[high];
    while (low < high && l[low] <= pivotkey) {
      ++low;
    }
    l[high] = l[low];
  }
  l[low] = l[0];
  return low;
}

function qsort(l, low, high) {
  var pivotloc;
  if (low < high) {
    pivotloc = partition(l, low, high);
    qsort(l, low, pivotloc - 1);
    qsort(l, pivotloc + 1, high);
  }
  return;
}

function quickSort(l) {
  qsort(l, 1, l.length - 1);
  return l;
}

console.log(quickSort([0, 1, 4, 3]));
But this program outputs nothing in the terminal (with node qsort.js). Perhaps I'm missing something. Can anyone point me in the right direction? How do I debug this kind of problem?
So, as you stated, the first element of the array is used as a temporary variable that is only needed during the execution of the algorithm, which wasn't clear at first!
Your algorithm works fine, but you have a problem in printing the result!
To get what you want, you need to get rid of the first element by adding a shift() call in the quickSort() function, so it becomes:
function quickSort(l) {
  qsort(l, 1, l.length - 1);
  l.shift(); // add this call
  return l;
}
There is another solution, if you want, using the splice() function, which also removes the first element and has this form:
array.splice(indexToRemove, numberToRemove);
So, to get the required result, add the instruction above to your quickSort() function like this:
function quickSort(l) {
  qsort(l, 1, l.length - 1);
  l.splice(0, 1); // add this line
  return l;
}
These are the two solutions I can suggest for your problem. Hope it helps!
I believe there are several issues with your algorithm. But the one I can notice immediately is related to you using l.low and l.high in your partition function. I'm assuming that l is an array in which case, both of these are intended to access the indices at low and high. If that's the case, then these should be l[low] and l[high].
Since it seems like you're trying to do an in-place version of quick sort, take a look at what's available here: In Place Quick Sort
So, I have this program that asks for the minimum even value in the array, and I have written the code, but I seem to have missed a loop. I will write the correct code, but I hope someone can explain why there is a while loop.
<HTML>
<HEAD>
<SCRIPT LANGUAGE = "JavaScript">
var number = new Array(10)
for (var i = 0; i < number.length; i = i + 1)
{
  number[i] = window.prompt('enter number ', '')
  number[i] = parseFloat(number[i])
}

var y = 0
while (number[y] % 2 != 0) //get the first even number in the array
{
  y = y + 1
}
//after you exit the while loop y will have the index of the first even number

var Min
Min = number[y]
for (var i = 0; i < number.length; i = i + 1)
{
  if (number[i] % 2 == 0)
  {
    if (number[i] < Min)
    {
      Min = number[i]
    }
  }
}
document.write(Min)
</SCRIPT>
</HEAD>
</HTML>
So, this part
var y = 0
while (number[y] % 2 != 0) //get the first even number in the array
{
  y = y + 1
}
//after you exit the while loop y will have the index of the first even number
I'm finding it hard to really grasp this loop and if I might ask: is there another way to find the minimum value in an array?
Many thanks!
The while loop sets the first value of Min so that subsequent comparisons work. Here's a far simpler and faster way to do the same thing:
var min = Infinity; // Start with the biggest number possible
for (var i = myArray.length; i--; ) {
  var val = myArray[i];
  if (val < min && val % 2 == 0) min = val;
}
This is faster because—unlike the original code—this doesn't iterate over the first non-even values twice. It would be roughly equivalent in speed if the for loop in the original started at index y, i.e. for (var i=y+1;i<number.length;++i)
It's also very slightly faster because the for loop caches the length of the array instead of looking it up each time, and because it only looks up the value in the array once each loop, not three times. Modern JavaScript runtimes like V8 can optimize naive code to behave similarly, however, so this is not a very important point.
Edit: For fun, here's a modern, functional programming approach:
var min = Math.min.apply(Math,myArray.filter(function(n){ return n%2==0 }));
The above uses Array.filter to create a new array of just the even-valued items, and then uses Function.prototype.apply to pass the array of values as parameters to Math.min.
If you're interested in how to do that in modern JavaScript, it goes like this:
minEvenElement = Math.min.apply(Math, myArray.filter(function(e) { return !(e % 2) }))