Time Complexity - Bad Recursion - British Change Combinations - javascript

I recently came up with a naive (+ poor) solution to the British Change Problem (i.e. how many combinations of coins can generate a given total). I have a better solution now, but was still interested in solving the time and space complexity of the two solutions below.
Worst Solution
This solution recursively tries every coin against itself and every other coin, resulting in a lot of duplicate work. I believe it's O(n^n) time, and I'm not sure how to measure the space complexity (but it's huge, since we're storing every result). Thoughts?
var makeChange = function(total){ // in pence
  var allSets = new Set();
  var coins = [1,2,5,10,20,50,100,200];
  var subroutine = (arr, total) => {
    if(total < 0){ return; }
    if(total === 0){
      allSets.add(''+arr);
    } else {
      // increment this coin's count and recurse with the total reduced by the coin's value
      for(var i = 0; i < coins.length; i++){
        if((total - coins[i]) >= 0){
          subroutine(arr.slice(0,i).concat(arr[i]+1).concat(arr.slice(i+1)), (total - coins[i]));
        }
      }
    }
  };
  var zeros = new Array(coins.length).fill(0);
  subroutine(zeros, total);
  return allSets.size;
};
Improved Solution
This solution still has massive space complexity, but I believe the time complexity has improved to O(n!) since we're recursing on smaller subsets of coins each time.
var makeChange = function(total){ // in pence
  var allSets = new Set();
  var coins = [1,2,5,10,20,50,100,200];
  var subroutine = (arr, total, start) => {
    if(total < 0){ return; }
    if(total === 0){
      console.log(''+arr);
      allSets.add(''+arr);
    } else {
      // only solve for coins from start onward, since lower coins are already solved
      for(var i = start; i < coins.length; i++){
        if((total - coins[i]) >= 0){
          subroutine(arr.slice(0,i).concat(arr[i]+1).concat(arr.slice(i+1)), (total - coins[i]), i);
        }
      }
    }
  };
  var zeros = new Array(coins.length).fill(0);
  for(let i = 0; i < coins.length; i++){
    subroutine(zeros, total, i);
  }
  return allSets.size;
};
Please help me to understand if my time/space complexity estimates are correct, and how to better estimate future problems like these. Thanks!

The complexity of the first algorithm is not actually O(n^n). N is a variable which represents your input; here I will treat "total" as the input, so N is based on total. For your algorithm to be O(n^n), its recursion tree would have to have both a depth of N and a branching factor of N. The depth of your recursion is governed by the smallest value in your coins array: there is one branch of the tree where you simply subtract that value off every time and recurse until total reaches zero, and since that value is a constant, the depth is O(n). The branching factor, on the other hand, comes from the coins array: every call generates up to C further calls, where C is the size of the coins array. A tree of depth n with branching factor c has O(c^n) nodes, so the number of calls is O(c^n), not O(n^n). Both your time and your space complexity depend on the size of the coins array as well as on your input number.
For space, two things matter. The recursion stack is at most O(n) deep and every frame carries an array of size c, so the stack alone is O(n * c). On top of that, the Set retains every distinct combination found; since each of the c coin counts is at most n, there are at most O(n^c) combinations, each of size c, giving O(n^c * c) for the stored results.
Remember, when analyzing the complexity of functions, to take all inputs into account.
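If you want to sanity-check this empirically, here is a small sketch (not part of your original code; it only counts recursive calls and ignores the arr bookkeeping) showing how the call count of the first solution grows with total:
var countCalls = function(total){
  var calls = 0;
  var coins = [1, 2, 5, 10, 20, 50, 100, 200];
  var subroutine = function(remaining){
    calls++;
    if(remaining <= 0){ return; }
    for(var i = 0; i < coins.length; i++){
      if(remaining - coins[i] >= 0){
        subroutine(remaining - coins[i]);
      }
    }
  };
  subroutine(total);
  return calls;
};

for(var t = 5; t <= 25; t += 5){
  console.log(t, countCalls(t)); // the call count grows geometrically with total, not polynomially
}
Each extra unit of total multiplies the number of calls by a roughly constant factor, which is exactly what an O(c^n) bound predicts.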

Related

What can I use to add new value to previous value and repeat this a certain amount of times? (Javascript)

I'm trying to get an array of numbers based on a calculation that keeps adding a set amount to the previous amount until this has repeated 20 times. The initial number is negative because the client pays an initial amount of money for a solar power system, and then the calculation should subtract an amount each month based on how much the client saves by not having to pay for electricity. It needs to be an array (I think) because it needs to go into a chart. Here's a Google worksheet that might make what I'm trying to do more clear. The part of the sheet that is relevant to my question is in columns T and U, in pink.
I did a tonne of reading on loops and on different array methods (reduce and map). I'm new to this, so it didn't seem like any of those would do what I need done. I found the code below somewhere and it seemed like the closest to what I need to happen, but I could be completely off track (my adjusted version is further down):
// program to generate fibonacci series up to n terms
// take input from the user
const number = parseInt(prompt('Enter the number of terms: '));
let n1 = 0, n2 = 1, nextTerm;
console.log('Fibonacci Series:');
for (let i = 1; i <= number; i++) {
  console.log(n1);
  nextTerm = n1 + n2;
  n1 = n2;
  n2 = nextTerm;
}
I tried to adjust it to get it to do what I need, but in the console it shows one number and then the rest is NaN. I know this means "not a number", but I don't know why or how to fix it:
function runningNetProfit(n) {
  var profitSequence = [0];
  var nextYear = (monthlyEstimatedSavings * 12);
  for (var i = negSystemCost; i < n - 1; i++) {
    profitSequence.push(nextYear);
    nextYear = nextYear + profitSequence[i];
  }
  return profitSequence;
}
console.log(runningNetProfit(20));
I added all of my code to a CodePen as well, which might make my question clearer; it can be found here. The JavaScript relevant to this question is right at the bottom, from line 145. Any advice would be much appreciated.
See if this code works for you:
It takes a given installation cost, the price they pay for electricity, and the number of months, then spits out an array with these numbers.
const installCost = 250000;
const electricityCost = 45000;
const numMonths = 20;
const newArray = Array.from({length: numMonths})
const updatedArray = newArray.map((_, index) => index * electricityCost - installCost);
console.log(updatedArray) // returns [-250000,-205000,-160000,-115000,-70000,-25000,20000,65000,110000,155000,200000,245000,290000,335000,380000,425000,470000,515000,560000,605000]
Here's the code sandbox for it: https://codesandbox.io/s/blue-pine-l5rjc8?file=/src/index.js
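If a plain loop reads more naturally to you than map, here is a sketch that mirrors the running-total idea from the question (the variable names here are made up; adjust them to your own):
const installCost = 250000;        // paid up front, so the running profit starts negative
const monthlySaving = 45000;       // amount saved each month by not buying electricity
const numMonths = 20;

const runningNetProfit = [];
let total = -installCost;
for (let month = 0; month < numMonths; month++) {
  runningNetProfit.push(total);    // record the value for this month
  total += monthlySaving;          // then add one month's saving for the next entry
}
console.log(runningNetProfit);     // same 20 values as the map-based version above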

Is this JS Unique ID Generator Unreliable? (Getting collisions)

I am using the following JS function to generate unique IDs, which I got from another StackOverflow thread:
function generateUniqueID() {
  return Math.round(new Date().getTime() + (Math.random() * 100));
}
I see that it combines the current date/time with an additional random component.
Nonetheless, I've verified that I'm getting collisions roughly every 4th or 5th time I quickly add items with IDs.
The function is called inside a JS loop to generate IDs from the list of current elements.
jQuery.each(mainEvents, function(index, item) {
  // ...
  // Generate gaps
  gapEvents.push({"gapEventID" : "event-GAP" + generateUniqueID(),
                  "other" : other });
});
Is this function unreliable? Could it allow collisions in quick JS loop iterations?
I've pretty much ruled out outside causes (i.e. that something other than this function is the culprit), but even so, I can't understand why Math.random() wouldn't keep me safe.
Very much so. You can use new Date().getTime() to get a unique id only under the assumption that each iteration takes longer than 1 ms; as you can tell from your data, that is false, and you will get repeated timestamps whenever the interval between calls is under 1 ms. Combined with a random component that gets rounded into a range of only about 100 values, it's very possible to get repeated results. If you want unique IDs based around an RNG, I'd say just scaling Math.random() up to 10^15 is a better choice: values below 10^15 always stay within Number.MAX_SAFE_INTEGER.
Math.floor(Math.random() * Math.pow(10, 15))
Is this function unreliable?
In my opinion it really is.
In fact, new Date().getTime() is an integer that increases by 1 each millisecond, while Math.random() * 100 is a pseudo-random number in the range 0 to 99.
So their sum can easily repeat if the function is called many times in rapid succession.
If the function is called twice within the same millisecond, it becomes quite likely to produce the same number twice; each such collision has roughly a 1/100 probability. (These are pretty much the results I'm getting: generating a list of 10,000 ids in about 1 second with that function gives ~100 duplicates, which is consistent as an order of magnitude.)
A random value is never a unique value. Even with a timestamp you can't guarantee that the outcome is 100% unique; however, you can minimize the risk by generating a larger (random) value.
Using a timestamp, however, covers you when there are multiple pushes at the same time. Adding an extra random component creates an almost-unique value which, in most use cases, is unique.
I'd suggest making that random value longer, though, or creating a GUID instead.
Create GUID / UUID in JavaScript?
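As a side note, if your environment supports the Web Crypto API, it hands you far more random bits than Math.random() * 100 ever will. A minimal sketch (browser-oriented; in Node.js, randomUUID is also available from the built-in crypto module):
// A proper v4 UUID, practically collision-free:
const id = crypto.randomUUID(); // e.g. "36b8f84d-df4e-4d49-b662-bcde71a8764f"

// Or, if a numeric-looking ID is preferred, combine the timestamp with 64 random bits:
const buf = new Uint32Array(2);
crypto.getRandomValues(buf);
const id2 = Date.now() + '-' + buf[0] + '-' + buf[1];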
Based on the responses, the following compares the suggested methods. But I think I'm going in the wrong direction for my needs; I will be using an ID/sequence on the server side to ensure uniqueness.
function run() {
  var nums1 = new Set(), nums2 = new Set(), nums3 = new Set();
  for (var i = 0; i < 10000; i++) {
    nums1.add(originalMethod());
  }
  for (var i = 0; i < 10000; i++) {
    nums2.add(concatMethod());
  }
  for (var i = 0; i < 10000; i++) {
    nums3.add(random10To18thMethod());
  }
  console.clear();
  console.log('Original Method set: ' + nums1.size);
  console.log('Concat Method set: ' + nums2.size);
  console.log('Math.Random 10^18 set: ' + nums3.size);

  function originalMethod() {
    return Math.round(new Date().getTime() + (Math.random() * 100));
  }
  function concatMethod() {
    return Math.round(new Date().getTime() + '' + (Math.random() * 100));
  }
  function random10To18thMethod() {
    return Math.random() * Math.pow(10, 18);
  }
}
<button onclick="run()">Run Algorithms</button>

Timestamp with Random Number in JS

This is a follow-up question to the one I asked previously, Is this JS Unique ID Generator Unreliable? (Getting collisions).
In the scriptlet below I'm generating 10000 random numbers using three methods. Method [1] is a straight random number up to 10^6, Method [2] concatenates that same kind of random number up to 10^6 with the current JS Date().getTime() timestamp, and Method [3] applies Math.round only to the RNG part rather than to the whole concatenated result.
My question is: if you keep clicking the test button, you see that [1] always produces 10000 unique numbers, but [2] produces ~9500 no matter what, and [3] produces ~9900 but never the maximum either. Why is that? The chance of getting a +/-1 shift in a previous random number in [0..10^6] combined with exactly the opposite +/-1 shift in the timestamp part seems practically nil. We are generating pretty much on the same millisecond in a loop, and 10^6 is a huge limit, much bigger than in my original question; we know that's enough because Method [1] works perfectly.
Is there truncation of some kind going on, which trims the string and makes it more likely to get duplicates? Paradoxically, a smaller string works better than a larger string built with the same RNG. But if there's no truncation, I would expect the results to be 100% unique, as in [1].
function run() {
  var nums1 = new Set(), nums2 = new Set(), nums3 = new Set();
  for (var i = 0; i < 10000; i++) {
    nums1.add(random10to6th());
  }
  for (var i = 0; i < 10000; i++) {
    nums2.add(random10to6th_concatToTimestamp());
  }
  for (var i = 0; i < 10000; i++) {
    nums3.add(random10to6th_concatToTimestamp_roundRNGOnly());
  }
  console.clear();
  console.log('Random 10^6 Unique set: ' + nums1.size);
  console.log('Random 10^6 and Concat to Date().time() Unique set: ' + nums2.size);
  console.log('Random 10^6 and Concat to Date().time(), Round RNG Only Unique set: ' + nums3.size);

  function random10to6th() {
    return Math.random() * Math.pow(10, 6);
  }
  function random10to6th_concatToTimestamp() {
    return Math.round(new Date().getTime() + '' + (Math.random() * Math.pow(10, 6)));
  }
}

function random10to6th_concatToTimestamp_roundRNGOnly() {
  return new Date().getTime() + '' + Math.round(Math.random() * Math.pow(10, 6));
}
<button onclick="run()">Run Algorithms</button>
<p>(Keep clicking this button)</p>
Is there truncation of some kind going on, which trims the string and makes it more likely to get duplicates?
Yes, simply by rounding a random number, you cut off the fractional digits. This reduces the number of possible outcomes compared to the non-rounded random number.
In addition to that, you concatenate a timestamp (13 digits) with a value between 0 and 1000000 (1 to 7 digits). So your concatenated result will have a total number of 14 to 20 digits, but JavaScript's number datatype is of limited precision and represents integers faithfully up to about 16 digits only (see Number.MAX_SAFE_INTEGER).
Example: Let's assume the timestamp is 1516388144210 and you append random numbers from 500000 to 500400:
+'1516388144210500000' == 1516388144210500000
+'1516388144210500100' == 1516388144210500000
+'1516388144210500200' == 1516388144210500000
+'1516388144210500300' == 1516388144210500400
+'1516388144210500400' == 1516388144210500400
You can see that, when converting those strings to numbers, they get rounded to the nearest available IEEE-754 double-precision (64 bit) number. This is because 1516388144210500000 > Number.MAX_SAFE_INTEGER.
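You can check this precision limit directly in the console (a quick sketch, not part of the original answer):
console.log(Number.MAX_SAFE_INTEGER);                      // 9007199254740991, about 9.0e15
console.log(Number.isSafeInteger(1516388144210500000));    // false - 19 digits is past the safe range
console.log(1516388144210500000 === 1516388144210500100);  // true - both literals round to the same double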
I think there are a number of issues in play here. I don't know which of the items below, or to what degree each, contributes to the observed difference; they are simply things that might explain the results.
First, you're concatenating a number with a string with a number and then coercing the value back to a number as part of rounding the result. It's very easy to feed unexpected values into Math.round that way (which in itself might cause collisions, due to the floating-point precision issues outlined below).
Second, I think you actually reduce the randomness of the resulting number when you concatenate the timestamp. The function is likely to be called many, many times every second; if it's invoked more than once per millisecond (the resolution of Date.getTime()), the timestamp part will be identical to one generated in a previous loop iteration.
Third, unless I missed something, have you considered that the random number generator is only guaranteed to be pseudo-random? Precision and digit limits play a factor when dealing with large values like those in the code you posted. Since the most random part of the number is tacked onto the least significant end, it is the part most likely to be truncated, chopped, or modified.
Try inverting your concatenation and see the results (there are only about 4 or so collisions). The remaining collisions are accounted for by the reasons outlined in my answer and in le_m's.
function run() {
  var nums1 = new Set(), nums2 = new Set();
  for (var i = 0; i < 10000; i++) {
    nums1.add(random10to6th());
  }
  for (var i = 0; i < 10000; i++) {
    nums2.add(random10to6th_concatToTimestamp());
  }
  console.clear();
  console.log('Random 10^6 Unique set: ' + nums1.size);
  console.log('Random 10^6 and Concat to Date().time() Unique set: ' + nums2.size);

  function random10to6th() {
    return Math.random() * Math.pow(10, 6);
  }
  function random10to6th_concatToTimestamp() {
    return Math.round((Math.random() * Math.pow(10, 6)) + '' + new Date().getTime());
  }
}
<button onclick="run()">Run Algorithms</button>
<p>(Keep clicking this button)</p>

Find number of pairs with difference larger than or equal to given number

I have an array/dict (HashMap) of positive integers.
I need to find the number of pairs that have an absolute difference greater than or equal to a given number, K.
import random
import time

# given number
k = 4

# List of 200,000 random numbers in range 0-1000
strength = [random.randrange(0, 1000) for x in range(200000)]
strength.sort()

# start clock (time.clock was removed in Python 3.8; perf_counter is the modern equivalent)
start1 = time.perf_counter()

n = len(strength)
# count keeps track of number of pairs found
count = 0
for x in range(n):
    for y in range(x, n):
        if abs(strength[x] - strength[y]) >= k:
            # if found, all numbers from this point to the end will satisfy
            count += n - y
            # so no need to go to the end
            break

end1 = time.perf_counter()
print(count)
print(end1 - start1)
All the answers I can find are for pairs with difference less than or equal to a given number.
I need to find the number of pairs that have an absolute difference greater than or equal to a given number, K.
Note that the total number of pairs is n * (n - 1) / 2, so if you can count the pairs with difference strictly less than K, the number of pairs with difference greater than or equal to K is just n * (n - 1) / 2 - num_pairs_with_diff_less_than_K.
The solution you provide is also correct (and well documented). If your question is how to adapt it to your case, then all you need to do is use the (sorted) values of your HashMap instead of the strength array.
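To make the complement trick concrete, here is a sketch in JavaScript (to match the other answer below): sort the values, count the pairs with difference strictly less than K using a sliding window, and subtract from n * (n - 1) / 2. The function name and the example data are mine, not from the question.
function pairsWithDiffAtLeast(values, k) {
  const a = [...values].sort((x, y) => x - y);
  const n = a.length;
  let close = 0;                              // pairs with difference strictly less than k
  let left = 0;
  for (let right = 0; right < n; right++) {
    while (a[right] - a[left] >= k) left++;   // shrink window until every diff inside is < k
    close += right - left;                    // each element left in the window pairs with a[right]
  }
  return n * (n - 1) / 2 - close;
}

console.log(pairsWithDiffAtLeast([1, 3, 8, 10], 5)); // 4 pairs: (1,8), (1,10), (3,8), (3,10)
This runs in O(n log n) for the sort plus O(n) for the window, instead of the nested O(n^2)-style loops.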
You can get the 2 item combinations of the array and then filter / reduce them according to the difference.
One might do the job in JavaScript as follows:
Array.prototype.combinations = function(n){
  return this.reduce((p,c,i,a) => p.concat(n > 1 ? a.slice(i+1).combinations(n-1).map(e => (e.push(c),e))
                                                 : [[c]]), []);
};

function getAcordingToDiff(a,d){
  return a.combinations(2)
          .reduce((p,c) => Math.abs(c[0]-c[1]) >= d ? (p.push(c),p) : p, []);
}

var arr = Array(30).fill().map((_,i) => i+1); // array of [1,...,30]
console.log(JSON.stringify(arr));
console.log(JSON.stringify(getAcordingToDiff(arr,25))); // diff >= 25
Explanation:
At the heart of the above code obviously lies the Array.prototype.combinations function. For those who are not familiar with JS, this is just an ordinary function that we define on the Array object's prototype (so that every array now has access to it, as arr.combinations(n)). But let's make it more readable and refactor the combinations array method into a standalone generic function.
function combinations(a, n){
  var sa;
  return a.reduce(function(p,c,i,a){
    if (n > 1) sa = combinations(a.slice(i+1), n-1).map(e => (e.push(c),e));
    else sa = [[c]];
    return p.concat(sa);
  }, []);
}
As you will notice, combinations(a, n) is a recursive function which takes an array a and an item count n. It works by keeping the first item of the input array and recursively invoking itself with a one-item-shorter array, combinations(a.slice(i+1), n-1), and a decremented item count, until n reaches 1, at which point it starts its return cycle with whatever remains of the input array, each item wrapped in an array: sa = [[c]].
On the return cycle of the recursive calls we take the resulting array and push the kept first element (remember: it works by keeping the first item of the input array) into each item of the returned array (remember: each item is wrapped in an array, sa = [[c]]).
That's it; you should be able to figure out the remaining details yourself.
However, in our application we are given an array of numbers and asked to obtain only the 2-item combinations with a certain difference. In this particular case we don't need to calculate all combinations and then filter them; we can filter while constructing the combinations. As the required difference value d gets bigger, this brings a huge gain over the filter-afterwards approach, since we eliminate more and more of the two-item combinations even before generating them. So let's hard-wire the code to work with 2 items only and merge everything into a single function. The performance results are below:
function getCombosWithDiff(a, d, n = 2){
  var sa;
  return a.reduce(function(p,c,i,a){
    if (n > 1) sa = getCombosWithDiff(a.slice(i+1), d, n-1).reduce((r,e) => Math.abs(e[0]-c) > d ? (e.push(c),r.push(e),r)
                                                                                                : r, []);
    else sa = [[c]];
    return p.concat(sa);
  }, []);
}

var arr = Array(100).fill().map((_,i) => i+1);
var result = getCombosWithDiff(arr, 89);
console.log(JSON.stringify(arr));
console.log(JSON.stringify(result));
So that's it. I tried the above code to list the 2-item combinations, each with a diff greater than 10, from an array of 1000 items. It takes about 5000 ms in Chrome and 14000 ms in FF. However, as mentioned above, the bigger the diff value d gets, the less time it takes: e.g. the same array with diff 900 resolves in just 1100 ms in Chrome and 4000 ms in FF.
You can test and play here
Create a 1001-element array A of integers initialized to zeroes. Generate your random integers, and increment the appropriate index by 1 for each such integer. (With some math you could do this without generating the 200,000 random integers, but it's not worth the complexity.)
Create a second 1001-element integer array B s.t. B[i] = A[0] + ... + A[i].
The answer is then the sum from i = 0 to 1000 - k of A[i] * (200,000 - B[i + k - 1]): each element of value i pairs with every element whose value is at least i + k.
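A sketch of that counting approach, in JavaScript for consistency with the answer above (the names and the small test array are mine; it assumes k >= 1 and values in the range 0..maxValue):
function countPairsWithDiffAtLeast(values, k, maxValue = 1000) {
  const counts = new Array(maxValue + 1).fill(0);  // A[v] = how many times value v occurs
  for (const v of values) counts[v]++;

  const prefix = new Array(maxValue + 1).fill(0);  // B[i] = how many elements are <= i
  prefix[0] = counts[0];
  for (let i = 1; i <= maxValue; i++) prefix[i] = prefix[i - 1] + counts[i];

  const n = values.length;
  let pairs = 0;
  for (let i = 0; i + k <= maxValue; i++) {
    pairs += counts[i] * (n - prefix[i + k - 1]); // pair each element of value i with everything >= i + k
  }
  return pairs;
}

console.log(countPairsWithDiffAtLeast([1, 3, 8, 10], 5)); // 4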

How to efficiently check a list item with all other list items?

Is there any way to optimize this method of searching?
for (var i = 0; i < dots.length; i++) {
  var blist = [];
  for (var n = 0; n < dots.length; n++) {
    if (dots[n][1] > (dots[i][1] - 90)
     && dots[n][1] < (dots[i][1] + 90)
     && dots[n][2] > (dots[i][2] - 90)
     && dots[n][2] < (dots[i][2] + 90)) {
      if (!(n === i)) blist.push(n);
    }
  }
}
dots[x][1] is the x-coordinate and dots[x][2] is the y-coordinate.
I have 1000 dots and need to find the dots surrounding each dot, which means the condition
if (dots[n][1] > (dots[i][1] - 90)
 && dots[n][1] < (dots[i][1] + 90)
 && dots[n][2] > (dots[i][2] - 90)
 && dots[n][2] < (dots[i][2] + 90))
runs about a million times a second. Is there a way to optimize this?
Perhaps try using a data structure for your dots like this
var Dot = function(){
  var x = 0;
  var y = 0;
  var Up;
  var Right;
  var Left;
  var Down;

  function init(xVal, yVal){
    x = xVal;
    y = yVal;
  }

  function GetUp(){
    return Up;
  }

  function SetUp(UpDot){
    Up = UpDot;
  }

  // Note: the opening brace must stay on the same line as `return`,
  // otherwise automatic semicolon insertion makes this function return undefined.
  return {
    init: init,
    GetUp: GetUp,
    SetUp: SetUp
  };
};
and then use it like this
var Dots = [];
var firstDot = new Dot();
Dots.push(firstDot);
var secondDot = new Dot();
secondDot.init(0,90);
secondDot.SetUp(firstDot);
Dots.push(secondDot);
Obviously, more would need to be added and configured to match your situation. However, what this would allow you to do is iterate through the dots and check whether a near dot exists, making the time O(n) instead of O(n^2) and thus saving you roughly 900,000 checks.
One way to cut your time in half would be not to double-check each pair:
for (var i = 0, len = dots.length; i < len - 1; i++) {
  var blist = [];
  for (var n = i + 1; n < len; n++) {
    if (dots[n][1] > (dots[i][1] - 90)
     && dots[n][1] < (dots[i][1] + 90)
     && dots[n][2] > (dots[i][2] - 90)
     && dots[n][2] < (dots[i][2] + 90)) {
      blist.push(i);
      blist.push(n);
    }
  }
}
Note the change in loop boundaries. This allows me to check each pair only once and skip the (n === i) check.
I also cache dots.length; probably not a big deal, but worth doing in a tight loop.
Still, that should be an improvement of more than 50%. While that could help, it's not the order-of-magnitude change that might be required for this sort of issue.
Here's a sketch of a solution. It may be the same idea TravisJ was suggesting, although that's not clear to me. It really is only a sketch, and would take significant code to implement.
If you partition your space into 90 unit x 90 unit sections, then a dot in a particular section can only be close enough to a dot in that section or to a dot in one of that section's eight neighbors. This could significantly reduce the number of pairs you have to compare. The cost, of course, is algorithmic complexity:
First create a data structure to represent your grid sections. They can probably be represented just by top-left corners, since their heights and widths would be fixed at 90, except maybe at the trailing edges, where it probably wouldn't matter. Assuming a rectangular surface, each one could have three, five, or eight neighbors (corners, edges, inner sections respectively).
Loop through your dots, determining which section they live in. If your total grid starts at 0, this should be relatively straightforward, using some Math.floor(something / 90) operations.
For each section, run the loop above on itself and each of its neighbors to find the set of matches. You can use the shortened version of the loop from my earlier answer.
For a further optimization, you can also reduce the number of neighbors to check. If Section3,7 does a comparison with Section3,8, then there is no reason for Section3,8 to also do the comparison with Section3,7. So you check only a certain subset of the neighbors, say those whose x and y components of their section numbers are greater than or equal to their own.
I have not tested this, except in my head. It should work, but I have not tried to write any code. And the code would not be trivial. I don't think it's weeks of work, but it's not something to whip together in a few minutes either.
I believe it could significantly increase the speed, but that will depend upon how many matches there are and how many dots there are relative to the number of sections.
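For what it's worth, here is a minimal sketch of that bucketing idea (the dots[i][1]/dots[i][2] coordinate layout and the 90-unit box come from the question; the function name, the Map of buckets, and the return shape are my own assumptions):
function findNeighbors(dots, radius) {
  radius = radius || 90;
  var buckets = new Map(); // "cx,cy" -> indices of dots whose coordinates fall in that 90x90 cell

  for (var i = 0; i < dots.length; i++) {
    var key = Math.floor(dots[i][1] / radius) + ',' + Math.floor(dots[i][2] / radius);
    if (!buckets.has(key)) buckets.set(key, []);
    buckets.get(key).push(i);
  }

  // For each dot, only its own cell and the 8 surrounding cells can contain dots within the box.
  return dots.map(function(dot, i) {
    var cx = Math.floor(dot[1] / radius);
    var cy = Math.floor(dot[2] / radius);
    var near = [];
    for (var dx = -1; dx <= 1; dx++) {
      for (var dy = -1; dy <= 1; dy++) {
        var cell = buckets.get((cx + dx) + ',' + (cy + dy));
        if (!cell) continue;
        for (var c = 0; c < cell.length; c++) {
          var n = cell[c];
          if (n !== i
           && dots[n][1] > dot[1] - radius && dots[n][1] < dot[1] + radius
           && dots[n][2] > dot[2] - radius && dots[n][2] < dot[2] + radius) {
            near.push(n);
          }
        }
      }
    }
    return near;
  });
}
Each dot is now only compared against the dots in at most nine cells, so the work scales with local density rather than with the square of the total dot count.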
