Calculate Ratios in JS like Humble Bundle - javascript

Do you know the sliders on humblebundle.com for selecting where you want the money to go? When you adjust any one of them, the rest adjust automatically.
So say you're paying $20 no matter what, but you want to raise your tip to HB from $2 to $5; the ratios on the other items should automatically be lowered to match. I have no idea what I'm doing, though.
This is as close as I get mathematically:
var settip = 50;
var tip = 5;
var devs = 75;
var donation = 20;
tip = settip;
var newAvail = 100 - tip;
var rCalc = 100 - (devs + donation);
devs = ((devs + rCalc) * newAvail) * .01;
donation = ((donation + rCalc) * newAvail) * .01;
console.log("New Ratio Calculation: " + rCalc);
console.log("New available space: " + newAvail);
console.log(tip);
console.log(devs);
console.log(donation);
The console logs are just so I can piece together in my head where things are going wrong. The numbers are also whole numbers for now (50 instead of .5) because JavaScript floating point isn't exact and I don't want to write correction code every time; I'd rather get the logic working first and then think about optimizing.
So if anyone could suggest a method or point out where I'm going wrong here, that'd be great. Thanks.
Tip is tip to the bundle maker.
Devs is tip to the devs.
Donation is tip to the donation box.
Each number is the ratio. Settip is the new ratio. I should be able to change any one value and have all the others adjust automatically, but I can't even figure out the first part, so I couldn't begin the second part of making it actually functional.

I think this problem is not as easy as it might seem if you want to cover the edge cases. Here I assume that you are distributing money, so you need the following properties:
Each amount must be a whole number of cents
The sum of all amounts must equal the total sum
The simplest way to deal with this in JS is to do all calculations with whole numbers (e.g. the sum in cents instead of dollars) and format them in a more human-readable way in the UI. Even with this simplification it requires some non-trivial code:
function updateRates(rates, newValue, index) {
  var i, len = rates.length;
  var sum = 0;
  for (i = 0; i < len; i++)
    sum += rates[i];
  var oldValue = rates[index];
  var newRest = sum - newValue;
  var curRest = sum - rates[index];
  rates[index] = newValue;
  var remainders = new Array(len);
  var fraction, value, subsum = 0;
  for (i = 0; i < len; i++) {
    if (i === index) continue;
    // special case, all other sliders were at 0 - split value equally
    if (curRest === 0) {
      fraction = 1.0 / (len - 1);
    }
    else {
      fraction = rates[i] / curRest;
    }
    value = newRest * fraction;
    rates[i] = Math.floor(value); // always round down and then distribute rest according to the Largest remainder method
    subsum += rates[i];
    remainders[i] = {
      index: i,
      value: value - rates[i]
    };
  }
  // sort remainders and distribute rest (fractions) accordingly
  remainders.sort(function (a, b) {
    var av = a.value;
    var bv = b.value;
    if (av === bv)
      return 0;
    if (av < bv)
      return 1;
    else
      return -1;
  });
  for (i = 0; subsum < newRest; i++) {
    rates[remainders[i].index] += 1;
    subsum += 1;
  }
  return rates;
}
Some non-trivial tests:
1. updateRates([85,10,5], 82, 0) => [82, 12, 6]
2. updateRates([85,10,5], 83, 0) => [83, 11, 6]
3. updateRates([85,10,5], 84, 0) => [84, 11, 5]
4. updateRates([100,0,0], 95, 0) => [95, 2, 3]
5. updateRates([4,3,3,1], 0, 0) => [0, 5, 5, 1]
Pay attention to example #5. With naive rounding the sum would not be preserved. Effectively you need to distribute +4 in the proportion 3:3:1, which means adding +12/7, +12/7 and +4/7. Since 12/7 = 1 5/7, standard mathematical rounding would round all three up, giving +2, +2, +1, but we only have +4 cents to distribute. To fix this, the largest remainder method is used to distribute the fractional cents among the categories. Simply speaking, the idea is to first distribute only whole cents (i.e. always round down), count how many cents are actually left, and then hand them out one by one. The biggest drawback of this method is that some rates that started with equal values might have different values after an update. On the other hand, this can't be avoided, as example #4 shows: you can't split 5 cents equally between two categories.
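As a quick sanity check, you can run the tests above directly (assuming updateRates is defined as shown):
console.log(updateRates([85, 10, 5], 82, 0)); // [82, 12, 6]
console.log(updateRates([85, 10, 5], 84, 0)); // [84, 11, 5]
console.log(updateRates([4, 3, 3, 1], 0, 0)); // [0, 5, 5, 1]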

To restate what I think you want: the three variables tips, devs and donation should always sum to 100. When one variable is updated, the other two should be updated to compensate, and the automatic updates should keep the same ratio to each other (for example, if donation is double devs and tips is updated, then the updated donation value should still be double the devs value).
If I've got that right, then this should work for you:
var tips = 5;
var devs = 20;
var donation = 75;
var setTips = function(newValue) {
  tips = newValue;
  var sum = devs + donation;
  var devShare = devs / sum;               // the share devs gets between devs and donation
  var donationShare = 1 - devShare;        // or could calculate as donation / sum
  devs = (100 - tips) * devShare;          // the remaining amount times its share ratio
  donation = (100 - tips) * donationShare; // the remaining amount times its share ratio
};
// test it out
setTips(50);
console.log(tips, devs, donation);
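If you want to be able to change any of the three values, not just tips, the same idea generalizes. Here is a rough sketch (my own extension of the answer above, with an added branch for the edge case where the other two values are both 0):
var values = { tips: 5, devs: 20, donation: 75 };

// Set one key to newValue and rescale the others so the total stays at 100.
var setValue = function(key, newValue) {
  var others = Object.keys(values).filter(function(k) { return k !== key; });
  var rest = others.reduce(function(s, k) { return s + values[k]; }, 0);
  values[key] = newValue;
  others.forEach(function(k) {
    // if the others were all 0, split the remainder equally
    var share = rest === 0 ? 1 / others.length : values[k] / rest;
    values[k] = (100 - newValue) * share;
  });
};

setValue('tips', 50);
console.log(values); // { tips: 50, devs: ~10.53, donation: ~39.47 }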

Related

How to smooth elevation gain from GPS Points

I have a big problem here:
I have an array of GPS data and need to calculate several pieces of information from it. I've done everything except the total elevation gain of the route.
There's an array like [825.23423, 827.39420, 828.19319, ...] that stores the elevation of each GPS point passed, and I need to calculate the total gain.
The problem is that GPS is not very accurate and sometimes reports incorrect elevations, which throws the sum off.
I tried many ways to smooth the data, here are some:
Set a minimum value that a change must exceed before it counts toward the elevation gain
Take "sub-arrays" and average them, then sum the gains between the averages.
Take "sub-arrays" and pop/shift the outlier values before summing.
The current version mixes all of these; it got better, but it's still not even close to perfect:
const getMediaFixed = (values: number[]) => {
  values.sort((a, b) => {
    return a - b;
  });
  values.pop();
  values.shift();
  const media = values.reduce((a, b) => a + b) / values.length;
  return media;
};

let last: number = elevations[0];
let elevationSum = 0;
// let countLimit: number = 1;
for (let i = 0; i < elevations.length - 5; i += 5) {
  let media: number;
  if (i <= elevations.length - 4) {
    media = getMediaFixed([
      elevations[i],
      elevations[i + 1],
      elevations[i + 2],
      elevations[i + 3],
      elevations[i + 4],
    ]);
  } else {
    media = elevations[i];
  }
  const temp = 0.75 * last + 0.25 * media;
  if (temp - last > 0) {
    elevationSum += temp - last;
  }
  last = temp;
}
The point is that it's still not working properly.
Here's a site that calculates this perfectly from a GPX file (I couldn't find any contact to ask for help):
https://www.trackreport.net
I appreciate any ideas to improve the code!
In the end I solved it by using an exponential moving average to smooth the elevation array, and then ignoring changes of less than 1 meter to filter out the glitches!
Thanks.
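In case it helps someone, here is a minimal sketch of that approach (the smoothing factor and the exact way the 1 m cutoff is applied are illustrative guesses, not my exact production code):
// EMA smoothing, then ignore climbs below a 1 m threshold (illustrative values).
function totalElevationGain(elevations, alpha = 0.25, threshold = 1) {
  let smoothed = elevations[0];
  let gain = 0;
  for (let i = 1; i < elevations.length; i++) {
    const next = alpha * elevations[i] + (1 - alpha) * smoothed; // exponential moving average
    const delta = next - smoothed;
    if (delta > threshold) gain += delta; // skip small GPS jitter
    smoothed = next;
  }
  return gain;
}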

Generating large random numbers between an inclusive range in Node.js

So I'm very familiar with the good old
Math.floor(Math.random() * (max - min + 1)) + min;
and this works very nicely with small numbers. However, when the numbers get larger this quickly becomes biased and only returns numbers one order of magnitude below the maximum (for example, a random number between 0 and 1e100 will almost always return [x]e99; every time I've tested, so several billion times, since I used a for loop to generate lots of numbers). And yes, I waited the long time it took the program to generate that many numbers, twice. By this point, it's safe to assume that the output is always [x]e99 for all practical purposes.
So next I tried this
Math.floor(Math.pow(max - min + 1, Math.random())) + min;
and while that works perfectly for huge ranges, it breaks for small ones. So my question is: how can I do both, i.e. generate both small and large random numbers without any bias (or with bias so small it isn't noticeable)?
Note: I'm using Decimal.js to handle numbers in the range -1e2043 < x < 1e2043 but since it is the same algorithm I displayed the vanilla JavaScript forms above to prevent confusion. I can take a vanilla answer and convert it to Decimal.js without any trouble so feel free to answer with either.
Note #2: I want to even out the odds of getting large numbers. For example 1e33 should have the same odds as 1e90 in my 0-1e100 example. But at the same time I need to support smaller numbers and ranges.
Your problem is precision. That's the reason you use Decimal.js in the first place. Like every other number in JS, Math.random() supports only 53 bits of precision (some browsers even used to produce only the upper 32 bits of randomness). But your value 1e100 would need 333 bits of precision, so the lower 280 bits (about 75 of the 100 decimal places) are discarded by your formula.
But Decimal.js provides a random() method. Why don't you use that one?
function random(min, max){
  var delta = new Decimal(max).sub(min);
  return Decimal.random( +delta.log(10) ).mul(delta).add(min);
}
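For instance, a quick usage sketch (my own illustration; it assumes decimal.js is loaded and that the precision setting is raised high enough for the range):
// Assumes decimal.js is loaded and random() from above is in scope.
Decimal.set({ precision: 110 }); // illustrative: enough significant digits for values up to 1e100
var x = random(0, '1e100');
console.log(x.toFixed(0));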
Another "problem" why you get so many values with e+99 is probability. For the range 0 .. 1e100 the probabilities to get some exponent are
e+99 => 90%,
e+98 => 9%,
e+97 => 0.9%,
e+96 => 0.09%,
e+95 => 0.009%,
e+94 => 0.0009%,
e+93 => 0.00009%,
e+92 => 0.000009%,
e+91 => 0.0000009%,
e+90 => 0.00000009%,
and so on
So if you generate ten billion numbers, statistically you'll get a single value up to 1e+90. Those are the odds.
I want to even out those odds for large numbers. 1e33 should have the same odds as 1e90 for example
OK, then let's generate 10^random in the range min ... max.
function random2(min, max){
  var a = +Decimal.log10(min),
      b = +Decimal.log10(max);
  //trying to deal with zero-values.
  if (a === -Infinity && b === -Infinity) return 0; //a random value between 0 and 0 ;)
  if (a === -Infinity) a = Math.min(0, b - 53);
  if (b === -Infinity) b = Math.min(0, a - 53);
  return Decimal.pow(10, Decimal.random(Math.abs(b - a)).mul(b - a).add(a));
}
Now the exponents are pretty much uniformly distributed, but the values are a bit skewed, because 10^1 to 10^1.5 (10 .. 33) has the same probability as 10^1.5 to 10^2 (34 .. 100).
The issue with Math.random() * Math.pow(10, Math.floor(Math.random() * 100)) at smaller numbers is that Math.random() ranges over [0, 1), meaning that when calculating the exponent separately one needs to make sure the prefix ranges over [1, 10). Otherwise you want a number in [1eX, 1eX+1) but have e.g. 0.1 as the prefix and end up in [1eX-1, 1eX). Here is an example; maxExp is 10 instead of 100 for readability of the output, but it is easily adjustable.
let maxExp = 10;

function differentDistributionRandom() {
  let exp = Math.floor(Math.random() * (maxExp + 1)) - 1;
  if (exp < 0) return Math.random();
  else return (Math.random() * 9 + 1) * Math.pow(10, exp);
}

let counts = new Array(maxExp + 1).fill(0).map(e => []);
for (let i = 0; i < (maxExp + 1) * 1000; i++) {
  let x = differentDistributionRandom();
  counts[Math.max(0, Math.floor(Math.log10(x)) + 1)].push(x);
}
counts.forEach((e, i) => {
  console.log(`E: ${i - 1 < 0 ? "<0" : i - 1}, amount: ${e.length}, example: ${Number.isNaN(e[0]) ? "none" : e[0]}`);
});
You might see the category <0 here, which is hopefully what you wanted (the cutoff point is arbitrary; here [0, 1) has the same probability as [1, 10) and as [10, 100) and so on, but [0.01, 0.1) is again less likely than [0.1, 1)).
If you didn't insist on base 10, you could reinterpret the pseudorandom bits from two Math.random calls as a Float64, which would give a similar distribution in base 2:
function exponentDistribution() {
  let bits = [Math.random(), Math.random()];
  let buffer = new ArrayBuffer(24);
  let view = new DataView(buffer);
  view.setFloat64(8, bits[0]);
  view.setFloat64(16, bits[1]);
  //alternatively all at once with setInt32
  for (let i = 0; i < 4; i++) {
    view.setInt8(i, view.getInt8(12 + i));
    view.setInt8(i + 4, view.getInt8(20 + i));
  }
  return Math.abs(view.getFloat64(0));
}

let counts = new Array(11).fill(0).map(e => []);
for (let i = 0; i < (1 << 11) * 100; i++) {
  let x = exponentDistribution();
  let exp = Math.floor(Math.log2(x));
  if (exp >= -5 && exp <= 5) {
    counts[exp + 5].push(x);
  }
}
counts.forEach((e, i) => {
  console.log(`E: ${i - 5}, amount: ${e.length}, example: ${Number.isNaN(e[0]) ? "none" : e[0]}`);
});
This one is obviously bounded by the precision limits of Float64; there are some uneven parts of the distribution due to details of IEEE 754, e.g. denormals/subnormals, and I did not take care of special values like Infinity. It is rather to be seen as a fun extra, a reminder of the distribution of float values. Note that the loop does 1 << 11 (2048) times 100 iterations; 11 bits is roughly the exponent range of Float64, [-1022, 1023]. That's why in the example each bucket gets approximately that number (100) of hits.
You can create the number in chunks smaller than Number.MAX_SAFE_INTEGER, then concatenate the generated numbers into a single string:
const r = () => Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);
let N = "";
for (let i = 0; i < 10; i++) N += r();
document.body.appendChild(document.createTextNode(N));
console.log(/e/.test(N));

How do I optimally distribute values over an array of percentages?

Let's say I have the following code:
arr = [0.1,0.5,0.2,0.2]; //The percentages (or decimals) we want to distribute them over.
value = 100; //The amount of things we have to distribute
arr2 = [0,0,0,0] //Where we want how many of each value to go
Finding out how to distribute a hundred equally over the array is simple; it's a case of:
0.1 * 100 = 10
0.5 * 100 = 50
...
Or doing it using a for loop:
for (var i = 0; i < arr.length; i++) {
  arr2[i] = arr[i] * value;
}
However, let's say each counter is an object and thus has to be whole. How can I distribute them as evenly as possible over a different value? Let's say the value becomes 12.
0.1 * 12 = 1.2
0.5 * 12 = 6
...
How do I deal with the decimal when I need it to be whole? Rounding means that I could potentially not have the 12 pieces needed.
A correct algorithm would:
Take an input / iterate through an array of values (for this example we'll be using the array defined above).
Turn it into a set of whole values which, added together, equal the total value (100 for this example).
Output an array of values which, for this example, will look something like [10,50,20,20] (these add up to 100, which is what we need them to add up to, and are all whole).
If any value is not whole, it should make it whole so that the whole array still adds up to the value needed (100).
TL;DR dealing with decimals when distributing values over an array and attempting to turn them into an integer
Note - Should this be posted on a different Stack Overflow site? My need is programming, but the actual question will likely be solved using mathematics. Also, I had no idea how to word this question, which makes googling incredibly difficult. If I've missed something incredibly obvious, please tell me.
You should round each value as you assign it, using a rounding scheme that is known to distribute the rounding error uniformly. Finally, the last value is assigned differently, so that the sum still adds up to the required total.
Let's start slowly or things will get confusing. First, let's see how to assign the last value so that the total equals the desired value.
// we will need this later on
sum = 0;
// assign all values but the last
for (i = 0; i < output.length - 1; i++)
{
  output[i] = input[i] * total;
  sum += output[i];
}
// last value must honor the total constraint
output[i] = total - sum;
That last line needs some explanation. The i will be one more than the last value allowed in the for(..) loop, so it will be:
output.length - 1 // last index
The value we assign makes the sum of all elements equal to total. We already computed the sum in a single pass during the assignment of the values, and thus don't need to iterate over the elements a second time to determine it.
Next, we will approach the rounding problem. Let's simplify the above code so that it uses a function on which we will elaborate shortly after:
sum = 0;
for (i = 0; i < output.length - 1; i++)
{
  output[i] = u(input[i], total);
  sum += output[i];
}
output[i] = total - sum;
As you can see, nothing has changed but the introduction of the u() function. Let's concentrate on this now.
There are several approaches on how to implement u().
DEFINITION
u(c, total) ::= c * total
By this definition you get the same as above. It is precise and good, but as you asked before, you want the values to be natural numbers (e.g. integers). So while for real numbers this is already perfect, for natural numbers we have to round. Let's suppose we use the simple rounding rule for integers:
[ 0.0, 0.5 [ => round down
[ 0.5, 1.0 [ => round up
This is achieved with:
function u(c, total)
{
  return Math.round(c * total);
}
When you are unlucky, you may round up (or round down) so many values that the correction of the last value is not enough to honor the total constraint, and in general all values may seem to be off by too much. This is a well-known problem for which there exists a multi-dimensional solution used to draw lines in 2D and 3D space, called the Bresenham algorithm.
To make things easy I'll show you here how to implement it in 1 dimension (which is your case).
Let's first discuss a term: the remainder. This is what is left after you have rounded your numbers. It is computed as the difference between what you wish and what you really have:
DEFINITION
WISH ::= c * total
HAVE ::= Math.round(WISH)
REMAINDER ::= WISH - HAVE
Now think about it. The remainder is like the piece of paper that you discard when you cut a shape out of a sheet. That remaining paper is still there but you throw it away. Instead of doing that, just add it to the next cut-out so it is not wasted:
WISH ::= c * total + REMAINDER_FROM_PREVIOUS_STEP
HAVE ::= Math.round(WISH)
REMAINDER ::= WISH - HAVE
This way you keep the error and carry it over to the next partition in your computation. This is called amortizing the error.
Here is an amortized implementation of u():
// amortized is defined outside u because we need to have a side-effect across calls of u
function u(c, total)
{
var real, natural;
real = c * total + amortized;
natural = Math.round(real);
amortized = real - natural;
return natural;
}
On your own accord you may wish to use another rounding rule, such as Math.floor() or Math.ceil().
What I would advise you to do is use Math.floor(), because it is proven to be correct with respect to the total constraint. When you use Math.round() you will get smoother amortization, but you risk the last value not being positive. You might end up with something like this:
[ 1, 0, 0, 1, 1, 0, -1 ]
Only when ALL VALUES are far away from 0 can you be confident that the last value will also be positive. So, for the general case the Bresenham algorithm would use flooring, resulting in this last implementation:
function u(c, total)
{
  var real, natural;
  real = c * total + amortized;
  natural = Math.floor(real); // just to be on the safe side
  amortized = real - natural;
  return natural;
}

sum = 0;
amortized = 0;
for (i = 0; i < output.length - 1; i++)
{
  output[i] = u(input[i], total);
  sum += output[i];
}
output[i] = total - sum;
Obviously, the input and output arrays must have the same size and the values in input must be a partition (sum up to 1).
This kind of algorithm is very common for probabilistic and statistical computations.
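Putting it all together, a self-contained version might look like this (my own assembly of the pieces above; the function name is illustrative):
// Assembled from the snippets above: floor rounding with error amortization,
// last slot takes whatever is needed to hit the total exactly.
function distribute(input, total) {
  var output = new Array(input.length);
  var amortized = 0;
  var sum = 0;
  var i;
  for (i = 0; i < output.length - 1; i++) {
    var real = input[i] * total + amortized;
    var natural = Math.floor(real);
    amortized = real - natural;
    output[i] = natural;
    sum += natural;
  }
  // last value honors the total constraint
  output[i] = total - sum;
  return output;
}

console.log(distribute([0.1, 0.5, 0.2, 0.2], 12));  // e.g. [1, 6, 2, 3]
console.log(distribute([0.1, 0.5, 0.2, 0.2], 100)); // e.g. [10, 50, 20, 20]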
Alternate implementation - it remembers a pointer to the value with the biggest rounding remainder and, when the sum differs from 100, increments or decrements the value at this position.
const items = [1, 2, 3, 5];
const total = items.reduce((total, x) => total + x, 0);
let result = [], sum = 0, biggestRound = 0, roundPointer;

items.forEach((votes, index) => {
  let value = 100 * votes / total;
  let rounded = Math.round(value);
  let diff = value - rounded;
  if (diff > biggestRound) {
    biggestRound = diff;
    roundPointer = index;
  }
  sum += rounded;
  result.push(rounded);
});

if (sum === 99) {
  result[roundPointer] += 1;
} else if (sum === 101) {
  result[roundPointer] -= 1;
}
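For the sample items above this should produce [9, 18, 27, 46], which sums to 100:
console.log(result); // [9, 18, 27, 46]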

Javascript doesn't give correct integer output -- Project Euler 20

I'm trying to solve Project Euler Problem 20. While I think the solution is quite trivial, it turns out JavaScript doesn't give me the correct output:
var fact = 1;
for (var i = 100; i > 0; i--) {
  fact *= i;
}
var summed = 0;
while (fact > 0) {
  summed += Math.floor(fact % 10);
  fact = fact / 10;
}
console.log(summed); //587
http://jsfiddle.net/9uEFj/
Now, let's try computing 100! from the bottom (that is, 1 * 2 * 3 * ... * 100 instead of 100 * 99 * ... * 1 like before):
var fact = 1;
for (var i = 1; i <= 100; i++) {
  fact *= i;
}
var summed = 0;
while (fact > 0) {
  summed += Math.floor(fact % 10);
  fact = fact / 10;
}
console.log(summed); //659
http://jsfiddle.net/YX4bu/
What is happening here? Why does a different multiplication order give me a different result? Also, why does neither result give the correct answer to problem 20 (648)?
The product of the first 100 integers is a large number on the order of 1e158. It is handled as a floating-point number, with a consequent loss of precision. See The Floating Point Guide for a fuller explanation. The results of your two multiplications match to 15 significant figures but differ at the 16th and beyond. This is enough to throw off your final result.
To do this properly you'll need to use integer arithmetic throughout, something that is well beyond the native capabilities of JavaScript's Number type. You'll need to handle a number of 158 digits and write a multiplication routine yourself.
If you use a string format to store the number, the second part of your script just needs to total the digits.
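For what it's worth, here is a minimal sketch of that idea using BigInt, which modern JavaScript engines provide (this answer predates it): keep the factorial exact, then total the digits of its decimal string.
// Exact 100! with BigInt, then sum the digits of its decimal representation.
let fact = 1n;
for (let i = 2n; i <= 100n; i++) {
  fact *= i;
}
const digitSum = [...fact.toString()].reduce((sum, d) => sum + Number(d), 0);
console.log(digitSum); // 648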

Need help making an approximation of Euler's constant

It's very close but the last digit is off. If you can change anything here to make it better, it'd be appreciated. I'm comparing my number with Math.E to see if I'm close.
var e = (function() {
  var factorial = function(n) {
    var a = 1;
    for (var i = 1; i <= n; i++) {
      a = a * i;
    }
    return a;
  };
  for (var k = 0, b = []; k < 18; k++) {
    b.push(b.length ? b[k - 1] + 1 / factorial(k) : 1 / factorial(k));
  }
  return b[b.length - 1];
})();
document.write(e);
document.write('<br />' + Math.E);
My number: 2.7182818284590455
Math.E: 2.718281828459045
Work from higher numbers to lower numbers to minimize cancellation:
var e = 1;
for (var k = 17; k > 0; --k) {
  e = 1 + e/k;
}
return e;
Evaluating the Taylor polynomial by Horner's rule even avoids the factorial and allows you to use more terms (it won't make a difference beyond 17 terms, though).
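Wrapped up as a function for a quick comparison (my own wrapper around the loop above):
function approxE(terms) {
  var e = 1;
  for (var k = terms; k > 0; --k) {
    e = 1 + e / k;
  }
  return e;
}
console.log(approxE(17), Math.E); // compare with Math.E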
As far as I can see your number is the same as Math.E and even has a better precision.
2.7182818284590455
2.718281828459045
What is the problem after all?
With JavaScript you cannot calculate e this way due to the limited precision of JavaScript computations. See http://www.javascripter.net/faq/accuracy.htm for more info.
To demonstrate this problem, take a look at the following fiddle, which calculates e with n starting at 50000000, incrementing n by 1 every 10 milliseconds:
http://jsfiddle.net/q8xRs/1/
I like using integer values to approximate real ones.
Possible approximations of e in order of increasing accuracy are:
11/4
87/32
23225/8544
3442297523731/1266350489376
That last one is fairly accurate, equating to:
2.7182818284590452213260834432
which doesn't diverge from Wikipedia's value until the 18th digit:
2.71828182845904523536028747135266249775724709369995
So there's that, if you're interested.
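If you want to eyeball these, a quick check in the console (just an illustration):
// Evaluate each fraction in double precision and compare with Math.E.
[[11, 4], [87, 32], [23225, 8544], [3442297523731, 1266350489376]]
  .forEach(function (pair) {
    console.log(pair[0] + '/' + pair[1] + ' = ' + (pair[0] / pair[1]));
  });
console.log('Math.E = ' + Math.E);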
