Generating large random numbers between an inclusive range in Node.js - javascript

So I'm very familiar with the good old
Math.floor(Math.random() * (max - min + 1)) + min;
and this works very nicely with small numbers; however, when the numbers get larger it quickly becomes biased and only returns numbers one order of magnitude below the maximum (for example, a random number between 0 and 1e100 almost always returned [x]e99, every single time in my tests, which covered several billion samples generated in a for loop). And yes, I waited the long time it took the program to generate that many numbers, twice. By this point it is safe to assume that the output is always [x]e99 for all practical purposes.
So next I tried this
Math.floor(Math.pow(max - min + 1, Math.random())) + min;
and while that works perfectly for huge ranges it breaks for small ones. So my question is: how can I do both, i.e. generate both small and large random numbers without any bias (or with bias so minimal that it isn't noticeable)?
Note: I'm using Decimal.js to handle numbers in the range -1e2043 < x < 1e2043 but since it is the same algorithm I displayed the vanilla JavaScript forms above to prevent confusion. I can take a vanilla answer and convert it to Decimal.js without any trouble so feel free to answer with either.
Note #2: I want to even out the odds of getting large numbers. For example 1e33 should have the same odds as 1e90 in my 0-1e100 example. But at the same time I need to support smaller numbers and ranges.

Your problem is precision. That's the reason you use Decimal.js in the first place. Like every other Number in JS, Math.random() provides only 53 bits of precision (some browsers even used to fill only the upper 32 bits with randomness). But your value 1e100 would need 333 bits of precision, so the lower ~280 bits (~75 decimal places out of 100) are discarded by your formula.
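You can see the size of that gap by measuring the spacing between adjacent doubles near 1e100 (a quick sketch, not from the answer itself; nextUp is just a helper name for this illustration):
function nextUp(x) {
  // reinterpret the positive double as a 64-bit integer and add 1
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  view.setBigUint64(0, view.getBigUint64(0) + 1n);
  return view.getFloat64(0);
}
console.log(nextUp(1e100) - 1e100); // ≈ 1.94e+84, i.e. ~2^280: the smallest step near 1e100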
But Decimal.js provides a random() method. Why don't you use that one?
function random(min, max) {
  var delta = new Decimal(max).sub(min);
  return Decimal.random(+delta.log(10)).mul(delta).add(min);
}
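Hypothetical usage of the function above (a sketch assuming decimal.js is installed and configured with enough precision for the target range):
const Decimal = require('decimal.js');
Decimal.set({ precision: 2100 }); // plenty of digits for the -1e2043 .. 1e2043 range
console.log(random(0, '1e100').toString());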
Another "problem" why you get so many values with e+99 is probability. For the range 0 .. 1e100 the probabilities to get some exponent are
e+99 => 90%,
e+98 => 9%,
e+97 => 0.9%,
e+96 => 0.09%,
e+95 => 0.009%,
e+94 => 0.0009%,
e+93 => 0.00009%,
e+92 => 0.000009%,
e+91 => 0.0000009%,
e+90 => 0.00000009%,
and so on
So if you generate ten billion numbers, statistically you'll get a single value up to 1e+90. Those are the odds.
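Those percentages can be recomputed with a quick loop (a sketch): numbers with exponent e occupy [10^e, 10^(e+1)), i.e. 9 * 10^e out of the 10^100 possible values.
for (let e = 99; e >= 90; e--) {
  console.log(`e+${e}: ${90 / 10 ** (99 - e)}%`); // 90%, 9%, 0.9%, ...
}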
I want to even out those odds for large numbers. 1e33 should have the same odds as 1e90 for example
OK, then let's generate 10^random in the range min ... max.
function random2(min, max) {
  var a = +Decimal.log10(min),
      b = +Decimal.log10(max);
  // trying to deal with zero-values
  if (a === -Infinity && b === -Infinity) return 0; // a random value between 0 and 0 ;)
  if (a === -Infinity) a = Math.min(0, b - 53);
  if (b === -Infinity) b = Math.min(0, a - 53);
  return Decimal.pow(10, Decimal.random(Math.abs(b - a)).mul(b - a).add(a));
}
Now the exponents are pretty much uniformly distributed, but the values are a bit skewed, because 10^1 to 10^1.5 (10 .. 33) has the same probability as 10^1.5 to 10^2 (34 .. 100).

The issue with Math.random() * Math.pow(10, Math.floor(Math.random() * 100)) at smaller numbers is that Math.random() ranges over [0, 1), meaning that when calculating the exponent separately you need to make sure the prefix ranges over [1, 10). Otherwise you want a number in [1eX, 1e(X+1)) but have e.g. 0.1 as the prefix and end up in [1e(X-1), 1eX). Here is an example; maxExp is 10 instead of 100 for readability of the output, but it is easily adjustable.
let maxExp = 10;

function differentDistributionRandom() {
  let exp = Math.floor(Math.random() * (maxExp + 1)) - 1;
  if (exp < 0) return Math.random();
  else return (Math.random() * 9 + 1) * Math.pow(10, exp);
}

let counts = new Array(maxExp + 1).fill(0).map(e => []);
for (let i = 0; i < (maxExp + 1) * 1000; i++) {
  let x = differentDistributionRandom();
  counts[Math.max(0, Math.floor(Math.log10(x)) + 1)].push(x);
}
counts.forEach((e, i) => {
  console.log(`E: ${i - 1 < 0 ? "<0" : i - 1}, amount: ${e.length}, example: ${Number.isNaN(e[0]) ? "none" : e[0]}`);
});
You might see the category <0 here, which is hopefully what you wanted (the cutoff point is arbitrary; here [0, 1) has the same probability as [1, 10), as [10, 100), and so on, but [0.01, 0.1) is again less likely than [0.1, 1)).
If you didn't insist on base 10 you could reinterpret the pseudorandom bits from two Math.random calls as Float64 which would give a similar distribution, base 2:
function exponentDistribution() {
  let bits = [Math.random(), Math.random()];
  let buffer = new ArrayBuffer(24);
  let view = new DataView(buffer);
  view.setFloat64(8, bits[0]);
  view.setFloat64(16, bits[1]);
  // alternatively all at once with setInt32
  for (let i = 0; i < 4; i++) {
    view.setInt8(i, view.getInt8(12 + i));
    view.setInt8(i + 4, view.getInt8(20 + i));
  }
  return Math.abs(view.getFloat64(0));
}

let counts = new Array(11).fill(0).map(e => []);
for (let i = 0; i < (1 << 11) * 100; i++) {
  let x = exponentDistribution();
  let exp = Math.floor(Math.log2(x));
  if (exp >= -5 && exp <= 5) {
    counts[exp + 5].push(x);
  }
}
counts.forEach((e, i) => {
  console.log(`E: ${i - 5}, amount: ${e.length}, example: ${Number.isNaN(e[0]) ? "none" : e[0]}`);
});
This one is obviously bounded by the precision limits of Float64; there are some uneven parts of the distribution due to details of IEEE 754 (e.g. denormals/subnormals), and I did not take care of special values like Infinity. It is rather to be seen as a fun extra, a reminder of how float values are distributed. Note that the loop runs (1 << 11) * 100 times; 1 << 11 (2048) roughly matches the exponent range of Float64 (11 bits, [-1022, 1023]). That's why in the example each bucket gets approximately 100 hits.

You can create the number in increments smaller than Number.MAX_SAFE_INTEGER, then concatenate the generated numbers into a single string:
const r = () => Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);
let N = "";
for (let i = 0; i < 10; i++) N += r();
document.body.appendChild(document.createTextNode(N));
console.log(/e/.test(N));

Related

Programmatically solving Sam Loyd's Battle of Hastings puzzle - performance issues with BigInt

I'm having performance issues when trying to check whether integer n is a perfect square (sqrt is a whole number) when using BigInt. Using normal numbers below Number.MAX_SAFE_INTEGER gives reasonable performance, but attempting to use BigInt even with the same number range causes a huge performance hit.
The program solves the Battle of Hastings perfect square riddle put forth by Sam Loyd whereby my program iterates over the set of real numbers n (in this example, up to 7,000,000) to find instances where variable y is a whole number (perfect square). I'm interested in the original square root of one of the 13 perfect squares where this condition is satisfied, which is what my code generates (there's more than one).
Assuming y^2 < Number.MAX_SAFE_INTEGER which is 2^53 – 1, this can be done without BigInt and runs in ~60ms on my machine:
const limit = 7_000_000;
var a = [];
console.time('regular int');
for (let n = 1; n < limit; n++) {
  if (Math.sqrt(Math.pow(n, 2) * 13 + 1) % 1 === 0)
    a.push(n);
}
console.log(a.join(', '));
console.timeEnd('regular int');
Being able to use BigInt would mean I could test numbers much higher than the inherent Number limit of 2^53 - 1, but BigInt seems inherently slower; unusably so. To test whether a BigInt is a perfect square, I have to use a third-party library, as Math.sqrt doesn't exist for BigInt; an integer sqrt only returns the floored root, so I also need a way to check that the root is exact. I adapted functions for this from the NodeJS libraries bigint-isqrt and bigint-is-perfect-square.
Thus, using BigInt with the same limit of 7,000,000 runs 35x slower:
var integerSQRT = function(value) {
  if (value < 2n)
    return value;
  if (value < 16n)
    return BigInt(Math.sqrt(Number(value)) | 0);
  let x0, x1;
  if (value < 4503599627370496n)
    x1 = BigInt(Math.sqrt(Number(value)) | 0) - 3n;
  else {
    let vlen = value.toString().length;
    if (!(vlen & 1))
      x1 = 10n ** (BigInt(vlen / 2));
    else
      x1 = 4n * 10n ** (BigInt((vlen / 2) | 0));
  }
  do {
    x0 = x1;
    x1 = ((value / x0) + x0) >> 1n;
  } while ((x0 !== x1 && x0 !== (x1 - 1n)));
  return x0;
}
function perfectSquare(n) {
  // Divide n by 4 while divisible
  while ((n & 3n) === 0n && n !== 0n) {
    n >>= 2n;
  }
  // So, for now n is not divisible by 2
  // The only possible residual modulo 8 for such n is 1
  if ((n & 7n) !== 1n)
    return false;
  return n === integerSQRT(n) ** 2n;
}
const limit = 7_000_000;
var a = [];
console.time('big int');
for (let n = 1n; n < limit; n++) {
  if (perfectSquare(((n ** 2n) * 13n) + 1n))
    a.push(n);
}
console.log(a.join(', '));
console.timeEnd('big int');
Ideally I'm interested in doing this with a much higher limit than 7 million, but I'm unsure whether I can optimise the BigInt version any further. Any suggestions?
You may be pleased to learn that some recent improvements on V8 have sped up the BigInt version quite a bit; with a recent V8 build I'm seeing your BigInt version being about 12x slower than the Number version.
A remaining challenge is that implementations of BigInt-sqrt are typically based on Newton iteration and hence need an estimate for a starting value, which should be near the final result, so about half as wide as the input, which is given by log2(X) or bitLength(X). Until this proposal gets anywhere, that can best be done by converting the BigInt to a string and taking that string's length, which is fairly expensive.
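A minimal sketch of that string-length estimate (roughSqrtEstimate is a made-up name; the integerSQRT in the question does essentially this for large inputs):
function roughSqrtEstimate(n) {                // n is a BigInt
  const digits = n.toString().length;          // ~log10(n); the expensive part
  return 10n ** BigInt(Math.ceil(digits / 2)); // >= sqrt(n), close enough to start Newton's iteration
}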
To get faster right now, @Ouroborus' idea is great. I was curious how fast it would be, so I implemented it:
(function betterAlgorithm() {
  const limit = 7_000_000n;
  var a = [];
  console.time('better algorithm');
  let m = 1n;
  let m_squared = 1n;
  for (let n = 1n; n < limit; n += 1n) {
    let y_squared = n * n * 13n + 1n;
    while (y_squared > m_squared) {
      m += 1n;
      m_squared = m * m;
    }
    if (y_squared === m_squared) {
      a.push(n);
    }
  }
  console.log(a.join(', '));
  console.timeEnd('better algorithm');
})();
As a particular short-term detail, this uses += 1n instead of ++, because as of today, V8 hasn't yet gotten around to optimizing ++ for BigInts. This difference should disappear eventually (hopefully soon).
On my machine, this version takes only about 4x as much time as your original Number-based implementation.
For larger numbers, I would expect some gains from replacing the multiplications with additions (based on the observation that the delta between consecutive square numbers grows linearly), but for small-ish upper limits that appears to be a bit slower. If you want to toy around with it, this snippet describes the idea:
let m_squared = 1n;         // == 1*1
let m_squared_delta = 3n;   // == 2*2 - 1*1
let y_squared = 14n;        // == 1*1*13+1
let y_squared_delta = 39n;  // == (2*2*13+1) - (1*1*13+1)
for (let n = 1; n < limit; n++) {
  while (y_squared > m_squared) {
    m_squared += m_squared_delta;
    m_squared_delta += 2n;
  }
  if (y_squared === m_squared) {
    a.push(n);
  }
  y_squared += y_squared_delta;
  y_squared_delta += 26n;
}
The earliest where this could possibly pay off is when the results exceed 2n**64n; I wouldn't be surprised if it wasn't measurable before 2n**256n or so.

Method to generate random numbers to sum to a target

Let's say there are 100 people and $120. What is the formula that could divide it into random amounts to where each person gets something?
In this scenario, one person could get an arbitrary amount like $0.25, someone could get $10, someone could get $1, but everyone gets something. Any tips?
So in JavaScript, an array of 100 entries could be generated containing these random amounts, and they would add up to 120.
To get a result with a smallest possible amount of 1 cent using simple means, you can generate 100 random values, find their sum S, then multiply every value by 120.0/S, rounding to integer cents, and compute the sum again. If there is some excess (a few cents), distribute it to random persons.
Example in Python for 10 persons and $12. The 1 + and (overall - num) terms avoid zero amounts:
import random

overall = 1200
num = 10
amounts = [random.random() for _ in range(num)]
asum = sum(amounts)
for i in range(num):
    amounts[i] = 1 + int(amounts[i] * (overall - num) / asum)
asum = sum(amounts)
for i in range(overall - asum):
    amounts[random.randint(0, 9)] += 1
print(amounts, sum(amounts))

>> [163, 186, 178, 152, 89, 81, 169, 90, 17, 75] 1200
Another way (a fair distribution of variants in the mathematical sense), as Aki Suihkonen noticed in the comments, is to use a random choice from an array of divider positions (with shuffling). This was my first proposal, but I had assumed earlier that it was too complex to implement:
put 12000 one-cent coins in a row
put 99 sticks (dividers) between them, in the 11999 spaces between coins
give the coins between the k-th and (k+1)-th stick to the k-th person
Python implementation:
arr = [i for i in range(overall-1)]
divs = [0] + sorted(random.choices(arr, k=num-1)) + [overall]
amounts = [(divs[i+1]-divs[i]) for i in range(num)]
print(amounts, sum(amounts))
>>>[17, 155, 6, 102, 27, 222, 25, 362, 50, 234] 1200
And just to provide another way to do it, assign each a random number, then normalize to make the sum add up correctly. This should be quite efficient, O(countPeople) no matter how much money we are dividing or how finely we are dividing it.
Here is a solution in JavaScript that will also handle rounding to the nearest penny if desired. Unfortunately while it is unlikely that it will fail to give someone money, it is possible. This can be solved either by pulling out one penny per person and giving them that, or by testing whether you failed to hand out money and re-running.
function distributeRandomly(value, countPeople, roundTo) {
  var weights = [];
  var total = 0;
  var i;
  // To avoid floating point error, use integer operations.
  if (roundTo) {
    value = Math.round(value / roundTo);
  }
  for (i = 0; i < countPeople; i++) {
    weights[i] = Math.random();
    total += weights[i];
  }
  for (i = 0; i < countPeople; i++) {
    weights[i] *= value / total;
  }
  if (roundTo) {
    // Round off
    total = 0;
    for (i = 0; i < countPeople; i++) {
      var rounded = Math.floor(weights[i]);
      total += weights[i] - rounded;
      weights[i] = rounded;
    }
    total = Math.round(total);
    // Distribute the rounding randomly
    while (0 < total) {
      weights[Math.floor(Math.random() * countPeople)] += 1;
      total -= 1;
    }
    // And now normalize
    for (i = 0; i < countPeople; i++) {
      weights[i] *= roundTo;
    }
  }
  return weights;
}
console.log(distributeRandomly(120, 5));
console.log(distributeRandomly(120, 6, 0.01));
What should the targeted distribution be?
MBo's approach gives, AFAIK, a Poisson distribution (with the original approach of placing 99 dividers randomly in the range 1 to 11999). Another option would be to divide the sum evenly first, then for every pair of members redistribute the wealth by transferring a random amount between 0 and $1.19 from one person to the other.
Repeat a few times if it's not sufficient that the maximum amount is just 2x the expectation; a sketch of this idea follows below.
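A rough sketch of that pairwise-transfer idea (my own illustration, working in integer cents; the parameters are assumptions):
function redistribute(people = 100, totalCents = 12000, rounds = 5) {
  const amounts = new Array(people).fill(totalCents / people); // start even: 120 cents each
  for (let r = 0; r < rounds; r++) {
    for (let i = 0; i < people; i++) {
      const j = Math.floor(Math.random() * people);       // random partner
      const wanted = Math.floor(Math.random() * 120);     // 0 .. 119 cents ($1.19)
      const transfer = Math.min(wanted, amounts[i] - 1);  // always leave at least 1 cent
      amounts[i] -= transfer;
      amounts[j] += transfer;
    }
  }
  return amounts; // sums to totalCents, every entry >= 1
}
console.log(redistribute());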
You need a multinomial distribution. I split 12000 instead of 120 to allow the cents.
var n = 100;
var probs = Array(n).fill(1 / n);
var sum = Array(n).fill(0);
for (var k = 0; k < 12000; k++) {
  var i = -1;
  var p = 0;
  var u = Math.random();
  while (p < u) {
    i += 1;
    p += probs[i];
  }
  sum[i] += 1;
}
sum = sum.map(function(x) { return x / 100; });
console.log(sum);
1.26,1.37,1.28,1.44,1.31,1.22,1.2,1.27,1.21,1.37,1.05,1.17,0.98,1.13,1.18,1.44,0.94,1.32,1.03,1.23,1.19,1.13,1.13,1.32,1.36,1.35,1.32,1.04,1.1,1.18,1.18,1.31,1.17,1.13,1.08,1.11,1.19,1.31,1.2,1.1,1.31,1.22,1.15,1.09,1.27,1.14,1.06,1.23,1.21,0.94,1.32,1.13,1.29,1.25,1.13,1.22,1.13,1.13,1.1,1.16,1.12,1.11,1.26,1.21,1.07,1.19,1.07,1.46,1.14,1.18,0.96,1.21,1.18,1.2,1.18,1.2,1.33,1.01,1.31,1.16,1.28,1.21,1.42,1.29,1.04,1.28,1.12,1.2,1.23,1.39,1.26,1.03,1.27,1.18,1.11,1.31,1.46,1.15,1.23,1.21
One technique would be to work in pennies, and give everyone one to start, then randomly pick subsequent people to give additional pennies until you're out of pennies. This should give a mean of 1.20 and a standard deviation of 1. The code is relatively simple:
const stats = (ns, Σ = ns .reduce ((a, b) => a + b, 0), μ = Σ / ns.length, σ2 = ns .map (n => (n - μ) ** 2) .reduce ((a, b) => a + b), σ = Math .sqrt (σ2)) => ({sum: Math .round(Σ), mean: μ, stdDev: σ, min: Math .min (...ns), max: Math .max (...ns)})

const randomDist = (buckets, total, {factor = 100, min = 1} = {}) =>
  Array (total * factor - buckets * min) .fill (1) .reduce (
    (res, _) => {res [Math.floor (Math.random() * buckets)] += 1; return res},
    Array (buckets) .fill (min)
  ) .map (n => n / factor)

const res = randomDist (100, 120)

console .log (stats (res))
console .log (res)
We accept the number of buckets and the total amount to include. We also optionally accept the factor we use to convert to our minimum step and the minimum value everyone gets. (The stats function is just for reporting purposes.)
With this technique, the spread is likely quite small. While it's theoretically possible for a person to get $0.01 or $119.01, the chances are extremely remote. We can alter that by randomly choosing how much to add to a person at each step (and not just a single penny). I don't have a strong enough background in statistics to justify this mechanism, but it seems relatively robust. It requires one more optional parameter, which I'm calling block, the largest block size we will distribute. It would look like this:
const {floor, exp, random, log, min, max, sqrt} = Math

const stats = (ns, Σ = ns .reduce ((a, b) => a + b, 0), μ = Σ / ns.length, σ2 = ns .map (n => (n - μ) ** 2) .reduce ((a, b) => a + b), σ = sqrt (σ2)) => ({mean: μ, stdDev: σ, min: min (...ns), max: max (...ns)})

const randomDist = (buckets, total, {factor = 100, minimum = 1, block = 100} = {}) => {
  const res = Array (buckets) .fill (minimum)
  let used = total * factor - buckets * minimum
  while (used > 0) {
    const bucket = floor (random () * buckets)
    const amount = 1 + floor (exp (random () * log ((min (used, block)))))
    used -= amount
    res [bucket] += amount
  }
  return res .map (r => r / factor)
}

const res1 = randomDist (100, 120)
console .log (stats (res1))
console .log (res1)

const res2 = randomDist (100, 120, {block: 500})
console .log (stats (res2))
console .log (res2)
Note that when we switch from the default block size of 100 to 500, we go from statistics like
{mean: 1.2, stdDev: 7.48581986157829, min: 0.03, max: 3.52}
to ones like this:
{mean: 1.2, stdDev: 17.75106194006432, min: 0.01, max: 10.39}
and if we went down to 10, it might look like
{mean: 1.2, stdDev: 2.707932606251492, min: 0.67, max: 2.13}
You could play around with that parameter until it had a distribution that looks like what you want. (If you set it to 1, it would have the same behavior as the first snippet.)

How can I get integer in Math.pow(10, 10000000)

I always get infinity from:
let power = Math.pow(2, 10000000);
console.log(power); //Infinity
So, can I get an integer from this?
Maybe I don't understand this task https://www.codewars.com/kata/5511b2f550906349a70004e1/train/javascript? If you know, can you show me how to solve it?
The link that you give asks for the last digit of the number. In order to find such a thing, it would be insane to compute an extremely large number (which might exceed the storage capacity of the known universe to write down (*)) just to find the final digit. Work mod 10.
Two observations:
1) n^e % 10 === d^e % 10 // d = last digit of n
2) If e = 10q + r then n^e % 10 === ((n^10)^q * n^r) % 10
This allows us to write:
const lastDigit = function(str1, str2) {
  // in the following helper function d is an integer and exp a string
  const lastDigitHelper = function(d, exp) {
    if (exp.length === 1) {
      let e = parseInt(exp);
      return Math.pow(d, e) % 10;
    } else {
      let r = parseInt(exp.slice(-1));
      let q = exp.slice(0, -1);
      return lastDigitHelper(Math.pow(d, 10) % 10, q) * Math.pow(d, r) % 10;
    }
  }
  let d = parseInt(str1.slice(-1));
  return lastDigitHelper(d, str2);
}
This passes all of the tests, but isn't as efficient as it could be. The recursive helper function could be replaced by a loop.
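For instance, the helper could be rewritten with a loop along these lines (a sketch of the same idea, not the original answer's code):
function lastDigitHelperLoop(d, exp) {       // d: last digit of the base, exp: exponent as string
  let result = 1;
  while (exp.length > 0) {
    const r = parseInt(exp.slice(-1), 10);   // current decimal digit of the exponent
    result = (result * Math.pow(d, r)) % 10;
    d = Math.pow(d, 10) % 10;                // d now stands for base^(10^k) mod 10
    exp = exp.slice(0, -1);
  }
  return result;
}
// e.g. lastDigitHelperLoop(4, "13") === 4, the last digit of 24^13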
(*) For fun: one of the test cases was to compute the last digit of
1606938044258990275541962092341162602522202993782792835301376 ^ 2037035976334486086268445688409378161051468393665936250636140449354381299763336706183397376
If written in base 2, this number would be approximately 4.07 x 10^92 bits long. Since there are fewer than that many atoms in the universe, the number is much too large to store, not to mention too time consuming to compute.
Javascript has a maximum safe integer:
Number.MAX_SAFE_INTEGER
9007199254740991
safe = the ability to represent integers exactly and to correctly compare them
In your case, the number is greater:
Math.pow(2, 10000000) >= Number.MAX_SAFE_INTEGER
true
Or not smaller:
Math.pow(2, 10000000) <= Number.MAX_SAFE_INTEGER
false
You can use an arbitrary-size integer library like big-integer to work with larger integers.
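Native BigInt, available in modern engines (not mentioned in this answer), gives exact integer arithmetic as well:
console.log((2n ** 100n).toString()); // 1267650600228229401496703205376 (exact)
// 2n ** 10000000n is representable too, but it has roughly 3 million decimal digits,
// which is why the kata only asks for the last digit.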

Codility Ladder javascript - not understanding a detail that jumps the answer from 37 to 100%

I'm trying to solve all the lessons on codility but I failed to do so on the following problem: Ladder by codility
I've searched all over the internet and I'm not finding an answer that satisfies me, because no one explains why the max variable impacts the result so much.
So, before posting the code, I'll explain the thinking.
By looking at it, I didn't need much time to understand that the total number of combinations is a Fibonacci number, and removing the 0 from the Fibonacci array, I'd find the answer really fast.
Now, afterwards, they said that we should return the number of combinations modulo 2^B[i].
So far so good, and I decided to submit it without the var max; then I got a score of 37%. I searched all over the internet and the 100% result was similar to mine, but they added that max = Math.pow(2,30).
Can anyone explain to me how and why that max influences so much the score?
My Code:
// Powers 2 to num
function pow(num) {
  return Math.pow(2, num);
}

// Returns an array with all Fibonacci numbers except for 0
function fibArray(num) {
  // const max = pow(30); -> Adding this max to the Fibonacci array makes the answer be 100%
  const arr = [0, 1, 1];
  let current = 2;
  while (current <= num) {
    current++;
    // next = arr[current-1] + arr[current-2] % max;
    next = arr[current - 1] + arr[current - 2]; // Without this max it's 30%
    arr.push(next);
  }
  arr.shift(); // remove 0
  return arr;
}

function solution(A, B) {
  let f = fibArray(A.length + 1);
  let res = new Array(A.length);
  for (let i = 0; i < A.length; ++i) {
    res[i] = f[A[i]] % (pow(B[i]));
  }
  return res;
}

console.log(solution([4,4,5,5,1],[3,2,4,3,1])); // 5,1,8,0,1
// Note that the console.log won't differ in this solution having max set or not.
// Running the exercise on Codility shows the full log with all details
// of where it passed and where it failed.
The limits for input parameters are:
Assume that:
L is an integer within the range [1..50,000];
each element of array A is an integer within the range [1..L];
each element of array B is an integer within the range [1..30].
So the array f in fibArray can be 50,001 elements long.
Fibonacci numbers grow exponentially; according to this page, the 50,000th Fib number has over 10,000 digits.
Javascript does not have built-in support for arbitrary precision integers, and even doubles only offer ~14 s.f. of precision. So with your modified code, you will get "garbage" values for any significant value of L. This is why you only got 30%.
But why is max necessary? Modulo math tells us that:
(a + b) % c = ([a % c] + [b % c]) % c
So by applying % max to the iterative calculation step arr[current-1] + arr[current-2], every element in fibArray becomes its corresponding Fib number mod max, without any variable exceeding the value of max (or built-in integer types) at any time:
fibArray[2] = (fibArray[1] + fibArray[0]) % max = (F1 + F0) % max = F2 % max
fibArray[3] = (F2 % max + F1) % max = (F2 + F1) % max = F3 % max
fibArray[4] = (F3 % max + F2 % max) % max = (F3 + F2) % max = F4 % max
and so on ...
(Fn is the n-th Fib number)
Note that as B[i] will never exceed 30, pow(2, B[i]) <= max; therefore, since max is always divisible by pow(2, B[i]), applying % max does not affect the final result.
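A quick check of that last claim (a small sketch; F is an arbitrary test value):
const max = 2 ** 30;
const F = 2 ** 40 + 12345;                             // any integer that still fits a double exactly
for (let B = 1; B <= 30; B++) {
  console.assert((F % max) % 2 ** B === F % 2 ** B);   // reducing mod 2^30 first changes nothing
}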
Here is a Python 100% answer that I hope offers an explanation :-)
In a nutshell: modulus % is similar to 'bitwise and' & for certain numbers.
eg any number % 10 is equivalent to the right most digit.
284%10 = 4
1994%10 = 4
FACTS OF LIFE:
for powers of 2 -> X % Y is equivalent to X & (Y - 1) (a quick check follows below)
precomputing (2**i)-1 for i in range(1, 31) is faster than computing everything in B when super large arrays are given as args for this particular lesson.
Thus fib(A[i]) & pb[B[i]] will be faster to compute than an X % Y style thingy.
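The same identity, checked quickly in JavaScript (a sketch):
console.log(284 % 8, 284 & (8 - 1));     // 4 4
console.log(1994 % 16, 1994 & (16 - 1)); // 10 10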
https://app.codility.com/demo/results/trainingEXWWGY-UUR/
And for completeness the code is here.
https://github.com/niall-oc/things/blob/master/codility/ladder.py
Here is my explanation and solution in C++:
Compute the first L Fibonacci numbers. Each calculation needs a modulo 2^30, because the 50,000th Fibonacci number is so big that it cannot be stored even in a long double. Since INT_MAX is 2^31 - 1, the sum of two numbers previously reduced modulo 2^30 cannot exceed it. Therefore, we do not need bigger storage and/or casting.
Go through the arrays, executing the lookup and the modulos. We can be sure this gives the correct result, since taking modulo 2^30 does not take away any information needed here. E.g. modulo 100 does not take away any information for a subsequent modulo 10.
vector<int> solution(vector<int> &A, vector<int> &B)
{
    const int L = A.size();
    vector<int> fibonacci_numbers(L, 1);
    fibonacci_numbers[1] = 2;
    static const int pow_2_30 = pow(2, 30);
    for (int i = 2; i < L; ++i) {
        fibonacci_numbers[i] = (fibonacci_numbers[i - 1] + fibonacci_numbers[i - 2]) % pow_2_30;
    }
    vector<int> consecutive_answers(L, 0);
    for (int i = 0; i < L; ++i) {
        consecutive_answers[i] = fibonacci_numbers[A[i] - 1] % static_cast<int>(pow(2, B[i]));
    }
    return consecutive_answers;
}

Calculate logarithm by hand

I'd like to calculate the mathematical logarithm log_base(x) "by hand"...
... where base stands for the logarithm base and x stands for the value.
Some examples (See Log calculator):
The base 2 logarithm of 10 is 3.3219280949
The base 5 logarithm of 15 is 1.6826061945
...
However, I do not want to use an already implemented function call like Math.ceil, Math.log, Math.abs, ..., because I want a clean native solution that just deals with + - * / and some loops.
This is the code I got so far:
function myLog(base, x) {
  let result = 0;
  do {
    x /= base;
    result++;
  } while (x >= base)
  return result;
}

let x = 10,
    base = 2;

let result = myLog(base, x)
console.log(result)
But it doesn't seem like the above method is the right way to calculate the logarithm to base N, so any help on how to fix this code would be really appreciated.
Thanks a million in advance, jonas.
You could use a recursive approach:
const log = (base, n, depth = 20, curr = 64, precision = curr / 2) =>
  depth <= 0 || base ** curr === n
    ? curr
    : log(base, n, depth - 1, base ** curr > n ? curr - precision : curr + precision, precision / 2);
Usable as:
log(2, 4) // 2
log(2, 10) // 3.32196044921875
You can influence the precision by changing depth, and you can change the range of accepted values (currently ~180) with curr
How it works:
If we already reached the wanted depth or if we already found an accurate value:
depth <= 0 || base ** curr === n
Then it just returns curr and is done. Otherwise it checks if the logarithm we want to find is lower or higher than the current one:
base ** curr > n
It will then continue searching for a value recursively by
1) lowering depth by one
2) increasing / decreasing curr by the current precision
3) lowering precision
If you hate functional programming, here is an imperative version:
function log(base, n, depth = 20) {
  let curr = 64, precision = curr / 2;
  while (depth-- > 0 && base ** curr !== n) {
    if (base ** curr > n) {
      curr -= precision;
    } else {
      curr += precision;
    }
    precision /= 2;
  }
  return curr;
}
By the way, the algorithm I used is called "logarithmic search", commonly known as "binary search".
First method: with a table of constants.
First normalize the argument to a number between 1 and 2 (this is achieved by multiplying or dividing by 2 as many times as necessary - keep a count of these operations). For efficiency, if the values can span many orders of magnitude, instead of equal factors you can use a squared sequence, 2, 4, 16, 256..., followed by a dichotomic search when you have bracketed the value.
F.i. if the exponent 16=2^4 works but not 256=2^8, you try 2^6, then one of 2^5 and 2^7 depending on the outcome. If the final exponent is 2^d, the linear search takes O(d) operations and the geometric/dichotomic search only O(log d). To avoid divisions, it is advisable to keep a table of negative powers.
After normalization, you need to refine the mantissa. Compare the value to √2, and if larger multiply by 1/√2. This brings the value between 1 and √2. Then compare to √√2 and so on. As you go, you add the weights 1/2, 1/4, ... to the exponent when a comparison returns greater.
In the end, the exponent is the base 2 logarithm.
Example: lg 27
27 = 2^4 x 1.6875
1.6875 > √2 = 1.4142 ==> 27 = 2^4.5 x 1.1933
1.1933 > √√2 = 1.1892 ==> 27 = 2^4.75 x 1.0034
1.0034 < √√√2 = 1.0905 ==> 27 = 2^4.75 x 1.0034
...
The true value is 4.7549.
Note that you can work with other bases, in particular e. In some contexts, base 2 allows shortcuts, this is why I used it. Of course, the square roots should be tabulated.
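A sketch of this first method in JavaScript (my own illustration; Math.sqrt appears only to build the table of roots, which would be precomputed/hardcoded in a strictly +-*/ setting, and the simple normalization loop stands in for the squared-sequence search):
function log2ByTable(x, steps = 30) {
  let exponent = 0;
  while (x >= 2) { x /= 2; exponent += 1; }  // normalize into [1, 2)
  while (x < 1)  { x *= 2; exponent -= 1; }
  let root = Math.sqrt(2);                   // table entries: √2, √√2, √√√2, ...
  let weight = 0.5;                          // weights: 1/2, 1/4, 1/8, ...
  for (let i = 0; i < steps; i++) {
    if (x >= root) { x /= root; exponent += weight; }
    root = Math.sqrt(root);
    weight /= 2;
  }
  return exponent;                           // base-2 logarithm of the original x
}
console.log(log2ByTable(27)); // ≈ 4.7549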
Second method: with a Taylor series.
After the normalization step, you can use the standard series
log(1 + x) = x - x²/2 + x³/3 - ...
which converges for |x| < 1. (Caution: we now have natural logarithms.)
As convergence is too slow for values close to 1, it is advisable to use the above method to reduce to the range [1, √2). Then every new term brings a new bit of accuracy.
Alternatively, you can use the series for log((1 + x)/(1 - x)), which gives a good convergence speed even for the argument 2. See https://fr.wikipedia.org/wiki/Logarithme_naturel#D%C3%A9veloppement_en_s%C3%A9rie
Example: with x = 1.6875, y = (x - 1)/(x + 1) = 0.2558, and
2 x (0.2558 + 0.2558³/3 + 0.2558^5/5) = 0.5232
lg 27 ~ 4 + 0.5232 / ln 2 = 4.7548
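And a sketch of the series-based method (again my own illustration; ln 2 would itself be a tabulated constant in a strictly +-*/ setting):
function lnBySeries(x, terms = 20) {
  let exponent = 0;
  while (x >= 2) { x /= 2; exponent += 1; }  // normalize into [1, 2)
  while (x < 1)  { x *= 2; exponent -= 1; }
  const y = (x - 1) / (x + 1);               // series argument: ln((1+y)/(1-y)) = ln(x)
  let sum = 0, power = y;
  for (let k = 0; k < terms; k++) {
    sum += power / (2 * k + 1);
    power *= y * y;
  }
  const LN2 = 0.6931471805599453;
  return 2 * sum + exponent * LN2;           // natural logarithm of the original x
}
console.log(lnBySeries(27) / 0.6931471805599453); // ≈ 4.7549 (converted to base 2)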
