Based on MDN, Math.random() can return 0 but cannot return 1:
The Math.random() static method returns a floating-point, pseudo-random number that's greater than or equal to 0 and less than 1.
I tried looping Math.random() until I got 10 numbers below 1e-10, and it took 15 minutes to complete on my 4-core / 8-thread CPU; of those 10 numbers, 0 was never returned. Is my test case just too small, or can the latest JavaScript Math.random() never return 0?
Note: I ran the code with Node.js v18.12.1, and the smallest number ever returned was 5.2724491439448684e-12 (yes, it's small, but it isn't 0).
let countSmallNumber = 0;
let temp = 0;
console.time("loopTime");
while (countSmallNumber < 10) {
  temp = Math.random();
  if (temp < 0.0000000001) {
    countSmallNumber++;
    console.log(`almost 0 => ${temp}`);
  }
}
console.timeEnd("loopTime");
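For a rough sense of scale, here is a back-of-the-envelope sketch (mine, not part of the question). It assumes Math.random() behaves like a uniform draw over roughly 2^53 equally spaced values in [0, 1), which approximates what V8 does but is not guaranteed by the spec:

// Assumed model: ~2^53 equally likely values in [0, 1); the exact count is engine-dependent.
const draws = 1e11;                    // rough order of magnitude of a 15-minute loop
const pBelow1e10 = 1e-10;              // P(x < 1e-10) under a uniform model
const pExactlyZero = Math.pow(2, -53); // P(x === 0), order of magnitude under the assumed model

console.log("expected hits below 1e-10:", draws * pBelow1e10);   // ~10
console.log("expected exact zeros:", draws * pExactlyZero);      // ~1e-5, i.e. essentially never

Under that model, seeing ten values below 1e-10 in such a loop is expected, while an exact 0 would need on the order of 2^53 draws.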
I'm trying to solve all the lessons on Codility, but I failed to do so on the following problem: Ladder by Codility.
I've searched all over the internet and I'm not finding an answer that satisfies me, because no one explains why the max variable impacts the result so much.
So, before posting the code, I'll explain the thinking.
By looking at it, I didn't need much time to understand that the total number of combinations is a Fibonacci number, and by removing the 0 from the Fibonacci array, I'd find the answer really fast.
Now, afterwards, they tell us that we should return the number of combinations modulo 2^B[i].
So far so good, and I decided to submit it without the var max; then I got a score of 37%. I searched all over the internet, and the 100% result was similar to mine, except that they added max = Math.pow(2, 30).
Can anyone explain to me how and why that max influences the score so much?
My Code:
// Raises 2 to the power of num
function pow(num){
  return Math.pow(2, num);
}

// Returns an array with all Fibonacci numbers except for 0
function fibArray(num){
  // const max = pow(30); -> Adding this max to the Fibonacci array makes the answer score 100%
  const arr = [0, 1, 1];
  let current = 2;
  let next;
  while (current <= num){
    current++;
    // next = (arr[current - 1] + arr[current - 2]) % max;
    next = arr[current - 1] + arr[current - 2]; // Without this max it's 30%
    arr.push(next);
  }
  arr.shift(); // remove 0
  return arr;
}

function solution(A, B) {
  let f = fibArray(A.length + 1);
  let res = new Array(A.length);
  for (let i = 0; i < A.length; ++i) {
    res[i] = f[A[i]] % pow(B[i]);
  }
  return res;
}
console.log(solution([4,4,5,5,1],[3,2,4,3,1])); //5,1,8,0,1
// Note that the console.log output won't differ for this small example whether max is set or not.
// Running the exercise on Codility shows the full log with all details
// of where it passed and where it failed.
The limits for input parameters are:
Assume that:
L is an integer within the range [1..50,000];
each element of array A is an integer within the range [1..L];
each element of array B is an integer within the range [1..30].
So the array f in fibArray can be 50,001 elements long.
Fibonacci numbers grow exponentially; according to this page, the 50,000th Fibonacci number has over 10,000 digits.
JavaScript does not have built-in support for arbitrary-precision integers in its Number type, and doubles only offer roughly 15-16 significant figures of precision. So with your modified code, you will get "garbage" values for any significant value of L. This is why you only got 30%.
But why is max necessary? Modular arithmetic tells us that:
(a + b) % c = ((a % c) + (b % c)) % c
So by applying % max to the iterative calculation step arr[current-1] + arr[current-2], every element in fibArray becomes its corresponding Fibonacci number mod max, without any intermediate value ever exceeding max (or the limits of built-in numeric types):
fibArray[2] = (fibArray[1] + fibArray[0]) % max = (F1 + F0) % max = F2 % max
fibArray[3] = (F2 % max + F1 % max) % max = (F2 + F1) % max = F3 % max
fibArray[4] = (F3 % max + F2 % max) % max = (F3 + F2) % max = F4 % max
and so on ...
(Fn is the n-th Fibonacci number)
Note that as B[i] will never exceed 30, pow(2, B[i]) <= max; therefore, since max is always divisible by pow(2, B[i]), applying % max does not affect the final result.
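Concretely, the fix to the question's fibArray is to reduce each new term as it is computed. Here is a minimal sketch using the question's own names, with max = 2^30 as discussed above:

// Same fibArray as in the question, but with each new term reduced mod max,
// so every stored value stays an exact integer well below 2^53.
const max = Math.pow(2, 30); // pow(30) in the question's code

function fibArray(num) {
  const arr = [0, 1, 1];
  let current = 2;
  while (current <= num) {
    current++;
    const next = (arr[current - 1] + arr[current - 2]) % max;
    arr.push(next);
  }
  arr.shift(); // remove 0
  return arr;
}

The solution function stays unchanged: because pow(B[i]) always divides max, f[A[i]] % pow(B[i]) still gives the correct remainder.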
Here is a Python 100% answer that I hope offers an explanation :-)
In a nutshell: the modulus operator % is equivalent to a 'bitwise and' & for certain divisors.
e.g. any number % 10 is equivalent to its rightmost digit.
284 % 10 = 4
1994 % 10 = 4
FACTS OF LIFE:
for powers of 2 -> X % Y is equivalent to X & (Y - 1)
Precomputing (2**i) - 1 for i in range(1, 31) is faster than recomputing it for every element of B when super large arrays are given as arguments for this particular lesson.
Thus fib(A[i]) & pb[B[i]] will be faster to compute than an X % Y operation.
https://app.codility.com/demo/results/trainingEXWWGY-UUR/
And for completeness the code is here.
https://github.com/niall-oc/things/blob/master/codility/ladder.py
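As a quick JavaScript illustration of the identity this answer relies on (not taken from the linked Python code), for non-negative integers that fit in 32 bits:

// x % 2**k === x & (2**k - 1) for non-negative integers within 32 bits,
// because the low k bits are exactly the remainder modulo 2^k.
const x = 2147483647; // 2^31 - 1, the largest value JS bitwise ops treat as positive
for (const k of [1, 5, 10, 30]) {
  const mask = (1 << k) - 1;              // precomputed mask, as the answer suggests
  console.log(x % 2 ** k === (x & mask)); // true for each k
}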
Here is my explanation and solution in C++:
Compute the first L Fibonacci numbers. Each addition needs a modulo 2^30 because the 50,000th Fibonacci number is so big that it cannot be stored even in a long double. Since INT_MAX is 2^31 - 1, the sum of two numbers already reduced modulo 2^30 cannot exceed it, so we do not need a bigger type or any casting.
Go through the arrays, performing the lookup and the modulo. We can be sure this gives the correct result, since taking values modulo 2^30 does not throw away any information needed later: e.g. reducing modulo 100 does not throw away any information needed for a subsequent modulo 10.
#include <cmath>
#include <vector>

using std::vector;

vector<int> solution(vector<int> &A, vector<int> &B)
{
    const int L = A.size();
    vector<int> fibonacci_numbers(L, 1);
    if (L > 1)                                  // guard: A may hold a single element
        fibonacci_numbers[1] = 2;
    static const int pow_2_30 = pow(2, 30);
    for (int i = 2; i < L; ++i) {
        fibonacci_numbers[i] = (fibonacci_numbers[i - 1] + fibonacci_numbers[i - 2]) % pow_2_30;
    }
    vector<int> consecutive_answers(L, 0);
    for (int i = 0; i < L; ++i) {
        consecutive_answers[i] = fibonacci_numbers[A[i] - 1] % static_cast<int>(pow(2, B[i]));
    }
    return consecutive_answers;
}
So I'm very familiar with the good old
Math.floor(Math.random() * (max - min + 1)) + min;
and this works very nicely with small numbers; however, when numbers get larger this quickly becomes biased and only returns numbers one order of magnitude below the maximum (for example, a random number between 0 and 1e100 will almost always return [x]e99; this held every time I tested, over several billion iterations in a for loop). And yes, I waited the long time it took the program to generate that many numbers, twice. By this point, it is safe to assume that the output is always [x]e99 for all practical purposes.
So next I tried this
Math.floor(Math.pow(max - min + 1, Math.random())) + min;
and while that works perfectly for huge ranges, it breaks for small ones. So my question is: how can I do both, i.e. generate both small and large random numbers without any bias (or with bias so minimal it isn't noticeable)?
Note: I'm using Decimal.js to handle numbers in the range -1e2043 < x < 1e2043, but since the algorithm is the same, I displayed the vanilla JavaScript forms above to prevent confusion. I can take a vanilla answer and convert it to Decimal.js without any trouble, so feel free to answer with either.
Note #2: I want to even out the odds of getting large numbers. For example, 1e33 should have the same odds as 1e90 in my 0 to 1e100 example. But at the same time I need to support smaller numbers and ranges.
Your problem is precision. That's the reason you use Decimal.js in the first place. Like every other Number in JS, Math.random() supports only 53 bits of precision (some browsers even used to generate only the upper 32 bits of randomness). But your value 1e100 would need 333 bits of precision, so the lower 280 bits (roughly 75 of the 100 decimal places) are discarded by your formula.
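A quick way to see the precision limit described above (an illustration added here, not part of the original answer):

// Near 1e100, the gap between adjacent doubles is about 2^280 (~1.9e84),
// so anything much smaller than that gap simply vanishes when added.
console.log(1e100 + 1e80 === 1e100);  // true: the 1e80 is lost entirely
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991, i.e. 2^53 - 1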
But Decimal.js provides a random() method. Why don't you use that one?
function random(min, max){
  var delta = new Decimal(max).sub(min);
  return Decimal.random( +delta.log(10) ).mul(delta).add(min);
}
Another "problem" why you get so many values with e+99 is probability. For the range 0 .. 1e100 the probabilities to get some exponent are
e+99 => 90%,
e+98 => 9%,
e+97 => 0.9%,
e+96 => 0.09%,
e+95 => 0.009%,
e+94 => 0.0009%,
e+93 => 0.00009%,
e+92 => 0.000009%,
e+91 => 0.0000009%,
e+90 => 0.00000009%,
and so on
So if you generate ten billion numbers, statistically you'll get a single value below 1e+90. Those are the odds.
I want to even out those odds for large numbers. 1e33 should have the same odds as 1e90 for example
OK, then let's generate a 10^random in the range min ... max.
function random2(min, max){
  var a = +Decimal.log10(min),
      b = +Decimal.log10(max);
  // trying to deal with zero values
  if (a === -Infinity && b === -Infinity) return 0; // a random value between 0 and 0 ;)
  if (a === -Infinity) a = Math.min(0, b - 53);
  if (b === -Infinity) b = Math.min(0, a - 53);
  return Decimal.pow(10, Decimal.random(Math.abs(b - a)).mul(b - a).add(a));
}
Now the exponents are pretty much uniformly distributed, but the values are a bit skewed, because 10^1 to 10^1.5 (10 .. 33) has the same probability as 10^1.5 to 10^2 (34 .. 100).
The issue with Math.random() * Math.pow(10, Math.floor(Math.random() * 100)); at smaller numbers is that Math.random() ranges over [0, 1), meaning that when calculating the exponent separately one needs to make sure the prefix ranges over [1, 10). Otherwise you want to calculate a number in [1eX, 1eX+1) but have e.g. 0.1 as the prefix and end up in [1eX-1, 1eX). Here is an example; maxExp is 10 rather than 100 for readability of the output, but it is easily adjustable.
let maxExp = 10;

function differentDistributionRandom() {
  let exp = Math.floor(Math.random() * (maxExp + 1)) - 1;
  if (exp < 0) return Math.random();
  else return (Math.random() * 9 + 1) * Math.pow(10, exp);
}

let counts = new Array(maxExp + 1).fill(0).map(e => []);
for (let i = 0; i < (maxExp + 1) * 1000; i++) {
  let x = differentDistributionRandom();
  counts[Math.max(0, Math.floor(Math.log10(x)) + 1)].push(x);
}
counts.forEach((e, i) => {
  console.log(`E: ${i - 1 < 0 ? "<0" : i - 1}, amount: ${e.length}, example: ${Number.isNaN(e[0]) ? "none" : e[0]}`);
});
You might see the category <0 here, which is hopefully what you wanted (the cutoff point is arbitrary; here [0, 1) has the same probability as [1, 10) and as [10, 100), and so on, but [0.01, 0.1) is again less likely than [0.1, 1)).
If you didn't insist on base 10, you could reinterpret the pseudorandom bits from two Math.random calls as a Float64, which would give a similar distribution in base 2:
function exponentDistribution() {
  let bits = [Math.random(), Math.random()];
  let buffer = new ArrayBuffer(24);
  let view = new DataView(buffer);
  view.setFloat64(8, bits[0]);
  view.setFloat64(16, bits[1]);
  // alternatively all at once with setInt32
  for (let i = 0; i < 4; i++) {
    view.setInt8(i, view.getInt8(12 + i));
    view.setInt8(i + 4, view.getInt8(20 + i));
  }
  return Math.abs(view.getFloat64(0));
}

let counts = new Array(11).fill(0).map(e => []);
for (let i = 0; i < (1 << 11) * 100; i++) {
  let x = exponentDistribution();
  let exp = Math.floor(Math.log2(x));
  if (exp >= -5 && exp <= 5) {
    counts[exp + 5].push(x);
  }
}
counts.forEach((e, i) => {
  console.log(`E: ${i - 5}, amount: ${e.length}, example: ${Number.isNaN(e[0]) ? "none" : e[0]}`);
});
This one is obviously bounded by the precision limits of Float64; there are some uneven parts of the distribution due to details of IEEE 754, e.g. denormals/subnormals, and I did not take care of special values like Infinity. It is rather to be seen as a fun extra, a reminder of how float values are distributed. Note that the loop runs 1 << 11 (2048) times 100 iterations, which corresponds to the exponent range of Float64: 11 bits, [-1022, 1023]. That's why in the example each bucket gets approximately that number (100) of hits.
You can create the number in increments less than Number.MAX_SAFE_INTEGER and then concatenate the generated numbers into a single string:
const r = () => Math.floor(Math.random() * Number.MAX_SAFE_INTEGER);
let N = "";
for (let i = 0; i < 10; i++) N += r();
document.body.appendChild(document.createTextNode(N));
console.log(/e/.test(N));
I have this code:
function raffle(){
  number = Math.random(100) * 100;
}
raffle();
But every time I call raffle(), the number is the same.
Math.random() returns a random number between 0 (inclusive) and 1 (exclusive). The JavaScript random function doesn't take any parameters.
If you want a random number x such that 0 ≤ x < 100, then you would do:
function raffle() {
  return Math.random() * 100;
}
Your raffle function never returns a value.
Here's a version which returns the random value.
function raffle() {
  return Math.random() * 100;
}
(The Math.random() function returns a floating-point, pseudo-random number in the range [0, 1), that is, from 0 (inclusive) up to but not including 1 (exclusive), which you can then scale to your desired range.)
You can simply add the iteration number to every result, like so:
let runNumber = 0;
function raffle(){
  return Math.random() * 100 + runNumber++;
}
raffle();
You probably run it one call right after another, which is where the "pseudo" part of pseudo-random kicks in.
There's an existing question / answer that deals with implementing probability in JavaScript, but I've read and re-read that answer and don't understand how it works (for my purpose) or how a simpler version of probability would look.
My goal is to do:
function probability(n){
  // return true / false based on probability of n / 100
}

if(probability(70)){ // -> ~70% likely to be true
  // do something
}
What's the simple way to achieve this?
You could do something like...
var probability = function(n) {
  return !!n && Math.random() <= n;
};
Then call it with probability(.7). It works because Math.random() returns a number in the range [0, 1), i.e. from 0 (inclusive) up to but not including 1.
If you must pass 70 instead, simply divide it by 100 in the body of your function.
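For instance, a minimal sketch of that percentage-based variant (the helper name probabilityPercent is just for illustration):

// Accepts a percentage (0-100) instead of a fraction, as suggested above.
function probabilityPercent(n) {
  return Math.random() < n / 100;
}

if (probabilityPercent(70)) {
  // runs roughly 70% of the time
}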
Function Probability:
function probability(n){
  return Math.random() < n;
}

// Example, for a 75% probability
if(probability(0.75)){
  // Code to run if success
}
If we read about Math.random(), it returns a number in the [0, 1) interval, which includes 0 but excludes 1; so to keep an even distribution we need to exclude the top limit, that is to say, use < and not <=.
Checking the upper and lower bound probabilities (0% and 100%):
We know that 0 ≤ Math.random() < 1, so for a:
Probability of 0% (when n === 0, it should always return false):
Math.random() < 0 // will always return false => OK
Probability of 100% (when n === 1, it should always return true):
Math.random() < 1 // will always return true => OK
Running a test of the probability function:
// Function Probability
function probability(n){
  return Math.random() < n;
}

// Running a test with a probability of 86% (for 10,000,000 iterations)
var x = 0;
var prob = 0.86;
for(let i = 0; i < 10000000; i++){
  if(probability(prob)){
    x += 1;
  }
}

console.log(`${x} of 10000000 results from "Math.random()" were under ${prob}`);
console.log(`Hence a probability of about ${x / 100000} %`);
This is even simpler:
function probability(n) {
  return Math.random() <= n;
}
I have a question about this script I found and used. It works, but I don't get why. The exercise was to make a list of random numbers from -50 to 50. The function below uses Math.floor(Math.random() * (the part I don't understand)).
If I put this calculation into Google, I get 151 as the answer, and Math.random() * 151 does not give numbers from -50 to 50.
Can someone give me a clear explanation of the function below, because I'm sure I'm missing something.
The script works; I only want a clear explanation of how.
for (i = 0; i <= 100; i++)
{
  Rnumber[i] = randomFromTo(-50, 50);
}

function randomFromTo(from, to)
{
  return Math.floor(Math.random() * (to - from + 1) + from);
}
to - from + 1 = 50 - (-50) + 1 = 101
Math.random() * 101 = number in range [0,101[
Math.floor([0,101[) = integer in range [0,100]
[0,100] + from = [0,100] + (-50) = integer in range [-50,50]
Which is exactly what is asked for.
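A quick empirical check of that derivation (a small test harness added for illustration, not part of the original answer):

function randomFromTo(from, to) {
  return Math.floor(Math.random() * (to - from + 1) + from);
}

// Sample many values and confirm they stay inside [-50, 50].
const samples = Array.from({ length: 100000 }, () => randomFromTo(-50, 50));
console.log(samples.reduce((a, b) => Math.min(a, b))); // expected: -50
console.log(samples.reduce((a, b) => Math.max(a, b))); // expected: 50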
https://developer.mozilla.org/en/JavaScript/Reference/Global_Objects/Math/random
Math.random returns a floating-point, pseudo-random number in the
range [0, 1) that is, from 0 (inclusive) up to but not including 1
(exclusive), which you can then scale to your desired range.
which, when multiplied by a number greater than 1 and floored, gives you an integer in that scaled range
Math.random() - gives only a value between 0 (inclusive) and 1 (exclusive).
Math.floor(number) - gives the integer, rounded-down value of number.
You should:
function randomFromTo(from, to)
{
  // You can use the doubled bitwise NOT operator, which, like Math.floor,
  // gets an integer value from a number but is much faster.
  // ~1 == -2, ~-2 == 1 and ~1.5 == -2 :)
  return ~~( --from + ( Math.random() * ( ++to - from )) );
}