I was just curious whether a number in JavaScript can ever reach Infinity.
The range of JavaScript numbers is pretty decent -- 2 to the power of 64 different values, which is about 18 quintillion (an 18 with 18 zeros after it). That's a lot.
Now, I have a few questions here:
What would really happen when a number grows beyond that range? Would JavaScript refer to it as a new Infinity number?
What are all the scenarios in JavaScript where the value Infinity could be assigned to a variable at runtime?
Let's look at a code example.
I am attempting to write a method incrementNumToInfinity() that increments a value a certain number of times so that a === b evaluates to true (and also to look at other possible scenarios where the JavaScript engine could assign the value Infinity to a variable at runtime).
var a = 1000; // a positive number
var b = Infinity;
console.log(a === b); // It returns false, that's expected
function incrementNumToInfinity(num) {
// Logic to convert our variable num into Infinity
return num;
};
a = incrementNumToInfinity(a); // Input: 1000, Expected output: Infinity
console.log(a === b); // Should return true
Can a number in JavaScript ever reach Infinity at runtime?
It is possible at runtime to get a number that is the result of a computation and whose value is Infinity. Nina Scholz has shown one such case: if you do x = 1 / 0, x will have the value Infinity.
What would really happen when a number grows beyond that range [i.e. beyond the range JavaScript can handle]? Would JavaScript refer to it as a new Infinity number?
We can try it. Number.MAX_VALUE is the maximum floating point number that JavaScript can represent. If you run this:
Number.MAX_VALUE + 1
You get a big number but not Infinity. What's going on there? Hmm, on a hunch let's try this:
Number.MAX_VALUE + 1 === Number.MAX_VALUE
The result is true. Say what? The problem is that floating point numbers have limited precision: when I add 1 to Number.MAX_VALUE, there isn't enough precision to register the increment.
If you try this:
Number.MAX_VALUE * 2
Then you get Infinity.
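Putting those two observations together in one runnable snippet:
console.log(Number.MAX_VALUE + 1);                      // still 1.7976931348623157e+308
console.log(Number.MAX_VALUE + 1 === Number.MAX_VALUE); // true: the +1 is lost to rounding
console.log(Number.MAX_VALUE * 2);                      // Infinity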
What are all the scenarios in JavaScript where the value Infinity could be assigned to a variable at runtime?
"All the scenarios"... hmm... There are multiple issues with producing an enumeration of all the scenarios. For one thing, it is not clear what criteria should distinguish one scenario from another. Is -Math.log(0) a different scenario from 1 / 0? If so, why? Then there's the issue that JavaScript engines have quite a bit of leeway to implement math functions. For instance, Math.tan is specified like this in the current draft:
Math.tan(x)
Returns an implementation-dependent approximation to the tangent of x. The argument is expressed in radians.
If x is NaN, the result is NaN.
If x is +0, the result is +0.
If x is -0, the result is -0.
If x is +∞ or -∞, the result is NaN.
It does not mandate a value for Math.tan(Math.PI / 2). If you recall your trigonometry classes, pi / 2 is 90 degrees, and at that angle the tangent is infinite. Various versions of V8 have returned Infinity or a very large positive number. (See this question.) The specification does not mandate one result over the other: implementations are free to choose.
So practically if you start with a set of cases that you know mathematically should produce Infinity, you don't know whether they will actually produce that until you try them.
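For example, here is a quick experiment (the exact output of Math.tan(Math.PI / 2) is engine-dependent, as noted above):
// Expressions that mathematically "should" be infinite; whether each one
// actually evaluates to Infinity depends on the engine and on floating point.
var candidates = [
  ["1 / 0", 1 / 0],
  ["-Math.log(0)", -Math.log(0)],
  ["Math.tan(Math.PI / 2)", Math.tan(Math.PI / 2)], // may be Infinity or just a very large number
  ["Number.MAX_VALUE * 2", Number.MAX_VALUE * 2]
];
candidates.forEach(function (pair) {
  console.log(pair[0], "=>", pair[1], pair[1] === Infinity);
});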
The part of your question with the incrementNumToInfinity function is not completely clear to me. You seem to be asking whether you can reach infinity simply by incrementing a number. It depends on what you mean. If you mean this:
let x = 0;
while (x !== Infinity) {
x++;
}
This will never terminate. x won't ever reach beyond Number.MAX_SAFE_INTEGER + 1. So it won't reach Infinity. Try this:
let x = Number.MAX_SAFE_INTEGER + 1;
x === x + 1;
You'll get the result true. That's again running into precision problems. The increment of 1 is not big enough to make a difference within the precision available to you.
Changing the increment to 2, 5, 10 or 10000000 does not really fix the issue, it just changes how far you can go before your increment no longer makes any difference.
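A minimal illustration (the exact point where a given increment stops registering depends on how large the number already is):
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53)); // true: +1 is lost just past Number.MAX_SAFE_INTEGER
console.log(1e30 + 10000000 === 1e30);                // true: even +10,000,000 is lost by the time you reach 1e30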
Can a number in JavaScript ever reach Infinity at runtime?
Assuming your program does not have a memory leak, I believe it can reach Infinity.
console.log(Number.MAX_SAFE_INTEGER)
// 9007199254740991
console.log(Number.MAX_VALUE)
// 1.7976931348623157e+308
var i = Number.MAX_SAFE_INTEGER
while (i != Infinity) {
i += Math.pow(10, 307)
console.log(i)
}
// 1.0000000000000005e+307
// 2.000000000000001e+307
// 3.0000000000000013e+307
// 4.000000000000002e+307
// 5.000000000000002e+307
// 6.000000000000003e+307
// 7.000000000000003e+307
// 8.000000000000004e+307
// 9.000000000000004e+307
// 1.0000000000000004e+308
// 1.1000000000000004e+308
// 1.2000000000000003e+308
// 1.3000000000000003e+308
// 1.4000000000000003e+308
// 1.5000000000000002e+308
// 1.6000000000000002e+308
// 1.7000000000000001e+308
// Infinity
The ratio of the square root of a square multiplied by PI of the same square, subtracting PI to account for infinite decay as it approaches infinity, equals infinity. Or: proving Archimedes wrong and right at the same time. PI and the square are equivalent because neither will ever reach 0. This phenomenon also explains the zero boundary in the Pythagorean theorem, where A squared + B squared = C squared while approaching infinity.
Math.sqrt(1) / (Math.PI * ((Math.sqrt(1))) - Math.PI)
(Concretely, the denominator works out to PI - PI = 0, so the expression is 1 / 0, which JavaScript evaluates to Infinity.)
This is in response to the Fox and Duck riddle. As the duck moves 1r of the distance to the pond, the fox moves 180deg, or the sum equivalent of the squares of its opposing and adjacent angles; we are given the square 2^2 (the travel distance from the center of the pond). Square root PI to the given 1:4 ratio, therefore the hypotenuse of the triangle over pi - pi = Infinity, or a 1:1 relationship with opposing vectors at any specific point.
Regarding question 2:
What are all the scenarios in JavaScript where the value Infinity could be assigned to a variable at runtime?
You could use a division by zero.
var x = 1 / 0;
console.log(x);
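A few other expressions that also produce Infinity at runtime (not an exhaustive list):
console.log(Number.MAX_VALUE * 2);   // Infinity: overflow past the largest representable double
console.log(Math.pow(10, 1000));     // Infinity: the same overflow via exponentiation
console.log(Math.exp(1000));         // Infinity: e^1000 overflows as well
console.log(-Math.log(0));           // Infinity: Math.log(0) is -Infinity
console.log(parseFloat("Infinity")); // Infinity: parsed directly from a string
console.log(-1 / 0);                 // -Infinity: the negative counterpart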
I am curious how Stake.com managed to create the game "Limbo", where the odds of a multiplier happening match the probability they've calculated. Here's the game: https://stake.com/casino/games/limbo
For example:
Multiplier -> x2
Probability -> 49.5% chance.
What it means is you have a 49.5% chance of winning because those are the odds that the multiplier will actually hit a number above x2.
If you set the multiplier all the way up to x1,000,000, you have a 0.00099% chance of actually hitting 1,000,000.
It's not a project I'm working on but I'm just extremely curious how we could achieve this.
Example:
Math.floor(Math.random()*1000000)
is not as random as we think, since Math.random() generates a number between 0 and 1. When paired with a huge multiplier like 1,000,000, we would actually generate a 6-figure number most of the time, so it's not distributed the way we want.
I've read that we have to convert it into a power-law distribution, but I'm not sure how that works. I would love more material to read up on.
It sounds like you need to define some function that gives the probability of winning for a given multiplier N. These probabilities don't have to add up to 1, because they are not part of the same random variable; there is a unique random variable for each N chosen and two events, win or lose; we can subscript them as win(N) and lose(N). We really only need to define win(N) since lose(N) = 1 - win(N).
Something like an exponential function would make sense here. Consider win(N) = 2^(1 - N). Then we get the following probabilities of winning:
n win(n)
1 1
2 1/2
3 1/4
4 1/8
etc
Or we could use just an inverse function: win(N) = 1/N
n win(n)
1 1
2 1/2
3 1/3
...
Then to actually see whether you win or lose for a given N, just choose a random number in some range - [0.0, 1.0) works fine for this purpose - and see whether that number is less than win(N). If so, it's a win; if not, it's a loss.
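A minimal sketch of that procedure, assuming the inverse function win(N) = 1/N (the names winProbability and play are just illustrative):
function winProbability(multiplier) {
  return 1 / multiplier; // win(N) = 1/N; swap in Math.pow(2, 1 - multiplier) for the exponential version
}
function play(multiplier) {
  // Math.random() is uniform on [0, 1), so it falls below winProbability(multiplier)
  // exactly that fraction of the time.
  return Math.random() < winProbability(multiplier);
}
console.log(play(2));       // true roughly 50% of the time
console.log(play(1000000)); // true roughly 0.0001% of the time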
Yes, technically speaking, it is probably true that the floating point numbers are not really uniformly distributed over [0, 1) when calling standard library functions. If you really need that level of precision then you have a much harder problem. But, for a game, regular rand() type functions should be plenty uniform for your purposes.
I have a JavaScript calculator which uses the Math.cbrt() function. When I calculate the cube root of 125 it returns 4.999999999999999. I understand that I could use Math.round() to round any answers this function returns to integer values, but I do not want to do exactly that. Is there a way to apply the rounding only if the result of the calculation is some number followed by a string of 9s (or something similar, like 4.99999998) after the decimal?
What you are dealing with is the frustration of floating point numbers in computing. See the Floating Point Guide for more information on this critical topic.
The short version:
Certain non-integer values cannot be represented accurately by computers, so they store a value that is "near enough". Just try evaluating 3.3 / 3 in your favourite REPL.
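For instance (the digits shown depend on the environment's formatting, but the value is not exactly 1.1):
console.log(3.3 / 3); // 1.0999999999999999, not 1.1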
Say what?!
Computers are supposed to be perfect at this numbers/math thing, right? Can I trust any answer they give me?
Yes, for integers, they are pretty much perfect. But for non-integer calculations, you should assume that answers won't be exact, and account for these floating point errors.
The solution in JavaScript
In newer versions of JavaScript, you have a defined constant, Number.EPSILON, which is the difference between 1 and the smallest floating point number greater than 1 that can actually be stored.
Using this, you can multiply that constant by the result you get, add the product to the result, and you should get the exact value you require.
function cbrt(n) {
return Math.cbrt(n) + (Number.EPSILON * Math.cbrt(n));
}
Alternatively, you can use the rounding behaviour of the .toFixed() method on numbers together with the parseFloat() function if you only care about numbers up to a certain number of decimal places (less than 20).
function num(n, prec) {
if (prec === void 0) prec = 8; // default to 8 decimal places
return parseFloat(n.toFixed(prec));
}
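Illustrative usage of both helpers, using the cube-root example from the question (the raw value is what the asker's engine reports):
console.log(Math.cbrt(125));      // 4.999999999999999 on the engine in question
console.log(cbrt(125));           // 5
console.log(num(Math.cbrt(125))); // 5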
Another option is to round only when the fractional part is very close to 1:
var result = Math.cbrt(125); // e.g. 4.999999999999999
var threshold = 0.999; // set to whatever you want
var fraction = result % 1;
if (fraction >= threshold) {
  result = Math.round(result);
}
I am trying to Math.floor a number in scientific notation, but at some point the number gets too big and my current method doesn't work anymore. This is what I am using at the moment:
var nr = (number + "").length - 4;
if (nr > 1) {
  nr = Math.pow(10, nr);
  number = Math.floor(number / nr) * nr;
  number = number.toExponential(3);
}
When the number switches to scientific notation by default (I believe that's around e21 and above), my .length method doesn't work anymore, since the length it returns isn't accurate. I can think of a workaround -- find the exponent after the e and update nr so the floor is applied properly -- but it seems like a lot of work for something so simple.
Here's an example number: 8.420960987929105e+79. I want to turn this into 8.420e+79. Is there a way I can Math.floor at the third decimal place always, no matter what the number is? As it stands, when I use toExponential(3) it always rounds the number. My numbers can easily get as high as e+200, so I need an easier way of doing what I'm currently doing.
Edit: I managed to find a workaround that works, besides Connor Peet's answer, for anyone who wants extra options.
var nr = 8.420960987929105e+79+"";
var nr1 = nr.substr(0,4);
var nr2 = nr.substr(4, nr.length);
var finalNr = Number(nr1 + 0 + nr2).toExponential(3);
This way is more of a hack: it adds a 0 after the 4th digit, so when toExponential rounds the result, it effectively gets 'floored'.
I wrote a little snippet to round a number to a certain number of significant figures some time ago. You might find it useful:
function sigFigs(num, figures) {
var delta = Math.pow(10, Math.ceil(Math.log(num) / Math.log(10)) - figures);
return Math.round(num / delta) * delta;
}
sigFigs(number, 3); // => 8.42e+79
I failed to find any constant in the JS language that represents the maximum unsigned 32-bit integer (MAX UINT 32).
Does it exist? I could hardcode the number itself, but I would prefer a more appropriate way of coding it.
For integers, Number.MAX_SAFE_INTEGER would be appropriate, as it's the maximum safe integer in JavaScript (2^53 - 1). The power of 53 comes from how double-precision floating-point numbers work; those are what JavaScript uses to store numbers.
// In the safe integers zone:
const a = Number.MAX_SAFE_INTEGER - 1;
const b = Number.MAX_SAFE_INTEGER - 0;
console.log(a); // 9007199254740990
console.log(b); // 9007199254740991 (a + 1)
console.log(a === b); // false
// Outside the safe integers zone:
const x = Number.MAX_SAFE_INTEGER + 1;
const y = Number.MAX_SAFE_INTEGER + 2;
console.log(x); // 9007199254740992
console.log(y); // Also 9007199254740992, because precision....
console.log(x === y); // true
By the way, imagine what would happen if your iteration hits this kind of unsafe zone: an infinite loop.
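A tiny demonstration of why (safe to run, unlike the loop itself):
let i = Number.MAX_SAFE_INTEGER + 1;
console.log(i + 1 === i); // true: the counter can no longer advance, so a loop waiting for it to grow never ends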
See also:
Number.EPSILON for the difference between 1 and the smallest floating point number greater than 1;
Number.MAX_VALUE for the maximum number representable in JavaScript - not an integer, but floating point.
Number.MIN_SAFE_INTEGER - for the minimum (negative) safe integer in JavaScript.
Number.MIN_VALUE - for the smallest positive number representable in JavaScript (the closest a number can get to zero), not the most negative number.
In some cases it's nicer to just use Number.POSITIVE_INFINITY (or Number.NEGATIVE_INFINITY for negative), for example when finding max/min values - for an empty set you would get this clearly out-of-range value, which is easier to notice and understand.
On the linked pages you can also find other interesting things, like the Number.isSafeInteger function to check whether a number is a safe integer.
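If what you need really is the unsigned 32-bit maximum specifically, there is no built-in constant, but it is easy to derive (a small sketch):
const MAX_UINT32 = Math.pow(2, 32) - 1; // 4294967295, comfortably inside the safe integer range
const sameValue = -1 >>> 0;             // unsigned right shift coerces its operand to uint32
console.log(MAX_UINT32, sameValue, MAX_UINT32 === sameValue); // 4294967295 4294967295 true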
It does not exist; however, you can get the maximum numeric value from the Number object.
You can see it here:
alert(Number.MAX_VALUE);
Reference
JavaScript has no ints; every number is a floating point number, which is of type Number. The maximum value of that is Number.MAX_VALUE, but that is almost certainly not what you are looking for (Number.MAX_VALUE = 1.7976931348623157e+308).
Try this:
<!-- a minimal page to exercise the function -->
<script>
function myFunction() {
  document.getElementById("demo").innerHTML = Number.MAX_VALUE;
}
</script>
<p id="demo"></p>
<button onclick="myFunction()">Show Number.MAX_VALUE</button>
Description
The MAX_VALUE property has a value of approximately 1.79E+308. Values larger than MAX_VALUE are represented as "Infinity".
Because MAX_VALUE is a static property of Number, you always use it as Number.MAX_VALUE, rather than as a property of a Number object you created.
Example: Using MAX_VALUE
The following code multiplies two numeric values. If the result is less than or equal to MAX_VALUE, the func1 function is called; otherwise, the func2 function is called.
if (num1 * num2 <= Number.MAX_VALUE) {
func1();
} else {
func2();
}
I’m having problems generating normally distributed random numbers (mu=0 sigma=1)
using JavaScript.
I've tried the Box-Muller method and the ziggurat algorithm, but the mean of the generated series of numbers comes out as 0.0015 or -0.0018 -- very far from zero! Over 500,000 randomly generated numbers this is a big issue. It should be close to zero, something like 0.000000000001.
I cannot figure out whether it's a problem with the method, or whether JavaScript's built-in Math.random() does not generate exactly uniformly distributed numbers.
Has anyone run into similar problems?
Here you can find the ziggurat function:
http://www.filosophy.org/post/35/normaldistributed_random_values_in_javascript_using_the_ziggurat_algorithm/
And below is the code for the Box-Muller:
function rnd_bmt() {
var x = 0, y = 0, rds, c;
// Get two random numbers from -1 to 1.
// If the radius is zero or greater than 1, throw them out and pick two
// new ones. Rejection sampling throws away about 20% of the pairs.
do {
x = Math.random()*2-1;
y = Math.random()*2-1;
rds = x*x + y*y;
}
while (rds === 0 || rds > 1);
// This magic is the Box-Muller Transform
c = Math.sqrt(-2*Math.log(rds)/rds);
// It always creates a pair of numbers. I'll return them in an array.
// This function is quite efficient so don't be afraid to throw one away
// if you don't need both.
return [x*c, y*c];
}
If you generate n independent normal random variables, the standard deviation of the mean will be sigma / sqrt(n).
In your case n = 500000 and sigma = 1 so the standard error of the mean is approximately 1 / 707 = 0.0014. The 95% confidence interval, given 0 mean, would be around twice this or (-0.0028, 0.0028). Your sample means are well within this range.
Your expectation of obtaining 0.000000000001 (1e-12) is not mathematically grounded. To get within that range of accuracy, you would need to generate about 10^24 samples. At 10,000 samples per second that would still take about 3 trillion years to do... This is precisely why it's good to avoid computing things by simulation if possible.
On the other hand, your algorithm does seem to be implemented correctly :)
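As a quick empirical check (a sketch using the rnd_bmt function from the question), the sample mean over 500,000 draws lands comfortably within that confidence interval:
let sum = 0;
const n = 500000;
for (let i = 0; i < n / 2; i++) {
  const pair = rnd_bmt(); // each call returns two independent draws
  sum += pair[0] + pair[1];
}
console.log(sum / n); // typically on the order of ±0.002, nowhere near 1e-12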