I ran into a problem with this JavaScript exponentiation while loop:
var x = Number(prompt("X:"));
var y = Number(prompt("Y:"));
var count = 1;
var power = 1;
while (count <= y) {
    power = power * x;
    console.log(x + " to the power of " + count + " is: " + power);
    count++;
}
This is the simple mathematical formula X^Y.
For X=5 and Y=25 it goes well until Y=23. It looks like there is some problem with odd values of X, e.g. X=3 and Y=35: the result is wrong from Y=34 on, as if a "1" goes missing during the multiplication. Can someone explain why this happens?
You should know about Number.isSafeInteger in JS:
3 to the power of 33 is: 5559060566555523
3 to the power of 34 is: 16677181699666568
3 to the power of 35 is: 50031545098999704
console.log(
  Number.isSafeInteger(5559060566555523),  // true
  Number.isSafeInteger(16677181699666568), // false
  Number.isSafeInteger(50031545098999704)  // false
)
A safe integer is an integer that
can be exactly represented as an IEEE-754 double precision number, and
whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation.
For example, 2^53 - 1 is a safe integer: it can be exactly represented, and no other integer rounds to it under any IEEE-754 rounding mode. In contrast, 2^53 is not a safe integer: it can be exactly represented in IEEE-754, but the integer 2^53 + 1 can't be directly represented in IEEE-754 but instead rounds to 2^53 under round-to-nearest and round-to-zero rounding. The safe integers consist of all integers from -(2^53 - 1) inclusive to 2^53 - 1 inclusive (± 9007199254740991 or ± 9,007,199,254,740,991).
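A quick check of that boundary, as a minimal sketch:
console.log(Number.isSafeInteger(Math.pow(2, 53) - 1)); // true:  9007199254740991
console.log(Number.isSafeInteger(Math.pow(2, 53)));     // false: 2^53 + 1 rounds to it
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53));   // true:  that rounding in action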
Related
const a = 10;
const b = 0.123456789123456789;
console.log((a + b).toFixed(17));
// 10.12345678912345726
As you can see from the example above, only the .12345678912345 part is shown correctly; as I understand it, JavaScript only considers about 15 digits of precision (including the digits before the decimal point). If I change 10 to 100 it is the same amount, but from the MDN docs I was expecting 17 digits of precision. What exactly does the phrase "17 decimal places of precision" mean?
If I print it without the .toFixed() method, it shows the same ~15-digit precision: a + b gives 10.123456789123457.
Url: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number
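A quick way to see the distinction, as a small sketch: toPrecision counts significant digits, while toFixed counts digits after the decimal point.
const a = 10;
const b = 0.123456789123456789;
console.log((a + b).toPrecision(17)); // "10.123456789123457" (17 significant digits survive)
console.log((a + b).toFixed(17));     // "10.12345678912345726" (digits beyond ~17 significant are noise)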
According to the JS/ECMAScript specification, the Number type uses double-precision floating point, a 64-bit format (binary64) consisting of a sign bit (which determines positive or negative), 11 exponent bits, and 52 fraction bits (enough for roughly 15 to 17 significant decimal digits):
The Number type representing the double-precision 64-bit format IEEE
754-2008 values as specified in the IEEE Standard for Binary
Floating-Point Arithmetic.
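To make the layout concrete, here is a small sketch that reads the raw bits of a double from JavaScript itself (it assumes an engine with DataView and BigInt support, e.g. any modern browser or Node):
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, -2.5);                       // write a double (big-endian by default)
const bits = view.getBigUint64(0);              // read the same 8 bytes as a raw integer
const sign = bits >> 63n;                       // 1 sign bit
const exp  = ((bits >> 52n) & 0x7ffn) - 1023n;  // 11 exponent bits, bias 1023
const frac = bits & 0xfffffffffffffn;           // 52 fraction bits
console.log(sign, exp, frac.toString(2).padStart(52, "0"));
// 1n 1n "0100000000000000000000000000000000000000000000000000"
// i.e. -(1.01 in binary) * 2^1 = -2.5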
The largest integer that double precision can represent exactly, with every smaller integer also exact, is 9007199254740992, i.e. Math.pow(2, 53). For numbers between Math.pow(2, 53) and Math.pow(2, 54) (or between -Math.pow(2, 53) and -Math.pow(2, 54)), only even integers can be represented exactly, because the exponent doubles the spacing between representable values: the least-significant fraction bit is now worth 2 instead of 1.
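To see how the spacing grows, here is a hypothetical helper (a sketch that assumes positive, normal values) computing the gap between consecutive doubles near x:
var ulpNear = function (x) { // hypothetical helper; assumes positive, normal x
    return Math.pow(2, Math.floor(Math.log2(x)) - 52);
};
console.log(ulpNear(Math.pow(2, 52))); // 1: every integer in [2^52, 2^53) is exact
console.log(ulpNear(Math.pow(2, 53))); // 2: only even integers in [2^53, 2^54)
console.log(ulpNear(Math.pow(2, 54))); // 4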
Let's review the large-number part:
var x = 12345678912345.6789
var x = new Number(12345678912345.6789)
This value needs more than 52 significand bits, so it is rounded to fit into the 52 fraction bits that are available.
The same happens with this decimal fraction:
var x = new Number(.12345678912367890)
Its binary expansion does not terminate, so it too is rounded to the nearest representable double (the trailing zero of the literal carries no information anyway).
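You can observe this rounding by asking for more digits than the literal survives; 0.1 is the classic example:
console.log((0.1).toPrecision(25)); // "0.1000000000000000055511151": 0.1 is not exact in binary64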
Values that need more precision than binary64 offers (integers beyond 9007199254740992, or fractions finer than about 1.1102230246251565E-16, which is 2^-53) cannot be stored exactly as a Number; if you need them exact, keep them as literal strings. If you need to compute with very large numbers, there are external libraries available that perform arbitrary-precision arithmetic.
If you want more than ~16 significant digits you can either:
Use literal string to represent your number
Use external libraries like math.js, BigInteger.js or strint library.
Consider this code (node v5.0.0)
const a = Math.pow(2, 53)
const b = Math.pow(2, 53) + 1
const c = Math.pow(2, 53) + 2
console.log(a === b) // true
console.log(a === c) // false
Why is a === b true?
What is the maximum integer value JavaScript can handle?
I'm implementing a random integer generator for values up to 2^64. Are there any pitfalls I should be aware of?
How does JavaScript treat large integers?
JS does not have integers. JS numbers are 64-bit floats. They are stored as a mantissa and an exponent.
The precision is given by the mantissa, the magnitude by the exponent.
If your number needs more precision than what can be stored in the mantissa, the least significant bits will be truncated.
9007199254740992; // 9007199254740992
(9007199254740992).toString(2);
// "100000000000000000000000000000000000000000000000000000"
// \ \ ... /\
// 1 10 53 54
// The 54th bit is not stored, but that's no problem because it's 0
9007199254740993; // 9007199254740992
(9007199254740993).toString(2);
// "100000000000000000000000000000000000000000000000000000"
// \ \ ... /\
// 1 10 53 54
// The 54th bit should be 1, but the mantissa only has 53 bits!
9007199254740994; // 9007199254740994
(9007199254740994).toString(2);
// "100000000000000000000000000000000000000000000000000010"
// \ \ ... /\
// 1 10 53 54
// The 54th bit is not stored, but that's no problem because it's 0
Then, you can store all these integers:
-9007199254740992, -9007199254740991, ..., 9007199254740991, 9007199254740992
The second one is called the minimum safe integer:
The value of Number.MIN_SAFE_INTEGER is the smallest integer n such
that n and n − 1 are both exactly representable as a Number value.
The value of Number.MIN_SAFE_INTEGER is −9007199254740991
(−(2^53 − 1)).
The second last one is called the maximum safe integer:
The value of Number.MAX_SAFE_INTEGER is the largest integer n such
that n and n + 1 are both exactly representable as a Number value.
The value of Number.MAX_SAFE_INTEGER is 9007199254740991
(2^53 − 1).
Answering your second question, here is your maximum safe integer in JavaScript:
console.log( Number.MAX_SAFE_INTEGER );
All the rest is written in MDN:
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The
reasoning behind that number is that JavaScript uses double-precision
floating-point format numbers as specified in IEEE 754 and can only
safely represent numbers between -(2 ** 53 - 1) and 2 ** 53 - 1.
Safe in this context refers to the ability to represent integers
exactly and to correctly compare them. For example,
Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will
evaluate to true, which is mathematically incorrect. See
Number.isSafeInteger() for more information.
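Verifying the claim quoted above takes one line:
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true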
.:: JavaScript only supports 53 bit integers ::.
All numbers in JavaScript are floating point, which means that integers are always represented as
sign × mantissa × 2^exponent
The mantissa has 53 bits. You can use the exponent to get higher integers, but then they won't be contiguous anymore. For example, you generally need to multiply the mantissa by two (exponent 1) in order to reach the 54th bit.
However, if you multiply by two, you will only be able to represent every second integer:
Math.pow(2, 53)      // 9007199254740992 (needs 54 bits)
Math.pow(2, 53) + 1  // 9007199254740992
Math.pow(2, 53) + 2  // 9007199254740994
Math.pow(2, 53) + 3  // 9007199254740996
Math.pow(2, 53) + 4  // 9007199254740996
Rounding effects during the addition make things unpredictable for odd increments (+1 versus +3). The actual representation is a bit more complicated but this explanation should help you understand the basic problem.
You can use the strint library to encode large integers as strings and perform arithmetic operations on them too.
Here is the full article.
Number.MAX_VALUE will tell you the largest floating-point value representable in your JS implementation. The answer will likely be: 1.7976931348623157e+308. But that doesn't mean that every integer up to 10^308 can be represented exactly. As your example code shows, beyond 2^53 only even numbers can be represented, and as you go farther out on the number line the gaps get much wider.
If you need exact integers larger than 2^53, you probably want to work with a bignum package, which allows for arbitrarily large integers (within the bounds of available memory). Two packages that I happen to know are:
BigInt by Leemon
and
Crunch
To supplement the other answers here, it's worth mentioning that BigInt exists. It allows JavaScript to handle arbitrarily large integers.
Use the n suffix on your numbers and regular operators, like 2n ** 53n + 2n. It's important to point out that a BigInt is not a Number, but you can do range-limited interoperation with Number via explicit conversions.
Some examples at the Node.js REPL:
> 999999999999999999999999999999n + 1n
1000000000000000000000000000000n
> 2n ** 53n
9007199254740992n
> 2n ** 53n + 1n
9007199254740993n
> 2n ** 53n == 2n ** 53n + 1n
false
> typeof 1n
'bigint'
> 3 * 4n
TypeError: Cannot mix BigInt and other types, use explicit conversions
> BigInt(3) * 4n
12n
> 3 * Number(4n)
12
> Number(2n ** 53n) == Number(2n ** 53n + 1n)
true
So I have read this answer here:
Is floating point math broken?
It says that because every number in JS is a double-precision float, 0.1 + 0.2, for example, will NOT equal 0.3.
But I don't understand why this never happens with integers. Why does 1 + 2 always equal 3, etc.? It would seem that integers like 1 or 2, just like 0.1 and 0.2, don't have a perfect representation in binary64, so their math should also sometimes break, but that never happens.
Why is that?
but I don't understand why it never happens with integers?
It does; the integers just have to be really big before they hit the limits of the IEEE-754 format:
var a = 9007199254740992;
console.log(a); // 9007199254740992
var b = a + 1;
console.log(b); // still 9007199254740992
console.log(a == b); // true
Floating point formats, such as IEEE-754, essentially describe a value with the following expression:
value := sign * mantissa * 2 ^ exponent
The mantissa is an integer of various sizes: for four-byte floating point, the mantissa is 24 bits, and for eight-byte floating point, it is 53 bits (52 stored explicitly plus an implicit leading bit). If the exponent is 0, the value of the expression is determined only by the sign and the mantissa. This is, in fact, how integers are represented by JavaScript.
What seems to take most people by surprise is the base-2 exponent instead of base 10. We accept that, in base 10, the result of 1/3 or 2/3 cannot be exactly represented without an infinite number of digits or the acceptance of round-off error. Similarly, there are fractions in base 2 with the same issue. Unfortunately for our base-10 mindset, they include the decimal fractions we write most often: negative powers of 10 such as 0.1 have no finite representation in base 2.
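A short sketch contrasting the two cases from the question:
console.log(1 + 2 === 3);       // true:  1, 2 and 3 are exact in binary64
console.log(0.1 + 0.2 === 0.3); // false: 0.1, 0.2 and 0.3 are all rounded
console.log(0.1 + 0.2);         // 0.30000000000000004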
This question asks about the highest number in JavaScript without losing precision. Here, I ask about the highest representable number in JavaScript. Some answers to the other question reference the answer to this question, but they do not answer that question, so I hope I am safe asking here.
I tried to find the answer, but I got lost halfway through. The highest number representable in JavaScript seems to be somewhere between 2^1023 and 2^1024. I went further (in the iojs REPL) with
var highest = Math.pow(2, 1023);
for (let i = 1022; i > someNumber; i--) {
    highest += Math.pow(2, i);
}
The highest number here seems to be when someNumber is between 969 and 970. This means it is between (2^1023 + 2^1022 + ... + 2^970) and (2^1023 + 2^1022 + ... + 2^969). I'm not sure how to go further without running out of memory and/or waiting years for a loop or function to finish.
What is the highest number representable in JavaScript? Does JavaScript store all the digits of this number, or just some? I ask because whenever I see numbers of 10^21 or higher, they are represented in scientific notation. Why can JavaScript represent these extremely high numbers, and can it really "remember" all the digits? Isn't JavaScript a base 64 language?
Finally, why is this the highest representable number? I am asking because it is not an integer exponent of 2. Is it because it is a floating point number? If we took the highest floating point number, would that be related to some exponent of 2?
ECMAScript uses IEEE 754 floating points to represent all numbers (integers and floating points) in memory. It uses double precision (64 bit), so the largest possible number would be the following (in binary):
(-1)^0 * (1.1111111111111111111111111111111111111111111111111111)_2 * 2^1023
(one sign bit, 52 binary fraction digits, and the maximum finite exponent, 1023)
That is 1.9999999999999997779553950749686919152736663818359375 * 2^1023, which is approximately 1.7976931348623157 * 10^308. That number is also available in JavaScript as Number.MAX_VALUE.
JavaScript uses IEEE 754 double-precision floating point numbers, aka binary64. This format has 1 sign bit, 11 bits of exponent, and 52 bits of mantissa.
The highest possible number is the one encoded with the highest possible exponent and mantissa, with a 0 sign bit. Except that the exponent value of 7ff (base 16) is used to encode Infinity and NaNs. The largest number is therefore encoded as 7fef ffff ffff ffff, and its value is (1 + (1 − 2^(−52))) × 2^1023.
Refer to the linked article for further details about the formula.
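As a sketch, you can reconstruct that bit pattern and check it against Number.MAX_VALUE (assumes DataView/BigInt support):
const view = new DataView(new ArrayBuffer(8));
view.setBigUint64(0, 0x7fefffffffffffffn);                     // highest finite bit pattern
console.log(view.getFloat64(0) === Number.MAX_VALUE);          // true
console.log((1 + (1 - Math.pow(2, -52))) * Math.pow(2, 1023)); // 1.7976931348623157e+308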
I'm reading the second chapter of the book Eloquent JavaScript. The author states that:
Any whole number less than 2^52 (which is more than 10^15) will safely fit in a JavaScript number.
I grabbed the value of 2^52 from Wikipedia.
4,503,599,627,370,496
The value has to be less than 2^52, so I've subtracted 1 from the initial value:
var max = 4503599627370495;
After defining the max variable I'm checking what's the value (I'm using Chrome 32.0.1700.77).
console.log(max); // 4503599627370495
I'd like to see what happens when I go over this limit, so I'm adding one a couple of times.
Unexpectedly:
max += 1;
console.log(max); // 4503599627370496
max += 1;
console.log(max); // 4503599627370497
max += 1;
console.log(max); // 4503599627370498
I went over the limit and the calculations are still precise.
I tried the next power of two instead, 2^53; I didn't subtract 1 this time:
9,007,199,254,740,992
var max = 9007199254740992;
This one seems to be a bigger limit; it seems that I can quite safely add and subtract numbers:
max += 1;
console.log(max); // 9007199254740992
max += 1;
console.log(max); // 9007199254740992
max -= 1;
console.log(max); // 9007199254740991
max += 1;
console.log(max); // 9007199254740992
max -= 900;
console.log(max); // 9007199254740092
max += 900;
console.log(max); // 9007199254740992
I can assign an even bigger value to max, but then it loses precision and I can't safely add or subtract numbers anymore.
Could you please explain precisely the mechanism that sits under the hood? An example of what happens with the bits after going above 2^52 would be really helpful.
JavaScript is not a strongly typed programming language; it has the Number object. You can even get an infinite number: document.write(Math.exp(1000));.
document.write(Number.MIN_VALUE + "<br>");
document.write(Number.MAX_VALUE + "<br>");
document.write(Number.POSITIVE_INFINITY + "<br>");
document.write(Number.NEGATIVE_INFINITY + "<br>");
alert([
Number.MAX_VALUE/(1e293),
Number.MAX_VALUE/(1e292),
Number.MAX_VALUE/(1e291),
Number.MAX_VALUE/(1e290),
].join('\n'))
Hope it's a useful answer. Thanks!
UPDATE:
The max integer is ±9007199254740992.
You can find some information on JavaScript's Number type here: ECMA-262 5th Edition: The Number Type.
As it mentions, numbers are represented as a 64-bit floating-point number, with 53 bits of mantissa (significant digits) and 11 bits for the exponent (IEEE 754). The result is then obtained with: mantissa * 2^exponent.
This means that up to 2^53 values can be represented in the mantissa (of those a few numbers have special meanings, and the others are positive and negative integers).
The number 2^53 (9007199254740992) can't be represented in the mantissa, and you have to use an exponent. As an example, you can represent 2^53 as (9007199254740992 / 2) * 2^1, ie. mantissa = 9007199254740992 / 2 = 4503599627370496 and exponent = 1.
Let's check what happens with 2^53+1 (9007199254740993). Here we have to do the same, mantissa = 9007199254740993 / 2 = 4503599627370496. Oops, isn't this the same mantissa we had for 2^53? Looks like there's been some rounding error! :)
(Note: the above examples are not actually how it really works: the mantissa is always interpreted as having a dot after the first digit, which means that, e.g., the number 3 is actually stored as 1.5*2. I omitted this in the above explanation to make it easier to follow.)
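The rounding described above is observable directly, since the literal itself is rounded before any arithmetic happens:
console.log(9007199254740993);                      // 9007199254740992
console.log(9007199254740993 === 9007199254740992); // true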
You can find some more information on floating-point numbers (in general) here: What Every Computer Scientist Should Know About Floating-Point Arithmetic.
You can think of JS integers as 53-bit, but remember that the bitwise logical operators (&, |, >>, etc.) only operate on the 32 least-significant bits, discarding the rest.
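A quick demonstration of that truncation:
console.log(Math.pow(2, 32) | 0);       // 0: bits above the 32nd are discarded
console.log((Math.pow(2, 32) + 5) | 0); // 5
console.log((Math.pow(2, 53) - 1) | 0); // -1: the low 32 bits are all ones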