JavaScript - What is the maximum number in scientific notation restricted by?

In JavaScript, I was curious to find out what was the maximum possible number representable in scientific notation without getting "Infinity" as a result, so I wrote a little program and found out it's this one:
17976931348623158079372897140530341507993413271003782693617377898044496829276475094664901797758720709633028641669288791094655554785194040263065748867150582068190890200070838367627385484581771153176447573027006985557136695962284291481986083493647529271907416844436551070434271155969950809304288017790417449779
which can be abbreviated to 1.7976931348623157e+308.
My question is, what makes this specific number the maximum possible in JavaScript? Is it hardware-dependent (maybe maximum one on 64 bit?) or language-specific? Why exactly is 308 the maximum usable power of 10?
And also, how different is it in other languages?

Short answer:
Double precision float. Due to how the double data-type is defined.
Long answer:
All floating point numbers (double is a double-precision float) are written as a product of two values, the mantissa and the exponent. In principle, this works similar to how numbers are written in scientific notation: for the number 1.34 * 10^24, the mantissa is 1.34 and the exponent is 24.
https://en.wikipedia.org/wiki/Double-precision_floating-point_format
Number.MAX_VALUE
The value of Number.MAX_VALUE is the largest positive finite value of the Number type, which is approximately 1.7976931348623157e+308.
This property has the attributes { [[Writable]]: false, [[Enumerable]]: false, [[Configurable]]: false }.
http://ecma262-5.com/ELS5_HTML.htm#Section_8.5
What differs for floats (and doubles) is that you split the total bytes that hold the number into two parts, one for the mantissa and one for the exponent.
For a double, the exponent field is 11 bits wide (8 bits for a single-precision float). It is stored with a bias rather than a sign bit: subtracting 1023 (127 for floats) from the stored value gives true exponents from -1022 to +1023 for normal doubles, with the two extreme encodings reserved for zero/subnormals and for Infinity/NaN.
However, the base of the exponent is not 10, but 2. The largest finite double therefore combines the maximum exponent 1023 with an all-ones mantissa: (2 - 2^-52) * 2^1023 ≈ 1.7976931348623157 * 10^308, just under 2^1024.
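You can verify that formula directly in a JavaScript console (a quick sketch; the arithmetic mirrors the expression above):

// Largest finite double: maximum exponent 1023 with an all-ones 52-bit fraction
const maxValue = (2 - 2 ** -52) * 2 ** 1023;
console.log(maxValue === Number.MAX_VALUE); // true
console.log(maxValue);                      // 1.7976931348623157e+308
console.log(maxValue * 2);                  // Infinity (would need exponent 1024)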

Related

How to find the remainder value for this calculation in JavaScript or Java? [duplicate]

Why do some numbers lose accuracy when stored as floating point numbers?
For example, the decimal number 9.2 can be expressed exactly as a ratio of two decimal integers (92/10), both of which can be expressed exactly in binary (0b1011100/0b1010). However, the same ratio stored as a floating point number is never exactly equal to 9.2:
32-bit "single precision" float: 9.19999980926513671875
64-bit "double precision" float: 9.199999999999999289457264239899814128875732421875
How can such an apparently simple number be "too big" to express in 64 bits of memory?
In most programming languages, floating point numbers are represented a lot like scientific notation: with an exponent and a mantissa (also called the significand). A very simple number, say 9.2, is actually this fraction:
5179139571476070 * 2^-49
Where the exponent is -49 and the mantissa is 5179139571476070. The reason it is impossible to represent some decimal numbers this way is that both the exponent and the mantissa must be integers. In other words, all floats must be an integer multiplied by an integer power of 2.
9.2 may be simply 92/10, but 10 cannot be expressed as 2^n if n is limited to integer values.
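You can check that fraction in a JavaScript console (a quick verification, not part of the original answer):

// 9.2 is stored as exactly this integer times a power of two
console.log(9.2 === 5179139571476070 * 2 ** -49); // true
console.log(5179139571476070 / 2 ** 49);          // prints 9.2 (shortest round-trip form)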
Seeing the Data
First, a few functions to see the components that make a 32- and 64-bit float. Gloss over these if you only care about the output (example in Python):
import struct
from itertools import islice

def float_to_bin_parts(number, bits=64):
    if bits == 32:  # single precision
        int_pack = 'I'
        float_pack = 'f'
        exponent_bits = 8
        mantissa_bits = 23
        exponent_bias = 127
    elif bits == 64:  # double precision. all python floats are this
        int_pack = 'Q'
        float_pack = 'd'
        exponent_bits = 11
        mantissa_bits = 52
        exponent_bias = 1023
    else:
        raise ValueError('bits argument must be 32 or 64')
    bin_iter = iter(bin(struct.unpack(int_pack, struct.pack(float_pack, number))[0])[2:].rjust(bits, '0'))
    return [''.join(islice(bin_iter, x)) for x in (1, exponent_bits, mantissa_bits)]
There's a lot of complexity behind that function, and it'd be quite the tangent to explain, but if you're interested, the important resource for our purposes is the struct module.
Python's float is a 64-bit, double-precision number. In other languages such as C, C++, Java and C#, double-precision has a separate type double, which is often implemented as 64 bits.
When we call that function with our example, 9.2, here's what we get:
>>> float_to_bin_parts(9.2)
['0', '10000000010', '0010011001100110011001100110011001100110011001100110']
Interpreting the Data
You'll see I've split the return value into three components. These components are:
Sign
Exponent
Mantissa (also called Significand, or Fraction)
Sign
The sign is stored in the first component as a single bit. It's easy to explain: 0 means the float is a positive number; 1 means it's negative. Because 9.2 is positive, our sign value is 0.
Exponent
The exponent is stored in the middle component as 11 bits. In our case, 0b10000000010. In decimal, that represents the value 1026. A quirk of this component is that you must subtract a number equal to 2^(# of bits - 1) - 1 to get the true exponent; in our case, that means subtracting 0b1111111111 (decimal number 1023) to get the true exponent, 0b00000000011 (decimal number 3).
Mantissa
The mantissa is stored in the third component as 52 bits. However, there's a quirk to this component as well. To understand this quirk, consider a number in scientific notation, like this:
6.0221413 x 10^23
The mantissa would be the 6.0221413. Recall that the mantissa in scientific notation always begins with a single non-zero digit. The same holds true for binary, except that binary only has two digits: 0 and 1. So the binary mantissa always starts with 1! When a float is stored, the 1 at the front of the binary mantissa is omitted to save space; we have to place it back at the front of our third element to get the true mantissa:
1.0010011001100110011001100110011001100110011001100110
This involves more than just a simple addition, because the bits stored in our third component actually represent the fractional part of the mantissa, to the right of the radix point.
When dealing with decimal numbers, we "move the decimal point" by multiplying or dividing by powers of 10. In binary, we can do the same thing by multiplying or dividing by powers of 2. Since our third element has 52 bits, we divide it by 2^52 to move it 52 places to the right:
0.0010011001100110011001100110011001100110011001100110
In decimal notation, that's the same as dividing 675539944105574 by 4503599627370496 to get 0.1499999999999999. (This is one example of a ratio that can be expressed exactly in binary, but only approximately in decimal; for more detail, see: 675539944105574 / 4503599627370496.)
Now that we've transformed the third component into a fractional number, adding 1 gives the true mantissa.
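As a cross-check, the same decomposition can be reproduced in JavaScript; a minimal sketch using the standard DataView API (the variable names are mine):

// Decode 9.2's bit fields and rebuild the value, mirroring the Python above
const dv = new DataView(new ArrayBuffer(8));
dv.setFloat64(0, 9.2);
const bits = dv.getBigUint64(0);
const sign = Number(bits >> 63n);                           // 0
const exponent = Number((bits >> 52n) & 0x7ffn) - 1023;     // 3
const fraction = Number(bits & 0xfffffffffffffn) / 2 ** 52; // 0.1499999...
console.log((-1) ** sign * (1 + fraction) * 2 ** exponent === 9.2); // true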
Recapping the Components
Sign (first component): 0 for positive, 1 for negative
Exponent (middle component): Subtract 2^(# of bits - 1) - 1 to get the true exponent
Mantissa (last component): Divide by 2^(# of bits) and add 1 to get the true mantissa
Calculating the Number
Putting all three parts together, we're given this binary number:
1.0010011001100110011001100110011001100110011001100110 x 2^3
Which we can then convert from binary to decimal:
1.1499999999999999 x 2^3 (inexact!)
And multiply to reveal the final representation of the number we started with (9.2) after being stored as a floating point value:
9.1999999999999993
Representing as a Fraction
9.2
Now that we've built the number, it's possible to reconstruct it into a simple fraction:
1.0010011001100110011001100110011001100110011001100110 x 2^3
Shift mantissa to a whole number:
10010011001100110011001100110011001100110011001100110 x 2^(3-52)
Convert to decimal:
5179139571476070 x 2^(3-52)
Subtract the exponent:
5179139571476070 x 2^-49
Turn negative exponent into division:
5179139571476070 / 2^49
Multiply exponent:
5179139571476070 / 562949953421312
Which equals:
9.1999999999999993
9.5
>>> float_to_bin_parts(9.5)
['0', '10000000010', '0011000000000000000000000000000000000000000000000000']
Already you can see the mantissa is only 4 digits followed by a whole lot of zeroes. But let's go through the paces.
Assemble the binary scientific notation:
1.0011 x 2^3
Shift the decimal point:
10011 x 2^(3-4)
Subtract the exponent:
10011 x 2^-1
Binary to decimal:
19 x 2^-1
Negative exponent to division:
19 / 2^1
Multiply exponent:
19 / 2
Equals:
9.5
Further reading
The Floating-Point Guide: What Every Programmer Should Know About Floating-Point Arithmetic, or, Why don’t my numbers add up? (floating-point-gui.de)
What Every Computer Scientist Should Know About Floating-Point Arithmetic (Goldberg 1991)
IEEE Double-precision floating-point format (Wikipedia)
Floating Point Arithmetic: Issues and Limitations (docs.python.org)
Floating Point Binary
This isn't a full answer (mhlester already covered a lot of good ground I won't duplicate), but I would like to stress how much the representation of a number depends on the base you are working in.
Consider the fraction 2/3
In good-ol' base 10, we typically write it out as something like
0.666...
0.666
0.667
When we look at those representations, we tend to associate each of them with the fraction 2/3, even though only the first representation is mathematically equal to the fraction. The second and third representations/approximations have an error on the order of 0.001, which is actually much worse than the error between 9.2 and 9.1999999999999993. In fact, the second representation isn't even rounded correctly! Nevertheless, we don't have a problem with 0.666 as an approximation of the number 2/3, so we shouldn't really have a problem with how 9.2 is approximated in most programs. (Yes, in some programs it matters.)
Number bases
So here's where number bases are crucial. If we were trying to represent 2/3 in base 3, then
(2/3)₁₀ = 0.2₃
In other words, we have an exact, finite representation for the same number by switching bases! The take-away is that even though you can convert any number to any base, all rational numbers have exact finite representations in some bases but not in others.
To drive this point home, let's look at 1/2. It might surprise you that even though this perfectly simple number has an exact representation in base 10 and 2, it requires a repeating representation in base 3.
(1/2)₁₀ = 0.5₁₀ = 0.1₂ = 0.111...₃
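To make this concrete, here is a small illustrative sketch (my own, in JavaScript; the function expand is hypothetical) that writes out a fraction digit by digit in any base:

// Expand num/den in the given base, digit by digit, stopping after maxDigits
function expand(num, den, base, maxDigits = 12) {
    let digits = "";
    for (let i = 0; i < maxDigits && num !== 0; i++) {
        num *= base;
        digits += Math.floor(num / den);
        num %= den;
    }
    return "0." + digits + (num !== 0 ? "..." : "");
}

console.log(expand(2, 3, 10)); // "0.666666666666..."
console.log(expand(2, 3, 3));  // "0.2"
console.log(expand(1, 2, 2));  // "0.1"
console.log(expand(1, 2, 3));  // "0.111111111111..."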
Why are floating point numbers inaccurate?
Because they are often approximating rationals that cannot be represented finitely in base 2 (the digits repeat), and more generally they are approximating real (possibly irrational) numbers that may not be representable in finitely many digits in any base.
While all of the other answers are good there is still one thing missing:
It is impossible to represent irrational numbers (e.g. π, sqrt(2), log(3), etc.) precisely!
And that actually is why they are called irrational. No amount of bit storage in the world would be enough to hold even one of them. Only symbolic arithmetic is able to preserve their precision.
If you limit your math needs to rational numbers only, though, the problem of precision becomes manageable. You would need to store a pair of (possibly very big) integers a and b to hold the number represented by the fraction a/b. All your arithmetic would have to be done on fractions, just like in high-school math (e.g. a/b * c/d = ac/bd).
But of course you would still run into the same kind of trouble when pi, sqrt, log, sin, etc. are involved.
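As an illustration, here is a minimal sketch of such a fraction type in JavaScript, using BigInt for the (possibly very big) integers; the names frac, add and mul are my own:

// Exact rational arithmetic on BigInt pairs, reduced to lowest terms with gcd
const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));

function frac(num, den) {
    const g = gcd(num < 0n ? -num : num, den);
    return { num: num / g, den: den / g };
}

const mul = (x, y) => frac(x.num * y.num, x.den * y.den);
const add = (x, y) => frac(x.num * y.den + y.num * x.den, x.den * y.den);

const sum = add(frac(1n, 10n), frac(2n, 10n));
console.log(`${sum.num}/${sum.den}`); // "3/10" exactly, whereas 0.1 + 0.2 !== 0.3
const prod = mul(frac(1n, 3n), frac(3n, 4n));
console.log(`${prod.num}/${prod.den}`); // "1/4"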
TL;DR
For hardware-accelerated arithmetic, only a limited set of rational numbers can be represented. Every non-representable number is approximated. Some numbers (the irrationals) can never be represented, no matter the system.
There are infinitely many real numbers (so many that you can't enumerate them), and there are infinitely many rational numbers (it is possible to enumerate them).
The floating-point representation is finite (like anything in a computer), so unavoidably many, many numbers are impossible to represent. In particular, 64 bits allow you to distinguish among only 18,446,744,073,709,551,616 different values (which is nothing compared to infinity). With the standard convention, 9.2 is not one of them. Those that can be represented are of the form m * 2^e for some integers m and e.
You might come up with a different numeration system, base 10 for instance, in which 9.2 would have an exact representation. But other numbers, say 1/3, would still be impossible to represent.
Also note that double-precision floating-point numbers are extremely accurate. They can represent any number in a very wide range with as many as 15 exact digits. For daily life computations, 4 or 5 digits are more than enough. You will never really need those 15, unless you want to count every millisecond of your lifetime.
Why can we not represent 9.2 in binary floating point?
Floating point numbers are (simplifying slightly) a positional numbering system with a restricted number of digits and a movable radix point.
A fraction can only be expressed exactly using a finite number of digits in a positional numbering system if the prime factors of the denominator (when the fraction is expressed in its lowest terms) are factors of the base.
The prime factors of 10 are 5 and 2, so in base 10 we can represent any fraction of the form a/(2^b * 5^c).
On the other hand, the only prime factor of 2 is 2, so in base 2 we can only represent fractions of the form a/(2^b).
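A quick demonstration in a JavaScript console:

console.log(0.5 + 0.25 === 0.75); // true: denominators 2 and 4 are powers of 2
console.log(0.1 + 0.2 === 0.3);   // false: denominators 10 and 5 contain the prime factor 5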
Why do computers use this representation?
Because it's a simple format to work with and it is sufficiently accurate for most purposes. Basically the same reason scientists use "scientific notation" and round their results to a reasonable number of digits at each step.
It would certainly be possible to define a fraction format, with (for example) a 32-bit numerator and a 32-bit denominator. It would be able to represent numbers that IEEE double precision floating point could not, but equally there would be many numbers that can be represented in double precision floating point that could not be represented in such a fixed-size fraction format.
However, the big problem is that such a format is a pain to do calculations with, for two reasons.
If you want to have exactly one representation of each number, then after each calculation you need to reduce the fraction to its lowest terms. That means that for every operation you basically need to do a greatest-common-divisor calculation.
If after your calculation you end up with a numerator or denominator too large to represent, you need to find the closest representable result. This is non-trivial.
Some languages do offer fraction types, but usually in combination with arbitrary precision. This avoids needing to worry about approximating fractions, but it creates its own problem: when a number passes through a large number of calculation steps, the size of the denominator, and hence the storage needed for the fraction, can explode.
Some languages also offer decimal floating point types. These are mainly used in scenarios where it is important that the results the computer gets match pre-existing rounding rules that were written with humans in mind (chiefly financial calculations). They are slightly more difficult to work with than binary floating point, but the biggest problem is that most computers don't offer hardware support for them.

JavaScript Numerics: multiply first then divide? [duplicate]


JavaScript Highest Representable Number

This question asks about the highest number in JavaScript without losing precision. Here, I ask about the highest representable number in JavaScript. Some answers to the other question reference the answer to this question, but they do not answer that question, so I hope I am safe asking here.
I tried to find the answer, but I got lost halfway through. The highest number representable in JavaScript seems to be somewhere between 2^1023 and 2^1024. I went further (in the iojs REPL) with
var highest = Math.pow(2, 1023);
for (let i = 1022; i > someNumber; i--) {
    highest += Math.pow(2, i);
}
The highest number here seems to be when someNumber is between 969 and 970. This means it is between (2^1023 + 2^1022 + ... + 2^970) and (2^1023 + 2^1022 + ... + 2^969). I'm not sure how to go further without running out of memory and/or waiting years for a loop or function to finish.
What is the highest number representable in JavaScript? Does JavaScript store all digits of this number, or just some, because whenever I see numbers of 10^21 or higher they are represented in scientific notation? Why can JavaScript represent these extremely high numbers, especially if it can "remember" all the digits? Isn't JavaScript a base 64 language?
Finally, why is this the highest representable number? I am asking because it is not an integer exponent of 2. Is it because it is a floating point number? If we took the highest floating point number, would that be related to some exponent of 2?
ECMAScript uses IEEE 754 floating point to represent all numbers (both integers and fractional values) in memory. It uses double precision (64 bit), so the largest possible number would be the following (in binary):
(-1)^0 * (1.1111111111111111111111111111111111111111111111111111)_2 * 2^1023
     ^      ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^        ^^^^
  sign bit  52 binary digits                                         exponent
That is 1.9999999999999997779553950749686919152736663818359375 * 2^1023, i.e. exactly 2^1024 - 2^971, a 309-digit integer approximately equal to 1.7976931348623157 * 10^308. That number is also available in JavaScript as Number.MAX_VALUE.
JavaScript uses IEEE 754 double-precision floating point numbers, a.k.a. binary64. This format has 1 sign bit, 11 bits of exponent, and 52 bits of mantissa.
The highest possible number is the one encoded using the highest possible exponent and mantissa, with a 0 sign bit, except that the exponent value 7ff (base 16) is reserved to encode Infinity and NaNs. The largest finite number is therefore encoded as 7fef ffff ffff ffff, and its value is (1 + (1 - 2^(-52))) * 2^1023.
Refer to the linked article for further details about the formula.
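You can confirm that encoding from JavaScript itself with the standard DataView API (a small sketch of my own):

// Dump the raw big-endian bytes of Number.MAX_VALUE
const view = new DataView(new ArrayBuffer(8));
view.setFloat64(0, Number.MAX_VALUE); // big-endian by default
let hex = "";
for (let i = 0; i < 8; i++) {
    hex += view.getUint8(i).toString(16).padStart(2, "0");
}
console.log(hex); // "7fefffffffffffff"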

Why is Number.MAX_VALUE 1.7976931348623157e+308 instead of 9007199254740992e+1024?

JavaScript uses IEEE 754 for storing numbers, for both integer and floating point values, where 53 bits are used for representing the mantissa and 11 bits are used for representing the exponent.
The maximum value representable with a signed 53 bit integer is ±9007199254740992, and the maximum value representable with a signed 11 bit integer is ±1024.
However, Number.MAX_VALUE in JavaScript is 1.7976931348623157e+308. Why isn’t it 9007199254740992e+1024, which is possible to represent with 64 bits and is the larger value?
The AeB syntax mirrors scientific notation and hence means A * 10^B, not A * 2^B. The maximum binary64 value is (2^53 - 1) * 2^971, which is just under 2^1024, not 9007199254740992 * 10^1024. Your proposed 9007199254740992e+1024 means the latter and is much larger (10^1024 ≈ 2^3402), so it doesn't actually fit in a 64-bit binary float (try it!).
The scientific notation e+308 means 10^308, which is about 0.5562*2^1024.
The actual answer is (2^53 - 1)/2^52 * 2^1023. Look it up on Wikipedia (Double-precision floating-point format).
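Both claims are easy to check in a JavaScript console:

console.log(9007199254740992e+1024);                        // Infinity: 10^1024 overflows binary64
console.log((2 ** 53 - 1) * 2 ** 971 === Number.MAX_VALUE); // true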

Large numbers erroneously rounded in JavaScript

See this code:
var jsonString = '{"id":714341252076979033,"type":"FUZZY"}';
var jsonParsed = JSON.parse(jsonString);
console.log(jsonString, jsonParsed);
When I see my console in Firefox 3.5, the value of jsonParsed is the number rounded:
Object id=714341252076979100 type=FUZZY
Tried different values, the same outcome (number rounded).
I also don't get its rounding rules. 714341252076979136 is rounded to 714341252076979200, whereas 714341252076979135 is rounded to 714341252076979100.
Why is this happening?
You're overflowing the capacity of JavaScript's number type; see §8.5 of the spec for details. Those IDs will need to be strings.
IEEE-754 double-precision floating point (the kind of number JavaScript uses) can't precisely represent all numbers (of course). Famously, 0.1 + 0.2 == 0.3 is false. That can affect whole numbers just like it affects fractional numbers; it starts once you get above 9,007,199,254,740,991 (Number.MAX_SAFE_INTEGER).
Beyond Number.MAX_SAFE_INTEGER + 1 (9007199254740992), the IEEE-754 floating-point format can no longer represent every consecutive integer. 9007199254740991 + 1 is 9007199254740992, but 9007199254740992 + 1 is also 9007199254740992 because 9007199254740993 cannot be represented in the format. The next that can be is 9007199254740994. Then 9007199254740995 can't be, but 9007199254740996 can.
The reason is we've run out of bits, so we no longer have a 1s bit; the lowest-order bit now represents multiples of 2. Eventually, if we keep going, we lose that bit and only work in multiples of 4. And so on.
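A quick illustration of those gaps:

console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(9007199254740991 + 1);    // 9007199254740992
console.log(9007199254740992 + 1);    // 9007199254740992 (9007199254740993 is unrepresentable)
console.log(9007199254740992 + 2);    // 9007199254740994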
Your values are well above that threshold, and so they get rounded to the nearest representable value.
As of ES2020, you can use BigInt for integers that are arbitrarily large, but there is no JSON representation for them. You could use strings and a reviver function:
const jsonString = '{"id":"714341252076979033","type":"FUZZY"}';
// Note it's a string −−−−^−−−−−−−−−−−−−−−−−−^
const obj = JSON.parse(jsonString, (key, value) => {
    if (key === "id" && typeof value === "string" && value.match(/^\d+$/)) {
        return BigInt(value);
    }
    return value;
});
console.log(obj);
(Look in the real console, the snippets console doesn't understand BigInt.)
If you're curious about the bits, here's what happens: An IEEE-754 binary double-precision floating-point number has a sign bit, 11 bits of exponent (which defines the overall scale of the number, as a power of 2 [because this is a binary format]), and 52 bits of significand (but the format is so clever it gets 53 bits of precision out of those 52 bits). How the exponent is used is complicated (described here), but in very vague terms, if we add one to the exponent, the value of the significand is doubled, since the exponent is used for powers of 2 (again, caveat there, it's not direct, there's cleverness in there).
So let's look at the value 9007199254740991 (aka, Number.MAX_SAFE_INTEGER):
   +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
  / +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
 / /        | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ /         | /                                                  |
0 10000110011 1111111111111111111111111111111111111111111111111111
= 9007199254740991 (Number.MAX_SAFE_INTEGER)
That exponent value, 10000110011, means that every time we add one to the significand, the number represented goes up by 1 (the whole number 1, we lost the ability to represent fractional numbers much earlier).
But now that significand is full. To go past that number, we have to increase the exponent, which means that if we add one to the significand, the value of the number represented goes up by 2, not 1 (because the exponent is applied to 2, the base of this binary floating point number):
   +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
  / +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
 / /        | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ /         | /                                                  |
0 10000110100 0000000000000000000000000000000000000000000000000000
= 9007199254740992 (Number.MAX_SAFE_INTEGER + 1)
Well, that's okay, because 9007199254740991 + 1 is 9007199254740992 anyway. But! We can't represent 9007199254740993. We've run out of bits. If we add just 1 to the significand, it adds 2 to the value:
   +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− sign bit
  / +−−−−−−−+−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−− exponent
 / /        | +−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−−+− significand
/ /         | /                                                  |
0 10000110100 0000000000000000000000000000000000000000000000000001
= 9007199254740994 (Number.MAX_SAFE_INTEGER + 3)
The format simply cannot represent odd numbers anymore; as we increase the value, the exponent becomes too big.
Eventually, we run out of significand bits again and have to increase the exponent, so we end up only being able to represent multiples of 4. Then multiples of 8. Then multiples of 16. And so on.
What you're seeing here is actually the effect of two roundings. Numbers in ECMAScript are internally represented as double-precision floating-point. When id is set to 714341252076979033 (0x9e9d9958274c359 in hex), it is actually assigned the nearest representable double-precision value, which is 714341252076979072 (0x9e9d9958274c380). When you print out the value, it is rounded to 16 significant decimal digits (the shortest decimal form that round-trips), which gives 714341252076979100.
It is not caused by the JSON parser. Just try entering 714341252076979033 in Firebug's console. You'll see the same 714341252076979100.
See this blog post for details:
http://www.exploringbinary.com/print-precision-of-floating-point-integers-varies-too
JavaScript uses double precision floating point values, i.e. a total precision of 53 bits, but you need
ceil(log2(714341252076979033)) = 60
bits to represent the value exactly.
The nearest exactly representable number is 714341252076979072 (write the original number in binary, replace the last 7 digits with 0 and round up because the highest replaced digit was 1).
You'll get 714341252076979100 instead of this number because ToString(), as described by ECMA-262 §9.8.1, works with powers of ten, and in 53-bit precision all these numbers are equal.
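Both effects are easy to reproduce in a modern JavaScript console:

console.log(714341252076979033 === 714341252076979072); // true: both round to the same double
console.log(String(714341252076979072));                // "714341252076979100"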
The problem is that your number requires a greater precision than JavaScript has.
Can you send the number as a string? Separated in two parts?
JavaScript can only handle exact whole numbers up to about 9000 million million (that's 9 with 15 zeros). Higher than that and you get garbage. Work around this by using strings to hold the numbers. If you need to do math with these numbers, write your own functions or see if you can find a library for them: I suggest the former as I don't like the libraries I've seen. To get you started, see two of my functions at another answer.
