In JavaScript, everyone knows the famous calculation: 0.1 + 0.2 = 0.30000000000000004. But why does JavaScript print this value instead of printing the more accurate and precise 0.300000000000000044408920985006?
The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. (You can request more or fewer digits by using the toPrecision method.)
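For instance, here is a quick sketch of the default rule next to toPrecision (the values in the comments are what a conforming engine prints):
String(0.1 + 0.2);            // "0.30000000000000004" (just enough digits)
(0.1 + 0.2).toPrecision(21);  // "0.300000000000000044409" (more digits on request)
(0.1 + 0.2).toPrecision(5);   // "0.30000" (fewer digits on request)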
JavaScript uses IEEE-754 basic 64-bit binary floating-point for its Number type. Using IEEE-754, the result of .1 + .2 is exactly 0.3000000000000000444089209850062616169452667236328125. This results from:
Converting “.1” to the nearest value representable in the Number type.
Converting “.2” to the nearest value representable in the Number type.
Adding the above two values and rounding the result to the nearest value representable in the Number type.
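These intermediate values can be inspected from a JavaScript console; this is just an illustrative sketch, using toPrecision(21) because 21 is the largest precision older engines are guaranteed to accept:
(0.1).toPrecision(21);        // "0.100000000000000005551"
(0.2).toPrecision(21);        // "0.200000000000000011102"
(0.1 + 0.2).toPrecision(21);  // "0.300000000000000044409"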
When formatting this Number value for display, “0.30000000000000004” has just enough significant digits to uniquely distinguish the value. To see this, observe that the neighboring values are:
0.299999999999999988897769753748434595763683319091796875,
0.3000000000000000444089209850062616169452667236328125, and
0.300000000000000099920072216264088638126850128173828125.
If the conversion to a decimal numeral produced only “0.3000000000000000”, it would be nearer to 0.299999999999999988897769753748434595763683319091796875 than to 0.3000000000000000444089209850062616169452667236328125. Therefore, another digit is needed. When we have that digit, “0.30000000000000004”, then the result is closer to 0.3000000000000000444089209850062616169452667236328125 than to either of its neighbors. Therefore, “0.30000000000000004” is the shortest decimal numeral (neglecting the leading “0” which is there for aesthetic purposes) that uniquely distinguishes which possible Number value the original value was.
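A round-trip check of this argument, as a sketch (outputs assume a conforming engine):
Number("0.3000000000000000") === 0.1 + 0.2;   // false (it parses to the Number for 0.3, the lower neighbor)
Number("0.30000000000000004") === 0.1 + 0.2;  // true (it uniquely identifies the sum)
String(0.1 + 0.2);                            // "0.30000000000000004"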
This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, which is one of the steps in converting a Number value m to a decimal numeral for the ToString operation:
Otherwise, let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible.
The phrasing here is a bit imprecise. It took me a while to figure out that by “the Number value for s × 10^(n−k)”, the standard means the Number value that is the result of converting the mathematical value s × 10^(n−k) to the Number type (with the usual rounding). In this description, k is the number of significant digits that will be used, and this step is telling us to minimize k, so it says to use the smallest number of digits such that the numeral we produce will, when converted back to the Number type, produce the original number m.
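The following hypothetical helper mimics that “smallest k” idea by brute force; it is not how engines actually implement ToString, just an illustration for ordinary finite numbers:
function shortestDigits(m) {
  // Try 1, 2, 3, ... significant digits until the numeral round-trips back to m.
  for (let k = 1; k <= 21; k++) {
    const s = m.toPrecision(k);
    if (Number(s) === m) return s;
  }
  return m.toString();
}
shortestDigits(0.1 + 0.2);  // "0.30000000000000004"
shortestDigits(0.2);        // "0.2"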
Related
I am new to JavaScript programming and referring to Eloquent JavaScript, 3rd Edition by Marijn Haverbeke.
There is a statement in this book which reads:
"JavaScript uses a fixed number of bits, 64 of them, to store a single number value. There are only so many patterns you can make with 64 bits, which means that the number of different numbers that can be represented is limited. With N decimal digits, you can represent 10^N numbers. Similarly, given 64 binary digits, you can represent 2^64 different numbers, which is about 18 Quintilian (an 18 with 18 zeros after it). That’s a lot."
Can someone help me with the actual meaning of this statement? I am confused as to how values larger than 2^64 are stored in computer memory.
Your question relates to more general concepts in computer science; for this question, JavaScript sits at a higher level of abstraction. Please review the basic concepts of how memory and storage work first:
https://study.com/academy/lesson/how-do-computers-store-data-memory-function.html
https://www.britannica.com/technology/computer-memory
https://www.reddit.com/r/askscience/comments/2kuu9e/how_do_computers_handle_extremely_large_numbers/
How do computers evaluate huge numbers?
Also, for JavaScript, please see this section of the ECMAScript specification:
Ref: https://www.ecma-international.org/ecma-262/5.1/#sec-8.5
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 − 2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
There are two other special values, called positive Infinity and negative Infinity. For brevity, these values are also referred to for expository purposes by the symbols +∞ and −∞, respectively. (Note that these two infinite Number values are produced by the program expressions +Infinity (or simply Infinity) and -Infinity.)
The other 18437736874454810624 (that is, 2^64 − 2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Note that there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and −0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
The 18437736874454810622 (that is, 2^64 − 2^53 − 2) finite nonzero values are of two kinds:
18428729675200069632 (that is, 2^64 − 2^54) of them are normalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^53 but not less than 2^52, and e is an integer ranging from −1074 to 971, inclusive.
The remaining 9007199254740990 (that is, 2^53 − 2) values are denormalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^52, and e is −1074.
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).
A finite number has an odd significand if it is nonzero and the integer m used to express it (in one of the two forms shown above) is odd. Otherwise, it has an even significand.
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754 “round to nearest” mode.)
Some ECMAScript operators deal only with integers in the range −2^31 through 2^31 − 1, inclusive, or in the range 0 through 2^32 − 1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2^32 integer values. See the descriptions of the ToInt32 and ToUint32 operators in 9.5 and 9.6, respectively.
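A few of these properties can be observed directly from a JavaScript console (a sketch; the comments show what a conforming engine reports):
Object.is(+0, -0);        // false (there really are two distinct zero values)
NaN === NaN;              // false (yet ECMAScript code sees only one NaN)
Number.MAX_SAFE_INTEGER;  // 9007199254740991, i.e. 2^53 - 1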
You have probably learned about big numbers in mathematics.
For example, the Avogadro constant is 6.022 x 10**23.
Computers can also store numbers in this format,
except for two things:
They store it as a binary number.
They would store Avogadro's number as 0.6022 * 10**24; more precisely, they store two parts:
the precision: a value between 0 and 1 (0.6022); this usually takes 2-8 bytes
the size/magnitude of the number (24); this usually takes only 1 byte, because 2**256 is already a very big number.
As you can see, this method can be used to store an inexact value of a very big or very small number.
An example of the inaccuracy: 0.1 + 0.2 == 0.30000000000000004
For performance reasons, most engines often use the normal format when it makes no difference to the results.
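A rough illustration of the trade-off described above, assuming a standard JavaScript engine:
Number.MAX_VALUE;   // 1.7976931348623157e+308 (the exponent gives a huge range)
1e21 + 1 === 1e21;  // true (the added 1 is lost because the significand is limited)
6.022e23;           // Avogadro-sized values fit, but only approximately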
Given a decimal number 0.2, for example:
var theNumber = 0.2;
I ASSUME it would be stored in memory as (based on double-precision 64-bit floating point format IEEE 754)
0-01111111100-1001100110011001100110011001100110011001100110011001
That binary number is actually rounded to fit into 64 bits.
If we take that value and convert it back to decimal, we will have
0.19999999999999998
(0.1999999999999999833466546306226518936455249786376953125)
Not exactly 0.2
My question is, when we ask for decimal value of theNumber (EX: alert(theNumber)), how does JavaScript runtime know theNumber is originally 0.2?
JavaScript’s default conversion of a Number to a string produces just enough decimal digits to uniquely distinguish the Number. (This arises out of step 5 in clause 7.1.12.1 of the ECMAScript 2018 Language Specification, which I explain a little here.)
Let’s consider the conversion of a decimal numeral to a Number first. When a numeral is converted to a Number, its exact mathematical value is rounded to the nearest value representable in a Number. So, when 0.2 in source code is converted to a Number, the result is 0.200000000000000011102230246251565404236316680908203125.
When converting a Number to decimal, how many digits do we need to produce to uniquely distinguish the Number? In the case of 0.200000000000000011102230246251565404236316680908203125, if we produce “0.2”, we have a decimal numeral that, when converted back to the Number type, again yields 0.200000000000000011102230246251565404236316680908203125. Thus, “0.2” uniquely distinguishes 0.200000000000000011102230246251565404236316680908203125 from other Number values, so it is all we need.
In other words, JavaScript’s rule of producing just enough digits to distinguish the Number means that any short decimal numeral when converted to Number and back to string will produce the same decimal numeral (except with insignificant zeros removed, so “0.2000” will become “0.2” or “045” will become “45”). (Once the decimal numeral becomes long enough to conflict with the Number value, it may no longer survive a round-trip conversion. For example, “0.20000000000000003” will become the Number 0.2000000000000000388578058618804789148271083831787109375 and then the string “0.20000000000000004”.)
If, as a result of arithmetic, we had a number close to 0.200000000000000011102230246251565404236316680908203125 but different, such as 0.2000000000000000388578058618804789148271083831787109375, then JavaScript will print more digits, “0.20000000000000004” in this case, because it needs more digits to distinguish it from the “0.2” case.
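The round-trip behaviour described above can be checked directly (a sketch; outputs assume a conforming engine):
String(0.2);                  // "0.2" (the shortest numeral that identifies the Number)
(0.2).toPrecision(21);        // "0.200000000000000011102" (more of the stored value)
String(0.20000000000000003);  // "0.20000000000000004" (does not survive the round trip)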
In fact, 0.2 is represented by a different bit sequence than the one you posted. Whenever your result matches the correct bit sequence, the console will output 0.2. But if your calculation results in a different sequence, the console will output something like your 0.19999999999999998.
The situation is similar with the most common example, 0.1 + 0.2, which outputs 0.30000000000000004 because the bit sequence of this result is different from the one in 0.3's representation.
console.log(0.2)          // 0.2
console.log(0.05 + 0.15)  // 0.2
console.log(0.02 + 0.18)  // 0.19999999999999998
console.log(0.3)          // 0.3
console.log(0.1 + 0.2)    // 0.30000000000000004
console.log(0.05 + 0.25)  // 0.3
From the ECMAScript Language Specification:
11.8.3.1 Static Semantics: MV
A numeric literal stands for a value of the Number type. This value is determined in two steps: first, a mathematical value (MV) is derived from the literal; second, this mathematical value is rounded [... (and here the whole procedure is described)]
You may also be interested in the following section:
6.1.6 Number type
[...] In this specification, the phrase “the Number value for x” where x represents an exact real mathematical quantity [...] means a Number value chosen in the following manner. [... (the whole procedure is described)] (This procedure corresponds exactly to the behaviour of the IEEE 754-2008 “round to nearest, ties to even” mode.)
So, my ASSUMPTION is wrong.
I have written a small program to do the experiment.
The binary value that goes to memory is not
0-01111111100-1001100110011001100110011001100110011001100110011001
The mantissa part is not 1001100110011001100110011001100110011001100110011001
I got that because I truncated the value instead of rounding it. :((
1001100110011001100110011001100110011001100110011001...[1001] needs to be rounded to 52 bits. Bit 53 of the series is a 1, so the series is rounded up and becomes: 1001100110011001100110011001100110011001100110011010
The correct binary value should be:
0-01111111100-1001100110011001100110011001100110011001100110011010
The full decimal of that value is:
0.200 000 000 000 000 011 102 230 246 251 565 404 236 316 680 908 203 125
not
0.199 999 999 999 999 983 346 654 630 622 651 893 645 524 978 637 695 312 5
And as Eric's answer explains, all decimal numbers that are converted to the binary
0-01111111100-1001100110011001100110011001100110011001100110011010
will be "seen" as 0.2 (unless we use toFixed() to print more digits); all those decimal numbers share the same binary representation.
Note: I'm not asking about "Is floating point math broken?", because I am asking about a number with an integer value plus a decimal number, rather than a decimal number plus a decimal number.
For example, 10.0 + 0.1 produces a number with rounding error, and 10.1 produces another number with rounding error. My question is: does 10.0 + 0.1 produce the SAME amount of error as 10.1, so that 10.0 + 0.1 === 10.1 evaluates to true?
For more examples:
10.0+0.123 === 10.123
2.0+4.68===6.68
They are true by testing, and the first numbers are 10.0 and 2.0, which are integer values. Is it true that an integer plus a hard-coded float number (of the same sign) is exactly equal to the hard-coded expected float number? In other words, does a.0 + b.cde always exactly equal (a+b).cde (where a, b, c, d, e are hard-coded)?
It is not generally true that adding an integer value to a floating-point value produces a result equal to the exact mathematical result. A counterexample is that 10 + .274 === 10.274 evaluates to false.
You should understand that in 10.0+0.123 === 10.123, you are not comparing the exact mathematical sum of 10 and .123 with the exact decimal 10.123. What this code does is:
Convert “10.0” to binary floating-point, yielding 10.
Convert “0.123” to binary floating-point, yielding 0.1229999999999999982236431605997495353221893310546875.
Add the above two, yielding 10.1229999999999993320898283855058252811431884765625. (Note this result is not the exact sum; it has been rounded to the nearest representable value.)
Convert “10.123” to binary floating-point, yielding 10.1229999999999993320898283855058252811431884765625.
Compare the latter two values.
Thus, the reason the comparison returns true is not because the addition had no rounding error but because the rounding errors on the left happened to equal the rounding errors on the right. (Note: Converting a string containing a decimal to binary floating-point is a mathematical operation. When the mathematical result is not exactly representable, the nearest representable value is produced instead. The difference is called rounding error.)
If you try 10 + .274 === 10.274, you will find they differ:
“10” converted to binary floating-point is 10.
“.274” converted to binary floating-point is 0.27400000000000002131628207280300557613372802734375.
Adding the above two produces 10.2740000000000009094947017729282379150390625.
“10.274” converted to binary floating-point is 10.2739999999999991331378623726777732372283935546875.
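Both cases can be reproduced directly (a sketch; the values in the comments are the decimals above shortened to 21 significant digits):
10.0 + 0.123 === 10.123;       // true (the rounding errors happen to match)
10 + 0.274 === 10.274;         // false (they do not)
(10 + 0.274).toPrecision(21);  // "10.2740000000000009095"
(10.274).toPrecision(21);      // "10.2739999999999991331"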
No. JavaScript only has floats. Here's one case that fails.
10000.333333333333 + 1.0 // 10001.333333333332
I've been playing around with floating point numbers a little bit, and based on what I've learned about them in the past, the fact that 0.1 + 0.2 ends up being something like 0.30000000000000004 doesn't surprise me.
What does surprise me, however, is that integer arithmetic always seems to work just fine and not have any of these artifacts.
I first noticed this in JavaScript (Chrome V8 in node.js):
0.1 + 0.2 == 0.3 // false, NOT surprising
123456789012 + 18 == 123456789030 // true
22334455667788 + 998877665544 == 23333333333332 // true
1048576 / 1024 == 1024 // true
C++ (gcc on Mac OS X) seems to have the same properties.
The net result seems to be that integer numbers just — for lack of a better word — work. It's only when I start using decimal numbers that things get wonky.
Is this a feature of the design, a mathematical artifact, or some optimisation done by compilers and runtime environments?
It's a feature of the real numbers. A theorem from modern algebra (modern algebra, not high school algebra; math majors take a class in modern algebra after their basic calculus and linear algebra classes) says that for some positive integer b, any positive real number r can be expressed as r = a × b^p, where a is in [1, b) and p is some integer. For example, 1024 = 1.024 × 10^3 in base 10. It is this theorem that justifies our use of scientific notation.
That number a can be classified as terminal (e.g. 1.0), repeating (1/3 = 0.333...), or non-repeating (the representation of pi). There's a minor issue here with terminal numbers. Any terminal number can also be represented as a repeating number. For example, 0.999... and 1 are the same number. This ambiguity in representation can be resolved by specifying that numbers that can be represented as terminal numbers are represented as such.
What you have discovered is a consequence of the fact that all integers have a terminal representation in any base.
There is an issue here with how the reals are represented in a computer. Just as int and long long int don't represent all of the integers, float and double don't represent all of the reals. The scheme used on most computers to represent a real number r is to represent it in the form r = a × 2^p, but with the mantissa (or significand) a truncated to a certain number of bits and the exponent p limited to some finite range. This means that some integers cannot be represented exactly. For example, even though a googol (10^100) is an integer, its floating-point representation is not exact. The base-2 representation of a googol is a 333-bit number, and this 333-bit mantissa is truncated to 52+1 bits.
One consequence of this is that double-precision arithmetic is no longer exact, even for integers, once the integers in question are greater than 2^53. Try your experiment using the type unsigned long long int on values between 2^53 and 2^64. You'll find that double-precision arithmetic is no longer exact for these large integers.
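The same loss of exactness above 2^53 can be seen from JavaScript, since its Number type uses the same double format (a sketch):
Number.isSafeInteger(2 ** 53 - 1);  // true (every integer up to here is exact)
Number.isSafeInteger(2 ** 53);      // false
2 ** 53 + 1 === 2 ** 53;            // true (the + 1 is lost to rounding)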
I'm writing this under the assumption that JavaScript uses double-precision floating-point representation for all numbers.
Some numbers have an exact representation in the floating-point format, in particular, all integers such that |x| < 2^53. Some numbers don't, in particular, fractions such as 0.1 or 0.2 which become infinite fractions in binary representation.
If all operands and the result of an operation have an exact representation, then it would be safe to compare the result using ==.
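For example (a sketch of the safe case):
1024 * 1024 === 1048576;  // true (integers below 2^53 are exact)
0.5 + 0.25 === 0.75;      // true (exact binary fractions stay exact, so == is safe here)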
Related questions:
What number in binary can only be represented as an approximation?
Why can't decimal numbers be represented exactly in binary?
Integers within the representable range are exactly representable by the machine; floats are not (well, most of them are not).
If by "basic integer math" you understand "feature", then yes, you can assume correctly implementing arithmetic is a feature.
The reason is that you can represent every whole number (1, 2, 3, ...) exactly in binary format (0001, 0010, 0011, ...).
That is why integer arithmetic is always correct: 0011 - 0001 is always 0010.
The problem with floating-point numbers is that the part after the point cannot always be converted to binary exactly.
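The same contrast, written with JavaScript binary literals and with a look at 0.1's binary expansion (a sketch):
0b0011 - 0b0001 === 0b0010;  // true (3 - 1 === 2, exact in binary)
(0.1).toString(2);           // shows the repeating 0011 pattern, cut off where the stored value was rounded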
All of the cases that you say "work" are ones where the numbers you have given can be represented exactly in the floating point format. You'll find that adding 0.25 and 0.5 and 0.125 works exactly too because they can also be represented exactly in a binary floating point number.
It's only values that can't be represented exactly, such as 0.1, where you'll get what appear to be inexact results.
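For instance, the values named above are exact binary fractions, so both the arithmetic and the comparison are exact (a sketch):
0.25 + 0.5 + 0.125 === 0.875;  // true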
Integers are exact because the imprecision results mainly from the way we write decimal fractions, and secondarily because many rational numbers simply don't have non-repeating representations in any given base.
See: https://stackoverflow.com/a/9650037/140740 for the full explanation.
That method only works when you are adding a small enough integer to a very large integer, and even in that case you are not representing both of the integers in the 'floating point' format.
Not all numbers can be represented exactly as floating point; it's due to the way they are encoded. The Wikipedia page explains it better than I can: http://en.wikipedia.org/wiki/IEEE_754-1985.
So when you are comparing floating-point numbers, you should use a delta (a tolerance):
Math.abs(myFloat - expectedFloat) < delta
You can use Number.EPSILON (the difference between 1 and the next representable floating-point number) as a starting point for delta.
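A minimal sketch of such a comparison; the helper name is hypothetical, and note that Number.EPSILON is only a sensible tolerance for values whose magnitude is near 1, so a relative tolerance is often preferable:
function nearlyEqual(a, b, delta = Number.EPSILON) {
  // Treat a and b as equal when they differ by less than the tolerance.
  return Math.abs(a - b) < delta;
}
nearlyEqual(0.1 + 0.2, 0.3);  // true
0.1 + 0.2 === 0.3;            // false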