Cannot understand how add operation works on decimal numbers in JavaScript

I have always coded in Java and have recently started coding in JavaScript (Node.js to be precise). One thing that's driving me crazy is the add operation on decimal numbers.
Consider the following code
var a=0.1, b=0.2, c=0.3;
var op1 = (a+b)+c;
var op2 = (b+c)+a;
To my amazement I find that op1 != op2! console.logging op1 and op2 prints the following:
console.log(op1); 0.6000000000000001
console.log(op2); 0.6
This does not make sense to me. It looks like a bug, because JavaScript surely cannot just ignore the rules of arithmetic. Could someone please explain why this happens?

This is a case of floating-point error.
The only fractional numbers that can be represented exactly by a binary floating-point number are those that can be written as a fraction whose denominator is a power of two.
For example, 0.5 can be represented exactly, because it can be written 1/2. 0.3 can't.
In this case, the rounding errors in your variables and expressions combined just right to produce a visible error in the first case, but not in the second.
Neither result is exactly 0.6 behind the scenes, but when you output a value, JavaScript prints the shortest decimal string that still maps back to the same binary value. For op1 that string exposes the error (0.6000000000000001); for op2 the stored value happens to be the closest double to 0.6, so it simply prints 0.6.
The lesson you should learn is that you should never depend on a floating-point number having an exact value, especially after doing some arithmetic.
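You can see the hidden digits by asking for more precision than the default formatting shows; a quick sketch (toPrecision(20) is just one way to peek at the stored values):
var a = 0.1, b = 0.2, c = 0.3;
var op1 = (a + b) + c;
var op2 = (b + c) + a;

// Show ~20 significant digits instead of the default shortest round-trip form
console.log(op1.toPrecision(20));   // 0.60000000000000008882
console.log(op2.toPrecision(20));   // 0.59999999999999997780
console.log((0.6).toPrecision(20)); // 0.59999999999999997780 -- even the literal 0.6 is not exact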

That is not a JavaScript problem, but a standard problem in every programming language that uses binary floats. Your sample numbers 0.1, 0.2 and 0.3 cannot be represented as finite binary fractions, only as repeating ones, because their denominators contain a factor of 5:
0.1 = 1/(2*5), and so on.
If you use only decimals whose denominators are powers of 2 (like a=0.5, b=0.25 and c=0.125), everything is fine.
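A quick check of that claim (a sketch; these literals are all sums of powers of two, so every intermediate result is exact):
var a = 0.5, b = 0.25, c = 0.125;

console.log((a + b) + c === (b + c) + a); // true -- no rounding occurs anywhere
console.log(0.1 + 0.2 === 0.3);           // false -- 1/10, 1/5 and 3/10 are not exact in binary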

When working with floats you might want to set the precision on the numbers before you compare.
(0.1 + 0.2).toFixed(1) == 0.3
However, in a few cases toFixed doesn't behave the same way across browsers.
Here's a fix for toFixed.
http://bateru.com/news/2012/03/reimplementation-of-number-prototype-tofixed/
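A small helper along the lines of the comparison above (a sketch; the helper name and the default of one decimal place are mine):
// Compare two numbers after rounding both to the same number of decimal places
const nearlyEqual = (x, y, places = 1) => x.toFixed(places) === y.toFixed(places);

nearlyEqual(0.1 + 0.2, 0.3); // true
nearlyEqual(0.1 + 0.2, 0.4); // false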

This happens because not all floating point numbers can be accurately represented in binary.
Multiply all your numbers by a scaling factor (for example, 100 if you're dealing with 2 decimal places).
Then do your arithmetic on the larger result.
After your arithmetic is finished, divide by the same factor you multiplied by in the beginning.
http://jsfiddle.net/SjU9d/
var a = 0.1, b = 0.2, c = 0.3;

// Scale up so the arithmetic is done on whole numbers
a = 10 * a;
b = 10 * b;
c = 10 * c;

var op1 = (a + b) + c;
var op2 = (b + c) + a;

// Scale back down
op1 = op1 / 10;
op2 = op2 / 10;
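Checking the scaled version (a quick sketch; it works here because 0.1, 0.2 and 0.3 each happen to scale to an exact integer, which is not guaranteed for every decimal you might scale):
console.log(op1 === op2); // true
console.log(op1);         // 0.6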

Related

Algorithm to correct the floating point precision error in JavaScript

You can find a lot about floating point precision errors and how to avoid them in JavaScript, for example "How to deal with floating point number precision in JavaScript?", which deals with the problem by simply rounding the number to a fixed number of decimal places.
My problem is slightly different: I get numbers from the backend (some with the rounding error) and want to display them without the error.
Of course I could just round the number to a set number of decimal places with value.toFixed(X). The problem is that the numbers can range from 0.000000001 to 1000000000, so I can never know for sure how many decimal places are valid.
(See this Fiddle for my unfruitful attempts)
Code :
var a = 0.3;
var b = 0.1;
var c = a - b; // is 0.19999999999999998, is supposed to be 0.2
// c.toFixed(2) = 0.20
// c.toFixed(4) = 0.2000
// c.toFixed(6) = 0.200000
var d = 0.000003;
var e = 0.000002;
var f = d - e; // is 0.0000010000000000000002 is supposed to be 0.000001
// f.toFixed(2) = 0.00
// f.toFixed(4) = 0.0000
// f.toFixed(6) = 0.000001
var g = 0.0003;
var h = 0.0005;
var i = g + h; // is 0.0007999999999999999, is supposed to be 0.0008
// i.toFixed(2) = 0.00
// i.toFixed(4) = 0.0008
// i.toFixed(6) = 0.000800
My question is now: is there any algorithm that intelligently detects how many decimal places are reasonable and rounds the numbers accordingly?
When a decimal numeral is rounded to binary floating-point, there is no way to know, just from the result, what the original number was or how many significant digits it had. Infinitely many decimal numerals will round to the same result.
However, the rounding error is bounded. If it is known that the original number had at most a certain number of digits, then only decimal numerals with that number of digits are candidates. If only one of those candidates differs from the binary value by less than the maximum rounding error, then that one must be the original number.
If I recall correctly (I do not use JavaScript regularly), JavaScript uses IEEE-754 64-bit binary. For this format, it is known that any 15-digit decimal numeral may be converted to this binary floating-point format and back without error. Thus, if the original input was a decimal numeral with at most 15 significant digits, and it was converted to 64-bit binary floating-point (and no other operations were performed on it that could have introduced additional error), and you format the binary floating-point value as a 15-digit decimal numeral, you will have the original number.
The resulting decimal numeral may have trailing zeroes. It is not possible to know (from the binary floating-point value alone) whether those were in the original numeral.
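A minimal sketch of that idea (the helper name is mine; toPrecision(15) formats to 15 significant digits, which is what the answer above describes, and Number parses the result back; the answers below use similar one-liners):
const to15Digits = x => Number(x.toPrecision(15));

to15Digits(0.3 - 0.1); // 0.2
to15Digits(0.1 + 0.2); // 0.3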
One-liner solution, thanks to Eric's answer:
const fixFloatingPoint = val => Number.parseFloat(val.toFixed(15))
fixFloatingPoint(0.3 - 0.1) // 0.2
fixFloatingPoint(0.000003 - 0.000002) // 0.000001
fixFloatingPoint(0.0003 + 0.0005) // 0.0008
In order to fix issues where:
0.3 - 0.1 => 0.199999999
0.57 * 100 => 56.99999999
0.0003 - 0.0001 => 0.00019999999
You can do something like:
const fixNumber = num => Number(num.toPrecision(15));
A few examples:
fixNumber(0.3 - 0.1) => 0.2
fixNumber(0.0003 - 0.0001) => 0.0002
fixNumber(0.57 * 100) => 57

JavaScript - after what operations is toFixed() required and what argument should be passed?

From the discussion here and here we see that toFixed() can be used to help maintain precision when working with binary floats.
For various reasons, I do not want to use a third party library. However, for any fraction that can be expressed as a finite decimal within 64 bits, I would like to maintain maximum precision in the most generic way possible.
Assuming I use toFixed(), when do I need to use it? And is there a single "best" argument that I can pass to it to achieve the above goal?
For example, consider JavaScript's ability to represent the fraction 1/10 as a finite decimal after different operations:
10/100 = 0.1
0.09 + 0.01 = 0.09999999999999999
1.1 - 1 = 0.10000000000000009
The first requires no correction step while the last two do. Furthermore passing toFixed() a value of 15 works in this case (ex: Number((0.09 + 0.01).toFixed(15))):
10/100 = 0.1
0.09 + 0.01 = 0.1
1.1 - 1 = 0.1
but passing 16 does not:
10/100 = 0.1
0.09 + 0.01 = 0.1
1.1 - 1 = 0.1000000000000001
Try JSFIDDLE.
Will 15 as the argument for toFixed() always achieve the above objective? More importantly, when do I have to call toFixed()? In addition to add, subtract, multiply and divide, my math routines use Math.pow(), Math.exp() and Math.log().
Re when inaccuracies may occur:
Barring getting deep into the technical details of IEEE-754 double-precision floating point and having your code tailor itself to the specific characteristics of the two values it's operating on, you won't know when a result will be inaccurate; any operation, including simple addition, can result in an inaccurate result (as in the famous 0.1 + 0.2 = 0.30000000000000004 example).
Re what number to pass into toFixed:
You can use toFixed (and then conversion back to a number) for rounding (as you are in your question), but the number of places you should use is dictated by the precision you need in the result, and the range of values you're working with. Fundamentally, you need to decide how much precision you need, and then use rounding to achieve that precision with the best accuracy, because it's impossible to get perfect accuracy with IEEE-754 floating point across the full range of values (as you know). You get about 15 significant digits in total; you need to decide how to allocate them between the left and right-hand sides of the decimal.
Say your "normal" range of values is less than 10 billion, with four or five decimal places; then you could use 5, as that gives you about 10 digits to the left of the decimal point and five to the right.
Taking the famous example, for instance:
function roundToRange(num) {
  return +num.toFixed(5);
}
console.log(roundToRange(0.1 + 0.2)); // 0.3, rather than the usual 0.30000000000000004
The rounded result may not be perfect, of course, because there's a whole class of base 10 fractional numbers that can't be represented accurately in base 2 (what IEEE-754 uses) and so must be approximated, but by working within your range, you can keep imprecisions from outside your range from adding (or multiplying!) up.

Javascript divider issue

Please note, I am surprised by the problem below. I have two values, from text boxes T1 and T2, that I need to divide. When I divide them to get the amount, I do not get the exact amount; instead the result carries a trailing ...00000001 fraction.
Example:
var t1=5623.52;
var t2=56.2352;
var t3=5623.52/56.2352; //100.0000000001
Note: I can't round the values, since they are exchange rates and so vary according to the currency.
This is caused by the limited precision of floating point values. See The Floating Point Guide for full details.
The short version is that the 0.52 fractional part of your numbers cannot be represented exactly in binary, just like 1/3 cannot be represented exactly in decimal. Because only a limited number of significant digits is available, the two values are rounded at slightly different points, and so the larger one is not exactly 100 times the smaller one.
If that doesn't make sense, imagine you are dealing with thirds, and pretend that numbers are represented as decimals, to ten decimal places. If you declare:
var t1 = 1000.0 / 3.0;
var t2 = 10.0 / 3.0;
Then t2 is represented as 3.3333333333, which is as close as can be represented with the given precision. Something that is 100 times as large as t2 would be 333.3333333300, but t1 is actually represented as 333.3333333333. It is not exactly 100 times t2, due to rounding/truncation being applied at different points for the different numbers.
The fix, as always with floating-point rounding issues, is to use decimal types instead. Have a look at the Javascript cheat-sheet on the aforementioned guide for ways to go about this.
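The same effect is visible with real doubles, and short of a decimal library, rounding the quotient back to a sane number of significant digits is a common workaround; a sketch (the choice of 12 significant digits is mine):
// The thirds analogy, with actual binary doubles
console.log(1000 / 3 === 100 * (10 / 3)); // false -- rounding happens at different points

// Tidying the original quotient by rounding to 12 significant digits
var t1 = 5623.52;
var t2 = 56.2352;
var t3 = Number((t1 / t2).toPrecision(12)); // 100 (the tiny trailing error is rounded away)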
Like Felix Kling said, don't use floating point values.
Or use parseInt if you want to keep an integer:
var t1 = 5623.52;
var t2 = 56.2352;
var t3 = parseInt(t1 / t2); // 100 -- parseInt truncates the fractional part

Floating point representations seem to do integer arithmetic correctly - why?

I've been playing around with floating point numbers a little bit, and based on what I've learned about them in the past, the fact that 0.1 + 0.2 ends up being something like 0.30000000000000004 doesn't surprise me.
What does surprise me, however, is that integer arithmetic always seems to work just fine and not have any of these artifacts.
I first noticed this in JavaScript (Chrome V8 in node.js):
0.1 + 0.2 == 0.3 // false, NOT surprising
123456789012 + 18 == 123456789030 // true
22334455667788 + 998877665544 == 23333333333332 // true
1048576 / 1024 == 1024 // true
C++ (gcc on Mac OS X) seems to have the same properties.
The net result seems to be that integer numbers just — for lack of a better word — work. It's only when I start using decimal numbers that things get wonky.
Is this a feature of the design, a mathematical artifact, or some optimisation done by compilers and runtime environments?
Is this a feature of the design, a mathematical artifact, or some optimisation done by compilers and runtime environments?
It's a feature of the real numbers. A theorem from modern algebra (modern algebra, not high school algebra; math majors take a class in modern algebra after their basic calculus and linear algebra classes) says that for some positive integer b, any positive real number r can be expressed as r = a * b^p, where a is in [1, b) and p is some integer. For example, 1024 in base 10 is 1.024 * 10^3. It is this theorem that justifies our use of scientific notation.
That number a can be classified as terminating (e.g. 1.0), repeating (1/3 = 0.333...), or non-repeating (the representation of pi). There's a minor issue here with terminating numbers: any terminating number can also be represented as a repeating number. For example, 0.999... and 1 are the same number. This ambiguity in representation can be resolved by specifying that numbers that can be represented as terminating numbers are represented as such.
What you have discovered is a consequence of the fact that all integers have a terminating representation in any base.
There is an issue with how the reals are represented in a computer. Just as int and long long int don't represent all of the integers, float and double don't represent all of the reals. The scheme used on most computers to represent a real number r is the form r = a * 2^p, but with the mantissa (or significand) a truncated to a certain number of bits and the exponent p limited to some finite range. This means that some integers cannot be represented exactly. For example, even though a googol (10^100) is an integer, its floating point representation is not exact. The base 2 representation of a googol is a 333 bit number, and that 333 bit mantissa is truncated to 52+1 bits.
One consequence of this is that double precision arithmetic is no longer exact, even for integers, once the integers in question are greater than 2^53. Try your experiment using the type unsigned long long int on values between 2^53 and 2^64; you'll find that double precision arithmetic is no longer exact for these large integers.
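The same boundary is easy to see from JavaScript; a quick sketch (Math.pow(2, 53) is exact, since it is a power of two):
console.log(Number.MAX_SAFE_INTEGER);                   // 9007199254740991, i.e. 2^53 - 1
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1);   // true -- 2^53 + 1 is not representable and rounds back down
console.log(Number.isSafeInteger(Math.pow(2, 53) + 2)); // false -- representable, but outside the range where every integer is exact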
I'm writing this under the assumption that JavaScript uses double-precision floating-point representation for all numbers.
Some numbers have an exact representation in the floating-point format, in particular all integers x such that |x| < 2^53. Some numbers don't, in particular fractions such as 0.1 or 0.2, which become repeating fractions in binary representation.
If all operands and the result of an operation have an exact representation, then it would be safe to compare the result using ==.
Related questions:
What number in binary can only be represented as an approximation?
Why can't decimal numbers be represented exactly in binary?
Integers within the representable range are exactly representable by the machine; floats (most of them) are not.
So if by "basic integer math just works" you mean it is a feature, then yes: you can rely on integer arithmetic being implemented correctly within that range.
The reason is that you can represent every whole number (1, 2, 3, ...) exactly in binary (0001, 0010, 0011, ...).
That is why integer results are always correct: 0011 - 0001 is always 0010.
The problem with floating point numbers is that the part after the point cannot always be converted to binary exactly.
All of the cases that you say "work" are ones where the numbers you have given can be represented exactly in the floating point format. You'll find that adding 0.25, 0.5 and 0.125 works exactly too, because they can also be represented exactly as binary floating point numbers.
It's only values that can't be, such as 0.1, where you'll get what appear to be inexact results.
Integers are exact because the imprecision results mainly from the way we write decimal fractions, and secondarily because many rational numbers simply don't have non-repeating representations in any given base.
See https://stackoverflow.com/a/9650037/140740 for the full explanation.
That method only works when you are adding a small enough integer to a very large integer -- and even in that case you are not representing both of the integers in the 'floating point' format.
Not all floating point numbers can be represented exactly; it's due to the way they are encoded. The Wikipedia page explains it better than I can: http://en.wikipedia.org/wiki/IEEE_754-1985.
So when you compare floating point numbers, you should use a delta (tolerance):
Math.abs(myFloat - expectedFloat) < delta
A reasonable starting point for delta is Number.EPSILON (the difference between 1 and the next representable number), scaled to the magnitude of the values being compared.
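A sketch of such a comparison (the helper name is mine; note the absolute value, and that the tolerance defaults to Number.EPSILON scaled by the larger operand):
// Compare two floats within a relative tolerance
function approximatelyEqual(a, b, tolerance) {
  tolerance = tolerance || Number.EPSILON * Math.max(Math.abs(a), Math.abs(b));
  return Math.abs(a - b) <= tolerance;
}

approximatelyEqual(0.1 + 0.2, 0.3); // true
approximatelyEqual(0.1 + 0.2, 0.4); // false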

1.265 * 10000 = 126499.99999999999? [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
When I multiply 1.265 by 10000, I get 126499.99999999999 when using JavaScript.
Why is this so?
Floating point numbers can't handle decimals correctly in all cases. Check out
http://en.wikipedia.org/wiki/Floating-point_number#Accuracy_problems
http://www.mredkj.com/javascript/nfbasic2.html
You should be aware that all information in computers is in binary and the expansions of fractions in different bases vary.
For instance, 1/3 in base 10 = 0.33333333333333333333333333, while 1/3 in base 3 is equal to 0.1 and in base 2 is equal to 0.0101010101010101.
In case you don't have a complete understanding of how different bases work, here's an example:
The base 4 number 301.12 would be equal to 3 * 4^2 + 0 * 4^1 + 1 * 4^0 + 1 * 4^-1 + 2 * 4^-2 = 48 + 0 + 1 + 0.25 + 0.125 = 49.375 in base 10.
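For reference, here is the same positional expansion evaluated in code (just the formula above, nothing more):
// 301.12 in base 4, expanded digit by digit
var value = 3 * Math.pow(4, 2)   // 48
          + 0 * Math.pow(4, 1)   //  0
          + 1 * Math.pow(4, 0)   //  1
          + 1 * Math.pow(4, -1)  //  0.25
          + 2 * Math.pow(4, -2); //  0.125
console.log(value); // 49.375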
Now, the problems with accuracy in floating point come from a limited number of bits in the significand. Floating point numbers have three parts: a sign bit, an exponent and a mantissa. JavaScript uses the 64-bit IEEE 754 standard, but for a simpler calculation we'll use 32-bit, so 1.265 in floating point would be:
A sign bit of 0 (0 for positive, 1 for negative); an exponent of 0 (which with the 127 offset, i.e. exponent + offset, is 127 in unsigned binary: 01111111); and finally the significand of 1.265. The IEEE floating point standard uses a hidden-1 representation, so the binary representation of 1.265 is 1.01000011110101110000101; dropping the leading 1 gives 01000011110101110000101.
So our final IEEE 754 single (32-bit) representation of 1.265 is:
Sign Bit (+) Exponent (0) Mantissa (1.265)
0 01111111 01000011110101110000101
Now 1000 would be:
Sign Bit (+) Exponent(9) Mantissa(1000)
0 10001000 11110100000000000000000
Now we have to multiply these two numbers. Floating point multiplication consists of re-adding the hidden 1 to both mantissas, multiplying the two mantissas, and adding the two exponents together (subtracting the offset so it is not counted twice). After this the mantissa has to be normalized again.
First 1.01000011110101110000101*1.11110100000000000000000=10.0111100001111111111111111000100000000000000000
(this multiplication is a pain)
Now obviously we have an exponent of 9 + an exponent of 0 so we keep 10001000 as our exponent, and our sign bit remains, so all that is left is normalization.
We need our mantissa to be of the form 1.xxx... (a single 1 before the binary point), so we have to shift it right once, which also means we have to increment our exponent, bringing us up to 10001001. Our mantissa is now normalized to 1.00111100001111111111111111000100000000000000000. It must be truncated to 23 bits, so we are left with 1.00111100001111111111111 (not including the leading 1, because it will be hidden in our final representation). So the final answer we are left with is
Sign Bit (+) Exponent(10) Mantissa
0 10001001 00111100001111111111111
Finally, if we convert this answer back to decimal we get (+) 2^10 * (1 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-11 + 2^-12 + 2^-13 + 2^-14 + 2^-15 + 2^-16 + 2^-17 + 2^-18 + 2^-19 + 2^-20 + 2^-21 + 2^-22 + 2^-23) = 1264.9998779296875.
While I did simplify the problem by multiplying 1000 by 1.265 instead of 10000, and by using single precision floating point instead of double, the concept stays the same. You lose accuracy because the floating point representation only has so many bits in the mantissa with which to represent any given number.
Hope this helps.
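You can also see the single-precision value described above directly in JavaScript via Math.fround, which rounds a number to 32-bit float precision (a quick sketch; the printed digits follow from the IEEE values):
// The nearest 32-bit float to 1.265, as worked through above
console.log(Math.fround(1.265));      // 1.2649999856948853
// The regular 64-bit double is closer, but still not exact
console.log((1.265).toPrecision(20)); // 1.2649999999999999023
console.log(1.265 * 10000);           // 126499.99999999999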
It's a result of floating point representation error. Not all numbers that have finite decimal representation have a finite binary floating point representation.
Have a read of this article. Essentially, computers and floating-point numbers do not go together perfectly!
On the other hand, 126500 IS equal to 126499.99999999.... :)
Just like 1 is equal to 0.99999999....
Because 1 = 3 * 1/3 = 3 * 0.333333... = 0.99999999....
Purely due to the inaccuracies of floating point representation.
You could try using Math.round:
var x = Math.round(1.265 * 10000);
These small errors are usually caused by the precision of the floating points as used by the language. See this wikipedia page for more information about the accuracy problems of floating points.
Here's a way to overcome your problem, although arguably not very pretty:
var correct = parseFloat((1.265*10000).toFixed(3));
// Here's a breakdown of the line of code:
var result = (1.265*10000);
var rounded = result.toFixed(3); // Gives a string representation with three decimals
var correct = parseFloat(rounded); // Convert the string back into a float (the trailing zero decimals are dropped)
If you need a solution, stop using floats or doubles and start using BigDecimal.
Check the BigDecimal implementation stz-ida.de/html/oss/js_bigdecimal.html.en
Even basic arithmetic on the MS JScript engine shows it:
WScript.Echo(1083.6 - 1023.6) gives 59.9999999
