Why does adding two decimals in Javascript produce a wrong result? [duplicate] - javascript

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
Is JavaScript’s Math broken?
Why does JS screw up this simple math?
console.log(.1 + .2) // 0.30000000000000004
console.log(.3 + .6) // 0.8999999999999999
The first example is greater than the correct result, while the second is less. ???!! How do you fix this? Do you have to always convert decimals into integers before performing operations? Do I only have to worry about adding (* and / don't appear to have the same problem in my tests)?
I've looked in a lot of places for answers. Some tutorials (like shopping cart forms) pretend the problem doesn't exist and just add values together. Gurus provide complex routines for various math functions or mention JS "does a poor job" in passing, but I have yet to see an explanation.

It's not a JS problem but a more general computing one. Floating-point numbers can't store all decimal numbers exactly, because they store values in binary.
For example:
0.5 is stored as 0.1b
but 0.1 = 1/10, so it's 1/16 + (1/10 - 1/16) = 1/16 + 0.0375
0.0375 = 1/32 + (0.0375 - 1/32) = 1/32 + 0.00625 ... etc.
so in binary 0.1 is 0.000110011...
and that expansion is endless.
But the computer has to stop at some point. So if in our example we stop at 0.00011, we get 0.09375 instead of 0.1.
Anyway, the point is that this doesn't depend on the language but on the computer's number format. What depends on the language is how numbers are displayed. A language usually rounds numbers to an acceptable representation when printing; JavaScript prints the shortest string that round-trips, which is why the inexact sum shows up as 0.30000000000000004: it really is a different double than 0.3.
So what you have to do (the number in memory is accurate enough) is tell JS to round the number "nicely" when converting it to text.
You may use toFixed/toPrecision, or a sprintf-style library, which give you fine control over how a number is displayed.
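A minimal sketch of that display-side rounding, using only the built-in toFixed and toPrecision methods (no library needed):

```javascript
// The stored sum is slightly off, but rounding only at display time hides it.
var sum = 0.1 + 0.2;                     // internally 0.30000000000000004

console.log(sum.toFixed(2));             // "0.30" - fixed count of decimal places
console.log(sum.toPrecision(3));         // "0.300" - fixed count of significant digits

// parseFloat turns the rounded text back into a number when one is needed again
console.log(parseFloat(sum.toFixed(2))); // 0.3
```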

From The Floating-Point Guide:
Why don’t my numbers, like 0.1 + 0.2
add up to a nice round 0.3, and
instead I get a weird result like
0.30000000000000004?
Because internally, computers use a
format (binary floating-point) that
cannot accurately represent a number
like 0.1, 0.2 or 0.3 at all.
When the code is compiled or
interpreted, your “0.1” is already
rounded to the nearest number in that
format, which results in a small
rounding error even before the
calculation happens.
The site has detailed explanations as well as information on how to fix the problem (and how to decide whether it is a problem at all in your case).

This is not a JavaScript-only limitation; it applies to all binary floating-point calculations. The problem is that 0.1, 0.2, and 0.3 are not exactly representable as JavaScript (or C or Java etc.) floats. Thus the output you are seeing is due to that inaccuracy.
In particular, only certain sums of powers of two are exactly representable. 0.5 = 0.1b = 2^(-1), 0.25 = 0.01b = 2^(-2), and 0.75 = 0.11b = 2^(-1) + 2^(-2) are all OK. But 1/10 = 0.000110011001100...b can only be expressed as an infinite sum of powers of 2, which the language chops off at some point. It's this chopping that causes these slight errors.
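The difference between the exactly representable sums of powers of two and the chopped-off ones is directly observable:

```javascript
// Fractions built from a few powers of two are exact, so equality holds:
console.log(0.5 + 0.25 === 0.75);  // true

// 0.1, 0.2 and 0.3 are all rounded, and the errors don't cancel:
console.log(0.1 + 0.2 === 0.3);    // false

// toString(2) exposes the stored binary expansions:
console.log((0.75).toString(2));   // "0.11" - terminates
console.log((0.1).toString(2));    // "0.000110011..." - 0011 repeats until the 53 bits run out
```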

This is normal for all programming languages because not all decimal values can be represented exactly in binary. See What Every Computer Scientist Should Know About Floating-Point Arithmetic

It has to do with how computers handle floating-point numbers. You can read more about it here: http://docs.sun.com/source/806-3568/ncg_goldberg.html


How does the V8 toString(radix) method in Google Chrome or in Node.js handle floating point numbers?

The method works as expected with integers, e.g.
(25).toString(2)
= '11001'
(25).toString(16)
= '19'
(25).toString(36)
= 'p'
But entering floats results in
(0.1).toString(2)
= '0.0001100110011001100110011001100110011001100110011001101'
(0.1).toString(16)
= '0.1999999999999a'
(0.1).toString(36)
= '0.3lllllllllm'
V8 is said to be an open-source engine. However, I cannot find the exact implementation of this method in the repository in order to understand it. How does the function handle floats?
(V8 developer here.)
#pilchard's link is right; more specifically the function you're looking for is DoubleToRadixCString.
The observation that 0.1 (decimal) is a non-terminating fraction in its binary representation (which is, of course, how doubles are stored inside computers) is a nice illustration of the underlying reason why 0.1 * 3 == 0.30000000000000004: the non-terminating fractions are necessarily cut off at some point, which constitutes a rounding error, and some operations make these rounding errors visible. It's the binary equivalent of a similar effect in decimal: when you represent 1/3 as "0.333333" (arbitrarily choosing 6 digits as the represented length, but if you only cut off after 100 digits that wouldn't really change anything), and multiply that by 3, you get "0.999999", not 1.

toFixed method not working as expected

While working with large numbers I found a strange issue:
(99999999999999.9).toFixed(2) returns "99999999999999.91"
whereas
(99999999999999.8).toFixed(2) returns "99999999999999.80"
What I need is for
(99999999999999.9).toFixed(2) to return "99999999999999.90"
How can I resolve this?
You basically cannot do this; it is due to the representation of numbers in floating point and how JavaScript works in the background.
JavaScript uses IEEE 754 double-precision floating point to represent numbers in memory. 99999999999999.9 is stored as
0 10000101101 0110101111001100010000011110100011111111111111111010
which is a number consisting of 3 parts:
First the sign: "0" means it's positive.
Then the exponent "multiplier": 1069. We subtract 1023 from it (to allow for negative exponents), so the multiplier is 2^(1069-1023).
And then the mantissa, the "data": 1.421085471520199. (The calculation is shown on Wikipedia; without MathJax available it is a bit hard to show here.)
So the total value is +1.421085471520199 * 2^(1069-1023) = (according to Wolfram Alpha) 99999999999999.903472220635136.
As you can see, it isn't able to represent that decimal value exactly. This is due to the limited mantissa: setting the last bit of the mantissa one higher gives 9.99999999999999245828438884352 × 10^13, and one lower gives 9.99999999999998893984717996032 × 10^13.
So every real number between those two neighbours that is closest to our value is represented as the same double - you won't notice a difference there. [*]
Now as to why the floating-point value is rounded "up" when displayed in this case: that is a bit harder to explain, but I guess it's due to the implementation.
Now, could we predict this? And why doesn't it happen in decimal? This is easy to explain. Numeric systems have different bases: normally we use base 10, while computers use base 2 internally.
The choice has far-reaching consequences: when working in a certain base you can only represent exactly those fractions whose denominators are built from the prime factors of that base. E.g. base 10 has prime factors 2 and 5, so we can write down any fraction built from those:
1/2 => 0.5
1/5 => 0.2
2/5 => 0.4
1/10 = 1/2*1/5 => 0.1
However 1/3 cannot be built from those two factors, nor can 1/7 or 1/6 - so in base 10 we cannot write those down exactly as "decimals".
Similarly, base 2 has only the prime factor 2. A decimal number ending in ".9" always involves a factor 1/5, which is not available in binary - and thus cannot be represented exactly in binary.
Now there are solutions: there exist libraries providing so-called "decimal" packages, which keep numbers in a decimal representation and replace the computer's native floating-point arithmetic with manual calculations.
[*]: PS: I do not know whether these are exactly the boundaries - i.e. whether the floating-point formatter always rounds up, or rounds towards the nearest representable value. Someone with knowledge of the JS engines could answer that.
You are definitely running into floating-point precision error here.
If your specific use case does not need extra sensitivity, you can use
(Math.round(yourValue*100)/100).toFixed(2)
There's also a much newer way of handling this using the Intl.NumberFormat API, playing especially with the fraction config:
minimumFractionDigits: 2,
maximumFractionDigits: 2
Of course, if you need something much more robust that plays well with other edge cases, there's also decimal.js to help out:
var a = new Decimal('99999999999999.9'); // pass a string so the literal isn't rounded before decimal.js sees it
console.log(a.toFixed(2))

What is the best way to use float numbers with decimal point in JavaScript?

I did some investigation and saw that there is a whole website explaining the correct way to use floats: http://floating-point-gui.de/
In Java, for example, I always used BigDecimal for floats just to make sure that everything would work correctly without confusing me. For example:
BigDecimal a = new BigDecimal("0.1");
BigDecimal b = new BigDecimal("0.2");
BigDecimal c = a.add(b); // returns a BigDecimal representing exactly 0.3
// instead of this number: 0.30000000000000004 that can
// easily confuse me
However, in JavaScript I realized that there is no such built-in library (at least not on the Math object that I've looked at).
So the best way that I did find so far was to use a JavaScript library that it is doing exactly that! In my projects I am using this one: https://github.com/dtrebbien/BigDecimal.js
Although I think this is the best library I could find, the exact library doesn't really matter so much. My main questions are:
Is using a library like BigDecimal the best way to handle floats in JavaScript? Or am I missing something? I want to do basic calculations like adding, multiplying, etc.
Is there any other suggested way, for example, to add two floats in JavaScript?
For example, let's say that I want to have: 0.1 + 0.2 . With the BigDecimal library, I will have:
var a = new BigDecimal("0.1");
var b = new BigDecimal("0.2");
console.log(a.add(b).toString()); //returns exactly 0.3
So is there any other way to add 0.1 + 0.2 and have exactly 0.3, in JavaScript without having to actually round the number ?
For the reference the below example in JavaScript will not work:
var a = 0.1;
var b = 0.2;
console.log(a + b); //This will have as an output: 0.30000000000000004
As all numbers in JavaScript are 64-bit floats, in general the best way to do floating-point arithmetic in JavaScript is simply to use numbers directly.
However, if you specifically have a problem where you need higher precision than 64 bits provide, then you need to do something like that.
I urge you, however, to strongly consider whether you have such a use case or not.
If your problem is some far-down decimals affecting your comparisons, there are functions to deal with that sort of thing specifically. Look up the Number.prototype.toFixed(n) function, and also see this discussion on almostEquals, which proposes that you use an epsilon for float comparisons.
You could use the toFixed(n) method if you are not relying on high precision:
var a = 0.1;
var b = 0.2;
var sum = a + b;
console.log(sum.toFixed(1));
Your calculation shows precision loss around the 17th significant digit, which is no big issue in most cases.
I would advise you to go with toFixed() if you want to get the output right.
There are a few things to consider here.
A lot of times people use the term 'float' when they really mean to use fixed decimal
Fixed Decimal
US currency, for example, is a fixed decimal.
$12.40
$0.90
In this case, there will always be two decimal points.
If your values fit into the range of JavaScript integers (2^53-1) or 9007199254740991 then you can simply work in cents and store all your values that way.
1240
90
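A sketch of that cents-based scheme (the helper names are mine):

```javascript
// Keep currency as integer cents; floats appear only at the input/output edges.
function toCents(dollars) {
  return Math.round(dollars * 100); // round away the tiny scaling error
}

function toDollarString(cents) {
  return (cents / 100).toFixed(2);
}

var total = toCents(12.40) + toCents(0.90); // 1240 + 90 = 1330, exact integer math
console.log(toDollarString(total));         // "13.30"
```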
Floating point decimal
Now, floating point is where you deal with extreme ranges of numbers and the decimal point actually moves, or floats.
12.394
1294856.9458566
.0000000998984
49586747435893
In cases of floating point, accuracy is limited to 53 significant bits (which means around 15 decimal digits of accuracy). For a lot of things, that is good enough.
Big Decimal Classes
You should only look at big decimal classes if you need something beyond the range of JavaScript native numbers. BigDecimal classes are much slower than native math and you have to use a functional style of programming rather than use the math operators.
JavaScript does not support operator overloading, so there is no built-in way to do natural calculations like '0.1 + 0.2' with BigNumbers.
What you can do is use math.js, which has an expression parser and support for BigNumbers:
math.config({
number: 'bignumber', // Default type of number: 'number' or 'bignumber'
precision: 64 // Number of significant digits for BigNumbers
});
math.eval('0.1 + 0.2'); // returns a BigNumber, 0.3
See docs: http://mathjs.org/docs/datatypes/bignumbers.html
You could convert the operands to integers beforehand:
(0.1 * 10 + 0.2 * 10) / 10
This gives the exact answer here, but people probably don't want to deal with that floating-point rounding stuff by hand.
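That trick only works when each scaled operand happens to land on an exact integer; adding Math.round to the scaling step makes it reliable:

```javascript
console.log(0.1 * 10 === 1);               // true  - this operand scales exactly, so the trick works
console.log(0.07 * 100 === 7);             // false - 0.07 * 100 is 7.000000000000001
console.log(Math.round(0.07 * 100) === 7); // true  - rounding repairs the scaled value
```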

Summing numbers javascript [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 9 years ago.
'-15.48' - '43'
Just wrote this in console and result is the following:
-58.480000000000004
Why is it so? And what can I do to get the correct result?
Because all floating point math is like this and is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double.
to fix it you may try:
(-15.48 - 43).toFixed(2);
Fiddle demo
Use toFixed():
var num = 5.56789;
var n = num.toFixed(2);
Result: "5.57"
http://en.wikipedia.org/wiki/Machine_epsilon
Humans count in decimal numbers; machines mostly use binary. 10 = 2 × 5, and 2 and 5 are mutually prime. That trivial fact has an unpleasant consequence.
In most calculations (except lucky degenerate cases) with "simple" decimal numbers that include division, the result is an infinitely repeating binary number.
Likewise, in most calculations (except lucky degenerate cases) with "simple" binary numbers that include division, the result is an infinitely repeating decimal number.
One can check this with pen and paper as described at http://en.wikipedia.org/wiki/Repeating_decimal#Every_rational_number_is_either_a_terminating_or_repeating_decimal
That means that almost every time you see the result of a floating-point calculation on your screen - as a finite number - the computer cheats a little and shows you an approximation instead of the real result.
It also means that almost every time you store the result of a calculation in a variable - which has a finite size, no larger than the computer's available memory - the computer retains an approximation instead of the real result.
The typical gotchas then include:
A program accumulates the sum of some looong sequence, a long loop of the AVG := AVG + X[i] kind, where 0 < X[i] < const. If the loop runs long enough, at some point AVG stops changing: all the elements from that point on are simply discarded.
A program calculates some value twice using different formulas and then makes a safety check like Value_1 == Value_2. In theoretical mathematics the values are equal; on real computers they are not.
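Both gotchas can be reproduced in a couple of lines:

```javascript
// Gotcha 1: absorption. Above 2^53 the gap between adjacent doubles exceeds 1,
// so adding 1 to a large accumulator can change nothing at all.
var big = 1e16;
console.log(big + 1 === big);   // true - the +1 is silently discarded

// Gotcha 2: two mathematically equal results compare unequal,
// because the two sides carry different rounding errors.
console.log(0.1 + 0.2 === 0.3); // false
```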

Javascript floating point addition workaround [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Want to improve this question? Update the question so it can be answered with facts and citations by editing this post.
Closed 9 years ago.
Improve this question
Ok, so we all know of the floating point number problem, such as:
0.23 - 1 + 1 = 0.22999999999999998
And since in JavaScript, unlike other languages, all numbers are actually floating point and there's no int/decimal type, there are all kinds of libraries and workarounds, such as BigDecimal, to handle this problem. This is best discussed here.
I was creating a "numeric spinner" control that supports floating point numbers, and obviously I wouldn't want the user to click "up" and then "down" and get a different number from what he started with, so I tried writing a workaround - a "addPrecise" method of sorts.
My idea was the following:
Take the two numbers I'm about to add, and figure out their "precision" - how many digits they have after the decimal point.
Add the two numbers as floats
Apply toFixed() on the result with the max precision of the two
For example:
float #1: 1.23
float #2: -1
adding them normally would result in 0.22999999999999998 but if I'm taking the maximal number of decimal places, which is #1's 2 digits, and apply toFixed(2) I get 0.23 as I wanted.
I've done this with the following piece of code but I'm not happy with it.
function addPrecise(f1, f2) {
    // the first non-digit, non-minus character is the decimal separator
    var floatRegex = /[^\d\-]/;
    var f1p = floatRegex.exec(f1.toString()) ? f1.toString().split(floatRegex.exec(f1.toString()))[1].length : 0;
    var f2p = floatRegex.exec(f2.toString()) ? f2.toString().split(floatRegex.exec(f2.toString()))[1].length : 0;
    var precision = Math.max(f1p, f2p);
    return parseFloat((parseFloat(f1) + parseFloat(f2)).toFixed(precision));
}
(It's worth noting that I'm using the regex to find the decimal separator because in other locales it might be a comma instead of a period. I'm also taking into account the possibility that I got an int (with no point), in which case its precision is 0.)
Is there a cleaner/simpler/better way to do this?
Edit: I'd also like to point out that finding a reliable way to extract the number of decimal digits is also part of the question. I'm unsure about my method there.
An appropriate solution for this problem is:
Determine how many digits, say d, are needed after the decimal point.
Read inputs from the user and convert them to integers scaled by 10^d.
Do all arithmetic using the integers.
When displaying values for the user, divide them by 10^d for display.
More details:
If all work is in whole units of user-entered data, then the number of digits needed after the decimal point, d, is the same as the number of digits to be accepted from the user. (If one were going to do some arithmetic with fractions of the step size, for example, then more digits might be needed.)
When the user enters input, it should be converted to the scaled integers. Note that the user enters a string (not a floating-point number; it is just a string when the user types it), and the string should be a numeral (characters that represent a number). One way to process this input is to call library routines to convert this numeral to a floating-point number, then multiply by 10^d, then round the floating-point result to the nearest integer (to compensate for floating-point rounding errors). For numbers of reasonable size, this will always produce exactly the desired result; floating-point rounding errors will not be a problem. (Excessively large user input should be rejected.) Another way to process this input is to write your own code to convert the numeral directly to a scaled integer, avoiding floating-point entirely.
As long as only addition, multiplication, and subtraction are used (e.g., multiplying the step size by a number of steps and adding to a prior value), and the integer range is not exceeded, then all arithmetic is exact; there are no rounding errors. If division is used, this approach must be reconsidered.
As with reading input, displaying output can be performed either with an assist from floating point and library routines or by writing your own code to convert directly. To use floating point, convert the integer to floating point, divide by 10^d, and use a library routine to format it with no more than d digits after the decimal place. (Using default format conversions in library routines might result in more than d digits displayed. If only d digits are displayed, and d is a reasonably small number, then floating-point rounding errors that occur during the formatting will not be visible.)
It is possible to implement all of the above using only floating-point, as long as care is taken to ensure that the arithmetic uses entirely integers (or other values that are exactly representable in the floating-point format) within reasonable bounds.
