I started having problems with decimals, which led me to learn about floating-point math. My question is: what's a viable solution for it?
x = 0.1;
y = 0.2;
num = x + y;
num = Math.round(num * 100) / 100;
or
x = 0.1;
y = 0.2;
num = x + y;
num = num.toFixed(2);
num = Number(num);
Are these both 100% viable options? As in, never have to worry about having the same problem anymore? Which one would you recommend? Or would you recommend a different solution? Any reason to use one solution over the other? Thanks in advance for any help.
EDIT:
Sorry I wasn't more specific. I'm fine with it always being 2 decimals, since that won't be a problem for my project. Obviously if you want more decimals you would use 1000 instead of 100 and toFixed(3), and so on. My main concern is, are the above 2 solutions 100% viable, as in, I won't have to worry about any of the same problems? And also, would you recommend the first solution or the second? Or another one altogether? Since I will be using a method quite a lot for many calculations. Thanks again for your help.
This is not a problem with JavaScript's floating point implementation, or something that will go away if you use a string formatting function like toFixed (the MDN docs for it here make clear that it is a string representation returned, not some other format of number). Rather, this is an inherent property of floating point arithmetic as a concept - it has a variable accuracy designed to closely approximate values within a certain range.
If you want your values to always be entirely accurate, the only solution is not to use floating point numbers. Generally, this is done by using integers representing some fraction of the "whole" numbers you're dealing with (e.g. pence/cents instead of pounds/euros/dollars, or milliseconds instead of seconds). Alternatively, you may be able to find a precision maths library which performs fixed-point arithmetic, so avoids the inaccuracies but will have worse performance.
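The integer-cents idea can be sketched like this (the helper names here are made up for illustration):

```javascript
// Hypothetical helpers: store money as integer cents, not fractional pounds/dollars
const toCents = pounds => Math.round(pounds * 100); // round away the float noise once, on input
const toPounds = cents => (cents / 100).toFixed(2); // format only when displaying

const total = toCents(0.1) + toCents(0.2); // 10 + 20 = 30, exact integer arithmetic
console.log(toPounds(total)); // "0.30"
```

All the intermediate arithmetic happens on exact integers, so no error can accumulate; the only rounding is at input and output.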
If you don't mind the risk of the inaccuracies slowly building up, you can simply use formatting functions to only display to a certain precision when you output the result of a calculation. There is little point in converting to a string with a fixed precision and then back to a number, as the floating point implementation may still be unable to represent that number with complete precision.
Related
I am having an issue with the way Javascript is rounding numbers when hitting 0.5.
I am writing levies calculators, and am noticing a 0.1c discrepancy in the results.
The problem is that the result for them is 21480.705 which my application translates into 21480.71, whereas the tariff says 21480.70.
This is what I am seeing with Javascript:
(21480.105).toFixed(2)
"21480.10"
(21480.205).toFixed(2)
"21480.21"
(21480.305).toFixed(2)
"21480.31"
(21480.405).toFixed(2)
"21480.40"
(21480.505).toFixed(2)
"21480.51"
(21480.605).toFixed(2)
"21480.60"
(21480.705).toFixed(2)
"21480.71"
(21480.805).toFixed(2)
"21480.81"
(21480.905).toFixed(2)
"21480.90"
Questions:
What the hell is going on with this erratic rounding?
What's the quickest easiest way to get a "rounded up" result (when hitting 0.5)?
So, as some of the others have already explained, the reason for the 'erratic' rounding is a floating-point precision problem. You can investigate this by using the toExponential() method of a JavaScript number.
(21480.905).toExponential(20)
#>"2.14809049999999988358e+4"
(21480.805).toExponential(20)
#>"2.14808050000000002910e+4"
As you can see here, 21480.905 gets a double representation that is slightly smaller than 21480.905, while 21480.805 gets a double representation slightly larger than the original value. Since the toFixed() method works with the double representation and has no idea of your original intended value, it does all it can and should do with the information it has.
One way to work around this is to shift the decimal point to the number of decimals you require by multiplication, then use the standard Math.round(), then shift the decimal point back again, either by division or by multiplication with the inverse. Finally, we call the toFixed() method to make sure the output value gets correctly zero-padded.
var x1 = 21480.905;
var x2 = -21480.705;

function round_up(x, nd) {
  var rup = Math.pow(10, nd);
  var rdwn = Math.pow(10, -nd); // Or you can just use 1/rup
  return (Math.round(x * rup) * rdwn).toFixed(nd);
}

function round_down(x, nd) {
  var rup = Math.pow(10, nd);
  var rdwn = Math.pow(10, -nd);
  return (Math.round(x * -rup) * -rdwn).toFixed(nd);
}

function round_tozero(x, nd) {
  return x > 0 ? round_down(x, nd) : round_up(x, nd);
}

console.log(x1, 'up', round_up(x1, 2));
console.log(x1, 'down', round_down(x1, 2));
console.log(x1, 'to0', round_tozero(x1, 2));
console.log(x2, 'up', round_up(x2, 2));
console.log(x2, 'down', round_down(x2, 2));
console.log(x2, 'to0', round_tozero(x2, 2));
Finally:
Encountering a problem like this is usually a good time to sit down and have a long think about whether you are actually using the correct data type for your problem. Since floating point errors can accumulate with iterative calculation, and since people are sometimes strangely sensitive with regards to money magically disappearing/appearing in the CPU, maybe you would be better off keeping monetary counters in integer 'cents' (or some other well thought out structure) rather than floating point 'dollars'.
The why -
You may have heard that in some languages, such as JavaScript, numbers with a fractional part are called floating-point numbers, and floating-point numbers are about dealing with approximations of numeric operations. Not exact calculations, approximations. Because how exactly would you expect to compute and store 1/3, or the square root of 2, with exact calculations?
If you had not, then now you've heard of it.
That means that when you type in the number literal 21480.105, the actual value that ends up stored in computer memory is not actually 21480.105, but an approximation of it. The value closest to 21480.105 that can be represented as a floating-point number.
And since this value is not exactly 21480.105, that means it is either slightly more than that, or slightly less than that. More will be rounded up, and less will be rounded down, as expected.
The solution -
Your problem comes from approximations, that it seems you cannot afford. The solution is to work with exact numbers, not approximate.
Use whole numbers. Those are exact. Add in a fractional dot when you convert your numbers to string.
This works in most cases. (See note below.)
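A minimal sketch of that idea, assuming the levy from the question is stored as an exact integer count of tenths of a cent:

```javascript
// 21480.705 stored exactly as 21480705 tenths of a cent
const tenths = 21480705;
const cents = Math.round(tenths / 10); // 2148070.5 is exact here, so .5 rounds up: 2148071
// insert the decimal dot while formatting, using only integer arithmetic
const display = Math.floor(cents / 100) + "." + String(cents % 100).padStart(2, "0");
console.log(display); // "21480.71"
```

Because 21480705 and 2148070.5 are both exactly representable, the half-way case rounds the way you expect, which is exactly what the toFixed() examples above failed to do.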
The rounding problem can be avoided by using numbers represented in exponential notation:
function round(value, decimals) {
  return Number(Math.round(value + 'e' + decimals) + 'e-' + decimals);
}
console.log(round(21480.105, 2).toFixed(2));
Found at http://www.jacklmoore.com/notes/rounding-in-javascript/
NOTE: As pointed out by Mark Dickinson, this is not a general solution because it returns NaN in certain cases, such as round(0.0000001, 2) and with large inputs.
Edits to make this more robust are welcome.
You could round to an integer, then shift in a decimal point while displaying:
function round(n, digits = 2) {
  // rounding to an integer is accurate in more cases; shift left by "digits"
  // to keep that many digits behind the decimal point
  const str = "" + Math.round(n * 10 ** digits);
  return str
    .padStart(digits + 1, "0") // ensure there are enough digits, 0 -> 000 -> 0.00
    .slice(0, -digits) + "." + str.slice(-digits); // insert the point "digits" from the end
}
What the hell is going on with this erratic rounding?
Please reference the cautionary Mozilla Doc, which identifies the cause for these discrepancies. "Floating point numbers cannot represent all decimals precisely in binary which can lead to unexpected results..."
Also, please reference Is floating point math broken? (Thank you Robby Cornelissen for the reference)
What's the quickest easiest way to get a "rounded up" result (when hitting 0.5)?
Use a JS library like accounting.js to round, format, and present currency.
For example...
function roundToNearestCent(rawValue) {
return accounting.toFixed(rawValue, 2);
}
const roundedValue = roundToNearestCent(21480.105);
console.log(roundedValue);
<script src="https://combinatronics.com/openexchangerates/accounting.js/master/accounting.js"></script>
Also, consider checking out BigDecimal in JavaScript.
Hope that helps!
I am working with JS numbers and have little experience with them. So, I would like to ask a few questions:
2.2932600144518896e+160
Is this a float or an integer number? If it's a float, how can I round it to two decimals (to get 2.29)? And if it's an integer, I suppose it's a very large number, and then I have another problem.
Thanks
Technically, as said in comments, this is a Number.
What you can do if you want the number (not its string representation):
var x = 2.2932600144518896e+160;
var magnitude = Math.floor(Math.log10(x)) + 1;
console.log(Math.round(x / Math.pow(10, magnitude - 3)) * Math.pow(10, magnitude - 3));
What's the problem with that? Floating point operations may not be precise, so digits different from 0 may still appear in the result.
To get this number truly "rounded", you can only achieve it through a string representation (but then you can't perform any further arithmetic on it).
JavaScript only has one Number type, so it is technically neither a float nor an integer.
However this isn't really relevant as the value (or rather representation of it) is not specific to JavaScript and uses E-Notation which is a standard way to write very large/small numbers.
Taking this into account, 2.2932600144518896e+160 is equivalent to 2.2932600144518896 * Math.pow(10,160), and approximately 229 followed by 158 zeroes, i.e. very flippin' big.
What is the safest way to divide two IEEE 754 floating point numbers?
In my case the language is JavaScript, but I guess this isn't important. The goal is to avoid the normal floating point pitfalls.
I've read that one could use a "correction factor" (cf) (e.g. 10 uplifted to some number, for instance 10^10) like so:
(a * cf) / (b * cf)
But I'm not sure this makes a difference in division?
Incidentally, I've already looked at the other floating point posts on Stack Overflow and I've still not found a single post on how to divide two floating point numbers. If the answer is that there is no difference between the solutions for working around floating point issues when adding and when dividing, then just answer that please.
Edit:
I've been asked in the comments which pitfalls I'm referring to, so I thought I'd just add a quick note here as well for the people who don't read the comments:
When adding 0.1 and 0.2, you would expect to get 0.3, but with floating point arithmetic you get 0.30000000000000004 (at least in JavaScript). This is just one example of a common pitfall.
The above issue is discussed many times here on Stack Overflow, but I don't know what can happen when dividing and whether it differs from the pitfalls found when adding or multiplying. It might be that there are no risks, in which case that would be a perfectly good answer.
The safest way is to simply divide them. Any prescaling will either do nothing, or increase rounding error, or cause overflow or underflow.
If you prescale by a power of two you may cause overflow or underflow, but will otherwise make no difference in the result.
If you prescale by any other number, you will introduce additional rounding steps on the multiplications, which may lead to increased rounding error on the division result.
If you simply divide, the result will be the closest representable number to the ratio of the two inputs.
IEEE 754 64-bit floating point numbers are incredibly precise. A difference in one part in almost 10^16 can be represented.
There are a few operations, such as floor and exact comparison, that make even extremely low significance bits matter. If you have been reading about floating point pitfalls you should have already seen examples. Avoid those. Round your output to an appropriate number of decimal places. Be careful adding numbers of very different magnitude.
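The magnitude warning above is easy to demonstrate directly in JavaScript (whose numbers are the same IEEE 754 doubles):

```javascript
// At magnitude 1e16 the spacing between adjacent doubles is 2,
// so adding 1 is lost entirely to rounding
console.log(1e16 + 1 === 1e16); // true
console.log(1e16 + 2 === 1e16); // false — 2 is representable at this magnitude
```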
The following program demonstrates the effects of using each power of 10 from 10 through 1e20 as scale factor. Most get the same result as not multiplying, 6.0, which is also the rational number arithmetic result. Some get a slightly larger result.
You can experiment with different division problems by changing the initializers for a and b. The program prints their exact values, after rounding to double.
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        double mult = 10;
        double a = 2;
        double b = 1.0 / 3.0;
        System.out.println("a=" + new BigDecimal(a));
        System.out.println("b=" + new BigDecimal(b));
        System.out.println("No multiplier result=" + (a / b));
        for (int i = 0; i < 20; i++) {
            System.out.println("mult=" + mult + " result=" + ((a * mult) / (b * mult)));
            mult *= 10;
        }
    }
}
Output:
a=2
b=0.333333333333333314829616256247390992939472198486328125
No multiplier result=6.0
mult=10.0 result=6.000000000000001
mult=100.0 result=6.000000000000001
mult=1000.0 result=6.0
mult=10000.0 result=6.000000000000001
mult=100000.0 result=6.000000000000001
mult=1000000.0 result=6.0
mult=1.0E7 result=6.000000000000001
mult=1.0E8 result=6.0
Floating point division will produce exactly the same "pitfalls" as addition or multiplication operations, and no amount of pre-scaling will fix it - the end result is the end result and it's the internal representation of that in IEEE-754 that causes the "problem".
The solution is to completely forget about these precision issues during calculations themselves, and to perform rounding as late as possible, i.e. only when displaying the results of the calculation, at the point at which the number is converted to a string using the .toFixed() function provided precisely for that purpose.
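A small sketch of "round as late as possible": the error is carried through the arithmetic and only disappears at the final formatting step.

```javascript
// accumulate at full precision; do not round intermediate results
let total = 0;
for (let i = 0; i < 10; i++) {
  total += 0.1; // each addition carries a tiny representation error
}
console.log(total);            // 0.9999999999999999
console.log(total.toFixed(2)); // "1.00" — rounding applied only for display
```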
.toFixed() is not a good solution for dividing float numbers.
Using JavaScript, try 4.11 / 100 and you will be surprised.
4.11 / 100 = 0.041100000000000005
Not all browsers get the same results.
Right solution is to convert float to integer:
parseInt(4.11 * Math.pow(10, 10)) / (100 * Math.pow(10, 10)) = 0.0411
I did some investigating and found that there is a whole website explaining the correct way to use floats: http://floating-point-gui.de/
In Java, for example, I was always using BigDecimal for floats just to make sure that everything would work correctly without confusing me. For example:
BigDecimal a = new BigDecimal("0.1");
BigDecimal b = new BigDecimal("0.2");
BigDecimal c = a.add(b); // returns a BigDecimal representing exactly 0.3
// instead of this number: 0.30000000000000004 that can
// easily confuse me
However, in JavaScript I realized that there is no such built-in library (at least not in the Math object, which is where I've looked).
So the best way that I did find so far was to use a JavaScript library that it is doing exactly that! In my projects I am using this one: https://github.com/dtrebbien/BigDecimal.js
Although I think this is the best library I could find, the library doesn't really matter so much. My main questions are:
Is using a library like BigDecimal the best possible way to work with floats in JavaScript, or am I missing something? I want to do basic calculations like adding, multiplying, etc.
Is there any other suggested way to, for example, add two floats in JavaScript?
For example, let's say that I want to have: 0.1 + 0.2 . With the BigDecimal library, I will have:
var a = new BigDecimal("0.1");
var b = new BigDecimal("0.2");
console.log(a.add(b).toString()); //returns exactly 0.3
So is there any other way to add 0.1 + 0.2 and have exactly 0.3, in JavaScript without having to actually round the number ?
For the reference the below example in JavaScript will not work:
var a = 0.1;
var b = 0.2;
console.log(a + b); //This will have as an output: 0.30000000000000004
As all numbers in JavaScript are 64-bit floats, in general the best way to do floating-point arithmetic in JavaScript is to simply use the numbers directly.
However, if you specifically have a problem where you need higher precision than 64 bits can provide, then you need to do something like that.
I urge you, however, to strongly consider whether you actually have such a use case or not.
If your problem is with some far-down decimals affecting your comparisons, there are functions to deal with that sort of thing specifically. Look up the Number.prototype.toFixed(n) function, and also see this discussion on almostEquals, which proposes that you incorporate the use of an epsilon for float comparisons.
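A sketch of the epsilon comparison mentioned above (almostEquals here is a hypothetical helper, not a built-in):

```javascript
// compare floats with a tolerance instead of ===
const almostEquals = (a, b, eps = Number.EPSILON) => Math.abs(a - b) <= eps;

console.log(0.1 + 0.2 === 0.3);            // false — exact comparison fails
console.log(almostEquals(0.1 + 0.2, 0.3)); // true  — difference is far below the tolerance
```

Number.EPSILON (the gap between 1 and the next representable double) is a reasonable default tolerance for values near 1; for larger magnitudes you would scale it accordingly.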
You could use the toFixed(n) method if you are not relying on high precision:
var a = 0.1;
var b = 0.2;
var sum = a + b;
console.log(sum.toFixed(1));
Your calculation shows a precision loss in the 17th decimal place, which is no big issue in most cases.
I would advise you to go with toFixed() if you want to get the output right.
There are a few things to consider here.
A lot of the time, people use the term 'float' when they really mean to use a fixed decimal.
Fixed Decimal
US currency, for example, is a fixed decimal.
$12.40
$0.90
In this case, there will always be two decimal points.
If your values fit into the range of JavaScript integers (2^53-1) or 9007199254740991 then you can simply work in cents and store all your values that way.
1240
90
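Working in cents, the two amounts above can be summed exactly and formatted at the end:

```javascript
const cents = [1240, 90];                         // $12.40 and $0.90 as integer cents
const totalCents = cents.reduce((a, b) => a + b); // 1330 — exact integer addition
console.log("$" + (totalCents / 100).toFixed(2)); // "$13.30"
```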
Floating point decimal
Now, floating point is where you deal with extreme ranges of numbers and the decimal point actually moves, or floats.
12.394
1294856.9458566
.0000000998984
49586747435893
In cases of floating point, you get accuracy to 53 significant bits (which means around 15 decimal digits of accuracy). For a lot of things, that is good enough.
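That 53-bit limit is easy to observe: beyond 2^53, consecutive integers can no longer be told apart.

```javascript
console.log(Number.MAX_SAFE_INTEGER);               // 9007199254740991, i.e. 2^53 - 1
console.log(9007199254740992 === 9007199254740993); // true — the odd value is not representable
```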
Big Decimal Classes
You should only look at big decimal classes if you need something beyond the range of JavaScript native numbers. BigDecimal classes are much slower than native math and you have to use a functional style of programming rather than use the math operators.
JavaScript does not support operator overloading, so there is no built-in way to do natural calculations like '0.1 + 0.2' with BigNumbers.
What you can do is use math.js, which has an expression parser and support for BigNumbers:
math.config({
number: 'bignumber', // Default type of number: 'number' or 'bignumber'
precision: 64 // Number of significant digits for BigNumbers
});
math.eval('0.1 + 0.2'); // returns a BigNumber, 0.3
See docs: http://mathjs.org/docs/datatypes/bignumbers.html
You could convert the operands to integers beforehand:
(0.1 * 10 + 0.2 * 10) / 10
will give the exact answer, but people probably don't want to deal with that floating-point rounding stuff themselves.
When I pull the values I want to multiply, they're strings. So I pull them, parse them as floats (to preserve the decimal places), and multiply them together.
LineTaxRate = parseFloat(myRate) * parseFloat(myQuantity) * parseFloat(myTaxRateRound);
This has worked for 99% of my invoices but I discovered one very odd problem.
When it multiplied: 78 * 7 * 0.0725
Javascript is returning: 39.584999999999994
When you normally do the math in a calculator its: 39.585
When all is said and done, I take that number and round it using .toFixed(2)
Because Javascript is returning that number, it's not rounding to the desired value of: $39.59
I tried Math.round() the total but I still get the same number.
I have thought of rounding the number to 3 decimals then two, but that seems hacky to me.
I have searched everywhere and all I see is people mention parseFloat loses its precision, and to use .toFixed, however in the example above, that doesn't help.
Here is a test script I made to recreate the issue:
<script>
  var num1 = parseFloat("78");
  var num2 = parseFloat("7");
  var num3 = parseFloat("0.0725");
  var myTotal = num1 * num2 * num3;
  var result = Math.round(myTotal * 100) / 100;
  alert(myTotal);
  alert(myTotal.toFixed(2));
  alert(result);
</script>
Floating points are represented in binary, not decimal. Some decimal numbers will not be represented precisely. And unfortunately, since Javascript only has one Number class, it's not a very good tool for this job. Other languages have decent decimal libraries designed to avoid precisely this kind of error. You're going to have to either accept one-cent errors, implement a solution server-side, or work very hard to fix this.
edit: ooh! you can do 78 * 7 * 725 and then divide by 10000, or to be even more precise just put the decimal point in the right place. Basically represent the tax rate as something other than a tiny fraction. Less convenient but it'll probably take care of your multiplication errors.
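A sketch of that suggestion, with the 7.25% tax rate kept as the integer 725 (an assumed "per ten thousand" representation):

```javascript
const raw = 78 * 7 * 725;            // 395850 — pure integer math, exact
const cents = Math.round(raw / 100); // 3958.5 is exactly representable, so .5 rounds up: 3959
const display = "$" + Math.floor(cents / 100) + "." + String(cents % 100).padStart(2, "0");
console.log(display); // "$39.59"
```

Because no step ever produces an inexact intermediate value until the very end, the half-cent rounds up as the invoice expects, instead of the 39.584999999999994 from the question.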
You might find the Accounting.js library useful for this. It has an "improved" toFixed() method.
JavaScript/TypeScript have only one Number class, which is not that good. I had the same problem, as I am using TypeScript. I solved my problem by using the decimal.js-light library.
new Decimal(78).mul(7).mul(0.0725) returns as expected 39.585