let price: number = 0.7; // $
let discountPrice: number = 0.6; // $
let netPrice: number = price - discountPrice;
console.log(netPrice); // 0.09999999999999998, not the expected 0.1
Due to the IEEE 754 standard, we lose precision in this computation.
How are client-side calculations done in e-commerce applications to maintain precision?
For e-commerce applications and financial calculations, you should scale your decimal values up by a consistent multiplier and represent all monetary values as integers. This avoids the pitfalls of floating-point math. JavaScript has only the floating-point data type for numeric values, but fortunately integer math within that type is exact. Converting monetary values to integers (e.g., 2550 cents instead of 25.50 dollars) therefore resolves the issue.
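A minimal sketch of that approach, assuming prices arrive as decimal dollar amounts (the helper names are illustrative):

// Convert to integer cents once at the boundary; Math.round guards against
// representation noise (e.g., 1.1 * 100 evaluates to 110.00000000000001, not 110).
const toCents = dollars => Math.round(dollars * 100);
const toDollars = cents => cents / 100;

const price = toCents(0.7);             // 70
const discountPrice = toCents(0.6);     // 60
const netPrice = price - discountPrice; // 10 -- exact integer arithmetic

console.log(toDollars(netPrice)); // 0.1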
I do it like this in my project. You can just use .toFixed():
let price = 0.7; // $
let discountPrice = 0.6; // $
let netPrice = (price - discountPrice).toFixed(1);
console.log(netPrice); // "0.1" (note that .toFixed() returns a string)
Also check my answer here: https://stackoverflow.com/a/50056569/631803
Math.round() will not work correctly, so go with .toFixed().
I have 2 suggestions:
1. Use the toPrecision() function to round the float to a given number of significant digits
If you don't want big changes to your code, you can try this function:
function truncate(number) {
  // Note: toPrecision(2) keeps 2 significant digits and returns a string.
  return parseFloat(number).toPrecision(2);
}
2. Always use the smallest unit of the currency for e-commerce calculations.
E.g., 0.7 dollars = 70 cents and 0.6 dollars = 60 cents.
So 0.7 dollars - 0.6 dollars = 70 cents - 60 cents = 10 cents, which gives you the precise value.
This relates to a problem with how JavaScript handles large (floating-point) numbers.
What is JavaScript's highest integer value that a number can go to without losing precision? refers to the highest such number; I am after a way to bypass that limit when getting the min and max in the example below.
var lowest = Math.min(131472982990263674, 131472982995395415);
console.log(lowest);
Will return:
131472982990263680
To get the min and max values, would I need to write a function of my own, or is there a way to get it working with the Math.min and Math.max functions?
The closest solution I've found was this, but I couldn't manage to get it working, as I can't use the BigInt function since it isn't exposed in my version.
Large numbers erroneously rounded in JavaScript
You can try to convert the numbers to BigInt:
const n1 = BigInt("131472982990263674"); // or const n1 = 131472982990263674n;
const n2 = BigInt("131472982995395415"); // or const n1 = 131472982995395415n;
Then find the min and max using this post:
const n1 = BigInt("131472982990263674");
const n2 = BigInt("131472982995395415");
function BigIntMinAndMax(...args) {
  return args.reduce(([min, max], e) => {
    return [
      e < min ? e : min,
      e > max ? e : max,
    ];
  }, [args[0], args[0]]);
}
const [min, max] = BigIntMinAndMax(n1, n2);
console.log(`Min: ${min}`);
console.log(`Max: ${max}`);
Math.min and Math.max are working just fine.
When you write 131472982990263674 in a JavaScript program, it is rounded to the nearest IEEE 754 binary64 floating-point number, which is 131472982990263680.
Written in hexadecimal or binary, you can see that 131472982990263674 = 0x1d315fb40b2157a = 0b111010011000101011111101101000000101100100001010101111010 takes 56 bits of precision to represent.
If you round that to the nearest number with only 53 bits of precision, what you get is 0b111010011000101011111101101000000101100100001010110000000 = 0x1d315fb40b21580 = 131472982990263680, in which the low-order bits beyond the 53rd have been rounded away to zero.
Similarly, when you write 131472982995395415 in JavaScript, what you get back is 131472982995395410.
So when you write the code Math.min(131472982990263674, 131472982995395415), you pass the numbers 131472982990263680 and 131472982995395410 into the Math.min function.
Given that, it should come as no surprise that Math.min returns 131472982990263680.
> 131472982990263674
131472982990263680
> 131472982995395415
131472982995395410
> Math.min(131472982990263674, 131472982995395415)
131472982990263680
It's not clear what your original goal is.
Are you given two JavaScript numbers to begin with, and are you trying to find the min or max?
If so, Math.min and Math.max are the right thing.
Are you given two strings, and are you trying to order them by the numbers they represent?
If so, it depends on the notation you want to support.
If you only want to support decimal notation for nonnegative integers (with no scientific notation, like 123e4), then you can chop leading zeros and compare: of two numerals with different lengths, the longer one denotes the larger number, and numerals of equal length can be compared lexicographically with < or > in JavaScript.
> function strmin(x, y) { return x < y ? x : y }
> strmin("131472982990263674", "131472982995395415")
'131472982990263674'
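If the strings can have different lengths, here is a minimal sketch of a comparator that handles that case by comparing lengths first (the function name is illustrative):

function compareDecimalStrings(x, y) {
  // Strip leading zeros so "007" and "7" are treated as equal.
  const a = x.replace(/^0+(?=\d)/, '');
  const b = y.replace(/^0+(?=\d)/, '');
  // A longer decimal numeral denotes a larger integer;
  // equal lengths can be compared lexicographically.
  if (a.length !== b.length) return a.length < b.length ? -1 : 1;
  return a < b ? -1 : a > b ? 1 : 0;
}

console.log(compareDecimalStrings("131472982990263674", "131472982995395415")); // -1
console.log(compareDecimalStrings("9", "10")); // -1, where plain lexicographic order would say "9" > "10"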
If you want to support arbitrary-precision decimal notation (including non-integers and perhaps scientific notation), and you want to maintain distinctions between, for instance, 1.00000000000000001 and 1.00000000000000002, then you probably want a general arbitrary-precision decimal arithmetic library.
Are you trying to do arithmetic with integers in a range that might exceed 2⁵³, and need the computation to be exact, requiring >53 bits of precision?
If so, you may need some kind of wider-precision or arbitrary-precision arithmetic than JavaScript numbers alone provide, such as BigInt, which was recently added to JavaScript.
If you only need a little more than 53 bits of precision, as is often the case inside numerical algorithms for transcendental elementary functions, there's also T.J. Dekker's algorithm for extending (say) binary64 arithmetic into double-binary64 or “double-double” arithmetic: a double-binary64 number is the sum 𝑥 + 𝑦 of two binary64 floating-point numbers 𝑥 and 𝑦, where typically 𝑥 holds the higher-order bits and 𝑦 holds the lower-order bits so together they can store 106 bits of precision.
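For a flavor of how such extended precision works, here is a minimal sketch of Knuth's TwoSum, the standard building block of double-double arithmetic; it recovers the exact rounding error of a single addition:

function twoSum(a, b) {
  // s is the rounded floating-point sum; e is the exact error term,
  // so that a + b === s + e in exact (real-number) arithmetic.
  const s = a + b;
  const aApprox = s - b;
  const bApprox = s - aApprox;
  const e = (a - aApprox) + (b - bApprox);
  return [s, e];
}

const [s, e] = twoSum(0.1, 0.2);
console.log(s); // 0.30000000000000004
console.log(e); // -2.7755575615628914e-17 (the part the rounded sum lost)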
In the interest of creating cross-platform code, I'd like to develop a simple financial application in JavaScript. The calculations required involve compound interest and relatively long decimal numbers. I'd like to know what mistakes to avoid when using JavaScript to do this type of math—if it is possible at all!
You should probably scale your decimal values by 100, and represent all the monetary values in whole cents. This is to avoid problems with floating-point logic and arithmetic. There is no decimal data type in JavaScript - the only numeric data type is floating-point. Therefore it is generally recommended to handle money as 2550 cents instead of 25.50 dollars.
Consider that in JavaScript:
var result = 1.0 + 2.0; // (result === 3.0) returns true
But:
var result = 0.1 + 0.2; // (result === 0.3) returns false
The expression 0.1 + 0.2 === 0.3 returns false, but fortunately integer arithmetic in floating-point is exact, so decimal representation errors can be avoided by scaling.¹
Note that while the set of real numbers is infinite, only a finite number of them (18,437,736,874,454,810,627 to be exact) can be represented exactly by the JavaScript floating-point format. Therefore the representation of the other numbers will be an approximation of the actual number.²
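As a sketch of how this applies to compound interest, assuming annual compounding and rounding to whole cents after each period (the rounding policy here is an assumption; real applications must follow the rules of their jurisdiction):

// Principal in integer cents, annual rate as a fraction, term in whole years.
function compoundInterestCents(principalCents, annualRate, years) {
  let balance = principalCents;
  for (let i = 0; i < years; i++) {
    // Round back to whole cents after each compounding period.
    balance = Math.round(balance * (1 + annualRate));
  }
  return balance;
}

// $2550.00 at 5% compounded annually for 10 years:
const finalCents = compoundInterestCents(255000, 0.05, 10);
console.log((finalCents / 100).toFixed(2)); // "4153.71" (per-period rounding differs slightly from the unrounded 4153.68)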
¹ Douglas Crockford: JavaScript: The Good Parts: Appendix A - Awful Parts (page 105).
² David Flanagan: JavaScript: The Definitive Guide, Fourth Edition: 3.1.3 Floating-Point Literals (page 31).
Scaling every value by 100 is the solution. Doing it by hand is probably unnecessary, since there are libraries that do it for you. I recommend moneysafe, which offers a functional API well suited to ES6 applications:
const { in$, $ } = require('moneysafe');
console.log(in$($(10.5) + $(.3))); // 10.8
https://github.com/ericelliott/moneysafe
Works both in Node.js and the browser.
There's no such thing as a perfectly "precise" financial calculation just because you keep only two decimal fraction digits; that's a more general problem.
In JavaScript, you can scale every value by 100 and use Math.round() every time a fraction can occur.
You could use an object to store the numbers and include the rounding in its prototype's valueOf() method, like this:
var Money = function(amount) {
  this.amount = amount;
};

// valueOf() is consulted whenever the object is used in a numeric context,
// so arithmetic on Money objects automatically uses the rounded value.
Money.prototype.valueOf = function() {
  return Math.round(this.amount * 100) / 100;
};

var m = new Money(50.42355446);
var n = new Money(30.342141);

console.log(m.amount + n.amount); // 80.76569546
console.log(m + n); // 80.76
That way, every time you use a Money object in a numeric context, it will be represented as rounded to two decimals. The unrounded value is still accessible via m.amount.
You can build your own rounding algorithm into Money.prototype.valueOf(), if you like.
Unfortunately all of the answers so far ignore the fact that not all currencies have 100 sub-units (e.g., the cent is the sub-unit of the US dollar (USD)). Currencies like the Iraqi Dinar (IQD) have 1000 sub-units: an Iraqi Dinar has 1000 fils. The Japanese Yen (JPY) has no sub-units. So "multiply by 100 to do integer arithmetic" isn't always the correct answer.
Additionally for monetary calculations you also need to keep track of the currency. You can't add a US Dollar (USD) to an Indian Rupee (INR) (without first converting one to the other).
There are also limits on the largest amount that can be represented exactly as an integer in JavaScript's number type (see Number.MAX_SAFE_INTEGER).
In monetary calculations you also have to keep in mind that money has finite precision (typically 0-3 decimal places) and that rounding needs to be done in particular ways (e.g., "normal" rounding vs. banker's rounding; see the sketch below). The type of rounding to be performed might also vary by jurisdiction/currency.
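A minimal sketch of banker's rounding (round half to even) to a given number of decimal places; the function name and the tie tolerance are illustrative:

function bankersRound(value, decimals = 2) {
  const factor = 10 ** decimals;
  const scaled = value * factor;
  const floor = Math.floor(scaled);
  const diff = scaled - floor;
  // Treat values within a tiny tolerance of .5 as exact ties,
  // to absorb binary representation noise in the input.
  if (Math.abs(diff - 0.5) < 1e-9) {
    return (floor % 2 === 0 ? floor : floor + 1) / factor;
  }
  return Math.round(scaled) / factor;
}

console.log(bankersRound(2.125)); // 2.12 -- the tie goes to the even cent
console.log(bankersRound(2.135)); // 2.14
console.log(Math.round(2.125 * 100) / 100); // 2.13 -- "normal" half-up rounding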
How to handle money in javascript has a very good discussion of the relevant points.
In my searches I found the dinero.js library, which addresses many of the issues with monetary calculations. I haven't used it in a production system yet, so I can't give an informed opinion on it.
Use decimal.js. It's a very good library that solves a hard part of the problem.
Just use it for all your operations.
https://github.com/MikeMcl/decimal.js/
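A quick illustration of the idea (assuming decimal.js has been installed, e.g. via npm):

const Decimal = require('decimal.js');

console.log(0.7 - 0.6); // 0.09999999999999998
// Pass values as strings so they never go through binary floating point at all.
console.log(new Decimal('0.7').minus('0.6').toString()); // "0.1"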
Your problem stems from inaccuracy in floating-point calculations. If you just use rounding to solve this, you'll see larger errors when multiplying and dividing.
A solution is linked below; an explanation follows.
You'll need to think about the mathematics behind this to understand it. Real numbers like 1/3 cannot be represented with a finite decimal expansion (e.g., 0.333333333333333…). Likewise, some decimal numbers cannot be represented exactly in binary. For example, 0.1 cannot be represented in binary with a finite number of digits.
For more detailed description look here: http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html
Take a look at the solution implementation: http://floating-point-gui.de/languages/javascript/
Due to the binary nature of their encoding, some decimal numbers cannot be represented with perfect accuracy. For example
var money = 600.90;
var price = 200.30;
var total = price * 3;
// Outputs: false
console.log(money >= total);
// Outputs: 600.9000000000001
console.log(total);
If you need to use pure JavaScript, then you have to think about a solution for every calculation. For the code above, we can convert the decimals to whole integers:
var money = 60090;
var price = 20030;
var total = price * 3;
// Outputs: true
console.log(money >= total);
// Outputs: 60090
console.log(total);
Avoiding Problems with Decimal Math in JavaScript
There is a dedicated library for financial calculations with great documentation: Finance.js.
Use this code for currency calculations to round numbers to two decimal places.
<!DOCTYPE html>
<html>
<body>
<h1>JavaScript Variables</h1>
<p id="test1"></p>
<p id="test2"></p>
<p id="test3"></p>
<script>
function setDecimalPoint(num) {
  if (isNaN(parseFloat(num)))
    return 0;
  // Use a local variable; shadowing the global Number constructor is a bug waiting to happen.
  var value = parseFloat(num);
  var multiplicator = Math.pow(10, 2);
  value = parseFloat((value * multiplicator).toFixed(2));
  return Math.round(value) / multiplicator;
}
document.getElementById("test1").innerHTML = "Without our method O/P is: " + (655.93 * 9) / 100;
document.getElementById("test2").innerHTML = "Calculator O/P: 59.0337, Our value is: " + setDecimalPoint((655.93 * 9) / 100);
document.getElementById("test3").innerHTML = "Calculator O/P: 32888.175, Our value is: " + setDecimalPoint(756.05 * 43.5);
</script>
</body>
</html>
You can find a lot about floating-point precision errors and how to avoid them in JavaScript, for example "How to deal with floating point number precision in JavaScript?", which deals with the problem by just rounding the number to a fixed number of decimal places.
My problem is slightly different, I get numbers from the backend (some with the rounding error) and want to display it without the error.
Of course I could just round the number to a set number of decimal places with value.toFixed(X). The problem is, that the numbers can range from 0.000000001 to 1000000000, so I can never know for sure, how many decimal places are valid.
(See this Fiddle for my unfruitful attempts)
Code:
var a = 0.3;
var b = 0.1;
var c = a - b; // is 0.19999999999999998, is supposed to be 0.2
// c.toFixed(2) = 0.20
// c.toFixed(4) = 0.2000
// c.toFixed(6) = 0.200000
var d = 0.000003;
var e = 0.000002;
var f = d - e; // is 0.0000010000000000000002 is supposed to be 0.000001
// f.toFixed(2) = 0.00
// f.toFixed(4) = 0.0000
// f.toFixed(6) = 0.000001
var g = 0.0003;
var h = 0.0005;
var i = g + h; // is 0.0007999999999999999, is supposed to be 0.0008
// i.toFixed(2) = 0.00
// i.toFixed(4) = 0.0008
// i.toFixed(6) = 0.000800
My question now is: is there an algorithm that intelligently detects how many decimal places are reasonable and rounds the numbers accordingly?
When a decimal numeral is rounded to binary floating-point, there is no way to know, just from the result, what the original number was or how many significant digits it had. Infinitely many decimal numerals will round to the same result.
However, the rounding error is bounded. If it is known that the original number had at most a certain number of digits, then only decimal numerals with that number of digits are candidates. If only one of those candidates differs from the binary value by less than the maximum rounding error, then that one must be the original number.
If I recall correctly (I do not use JavaScript regularly), JavaScript uses IEEE-754 64-bit binary. For this format, it is known that any 15-digit decimal numeral may be converted to this binary floating-point format and back without error. Thus, if the original input was a decimal numeral with at most 15 significant digits, and it was converted to 64-bit binary floating-point (and no other operations were performed on it that could have introduced additional error), and you format the binary floating-point value as a 15-digit decimal numeral, you will have the original number.
The resulting decimal numeral may have trailing zeroes. It is not possible to know (from the binary floating-point value alone) whether those were in the original numeral.
A one-liner solution, thanks to Eric's answer:
const fixFloatingPoint = val => Number.parseFloat(val.toFixed(15))
fixFloatingPoint(0.3 - 0.1) // 0.2
fixFloatingPoint(0.000003 - 0.000002) // 0.000001
fixFloatingPoint(0.0003 + 0.0005) // 0.0008
In order to fix issues where:
0.3 - 0.1 => 0.19999999999999998
0.57 * 100 => 56.99999999999999
0.0003 - 0.0001 => 0.00019999999999999998
You can do something like:
const fixNumber = num => Number(num.toPrecision(15));
A few examples:
fixNumber(0.3 - 0.1) => 0.2
fixNumber(0.0003 - 0.0001) => 0.0002
fixNumber(0.57 * 100) => 57
I have always coded in Java and have recently started coding in JavaScript (Node.js, to be precise). One thing that's driving me crazy is the add operation on decimal numbers.
Consider the following code
var a=0.1, b=0.2, c=0.3;
var op1 = (a+b)+c;
var op2 = (b+c)+a;
To my amazement, I found out that op1 != op2! Logging op1 and op2 prints the following:
console.log(op1); // 0.6000000000000001
console.log(op2); // 0.6
This does not make sense. It looks like a bug to me, because JavaScript cannot simply ignore the rules of arithmetic. Could someone please explain why this happens?
This is a case of floating-point error.
The only fractional numbers that can be represented exactly by a floating-point number are those that can be written as a fraction whose denominator is a power of two.
For example, 0.5 can be represented exactly, because it can be written 1/2. 0.3 can't.
In this case, the rounding errors in your variables and expressions combined just right to produce an error in the first case, but not in the second case.
Neither value is represented exactly behind the scenes, but when you output it, JavaScript prints the shortest decimal string that identifies the value uniquely (up to 17 significant digits). In one case that string exposes the rounding error, and in the other it happens to match the value you expected.
The lesson you should learn is that you should never depend on a floating-point number having an exact value, especially after doing some arithmetic.
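For comparisons, a minimal sketch of a tolerance-based check instead of === (the name nearlyEqual and the tolerance choice are illustrative):

function nearlyEqual(a, b, relTol = Number.EPSILON * 4) {
  // Scale the tolerance to the magnitude of the operands.
  return Math.abs(a - b) <= relTol * Math.max(Math.abs(a), Math.abs(b));
}

console.log(0.6000000000000001 === 0.6);           // false
console.log(nearlyEqual(0.6000000000000001, 0.6)); // true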
That is not a JavaScript problem, but a standard problem in every programming language that uses floats. Your sample numbers 0.1, 0.2 and 0.3 cannot be represented as finite binary fractions, only as repeating ones, because their denominators contain the factor 5:
0.1 = 1/(2*5), and so on.
If you use only decimals whose denominators are powers of two (like a = 0.5, b = 0.25 and c = 0.125), everything is fine.
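A quick illustration of the difference:

// Power-of-two denominators are exact in binary, so association does not matter:
console.log((0.5 + 0.25) + 0.125 === 0.5 + (0.25 + 0.125)); // true
// Denominators with a factor of 5 repeat in binary, so rounding error creeps in:
console.log((0.1 + 0.2) + 0.3 === 0.1 + (0.2 + 0.3));       // false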
When working with floats you might want to set the precision on the numbers before you compare.
(0.1 + 0.2).toFixed(1) == 0.3
However, in a few cases toFixed doesn't behave the same way across browsers.
Here's a fix for toFixed.
http://bateru.com/news/2012/03/reimplementation-of-number-prototype-tofixed/
This happens because not all decimal numbers can be represented accurately in binary floating point.
Multiply all your numbers by some factor (for example, 100 if you're dealing with 2 decimal places).
Then do your arithmetic on the larger result.
After your arithmetic is finished, divide by the same factor you multiplied by in the beginning.
http://jsfiddle.net/SjU9d/
a = 10 * a;
b = 10 * b;
c = 10 * c;
op1 = (a+b)+c;
op2 = (b+c)+a;
op1 = op1 / 10;
op2 = op2 / 10;
So I am adding and subtracting floats in JavaScript, and I need to know how to always take the ceiling of any number that has more than 2 decimal places. For example:
3.19 = 3.19
3.191 = 3.20
3.00000001 = 3.01
num = Math.ceil(num * 100) / 100;
Though, due to the way floats are represented, you may not get a clean number with exactly two decimal places. For display purposes, always use num.toFixed(2).
Actually, I don't think you want to represent dollar amounts as floats, for the same reason cited by Box9:
for example, 0.1 * 3 != 0.3 in my browser. It's better to represent them as integers (e.g., cents).