JavaScript division issue

I am surprised by the problem below. I have two values, in text boxes T1 and T2, which I divide to get an amount. I am not getting the exact amount; instead I get a long fraction ending in 00000001.
Example:
var t1=5623.52;
var t2=56.2352;
var t3=5623.52/56.2352; //100.0000000001
Note: I can't round the values, since they are exchange rates and vary according to currency.

This is caused by the limited precision of floating point values. See The Floating Point Guide for full details.
The short version is that the 0.52 fractional part of your numbers cannot be represented exactly in binary, just as 1/3 cannot be represented exactly in decimal. Because the number of digits of accuracy is limited, the two values are rounded at different absolute magnitudes, so the larger one ends up not being exactly 100 times the smaller one.
If that doesn't make sense, imagine you are dealing with thirds, and pretend that numbers are represented as decimals, to ten decimal places. If you declare:
var t1 = 1000.0 / 3.0;
var t2 = 10.0 / 3.0;
Then t2 is represented as 3.3333333333, which is as close as can be represented with the given precision. Something that is 100 times as large as t2 would be 333.3333333300, but t1 is actually represented as 333.3333333333. It is not exactly 100 times t2, due to rounding/truncation being applied at different points for the different numbers.
The fix, as always with floating-point rounding issues, is to use decimal types instead. Have a look at the Javascript cheat-sheet on the aforementioned guide for ways to go about this.
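One common workaround along those lines (a minimal sketch, not taken from the guide; SCALE and divideExact are illustrative names, and it assumes the inputs have at most four decimal places) is to scale both operands to integers before dividing:
var SCALE = 10000;
function divideExact(a, b) {
  // Math.round repairs the tiny binary error introduced by the scaling
  // multiplication, leaving true integers whose quotient is well-behaved.
  return Math.round(a * SCALE) / Math.round(b * SCALE);
}
console.log(divideExact(5623.52, 56.2352)); // 100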

Like Felix Kling said, don't use floating-point values.
Or use parseInt if you only want the integer part:
var t1=5623.52;
var t2=56.2352;
var t3=parseInt(t1/t2);

Related

A safe way to divide two floating point numbers?

What is the safest way to divide two IEEE 754 floating point numbers?
In my case the language is JavaScript, but I guess this isn't important. The goal is to avoid the normal floating point pitfalls.
I've read that one could use a "correction factor" (cf) (e.g. 10 raised to some power, for instance 10^10) like so:
(a * cf) / (b * cf)
But I'm not sure whether this makes any difference for division?
Incidentally, I've already looked at the other floating point posts on Stack Overflow and I've still not found a single post on how to divide two floating point numbers. If the answer is that there is no difference between the solutions for working around floating point issues when adding and when dividing, then just answer that please.
Edit:
I've been asked in the comments which pitfalls I'm referring to, so I thought I'd just add a quick note here as well for the people who don't read the comments:
When adding 0.1 and 0.2, you would expect to get 0.3, but with floating point arithmetic you get 0.30000000000000004 (at least in JavaScript). This is just one example of a common pitfall.
The above issue is discussed many times here on Stack Overflow, but I don't know what can happen when dividing, and whether it differs from the pitfalls found when adding or multiplying. It might be that there are no risks, in which case that would be a perfectly good answer.
The safest way is to simply divide them. Any prescaling will either do nothing, or increase rounding error, or cause overflow or underflow.
If you prescale by a power of two you may cause overflow or underflow, but will otherwise make no difference in the result.
If you prescale by any other number, you will introduce additional rounding steps on the multiplications, which may lead to increased rounding error on the division result.
If you simply divide, the result will be the closest representable number to the ratio of the two inputs.
IEEE 754 64-bit floating point numbers are incredibly precise. A difference in one part in almost 10^16 can be represented.
There are a few operations, such as floor and exact comparison, that make even extremely low significance bits matter. If you have been reading about floating point pitfalls you should have already seen examples. Avoid those. Round your output to an appropriate number of decimal places. Be careful adding numbers of very different magnitude.
The following program demonstrates the effects of using each power of 10 from 10 through 1e20 as scale factor. Most get the same result as not multiplying, 6.0, which is also the rational number arithmetic result. Some get a slightly larger result.
You can experiment with different division problems by changing the initializers for a and b. The program prints their exact values, after rounding to double.
import java.math.BigDecimal;

public class Test {
    public static void main(String[] args) {
        double mult = 10;
        double a = 2;
        double b = 1.0 / 3.0;
        System.out.println("a=" + new BigDecimal(a));
        System.out.println("b=" + new BigDecimal(b));
        System.out.println("No multiplier result=" + (a / b));
        for (int i = 0; i < 20; i++) {
            System.out.println("mult=" + mult + " result=" + ((a * mult) / (b * mult)));
            mult *= 10;
        }
    }
}
Output:
a=2
b=0.333333333333333314829616256247390992939472198486328125
No multiplier result=6.0
mult=10.0 result=6.000000000000001
mult=100.0 result=6.000000000000001
mult=1000.0 result=6.0
mult=10000.0 result=6.000000000000001
mult=100000.0 result=6.000000000000001
mult=1000000.0 result=6.0
mult=1.0E7 result=6.000000000000001
mult=1.0E8 result=6.0
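Since the question is about JavaScript, here is a rendition of the same experiment in JavaScript (a sketch, not part of the original answer; both languages use IEEE 754 doubles, so the computed values are identical):
var a = 2;
var b = 1 / 3;
console.log("No multiplier result=" + (a / b)); // 6
for (var mult = 10, i = 0; i < 20; i++, mult *= 10) {
  // Powers of ten other than those dividing out exactly perturb the quotient.
  console.log("mult=" + mult + " result=" + ((a * mult) / (b * mult)));
}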
Floating-point division produces exactly the same "pitfalls" as addition or multiplication, and no amount of pre-scaling will fix it: the end result is the end result, and it is its internal IEEE 754 representation that causes the "problem".
The solution is to forget about these precision issues during the calculations themselves and to round as late as possible, i.e. only when displaying the results, at the point where the number is converted to a string using the .toFixed() method provided for exactly that purpose.
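For example (a minimal sketch of the "round only for display" approach):
var total = 0.1 + 0.2;         // stored internally as 0.30000000000000004
console.log(total.toFixed(2)); // "0.30" — rounding applied only at display time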
.toFixed() is not a good solution for dividing float numbers.
In JavaScript, try 4.11 / 100 and you will be surprised:
4.11 / 100 = 0.041100000000000005
Not all browsers get the same results.
The right solution is to convert the floats to integers:
parseInt(4.11 * Math.pow(10, 10)) / (100 * Math.pow(10, 10)) = 0.0411

Would this be a reliable way to deal with rounding errors when multiplying floating point numbers in Javascript?

I am adding client-side sub-total calculations to my order page, so that the volume discount will show as the user makes selections.
I am finding that some of the calculations are off by one cent here or there. This wouldn't be a very big deal except for the fact that the total doesn't match the final total calculated server-side (in PHP).
I know that the rounding errors are an expected result when dealing with floating point numbers. For example, 149.95 * 0.15 = 22.492499999999996 and 149.95 * 0.30 = 44.98499999999999. The former rounds as desired, the latter does not.
I've searched on this topic and found a variety of discussions, but nothing that satisfactorily addresses the problem.
My current calculation is as follows:
discount = Math.round(price * factor * 100) / 100;
A common suggestion is to work in cents rather than fractions of dollars. However, this would require me to convert my starting numbers, round them, multiply them, round the result, and then convert it back.
Essentially:
discount = Math.round(Math.round(price * 100) * Math.round(factor * 100) / 100) / 100;
I was thinking of adding 0.0001 to the number before rounding. For example:
discount = Math.round(price * factor * 100 + 0.0001) / 100;
This works for the scenarios I've tried, but I am wondering about my logic. Will adding 0.0001 always be enough, and never too much, to force the desired rounding result?
Note: For my purposes here, I am only concerned with a single calculation per price (so not compounding the errors) and will never be displaying more than two decimal places.
EDIT: For example, I want to round the result of 149.95 * 0.30 to two decimal places and get 44.99. However, I get 44.98 because the actual result is 44.98499999999999 not 44.985. The error is not being introduced by the / 100. It is happening before that.
Test:
alert(149.95 * 0.30); // yields 44.98499999999999
Thus:
alert(Math.round(149.95 * 0.30 * 100) / 100); // yields 44.98
The 44.98 is expected considering the actual result of the multiplication, but not desired since it is not what a user would expect (and differs from the PHP result).
Solution: I'm going to convert everything to integers to do my calculations. As the accepted answer points out, I can simplify my original conversion calculation somewhat. My idea of adding the 0.0001 is just a dirty hack. Best to use the right tool for the job.
I don't think adding a small amount will work in your favor; I suspect there are cases where it is too much. It would also need to be properly documented, since otherwise it could be mistaken for an error.
working in cents […] would require me to convert my starting numbers, round them, multiply them, round the result, and then convert it back:
discount = Math.round(Math.round(price * 100) * Math.round(factor * 100) / 100) / 100;
I think it should work just as well to round only afterwards. However, you should first multiply the result so that the number of significant decimal places is the sum of the two from before, i.e. 2 + 2 = 4 decimal places in your example:
discount = Math.round(Math.round( (price * factor) * 10000) / 100) / 100;
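As a quick check against the failing case from the question (a sketch using the question's variable names):
var price = 149.95, factor = 0.30;
var discount = Math.round(Math.round((price * factor) * 10000) / 100) / 100;
console.log(discount); // 44.99, as desired (plain rounding gave 44.98)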
Adding a small amount to your numbers will not be very accurate. You can try using a library to get better results: https://github.com/jtobey/javascript-bignum.
Bergi’s answer shows a solution. This answer shows a mathematical demonstration that it is correct. In the process, it also establishes some bound on how much error in the input is tolerable.
Your problem is this:
You have a floating-point number, x, which already contains rounding errors. E.g., it is intended to represent 149.95 but actually contains 149.94999999999998863131622783839702606201171875.
You want to multiply this floating-point number x by a discount value d.
You want to know the result of the multiplication to the nearest penny, performed as if ideal mathematics were used with no errors.
Suppose we add a few more assumptions:
x always represents some exact number of cents. That is, it represents a number that has an exact number of hundredths, such as 149.95.
The error in x is small, less than, say, .00004.
The discount value d represents an integer percentage (that is, also an exact number of hundredths, such as .25 for 25%) and is in the interval [0%, 100%].
The error in d is tiny, always the result of correctly converting a decimal numeral with two digits after the decimal point to double-precision (64-bit) binary floating point.
Consider the value x*d*10000. Ideally, this would be an integer, since x and d are each ideally multiples of .01, so multiplying the ideal product of x and d by 10,000 produces an integer. Since the errors in x and d are small, then rounding x*d*10000 to an integer will produce that ideal integer. E.g., instead of the ideal x and d, we have x and d plus small errors, x+e0 and d+e1, and we are computing (x+e0)•(d+e1)•10000 = (x•d+x•e1+d•e0+e0•e1)•10000. We have assumed that e1 is tiny, so the dominant error is d•e0•10000. We assumed e0, the error in x, is less than .00004, and d is at most 1 (100%), so d•e0•10000 is less than .4. This error, plus the tiny errors from e1, are not enough to change the rounding of x*d*10000 from the ideal integer to some other integer. (This is because the error must be at least .5 to change how a result that should be an integer rounds. E.g., 3 plus an error of .5 would round to 4, but 3 plus .49999 would not.)
Thus, Math.round(x*d*10000) produces the integer desired. Then Math.round(x*d*10000)/100 is an approximation of x*d*100 that is accurate to much less than one cent, so rounding it, with Math.round(Math.round(x*d*10000)/100) produces exactly the number of cents desired. Finally, dividing that by 100 (to produce a number of dollars, with hundredths, instead of a number of cents, as an integer) produces a new rounding error, but the error is so small that, when the resulting value is correctly converted to decimal with two decimal digits, the correct value is displayed. (If further arithmetic is performed with this value, that might not remain true.)
We can see from the above that, if the error in x grows to .00005, this calculation can fail. Suppose the value of an order might grow to $100,000. The floating-point error in representing a value around 100,000 is at most 100,000•2^−53. If somebody ordered one hundred thousand items with this error (they could not, since the items would have smaller individual prices than $100,000, so their errors would be smaller), and the prices were individually added up, performing one hundred thousand (minus one) additions adding one hundred thousand new errors, then we have almost two hundred thousand errors of at most 100,000•2^−53, so the total error is at most 2•10^5•10^5•2^−53, which is about .00000222. Therefore, this solution should work for normal orders.
Note that the solution requires reconsideration if the discount is not an integer percentage. For example, if the discount is stated as “one third” instead of 33%, then x*d*10000 is not expected to be an integer.

Cannot understand how the add operation works on decimal numbers in JavaScript

I have always been coding in Java and have recently started coding in JavaScript (Node.js, to be precise). One thing that's driving me crazy is the add operation on decimal numbers.
Consider the following code
var a=0.1, b=0.2, c=0.3;
var op1 = (a+b)+c;
var op2 = (b+c)+a;
To my amazement, I find that op1 != op2! Logging op1 and op2 prints the following:
console.log(op1); // 0.6000000000000001
console.log(op2); // 0.6
This does not make sense to me. It looks like a bug, because JavaScript cannot simply ignore the rules of arithmetic. Could someone please explain why this happens?
This is a case of floating-point error.
The only fractional numbers that can be exactly represented by a floating-point number are those that can be written as an integer fraction with the denominator as a power of two.
For example, 0.5 can be represented exactly, because it can be written 1/2. 0.3 can't.
In this case, the rounding errors in your variables and expressions combined just right to produce an error in the first case, but not in the second case.
Neither way is represented exactly behind the scenes, but when you output the value, it rounds it to 16 digits of precision. In one case, it rounded to a slightly larger number, and in the other, it rounded to the expected value.
The lesson you should learn is that you should never depend on a floating-point number having an exact value, especially after doing some arithmetic.
That is not a JavaScript problem, but a standard problem in every programming language that uses floats. Your sample numbers 0.1, 0.2 and 0.3 cannot be represented as finite binary fractions, only as repeating ones, because their denominators contain the factor 5:
0.1 = 1/(2*5) and so on.
If you use decimals whose denominators are powers of 2 only (like a=0.5, b=0.25 and c=0.125), everything is fine.
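For example (a small demonstration, not from the answer):
console.log(0.5 + 0.25 + 0.125);                             // 0.875, exact
console.log((0.5 + 0.25) + 0.125 === 0.5 + (0.25 + 0.125));  // true: exact values stay associative
console.log((0.1 + 0.2) + 0.3 === 0.1 + (0.2 + 0.3));        // false: the question's op1 != op2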
When working with floats you might want to set the precision on the numbers before you compare.
(0.1 + 0.2).toFixed(1) == 0.3
However, in a few cases toFixed doesn't behave the same way across browsers.
Here's a fix for toFixed.
http://bateru.com/news/2012/03/reimplementation-of-number-prototype-tofixed/
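An alternative that avoids string conversion entirely (a sketch, not from the answer) is to compare with a small tolerance:
function nearlyEqual(x, y) {
  // Number.EPSILON requires ES2015; this tolerance suits values near 1,
  // and should be scaled up for numbers of larger magnitude.
  return Math.abs(x - y) < Number.EPSILON * 4;
}
console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true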
This happens because not all floating point numbers can be accurately represented in binary.
Multiply all your numbers by some number (for example, 100 if you're dealing with 2 decimal points).
Then do your arithmetic on the larger result.
After your arithmetic is finished, divide by the same factor you multiplied by in the beginning.
http://jsfiddle.net/SjU9d/
// a, b, c as in the question:
var a = 0.1, b = 0.2, c = 0.3;

// Scale up, do the arithmetic on the scaled values, then scale back down:
a = 10 * a;
b = 10 * b;
c = 10 * c;
var op1 = (a + b) + c;
var op2 = (b + c) + a;
op1 = op1 / 10;
op2 = op2 / 10;

Javascript multiplying incorrectly, causing incorrect rounding

When I pull the values I want to multiply, they're strings. So I pull them, parse them as floats (to preserve the decimal places), and multiply them together.
LineTaxRate = parseFloat(myRate) * parseFloat(myQuantity) * parseFloat(myTaxRateRound);
This has worked for 99% of my invoices but I discovered one very odd problem.
When it multiplied: 78 * 7 * 0.0725
Javascript is returning: 39.584999999999994
When you normally do the math in a calculator its: 39.585
When all is said and done, I take that number and round it using .toFixed(2)
Because Javascript is returning that number, it's not rounding to the desired value of: $39.59
I tried Math.round() on the total, but I still get the same number.
I have thought of rounding the number to three decimals and then to two, but that seems hacky to me.
I have searched everywhere, and all I see is people mentioning that parseFloat loses precision and saying to use .toFixed; however, in the example above, that doesn't help.
Here is the test script I made to recreate the issue:
<script>
  var num1 = parseFloat("78");
  var num2 = parseFloat("7");
  var num3 = parseFloat("0.0725");
  var myTotal = num1 * num2 * num3;
  var result = Math.round(myTotal * 100) / 100;
  alert(myTotal);            // 39.584999999999994
  alert(myTotal.toFixed(2)); // 39.58
  alert(result);             // 39.58
</script>
Floating points are represented in binary, not decimal. Some decimal numbers will not be represented precisely. And unfortunately, since Javascript only has one Number class, it's not a very good tool for this job. Other languages have decent decimal libraries designed to avoid precisely this kind of error. You're going to have to either accept one-cent errors, implement a solution server-side, or work very hard to fix this.
Edit: ooh! You can do 78 * 7 * 725 and then divide by 10000, or to be even more precise, just put the decimal point in the right place. Basically, represent the tax rate as something other than a tiny fraction. Less convenient, but it will probably take care of your multiplication errors.
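A sketch of that suggestion (the scale factor 10000 matches the four decimal places of the rate):
var lineTaxRate = (78 * 7 * 725) / 10000; // 725 is 0.0725 with the decimal point removed
console.log(lineTaxRate);                 // 39.585
console.log(lineTaxRate.toFixed(2));      // "39.59"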
You might find the Accounting.js library useful for this. It has an "improved" toFixed() method.
JavaScript/TypeScript have only one Number type, which is not great. I had the same problem, as I am using TypeScript. I solved it by using the decimal.js-light library.
new Decimal(78).mul(7).mul(0.0725) returns 39.585, as expected.
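A more complete sketch (assuming decimal.js-light is installed via npm; passing the rate as a string avoids going through a binary double at all):
const Decimal = require("decimal.js-light");
const lineTaxRate = new Decimal(78).mul(7).mul("0.0725");
console.log(lineTaxRate.toString()); // "39.585"
console.log(lineTaxRate.toFixed(2)); // "39.59"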

Ceiling of a dollar amount in JavaScript

So I am adding and subtracting floats in JavaScript, and I need to know how to always take the ceiling of any number that has more than two decimal places. For example:
3.19 = 3.19
3.191 = 3.20
3.00000001 = 3.01
num = Math.ceil(num * 100) / 100;
Though, due to the way floats are represented, you may not get a clean number that's to two decimal places. For display purposes, always do num.toFixed(2).
Actually, I don't think you want to represent dollar amounts as floats, for the same reason cited by Box9.
For example, 0.1*3 != 0.3 in my browser. It's better to represent them as integers (e.g. cents).
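A minimal sketch of that integer-cents approach (toCents is an illustrative helper, not from the answer):
function toCents(dollars) {
  return Math.round(dollars * 100); // repair any representation error once, on input
}
var totalCents = toCents(3.19) + toCents(0.10) + toCents(0.10) + toCents(0.10);
console.log((totalCents / 100).toFixed(2)); // "3.49"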
