This question already has answers here:
Possible duplicates:
Strange problem comparing floats in objective-C
Is JavaScript’s math broken?
1.265 * 10000 = 126499.99999999999 ?????
Closed 13 years ago.
After watching this I discovered that in JavaScript:
0.1 + 0.2 === 0.3
evaluates to false.
Is there a way to work around this?
The best and only answer I have found that provides accurate results is to use a Decimal library. The BigDecimal Java class has been ported to JavaScript; see my answer in this SO post.
Note: Scaling values will "treat" the issue but will not "cure" it.
How about
function isEqual(a, b) {
    var epsilon = 0.01; // tunable
    return Math.abs(a - b) < epsilon;
}
This is a problem inherent in binary numbers that hits all major programming languages.
Convert decimal 0.1 (1/10) to binary by hand - you'll find it has a repeating binary expansion and can't be represented exactly, just like trying to write 1/3 as a decimal.
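You can see that repeating expansion from JavaScript itself: toString(2) prints the binary digits of the value as it is actually stored. A small sketch:

console.log((0.1).toString(2));
// prints something like 0.000110011001100110011...01 - the 0011 pattern repeats
// until the 53-bit significand runs out and the last digits are rounded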
You should always compare floating point numbers using a small constant (normally called epsilon) that determines how much two numbers may differ and still be considered "equal".
Use fixed-point math (read: integers) to do math where you care about that kind of precision. Otherwise write a function that compares your numbers within a range that you can accept as being "close enough" to equal.
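A small sketch of such a "close enough" comparison, this time scaling the tolerance with the magnitude of the operands (it assumes Number.EPSILON, available since ES2015):

function nearlyEqual(a, b) {
    // Number.EPSILON is the gap between 1 and the next representable double;
    // scaling it by the larger operand gives a relative tolerance.
    return Math.abs(a - b) <= Number.EPSILON * Math.max(Math.abs(a), Math.abs(b));
}

console.log(nearlyEqual(0.1 + 0.2, 0.3)); // true
console.log(nearlyEqual(0.1, 0.2));       // false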
Just an idea: multiply all your values by 10000 (or some similarly big number, as long as it's more than your maximum number of decimals) before you compare them; that way they will be integers.
function padFloat( val ) {
    return val * 10000;
}
padFloat( 0.1 ) + padFloat( 0.2 ) === padFloat( 0.3 );
You can of course multiply each number like
10 * 0.1 + 10 * 0.2 === 10 * 0.3
which evaluates to true.
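One caveat with this kind of scaling (echoing the note above that scaling "treats" but doesn't "cure" the issue): the multiplication itself is still a floating point operation, so round the scaled values to true integers before comparing them. For example:

console.log(1.265 * 10000);             // 126499.99999999999 - not an integer
console.log(Math.round(1.265 * 10000)); // 126500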
Related
I am working with JS numbers and don't have much experience with them, so I would like to ask a few questions:
2.2932600144518896e+160
Is this a float or an integer? If it's a float, how can I round it to two decimals (to get 2.29)? And if it's an integer, I suppose it's a very large number, and then I have a different problem.
Thanks
Technically, as said in comments, this is a Number.
What you can do if you want the number (not its string representation):
var x = 2.2932600144518896e+160;
var magnitude = Math.floor(Math.log10(x)) + 1;
console.log(Math.round(x / Math.pow(10, magnitude - 3)) * Math.pow(10, magnitude - 3));
What's the problem with that? Floating point operations may not be precise, so digits other than 0 can still appear in the result.
To get a number that is really "rounded", you can only go through its string representation (but then you can't do any arithmetic with it).
JavaScript only has one Number type, so it is technically neither a float nor an integer.
However, this isn't really relevant, as the value (or rather its representation) is not specific to JavaScript and uses E-notation, which is a standard way to write very large or very small numbers.
Taking this into account, 2.2932600144518896e+160 is equivalent to 2.2932600144518896 * Math.pow(10, 160), i.e. approximately 229 followed by 158 zeroes - very flippin' big.
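If all you need is the rounded form, a minimal alternative sketch is toExponential, which rounds to a fixed number of digits after the decimal point and returns a string:

var x = 2.2932600144518896e+160;
console.log(x.toExponential(2)); // "2.29e+160"
// Converting back gives the nearest double to 2.29e+160; the exact rounding
// only survives in the string form.
console.log(Number(x.toExponential(2)));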
This question already has answers here:
Floating point representations seem to do integer arithmetic correctly - why?
(8 answers)
Closed 5 years ago.
So I have read this answer here:
Is floating point math broken?
That because every number in JS is a double-precision float, 0.1 + 0.2, for example, will NOT equal 0.3.
But I don't understand why it never happens with integers. Why does 1 + 2 always equal 3, etc.? It would seem that integers like 1 or 2, similarly to 0.1 and 0.2, don't have a perfect representation in binary64, so their math should also sometimes break, but that never happens.
Why is that?
but I don't understand why it never happens with integers?
It does; the integer just has to be really big before it hits the limits of the IEEE-754 format:
var a = 9007199254740992;
console.log(a); // 9007199254740992
var b = a + 1;
console.log(b); // still 9007199254740992
console.log(a == b); // true
Floating point formats such as IEEE-754 essentially describe the value with the following expression:
value := sign * mantissa * 2 ^ exponent
The mantissa is an integer of various sizes. For four-byte floating point, the mantissa is 24 bits and, for eight-byte floating point, it is 53 bits (52 stored bits plus an implicit leading bit). When the exponent is 0, the value of the expression is just sign * mantissa, which is why integers up to the width of the mantissa are represented exactly in JavaScript.
What seems to take most people by surprise is the base 2 exponent instead of base 10. We accept that, in base 10, the result of 1/3 or 2/3 cannot be exactly represented without an infinite number of digits or the acceptance of round-off error. Similarly, there are fractions in base 2 with the same issue. Unfortunately for our base 10 mindset, the fractions we write most often - negative powers of 10 such as 0.1 - are exactly the ones affected.
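A short sketch of where exact integer arithmetic stops (it assumes ES2015+ for Number.MAX_SAFE_INTEGER and Number.isSafeInteger):

console.log(Number.MAX_SAFE_INTEGER);                  // 9007199254740991, i.e. 2^53 - 1
console.log(Number.isSafeInteger(Math.pow(2, 53)));    // false
console.log(Math.pow(2, 53) + 1 === Math.pow(2, 53));  // true: the +1 is lost to rounding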
This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
The jQuery code below displays 1212.3456789123003 instead of 1212.3456789123 when the value 12.123456789123 is multiplied by 100.
Code:
<p class="price">12.123456789123</p>
<button>Calculate</button>
$(':button').click(function () {
    $('p.price').text(function (i, v) {
        return v * 100;
    });
    this.disabled = true;
});
Because of the non-exact nature of floating point values (this is not JavaScript's fault), you need to be more specific, i.e.:
$('p.price').text(function (i, v) {
    return (v * 100).toFixed(10);
});
Where .toFixed(10) determines the desired number of decimal places.
JavaScript has problems with floating point number precision.
If you want precise results in JS, for example when working with money, you need to use something like BigDecimal.
There are 12 digits in the decimal portion, so when 12.123456789123 is multiplied by 100, the result 1212.3456789123 can no longer hold all of those digits exactly, and the trailing places get filled with rounding error - that's all it is.
This is a rounding error. Don't use floating-point types for currency values; you're better off having the price be in terms of the smallest integral unit. It's quite unusual to have prices be in precise units like that. If you really need to use a floating-point type, then use 1e-12 * Math.round(1e14 * v).
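A minimal sketch of that smallest-integral-unit idea, assuming prices with at most two decimals (the helper names are made up for illustration):

function toCents(price) {        // "12.34" -> 1234
    return Math.round(Number(price) * 100);
}
function formatCents(cents) {    // 1235 -> "12.35"
    return (cents / 100).toFixed(2);
}

var total = toCents('12.34') + toCents('0.01'); // exact integer math: 1235
console.log(formatCents(total));                // "12.35"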
I have always coded in Java and have recently started coding in JavaScript (Node.js, to be precise). One thing that's driving me crazy is the add operation on decimal numbers.
Consider the following code
var a=0.1, b=0.2, c=0.3;
var op1 = (a+b)+c;
var op2 = (b+c)+a;
To my amazement I find that op1 != op2! Logging op1 and op2 prints out the following:
console.log(op1); // 0.6000000000000001
console.log(op2); // 0.6
This does not make sense. It looks like a bug to me, because JavaScript simply cannot ignore the rules of arithmetic. Could someone please explain why this happens?
This is a case of floating-point error.
The only fractional numbers that can be exactly represented by a floating-point number are those that can be written as an integer fraction with the denominator as a power of two.
For example, 0.5 can be represented exactly, because it can be written 1/2. 0.3 can't.
In this case, the rounding errors in your variables and expressions combined just right to produce an error in the first case, but not in the second case.
Neither result is represented exactly behind the scenes, but when you output a value it is rounded to a short decimal form (JavaScript prints the shortest string that maps back to the same double). In one case the arithmetic rounded to a slightly larger number, and in the other it rounded to exactly the expected value.
The lesson you should learn is that you should never depend on a floating-point number having an exact value, especially after doing some arithmetic.
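A small illustration of where the two orderings diverge: the intermediate sums round differently before the final addition (the values are taken from the question above).

console.log(0.1 + 0.2); // 0.30000000000000004 - 0.3 has no exact representation
console.log(0.2 + 0.3); // 0.5 - here the two representation errors happen to cancel
// So (0.1 + 0.2) + 0.3 starts from a value that already carries an error,
// while (0.2 + 0.3) + 0.1 does not:
console.log((0.1 + 0.2) + 0.3); // 0.6000000000000001
console.log((0.2 + 0.3) + 0.1); // 0.6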
That is not a JavaScript problem, but a standard problem in every programming language that uses floats. Your sample numbers 0.1, 0.2 and 0.3 cannot be represented as finite binary fractions, only as repeating ones, because their denominators contain the factor 5:
0.1 = 1/(2*5) and so on.
If you only use decimals whose denominators are powers of 2 (like a = 0.5, b = 0.25 and c = 0.125), everything is fine.
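For example, as a quick check of that claim:

console.log(0.5 + 0.25 + 0.125 === 0.875); // true - every term is a power of 1/2
console.log(0.1 + 0.2 === 0.3);            // false - the denominators contain the factor 5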
When working with floats you might want to set the precision on the numbers before you compare.
(0.1 + 0.2).toFixed(1) == 0.3
However, in a few cases toFixed doesn't behave the same way across browsers.
Here's a fix for toFixed.
http://bateru.com/news/2012/03/reimplementation-of-number-prototype-tofixed/
This happens because not all decimal numbers can be accurately represented in binary floating point.
Multiply all your numbers by some factor (for example, 100 if you're dealing with 2 decimal places).
Then do your arithmetic on the larger result.
After your arithmetic is finished, divide by the same factor you multiplied by in the beginning.
http://jsfiddle.net/SjU9d/
a = 10 * a;
b = 10 * b;
c = 10 * c;
op1 = (a+b)+c;
op2 = (b+c)+a;
op1 = op1 / 10;
op2 = op2 / 10;
This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
When I multiply 1.265 by 10000, I get 126499.99999999999 when using JavaScript.
Why is this so?
Floating point numbers can't handle decimals correctly in all cases. Check out
http://en.wikipedia.org/wiki/Floating-point_number#Accuracy_problems
http://www.mredkj.com/javascript/nfbasic2.html
You should be aware that all information in computers is stored in binary, and the expansion of a fraction varies with the base.
For instance, 1/3 in base 10 = 0.33333333333333333333333333..., while 1/3 in base 3 is equal to 0.1 and in base 2 is equal to 0.0101010101010101...
In case you don't have a complete understanding of how different bases work, here's an example:
The base 4 number 301.12 would be equal to 3 * 4^2 + 0 * 4^1 + 1 * 4^0 + 1 * 4^-1 + 2 * 4^-2 = 48 + 0 + 1 + 0.25 + 0.125 = 49.375 in base 10.
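As a quick check of that base 4 example in JavaScript (parseInt handles the integer digits; the fractional digits are added by hand):

var integerPart = parseInt('301', 4);    // 3*16 + 0*4 + 1 = 49
var fractionPart = 1 / 4 + 2 / 16;       // ".12" in base 4 = 0.375
console.log(integerPart + fractionPart); // 49.375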
Now, the problems with accuracy in floating point come from a limited number of bits in the significand. Floating point numbers have 3 parts to them: a sign bit, an exponent and a mantissa. JavaScript uses the 64-bit IEEE 754 standard, but to keep the calculation simpler we'll use the 32-bit format here, so 1.265 in floating point would be:
A sign bit of 0 (0 for positive, 1 for negative); an exponent of 0 (which with a 127 offset, i.e. exponent + offset, is 127 in unsigned binary: 01111111); and finally the significand of 1.265. The IEEE floating point standard makes use of a hidden 1, so the binary representation of 1.265 is 1.01000011110101110000101, and ignoring the leading 1 the stored mantissa is 01000011110101110000101.
So our final IEEE 754 single (32-bit) representation of 1.265 is:
Sign Bit (+)   Exponent (0)   Mantissa (1.265)
0              01111111       01000011110101110000101
Now 1000 would be:
Sign Bit (+)   Exponent (9)   Mantissa (1000)
0              10001000       11110100000000000000000
Now we have to multiply these two numbers. Floating point multiplication consists of re-adding the hidden 1 to both mantissas, multiplying the two mantissas, and adding the two exponents together (subtracting the offset once so it isn't counted twice). After this the mantissa has to be normalized again.
First 1.01000011110101110000101*1.11110100000000000000000=10.0111100001111111111111111000100000000000000000
(this multiplication is a pain)
Now obviously we have an exponent of 9 + an exponent of 0 so we keep 10001000 as our exponent, and our sign bit remains, so all that is left is normalization.
We need our mantissa to be of the form 1.xxxxx, so we have to shift it right once, which also means we have to increment our exponent, bringing it up to 10001001. Our mantissa is now normalized to 1.00111100001111111111111111000100000000000000000. It must be truncated to 23 bits, so we are left with 1.00111100001111111111111 (not including the leading 1, because it will be hidden in our final representation). So the final answer we are left with is:
Sign Bit (+)   Exponent (10)   Mantissa
0              10001001        00111100001111111111111
Finally, if we convert this answer back to decimal we get (+) 2^10 * (1 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-11 + 2^-12 + 2^-13 + 2^-14 + 2^-15 + 2^-16 + 2^-17 + 2^-18 + 2^-19 + 2^-20 + 2^-21 + 2^-22 + 2^-23) = 1264.99987792.
While I did simplify the problem by multiplying 1.265 by 1000 instead of 10000, and by using single precision floating point instead of double, the concept stays the same. You lose accuracy because the floating point representation only has so many bits in the mantissa with which to represent any given number.
Hope this helps.
It's a result of floating point representation error. Not all numbers that have a finite decimal representation have a finite binary floating point representation.
Have a read of this article. Essentially, computers and floating-point numbers do not go together perfectly!
On the other hand, 126500 IS equal to 126499.99999999.... :)
Just like 1 is equal to 0.99999999....
Because 1 = 3 * 1/3 = 3 * 0.333333... = 0.99999999....
Purely due to the inaccuracies of floating point representation.
You could try using Math.round:
var x = Math.round(1.265 * 10000);
These small errors are usually caused by the precision of the floating point representation used by the language. See this Wikipedia page for more information about the accuracy problems of floating point.
Here's a way to overcome your problem, although arguably not very pretty:
var correct = parseFloat((1.265*10000).toFixed(3));
// Here's a breakdown of the line of code:
var result = (1.265*10000);
var rounded = result.toFixed(3); // Gives a string representation with three decimals
var correct = parseFloat(rounded); // Convert the string back into a float (the float doesn't show trailing decimals)
If you need a solution, stop using floats or doubles and start using BigDecimal.
Check the BigDecimal implementation stz-ida.de/html/oss/js_bigdecimal.html.en
Even simple arithmetic on the MS JScript engine shows it:
WScript.Echo(1083.6 - 1023.6) gives 59.9999999