Javascript floating point number addition different outputs [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 9 years ago.
Why does this not give the expected results?
console.log(0.2+0.1); // gives 0.30000000000000004
console.log(0.2+0.3); // gives 0.5
console.log(0.2+0.5); // gives 0.7
console.log(0.2+0.4); // gives 0.6000000000000001
Why do the first and last lines not give the expected results?

It’s because JavaScript uses the IEEE Standard for Binary Floating-Point Arithmetic.
All floating point math is like this and is based on the IEEE 754 standard. JavaScript uses 64-bit floating point representation, which is the same as Java's double.
This may be a duplicate of Is floating point math broken?
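You can see that even the "correct looking" results are built from approximations by asking for more digits than the default conversion prints. A quick console sketch using toPrecision:
console.log((0.2).toPrecision(21)); // 0.200000000000000011102
console.log((0.1).toPrecision(21)); // 0.100000000000000005551
console.log((0.5).toPrecision(21)); // 0.500000000000000000000 (0.5 is a power of 2, so it is exact)
0.2 + 0.3 prints 0.5 only because the rounding errors in the two inputs happen to cancel and the sum lands exactly on a representable value; 0.2 + 0.1 is not so lucky.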

Try .toFixed(). This method formats a number with a specific number of digits to the right of the decimal point (note that it returns a string, not a number).
console.log((0.2 + 0.1).toFixed(1)); // gives 0.3
console.log((0.2 + 0.3).toFixed(1)); // gives 0.5
console.log((0.2 + 0.5).toFixed(1)); // gives 0.7
console.log((0.2 + 0.4).toFixed(1)); // gives 0.6
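Since .toFixed() gives you a string back, convert the result if you need to keep calculating with it. A small sketch:
var sum = Number((0.2 + 0.1).toFixed(1));
console.log(sum);        // 0.3
console.log(typeof sum); // "number"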


Adding integer numbers javascript, double float [duplicate]

This question already has answers here:
Floating point representations seem to do integer arithmetic correctly - why?
(8 answers)
Closed 5 years ago.
So I have read this answer here:
Is floating point math broken?
It says that because every number in JS is a double-precision float, 0.1 + 0.2, for example, will NOT equal 0.3.
But I don't understand why this never happens with integers. Why does 1 + 2 always equal 3, etc.? It would seem that integers like 1 or 2, similarly to 0.1 and 0.2, don't have a perfect representation in binary64, so their math should also sometimes break, but that never happens.
Why is that?
but I don't understand why this never happens with integers?
It does; the integers just have to be really big before they hit the limits of the IEEE-754 format:
var a = 9007199254740992;
console.log(a); // 9007199254740992
var b = a + 1;
console.log(b); // still 9007199254740992
console.log(a == b); // true
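Since ES2015 that boundary is exposed directly on Number, so you don't need to remember the magic constant:
console.log(Number.MAX_SAFE_INTEGER);                // 9007199254740991, i.e. 2^53 - 1
console.log(Number.isSafeInteger(9007199254740991)); // true
console.log(Number.isSafeInteger(9007199254740992)); // false: a and a + 1 collide here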
Floating point formats such as IEEE-754 essentially describe a value with the following expression:
value := sign * mantissa * 2 ^ exponent
The mantissa is an integer of various sizes. For four-byte floating point the mantissa is 24 bits, and for eight-byte floating point it is 53 bits (both counts include the implicit leading 1). If the exponent is 0, the value of the expression is determined only by the sign and the mantissa. This is, in fact, how integers up to 2^53 are represented exactly by JavaScript.
What seems to take most people by surprise is the base-2 exponent instead of base 10. We accept that, in base 10, the result of 1/3 or 2/3 cannot be exactly represented without an infinite number of digits or the acceptance of round-off error. Similarly, there are fractions in base 2 with the same issue. Unfortunately for our base-10 mindset, they include the fractions we write most often: negative powers of 10 such as 0.1 and 0.2 have no finite base-2 expansion.
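You can watch this from the console: Number.prototype.toString accepts a radix, so printing a few fractions in base 2 shows which ones terminate:
console.log((0.5).toString(2));  // "0.1" - exact, since 0.5 = 2^-1
console.log((0.75).toString(2)); // "0.11" - exact, 2^-1 + 2^-2
console.log((0.1).toString(2));  // "0.0001100110011..." - the 0011 group repeats, so the stored value had to be rounded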

Bug in javascript with multiply [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
Today I entered a random float number and multiplied it by a hundred, first in normal code and then in the console because the code was giving me a wrong number; the console returned the same thing.
The given float number is: 1050.6
Therefore, 1050.6 * 100 should be 105060, but JavaScript returns 105059.99999999999.
Anyone knows why?
JavaScript uses 64-bit floating point representation (double precision). Numbers are represented in this format as a whole number multiplied by a power of two.
This is based on the IEEE 754 standard. Rational numbers whose denominator isn't a power of 2 can't be represented exactly, which is why this multiplication gives the result it does.
Source: https://en.wikipedia.org/wiki/IEEE_floating_point
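You can confirm the error is already in the stored operand rather than in the multiplication by printing more digits than the default conversion shows (the output below is what a 64-bit double should give; treat the trailing digits as illustrative):
console.log((1050.6).toPrecision(20)); // roughly 1050.5999999999999091 - the nearest double sits just below 1050.6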
If you want to get the expected value, there are two methods you can use:
Rounding with Math.round
Math.round(1050.6 * 100)
Or toFixed (which returns a string):
(1050.6 * 100).toFixed(0)
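Both written out, with the difference between them noted (Math.round returns a number, toFixed a string):
console.log(Math.round(1050.6 * 100));  // 105060 (a number)
console.log((1050.6 * 100).toFixed(0)); // "105060" (a string)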
It's a feature of how computers handle floating point numbers. It turns out that decimal numbers can't always be perfectly represented in binary so the computer gives the closest approximation it can.
https://en.wikipedia.org/wiki/IEEE_floating_point

JavaScript multiplication arithmetic [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 9 years ago.
I tried this expression in both my Firefox and Chrome consoles:
17.99 * 100;
Expected Result: 1799
Actual Result: 1798.9999999999998
I also tried:
parseInt(17.99 * 100);
Expected Result: 1799
Actual Result: 1798
Why is this happening, and how do I get the expected result?
Floating point arithmetic isn't exact. The reason is that, in memory, numbers are stored in binary, as sums of powers of two; a value that can't be expressed as a finite sum of powers of two loses some precision.
Your result, 1798.9999999999998, lost just enough precision that the multiplication didn't land on 1799, and parseInt truncates instead of rounding, which is why you got 1798.
http://en.wikipedia.org/wiki/IEEE_floating_point
Try this:
Math.round(17.99*100)
As the previous answer explained, multiplying floating point numbers is not exact; what you can do is expect a result within a certain precision range. Take a look at Number.prototype.toPrecision().
(17.99*100).toPrecision(4)
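Note that toPrecision, like toFixed, returns a string, so wrap it in Number() if you need a number for further arithmetic:
console.log((17.99 * 100).toPrecision(4));         // "1799" (a string)
console.log(Number((17.99 * 100).toPrecision(4))); // 1799 (a number)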

Multiplication int and float(double) in JavaScript [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 9 years ago.
I want to do a multiplication in JavaScript.
Multiplying 2 by 0.15 gives 0.3, but multiplying 3 by 0.15 gives 0.44999999999999996. I want to get 0.45 as the result. How can I do that in JavaScript?
It's a rounding error. You may want to round your number to a fixed number of digits, like this:
parseFloat((your_number).toFixed(2));
Unfortunately this happens in any language that uses floating point arithmetic. It's an artifact of the way decimal values are encoded into binary, operated on, and decoded back into decimal to report an answer in the form you expect.
Depending on what you want to do with the output, you can call a round()-like function to round to a number of decimal places, or compare the output to the expected value using a small number called a tolerance: two numbers are considered equal if their absolute difference is less than that tolerance, as the sketch below shows.
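A minimal sketch of the tolerance idea (the helper name approxEqual and the tolerance value here are illustrative choices, not a standard API):
// Treat two floats as equal when they differ by less than a tolerance
function approxEqual(a, b, tolerance) {
  return Math.abs(a - b) < (tolerance || 1e-9);
}
console.log(3 * 0.15 === 0.45);           // false
console.log(approxEqual(3 * 0.15, 0.45)); // true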

1.265 * 10000 = 126499.99999999999? [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
When I multiply 1.265 by 10000, I get 126499.99999999999 when using JavaScript.
Why is this so?
Floating point numbers can't represent every decimal fraction exactly. Check out
http://en.wikipedia.org/wiki/Floating-point_number#Accuracy_problems
http://www.mredkj.com/javascript/nfbasic2.html
You should be aware that all information in computers is stored in binary, and that the expansion of a fraction depends on the base.
For instance, 1/3 in base 10 is 0.333333..., while 1/3 in base 3 is 0.1, and in base 2 it is 0.010101....
In case you don't have a complete understanding of how different bases work, here's an example:
The base-4 number 301.12 is equal to 3 * 4^2 + 0 * 4^1 + 1 * 4^0 + 1 * 4^-1 + 2 * 4^-2 = 48 + 0 + 1 + 0.25 + 0.125 = 49.375 in base 10.
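You can check that expansion directly in JavaScript, since it is just each digit multiplied by a power of the base:
// The base-4 digits of 301.12, weighted by powers of 4
var value = 3 * Math.pow(4, 2) + 0 * Math.pow(4, 1) + 1 * Math.pow(4, 0)
          + 1 * Math.pow(4, -1) + 2 * Math.pow(4, -2);
console.log(value); // 49.375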
Now, the accuracy problems in floating point come from the limited number of bits in the significand. Floating point numbers have three parts: a sign bit, an exponent, and a mantissa. JavaScript uses the 64-bit IEEE 754 standard, but for a simpler calculation we'll use 32-bit here. So 1.265 in floating point is built as follows:
A sign bit of 0 (0 for positive, 1 for negative); an exponent of 0 (which, with the 127 offset, i.e. exponent + offset, is 127, or 01111111 in unsigned binary); and finally the significand, 1.265. The IEEE floating point standard uses a hidden-1 representation, so the binary representation of 1.265 is 1.01000011110101110000101, and dropping the leading 1 leaves 01000011110101110000101.
So our final IEEE 754 single (32-bit) representation of 1.265 is:
Sign Bit (+) Exponent (0) Mantissa (1.265)
0 01111111 01000011110101110000101
Now 1000 would be:
Sign Bit (+) Exponent (9) Mantissa (1000)
0 10001000 11110100000000000000000
Now we have to multiply these two numbers. Floating point multiplication consists of re-adding the hidden 1 to both mantissas, multiplying the two mantissas, and adding the two exponents together (after subtracting the offset from each). Afterwards the mantissa has to be normalized again.
First 1.01000011110101110000101*1.11110100000000000000000=10.0111100001111111111111111000100000000000000000
(this multiplication is a pain)
Now obviously we have an exponent of 9 + an exponent of 0 so we keep 10001000 as our exponent, and our sign bit remains, so all that is left is normalization.
We need our mantissa to be of the form 1.xxxxxx, so we have to shift it right once, which also means incrementing the exponent, bringing it up to 10001001. Our mantissa is now normalized to 1.00111100001111111111111111000100000000000000000. It must be truncated to 23 bits, so we are left with 1.00111100001111111111111 (not including the leading 1, because it will be hidden in the final representation). So the final answer we are left with is
Sign Bit (+) Exponent(10) Mantissa
0 10001001 00111100001111111111111
Finally, if we convert this answer back to decimal, we get (+) 2^10 * (1 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-11 + 2^-12 + 2^-13 + 2^-14 + 2^-15 + 2^-16 + 2^-17 + 2^-18 + 2^-19 + 2^-20 + 2^-21 + 2^-22 + 2^-23) = 1264.99987792
While I simplified the problem by multiplying 1.265 by 1000 instead of 10000, and by using single precision floating point instead of double, the concept stays the same. You lose accuracy because the floating point representation only has so many bits in the mantissa with which to represent any given number.
Hope this helps.
It's a result of floating point representation error. Not all numbers that have finite decimal representation have a finite binary floating point representation.
Have a read of this article. Essentially, computers and floating-point numbers do not go together perfectly!
On the other hand, 126500 IS equal to 126499.99999999.... :)
Just like 1 is equal to 0.99999999....
Because 1 = 3 * 1/3 = 3 * 0.333333... = 0.99999999....
Purely due to the inaccuracies of floating point representation.
You could try using Math.round:
var x = Math.round(1.265 * 10000);
These small errors are usually caused by the limited precision of the floating point numbers the language uses. See this wikipedia page for more information about the accuracy problems of floating points.
Here's a way to overcome your problem, although arguably not very pretty:
var correct = parseFloat((1.265*10000).toFixed(3));
// Here's a breakdown of the line of code:
var result = (1.265*10000);
var rounded = result.toFixed(3); // Gives a string representation with three decimals
var correct = parseFloat(rounded); // Convert the string back into a float (the trailing zero decimals are dropped)
If you need exact results, stop using floats or doubles and start using BigDecimal.
Check the BigDecimal implementation at stz-ida.de/html/oss/js_bigdecimal.html.en
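If all you need is a fixed number of decimal places (money being the usual case), a lighter alternative to a full BigDecimal library is to do the arithmetic in scaled integers, which are exact up to 2^53. A sketch, assuming one decimal place of precision:
// Keep values in tenths as integers; divide only for display
var aTenths = 10836; // 1083.6 scaled by 10
var bTenths = 10236; // 1023.6 scaled by 10
console.log((aTenths - bTenths) / 10); // 60, exact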
Even plain arithmetic on the MS JScript engine shows it:
WScript.Echo(1083.6 - 1023.6) gives 59.9999999
