Wrong value after multiplication by 100 [duplicate] - javascript

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 8 years ago.
When the value 12.123456789123 is multiplied by 100, the jQuery code below displays 1212.3456789123003 instead of 1212.3456789123.
Code:
<p class="price">12.123456789123</p>
<button>Calculate</button>
$(':button').click(function () {
    $('p.price').text(function (i, v) {
        return v * 100;
    });
    this.disabled = true;
});

Because of the inexact nature of floating-point values (this is not JavaScript's fault), you need to format the result explicitly, i.e.:
$('p.price').text(function (i, v) {
    return (v * 100).toFixed(10);
});
Here .toFixed(10) sets how many digits appear after the decimal point (and note that it returns a string).
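A self-contained sketch of the same fix outside jQuery, using the value from the question:

```javascript
// toFixed returns a string formatted to the requested number of fraction digits.
const v = 12.123456789123;
console.log(v * 100);               // 1212.3456789123003 (floating-point noise)
console.log((v * 100).toFixed(10)); // "1212.3456789123"
```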

JavaScript has precision problems with floating-point numbers.
If you want precise results in JS, such as when you are working with money, you need to use something like a BigDecimal library.

There are 12 digits in the decimal portion of 12.123456789123. When it is multiplied by 100, the exact result 1212.3456789123 has only 10 decimal digits, and the extra trailing digits you see are rounding noise from the binary representation filling in the rest.

This is a rounding error. Don't use floating-point types for currency values; you're better off keeping the price in terms of the smallest integral unit (e.g. cents). It's quite unusual to have prices in units as precise as this. If you really need to use a floating-point type, then use 1e-12 * Math.round(1e14 * v).
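The suggested expression can be checked directly; the scale factors 1e14 and 1e-12 assume you want 12 decimal digits of the original price to survive:

```javascript
// Scale up, round to an integer, then scale back down to drop the noise.
const v = 12.123456789123;
const cleaned = 1e-12 * Math.round(1e14 * v);
console.log(cleaned); // close to 1212.3456789123, without the trailing ...003 noise
```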


Bug in javascript with multiply [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
Today I entered a random float number and multiplied it by one hundred, first in normal code and then in the console, since the code was giving me a wrong number; the console returned the same thing.
The given float number is: 1050.6
Therefore 1050.6 * 100 should be 105060, but JavaScript returns 105059.99999999999.
Anyone knows why?
JavaScript uses 64-bit floating point representation (double precision). Numbers are represented in this format as a whole number multiplied by a power of two.
This is based on the IEEE 754 standard
Rational numbers with a denominator that isn't a power of 2 can't be exactly represented. This is why floating point multiplication gives this result.
Source: https://en.wikipedia.org/wiki/IEEE_floating_point
If you want to get the intended value, there are two methods you can use.
Rounding with Math.round:
Math.round(1050.6 * 100)
Or toFixed (which returns a string):
(1050.6 * 100).toFixed(0)
It's a consequence of how computers handle floating-point numbers: decimal fractions can't always be represented exactly in binary, so the computer stores the closest approximation it can.
https://en.wikipedia.org/wiki/IEEE_floating_point
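The rounding fix from the answer above, runnable as-is:

```javascript
// The raw product carries the representation error; rounding recovers the integer.
const raw = 1050.6 * 100;
console.log(raw);             // 105059.99999999999
console.log(Math.round(raw)); // 105060
```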

Multiplication int and float(double) in JavaScript [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 9 years ago.
I want to make multiplication using JavaScript.
Multiplying 2 by 0.15 gives 0.3, but multiplying 3 by 0.15 gives 0.44999999999999996. I want to get 0.45 as the result. How can I do that in JavaScript?
It's a rounding error. You may want to round your number to a fixed amount of digits, like this:
parseFloat((your_number).toFixed(2));
Unfortunately this happens in any language that uses floating-point arithmetic. It's an artifact of encoding values in binary, performing the operations, and decoding the result back into the decimal form you'd expect.
Depending on what you want to do with the output, you can call a round()-like function to round to a number of decimal places or compare the output to a value using a (small) number called a tolerance. E.g. two numbers are considered equal if their absolute difference is less than this tolerance.
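Both approaches from this answer in a short sketch; the tolerance value 1e-9 is an arbitrary choice:

```javascript
// Option 1: round to a fixed number of decimals, then parse back to a number.
console.log(parseFloat((3 * 0.15).toFixed(2))); // 0.45
// Option 2: compare within a tolerance instead of with ===.
function nearlyEqual(a, b, tolerance = 1e-9) {
  return Math.abs(a - b) < tolerance;
}
console.log(3 * 0.15 === 0.45);           // false
console.log(nearlyEqual(3 * 0.15, 0.45)); // true
```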

How to get 12.6 with a=10.3 and b=2.3? [duplicate]

This question already has answers here:
How to deal with floating point number precision in JavaScript?
(47 answers)
Closed 8 years ago.
Tried :
var a=10.3;
var b=2.3;
alert(a+b);
but I get 12.600000000000001. I know JavaScript is loosely typed, but I hope I can do a sum :)
You can also use the toFixed() method:
var a=10.3;
var b=2.3;
alert((a+b).toFixed(1));
Works in Chrome.
Multiply to the precision you want, then round and divide by whatever you multiplied by:
var a=10.3;
var b=2.3;
alert(Math.round((a+b) * 10) / 10);
http://jsfiddle.net/DYKJB/3/
It's not about the typing but about the precision of floating-point types. You need to round for presentation.
Floating-point types are not a good choice if you need exact values. If you want to express currency values, express them as cents or use an appropriate library for this.
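The cents idea applied to this question's numbers (variable names are illustrative):

```javascript
// 10.30 and 2.30 stored as integer cents; integers up to 2**53 - 1 are exact.
const aCents = 1030;
const bCents = 230;
const sumCents = aCents + bCents;         // 1260, computed exactly
console.log((sumCents / 100).toFixed(2)); // "12.60"
```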

Comparing large numbers with javascript [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
Subtracting long numbers in javascript
Can anyone tell me how to compare large numbers in JavaScript?
Something like
var sla = 1263293940000;
var resp = 1263296389700;
if (sla > resp) {
    //do something
}
You might want to look into the BigInteger library.
Internally all JavaScript numbers are represented as double-precision floating-point numbers. As you've discovered, this causes rounding errors for very large numbers (and in other places). If you need more precision, you'll need to use a library like the one Alex posted.
return new Number(first)>new Number(second);
return ('12345678901234568.13') <= ('12345678901234568.12'); // note: this compares strings lexicographically, not numerically
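A note on the question's actual values: they are below Number.MAX_SAFE_INTEGER (2**53 - 1), so plain comparison is already exact for them. For integers beyond that, modern JavaScript (ES2020+) has BigInt, which compares exactly:

```javascript
// The question's timestamps fit comfortably in a double, so > is exact here.
const sla = 1263293940000;
const resp = 1263296389700;
console.log(sla > resp); // false
// Past 2**53 - 1, use BigInt literals (trailing n) for exact comparison.
console.log(12345678901234568913n > 12345678901234568912n); // true
```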

How to work around the decimal problem in JavaScript? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicates:
Strange problem comparing floats in objective-C
Is JavaScript’s math broken?
1.265 * 10000 = 126499.99999999999 ?????
After watching this I discovered that in JavaScript:
0.1 + 0.2 === 0.3
evaluates to false.
Is there a way to work around this?
The best and only answer I have found that provides accurate results is to use a Decimal library. The BigDecimal Java class has been ported to JavaScript; see my answer in this SO post.
Note: Scaling values will "treat" the issue but will not "cure" it.
How about
function isEqual(a, b) {
    var epsilon = 0.01; // tunable
    return Math.abs(a - b) < epsilon;
}
This is a problem inherent in binary numbers that hits all major programming languages.
Convert decimal 0.1 (1/10) to binary by hand - you'll find it has a repeating binary expansion and can't be represented exactly. It's like trying to represent 1/3 as a decimal.
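You can ask JavaScript to show the binary expansion it actually stored, which makes the repeating pattern visible:

```javascript
// toString(2) prints the stored double in binary; for 0.1 the 1100
// pattern repeats until the 53-bit significand runs out.
console.log((0.1).toString(2)); // 0.000110011001100110011...
```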
You should always compare floating-point numbers using a constant (normally called epsilon) that determines how much two numbers may differ and still be considered "equal".
Use fixed-point math (read: integers) to do math where you care about that kind of precision. Otherwise write a function that compares your numbers within a range that you can accept as being "close enough" to equal.
Just an idea: multiply all your values by 10000 (or a similarly large power of ten, as long as it's more than your maximum number of decimals) before you compare them, so that they become integers.
function padFloat( val ) {
    return Math.round( val * 10000 ); // round so the scaled value is an exact integer
}
padFloat( 0.1 ) + padFloat( 0.2 ) === padFloat( 0.3 );
You can of course multiply each number like
10 * 0.1 + 10 * 0.2 === 10 * 0.3
which evaluates to true.
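Scaling only helps when the scaled values land exactly on integers, which isn't guaranteed; rounding after scaling makes the trick reliable (a sketch, with an illustrative helper name):

```javascript
// Convert to integer hundredths with an explicit round, then compare.
function toHundredths(x) {
  return Math.round(x * 100);
}
console.log(toHundredths(0.1) + toHundredths(0.2) === toHundredths(0.3)); // true
```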
