How can one avoid too many numbers after the decimal point? [duplicate] - javascript

This question already has answers here:
How to deal with floating point number precision in JavaScript?
(47 answers)
Closed 2 years ago.
The following code generates numbers and then divides them by 1000:
But sometimes numbers like 1.2295999999999998 are generated, even though 1229.6 is only being divided by 1000.
What do I need to change in the code so that this does not occur?
Z1 = document.getElementById("Z1").innerHTML = Math.floor((Math.random() * 99899 + 100)) / Math.pow(10, Math.floor((Math.random() * 3 + 1)));
document.getElementById("L").innerHTML = Z1 / 1000;
<label id="Z1"></label><br>
<label id="L"></label>

document.getElementById("L").innerHTML = (Z1 / 1000).toFixed(2);
// if you want 2 digits after decimalpoint
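Note that toFixed returns a string, not a number. If you still need to calculate with the result, convert it back (a small illustrative addition, not part of the original answer):

// toFixed returns a string, so convert back if you need a number again
var rounded = Number((Z1 / 1000).toFixed(2));
document.getElementById("L").innerHTML = rounded;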
Further Explanation:
Floating point numbers cannot represent all decimals precisely in binary. It comes down to how decimal values are converted to binary floating point numbers: 1/10, for example, becomes a repeating fraction in binary, so the number is not perfectly represented, and repeated operations can expose the error.
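You can see the stored approximation directly in the console (an illustrative check, not from the original answer):

console.log((0.1).toFixed(20)); // 0.10000000000000000555
console.log(0.1 + 0.2);         // 0.30000000000000004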

Related

JavaScript - While loop - odd numbers exponentiation [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
I found a problem with this JavaScript exponentiation while-loop code:
var x = Number(prompt("X:"));
var y = Number(prompt("Y:"));
var count = 1;
var power = 1;
while (count <= y) {
    power = power * x;
    console.log(x + " to the power of " + count + " is: " + power);
    count++;
}
This is the simple mathematical formula X^Y.
For X=5 and Y=25 it works well up to Y=23. It looks like there is some problem with odd values of X, e.g. X=3 and Y=35: the result is wrong from Y=34 on, as if a "1" went missing during the multiplication. Can someone explain why this happens?
You should know about Number.isSafeInteger in JS:
3 to the power of 33 is: 5559060566555523
3 to the power of 34 is: 16677181699666568
3 to the power of 35 is: 50031545098999704
console.log(
    Number.isSafeInteger(5559060566555523),
    Number.isSafeInteger(16677181699666568),
    Number.isSafeInteger(50031545098999704)
)
A safe integer is an integer that
can be exactly represented as an IEEE-754 double precision number, and
whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation.
For example, 2^53 - 1 is a safe integer: it can be exactly represented, and no other integer rounds to it under any IEEE-754 rounding mode. In contrast, 2^53 is not a safe integer: it can be exactly represented, but the integer 2^53 + 1 cannot be represented directly and instead rounds to 2^53 under round-to-nearest and round-to-zero rounding. The safe integers consist of all integers from -(2^53 - 1) to 2^53 - 1 inclusive (±9,007,199,254,740,991).
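You can check the boundary in the console, and if you need exact results beyond it, BigInt (available in modern engines) is one option. A sketch, not from the original answers:

console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991, i.e. 2^53 - 1
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1); // true: 2^53 + 1 rounds to 2^53

// Exact exponentiation with BigInt: 3^35 without rounding
var power = 1n;
for (var count = 1n; count <= 35n; count++) {
    power *= 3n;
}
console.log(power.toString()); // 50031545098999707 (the exact value, not ...704)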

Algorithm to correct the floating point precision error in JavaScript

You can find a lot about floating point precision errors and how to avoid them in JavaScript, for example in "How to deal with floating point number precision in JavaScript?", which deals with the problem by just rounding the number to a fixed number of decimal places.
My problem is slightly different: I get numbers from the backend (some with the rounding error) and want to display them without the error.
Of course I could just round the number to a set number of decimal places with value.toFixed(X). The problem is that the numbers can range from 0.000000001 to 1000000000, so I can never know for sure how many decimal places are valid.
(See this Fiddle for my unfruitful attempts)
Code:
var a = 0.3;
var b = 0.1;
var c = a - b; // is 0.19999999999999998, is supposed to be 0.2
// c.toFixed(2) = 0.20
// c.toFixed(4) = 0.2000
// c.toFixed(6) = 0.200000
var d = 0.000003;
var e = 0.000002;
var f = d - e; // is 0.0000010000000000000002 is supposed to be 0.000001
// f.toFixed(2) = 0.00
// f.toFixed(4) = 0.0000
// f.toFixed(6) = 0.000001
var g = 0.0003;
var h = 0.0005;
var i = g + h; // is 0.0007999999999999999, is supposed to be 0.0008
// i.toFixed(2) = 0.00
// i.toFixed(4) = 0.0008
// i.toFixed(6) = 0.000800
My question is: is there an algorithm that intelligently detects how many decimal places are reasonable and rounds the numbers accordingly?
When a decimal numeral is rounded to binary floating-point, there is no way to know, just from the result, what the original number was or how many significant digits it had. Infinitely many decimal numerals will round to the same result.
However, the rounding error is bounded. If it is known that the original number had at most a certain number of digits, then only decimal numerals with that number of digits are candidates. If only one of those candidates differs from the binary value by less than the maximum rounding error, then that one must be the original number.
If I recall correctly (I do not use JavaScript regularly), JavaScript uses IEEE-754 64-bit binary. For this format, it is known that any 15-digit decimal numeral may be converted to this binary floating-point format and back without error. Thus, if the original input was a decimal numeral with at most 15 significant digits, and it was converted to 64-bit binary floating-point (and no other operations were performed on it that could have introduced additional error), and you format the binary floating-point value as a 15-digit decimal numeral, you will have the original number.
The resulting decimal numeral may have trailing zeroes. It is not possible to know (from the binary floating-point value alone) whether those were in the original numeral.
One-liner solution, thanks to Eric's answer:
const fixFloatingPoint = val => Number.parseFloat(val.toFixed(15))
fixFloatingPoint(0.3 - 0.1) // 0.2
fixFloatingPoint(0.000003 - 0.000002) // 0.000001
fixFloatingPoint(0.0003 + 0.0005) // 0.0008
In order to fix issues where:
0.3 - 0.1 => 0.199999999
0.57 * 100 => 56.99999999
0.0003 - 0.0001 => 0.00019999999
You can do something like:
const fixNumber = num => Number(num.toPrecision(15));
A few examples:
fixNumber(0.3 - 0.1) => 0.2
fixNumber(0.0003 - 0.0001) => 0.0002
fixNumber(0.57 * 100) => 57

javascript increment by 1/8 issue [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
for (var i = 0; i < 10; i += .1) {
}
console.log(i); // 10.09999999999998

BUT...

for (var i = 0; i < 10; i += 1/8) {
}
console.log(i); // 10
Why is the result an integer when incrementing by 1/8?
Because 1/8 can be represented exactly as a base-2 (binary) fraction: it is 2 to the power of -3. 0.1, on the other hand, is 1/10, whose denominator is not a power of two, so it has no finite binary expansion. Floating-point values are stored in binary, so math on exact binary fractions like 1/8 is more likely to return exact values than math on numbers like 0.1.
That said, it is better to assume that no floating-point operation will be entirely exact. Different languages and processors may give different results, so don't count on that 1/8 sum coming out exact everywhere.
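You can see the difference by asking JavaScript for the binary expansion of each value (an illustrative check, not from the original answers):

console.log((0.125).toString(2)); // "0.001" - exact, since 1/8 is 2^-3
console.log((0.1).toString(2));   // "0.000110011001100110011..." - the repeating approximation of 0.1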

Bug in javascript with multiply [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
Today I entered a random float number and multiplied it by one hundred, first in normal code and then, since the code was giving me a wrong number, in the console, which returned the same result.
The given float number is: 1050.6
Therefore 1050.6 * 100 should be 105060, but JavaScript returns 105059.99999999999.
Anyone knows why?
JavaScript uses 64-bit floating point representation (double precision). Numbers are represented in this format as a whole number multiplied by a power of two.
This is based on the IEEE 754 standard
Rational numbers with a denominator that isn't a power of 2 can't be exactly represented. This is why floating point multiplication gives this result.
Source: https://en.wikipedia.org/wiki/IEEE_floating_point
If you want to get the real value, there are two methods you can use:
Rounding with Math.round
Math.round(1050.6 * 100)
Or toFixed
(1050.6 * 100).toFixed(0)
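One difference worth noting (not from the original answer): Math.round returns a number, while toFixed returns a string, so convert back if you need to keep calculating:

var a = Math.round(1050.6 * 100);          // 105060, a number
var b = (1050.6 * 100).toFixed(0);         // "105060", a string
var c = Number((1050.6 * 100).toFixed(0)); // 105060, a number again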
It's a feature of how computers handle floating point numbers. It turns out that decimal numbers can't always be perfectly represented in binary so the computer gives the closest approximation it can.
https://en.wikipedia.org/wiki/IEEE_floating_point

1.265 * 10000 = 126499.99999999999? [duplicate]

This question already has answers here:
Why does floating-point arithmetic not give exact results when adding decimal fractions?
(31 answers)
Closed 6 years ago.
When I multiply 1.265 by 10000, I get 126499.99999999999 when using JavaScript.
Why is this so?
Floating point numbers can't handle decimals correctly in all cases. Check out
http://en.wikipedia.org/wiki/Floating-point_number#Accuracy_problems
http://www.mredkj.com/javascript/nfbasic2.html
You should be aware that all information in computers is stored in binary, and that the expansions of fractions vary with the base.
For instance, 1/3 in base 10 is 0.333333333..., while 1/3 in base 3 is 0.1 and in base 2 is 0.010101010101....
In case you don't have a complete understanding of how different bases work, here's an example: the base-4 number 301.12 is equal to 3 * 4^2 + 0 * 4^1 + 1 * 4^0 + 1 * 4^-1 + 2 * 4^-2 = 48 + 1 + 0.25 + 0.125 = 49.375 in base 10.
Now, the problems with accuracy in floating point come from a limited number of bits in the significand. Floating point numbers have three parts: a sign bit, an exponent, and a mantissa. JavaScript uses the 64-bit IEEE 754 standard, but for a simpler calculation we'll use 32-bit here. So 1.265 in floating point would have:
A sign bit of 0 (0 for positive, 1 for negative), and an exponent of 0 (which with the 127 offset, i.e. exponent + offset, is 127 in unsigned binary: 01111111). Finally, we have the significand of 1.265; the IEEE floating point standard uses a hidden-1 representation, so the binary representation of 1.265 is 1.01000011110101110000101, and ignoring the leading 1 the stored mantissa is 01000011110101110000101.
So our final IEEE 754 single (32-bit) representation of 1.265 is:
Sign Bit (+) Exponent (0) Mantissa (1.265)
0 01111111 01000011110101110000101
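If you want to verify this bit pattern yourself, typed arrays let you reinterpret a float's bytes as an integer (an illustrative check, not part of the original answer):

// Write 1.265 as a 32-bit float, then read the same four bytes back as a 32-bit integer
var buf = new ArrayBuffer(4);
new Float32Array(buf)[0] = 1.265;
var bits = new Uint32Array(buf)[0].toString(2).padStart(32, "0");
console.log(bits.slice(0, 1), bits.slice(1, 9), bits.slice(9));
// "0" "01111111" "01000011110101110000101"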
Now 1000 would be:
Sign Bit (+) Exponent (9) Mantissa (1000)
0 10001000 11110100000000000000000
Now we have to multiply these two numbers. Floating point multiplication consists of re-adding the hidden 1 to both mantissas, multiplying the two mantissas, and adding the two exponents together while subtracting the offset once (so it isn't counted twice). After this, the mantissa has to be normalized again.
First, 1.01000011110101110000101 * 1.11110100000000000000000 = 10.0111100001111111111111111000100000000000000000 (this multiplication is a pain).
Now, we have an exponent of 9 plus an exponent of 0, so we keep 10001000 as our exponent, and our sign bit remains; all that is left is normalization.
We need our mantissa to be of the form 1.xxxxxx, so we have to shift it right once, which also means we have to increment our exponent, bringing us up to 10001001. Our mantissa is now normalized to 1.00111100001111111111111111000100000000000000000. It must be truncated to 23 bits, so we are left with 1.00111100001111111111111 (not including the leading 1, because it will be hidden in our final representation). So the final answer we are left with is
Sign Bit (+) Exponent(10) Mantissa
0 10001001 00111100001111111111111
Finally, if we convert this answer back to decimal we get (+) 2^10 * (1 + 2^-3 + 2^-4 + 2^-5 + 2^-6 + 2^-11 + 2^-12 + 2^-13 + 2^-14 + 2^-15 + 2^-16 + 2^-17 + 2^-18 + 2^-19 + 2^-20 + 2^-21 + 2^-22 + 2^-23) = 1264.99987792.
While I did simplify the problem by multiplying 1.265 by 1000 instead of 10000, and by using single precision floating point instead of double, the concept stays the same. You lose accuracy because the floating point representation only has so many bits in the mantissa with which to represent any given number.
Hope this helps.
It's a result of floating point representation error. Not all numbers that have finite decimal representation have a finite binary floating point representation.
Have a read of this article. Essentially, computers and floating-point numbers do not go together perfectly!
On the other hand, 126500 IS equal to 126499.99999999.... :)
Just like 1 is equal to 0.99999999....
Because 1 = 3 * 1/3 = 3 * 0.333333... = 0.99999999....
Purely due to the inaccuracies of floating point representation.
You could try using Math.round:
var x = Math.round(1.265 * 10000);
These small errors are usually caused by the limited precision of the floating point numbers used by the language. See this Wikipedia page for more information about the accuracy problems of floating point.
Here's a way to overcome your problem, although arguably not very pretty:
var correct = parseFloat((1.265*10000).toFixed(3));
// Here's a breakdown of the line of code:
var result = (1.265*10000);
var rounded = result.toFixed(3); // Gives a string representation with three decimals
var correct = parseFloat(rounded); // Converts the string back into a float
                                   // (drops the now-insignificant trailing zeros)
If you need a solution, stop using floats or doubles and start using BigDecimal.
Check the BigDecimal implementation at stz-ida.de/html/oss/js_bigdecimal.html.en
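If pulling in a library is too heavy, a common alternative is to do the arithmetic on scaled integers and divide at the end (a sketch of that technique, not from the original answer; it assumes you know how many decimal places your inputs have):

// Scale 1.265 to the integer 1265 (3 decimal places), multiply exactly,
// then undo the scaling; integers below 2^53 are exact in JavaScript
var scaled = Math.round(1.265 * 1000); // 1265
var result = (scaled * 10000) / 1000;  // 12650, exact
console.log(result);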
Even additions on the MS JScript engine show this:
WScript.Echo(1083.6 - 1023.6) gives 59.9999999
