JavaScript increment by 1/8 issue [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 7 years ago.
for (var i = 0; i < 10; i += .1) {
}
console.log(i) === 10.09999999999998
BUT ...
for (var i = 0; i < 10; i += 1/8) {
}
console.log(i) === 10
Why is the result an integer when incrementing by 1/8?

Because 1/8 can be represented exactly as a base-2 (binary) fraction, but 0.1 cannot: 1/8 is 2 to the power −3, while 0.1 is not any integer power of 2. Floating-point values are stored in binary, so arithmetic on exact powers of two is more likely to produce exact results than arithmetic on values, like 0.1, that have no finite binary representation.
That said, it is better to assume that no floating-point operation will be entirely exact. Different languages and processors may give different results, so don't count on that 1/8 summation working everywhere.
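A quick sketch you can paste into a console makes the difference visible: ten steps of 0.1 drift away from 1, while eight steps of 1/8 land on it exactly.

```javascript
// Summing 0.1 ten times accumulates rounding error,
// while summing 1/8 eight times stays exact.
let tenths = 0;
for (let i = 0; i < 10; i++) tenths += 0.1;

let eighths = 0;
for (let i = 0; i < 8; i++) eighths += 1 / 8;

console.log(tenths);        // 0.9999999999999999
console.log(tenths === 1);  // false
console.log(eighths);       // 1
console.log(eighths === 1); // true
```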


How can one avoid too many numbers after the decimal point? [duplicate]

This question already has answers here:
How to deal with floating point number precision in JavaScript?
(47 answers)
Closed 2 years ago.
The following code generates numbers and then divides them by 1000:
But sometimes numbers like 1.2295999999999998 come out, even though, e.g., 1229.6 is only being divided by 1000.
What do I need to change in the code so that this does not occur?
Z1 = document.getElementById("Z1").innerHTML = Math.floor((Math.random() * 99899 + 100)) / Math.pow(10, Math.floor((Math.random() * 3 + 1)));
document.getElementById("L").innerHTML = Z1 / 1000;
<label id="Z1"></label><br>
<label id="L"></label>
document.getElementById("L").innerHTML = (Z1 / 1000).toFixed(2); // if you want 2 digits after the decimal point
Further explanation:
Floating-point numbers cannot represent all decimal values precisely in binary; it comes down to how decimal values are converted to binary floating-point numbers. For example, 1/10 becomes a repeating fraction in binary, so the number is not represented perfectly, and repeated operations can expose the error.
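You can see that repeating expansion directly: `toString(2)` prints the binary digits of the stored double (a quick console sketch).

```javascript
// 1/10 in binary is 0.000110011001100... (the block "0011" repeats),
// so the stored double is a rounded approximation.
console.log((0.1).toString(2));

// By contrast, 1/8 terminates in binary:
console.log((0.125).toString(2)); // "0.001"
```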

JavaScript - While loop - odd numbers exponentiation [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 4 years ago.
I found a problem with this JavaScript exponentiation while-loop code:
var x = Number(prompt("X:"));
var y = Number(prompt("Y:"));
var count = 1;
var power = 1;
while (count <= y) {
    power = power * x;
    console.log(x + " to the power of " + count + " is: " + power);
    count++;
}
This is the simple mathematical formula X^Y.
For X=5 and Y=25 it goes well up to Y=23. It looks like there is some problem with odd values of X, e.g. X=3 and Y=35: the result is wrong from Y=34 on, as if a "1" goes missing during the multiplication. Can someone explain why this happens?
It helps to know about Number.isSafeInteger in JavaScript:
3 to the power of 33 is: 5559060566555523
3 to the power of 34 is: 16677181699666568
3 to the power of 35 is: 50031545098999704
console.log(
    Number.isSafeInteger(5559060566555523),
    Number.isSafeInteger(16677181699666568),
    Number.isSafeInteger(50031545098999704)
);
A safe integer is an integer that
can be exactly represented as an IEEE-754 double precision number, and
whose IEEE-754 representation cannot be the result of rounding any other integer to fit the IEEE-754 representation.
For example, 2^53 − 1 is a safe integer: it can be exactly represented, and no other integer rounds to it under any IEEE-754 rounding mode. In contrast, 2^53 is not a safe integer: it can be exactly represented, but the integer 2^53 + 1 cannot, and instead rounds to 2^53 under round-to-nearest and round-to-zero rounding. The safe integers are all integers from −(2^53 − 1) to 2^53 − 1 inclusive (±9007199254740991, i.e. ±9,007,199,254,740,991).
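For the numbers in this question, one workaround (a sketch, assuming an ES2020+ environment with BigInt) is to do the exponentiation in BigInt, which stays exact at any size:

```javascript
// Repeat the question's loop with x = 3 up to y = 35 using Number:
let power = 1;
for (let count = 1; count <= 35; count++) {
    power *= 3;
}
console.log(power);                       // 50031545098999704 (off by 3)
console.log(Number.isSafeInteger(power)); // false: past 2^53 - 1, results round

// BigInt keeps every digit:
console.log(3n ** 35n);                   // 50031545098999707n (exact)
```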

Adding integer numbers javascript, double float [duplicate]

This question already has answers here:
Floating point representations seem to do integer arithmetic correctly - why?
(8 answers)
Closed 5 years ago.
So I have read this answer here:
Is floating point math broken?
That, because every number in JS is a double float, 0.1 + 0.2, for example, will NOT equal 0.3.
But I don't understand why it never happens with integers. Why does 1 + 2 always equal 3, etc.? It would seem that integers like 1 or 2, similarly to 0.1 and 0.2, don't have a perfect representation in binary64, so their math should also sometimes break, but that never happens.
Why is that?
but I don't understand why it never happens with integers?
It does; the integers just have to be really big before they hit the limits of the IEEE-754 format:
var a = 9007199254740992;
console.log(a); // 9007199254740992
var b = a + 1;
console.log(b); // still 9007199254740992
console.log(a == b); // true
Floating point formats, such as IEEE-754 are essentially an expression that describes the value as the following:
value := sign * mantissa * 2 ^ exponent
The mantissa is an integer of various sizes. For four-byte floating point, the mantissa is 24 bits (23 stored explicitly), and for eight-byte floating point, the mantissa is 53 bits (52 stored explicitly). If the exponent is 0, the value of the expression is determined only by the sign and the mantissa. This is, in fact, how integers are represented by JavaScript.
What seems to take most people by surprise is the base-2 exponent instead of base 10. We accept that, in base 10, the result of 1/3 or 2/3 cannot be exactly represented without an infinite number of digits or the acceptance of round-off error. Similarly, there are fractions in base 2 that have the same issue. Unfortunately for our base-10 mindset, these include the everyday negative powers of 10, such as 1/10 and 1/100.
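To connect this to the actual bit layout, here is a small sketch (the `decompose` helper is mine, not from the answer) that extracts the three fields of a binary64 value with a DataView:

```javascript
// Split a JS number into its IEEE-754 binary64 fields.
function decompose(x) {
    const view = new DataView(new ArrayBuffer(8));
    view.setFloat64(0, x);
    const bits = view.getBigUint64(0);
    return {
        sign: Number(bits >> 63n),                        // 1 bit
        exponent: Number((bits >> 52n) & 0x7ffn) - 1023,  // 11 bits, bias removed
        mantissa: bits & 0xfffffffffffffn,                // 52 stored bits
    };
}

console.log(decompose(1));   // { sign: 0, exponent: 0, mantissa: 0n }
console.log(decompose(0.1)); // exponent: -4, mantissa: a long nonzero pattern
```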

JavaScript Highest Representable Number

This question asks about the highest number in JavaScript without losing precision. Here, I ask about the highest representable number in JavaScript. Some answers to the other question reference this one, but they do not answer my question, so I hope I am safe asking it here.
I tried to find the answer, but I got lost halfway through. The highest number representable in JavaScript seems to be somewhere between 2^1023 and 2^1024. I went further (in the iojs REPL) with
var highest = Math.pow(2, 1023);
for (let i = 1022; i > someNumber; i--) {
    highest += Math.pow(2, i);
}
The highest number here seems to be when someNumber is between 969 and 970. This means it is between (2^1023 + 2^1022 + ... + 2^970) and (2^1023 + 2^1022 + ... + 2^969). I'm not sure how to go further without running out of memory and/or waiting years for a loop or function to finish.
What is the highest number representable in JavaScript? Does JavaScript store all the digits of this number, or just some, given that whenever I see numbers of 10^21 or higher they are shown in scientific notation? Why can JavaScript represent these extremely high numbers, especially if it can "remember" all the digits? Isn't JavaScript a base-64 language?
Finally, why is this the highest representable number? I am asking because it is not an integer exponent of 2. Is it because it is a floating point number? If we took the highest floating point number, would that be related to some exponent of 2?
ECMAScript uses IEEE 754 floating points to represent all numbers (integers and floating points) in memory. It uses double precision (64 bit), so the largest possible number would be the following (in binary):
(-1)^0 * (1.1111111111111111111111111111111111111111111111111111)_2 * 2^1023
     ^     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^     ^^^^
  sign bit                  52 binary digits                         exponent
That is 1.9999999999999997779553950749686919152736663818359375 * 2^1023, which is exactly (2 − 2^−52) * 2^1023 ≈ 1.7976931348623157 × 10^308, a 309-digit integer. That number is also available in JavaScript as Number.MAX_VALUE.
JavaScript uses IEEE 754 double-precision floating point numbers, also known as binary64. This format has 1 sign bit, 11 exponent bits, and 52 mantissa bits.
The highest possible number is that which is encoded using the highest possible exponents and mantissa, with a 0 sign bit. Except that the exponent value of 7ff (base 16) is used to encode Infinity and NaNs. The largest number is therefore encoded as 7fef ffff ffff ffff, and its value is (1 + (1 − 2^(−52))) × 2^1023.
Refer to the linked article for further details about the formula.
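A short console sketch confirms the value and the spacing of doubles at that magnitude:

```javascript
console.log(Number.MAX_VALUE);     // 1.7976931348623157e+308
console.log(Number.MAX_VALUE * 2); // Infinity (overflow)

// Adding a "small" number does nothing: the gap between adjacent
// doubles near MAX_VALUE is 2^971, roughly 2e292, so 1e291 is
// less than half a step and rounds away.
console.log(Number.MAX_VALUE + 1e291 === Number.MAX_VALUE); // true
```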

How to work around the decimal problem in JavaScript? [duplicate]

This question already has answers here:
Closed 13 years ago.
Possible Duplicates:
Strange problem comparing floats in objective-C
Is JavaScript’s math broken?
1.265 * 10000 = 126499.99999999999 ?????
After watching this I discovered that in JavaScript:
0.1 + 0.2 === 0.3
evaluates to false.
Is there a way to work around this?
The best and only answer I have found that provides accurate results is to use a decimal library. The BigDecimal Java class has been ported to JavaScript; see my answer in this SO post.
Note: Scaling values will "treat" the issue but will not "cure" it.
How about
function isEqual(a, b) {
    var epsilon = 0.01; // tunable
    return Math.abs(a - b) < epsilon;
}
This is a problem inherent in binary numbers that hits all major programming languages.
Convert decimal 0.1 (1/10) to binary by hand: you'll find it has a repeating fraction and can't be represented exactly, just like trying to represent 1/3 as a decimal.
You should always compare floating-point numbers using a small constant (normally called epsilon) that determines how much two numbers may differ and still be considered "equal".
Use fixed-point math (read: integers) to do math where you care about that kind of precision. Otherwise write a function that compares your numbers within a range that you can accept as being "close enough" to equal.
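As a concrete fixed-point sketch (the cents convention here is illustrative, not from the answer): keep money as integer cents and convert only for display.

```javascript
// Work in integer cents; convert to a decimal string only for display.
const priceCents = 10 + 20;                 // 0.10 + 0.20, done as integers
console.log(priceCents);                    // 30, exact
console.log((priceCents / 100).toFixed(2)); // "0.30" for display

// The direct float sum is off:
console.log(0.1 + 0.2);                     // 0.30000000000000004
```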
Just an idea: multiply all your values by 10000 (or a similarly large number, as long as it exceeds your maximum number of decimal places) before you compare them, so that they become integers.
function padFloat(val) {
    return Math.round(val * 10000); // round, because val * 10000 may itself be slightly off
}
padFloat( 0.1 ) + padFloat( 0.2 ) === padFloat( 0.3 );
You can of course multiply each number like
10 * 0.1 + 10 * 0.2 === 10 * 0.3
which evaluates to true.
