How does the Math.fround() function work?
The Math.fround() function returns the nearest 32-bit single-precision float representation of a number.
When it is passed the value 1.5, Math.fround(1.5) returns the same value, 1.5.
But when it is passed 1.55, Math.fround(1.55) returns a different value, 1.5499999523162841. Why, and how?
I am confused about how Math.fround() works.
How is it different from the Math.round() function?
Also, why is it not supported in Internet Explorer?
To understand how this function works you actually have to know the following topics:
JavaScript’s Number type in detail
The simple math behind decimal-binary conversion algorithms
How to round binary numbers
The mechanics behind exponent bias in floating point
The ECMAScript specification gives the following algorithm for the conversion:
When Math.fround is called with argument x, the following steps are taken:
If x is NaN, return NaN.
If x is one of +0, -0, +∞, -∞, return x.
Let x32 be the result of converting x to a value in IEEE 754-2008 binary32 format using roundTiesToEven.
Let x64 be the result of converting x32 to a value in IEEE 754-2008 binary64 format.
Return the ECMAScript Number value corresponding to x64.
So, let's do that for 1.5 and 1.55.
Math.fround(1.5)
1) Represent in 64-bit float
0 01111111111 1000000000000000000000000000000000000000000000000000
2) Represent in the scientific notation
1.1000000000000000000000000000000000000000000000000000 x 2^0
3) Round to a 23-bit mantissa
1.10000000000000000000000
4) Convert to decimal:
1.5
Math.fround(1.55)
1) Represent in 64-bit float
0 01111111111 1000110011001100110011001100110011001100110011001101
2) Represent in the scientific notation
1.1000110011001100110011001100110011001100110011001101 x 2^0
3) Round to a 23-bit mantissa
1.10001100110011001100110
4) Convert to decimal:
1.5499999523162841
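You can check both results from the console; storing a value into a Float32Array performs the same binary32 rounding the spec describes:
console.log(Math.fround(1.5));  // 1.5
console.log(Math.fround(1.55)); // 1.5499999523162841

// The same rounding happens when the value is stored into a Float32Array:
const buf = new Float32Array(1);
buf[0] = 1.55;
console.log(buf[0]); // 1.5499999523162841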
Below is what I found in some of the documents; kindly share your answers if you have more details.
I found the following about Math.fround ( x ) in the ECMAScript® 2015 Language Specification:
When Math.fround() is called with argument x, the following steps are taken:
If x is NaN, return NaN.
If x is one of +0, −0, +∞, −∞, return x.
Let x32 be the result of converting x to a value in IEEE 754-2008 binary32 format using roundTiesToEven.
Let x64 be the result of converting x32 to a value in IEEE 754-2008 binary64 format.
Return the ECMAScript Number value corresponding to x64.
This can be emulated with the following function, if Float32Array is supported:
Math.fround = Math.fround || (function (array) {
  return function (x) {
    array[0] = x;    // storing rounds to binary32
    return array[0]; // reading converts back to binary64
  };
})(new Float32Array(1));
Kindly share more answers if you know further details.
Related
This relates to how JavaScript handles large floating-point numbers.
What is JavaScript's highest integer value that a number can go to without losing precision? covers the highest safe integer; I am after a way to get past that limit when finding the min and max in the example below.
var lowest = Math.min(131472982990263674, 131472982995395415);
console.log(lowest);
Will return:
131472982990263680
To get the min and max values, would I need to write a custom function, or is there a way to get it working with the Math.min and Math.max functions?
The closest solution I've found was this, but I couldn't manage to get it working, as I cannot use the BigInt function: it isn't exposed in my version.
Large numbers erroneously rounded in JavaScript
You can try converting the numbers to BigInt:
const n1 = BigInt("131472982990263674"); // or const n1 = 131472982990263674n;
const n2 = BigInt("131472982995395415"); // or const n2 = 131472982995395415n;
Then find the min and max using this post:
const n1 = BigInt("131472982990263674");
const n2 = BigInt("131472982995395415");

function BigIntMinAndMax(...args) {
  return args.reduce(([min, max], e) => {
    return [
      e < min ? e : min,
      e > max ? e : max,
    ];
  }, [args[0], args[0]]);
}

const [min, max] = BigIntMinAndMax(n1, n2);
console.log(`Min: ${min}`);
console.log(`Max: ${max}`);
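Run on the two example values, this prints the inputs back exactly, since BigInt keeps full precision:
// Min: 131472982990263674
// Max: 131472982995395415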
Math.min and Math.max are working just fine.
When you write 131472982990263674 in a JavaScript program, it is rounded to the nearest IEEE 754 binary64 floating-point number, which is 131472982990263680.
Written in hexadecimal or binary, you can see that 131472982990263674 = 0x1d315fb40b2157a = 0b111010011000101011111101101000000101100100001010101111010 takes 56 bits of precision to represent.
If you round that to the nearest number with only 53 bits of precision, what you get is 0b111010011000101011111101101000000101100100001010110000000 = 0x1d315fb40b21580 = 131472982990263680 (the trailing bits have been rounded away).
Similarly, when you write 131472982995395415 in JavaScript, what you get back is 131472982995395410.
So when you write the code Math.min(131472982990263674, 131472982995395415), you pass the numbers 131472982990263680 and 131472982995395410 into the Math.min function.
Given that, it should come as no surprise that Math.min returns 131472982990263680.
> 131472982990263674
131472982990263680
> 131472982995395415
131472982995395410
> Math.min(131472982990263674, 131472982995395415)
131472982990263680
It's not clear what your original goal is.
Are you given two JavaScript numbers to begin with, and are you trying to find the min or max?
If so, Math.min and Math.max are the right thing.
Are you given two strings, and are you trying to order them by the numbers they represent?
If so, it depends on the notation you want to support.
If you only want to support decimal notation for integers (with no scientific notation, like 123e4), then you can chop leading zeros, compare lengths (a longer integer is larger), and then compare equal-length strings lexicographically with < or > in JavaScript, as below.
> function strmin(x, y) { return x < y ? x : y }
> strmin("131472982990263674", "131472982995395415")
'131472982990263674'
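The lexicographic trick alone is only safe when both strings have the same length, as they do above. A minimal sketch that also handles leading zeros and different lengths (the helper name compareDecimalStrings is my own):
// Compare non-negative decimal integer strings exactly, without ever
// converting to Number.
function compareDecimalStrings(a, b) {
  a = a.replace(/^0+(?=\d)/, ''); // strip leading zeros, keep at least one digit
  b = b.replace(/^0+(?=\d)/, '');
  if (a.length !== b.length) return a.length < b.length ? -1 : 1;
  return a < b ? -1 : a > b ? 1 : 0;
}

console.log(compareDecimalStrings("131472982990263674", "131472982995395415")); // -1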
If you want to support arbitrary-precision decimal notation (including non-integers and perhaps scientific notation), and you want to maintain distinctions between, for instance, 1.00000000000000001 and 1.00000000000000002, then you probably want a general arbitrary-precision decimal arithmetic library.
Are you trying to do arithmetic with integers in a range that might exceed 2⁵³, and need the computation to be exact, requiring >53 bits of precision?
If so, you may need some kind of wider-precision or arbitrary-precision arithmetic than JavaScript numbers alone provide, like bigint recently added to JavaScript.
If you only need a little more than 53 bits of precision, as is often the case inside numerical algorithms for transcendental elementary functions, there's also T.J. Dekker's algorithm for extending (say) binary64 arithmetic into double-binary64 or “double-double” arithmetic: a double-binary64 number is the sum 𝑥 + 𝑦 of two binary64 floating-point numbers 𝑥 and 𝑦, where typically 𝑥 holds the higher-order bits and 𝑦 holds the lower-order bits so together they can store 106 bits of precision.
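As an illustration of the idea (this is the classic TwoSum building block that double-double arithmetic is constructed from, not Dekker's full algorithm), a minimal sketch:
// TwoSum: compute s = fl(a + b) together with the exact rounding error e,
// so that a + b = s + e holds exactly in real arithmetic.
function twoSum(a, b) {
  const s = a + b;
  const bv = s - a;  // the part of s contributed by b
  const av = s - bv; // the part of s contributed by a
  const e = (a - av) + (b - bv);
  return [s, e];
}

const [s, e] = twoSum(0.1, 0.2);
console.log(s); // 0.30000000000000004 (the rounded sum)
console.log(e); // the tiny error the rounded sum lost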
Can I ever run into floating-point precision errors if I don't perform any arithmetic operations on the floats? The only operations I do with numbers in my program are limited to the following:
Getting numbers as strings from a web service and converting them to floats using parseFloat()
Comparing resulting floats using <= < == > >=
Example:
const input = ['1000.69', '1001.04' /*, ... */]
const x = parseFloat(input[0])
const y = parseFloat(input[1])
console.log(x < y)
console.log(x > y)
console.log(x == y)
As for the parseFloat() implementation, I'm using the latest Node.js.
The source of floats is prices in USD as strings, always two decimals.
As long as the source of your floats is reliable, your checks are safe, yes.
I'd still round them to an acceptable decimal number after the parsing, just to be 100% safe.
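For instance, a minimal sketch of that defensive rounding for two-decimal USD prices (the helper name toCents is my own; comparing integer cents sidesteps fractional representations entirely):
// Convert a two-decimal price string to an integer number of cents.
// Integers below 2^53 are exact, so comparisons on cents are always safe.
function toCents(price) {
  return Math.round(parseFloat(price) * 100);
}

console.log(toCents('1000.69') < toCents('1001.04')); // true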
As the MDN docs show in one of their examples:
// these all return 3.14
parseFloat(3.14);
parseFloat('3.14');
parseFloat(' 3.14 ');
parseFloat('314e-2');
parseFloat('0.0314E+2');
parseFloat('3.14some non-digit characters');
parseFloat({ toString: function() { return "3.14" } });
//and of course
parseFloat('3.140000000') === 3.14
The parseFloat operation converts a string into its Number value. The spec says:
In this specification, the phrase “the Number value for x” where x represents an exact real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with -0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and -2^1024 (which is -1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x.
That reads as if two identical strings are always converted to the same closest Number. And except for NaN, two identical Numbers compare as equal:
6.1.6.1.13 Number::equal ( x, y )
If x is NaN, return false.
If y is NaN, return false.
If x is the same Number value as y, return true.
If x is +0 and y is -0, return true.
If x is -0 and y is +0, return true.
Return false.
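So, assuming the same engine parses both strings, the checks in the question are safe to rely on:
const x = parseFloat('1000.69');
const y = parseFloat('1000.69');
console.log(x === y); // true: identical strings yield the identical Number value
console.log(parseFloat('3.140000000') === 3.14); // true: both map to the same closest Number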
I'm testing the toFixed() method of JavaScript. The result is shown below.
(49.175).toFixed(2) => "49.17"
(49.775).toFixed(2) => "49.77"
(49.185).toFixed(2) => "49.19"
(49.785).toFixed(2) => "49.78"
(49.1175).toFixed(3) => "49.117"
(49.1775).toFixed(3) => "49.178"
(49.1185).toFixed(3) => "49.118"
(49.1785).toFixed(3) => "49.178"
I ran this test in the Chrome browser, and I'm surprised by the result. I can't identify the logic: it fits neither 'round half away from zero' nor 'round half to even'.
What is the rule behind the toFixed() function?
About toFixed
Returns a String containing this Number value represented in decimal fixed-point notation with fractionDigits digits after the decimal point. If fractionDigits is undefined, 0 is assumed. Specifically, perform the steps given in the algorithm for Number.prototype.toFixed (fractionDigits): https://www.ecma-international.org/ecma-262/5.1/#sec-15.7.4.5
The length property of the toFixed method is 1.
If the toFixed method is called with more than one argument, then the behaviour is undefined (see clause 15).
An implementation is permitted to extend the behaviour of toFixed for values of fractionDigits less than 0 or greater than 20. In this case toFixed would not necessarily throw RangeError for such values.
NOTE The output of toFixed may be more precise than toString for some values because toString only prints enough significant digits to distinguish the number from adjacent number values.
JS Workaround
function fix(n, p) {
  // Shift the decimal point p places right using exponent notation (this
  // avoids a lossy binary multiply), round to an integer, shift back, format.
  return (+(Math.round(+(n + 'e' + p)) + 'e' + -p)).toFixed(p);
}
let exampleA = fix(49.1175, 3);
let exampleB = fix(49.1775, 3);
let exampleC = fix(49.775, 2);
const random = Math.random();
console.log(exampleA);
console.log(exampleB);
console.log(exampleC);
console.log('Before:', random, 'After Custom =>', fix(random, 3), 'Default:', random.toFixed(3));
// 49.118
// 49.178
// 49.78
Precision Needed
I suggest simply porting C++'s setprecision to a Node.js module.
You could also rig up a child_process in Node.js to call a C++ program with an argument, and have the C++ program convert the value and print the result to the console.
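A minimal sketch of that wiring, assuming a hypothetical compiled helper ./setprecision that takes a value and a digit count as arguments and prints the formatted number:
// Call a (hypothetical) native helper and return its trimmed stdout.
const { execFileSync } = require('child_process');

function fixViaCpp(value, digits) {
  const out = execFileSync('./setprecision', [String(value), String(digits)]);
  return out.toString().trim();
}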
The issue is that the numbers you entered do not exist! On scanning, they are (binary) rounded to the nearest possible/existing number. toPrecision(18) shows the numbers after scanning more exactly:
(49.175).toPrecision(18); // "49.1749999999999972" => "49.17"
(49.775).toPrecision(18); // "49.7749999999999986" => "49.77"
(49.185).toPrecision(18); // "49.1850000000000023" => "49.19"
(49.785).toPrecision(18); // "49.7849999999999966" => "49.78"
So the number is rounded 2 times: First on scanning, and then by toFixed().
From the MDN:
toFixed() returns a string representation of numObj that does not use exponential notation and has exactly digits digits after the decimal place. The number is rounded if necessary, and the fractional part is padded with zeros if necessary so that it has the specified length. If numObj is greater or equal to 1e+21, this method simply calls Number.prototype.toString() and returns a string in exponential notation.
And later you can read:
WARNING: Floating point numbers cannot represent all decimals precisely in binary which can lead to unexpected results such as 0.1 + 0.2 === 0.3 returning false.
The above warning, in conjunction with the rounding logic (and possibly arithmetic operations on the number), explains the different behaviours you are experiencing in the rounding procedure.
Note: I'm not asking about Is floating point math broken?, because I'm asking about an integer-valued number + a decimal number, instead of a decimal number + a decimal number.
For example, 10.0+0.1 produces a number with rounding errors, and 10.1 produces another number with rounding errors. My question is: does 10.0+0.1 carry exactly the same error as 10.1, so that 10.0+0.1===10.1 evaluates to true?
For more examples:
10.0+0.123 === 10.123
2.0+4.68===6.68
They are true by testing, and the first numbers are 10.0 and 2.0, which have integer values. Is it true that an integer + a hardcoded float number (same sign) exactly equals the hardcoded expected float number? Or in other words, does a.0+b.cde exactly equal (a+b).cde (where a,b,c,d,e are hardcoded)?
It is not generally true that adding an integer value to a floating-point value produces a result equal to the exact mathematical result. A counterexample is that 10 + .274 === 10.274 evaluates to false.
You should understand that in 10.0+0.123 === 10.123, you are not comparing the result of adding .123 to 10 with 10.123. What this code does is:
Convert “10.0” to binary floating-point, yielding 10.
Convert “0.123” to binary floating-point, yielding 0.1229999999999999982236431605997495353221893310546875.
Add the above two, yielding 10.1229999999999993320898283855058252811431884765625. (Note this result is not the exact sum; it has been rounded to the nearest representable value.)
Convert “10.123” to binary floating-point, yielding 10.1229999999999993320898283855058252811431884765625.
Compare the latter two values.
Thus, the reason the comparison returns true is not because the addition had no rounding error but because the rounding errors on the left happened to equal the rounding errors on the right. (Note: Converting a string containing a decimal to binary floating-point is a mathematical operation. When the mathematical result is not exactly representable, the nearest representable value is produced instead. The difference is called rounding error.)
If you try 10 + .274 === 10.274, you will find they differ:
“10” converted to binary floating-point is 10.
“.274” converted to binary floating-point is 0.27400000000000002131628207280300557613372802734375.
Adding the above two produces 10.2740000000000009094947017729282379150390625.
“10.274” converted to binary floating-point is 10.2739999999999991331378623726777732372283935546875.
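You can see both effects from JavaScript itself by asking for more digits than the default printing shows:
console.log(10 + .274 === 10.274);        // false
console.log((10 + .274).toPrecision(20)); // "10.274000000000000909"
console.log((10.274).toPrecision(20));    // "10.273999999999999133"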
No. JavaScript only has floats. Here's one case that fails.
10000.333333333333 + 1.0 // 10001.333333333332
I have a decimal-precision function which takes an object and a precision number as arguments and returns a number parsed with JSON.parse(number) as output.
I have observed that the toPrecision() function returns a value in exponential notation when the value passed is an integer with no decimal places and the precision value is between 1 and 100. When I pass the exponential notation to JSON.parse(), it gives me back a number. I don't understand how this works internally. Could anyone explain what exactly is happening here? The following is the function I have devised:
function precise(object, precision) {
  let st = JSON.stringify(object);
  // Replace every number in the JSON text with its toPrecision rendering.
  st = st.replace(/[+-]?([0-9]*[.])?[0-9]+/g, s => parseFloat(s).toPrecision(precision));
  return JSON.parse(st);
}
For instance, if I call precise(100, 2), st will have the value 1.0e+2, and the return value will be 100. How does this conversion take place?
Thank you.
If I'm understanding your question correctly, it is essentially:
"Why does console.log((100).toPrecision(2)); result in 1.0e+2?"
The argument given to toPrecision defines the desired precision, i.e. the number of significant digits (not decimal places).
In other words, there are more than two digits in 100, so the result has to be expressed using scientific notation.
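A quick way to see both halves of the round trip (the string toPrecision produces, and the Number that JSON.parse reads back):
const st = (100).toPrecision(2);
console.log(st);             // "1.0e+2"
console.log(JSON.parse(st)); // 100 — JSON's number grammar accepts exponent notation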