Why is Node.js automatically rounding my floating point? - javascript

I'm trying to write a function that returns the number of digits after the decimal point in a floating-point literal, using the answer from here as a reference.
Although this seems to work fine in browser consoles, in the Node.js environment, while running test cases, the precision is truncated to 14 digits.
let data = 123.834756380650877834678
console.log(data) // 123.83475638065088
And the function returns 14 as the answer.
Why is this rounding happening? Is it default behavior?

The floating-point format used in JavaScript (and presumably Node.js) is IEEE-754 binary64. When 123.834756380650877834678 is used in source code, it is converted to the nearest representable value, which is 123.834756380650873097692965529859066009521484375.
When this is converted to a string with default formatting, JavaScript uses just enough digits to uniquely distinguish the value. For 123.834756380650873097692965529859066009521484375, this should produce “123.83475638065087”. If you are getting “123.83475638065088”, which differs in the last digit, then the software you are using does not conform to the JavaScript specification (ECMAScript).
In any case, the binary64 format does not have sufficient precision to preserve the information that the original numeral, “123.834756380650877834678”, has 21 digits after the decimal point.
The code you link to also does not and cannot compute the number of digits in an original numeral. It computes the number of digits needed to uniquely distinguish the value represented after conversion to binary64. For sufficiently short numerals without trailing zeros after the decimal point, this is the same as the number of digits after the decimal point in the original numeral. For others, it may not be.
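For illustration, here is a minimal sketch of a decimal-place counter in the spirit of the linked approach (the linked code itself isn't reproduced here, so the function name and details are assumptions):
function countDecimals(value) {
  // Count digits after the decimal point in the default string form
  // (assumes the value doesn't print in exponent notation).
  if (Number.isInteger(value)) return 0;
  return String(value).split(".")[1].length;
}
let data = 123.834756380650877834678;
console.log(String(data));        // at most 17 significant digits, e.g. 123.83475638065087
console.log(countDecimals(data)); // 14 -- digits of the stored double, not of the literal
Whatever the engine prints, the count reflects the converted binary64 value, so it can never recover the 21 digits typed in the source.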

It is the default behavior of JavaScript, and Node.js behaves the same way.
In JS, the default number-to-string conversion uses at most 17 significant digits.
For more details, take a look here.
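As a quick illustration of that limit, using the value from the question:
let data = 123.834756380650877834678;
console.log(data.toString());  // shortest round-trip string, at most 17 significant digits
console.log(data.toFixed(20)); // "123.83475638065087309769" -- more digits of the stored double, still not the literal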

Related

Conversion safety for numbers received from JSON API

I have a backend that sends numbers with up to 19 digits before the decimal point and 6 digits after the decimal point. The backend is exact and correctly converts these digits into JSON like
{
myNumber: 9999999999999999999.123456
}
On the frontend I use JavaScript and the standard fetch API response.json().
By default the myNumber is converted into JavaScripts number type by response.json(). Can there be any loss of precision during this conversion?
Later on I might convert the number type back to string. Can there be any loss of precision during this conversion?
I need to understand which numbers can be safely converted from (a) the JSON representation to (b) JavaScripts number type and (c) back to a string. In which ranges will (a) and (c) be identical, when will I run into issues?
I hope I abstracted that enough away, but in case anyone is interested: The backend is C# Web API with SQL Server which uses DECIMAL(19,6).
Basically, use strings instead of JSON numbers. RFC 8259 states:
This specification allows implementations to set limits on the range
and precision of numbers accepted. Since software that implements
IEEE 754 binary64 (double precision) numbers [IEEE754] is generally
available and widely used, good interoperability can be achieved by
implementations that expect no more precision or range than these
provide, in the sense that implementations will approximate JSON
numbers within the expected precision. A JSON number such as 1E400
or 3.141592653589793238462643383279 may indicate potential
interoperability problems, since it suggests that the software that
created it expects receiving software to have greater capabilities
for numeric magnitude and precision than is widely available.
The numbers you're dealing with have significantly more digits than IEEE-754 double precision will preserve.
If you use a string in the JSON like this:
{
"myNumber": "9999999999999999999.123456"
}
... then you can handle the conversion to/from any high/arbitrary-precision type yourself; it's completely in your control.
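As a rough sketch of the difference (the property name follows the question; the parsing at the end is just one option, not a prescribed API):
const asNumber = JSON.parse('{ "myNumber": 9999999999999999999.123456 }').myNumber;
console.log(String(asNumber)); // "10000000000000000000" -- precision already lost by number parsing
const asString = JSON.parse('{ "myNumber": "9999999999999999999.123456" }').myNumber;
console.log(asString);         // "9999999999999999999.123456" -- exact, and you decide how to interpret it
const [intPart, fracPart] = asString.split("."); // e.g. hand these to BigInt or a big-decimal library
console.log(BigInt(intPart), fracPart);          // 9999999999999999999n "123456"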

How does floating-point precision work for numbers in JavaScript?

Hello guys, I'm working on a JavaScript file for astronomical calculations, so I need more precision.
I have a lot of operations on each line and I want the results to be more precise (ten or twenty decimals after the comma).
In JavaScript, if I do not specify the number of decimals (for example with .toFixed(n)), how many positions after the comma does the language keep during the calculation?
For example:
var a= 1.123456789*3.5; // result 3.9320987615
var b= a/0.321;
var c= b*a;
// c => 48.16635722800571
Will that be the result?
Does JavaScript use all the decimals after the comma for each variable, or does it make approximations?
Why do users in other questions suggest using decimal.js or other libraries?
I'm sorry if my question seems stupid, but it's important to me.
I hope you can help me.
Sorry for my English!
JavaScript uses IEEE-754 double precision, which means it only keeps about 15-17 significant decimal digits; anything beyond that is lost. If you need more precision you have to use decimal.js or a similar library. Other questions recommend decimal.js or other libraries because they are easy to add to your program and can provide as much precision as you want.
The reason this isn't the default is that computing to 20 digits takes considerably more work than 15: the hardware floating-point unit operates on fixed-width 64-bit doubles, and anything beyond that must be done in software. If you want to read more, I would recommend the Wikipedia article on arbitrary-precision arithmetic.
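For example, here is a minimal sketch with decimal.js (assuming it has been installed with npm install decimal.js) that redoes the question's calculation at 30 significant digits; the values are passed as strings so they are never rounded to doubles first:
const Decimal = require('decimal.js');
Decimal.set({ precision: 30 });                    // keep 30 significant digits
const a = new Decimal('1.123456789').times('3.5'); // 3.9320987615 exactly
const b = a.div('0.321');
const c = b.times(a);
console.log(c.toString());                         // carried to 30 significant digits instead of ~15-17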
Simply put: the toFixed() method converts a number into a string, keeping a specified number of decimals. So the returned value will be a string, not a number.

Hexadecimal Number in Javascript

I'm writing a small program using the serialport module of Node.js.
The docs for my hardware chip specify that the transmitted data must be a byte array of hexadecimal numbers.
However, I have the values stored in decimal notation.
Using myDecimalnumber.toString(16) returns the hex notation, but as a string. I need it in Number format. Converting the result back into a number makes it decimal again, not hex!
I'm confused as to how to send the data to the chip. Please suggest!
Numbers are just numbers, they don't have a number base.
However, I have the values stored in decimal notation.
No, you don't. The only way it could be in decimal notation would be if it were a string, but if myDecimalnumber.toString(16) gives you a hex string, then myDecimalnumber is a Number, not a string (or you have a custom String.prototype.toString method, but I'm sure you don't).
Using myDecimalnumber.toString(16) returns in hexa notation but in string format. I need in Number format.
A number has no concept of a number base. That's a concept related to the representation of a number. That is, 10 decimal is 12 octal is A hex. They're all the same number. It's just their representation (e.g., how we write it down, its string form) that involves a number base.
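A quick illustration of that point:
const n = 10;
console.log(n.toString(10)); // "10"
console.log(n.toString(8));  // "12"
console.log(n.toString(16)); // "a"
console.log(0xA === 10);     // true -- literals written in different bases produce the same number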
Docs of my hardware chip specified my tranmsitted data to be a byte array of hexadecimal numbers.
That seems really unlikely. If it's the case, it was written by a hyper-junior engineer or mistranslated from another language.
The chip probably requires an array of integers (numbers), but you'll need to refer to the documentation to see what size of integers (8-bit, 16-bit, 32-bit, 64-bit, etc.). But it could be that it requires an array of characters with data encoded as hex. In that case, you need to know how many digits per number it requires (likely values are 2, 4, etc.).
But again, fundamentally, number bases are only related to the representation of numbers (the way we write them down, or keep them in strings), not actual numbers. Numbers are just numbers, they don't have a number base.
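As a hedged sketch of what that usually looks like in practice (the variable names are made up, and you should check your chip's docs for the exact framing it expects):
const values = [10, 255, 7];          // your values, stored as plain numbers (0-255 each for single bytes)
const payload = Buffer.from(values);  // raw bytes -- no number base involved at all
console.log(payload.toString('hex')); // "0aff07" -- hex only appears when you ask for a string
// With the serialport module you would then write the Buffer, e.g. port.write(payload).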

How does number precision affect performance in JavaScript, or does it?

In JavaScript, there's only one type for all different kinds of numbers. Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Generally no. There are some possible performance implications when a number doesn't fit in a 31-bit signed int.
A tour of V8: object representation explains:
According to the spec, all numbers in JavaScript are 64-bit floating point doubles. We frequently work with integers though, so V8 represents numbers with 31-bit signed integers whenever possible (the low bit is always 0; this helps the garbage collector distinguish numbers from pointers). So objects with the fast small integers elements kind only contain this type of number. If we want to store a fractional number or a larger integer or a special value like -0, then we need to upgrade the array to fast doubles. This involves a potentially expensive copy-and-convert operation, but it doesn't happen often in practice. fast doubles objects are still pretty fast because all of the numbers are stored in an unboxed representation. If we want to store any other kind of value, e.g., a string or an object, we must upgrade to a general array of fast elements.
Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
No. The number type in JavaScript is a 64-bit floating-point value with base 2 and always has the same precision. The computer works on that data bit by bit and it doesn't matter whether that data represents something that looks simple to a human like 1.0 or something seemingly complicated like 123423.5645632. In fact, for base-2 floats, the 'human' values are just as 'hard', because 1.1 is really stored as a slightly different, much longer number (the nearest double is 1.1000000000000000888...). None of this matters, because the computer always operates on the same 64 ones and zeros. There are always some arcane exceptions in floating point, but those usually don't matter in practice.
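A quick illustration of that last point:
console.log((1.1).toPrecision(20)); // "1.1000000000000000888" -- the stored double is not exactly 1.1
console.log((1.0).toPrecision(20)); // "1.0000000000000000000" -- 1.0 happens to be exactly representable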
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Decimal numbers are stored as doubles (64-bit), and it doesn't matter whether that is 1.0 or 1.1221234423. Again, the number of bits is constant for these data types.
The same is true for ints, but MongoDB has support for both 32- and 64-bit ints. So NumberLong is indeed larger than regular 32-bit ints and just as large as doubles.

What's the first number that JavaScript can't represent accurately?

I'm working with currency values, so it's important to calculate accurately.
My current code breaks down a string into tokens then evaluates them. For decimal values, it first converts them to integers, does the calculation then converts back to a decimal.
For example if I had the expression
"0.1 * 0.2"
The first step would be to break it down into the tokens 0.1, * and 0.2. It then does some other malarky and figures out it needs to multiply 0.1 and 0.2 together. The calculation would be
1 * 2 / 100
The calculation is done as integers to prevent JavaScript rounding error, i.e.
0.1 * 0.2 == 0.020000000000000004
My colleague's argument is that by converting from a string to a float initially you've already lost precision. So my question is: what are the upper and lower bounds on either side of 0 where a number cannot be represented by JavaScript exactly? So that I can check for this and handle it, if that's the right approach.
The problem you describe isn't a bounds problem. IEEE-754 double-precision binary floating point numbers can represent the value 0.5 perfectly, but cannot represent, say, 0.1 perfectly. Note that those have the same number of digits. The issue isn't the number of places of precision, it's the fact that the number type uses a different number base than we do. It uses base 2, rather than our base 10.
Just as we can't accurately represent 1 / 3 in our base 10 system, certain numbers cannot be accurately represented in IEEE-754's base 2 system.
In 2008, the IEEE came out with a revision adding a new format to IEEE-754 (it defines several formats; the "double-precision binary" one used by JS is just one of them) called "decimal64", which uses base 10 rather than base 2, for applications that need to handle rounding the same way we do (financial apps and such). That may start seeping into programming languages; for now, IEEE-754 single precision and double precision are the typical ones used, along with formats not based on the recent IEEE-754 standard, like C#'s decimal.
In the meantime, there are "big decimal" libraries for JavaScript, such as big.js (haven't used it, no affiliation). If you search for "bignumber in JavaScript" or "JavaScript exact floating point" you should find multiple options.
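For instance, a minimal sketch with big.js (assuming npm install big.js), passing strings so the inputs are never rounded as doubles first:
const Big = require('big.js');
console.log(0.1 * 0.2);                              // 0.020000000000000004
console.log(new Big('0.1').times('0.2').toString()); // "0.02"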
I don't think there is any single number you can point to as the first number that JavaScript can't represent accurately. It is all about decimal numbers and loss of precision.
Also, to add: there is no decimal data type in JavaScript; the only numeric data type is floating point. JavaScript uses a 64-bit floating-point representation.
Floating-point rounding errors: 0.1 cannot be represented exactly in base 2 the way it is in base 10, because of the missing prime factor of 5. Note also that all floating-point math behaves like this and is based on the IEEE 754 standard.
If I'm not mistaken, all JS engines uses IEEE 754 double to handle floating point numbers.
Take a look at the http://en.wikipedia.org/wiki/Double-precision_floating-point_format page, section 'Double-precision examples'. For numbers close to 0, take a closer look at the subnormals formula: value = (-1)^sign × 2^(-1022) × (fraction / 2^52).
So, the number closest to zero that can be represented in JavaScript's floating-point data type is 2^(1-1023) × 2^(-52) = 2^(-1022) × 2^(-52) = 2^(-1074).
Empirically:
Math.pow(2,-1074) = 5e-324
Math.pow(2,-1075) = 0
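The same smallest subnormal is exposed as a constant:
console.log(Number.MIN_VALUE);                        // 5e-324
console.log(Number.MIN_VALUE === Math.pow(2, -1074)); // true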
I notice you are doing currency calculations. You may find it better to use integers here. Use an integer to represent the price in cents. This eliminates the problems with floating point, which is more suitable for scientific calculations. This would basically be equivalent to using a BigDecimal in Java.
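A minimal sketch of that idea (the helper names are illustrative, not a library API, and the parsing assumes well-formed, non-negative amounts):
function dollarsToCents(s) {
  // "19.99" -> 1999; pad the fractional part so "0.1" means 10 cents
  const [dollars, cents = ''] = s.split('.');
  return Number(dollars) * 100 + Number((cents + '00').slice(0, 2));
}
function centsToDollars(cents) {
  return (cents / 100).toFixed(2);
}
const total = dollarsToCents('19.99') + dollarsToCents('0.01'); // 2000 cents
console.log(centsToDollars(total));                             // "20.00" -- no binary rounding surprises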
