Suppose I have a string of a non-decimal representation of a number beyond Number.MAX_SAFE_INTEGER. How would I get a BigInt of that number?
Were the string a representation of a decimal number, I'd just have BigInt(string), but the number is represented non-decimally.
Note: for my application, efficiency matters.
Edit: I'm looking for a general technique that works for arbitrary bases.
You could try one of the npm packages for large numbers, such as big-number on npm.
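If you'd rather not add a dependency: for hexadecimal, octal, and binary, BigInt() already accepts prefixed strings, and for an arbitrary base a short digit-by-digit loop is enough. Below is a minimal sketch of that idea; parseBigInt and its 0-9a-z digit alphabet are my own assumptions, not part of any package.
// bases 16 / 8 / 2: BigInt() accepts 0x / 0o / 0b prefixed strings directly
BigInt("0xffffffffffffffff"); // 18446744073709551615n (2^64 - 1)

// arbitrary base: accumulate one digit at a time, entirely in BigInt arithmetic
function parseBigInt(str, base) {
  const digits = "0123456789abcdefghijklmnopqrstuvwxyz";
  const b = BigInt(base);
  let result = 0n;
  for (const ch of str.toLowerCase()) {
    const d = digits.indexOf(ch);
    if (d < 0 || d >= base) throw new RangeError(`invalid digit "${ch}" for base ${base}`);
    result = result * b + BigInt(d);
  }
  return result;
}

parseBigInt("ffffffffffffffff", 16); // 18446744073709551615n, same value as above
For very long strings this loop is quadratic in the number of digits; a divide-and-conquer scheme that converts chunks and combines them scales better, but the loop is the simplest correct baseline.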
I have a backend that sends numbers with up to 19 digits before the decimal point and 6 digits after the decimal point. The backend is exact and correctly converts these digits into JSON like
{
"myNumber": 9999999999999999999.123456
}
On the frontend I use JavaScript and the standard fetch API response.json().
By default, myNumber is converted into JavaScript's number type by response.json(). Can there be any loss of precision during this conversion?
Later on I might convert the number type back to a string. Can there be any loss of precision during this conversion?
I need to understand which numbers can be safely converted from (a) the JSON representation to (b) JavaScript's number type and (c) back to a string. In which ranges will (a) and (c) be identical, and when will I run into issues?
I hope I abstracted that enough away, but in case anyone is interested: The backend is C# Web API with SQL Server which uses DECIMAL(19,6).
Basically, use strings instead of JSON numbers. RFC 8259 states:
This specification allows implementations to set limits on the range
and precision of numbers accepted. Since software that implements
IEEE 754 binary64 (double precision) numbers [IEEE754] is generally
available and widely used, good interoperability can be achieved by
implementations that expect no more precision or range than these
provide, in the sense that implementations will approximate JSON
numbers within the expected precision. A JSON number such as 1E400
or 3.141592653589793238462643383279 may indicate potential
interoperability problems, since it suggests that the software that
created it expects receiving software to have greater capabilities
for numeric magnitude and precision than is widely available.
The numbers you're dealing with have significantly more digits than IEEE-754 double precision will preserve.
If you use a string in the JSON like this:
{
"myNumber": "9999999999999999999.123456"
}
... then you can handle the conversion to/from any high/arbitrary-precision type yourself; it's completely in your control.
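For illustration, here is a minimal sketch of the difference, using the myNumber value from the question; the choice to simply split the string at the end is my own, and you could just as well hand the string to an arbitrary-precision library.
// as a JSON number: parsed into a binary64 double, precision is silently lost
JSON.parse('{"myNumber": 9999999999999999999.123456}').myNumber
// -> 10000000000000000000

// as a JSON string: preserved exactly; the conversion is entirely under your control
const raw = JSON.parse('{"myNumber": "9999999999999999999.123456"}').myNumber;
// -> "9999999999999999999.123456"
const [intPart, fracPart] = raw.split("."); // or pass raw to a decimal library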
I'm trying to write a function that fetches the number of decimal places after the decimal point in a floating-point literal, using the answer from here as a reference.
Although this seems to work fine when tried in browser consoles, in the Node.js environment, while running test cases, the precision is truncated to only 14 digits.
let data = 123.834756380650877834678
console.log(data) // 123.83475638065088
And the function returns 14 as the answer.
Why is the rounding happening on the value at rest? Is this default behavior?
The floating-point format used in JavaScript (and presumably Node.js) is IEEE-754 binary64. When 123.834756380650877834678 is used in source code, it is converted to the nearest representable value, which is 123.834756380650873097692965529859066009521484375.
When this is converted to a string with default formatting, JavaScript uses just enough digits to uniquely distinguish the value. For 123.834756380650873097692965529859066009521484375, this should produce “123.83475638065087”. If you are getting “123.83475638065088”, which differs in the last digit, then the software you are using does not conform to the JavaScript specification (ECMAScript).
In any case, the binary64 format does not have sufficient precision to preserve the information that the original numeral, “123.834756380650877834678”, has 21 digits after the decimal point.
The code you link to also does not and cannot compute the number of digits in an original numeral. It computes the number of digits needed to uniquely distinguish the value represented after conversion to binary64. For sufficiently short numerals without trailing zeros after the decimal point, this is the same as the number of digits after the decimal point in the original numeral. For others, it may not be.
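A small sketch of that effect, assuming a decimal-counting helper along the lines of the linked answer (countDecimals here is my own stand-in name):
// counts digits after the "." in the default string form of the value
const countDecimals = x => (String(x).split(".")[1] || "").length;

countDecimals(1.5);                       // 1  -> matches the source literal
countDecimals(123.834756380650877834678); // 14 -> the literal was already rounded to binary64,
                                          //       whose shortest string form has 14 decimals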
It is the default behavior of JavaScript, and Node.js behaves the same way.
JavaScript uses at most 17 significant decimal digits when converting a number to a string.
For more details, take a look here.
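For example, here is the classic case where all 17 significant digits are needed for the nearest binary64 result to print round-trippably:
console.log(0.1 + 0.2); // 0.30000000000000004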
Hello guys, I'm working on a JavaScript file for astronomical calculations, so I need more precision.
I have a lot of operations on each line and I want the result to be more precise (ten or twenty decimals after the comma).
In JavaScript, if I do not declare the number of decimals (for example using .toFixed(n)), how many positions after the comma does the language consider during the calculation?
For example:
var a= 1.123456789*3.5; // result 3.9320987615
var b= a/0.321;
var c= b*a;
// c => 48.16635722800571
Will that be the result?
Or does JavaScript, rather than using all the decimals after the comma for each variable, make approximations?
Why do users in other questions suggest using decimal.js or other libraries?
I'm sorry if my question seems stupid, but it's important to me.
I hope you can help me.
Sorry for my English!
JavaScript uses IEEE-754 double precision, which means it only keeps roughly 15-17 significant decimal digits; anything beyond that is lost. If you need more precision you have to use decimal.js or another similar library. Other questions recommend decimal.js or similar libraries because they are easy to add to your program and can provide as much precision as you want.
The reason this isn't the default is that computing with many more digits takes considerably more work: the CPU's floating-point hardware implements the fixed-size binary64 format, so anything beyond it has to be emulated in software. If you want to read more, I recommend the Arbitrary-precision arithmetic article on Wikipedia.
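For a concrete picture, here is a minimal sketch using decimal.js with the numbers from the question, assuming npm install decimal.js has been run; the choice of 30 significant digits is arbitrary.
const Decimal = require("decimal.js");
Decimal.set({ precision: 30 }); // carry 30 significant digits through calculations

const a = new Decimal("1.123456789").times("3.5");
const b = a.div("0.321");
const c = b.times(a);

console.log(a.toString()); // "3.9320987615" (exact)
console.log(c.toString()); // 30 significant digits, instead of binary64's roughly 15-17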
Simply because the toFixed() method converts a number into a string, keeping a specified number of decimals. So, the returned value will be a string not a number.
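For instance, a quick check in the console makes the point:
typeof (1.5).toFixed(3); // "string"
(1.5).toFixed(3);        // "1.500" (formatting for display, not extra precision)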
I am using node 6.x (npm 3.x) with restify (latest). If a JavaScript object contains a property set to an integer, by default it looks like restify.send() will serialize that integer into "low" and "high" parts -- presumably representing the low/high 32-bit components of a 64-bit integer.
How can I turn off this default behavior, so that integers are not encoded into low and high parts?
Thanks.
I can reproduce this behaviour when using Integer. Is that what you're using to represent integer values that may exceed JavaScript's Number.MAX_SAFE_INTEGER?
If so, then you need to convert those integer instances to a proper JS number, otherwise they can't be represented as numerical value in JSON:
Number(obj.intProperty) // or: obj.intProperty.toNumber()
HOWEVER: I assume there's a reason you're using Integer. If the number represented by obj.intProperty is too big to be represented as a plain JS Number, converting it may yield invalid results (that's why the JSON representation of an Integer is an object consisting of two 32-bit values).
EDIT: turns out that the issue was caused by the Neo4J driver's representation of 64-bit integers, as documented here: https://www.npmjs.com/package/neo4j-driver#a-note-on-numbers-and-the-integer-type
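For illustration, here is a minimal sketch of converting the driver's Integer values before handing the object to restify, based on the helpers documented in that note; the obj.intProperty name comes from the answer above, so treat the object shape as an assumption.
// older 1.x versions of the driver export this as require("neo4j-driver").v1
const neo4j = require("neo4j-driver");

// convert a driver Integer into something JSON-friendly before res.send(obj)
function toJsonSafe(value) {
  if (neo4j.isInt(value)) {
    // a plain Number is only exact within Number.MAX_SAFE_INTEGER;
    // beyond that, keep the exact digits as a string
    return neo4j.integer.inSafeRange(value) ? value.toNumber() : value.toString();
  }
  return value;
}

obj.intProperty = toJsonSafe(obj.intProperty); // obj is whatever you pass to res.send()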
I'm writing some code using the serialport module of Node.js.
The docs for my hardware chip specify that the transmitted data must be a byte array of hexadecimal numbers.
However, I have the values stored in decimal notation.
Using myDecimalnumber.toString(16) returns the value in hex notation, but as a string. I need it in Number format. Converting the result back into a number makes it decimal again, not hex!
I'm confused as to how to send the data to the chip. Please suggest!
Numbers are just numbers, they don't have a number base.
However, I have the values stored in decimal notation.
No, you don't. The only way it could be in decimal notation would be if it were a string, but if myDecimalnumber.toString(16) gives you a hex string, then myDecimalnumber is a Number, not a string (or you have a custom String.prototype.toString method, but I'm sure you don't).
Using myDecimalnumber.toString(16) returns the value in hex notation, but as a string. I need it in Number format.
A number has no concept of a number base. That's a concept related to the representation of a number. That is, 10 decimal is 12 octal is A hex. They're all the same number. It's just their representation (e.g., how we write it down, its string form) that involves a number base.
The docs for my hardware chip specify that the transmitted data must be a byte array of hexadecimal numbers.
That seems really unlikely. If it's the case, it was written by a hyper-junior engineer or mistranslated from another language.
The chip probably requires an array of integers (numbers), but you'll need to refer to the documentation to see what size of integers it expects (8-bit, 16-bit, 32-bit, 64-bit, etc.). But it could be that it requires an array of characters with data encoded as hex. In that case, you need to know how many digits per number it requires (likely values are 2, 4, etc.).
But again, fundamentally, number bases are only related to the representation of numbers (the way we write them down, or keep them in strings), not actual numbers. Numbers are just numbers, they don't have a number base.
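For illustration, a minimal sketch of sending raw bytes with serialport, assuming the chip simply wants byte values; the port path, baud rate, and example values are made up, and the constructor shown is the older serialport API (newer versions use new SerialPort({ path, baudRate })).
const SerialPort = require("serialport");
const port = new SerialPort("/dev/ttyUSB0", { baudRate: 9600 });

const myDecimalnumber = 188;           // the same number however you write it down:
console.log(myDecimalnumber === 0xbc); // true: 188 and 0xbc are the same Number

// write() takes a Buffer or an array of byte values; no base conversion is involved
port.write(Buffer.from([myDecimalnumber, 0x01, 0x02]));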