How does float precision work in JavaScript? - javascript

Hello guys, I'm working on a JavaScript file for astronomical calculations, so I need high precision.
I have many operations per line and I want the results to be as precise as possible (ten or even twenty decimal places after the point).
In JavaScript, if I do not declare the number of decimals (for example using .toFixed(n)), how many positions after the decimal point does the language keep during the calculation?
For example:
var a = 1.123456789 * 3.5; // result 3.9320987615
var b = a / 0.321;
var c = b * a;
// c => 48.16635722800571
Is that the result I will get?
Does JavaScript use all the decimals after the point for each variable, or does it make approximations?
Why do users in other questions suggest using decimal.js or other libraries?
I'm sorry if my question seems stupid, but it is important to me.
I hope you can help me. Sorry for my English!

JavaScript uses IEEE-754 double-precision floating point, which means it only carries about 15-17 significant decimal digits; anything beyond that is lost. If you need more precision, you have to use decimal.js or a similar library. Other questions recommend decimal.js and similar libraries because they are easy to drop into a program and can provide as much precision as you want.
The reason arbitrary precision isn't the default is that it takes far more work for the computer to calculate to 20 digits than to 15, because the hardware is built to compute with 64-bit doubles directly; anything wider has to be done in software. If you want to read more, I would recommend the article on Arbitrary-precision arithmetic on Wikipedia.
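You can see that precision limit directly in the console (a minimal sketch; the classic `0.1 + 0.2` case shows the rounding noise):

```javascript
// Doubles carry roughly 15-17 significant decimal digits.
console.log(0.1 + 0.2);             // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);     // false: the sum is off in the 17th digit
console.log(Number.EPSILON);        // 2.220446049250313e-16, the gap between 1 and the next double
```

So for the astronomical calculations in the question, every intermediate variable already carries this small rounding error, and the errors can accumulate across many operations.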

Simply note that the toFixed() method converts a number into a string, keeping a specified number of decimals. So the returned value will be a string, not a number.
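A quick illustration of that string conversion (a minimal sketch):

```javascript
const x = 3.9320987615;
const fixed = x.toFixed(4);

console.log(fixed);           // "3.9321" -- a string, not a number
console.log(typeof fixed);    // "string"
console.log(Number(fixed));   // 3.9321 -- convert back if you need a number again
```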

Related

Why is Node.js automatically rounding my floating point?

I'm trying to write a function that fetches the number of decimal places after the decimal point in a floating-point literal, using the answer here as a reference.
Although this seems to work fine in browser consoles, in the Node.js environment the precision is truncated to only 14 digits while running test cases.
let data = 123.834756380650877834678
console.log(data) // 123.83475638065088
And the function returns 14 as the answer.
Why is this rounding happening? Is it a default behavior?
The floating-point format used in JavaScript (and presumably Node.js) is IEEE-754 binary64. When 123.834756380650877834678 is used in source code, it is converted to the nearest representable value, which is 123.834756380650873097692965529859066009521484375.
When this is converted to a string with default formatting, JavaScript uses just enough digits to uniquely distinguish the value. For 123.834756380650873097692965529859066009521484375, this should produce “123.83475638065087”. If you are getting “123.83475638065088”, which differs in the last digit, then the software you are using does not conform to the JavaScript specification (ECMAScript).
In any case, the binary64 format does not have sufficient precision to preserve the information that the original numeral, “123.834756380650877834678”, has 21 digits after the decimal point.
The code you link to also does not and cannot compute the number of digits in an original numeral. It computes the number of digits needed to uniquely distinguish the value represented after conversion to binary64. For sufficiently short numerals without trailing zeros after the decimal point, this is the same as the number of digits after the decimal point in the original numeral. For others, it may not be.
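To see that the 21-digit literal and the exact stored value are the same double (a sketch; the long comparison literal is the exact binary64 decimal expansion quoted in the answer above):

```javascript
const data = 123.834756380650877834678;   // 21 digits after the point in source

// Both literals parse to the *same* binary64 value, so the extra source
// digits were never stored:
console.log(data === 123.834756380650873097692965529859066009521484375); // true

// String() emits only enough digits to uniquely identify that double,
// far fewer than the original literal had:
console.log(String(data).length < "123.834756380650877834678".length);   // true
```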
It is the default behavior of JavaScript, and Node.js behaves the same way.
In JS the maximum number of significant decimal digits is 17.
For more details, take a look here.

Javascript Decimal issues [duplicate]

This question already has answers here:
Is floating point math broken?
(31 answers)
Closed 3 years ago.
Using the parseFloat method, I converted a string value from the DB to a float and multiplied it by 100, but the output looks odd. Here is the piece of code I've used:
parseFloat(upliftPer) * 100 //upliftPer value read from DB and its value is 0.0099
When it is multiplied by 100 I get 0.9900000000000001, but I expect 0.99; some junk digits get appended. I also went ahead and tried the same in the Chrome browser console, with the same result. I have attached screenshots for reference. The solution I need is for 0.0099 * 100 to give 0.99. I can't apply round/toFixed since I need more precision.
This is because JavaScript stores all numbers internally as binary double-precision floats, and a value like 0.0099 cannot be represented exactly in binary. There's always a certain degree of noise due to floating-point inaccuracy. Here's some information on that.
You can just round it using toFixed(x) with as many decimal places of precision as you want (or as many as JavaScript allows).
It is not specific to JavaScript or to any particular programming language.
It's because of the conversion from decimal to binary floating-point representation: many decimal fractions become infinitely repeating binary fractions, which must be truncated because of the limited number of bits available for storing the number (and thus the limited amount of memory in computers).
Check out the process of conversion and take a look at this converter, which tells you how much error is introduced during conversion.
As @sebasaenz pointed out in his answer, you can use toFixed(x) to round your number and get rid of the junk digits.
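A minimal sketch of the problem and that workaround (the variable name `upliftPer` is taken from the question; the choice of 10 decimals is illustrative):

```javascript
const upliftPer = "0.0099";               // value as read from the DB
const raw = parseFloat(upliftPer) * 100;
console.log(raw);                         // 0.9900000000000001

// Round to a generous number of decimals, then convert back to a number:
const cleaned = Number(raw.toFixed(10));
console.log(cleaned);                     // 0.99
```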

Native support for 64 bit floating point in Javascript

As the question here states, even the newest ECMAScript 8 has no support for 64-bit integers.
But the stage 3 proposal for BigInt looks promising, and I expect it will be added to the JS spec sometime soon.
However, even according to the proposal, we have to use a special constructor for declaring big numbers. (Q1) What is the technical reason for being unable to represent big numbers in the general way?
let bigNum = 2 ** 64 // Why can't JS do this without losing precision? (at least in future)
I know that JavaScript represents all numbers using IEEE-754 double-precision (64-bit) floating point and that this causes the problem.
(Q2) Why can't Javascript represent all numbers using some other standard which doesn't lose precision?
(Q3) What complications would arise if Javascript actually did that?
Edit: As stated by T.J., we can use a suffix instead of a constructor, but I still feel that is not exactly the general notation.
(Q1) What is the technical reason behind, being unable to represent big numbers in the general way?
Breaking the web. The fundamentals of JavaScript numbers cannot be changed now, 20-odd years after they were originally defined. There's also the issue of performance: JavaScript's current numbers (IEEE-754 binary double [64-bit] precision) are very fast floating point thanks to being built into CPUs and math coprocessors. The cost of that speed is precision; the cost of arbitrary precision (or a dramatically larger precise range) is performance.
Someday in the future, perhaps JavaScript will get IEEE-754 64-bit or even 128-bit decimal floating point numbers (see here and here), if those formats (introduced in 2008) diffuse into the ecosystem and get hardware support. But that's speculation on my part. :-)
(Q2) Why can't Javascript represent all numbers using some other standard which doesn't lose precision?
See Q1. :-)
(Q3) What complications would arise if Javascript actually did that?
See Q1. :-)
Even according to the proposal, we have to use a special constructor for declaring big numbers.
If you want 64-bit specifically. If you just want BigInts, the proposal includes a new notation, the n suffix, for that: 2n is a BigInt 2. So with BigInts, your example would be
let bigNum = 2n ** 64n;
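BigInt has since shipped as part of ES2020, so this can be tried directly (a quick sketch contrasting it with Number):

```javascript
// BigInt keeps full integer precision; Number has gaps above 2^53.
const bigNum = 2n ** 64n;
console.log(bigNum);                   // 18446744073709551616n

// 2 ** 64 itself is exact as a double (it's a power of two), but the
// adjacent integers are not representable:
console.log(2 ** 64 + 1 === 2 ** 64);  // true: precision lost with Number
console.log(bigNum + 1n === bigNum);   // false: BigInt stays exact
```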

Stop Sublime/Node.js shortening numbers

When I work with big numbers, Sublime Text 2 or Node.js (I don't know which one actually does it) shortens those big numbers. For example:
var example = Math.pow(2, 70);
example = 1.1805916207174113e+21
I would like to see the full number (1180591620717411303424), but I don't know which settings I have to change for that nor do I know where to find those settings.
It's Node.js that's "shortening" these numbers. It displays large numbers in scientific notation because JavaScript uses 64-bit floating-point numbers.
If you're referring to displaying numbers not in scientific notation, i.e. 'x.xxxxxxxxxxe+yy', refer to this SO answer:
How to avoid scientific notation for large numbers in JavaScript?
If you're looking to handle numbers larger than 64-bit floats...
JavaScript can't handle 64-bit integers, can it?
And note the mention of the library "BigNumber".
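With modern JavaScript, one way to see all the digits is to do the computation in BigInt, which never rounds and never switches to scientific notation (a sketch; BigInt postdates this question):

```javascript
// Math.pow(2, 70) returns a Number, which Node prints in scientific notation.
// BigInt exponentiation keeps every digit:
const example = 2n ** 70n;
console.log(example.toString());   // "1180591620717411303424"
```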

JavaScript: Efficient integer arithmetic

I'm currently writing a compiler for a small language that compiles to JavaScript. In this language, I'd quite like to have integers, but JavaScript only supports Number, which is a double-precision floating point value. So, what's the most efficient way to implement integers in JavaScript? And how efficient is this compared to just using Number?
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
So, what's the most efficient way to implement integers in JavaScript?
The primitive number type is as efficient as it gets. Many modern JS engines support JIT compilation, so it should be almost as efficient as native floating-point arithmetic.
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
You can achieve the semantics of standard 32-bit integer arithmetic by noting that JavaScript converts "numbers" to 32-bit integers for bitwise operations. >>> (unsigned right shift) converts its operand to an unsigned 32-bit integer while the rest (all other shifts and bitwise AND/OR) convert their operand to a signed 32-bit integer. For example:
0xFFFFFFFF | 0 yields -1 (signed cast)
(0xFFFFFFFF + 1) | 0 yields 0 (overflow)
-1 >>> 0 yields 0xFFFFFFFF (unsigned cast)
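A runnable sketch of those 32-bit semantics, including the INT_MAX wraparound the question asks about (`INT_MAX`/`INT_MIN` are defined here for illustration; they are not built-ins):

```javascript
const INT_MAX = 0x7FFFFFFF;                    // 2147483647
const INT_MIN = -(2 ** 31);                    // -2147483648

// |0 truncates to a signed 32-bit integer, so overflow wraps:
console.log(((INT_MAX + 1) | 0) === INT_MIN);  // true: INT_MAX + 1 wraps to INT_MIN
console.log(0xFFFFFFFF | 0);                   // -1 (signed cast)
console.log(-1 >>> 0);                         // 4294967295 (unsigned cast)
console.log(Math.imul(0x10000, 0x10000));      // 0: 32-bit multiply overflow wraps
```

`Math.imul` is the standard way to get 32-bit wrapping multiplication, since `(a * b) | 0` can lose low bits once the double product exceeds 2^53.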
I found this implementation of BigIntegers in Javascript: http://www-cs-students.stanford.edu/~tjw/jsbn/
Perhaps this will help?
Edit: Also, the Google Closure library implements 64-bit integers:
http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/math/long.js
These are essentially just generating convenience objects though, and won't do anything for improving on the fundamental data-type efficiency.
On a modern CPU if you restrict your integer values to the range +- 2^52 then using a double will be barely less efficient than using a long.
The IEEE-754 double type has 53 bits of mantissa, so you can easily represent the 32-bit integer range and then some.
In any event, the rest of Javascript will be much more of a bottleneck than the individual CPU instructions used to handle arithmetic.
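The exact integer range of a double can be checked directly (the spec's safe range is ±(2^53 − 1), slightly wider than the conservative ±2^52 mentioned above):

```javascript
// Doubles represent every integer exactly up to 2^53 - 1:
console.log(Number.MAX_SAFE_INTEGER);        // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);        // true: gaps open above 2^53
console.log(Number.isSafeInteger(2 ** 52));  // true
```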
All Numbers are Numbers. There is no way around this. JavaScript has no byte or int type. Either deal with the limitations or use a lower-level language to write your compiler in.
The only sensible option if you want to achieve this is to edit one of the JavaScript interpreters (say V8) and extend JS to allow access to native C bytes.
Well, you can choose JavaScript's number type, which is likely computed using your CPU's primitives, or you can layer a complete package of operators and functions on top of an emulated series of bits...
If performance and efficiency are your concern, stick with the doubles.
The most efficient way would be to use Numbers and add operations to ensure that operations on the simulated integers give an integer result. A division, for example, would have to be rounded down, and a multiplication would either have to be checked for overflow or masked down to fit the integer range.
This of course means that floating-point operations in your language will be considerably faster than integer operations, defeating much of the purpose of having an integer type in the first place.
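A sketch of such simulated integer operations on plain Numbers (the helper names are illustrative, standing in for what a compiler might emit):

```javascript
// Hypothetical helpers a compiler might emit for 32-bit integer semantics:
function idiv(a, b) { return (a / b) | 0; }      // truncating division
function iadd(a, b) { return (a + b) | 0; }      // wrapping addition
function imul32(a, b) { return Math.imul(a, b); } // wrapping multiplication

console.log(idiv(7, 2));               // 3, not 3.5
console.log(iadd(2147483647, 1));      // -2147483648 (wraps at INT_MAX)
console.log(imul32(100000, 100000));   // 1410065408 (10^10 mod 2^32)
```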
Note that ECMA-262 Edition 3 added Number.prototype.toFixed, which takes a precision argument telling how many digits after the decimal point to show. Use this method well and you won't mind the disparity between finite precision base 2 and the "arbitrary" or "appropriate" precision base 10 that we use every day.
- Brendan Eich
