Parsing amount with JSON.parse - javascript

Is there a maximum float number with two digits after the decimal point that can be parsed with JSON.parse without losing precision?
For example:
JSON.parse('{"amount": 9999999999999.99}')
{amount: 9999999999999.99} // no precision lost (probably)
JSON.parse('{"amount": 99999999999999.99}')
{amount: 99999999999999.98} // precision lost

If x is a decimal floating-point number with 15 significant digits or fewer, then converting x to JavaScript’s Number type and then converting the result back to a decimal floating-point number with the same number of significant digits produces exactly x, provided the number is within normal bounds. Therefore, all decimal numerals with two digits after the decimal point from “.00” to “9999999999999.99” can be parsed, stored, and reformatted with two digits after the decimal point, and the result will be the original numeral.
The stored value will generally not equal the original value. For example, when “.99” is parsed, the result will be exactly 0.9899999999999999911182158029987476766109466552734375. However, the stored value will be sufficiently close to the original value that, when converted back to the original number of digits, the original value is recovered. Note that you must know the original number of digits; it is not inherently a part of the Number value.
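For example, both effects can be seen directly, since toPrecision shows more digits than the default display:

const parsed = JSON.parse('{"amount": 0.99}').amount;
console.log(parsed.toPrecision(20)); // "0.98999999999999999112" — the inexact stored value
console.log(parsed.toFixed(2));      // "0.99" — the original numeral recovered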
15 is a lower bound for this property. There may be some exponent values for which all 16-digit decimal numerals survive a round trip. However, since 99999999999999.99 (16 digits) produces 99999999999999.98, we know this is not one of those intervals.
If you want to know the specific number between 9999999999999.99 and 99999999999999.99 where this round-trip property first fails, you may have to hunt for it computationally. It may not be a value that is easy to calculate directly from mathematical properties.
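A minimal sketch of the core check for such a hunt (survivesRoundTrip is a hypothetical helper name, not any built-in):

function survivesRoundTrip(text) {
  // Parse the numeral, then format it back to two fractional digits.
  return JSON.parse(`{"amount": ${text}}`).amount.toFixed(2) === text;
}

console.log(survivesRoundTrip("9999999999999.99"));  // true
console.log(survivesRoundTrip("99999999999999.99")); // false (parses to ...98)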

Related

For two floating-point numbers with the same magnitude but opposite sign, are their absolute values exactly equal?

For example:
var a=x.yz;
var b=-x.yz;
After they are rounded to the nearest representable ("error") value, would -a exactly equal b?
Or would they round to different values, the way Math.round(2.5) and Math.round(-2.5) do?
If the numeral has no more than 20 significant decimal digits, the results of converting a numeral with a - and a numeral without a - are exactly the same except for the sign.
JavaScript is an implementation of ECMAScript, specified in Ecma-262 and ISO/IEC 16262. Ecma-262 specifies that the IEEE-754 64-bit binary floating point format is used, except there is a single NaN value.
Clause 7.1.3.1 specifies how a String (containing a numeral, such as 1.2345) is converted to a Number. Unless the numeral is a decimal numeral with more than 20 significant digits, it is converted as specified in clause 6.1.6, which specifies rounding rules corresponding to IEEE-754’s round-to-nearest-ties-to-even method. That method is symmetric with respect to sign, and therefore the result of converting the negation of a numeral to a JavaScript Number equals the negation of the result of converting the numeral to a Number.
If the numeral has more than 20 significant decimal digits, clause 7.1.3.1 allows an implementation to use either of the two 20-digit numerals nearest the source numeral. The document does not impose further requirements on this choice, so a JavaScript implementation could behave asymmetrically with respect to sign in this case. However, it would be a poor implementation choice to do so.
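To illustrate with a numeral of 19 significant digits (so the exact rounding rules apply):

const a = Number("0.3333333333333333333");  // rounded to the nearest double
const b = Number("-0.3333333333333333333");
console.log(Object.is(-a, b)); // true: same magnitude, opposite sign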
https://docs.oracle.com/javase/8/docs/api/java/lang/Math.html#round-float-
As that documentation says, Math.round(float) rounds halfway cases toward positive infinity; this helps you understand why Math.round(2.5) and Math.round(-2.5) return values that are not mirror images. In your case, though, -(-a) will in fact round the same as +a.
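JavaScript's Math.round uses the same half-up tie-breaking:

console.log(Math.round(2.5));  // 3  (halfway case rounds toward +Infinity)
console.log(Math.round(-2.5)); // -2 (not -3: also toward +Infinity)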

Why does right shift on positive number sometimes result in a negative number?

Right-shifting a number in JavaScript sometimes results in a negative number. What is the reason behind that? Can it be mitigated?
const now = 1562143596806 // UNIX timestamp in milliseconds
console.log(now >> 8) // -4783199
Use the zero-fill right shift operator (>>>) to always get a positive result:
const now = 1562143596806 // UNIX timestamp in milliseconds
console.log(now >>> 8) // 11994017 — positive, though the top bits are already lost (see below)
The reason the >> operator returns this number is that, originally, the number is internally represented as a 64-bit floating-point number; its integer value in binary is:
10110101110110111000000111010000100000110
The bit shift operation will first convert the operand to a 32-bit integer. It does this by keeping only the 32 least significant bits, and discarding the rest:
10110111000000111010000100000110
Then it will shift the value right by the specified number of bits while maintaining the sign, i.e. shifting in eight 1-bits from the left:
11111111101101110000001110100001
Converting back to decimal, this yields:
-4783199
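You can reproduce these steps directly, since | 0 performs the same ToInt32 truncation:

const now = 1562143596806;
const low32 = now | 0;    // ToInt32 keeps only the low 32 bits: -1224498938
console.log(low32 >> 8);  // -4783199, identical to now >> 8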
The basic issue is that 1562143596806 is too large to fit in 32 bits. It can be represented as a Number, but when performing bitwise operations the value is first converted to a 32-bit integer, which means the "top bits" are already dropped before shifting. The upper bits of the result are therefore not filled from the original value; they are copies of the sign bit of that temporary 32-bit value (or, with >>>, they would be zero, which is not really an improvement). That the result happens to come out negative is just an accident of the exact bit pattern of the input; had it been positive, it would still have been the wrong positive value.
Such large values could be safely manipulated as BigInt, but support for that is lacking. Using floating point arithmetic can work, but requires extra care. For example you can divide by 256 and floor the result, but you cannot use the usual |0 to get rid of the fractional part, because even after dividing by 256 the value is too big to fit in 32 bits. Various non-built-in BigInt libraries exist to deal with this sort of thing too.
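A sketch of both routes the answer mentions:

const now = 1562143596806;

// Floating-point route: divide and floor, with no 32-bit truncation involved.
console.log(Math.floor(now / 256)); // 6102123425

// BigInt route, where supported: a right shift that keeps all the bits.
console.log(BigInt(now) >> 8n);     // 6102123425n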

How does Javascript store a numeric value?

I am new to JavaScript programming and referring to Eloquent JavaScript, 3rd Edition by Marijn Haverbeke.
There is a statement in this book which reads like,
"JavaScript uses a fixed number of bits, 64 of them, to store a single number value. There are only so many patterns you can make with 64 bits, which means that the number of different numbers that can be represented is limited. With N decimal digits, you can represent 10^N numbers. Similarly, given 64 binary digits, you can represent 2^64 different numbers, which is about 18 Quintilian (an 18 with 18 zeros after it). That’s a lot."
Can someone help me with the actual meaning of this statement. I am confused as to how the values more than 2^64 are stored in the computer memory.
Can someone help me with the actual meaning of this statement. I am confused as to how the values more than 2^64 are stored in the computer memory.
Your question relates to more general concepts in computer science; for this question, JavaScript sits at a higher level.
Please understand the basic concepts of memory and storage first:
https://study.com/academy/lesson/how-do-computers-store-data-memory-function.html
https://www.britannica.com/technology/computer-memory
https://www.reddit.com/r/askscience/comments/2kuu9e/how_do_computers_handle_extremely_large_numbers/
How do computers evaluate huge numbers?
Also, for JavaScript, please see this section of the ECMAScript specification:
Ref: https://www.ecma-international.org/ecma-262/5.1/#sec-8.5
The Number type has exactly 18437736874454810627 (that is, 2^64 − 2^53 + 3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53 − 2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
There are two other special values, called positive Infinity and negative Infinity. For brevity, these values are also referred to for expository purposes by the symbols +∞ and −∞, respectively. (Note that these two infinite Number values are produced by the program expressions +Infinity (or simply Infinity) and -Infinity.)
The other 18437736874454810624 (that is, 2^64 − 2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Note that there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and −0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
The 18437736874454810622 (that is, 2^64 − 2^53 − 2) finite nonzero values are of two kinds:
18428729675200069632 (that is, 2^64 − 2^54) of them are normalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^53 but not less than 2^52, and e is an integer ranging from −1074 to 971, inclusive.
The remaining 9007199254740990 (that is, 2^53 − 2) values are denormalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^52, and e is −1074.
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).
A finite number has an odd significand if it is nonzero and the integer m used to express it (in one of the two forms shown above) is odd. Otherwise, it has an even significand.
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754 “round to nearest” mode.)
Some ECMAScript operators deal only with integers in the range −2^31 through 2^31−1, inclusive, or in the range 0 through 2^32−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2^32 integer values. See the descriptions of the ToInt32 and ToUint32 operators in 9.5 and 9.6, respectively.
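A quick illustration of the 2^53 boundary the spec describes (ties round to the even significand):

console.log(2 ** 53);     // 9007199254740992 — every integer up to here is exact
console.log(2 ** 53 + 1); // 9007199254740992 — a tie, rounded to the even significand
console.log(2 ** 53 + 2); // 9007199254740994 — exactly representable again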
You have probably learned about big numbers in mathematics.
For example, the Avogadro constant is 6.022×10**23.
Computers can also store numbers in this format.
Except for two things:
They store it as a binary number.
They would store Avogadro's number as 0.6022×10**24, or more precisely as two parts:
the precision: a value between 0 and 1 (0.6022); this usually takes 2–8 bytes
the size/magnitude of the number (24); this usually takes only 1 byte, because 2**256 is already a very big number
As you can see, this method stores an inexact value of a very big/small number.
An example of the inaccuracy: 0.1 + 0.2 == 0.30000000000000004
For performance reasons, most engines often use the plain integer format internally when it makes no difference in the results.
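A sketch that makes the two parts visible for a JavaScript Number (decompose is a hypothetical helper; note that IEEE 754 actually stores a binary exponent and significand, not the decimal parts described above):

function decompose(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);                          // write the raw 64-bit pattern
  const hi = view.getUint32(0);                   // top 32 bits
  const sign = hi >>> 31;                         // 1 sign bit
  const exponent = ((hi >>> 20) & 0x7ff) - 1023;  // 11 exponent bits, bias removed
  const mantissaHi = hi & 0xfffff;                // top 20 of the 52 significand bits
  const mantissaLo = view.getUint32(4);           // low 32 significand bits
  return { sign, exponent, mantissaHi, mantissaLo };
}

console.log(decompose(6.022e23)); // { sign: 0, exponent: 78, ... }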

Why is JavaScript's number *display* for large numbers inaccurate?

So in JavaScript, 111111111111111111111 == 111111111111111110000. Just type any long number – at least about 17 digits – to see it in action ;-)
That is because JavaScript uses double-precision floating-point numbers, and certain very long numeric literals cannot be expressed accurately. Instead, those numbers get rounded to the nearest representable number. See e.g. What is JavaScript's highest integer value that a Number can go to without losing precision?
However, doing the math according to IEEE 754, I found out that every number inside [111111111111111106560, 111111111111111122944] will be replaced by 111111111111111114752 internally. But instead of showing this ...4752 number, it gets displayed as 111111111111111110000. So JavaScript is showing those trailing zeros, which obfuscates the real behavior. This is especially annoying because even numbers like 2^63, which are accurately representable, get "rounded" as described.
So my question is: Why does JavaScript behave like this when displaying the number?
JavaScript integers can only be represented exactly up to ±2^53, which is:
9007199254740992
One of your numbers is
111111111111111106560
which is considerably outside the range of numbers that can be accurately represented as an integer.
This follows from the IEEE 754 double-precision format:
Sign bit: 1 bit
Exponent width: 11 bits
Significand precision: 53 bits (52 explicitly stored)
EDIT
The display of numbers is sometimes rounded by the JavaScript engine, yes. However, that can be overridden using the toFixed method. (Warning: toFixed is known to be broken under some versions of IE.)
In your console, type:
111111111111111122944..toFixed(0)
"111111111111111114752"

parseFloat of string longer than 16 characters

Does parseFloat of a string have a limit to how many characters the string can be? I don't see anything about a limit here: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/parseFloat
But running the following in console seems to show results I wasn't expecting.
parseFloat('1111111111111111'); // 16 characters long
// result 1111111111111111
parseFloat('11111111111111111'); // 17 characters long
// result 11111111111111112
Can anyone break this down for me?
In JavaScript, floating-point numbers are stored as double-precision values. These have about 16 significant decimal digits (15 are guaranteed to survive a round trip), which means that a 17-digit number won't necessarily be stored exactly.
You can supply numbers of any length to parseFloat(), but it won't be possible to store anything larger than 1.79769×10^308, which is the largest possible value that can be stored in a double-precision variable.
I'd recommend reading this if you have time: What Every Computer Scientist Should Know About Floating-Point Arithmetic
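For example:

console.log(parseFloat('1'.repeat(17)));  // 11111111111111112 — rounded to the nearest double
console.log(parseFloat('9007199254740993') === 9007199254740992); // true — 2^53 + 1 is not representable
console.log(parseFloat('1e309'));         // Infinity — beyond Number.MAX_VALUE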
