How does JavaScript store a numeric value? - javascript

I am new to JavaScript programming and referring to Eloquent JavaScript, 3rd Edition by Marijn Haverbeke.
There is a statement in this book which reads like,
"JavaScript uses a fixed number of bits, 64 of them, to store a single number value. There are only so many patterns you can make with 64 bits, which means that the number of different numbers that can be represented is limited. With N decimal digits, you can represent 10^N numbers. Similarly, given 64 binary digits, you can represent 2^64 different numbers, which is about 18 Quintilian (an 18 with 18 zeros after it). That’s a lot."
Can someone help me with the actual meaning of this statement. I am confused as to how the values more than 2^64 are stored in the computer memory.

Can someone help me with the actual meaning of this statement. I am confused as to how the values more than 2^64 are stored in the computer memory.
Your question is related to more general concepts in computer science; for this particular question, JavaScript sits at a fairly high level.
Please review the basic concepts of memory and storage first:
https://study.com/academy/lesson/how-do-computers-store-data-memory-function.html
https://www.britannica.com/technology/computer-memory
https://www.reddit.com/r/askscience/comments/2kuu9e/how_do_computers_handle_extremely_large_numbers/
How do computers evaluate huge numbers?
Also, for JavaScript, please see this section of the ECMAScript specification:
Ref: https://www.ecma-international.org/ecma-262/5.1/#sec-8.5
The Number type has exactly 18437736874454810627 (that is, 2^64−2^53+3) values, representing the double-precision 64-bit format IEEE 754 values as specified in the IEEE Standard for Binary Floating-Point Arithmetic, except that the 9007199254740990 (that is, 2^53−2) distinct “Not-a-Number” values of the IEEE Standard are represented in ECMAScript as a single special NaN value. (Note that the NaN value is produced by the program expression NaN.) In some implementations, external code might be able to detect a difference between various Not-a-Number values, but such behaviour is implementation-dependent; to ECMAScript code, all NaN values are indistinguishable from each other.
There are two other special values, called positive Infinity and negative Infinity. For brevity, these values are also referred to for expository purposes by the symbols +∞ and −∞, respectively. (Note that these two infinite Number values are produced by the program expressions +Infinity (or simply Infinity) and -Infinity.)
The other 18437736874454810624 (that is, 2^64−2^53) values are called the finite numbers. Half of these are positive numbers and half are negative numbers; for every finite positive Number value there is a corresponding negative value having the same magnitude.
Note that there is both a positive zero and a negative zero. For brevity, these values are also referred to for expository purposes by the symbols +0 and −0, respectively. (Note that these two different zero Number values are produced by the program expressions +0 (or simply 0) and -0.)
The 18437736874454810622 (that is, 2^64−2^53−2) finite nonzero values are of two kinds:
18428729675200069632 (that is, 2^64−2^54) of them are normalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^53 but not less than 2^52, and e is an integer ranging from −1074 to 971, inclusive.
The remaining 9007199254740990 (that is, 2^53−2) values are denormalised, having the form
s × m × 2^e
where s is +1 or −1, m is a positive integer less than 2^52, and e is −1074.
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type (indeed, the integer 0 has two representations, +0 and -0).
A finite number has an odd significand if it is nonzero and the integer m used to express it (in one of the two forms shown above) is odd. Otherwise, it has an even significand.
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754 “round to nearest” mode.)
Some ECMAScript operators deal only with integers in the range −2^31 through 2^31−1, inclusive, or in the range 0 through 2^32−1, inclusive. These operators accept any value of the Number type but first convert each such value to one of 2^32 integer values. See the descriptions of the ToInt32 and ToUint32 operators in 9.5 and 9.6, respectively.
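A quick way to see these limits in practice (a minimal sketch; the exact output assumes a standard IEEE 754 double implementation):

// Integers are exact only up to 2^53; beyond that, neighbouring integers collapse.
console.log(Number.MAX_SAFE_INTEGER);     // 9007199254740991 (2^53 - 1)
console.log(2 ** 53 === 2 ** 53 + 1);     // true – the +1 can no longer be represented
// Values beyond the largest finite double overflow to Infinity.
console.log(Number.MAX_VALUE);            // 1.7976931348623157e+308
console.log(1e309);                       // Infinity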

You have probably learned about big numbers in mathematics.
For example, the Avogadro constant is 6.022 × 10**23.
Computers can also store numbers in this format.
Except for two things:
They store it as a binary number.
They would store the Avogadro constant as 0.6022 × 10**24; more precisely, they store two parts:
the precision: a value between 0 and 1 (0.6022); this usually takes 2-8 bytes
the magnitude/exponent of the number (24); this usually takes only 1 byte, because 2**256 is already a very big number.
As you can see, this method can be used to store an approximate value of a very big or very small number.
An example of inaccuracy: 0.1 + 0.2 == 0.30000000000000004
For performance reasons, most engines often use a plain integer format internally when it makes no difference in the results.
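A short console sketch of the same idea (the printed values assume standard IEEE 754 doubles):

const avogadro = 6.022e23;       // exponential (scientific) notation in JavaScript source
console.log(avogadro);           // 6.022e+23
console.log(0.1 + 0.2);          // 0.30000000000000004 – stored in binary, so slightly inexact
console.log(0.1 + 0.2 === 0.3);  // false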

Related

How are JavaScript decimal places defined? [duplicate]

In JavaScript, everyone knows the famous calculation: 0.1 + 0.2 = 0.30000000000000004. But why does JavaScript print this value instead of printing the more accurate and precise 0.300000000000000044408920985006?
The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. (You can request more or fewer digits by using the toPrecision method.)
JavaScript uses IEEE-754 basic 64-bit binary floating-point for its Number type. Using IEEE-754, the result of .1 + .2 is exactly 0.3000000000000000444089209850062616169452667236328125. This results from:
Converting “.1” to the nearest value representable in the Number type.
Converting “.2” to the nearest value representable in the Number type.
Adding the above two values and rounding the result to the nearest value representable in the Number type.
When formatting this Number value for display, “0.30000000000000004” has just enough significant digits to uniquely distinguish the value. To see this, observe that the neighboring values are:
0.299999999999999988897769753748434595763683319091796875,
0.3000000000000000444089209850062616169452667236328125, and
0.300000000000000099920072216264088638126850128173828125.
If the conversion to a decimal numeral produced only “0.3000000000000000”, it would be nearer to 0.299999999999999988897769753748434595763683319091796875 than to 0.3000000000000000444089209850062616169452667236328125. Therefore, another digit is needed. When we have that digit, “0.30000000000000004”, then the result is closer to 0.3000000000000000444089209850062616169452667236328125 than to either of its neighbors. Therefore, “0.30000000000000004” is the shortest decimal numeral (neglecting the leading “0” which is there for aesthetic purposes) that uniquely distinguishes which possible Number value the original value was.
This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, which is one of the steps in converting a Number value m to a decimal numeral for the ToString operation:
Otherwise, let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible.
The phrasing here is a bit imprecise. It took me a while to figure out that by “the Number value for s × 10^(n−k)”, the standard means the Number value that is the result of converting the mathematical value s × 10^(n−k) to the Number type (with the usual rounding). In this description, k is the number of significant digits that will be used, and this step is telling us to minimize k, so it says to use the smallest number of digits such that the numeral we produce will, when converted back to the Number type, produce the original number m.
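To see both the default shortest form and a longer one on request (a minimal sketch; the digit strings assume IEEE 754 doubles):

const sum = 0.1 + 0.2;
console.log(String(sum));                 // "0.30000000000000004" – just enough digits to round-trip
console.log(sum === 0.30000000000000004); // true – the short numeral converts back to the same Number
console.log(sum.toPrecision(21));         // "0.300000000000000044409" – more digits on request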

What are the chances of Math.random returning 0?

Like the asker of this question, I was wondering why Math.ceil(Math.random() * 10) was not preferred over Math.floor(Math.random() * 10) + 1, and found that it was because Math.random has a tiny (but relevant) chance of returning 0 exactly. But how tiny?
Further research told me that this random number is accurate to 16 decimal places... well, sort of. And it's the "sort of" that I'm curious about.
I understand that floating point numbers work differently from decimals. I struggle with the specifics though. If the number were a strict decimal value, I believe the chances would be one in ten billiard (or ten quadrillion, in the American system) - 1:10^16.
Is this correct, or have I messed up, or does the floating point thing make a difference?
JavaScript is a dialect of ECMAScript. The ECMAScript-262 standard fails to specify Math.random precisely. The relevant clause says:
Math.random ( )
Returns a Number value with positive sign, greater than or equal to +0𝔽 but strictly less than 1𝔽, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-defined algorithm or strategy. This function takes no arguments.
Each Math.random function created for distinct realms must produce a distinct sequence of values from successive calls.
In the absence of a complete specification, no definitive statement can be made about the probability of Math.random returning zero. Each ECMAScript implementation may choose a different algorithm and need not provide a truly uniform distribution.
ECMAScript uses the IEEE-754 basic 64-bit binary floating-point format for its Number type. In this format, the significand (fraction portion) of the number has 53 bits. Every floating-point number has the form s • f • 2^e, where s (for sign) is +1 or −1, f (for fraction) is the significand and is an integer in [0, 2^53), and e (for exponent) is an integer in [−1074, 971]. The number is said to be normalized if the high bit of f is set (so f is in [2^52, 2^53)). Since negative numbers are not a concern in this answer, let s be implicitly +1 for the rest of this answer.
One issue with distributing random numbers in [0, 1) is that the representable values are not evenly spaced. There are 2^52 representable values in [½, 1)—all those with f in [2^52, 2^53) and e = −53. And there are the same number of values in [¼, ½)—all those with f in [2^52, 2^53) and e = −54. Since there are the same number of numbers in this interval but the interval is half as long, the numbers are more closely spaced. Similarly, in [⅛, ¼), the spacing halves again. This continues until the exponent reaches −1074, at which point the normal numbers end with f = 2^52. The numbers smaller than that are said to be subnormal (or zero), with f in [0, 2^52) and e = −1074, and they are evenly spaced.
One choice about how to distribute the numbers for Math.random is to use only the set of evenly spaced numbers f • 2^−53 for f in [0, 2^53). This uses all the representable values in [½, 1), but only half the values in [¼, ½), one-fourth the values in [⅛, ¼), and so on. This is simple and avoids some oddities in the distribution. If implemented correctly, the probability zero is produced is one in 2^53.
Another choice is to use all the representable values in [0, 1), each with probability proportional to the distance from it to the next higher representable value. Thus, each representable number in [½, 1) would be chosen with probability 1/2^53, each representable number in [¼, ½) would be chosen with probability 1/2^54, each representable number in [⅛, ¼) would be chosen with probability 1/2^55, and so on. This distribution approximates a uniform distribution on the reals and provides finer precision where the floating-point format is finer. If implemented correctly, the probability zero is produced is one in 2^1074.
Another choice is to use all the representable values in [0, 1), each with probability proportional to the length of the segment in which the representable value is the nearest representable value of all the real numbers in the segment. I will omit discussion of some details of this distribution except to say it mimics the results one would get by choosing a real number with uniform distribution and then rounding it to a representable value using the round-to-nearest-ties-to-even rule. If implemented correctly, the probability zero is produced is one in 2^1075. (One problem with this distribution is that a uniform distribution over the reals in [0, 1) will sometimes produce a number so close to 1 that rounding produces 1. This then requires either that Math.random be allowed to return 1 or that the distribution be fudged in some way, perhaps by returning the next lower representable value instead of 1.)
I will note that the ECMAScript specification is sufficiently lax that one might assert that Math.random may distribute the numbers with equal probability for each representable value, ignoring the spacing between them. This would not mimic a uniform distribution over the real numbers at all, and I expect very few people would favor it. However, if implemented, the probability zero is returned is one in 1023 • 2^52, because there are 2^52 normalized numbers for each of the exponents from −53 to −1074 (1022 values of e), plus 2^52 subnormal or zero numbers.
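For the practical side of the original question, the usual pattern already copes with the zero case (a minimal sketch; the exact probability depends on the engine's algorithm):

// Math.floor(Math.random() * 10) + 1 yields 1..10 even if Math.random() returns exactly 0.
// Math.ceil(Math.random() * 10) would yield 0 in that (rare) case.
const roll = Math.floor(Math.random() * 10) + 1;
console.log(roll >= 1 && roll <= 10);  // true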

Number system in JavaScript

Division by zero is not an error in JavaScript: it simply returns infinity or negative infinity. There is one exception, however: zero divided by zero does not have a well-defined value, and the result of this operation is the special not-a-number value, printed as NaN. NaN also arises if you attempt to divide infinity by infinity, take the square root of a negative number, or use arithmetic operators with non-numeric operands that cannot be converted to numbers.
for Example
1. 0 === 0 returns true & 1 == 1 returns true
2. 0/0 returns NaN & 1/1 returns 1
Zero is a number and one is also a number?
I would like an explanation: why exactly does this happen in JavaScript?
Because JavaScript follows the IEEE 754 standard, which defines floating-point arithmetic and specifies this behavior.
Why does the standard specify NaN as the result of these operations? Because there is no sensible value to give them, so instead they have a well-defined insensible value.
Dividing anything by 0 is Infinity. That's a correct answer (from a computational point of view, not necessarily a mathematical one). Imagine doing the division on paper: you'll have an infinite number of operations, because you always keep subtracting 0.
The reason most things don't allow dividing by 0 is that they have no way to handle an infinite operation - you wouldn't want your machine crashing every time you tried dividing by 0 on your calculator.
There is a good video showing the above: an old mechanical calculator that only knows the rules of addition and subtraction (which is all multiplication and division really are). It never stops running, because it can always keep subtracting 0.
JavaScript tries to be nice to programmers who aren't mathematics experts.
Read more about the IEEE 754 design rational.
Ironically, they are both also of type number:
typeof NaN;      // 'number'
typeof Infinity; // 'number'
To answer your key question, this is how javascript works.
See the specification here
Applying the / Operator
The / operator performs division, producing the quotient of its operands. The left operand is the dividend and the right operand is the divisor. ECMAScript does not perform integer division. The operands and result of all division operations are double-precision floating-point numbers. The result of division is determined by the specification of IEEE 754 arithmetic:
If either operand is NaN, the result is NaN.
The sign of the result is positive if both operands have the same sign, negative if the operands have different signs.
Division of an infinity by an infinity results in NaN.
Division of an infinity by a zero results in an infinity. The sign is determined by the rule already stated above.
Division of an infinity by a nonzero finite value results in a signed infinity. The sign is determined by the rule already stated above.
Division of a finite value by an infinity results in zero. The sign is determined by the rule already stated above.
Division of a zero by a zero results in NaN; division of zero by any other finite value results in zero, with the sign determined by the rule already stated above.
Division of a nonzero finite value by a zero results in a signed infinity. The sign is determined by the rule already stated above.
In the remaining cases, where neither an infinity, nor a zero, nor NaN is involved, the quotient is computed and rounded to the nearest representable value using IEEE 754 round-to-nearest mode. If the magnitude is too large to represent, the operation overflows; the result is then an infinity of appropriate sign. If the magnitude is too small to represent, the operation underflows and the result is a zero of the appropriate sign. The ECMAScript language requires support of gradual underflow as defined by IEEE 754.
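A quick console check of a few of these rules (a minimal sketch; the printed values assume a standard IEEE 754 implementation):

console.log(1 / 0);                // Infinity
console.log(-1 / 0);               // -Infinity
console.log(0 / 0);                // NaN
console.log(Infinity / Infinity);  // NaN
console.log(5 / Infinity);         // 0
console.log(Math.sqrt(-1));        // NaN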
In mathematics, zero, symbolized by the numeric character 0, is both:
In a positional number system, a place indicator meaning "no units of this multiple." For example, in the decimal number 1,041, there is one unit in the thousands position, no units in the hundreds position, four units in the tens position, and one unit in the 1-9 position.
An independent value midway between +1 and -1.
In writing outside of mathematics, depending on the context, various denotative or connotative meanings for zero include "total failure," "absence," "nil," and "absolutely nothing." ("Nothing" is an even more abstract concept than "zero" and their meanings sometimes intersect.)
Brahmagupta developed the concept of the zero as an actual independent number, not just a place-holder, and wrote rules for adding and subtracting zero from other numbers. The Indian writings were passed on to al-Khwarizmi (from whose name we derive the term algorithm ) and thence to Leonardo Fibonacci and others who continued to develop the concept and the number.

How is the parseInt in JavaScript defined to handle large "numbers" - is there an ECMA leak? I got a wow here

The problem: JavaScript returns 20160107190308484 for parseInt("20160107190308485")
Not the question: I do not need a workaround; I got one.
Not the question II: I do not need an explanation why this can happen, and how numbers are stored and so on.
How are numbers as input for parseInt defined? Why do I get a wrong result, not a NaN?
I can only find in the documentation how parseInt handles literals and so on, and that only the first number in the string is interpreted, and ...
As W3C or MDN put it:
Note: Only the first number in the string is returned!
Note: Leading and trailing spaces are allowed.
Note: If the first character cannot be converted to a number, parseInt() returns NaN.
I can't find anything that says "123456789202232334234212312311" is not a "number", and nothing like "a number must be in the range of -BIGINT .. +BIGINT" or so. And I could not find any hint that errors are thrown.
I just get 20160107190308484 for "20160107190308485" (two browsers tested) and other pairs like:
20160107155044520, 20160107155044522
20160107002720970, 20160107002720967
20160107000953860, 20160107000953859
For me this feels like a hole in ECMA.
Is there a number defined that is allowed as input for parseInt?
For those who are interested in how this came about:
I stumbled over that problem with a small piece of logic that converts classes and ids into an object holding the information, like
class="book240 lang-en" id="book240page2212"
converting to an object like:
{
  book: 240,
  page: 2212,
  lang: "en"
}
Notice that the INTs are numbers, not strings.
Therefore I have a loop that does that generically:
for n in some_regexp_check_names
  # m holds a RegExp result; m[1] is the part of interest
  nr = parseInt(m[1])   # make it a number
  #--------- here the funny fault
  if nr > 0             # check it is a number > 0; all my ids are > 0
    out_obj[n] = nr     # take the number
  else
    out_obj[n] = m[1]   # take the string as is
All this is nice and works with "real strings" and "real (int) ids", and then I had a situation where semi-UUIDs were created dynamically out of a datetime: "temp-2016-01-07-19-03-08.485" - and still all was fine, but then I had the idea to remove all the constant characters from this (i.e. temp- and .) and BANG, because the string "20160107190308485" gives 20160107190308484 as parseInt.
And it took me only ~3 hours to find that error.
How are numbers as input for parseInt defined?
I think it's pretty clearly defined in the spec (§18.2.5):
Let mathInt be the mathematical integer value that is represented by Z in radix-R notation, using the letters A-Z and a-z for digits with values 10 through 35. (However, if R is 10 and Z contains more than 20 significant digits, every significant digit after the 20th may be replaced by a 0 digit, at the option of the implementation; and if R is not 2, 4, 8, 10, 16, or 32, then mathInt may be an implementation-dependent approximation to the mathematical integer value that is represented by Z in radix-R notation.)
Let number be the Number value for mathInt.
For that last step, the section for the Number type (§6.1.6) specifies:
In this specification, the phrase “the Number value for x” where x represents an exact nonzero real mathematical quantity (which might even be an irrational number such as π) means a Number value chosen in the following manner. Consider the set of all finite values of the Number type, with −0 removed and with two additional values added to it that are not representable in the Number type, namely 2^1024 (which is +1 × 2^53 × 2^971) and −2^1024 (which is −1 × 2^53 × 2^971). Choose the member of this set that is closest in value to x. If two values of the set are equally close, then the one with an even significand is chosen; for this purpose, the two extra values 2^1024 and −2^1024 are considered to have even significands. Finally, if 2^1024 was chosen, replace it with +∞; if −2^1024 was chosen, replace it with −∞; if +0 was chosen, replace it with −0 if and only if x is less than zero; any other chosen value is used unchanged. The result is the Number value for x. (This procedure corresponds exactly to the behaviour of the IEEE 754-2008 “round to nearest, ties to even” mode.)
Why do I get a wrong result not a NaN?
You get a rounded result (with the maximum precision available in that range) - and there's nothing wrong with that, your input is a valid number, not NaN.
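In other words (a minimal console sketch, assuming an IEEE 754-conformant engine):

console.log(parseInt("20160107190308485"));           // 20160107190308484 – rounded, not NaN
console.log(20160107190308485 === 20160107190308484); // true – both literals map to the same Number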
Although Bergi's answer is correct, I want to tell what my real mistake was:
Don't think of parseInt as a function that parses into an INTeger as known from C, i.e. a 16-, 32- or 64-bit value. It parses the number as long as there are valid digits, so if the number is too big (does not fit exactly into, e.g., 64 bits), you don't get an error; you just get a rounded value, as when working with floats. If you want to be on the safe side, check the result against Number.MAX_SAFE_INTEGER.
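A sketch of that check (Number.isSafeInteger and Number.MAX_SAFE_INTEGER have been standard since ES2015):

const nr = parseInt("20160107190308485", 10);
if (Number.isSafeInteger(nr)) {
  console.log("exact:", nr);
} else {
  console.log("too large to be exact; keep the original string instead");
}
console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991 (2^53 - 1)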
Integers are exactly representable in JavaScript only up to 2^53 == 9 007 199 254 740 992; above that, precision is lost. This is because Numbers are stored as floating point with a 52-bit mantissa (53 significant bits including the implicit leading bit).
Read more: http://web.archive.org/web/20161117214618/http://bjola.ca/coding/largest-integer-in-javascript/
