I'm doing a server-side calculation that needs to generate (with * and + operations) and compare 40-bit integers. I'm aware that at that size the V8 engine stores the numbers as doubles rather than ints. Can I rely on these numbers to be generated and compared correctly?
My intuition says yes - doubles shouldn't have trouble with that - but I'm not sure how to check or where to find information on this.
Yes.
A JavaScript Number, which is a 64-bit IEEE 754 floating-point value, can store integers from -2^53 to 2^53 without loss of precision, since a double can store up to 53 bits of mantissa (52 of them explicitly).
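As a quick sanity check (a minimal sketch; Number.isSafeInteger and Number.MAX_SAFE_INTEGER are standard JavaScript):

var x = Math.pow(2, 40); // 1099511627776 -- a 40-bit value
console.log(Number.isSafeInteger(x * 4 + 3)); // true: still well below 2^53
console.log(Number.MAX_SAFE_INTEGER === Math.pow(2, 53) - 1); // true
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1); // true -- precision is only lost past 2^53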
References:
ECMA-262: 4.3.19 Number value
Double-precision floating point numbers (Wikipedia)
Related
Is the value of Number.MAX_VALUE a limitation only for numbers of data type Number?
Doesn't BigInt have the same upper bound?
Another question: does BigInt represent numbers in the 64-bit IEEE-754 format?
Since the BigInt format represents large numbers, can the BigInt value take up more space than 64 bits?
Doesn't BigInt have the same upper bound?
No, BigInts have no specified limit. That said, realistically, there will always be implementation-defined limits (just like for Strings, Arrays, etc.). Currently, Chrome allows a billion bits, and Firefox allows a million bits. The expectation is that most realistically-occurring BigInts won't run into this limit.
Does BigInt represent number in 64-bit format IEEE-754?
No, IEEE-754 is a floating-point format, so it's not useful for BigInts, which have integer values. Besides, they have "unlimited" precision.
can the BigInt value take up more space than 64 bits?
Yes, BigInts can consume a lot more than 64 bits. For example, a BigInt with a thousand bits (such as 2n ** 999n) will need (about) a thousand bits of memory (likely rounded up to the next multiple of 4 or 8 bytes, plus a small object header, specifics depend on the engine).
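For illustration (a minimal sketch using standard BigInt syntax; requires an engine with BigInt support):

const big = 2n ** 999n; // a BigInt with 1000 bits
console.log(big.toString(2).length); // 1000
// Unlike Number, BigInt arithmetic stays exact at any size:
console.log(big + 1n === big); // false
console.log(2n ** 53n === 2n ** 53n + 1n); // false -- no rounding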
I just started learning JavaScript (from Professional JavaScript for Web Developers, 3rd Edition), and the text variously says that integers are:
[1] Stored as IEEE-754 format (pg. 35).
[2] Have a -0 value (pg. 36).
[3] Stored as a 32-bit two's complement number (pg. 50).
[4] Stored as a IEEE-754 64-bits number (pg. 49).
These are inconsistent definitions. What is the format of an integer in JavaScript?
thanks
Here is the specification:
http://ecma262-5.com/ELS5_HTML.htm#Section_8.5
According to the specification, JavaScript doesn't have integers, only floating-point numbers.
IEEE-754 floats do have a -0 value, so there's no inconsistency between 1, 2 and 4. Only 3 appears to be inconsistent.
JavaScript numbers are always stored as IEEE-754 64-bit floating-point numbers. In some expressions (usually involving bitwise operators) they may be temporarily converted to 32-bit integers and then converted back to 64-bit floats, but they're always stored as floats. I assume this conversion is what #3 is actually referring to.
A JIT compiler for Javascript may use actual integers in the compiled code as an optimization (especially if asm.js is involved), but during interpretation it's still all floats.
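You can observe the temporary 32-bit conversion with any bitwise operator (a small sketch):

// Bitwise operators apply ToInt32: the double is truncated to a signed
// 32-bit integer, operated on, then converted back to a double:
console.log(Math.pow(2, 31) | 0); // -2147483648 (wrapped around)
console.log((Math.pow(2, 32) + 5) | 0); // 5 (high bits discarded)
console.log(3.7 | 0); // 3 (fraction truncated)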
I'm working with currency values, so it's important to calculate accurately.
My current code breaks down a string into tokens then evaluates them. For decimal values, it first converts them to integers, does the calculation then converts back to a decimal.
For example if I had the expression
"0.1 * 0.2"
The first step would be to break it down into the tokens 0.1, * and 0.2. It then does some other malarkey and figures out it needs to multiply 0.1 and 0.2 together. The calculation would be
1 * 2 / 100
The calculation is done as integers to prevent JavaScript rounding error, i.e.
0.1 * 0.2 == 0.020000000000000004
My colleague's argument is that by converting from a string to a float initially, you've already lost precision. So my question is: what are the upper and lower bounds, on either side of 0, where a number cannot be represented by JavaScript exactly? So that I can check for this and handle it, if that's the right approach.
The problem you describe isn't a bounds problem. IEEE-754 double-precision binary floating point numbers can represent the value 0.5 perfectly, but cannot represent, say, 0.1 perfectly. Note that those have the same number of digits. The issue isn't the number of places of precision, it's the fact that the number type uses a different number base than we do. It uses base 2, rather than our base 10.
Just as we can't accurately represent 1 / 3 in our base 10 system, certain numbers cannot be accurately represented in IEEE-754's base 2 system.
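You can see this directly: Number.prototype.toString(2) prints the base-2 digits of the (already-rounded) stored value:

console.log((0.5).toString(2)); // "0.1" -- exact in base 2
console.log((0.1).toString(2)); // "0.000110011001100110011..." -- the repeating 0011 pattern has to be cut off and rounded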
In 2008, the IEEE released a revision of IEEE-754 (the standard defines several formats; the "double-precision binary" one used by JS is just one of them) that adds a "decimal64" format, which uses base 10 rather than base 2, for applications that need to handle rounding the same way we do (financial apps and such). That may start seeping into programming languages; for now, IEEE-754 single-precision and double-precision binary are the typical formats, along with decimal types not based on that revision, such as C#'s decimal.
In the meantime, there are "big decimal" libraries for JavaScript, such as big.js (haven't used it, no affiliation). If you search for "bignumber in JavaScript" or "JavaScript exact floating point" you should find multiple options.
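For instance, with big.js (a sketch, assuming big.js's Big constructor and its times method; check the library's docs):

// npm install big.js
var Big = require('big.js'); // assumes the big.js library
var result = new Big('0.1').times('0.2'); // operands passed as strings, so no binary rounding
console.log(result.toString()); // "0.02" -- exact decimal arithmetic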
I don't think there is any single number you can point to as the first number that JavaScript can't represent accurately. It is all about decimal fractions and the loss of precision when they're converted to binary.
Also, note that there is no decimal data type in JavaScript - the only numeric data type is floating-point, and JavaScript uses a 64-bit floating-point representation.
Floating-point rounding errors. 0.1 cannot be represented as accurately in base 2 as in base 10, because base 2 is missing the prime factor 5. Note also that all floating-point math behaves this way, and it is based on the IEEE 754 standard.
If I'm not mistaken, all JS engines use IEEE 754 doubles to handle floating-point numbers.
Take a look at the http://en.wikipedia.org/wiki/Double-precision_floating-point_format page, section 'Double-precision examples'. For numbers close to 0, take a closer look at the subnormal formula: value = (-1)^sign × 2^(1-1023) × 0.fraction.
So, the closest-to-zero number that can be represented in JavaScript's floating-point datatype is 2^(1-1023) × 2^(-52) = 2^(-1022) × 2^(-52) = 2^(-1074).
Empirically:
Math.pow(2,-1074) = 5e-324
Math.pow(2,-1075) = 0
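This smallest subnormal is also exposed as a standard constant:

Number.MIN_VALUE === Math.pow(2, -1074) // true (5e-324)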
I notice you are doing currency calculations. You may find it better to use integers here: use an integer to represent the price in cents. This eliminates the problems with floating point, which is more suitable for scientific calculations. It is basically equivalent to using a BigDecimal in Java.
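A minimal sketch of that approach (the helper names here are made up for illustration, and it handles positive amounts only):

// Keep money as an integer number of cents; convert only for display.
function toCents(str) { // "19.99" -> 1999 (illustrative; no sign handling or validation)
  var parts = str.split('.');
  var frac = (parts[1] || '') + '00';
  return Number(parts[0]) * 100 + Number(frac.slice(0, 2));
}
function formatCents(cents) { // 1999 -> "19.99"
  return (cents / 100).toFixed(2);
}
var total = toCents('19.99') + toCents('0.01'); // 2000 -- exact integer math
console.log(formatCents(total)); // "20.00"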
I'm a bit confused about the size of a javascript number in a 32bit browser. Is it still represented as a 64bit number with max value at 2^53?
The other answers couldn't be more wrong; it does depend on the engine.
In V8 (Google Chrome, Opera, Node.js) 32-bit:
Integers that fit in a 31-bit signed representation (from -1073741824 to 1073741823) are represented directly by embedding them in pointers.
Any other number is generally represented as a heap object that has a 64-bit double as a field for the numeric value (think of Java's Double wrapper). In optimized functions, such numbers can be temporarily stored directly on the stack and in registers. Also, certain kinds of arrays can store doubles directly and "permanently".
In V8 64-bit:
Same as 32-bit, except integers can now fit in a 32-bit signed representation (from -2147483648 to 2147483647) instead of 31-bit.
Yes. A number in JavaScript is a double-precision floating-point number. It's the same regardless of the platform it runs on.
I suppose my answer lies in the MDN section on 64-bit integers.
Possible Duplicate:
Is JavaScript's Math broken?
...and what can I do against that? After the calculation I want to have a String representation of the result, and because of that the zeros (plus the one) are disturbing. toFixed() is not a perfect solution because I want to have all the (correct) decimals of the potential result, and I do not know the result (or the number of decimals) beforehand. So the calculation shown above is only an example.
The short answer is: computers represent numbers in binary, so they can't represent all base 10 fractions perfectly as JavaScript numbers.
The (slightly) longer answer is that, because JavaScript numbers are 64-bit floating-point (equivalent to the double type in Java, C#, etc.), as Wikipedia describes here, there is a limited number of significand bits. For this reason, the precision of this base-2 number is limited.
As an analogy, consider representing the fraction 1/3 in base 10. Say that you only have so many digits to use. That means that you can never ever ever represent 1/3 exactly in base 10, because 1/3 requires an infinite number of digits to represent in base 10. Similarly, you can never represent 1/10 perfectly in a finite number of bits, because 1/10 requires an infinite number of bits to represent exactly. What you're seeing here is a fraction (58/10) that a computer can't represent exactly in a limited number of bits, so the computer is coming as close as it can.
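You can ask JavaScript to print more digits of the stored approximation to see this in action:

console.log(0.1 + 0.2); // 0.30000000000000004
console.log((0.1).toPrecision(21)); // "0.100000000000000005551" -- the double nearest to 0.1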
Squeezing infinitely many real numbers into a finite number of bits requires an approximate representation. Although there are infinitely many integers, in most programs the result of integer computations can be stored in 32 bits. In contrast, given any fixed number of bits, most calculations with real numbers will produce quantities that cannot be exactly represented using that many bits. Therefore the result of a floating-point calculation must often be rounded in order to fit back into its finite representation. This rounding error is the characteristic feature of floating-point computation.
The above is from http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html (What Every Computer Scientist Should Know About Floating-Point Arithmetic).