Would I be correct in assuming that the reason JavaScript only supports one binary representation for NaN is that it allows interpreters to speed up operations involving NaNs by checking for that specific bit pattern as a 64-bit integer rather than relying on the FPU to handle them?
Stephen Canon's comment spurred me to run some timing tests, which I guess I should have done in the first place. I'm posting the results as an answer in case they're of interest to anyone else...
Using my Intel Core2 Quad CPU (2.33GHz) PC I compared
for(i=0;i<100000000;++i) x += 1.0;
in C++ and JavaScript with x first equal to 0.0 and then to NaN.
C++ x==0.0: 124ms
C++ x==NaN: 11888ms
JS x==0.0: 268ms
JS x==NaN: 432ms
So it seems that JavaScript is exploiting the fact that it already has to dynamically dispatch arithmetic operators to treat NaN as a special case.
I'm guessing that it's a hidden type that has no data, hence only one binary representation for NaN.
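For reference, the JavaScript side can be timed with something like the following (illustrative only; the helper name and Date.now() timing are my own choices here, and absolute numbers vary by machine and engine):

    function timeAccumulate(x) {
        const start = Date.now();
        for (let i = 0; i < 100000000; ++i) x += 1.0;  // the loop being measured
        return { result: x, ms: Date.now() - start };
    }

    console.log(timeAccumulate(0.0));  // baseline: ordinary double additions
    console.log(timeAccumulate(NaN));  // NaN case: result stays NaN, only the time differs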
I'm wondering if someone might be able to explain a specific aspect of the JavaScript BigInt implementation to me.
The overall implementation I understand: rather than operating in base 10, it builds an array representing the digits, effectively operating in base 2^32 or 2^64 depending on the build architecture.
What I'm curious about is the display/console.log implementation for this type. It's incredibly fast for most common cases, to the point where if you didn't know anything about the implementation you'd probably assume it was native. But, knowing what I do about the implementation, it's incredible to me that it can do the decimal conversion and string concatenation as quickly as it does, and I'm deeply curious how it works.
A moderate look into bigint.cc and bigint.h in the Chromium source has only confused me further, as there are a number of methods whose signatures are defined, but whose implementations I can't seem to find.
I'd appreciate even being pointed to another spot in the Chromium source which contains the decimal cast implementation.
(V8 developer here.)
@Bergi basically provided the relevant links already, so just to sum it up:
Formatting a binary number as a decimal string is a "base conversion", and its basic building block is:
while (number > 0) {
  next_char = "0123456789"[number % 10];
  number = number / 10;  // Truncating integer division.
}
(Assuming that next_char is also written into some string backing store; this string is being built up from the right.)
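For completeness, a runnable JavaScript sketch of the same loop (illustrative, not V8's actual code; BigInt is used here so the division truncates exactly):

    function toDecimalString(number) {  // expects a non-negative BigInt
        if (number === 0n) return "0";
        let result = "";
        while (number > 0n) {
            result = "0123456789"[Number(number % 10n)] + result;  // build from the right
            number = number / 10n;  // BigInt division truncates
        }
        return result;
    }

    console.log(toDecimalString(1234567890123456789012345678901234567890n));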
Special-cased for the common situation that the BigInt only had one 64-bit "digit" to begin with, you can find this algorithm in code here.
The generalization for more digits and non-decimal radixes is here; it's the same algorithm.
This algorithm runs sufficiently fast for sufficiently small BigInts; its problem is that it scales quadratically with the length of the BigInt. So for large BigInts (where some initial overhead easily pays for itself due to enabling better scaling), we have a divide-and-conquer implementation that's built on better-scaling division and multiplication algorithms.
When the requested radix is a power of two, then no such heavy machinery is necessary, because a linear-time implementation is easy. That's why some_bigint.toString(16) is and always will be much faster than some_bigint.toString() (at least for large BigInts), so when you need de/serialization rather than human readability, hex strings are preferable for performance.
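As a rough illustration of that last point (how much faster depends on the engine and the BigInt's size):

    const big = 2n ** 100000n;
    const hex = big.toString(16);             // linear time: each hex digit maps to 4 bits
    const dec = big.toString();               // decimal: needs a genuine base conversion
    console.log(BigInt("0x" + hex) === big);  // true: hex round-trips losslessly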
if you didn't know anything about the implementation you'd probably assume it was native
What does that even mean?
As the question here states, even the newest ECMAScript (ES8) has no support for 64-bit integers.
But the stage 3 BigInt proposal looks promising, and I expect it will be added to the JS spec sometime soon.
However, even according to the proposal, we have to use a special constructor for declaring big numbers. (Q1) What is the technical reason for being unable to represent big numbers in the general way?
let bigNum = 2 ** 64 // Why can't JS do this without losing precision? (at least in future)
I know that JavaScript represents all numbers using IEEE-754 double-precision (64-bit) floating point, and that this causes the problem.
(Q2) Why can't Javascript represent all numbers using some other standard which doesn't lose precision?
(Q3) What complications would arise if Javascript actually did that?
Edit: As stated by T.J., we can use a suffix instead of a constructor, but I still feel that's not exactly general notation.
(Q1) What is the technical reason for being unable to represent big numbers in the general way?
Breaking the web. The fundamentals of JavaScript numbers cannot be changed now, 20-odd years after they were originally defined. There's also the issue of performance: JavaScript's current numbers (IEEE-754 binary double [64-bit] precision) are very fast floating point thanks to being built into CPUs and math coprocessors. The cost of that speed is precision; the cost of arbitrary precision (or a dramatically larger precise range) is performance.
Someday in the future, perhaps JavaScript will get IEEE-754 64-bit or even 128-bit decimal floating point numbers (see here and here), if those formats (introduced in 2008) diffuse into the ecosystem and get hardware support. But that's speculation on my part. :-)
(Q2) Why can't Javascript represent all numbers using some other standard which doesn't lose precision?
See Q1. :-)
(Q3) What complications would arise if Javascript actually did that?
See Q1. :-)
Even according to the proposal, we have to use a special constructor for declaring big numbers.
If you want 64-bit specifically. If you just want BigInts, the proposal includes a new notation, the n suffix, for that: 2n is a BigInt 2. So with BigInts, your example would be
let bigNum = 2n ** 64n;
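To make the difference concrete:

    let asNumber = 2 ** 64;    // a double; prints as 18446744073709552000
    let asBigInt = 2n ** 64n;  // exact: 18446744073709551616n

    console.log(asNumber === asNumber + 1);   // true: the +1 is lost below the double's precision
    console.log(asBigInt === asBigInt + 1n);  // false: BigInt keeps every digit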
I found a VB script in an Excel file that implements Runge-Kutta integration for solving a differential equation. I converted the implementation line by line to JavaScript. However, the JavaScript program does not yield exactly the same results as the VB one... The numbers are close enough to be marginally acceptable, but I trust that the VB implementation is correct, as it was part of a PhD dissertation. So this leads me to believe that my JavaScript implementation, running on Node.js, suffers from some rounding ghosts.
In particular, I have noticed that as I continually add 0.01 to my time counter, the values are eventually unable to be represented properly, and I get values like 5.299999999999999934 instead of 5.3. This makes sense to me, as I have read about this quirk of Javascript in some books.
My Questions Are
Is my observation correct that Visual Basic does not suffer from this same precision shortcoming?
Does this mean plain vanilla Javascript math is inherently less accurate than plain vanilla VB math? (Without using other math libraries).
If I clamp my time variable, as in, force the value to be 5.30, instead of 5.299999999, will this actually make my results more correct?
edit: Using a Math.round(X*100)/100 clamp changes the output of all my time values to have only 2 decimal places, but does not actually change the values of any computations.
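For reference, the drift is easy to reproduce in isolation (a standalone snippet, not my Runge-Kutta code):

    // 0.01 has no exact binary representation, so repeatedly adding it drifts.
    let t = 0;
    for (let i = 0; i < 530; ++i) t += 0.01;
    console.log(t);                          // prints something slightly off from 5.3
    console.log(Math.round(t * 100) / 100);  // 5.3 (the nearest double to 5.3)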
JavaScript uses IEEE 754 double-precision floating-point numbers...just like Visual Basic. The only difference is that JavaScript uses doubles for all numbers (that is, it doesn't have an integer type, or a single-precision float).
If a calculation in VB uses numbers of type Double in its calculations, it should have the same accuracy and precision as the equivalent calculation in JavaScript.
The upshot is that VB is in no way more accurate with respect to numerical calculations than JavaScript. Whenever you're dealing with floating-point values, though, you have to be careful.
You should probably read David Goldberg's excellent article, What Every Computer Scientist Should Know About Floating-Point Arithmetic.
The chance of errors multiplying is increased with the number of summations, and there are a LOT of summations in techniques like Runge-Kutta, so the algorithm has to be constructed very carefully to take this into account. The definitive book on the subject is the time-tested Numerical Recipes, which covers Runge-Kutta, and takes into account floating-point errors.
Has anyone faced this Math.js auto-approximation issue and found a workaround for it?
If I enter any number with more than 18 digits, the library returns an approximate value, not the exact value. For example, if a user enters "03030130000309293689" it returns "3030130000309293600", and when a user enters "3030130000309293799" it also returns "3030130000309293600". Can we stop this approximation? Is this a bug, and if not, how can I avoid the approximation?
Due to this approximation, if a user enters "03030130000309293695 == 03030130000309293799" it will always return true, which is totally wrong.
GitHub: https://github.com/josdejong/mathjs
We can try this at http://mathjs.org/ (in the demo notepad).
This is released for production!
I think that whenever a user enters something like "03030130000309293695 == 03030130000309293799", with plain numbers on both sides, we could fall back to string comparison; all other cases would still be handled by the approximation. I say this because when I use the same library to compute "73712347274723714284 * 73712347274723713000", it gives the result in scientific notation.
03030130000309293695 and 03030130000309293799 are pretty much the same number.
HOW?
According to this answer, the limit of a JS number is 9007199254740992 (2^53). Both of your numbers are greater than that, so precision is lost. You probably need to use a library like Big.js.
It's not a library issue; it's a language architecture issue. You can even open your browser console and type in your comparison to see that it comes out true.
This is not really a problem with Math.js but a result of how numbers work in JavaScript. JavaScript uses 64-bit binary floating-point numbers (known as double in C). As such, it has only 53 bits of precision to store the significant digits of your number.
I've written an explanation here: Javascript number gets another value
You can read the Wikipedia page on the 64-bit double format for more detail: http://en.wikipedia.org/wiki/Double_precision_floating-point_format
Now for the second part of your question:
If not, then how can I avoid the approximation?
There are several JavaScript libraries that implement big numbers:
For the browser there's this: https://github.com/MikeMcl/bignumber.js
It's written in pure JavaScript, so it should also be usable in Node.js.
For node.js there's this: https://github.com/justmoon/node-bignum
It's a wrapper around the big-number library used by OpenSSL. Because it's written in C it can't be loaded in the browser, but it should be faster and perhaps more memory-efficient on Node.js.
The latest version of math.js has support for bignumbers, see docs:
https://github.com/josdejong/mathjs/blob/master/docs/datatypes/bignumbers.md
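Based on those docs, a rough sketch of how the comparison from the question could look with BigNumbers (treat this as an illustration; the exact API may differ between math.js versions):

    const math = require('mathjs');

    // Build the values from strings so no precision is lost before math.js sees them.
    const a = math.bignumber('03030130000309293695');
    const b = math.bignumber('03030130000309293799');

    console.log(math.equal(a, b));  // false: all digits are kept
    console.log(Number('3030130000309293695') === Number('3030130000309293799'));  // true: plain doubles collapse them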
I'm currently writing a compiler for a small language that compiles to JavaScript. In this language, I'd quite like to have integers, but JavaScript only supports Number, which is a double-precision floating point value. So, what's the most efficient way to implement integers in JavaScript? And how efficient is this compared to just using Number?
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
So, what's the most efficient way to implement integers in JavaScript?
The primitive number type is as efficient as it gets. Many modern JS engines support JIT compilation, so it should be almost as efficient as native floating-point arithmetic.
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
You can achieve the semantics of standard 32-bit integer arithmetic by noting that JavaScript converts "numbers" to 32-bit integers for bitwise operations. >>> (unsigned right shift) converts its operand to an unsigned 32-bit integer while the rest (all other shifts and bitwise AND/OR) convert their operand to a signed 32-bit integer. For example:
0xFFFFFFFF | 0 yields -1 (signed cast)
(0xFFFFFFFF + 1) | 0 yields 0 (overflow)
-1 >>> 0 yields 0xFFFFFFFF (unsigned cast)
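Putting that together, a sketch of how a compiler might emit wrapping 32-bit arithmetic (the helper names are made up):

    const INT32_MAX = 0x7FFFFFFF;    //  2147483647
    const INT32_MIN = -0x80000000;   // -2147483648

    function toInt32(x) { return x | 0; }            // truncate and wrap to signed 32-bit
    function addInt32(a, b) { return (a + b) | 0; }  // wrapping addition

    console.log(addInt32(INT32_MAX, 1) === INT32_MIN);  // true: INT_MAX + 1 wraps to INT_MIN
    console.log(toInt32(0xFFFFFFFF));                   // -1, matching the signed cast above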
I found this implementation of BigIntegers in Javascript: http://www-cs-students.stanford.edu/~tjw/jsbn/
Perhaps this will help?
Edit: Also, the Google Closure library implements 64-bit integers:
http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/math/long.js
These are essentially just convenience objects, though, and won't do anything to improve the efficiency of the fundamental data type.
On a modern CPU, if you restrict your integer values to the range ±2^52, then using a double will be barely less efficient than using a long.
The IEEE 754 double type has a 53-bit significand, so you can easily represent the 32-bit integer range and then some.
In any event, the rest of Javascript will be much more of a bottleneck than the individual CPU instructions used to handle arithmetic.
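For instance, the point at which doubles stop representing consecutive integers exactly is easy to see:

    console.log(Number.MAX_SAFE_INTEGER);  // 9007199254740991 (2^53 - 1)
    console.log(2 ** 52 === 2 ** 52 + 1);  // false: still exact at 2^52
    console.log(2 ** 53 === 2 ** 53 + 1);  // true: consecutive integers start to collapse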
All Numbers are Numbers. There is no way around this. JavaScript has no byte or int type. Either deal with the limitations or use a lower-level language to write your compiler in.
The only sensible option if you want to achieve this is to edit one of the JavaScript interpreters (say V8) and extend JS to allow access to native C bytes.
Well, you can choose JavaScript's number type, which is most likely computed using your CPU's floating-point primitives, or you can layer a complete package of operators and functions and whatnot on top of an emulated series of bits...?
... If performance and efficiency is your concern, stick with the doubles.
The most efficient would be to use numbers, and add operations to make sure that operations on the simulated integers would give an integer result. A division for example would have to be rounded down, and a multiplication would either have to be checked for overflow or masked down to fit in the integer range.
This of course means that floating point operations in your language will be considerably faster than integer operations, defeating most of the purpose of having an integer type in the first place.
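A rough sketch of what such integer-result operations could look like (illustrative; depending on the source language you may want flooring via Math.floor(a / b) rather than truncation):

    function idiv(a, b) { return (a / b) | 0; }        // truncating division
    function imul32(a, b) { return Math.imul(a, b); }  // 32-bit multiply with wraparound

    console.log(idiv(7, 2));             //  3, not 3.5
    console.log(idiv(-7, 2));            // -3 (truncates toward zero)
    console.log(imul32(0x7FFFFFFF, 2));  // -2: the product wrapped around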
Note that ECMA-262 Edition 3 added Number.prototype.toFixed, which takes a precision argument telling how many digits after the decimal point to show. Use this method well and you won't mind the disparity between finite precision base 2 and the "arbitrary" or "appropriate" precision base 10 that we use every day.
- Brendan Eich
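To illustrate, toFixed controls only the number of displayed digits and returns a string:

    const x = 0.1 + 0.2;
    console.log(x);             // 0.30000000000000004
    console.log(x.toFixed(2));  // "0.30"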