I'm a bit confused about the size of a JavaScript number in a 32-bit browser. Is it still represented as a 64-bit number with a max value of 2^53?
The existing answers couldn't be more wrong; it does depend on the engine.
In V8 (Google Chrome, Opera, Node.js) 32-bit:
Integers that fit in a 31-bit signed representation (from -1073741824 to 1073741823) are represented directly by embedding them in the pointer.
Any other number is generally represented as a heap object that has a 64-bit double as a field for the numeric value (think of Java's Double wrapper). In optimized functions, such numbers can be temporarily stored directly on the stack and in registers. Also, certain kinds of arrays can store doubles directly, "permanently".
In V8 64-bit:
Same as 32-bit, except integers can now fit in a 32-bit signed representation (from -2147483648 to 2147483647) instead of 31-bit.
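For illustration, the boundary values on 32-bit V8 look like this (the representation difference is purely internal; JavaScript code can't observe it):
console.log(2**30 - 1) // 1073741823: fits the 31-bit signed range, stored directly in the pointer
console.log(2**30)     // 1073741824: exceeds it, so it is boxed as a heap number on 32-bit V8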
Yes. A number in JavaScript is a double-precision floating-point number. It's the same regardless of the platform it runs on.
I suppose my answer lies in MDN's section on 64-bit integers.
Related
I'm trying to write a function that would fetch me the number of decimal places after the decimal point in a floating-point literal, using the answer from here as a reference.
Although this seems to work fine when tried in the browser consoles, in the Node.js environment while running test cases, the precision is truncated to only 14 digits.
let data = 123.834756380650877834678
console.log(data) // 123.83475638065088
And the function returns 14 as the answer.
Why is the rounding off happening at rest? Is it a default behavior?
The floating-point format used in JavaScript (and presumably Node.js) is IEEE-754 binary64. When 123.834756380650877834678 is used in source code, it is converted to the nearest representable value, which is 123.834756380650873097692965529859066009521484375.
When this is converted to a string with default formatting, JavaScript uses just enough digits to uniquely distinguish the value. For 123.834756380650873097692965529859066009521484375, this should produce “123.83475638065087”. If you are getting “123.83475638065088”, which differs in the last digit, then the software you are using does not conform to the JavaScript specification (ECMAScript).
In any case, the binary64 format does not have sufficient precision to preserve the information that the original numeral, “123.834756380650877834678”, has 21 digits after the decimal point.
The code you link to also does not and cannot compute the number of digits in an original numeral. It computes the number of digits needed to uniquely distinguish the value represented after conversion to binary64. For sufficiently short numerals without trailing zeros after the decimal point, this is the same as the number of digits after the decimal point in the original numeral. For others, it may not be.
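As a quick illustration, you can ask for more digits of the stored value explicitly (toFixed accepts at least 20 fraction digits in every engine, and up to 100 in newer ones):
let data = 123.834756380650877834678
console.log(data.toFixed(20)) // "123.83475638065087309769" – the stored binary64 value, not the original literal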
It is the default behavior of JavaScript, and it will be the same in Node.js.
In JS, at most 17 significant digits are needed to uniquely represent a Number.
For more details, take a look here.
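For example (illustrative), 17 significant digits are enough to round-trip any double:
console.log((0.1).toPrecision(17))               // "0.10000000000000001"
console.log(Number((0.1).toPrecision(17)) === 0.1) // true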
In JavaScript, there's only one type for all different kinds of numbers. Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Generally, no. There are some possible performance implications when a number doesn't fit in a 31-bit signed integer.
A tour of V8: object representation explains:
According to the spec, all numbers in JavaScript are 64-bit floating point doubles. We frequently work with integers though, so V8 represents numbers with 31-bit signed integers whenever possible (the low bit is always 0; this helps the garbage collector distinguish numbers from pointers). So objects with the fast small integers elements kind only contain this type of number. If we want to store a fractional number or a larger integer or a special value like -0, then we need to upgrade the array to fast doubles. This involves a potentially expensive copy-and-convert operation, but it doesn't happen often in practice. fast doubles objects are still pretty fast because all of the numbers are stored in an unboxed representation. If we want to store any other kind of value, e.g., a string or an object, we must upgrade to a general array of fast elements.
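A rough sketch of the transitions described in that quote (these element kinds are internal to V8 and not observable from JavaScript):
const a = [1, 2, 3]   // fast small-integer elements
a.push(4.5)           // upgraded to fast doubles (unboxed 64-bit values)
a.push('five')        // upgraded to generic fast elements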
Does the amount of decimals in the numbers used (precision) affect performance especially in JavaScript? If it does, how?
No. The number type in JavaScript is a 64-bit floating-point value with base 2 and always has the same precision. The computer works on that data bit by bit, and it doesn't matter whether that data represents something that looks simple to a human like 1.0 or something seemingly complicated like 123423.5645632. In fact, for base-2 floats, the 'human' values are just as 'hard', because 1.1 is truly represented by a much longer number (something like 1.1000000000000000888). All this doesn't matter, because the computer truly operates on 64 ones and zeros. There are always some arcane exceptions in floating point, but those usually don't matter in practice.
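You can see the longer representation of 1.1 directly (illustrative):
console.log((1.1).toFixed(20)) // "1.10000000000000008882"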
How about saving numbers in MongoDB: Do precise numbers take more space than less precise ones?
Decimal numbers are stored as doubles (64-bit), and it doesn't matter whether that is 1.0 or 1.1221234423. Again, the number of bits is constant for these data types.
The same is true for ints, but MongoDB has support for both 32- and 64-bit ints. So NumberLong is indeed larger than regular 32-bit ints and just as large as doubles.
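A quick sketch in the mongo shell (the collection name is made up; NumberInt and NumberLong are the shell helpers for 32-bit and 64-bit ints):
db.sizes.insertOne({ d: 1.1221234423, i: NumberInt(42), l: NumberLong("9007199254740993") })
// d is stored as an 8-byte double, i as a 4-byte int32, l as an 8-byte int64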
I just started reading Javascript (Professional JavaScript for Web Developers, 3rd Edition) and the text variously says that integers are:
[1] Stored as IEEE-754 format (pg. 35).
[2] Have a -0 value (pg. 36).
[3] Stored as a 32-bit two's complement number (pg. 50).
[4] Stored as a IEEE-754 64-bits number (pg. 49).
These are inconsistent definitions. What is the format of an integer in JavaScript?
thanks
Here is the specification:
http://ecma262-5.com/ELS5_HTML.htm#Section_8.5
According to the specification, JavaScript doesn't have integers, only floating-point numbers.
IEEE-754 floats do have a -0 value, so there's no inconsistency between [1], [2] and [4]; only [3] appears to be inconsistent.
JavaScript numbers are always stored as IEEE-754 64-bit floating-point numbers. In some expressions (usually with bitwise operators) they may be temporarily converted to 32-bit integers and then converted back to 64-bit floats, but they're always stored as floats. I assume this conversion is what [3] is actually referring to.
A JIT compiler for Javascript may use actual integers in the compiled code as an optimization (especially if asm.js is involved), but during interpretation it's still all floats.
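A small illustration of that temporary conversion (the bitwise OR forces a signed 32-bit view of the value):
console.log((2**32 + 7) | 0) // 7: the upper bits are discarded by the 32-bit conversion
console.log(1.5 | 0)         // 1: the fractional part is discarded too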
I'm using a server-side calculation which needs to generate (with * and + operations) and compare 40-bit integers. I'm aware that at that point the V8 engine stores the numbers as Double rather than int. Can I rely on these numbers to be generated and compared correctly?
My intuition says yes - doubles shouldn't have trouble with that - but I'm not sure how to check or where to find information on this.
Yes.
A JavaScript Number, which is a 64-bit IEEE 754 floating-point value, can store integers from -2^53 to 2^53 without loss of precision, since doubles can store up to 53 bits of mantissa (52 explicitly).
References:
ECMA-262: 4.3.19 Number value
Double-precision floating point numbers (Wikipedia)
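A quick sanity check (illustrative): 40-bit products stay well inside that exact range, and precision is only lost beyond 2^53:
console.log(2**40 * 3 + 1)       // 3298534883329, exact
console.log(2**53 === 2**53 + 1) // true: beyond 2^53 adjacent integers collapse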
I'm currently writing a compiler for a small language that compiles to JavaScript. In this language, I'd quite like to have integers, but JavaScript only supports Number, which is a double-precision floating point value. So, what's the most efficient way to implement integers in JavaScript? And how efficient is this compared to just using Number?
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
So, what's the most efficient way to implement integers in JavaScript?
The primitive number type is as efficient as it gets. Many modern JS engines support JIT compilation, so it should be almost as efficient as native floating-point arithmetic.
In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.
You can achieve the semantics of standard 32-bit integer arithmetic by noting that JavaScript converts "numbers" to 32-bit integers for bitwise operations. >>> (unsigned right shift) converts its operand to an unsigned 32-bit integer while the rest (all other shifts and bitwise AND/OR) convert their operand to a signed 32-bit integer. For example:
0xFFFFFFFF | 0 yields -1 (signed cast)
(0xFFFFFFFF + 1) | 0 yields 0 (overflow)
-1 >>> 0 yields 0xFFFFFFFF (unsigned cast)
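For instance, a minimal wrapping 32-bit add that matches the INT_MAX + 1 behaviour asked about above could look like this (a sketch, not a full integer library):
function addInt32(a, b) { return (a + b) | 0 } // | 0 truncates back to signed 32-bit
console.log(addInt32(0x7FFFFFFF, 1)) // -2147483648, i.e. INT_MAX + 1 wraps to INT_MIN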
I found this implementation of BigIntegers in JavaScript: http://www-cs-students.stanford.edu/~tjw/jsbn/
Perhaps this will help?
Edit: Also, the Google Closure library implements 64-bit integers:
http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/math/long.js
These essentially just provide convenience objects, though, and won't do anything to improve the fundamental data-type efficiency.
On a modern CPU, if you restrict your integer values to the range ±2^52, then using a double will be barely less efficient than using a long.
The IEEE 754 double type has 53 bits of mantissa, so you can easily represent the 32-bit integer range and then some.
In any event, the rest of Javascript will be much more of a bottleneck than the individual CPU instructions used to handle arithmetic.
All Numbers are Numbers. There is no way around this. JavaScript has no byte or int type. Either deal with the limitations or use a more low-level language to write your compiler in.
The only sensible option if you want to achieve this is to edit one of the JavaScript interpreters (say V8) and extend JS to allow access to native C bytes.
Well, you can choose JavaScript's number type, which is probably computed using primitives of your CPU, or you can layer a complete package of operators and functions and whatnot on top of an emulated series of bits...?
... If performance and efficiency are your concern, stick with the doubles.
The most efficient approach would be to use numbers, and add operations to make sure that operations on the simulated integers give an integer result. A division, for example, would have to be rounded down, and a multiplication would either have to be checked for overflow or masked down to fit in the integer range.
This of course means that floating point operations in your language will be considerably faster than integer operations, defeating most of the purpose of having an integer type in the first place.
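A sketch of what such simulated 32-bit operations might look like (the function names are made up; Math.imul performs a proper 32-bit multiply with wrap-around):
function idiv(a, b) { return Math.floor(a / b) }  // integer division, rounded down
function imul32(a, b) { return Math.imul(a, b) }  // 32-bit multiply with wrap-around
function iadd32(a, b) { return (a + b) | 0 }      // 32-bit add with wrap-around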
Note that ECMA-262 Edition 3 added Number.prototype.toFixed, which takes a precision argument telling how many digits after the decimal point to show. Use this method well and you won't mind the disparity between finite precision base 2 and the "arbitrary" or "appropriate" precision base 10 that we use every day.
- Brendan Eich
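For example (illustrative):
console.log((0.1 + 0.2).toFixed(2)) // "0.30" instead of 0.30000000000000004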