I have a u64 (an unsigned 64-bit integer) stored in 8 bytes of memory, so there are 2^64 possible values, 0 to 2^64 − 1.
I am converting it to a javascript number by turning each byte into hex and making a hex string:
let s = '0x'
s += buffer.slice(0,1).toString("hex")
s += buffer.slice(1,2).toString("hex")
...
n = parseInt(s)
Works great for everything I have done so far.
But when I look at how javascript stores numbers, I become unsure. Javascript uses 8 bytes for numbers, but treats all numbers the same. This internal javascript "number" representation can also hold floating point numbers.
Can a javascript number store all integers from 0 to 2^64? seems not.
At what point do I get into trouble?
What do people do to get round this?
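For reference, one way around this (a sketch; it assumes buffer is a Node.js Buffer holding the big-endian u64, as in the snippet above) is to read the 8 bytes as a BigInt instead of going through a hex string:
const asBigInt = buffer.readBigUInt64BE(0);   // exact, covers the full 64-bit range
// or, engine-agnostic, via a DataView over the same bytes:
const view = new DataView(buffer.buffer, buffer.byteOffset, 8);
const sameValue = view.getBigUint64(0);       // also a BigInt, big-endian by default
console.log(asBigInt === sameValue);          // true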
An unsigned 64-bit integer has a range of 0 to 18,446,744,073,709,551,615.
You could look at the Number wrapper object's .MAX_VALUE property; it represents the maximum numeric value representable in JavaScript.
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#.
General Info:
Integers in JS:
JavaScript has only floating-point numbers. Integers appear internally in two ways. First, most JavaScript engines store a small enough number without a decimal fraction as an integer (with, for example, 31 bits) and maintain that representation as long as possible. They have to switch back to a floating point representation if a number’s magnitude grows too large or if a decimal fraction appears.
Second, the ECMAScript specification has integer operators: namely, all of the bitwise operators. Those operators convert their operands to 32-bit integers and return 32-bit integers. For the specification, integer only means that the numbers don’t have a decimal fraction, and 32-bit means that they are within a certain range. For engines, 32-bit integer means that an actual integer (non-floating-point) representation can usually be introduced or maintained.
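For example (illustrative console checks; the outputs follow from the 32-bit conversions described above):
console.log(2 ** 31);        // 2147483648 - an ordinary floating-point Number
console.log(2 ** 31 | 0);    // -2147483648 - | converts its operands to signed 32-bit integers
console.log(2 ** 31 >>> 0);  // 2147483648 - >>> keeps the unsigned 32-bit range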
Ranges of integers
Internally, the following ranges of integers are important in JavaScript:
Safe integers [1], the largest practically usable range of integers that JavaScript supports:
53 bits plus a sign, range (−2^53, 2^53), which corresponds to ±9,007,199,254,740,992 (a quick console check follows the references below)
Array indices [2]:
32 bits, unsigned
Maximum length: 2^32−1
Range of indices: [0, 2^32−1) (excluding the maximum length!)
Bitwise operands [3]:
Unsigned right shift operator (>>>): 32 bits, unsigned, range [0, 2^32)
All other bitwise operators: 32 bits, including a sign, range [−2^31, 2^31)
“Char codes”, UTF-16 code units as numbers:
Accepted by String.fromCharCode()
Returned by String.prototype.charCodeAt()
16 bits, unsigned
References:
[1] Safe integers in JavaScript
[2] Arrays in JavaScript
[3] Bitwise operators in JavaScript
Source: https://2ality.com/2014/02/javascript-integers.html
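To make the safe-integer boundary from the list above concrete, here is a quick console check (the values are the ones defined by the language; the check itself is just an illustration):
console.log(Number.MAX_SAFE_INTEGER);                                      // 9007199254740991, i.e. 2^53 - 1
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2);  // true - precision is already lost
console.log(Number.isSafeInteger(2 ** 53));                                // false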
How do I increase this number (you can try it in the browser console):
36893488147419103000 + 1
The result of this is:
36893488147419103000
The number stays the same, with no change to it. Why is that, and how can I increase it by 1?
For big integers you should use the BigInt (Big Integer) type.
Note 1: you almost always cannot mix BigInt numbers with Numbers (eg for math operations) without first performing an explicit conversion.
Note 2: JSON does not currently natively support BigInt values. As a workaround you can use strings (eg. '1n') for the values and then use a reviver function when calling JSON.parse.
JavaScript currently only has two numeric types: double-precision IEEE 754 floating point Numbers, and Big Integers which can be used to represent arbitrarily large integers. You can declare a BigInt number literal using the suffix n, eg. 1n.
IEEE 754 Numbers can "only" accurately represent integers up to and including Number.MAX_SAFE_INTEGER, which has a value of 2^53 - 1 or 9,007,199,254,740,991 or ~9 quadrillion.
From MDN:
Double precision floating point format only has 52 bits to represent the mantissa, so it can only safely represent integers between -(2^53 − 1) and 2^53 − 1. "Safe" in this context refers to the ability to represent integers exactly and to compare them correctly. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
A "Decimal" number type, that will be able to represent arbitrarily precise decimal numbers, is under development.
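Applied to the number from the question (a BigInt sketch; the n suffix marks BigInt literals, as described above):
36893488147419103000 + 1    // 36893488147419103000  - as a Number, the increment is lost
36893488147419103000n + 1n  // 36893488147419103001n - as a BigInt, the result is exact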
Obviously the number is internally represented as a floating-point number. When the value you want to add to it is smaller than the value of its least significant bit, the addition will not change the value.
The only way around this would be to use floating-point numbers with a higher resolution, i.e. with a higher number of significant bits.
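A simple place to watch this happen is at the 2^53 boundary (smaller than the question's number, but the mechanism is the same; the outputs below are exact):
console.log(2 ** 53);      // 9007199254740992
console.log(2 ** 53 + 1);  // 9007199254740992 - 1 is below the spacing between adjacent doubles here
console.log(2 ** 53 + 2);  // 9007199254740994 - 2 is exactly the spacing, so the change registers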
Double precision floating point format only has 52 bits to represent the mantissa, so it can only safely represent integers between -(2^53 − 1) and 2^53 − 1. See Number.MAX_SAFE_INTEGER. Larger numbers may not be able to be represented exactly.
const a = 10;
const b = 0.123456789123456789;
console.log((a + b).toFixed(17));
// 10.12345678912345726
As you can see from the example above, only the .12345678912345 part is shown correctly; as I understand it, JavaScript only considers about 15 places of precision (counting the digits on both sides of the point). If I change 10 to 100, it is the same amount, but I thought it should be 17 places of precision according to the MDN doc. What exactly does the phrase "17 decimal places of precision" mean?
If I print it without the .toFixed() method, it shows the same 15 digits of precision: 10.123456789123457 is the result of a + b.
Url: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number
According to the JS/ECMAScript specification, the Number type uses double-precision floating point, which has a 64-bit format (binary64) consisting of a sign bit (determining whether the value is positive or negative), 11 exponent bits and 52 fraction bits (each hexadecimal digit represents 4 bits, so the 64-bit value spans 16 hex digits):
The Number type representing the double-precision 64-bit format IEEE
754-2008 values as specified in the IEEE Standard for Binary
Floating-Point Arithmetic.
The largest positive integer which can be represented exactly in double precision is 9007199254740992, i.e. Math.pow(2, 53). If a number lies between Math.pow(2, 53) and Math.pow(2, 54) (or between -Math.pow(2, 53) and -Math.pow(2, 54)), only even integers can be represented exactly, because the exponent shifts the weight of the LSB (least significant bit) of the fraction bits.
Let's review the large number part:
var x = 12345678912345.6789
var x = new Number(12345678912345.6789)
This number needs more than 52 fraction bits (about 72 bits in total), hence it is rounded so that only 52 fraction bits are kept.
Also with this decimal number:
var x = new Number(.12345678912367890)
This number contains 68 fractional bits, hence the last zero is chopped off to keep 64-bit length.
Usually, numeric values larger than 9007199254740992 or smaller than 1.1102230246251565E-16 are stored as literal strings instead of Numbers. If you need to compute very large numbers, there are external libraries available that perform arbitrary-precision arithmetic.
If you want more than 16 digits after the decimal point, you can either:
Use a literal string to represent your number
Use external libraries like math.js, BigInteger.js or strint.
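As a rough illustration of why a plain Number cannot supply meaningful digits much beyond ~17 significant figures (the trailing digits shown are what a typical engine prints for the underlying binary values):
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log((0.1).toFixed(20));  // 0.10000000000000000555 - the stored binary approximation, not exactly 0.1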
I was trying to repeat a character N times, and came across the Math.pow function.
But when I use it in the console, the results don't make any sense to me:
Math.pow(10,15) - 1 provides the correct result 999999999999999
But why does Math.pow(10,16) - 1 provide 10000000000000000?
You are producing results which exceed the Number.MAX_SAFE_INTEGER value, and so they are no longer accurate to the unit.
This is related to the fact that JavaScript uses 64-bit floating point representation for numbers, and so in practice you only have about 16 (decimal) digits of precision.
Since the introduction of BigInt in ECMAScript, you can get an accurate result with that data type, although it cannot be used in combination with Math.pow. Instead, you can use the ** operator.
See how the use of number and bigint (with the n suffix) differ:
10 ** 16 - 1 // == 10000000000000000
10n ** 16n - 1n // == 9999999999999999n
Unlike many other programming languages, JavaScript does not define different types of numbers, like integers, short, long, floating-point etc.
JavaScript numbers are always stored as double-precision floating-point numbers, following the international IEEE 754 standard.
This format stores numbers in 64 bits, where the number (the fraction) is stored in bits 0 to 51, the exponent in bits 52 to 62, and the sign in bit 63:
Integers (numbers without a period or exponent notation) are accurate up to 15 digits.
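A minimal check of that 15-digit claim (both outputs are exact):
console.log(999999999999999);   // 999999999999999 - 15 digits, represented exactly
console.log(9999999999999999);  // 10000000000000000 - 16 digits, rounded to the nearest representable value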
I tried to google it, but all the keywords lead to functions or solutions that work with the content of a variable.
My question is simple.
If variable represents a number,
var a = 1;
what is its max bit length? I mean, what is the highest number it can contain before overflow happens?
Is it int32? Is it int64? Is it a different length?
Thanks in advance
As the spec says, numbers in JavaScript are IEEE-754 double-precision floating point:
They're 64 bits in size.
Their range is -1.7976931348623157e+308 through 1.7976931348623157e+308 (that latter is available via Number.MAX_VALUE), which is to say the positive and negative versions of (2 - 2^-52) × 2^1023, but they can't perfectly represent all of those values. Famously, 0.1 + 0.2 comes out as 0.30000000000000004; see Is floating-point math broken?
The max "safe" integer value (whole number value that won't be imprecise) is 9,007,199,254,740,991, which is available as Number.MAX_SAFE_INTEGER on ES2015-compliant JavaScript engines.
Similarly, MIN_SAFE_INTEGER is -9,007,199,254,740,991
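These constants can be inspected directly in any ES2015+ engine (the printed values are the ones defined by the spec):
console.log(Number.MAX_VALUE);        // 1.7976931348623157e+308
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
console.log(Number.MIN_SAFE_INTEGER); // -9007199254740991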
Numbers in JavaScript are IEEE 754 double-precision floating-point values, which have a 53-bit mantissa. See MDN:
Integer range for Number
The following example shows minimum and maximum integer values that
can be represented as Number object (for details, refer to ECMAScript
standard, chapter 8.5 The Number Type):
var biggestInt = 9007199254740992;
var smallestInt = -9007199254740992;
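Past that boundary, increments of 1 stop registering, which is easy to verify (an illustrative check, not part of the quoted documentation):
console.log(biggestInt + 1 === biggestInt);  // true - 9007199254740993 is not representable
console.log(biggestInt + 2);                 // 9007199254740994 - even numbers are still exact here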
Does anyone know why the JavaScript Number.toString function does not represent negative numbers correctly?
//If you try
(-3).toString(2); //shows "-11"
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
I am really curious why it doesn't work the way I expect, or what the reason is that it works this way.
I've searched it but didn't find anything that helps.
Short answer:
The toString() function takes the decimal, converts it
to binary and adds a "-" sign.
A zero-fill right shift converts its operand to an unsigned 32-bit integer, exposing its two's-complement bit pattern.
A more detailed answer:
Question 1:
//If you try
(-3).toString(2); //show "-11"
It's in the function .toString(). When you output a number via .toString():
Syntax
numObj.toString([radix])
If the numObj is negative, the sign is preserved. This is the case
even if the radix is 2; the string returned is the positive binary
representation of the numObj preceded by a - sign, not the two's
complement of the numObj.
It takes the decimal, converts it to binary and adds a "-" sign.
Base 10 "3" converted to base 2 is "11"
Add a sign gives us "-11"
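In code (purely illustrative):
console.log((3).toString(2));   // "11"
console.log((-3).toString(2));  // "-11" - the sign is prepended, no two's complement involved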
Question 2:
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
A zero-fill right shift converts its operands to 32-bit integers; the result of that operation is always an unsigned 32-bit integer.
The operands of all bitwise operators are converted to signed 32-bit
integers in two's complement format.
-3 >>> 0 (right logical shift) coerces its arguments to unsigned integers, which is why you get the 32-bit two's complement representation of -3.
http://en.wikipedia.org/wiki/Two%27s_complement
http://en.wikipedia.org/wiki/Logical_shift
var binary = (-3 >>> 0).toString(2); // coerced to uint32
console.log(binary);
console.log(parseInt(binary, 2) >> 0); // to int32
output is
11111111111111111111111111111101
-3
.toString() is designed to return the sign of the number in the string representation. See EcmaScript 2015, section 7.1.12.1:
If m is less than zero, return the String concatenation of the String "-" and ToString(−m).
This rule is no different when a radix is passed as argument, as can be concluded from section 20.1.3.6:
Return the String representation of this Number value using the radix specified by radixNumber. [...] the algorithm should be a generalization of that specified in 7.1.12.1.
Once that is understood, the surprising thing is more as to why it does not do the same with -3 >>> 0.
But that behaviour has actually nothing to do with .toString(2), as the value is already different before calling it:
console.log (-3 >>> 0); // 4294967293
It is the consequence of how the >>> operator behaves.
It does not help either that (at the time of writing) the information on mdn is not entirely correct. It says:
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
But this is not true for all bitwise operators. The >>> operator is an exception to the rule. This is clear from the evaluation process specified in EcmaScript 2015, section 12.5.8.1:
Let lnum be ToUint32(lval).
The ToUint32 operation has a step where the operand is mapped into the unsigned 32 bit range:
Let int32bit be int modulo 2^32.
When you apply the above mentioned modulo operation (not to be confused with JavaScript's % operator) to the example value of -3, you get indeed 4294967293.
As -3 and 4294967293 are evidently not the same number, it is no surprise that (-3).toString(2) is not the same as (4294967293).toString(2).
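The modulo relationship is easy to confirm (2 ** 32 appears here only to show the mapping):
console.log(-3 >>> 0);                  // 4294967293
console.log(2 ** 32 - 3);               // 4294967293 - i.e. -3 modulo 2^32
console.log((4294967293).toString(2));  // "11111111111111111111111111111101"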
Just to summarize a few points here, if the other answers are a little confusing:
what we want to obtain is the string representation of a negative number in binary representation; this means the string should show a signed binary number (using 2's complement)
the expression (-3 >>> 0).toString(2), let's call it A, does the job; but we want to know why and how it works
had we used var num = -3; num.toString(2) we would have gotten -11, which is simply the unsigned binary representation of the number 3 with a negative sign in front, which is not what we want
expression A works like this:
1) (-3 >>> 0)
The >>> operation takes the left operand (-3), which is a signed integer, shifts the bits 0 positions to the right (so the bits are unchanged), and returns the unsigned number corresponding to these unchanged bits.
The bit sequence of the signed number -3 is the same bit sequence as the unsigned number 4294967293, which is what node gives us if we simply type -3 >>> 0 into the REPL.
2) (-3 >>> 0).toString
Now, if we call toString on this unsigned number, we will just get the string representation of the bits of the number, which is the same sequence of bits as -3.
What we effectively did was say "hey toString, you have normal behavior when I tell you to print out the bits of an unsigned integer, so since I want to print out a signed integer, I'll just convert it to an unsigned integer, and you print the bits out for me."
Daan's answer explains it well.
toString(2) does not really convert the number to two's complement; instead it just does a simple translation of the number to its positive binary form, while preserving its sign.
Example
Assume the given input is -15,
1. negative sign will be preserved
2. `15` in binary is 1111, therefore (-15).toString(2) gives output
-1111 (this is not in 2's complement!)
We know that the 2's complement representation of -15 in 32 bits is
11111111 11111111 11111111 11110001
Therefore in order to get the binary form of (-15), we can actually convert it to unsigned 32 bits integer using the unsigned right shift >>>, before passing it to toString(2) to print out the binary form. This is the reason we do (-15 >>> 0).toString(2) which will give us 11111111111111111111111111110001, the correct binary representation of -15 in 2's complement.
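If you need this often, a small helper along these lines works (a sketch; the 32-bit width and the zero-padding for non-negative inputs are my own additions, not part of the answers above):
// Returns the 32-bit two's-complement bit string of an integer.
function toTwosComplement32(n) {
  return (n >>> 0).toString(2).padStart(32, "0");
}
console.log(toTwosComplement32(-15)); // "11111111111111111111111111110001"
console.log(toTwosComplement32(15));  // "00000000000000000000000000001111"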