First, (-1 >>> 0) === (2**32 - 1), which I assume is due to adding a new zero to the left, thus converting the number into a 33-bit number?
But why is (-1 >>> 32) === (2**32 - 1) as well, when I expect it (after shifting the 32-bit number 32 times and replacing the most significant bits with zeros) to be 0?
Shouldn't it equal ((-1 >>> 31) >>> 1) === 0? Or am I missing something?
When you execute (-1 >>> 0) you are executing an unsigned right shift. The unsigned here is key. Per the spec, the result of >>> is always unsigned. -1 is represented as the two's complement of 1. In binary this is all 1s (in an 8-bit system it'd be 11111111).
So now you are making it unsigned by executing >>> 0. You are saying, "shift the binary representation of -1, which is all 1s, by zero bits (make no changes), but return an unsigned number." So you get the value of all 1s. Go to any JavaScript console in a browser and type:
console.log(2**32 - 1) //4294967295
// 0b means binary representation, and it can have a negative sign
console.log(0b11111111111111111111111111111111) //4294967295
console.log(-0b1 >>> 0) //4294967295
Remember, 2 to any power, minus 1, is always all ones in binary: the same number of ones as the power you raised two to. So 2**32 - 1 is 32 ones. For example, two to the 3rd power (eight) minus one (seven) is 111 in binary.
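If you want to convince yourself of that, a quick console check works (nothing here beyond standard Number and toString):
// 2**n - 1 is n ones in binary
console.log((2**3 - 1).toString(2));         // "111"
console.log((2**8 - 1).toString(2));         // "11111111"
console.log((2**32 - 1).toString(2).length); // 32 (all of them 1s)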
So for the next one, (-1 >>> 32) === (2**32 - 1), let's look at a few things. We know the binary representation of -1 is all 1s. Shift it right one bit and you get the same value as all 1s preceded by a zero (and returned as an unsigned number).
console.log(-1 >>> 1) //2147483647
console.log(0b01111111111111111111111111111111) //2147483647
And keep shifting until you have 31 zeros and a single 1 at the end.
console.log(-1 >>> 31) //1
This makes sense to me: we have 31 0s and a single 1 now for our 32 bits.
So then you hit the weird case: shifting one more time should make it zero, right?
Per the spec:
6.1.6.1.11 Number::unsignedRightShift ( x, y )
Let lnum be ! ToInt32(x).
Let rnum be ! ToUint32(y).
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
Return the result of performing a zero-filling right shift of lnum by shiftCount bits. Vacated bits are filled with zero. The result is an unsigned 32-bit integer.
So we know we already have -1, which is all 1s in two's complement. And we are going to shift it, per the last step of the spec, by shiftCount bits (which we think is 32). And shiftCount is:
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
So what is rnum & 0x1F? Well, & means a bitwise AND operation. lnum is the number left of the >>> and rnum is the number right of it. So we are saying 32 AND 0x1F. Remember 32 is 100000. 0x means hexadecimal, where each character represents 4 bits. 1 is 0001 and F is 1111. So 0x1F is 00011111, or 11111 (31 in base 10, which is also 2**5 - 1).
console.log(0x1F) //31 (which is 11111)
  32: 100000 &
0x1F: 011111
      -------
      000000
The number of bits to shift is zero. This is because the leading 1 in 32 is not part of the 5 least significant bits! 32 is six bits long. So we take 32 1s and shift them by zero bits! That's why. The answer is still 32 1s.
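Putting that together in a console (just restating the spec steps above):
console.log(32 & 0x1F); // 0 -- the effective shift count
console.log(-1 >>> 32); // 4294967295, same as -1 >>> 0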
In the example -1 >>> 31 this made sense because 31 fits in 5 bits. So we did
  31: 11111 &
0x1F: 11111
      ------
      11111
And shifted it 31 bits.... as expected.
Let's test this further.... let's do
console.log(-1 >>> 33) //2147483647
console.log(-1 >>> 1) //2147483647
That makes sense, just shift it one bit.
  33: 100001 &
0x1F: 011111
      -------
      000001
So, go over 5 bits in the shift count and it gets confusing. Want to play stump-the-dummy with someone who hasn't read the ECMAScript spec to answer a Stack Overflow post? Just ask why these are the same:
console.log(-1 >>> 24033) //2147483647
console.log(-1 >>> 1) //2147483647
Well of course it's because
console.log(0b101110111100001) // 24033
console.log(0b000000000000001) // 1
// ^^^^^ I only care about these bits!!!
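So the general rule (a quick sanity check, not a quote from the spec) is that the right-hand operand is effectively taken modulo 32:
// The shift count is masked to its low 5 bits, i.e. taken mod 32
for (const n of [0, 1, 31, 32, 33, 24033]) {
  console.log(n, (-1 >>> n) === (-1 >>> (n & 31))); // always true
}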
When you do (-1 >>> 0), you are reinterpreting the 32-bit two's complement representation of -1 (all 1s) as an unsigned integer, therefore ending up with 2**32 - 1.
The next behaviour is documented in the ECMAScript specification. The actual number of shifts is going to be "the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F".
Since 32 & 0x1F === 0, both of your results will be identical.
Related
With "normal" numbers(32bit range), i'm using zero fill right shift operator to convert to binary, which works both with positive and negative numbers(results in the two's complement binary):
const numberToConvert = -100
(numberToConvert >>> 0).toString(2);
//Result is correct, in two's complement: '11111111111111111111111110011100'
But how can this be done with a negative BigInt?
If I do:
(-1000000000000000000n >>> 0).toString(2)
I get an error "Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions"
So then I try to use 0 as a BigInt:
(-1000000000000000000n >>> 0n).toString(2)
I get the following error: Uncaught TypeError: BigInts have no unsigned right shift, use >> instead
Doing so results in the non-two's-complement binary, with a "-" prepended to it:
(-1000000000000000000n >> 0n).toString(2)
//Result is:'-110111100000101101101011001110100111011001000000000000000000'
How can I get the two's complement binary, of a negative bigint?
The bitwise operators are for 32-bit integers anyway. As for why it doesn't work with BigInt, here is JavaScript: The Definitive Guide, 7th Ed., David Flanagan, O'Reilly, p. 78:
Shift right with zero fill (>>>): This is the only one of the JavaScript bitwise operators that cannot be used with BigInt values. BigInt does not represent negative numbers by setting the high bit the way that 32-bit integers do, and this operator only makes sense for that particular two’s complement representation.
Also note that it looks like it is giving you two's complement, but in fact the negative number is converted to a 32-bit unsigned integer and then printed as binary, giving you the impression that it is two's complement:
console.log(-100 >>> 0); // => 4294967196
The two's complement has this property:
You have a number, say 123, which is 01111011 in 8-bit binary, and you want the negative of that, which is -123.
Two's complement says: take the answer you want, treat it as a positive number, and add it to the original number 123; you will get all 0s, with an overflow out of the 8-bit number.
As an example, treating everything as positive, 123 + theAnswerYouWant is 01111011 + 10000101, which is exactly 00000000 with an overflow, i.e. 100000000 (note the extra 1 in front). In other words, you want 256 - 123, which is 133, and if you render 133 as 8 bits, that's the answer you want.
As a result, you can take 2^8, subtract the original number, treat the result as a positive number, and display it using .toString(2), which you already have.
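For the 8-bit case above, that subtraction looks like this in a console (plain Number arithmetic, just to illustrate the property):
console.log((123).toString(2));        // "1111011"
console.log((2**8 - 123).toString(2)); // "10000101" -- the 8-bit two's complement of -123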
The following is for 64 bits:
function getBinary(a, nBits) {
  [a, nBits] = [BigInt(a), BigInt(nBits)];
  // Range check: a must fit in an nBits-wide two's complement integer
  if ((a > 0 && a >= 2n ** (nBits - 1n)) || (a < 0 && -a > 2n ** (nBits - 1n))) {
    throw new RangeError("overflow error");
  }
  // Non-negative: plain binary, padded to nBits.
  // Negative: 2n ** nBits + a gives the two's complement bit pattern.
  return a >= 0
    ? a.toString(2).padStart(Number(nBits), "0")
    : (2n ** nBits + a).toString(2);
}
console.log(getBinary(1000000000000000000n, 64));
console.log(getBinary(-1000000000000000000n, 64));
console.log(getBinary(-1, 64));
console.log(getBinary(-2, 64));
console.log(getBinary(-3, 64));
console.log(getBinary(-4, 64n)); // trying the nBits as a BigInt as a test
console.log(getBinary(2n ** 63n - 1n, 64));
console.log(getBinary(-(2n ** 63n), 64));
// console.log(getBinary(2n ** 63n, 64)); // throw Error
// console.log(getBinary(-(2n ** 63n) - 1n, 64)); // throw Error
Note that you don't have to pad it when a is negative, because, for example, if it is 8 bits, the number being displayed is anywhere from 11111111 to 10000000 and is always 8 bits.
Some more details:
You may already know that ones' complement is simply flipping the bits (0 to 1, and 1 to 0). Another way to think of it: add the two numbers together and the result is all 1s.
The usual way two's complement is described is to flip the bits and add 1. You see, if you start with 11111111 and subtract 01111011 (which is 123 decimal), you get 10000100, which is exactly the same as flipping the bits. (This actually follows from the above: adding them gives all 1s, so subtracting one of them from all 1s gives the other.)
Well, if you start with 11111111, subtract that number, and then add 1, isn't that the same as taking 11111111, adding 1, and then subtracting that number? Well, 11111111 plus 1 is 100000000 (note the extra 1 in front) -- that's exactly starting with 2^n, where n is the bit width, and then subtracting that number. So you see why the property at the beginning of this post is true.
In fact, two's complement is designed for exactly this purpose: if we want to work out 2 - 1, the computer only needs to treat these "two's complement" values as positive numbers and add them using the processor's add circuitry: 00000010 plus 11111111. We get 00000001 plus a carry (the overflow). If we handle the overflow correctly by discarding it, we get the answer: 1. If we used ones' complement instead, we couldn't use the same addition circuitry to carry out 00000010 + 11111110 and get 1, because the result is 00000000, which is 0.
Another way to think about it is a car's odometer: if it reads 000002 miles so far, how do you subtract 1 from it? Well, if you represent -1 as 999999, then you just add 999999 to the 2 and get 1000001, but the leftmost 1 does not show on the odometer, so the odometer now reads 000001. In decimal, representing -1 as 999999 is 10's complement. In binary, representing -1 as 11111111 is called two's complement.
Two's complement only makes sense with fixed bit lengths. Numbers are converted to 32-bit integers (an old convention from back when JavaScript was messier). BigInt doesn't have that kind of conversion, as its length is considered arbitrary. So, in order to use two's complement with BigInt, you'll need to figure out what length you want to use and then convert it. Conversion to two's complement is described in many places, including Wikipedia.
Here, we use the LSB to MSB method since it's pretty easy to implement as string processing in javascript:
const toTwosComplement = (n, len) => {
  // `n` must be an integer
  // `len` must be a positive integer greater than bit-length of `n`
  n = BigInt(n);
  len = Number(len);
  if (!Number.isInteger(len)) throw '`len` must be an integer';
  if (len <= 0) throw '`len` must be greater than zero';
  // If non-negative, a straight conversion works
  if (n >= 0) {
    n = n.toString(2);
    if (n.length >= len) throw 'out of range';
    return n.padStart(len, '0');
  }
  n = (-n).toString(2); // make positive and convert to bit string
  if (!(n.length < len || n === '1'.padEnd(len, '0'))) throw 'out of range';
  // Start at the LSB and work up. Copy bits up to and including the
  // first 1 bit, then invert the remaining bits
  let invert = false;
  return n.split('').reverse().map(bit => {
    if (invert) return bit === '0' ? '1' : '0';
    if (bit === '0') return bit;
    invert = true;
    return bit;
  }).reverse().join('').padStart(len, '1');
};
console.log(toTwosComplement( 1000000000000000000n, 64));
console.log(toTwosComplement(-1000000000000000000n, 64));
console.log(toTwosComplement(-1, 64));
console.log(toTwosComplement(2n**63n-1n, 64));
console.log(toTwosComplement(-(2n**63n), 64));
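As an aside (not part of the answer above): if your environment supports BigInt.asUintN, it gives a much shorter route to the same bit pattern, since it reinterprets a BigInt as an unsigned integer of the given width, which is exactly the two's complement encoding. A minimal sketch (the helper name is mine, and note that asUintN silently wraps out-of-range values instead of throwing):
const toTwosComplementViaAsUintN = (n, len) =>
  BigInt.asUintN(len, BigInt(n)).toString(2).padStart(len, '0');

console.log(toTwosComplementViaAsUintN(-1000000000000000000n, 64));
console.log(toTwosComplementViaAsUintN(-1n, 64)); // 64 ones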
I know that numbers in JavaScript are stored in IEEE-754 format. But when we use integers, particularly bitwise operators, they're represented as two's complement with 32 bits.
So -1 would be 0xFFFFFFFF. But (-1).toString(2) is -1. And -1 >>> 31 is 1, that's right, but -1 >>> 32 should be 0, yet it's 4294967295. And -1 << 32 should be 0, but it is -1.
Why do bitwise operations work this way? And toString() shows the number with a - sign; why isn't this minus represented in a sign bit? Also, why is -1 >> 0 equal to -1, but -1 >>> 0 equal to 4294967295? I know what the difference between >> and >>> is, but the second operand is 0, so I can't understand why these operations behave differently.
Difference between >>> and >>
In an arithmetic shift, the sign bit is extended to preserve the
signedness of the number.
-1 in 8 bits is 11111111, -2 is 11111110, and so on.
It works like that because if you count one past the highest possible number, you wrap around to the lowest possible number (8-bit: 01111111 + 1 = 10000000). That's why 11111111 is -1.
Logical right shift, however, does not care that the value could
possibly represent a signed number; it simply moves everything to the
right and fills in from the left with 0s.
So here, -1 >>> 1 pushes the 11111111 one bit to the right, the "-" sign gets lost, and the highest positive number, 01111111 (in 8 bits), is shown.
Also, the reason -1 >> 0 equals -1 is that 11111111 >> 0 literally changes nothing and the result stays signed, while -1 >>> 0 returns the same bits reinterpreted as an unsigned number. Each time you increase that shift count by one, the value roughly halves; you can try -1 >>> 31 to see that it becomes 1.
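A compact way to see the difference (using the 32-bit values the operators actually work on, since JavaScript has no 8-bit shift):
console.log((-1 >> 1).toString(2));  // "-1" -- arithmetic shift keeps the sign
console.log((-1 >>> 1).toString(2)); // 31 ones, i.e. 2**31 - 1
console.log(-1 >> 0);                // -1
console.log(-1 >>> 0);               // 4294967295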
I'm having trouble understanding how shifting works. I would expect that a and b would be the same but that's not the case:
a = 0xff000000;
console.log(a.toString(16));
b = 0xff << 24;
console.log(b.toString(16));
resulting in:
ff000000
-1000000
I came to this code while trying to create a 32-bit number from 4 bytes.
Bitwise operators convert their operands to signed 32-bit numbers. That means the most significant bit is the sign bit, which gives you only 31 bits for the number value.
0xff000000 by itself is interpreted as a 64-bit floating point value. But truncating it to a 32-bit signed integer produces a negative value, since the most significant bit is 1:
0xff000000.toString(2);
> "11111111000000000000000000000000"
(0xff000000 | 0).toString(16)
> -1000000
According to Bitwise operations on 32-bit unsigned ints? you can use >>> 0 to convert the value back to an unsigned value:
0xff << 24 >>> 0
> 4278190080
From the spec:
The result is an unsigned 32-bit integer.
So it turns out this is as per the spec: the << shift operator returns a signed, 32-bit integer result.
The result is a signed 32-bit integer.
From the latest ECMAScript spec.
Because your number is already 8 bits long, shifting it left by 24 bits fills all 32 bits, and interpreting that as a signed integer means the leading 1 bit makes it a negative number.
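Since the goal was packing 4 bytes into a 32-bit number, one way to keep the result unsigned is to apply >>> 0 at the end. A minimal sketch (the helper name is mine, and it assumes each byte is in the range 0-255):
const bytesToUint32 = (b3, b2, b1, b0) =>
  ((b3 << 24) | (b2 << 16) | (b1 << 8) | b0) >>> 0;

console.log(bytesToUint32(0xff, 0, 0, 0).toString(16)); // "ff000000"
console.log(bytesToUint32(0xff, 0, 0, 0));              // 4278190080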
Consider this code (node v5.0.0)
const a = Math.pow(2, 53)
const b = Math.pow(2, 53) + 1
const c = Math.pow(2, 53) + 2
console.log(a === b) // true
console.log(a === c) // false
Why is a === b true?
What is the maximum integer value JavaScript can handle?
I'm implementing a random integer generator up to 2^64. Are there any pitfalls I should be aware of?
How does JavaScript treat large integers?
JS does not have integers. JS numbers are 64-bit floats. They are stored as a mantissa and an exponent.
The precision is given by the mantissa, the magnitude by the exponent.
If your number needs more precision than what can be stored in the mantissa, the least significant bits will be truncated.
9007199254740992; // 9007199254740992
(9007199254740992).toString(2);
// "100000000000000000000000000000000000000000000000000000"
// \ \ ... /\
// 1 10 53 54
// The 54-th is not stored, but is not a problem because it's 0
9007199254740993; // 9007199254740992
(9007199254740993).toString(2);
// "100000000000000000000000000000000000000000000000000000"
// \ \ ... /\
// 1 10 53 54
// The 54-th bit should be 1, but the mantissa only has 53 bits!
9007199254740994; // 9007199254740994
(9007199254740994).toString(2);
// "100000000000000000000000000000000000000000000000000010"
// \ \ ... /\
// 1 10 53 54
// The 54-th is not stored, but is not a problem because it's 0
Then, you can store all these integers:
-9007199254740992, -9007199254740991, ..., 9007199254740991, 9007199254740992
The second one is called the minimum safe integer:
The value of Number.MIN_SAFE_INTEGER is the smallest integer n such
that n and n − 1 are both exactly representable as a Number value.
The value of Number.MIN_SAFE_INTEGER is −9007199254740991
(−(2^53 − 1)).
The second last one is called the maximum safe integer:
The value of Number.MAX_SAFE_INTEGER is the largest integer n such
that n and n + 1 are both exactly representable as a Number value.
The value of Number.MAX_SAFE_INTEGER is 9007199254740991
(2^53 − 1).
Answering your second question, here is your maximum safe integer in JavaScript:
console.log( Number.MAX_SAFE_INTEGER );
All the rest is written in MDN:
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The
reasoning behind that number is that JavaScript uses double-precision
floating-point format numbers as specified in IEEE 754 and can only
safely represent numbers between -(2 ** 53 - 1) and 2 ** 53 - 1.
Safe in this context refers to the ability to represent integers
exactly and to correctly compare them. For example,
Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will
evaluate to true, which is mathematically incorrect. See
Number.isSafeInteger() for more information.
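A quick console check of that boundary (standard Number methods only):
console.log(Number.isSafeInteger(2**53 - 1)); // true
console.log(Number.isSafeInteger(2**53));     // false
console.log(Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2); // true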
.:: JavaScript only supports 53 bit integers ::.
All numbers in JavaScript are floating point which means that integers are always represented as
sign × mantissa × 2^exponent
The mantissa has 53 bits. You can use the exponent to get higher integers, but then they won't be contiguous any more. For example, you generally need to multiply the mantissa by two (exponent 1) in order to reach the 54th bit.
However, if you multiply by two, you will only be able to represent every second integer:
Math.pow(2, 53)     // 9007199254740992 (54 bits)
Math.pow(2, 53) + 1 // 9007199254740992
Math.pow(2, 53) + 2 // 9007199254740994
Math.pow(2, 53) + 3 // 9007199254740996
Math.pow(2, 53) + 4 // 9007199254740996
Rounding effects during the addition make things unpredictable for odd increments (+1 versus +3). The actual representation is a bit more complicated but this explanation should help you understand the basic problem.
You can safely use the strint library to encode large integers as strings and perform arithmetic operations on them too.
Here is the full article.
Number.MAX_VALUE will tell you the largest floating-point value representable in your JS implementation. The answer will likely be: 1.7976931348623157e+308. But that doesn't mean that every integer up to 10^308 can be represented exactly. As your example code shows, beyond 2^53 only even numbers can be represented, and as you go farther out on the number line the gaps get much wider.
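For example (just illustrating the two claims above):
console.log(Number.MAX_VALUE);    // 1.7976931348623157e+308
console.log(2**60 === 2**60 + 1); // true -- at this magnitude only every 256th integer is representable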
If you need exact integers larger than 2^53, you probably want to work with a bignum package, which allows for arbitrarily large integers (within the bounds of available memory). Two packages that I happen to know are:
BigInt by Leemon
and
Crunch
To supplement the other answers here, it's worth mentioning that BigInt exists. This allows JavaScript to handle arbitrarily large integers.
Use the n suffix on your numbers and use regular operators like 2n ** 53n + 2n. It's important to point out that a BigInt is not a Number, but you can do range-limited interoperation with Number via explicit conversions.
Some examples at the Node.js REPL:
> 999999999999999999999999999999n + 1n
1000000000000000000000000000000n
> 2n ** 53n
9007199254740992n
> 2n ** 53n + 1n
9007199254740993n
> 2n ** 53n == 2n ** 53n + 1n
false
> typeof 1n
'bigint'
> 3 * 4n
TypeError: Cannot mix BigInt and other types, use explicit conversions
> BigInt(3) * 4n
12n
> 3 * Number(4n)
12
> Number(2n ** 53n) == Number(2n ** 53n + 1n)
true
What is the highest number this javascript expression can evaluate to? What is the lowest number? Why?
+(''+Math.random()).substring(2)
Extra credit: How many different values can the expression evaluate to? Can it be every value from the minimum to the maximum, or are some intermediate values not obtainable due to rounding issues?
Response to Daniel's answer (deleted, was 10000000000000000 max, 0 min):
I was playing around in Chrome's console and got this:
Math.random();
>> 0.00012365682050585747
'12365682050585747'.length
>> 17
12365682050585747 > 10000000000000000
>> true
... so 10000000000000000 can't be the max!
It depends on how the random number is generated, and how the number is converted to a string. The ECMAScript spec doesn't specify either of these.
In practice, the number will have at most 17 significant figures, so the maximum should be at most 10^17.
The spec does specify that a number will be displayed in decimal form (instead of scientific form) when the exponent is between -6 and 20 (10^-6 ≤ x < 10^21), so we just need to restrict our attention to numbers in [10^-6, 1) when trying to seek the maximum exhaustively.
However, in this range a number must be representable as s × 2^e, where 1 ≤ s ≤ 2 − 2^-52 with a precision of Δs = 2^-52, and -20 ≤ e ≤ -1. The spec recommends that ToNumber(ToString(x)) == x, so the number should be precise down to 2^(e−52) for a given e. Thus the "17-digit" number of the form (2 − n × 2^-52) × 2^e with the smallest n will be the biggest such number for a given e, after chopping the initial "0.":
(-20) 0.0000019073486328124998
(-19) 0.0000038146972656249996
(-18) 0.0000076293945312499975 (n=3)
(-17) 0.000015258789062499998
(-16) 0.000030517578124999997
(-15) 0.000061035156249999986 (n=2)
(-14) 0.00012207031249999999
(-13) 0.00024414062499999997
(-12) 0.00048828124999999995
(-11) 0.0009765624999999999 (always 16-digit?)
(-10) 0.0019531249999999998
(-9) 0.0039062499999999996
(-8) 0.0078124999999999965 (n=4)
(-7) 0.015624999999999998
(-6) 0.031249999999999997
(-5) 0.062499999999999986 (n=2)
(-4) 0.12499999999999999
(-3) 0.24999999999999997
(-2) 0.49999999999999994
(-1) 0.9999999999999999 (always 16-digit?)
From here we know that the absolute maximum is 78,124,999,999,999,965.
Math.random() can return any nonnegative number in the interval [0, 1), so the safe minimum is -324, coming from 5e-324 (the smallest subnormal number in double precision, about 4.94 × 10^-324): its string form is "5e-324", so .substring(2) yields "-324".
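The display-format rule quoted above is easy to verify in a console (ordinary Number-to-string conversion, nothing engine-specific):
console.log(String(1e20)); // "100000000000000000000" -- still decimal form
console.log(String(1e21)); // "1e+21" -- switches to scientific form
console.log(String(1e-6)); // "0.000001"
console.log(String(1e-7)); // "1e-7"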
For me the highest number is 1/0 (===Infinity) and the lowest obviously -1/0 (in Chromium browser).
Edit: You can also try parsing a number from a string to see which one evaluates to Infinity.
var a = "1";
while(parseInt(a)!==Infinity) a=a+"0";
alert("Length of the highest number is: " + (a.length-1));
309 for me.