Different behavior for 1 << 32: JavaScript compared to Python

JavaScript:
1 << 31   // -2147483648
1 << 32   // 1
Python:
1 << 31   # 2147483648
1 << 32   # 4294967296
Is this related to the max int? But 4294967296 is not bigger than the max int in JS.

A number in JavaScript is actually an IEEE 754 64-bit floating point value, while an integer in Python may be a plain machine integer or an arbitrary-precision bignum.
All bitwise operations in JavaScript are defined on 32-bit signed or unsigned integers. When you apply one of these operators, the operands are first converted to 32-bit integers, and the result is always a 32-bit integer. In particular, the shift count is reduced modulo 32, which is why 1 << 32 yields 1 rather than 0.
If you want to multiply a number by 2^32, use 1 * 2 ** 32 (or 1 * Math.pow(2, 32) in ES5) instead of a shift.
Python has built-in bignum support, and its bitwise operations, including left shift, work on those arbitrary-precision integers. As a result, you can shift a number by any (reasonable) number of bits and the result may be greater than 2^32.
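Here is a quick console check of both behaviors; the BigInt line assumes an ES2020+ engine:
// The shift count is reduced modulo 32, so 1 << 32 behaves like 1 << 0
console.log(1 << 31);      // -2147483648 (bit 31 is the sign bit of a 32-bit integer)
console.log(1 << 32);      // 1 (32 % 32 === 0, so no shift happens)
console.log(1 * 2 ** 32);  // 4294967296 (floating point multiply, no 32-bit truncation)
console.log(1n << 32n);    // 4294967296n (BigInt shifts are arbitrary precision, like Python's)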

Related


Advice on converting an 8-byte (u64) unsigned integer into JavaScript

I have a u64 (unsigned 64-bit integer) stored in 8 bytes of memory, so the range is 0 to 2^64 − 1.
I am converting it to a JavaScript number by turning each byte into hex and building a hex string:
let s = '0x'
s += buffer.slice(0,1).toString("hex")
s += buffer.slice(1,2).toString("hex")
...
n = parseInt(s)
Works great for everything I have done so far.
But when I look at how JavaScript stores numbers, I become unsure. JavaScript uses 8 bytes for numbers, but treats all numbers the same; this internal JavaScript "number" representation can also hold floating point values.
Can a JavaScript number store all integers from 0 to 2^64? It seems not.
At what point do I get into trouble?
What do people do to get around this?
An unsigned 64-bit integer has the range 0 to 18,446,744,073,709,551,615.
You could use the Number wrapper object's .MAX_VALUE property; it represents the maximum numeric value representable in JavaScript.
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#.
General Info:
Integers in JS:
JavaScript has only floating-point numbers. Integers appear internally in two ways. First, most JavaScript engines store a small enough number without a decimal fraction as an integer (with, for example, 31 bits) and maintain that representation as long as possible. They have to switch back to a floating point representation if a number’s magnitude grows too large or if a decimal fraction appears.
Second, the ECMAScript specification has integer operators: namely, all of the bitwise operators. Those operators convert their operands to 32-bit integers and return 32-bit integers. For the specification, integer only means that the numbers don’t have a decimal fraction, and 32-bit means that they are within a certain range. For engines, 32-bit integer means that an actual integer (non-floating-point) representation can usually be introduced or maintained.
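A quick sketch of that operand conversion in action:
// Bitwise operators first convert their operands to 32-bit integers
console.log(1.5 | 0);      // 1 (the fraction is dropped)
console.log(2 ** 32 | 0);  // 0 (wraps around modulo 2^32)
console.log(2 ** 31 | 0);  // -2147483648 (bit 31 becomes the sign bit)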
Ranges of integers
Internally, the following ranges of integers are important in JavaScript:
Safe integers [1], the largest practically usable range of integers that JavaScript supports:
53 bits plus a sign, range (−2^53, 2^53), i.e. exclusive bounds of ±9,007,199,254,740,992
Array indices [2]:
32 bits, unsigned
Maximum length: 2^32−1
Range of indices: [0, 2^32−1) (excluding the maximum length!)
Bitwise operands [3]:
Unsigned right shift operator (>>>): 32 bits, unsigned, range [0, 2^32)
All other bitwise operators: 32 bits, including a sign, range [−2^31, 2^31)
“Char codes”, UTF-16 code units as numbers:
Accepted by String.fromCharCode()
Returned by String.prototype.charCodeAt()
16 bits, unsigned
References:
[1] Safe integers in JavaScript
[2] Arrays in JavaScript
[3] Bitwise operators
Source: https://2ality.com/2014/02/javascript-integers.html
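To answer the practical question directly: a plain number is exact only up to Number.MAX_SAFE_INTEGER (2^53 − 1), so an arbitrary u64 can silently lose its low bits. BigInt is the usual way around this. A minimal sketch, assuming Node.js 12+ where Buffer.readBigUInt64BE is available:
const buf = Buffer.from([0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff, 0xff]);

// The hex-string approach loses precision once the value exceeds 2^53 - 1
const asNumber = parseInt('0x' + buf.toString('hex'));
console.log(asNumber);                        // 18446744073709552000 (rounded)
console.log(Number.isSafeInteger(asNumber));  // false

// BigInt preserves all 64 bits exactly
const asBigInt = buf.readBigUInt64BE(0);
console.log(asBigInt);                        // 18446744073709551615n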

Left shift results in negative numbers in JavaScript

I'm having trouble understanding how shifting works. I would expect that a and b would be the same but that's not the case:
a = 0xff000000;
console.log(a.toString(16));
b = 0xff << 24;
console.log(b.toString(16));
resulting in:
ff000000
-1000000
I came to this code while trying to create a 32-bit number from 4 bytes.
Bitwise operators convert their operands to signed 32 bit numbers. That means the most significant bit is the sign bit, which gives you only 31 bits for the number value.
0xff000000 by itself is interpreted as a 64-bit floating point value. But truncating this to a 32-bit signed integer produces a negative value, since the most significant bit is 1:
0xff000000.toString(2);
> "11111111000000000000000000000000"
(0xff000000 | 0).toString(16)
> -1000000
According to "Bitwise operations on 32-bit unsigned ints?", you can use >>> 0 to convert the value back to an unsigned value:
0xff << 24 >>> 0
> 4278190080
From the spec:
The result is an unsigned 32-bit integer.
So it turns out this is as per the spec: bit shift operators other than >>> return signed 32-bit integer results. From the latest ECMAScript spec, for the left shift operator (<<):
The result is a signed 32-bit integer.
Because your number already fits in 8 bits, shifting it left by 24 bits places a 1 in the most significant bit, and interpreting that as a signed 32-bit integer makes the result negative.
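For the original goal of building a 32-bit number from 4 bytes, a common fix is to do the shifts and ORs as usual and apply >>> 0 once at the end to reinterpret the result as unsigned. A sketch with hypothetical byte values:
const bytes = [0xff, 0x00, 0x00, 0x00];
const u32 = ((bytes[0] << 24) | (bytes[1] << 16) | (bytes[2] << 8) | bytes[3]) >>> 0;
console.log(u32);               // 4278190080
console.log(u32.toString(16));  // "ff000000"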

Math.pow gives wrong result

I was trying to repeat a character N times, and came across the Math.pow function.
But when I use it in the console, the results don't make any sense to me:
Math.pow(10,15) - 1 provides the correct result 999999999999999
But why does Math.pow(10,16) - 1 provide 10000000000000000?
You are producing results which exceed the Number.MAX_SAFE_INTEGER value, so they are no longer accurate to the nearest unit.
This is related to the fact that JavaScript uses a 64-bit floating point representation for numbers, so in practice you only have about 16 (decimal) digits of precision.
Since the introduction of BigInt in ECMAScript, you can get an accurate result with that data type, although it cannot be used in combination with Math.pow; use the ** operator instead.
See how number and bigint (with the n suffix) differ:
10 ** 16 - 1 // == 10000000000000000
10n ** 16n - 1n // == 9999999999999999n
Unlike many other programming languages, JavaScript does not define different types of numbers, like integers, short, long, floating-point etc.
JavaScript numbers are always stored as double-precision floating-point numbers, following the international IEEE 754 standard.
This format stores numbers in 64 bits: the fraction (significand) in bits 0 to 51, the exponent in bits 52 to 62, and the sign in bit 63.
Integers (numbers without a period or exponent notation) are accurate up to 15 digits.
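That 15-digit rule of thumb is easy to check in the console:
console.log(999999999999999);   // 999999999999999 (15 digits, exact)
console.log(9999999999999999);  // 10000000000000000 (16 digits, rounded up)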

Why does Number() return wrong values with very large integers?

Number() function returns incorrect values on some arguments, like this:
Number('10000000712224641') returns 10000000712224640
Number('10000000544563531') returns 10000000544563532
I tested this on Firefox, Chrome, IE and Node.js. Why is this happening?
JavaScript safely supports approximately up to 17 digits, and all numbers, whether floats or integers, are expressed in 64-bit IEEE-754 binary floating point.
Number.MAX_SAFE_INTEGER // 9007199254740991
When you get above that number, the trailing digits get rounded unless the value is a power of 2 (or a sum of powers of two):
Math.pow(2, 54) // 18014398509481984 (not rounded)
Math.pow(2, 54) + 1 // 18014398509481984 (rounded)
Math.pow(2, 54) - 1 // 18014398509481984 (rounded)
Math.pow(2,57) + Math.pow(2,52) // 148618787703226370 (not rounded)
Math.pow(2, 57) + Math.pow(2, 52) + 1 // 148618787703226370 (rounded)
JavaScript uses 64-bit IEEE-754 binary floating point to store all numbers, like double in C# and Java, for example. There isn't a different type to store integers. (The actual implementation may use optimizations to avoid always performing arithmetic this way, but from an end-user perspective the results will always be as if every number were treated as a 64-bit binary floating point value.)
That means only 52 bits are available to store the significand, with the other bits being used for the exponent and sign. With normalization, you can effectively store values with 53 significant bits of precision. Beyond 2^53 − 1 (the value 9007199254740991 quoted in other answers), the distance between "adjacent" numbers is more than 1, so you can't store all integers exactly.
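You can see the widening gaps directly: just above 2^53 the spacing between adjacent representable values is 2, so odd integers are no longer representable:
console.log(2 ** 53);      // 9007199254740992
console.log(2 ** 53 + 1);  // 9007199254740992 (2^53 + 1 rounds back down)
console.log(2 ** 53 + 2);  // 9007199254740994 (the next representable integer)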
This is due to the fact that JavaScript supports only a limited number of significant digits. The maximum safe integer is stored in a constant called Number.MAX_SAFE_INTEGER, which contains the value 9007199254740991.
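If the exact digits matter, for example for IDs arriving as strings, BigInt (ES2020+) parses the full string without rounding:
console.log(Number('10000000712224641'));  // 10000000712224640 (rounded to the nearest double)
console.log(BigInt('10000000712224641'));  // 10000000712224641n (exact)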
