JavaScript parseInt() to Int32?

I'm trying to convert an integer to Int32 in JavaScript (similar to how you'd convert a string to an integer using parseInt()), but I cannot find the appropriate method. How can I do this?

JavaScript's number type is IEEE-754 double-precision binary floating point; it doesn't have an integer type except temporarily during some math operations or as part of a typed array (Int32Array, for instance, or a Uint32Array if you mean unsigned). So you have two options:
Ensure that the number has a value that fits in a 32-bit int, even though it's still a number (floating point double). One way to do that is to do a bitwise OR operation with the value 0, because the bitwise operations in JavaScript convert their operands to 32-bit integers before doing the operation:
| 0 does a signed conversion using the specification's ToInt32 operation:
value = value | 0;
// Use `value`...
With that, -5 becomes -5. 123456789123 becomes -1097262461 (yes, negative).
Or >>> 0 does an unsigned conversion using the spec's ToUint32 operation:
value = value >>> 0;
// Use `value`...
With that, -5 becomes 4294967291 and 123456789123 becomes 3197704835.
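A quick check of both conversions with the example values above (a minimal sketch you can paste into any console):
console.log(-5 | 0);             // -5
console.log(123456789123 | 0);   // -1097262461 (ToInt32 wraps into the signed 32-bit range)
console.log(-5 >>> 0);           // 4294967291
console.log(123456789123 >>> 0); // 3197704835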
Use an Int32Array or Uint32Array:
const a = new Int32Array(1); // Or use Uint32Array for unsigned
a[0] = value;
// Use `a[0]`...
Int32Array uses ToInt32, Uint32Array uses ToUint32.
Note that any time you read a[0], the element is converted back to a standard number (a floating-point double); but if you pass the array itself to something that consumes typed arrays, the 32-bit values get used as-is.
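Here's a minimal sketch showing the two typed-array views side by side, using the same example value as above:
const i32 = new Int32Array(1);
const u32 = new Uint32Array(1);
i32[0] = 123456789123; // stored via ToInt32
u32[0] = 123456789123; // stored via ToUint32
console.log(i32[0]); // -1097262461
console.log(u32[0]); // 3197704835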
Note that there's a method that may seem like it's for doing this, but isn't: Math.fround. That doesn't convert to 32-bit int, it converts to 32-bit float (IEEE-754 single-precision floating point). So it isn't useful for this.
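A quick illustration of the difference, if you want to see the two behaviors side by side:
console.log(Math.fround(5.05)); // 5.050000190734863 (rounded to 32-bit float precision)
console.log(5.05 | 0);          // 5 (an actual 32-bit int conversion)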

Well, the easiest way I found is using the bitwise NOT operator, ~.
This is the description from MDN:
The operands are converted to 32-bit integers and expressed by a series of bits (zeroes and ones).
So you can just apply ~ twice to convert your numbers. Here are some examples:
~~1 // 1
~~-1 // -1
~~5.05 // 5
~~-5.05 // -5
~~2147483647 // 2147483647
~~2147483648 // -2147483648
~~Math.pow(2, 32) // 0
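Note that double ~ truncates toward zero, like Math.trunc, but unlike Math.trunc it wraps at 32 bits. A quick comparison:
~~5.95                      // 5
Math.trunc(5.95)            // 5
~~Math.pow(2, 32)           // 0 (wrapped)
Math.trunc(Math.pow(2, 32)) // 4294967296 (not wrapped)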

Related

Why does my two's complement give me a completely different result in JavaScript?

I'm currently computing a checksum over a byte buffer, reading it 32 bits at a time. I have to calculate two checksums: a 32-bit uint sum and a 32-bit uint xor over every component of the byte buffer (except some locations). The sum works as expected, but the xor gives me a weird value.
The value I get from the xor is -58679487. When applying two's complement I get 58679487, but converting that to hex gives 0x037F60BF, and I'm looking for 0xFC809F41. If I place the initial xor value (-58679487) in rapidtables and convert it from dec to hex, it displays the correct two's complement value in hex. What am I doing wrong?
i = startAddress;
while (i < buf.length) {
  if (i !== chk1 && i !== chk2 && i !== chk3 && i !== chk4) {
    file32Sumt += buf.readUint32BE(i);
    file32Xort ^= buf.readUint32BE(i);
  } else {
    console.log('cks location.' + buf.readUint32BE(i).toString(16));
  }
  i += 4;
}
//two's complement
console.log((~file32Sumt+1).toString(16));
console.log((~file32Xort+1).toString(16));
I already did the two's complement by using the bitwise NOT operator (~) and then adding 1, but it doesn't seem to be working. I also tried Math.abs(file32Xort) but got the same result.
Don't negate. Use this:
console.log((file32Xort >>> 0).toString(16));
file32Xort has the right value already, but as a signed 32-bit integer. What you need here is the unsigned interpretation of the same value. number >>> 0 is a simple way to do that reinterpretation.
Additionally, I think you should write
file32Sumt = (file32Sumt + buf.readUint32BE(i)) >>> 0;
...or something to that effect, this time using some bitwise operator (it doesn't have to be an unsigned right shift, but I think it makes sense in this context) to prevent the sum from becoming too large (by limiting it to 32 bits) and potentially exhibiting floating-point rounding. The calculation of file32Xort already uses a bitwise operator, so it doesn't need an extra one like that; it only needs the final >>> 0 to reinterpret the result as unsigned.
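Putting both suggestions together, here is a sketch of the corrected loop. It reuses the question's names (startAddress, chk1 through chk4, buf, and Buffer's readUint32BE) and assumes the two accumulators start at zero:
let file32Sumt = 0;
let file32Xort = 0;
let i = startAddress;
while (i < buf.length) {
  if (i !== chk1 && i !== chk2 && i !== chk3 && i !== chk4) {
    // >>> 0 keeps the running sum within 32 bits on every iteration:
    file32Sumt = (file32Sumt + buf.readUint32BE(i)) >>> 0;
    file32Xort ^= buf.readUint32BE(i);
  } else {
    console.log('cks location.' + buf.readUint32BE(i).toString(16));
  }
  i += 4;
}
console.log(file32Sumt.toString(16));         // already unsigned thanks to >>> 0
console.log((file32Xort >>> 0).toString(16)); // fc809f41 for the question's value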

How do I use a 32-bit signed integer's two's complement to represent an integer in JavaScript?

Example:
I'm going to use 0xC0000000 (a 32-bit signed two's complement bit pattern) for -2^30:
const num = Number('0xC0000000')
console.log(num === -Math.pow(2,30)) // expected: true
JavaScript only has two's complement integers in two places:
As a temporary value during some operations. The bitwise operators use 32-bit ints, and some Math object methods work with 32-bit int values (for example, imul).
As an element in an Int8Array, Int16Array, or Int32Array. (Int32Array in your case.)
Otherwise, all numbers are either number (IEEE-754 double-precision binary floating point) or BigInt (arbitrary-precision non-two's-complement integers).
So for instance, you can have a two's complement in a single-element Int32Array.
const array = Int32Array.from([0xC0000000]);
console.log(array[0].toString(16)); // -40000000
However, whenever you use that 32-bit integer, it gets converted to number. That said, JavaScript's conversions between number and 32-bit signed int are fairly smart. For example, consider this two's complement boundary condition:
const array = Int32Array.from([0x7FFFFFFF]);
console.log(array[0].toString(16)); // 7fffffff
++array[0];
console.log(array[0].toString(16)); // -80000000
That's what you'd want for a two's complement operation, even though it isn't a two's complement operation (in theory; JavaScript engines are allowed to optimize). The operation is 32-bit two's complement int to number, increment number, convert number back to 32-bit two's complement int. But we still get the desired result.
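Applying that to the question's example: the bitwise operators' ToInt32 conversion gives exactly the reinterpretation being asked for, so | 0 is enough:
const num = Number('0xC0000000') | 0;  // | 0 converts via ToInt32
console.log(num);                      // -1073741824
console.log(num === -Math.pow(2, 30)); // true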

Advice on converting 8 byte (u64) unsigned integer into javascript

I have a u64 (unsigned integer) stored in 8 bytes of memory. Clearly the range is the integers from 0 to 2^64 − 1.
I am converting it to a JavaScript number by turning each byte into hex and making a hex string:
let s = '0x'
s += buffer.slice(0,1).toString("hex")
s += buffer.slice(1,2).toString("hex")
...
n = parseInt(s)
Works great for everything I have done so far.
But when I look at how JavaScript stores numbers, I become unsure. JavaScript uses 8 bytes for numbers, but treats all numbers the same; this internal JavaScript "number" representation can also hold floating-point numbers.
Can a JavaScript number store all integers from 0 to 2^64? Seemingly not.
At what point do I get into trouble?
What do people do to get around this?
An unsigned 64-bit integer has a range of 0 to 18,446,744,073,709,551,615.
You could look at Number.MAX_VALUE, which represents the maximum numeric value representable in JavaScript, but for integers the more relevant constant is Number.MAX_SAFE_INTEGER (2^53 − 1), the largest integer JavaScript can represent exactly.
The JavaScript Number type is a double-precision 64-bit binary format IEEE 754 value, like double in Java or C#.
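Concretely, integer precision runs out at 2^53, which you can check directly:
console.log(Number.MAX_SAFE_INTEGER);                 // 9007199254740991, i.e. 2^53 - 1
console.log(Math.pow(2, 53) === Math.pow(2, 53) + 1); // true: above 2^53, adjacent integers collide
console.log(Number.MAX_VALUE);                        // 1.7976931348623157e+308 (largest double, not integer-exact)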
General Info:
Integers in JS:
JavaScript has only floating-point numbers. Integers appear internally in two ways. First, most JavaScript engines store a small enough number without a decimal fraction as an integer (with, for example, 31 bits) and maintain that representation as long as possible. They have to switch back to a floating point representation if a number’s magnitude grows too large or if a decimal fraction appears.
Second, the ECMAScript specification has integer operators: namely, all of the bitwise operators. Those operators convert their operands to 32-bit integers and return 32-bit integers. For the specification, integer only means that the numbers don’t have a decimal fraction, and 32-bit means that they are within a certain range. For engines, 32-bit integer means that an actual integer (non-floating-point) representation can usually be introduced or maintained.
Ranges of integers
Internally, the following ranges of integers are important in JavaScript:
Safe integers [1], the largest practically usable range of integers that JavaScript supports:
53 bits plus a sign; range (−2^53, 2^53), i.e. roughly ±9,007,199,254,740,992
Array indices [2]:
32 bits, unsigned
Maximum length: 2^32−1
Range of indices: [0, 2^32−1) (excluding the maximum length!)
Bitwise operands [3]:
Unsigned right shift operator (>>>): 32 bits, unsigned, range [0, 2^32)
All other bitwise operators: 32 bits, including a sign, range [−2^31, 2^31)
“Char codes”, UTF-16 code units as numbers:
Accepted by String.fromCharCode()
Returned by String.prototype.charCodeAt()
16 bit, unsigned
References:
[1] Safe integers in JavaScript
[2] Arrays in JavaScript
[3] Bitwise operands in JavaScript
Source: https://2ality.com/2014/02/javascript-integers.html
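As for what people do to get around this: the usual modern answer is BigInt, which represents the full u64 range exactly. A minimal sketch, assuming buffer is a Node.js Buffer holding exactly the 8 bytes of the value:
const big = buffer.readBigUInt64BE(0);           // exact u64 read (Node.js 12+)
const n = BigInt('0x' + buffer.toString('hex')); // or from the hex string the question already builds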

JavaScript xor ^ with 0 returns bad result

I'm using the binary xor operator ^ with 2 variables, like this:
var v1 = 0;
var v2 = 3834034524;
var result = v1 ^ v2;
The result is -460932772.
Do you have any idea why?
Thank you
This is expected behavior: the bitwise operators work on signed 32-bit integers.
Just reinterpret the result as an unsigned integer:
var result = (v1 ^ v2) >>> 0;
3834034524, as a 32-bit unsigned integer, is hex E486B95C, or binary 11100100100001101011100101011100. Notice that the most significant (leftmost) bit is set; this is the sign bit of a 32-bit signed integer.
There, that bit pattern translates to decimal -460932772. The XOR operation is forcing the result into a signed integer.
Additional info: a 32-bit signed integer can handle values from -2147483648 to +2147483647 (your original value exceeded this, so it wrapped around), while a 32-bit unsigned integer handles values from 0 to +4294967295. JavaScript is a dynamically typed language, and values may change type as needed: a number may become a floating-point value, bitwise operations may turn it into an integer, or it could become a string. There are ways to use specific datatypes in recent versions of JavaScript, but this is not something you'd do for simple calculations.
The ToInt32 operation does not preserve the sign of the original value: it casts your number to a signed 32-bit representation. Since 3834034524 is larger than 2^31 − 1, it overflows and results in a negative integer.
  0 (base 10)           --ToInt32-->   00000000000000000000000000000000 (base 2)
^ 3834034524 (base 10)  --ToInt32-->   11100100100001101011100101011100 (base 2)
                        v    xor    v
= -460932772 (base 10)  <-FromInt32-   11100100100001101011100101011100 (base 2)
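Putting the fix together with the original snippet:
var v1 = 0;
var v2 = 3834034524;
var result = (v1 ^ v2) >>> 0;     // reinterpret the signed result as unsigned
console.log(result);              // 3834034524
console.log(result.toString(16)); // e486b95c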

Negative numbers to binary string in JavaScript

Does anyone know why JavaScript's Number.toString function does not represent negative numbers correctly?
//If you try
(-3).toString(2); //shows "-11"
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // prints "11111111111111111111111111111101"
I am really curious why it doesn't work properly, or what the reason is that it works this way. I've searched but didn't find anything that helps.
Short answer:
The toString() function takes the decimal, converts it
to binary and adds a "-" sign.
A zero-fill right shift converts its left operand to an unsigned 32-bit
integer, whose bits are the two's complement representation of the original value.
A more detailed answer:
Question 1:
//If you try
(-3).toString(2); //shows "-11"
It's in the function .toString(). When you output a number via .toString():
Syntax
numObj.toString([radix])
If the numObj is negative, the sign is preserved. This is the case
even if the radix is 2; the string returned is the positive binary
representation of the numObj preceded by a - sign, not the two's
complement of the numObj.
It takes the decimal, converts it to binary and adds a "-" sign.
Base 10 "3" converted to base 2 is "11"
Add a sign gives us "-11"
Question 2:
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // prints "11111111111111111111111111111101"
The zero-fill right shift is the exception among the bitwise operators: while the others convert their operands to signed 32-bit integers in two's complement format, >>> converts its left operand to an unsigned 32-bit integer (ToUint32).
-3 >>> 0 (a logical right shift by zero positions) therefore coerces its operand to an unsigned integer, which is why you get the 32-bit two's complement bit pattern of -3.
http://en.wikipedia.org/wiki/Two%27s_complement
http://en.wikipedia.org/wiki/Logical_shift
var binary = (-3 >>> 0).toString(2); // coerced to uint32
console.log(binary);
console.log(parseInt(binary, 2) >> 0); // to int32
output is
11111111111111111111111111111101
-3
.toString() is designed to return the sign of the number in the string representation. See ECMAScript 2015, section 7.1.12.1:
If m is less than zero, return the String concatenation of the String "-" and ToString(−m).
This rule is no different for when a radix is passed as argument, as can be concluded from section 20.1.3.6:
Return the String representation of this Number value using the radix specified by radixNumber. [...] the algorithm should be a generalization of that specified in 7.1.12.1.
Once that is understood, the surprising thing is rather why it does not do the same with -3 >>> 0.
But that behaviour has actually nothing to do with .toString(2), as the value is already different before calling it:
console.log (-3 >>> 0); // 4294967293
It is the consequence of how the >>> operator behaves.
It does not help either that (at the time of writing) the information on MDN is not entirely correct. It says:
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
But this is not true for all bitwise operators. The >>> operator is an exception to the rule. This is clear from the evaluation process specified in ECMAScript 2015, section 12.5.8.1:
Let lnum be ToUint32(lval).
The ToUint32 operation has a step where the operand is mapped into the unsigned 32 bit range:
Let int32bit be int modulo 2^32.
When you apply the above mentioned modulo operation (not to be confused with JavaScript's % operator) to the example value of -3, you get indeed 4294967293.
As -3 and 4294967293 are evidently not the same number, it is no surprise that (-3).toString(2) is not the same as (4294967293).toString(2).
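If it helps to see the spec's modulo step in code, here is a sketch for finite inputs (toUint32 is just an illustrative name; the real ToUint32 also maps NaN and Infinity to 0):
const TWO_32 = Math.pow(2, 32);
const toUint32 = (n) => {
  n = Math.trunc(n);                       // ToUint32 truncates toward zero first
  return ((n % TWO_32) + TWO_32) % TWO_32; // spec modulo: result is always in [0, 2^32)
};
console.log(toUint32(-3));                // 4294967293
console.log(toUint32(-3) === (-3 >>> 0)); // true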
Just to summarize a few points here, if the other answers are a little confusing:
what we want to obtain is the string representation of a negative number in binary representation; this means the string should show a signed binary number (using 2's complement)
the expression (-3 >>> 0).toString(2), let's call it A, does the job; but we want to know why and how it works
had we used var num = -3; num.toString(2) we would have gotten -11, which is simply the unsigned binary representation of the number 3 with a negative sign in front, which is not what we want
expression A works like this:
1) (-3 >>> 0)
The >>> operation takes the left operand (-3), which is a signed integer, shifts its bits 0 positions to the right (so the bits are unchanged), and yields the unsigned number corresponding to those unchanged bits.
The bit sequence of the signed number -3 is the same bit sequence as the unsigned number 4294967293, which is what Node gives us if we simply type -3 >>> 0 into the REPL.
2) (-3 >>> 0).toString
Now, if we call toString on this unsigned number, we will just get the string representation of the bits of the number, which is the same sequence of bits as -3.
What we effectively did was say "hey toString, you have normal behavior when I tell you to print out the bits of an unsigned integer, so since I want to print out a signed integer, I'll just convert it to an unsigned integer, and you print the bits out for me."
Daan's answer explains it well.
toString(2) does not really convert the number to two's complement; instead, it just does a simple translation of the number to its positive binary form, while preserving its sign.
Example
Assume the given input is -15:
1. The negative sign will be preserved.
2. 15 in binary is 1111, therefore (-15).toString(2) gives the output -1111 (this is not 2's complement!).
We know that the 2's complement representation of -15 in 32 bits is
11111111 11111111 11111111 11110001
Therefore, in order to get the binary form of -15, we first convert it to an unsigned 32-bit integer using the unsigned right shift >>>, and then pass it to toString(2) to print the binary form. This is why (-15 >>> 0).toString(2) gives us 11111111111111111111111111110001, the correct binary representation of -15 in 2's complement.
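As a convenience, the pattern can be wrapped in a tiny helper (toBin32 is just an illustrative name) that also pads positive numbers to the full 32 bits:
const toBin32 = (n) => (n >>> 0).toString(2).padStart(32, '0');
console.log(toBin32(-15)); // 11111111111111111111111111110001
console.log(toBin32(3));   // 00000000000000000000000000000011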
