What does this JavaScript code do?

Found this code in our code base the other day. Not sure what it is used for. Any guesses?
function checkIntegerRange(x) {
  return ((x >= 0) && (x < 2020202020)) || (x == 2147483647) || (x == 4294967295);
}

2147483647 is the highest value that can be stored in a typical signed 32-bit integer type. 4294967295 is the analogous value for a 32-bit unsigned integer type. Possibly some other part of your code is using these as special marker values.
I have no idea what 2020202020 might signify, though it has the look of an arbitrarily chosen upper bound on something.

2020202020 is the conversion of "     " (5 spaces) to a hex string. The author (probably one prone to writing obfuscated code :) may have wanted to ensure that a string of at least 5 characters, converted to hex, was not considered an integer.
Here is a sample converter http://www.string-functions.com/string-hex.aspx
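For what it's worth, the spaces-to-hex claim is easy to check in Node.js (the `Buffer` API here is Node-specific):

```javascript
// Each ASCII space is 0x20, so five spaces encode to "2020202020".
const fiveSpacesHex = Buffer.from("     ", "ascii").toString("hex");
console.log(fiveSpacesHex); // "2020202020"
```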

What it does is validate that x is in the range 0..2020202020 or x == 2^31-1 (2147483647, the maximum positive value in a 32-bit signed integer) or x == 2^32-1 (4294967295; which would be -1 in a two's complement 32-bit signed integer value, or the highest value that can be stored in a 32-bit unsigned integer value).
My suspicion is that it's trying to figure out whether x will fit in a 32-bit integer, but I can't for the life of me figure out why it has the odd range at the beginning and why it makes the big positive exception and the -1 (or other big positive, depending) exception.

It returns a boolean: true if the number passed in is between 0 (inclusive) and 2020202020 (exclusive), or if it equals 2147483647, or if it equals 4294967295; false otherwise.
As for the purpose... that's up to you to find out ;)

Seems to be a kind of filtering/flagging:
2147483647: hex 7FFFFFFF, binary 1111111111111111111111111111111 (31 ones)
4294967295: hex FFFFFFFF, binary 11111111111111111111111111111111 (32 ones)
BTW: 2 * 2147483647 = 4294967295 - 1
I would say it checks against a certain range or against some funny flag values.
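For reference, the numeric claims in the answers above can be checked directly in a console:

```javascript
// 2147483647 is 2^31 - 1 (the max signed 32-bit int);
// 4294967295 is 2^32 - 1 (the max unsigned 32-bit int).
console.log(2 ** 31 - 1 === 2147483647); // true
console.log(2 ** 32 - 1 === 4294967295); // true
// 4294967295 is also the unsigned view of the 32-bit pattern for -1:
console.log((-1 >>> 0) === 4294967295); // true
// And indeed 2 * 2147483647 === 4294967295 - 1:
console.log(2 * 2147483647 === 4294967295 - 1); // true
```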


Why my two's complement gives me a completely different result in javascript

I'm currently computing a checksum over a byte buffer, reading 32 bits at a time. I have to calculate two checksums: a 32-bit unsigned sum and a 32-bit unsigned XOR over every component of the byte buffer (except some locations). The sum works as expected, but the XOR gives me a weird value.
The value I get from the XOR is -58679487, and when applying two's complement I get 58679487, but when converting it to a hex value it is 0x037F60BF, while I'm looking for 0xFC809F41. If I place the initial XOR value (-58679487) in rapidtables and convert it from dec to hex, it displays the correct two's complement value in hex. What am I doing wrong?
i = startAddress;
while (i < buf.length) {
  if (i !== chk1 && i !== chk2 && i !== chk3 && i !== chk4) {
    file32Sumt += buf.readUint32BE(i);
    file32Xort ^= buf.readUint32BE(i);
  } else {
    console.log('cks location.' + buf.readUint32BE(i).toString(16));
  }
  i += 4;
}
// two's complement
console.log((~file32Sumt + 1).toString(16));
console.log((~file32Xort + 1).toString(16));
I already did the two's complement using the bitwise NOT operator (~) and then adding 1, but it seems it's not working. I also tried Math.abs(file32Xort) but got the same result.
Don't negate. Use this:
console.log((file32Xort >>> 0).toString(16));
file32Xort has the right value already, but as a signed 32-bit integer. What you need here is the unsigned interpretation of the same value. number >>> 0 is a simple way to do that reinterpretation.
Additionally, I think you should write
file32Sumt = (file32Sumt + buf.readUint32BE(i)) >>> 0;
.. or something to that effect, this time using a bitwise operator (it doesn't have to be an unsigned right shift, but I think it makes sense in this context) to prevent the sum from becoming too large (by limiting it to 32 bits) and potentially exhibiting floating-point rounding. The calculation of file32Xort already uses a bitwise operator, so it doesn't need an extra one; only the reinterpretation as unsigned at the end.
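A minimal sketch of the difference, using the XOR value from the question as the example:

```javascript
// -58679487 is the signed 32-bit reading of the bit pattern 0xFC809F41.
const signedXor = -58679487;
// >>> 0 reinterprets the same 32 bits as unsigned -- the desired result:
console.log((signedXor >>> 0).toString(16)); // "fc809f41"
// Negating (~x + 1) instead yields the magnitude, which was the bug:
console.log((~signedXor + 1).toString(16)); // "37f60bf"
```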

How to convert a BigInt to two's complement binary in JavaScript?

With "normal" numbers (32-bit range), I'm using the zero-fill right shift operator to convert to binary, which works with both positive and negative numbers (the result is the two's complement binary):
const numberToConvert = -100
(numberToConvert >>> 0).toString(2);
//Result is correct, in two's complement: '11111111111111111111111110011100'
But how can this be done with a negative BigInt?
If I do:
(-1000000000000000000n >>> 0).toString(2)
I get an error "Uncaught TypeError: Cannot mix BigInt and other types, use explicit conversions"
So then I try to use 0 as a BigInt:
(-1000000000000000000n >>> 0n).toString(2)
I get the following error: Uncaught TypeError: BigInts have no unsigned right shift, use >> instead
Doing so results in the non-two's-complement binary, with "-" prepended to it:
(-1000000000000000000n >> 0n).toString(2)
//Result is:'-110111100000101101101011001110100111011001000000000000000000'
How can I get the two's complement binary, of a negative bigint?
The bitwise operators are for 32-bit integers anyway. Why it doesn't work with BigInt is explained in JavaScript: The Definitive Guide, 7th Ed., David Flanagan, O'Reilly, p. 78:
Shift right with zero fill (>>>): This is the only one of the JavaScript bitwise operators that cannot be used with BigInt values. BigInt does not represent negative numbers by setting the high bit the way that 32-bit integers do, and this operator only makes sense for that particular two’s complement representation.
Also note that it looks like it is giving you two's complement, but in fact the negative number is converted to a 32-bit unsigned integer and then printed as binary, giving you the impression that it is two's complement:
console.log(-100 >>> 0); // => 4294967196
The two's complement has this property:
You have a number, say 123, which is 01111011 in 8 bit binary, and you want the negative number of that, which is -123.
Two's complement says: take the answer you want, treat it as a positive number, and add it to the original number 123; you will get all 0's plus the overflow of the 8-bit number.
As an example, treating everything as positive, 123 + theAnswerYouWant is 01111011 + 10000101, which is exactly 00000000 with an overflow, i.e. 100000000 (note the extra 1 in front). In other words, you want 256 - 123, which is 133, and if you render 133 as 8 bits, that's the answer you want.
As a result, you can take 2^8, subtract the original number, treat the result as a positive number, and display it using .toString(2), which you already have.
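The 2^8 trick from the paragraph above, sketched for 8 bits:

```javascript
// Two's complement of 123 in 8 bits: 2^8 - 123 = 133, and 133 rendered
// in binary is the 8-bit pattern of -123.
const complement = 2 ** 8 - 123;
console.log(complement); // 133
console.log(complement.toString(2)); // "10000101"
```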
The following is for 64 bits:
function getBinary(a, nBits) {
  [a, nBits] = [BigInt(a), BigInt(nBits)];
  if ((a > 0 && a >= 2n ** (nBits - 1n)) || (a < 0 && -a > 2n ** (nBits - 1n))) {
    throw new RangeError("overflow error");
  }
  return a >= 0
    ? a.toString(2).padStart(Number(nBits), "0")
    : (2n ** nBits + a).toString(2);
}
console.log(getBinary(1000000000000000000n, 64));
console.log(getBinary(-1000000000000000000n, 64));
console.log(getBinary(-1, 64));
console.log(getBinary(-2, 64));
console.log(getBinary(-3, 64));
console.log(getBinary(-4, 64n)); // trying the nBits as a BigInt as a test
console.log(getBinary(2n ** 63n - 1n, 64));
console.log(getBinary(-(2n ** 63n), 64));
// console.log(getBinary(2n ** 63n, 64)); // throw Error
// console.log(getBinary(-(2n ** 63n) - 1n, 64)); // throw Error
Note that you don't have to pad it when a is negative, because, for example, if it is 8 bits, the number being displayed is anywhere from 11111111 to 10000000, which is always 8 bits.
Some more details:
You may already know that ones' complement is simply flipping the bits (0 to 1, and 1 to 0). Another way to think of it: add the two numbers together and the result is all 1s.
The usual way two's complement is described is: flip the bits, then add 1. You see, if you start with 11111111 and subtract 01111011 (which is 123 decimal), you get 10000100, and that is exactly the same as flipping the bits. (This actually follows from the above: adding them gives all 1s, so subtracting one of them from all 1s gives the other.)
Well, if you start with 11111111, subtract that number, and then add 1, isn't that the same as taking 11111111, adding 1, and then subtracting that number? 11111111 plus 1 is 100000000 (note the extra 1 in front) -- that's exactly starting with 2^n, where n is the bit width, and then subtracting the number. So you can see why the property at the beginning of this post holds.
In fact, two's complement is designed for exactly this purpose: if we want to compute 2 - 1, we only need to treat the "two's complement" values as positive numbers and add them with the processor's ordinary add circuitry: 00000010 plus 11111111. We get 00000001 with a carry (the overflow). If we handle the overflow correctly by discarding it, we get the answer: 1. If we used ones' complement instead, we couldn't use the same addition circuitry: 00000010 + 11111110 gives 00000000, which is 0, not 1.
Another way to think about it: if a car's odometer reads 000002 miles, how do you subtract 1 from it? Well, if you represent -1 as 999999, you just add 999999 to the 2 and get 1000001, but the leftmost 1 does not show on the odometer, so it now reads 000001. In decimal, representing -1 as 999999 is ten's complement. In binary, representing -1 as 11111111 is two's complement.
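The odometer analogy can be checked numerically; masking to 8 bits plays the role of the digits falling off the left edge:

```javascript
// The low 8 bits of -123 equal those of 256 - 123 = 133.
console.log((-123 & 0xFF) === 256 - 123); // true
// Adding the complement to the original overflows to 0 in 8 bits:
console.log((123 + 133) & 0xFF); // 0
```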
Two's complement only makes sense with fixed bit lengths. Numbers are converted to 32-bit integers (this is an old convention from back when javascript was messier). BigInt doesn't have that kind of conversion as the length is considered arbitrary. So, in order to use two's complement with BigInt, you'll need to figure out what length you want to use then convert it. Conversion to two's complement is described many places including Wikipedia.
Here, we use the LSB to MSB method since it's pretty easy to implement as string processing in javascript:
const toTwosComplement = (n, len) => {
  // `n` must be an integer
  // `len` must be a positive integer greater than the bit length of `n`
  n = BigInt(n);
  len = Number(len);
  if (!Number.isInteger(len)) throw '`len` must be an integer';
  if (len <= 0) throw '`len` must be greater than zero';
  // If non-negative, a straight conversion works
  if (n >= 0) {
    n = n.toString(2);
    if (n.length >= len) throw 'out of range';
    return n.padStart(len, '0');
  }
  n = (-n).toString(2); // make positive and convert to bit string
  if (!(n.length < len || n === '1'.padEnd(len, '0'))) throw 'out of range';
  // Start at the LSB and work up. Copy bits up to and including the
  // first 1 bit, then invert the remaining bits.
  let invert = false;
  return n.split('').reverse().map(bit => {
    if (invert) return bit === '0' ? '1' : '0';
    if (bit === '0') return bit;
    invert = true;
    return bit;
  }).reverse().join('').padStart(len, '1');
};
console.log(toTwosComplement( 1000000000000000000n, 64));
console.log(toTwosComplement(-1000000000000000000n, 64));
console.log(toTwosComplement(-1, 64));
console.log(toTwosComplement(2n**63n-1n, 64));
console.log(toTwosComplement(-(2n**63n), 64));

JavaScript XOR ^ with 0 returns bad result

I'm using the binary XOR operator ^ with two variables, like this:
var v1 = 0;
var v2 = 3834034524;
var result = v1 ^ v2;
The result is -460932772.
Do you have any idea why?
Thank you
This is expected behavior: these are signed 32-bit numbers.
Just truncate the result to an unsigned integer:
var result = (v1 ^ v2) >>> 0;
3834034524, as a 32bit unsigned integer is hex E486B95C or binary 11100100100001101011100101011100. Notice that the most significant (leftmost) bit is set. This is the sign bit on 32bit signed integers.
There, that bit pattern translates to decimal -460932772. The XOR operation is forcing the result into signed integers.
Additional info: a 32bit signed integer can handle values from -2147483648 to +2147483647 (which your original value exceeded and it thus wrapped around). 32bit unsigned integers handle values from 0 to +4294967295. JavaScript is a dynamically typed language and the values may change types as needed. The number may become a floating point value, or bitwise operations may turn it into an integer, or it could become a string. There are some ways to use specific datatypes in recent versions of JavaScript, but this is not something you'd do with simple calculations.
The ToInt32 operation does not preserve the sign - it casts your number to a signed 32-bit representation. Since 3834034524 is larger than 2^31, it overflows and results in a negative integer.
  0          (base 10)  --ToInt32-->  00000000000000000000000000000000 (base 2)
^ 3834034524 (base 10)  --ToInt32-->  11100100100001101011100101011100 (base 2)
= -460932772 (base 10) <-FromInt32--  11100100100001101011100101011100 (base 2)
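The ToInt32 wrap-around can be observed directly; any bitwise operation (here `| 0`) forces the conversion:

```javascript
// 3834034524 exceeds 2^31 - 1, so ToInt32 wraps it by subtracting 2^32:
console.log(3834034524 | 0); // -460932772
console.log(3834034524 - 2 ** 32); // -460932772
// >>> 0 recovers the unsigned view of the same bits:
console.log(-460932772 >>> 0); // 3834034524
```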

What does -~ before .indexOf() mean?

I am looking at SocketIO source code and it has this statement:
if (-~manager.get('blacklist').indexOf(packet.name)) {
What does -~ shorthand mean here?
It appears to be a trick for:
if(manager.get('blacklist').indexOf(packet.name) !== -1)
As mentioned by others, ~ is bitwise negation, which flips the binary digits: 00000001 becomes 11111110, for example, or in hexadecimal, 0x01 becomes 0xFE.
-1 as a signed int32 (which is what all bitwise operators return, other than >>>, which returns an unsigned int32) is represented in hex as 0xFFFFFFFF. ~(-1) flips the bits, giving 0x00000000, which is 0.
The minus simply numerically negates the number. As zzzBov mentioned, in this case it does nothing.
-~(-1) === 0
And
~(-1) === 0
The code could be changed to:
if(~manager.get('blacklist').indexOf(packet.name))
But, in my opinion, characters aren't at such a premium, so the longer version, which is arguably more readable, would be better. Implementing a contains method would be better still; this kind of optimization is best left to a JavaScript compiler or compressor.
Bitwise inversion.
~0 == 0xFFFFFFFF (as a 32-bit pattern) == -1
~1 == 0xFFFFFFFE
The minus is arithmetic negation. So the result is 0 if indexOf failed (returned -1).
The two operators are not a shorthand form of anything. ~ is bitwise negation, and - is standard negation.
~foo.indexOf(bar) is a common shorthand for foo.contains(bar). Because the result is used in an if statement, the - sign immediately after is completely useless and does nothing of consequence.
-~ together is a means to add 1 to a number. It's generally not useful and would be better expressed as + 1, unless you're competing in code golf where you're not allowed to use the digit 1.
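A sketch of how `-~` behaves on `indexOf` results:

```javascript
const list = ["a", "b", "c"];
// indexOf returns -1 when not found; ~(-1) is 0, which is falsy.
console.log(-~list.indexOf("b")); // 2 (truthy: found at index 1)
console.log(-~list.indexOf("z")); // 0 (falsy: not found)
// In general, -~x === x + 1 for integers in 32-bit range:
console.log(-~41); // 42
```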

Fastest way to create this number?

I'm writing a function to extend a number with sign to a wider bit length. This is a very frequently used action in the PowerPC instruction set. This is what I have so far:
function exts(value, from, to) {
  return (value | something_goes_here);
}
value is the integer input, from is the number of bits that the value is using, and to is the target bit length.
What is the most efficient way to create a number that has to - from bits set to 1, followed by from bits set to 0?
Ignoring the fact that JavaScript has no 0b number syntax, for example, if I called
exts(0b1010101010, 10, 14)
I would want the function to OR the value with 0b11110000000000, returning a sign-extended result of 0b11111010101010.
A number containing p one bits followed by q zero bits can be generated via
((1<<p)-1)<<q
thus in your case
((1<<(to-from))-1)<<from
or much shorter
(1<<to)-(1<<from)
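Plugging that mask into the `exts` function from the question (a sketch; it assumes `to` is at most 31 so the 32-bit operators apply, and it only ORs the mask in when the sign bit of the narrower value is set):

```javascript
// Sign-extend `value` from `from` bits to `to` bits.
function exts(value, from, to) {
  const mask = (1 << to) - (1 << from); // (to - from) ones, then `from` zeros
  return value & (1 << (from - 1)) ? value | mask : value;
}
console.log(exts(0b1010101010, 10, 14).toString(2)); // "11111010101010"
console.log(exts(0b0010101010, 10, 14).toString(2)); // "10101010" (positive: unchanged)
```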
If you have the number 2^q (= 1 shifted left by q) represented as an integer of width p + q bits, it has the representation:

0...0 1 0...0   (p-1 zeros, a single 1, then q zeros)

Then 2^q - 1 has the representation:

0...0 1...1   (p zeros, then q ones)

which is exactly the opposite of what you want. So just flip the bits.
Hence what you want is NOT((1 LEFT SHIFT by q) - 1)
= ~((1 << q) - 1) in C notation
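In JavaScript, the same NOT-based mask looks like this; viewed unsigned, it is (32 - q) ones followed by q zeros:

```javascript
const q = 10;
const mask = ~((1 << q) - 1); // clear the low q bits, set the rest
console.log((mask >>> 0).toString(2)); // 22 ones followed by 10 zeros
```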
I am not overly familiar with binary mathematics in JavaScript... But if you need to OR a number with 0b11110000000000, then I assume you would just convert that to decimal (which would get you 15360), and do value | 15360.
Relevant info that you may find useful: parseInt("11110000000000", 2) converts a binary number (specified as a string) to a decimal number, and (15360).toString(2) converts a decimal number (15360 in this case) to a binary number (the result is a string).
Revised solution
There's probably a more elegant and mathematical method, but here's a quick-and-dirty solution:
// p = number of 1 bits, q = number of 0 bits (assumed defined elsewhere)
var S = "";
for (var i = 0; i < p; i++)
  S += "1";
for (i = 0; i < q; i++)
  S += "0";
S = parseInt(S, 2); // convert to decimal
