Getting least significant bit in JavaScript

I am trying to get the least significant bit of a number in JavaScript.
I have the following code:
let lsb = (parseInt("110", 2) & 0xffff);
By my understanding, the least significant bit of 110 is 110 as it is the right-most set bit.
However, the code above returns '6', which is the total value of 110 and not the least significant bit.
How can I get the least significant bit?

I take it from your example that you are looking for the lowest set bit, not the least significant bit.
What you're looking for is a classic bitwise hack.
We can do this by exploiting the way negative numbers are represented (two's complement):
var lowestSetBit = value & (-value);
If you are actually looking for the least significant bit, then you can just mask it off directly:
var leastSignificantBit = value & 1;
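To see the difference between the two on the question's own input, here is a minimal sketch (plain JavaScript, assuming nothing beyond the snippets above):
var value = parseInt("110", 2); // 6, i.e. binary 110
console.log(value & -value);    // 2 - the lowest set bit (binary 10)
console.log(value & 1);         // 0 - the least significant bit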

The least significant bit is the rightmost bit, not the rightmost bit that's set. To get that, AND with 1.
let lsb = parseInt("110", 2) & 1;

From https://en.wikipedia.org/wiki/Least_significant_bit:
least significant bit (LSB) is the bit position in a binary integer
giving the units value, that is, determining whether the number is
even or odd
So it's easy:
let lsb = parseInt("110", 2) & 1
or even this:
let lsb = parseInt("110", 2) % 2
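One caveat worth adding (not from the quote above, just standard JavaScript semantics): the two expressions differ for negative numbers, because % keeps the sign of the dividend while & 1 operates on the two's complement bits:
var n = -3;
console.log(n % 2); // -1
console.log(n & 1); // 1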

Finding the least significant bit of a number can easily be done by:
someNumber & 1
or in your specific case:
let lsb = parseInt("110", 2) & 1;
This works by masking every bit to zero except for the least significant bit, which is ANDed with the 1.
For example, let's have our input number be 21
21 & 1
Is the same as:
10101
& 00001
-------
00001 // => returns 1 since the last bit is turned on

Related

Big numbers and floating point numbers in JavaScript

I have this Java code:
nextDouble = 0.2515933907977884;
long numerator = (long) (nextDouble * (1L << 53));
I would like to be able to produce the same output this line produces in Java but within JavaScript.
nextDouble = 0.2515933907977884;
const numerator = nextDouble * (1 << 53);
Has anybody got an idea for how to replicate a long within JavaScript? I know there is BigInt in JavaScript, but the thing is, it doesn't support floating point numbers, so I am a bit stuck on what to do. Does anybody know any interesting libraries that could solve this issue?
Thank you in advance!
The problem is what 1<<53 does in javascript. It doesn't do what you think it does.
Here, try this in the console:
1<<30
> 1073741824
// okay.. looks good
1<<31
> -2147483648
// a negative num.. wha??
1<<32
> 1
// WHAA????????
1<<53
> 2097152
// That seems VERY low to me
1<<21
> 2097152
// How is that the same? as 1<<53??
numbers in javascript are forced into being doubles, and doing bit shifts on a double is utterly ridiculous. Javascript nevertheless lets you, because, well, javascript. When you do silly things in javascript, javascript will give you silly answers - such as 1<<32 being 1*. Arguably, code that cannot reasonably be interpreted as having any particular meaning should be answered with a clear error and not a wild stab in the dark, but that's just how javascript is. The usual way to deal with this crazy behaviour is to never ask javascript silly things.
You may be wondering 'but how is asking to bit shift 1 by 53 positions "crazy"?' - and the answer is that bit shifts, given that they make no sense on doubles, are interpreted as: "You wish to emulate 32-bit signed int behaviour", and that is exactly what javascript does, notably including the weirdish java/C-ism that only the bottom 5 bits of the number on the RHS count. In other words, <<32 is the same thing as <<0 - after all, the bottom 5 bits of 32 are all 0. Said differently: take the right-hand-side number, divide it by 32, toss the result, keep the remainder ('modulo'). 32 divided by 32 leaves a remainder of 0. 53 divided by 32 leaves a remainder of 21, and that's why 1<<53 in javascript prints 2097152.
So, in javascript your code is effectively doing the double multiplied by 2 to the 21st power, or theDouble * 2097152, whereas in java it is doing the double multiplied by 2 to the 53rd power, or theDouble * 9007199254740992.
Rather obviously then your answers are wildly different.
The fix seems trivial. 1<<53 may look like a nice way to convey the notion of 2 to the 53rd power - in bits, a 1 followed by 53 zeroes - but as syntax goes it just does not work that way in javascript. You can't use that syntax for this purpose. Try literally 9007199254740992.
var d = 0.2515933907977884;
const n = d * 9007199254740992;
n
> 2266151802091599
so that works.
If you have a need to derive the value 9007199254740992 from the value 53:
Math.pow(2, 53)
> 9007199254740992
note that you're dancing on the edge of disaster here. standard IEEE doubles use 53 bits for the significand (mantissa), so you're at the very edges. Soon you'll get into the territory where 'x + 1' is equal to 'x' because the gap between representable numbers is larger than 1. You'll need to get cracking on BigInt if you want to move away from the precipice.
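To make the precipice concrete, a quick console check (nothing here beyond standard JavaScript):
9007199254740992 + 1 === 9007199254740992
> true
// 2^53 + 1 is not representable as a double, so the + 1 is silently lost
Number.MAX_SAFE_INTEGER
> 9007199254740991
// 2^53 - 1: the last integer before the gaps begin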
*) It is specced behaviour. But surely you agree this is highly surprising. How many people do you know that just know off-hand that javascript's << is specced to convert to a 32-bit signed integer, take the RHS modulo 32, then operate, and then convert back to double afterwards?

Unexpected negative value in Int32Array

const x = new Int32Array(1);
x[0] = 699044815921;
console.log(x[0]);
-1034853327
Can anyone explain why there is a negative number?
Int32Array allows 32 bits per value, with the 32nd bit (from the right) reserved to specify the sign of the number. The number you're trying to fit is (699044815921).toString(2).length == 40 bits long, so the 8 leftmost bits are discarded, the 32nd bit of what remains is interpreted as a sign bit, and you get what you get as a result.
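You can reproduce the exact value without an Int32Array; this sketch just mimics the same 32-bit truncation by hand:
const v = 699044815921;
console.log(v | 0); // -1034853327: any bitwise op converts its operand to a signed 32-bit integer
// the same thing spelled out:
const low32 = v % 2 ** 32;                                 // keep only the low 32 bits
const signed = low32 >= 2 ** 31 ? low32 - 2 ** 32 : low32; // reinterpret bit 31 as the sign
console.log(signed); // -1034853327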

Why bitwise shift acts differently when sequenced?

Why are there different results of bitwise left shift?
1 << 32; // 1
1 << 31 << 1; // 0
That's because of
Let shiftCount be the result of masking out all but the least significant 5 bits of rnum, that is, compute rnum & 0x1F.
of how the << operation is defined. See http://www.ecma-international.org/ecma-262/6.0/#sec-left-shift-operator-runtime-semantics-evaluation
So according to it, 32 & 0x1F equals 0,
so 1 << 32 is equivalent to 1 << 0, which is basically a no-op,
whereas two consecutive shifts, by 31 and then by 1, actually perform both calculations.
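Traced out step by step in the console (plain JavaScript, nothing assumed):
const a = 1 << 31; // -2147483648: the bit lands in position 31, which is the sign bit
const b = a << 1;  // 0: the only set bit is shifted out of the 32-bit range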
JavaScript defines a left-shift by 32 to do nothing, presumably because it smacks up against the 32-bit boundary. You cannot actually shift anything more than 31 bits across.
Your approach of first shifting 31 bits, then a final bit, works around JavaScript thinking that shifting so much doesn't make sense. Indeed, it's pointless to execute those calculations when you could just write = 0 in the first place.
The reason is that the shift count is considered modulo 32.
This itself happens because (my guess) this is how most common hardware for desktops/laptops works today (x86).
This itself happens because... well, just because.
These shift limitations are indeed in some cases annoying... for example, in my opinion it would have been better to have just one shift operator that works in both directions depending on the sign of the count (like ASH works in Common Lisp).

Can anyone explain this process with converting decimal numbers to Binary

I have looked around the internet for a way to convert decimal numbers into binary numbers, and I found this piece of code in some forum.
var number = prompt("Type a number!"); // asks the user to input a number
var converted = []; // creates an empty array
while (number >= 1) { // loop while the number is greater than or equal to 1
    converted.unshift(number % 2); // the remainder of dividing by 2 (0 or 1) becomes the next binary digit, inserted at the front
    number = Math.floor(number / 2); // divide the number by 2, rounding down, then start over
}
console.log(converted);
I'm not understanding everything completely, so I made some comments about what I think the pieces of code do. Can anyone explain in more detail, or is my understanding of the code correct?
This code is based on a technique for converting decimal numbers to binary.
If I take a decimal number, say 57, I divide it by two and keep the remainder, which will be either 0 or 1. Once you divide 57 all the way down to 0, the remainders spell out the binary number:
57 / 2 = 28 r 1
28 / 2 = 14 r 0
14 / 2 = 7 r 0
7 / 2 = 3 r 1
3 / 2 = 1 r 1
1 / 2 = 0 r 1
I definitely recommend writing it out on paper. The remainders come out lowest binary digit first, so in the order produced they read 100111; read from the last remainder back to the first, they read 111001, which is the answer.
In other words, you need to reverse the order to make it correct. array.unshift() does this as you go by inserting each digit at the front, or you could use array.push() and then array.reverse() after the while loop. unshift() is probably the better approach.
57 in decimal is equal to 111001, which you can check.
BTW, this algorithm works for other target bases too: just divide by the base you want instead of 2, as in the sketch below. Or at least as far as I know.
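To make that concrete, here is a minimal sketch generalizing the loop above (toBase is a hypothetical name, not from the question; digits come back as an array of numbers):
function toBase(n, base) {
    var digits = [];
    while (n >= 1) {
        digits.unshift(n % base); // remainder is the next digit, lowest first
        n = Math.floor(n / base);
    }
    return digits;
}
toBase(57, 2);  // [1, 1, 1, 0, 0, 1]
toBase(57, 16); // [3, 9], since 0x39 === 57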
I hope this helped.
It seems like you've got the gist of it down.
Let's start with a random number:
6 === 110b
Now let's see what the above method does:
The number is greater than or equal to 1, so we add its last bit to the output:
6 % 2 === 0 // output: [0]
After dividing the number by two - which is essentially just bit-shifting the whole thing to the right - we're now working with 11b (from the original 110b). 11b === 3, as you'd expect.
You can alternatively think of number % 2 as a bit-wise AND operation (number & 1):
  110
& 001
-----
  000
The rest of the loop simply carries the same operation out as long as needed: find the last bit of the current state, add it to the output, shift the current state.
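Here is the whole loop traced for the same input, 6 (a sketch of the question's code, with the bitwise AND standing in for the modulo):
let n = 6;
const out = [];
while (n >= 1) {
    out.unshift(n & 1);    // same as n % 2 for non-negative integers
    n = Math.floor(n / 2); // same as n >> 1 here
}
console.log(out); // [1, 1, 0], i.e. 110b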

Using bitwise operators in javascript

I am creating a bitmask in javascript. It works fine for bits 0 through 14. When I set only bit fifteen (of the upper word) to 1, it yields the integer value -2147483648 instead of 2147483648. I can do a special-case hack here by returning hardcoded 2147483648 for bit fifteen, but I would like to know the correct way of doing it.
Sample code:
function join_bitmap(hex_lower_word, hex_upper_word)
{
    var lower_word = parseInt(hex_lower_word, 16);
    var upper_word = parseInt(hex_upper_word, 16);
    return (0x00000000ffffffff & ((upper_word << 16) | lower_word));
}
Above code returns -2147483648 when hex_lower_word is "0x0" and hex_upper_word is "0x8000" instead of 2147483648
The reason for this is because Javascript's bit shift operations use signed 32-bit integers. So if you do this:
0x1 << 31 // sets bit 31, i.e. the 15th bit of the high word
It will set the sign bit to 1, which means negative.
On the other hand, if instead of bit shifting you multiply by powers of two, you'll get the result you want:
1 * Math.pow(2, 31)
The reason is, you are setting the sign bit...
2147483648 is 1 followed by 31 zeros in binary...
As you are doing a bitwise operation, the output is always a signed 32 bit number, which makes the 32nd bit the sign bit, so you get a negative number...
Update
(upper_word * Math.pow(2, 16))
will give positive 2147483648.
But, you still have the OR operation, which puts us back to square one...
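One way to finish that thought is to avoid bitwise operators entirely and use plain arithmetic for both halves; a sketch under that assumption (join_bitmap_arith is a hypothetical name; the addition is safe because the low 16 bits of upper_word * 0x10000 are always zero, so nothing can carry):
function join_bitmap_arith(hex_lower_word, hex_upper_word)
{
    var lower_word = parseInt(hex_lower_word, 16);
    var upper_word = parseInt(hex_upper_word, 16);
    return upper_word * 0x10000 + lower_word; // never passes through signed 32-bit conversion
}
join_bitmap_arith("0x0", "0x8000"); // 2147483648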
As previous answers explained, the bitwise operators are 32 bit signed. Thus, if at any point along the way you set bit 31, things will go badly wrong.
In your code, the expression
(upper_word<<16) | lower_word)
is evaluated first because of the parentheses, and since upper_word has the top bit set, you will now have a negative number (0x80000000 = -2147483648)
The solution is to make sure that you do not shift a 1 into bit 31 - so you have to set bit 15 of the upper word to zero before shifting:
mask15 = 0x7fff;
((upper_word&mask15)<<16|lower_word)
This will take care of "numbers that are too big become negative", but it won't solve the problem completely - it will just give the wrong answer! To get back to the right answer, you need to set bit 31 in the answer, iff bit 15 was set in upper_word:
bit15 = 0x8000;
bit31 = 0x80000000;
answer = answer + ((upper_word & bit15) ? bit31 : 0);
The rewritten function then becomes:
function join_bitmap(hex_lower_word, hex_upper_word)
{
var lower_word = parseInt(hex_lower_word, 16);
var upper_word = parseInt(hex_upper_word, 16);
var mask15 = 0x7fff;
var bit15 = 0x8000;
var bit31 = 0x80000000;
return (((upper_word & mask15) << 16) | lower_word) + ((upper_word & bit15) ? bit31 : 0);
}
There isn't just a single "hard coded special case" - there are 2 billion or so. This takes care of all of them. (One subtlety: the final addition has to stay outside any bitwise operation - wrapping the sum in 0xffffffff & ... would push the result back through the signed 32-bit conversion and make it negative all over again.)
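A quick sanity check of the rewritten function on the values from the question:
join_bitmap("0x0", "0x8000")
> 2147483648
join_bitmap("0xffff", "0x7fff")
> 2147483647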
