I am using bitwise operations in JavaScript, but I have noticed something that appears inconsistent.
Right now, I'm using the XOR (^) operation. When I do '0101'^'0001', I get 100, which makes sense to me since 0100 in binary is 4.
But when I do '10001'^'01111', I get 9030, when I think I should get 11110.
The format is, to the best that I can tell, the same; only the strings are different.
console.log(5^1); //4
console.log('0101'^'0001'); // 100 = 4
console.log(17^15); //30
console.log('10001'^'01111'); //9030...why? shouldn't it be '11110'?
Why is this code producing this result?
Now, if I do this:
console.log(('0b'+'10001')^('0b' + '01111')); //30
Why did I have to add '0b' to specify that the strings were binary sequences when doing bitwise ops on 17 and 15, but not with 5 and 1?
As has been pointed out already, you are not using binary numbers. Use the 0b prefix for binary numbers and toString(2) to convert back:
console.log((0b10001 ^ 0b01111).toString(2));
11110
The first example only appears to work because 1 in decimal is the same as 1 in binary. The result is 100 (decimal) though, not 4: JavaScript doesn't remember which base a number was written in.
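If the bit strings are only known at runtime (so you can't write a 0b literal), parseInt with radix 2 does the same job — a minimal sketch using only standard JavaScript:
const a = parseInt('10001', 2); // 17
const b = parseInt('01111', 2); // 15
console.log((a ^ b).toString(2)); // "11110"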
You are using the ^ operator on strings. When you use any numeric operator on a string, JavaScript implicitly converts the string to a number. Adding 0b tells JavaScript to treat the number as binary (base 2).
Otherwise, by default the strings are converted to decimal (base 10) values.
To do operations on binary values represented as strings, you have to convert them to base-2 numeric values first.
You can verify in a calculator that 10001 ^ 1111 = 9030 when both numbers are read as decimal.
In binary, 1111 = 15 and 10001 = 17, and 17 ^ 15 = 30, which is 11110 in binary.
101 XOR 1 is a special case: in decimal, 101 XOR 1 = 100. In binary, 101 = 5, and 5 XOR 1 = 4, which is written out in binary as 100.
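To see the two readings side by side, a quick sketch:
console.log('101' ^ '1');  // 100 — strings coerced to the decimal numbers 101 and 1
console.log(0b101 ^ 0b1);  // 4   — actual binary 101 (5) XOR binary 1 (1)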
Related
I am a student studying programming.
As far as I know, JavaScript stores numbers as floats.
However, the bitwise operators in JavaScript behave as if the numbers were integers.
For instance,
> 1 | 2 // -> 3
> 2 << 4 // -> 32
How is it possible?
I looked at the official documentation (MDN web docs), but I cannot find the reason.
The bitwise operators in JavaScript convert numbers to 32-bit integers before performing the operation. This means that even though JavaScript stores numbers as floats, the bitwise operators treat them as 32-bit integers. That is why you can use the bitwise operators on numbers and get integer results.
As #rviretural_001 mentioned, JavaScript does some automatic conversions by spec. For example, in addition to converting IEEE 754 doubles to signed 32-bit integers:
0 == '0' // true
String to Integer (Shift Right)
'4' >> 1 // 2
Converting to Integer (Bitwise OR)
'1.3' | 0 // 1
This last one was used in asm.js for signed 32-bit integers.
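A small sketch of that ToInt32 behavior — values outside the 32-bit range wrap around, and fractions are dropped:
console.log((2 ** 31) | 0); // -2147483648 — wraps into the signed 32-bit range
console.log((2 ** 32) | 0); // 0 — 2^32 modulo 2^32
console.log(1.9 | 0);       // 1 — fraction dropped before the operation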
Just to contribute to the answers above: note that the conversion of a floating-point number to an integer is done by removing the fractional part of the number.
For example, if you have the number 4.5 and you use a bitwise operator on it, JavaScript will convert it to the integer 4 before performing the operation.
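For instance, a quick sketch of that truncation (note it truncates toward zero, which differs from Math.floor for negative values):
console.log(4.5 | 0);          // 4
console.log(-4.5 | 0);         // -4 — truncated toward zero
console.log(Math.floor(-4.5)); // -5 — floor rounds down instead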
This tutorial might help you to find out more https://www.programiz.com/javascript/bitwise-operators
Also note that performing bitwise operations on floating-point numbers can lead to unexpected results, because the value is first truncated to a 32-bit integer. So it's recommended to avoid using bitwise operators on floating-point numbers in JavaScript.
I was trying to repeat a character N times, and came across the Math.pow function.
But when I use it in the console, the results don't make any sense to me:
Math.pow(10,15) - 1 provides the correct result 999999999999999
But why does Math.pow(10,16) - 1 provide 10000000000000000?
You are producing results which exceed the Number.MAX_SAFE_INTEGER value, so they are no longer accurate to the unit.
This is related to the fact that JavaScript uses 64-bit floating point representation for numbers, and so in practice you only have about 16 (decimal) digits of precision.
Since the introduction of BigInt in ECMAScript, you can get an accurate result with that data type, although it cannot be used in combination with Math.pow. Instead, you can use the ** operator.
See how the use of number and bigint (with the n suffix) differs:
10 ** 16 - 1 // == 10000000000000000
10n ** 16n - 1n // == 9999999999999999n
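For reference, a quick sketch of the safe-integer boundary the example crosses:
console.log(Number.MAX_SAFE_INTEGER);   // 9007199254740991 (2^53 - 1)
console.log(10 ** 16 - 1 === 10 ** 16); // true — above 2^53, the -1 is lost to rounding
console.log(10n ** 16n - 1n);           // 9999999999999999n — BigInt stays exact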
Unlike many other programming languages, JavaScript does not define different types of numbers, like integers, short, long, floating-point etc.
JavaScript numbers are always stored as double-precision floating-point numbers, following the international IEEE 754 standard.
This format stores numbers in 64 bits, where the number (the fraction) is stored in bits 0 to 51, the exponent in bits 52 to 62, and the sign in bit 63.
Integers (numbers without a period or exponent notation) are accurate up to 15 digits.
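A quick illustration of that 15-digit limit:
console.log(999999999999999);  // 999999999999999 — 15 digits, exact
console.log(9999999999999999); // 10000000000000000 — 16 digits, rounded to the nearest double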
In one of my applications, in order to simplify logic / heavy DB stuff, I created a mechanism that relies on the JavaScript bitwise '&' operator. However, this seems to act weird on some occasions.
1 & 0 => 0; 11 & 00 => 0; 111 & 000 => 0; 111 & 100 => 100
Everything is OK so far, but when I try to do this:
1100 & 0011 => 8 ;
1100 & 1111 => 1092
I get weird results instead of 0 and 1100. I found out that this happens because JavaScript interprets the numbers in a specific base, but I wonder if there is a solution to this.
As per your question, you are performing bitwise operations between decimal numbers, not binary numbers. In JavaScript a binary number is written with the prefix 0b; e.g., 2 is represented as 0b10.
Another thing to note is that JavaScript returns a decimal number as the result of a bitwise operation. Similarly, a hexadecimal number is written using the prefix 0x.
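A short sketch with 0b literals, matching the question's examples:
console.log(0b1100 & 0b0011);               // 0 — result is printed in decimal
console.log((0b1100 & 0b1111).toString(2)); // "1100"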
When you type 1100 you aren't producing the binary representation of 12; you're writing the decimal number one thousand one hundred. (Also, when a numeric literal is prefixed with a 0, JavaScript interprets it as an octal number in non-strict mode.)
In short, make sure you give the correct decimal numbers to get the proper binary representation.
JavaScript doesn't act weird.
You're typing decimal numbers, which translate to:
decimal 1100 = binary 0000010001001100
decimal 0011 = binary 0000000000001011
(Strictly speaking, a literal with a leading 0 is parsed as octal in non-strict mode, so 0011 is really 9 = binary 1001; 1100 & 9 also happens to be 8.)
If you & them, you get
0000000000001000
which is 8
The same goes for 1100 & 1111: decimal 1111 is binary 0000010001010111, and ANDing gives 0000010001000100, which is 1092.
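If the bit patterns arrive as strings, one possible fix (a sketch) is to parse them as base 2 before masking:
const a = parseInt('1100', 2); // 12
const b = parseInt('1111', 2); // 15
console.log((a & parseInt('0011', 2)).toString(2)); // "0" — 12 & 3, as intended
console.log((a & b).toString(2));                   // "1100"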
Does anyone know why JavaScript's Number.toString function does not represent negative numbers correctly?
//If you try
(-3).toString(2); //shows "-11"
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
I am really curious why it doesn't work properly, or what the reason is that it works this way.
I've searched it but didn't find anything that helps.
Short answer:
The toString() function takes the decimal, converts it
to binary and adds a "-" sign.
A zero-fill right shift converts its left operand to an unsigned 32-bit
integer, which is why you see the two's complement bit pattern.
A more detailed answer:
Question 1:
//If you try
(-3).toString(2); //show "-11"
It's in the function .toString(). When you output a number via .toString():
Syntax
numObj.toString([radix])
If the numObj is negative, the sign is preserved. This is the case
even if the radix is 2; the string returned is the positive binary
representation of the numObj preceded by a - sign, not the two's
complement of the numObj.
It takes the decimal, converts it to binary and adds a "-" sign.
Base 10 "3" converted to base 2 is "11"
Add a sign gives us "-11"
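In code:
console.log((3).toString(2));  // "11"
console.log((-3).toString(2)); // "-11" — sign prepended, not two's complement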
Question 2:
// but if you fake a bit shift operation it works as expected
(-3 >>> 0).toString(2); // print "11111111111111111111111111111101"
A zero-fill right shift converts its left operand to an unsigned 32-bit integer (via ToUint32); the result of the operation is always an unsigned 32-bit integer.
The operands of all bitwise operators are converted to signed 32-bit
integers in two's complement format.
-3 >>> 0 (right logical shift) coerces its arguments to unsigned integers, which is why you get the 32-bit two's complement representation of -3.
http://en.wikipedia.org/wiki/Two%27s_complement
http://en.wikipedia.org/wiki/Logical_shift
var binary = (-3 >>> 0).toString(2); // coerced to uint32
console.log(binary);
console.log(parseInt(binary, 2) >> 0); // to int32
output is
11111111111111111111111111111101
-3
.toString() is designed to return the sign of the number in the string representation. See ECMAScript 2015, section 7.1.12.1:
If m is less than zero, return the String concatenation of the String "-" and ToString(−m).
This rule is no different for when a radix is passed as argument, as can be concluded from section 20.1.3.6:
Return the String representation of this Number value using the radix specified by radixNumber. [...] the algorithm should be a generalization of that specified in 7.1.12.1.
Once that is understood, the surprising thing is rather why it does not do the same with -3 >>> 0.
But that behaviour has actually nothing to do with .toString(2), as the value is already different before calling it:
console.log (-3 >>> 0); // 4294967293
It is the consequence of how the >>> operator behaves.
It does not help either that (at the time of writing) the information on MDN is not entirely correct. It says:
The operands of all bitwise operators are converted to signed 32-bit integers in two's complement format.
But this is not true for all bitwise operators. The >>> operator is an exception to the rule. This is clear from the evaluation process specified in ECMAScript 2015, section 12.5.8.1:
Let lnum be ToUint32(lval).
The ToUint32 operation has a step where the operand is mapped into the unsigned 32-bit range:
Let int32bit be int modulo 2^32.
When you apply the above-mentioned modulo operation (not to be confused with JavaScript's % operator) to the example value of -3, you indeed get 4294967293.
As -3 and 4294967293 are evidently not the same number, it is no surprise that (-3).toString(2) is not the same as (4294967293).toString(2).
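A short sketch showing that the 32-bit pattern is the same and only the interpretation differs:
console.log((-3 >>> 0).toString(2));   // "11111111111111111111111111111101"
console.log((4294967293).toString(2)); // the same 32 bits
console.log(4294967293 | 0);           // -3 — ToInt32 maps it back into the signed range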
Just to summarize a few points here, if the other answers are a little confusing:
what we want to obtain is the string representation of a negative number in binary representation; this means the string should show a signed binary number (using 2's complement)
the expression (-3 >>> 0).toString(2), let's call it A, does the job; but we want to know why and how it works
had we used var num = -3; num.toString(2) we would have gotten -11, which is simply the unsigned binary representation of the number 3 with a negative sign in front, which is not what we want
expression A works like this:
1) (-3 >>> 0)
The >>> operation takes the left operand (-3), which is a signed integer, shifts the bits 0 positions to the right (so the bits are unchanged), and returns the unsigned number corresponding to these unchanged bits.
The bit sequence of the signed number -3 is the same bit sequence as that of the unsigned number 4294967293, which is what Node gives us if we simply type -3 >>> 0 into the REPL.
2) (-3 >>> 0).toString
Now, if we call toString on this unsigned number, we will just get the string representation of the bits of the number, which is the same sequence of bits as -3.
What we effectively did was say "hey toString, you have normal behavior when I tell you to print out the bits of an unsigned integer, so since I want to print out a signed integer, I'll just convert it to an unsigned integer, and you print the bits out for me."
Daan's answer explains it well.
toString(2) does not really convert the number to two's complement; instead it just does a simple translation of the number to its positive binary form, while preserving the sign.
Example
Assume the given input is -15:
1. The negative sign will be preserved
2. `15` in binary is 1111, therefore (-15).toString(2) gives output
-1111 (this is not in 2's complement!)
We know that the 2's complement of -15 in 32 bits is
11111111 11111111 11111111 11110001
Therefore, in order to get the binary form of -15, we can convert it to an unsigned 32-bit integer using the unsigned right shift >>> before passing it to toString(2) to print the binary form. This is the reason we do (-15 >>> 0).toString(2), which gives us 11111111111111111111111111110001, the correct binary representation of -15 in 2's complement.
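A round-trip sketch of that idea:
const bits = (-15 >>> 0).toString(2); // "11111111111111111111111111110001"
console.log(parseInt(bits, 2) | 0);   // -15 — the same 32 bits reinterpreted as signed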
var ddd = Math.random() * 16;
console.log((ddd & 3 | 8).toString(16));
Help me please. I don't understand how these operators (| and &) work, and why this code returns a-f symbols?
The expression ddd & 3 | 8 is doing bitwise arithmetic by taking the bitwise OR of 8 and the bitwise AND of ddd and 3. If you don't understand bitwise operations, you should consult an article explaining what they are.
The code can return characters in the range a-f because you passed in a radix parameter 16 to the Number.toString prototype method, which means that it will display the number in hexadecimal.
This picks a random real number from 0 to 15:
var ddd = Math.random() * 16;
For example, you might get 11.114714370026688.
ddd & 3
That's a bitwise AND of the result with the number 3. The first thing that does is take the number from ddd and convert it to an integer, because the bitwise operators aren't defined for floating point numbers. So in my example, it treats ddd as the whole number 11.
The next thing it does is perform an AND of the two numbers' binary representations. Eleven in binary is 1011 and three is 0011. When you AND them together, you get a binary number that is all zeroes except where there's a 1 in both numbers. Only the last two digits have 1's in both numbers, so the result is 0011, which is again equal to decimal 3.
| 8
That does a bitwise OR of the result so far (3) with the number 8. OR is similar to AND, but the result has 1's wherever there's a 1 in either number. Since three is still 0011 in binary and eight is 1000, the result is 1011 - which is back to decimal eleven.
In general, the above calculation sets the 8-bit (fourth from the right) to 1 and the 4-bit (third from the right) to 0, while leaving the two lowest bits alone. The end result is to take your original random number, which was in the range 0-15, and turn it into one of only four numbers: 8, 9, 10, or 11. So it's a very roundabout way of generating a random number between 8 and 11, inclusive. Math.floor(8 + Math.random()*4) would have done the same thing in a more straightforward manner.
It then prints the result out in hexadecimal (base 16), so you get 8, 9, a (which is ten in base 16), or b (which is eleven).
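Putting the steps together in one sketch (the intermediate values depend on the random draw, so the comments show one possible run):
const ddd = Math.random() * 16;   // e.g. 11.114714370026688
const masked = ddd & 3;           // truncate to 11, keep the two lowest bits -> 3
const result = masked | 8;        // set the 8-bit -> 11
console.log(result.toString(16)); // "b" — always one of "8", "9", "a", "b"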