JavaScript: Read 8 bytes to a 64-bit integer

I have a buffer object which contains eight bytes. These eight bytes should be interpreted as a 64-bit integer.
Currently I use the following algorithm:
var value = buff[0];
for (var i = 1; i < buff.length; i++) {
    value += buff[i] * Math.pow(2, 8 * i);
}
console.log(value);
This works, but I believe there are better ways (maybe using Uint64Array).
Unfortunately I cannot figure out how a Uint16Array could help me here.
Regards
Update:
// combines two 32-bit reads into one "64-bit" integer
// (bit shifts cannot reach past 32 bits in JavaScript, so the high
// word is scaled by 2^32 instead)
var bufInt = buf.readUInt32BE(0) * 4294967296 + buf.readUInt32BE(4);

JavaScript does not support 64-bit integers, because the native number type is a 64-bit double, giving only 53 bits of integer range.
You can create arrays of 32-bit numbers (i.e. Uint32Array), but even if there were a 64-bit version of those, there would be no way to copy its values into standalone variables without losing precision. (This answer predates BigInt and BigUint64Array, which now solve exactly this.)

Since Node.js v12.0.0, you can use buf.readBigUInt64LE (or buf.readBigUInt64BE), which returns a BigInt :))
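A minimal sketch, assuming Node.js 12+ and the sample timestamp buffer used in another answer below:
const buf = Buffer.from('0000016b40d8ea30', 'hex');
// Both readers return a BigInt, so no precision is lost:
console.log(buf.readBigUInt64BE(0)); // 1560161086000n
console.log(buf.readBigUInt64LE(0)); // same bytes read in the opposite byte order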

You can use node-int64 for 64-bit integer support:
var Int64 = require('node-int64');
var int64 = new Int64(buff);

As JavaScript doesn't readily support 64-bit integers, here is one solution that worked for me. Here I am converting an 8-byte Unix timestamp:
var inputString = "0000016b40d8ea30";
var buf = Buffer.from(inputString, 'hex');
var timestamp = parseInt(buf.toString("hex"), 16).toString();
console.log(timestamp); // 1560161086000
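Note that parseInt goes through a double, so values above Number.MAX_SAFE_INTEGER (2^53 - 1) silently lose precision. A hedged variant of the same idea using BigInt (assuming Node.js 10.4+ or a modern browser) keeps every bit:
var buf = Buffer.from("0000016b40d8ea30", 'hex');
var exact = BigInt("0x" + buf.toString("hex")); // exact for the full 64-bit range
console.log(exact.toString()); // 1560161086000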

Sadly, the correct answer is that JavaScript does not support 64-bit integers (at least not at the time of writing).
So trying to get the exact 64-bit integer stored in your 8 bytes into a single JS number-type variable will fail. Anyway.
Some options:
Exact bits from 0 to 52:
If you do not need the upper 11 bits of the 64-bit value and it's enough for you to deal with exact 53-bit integers, you can use this:
// packs up to 53 bits from two 32-bit reads into one "64-bit" integer
var bufInt = (buf.readUInt32BE(0) & 0x001FFFFF) * 4294967296 + buf.readUInt32BE(4);
(edited question)
"64-bit" int with possible loss of the low 11 bits:
Otherwise, if you need the overall big value of all 64 bits and are not interested in the exact values of up to 11 low bits (the rightmost 2-3 digits of a huge 64-bit value), you can use this:
// packs a 64-bit value from two 32-bit reads into one "64-bit" integer,
// with possible loss of correctness in the lower 11 bits
var bufInt = buf.readUInt32BE(0) * 4294967296 + buf.readUInt32BE(4);
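A quick sanity check of this approach (a sketch, reusing the sample buffer from the answer above; this value fits in 53 bits, so the result is exact):
var buf = Buffer.from('0000016b40d8ea30', 'hex');
console.log(buf.readUInt32BE(0) * 4294967296 + buf.readUInt32BE(4)); // 1560161086000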
For those interested in int64 (64-bit integer) support in JavaScript, BEWARE!
Look:
var x1 = 1 << 30;
var x2 = 1 << 31;
var x3 = 1 << 32;
var x4 = 1 << 33;
var a = 1240611072103715194;
var b = 1240611072103715193;
var c = 1240611072103700000;
alert(''
    + 'x1=' + x1 + ' (should =1073741824)\n'
    + 'x2=' + x2 + ' (should =2147483648)\n'
    + 'x3=' + x3 + ' (should =4294967296)\n'
    + 'x4=' + x4 + ' (should =8589934592)\n'
    + 'a=' + a + ' (should =1240611072103715194)\n'
    + 'a-b=' + (a-b) + ' (should =1)\n'
    + 'a-c=' + (a-c) + ' (should =15194)\n'
);
RESULT:
x1=1073741824 (should =1073741824)
x2=-2147483648 (should =2147483648)
x3=1 (should =4294967296)
x4=2 (should =8589934592)
a=1240611072103715000 (should =1240611072103715194)
a-b=0 (should =1)
a-c=15104 (should =15194)

There are some modules around that provide 64-bit integer support:
node-bigint
bignum (based on OpenSSL)
Maybe your problem can be solved using one of those libraries.


BigInteger to a Uint8Array of bytes

I need to get the bytes of a big integer in JavaScript.
I've tried a couple of big integer libraries, but the one that actually offered this function wouldn't work.
I am not quite sure how to implement this myself, given a string containing a large number, which is generally what the libraries give access to.
Is there a library that works and allows doing this?
Or is it actually not hard, and I am just missing something?
I was googling for a quick and elegant solution to this problem in JavaScript, but all I found was a conversion method based on an intermediate hex string, which is certainly suboptimal, and that code also didn't work for me, unfortunately. So I implemented my own code and wanted to post it as an answer to my own question, but found this one.
Explanation
First of all, I will answer the opposite question, since it is more illustrative.
Reading a BigInteger from a byte array
What is an array of bytes for us? It is a number in the base-256 numeral system, which we want to convert to the more convenient base-10 (decimal) system.
For instance, let's take an array of bytes
[AA][BB][CC][DD] (1 byte is 8 bits, or 2 hexadecimal digits).
Depending on the end we start from (see https://en.wikipedia.org/wiki/Endianness), we can read it as:
(AA*1 + BB*256 + CC*256^2 + DD*256^3) in little-endian
or (DD*1 + CC*256 + BB*256^2 + AA*256^3) in big-endian.
Let's use little-endian here. So, the number encoded by the array [AA][BB][CC][DD] is:
AA + BB*256 + CC*256^2 + DD*256^3
= 170 + 187*256 + 204*65536 + 221*16777216
= 170 + 47872 + 13369344 + 3707764736
= 3721182122
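The same arithmetic can be checked directly in JavaScript:
console.log(0xAA + 0xBB * 256 + 0xCC * 256 ** 2 + 0xDD * 256 ** 3); // 3721182122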
Writing a BigInteger to a byte array
To write a number into an array of bytes we perform the opposite operation: given a number in the decimal system, we find all of its digits in the base-256 system. Let's take the same number: 3721182122
To find its least significant byte (https://en.wikipedia.org/wiki/Bit_numbering#Least_significant_byte), we just divide it by 256. The remainder is that byte; the quotient carries the higher digits, so we divide the quotient by 256 again, and so on, until the quotient reaches 0:
3721182122 = 14535867*256 + 170
14535867 = 56780*256 + 187
56780 = 221*256 + 204
221 = 0*256 + 221
So, the result is [170][187][204][221] in decimal, [AA][BB][CC][DD] in hex.
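As code, that division loop looks like this (a plain-number sketch; the library version below handles numbers beyond 2^53):
let n = 3721182122;
const bytes = [];
while (n > 0) {
    bytes.push(n % 256);     // the remainder is the next least significant byte
    n = Math.floor(n / 256); // the quotient carries the higher digits
}
console.log(bytes); // [170, 187, 204, 221]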
Solution in JavaScript
Now, here is this algorithm encoded in NodeJS with the big-integer library.
const BigInteger = require('big-integer');

const zero = BigInteger(0);
const one = BigInteger(1);
const n256 = BigInteger(256);

function fromLittleEndian(bytes) {
    let result = zero;
    let base = one;
    bytes.forEach(function (byte) {
        result = result.add(base.multiply(BigInteger(byte)));
        base = base.multiply(n256);
    });
    return result;
}

function fromBigEndian(bytes) {
    return fromLittleEndian(bytes.reverse());
}

function toLittleEndian(bigNumber) {
    let result = new Uint8Array(32); // assumes the number fits in 32 bytes
    let i = 0;
    while (bigNumber.greater(zero)) {
        result[i] = bigNumber.mod(n256).toJSNumber();
        bigNumber = bigNumber.divide(n256);
        i += 1;
    }
    return result;
}

function toBigEndian(bigNumber) {
    return toLittleEndian(bigNumber).reverse();
}

console.log('Reading BigInteger from an array of bytes');
let bigInt = fromLittleEndian(new Uint8Array([170, 187, 204, 221]));
console.log(bigInt.toString());

console.log('Writing BigInteger to an array of bytes');
let bytes = toLittleEndian(bigInt);
console.log(bytes);
Benchmark
I have written a small benchmark for this approach. Anybody is welcome to modify it for their own conversion method and to compare it with mine:
https://repl.it/repls/EvenSturdyEquipment
Set "i" to be your BigInt's value. You can see the bytes by looking at "a" after running this:
i=11111n;n=1500;a=new Uint8Array(n);while(i>0){a[--n]=Number(i&255n);i>>=8n}
You can also extract the BigInt back out from the Uint8Array:
a.reduce((p,c)=>BigInt(p)*256n+BigInt(c))
I've got a version that works with the BigInt that's supported by the browser:
const big0 = BigInt(0);
const big1 = BigInt(1);
const big8 = BigInt(8);

function bigToUint8Array(big) {
    if (big < big0) {
        // represent negatives in two's-complement form by adding 2^bits
        const bits = (BigInt(big.toString(2).length) / big8 + big1) * big8;
        const prefix1 = big1 << bits;
        big += prefix1;
    }
    let hex = big.toString(16);
    if (hex.length % 2) {
        hex = '0' + hex;
    }
    const len = hex.length / 2;
    const u8 = new Uint8Array(len);
    let i = 0;
    let j = 0;
    while (i < len) {
        u8[i] = parseInt(hex.slice(j, j + 2), 16);
        i += 1;
        j += 2;
    }
    return u8;
}
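For example, using the value from the explanation above (big-endian, as the hex conversion implies):
console.log(bigToUint8Array(BigInt(3721182122))); // Uint8Array [221, 204, 187, 170]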
I've got a BigDecimal implementation that works with sending & receiving bytes as arbitrary-precision big decimals: https://jackieli.dev/posts/bigint-to-uint8array/

Trying to encode in base 64 a byte array from a BigInteger in javaScript

I am trying to implement the exact same Java algorithm in JavaScript:
String s = "6332878812086272"; // For example
long id = Long.valueOf(s);
final byte[] idBytes = BigInteger.valueOf(id).toByteArray();
final String idBase64 = Base64.getEncoder().encodeToString(idBytes);
Is this possible, given that JavaScript does not handle big numbers like Java's BigInteger/long? Any library recommendations?
You can probably try something like this:
// number -> number in base 2 as string
var nbrAsBits = (6332878812086272).toString(2); // "10110 01111111 10111000 01000000 00000000 00000000 00000000"
// number in base 2 as string -> array of bytes (8 bits) as numbers
var nbrAsIntArray = conversionTool(nbrAsBits); // [22, 127, 184, 64, 0, 0, 0]
// array of bytes as numbers -> bytes
var nbrAsByteArray = new Uint8Array(nbrAsIntArray);
// concat everything into a string
var str = "";
for (var i = 0; i < nbrAsByteArray.length; i++) {
    str += String.fromCharCode(nbrAsByteArray[i]);
}
// get the base64
btoa(str); // "Fn+4QAAAAA=="
You will have to implement the conversionTool() function, which converts every byte into its numerical value. Use powers of 2 as in this example:
01111111 = 2^6 + 2^5 + 2^4 + 2^3 + 2^2 + 2^1 + 2^0 = 127
Note
The Java long data type has a maximum value of 2^63 - 1 (primitive data type), while the JavaScript number has a maximum safe integer value of 2^53 - 1 (Number.MAX_SAFE_INTEGER).
So in JavaScript you won't be able to process numbers between 9007199254740991 (2^53 - 1) and 9223372036854775807 (2^63 - 1) exactly, whereas in Java you can.
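With native BigInt support, the Java algorithm can also be reproduced without a library. This is a sketch for non-negative values only; the toByteArray helper is hypothetical, mimicking Java's BigInteger.toByteArray (big-endian, minimal length, with a leading 0x00 sign byte when the top bit is set):
function toByteArray(bigIntValue) {
    let hex = bigIntValue.toString(16);
    if (hex.length % 2) hex = '0' + hex;
    const bytes = [];
    for (let i = 0; i < hex.length; i += 2) {
        bytes.push(parseInt(hex.slice(i, i + 2), 16));
    }
    if (bytes[0] & 0x80) bytes.unshift(0); // sign byte, as Java adds
    return bytes;
}

const idBytes = toByteArray(BigInt("6332878812086272"));
const idBase64 = btoa(String.fromCharCode(...idBytes)); // in Node: Buffer.from(idBytes).toString('base64')
console.log(idBase64); // "Fn+4QAAAAA==", matching the Java output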

How does Math.random() work in javascript?

I recently figured out how to get a random number via Google, and it got me thinking: how does Math.random() work? I cannot figure out how Math.random() is implemented, unless it uses something like the time. Does anyone know how JavaScript's Math.random() works, or an equivalent?
Math.random() returns a Number value with a positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy.
Here's V8's implementation:
uint32_t V8::Random() {
    // Random number generator using George Marsaglia's MWC algorithm.
    static uint32_t hi = 0;
    static uint32_t lo = 0;

    // Initialize seed using the system random(). If one of the seeds
    // should ever become zero again, or if random() returns zero, we
    // avoid getting stuck with zero bits in hi or lo by reinitializing
    // them on demand.
    if (hi == 0) hi = random();
    if (lo == 0) lo = random();

    // Mix the bits.
    hi = 36969 * (hi & 0xFFFF) + (hi >> 16);
    lo = 18273 * (lo & 0xFFFF) + (lo >> 16);
    return (hi << 16) + (lo & 0xFFFF);
}
Source: http://dl.packetstormsecurity.net/papers/general/Google_Chrome_3.0_Beta_Math.random_vulnerability.pdf
Here are a couple of related threads on StackOverflow:
Why is Google Chrome's Math.random number generator not *that* random?
How random is JavaScript's Math.random?
See: There's Math.random(), and then there's Math.random()
Until recently (up to version 4.9.40), V8's choice of PRNG was MWC1616 (multiply-with-carry, combining two 16-bit parts). It uses 64 bits of internal state and looks roughly like this:
uint32_t state0 = 1;
uint32_t state1 = 2;

uint32_t mwc1616() {
    state0 = 18030 * (state0 & 0xffff) + (state0 >> 16);
    state1 = 30903 * (state1 & 0xffff) + (state1 >> 16);
    return (state0 << 16) + (state1 & 0xffff);
}
The 32-bit value is then turned into a floating point number between 0 and 1 in agreement with the specification.
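For illustration, here is a JavaScript port of that generator (a sketch; V8's real code is C++). Math.imul keeps the 32-bit multiply exact, and >>> 0 forces unsigned 32-bit results:
let state0 = 1;
let state1 = 2;

function mwc1616() {
    state0 = (Math.imul(18030, state0 & 0xffff) + (state0 >>> 16)) >>> 0;
    state1 = (Math.imul(30903, state1 & 0xffff) + (state1 >>> 16)) >>> 0;
    return (((state0 << 16) >>> 0) + (state1 & 0xffff)) >>> 0;
}

// Map the 32-bit output into [0, 1), as the spec requires:
function mwc1616Random() {
    return mwc1616() / 4294967296;
}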
MWC1616 uses little memory and is pretty fast to compute, but unfortunately offers sub-par quality:
The number of random values it can generate is limited to 2^32, as opposed to the 2^52 numbers between 0 and 1 that double-precision floating point can represent.
The more significant upper half of the result is almost entirely dependent on the value of state0. The period length would be at most 2^32, but instead of a few large permutation cycles, there are many short ones. With a badly chosen initial state, the cycle length could be less than 40 million.
It fails many statistical tests in the TestU01 suite.
This has been pointed out to us, and having understood the problem and after some research, we decided to reimplement Math.random based on an algorithm called xorshift128+. It uses 128 bits of internal state, has a period length of 2^128 - 1, and passes all tests from the TestU01 suite.
uint64_t state0 = 1;
uint64_t state1 = 2;

uint64_t xorshift128plus() {
    uint64_t s1 = state0;
    uint64_t s0 = state1;
    state0 = s0;
    s1 ^= s1 << 23;
    s1 ^= s1 >> 17;
    s1 ^= s0;
    s1 ^= s0 >> 26;
    state1 = s1;
    return state0 + state1;
}
The new implementation landed in V8 4.9.41.0 within a few days of us becoming aware of the issue. It will become available with Chrome 49. Both Firefox and Safari switched to xorshift128+ as well.
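A hedged JavaScript port of xorshift128+ (a sketch using BigInt for the 64-bit arithmetic; the mask emulates uint64_t wraparound):
const MASK64 = (1n << 64n) - 1n;
let s0 = 1n;
let s1 = 2n;

function xorshift128plus() {
    let a = s0;
    const b = s1;
    s0 = b;
    a ^= (a << 23n) & MASK64;
    a ^= a >> 17n;
    a ^= b;
    a ^= b >> 26n;
    s1 = a;
    return (s0 + s1) & MASK64;
}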
It's correct that they use a "time like thing". A pseudo-random generator is typically seeded using the system clock, because that is a good source of a number that isn't always the same.
Once the random generator is seeded with a number, it will generate a series of numbers that all depend on the initial value, but in such a way that they seem random.
A simple random generator (that was actually used in programming languages a while back) is to use a prime number in an algorithm like this:
rnd = (rnd * 7919 + 1) & 0xffff;
This will produce a series of numbers that jump back and forth, seemingly random. For example:
seed = 1337
36408
22089
7208
63833
14360
11881
41480
13689
6648
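The sequence above can be reproduced with a few lines:
let rnd = 1337; // seed
for (let k = 0; k < 9; k++) {
    rnd = (rnd * 7919 + 1) & 0xffff;
    console.log(rnd); // 36408, 22089, 7208, ...
}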
The random generator in JavaScript is just a bit more complex (to give a better distribution) and uses larger numbers (since it has to produce a number with about 52 bits of randomness instead of 16), but it follows the same basic principle.
You may want this article for reference: https://hackernoon.com/how-does-javascripts-math-random-generate-random-numbers-ef0de6a20131
By the way, I was recently also curious about this question too, and read the source code of Node.js. One possible implementation can be found in Google V8:
The main entry point for the random number (the MathRandom::RefillCache function):
https://github.com/v8/v8/blob/master/src/math-random.cc
How is the seed initialized? See here: https://github.com/v8/v8/blob/master/src/base/utils/random-number-generator.cc#L31
The key function is XorShift128:
https://github.com/v8/v8/blob/master/src/base/utils/random-number-generator.h#L119
In this header file, there are references to some papers:
// See Marsaglia: http://www.jstatsoft.org/v08/i14/paper
// And Vigna: http://vigna.di.unimi.it/ftp/papers/xorshiftplus.pdf
<script>
function generateRandom() { // Generate and return a random digit (0-9)
    var num = Math.random();
    num = Math.round(num * 10) % 10;
    return num;
}

function generateSum() { // Generate a problem
    document.getElementById("ans").focus();
    var num1 = generateRandom();
    var num2 = generateRandom();
    document.getElementById("num1").innerHTML = num1;
    document.getElementById("num2").innerHTML = num2;
    document.getElementById("pattern1").innerHTML = printPattern(num1);
    document.getElementById("pattern2").innerHTML = printPattern(num2);
}

function printPattern(num) { // Generate the star pattern with 'num' stars
    var pattern = "";
    for (var i = 0; i < num; i++) {
        if ((i + 1) % 4 == 0) {
            pattern = pattern + "*<br>";
        } else {
            pattern = pattern + "*";
        }
    }
    return pattern;
}

function checkAns() { // Check the answer and give the response
    var num1 = parseInt(document.getElementById("num1").innerHTML);
    var num2 = parseInt(document.getElementById("num2").innerHTML);
    var enteredAns = parseInt(document.getElementById("ans").value);
    if ((num1 + num2) == enteredAns) {
        document.getElementById("patternans").innerHTML = printPattern(enteredAns);
        document.getElementById("patternans").innerHTML += "<br>Correct";
    } else {
        document.getElementById("patternans").innerHTML = "Wrong";
    }
}

function newSum() {
    generateSum();
    document.getElementById("patternans").innerHTML = "";
    document.getElementById("ans").value = "";
}
</script>

How to do bitwise AND in JavaScript on variables that are longer than 32 bits?

I have 2 numbers in JavaScript that I want to AND bitwise. They are both 33 bits long.
In C#:
((4294967296 & 4294967296) == 0) is false
but in JavaScript:
((4294967296 & 4294967296) == 0) is true
4294967296 is ((long)1) << 32
As I understand it, this is because JavaScript converts values to Int32 when performing bitwise operations.
How do I work around this?
Any suggestions on how to replace the bitwise AND with a set of other math operations so that bits are not lost?
Here's a fun function for arbitrarily large integers:
function BitwiseAndLarge(val1, val2) {
    var shift = 0, result = 0;
    var mask = ~((~0) << 30); // Gives us a bit mask like 01111..1 (30 ones)
    var divisor = 1 << 30;    // To work with the mask, we process 30 bits at a time
    while ((val1 != 0) && (val2 != 0)) {
        var rs = (mask & val1) & (mask & val2);
        val1 = Math.floor(val1 / divisor); // val1 >>> 30
        val2 = Math.floor(val2 / divisor); // val2 >>> 30
        for (var i = shift++; i--;) {
            rs *= divisor; // rs << 30
        }
        result += rs;
    }
    return result;
}
Assuming that the system handles at least 30-bit bitwise operations properly.
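For example, the 33-bit case from the question now works, because the & is only ever applied to 30-bit chunks:
console.log(BitwiseAndLarge(4294967296, 4294967296) == 0); // false, as in C#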
You could split each of the vars into two 32-bit values (like a high word and a low word), then do a bitwise operation on both pairs.
The script below runs as a Windows .js script. You can replace WScript.Echo() with alert() for the web.
var a = 4294967296;
var b = 4294967296;
var w = 4294967296; // 2^32

var aHI = a / w;
var aLO = a % w;
var bHI = b / w;
var bLO = b % w;

WScript.Echo((aHI & bHI) * w + (aLO & bLO));
There are several BigInteger libraries in JavaScript, but none of them offered the bitwise operations you need at the moment this was written. If you are motivated and really need that functionality, you can modify one of those libraries and add a method to do so; they already offer a good code base for working with huge numbers.
You can find a list of the BigInteger libraries in JavaScript in this question:
Huge Integer JavaScript Library
The simplest bitwise AND that works up to JavaScript's maximum number
JavaScript's max exact integer value is 2^53, for internal reasons (it's a double float). If you need more, there are good libraries for big-integer handling.
2^53 is 9,007,199,254,740,992, or about 9,000 trillion (~9 quadrillion).
// Works with values up to 2^53
function bitwiseAnd_53bit(value1, value2) {
    const maxInt32Bits = 4294967296; // 2^32
    const value1_highBits = value1 / maxInt32Bits;
    const value1_lowBits = value1 % maxInt32Bits;
    const value2_highBits = value2 / maxInt32Bits;
    const value2_lowBits = value2 % maxInt32Bits;
    return (value1_highBits & value2_highBits) * maxInt32Bits + (value1_lowBits & value2_lowBits);
}
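A quick check against the question's 33-bit case:
console.log(bitwiseAnd_53bit(4294967296, 4294967296)); // 4294967296
console.log(bitwiseAnd_53bit(4294967296, 4294967296) == 0); // false, matching C#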
Ran into this problem today and this is what I came up with:
function bitwiseAnd(firstNumber, secondNumber) {
    let // convert the numbers to binary strings
        firstBitArray = (firstNumber).toString(2),
        secondBitArray = (secondNumber).toString(2),
        resultedBitArray = [],
        // get the length of the largest number
        maxLength = Math.max(firstBitArray.length, secondBitArray.length);

    // zero-fill the strings from the left so they are equal in length
    // and can be compared bit by bit
    firstBitArray = firstBitArray.padStart(maxLength, '0');
    secondBitArray = secondBitArray.padStart(maxLength, '0');

    // bit-by-bit comparison
    for (let i = 0; i < maxLength; i++) {
        resultedBitArray.push(parseInt(firstBitArray[i], 10) & parseInt(secondBitArray[i], 10));
    }

    // join the result array back into a string and parse the binary string back to an integer
    return parseInt(resultedBitArray.join(''), 2);
}
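For example:
console.log(bitwiseAnd(4294967296, 4294967296)); // 4294967296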
Hope this helps anyone else who runs into this problem.
