I am trying, in JavaScript, to convert an integer (which I know will be between 0 and 32) to an array of 0s and 1s. I have looked around but couldn't find anything that works.
So, if I have the integer 22 (binary 10110), I would like to access it as:
Bitarr[0] = 0
Bitarr[1] = 1
Bitarr[2] = 1
Bitarr[3] = 0
Bitarr[4] = 1
Any suggestions?
Many thanks
convert to base 2:
var base2 = (yourNumber).toString(2);
access the characters (bits):
base2[0], base2[1], base2[2], etc...
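Note that the string is most-significant-bit first, while the question indexes from the least significant bit. A minimal sketch to get the asker's ordering (reverse the string and map each character to a number):
var Bitarr = (22).toString(2).split('').reverse().map(Number); // [0, 1, 1, 0, 1]
console.log(Bitarr[0]); // = 0
console.log(Bitarr[1]); // = 1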
Short (ES6)
Shortest (32 chars) version, which fills the trailing bits with zeros. Here n is your number and b is the number of output bits:
[...Array(b)].map((x,i)=>n>>i&1)
let bits = (n,b=32) => [...Array(b)].map((x,i)=>(n>>i)&1);
let Bitarr = bits(22,8);
console.log(Bitarr[0]); // = 0
console.log(Bitarr[1]); // = 1
console.log(Bitarr[2]); // = 1
console.log(Bitarr[3]); // = 0
console.log(Bitarr[4]); // = 1
var a = 22;
var b = [];
for (var i = 0; i < 5; i++)
b[i] = (a >> i) & 1;
alert(b);
Assuming 5 bits (it seemed that way from your question), so 0 <= a < 32. If you like you can make the 5 larger, up to 32 (bit-shifting in JavaScript works on 32-bit integers).
This should do it:
for (var i = 0; i < 32; ++i)
Bitarr[i] = (my_int >> i) & 1;
You can convert your integer to a binary String like this. Note the base 2 parameter.
var i = 20;
var str = i.toString(2); // 10100
You can access chars in a String as if it were an array:
alert(str[0]); // 1
alert(str[1]); // 0
etc...
Building up on previous answers: you may want your array to be an array of integers, not strings, so here is a one-liner:
(1234).toString(2).split('').map(function(s) { return parseInt(s); });
Note that the shorter version, (11).toString(2).split('').map(parseInt), will not work: Array.prototype.map passes the element index as the second argument, which parseInt treats as the radix, so the "0" at index 1 becomes parseInt("0", 1), which is NaN.
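To see the pitfall in action, compare (Number ignores the extra index argument):
(11).toString(2).split('').map(parseInt); // [1, NaN, 1, 1] - the "0" is parsed with radix 1
(11).toString(2).split('').map(Number); // [1, 0, 1, 1]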
In addition, this code gives a 32-character string, least significant bit first:
function get_bits(value){
var base2_ = (value).toString(2).split("").reverse().join("");
var baseL_ = new Array(33 - base2_.length).join("0"); // pad to 32 bits; Array(n).join("0") yields n - 1 zeros
var base2 = base2_ + baseL_;
return base2;
}
1 => 10000000000000000000000000000000
2 => 01000000000000000000000000000000
3 => 11000000000000000000000000000000
You might do as follows:
var n = 1071,
b = Array(Math.floor(Math.log2(n))+1).fill()
.map((_,i,a) => n >> a.length-1-i & 1);
console.log(b);
Just for the sake of reference:
(121231241).toString(2).split('').reverse().map((x, index) => x === '1' ? 1 << index : 0).reverse().filter(x => x > 0).join(' + ');
would give you:
67108864 + 33554432 + 16777216 + 2097152 + 1048576 + 524288 + 65536 + 32768 + 16384 + 4096 + 1024 + 512 + 256 + 128 + 8 + 1
Related
This is a slightly odd/unique request. I am trying to achieve a result where e.g. "yes" becomes "yyes", "yees", "yess", "yyees", "yyess", "yyeess".
I have looked at this: Find all lowercase and uppercase combinations of a string in JavaScript, which solves it for capitalisation; however, my understanding is stopping me from adapting it to character duplication (if this method can even be used in this way).
export function letterDuplication(level: number, input: string){
const houseLength = input.length;
if (level == 1){
var resultsArray: string[] = [];
const letters = input.split("");
const permCount = 1 << input.length;
for (let perm = 0; perm < permCount; perm++) {
// Update the capitalization depending on the current permutation
letters.reduce((perm, letter, i) => {
if (perm & 1) {
letters[i] = (letter.slice(0, perm) + letter.slice(perm-1, perm) + letter.slice(perm));
} else {
letters[i] = (letter.slice(0, perm - 1) + letter.slice(perm, perm) + letter.slice(perm))
}
return perm >> 1;
}, perm);
var result = letters.join("");
console.log(result);
resultsArray[perm] = result;
}
}
}
If I haven't explained this particularly well please let me know and I'll clarify. I'm finding it quite the challenge!
General idea
To get the list of all words that can be built from an ordered array of letters, we need all combinations where each letter appears 1 or 2 times in the word, like:
word = 'sample'
array = 's'{1,2} + 'a'{1,2} + 'm'{1,2} + 'p'{1,2} + 'l'{1,2} + 'e'{1,2}
The number of all possible words equals 2 ^ word.length (8 for "yes"), so we can build a binary table with 8 rows that represents all possible combinations simply by converting the numbers from 0 to 7 from decimal to binary:
0 -> 000
1 -> 001
2 -> 010
3 -> 011
4 -> 100
5 -> 101
6 -> 110
7 -> 111
Each decimal we can use as a pattern for a new word, where 0 means the letter must be used once, and 1 means the letter must be used twice:
0 -> 000 -> yes
1 -> 001 -> yess
2 -> 010 -> yees
3 -> 011 -> yeess
4 -> 100 -> yyes
5 -> 101 -> yyess
6 -> 110 -> yyees
7 -> 111 -> yyeess
Code
So, your code may look like this:
// Initial word
const word = 'yes';
// List of all possible words
const result = [];
// Iterating (2 ^ word.length) times
for (let i = 0; i < Math.pow(2, word.length); i++) {
// Get binary pattern for each word
const bin = decToBin(i, word.length);
// Make a new word from pattern ...
let newWord = '';
for (let i = 0; i < word.length; i++) {
// ... by adding letter 1 or 2 times based on bin char value
newWord += word[i].repeat(+bin[i] ? 2 : 1);
}
result.push(newWord);
}
// Print result (all possible words)
console.log(result);
// Method for decimal to binary conversion with leading zeroes
// (always returns string with length = len)
function decToBin(x, len) {
let rem, bin = 0, i = 1;
while (x != 0) {
rem = x % 2;
x = Math.floor(x / 2);
bin += rem * i;
i = i * 10;
}
bin = bin.toString();
return '0'.repeat(len - bin.length) + bin;
}
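For what it's worth, in modern engines the whole decToBin helper can be replaced by a one-liner using String.prototype.padStart (the name decToBinShort is mine):
const decToBinShort = (x, len) => x.toString(2).padStart(len, '0');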
Maybe this example can help you. It's not the neatest or most optimal, but it seems to work:
function getCombinations(word = '') {
const allCombination = [];
const generate = (n, arr, i = 0) => {
if (i === n) {
return allCombination.push([...arr]);
} else {
arr[i] = 0;
generate(n, arr, i + 1);
arr[i] = 1;
generate(n, arr, i + 1);
}
}
generate(word.length, Array(word.length).fill(0));
return allCombination.map((el) => {
return el.map((isCopy, i) => isCopy ? word[i].repeat(2) : word[i]).join('')
});
}
console.log(getCombinations('yes'));
console.log(getCombinations('cat'));
For instance, 10100 would be inverted to 01011; 010 would be inverted to 101; 101 would be inverted to 010.
The problem is when I use ~5, it becomes -6 because JS uses 32-bit signed integers.
How do I invert an unsigned arbitrary-bit binary number?
I would like to create a function that takes in this unsigned arbitrary-bit binary number and returns its inverted form (101 -> 010).
I want to convert from the string 101 to 010.
You can create a function that flips the required number of digits like so
var flipbits = function (v, digits) {
return ~v & (Math.pow(2, digits) - 1);
}
console.log(flipbits(5, 3)); // outputs 2
console.log(flipbits(2, 3)); // outputs 5
note - this isn't "arbitrary number of bits" ... it's 32 at best
Working with strings, you can have arbitrary bit length (this one won't work in Internet Exploder without transpiling):
var flipbits = str => str.split('').map(b => (1 - b).toString()).join('');
console.log(flipbits('010')); // outputs 101
console.log(flipbits('101')); // outputs 010
The above in ES5
var flipbits = function flipbits(str) {
return str.split('').map(function (b) {
return (1 - b).toString();
}).join('');
};
console.log(flipbits('010')); // outputs 101
console.log(flipbits('101')); // outputs 010
Inverting the bits is always the same operation, but to display the signed result as an unsigned integer you can use the unsigned right-shift operator >>> with a shift of 0:
console.log(~5); // -6
console.log(~5>>>0); // 4294967290
If you want to make sure you only flip the significant bits in the number, you'll instead want to mask it via an & operation with a mask covering just the significant bits. Here is an example of the significant-bit masking:
function invert(x) {
let significant = 0;
let test = x;
while (test > 1) {
test = test >> 1;
significant = (significant << 1) | 1;
}
return (~x) & significant;
}
console.log(invert(5)); // 2 (010 in binary)
In JavaScript, ~ (the tilde operator) does this:
-(N+1)
So your current operation is correct but not what you are looking for:
~5
-(5 + 1)
-6
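So to flip only the significant bits you still need to mask the result; for example, for a 3-bit value:
console.log(~5 & 0b111); // 2 (binary 010)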
You can use String.prototype.replace() with RegExp /(0)|(1)/
function toggle(n) {
return n.replace(/(0)|(1)/g, function(m, p1, p2) { return p2 ? 0 : 1 });
}
console.log(
toggle("10100"),
toggle("101")
)
You can use a function that converts numbers to binary as a string, flips the 0s and 1s, then converts back to a number. It seems to give the expected results, but looks pretty ugly:
function flipBits(n) {
return parseInt(n.toString(2).split('').map(bit => 1 - bit).join(''),2)
}
[0,1,2,3,4,5,123,987679876,987679875].forEach(
n => console.log(n + ' -> ' + flipBits(n))
);
Maybe there's a mix of bitwise operators to do the same thing.
Edit
It seems you're working with strings, so just split, flip and join again:
// Requires support for ECMAScript ed 5.1 for map and
// ECMAScript 2015 for arrow functions
function flipStringBits(s) {
return s.split('').map(c => 1 - c).join('');
}
['0','010','110','10011100110'].forEach(
v => console.log(v + ' -> ' + flipStringBits(v))
);
Basic function for ECMAScript ed 3 (works everywhere, even IE 4).
function flipStringBitsEd3(s) {
var b = s.split('')
for (var i = 0, iLen = b.length; i < iLen; i++) {
b[i] = 1 - b[i];
}
return b.join('');
}
// Tests
console.log('Ed 3 version');
var data = ['0', '010', '110', '10011100110'];
for (var i = 0, iLen = data.length; i < iLen; i++) {
console.log(data[i] + ' ->\n' + flipStringBitsEd3(data[i]) + '\n');
}
Works with any length string. The ed 3 version will work everywhere and is probably faster than functions using newer features.
You can create a mask for the number's bit width and take the XOR to flip the bits.
/**
* @param {number} num
* @return {number}
*/
var findComplement = function(num) {
let len = num.toString(2).length;
let mask = Math.pow(2, len) - 1;
return num ^ mask;
};
console.log(findComplement(5));
For integer values, you can use this JavaScript function to reverse the order of the bits in a given integer and return the new integer:
function binaryReverse(value) {
return parseInt(value.toString(2).split('').reverse().join(''), 2);
}
console.log(binaryReverse(25));
console.log(binaryReverse(19));
Output:
19
25
Maybe I am just not good enough at math, but I am having a problem converting a number into pure alphabetical bijective hexavigesimal, just like Microsoft Excel/OpenOffice Calc do it.
Here is a version of my code, but it did not give me the output I needed:
var toHexvg = function(a){
var x='';
var let="_abcdefghijklmnopqrstuvwxyz";
var len=let.length;
var b=a;
var cnt=0;
var y = Array();
do{
a=(a-(a%len))/len;
cnt++;
}while(a!=0)
a=b;
var vnt=0;
do{
b+=Math.pow((len),vnt)*Math.floor(a/Math.pow((len),vnt+1));
vnt++;
}while(vnt!=cnt)
var c=b;
do{
y.unshift( c%len );
c=(c-(c%len))/len;
}while(c!=0)
for(var i in y)x+=let[y[i]];
return x;
}
The best output my efforts can get is: a b c d ... y z ba bb bc - though not with the actual code above. The intended output is supposed to be a b c ... y z aa ab ac ... zz aaa aab aac ... zzzzz aaaaaa aaaaab, you get the picture.
Basically, my problem is more about doing the "math" than the function. Ultimately my question is: how to do the math of hexavigesimal conversion, up to a [supposed] infinity, just like Microsoft Excel.
And if possible, a source code, thank you in advance.
Okay, here's my attempt, assuming you want the sequence to start with "a" (representing 0) and go:
a, b, c, ..., y, z, aa, ab, ac, ..., zy, zz, aaa, aab, ...
This works and hopefully makes some sense. The funky line is there because it mathematically makes more sense for 0 to be represented by the empty string and then "a" would be 1, etc.
var alpha = "abcdefghijklmnopqrstuvwxyz";
function hex(a) {
// First figure out how many digits there are.
a += 1; // This line is funky
var c = 0;
var x = 1;
while (a >= x) {
c++;
a -= x;
x *= 26;
}
// Now you can do normal base conversion.
var s = "";
for (var i = 0; i < c; i++) {
s = alpha.charAt(a % 26) + s;
a = Math.floor(a/26);
}
return s;
}
However, if you're planning to simply print them out in order, there are far more efficient methods. For example, using recursion and/or prefixes and stuff.
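For example, a sketch of the recursive idea (my own illustration, reusing the alpha string from above): print every 1-letter name, then every 2-letter name, and so on, by prepending each letter to all shorter suffixes.
function printAll(prefix, depth) {
// print all names that start with `prefix` and have exactly `depth` more letters
if (depth === 0) { console.log(prefix); return; }
for (var i = 0; i < 26; i++) {
printAll(prefix + alpha.charAt(i), depth - 1);
}
}
for (var len = 1; len <= 2; len++) printAll("", len); // a ... z, aa ... zz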
Although @user826788 has already posted working code (which is even a third quicker), I'll post my own work that I did before finding the posts here (as I didn't know the word "hexavigesimal"). However, it also includes the function for the other way round. Note that I use a = 1, as I use it to convert the starting list element from
aa) first
ab) second
to
<ol type="a" start="27">
<li>first</li>
<li>second</li>
</ol>
:
function linum2int(input) {
input = input.replace(/[^A-Za-z]/, '');
output = 0;
for (i = 0; i < input.length; i++) {
output = output * 26 + parseInt(input.substr(i, 1), 26 + 10) - 9;
}
console.log('linum', output);
return output;
}
function int2linum(input) {
var zeros = 0;
var next = input;
var generation = 0;
while (next >= 27) {
next = (next - 1) / 26 - (next - 1) % 26 / 26;
zeros += next * Math.pow(27, generation);
generation++;
}
output = (input + zeros).toString(27).replace(/./g, function ($0) {
return '_abcdefghijklmnopqrstuvwxyz'.charAt(parseInt($0, 27));
});
return output;
}
linum2int("aa"); // 27
int2linum(27); // "aa"
You could accomplish this with recursion, like this:
const toBijective = n => (n > 26 ? toBijective(Math.floor((n - 1) / 26)) : "") + ((n % 26 || 26) + 9).toString(36);
// Parsing is not recursive
const parseBijective = str => str.split("").reverse().reduce((acc, x, i) => acc + ((parseInt(x, 36) - 9) * (26 ** i)), 0);
toBijective(1) // "a"
toBijective(27) // "aa"
toBijective(703) // "aaa"
toBijective(18279) // "aaaa"
toBijective(127341046141) // "overflow"
parseBijective("Overflow") // 127341046141
I don't understand how to work it out from a formula, but I fooled around with it for a while and came up with the following algorithm to literally count up to the requested column number:
var getAlpha = (function() {
var alphas = [null, "a"],
highest = [1];
return function(decNum) {
if (alphas[decNum])
return alphas[decNum];
var d,
next,
carry,
i = alphas.length;
for(; i <= decNum; i++) {
next = "";
carry = true;
for(d = 0; d < highest.length; d++){
if (carry) {
if (highest[d] === 26) {
highest[d] = 1;
} else {
highest[d]++;
carry = false;
}
}
next = String.fromCharCode(
highest[d] + 96)
+ next;
}
if (carry) {
highest.push(1);
next = "a" + next;
}
alphas[i] = next;
}
return alphas[decNum];
};
})();
alert(getAlpha(27)); // "aa"
alert(getAlpha(100000)); // "eqxd"
Demo: http://jsfiddle.net/6SE2f/1/
The highest array holds the current highest number with an array element per "digit" (element 0 is the least significant "digit").
When I started the above it seemed a good idea to cache each value once calculated, to save time if the same value was requested again, but in practice (with Chrome) it only took about 3 seconds to calculate the 1,000,000th value (bdwgn) and about 20 seconds to calculate the 10,000,000th value (uvxxk). With the caching removed it took about 14 seconds to the 10,000,000th value.
Just finished writing this code earlier tonight, and I found this question while on a quest to figure out what to name the damn thing. Here it is (in case anybody feels like using it):
/**
* Convert an integer to bijective hexavigesimal notation (alphabetic base-26).
*
* @param {Number} int - A positive integer above zero
* @return {String} The number's value expressed in uppercased bijective base-26
*/
function bijectiveBase26(int){
const sequence = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const length = sequence.length;
if(int <= 0) return int;
if(int <= length) return sequence[int - 1];
let index = (int % length) || length;
let result = [sequence[index - 1]];
while((int = Math.floor((int - 1) / length)) > 0){
index = (int % length) || length;
result.push(sequence[index - 1]);
}
return result.reverse().join("")
}
I had to solve this same problem today for work. My solution is written in Elixir and uses recursion, but I explain the thinking in plain English.
Here are some example transformations:
0 -> "A", 1 -> "B", 2 -> "C", 3 -> "D", ..
25 -> "Z", 26 -> "AA", 27 -> "AB", ...
At first glance it might seem like a normal 26-base counting system
but unfortunately it is not so simple.
The "problem" becomes clear when you realize:
A = 0
AA = 26
This is at odds with a normal counting system, where "0" does not behave
as "1" when it is in a decimal place other than then unit.
To understand the algorithm, consider a simpler but equivalent base-2 system:
A = 0
B = 1
AA = 2
AB = 3
BA = 4
BB = 5
AAA = 6
In a normal binary counting system we can determine the "value" of decimal places by
taking increasing powers of 2 (1, 2, 4, 8, 16) and the value of a binary number is
calculated by multiplying each digit by that digit place's value.
e.g. 10101 = 1 * (2 ^ 4) + 0 * (2 ^ 3) + 1 * (2 ^ 2) + 0 * (2 ^ 1) + 1 * (2 ^ 0) = 21
In our more complicated AB system, we can see by inspection that the decimal place values are:
1, 2, 6, 14, 30, 62
The pattern reveals itself to be (previous_unit_place_value + 1) * 2.
As such, to get the next lower unit place value, we divide by 2 and subtract 1.
This can be extended to a base-26 system. Simply divide by 26 and subtract 1.
Now a formula for transforming a normal base-10 number to special base-26 is apparent.
Say the input is x.
1. Create an accumulator list l.
2. If x is less than 26, set l = [x | l] and go to step 5. Otherwise, continue.
3. Divide x by 26. The floored result is d and the remainder is r.
4. Push the remainder as head onto the accumulator list, i.e. l = [r | l], and go to step 2 with (d - 1) as input, i.e. x = d - 1.
5. Convert all elements of l to their corresponding chars. 0 -> A, etc.
So, finally, here is my answer, written in Elixir:
defmodule BijectiveHexavigesimal do
def to_az_string(number, base \\ 26) do
number
|> to_list(base)
|> Enum.map(&to_char/1)
|> to_string()
end
def to_09_integer(string, base \\ 26) do
string
|> String.to_charlist()
|> Enum.reverse()
|> Enum.reduce({0, nil}, fn
char, {_total, nil} ->
{to_integer(char), 1}
char, {total, previous_place_value} ->
char_value = to_integer(char + 1)
place_value = previous_place_value * base
new_total = total + char_value * place_value
{new_total, place_value}
end)
|> elem(0)
end
def to_list(number, base, acc \\ []) do
if number < base do
[number | acc]
else
to_list(div(number, base) - 1, base, [rem(number, base) | acc])
end
end
defp to_char(x), do: x + 65
# inverse of to_char/1, required by to_09_integer/2
defp to_integer(x), do: x - 65
end
You use it simply as BijectiveHexavigesimal.to_az_string(420). It also accepts an optional "base" arg.
I know the OP asked about Javascript but I wanted to provide an Elixir solution for posterity.
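For reference, here is a rough JavaScript translation of the to_list/to_char part of the same algorithm (a sketch; toAzString is my name for it, and it uses "A" = 0 as in the examples above):
function toAzString(number, base) {
base = base || 26;
var acc = [];
while (number >= base) {
acc.unshift(number % base); // push remainder as head
number = Math.floor(number / base) - 1; // the crucial "divide and subtract 1" step
}
acc.unshift(number);
return acc.map(function (x) { return String.fromCharCode(x + 65); }).join('');
}
console.log(toAzString(0)); // "A"
console.log(toAzString(26)); // "AA"
console.log(toAzString(27)); // "AB"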
I have published these functions in an npm package here:
https://www.npmjs.com/package/@gkucmierz/utils
It converts bijective numeration to numbers and back (a BigInt version is also included).
https://github.com/gkucmierz/utils/blob/main/src/bijective-numeration.mjs
What is the best way of implementing a bit array in JavaScript?
Here's one I whipped up:
UPDATE - something about this class had been bothering me all day - it wasn't size-based - creating a BitArray with N slots/bits was a two-step operation - instantiate, resize. Updated the class to be size-based, with an optional second parameter for populating the size-based instance with either array values or a base-10 numeric value.
/* BitArray DataType */
// Constructor
function BitArray(size, bits) {
// Private field - array for our bits
this.m_bits = new Array();
//.ctor - initialize as a copy of an array of true/false or from a numeric value
if (bits && bits.length) {
for (var i = 0; i < bits.length; i++)
this.m_bits.push(bits[i] ? BitArray._ON : BitArray._OFF);
} else if (!isNaN(bits)) {
this.m_bits = BitArray.shred(bits).m_bits;
}
if (size && this.m_bits.length != size) {
if (this.m_bits.length < size) {
for (var i = this.m_bits.length; i < size; i++) {
this.m_bits.push(BitArray._OFF);
}
} else {
for (var i = this.m_bits.length; i > size; i--) {
this.m_bits.pop();
}
}
}
}
/* BitArray PUBLIC INSTANCE METHODS */
// read-only property - number of bits
BitArray.prototype.getLength = function () { return this.m_bits.length; };
// accessor - get bit at index
BitArray.prototype.getAt = function (index) {
if (index < this.m_bits.length) {
return this.m_bits[index];
}
return null;
};
// accessor - set bit at index
BitArray.prototype.setAt = function (index, value) {
if (index < this.m_bits.length) {
this.m_bits[index] = value ? BitArray._ON : BitArray._OFF;
}
};
// resize the bit array (append new false/0 indexes)
BitArray.prototype.resize = function (newSize) {
var tmp = new Array();
for (var i = 0; i < newSize; i++) {
if (i < this.m_bits.length) {
tmp.push(this.m_bits[i]);
} else {
tmp.push(BitArray._OFF);
}
}
this.m_bits = tmp;
};
// Get the complementary bit array (i.e., 01 complements 10)
BitArray.prototype.getCompliment = function () {
var result = new BitArray(this.m_bits.length);
for (var i = 0; i < this.m_bits.length; i++) {
result.setAt(i, this.m_bits[i] ? BitArray._OFF : BitArray._ON);
}
return result;
};
// Get the string representation ("101010")
BitArray.prototype.toString = function () {
var s = new String();
for (var i = 0; i < this.m_bits.length; i++) {
s = s.concat(this.m_bits[i] === BitArray._ON ? "1" : "0");
}
return s;
};
// Get the numeric value
BitArray.prototype.toNumber = function () {
var pow = 0;
var n = 0;
for (var i = this.m_bits.length - 1; i >= 0; i--) {
if (this.m_bits[i] === BitArray._ON) {
n += Math.pow(2, pow);
}
pow++;
}
return n;
};
/* STATIC METHODS */
// Get the union of two bit arrays
BitArray.getUnion = function (bitArray1, bitArray2) {
var len = BitArray._getLen(bitArray1, bitArray2, true);
var result = new BitArray(len);
for (var i = 0; i < len; i++) {
result.setAt(i, BitArray._union(bitArray1.getAt(i), bitArray2.getAt(i)));
}
return result;
};
// Get the intersection of two bit arrays
BitArray.getIntersection = function (bitArray1, bitArray2) {
var len = BitArray._getLen(bitArray1, bitArray2, true);
var result = new BitArray(len);
for (var i = 0; i < len; i++) {
result.setAt(i, BitArray._intersect(bitArray1.getAt(i), bitArray2.getAt(i)));
}
return result;
};
// Get the difference between two bit arrays
BitArray.getDifference = function (bitArray1, bitArray2) {
var len = BitArray._getLen(bitArray1, bitArray2, true);
var result = new BitArray(len);
for (var i = 0; i < len; i++) {
result.setAt(i, BitArray._difference(bitArray1.getAt(i), bitArray2.getAt(i)));
}
return result;
};
// Convert a number into a bit array
BitArray.shred = function (number) {
var bits = new Array();
var q = number;
do {
bits.push(q % 2);
q = Math.floor(q / 2);
} while (q > 0);
return new BitArray(bits.length, bits.reverse());
};
/* BitArray PRIVATE STATIC CONSTANTS */
BitArray._ON = 1;
BitArray._OFF = 0;
/* BitArray PRIVATE STATIC METHODS */
// Calculate the intersection of two bits
BitArray._intersect = function (bit1, bit2) {
return bit1 === BitArray._ON && bit2 === BitArray._ON ? BitArray._ON : BitArray._OFF;
};
// Calculate the union of two bits
BitArray._union = function (bit1, bit2) {
return bit1 === BitArray._ON || bit2 === BitArray._ON ? BitArray._ON : BitArray._OFF;
};
// Calculate the difference of two bits
BitArray._difference = function (bit1, bit2) {
return bit1 === BitArray._ON && bit2 !== BitArray._ON ? BitArray._ON : BitArray._OFF;
};
// Get the longest or shortest (smallest) length of the two bit arrays
BitArray._getLen = function (bitArray1, bitArray2, smallest) {
var l1 = bitArray1.getLength();
var l2 = bitArray2.getLength();
return l1 > l2 ? (smallest ? l2 : l1) : (smallest ? l1 : l2);
};
CREDIT TO @Daniel Baulig for asking for the refactor from quick and dirty to prototype based.
I don't know about bit arrays, but you can make byte arrays easily with new features.
Look up typed arrays. I've used these in both Chrome and Firefox. The important one is Uint8Array.
To make an array of 512 uninitialized bytes:
var arr = new Uint8Array(512);
And accessing it (the sixth byte):
var byte = arr[5];
For node.js, use Buffer (server-side).
EDIT:
To access individual bits, use bit masks.
To get the bit in the one's position, do num & 0x1
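For example, a minimal get/set sketch over a Uint8Array, with 8 bits packed per byte (the helper names are mine):
var bits = new Uint8Array(512); // 4096 bits, all initially 0
function getBit(arr, n) { return (arr[n >> 3] >> (n & 7)) & 1; } // n >> 3 picks the byte, n & 7 the bit
function setBit(arr, n) { arr[n >> 3] |= 1 << (n & 7); }
setBit(bits, 42);
console.log(getBit(bits, 42)); // 1
console.log(getBit(bits, 43)); // 0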
The Stanford Javascript Crypto Library (SJCL) provides a Bit Array implementation and can convert different inputs (Hex Strings, Byte Arrays, etc.) to Bit Arrays.
Their code is public on GitHub: bitwiseshiftleft/sjcl. So if you look up bitArray.js, you can find their bit array implementation.
Something like this is as close as I can think of. It saves bit arrays as 32-bit numbers, and has a standard array backing it to handle larger sets.
class bitArray {
constructor(length) {
this.backingArray = Array.from({length: Math.ceil(length/32)}, ()=>0)
this.length = length
}
get(n) {
return (this.backingArray[n/32|0] & 1 << n % 32) > 0
}
on(n) {
this.backingArray[n/32|0] |= 1 << n % 32
}
off(n) {
this.backingArray[n/32|0] &= ~(1 << n % 32)
}
toggle(n) {
this.backingArray[n/32|0] ^= 1 << n % 32
}
forEach(callback) {
this.backingArray.forEach((number, container)=>{
const max = container == this.backingArray.length-1 ? this.length%32 : 32
for(let x=0; x<max; x++) {
callback((number & 1<<x)>0, 32*container+x)
}
})
}
}
let bits = new bitArray(10)
bits.get(2) //false
bits.on(2)
bits.get(2) //true
bits.forEach(console.log)
/* outputs:
false
false
true
false
false
false
false
false
false
false
*/
bits.toggle(2)
bits.forEach(console.log)
/* outputs:
false
false
false
false
false
false
false
false
false
false
*/
bits.toggle(0)
bits.toggle(1)
bits.toggle(2)
bits.off(2)
bits.off(3)
bits.forEach(console.log)
/* outputs:
true
true
false
false
false
false
false
false
false
false
*/
2022
As can be seen from past answers and comments, the question of "implementing a bit array" can be understood in two different (non-exclusive) ways:
an array that takes 1-bit in memory for each entry
an array on which bitwise operations can be applied
As @beatgammit points out, ECMAScript specifies typed arrays, but bit arrays are not among them. I have just published @bitarray/typedarray, an implementation of typed arrays for bits, that emulates native typed arrays and takes 1 bit in memory for each entry.
Because it reproduces the behaviour of native typed arrays, it does not include any bitwise operations though. So, I have also published @bitarray/es6, which extends the previous with bitwise operations.
I wouldn't debate what is the best way of implementing bit arrays, as per the asked question, because "best" could be argued at length, but those are certainly some ways of implementing bit arrays, with the benefit that they behave like native typed arrays.
import BitArray from "@bitarray/es6"
const bits1 = BitArray.from("11001010");
const bits2 = BitArray.from("10111010");
for (let bit of bits1.or(bits2)) console.log(bit) // 1 1 1 1 1 0 1 0
You can easily do that by using bitwise operators. It's quite simple.
Let's try with the number 75.
Its representation in binary is 100 1011. So, how do we obtain each bit from the number?
You can use an AND "&" operation to select one bit and set the rest of them to 0. Then with a shift operation, you remove the trailing zeros that don't matter at the moment.
Example:
Let's do an AND operation with 2 (0000 0010):
0100 1011 & 0000 0010 => 0000 0010
Now we need to filter the selected bit; in this case it was the second bit, reading right to left.
0000 0010 >> 1 => 1
The zeros on the left are not significant. So the output will be the bit we selected, in this case the second one.
var word=75;
var res=[];
for(var x=7; x>=0; x--){
res.push((word&Math.pow(2,x))>>x);
}
console.log(res);
The output (matching the expected result): [0, 1, 0, 0, 1, 0, 1, 1]
In case you need more than a simple number, you can apply the same function for a byte. Let's say you have a file with multiple bytes. So, you can decompose that file in a ByteArray, then each byte in the array in a BitArray.
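A hedged sketch of that decomposition, reusing the same masking loop per byte (byteToBits is my name for the helper):
function byteToBits(byte) {
var res = [];
for (var x = 7; x >= 0; x--) {
res.push((byte >> x) & 1); // most significant bit first, as above
}
return res;
}
var bytes = new Uint8Array([75, 255]); // pretend these came from a file
var allBits = [];
bytes.forEach(function (b) { allBits = allBits.concat(byteToBits(b)); });
console.log(allBits.join('')); // 0100101111111111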
Good luck!
@Commi's implementation is what I ended up using.
I believe there is a bug in this implementation. Bits at every 32-bit boundary give the wrong result (i.e. when the index is 32 * k - 1, so 31, 63, 95, etc.).
I fixed it in the get() method by replacing > 0 with != 0.
get(n) {
return (this.backingArray[n/32|0] & 1 << n % 32) != 0
}
The reason for the bug is that the ints are 32-bit signed. Shifting 1 left by 31 gets you a negative number. Since the check is for >0, this will be false when it should be true.
I wrote a program to prove the bug before the fix, and to verify it after; it is posted below.
for (var i=0; i < 100; i++) {
var ar = new bitArray(1000);
ar.on(i);
for(var j=0;j<1000;j++) {
// we should have TRUE only at one position and that is "i".
// if something is true when it should be false or false when it should be true, then report it.
if(ar.get(j)) {
if (j != i) console.log('we got a bug at ' + i);
}
if (!ar.get(j)) {
if (j == i) console.log('we got a bug at ' + i);
}
}
}
2022
We can implement a BitArray class which behaves similarly to TypedArrays by extending DataView. However, in order to avoid the cost of trapping direct accesses to the numerical properties (the indices) by using a Proxy, I believe it's best to stay in the DataView domain. DataView is preferable to TypedArrays these days anyway, as its performance is much improved in recent V8 versions (v7+).
Just like TypedArrays, BitArray will have a predetermined length at construction time. I just include a few methods in the below snippet. The popcount property very efficiently returns the total number of 1s in the BitArray. Unlike normal arrays, popcount is a highly sought-after functionality for BitArrays - so much so that WebAssembly and even modern CPUs have a dedicated pop-count instruction. Apart from these you can easily add methods like .forEach(), .map() etc. if need be.
class BitArray extends DataView{
constructor(n,ab){
if (n > 1.5e10) throw new Error("BitArray size can not exceed 1.5e10");
super(ab instanceof ArrayBuffer ? ab
: new ArrayBuffer(Number((BigInt(n + 31) & ~31n) >> 3n))); // Sets ArrayBuffer.byteLength to multiples of 4 bytes (32 bits)
}
get length(){
return this.buffer.byteLength << 3;
}
get popcount(){
var m1 = 0x55555555,
m2 = 0x33333333,
m4 = 0x0f0f0f0f,
h01 = 0x01010101,
pc = 0,
x;
for (var i = 0, len = this.buffer.byteLength >> 2; i < len; i++){
x = this.getUint32(i << 2);
x -= (x >> 1) & m1; //put count of each 2 bits into those 2 bits
x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
x = (x + (x >> 4)) & m4; //put count of each 8 bits into those 8 bits
pc += (x * h01) >> 24; // top byte holds the sum; JS shift counts are taken mod 32, so a C-style >> 56 would behave identically
}
return pc;
}
// n >> 3 is Math.floor(n/8)
// n & 7 is n % 8
and(bar){
var len = Math.min(this.buffer.byteLength,bar.buffer.byteLength),
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,this.getUint32(i) & bar.getUint32(i));
return res;
}
at(n){
return this.getUint8(n >> 3) & (1 << (n & 7)) ? 1 : 0;
}
or(bar){
var len = Math.min(this.buffer.byteLength,bar.buffer.byteLength),
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,this.getUint32(i) | bar.getUint32(i));
return res;
}
not(){
var len = this.buffer.byteLength,
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,~(this.getUint32(i) >> 0));
return res;
}
reset(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) & ~(1 << (n & 7)));
}
set(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) | (1 << (n & 7)));
}
slice(a = 0, b = this.length){
return new BitArray(b-a,this.buffer.slice(a >> 3, b >> 3));
}
toggle(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) ^ (1 << (n & 7)));
}
toString(){
return new Uint8Array(this.buffer).reduce((p,c) => p + ((BigInt(c)* 0x0202020202n & 0x010884422010n) % 1023n).toString(2).padStart(8,"0"),"");
}
xor(bar){
var len = Math.min(this.buffer.byteLength,bar.buffer.byteLength),
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,this.getUint32(i) ^ bar.getUint32(i));
return res;
}
}
Just do:
var u = new BitArray(12);
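For instance, a quick smoke test using the methods above:
u.set(0);
u.set(5);
console.log(u.at(0), u.at(1), u.at(5)); // 1 0 1
console.log(u.popcount); // 2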
I hope it helps.
Probably [definitely] not the most efficient way to do this, but a string of zeros and ones can be parsed as a base-2 number, converted into a hexadecimal string, and finally into a Buffer.
const bufferFromBinaryString = (binaryRepresentation = '01010101') =>
Buffer.from(
parseInt(binaryRepresentation, 2).toString(16), 'hex');
Again, not efficient; but I like this approach because of the relative simplicity.
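Note that parseInt loses precision once the string is longer than 53 bits; a sketch that avoids this by chunking the string into bytes (assuming the length is a multiple of 8):
const bufferFromBinaryStringChunked = (binaryRepresentation = '01010101') =>
Buffer.from(binaryRepresentation.match(/.{1,8}/g).map(byte => parseInt(byte, 2)));
console.log(bufferFromBinaryStringChunked('0101010111110000')); // <Buffer 55 f0>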
Thanks for a wonderfully simple class that does just what I need.
I did find a couple of edge-case bugs while testing:
get(n) {
return (this.backingArray[n/32|0] & 1 << n % 32) != 0
// test of > 0 fails for bit 31
}
forEach(callback) {
this.backingArray.forEach((number, container)=>{
const max = container == this.backingArray.length-1 && this.length%32
? this.length%32 : 32;
// tricky edge-case: at length-1 when length%32 == 0,
// need full 32 bits not 0 bits
for(let x=0; x<max; x++) {
callback((number & 1<<x)!=0, 32*container+x) // see fix in get()
}
})
}
My final implementation fixed the above bugs and changed the backingArray to be a Uint8Array instead of an Array, which avoids signed-int bugs.
var number = 1310;
should be left alone.
var number = 120;
should be changed to "0120";
var number = 10;
should be changed to "0010";
var number = 7;
should be changed to "0007";
In all modern browsers you can use
numberStr.padStart(4, "0");
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/String/padStart
function zeroPad(num) {
return num.toString().padStart(4, "0");
}
var numbers = [1310, 120, 10, 7];
numbers.forEach(
function(num) {
var paddedNum = zeroPad(num);
console.log(paddedNum);
}
);
function pad_with_zeroes(number, length) {
var my_string = '' + number;
while (my_string.length < length) {
my_string = '0' + my_string;
}
return my_string;
}
try these:
('0000' + number).slice(-4);
or
(number+'').padStart(4,'0');
Here's another way. It comes from something I did that needs to be done thousands of times on a page load. It's pretty CPU-efficient to hard-code a string of zeroes one time, and chop off as many as you need for the pad as many times as needed. I do really like the power-of-10 method -- that's pretty flexible.
Anyway, this is as efficient as I could come up with:
For the original question, CHOOSE ONE of the cases...
var number = 1310;
var number = 120;
var number = 10;
var number = 7;
then
// only needs to happen once
var zeroString = "00000";
// one assignment gets the padded number
var paddedNum = zeroString.substring((number + "").length, 4) + number;
//output
alert("The padded number string is: " + paddedNum);
Of course you still need to validate the input. Because this ONLY works reliably under the following conditions:
Number of zeroes in the zeroString is desired_length + 1
Number of digits in your starting number is less than or equal to your desired length
Backstory:
I have a case that needs a fixed length (14 digit) zero-padded number. I wanted to see how basic I could make this. It's run tens of thousands of times on a page load, so efficiency matters. It's not quite re-usable as-is, and it's a bit inelegant. Except that it is very very simple.
For a desired n-digit padded string, this method requires a string of (at least) n+1 zeroes. Index 0 is the first character in the string, which won't ever be used, so really, it could be anything.
Note also that string.substring() is different from string.substr()!
var bareNum = 42 + '';
var zeroString = "000000000000000";
var paddedNum = zeroString.substring(bareNum.length, 14) + bareNum
This pulls zeroes from zeroString starting at the position matching the length of the string, and continues to get zeroes to the necessary length of 14. As long as that "14" in the third line is a lower integer than the number of characters in zeroString, it will work.
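For the record, the difference mentioned above:
var demo = "abcdef";
console.log(demo.substring(1, 4)); // "bcd" - (startIndex, endIndex)
console.log(demo.substr(1, 4)); // "bcde" - (startIndex, length)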
function pad(n, len) {
return (new Array(len + 1).join('0') + n).slice(-len);
}
might not work in old IE versions.
//to: 0 - to left, 1 - to right
String.prototype.pad = function(_char, len, to) {
if (!this || !_char || this.length >= len) {
return this;
}
to = to || 0;
var ret = this;
var max = (len - this.length)/_char.length + 1;
while (--max) {
ret = (to) ? ret + _char : _char + ret;
}
return ret;
};
Usage:
someString.pad(neededChars, neededLength)
Example:
'332'.pad('0', 6); //'000332'
'332'.pad('0', 6, 1); //'332000'
An approach I like is to add 10^N to the number, where N is the padded width you want. Treat the resulting number as a string and slice off the zeroth digit. Of course, you'll want to be careful if your input number might be larger than your pad length, but it's still much faster than the loop method:
// You want to pad four places:
>>> var N = Math.pow(10, 4)
>>> var number = 1310
>>> number < N ? ("" + (N + number)).slice(1) : "" + number
"1310"
>>> var number = 120
>>> number < N ? ("" + (N + number)).slice(1) : "" + number
"0120"
>>> var number = 10
>>> number < N ? ("" + (N + number)).slice(1) : "" + number
"0010"
…
etc. You can make this into a function easily enough:
/**
* Pad a number with leading zeros to "pad" places:
*
* @param number: The number to pad
* @param pad: The maximum number of leading zeros
*/
function padNumber(number, pad) {
var N = Math.pow(10, pad);
return number < N ? ("" + (N + number)).slice(1) : "" + number
}
I wrote a general function for this. It takes the id of an input control and a pad length as input.
function padLeft(input, padLength) {
var num = $("#" + input).val();
$("#" + input).val(('0'.repeat(padLength) + num).slice(-padLength));
}
With RegExp/JavaScript:
var number = 7;
number = ('0000'+number).match(/\d{4}$/)[0];
console.log(number);
With Function/RegExp/JavaScript:
var number = 7;
function padFix(n) {
return ('0000'+n).match(/\d{4}$/)[0];
}
console.log(padFix(number));
No loop, no functions
let n = "" + 100;
let x = ("0000000000" + n).substring(n.length);//add your amount of zeros
alert(x + "-" + x.length);
Nate's answer is the best way I found; it's just way too long to read. So I provide you with 3 simple solutions.
1. Here's my simplification of Nate's answer.
//number = 42
"0000".substring(number.toString().length, 4) + number;
2. Here's a solution that make it more reusable by using a function that takes the number and the desired length in parameters.
function pad_with_zeroes(number, len) {
var zeroes = "0".repeat(len);
return zeroes.substring(number.toString().length, len) + number;
}
// Usage: pad_with_zeroes(42,4);
// Returns "0042"
3. Here's a third solution, extending the Number prototype.
Number.prototype.toStringMinLen = function(len) {
var zeroes = "0".repeat(len);
return zeroes.substring(this.toString().length, len) + this;
}
//Usage: tmp=42; tmp.toStringMinLen(4)
Use the String.js library function padLeft:
S('123').padLeft(5, '0').s --> 00123