Duplicating each character for all permutations of a string - javascript

This is a slightly odd/unique request. I am trying to achieve a result where e.g. "yes" becomes "yyes", "yees", "yess", "yyees", "yyess", "yyeess".
I have looked at this: Find all lowercase and uppercase combinations of a string in Javascript, which solves it for capitalisation, but my understanding stops me from adapting that approach to character duplication (if this method can even be used in this way).
export function letterDuplication(level: number, input: string){
  const houseLength = input.length;
  if (level == 1){
    var resultsArray: string[] = [];
    const letters = input.split("");
    const permCount = 1 << input.length;
    for (let perm = 0; perm < permCount; perm++) {
      // Update the capitalization depending on the current permutation
      letters.reduce((perm, letter, i) => {
        if (perm & 1) {
          letters[i] = (letter.slice(0, perm) + letter.slice(perm - 1, perm) + letter.slice(perm));
        } else {
          letters[i] = (letter.slice(0, perm - 1) + letter.slice(perm, perm) + letter.slice(perm));
        }
        return perm >> 1;
      }, perm);
      var result = letters.join("");
      console.log(result);
      resultsArray[perm] = result;
    }
  }
}
If I haven't explained this particularly well please let me know and I'll clarify. I'm finding it quite the challenge!

General idea
To get the list of all words that can be built from an ordered array of letters, we need every combination in which each letter appears 1 or 2 times, like:
word = 'sample'
array = 's'{1,2} + 'a'{1,2} + 'm'{1,2} + 'p'{1,2} + 'l'{1,2} + 'e'{1,2}
The number of possible words equals 2 ^ word.length (8 for "yes"), so we can build a binary table with 8 rows that represents all possible combinations simply by converting the numbers 0 to 7 from decimal to binary:
0 -> 000
1 -> 001
2 -> 010
3 -> 011
4 -> 100
5 -> 101
6 -> 110
7 -> 111
Each decimal number can then be used as a pattern for a new word, where 0 means the letter must be used once and 1 means the letter must be used twice:
0 -> 000 -> yes
1 -> 001 -> yess
2 -> 010 -> yees
3 -> 011 -> yeess
4 -> 100 -> yyes
5 -> 101 -> yyess
6 -> 110 -> yyees
7 -> 111 -> yyeess
Code
So your code might look like this:
// Initial word
const word = 'yes';
// List of all possible words
const result = [];
// Iterating (2 ^ word.length) times
for (let i = 0; i < Math.pow(2, word.length); i++) {
  // Get the binary pattern for this word
  const bin = decToBin(i, word.length);
  // Make a new word from the pattern ...
  let newWord = '';
  for (let j = 0; j < word.length; j++) {
    // ... by adding each letter 1 or 2 times, based on the bin char value
    newWord += word[j].repeat(+bin[j] ? 2 : 1);
  }
  result.push(newWord);
}
// Print result (all possible words)
console.log(result);

// Method for decimal to binary conversion with leading zeroes
// (always returns a string of length = len)
function decToBin(x, len) {
  let rem, bin = 0, i = 1;
  while (x != 0) {
    rem = x % 2;
    x = Math.floor(x / 2);
    bin += rem * i;
    i = i * 10;
  }
  bin = bin.toString();
  return '0'.repeat(len - bin.length) + bin;
}
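As an aside (my addition, not part of the answer above), the same zero-padded conversion can be done with built-ins:
// Sketch: decimal to fixed-width binary using toString(2) and padStart
const decToBin2 = (x, len) => x.toString(2).padStart(len, '0');
console.log(decToBin2(5, 3)); // "101"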

Maybe this example can help you. It's not the neatest or most optimal, but it seems to work:
function getCombinations(word = '') {
  const allCombination = [];
  const generate = (n, arr, i = 0) => {
    if (i === n) {
      return allCombination.push([...arr]);
    } else {
      arr[i] = 0;
      generate(n, arr, i + 1);
      arr[i] = 1;
      generate(n, arr, i + 1);
    }
  };
  generate(word.length, Array(word.length).fill(0));
  return allCombination.map((el) => {
    return el.map((isCopy, i) => isCopy ? word[i].repeat(2) : word[i]).join('');
  });
}
console.log(getCombinations('yes'));
console.log(getCombinations('cat'));
console.log(getCombinations('cat'));

Related

Replace / mask alternative N chars in a string

I want to replace some characters in a string with a "*", in the following way:
Given N, leave the first N characters as-is, but mask the next N characters with "*", then leave the next N characters unchanged, ...etc, alternating every N characters in the string.
I am able to mask every alternating character with "*" (the case where N is 1):
let str = "abcdefghijklmnopqrstuvwxyz"
for (let i = 0; i < str.length; i += 2) {
  str = str.substring(0, i) + '*' + str.substring(i + 1);
}
console.log(str)
Output:
"*b*d*f*h*j*l*n*p*r*t*v*x*z"
But I don't know how to perform the mask with different values for N.
Example:
let string = "9876543210"
N = 1; Output: 9*7*5*3*1*
N = 2; Output: 98**54**10
N = 3; Output: 987***321*
What is the best way to achieve this without regular expressions?
You could use Array.from to map each character to either "*" or the unchanged character, depending on the index. If the integer division of the index by n is odd, it should be "*". Finally turn that array back to string with join:
function mask(s, n) {
  return Array.from(s, (ch, i) => Math.floor(i / n) % 2 ? "*" : ch).join("");
}
let string = "9876543210";
console.log(mask(string, 1));
console.log(mask(string, 2));
console.log(mask(string, 3));
This code should work:
function stars(str, n = 1) {
  const parts = str.split('')
  let num = n
  let printStars = false
  return parts.map((letter) => {
    if (num > 0 && !printStars) {
      num -= 1
      return letter
    }
    printStars = true
    num += 1
    if (num === n) {
      printStars = false
    }
    return '*'
  }).join('')
}
console.log(stars('14124123123'), 1)
console.log(stars('14124123123', 2), 2)
console.log(stars('14124123123', 3), 3)
console.log(stars('14124123123', 5), 5)
console.log(stars(''))
Cheers
This approach takes the current mask size as an argument and builds on your substring code.
I also edited the function to allow characters other than "*".
const number = 'abcdefghijklmnopqrstuvwxyzzz';
for (let mask = 1; mask <= 9; mask++) {
  console.log("Current mask:", mask, " Value: ", applyMask(number, mask));
}

function applyMask(data, mask, defaultMask = '*') {
  // Start i at the value of mask, as we allow the first "n" characters to appear
  let i;
  let str = data;
  for (i = mask; i < data.length; i += mask * 2) {
    // Same substring method you used; the difference is that the mask size is
    // used to find the next shown values
    str = str.substring(0, i) + defaultMask.repeat(mask) + str.substring(i + mask);
  }
  // Trim the string in case extra "defaultMask" characters were appended
  // (handles a partial mask at the end of the string)
  return str.slice(0, data.length);
}

Javascript - convert integer to array of bits

I am trying in javascript to convert an integer (which I know will be between 0 and 32) to an array of 0s and 1s. I have looked around but couldn't find something that works.
So, if I have the integer 22 (binary 10110), I would like to access it as:
Bitarr[0] = 0
Bitarr[1] = 1
Bitarr[2] = 1
Bitarr[3] = 0
Bitarr[4] = 1
Any suggestions?
Many thanks
Convert to base 2:
var base2 = (yourNumber).toString(2);
Access the characters (bits):
base2[0], base2[1], base2[2], etc...
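Since toString(2) puts the most significant bit first, while the question indexes from the least significant bit, here is a minimal sketch (my addition, not part of the original answer) that pads and reverses to match the question's ordering:
// Sketch: 22 -> [0, 1, 1, 0, 1], with index 0 being the least significant bit
const n = 22;
const bits = n.toString(2).padStart(5, '0').split('').reverse().map(Number);
console.log(bits); // [0, 1, 1, 0, 1]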
Short (ES6)
Shortest (32 chars) version, which fills the remaining bits with zeroes. I assume that n is your number and b is the base (number of output bits):
[...Array(b)].map((x,i)=>n>>i&1)
let bits = (n,b=32) => [...Array(b)].map((x,i)=>(n>>i)&1);
let Bitarr = bits(22,8);
console.log(Bitarr[0]); // = 0
console.log(Bitarr[1]); // = 1
console.log(Bitarr[2]); // = 1
console.log(Bitarr[3]); // = 0
console.log(Bitarr[4]); // = 1
var a = 22;
var b = [];
for (var i = 0; i < 5; i++)
  b[i] = (a >> i) & 1;
alert(b);
This assumes 5 bits (as it seemed from your question), so 0 <= a < 32. If you like, you can make the 5 larger, up to 32 (bit shifting in JavaScript works with 32-bit integers).
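A quick illustration of that 32-bit limit (my sketch, not part of the answer):
console.log(1 << 31); // -2147483648 (the sign bit)
console.log(1 << 32); // 1 (shift counts are taken modulo 32)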
This should do it (note var rather than int, since this is JavaScript):
for (var i = 0; i < 32; ++i)
  Bitarr[i] = (my_int >> i) & 1;
You can convert your integer to a binary String like this. Note the base 2 parameter.
var i = 20;
var str = i.toString(2); // 10100
You can access chars in a String as if it were an array:
alert(str[0]); // 1
alert(str[1]); // 0
etc...
Building up on previous answers: you may want your array to be an array of integers, not strings, so here is a one-liner:
(1234).toString(2).split('').map(function(s) { return parseInt(s); });
Note that the shorter version, (11).toString(2).split('').map(parseInt), will not work (in Chrome or anywhere else): map passes the element index as the second argument, which parseInt interprets as the radix, which is why some digits come back as NaN.
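A sketch of a version that avoids the radix pitfall (my addition):
// map passes (element, index, array); Number ignores the extra arguments
console.log((11).toString(2).split('').map(Number)); // [1, 0, 1, 1]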
In addition, this code returns a 32-character string (least significant bit first):
function get_bits(value) {
  var base2_ = (value).toString(2).split("").reverse().join("");
  var baseL_ = new Array(33 - base2_.length).join("0");
  var base2 = base2_ + baseL_;
  return base2;
}
1 => 10000000000000000000000000000000
2 => 01000000000000000000000000000000
3 => 11000000000000000000000000000000
You might do as follows:
var n = 1071,
    b = Array(Math.floor(Math.log2(n)) + 1).fill()
        .map((_, i, a) => n >> a.length - 1 - i & 1);
console.log(b);
Just for the sake of reference:
(121231241).toString(2).split('').reverse().map((x, index) => x === '1' ? 1 << index : 0).reverse().filter(x => x > 0).join(' + ');
would give you:
67108864 + 33554432 + 16777216 + 2097152 + 1048576 + 524288 + 65536 + 32768 + 16384 + 4096 + 1024 + 512 + 256 + 128 + 8 + 1

How to create a function that converts a Number to a Bijective Hexavigesimal?

Maybe I am just not good enough at math, but I am having a problem converting a number into purely alphabetical bijective hexavigesimal, just like Microsoft Excel/OpenOffice Calc do it.
Here is a version of my code, but it did not give me the output I needed:
var toHexvg = function(a){
  var x = '';
  var let = "_abcdefghijklmnopqrstuvwxyz";
  var len = let.length;
  var b = a;
  var cnt = 0;
  var y = Array();
  do {
    a = (a - (a % len)) / len;
    cnt++;
  } while (a != 0)
  a = b;
  var vnt = 0;
  do {
    b += Math.pow((len), vnt) * Math.floor(a / Math.pow((len), vnt + 1));
    vnt++;
  } while (vnt != cnt)
  var c = b;
  do {
    y.unshift(c % len);
    c = (c - (c % len)) / len;
  } while (c != 0)
  for (var i in y) x += let[y[i]];
  return x;
}
The best output my efforts can get is: a b c d ... y z ba bb bc (though not with the actual code above). The intended output is supposed to be a b c ... y z aa ab ac ... zz aaa aab aac ... zzzzz aaaaaa aaaaab; you get the picture.
Basically, my problem is more with doing the "math" than with the function. Ultimately my question is: how do I do the math of hexavigesimal conversion, up to a [supposed] infinity, just like Microsoft Excel?
And if possible, a source code, thank you in advance.
Okay, here's my attempt, assuming you want the sequence to be start with "a" (representing 0) and going:
a, b, c, ..., y, z, aa, ab, ac, ..., zy, zz, aaa, aab, ...
This works and hopefully makes some sense. The funky line is there because it mathematically makes more sense for 0 to be represented by the empty string and then "a" would be 1, etc.
alpha = "abcdefghijklmnopqrstuvwxyz";

function hex(a) {
  // First figure out how many digits there are.
  a += 1; // This line is funky
  c = 0;
  var x = 1;
  while (a >= x) {
    c++;
    a -= x;
    x *= 26;
  }
  // Now you can do normal base conversion.
  var s = "";
  for (var i = 0; i < c; i++) {
    s = alpha.charAt(a % 26) + s;
    a = Math.floor(a / 26);
  }
  return s;
}
However, if you're planning to simply print them out in order, there are far more efficient methods. For example, using recursion and/or prefixes and stuff.
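One possible shape of that prefix idea (a sketch of mine, not the answer author's code): build each word length from the previous level's prefixes, which yields the words already in order.
function printInOrder(maxLen, alpha = "abcdefghijklmnopqrstuvwxyz") {
  let level = [""]; // all words of the previous length
  for (let len = 1; len <= maxLen; len++) {
    const next = [];
    for (const prefix of level)
      for (const ch of alpha) next.push(prefix + ch);
    next.forEach(w => console.log(w)); // a, b, ..., z, then aa, ab, ...
    level = next;
  }
}
printInOrder(2); // prints a..z, then aa..zz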
Although @user826788 has already posted working code (which is even a third quicker), I'll post my own work that I did before finding the posts here (as I didn't know the word "hexavigesimal"). However, it also includes the function for the other way round. Note that I use a = 1, as I use it to convert the starting list element from
aa) first
ab) second
to
<ol type="a" start="27">
<li>first</li>
<li>second</li>
</ol>
:
function linum2int(input) {
  input = input.replace(/[^A-Za-z]/, '');
  output = 0;
  for (i = 0; i < input.length; i++) {
    output = output * 26 + parseInt(input.substr(i, 1), 26 + 10) - 9;
  }
  console.log('linum', output);
  return output;
}

function int2linum(input) {
  var zeros = 0;
  var next = input;
  var generation = 0;
  while (next >= 27) {
    next = (next - 1) / 26 - (next - 1) % 26 / 26;
    zeros += next * Math.pow(27, generation);
    generation++;
  }
  output = (input + zeros).toString(27).replace(/./g, function ($0) {
    return '_abcdefghijklmnopqrstuvwxyz'.charAt(parseInt($0, 27));
  });
  return output;
}
linum2int("aa"); // 27
int2linum(27); // "aa"
You could accomplish this with recursion, like this:
const toBijective = n => (n > 26 ? toBijective(Math.floor((n - 1) / 26)) : "") + ((n % 26 || 26) + 9).toString(36);
// Parsing is not recursive
const parseBijective = str => str.split("").reverse().reduce((acc, x, i) => acc + ((parseInt(x, 36) - 9) * (26 ** i)), 0);
toBijective(1) // "a"
toBijective(27) // "aa"
toBijective(703) // "aaa"
toBijective(18279) // "aaaa"
toBijective(127341046141) // "overflow"
parseBijective("Overflow") // 127341046141
I don't understand how to work it out from a formula, but I fooled around with it for a while and came up with the following algorithm to literally count up to the requested column number:
var getAlpha = (function() {
  var alphas = [null, "a"],
      highest = [1];
  return function(decNum) {
    if (alphas[decNum])
      return alphas[decNum];
    var d,
        next,
        carry,
        i = alphas.length;
    for (; i <= decNum; i++) {
      next = "";
      carry = true;
      for (d = 0; d < highest.length; d++) {
        if (carry) {
          if (highest[d] === 26) {
            highest[d] = 1;
          } else {
            highest[d]++;
            carry = false;
          }
        }
        next = String.fromCharCode(highest[d] + 96) + next;
      }
      if (carry) {
        highest.push(1);
        next = "a" + next;
      }
      alphas[i] = next;
    }
    return alphas[decNum];
  };
})();
alert(getAlpha(27)); // "aa"
alert(getAlpha(100000)); // "eqxd"
Demo: http://jsfiddle.net/6SE2f/1/
The highest array holds the current highest number with an array element per "digit" (element 0 is the least significant "digit").
When I started the above it seemed a good idea to cache each value once calculated, to save time if the same value was requested again, but in practice (with Chrome) it only took about 3 seconds to calculate the 1,000,000th value (bdwgn) and about 20 seconds to calculate the 10,000,000th value (uvxxk). With the caching removed it took about 14 seconds to the 10,000,000th value.
Just finished writing this code earlier tonight, and I found this question while on a quest to figure out what to name the damn thing. Here it is (in case anybody feels like using it):
/**
 * Convert an integer to bijective hexavigesimal notation (alphabetic base-26).
 *
 * @param {Number} int - A positive integer above zero
 * @return {String} The number's value expressed in uppercased bijective base-26
 */
function bijectiveBase26(int) {
  const sequence = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  const length = sequence.length;
  if (int <= 0) return int;
  if (int <= length) return sequence[int - 1];
  let index = (int % length) || length;
  let result = [sequence[index - 1]];
  while ((int = Math.floor((int - 1) / length)) > 0) {
    index = (int % length) || length;
    result.push(sequence[index - 1]);
  }
  return result.reverse().join("");
}
I had to solve this same problem today for work. My solution is written in Elixir and uses recursion, but I explain the thinking in plain English.
Here are some example transformations:
0 -> "A", 1 -> "B", 2 -> "C", 3 -> "D", ..
25 -> "Z", 26 -> "AA", 27 -> "AB", ...
At first glance it might seem like a normal 26-base counting system
but unfortunately it is not so simple.
The "problem" becomes clear when you realize:
A = 0
AA = 26
This is at odds with a normal counting system, where "0" does not behave
as "1" when it is in a decimal place other than the units place.
To understand the algorithm, consider a simpler but equivalent base-2 system:
A = 0
B = 1
AA = 2
AB = 3
BA = 4
BB = 5
AAA = 6
In a normal binary counting system we can determine the "value" of decimal places by
taking increasing powers of 2 (1, 2, 4, 8, 16) and the value of a binary number is
calculated by multiplying each digit by that digit place's value.
e.g. 10101 = 1 * (2 ^ 4) + 0 * (2 ^ 3) + 1 * (2 ^ 2) + 0 * (2 ^ 1) + 1 * (2 ^ 0) = 21
In our more complicated AB system, we can see by inspection that the decimal place values are:
1, 2, 6, 14, 30, 62
The pattern reveals itself to be (previous_unit_place_value + 1) * 2.
As such, to get the next lower unit place value, we divide by 2 and subtract 1.
This can be extended to a base-26 system. Simply divide by 26 and subtract 1.
Now a formula for transforming a normal base-10 number to this special base-26 is apparent.
Say the input is x.
1. Create an accumulator list l.
2. If x is less than 26, set l = [x | l] and go to step 5. Otherwise, continue.
3. Divide x by 26. The floored result is d and the remainder is r.
4. Push the remainder as head onto the accumulator list, i.e. l = [r | l], and go to step 2 with (d - 1) as the input, i.e. x = d - 1.
5. Convert all elements of l to their corresponding chars: 0 -> A, etc.
So, finally, here is my answer, written in Elixir:
defmodule BijectiveHexavigesimal do
  def to_az_string(number, base \\ 26) do
    number
    |> to_list(base)
    |> Enum.map(&to_char/1)
    |> to_string()
  end

  def to_09_integer(string, base \\ 26) do
    string
    |> String.to_charlist()
    |> Enum.reverse()
    |> Enum.reduce({0, nil}, fn
      char, {_total, nil} ->
        {to_integer(char), 1}

      char, {total, previous_place_value} ->
        char_value = to_integer(char + 1)
        place_value = previous_place_value * base
        new_total = total + char_value * place_value
        {new_total, place_value}
    end)
    |> elem(0)
  end

  def to_list(number, base, acc \\ []) do
    if number < base do
      [number | acc]
    else
      to_list(div(number, base) - 1, base, [rem(number, base) | acc])
    end
  end

  defp to_char(x), do: x + 65

  # Inverse of to_char; this helper was missing from the original post
  # but is needed by to_09_integer.
  defp to_integer(x), do: x - 65
end
You use it simply as BijectiveHexavigesimal.to_az_string(420). It also accepts an optional "base" arg.
I know the OP asked about Javascript but I wanted to provide an Elixir solution for posterity.
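For readers who want it in JavaScript, here is a direct translation of the same to_az_string/to_list recursion (my sketch, not the original author's code):
// Sketch: bijective base-26, A = 0, AA = 26, same algorithm as the Elixir above
function toAzString(number, base = 26) {
  const toList = (n, acc = []) =>
    n < base ? [n, ...acc]
             : toList(Math.floor(n / base) - 1, [n % base, ...acc]);
  return toList(number).map(d => String.fromCharCode(d + 65)).join("");
}
console.log(toAzString(0));  // "A"
console.log(toAzString(26)); // "AA"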
I have published these functions in an npm package here:
https://www.npmjs.com/package/@gkucmierz/utils
Converting bijective numeration to number both ways (also BigInt version is included).
https://github.com/gkucmierz/utils/blob/main/src/bijective-numeration.mjs

Convert numbers to letters beyond the 26 character alphabet

I'm creating some client side functions for a mappable spreadsheet export feature.
I'm using jQuery to manage the sort order of the columns, but each column is ordered like an Excel spreadsheet i.e. a b c d e......x y z aa ab ac ad etc etc
How can I generate a number as a letter? Should I define a fixed array of values? Or is there a dynamic way to generate this?
I think you're looking for something like this:
function colName(n) {
  var ordA = 'a'.charCodeAt(0);
  var ordZ = 'z'.charCodeAt(0);
  var len = ordZ - ordA + 1;
  var s = "";
  while (n >= 0) {
    s = String.fromCharCode(n % len + ordA) + s;
    n = Math.floor(n / len) - 1;
  }
  return s;
}
// Example:
for (n = 0; n < 125; n++)
  document.write(n + ":" + colName(n) + "<br>");
This is a very easy way:
function numberToLetters(num) {
  let letters = ''
  while (num >= 0) {
    letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'[num % 26] + letters
    num = Math.floor(num / 26) - 1
  }
  return letters
}
function getColumnDescription(i) {
  const m = i % 26;
  const c = String.fromCharCode(65 + m);
  const r = i - m;
  return r > 0
    ? `${getColumnDescription((r - 1) / 26)}${c}`
    : `Column ${c}`;
}
Usage:
getColumnDescription(15)
"Column P"
getColumnDescription(26)
"Column AA"
getColumnDescription(4460)
"Column FOO"
If you have your data in a two-dimensional array, e.g.
var data = [
  ['Day', 'score'],
  ['Monday', 99],
];
you can map the rows/columns to spreadsheet cell numbers as follows (building on the code examples above):
function getSpreadSheetCellNumber(row, column) {
  let result = '';
  // Get spreadsheet column letter
  let n = column;
  while (n >= 0) {
    result = String.fromCharCode(n % 26 + 65) + result;
    n = Math.floor(n / 26) - 1;
  }
  // Get spreadsheet row number
  result += `${row + 1}`;
  return result;
};
E.g. the 'Day' value from data[0][0] would go in spreadsheet cell A1.
> getSpreadSheetCellNumber(0, 0)
> "A1"
This also works when you have 26+ columns:
> getSpreadSheetCellNumber(0, 26)
> "AA1"
You can use code like this, assuming that numbers contains the numbers of your columns. After it runs, columnNames holds the string names for your columns (this handles up to two letters, i.e. 702 columns):
var letters = ['a', 'b', 'c', ..., 'z'];
var numbers = [1, 2, 3, ...];
var columnNames = [];
for (var i = 0; i < numbers.length; i++) {
  var firstLetter = parseInt(i / letters.length) == 0 ? '' : letters[parseInt(i / letters.length) - 1];
  var secondLetter = letters[i % letters.length];
  columnNames.push(firstLetter + secondLetter);
}
Simple recursive solution:
function numberToColumn(n) {
  const res = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'[n % 26];
  return n >= 26 ? numberToColumn(Math.floor(n / 26) - 1) + res : res;
}
Here is an alternative approach that relies on .toString(26). It uses conversion to base-26 and then translates the characters so they are in the a..z range:
const conv = ((base, alpha) => { // Closure for preparing the function
  const map = Object.fromEntries(Array.from(alpha, (c, i) => [c, alpha[i + 10]]));
  return n => (n + base).toString(26).replace(/o*p/, "").replace(/./g, m => map[m]);
})(parseInt("ooooooooop0", 26), "0123456789abcdefghijklmnopqrstuvwxyz");

// Example:
for (let n = 0; n < 29; n++) console.log(n, conv(n));
console.log("...");
for (let n = 690; n < 705; n++) console.log(n, conv(n));
About the magical number
The magical value "ooooooooop0" is derived as follows:
It is a number expressed in radix 26, in the standard way, i.e. where the ten digits also play a role, and then the first letters of the alphabet.
The greatest "digit" in this radix 26 is "p" (the 16th letter of the Latin alphabet), and "o" is the second greatest.
The magical value is formed by a long enough series of the one-but-greatest digit, followed by the greatest digit and ended by a 0.
As JavaScript integer numbers max out around Number.MAX_SAFE_INTEGER (greater integers numbers would suffer from rounding errors), there is no need to have a longer series of "o" than was selected. We can see that Number.MAX_SAFE_INTEGER.toString(26) has 12 digits, so precision is ensured up to 11 digits in radix 26, meaning we need 9 "o".
This magical number ensures that if we add units to it (in radix 26), we will always have a representation which starts with a series of "o" and then a "p". That is because at some point the last digit will wrap around to 0 again, and the "p" will also wrap around to 0, bringing the preceding "o" to "p". And so we have this invariant that the number always starts with zero or more "o" and then a "p".
More generic
The above magic number could be derived via code, and we could make it more generic by providing the target alphabet. The length of that target alphabet then also directly determines the radix (i.e. the number of characters in that string).
Here is the same output generated as above, but with a more generic function:
function createConverter(targetDigits) {
  const radix = targetDigits.length,
        alpha = "0123456789abcdefghijklmnopqrstuvwxyz",
        map = Object.fromEntries(Array.from(alpha,
          (src, i) => [src, targetDigits[i]]
        )),
        base = parseInt((alpha[radix - 1] + '0').padStart(
          Number.MAX_SAFE_INTEGER.toString(radix).length - 1, alpha[radix - 2]
        ), radix),
        trimmer = RegExp("^" + alpha[radix - 2] + "*" + alpha[radix - 1]);
  return n => (n + base).toString(radix)
    .replace(trimmer, "")
    .replace(/./g, m => map[m]);
}

// Example:
const conv = createConverter("abcdefghijklmnopqrstuvwxyz");
for (let n = 0; n < 29; n++) console.log(n, conv(n));
console.log("...");
for (let n = 690; n < 705; n++) console.log(n, conv(n));
This can now easily be adapted to use a more reduced target alphabet (like one without the letters "l" and "o"), giving a radix of 24 instead of 26, reusing the same createConverter as above:
// Example without "l" and "o" in target alphabet:
const conv = createConverter("abcdefghijkmnpqrstuvwxyz");
for (let n = 0; n < 29; n++) console.log(n, conv(n));
console.log("...");
for (let n = 690; n < 705; n++) console.log(n, conv(n));
This covers the range from 1 to 1000. Beyond that I haven't checked.
function colToletters(num) {
  let a = " ABCDEFGHIJKLMNOPQRSTUVWXYZ";
  if (num < 27) return a[num % a.length];
  if (num > 26) {
    num--;
    let letters = ''
    while (num >= 0) {
      letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'[num % 26] + letters
      num = Math.floor(num / 26) - 1
    }
    return letters;
  }
}
I could be wrong, but I've checked the other functions posted here and they seem to fail at 26, which should be Z. Remember there are 26 letters in the alphabet, not 25.

How do I create bit array in Javascript?

What is the best way of implementing a bit array in JavaScript?
Here's one I whipped up:
UPDATE - something about this class had been bothering me all day - it wasn't size based - creating a BitArray with N slots/bits was a two-step operation - instantiate, resize. Updated the class to be size based, with an optional second parameter for populating the size-based instance with either array values or a base-10 numeric value.
/* BitArray DataType */

// Constructor
function BitArray(size, bits) {
  // Private field - array for our bits
  this.m_bits = new Array();

  //.ctor - initialize as a copy of an array of true/false or from a numeric value
  if (bits && bits.length) {
    for (var i = 0; i < bits.length; i++)
      this.m_bits.push(bits[i] ? BitArray._ON : BitArray._OFF);
  } else if (!isNaN(bits)) {
    this.m_bits = BitArray.shred(bits).m_bits;
  }
  if (size && this.m_bits.length != size) {
    if (this.m_bits.length < size) {
      for (var i = this.m_bits.length; i < size; i++) {
        this.m_bits.push(BitArray._OFF);
      }
    } else {
      for (var i = size; i > this.m_bits.length; i--) {
        this.m_bits.pop();
      }
    }
  }
}

/* BitArray PUBLIC INSTANCE METHODS */

// read-only property - number of bits
BitArray.prototype.getLength = function () { return this.m_bits.length; };

// accessor - get bit at index
BitArray.prototype.getAt = function (index) {
  if (index < this.m_bits.length) {
    return this.m_bits[index];
  }
  return null;
};

// accessor - set bit at index
BitArray.prototype.setAt = function (index, value) {
  if (index < this.m_bits.length) {
    this.m_bits[index] = value ? BitArray._ON : BitArray._OFF;
  }
};

// resize the bit array (append new false/0 indexes)
BitArray.prototype.resize = function (newSize) {
  var tmp = new Array();
  for (var i = 0; i < newSize; i++) {
    if (i < this.m_bits.length) {
      tmp.push(this.m_bits[i]);
    } else {
      tmp.push(BitArray._OFF);
    }
  }
  this.m_bits = tmp;
};

// Get the complementary bit array (i.e., 01 complements 10)
BitArray.prototype.getCompliment = function () {
  var result = new BitArray(this.m_bits.length);
  for (var i = 0; i < this.m_bits.length; i++) {
    result.setAt(i, this.m_bits[i] ? BitArray._OFF : BitArray._ON);
  }
  return result;
};

// Get the string representation ("101010")
BitArray.prototype.toString = function () {
  var s = new String();
  for (var i = 0; i < this.m_bits.length; i++) {
    s = s.concat(this.m_bits[i] === BitArray._ON ? "1" : "0");
  }
  return s;
};

// Get the numeric value
BitArray.prototype.toNumber = function () {
  var pow = 0;
  var n = 0;
  for (var i = this.m_bits.length - 1; i >= 0; i--) {
    if (this.m_bits[i] === BitArray._ON) {
      n += Math.pow(2, pow);
    }
    pow++;
  }
  return n;
};

/* STATIC METHODS */

// Get the union of two bit arrays
BitArray.getUnion = function (bitArray1, bitArray2) {
  var len = BitArray._getLen(bitArray1, bitArray2, true);
  var result = new BitArray(len);
  for (var i = 0; i < len; i++) {
    result.setAt(i, BitArray._union(bitArray1.getAt(i), bitArray2.getAt(i)));
  }
  return result;
};

// Get the intersection of two bit arrays
BitArray.getIntersection = function (bitArray1, bitArray2) {
  var len = BitArray._getLen(bitArray1, bitArray2, true);
  var result = new BitArray(len);
  for (var i = 0; i < len; i++) {
    result.setAt(i, BitArray._intersect(bitArray1.getAt(i), bitArray2.getAt(i)));
  }
  return result;
};

// Get the difference between two bit arrays
BitArray.getDifference = function (bitArray1, bitArray2) {
  var len = BitArray._getLen(bitArray1, bitArray2, true);
  var result = new BitArray(len);
  for (var i = 0; i < len; i++) {
    result.setAt(i, BitArray._difference(bitArray1.getAt(i), bitArray2.getAt(i)));
  }
  return result;
};

// Convert a number into a bit array
BitArray.shred = function (number) {
  var bits = new Array();
  var q = number;
  do {
    bits.push(q % 2);
    q = Math.floor(q / 2);
  } while (q > 0);
  return new BitArray(bits.length, bits.reverse());
};

/* BitArray PRIVATE STATIC CONSTANTS */
BitArray._ON = 1;
BitArray._OFF = 0;

/* BitArray PRIVATE STATIC METHODS */

// Calculate the intersection of two bits
BitArray._intersect = function (bit1, bit2) {
  return bit1 === BitArray._ON && bit2 === BitArray._ON ? BitArray._ON : BitArray._OFF;
};

// Calculate the union of two bits
BitArray._union = function (bit1, bit2) {
  return bit1 === BitArray._ON || bit2 === BitArray._ON ? BitArray._ON : BitArray._OFF;
};

// Calculate the difference of two bits
BitArray._difference = function (bit1, bit2) {
  return bit1 === BitArray._ON && bit2 !== BitArray._ON ? BitArray._ON : BitArray._OFF;
};

// Get the longest or shortest (smallest) length of the two bit arrays
BitArray._getLen = function (bitArray1, bitArray2, smallest) {
  var l1 = bitArray1.getLength();
  var l2 = bitArray2.getLength();
  return l1 > l2 ? (smallest ? l2 : l1) : (smallest ? l1 : l2);
};
CREDIT TO @Daniel Baulig for asking for the refactor from quick and dirty to prototype based.
I don't know about bit arrays, but you can make byte arrays easily with new features.
Look up typed arrays. I've used these in both Chrome and Firefox. The important one is Uint8Array.
To make an array of 512 uninitialized bytes:
var arr = new Uint8Array(512);
And accessing it (the sixth byte):
var byte = arr[5];
For node.js, use Buffer (server-side).
EDIT:
To access individual bits, use bit masks.
To get the bit in the one's position, do num & 0x1
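A minimal sketch of that masking idea on top of a Uint8Array (the helper names are mine, not a standard API):
// Bit i lives in byte i >> 3 (i / 8), at position i & 7 (i % 8) within it
const bytes = new Uint8Array(512); // room for 4096 bits
function getBit(i) { return (bytes[i >> 3] >> (i & 7)) & 1; }
function setBit(i) { bytes[i >> 3] |= 1 << (i & 7); }
setBit(42);
console.log(getBit(42)); // 1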
The Stanford Javascript Crypto Library (SJCL) provides a Bit Array implementation and can convert different inputs (Hex Strings, Byte Arrays, etc.) to Bit Arrays.
Their code is public on GitHub: bitwiseshiftleft/sjcl. So if you look up bitArray.js, you can find their bit array implementation.
A conversion from bytes to bits can be found here.
Something like this is as close as I can think of. Saves bit arrays as 32 bit numbers, and has a standard array backing it to handle larger sets.
class bitArray {
  constructor(length) {
    this.backingArray = Array.from({ length: Math.ceil(length / 32) }, () => 0)
    this.length = length
  }
  get(n) {
    return (this.backingArray[n / 32 | 0] & 1 << n % 32) > 0
  }
  on(n) {
    this.backingArray[n / 32 | 0] |= 1 << n % 32
  }
  off(n) {
    this.backingArray[n / 32 | 0] &= ~(1 << n % 32)
  }
  toggle(n) {
    this.backingArray[n / 32 | 0] ^= 1 << n % 32
  }
  forEach(callback) {
    this.backingArray.forEach((number, container) => {
      const max = container == this.backingArray.length - 1 ? this.length % 32 : 32
      for (let x = 0; x < max; x++) {
        callback((number & 1 << x) > 0, 32 * container + x)
      }
    })
  }
}
let bits = new bitArray(10)
bits.get(2) //false
bits.on(2)
bits.get(2) //true
bits.forEach(console.log)
/* outputs:
false
false
true
false
false
false
false
false
false
false
*/
bits.toggle(2)
bits.forEach(console.log)
/* outputs:
false
false
false
false
false
false
false
false
false
false
*/
bits.toggle(0)
bits.toggle(1)
bits.toggle(2)
bits.off(2)
bits.off(3)
bits.forEach(console.log)
/* outputs:
true
true
false
false
false
false
false
false
false
false
*/
2022
As can be seen from past answers and comments, the question of "implementing a bit array" can be understood in two different (non-exclusive) ways:
an array that takes 1 bit of memory for each entry
an array on which bitwise operations can be applied
As @beatgammit points out, ecmascript specifies typed arrays, but bit arrays are not part of it. I have just published @bitarray/typedarray, an implementation of typed arrays for bits, which emulates native typed arrays and takes 1 bit of memory for each entry.
Because it reproduces the behaviour of native typed arrays, it does not include any bitwise operations though. So, I have also published @bitarray/es6, which extends the previous with bitwise operations.
I wouldn't debate what is the best way of implementing a bit array, as per the asked question, because "best" could be argued at length, but these are certainly some ways of implementing bit arrays, with the benefit that they behave like native typed arrays.
import BitArray from "@bitarray/es6"
const bits1 = BitArray.from("11001010");
const bits2 = BitArray.from("10111010");
for (let bit of bits1.or(bits2)) console.log(bit) // 1 1 1 1 1 0 1 0
You can easily do that by using bitwise operators. It's quite simple.
Let's try with the number 75.
Its representation in binary is 100 1011. So, how do we obtain each bit from the number?
You can use an AND "&" operator to select one bit and set the rest of them to 0. Then with a shift operator, you remove the zeroes that don't matter at the moment.
Example:
Let's do an AND operation with 2 (0000 0010):
0100 1011 & 0000 0010 => 0000 0010
Now we need to shift out the lower zeroes to isolate the selected bit; in this case it was the second bit, reading right to left:
0000 0010 >> 1 => 1
The zeros on the left are not significant, so the output is the bit we selected, in this case the second one.
var word = 75;
var res = [];
for (var x = 7; x >= 0; x--) {
  res.push((word & Math.pow(2, x)) >> x);
}
console.log(res);
The output, matching the expected bits of 75: [0, 1, 0, 0, 1, 0, 1, 1]
In case you need more than a simple number, you can apply the same function for a byte. Let's say you have a file with multiple bytes. So, you can decompose that file in a ByteArray, then each byte in the array in a BitArray.
Good luck!
@Commi's implementation is what I ended up using.
I believe there is a bug in this implementation: bits on every 31st boundary give the wrong result (i.e. when the index is 32k - 1, so 31, 63, 95, etc.).
I fixed it in the get() method by replacing > 0 with != 0.
get(n) {
  return (this.backingArray[n / 32 | 0] & 1 << n % 32) != 0
}
The reason for the bug is that JavaScript's bitwise operations work on 32-bit signed integers. Shifting 1 left by 31 gives you a negative number; since the check is for > 0, the result is false when it should be true.
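You can see the sign-bit problem directly (a quick sketch of mine):
console.log(1 << 31);        // -2147483648
console.log((1 << 31) > 0);  // false
console.log((1 << 31) != 0); // true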
I wrote a program to demonstrate the bug before the fix and to verify it after:
for (var i = 0; i < 100; i++) {
  var ar = new bitArray(1000);
  ar.on(i);
  for (var j = 0; j < 1000; j++) {
    // We should have TRUE at only one position, and that is "i".
    // If something is true when it should be false, or false when it
    // should be true, report it.
    if (ar.get(j)) {
      if (j != i) console.log('we got a bug at ' + i);
    }
    if (!ar.get(j)) {
      if (j == i) console.log('we got a bug at ' + i);
    }
  }
}
2022
We can implement a BitArray class which behaves similarly to TypedArrays by extending DataView. However, in order to avoid the cost of trapping direct accesses to the numerical properties (the indices) with a Proxy, I believe it's best to stay in the DataView domain. DataView is preferable to TypedArrays these days anyway, as its performance is much improved in recent V8 versions (v7+).
Just like TypedArrays, BitArray will have a predetermined length at construction time. I just include a few methods in the below snippet. The popcount property very efficiently returns the total number of 1s in the BitArray. Unlike with normal arrays, popcount is a highly sought-after functionality for BitArrays; so much so that WebAssembly and even modern CPUs have a dedicated pop count instruction. Apart from these, you can easily add methods like .forEach(), .map() etc. if need be.
class BitArray extends DataView {
  constructor(n, ab) {
    if (n > 1.5e10) throw new Error("BitArray size can not exceed 1.5e10");
    super(ab instanceof ArrayBuffer
      ? ab
      : new ArrayBuffer(Number((BigInt(n + 31) & ~31n) >> 3n))); // Sets ArrayBuffer.byteLength to multiples of 4 bytes (32 bits)
  }
  get length() {
    return this.buffer.byteLength << 3;
  }
  get popcount() {
    var m1 = 0x55555555,
        m2 = 0x33333333,
        m4 = 0x0f0f0f0f,
        h01 = 0x01010101,
        pc = 0,
        x;
    for (var i = 0, len = this.buffer.byteLength >> 2; i < len; i++) {
      x = this.getUint32(i << 2);
      x -= (x >> 1) & m1;             // put count of each 2 bits into those 2 bits
      x = (x & m2) + ((x >> 2) & m2); // put count of each 4 bits into those 4 bits
      x = (x + (x >> 4)) & m4;        // put count of each 8 bits into those 8 bits
      pc += (x * h01) >> 56;          // JS masks shift counts to 5 bits, so >> 56 acts as >> 24 here
    }
    return pc;
  }
  // n >> 3 is Math.floor(n/8)
  // n & 7 is n % 8
  and(bar) {
    var len = Math.min(this.buffer.byteLength, bar.buffer.byteLength),
        res = new BitArray(len << 3);
    for (var i = 0; i < len; i += 4) res.setUint32(i, this.getUint32(i) & bar.getUint32(i));
    return res;
  }
  at(n) {
    return this.getUint8(n >> 3) & (1 << (n & 7)) ? 1 : 0;
  }
  or(bar) {
    var len = Math.min(this.buffer.byteLength, bar.buffer.byteLength),
        res = new BitArray(len << 3);
    for (var i = 0; i < len; i += 4) res.setUint32(i, this.getUint32(i) | bar.getUint32(i));
    return res;
  }
  not() {
    var len = this.buffer.byteLength,
        res = new BitArray(len << 3);
    for (var i = 0; i < len; i += 4) res.setUint32(i, ~(this.getUint32(i) >> 0));
    return res;
  }
  reset(n) {
    this.setUint8(n >> 3, this.getUint8(n >> 3) & ~(1 << (n & 7)));
  }
  set(n) {
    this.setUint8(n >> 3, this.getUint8(n >> 3) | (1 << (n & 7)));
  }
  slice(a = 0, b = this.length) {
    return new BitArray(b - a, this.buffer.slice(a >> 3, b >> 3));
  }
  toggle(n) {
    this.setUint8(n >> 3, this.getUint8(n >> 3) ^ (1 << (n & 7)));
  }
  toString() {
    return new Uint8Array(this.buffer).reduce((p, c) => p + ((BigInt(c) * 0x0202020202n & 0x010884422010n) % 1023n).toString(2).padStart(8, "0"), "");
  }
  xor(bar) {
    var len = Math.min(this.buffer.byteLength, bar.buffer.byteLength),
        res = new BitArray(len << 3);
    for (var i = 0; i < len; i += 4) res.setUint32(i, this.getUint32(i) ^ bar.getUint32(i));
    return res;
  }
}
Just use it like:
var u = new BitArray(12);
I hope it helps.
Probably [definitely] not the most efficient way to do this, but a string of zeros and ones can be parsed as a base-2 number, converted into a hexadecimal string, and finally into a buffer.
const bufferFromBinaryString = (binaryRepresentation = '01010101') =>
  Buffer.from(
    parseInt(binaryRepresentation, 2).toString(16), 'hex');
Again, not efficient; but I like this approach because of the relative simplicity.
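For instance (my usage note, assuming Node.js):
console.log(bufferFromBinaryString('01010101')); // <Buffer 55>
Note that the intermediate hex string must have an even length for Buffer.from(..., 'hex') to consume all of it, so inputs whose value needs an odd number of hex digits will lose the trailing nibble.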
Thanks for a wonderfully simple class that does just what I need.
I did find a couple of edge-case bugs while testing:
get(n) {
  return (this.backingArray[n / 32 | 0] & 1 << n % 32) != 0
  // test of > 0 fails for bit 31
}
forEach(callback) {
  this.backingArray.forEach((number, container) => {
    const max = container == this.backingArray.length - 1 && this.length % 32
      ? this.length % 32 : 32;
    // tricky edge case: at length-1 when length % 32 == 0,
    // we need the full 32 bits, not 0 bits
    for (let x = 0; x < max; x++) {
      callback((number & 1 << x) != 0, 32 * container + x) // see fix in get()
    }
  })
}
My final implementation fixed the above bugs and changed the backingArray to be a Uint8Array instead of an Array, which avoids signed-int bugs.
