Say you have two integers, 10 and 20. That is 00001010 and 00010100. I would like to essentially concatenate these as strings, but have the result be a new integer.
00001010 + 00010100 == 0000101000010100
That final number is 2580.
However, I am looking for a way to do this without actually converting them to strings; something more efficient that just does some bit twiddling on the integers themselves. I'm not too familiar with that, but I imagine it would be along the lines of:
var a = 00001010 // == 10
var b = 00010100 // == 20
var c = a << b // == 2580
Note, I would like for this to work with any sequences of bits. So even:
var a = 010101
var b = 01110
var c = a + b == 01010101110
Your basic equation is:
c = b + (a << 8).
The trick here is that you always need to shift by 8. But since a and b do not always use all 8 bits in the byte, JavaScript automatically omits any leading zeros. We need to recover the number of leading zeros of b (equivalently, the number of trailing zeros to append to a) and put them back before adding, so that all the bits stay in their proper positions. This requires an equation like this:
c = b + (a << s + r)
Where s is the highest set bit (going from right to left) in b, and r is the remaining number of bits such that s + r = 8.
Essentially, all you are doing is shifting the first operand a over by 8 bits, effectively appending trailing zeros to a or, equally, padding leading zeros onto the second operand b. Then you add normally. (Since the shifted a has all zeros in its low bits, adding and bitwise OR are equivalent here.) This can be accomplished using logarithms, shifting, and the bitwise OR operation to provide an O(1) solution for arbitrary positive integers a and b, where the number of bits in a and b does not exceed some positive integer n. In the case of a byte, n = 8.
// Bitwise log base 2 in O(1) time
function log2(n) {
    // Check if n > 0
    let bits = 0;
    if (n > 0xffff) {
        n >>= 16;
        bits = 0x10;
    }
    if (n > 0xff) {
        n >>= 8;
        bits |= 0x8;
    }
    if (n > 0xf) {
        n >>= 4;
        bits |= 0x4;
    }
    if (n > 0x3) {
        n >>= 2;
        bits |= 0x2;
    }
    if (n > 0x1) {
        bits |= 0x1;
    }
    return bits;
}
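// (Added note, not in the original answer: for positive n, this function
// is equivalent to Math.floor(Math.log2(n)), at the cost of a float round-trip.)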
// Computes the max set bit,
// counting from right to left, starting
// at 0. For 20 (10100) we get bit # 4.
function msb(n) {
    n |= n >> 1;
    n |= n >> 2;
    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;
    n = n + 1;
    // We take the log here because n is now
    // the next power of 2 above the original value
    // (for 20, the folded value is 11111 = 31, so n is 32).
    // Halving gives the value of the highest set bit (16),
    // whose log base 2 is the bit position (4).
    return log2(n >> 1);
}
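// (Added, not in the original answer) ES2015's Math.clz32 counts leading
// zeros in the 32-bit representation, so the same bit position can be
// computed directly:
const msb2 = n => 31 - Math.clz32(n); // msb2(20) === 4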
// Operands
let a = 0b00001010 // 10
let b = 0b00010100 // 20

// Max number of bits in
// a binary number (here, one byte)
let n = 8

// The max set bit of b is the 16's bit, which is in
// position 4. We will need to pad 4 more zeros.
let s = msb(b)

// How many zeros to pad on the left:
// 8 - 4 = 4
let r = Math.abs(n - s)

// Shift a over by the computed
// number of bits, including padded zeros
let c = b + (a << s + r)

console.log(c)
Output:
2580
Notes:
This is NOT commutative.
Add error checking to log2() for negative numbers, and other edge cases.
References:
https://www.geeksforgeeks.org/find-significant-set-bit-number/
https://github.com/N02870941/java_data_structures/blob/master/src/main/java/util/misc/Mathematics.java
So the problem:

a is 10 (in binary 0000 1010)
b is 20 (in binary 0001 0100)

You want to get 2580 using bit shifts somehow.

If you left shift a by 8 using a <<= 8 (this is the same as multiplying a by 2^8) you get 1010 0000 0000, which is the same as 10 * 2^8 = 2560. Since the lower bits of a are now all 0's (when you use <<, it fills the new bits with 0), you can just add b on top of it: 1010 0000 0000 + 0001 0100 gives you 1010 0001 0100.

So in one line of code, it's var result = (a << 8) + b. Remember that most programming languages have no explicit built-in type for "binary", but everything is binary in its nature. So an int is "binary", an object is "binary", etc. When you want to do some binary operations on some data, you can just use the datatype you have as an operand for the bitwise operators.
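A quick demo (my addition, not in the original answer) of why the parentheses matter: + binds more tightly than <<, so the unparenthesized version shifts by 8 + b instead of by 8.

let a = 10, b = 20;
console.log((a << 8) + b); // 2580
console.log(a << 8 + b);   // -1610612736: parsed as a << (8 + b), i.e. 10 << 28, wrapped to signed 32 bits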
This is a more general version of how to concatenate two numbers' binary representations using no string operations:
/*
  This function concatenates b to the end of a and puts 0's in between them.
  b will be treated starting with its first 1 as its most significant bit.
  b needs to be bigger than 0; otherwise, Math.log2 will give -Infinity for 0 and NaN for negative b.
  padding is the number of 0's to add at the end of a.
*/
function concate_bits(a, b, padding) {
    // add the padding 0's to a
    a <<= padding;
    // this gets the largest power of 2 in b
    var power_of_2 = Math.floor(Math.log2(b));
    var power_of_2_value;
    while (power_of_2 >= 0) {
        power_of_2_value = 2 ** power_of_2;
        a <<= 1;
        if (b >= power_of_2_value) {
            a += 1;
            b -= power_of_2_value;
        }
        power_of_2--;
    }
    return a;
}
//this will print 2580 as the result
let result = concate_bits(10, 20, 3);
console.log(result);
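(Why the padding is 3 here: 20 is 10100, which occupies 5 significant bits, and the question treats each operand as an 8-bit byte, so 8 - 5 = 3 zeros go between the two operands, giving 1010 000 10100 = 2580.)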
Note, I would like for this to work with any sequences of bits. So even:
var a = 010101
var b = 01110
var c = a + b == 01010101110
This isn't going to be possible unless you convert to a string or otherwise store the number of bits in each number. 10101 010101 0010101 etc are all the same number (21), and once this is converted to a number, there is no way to tell how many leading zeroes the number originally had.
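For completeness, here is a minimal sketch (my addition, not from the answer) of the "store the number of bits" approach. The names are made up, and BigInt is used so results are not capped at 32 bits:

// Each operand carries an explicit bit width, so leading zeros survive.
function concatWithWidths(a, aWidth, b, bWidth) {
    return (BigInt(a) << BigInt(bWidth)) | BigInt(b);
}

const c = concatWithWidths(0b010101, 6, 0b01110, 5);
console.log(c.toString(2).padStart(6 + 5, '0')); // "01010101110"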
For instance, 10100 would be inverted to 01011; 010 would be inverted to 101; 101 would be inverted to 010.
The problem is when I use ~5, it becomes -6 because js uses 32 bit signed.
How do I invert an unsigned arbitrary-bit binary number?
I would like to create a function that takes in this unsigned arbitrary-bit binary number and returns its inverted form (101 -> 010).
I want to convert from string 101 to 010
You can create a function that flips the required number of digits like so
var flipbits = function (v, digits) {
    return ~v & (Math.pow(2, digits) - 1);
}
console.log(flipbits(5, 3)); // outputs 2
console.log(flipbits(2, 3)); // outputs 5
note - this isn't "arbitrary number of bits" ... it's 32 at best
working with strings, you can have arbitrary bit length (this one won't work without transpiling in Internet Exploder)
var flipbits = str => str.split('').map(b => (1 - b).toString()).join('');
console.log(flipbits('010')); // outputs 101
console.log(flipbits('101')); // outputs 010
The above in ES5
var flipbits = function flipbits(str) {
    return str.split('').map(function (b) {
        return (1 - b).toString();
    }).join('');
};
console.log(flipbits('010')); // outputs 101
console.log(flipbits('101')); // outputs 010
Inverting the bits will always be the same, but to view the signed result as an unsigned 32-bit integer you can use the unsigned shift operator >>>:
console.log(~5); // -6
console.log(~5>>>0); // 4294967290
If you want to make sure you only flip the significant bits in the number, you'll instead want to mask it via an & operation with a mask covering just the significant bits. Here is an example of the significant-bit masking:
function invert(x) {
    let significant = 0;
    let test = x;
    while (test > 1) {
        test = test >> 1;
        significant = (significant << 1) | 1;
    }
    return (~x) & significant;
}
console.log(invert(5)); // 2 (010 in binary)
In JavaScript, ~ (tilde) computes -(N+1). So your current operation is correct, but not what you are looking for:

~5
-(5 + 1)
-6
You can use String.prototype.replace() with RegExp /(0)|(1)/
function toggle(n) {
    return n.replace(/(0)|(1)/g, function(m, p1, p2) { return p2 ? 0 : 1 });
}

console.log(
    toggle("10100"),
    toggle("101")
)
You can use a function that converts numbers to binary as a string, flips the 0s and 1s, then converts back to a number. It seems to give the expected results, but looks pretty ugly:
function flipBits(n) {
    return parseInt(n.toString(2).split('').map(bit => 1 - bit).join(''), 2);
}

[0, 1, 2, 3, 4, 5, 123, 987679876, 987679875].forEach(
    n => console.log(n + ' -> ' + flipBits(n))
);
Maybe there's a mix of bitwise operators to do the same thing.
Edit
It seems you're working with strings, so just split, flip and join again:
// Requires support for ECMAScript ed 5.1 for map and
// ECMAScript 2015 for arrow functions
function flipStringBits(s) {
    return s.split('').map(c => 1 - c).join('');
}

['0', '010', '110', '10011100110'].forEach(
    v => console.log(v + ' -> ' + flipStringBits(v))
);
Basic function for ECMAScript ed 3 (works everywhere, even IE 4).
function flipStringBitsEd3(s) {
    var b = s.split('');
    for (var i = 0, iLen = b.length; i < iLen; i++) {
        b[i] = 1 - b[i];
    }
    return b.join('');
}

// Tests
console.log('Ed 3 version');
var data = ['0', '010', '110', '10011100110'];
for (var i = 0, iLen = data.length; i < iLen; i++) {
    console.log(data[i] + ' ->\n' + flipStringBitsEd3(data[i]) + '\n');
}
Works with any length string. The ed 3 version will work everywhere and is probably faster than functions using newer features.
You can create a mask for the number's width and take the XOR to flip the bits.
/**
 * @param {number} num
 * @return {number}
 */
var findComplement = function(num) {
    let len = num.toString(2).length;
    let mask = Math.pow(2, len) - 1;
    return num ^ mask;
};
console.log(findComplement(5));
For integer values, you can use this JavaScript function to reverse the order of the bits in a given integer and return the result as a new integer (note this reverses the bit order rather than flipping each bit):
function binaryReverse(value) {
    return parseInt(value.toString(2).split('').reverse().join(''), 2);
}

console.log(binaryReverse(25));
console.log(binaryReverse(19));
Output:
19
25
I have an existing fn that does the following:
public GetColor = (argb: number) => {
    var a = 1; // (argb & -16777216) >> 0x18; // gives me FF
    var r = (argb & 0xff0000) >> 0x10;
    var g = (argb & 0x00ff00) >> 0x8;
    var b = (argb & 0x0000ff);
    var curKendoColor = kendo.parseColor("rgba(" + r + "," + g + "," + b + "," + a + ")", false);
    // ...
}
I would like to know the function I would need to return that back to a number.
For example, if I have an AARRGGBB of (FFFF0000), I would like to get back to the number version that toColor would have derived from.
I would be OK with either the signed or the unsigned version of the returned number. Signed would be -65536, but the unsigned would be fine as well (not sure what that number would be off the top of my head right now).
I tried to do this, but the attempts all end up at 0, which I know is not correct:

colorSend |= (parseInt(this.Color.substr(0, 2), 16) & 255) << 24;
colorSend |= (parseInt(this.Color.substr(1, 2), 16) & 255) << 16;
colorSend |= (parseInt(this.Color.substr(3, 2), 16) & 255) << 8;

The parseInt gives me the 255, 0, 0 that I would expect, but the & and the shift logic do not seem correct, because the & zeros the integers out; thus the result was 0.
OK, I was able to get something that seems to work for me. I am simplifying it a bit, but for the ARGB I can do something like:
val = "0xFFFF0000"
val = parseInt(val, 16)
if ((val & 0x80000000) != 0) {
    val = val - 0x100000000;
}
So, for example, for red with an A of FF I would get -65536; with unsigned I can omit the last part.
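Incidentally (my note, not from the original post), any bitwise operation coerces its operands to signed 32-bit integers, so the same reinterpretation can be done in one step:

console.log(parseInt("FFFF0000", 16) | 0); // -65536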
Actually it's pretty easy; you just have to understand what the hex values mean.
In the hexadecimal system, each number from 10 to 15 is represented by the letters A to F, and the numbers below ten are the normal digits 0-9. So if you want to convert 11 into hex, it would be 0B: the sixteens digit is 0, and the letter B represents 11. If you read a little bit about the hex system, you should be able to write your new function very easily ;)
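To make that concrete, here is a minimal sketch (my addition, not from the answer) of the reverse of GetColor: it rebuilds the signed 32-bit ARGB number from an "AARRGGBB" hex string. The name toArgbNumber is made up.

function toArgbNumber(argbHex) {
    var a = parseInt(argbHex.substr(0, 2), 16);
    var r = parseInt(argbHex.substr(2, 2), 16);
    var g = parseInt(argbHex.substr(4, 2), 16);
    var b = parseInt(argbHex.substr(6, 2), 16);
    // Bitwise OR coerces the result to a signed 32-bit integer,
    // which is what produces -65536 for FFFF0000.
    return (a << 24) | (r << 16) | (g << 8) | b;
}

console.log(toArgbNumber("FFFF0000"));       // -65536
console.log(toArgbNumber("FFFF0000") >>> 0); // 4294901760 (unsigned view)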
*"Efficient" here basically means in terms of smaller size (to reduce the IO waiting time), and speedy retrieval/deserialization times. Storing times are not as important.
I have to store a couple of dozen arrays of integers, each with 1800 values in the range 0-50, in the browser's localStorage -- that is, as a string.
Obviously, the simplest method is to just JSON.stringify it; however, that adds a lot of unnecessary information, considering that the range of the data is well known. An average size for one of these arrays is then ~5500 bytes.
Here are some other methods I've tried (resultant size, and time to deserialize it 1000 times at the end)
zero-padding the numbers so each was 2 characters long, eg:
[5, 27, 7, 38] ==> "05270738"
base 50 encoding it:
[5, 11, 7, 38] ==> "5b7C"
just using the value as a character code (adding 32 to avoid the weird control characters at the start; see the sketch after this list):
[5, 11, 7, 38] ==> "%+'F" (String.fromCharCode(37), String.fromCharCode(43) ...)
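For concreteness, the charCode variant can be as simple as this sketch (my reconstruction, not the exact code used for the benchmarks):

var encodeCharCodes = function(arr) {
    return arr.map(function(n) { return String.fromCharCode(n + 32); }).join('');
};
var decodeCharCodes = function(str) {
    return str.split('').map(function(c) { return c.charCodeAt(0) - 32; });
};

console.log(encodeCharCodes([5, 11, 7, 38])); // "%+'F"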
Here are my results:
                 size    Chrome 18    Firefox 11
-------------------------------------------------
JSON.stringify   5286       60ms          99ms
zero-padded      3600      354ms         703ms
base 50          1800      315ms         400ms
charCodes        1800       21ms         178ms
My question is: is there an even better method I haven't yet considered?
Update
MДΓΓБДLL suggested using compression on the data, so I combined this LZW implementation with the base 50 and charCode data. I also tested aroth's code (packing 4 integers into 3 bytes). I got these results:
                 size    Chrome 18    Firefox 11
-------------------------------------------------
LZW base 50      1103      494ms         999ms
LZW charCodes    1103      194ms         882ms
bitpacking       1350     2395ms         331ms
If your range is 0-50, then you can pack 4 numbers into 3 bytes (6 bits per number). This would allow you to store 1800 numbers using ~1350 bytes. This code should do it:
window._firstChar = 48;

window.decodeArray = function(encodedText) {
    var result = [];
    var temp = [];
    for (var index = 0; index < encodedText.length; index += 3) {
        // skipping bounds checking because the encoded text is assumed to be valid
        var firstChar = encodedText.charAt(index).charCodeAt() - _firstChar;
        var secondChar = encodedText.charAt(index + 1).charCodeAt() - _firstChar;
        var thirdChar = encodedText.charAt(index + 2).charCodeAt() - _firstChar;
        temp.push((firstChar >> 2) & 0x3F); // 6 bits, 'a'
        temp.push(((firstChar & 0x03) << 4) | ((secondChar >> 4) & 0xF)); // 2 bits + 4 bits, 'b'
        temp.push(((secondChar & 0x0F) << 2) | ((thirdChar >> 6) & 0x3)); // 4 bits + 2 bits, 'c'
        temp.push(thirdChar & 0x3F); // 6 bits, 'd'
    }
    // filter out 'padding' numbers, if present; this is an extremely inefficient way to do it
    for (var index = 0; index < temp.length; index++) {
        if (temp[index] != 63) {
            result.push(temp[index]);
        }
    }
    return result;
};
window.encodeArray = function(array) {
    var encodedData = [];
    // note: the original referenced an undefined `dataSet` here; it should
    // use the `array` parameter
    for (var index = 0; index < array.length; index += 4) {
        var num1 = array[index];
        var num2 = index + 1 < array.length ? array[index + 1] : 63;
        var num3 = index + 2 < array.length ? array[index + 2] : 63;
        var num4 = index + 3 < array.length ? array[index + 3] : 63;
        encodeSet(num1, num2, num3, num4, encodedData);
    }
    // join so the result is a string, which is what decodeArray expects
    return encodedData.join('');
};
window.encodeSet = function(a, b, c, d, outArray) {
    // we can encode 4 numbers in 3 bytes
    var firstChar = ((a & 0x3F) << 2) | ((b >> 4) & 0x03);   // 6 bits for 'a', 2 from 'b'
    var secondChar = ((b & 0x0F) << 4) | ((c >> 2) & 0x0F);  // remaining 4 bits from 'b', 4 from 'c'
    var thirdChar = ((c & 0x03) << 6) | (d & 0x3F);          // remaining 2 bits from 'c', 6 bits for 'd'
    // add _firstChar so that all values map to a printable character
    outArray.push(String.fromCharCode(firstChar + _firstChar));
    outArray.push(String.fromCharCode(secondChar + _firstChar));
    outArray.push(String.fromCharCode(thirdChar + _firstChar));
};
Here's a quick example: http://jsfiddle.net/NWyBx/1
Note that storage size can likely be further reduced by applying gzip compression to the resulting string.
Alternately, if the ordering of your numbers is not significant, then you can simply do a bucket-sort using 51 buckets (assuming 0-50 includes both 0 and 50 as valid numbers) and store the counts for each bucket instead of the numbers themselves. That would likely give you better compression and efficiency than any other approach.
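A sketch (my addition) of that bucket-count idea, assuming order really does not matter:

function toCounts(values) {
    var counts = new Array(51).fill(0); // one bucket per value 0-50
    values.forEach(function(v) { counts[v]++; });
    return counts;
}

function fromCounts(counts) {
    var values = [];
    counts.forEach(function(n, v) {
        for (var i = 0; i < n; i++) values.push(v);
    });
    return values;
}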
Assuming (as in your test) that compression takes more time than the size reduction saves you, your char encoding is the smallest you'll get without bitshifting. You're currently using one byte for each number, but if they're guaranteed to be small enough you could put two numbers in each byte. That would probably be an over-optimization, unless this is a very hot piece of your code.
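A sketch of that packing (my addition). Note it only works if every value fits in 4 bits (0-15); the stated 0-50 range needs 6 bits per value, so it would require re-encoding first:

var packPair = function(hi, lo) { return String.fromCharCode((hi << 4) | lo); };
var unpackPair = function(ch) {
    var code = ch.charCodeAt(0);
    return [code >> 4, code & 0xF];
};

console.log(unpackPair(packPair(9, 3))); // [9, 3]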
You might want to consider using Uint8Array or ArrayBuffer. This blogpost shows how it's done. Copying his logic, here's an example, assuming you have an existing Uint8Array named arr.
function arrayBufferToBinaryString(buffer, cb) {
    var blobBuilder = new BlobBuilder();
    blobBuilder.append(buffer);
    var blob = blobBuilder.getBlob();
    var reader = new FileReader();
    reader.onload = function (e) {
        cb(reader.result);
    };
    reader.readAsBinaryString(blob);
}

arrayBufferToBinaryString(arr.buffer, function(s) {
    // do something with s
});
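Note (my addition): BlobBuilder has since been deprecated; in current browsers the equivalent is the Blob constructor, e.g. new Blob([arr.buffer]), with the FileReader part unchanged.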
Maybe I am just not good enough at math, but I am having a problem converting a number into pure alphabetical bijective hexavigesimal, just like Microsoft Excel/OpenOffice Calc do it.
Here is a version of my code, but it did not give me the output I needed:
var toHexvg = function(a) {
    var x = '';
    var let = "_abcdefghijklmnopqrstuvwxyz";
    var len = let.length;
    var b = a;
    var cnt = 0;
    var y = Array();
    do {
        a = (a - (a % len)) / len;
        cnt++;
    } while (a != 0)
    a = b;
    var vnt = 0;
    do {
        b += Math.pow((len), vnt) * Math.floor(a / Math.pow((len), vnt + 1));
        vnt++;
    } while (vnt != cnt)
    var c = b;
    do {
        y.unshift(c % len);
        c = (c - (c % len)) / len;
    } while (c != 0)
    for (var i in y) x += let[y[i]];
    return x;
}
The best output my efforts can get is: a b c d ... y z ba bb bc (though not from the actual code above). The intended output is supposed to be a b c ... y z aa ab ac ... zz aaa aab aac ... zzzzz aaaaaa aaaaab; you get the picture.
Basically, my problem is more with doing the math rather than with the function. Ultimately my question is: how do I do the math of the hexavigesimal conversion, up to a [supposed] infinity, just like Microsoft Excel?
And if possible, source code; thank you in advance.
Okay, here's my attempt, assuming you want the sequence to start with "a" (representing 0) and go:
a, b, c, ..., y, z, aa, ab, ac, ..., zy, zz, aaa, aab, ...
This works and hopefully makes some sense. The funky line is there because it mathematically makes more sense for 0 to be represented by the empty string, and then "a" would be 1, etc.
var alpha = "abcdefghijklmnopqrstuvwxyz";

function hex(a) {
    // First figure out how many digits there are.
    a += 1; // This line is funky
    var c = 0;
    var x = 1;
    while (a >= x) {
        c++;
        a -= x;
        x *= 26;
    }
    // Now you can do normal base conversion.
    var s = "";
    for (var i = 0; i < c; i++) {
        s = alpha.charAt(a % 26) + s;
        a = Math.floor(a / 26);
    }
    return s;
}
However, if you're planning to simply print them out in order, there are far more efficient methods. For example, using recursion and/or prefixes and stuff.
Although @user826788 has already posted working code (which is even a third quicker), I'll post my own work that I did before finding the posts here (as I didn't know the word "hexavigesimal"). However, it also includes the function for the other way round. Note that I use a = 1, as I use it to convert the starting list element from
aa) first
ab) second
to
<ol type="a" start="27">
<li>first</li>
<li>second</li>
</ol>
:
function linum2int(input) {
    input = input.replace(/[^A-Za-z]/, '');
    var output = 0;
    for (var i = 0; i < input.length; i++) {
        output = output * 26 + parseInt(input.substr(i, 1), 26 + 10) - 9;
    }
    console.log('linum', output);
    return output;
}

function int2linum(input) {
    var zeros = 0;
    var next = input;
    var generation = 0;
    while (next >= 27) {
        next = (next - 1) / 26 - (next - 1) % 26 / 26;
        zeros += next * Math.pow(27, generation);
        generation++;
    }
    var output = (input + zeros).toString(27).replace(/./g, function ($0) {
        return '_abcdefghijklmnopqrstuvwxyz'.charAt(parseInt($0, 27));
    });
    return output;
}
linum2int("aa"); // 27
int2linum(27); // "aa"
You could accomplish this with recursion, like this:
const toBijective = n => (n > 26 ? toBijective(Math.floor((n - 1) / 26)) : "") + ((n % 26 || 26) + 9).toString(36);
// Parsing is not recursive
const parseBijective = str => str.split("").reverse().reduce((acc, x, i) => acc + ((parseInt(x, 36) - 9) * (26 ** i)), 0);
toBijective(1) // "a"
toBijective(27) // "aa"
toBijective(703) // "aaa"
toBijective(18279) // "aaaa"
toBijective(127341046141) // "overflow"
parseBijective("Overflow") // 127341046141
I don't understand how to work it out from a formula, but I fooled around with it for a while and came up with the following algorithm to literally count up to the requested column number:
var getAlpha = (function() {
    var alphas = [null, "a"],
        highest = [1];
    return function(decNum) {
        if (alphas[decNum])
            return alphas[decNum];
        var d,
            next,
            carry,
            i = alphas.length;
        for (; i <= decNum; i++) {
            next = "";
            carry = true;
            for (d = 0; d < highest.length; d++) {
                if (carry) {
                    if (highest[d] === 26) {
                        highest[d] = 1;
                    } else {
                        highest[d]++;
                        carry = false;
                    }
                }
                next = String.fromCharCode(highest[d] + 96) + next;
            }
            if (carry) {
                highest.push(1);
                next = "a" + next;
            }
            alphas[i] = next;
        }
        return alphas[decNum];
    };
})();
alert(getAlpha(27)); // "aa"
alert(getAlpha(100000)); // "eqxd"
Demo: http://jsfiddle.net/6SE2f/1/
The highest array holds the current highest number with an array element per "digit" (element 0 is the least significant "digit").
When I started the above it seemed a good idea to cache each value once calculated, to save time if the same value was requested again, but in practice (with Chrome) it only took about 3 seconds to calculate the 1,000,000th value (bdwgn) and about 20 seconds to calculate the 10,000,000th value (uvxxk). With the caching removed it took about 14 seconds to the 10,000,000th value.
Just finished writing this code earlier tonight, and I found this question while on a quest to figure out what to name the damn thing. Here it is (in case anybody feels like using it):
/**
 * Convert an integer to bijective hexavigesimal notation (alphabetic base-26).
 *
 * @param {Number} int - A positive integer above zero
 * @return {String} The number's value expressed in uppercased bijective base-26
 */
function bijectiveBase26(int) {
    const sequence = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
    const length = sequence.length;

    if (int <= 0) return int;
    if (int <= length) return sequence[int - 1];

    let index = (int % length) || length;
    let result = [sequence[index - 1]];

    while ((int = Math.floor((int - 1) / length)) > 0) {
        index = (int % length) || length;
        result.push(sequence[index - 1]);
    }

    return result.reverse().join("");
}
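A few sample calls (my addition) showing the mapping:

console.log(bijectiveBase26(1));   // "A"
console.log(bijectiveBase26(26));  // "Z"
console.log(bijectiveBase26(27));  // "AA"
console.log(bijectiveBase26(703)); // "AAA"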
I had to solve this same problem today for work. My solution is written in Elixir and uses recursion, but I explain the thinking in plain English.
Here are some example transformations:
0 -> "A", 1 -> "B", 2 -> "C", 3 -> "D", ..
25 -> "Z", 26 -> "AA", 27 -> "AB", ...
At first glance it might seem like a normal base-26 counting system, but unfortunately it is not so simple.
The "problem" becomes clear when you realize:
A = 0
AA = 26
This is at odds with a normal counting system, where "0" does not behave as "1" when it is in a decimal place other than the unit.
To understand the algorithm, consider a simpler but equivalent base-2 system:
A = 0
B = 1
AA = 2
AB = 3
BA = 4
BB = 5
AAA = 6
In a normal binary counting system we can determine the "value" of decimal places by
taking increasing powers of 2 (1, 2, 4, 8, 16) and the value of a binary number is
calculated by multiplying each digit by that digit place's value.
e.g. 10101 = 1 * (2 ^ 4) + 0 * (2 ^ 3) + 1 * (2 ^ 2) + 0 * (2 ^ 1) + 1 * (2 ^ 0) = 21
In our more complicated AB system, we can see by inspection that the decimal place values are:
1, 2, 6, 14, 30, 62
The pattern reveals itself to be (previous_unit_place_value + 1) * 2.
As such, to get the next lower unit place value, we divide by 2 and subtract 1.
This can be extended to a base-26 system. Simply divide by 26 and subtract 1.
Now a formula for transforming a normal base-10 number to this special base-26 is apparent. Say the input is x.

1. Create an accumulator list l.
2. If x is less than 26, set l = [x | l] and go to step 5. Otherwise, continue.
3. Divide x by 26. The floored result is d and the remainder is r.
4. Push the remainder as head on the accumulator list, i.e. l = [r | l], and go to step 2 with (d - 1) as input, i.e. x = d - 1.
5. Convert all elements of l to their corresponding chars. 0 -> A, etc.
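Since the question is about JavaScript, here is a direct transcription of those steps (my addition, not part of the original Elixir answer):

// 0 -> "A", 25 -> "Z", 26 -> "AA", following the steps above.
function toAzString(x) {
    var l = [];
    while (x >= 26) {               // step 2: loop until x is a single "digit"
        l.unshift(x % 26);          // steps 3-4: push the remainder r
        x = Math.floor(x / 26) - 1; // step 4: continue with d - 1
    }
    l.unshift(x);
    return l.map(function(d) { return String.fromCharCode(d + 65); }).join(''); // step 5
}

console.log(toAzString(0));  // "A"
console.log(toAzString(26)); // "AA"
console.log(toAzString(27)); // "AB"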
So, finally, here is my answer, written in Elixir:
defmodule BijectiveHexavigesimal do
  def to_az_string(number, base \\ 26) do
    number
    |> to_list(base)
    |> Enum.map(&to_char/1)
    |> to_string()
  end

  def to_09_integer(string, base \\ 26) do
    string
    |> String.to_charlist()
    |> Enum.reverse()
    |> Enum.reduce({0, nil}, fn
      char, {_total, nil} ->
        {to_integer(char), 1}

      char, {total, previous_place_value} ->
        char_value = to_integer(char + 1)
        place_value = previous_place_value * base
        new_total = total + char_value * place_value
        {new_total, place_value}
    end)
    |> elem(0)
  end

  def to_list(number, base, acc \\ []) do
    if number < base do
      [number | acc]
    else
      to_list(div(number, base) - 1, base, [rem(number, base) | acc])
    end
  end

  defp to_char(x), do: x + 65

  # Not in the original post: to_09_integer calls to_integer/1, which was
  # missing; it is presumably the inverse of to_char/1.
  defp to_integer(x), do: x - 65
end
You use it simply as BijectiveHexavigesimal.to_az_string(420). It also accepts an optional "base" arg.
I know the OP asked about Javascript but I wanted to provide an Elixir solution for posterity.
I have published these functions in an npm package here:
https://www.npmjs.com/package/@gkucmierz/utils
They convert bijective numeration to and from numbers (a BigInt version is included).
https://github.com/gkucmierz/utils/blob/main/src/bijective-numeration.mjs