Is it possible to seed the random number generator (Math.random) in JavaScript?
No, it is not possible to seed Math.random(). The ECMAScript specification is intentionally vague on the subject, providing no means for seeding nor requiring that browsers even use the same algorithm. So such a function must be provided externally, which thankfully isn't too difficult.
I've implemented a number of good, short and fast pseudorandom number generator (PRNG) functions in plain JavaScript. All of them can be seeded and provide high-quality numbers. These are not intended for security purposes; if you need a seedable CSPRNG, look into ISAAC.
First of all, take care to initialize your PRNGs properly. To keep things simple, the generators below have no built-in seed-generating procedure, but accept one or more 32-bit numbers as the initial seed state of the PRNG. Similar or sparse seeds (e.g. a simple seed of 1 and 2) have low entropy and can cause correlations or other randomness quality issues, sometimes resulting in the output having similar properties (such as randomly generated levels being similar). To avoid this, it is best practice to initialize PRNGs with a well-distributed, high-entropy seed and/or to advance past the first 15 or so numbers.
There are many ways to do this, but here are two methods. Firstly, hash functions are very good at generating seeds from short strings. A good hash function will generate very different results even when two strings are similar, so you don't have to put much thought into the string. Here's an example hash function:
function cyrb128(str) {
let h1 = 1779033703, h2 = 3144134277,
h3 = 1013904242, h4 = 2773480762;
for (let i = 0, k; i < str.length; i++) {
k = str.charCodeAt(i);
h1 = h2 ^ Math.imul(h1 ^ k, 597399067);
h2 = h3 ^ Math.imul(h2 ^ k, 2869860233);
h3 = h4 ^ Math.imul(h3 ^ k, 951274213);
h4 = h1 ^ Math.imul(h4 ^ k, 2716044179);
}
h1 = Math.imul(h3 ^ (h1 >>> 18), 597399067);
h2 = Math.imul(h4 ^ (h2 >>> 22), 2869860233);
h3 = Math.imul(h1 ^ (h3 >>> 17), 951274213);
h4 = Math.imul(h2 ^ (h4 >>> 19), 2716044179);
return [(h1^h2^h3^h4)>>>0, (h2^h1)>>>0, (h3^h1)>>>0, (h4^h1)>>>0];
}
Calling cyrb128 will produce a 128-bit hash value from a string which can be used to seed a PRNG. Here's how you might use it:
// Create cyrb128 state:
var seed = cyrb128("apples");
// Four 32-bit component hashes provide the seed for sfc32.
var rand = sfc32(seed[0], seed[1], seed[2], seed[3]);
// Only one 32-bit component hash is needed for mulberry32.
var rand = mulberry32(seed[0]);
// Obtain sequential random numbers like so:
rand();
rand();
Note: If you want a slightly more robust 128-bit hash, consider MurmurHash3_x86_128; it's more thorough, but intended for use with large arrays.
Alternatively, simply choose some dummy data to pad the seed with, and advance the generator beforehand a few times (12-20 iterations) to mix the initial state thoroughly. This has the benefit of being simpler, and is often used in reference implementations of PRNGs, but it does limit the number of initial states:
var seed = 1337 ^ 0xDEADBEEF; // 32-bit seed with optional XOR value
// Pad seed with Phi, Pi and E.
// https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number
var rand = sfc32(0x9E3779B9, 0x243F6A88, 0xB7E15162, seed);
for (var i = 0; i < 15; i++) rand();
Note: these PRNG functions produce a positive 32-bit number (0 to 2^32 - 1) which is then converted to a floating-point number between 0 and 1 (0 inclusive, 1 exclusive), equivalent to Math.random(). If you want random numbers in a specific range, read this article on MDN. If you only want the raw bits, simply remove the final division operation.
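For instance, here are a couple of small helpers (my own sketch; rand stands for any of the generators below) for mapping the [0, 1) output onto other ranges:
// Integer in [min, max), given a generator returning floats in [0, 1)
function randInt(rand, min, max) {
    return min + Math.floor(rand() * (max - min));
}
// Float in [min, max)
function randRange(rand, min, max) {
    return min + rand() * (max - min);
}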
JavaScript numbers can only represent whole numbers exactly up to 53-bit resolution, and when using bitwise operations, this is reduced to 32. Modern PRNGs in other languages often use 64-bit operations, which require shims when porting to JS that can drastically reduce performance. The algorithms here only use 32-bit operations, as they are directly compatible with JS.
Now, onward to the generators. (I maintain the full list with references and license info here.)
sfc32 (Simple Fast Counter)
sfc32 is part of the PractRand random number testing suite (which it passes of course). sfc32 has a 128-bit state and is very fast in JS.
function sfc32(a, b, c, d) {
return function() {
a >>>= 0; b >>>= 0; c >>>= 0; d >>>= 0;
var t = (a + b) | 0;
a = b ^ b >>> 9;
b = c + (c << 3) | 0;
c = (c << 21 | c >>> 11);
d = d + 1 | 0;
t = t + d | 0;
c = c + t | 0;
return (t >>> 0) / 4294967296;
}
}
You may wonder what the | 0 and >>>= 0 are for. These are essentially 32-bit integer casts, used for performance optimizations. Numbers in JS are basically floats, but during bitwise operations, they switch into a 32-bit integer mode. This mode is processed faster by JS interpreters, but any multiplication or addition will cause it to switch back to a float, resulting in a performance hit.
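A quick illustration of these casts; this is standard JS semantics, nothing specific to the generators:
console.log((2**32 + 5) | 0); // 5 (truncated to signed 32-bit)
console.log(-1 >>> 0);        // 4294967295 (reinterpreted as unsigned 32-bit)
console.log(1.9 | 0);         // 1 (fractional part dropped)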
Mulberry32
Mulberry32 is a simple generator with a 32-bit state, but it is extremely fast and has good quality randomness (the author states it passes all tests of the gjrand testing suite and has a full 2^32 period, but I haven't verified).
function mulberry32(a) {
return function() {
var t = a += 0x6D2B79F5;
t = Math.imul(t ^ t >>> 15, t | 1);
t ^= t + Math.imul(t ^ t >>> 7, t | 61);
return ((t ^ t >>> 14) >>> 0) / 4294967296;
}
}
I would recommend this if you just need a simple but decent PRNG and don't need billions of random numbers (see Birthday problem).
xoshiro128**
As of May 2018, xoshiro128** is the new member of the Xorshift family, by Vigna & Blackman (professor Vigna was also responsible for the Xorshift128+ algorithm powering most Math.random implementations under the hood). It is the fastest generator that offers a 128-bit state.
function xoshiro128ss(a, b, c, d) {
return function() {
var t = b << 9, r = a * 5; r = (r << 7 | r >>> 25) * 9;
c ^= a; d ^= b;
b ^= c; a ^= d; c ^= t;
d = d << 11 | d >>> 21;
return (r >>> 0) / 4294967296;
}
}
The authors claim it passes randomness tests well (albeit with caveats). Other researchers have pointed out that it fails some tests in TestU01 (particularly LinearComp and BinaryRank). In practice, it should not cause issues when floats are used (such as in these implementations), but may cause issues if relying on the raw lowest order bit.
JSF (Jenkins' Small Fast)
This is JSF or 'smallprng' by Bob Jenkins (2007), who also made ISAAC and SpookyHash. It passes PractRand tests and should be quite fast, although not as fast as sfc32.
function jsf32(a, b, c, d) {
return function() {
a |= 0; b |= 0; c |= 0; d |= 0;
var t = a - (b << 27 | b >>> 5) | 0;
a = b ^ (c << 17 | c >>> 15);
b = c + d | 0;
c = d + t | 0;
d = a + t | 0;
return (d >>> 0) / 4294967296;
}
}
No, it is not possible to seed Math.random(), but it's fairly easy to write your own generator, or better yet, use an existing one.
Check out: this related question.
Also, see David Bau's blog for more information on seeding.
NOTE: Despite (or rather, because of) succinctness and apparent elegance, this algorithm is by no means a high-quality one in terms of randomness. Look for e.g. those listed in this answer for better results.
(Originally adapted from a clever idea presented in a comment to another answer.)
var seed = 1;
function random() {
var x = Math.sin(seed++) * 10000;
return x - Math.floor(x);
}
You can set seed to be any number, just avoid zero (or any multiple of Math.PI).
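To see why, note that Math.sin of any multiple of Math.PI is zero up to floating-point error, so the first draw degenerates:
seed = Math.PI;
console.log(random()); // ~1.2e-12, i.e. practically 0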
The elegance of this solution, in my opinion, comes from the lack of any "magic" numbers (besides 10000, which represents about the minimum number of digits you must throw away to avoid odd patterns - see the results with values 10, 100, 1000). Brevity is also nice.
It's a bit slower than Math.random() (by a factor of 2 or 3), but I believe it's about as fast as any other solution written in JavaScript.
No, but here's a simple pseudorandom generator, an implementation of multiply-with-carry I adapted from Wikipedia (it has since been removed):
var m_w = 123456789;
var m_z = 987654321;
var mask = 0xffffffff;
// Takes any integer
function seed(i) {
m_w = (123456789 + i) & mask;
m_z = (987654321 - i) & mask;
}
// Returns number between 0 (inclusive) and 1.0 (exclusive),
// just like Math.random().
function random()
{
m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask; // >>> (unsigned shift) avoids sign extension
m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask;
var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
result /= 4294967296;
return result;
}
Antti Sykäri's algorithm is nice and short. I initially made a variation that replaced JavaScript's Math.random when you call Math.seed(s), but then Jason commented that returning the function would be better:
Math.seed = function(s) {
return function() {
s = Math.sin(s) * 10000; return s - Math.floor(s);
};
};
// usage:
var random1 = Math.seed(42);
var random2 = Math.seed(random1());
Math.random = Math.seed(random2());
This gives you another functionality that JavaScript doesn't have: multiple independent random generators. That is especially important if you want to have multiple repeatable simulations running at the same time.
Please see Pierre L'Ecuyer's work going back to the late 1980s and early 1990s. There are others as well. Creating a (pseudo) random number generator on your own, if you are not an expert, is pretty dangerous, because there is a high likelihood of the results either not being statistically random or having a small period. Pierre (and others) have put together some good (pseudo) random number generators that are easy to implement. I use one of his LFSR generators.
https://www.iro.umontreal.ca/~lecuyer/myftp/papers/handstat.pdf
Combining some of the previous answers, this is the seedable random function you are looking for:
Math.seed = function(s) {
var mask = 0xffffffff;
var m_w = (123456789 + s) & mask;
var m_z = (987654321 - s) & mask;
return function() {
m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask;
m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask;
var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
result /= 4294967296;
return result;
}
}
var myRandomFunction = Math.seed(1234);
var randomNumber = myRandomFunction();
It's not possible to seed the built-in Math.random function, but it is possible to implement a high-quality RNG in JavaScript with very little code.
JavaScript numbers are 64-bit floating-point, which can represent all positive integers less than 2^53. This puts a hard limit on our arithmetic, but within these limits you can still pick parameters for a high-quality Lehmer / LCG random number generator.
function RNG(seed) {
var m = 2**35 - 31
var a = 185852
var s = seed % m
return function () {
return (s = s * a % m) / m
}
}
Math.random = RNG(Date.now())
If you want even higher quality random numbers, at the cost of being ~10 times slower, you can use BigInt for the arithmetic and pick parameters where m is just able to fit in a double.
function RNG(seed) {
var m_as_number = 2**53 - 111
var m = 2n**53n - 111n
var a = 5667072534355537n
var s = BigInt(seed) % m
return function () {
return Number(s = s * a % m) / m_as_number
}
}
See this paper by Pierre l'Ecuyer for the parameters used in the above implementations:
https://www.ams.org/journals/mcom/1999-68-225/S0025-5718-99-00996-5/S0025-5718-99-00996-5.pdf
And whatever you do, avoid all the other answers here that use Math.sin!
Writing your own pseudorandom generator is quite simple.
The suggestion of Dave Scotese is useful but, as pointed out by others, it is not quite uniformly distributed.
However, it is not because of the integer arguments of sin. It's simply because of the range of sin, which happens to be a one-dimensional projection of a circle. If you took the angle of the circle instead, it would be uniform.
So instead of sin(x) use arg(exp(i * x)) / (2 * PI).
If you don't like the linear order, mix it up a bit with xor. The actual factor doesn't matter that much either.
To generate n pseudo random numbers one could use the code:
function psora(k, n) {
var r = Math.PI * (k ^ n)
return r - Math.floor(r)
}
n = 42; for(k = 0; k < n; k++) console.log(psora(k, n))
Please also note that you cannot use pseudorandom sequences when real entropy is needed.
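When you do need real entropy (tokens, keys, anything security-sensitive), the standard tool in modern browsers and Node is the Web Crypto API; a minimal sketch:
// Cryptographically strong, but deliberately not seedable:
var buf = new Uint32Array(1);
crypto.getRandomValues(buf);
console.log(buf[0]); // unpredictable 32-bit value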
Many people who need a seedable random-number generator in Javascript these days are using David Bau's seedrandom module.
Math.random itself, no; but the ran library solves this. It has almost all distributions you can imagine and supports seeded random number generation. Example:
ran.core.seed(0)
myDist = new ran.Dist.Uniform(0, 1)
samples = myDist.sample(1000)
Here's an adapted version of the Jenkins hash, borrowed from here
export function createDeterministicRandom(): () => number {
let seed = 0x2F6E2B1;
return function() {
// Robert Jenkins’ 32 bit integer hash function
seed = ((seed + 0x7ED55D16) + (seed << 12)) & 0xFFFFFFFF;
seed = ((seed ^ 0xC761C23C) ^ (seed >>> 19)) & 0xFFFFFFFF;
seed = ((seed + 0x165667B1) + (seed << 5)) & 0xFFFFFFFF;
seed = ((seed + 0xD3A2646C) ^ (seed << 9)) & 0xFFFFFFFF;
seed = ((seed + 0xFD7046C5) + (seed << 3)) & 0xFFFFFFFF;
seed = ((seed ^ 0xB55A4F09) ^ (seed >>> 16)) & 0xFFFFFFFF;
return (seed & 0xFFFFFFF) / 0x10000000;
};
}
You can use it like this:
const deterministicRandom = createDeterministicRandom()
deterministicRandom()
// => 0.9872818551957607
deterministicRandom()
// => 0.34880331158638
No, as others have said, it is not possible to seed Math.random(),
but you can install an external package which makes provision for that. I used the random-seed package, which can be installed using this command:
npm i random-seed
The following example is taken from the package documentation.
var seed = 'Hello World',
rand1 = require('random-seed').create(seed),
rand2 = require('random-seed').create(seed);
console.log(rand1(100), rand2(100));
Follow the link for the documentation: https://www.npmjs.com/package/random-seed
SIN(id + seed) is a very interesting replacement for RANDOM functions that cannot be seeded, such as in SQLite:
https://stackoverflow.com/a/75089040/7776828
Most of the answers here produce biased results. So here's a tested function based on the seedrandom library from GitHub:
!function(f,a,c){var s,l=256,p="random",d=c.pow(l,6),g=c.pow(2,52),y=2*g,h=l-1;function n(n,t,r){function e(){for(var n=u.g(6),t=d,r=0;n<g;)n=(n+r)*l,t*=l,r=u.g(1);for(;y<=n;)n/=2,t/=2,r>>>=1;return(n+r)/t}var o=[],i=j(function n(t,r){var e,o=[],i=typeof t;if(r&&"object"==i)for(e in t)try{o.push(n(t[e],r-1))}catch(n){}return o.length?o:"string"==i?t:t+"\0"}((t=1==t?{entropy:!0}:t||{}).entropy?[n,S(a)]:null==n?function(){try{var n;return s&&(n=s.randomBytes)?n=n(l):(n=new Uint8Array(l),(f.crypto||f.msCrypto).getRandomValues(n)),S(n)}catch(n){var t=f.navigator,r=t&&t.plugins;return[+new Date,f,r,f.screen,S(a)]}}():n,3),o),u=new m(o);return e.int32=function(){return 0|u.g(4)},e.quick=function(){return u.g(4)/4294967296},e.double=e,j(S(u.S),a),(t.pass||r||function(n,t,r,e){return e&&(e.S&&v(e,u),n.state=function(){return v(u,{})}),r?(c[p]=n,t):n})(e,i,"global"in t?t.global:this==c,t.state)}function m(n){var t,r=n.length,u=this,e=0,o=u.i=u.j=0,i=u.S=[];for(r||(n=[r++]);e<l;)i[e]=e++;for(e=0;e<l;e++)i[e]=i[o=h&o+n[e%r]+(t=i[e])],i[o]=t;(u.g=function(n){for(var t,r=0,e=u.i,o=u.j,i=u.S;n--;)t=i[e=h&e+1],r=r*l+i[h&(i[e]=i[o=h&o+t])+(i[o]=t)];return u.i=e,u.j=o,r})(l)}function v(n,t){return t.i=n.i,t.j=n.j,t.S=n.S.slice(),t}function j(n,t){for(var r,e=n+"",o=0;o<e.length;)t[h&o]=h&(r^=19*t[h&o])+e.charCodeAt(o++);return S(t)}function S(n){return String.fromCharCode.apply(0,n)}if(j(c.random(),a),"object"==typeof module&&module.exports){module.exports=n;try{s=require("crypto")}catch(n){}}else"function"==typeof define&&define.amd?define(function(){return n}):c["seed"+p]=n}("undefined"!=typeof self?self:this,[],Math);
function randIntWithSeed(seed, max=1) {
/* returns a random number between [0,max] including zero and max
seed can be either string or integer */
return Math.round(new Math.seedrandom('seed' + seed)() * max)
}
test for true randomness of this code: https://es6console.com/kkjkgur2/
There are plenty of good answers here, but I had a similar issue with the additional requirement that I wanted portability between Java's random number generator and whatever I ended up using in JavaScript.
I found the java-random package
These two pieces of code had identical output assuming the seed is the same:
Java:
Random randomGenerator = new Random(seed);
int randomInt;
for (int i=0; i<10; i++) {
randomInt = randomGenerator.nextInt(50);
System.out.println(randomInt);
}
JavaScript:
let Random = require('java-random');
let rng = new Random(seed);
for (let i=0; i<10; i++) {
let val = rng.nextInt(50);
console.log(val);
}
I have written a function that returns a seeded random number; it uses Math.sin to produce a long string of digits and uses the seed to pick numbers from that.
Use:
seedRandom("k9]:2#", 15)
It will return your seeded number.
The first parameter is any string value: your seed.
The second parameter is how many digits to return.
function seedRandom(inputSeed, lengthOfNumber){
var output = "";
var seed = inputSeed.toString();
var newSeed = 0;
var characterArray = ['0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','y','x','z','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','U','R','S','T','U','V','W','X','Y','Z','!','@','#','$','%','^','&','*','(',')',' ','[','{',']','}','|',';',':',"'",',','<','.','>','/','?','`','~','-','_','=','+'];
var longNum = "";
var counter = 0;
var accumulator = 0;
for(var i = 0; i < seed.length; i++){
var a = seed.length - (i+1);
for(var x = 0; x < characterArray.length; x++){
var tempX = x.toString();
var lastDigit = tempX.charAt(tempX.length-1);
var xOutput = parseInt(lastDigit);
addToSeed(characterArray[x], xOutput, a, i);
}
}
function addToSeed(character, value, a, i){
if(seed.charAt(i) === character){newSeed = newSeed + value * Math.pow(10, a)}
}
newSeed = newSeed.toString();
var copy = newSeed;
for(var i=0; i<lengthOfNumber*9; i++){
newSeed = newSeed + copy;
var x = Math.sin(20982+(i)) * 10000;
var y = Math.floor((x - Math.floor(x))*10);
longNum = longNum + y.toString()
}
for(var i=0; i<lengthOfNumber; i++){
output = output + longNum.charAt(accumulator);
counter++;
accumulator = accumulator + parseInt(newSeed.charAt(counter));
}
return(output)
}
A simple approach for a fixed seed:
function fixedrandom(p){
const seed = 43758.5453123;
return (Math.abs(Math.sin(p)) * seed)%1;
}
In PHP, there is the function srand(seed), which generates a fixed random value for a particular seed.
But in JS, there is no such built-in function.
However, we can write a simple and short one.
Step 1: Choose some Seed (Fix Number).
var seed = 100;
The number should be a positive integer greater than 1; further explanation in Step 2.
Step 2: Perform the Math.sin() function on the seed; it will give the sin value of that number. Store this value in a variable x.
var x;
x = Math.sin(seed); // Will Return Fractional Value between -1 & 1 (ex. 0.4059..)
The sin() method returns a fractional value between -1 and 1, and we don't want a negative value; therefore, in the first step, choose a number greater than 1.
Step 3: The returned value is a fractional value between -1 and 1, so multiply this value by 10 to make it greater than 1.
x = x * 10; // 10 for Single Digit Number
Step 4: Multiply the value by 10 for additional digits.
x = x * 10; // Will Give value between 10 and 99 OR
x = x * 100; // Will Give value between 100 and 999
Multiply as per the required number of digits.
The result will still have a decimal part.
Step 5: Remove the value after the decimal point with the Math.round() method.
x = Math.round(x); // This will give Integer Value.
Step 6: Turn negative values into positive (if any) with the Math.abs method.
x = Math.abs(x); // Convert Negative Values into Positive(if any)
End of explanation. Final code:
var seed = 111; // Any Number greater than 1
var digit = 10 // 1 => single digit, 10 => 2 Digits, 100 => 3 Digits and so. (Multiple of 10)
var x; // Initialize the Value to store the result
x = Math.sin(seed); // Perform Mathematical Sin Method on Seed.
x = x * 10; // Make the fractional value greater than 1
x = x * digit; // Number of Digits to be included
x = Math.round(x); // Remove Decimals
x = Math.abs(x); // Convert Negative Number into Positive
Clean and Optimized Functional Code
function random_seed(seed, digit = 1) {
var x = Math.abs(Math.round(Math.sin(seed++) * 10 * digit));
return x;
}
Then call this function using random_seed(any_number, number_of_digits). any_number is required and should be greater than 1. number_of_digits is an optional parameter; if nothing is passed, 1 digit will be returned.
random_seed(555); // 1 Digit
random_seed(234, 1); // 1 Digit
random_seed(7895656, 1000); // 4 Digit
For a whole number between 0 and 99:
Math.floor(Math.random() * 100)
I am looking to implement an O(1) lookup for binary quadkeys
I know that there is no true hashtable for JavaScript; instead objects offer O(1) lookup via their properties, but the issue there is that the keys are always converted to strings.
I expect to have over 10 million entries in memory, and if I have to rely on keys being strings, with the average quadkey string being 11.5 characters, that would equate to (10 million entries * 11.5 length * 2 bytes) = 230,000,000 bytes or 230 MB.
Compared to storing as int64 (10 million entries * 8 bytes) = 80,000,000 bytes or 80 MB
I know that int64 isn't supported natively in JavaScript, but there are libraries that can assist with that in terms of doing the bitwise operations I'd want.
Now, even though there exist libraries that can deal with int64, they are ultimately objects that do not truly represent 8 bytes, so I believe I cannot use any int64 library in a hashtable. Instead I was considering using a 2-deep hash table with two int32s: the first key would be the first 4 bytes, and the 2nd key the last 4 bytes. It's not as ideal as a one-operation lookup to find a value, but two operations, which is still good enough.
However, I feel this is not worth it if all keys are stored as strings, or given that all numbers are double-precision floating-point format numbers (8 bytes), each hash entry would then take up 16 bytes (two "int32" numbers).
My questions are:
1. If you store a number as the key to a property, will it take up 8 bytes of memory or will it convert to a string and take up length * 2 bytes?
2. Is there a way to encode binary quadkeys into the double-precision floating-point format number standard that JavaScript has adopted, even if the number makes no sense, as long as the underlying bits serve a purpose (unique identification of a construct)?
PS: I am marking this with nodejs as there may exist libraries that could assist in my end goal
Edit 1:
Seems 1 is possible with Map() and node 0.12.x+
As far as number 2, I was able to use an int64 lib (bytebuffer) and convert an int64 to a buffer.
I wanted to just use the buffer as the key to Map(), but it would not let me, as the buffer was internally an object, and each instance acts as a new key to Map().
So I considered turning the buffer back into a native type, a 64-bit double.
Using readDoubleBE I read the buffer as a double, which represents my int64 binary and successfully lets me use it in a Map, allowing O(1) lookup.
var ByteBuffer = require("bytebuffer");
var lookup = new Map();
var startStr = "123123738232777";
for(var i = 1; i <= 100000; i++) {
var longStr = startStr + i.toString();
var longVal = new ByteBuffer.Long.fromValue(longStr);
var buffer = new ByteBuffer().writeUint64(longVal).flip().toBuffer();
var doubleRepresentation = buffer.readDoubleBE();
lookup.set(doubleRepresentation, longStr);
}
console.log(exists("12312373823277744234")); // true
console.log(exists("123123738232777442341232332322")); // false
function exists(longStr) {
var longVal = new ByteBuffer.Long.fromValue(longStr);
var doubleRepresentation = new ByteBuffer().writeUint64(longVal).flip().toBuffer().readDoubleBE();
return lookup.has(doubleRepresentation);
}
The code is sloppy and there are probably shortcuts I am missing, so any suggestions/hints are welcomed.
Ideally I wanted to take advantage of bytebuffer's varints so that I can have even more savings with memory but I wasn't sure if that is possible in a Map, because I wasn't able to use a buffer as a key.
Edit 2:
Using memwatch-next I was able to see that the max heap size was 497,962,856 bytes with this method with Set(), whereas using strings in Set() was 1,111,082,864 bytes. That is about 500 MB vs 1 GB, which is a far cry from 80 MB and 230 MB; I'm not sure where the extra memory use is coming from. It's worth noting that for these memory tests I used Set over Map, so that it should only store unique keys in the data structure. (I'm using Set as just an existence checker, where Map would serve as a lookup.)
Because your keys are integers (and are unique) you could just use them as array indices. However, JS arrays are limited to 2^32 - 1 entries, so a single flat array won't cover a 64-bit key space.
To overcome this, you could use your two-step approach, but without using objects and their string hashes. You can store it something like this (a minimal sketch follows the list below):
store[0010][1000] = 'VALUE' - the item with binary key 00101000 is stored under index 0010 in the first array and index 1000 in the child array
In decimal, you're dealing with store[2][8] = 'VALUE', which is equivalent to doing store[40] = 'VALUE' in 64-bit space
You get yourself a tree with all the properties you want:
You can easily look up by key (in two steps)
Your keys are integers
You're dealing with 32bit integers (or less, depending on how you chunk it)
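A minimal sketch of that two-level store (the names set64/get64 are mine; hi and lo are the two 32-bit halves of the key):
var store = [];
function set64(hi, lo, value) {
    if (!store[hi]) store[hi] = [];
    store[hi][lo] = value;
}
function get64(hi, lo) {
    return store[hi] && store[hi][lo];
}
set64(2, 8, 'VALUE'); // the store[2][8] example above
console.log(get64(2, 8)); // 'VALUE'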
The doubling in memory from Map to Set in your version of Node comes from a bad implementation. Well, not "bad" per se, just not fit for millions of entries. The simpler handling of Set gets bought with memory. There's no free lunch, as always, sorry.
Why do they use so much in general? They are supposed to handle any object, and the machinery able to handle all of the possible varieties is really expensive. It is possible to optimize if all you have is of one kind, but you have to check for it, and in 99.99% of all cases it's not worth the hassle because maps and sets and arrays are short, a couple of thousand entries at most. To be blunt: developer time is expensive and better spent elsewhere. I could add: it's open source, do it yourself! But I know that's way easier said than done ;-)
You need to roll it yourself. You can use a Uint32Array for that and build a hash-table around it.
Bing Maps are encoded with strings of base-4 digits (at most 23) according to MS and the quadkey description. Using the encoding of the latter (I haven't read the former, so it might be wrong in the details) we can put it into two 32-bit integers:
function quadToInts(quad, zoom){
var high,low, qlen, i, c;
high = 0>>>0;
low = 0>>>0
zoom = zoom>>>0;
// checks & balances omitted!
qlen = quad.length;
for(i = 0; i < 16 && i < qlen ;i++){
c = parseInt(quad.charAt(i));
high |= c << (32-(i*2 + 2));
}
// max = 23 characters (says MS)
for(i = 0; i < 8 && i < qlen - 16 ;i++){
c = parseInt(quad.charAt(16 + i));
low |= c << (32-(i*2 + 2));
}
low |= zoom;
return [high,low];
}
And back
// obligatory https://graphics.stanford.edu/~seander/bithacks.html
function rev(v){
var s = 32;
var mask = (~0)>>>0;
while ((s >>>= 1) > 0) {
mask ^= (mask << s)>>>0;
v = ((v >>> s) & mask) | ((v << s) & (~mask)>>>0);
}
return v;
}
function intsToQuad(k1,k2){
var r, high, low, zoom, c, mask;
r = "";
mask = 0x3; // 0b11
high = k1>>>0;
low = k2>>>0;
zoom = low & (0x20 - 1);
low ^= zoom;
high = rev(high);
for(var i = 0;i<16;i++){
c = high & mask;
c = (c<<1 | c>>>1) & mask;
r += c.toString(10);
high >>>= 2;
}
low = rev(low);
for(var i = 0;i<16;i++){
c = low & mask;
c = (c<<1 | c>>>1) & mask;
r += c.toString(10);
low >>>= 2;
}
return [r,zoom];
}
(All quick hacks, please check before use! And the C&P devil might have had its hand in here, too)
A rough sketch for a hash table based on the following hash function:
// shamelessly stolen from http://www.burtleburtle.net/bob/c/lookup3.c
function hashword(k1, // high word of 64 bit value
k2, // low word of 64 bit value
seed // the seed
){
var a,b,c;
var rot = function(x,k){
return (((x)<<(k)) | ((x)>>>(32-(k))));
};
// Default the seed before using it to set up the internal state
if(arguments.length === 2){
seed = k1^k2;
}
/* Set up the internal state */
a = b = c = 0xdeadbeef + ((2<<2)>>>0) + seed>>>0;
b+=k2;
a+=k1;
c ^= b; c -= rot(b,14)>>>0;
a ^= c; a -= rot(c,11)>>>0;
b ^= a; b -= rot(a,25)>>>0;
c ^= b; c -= rot(b,16)>>>0;
a ^= c; a -= rot(c,4)>>>0;
b ^= a; b -= rot(a,14)>>>0;
c ^= b; c -= rot(b,24)>>>0;
return c;
}
function hashsize(N){
var highbit = function(n) {
var r = 0 >>> 0;
var m = n >>> 0;
while (m >>>= 1) {
r++;
}
return r;
};
return (1<<(highbit(N)+1))>>>0;
}
function hashmask(N){
return (hashsize(N)-1)>>>0;
}
And the (rather incomplete) code for table handling
/*
Room for 8-byte (64-bit) entries
Table pos. Array pos.
0 0 high, low
1 2 high, low
2 4 high, low
...
n n*2 high, low
*/
function HashTable(N){
var buf;
if(!N)
return null;
N = (N+1) * 2;
buf = new ArrayBuffer(hashsize(N) * 8);
this.table = new Uint32Array(buf);
this.mask = hashmask(N);
this.length = this.table.length;
}
HashTable.prototype.set = function(s,z){
var hash, quad, entry, check, i;
entry = null;
quad = quadToInts(s,z);
hash = hashword(quad[0],quad[1]);
entry = hash & this.mask;
check = entry * 2;
if(this.table[check] != 0 || this.table[check + 1] != 0){
//handle collisions here
console.log("collision in SET found")
return null;
} else {
this.table[check] = quad[0];
this.table[check + 1] = quad[1];
}
return entry;
};
HashTable.prototype.exists = function(s,z){
var hash, quad, entry, check, i;
entry = null;
quad = quadToInts(s,z);
hash = hashword(quad[0],quad[1]);
entry = hash & this.mask;
check = entry * 2;
if(this.table[check] != 0 || this.table[check + 1] != 0){
return entry;
}
return -1;
};
HashTable.prototype.get = function(index){
var entry = [0,0];
if(index * 2 + 1 >= this.length)
return null;
entry[0] = this.table[index * 2];
entry[1] = this.table[index * 2 + 1];
return entry;
};
// short test
var ht = new HashTable(100);
ht.set("01231031020112310",17);
ht.set("11231031020112311",12);
ht.set("21231033020112310",1);
ht.set("31231031220112311321312",23);
var s = "";
for(var i=0;i<ht.table.length;i+=2){
if(ht.table[i] !== 0){
var e = intsToQuad(ht.table[i],ht.table[i+1]);
s += e[0] +", " +e[1] + "\n";
}
}
console.log(s)
Collisions should be rare, so a couple of short standard arrays would do to catch them. To handle it you need to add another byte to the 8 bytes for the two integers representing the Quad or, better, set the second integer to all ones (won't happen with a Quad) and the first to the position(s) in the collision array(s).
To add payload is a bit more complicated because you have only a fixed length to do so.
I have set the size of the table to the next higher power of two. That can be too much, or even waaay too much, and you might be tempted to adapt it; be aware that the masking then no longer works as expected and you need to do a modulo instead.
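For instance (a sketch only, not tested against the class above), the entry computation in set/exists would become:
entry = (hash >>> 0) % (this.table.length / 2); // 2 uint32 slots per entry, so length/2 entries
The >>> 0 matters because hashword can return a negative int32, and a negative left operand would make the modulo negative too.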
Maybe I am just not good enough at math, but I am having a problem converting a number into purely alphabetical bijective hexavigesimal, just like Microsoft Excel/OpenOffice Calc do it.
Here is a version of my code, but it did not give me the output I needed:
var toHexvg = function(a){
var x='';
var let="_abcdefghijklmnopqrstuvwxyz";
var len=let.length;
var b=a;
var cnt=0;
var y = Array();
do{
a=(a-(a%len))/len;
cnt++;
}while(a!=0)
a=b;
var vnt=0;
do{
b+=Math.pow((len),vnt)*Math.floor(a/Math.pow((len),vnt+1));
vnt++;
}while(vnt!=cnt)
var c=b;
do{
y.unshift( c%len );
c=(c-(c%len))/len;
}while(c!=0)
for(var i in y)x+=let[y[i]];
return x;
}
The best output my efforts can get is: a b c d ... y z ba bb bc - though that isn't from the actual code above. The intended output is supposed to be a b c ... y z aa ab ac ... zz aaa aab aac ... zzzzz aaaaaa aaaaab; you get the picture.
Basically, my problem is more with doing the "math" than with the function. Ultimately my question is: how do I do the math of hexavigesimal conversion, up to a [supposed] infinity, just like Microsoft Excel?
And if possible, some source code. Thank you in advance.
Okay, here's my attempt, assuming you want the sequence to start with "a" (representing 0) and go:
a, b, c, ..., y, z, aa, ab, ac, ..., zy, zz, aaa, aab, ...
This works and hopefully makes some sense. The funky line is there because it mathematically makes more sense for 0 to be represented by the empty string and then "a" would be 1, etc.
var alpha = "abcdefghijklmnopqrstuvwxyz";
function hex(a) {
// First figure out how many digits there are.
a += 1; // This line is funky
var c = 0;
var x = 1;
while (a >= x) {
c++;
a -= x;
x *= 26;
}
// Now you can do normal base conversion.
var s = "";
for (var i = 0; i < c; i++) {
s = alpha.charAt(a % 26) + s;
a = Math.floor(a/26);
}
return s;
}
However, if you're planning to simply print them out in order, there are far more efficient methods. For example, using recursion and/or prefixes and stuff.
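For instance, here is a small sketch of my own of the counting approach: incrementing a bijective base-26 string in place, which is cheap when you only need the names in order:
function nextName(s) {
    var i = s.length - 1;
    while (i >= 0 && s[i] === "z") i--;          // skip trailing z's
    if (i < 0) return "a".repeat(s.length + 1);  // "zz..z" rolls over to "aa..a"
    return s.slice(0, i)
        + String.fromCharCode(s.charCodeAt(i) + 1)
        + "a".repeat(s.length - 1 - i);
}
// nextName("a") === "b", nextName("z") === "aa", nextName("az") === "ba"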
Although @user826788 has already posted working code (which is even a third quicker), I'll post my own work that I did before finding the posts here (as I didn't know the word "hexavigesimal"). However, it also includes the function for the other way round. Note that I use a = 1, as I use it to convert the starting list element from
aa) first
ab) second
to
<ol type="a" start="27">
<li>first</li>
<li>second</li>
</ol>
:
function linum2int(input) {
input = input.replace(/[^A-Za-z]/, '');
output = 0;
for (i = 0; i < input.length; i++) {
output = output * 26 + parseInt(input.substr(i, 1), 26 + 10) - 9;
}
console.log('linum', output);
return output;
}
function int2linum(input) {
var zeros = 0;
var next = input;
var generation = 0;
while (next >= 27) {
next = (next - 1) / 26 - (next - 1) % 26 / 26;
zeros += next * Math.pow(27, generation);
generation++;
}
output = (input + zeros).toString(27).replace(/./g, function ($0) {
return '_abcdefghijklmnopqrstuvwxyz'.charAt(parseInt($0, 27));
});
return output;
}
linum2int("aa"); // 27
int2linum(27); // "aa"
You could accomplish this with recursion, like this:
const toBijective = n => (n > 26 ? toBijective(Math.floor((n - 1) / 26)) : "") + ((n % 26 || 26) + 9).toString(36);
// Parsing is not recursive
const parseBijective = str => str.split("").reverse().reduce((acc, x, i) => acc + ((parseInt(x, 36) - 9) * (26 ** i)), 0);
toBijective(1) // "a"
toBijective(27) // "aa"
toBijective(703) // "aaa"
toBijective(18279) // "aaaa"
toBijective(127341046141) // "overflow"
parseBijective("Overflow") // 127341046141
I don't understand how to work it out from a formula, but I fooled around with it for a while and came up with the following algorithm to literally count up to the requested column number:
var getAlpha = (function() {
var alphas = [null, "a"],
highest = [1];
return function(decNum) {
if (alphas[decNum])
return alphas[decNum];
var d,
next,
carry,
i = alphas.length;
for(; i <= decNum; i++) {
next = "";
carry = true;
for(d = 0; d < highest.length; d++){
if (carry) {
if (highest[d] === 26) {
highest[d] = 1;
} else {
highest[d]++;
carry = false;
}
}
next = String.fromCharCode(
highest[d] + 96)
+ next;
}
if (carry) {
highest.push(1);
next = "a" + next;
}
alphas[i] = next;
}
return alphas[decNum];
};
})();
alert(getAlpha(27)); // "aa"
alert(getAlpha(100000)); // "eqxd"
Demo: http://jsfiddle.net/6SE2f/1/
The highest array holds the current highest number with an array element per "digit" (element 0 is the least significant "digit").
When I started the above it seemed a good idea to cache each value once calculated, to save time if the same value was requested again, but in practice (with Chrome) it only took about 3 seconds to calculate the 1,000,000th value (bdwgn) and about 20 seconds to calculate the 10,000,000th value (uvxxk). With the caching removed it took about 14 seconds to the 10,000,000th value.
Just finished writing this code earlier tonight, and I found this question while on a quest to figure out what to name the damn thing. Here it is (in case anybody feels like using it):
/**
* Convert an integer to bijective hexavigesimal notation (alphabetic base-26).
*
 * @param {Number} int - A positive integer above zero
 * @return {String} The number's value expressed in uppercased bijective base-26
*/
function bijectiveBase26(int){
const sequence = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";
const length = sequence.length;
if(int <= 0) return int;
if(int <= length) return sequence[int - 1];
let index = (int % length) || length;
let result = [sequence[index - 1]];
while((int = Math.floor((int - 1) / length)) > 0){
index = (int % length) || length;
result.push(sequence[index - 1]);
}
return result.reverse().join("")
}
I had to solve this same problem today for work. My solution is written in Elixir and uses recursion, but I explain the thinking in plain English.
Here are some example transformations:
0 -> "A", 1 -> "B", 2 -> "C", 3 -> "D", ..
25 -> "Z", 26 -> "AA", 27 -> "AB", ...
At first glance it might seem like a normal 26-base counting system
but unfortunately it is not so simple.
The "problem" becomes clear when you realize:
A = 0
AA = 26
This is at odds with a normal counting system, where "0" does not behave
as "1" when it is in a decimal place other than the unit.
To understand the algorithm, consider a simpler but equivalent base-2 system:
A = 0
B = 1
AA = 2
AB = 3
BA = 4
BB = 5
AAA = 6
In a normal binary counting system we can determine the "value" of decimal places by
taking increasing powers of 2 (1, 2, 4, 8, 16) and the value of a binary number is
calculated by multiplying each digit by that digit place's value.
e.g. 10101 = 1 * (2 ^ 4) + 0 * (2 ^ 3) + 1 * (2 ^ 2) + 0 * (2 ^ 1) + 1 * (2 ^ 0) = 21
In our more complicated AB system, we can see by inspection that the decimal place values are:
1, 2, 6, 14, 30, 62
The pattern reveals itself to be (previous_unit_place_value + 1) * 2.
As such, to get the next lower unit place value, we divide by 2 and subtract 1.
This can be extended to a base-26 system. Simply divide by 26 and subtract 1.
Now a formula for transforming a normal base-10 number to special base-26 is apparent.
1. Say the input is x.
2. Create an accumulator list l.
3. If x is less than 26, set l = [x | l] and go to step 7. Otherwise, continue.
4. Divide x by 26. The floored result is d and the remainder is r.
5. Push the remainder as head on the accumulator list, i.e. l = [r | l].
6. Go to step 3 with (d - 1) as input, i.e. x = d - 1.
7. Convert all elements of l to their corresponding chars. 0 -> A, etc.
So, finally, here is my answer, written in Elixir:
defmodule BijectiveHexavigesimal do
def to_az_string(number, base \\ 26) do
number
|> to_list(base)
|> Enum.map(&to_char/1)
|> to_string()
end
def to_09_integer(string, base \\ 26) do
string
|> String.to_charlist()
|> Enum.reverse()
|> Enum.reduce({0, nil}, fn
char, {_total, nil} ->
{to_integer(char), 1}
char, {total, previous_place_value} ->
char_value = to_integer(char + 1)
place_value = previous_place_value * base
new_total = total + char_value * place_value
{new_total, place_value}
end)
|> elem(0)
end
def to_list(number, base, acc \\ []) do
if number < base do
[number | acc]
else
to_list(div(number, base) - 1, base, [rem(number, base) | acc])
end
end
defp to_char(x), do: x + 65
# to_integer was missing from the original snippet; char - 65 is what
# makes the reduce in to_09_integer work (e.g. "AA" -> 26, "AB" -> 27)
defp to_integer(char), do: char - 65
end
You use it simply as BijectiveHexavigesimal.to_az_string(420). It also accepts an optional "base" arg.
I know the OP asked about Javascript but I wanted to provide an Elixir solution for posterity.
I have published these functions in npm package here:
https://www.npmjs.com/package/@gkucmierz/utils
It converts bijective numeration to numbers both ways (a BigInt version is also included).
https://github.com/gkucmierz/utils/blob/main/src/bijective-numeration.mjs
What is the best way of implementing a bit array in JavaScript?
Here's one I whipped up:
UPDATE - something about this class had been bothering me all day - it wasn't size based - creating a BitArray with N slots/bits was a two-step operation - instantiate, resize. Updated the class to be size based with an optional second parameter for populating the size-based instance with either array values or a base-10 numeric value.
(Fiddle with it here)
/* BitArray DataType */
// Constructor
function BitArray(size, bits) {
// Private field - array for our bits
this.m_bits = new Array();
//.ctor - initialize as a copy of an array of true/false or from a numeric value
if (bits && bits.length) {
for (var i = 0; i < bits.length; i++)
this.m_bits.push(bits[i] ? BitArray._ON : BitArray._OFF);
} else if (!isNaN(bits)) {
this.m_bits = BitArray.shred(bits).m_bits;
}
if (size && this.m_bits.length != size) {
if (this.m_bits.length < size) {
for (var i = this.m_bits.length; i < size; i++) {
this.m_bits.push(BitArray._OFF);
}
} else {
for(var i = size; i > this.m_bits.length; i--){
this.m_bits.pop();
}
}
}
}
/* BitArray PUBLIC INSTANCE METHODS */
// read-only property - number of bits
BitArray.prototype.getLength = function () { return this.m_bits.length; };
// accessor - get bit at index
BitArray.prototype.getAt = function (index) {
if (index < this.m_bits.length) {
return this.m_bits[index];
}
return null;
};
// accessor - set bit at index
BitArray.prototype.setAt = function (index, value) {
if (index < this.m_bits.length) {
this.m_bits[index] = value ? BitArray._ON : BitArray._OFF;
}
};
// resize the bit array (append new false/0 indexes)
BitArray.prototype.resize = function (newSize) {
var tmp = new Array();
for (var i = 0; i < newSize; i++) {
if (i < this.m_bits.length) {
tmp.push(this.m_bits[i]);
} else {
tmp.push(BitArray._OFF);
}
}
this.m_bits = tmp;
};
// Get the complementary bit array (i.e., 01 complements 10)
BitArray.prototype.getComplement = function () {
var result = new BitArray(this.m_bits.length);
for (var i = 0; i < this.m_bits.length; i++) {
result.setAt(i, this.m_bits[i] ? BitArray._OFF : BitArray._ON);
}
return result;
};
// Get the string representation ("101010")
BitArray.prototype.toString = function () {
var s = new String();
for (var i = 0; i < this.m_bits.length; i++) {
s = s.concat(this.m_bits[i] === BitArray._ON ? "1" : "0");
}
return s;
};
// Get the numeric value
BitArray.prototype.toNumber = function () {
var pow = 0;
var n = 0;
for (var i = this.m_bits.length - 1; i >= 0; i--) {
if (this.m_bits[i] === BitArray._ON) {
n += Math.pow(2, pow);
}
pow++;
}
return n;
};
/* STATIC METHODS */
// Get the union of two bit arrays
BitArray.getUnion = function (bitArray1, bitArray2) {
var len = BitArray._getLen(bitArray1, bitArray2, true);
var result = new BitArray(len);
for (var i = 0; i < len; i++) {
result.setAt(i, BitArray._union(bitArray1.getAt(i), bitArray2.getAt(i)));
}
return result;
};
// Get the intersection of two bit arrays
BitArray.getIntersection = function (bitArray1, bitArray2) {
var len = BitArray._getLen(bitArray1, bitArray2, true);
var result = new BitArray(len);
for (var i = 0; i < len; i++) {
result.setAt(i, BitArray._intersect(bitArray1.getAt(i), bitArray2.getAt(i)));
}
return result;
};
// Get the difference between two bit arrays
BitArray.getDifference = function (bitArray1, bitArray2) {
var len = BitArray._getLen(bitArray1, bitArray2, true);
var result = new BitArray(len);
for (var i = 0; i < len; i++) {
result.setAt(i, BitArray._difference(bitArray1.getAt(i), bitArray2.getAt(i)));
}
return result;
};
// Convert a number into a bit array
BitArray.shred = function (number) {
var bits = new Array();
var q = number;
do {
bits.push(q % 2);
q = Math.floor(q / 2);
} while (q > 0);
return new BitArray(bits.length, bits.reverse());
};
/* BitArray PRIVATE STATIC CONSTANTS */
BitArray._ON = 1;
BitArray._OFF = 0;
/* BitArray PRIVATE STATIC METHODS */
// Calculate the intersection of two bits
BitArray._intersect = function (bit1, bit2) {
return bit1 === BitArray._ON && bit2 === BitArray._ON ? BitArray._ON : BitArray._OFF;
};
// Calculate the union of two bits
BitArray._union = function (bit1, bit2) {
return bit1 === BitArray._ON || bit2 === BitArray._ON ? BitArray._ON : BitArray._OFF;
};
// Calculate the difference of two bits
BitArray._difference = function (bit1, bit2) {
return bit1 === BitArray._ON && bit2 !== BitArray._ON ? BitArray._ON : BitArray._OFF;
};
// Get the longest or shortest (smallest) length of the two bit arrays
BitArray._getLen = function (bitArray1, bitArray2, smallest) {
var l1 = bitArray1.getLength();
var l2 = bitArray2.getLength();
return l1 > l2 ? (smallest ? l2 : l1) : (smallest ? l1 : l2);
};
CREDIT TO @Daniel Baulig for asking for the refactor from quick and dirty to prototype based.
I don't know about bit arrays, but you can make byte arrays easily with new features.
Look up typed arrays. I've used these in both Chrome and Firefox. The important one is Uint8Array.
To make an array of 512 bytes (zero-initialized):
var arr = new Uint8Array(512);
And accessing it (the sixth byte):
var byte = arr[5];
For node.js, use Buffer (server-side).
EDIT:
To access individual bits, use bit masks.
To get the bit in the one's position, do num & 0x1
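A quick sketch of those mask operations on such an array:
var arr = new Uint8Array(512);
arr[5] |= 1 << 3;             // set bit 3 of byte 5
var bit = (arr[5] >> 3) & 1;  // read it back -> 1
arr[5] &= ~(1 << 3);          // clear it again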
The Stanford Javascript Crypto Library (SJCL) provides a Bit Array implementation and can convert different inputs (Hex Strings, Byte Arrays, etc.) to Bit Arrays.
Their code is public on GitHub: bitwiseshiftleft/sjcl. So if you lookup bitArray.js, you can find their bit array implementation.
A conversion from bytes to bits can be found here.
Something like this is as close as I can think of. Saves bit arrays as 32 bit numbers, and has a standard array backing it to handle larger sets.
class bitArray {
constructor(length) {
this.backingArray = Array.from({length: Math.ceil(length/32)}, ()=>0)
this.length = length
}
get(n) {
return (this.backingArray[n/32|0] & 1 << n % 32) > 0
}
on(n) {
this.backingArray[n/32|0] |= 1 << n % 32
}
off(n) {
this.backingArray[n/32|0] &= ~(1 << n % 32)
}
toggle(n) {
this.backingArray[n/32|0] ^= 1 << n % 32
}
forEach(callback) {
this.backingArray.forEach((number, container)=>{
const max = container == this.backingArray.length-1 ? this.length%32 : 32
for(let x=0; x<max; x++) {
callback((number & 1<<x)>0, 32*container+x)
}
})
}
}
let bits = new bitArray(10)
bits.get(2) //false
bits.on(2)
bits.get(2) //true
bits.forEach(console.log)
/* outputs:
false
false
true
false
false
false
false
false
false
false
*/
bits.toggle(2)
bits.forEach(console.log)
/* outputs:
false
false
false
false
false
false
false
false
false
false
*/
bits.toggle(0)
bits.toggle(1)
bits.toggle(2)
bits.off(2)
bits.off(3)
bits.forEach(console.log)
/* outputs:
true
true
false
false
false
false
false
false
false
false
*/
2022
As can be seen from past answers and comments, the question of "implementing a bit array" can be understood in two different (non-exclusive) ways:
an array that takes 1-bit in memory for each entry
an array on which bitwise operations can be applied
As @beatgammit points out, ECMAScript specifies typed arrays, but bit arrays are not part of it. I have just published @bitarray/typedarray, an implementation of typed arrays for bits, which emulates native typed arrays and takes 1 bit in memory for each entry.
Because it reproduces the behaviour of native typed arrays, it does not include any bitwise operations, though. So I have also published @bitarray/es6, which extends the previous with bitwise operations.
I wouldn't debate what the best way of implementing a bit array is, as per the asked question, because "best" could be argued at length, but these are certainly some ways of implementing bit arrays, with the benefit that they behave like native typed arrays.
import BitArray from "@bitarray/es6"
const bits1 = BitArray.from("11001010");
const bits2 = BitArray.from("10111010");
for (let bit of bits1.or(bits2)) console.log(bit) // 1 1 1 1 1 0 1 0
You can easily do that by using bitwise operators. It's quite simple.
Let's try with the number 75.
Its representation in binary is 100 1011. So, how do we obtain each bit from the number?
You can use an AND "&" operator to select one bit and set the rest of them to 0. Then with a shift operator, you remove the zeros that don't matter at the moment.
Example:
Let's do an AND operation with 2 (0000 0010)
0100 1011 & 0000 0010 => 0000 0010
Now we need to isolate the selected bit; in this case, it was the second bit, reading right to left.
0000 0010 >> 1 => 1
The zeros on the left are not significant. So the output will be the bit we selected, in this case, the second one.
var word=75;
var res=[];
for(var x=7; x>=0; x--){
res.push((word&Math.pow(2,x))>>x);
}
console.log(res);
The output:
[0, 1, 0, 0, 1, 0, 1, 1]
This matches the expected binary representation of 75 (0100 1011).
In case you need more than a simple number, you can apply the same function to each byte. Say you have a file with multiple bytes: you can decompose that file into a byte array, then each byte in the array into bits.
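A sketch of that decomposition, assuming the bytes already sit in a Uint8Array (the function name is mine):
function bytesToBits(bytes) {
    var bits = [];
    for (var i = 0; i < bytes.length; i++) {
        for (var x = 7; x >= 0; x--) {
            bits.push((bytes[i] & (1 << x)) >> x); // same masking trick as above
        }
    }
    return bits;
}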
Good luck!
@Commi's implementation is what I ended up using.
I believe there is a bug in this implementation. Bits at every 32nd boundary give the wrong result (i.e. when the index is of the form 32 * k - 1: 31, 63, 95, etc.).
I fixed it in the get() method by replacing > 0 with != 0.
get(n) {
return (this.backingArray[n/32|0] & 1 << n % 32) != 0
}
The reason for the bug is that the ints are 32-bit signed. Shifting 1 left by 31 gets you a negative number. Since the check is for >0, this will be false when it should be true.
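The effect is easy to reproduce in a console:
console.log(1 << 31);        // -2147483648 (the sign bit is set)
console.log((1 << 31) > 0);  // false: the buggy check
console.log((1 << 31) != 0); // true: the fixed check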
I wrote a program to prove the bug exists before the fix and is gone after it; it is below.
for (var i=0; i < 100; i++) {
var ar = new bitArray(1000);
ar.on(i);
for(var j=0;j<1000;j++) {
// we should have TRUE only at one position and that is "i".
// if something is true when it should be false or false when it should be true, then report it.
if(ar.get(j)) {
if (j != i) console.log('we got a bug at ' + i);
}
if (!ar.get(j)) {
if (j == i) console.log('we got a bug at ' + i);
}
}
}
2022
We can implement a BitArray class which behaves similarly to TypedArrays by extending DataView. However, in order to avoid the cost of trapping direct accesses to the numerical properties (the indices) by using a Proxy, I believe it's best to stay in the DataView domain. DataView is preferable to TypedArrays these days anyway, as its performance is much improved in recent V8 versions (v7+).
Just like TypedArrays, BitArray will have a predetermined length at construction time. I just include a few methods in the below snippet. The popcount property very efficiently returns the total number of 1s in the BitArray. Unlike with normal arrays, popcount is a highly sought-after functionality for BitArrays; so much so that WebAssembly and even modern CPUs have a dedicated pop count instruction. Apart from these, you can easily add methods like .forEach(), .map() etc. if need be.
class BitArray extends DataView{
constructor(n,ab){
if (n > 1.5e10) throw new Error("BitArray size can not exceed 1.5e10");
super(ab instanceof ArrayBuffer ? ab
: new ArrayBuffer(Number((BigInt(n + 31) & ~31n) >> 3n))); // Sets ArrayBuffer.byteLength to multiples of 4 bytes (32 bits)
}
get length(){
return this.buffer.byteLength * 8; // * 8 instead of << 3: the bit count can exceed the signed 32-bit shift range
}
get popcount(){
var m1 = 0x55555555,
m2 = 0x33333333,
m4 = 0x0f0f0f0f,
h01 = 0x01010101,
pc = 0,
x;
for (var i = 0, len = this.buffer.byteLength >> 2; i < len; i++){
x = this.getUint32(i << 2);
x -= (x >> 1) & m1; //put count of each 2 bits into those 2 bits
x = (x & m2) + ((x >> 2) & m2); //put count of each 4 bits into those 4 bits
x = (x + (x >> 4)) & m4; //put count of each 8 bits into those 8 bits
pc += (x * h01) >> 56;
}
return pc;
}
// n >> 3 is Math.floor(n/8)
// n & 7 is n % 8
and(bar){
var len = Math.min(this.buffer.byteLength,bar.buffer.byteLength),
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,this.getUint32(i) & bar.getUint32(i));
return res;
}
at(n){
return this.getUint8(n >> 3) & (1 << (n & 7)) ? 1 : 0;
}
or(bar){
var len = Math.min(this.buffer.byteLength,bar.buffer.byteLength),
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,this.getUint32(i) | bar.getUint32(i));
return res;
}
not(){
var len = this.buffer.byteLength,
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,~(this.getUint32(i) >> 0));
return res;
}
reset(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) & ~(1 << (n & 7)));
}
set(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) | (1 << (n & 7)));
}
slice(a = 0, b = this.length){
return new BitArray(b-a,this.buffer.slice(a >> 3, b >> 3));
}
toggle(n){
this.setUint8(n >> 3, this.getUint8(n >> 3) ^ (1 << (n & 7)));
}
toString(){
return new Uint8Array(this.buffer).reduce((p,c) => p + ((BigInt(c)* 0x0202020202n & 0x010884422010n) % 1023n).toString(2).padStart(8,"0"),"");
}
xor(bar){
var len = Math.min(this.buffer.byteLength,bar.buffer.byteLength),
res = new BitArray(len << 3);
for (var i = 0; i < len; i += 4) res.setUint32(i,this.getUint32(i) ^ bar.getUint32(i));
return res;
}
}
Just use it like:
var u = new BitArray(12);
I hope it helps.
Probably [definitely] not the most efficient way to do this, but a string of zeros and ones can be parsed as a base-2 number, converted into a hexadecimal string, and finally into a buffer.
const bufferFromBinaryString = (binaryRepresentation = '01010101') =>
Buffer.from(
parseInt(binaryRepresentation, 2).toString(16), 'hex');
Again, not efficient; but I like this approach because of the relative simplicity.
Thanks for a wonderfully simple class that does just what I need.
I did find a couple of edge-case bugs while testing:
get(n) {
return (this.backingArray[n/32|0] & 1 << n % 32) != 0
// test of > 0 fails for bit 31
}
forEach(callback) {
this.backingArray.forEach((number, container)=>{
const max = container == this.backingArray.length-1 && this.length%32
? this.length%32 : 32;
// tricky edge-case: at length-1 when length%32 == 0,
// need full 32 bits not 0 bits
for(let x=0; x<max; x++) {
callback((number & 1<<x)!=0, 32*container+x) // see fix in get()
}
})
}
My final implementation fixed the above bugs and changed the backingArray to be a Uint8Array instead of an Array, which avoids signed int bugs.
I have 2 numbers in JavaScript that I want to bitwise-AND. They are both 33 bits long.
in C#:
((4294967296 & 4294967296 )==0) is false
but in javascript:
((4294967296 & 4294967296 )==0) is true
4294967296 is ((long)1) << 32
As I understand it, this is due to the fact that JavaScript converts values to int32 when performing bitwise operations.
How do I work around this?
Any suggestions on how to replace bitwise AND with a set of other math operations so that bits are not lost?
Here's a fun function for arbitrarily large integers:
function BitwiseAndLarge(val1, val2) {
var shift = 0, result = 0;
var mask = ~((~0) << 30); // Gives us a bit mask like 01111..1 (30 ones)
var divisor = 1 << 30; // To work with the bit mask, we need to clear bits at a time
while( (val1 != 0) && (val2 != 0) ) {
var rs = (mask & val1) & (mask & val2);
val1 = Math.floor(val1 / divisor); // val1 >>> 30
val2 = Math.floor(val2 / divisor); // val2 >>> 30
for(var i = shift++; i--;) {
rs *= divisor; // rs << 30
}
result += rs;
}
return result;
}
Assuming that the system handles at least 30-bit bitwise operations properly.
You could split each of the vars into 2 32-bit values (like a high word and low word), then do a bitwise operation on both pairs.
The script below runs as a Windows .js script. You can replace WScript.Echo() with alert() for Web.
var a = 4294967296;
var b = 4294967296;
var w = 4294967296; // 2^32
var aHI = a / w;
var aLO = a % w;
var bHI = b / w;
var bLO = b % w;
WScript.Echo((aHI & bHI) * w + (aLO & bLO));
There are several BigInteger libraries in JavaScript, but none of them offers the bitwise operations you need at the moment. If you are motivated and really need that functionality, you can modify one of those libraries and add a method to do so. They already offer a good code base for working with huge numbers.
You can find a list of BigInteger libraries in JavaScript in this question:
Huge Integer JavaScript Library
The simplest bitwise AND that works up to JavaScript's maximum number
JavaScript's max integer value is 2^53 for internal reasons (it's a double float). If you need more there are good libraries for that big integer handling.
2^53 is 9,007,199,254,740,992, or about 9,000 trillion (~9 quadrillion).
// Works with values up to 2^53
function bitwiseAnd_53bit(value1, value2) {
const maxInt32Bits = 4294967296; // 2^32
const value1_highBits = value1 / maxInt32Bits;
const value1_lowBits = value1 % maxInt32Bits;
const value2_highBits = value2 / maxInt32Bits;
const value2_lowBits = value2 % maxInt32Bits;
return (value1_highBits & value2_highBits) * maxInt32Bits + (value1_lowBits & value2_lowBits)
}
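As a side note, in engines that support BigInt (added to the language later), the question's expression also works natively:
console.log((4294967296n & 4294967296n) === 0n); // false, matching the C# behaviour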
Ran into this problem today and this is what I came up with:
function bitwiseAnd(firstNumber, secondNumber) {
let // convert the numbers to binary strings
firstBitArray = (firstNumber).toString(2),
secondBitArray = (secondNumber).toString(2),
resultedBitArray = [],
// get the length of the largest number
maxLength = Math.max(firstBitArray.length, secondBitArray.length);
//add zero fill ahead in case the binary strings have different lengths
//so we can have strings equal in length and compare bit by bit
firstBitArray = firstBitArray.padStart(maxLength,'0');
secondBitArray = secondBitArray.padStart(maxLength,'0');
// bit by bit comparison
for(let i = 0; i < maxLength; i++) {
resultedBitArray.push(parseInt(firstBitArray[i]) & parseInt(secondBitArray[i]));
}
//concat the result array back to a string and parse the binary string back to an integer
return parseInt(resultedBitArray.join(''),2);
}
Hope this helps anyone else who runs into this problem.