Create a random BigInt for Miller-Rabin test - javascript

I'm implementing a Miller-Rabin primality test using JavaScript BigInts.
The basic algorithm is no problem - I have that done, but it requires a random number in the range 0 to (number being tested - 3). I can't use Math.random() and scale, since I'm using BigInt.
This doesn't need to be cryptographically secure, just random enough, so I've opted for generating a string of randomly selected hex digits and converting that to a BigInt.
Here's the code:
function getRandomBigint(lower, upper) {
    // Convert to a hex string so that we know how many digits to generate
    let hexDigits = new Array(upper.toString(16).length).fill('');
    let rand;
    let newDigits;
    do {
        // Fill the array with random hex digits and convert to a BigInt
        newDigits = hexDigits.map(() => Math.floor(Math.random() * 16).toString(16));
        rand = BigInt('0x' + newDigits.join(''));
    } while (rand < lower || rand > upper);
    return rand;
}
The problem here is that the generated number could be out of range. This function handles that (badly) by iterating until it gets a number in range. Obviously, that could potentially take a very long time. In practice it has never iterated more than a couple of dozen times before delivering a number, but the nature of randomness means that the problem is 'out there' waiting to bite me!
I could scale or truncate the result to get it in range, rather than iterating, but I am concerned that this would affect the randomness. I already have some evidence that this is not as random as it might be, but that might not matter in this application.
So, two questions:
Is this random enough for Miller-Rabin?
How should I deal with results that are out of range?
This is a JavaScript-only project - no libraries, please.

The Miller-Rabin test is a probabilistic [1] test. That is, it is deterministic for establishing that a number is not prime, but a prime indication is only that - an indication that a number might be prime.
The reason is that certain bases used in the test can result in a prime indication even when the target number is not prime. Using a random number as the base allows the test to be run repeatedly with a different base each time, increasing the probability that the result correctly indicates prime or not prime [2].
Thus, the random bases selected need only be less than the number being tested. There is no requirement to select them from the complete range.
With this in mind I now have this:
function getRandomBigInt(upper) {
    let maxInt = BigInt(Number.MAX_SAFE_INTEGER);
    if (upper <= maxInt) {
        return BigInt(Math.floor(Math.random() * Number(upper)));
    } else {
        return BigInt(Math.floor(Math.random() * Number.MAX_SAFE_INTEGER));
    }
}
9007199254740991 (Number.MAX_SAFE_INTEGER) should provide a sufficiently large range of numbers for this purpose. It works with my Miller-Rabin implementation as far as I have tested it.
[1] Testing numbers up to 3,317,044,064,679,887,385,961,981 against a short, specific list of bases will yield a definitive prime/not-prime result.
[2] For non-deterministic prime results it is still necessary to perform a deterministic test (such as trial division) to confirm.
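For context, here is a minimal sketch (not the OP's code) of how such a random base a in the range [2, n - 2] is consumed by one round of the test, assuming BigInt inputs throughout; modPow is a hypothetical helper defined below:

function millerRabinRound(n, a) {
    // Write n - 1 as d * 2^r with d odd
    let d = n - 1n;
    let r = 0n;
    while (d % 2n === 0n) {
        d /= 2n;
        r += 1n;
    }
    let x = modPow(a, d, n);                   // a^d mod n
    if (x === 1n || x === n - 1n) return true; // n is probably prime for this base
    for (let i = 1n; i < r; i++) {
        x = (x * x) % n;
        if (x === n - 1n) return true;
    }
    return false;                              // n is definitely composite
}

// Square-and-multiply modular exponentiation for BigInt.
function modPow(base, exp, mod) {
    let result = 1n;
    base %= mod;
    while (exp > 0n) {
        if (exp & 1n) result = (result * base) % mod;
        base = (base * base) % mod;
        exp >>= 1n;
    }
    return result;
}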

A way to get a random number in a range is
lower + rand() % (upper - lower)
where rand() is any function whose range of possible results is at least as large as (upper - lower). The math can be done with Number or BigInt.
function rand16() {
    // 0 .. 2^16-1
    return BigInt(Math.floor(Math.random() * 65536));
}

function rand() {
    // 0 .. 2^64-1 (four 16-bit chunks; note the BigInt literals to avoid mixing BigInt and Number)
    return ((rand16() * 65536n + rand16()) * 65536n + rand16()) * 65536n + rand16();
}

Related

Javascript number size

I just wanted to know the JavaScript number size, because I want to send a lot of them over the network per frame, and I need a measure of how many I'm going to send per second.
As I read:
According to the ECMAScript standard, there is only one number type: the double-precision 64-bit binary format IEEE 754 value (numbers between -(2^53 - 1) and 2^53 - 1).
So if I'm going to send a lot of different numbers (example later), and all numbers between -(2^53 - 1) and (2^53 - 1) use the same memory, I might just combine them, like 567832332423556, and then split them locally when received, instead of sending a lot of separate numbers. That single number "567832332423556" carries the same information as the separated 5, 6, 7, 8..., but in one value, so it should cost much less if it has the same size as a single 5.
Is this true, or am I just confused? Please explain.
var data = Array2d(obj.size); // Size can be between 125 and 200

Array2d: function (rows) { // The number of rows and columns are the same
    var arr = [];
    for (var i = 0; i < rows; i++) arr[i] = [];
    return arr;
},
...
if (this.calculate()) {
    data[x][y] = 1;
} else {
    data[x][y] = 0;
}
and somewhere in the code I change those 1s to any number from 2 to 5, so the numbers may be from 0 to 5, depending on the situation.
Example:
[
[0,0,2,1,3,4,5,0,2,3,4,5,4(200 numbers)],
[0,5,2,1,5,1,0,2,3,0,0,0,0(200 numbers)]
...(200 times)
]
*And I really need all the numbers, I can't miss even one.
If, in terms of size, 5 is the same as 34234, then I could just do something like:
[
[0021345023454...(20 numbers 10 times)],
[0021345023454...(20 numbers 10 times)]
...(200 times)
]
and it may use 20 times less, because if a 5 is the same size as 2^53 I can just stack the numbers 20 by 20 and they should cost a lot less (of course, 20 times fewer numbers, at least on the network; the local split is a bit of extra work, but locally I do few things so I can handle that).
Precise limits on numbers are covered in What is JavaScript's highest integer value that a Number can go to without losing precision? - 9007199254740991 for regular arithmetic operations, 2^32 for bit operations.
But it sounds like you are more interested in the network representation than in memory usage at run-time. Below is a list of options from less to more compact. Make sure you understand your performance goals before moving away from the basic JSON solution - the cost and complexity of constructing the data rise with the more compact representations.
The most basic solution - a JSON representation of the existing array - gives a pretty decent ~2 characters per value:
[[0,1,5,0,0],[1,1,1,1,1],[0,0,0,0,0]]
Representing all numbers in a row as one big string gives ~1 character per number:
["01500","11111","00000"]
Representing the same values as concatenated numbers does not bring much savings - "11111" as a string is about as long as the same 11111 as a number: you add a pair of quotes per row for strings, but only one comma per ~16 values when packing as numbers.
You can indeed pack values into a number in a more compact form: since the range is 0-5, standard base-6 packing gives you ~20 values per JavaScript number (6^20 is still below 2^53), which is not a significant saving over the ~16 values per number you get by just concatenating the digits.
Better packing would be to represent 2 or 3 values as one character: 2 values give 36 combinations (v1 * 6 + v2), which can be written with just [A-Z0-9]; 3 values give 216 combinations, which mostly fit into the regular character range.
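As an illustration of the two-values-per-character idea (the helper names packRow/unpackRow are mine; toString(36) yields the equivalent [0-9a-z] alphabet):

// Pack pairs of values (each 0-5) into one base-36 character per pair.
function packRow(row) {
    let out = '';
    for (let i = 0; i < row.length; i += 2) {
        const v1 = row[i];
        const v2 = i + 1 < row.length ? row[i + 1] : 0; // pad odd-length rows with 0
        out += (v1 * 6 + v2).toString(36);              // 0..35 -> '0'..'z'
    }
    return out;
}

function unpackRow(str, length) {
    const row = [];
    for (const ch of str) {
        const n = parseInt(ch, 36);
        row.push(Math.floor(n / 6), n % 6);
    }
    return row.slice(0, length); // drop any padding
}

// packRow([0,1,5,0,0]) -> "1u0"; unpackRow("1u0", 5) -> [0,1,5,0,0]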
You can go with a strictly binary representation (3-4 bits per value) and send it via WebSockets, avoiding the cost of converting to text on regular requests.
Whether you go with a binary or a text representation, one more option is compression - basic RLE compression may be fine if your data has long runs of the same value; other compression algorithms may work better on more random data. There are libraries to perform compression in JavaScript too.
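A hedged sketch of the basic RLE idea on one row of 0-5 values (the "<value>x<count>" text format here is just an example, not any particular library's):

// [0,0,0,5,5,1] -> "0x3,5x2,1x1"
function rleEncode(row) {
    const runs = [];
    let i = 0;
    while (i < row.length) {
        let j = i;
        while (j < row.length && row[j] === row[i]) j++;
        runs.push(row[i] + 'x' + (j - i));
        i = j;
    }
    return runs.join(',');
}

function rleDecode(str) {
    return str.split(',').flatMap(run => {
        const [value, count] = run.split('x').map(Number);
        return Array(count).fill(value);
    });
}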

Math.sin() Different Precision between Node.js and C#

I have a problem with precision in the last digit after the decimal point. The JavaScript code generates one digit less compared with the C# code.
Here is the simple Node.js code
var seed = 45;
var x = Math.sin(seed) * 0.5;
console.log(x);//0.4254517622670592
Here is the simple C# code
public String pseudorandom()
{
    int seed = 45;
    double num = Math.Sin(seed) * (0.5);
    return num.ToString("G15");//0.42545176226705922
}
How to achieve the same precision?
The JavaScript Number type is quite complex. The floating-point format is IEEE 754-2008, but some aspects are left to the implementation. See http://www.ecma-international.org/ecma-262/6.0/#sec-number-objects sec 12.7.
There is a note
The output of toFixed may be more precise than toString for some
values because toString only prints enough significant digits to
distinguish the number from adjacent number values. For example,
(1000000000000000128).toString() returns "1000000000000000100", while
(1000000000000000128).toFixed(0) returns "1000000000000000128".
Hence to get full digit accuracy you need something like
const seed = 45;
const x = Math.sin(seed) * 0.5;
x.toFixed(17);
// on my platform it's "0.42545176226705922"
Also, note that the specification allows the implementation of sin and cos some variety in the actual algorithm; it's only guaranteed to within +/- 1 ULP.
In Java the printing algorithm is different. Even forcing 17 digits gives the result as 0.42545176226705920.
You can check you are getting the same bit patterns using x.toString(2) and Double.doubleToLongBits(x) in Java.
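On the JavaScript side, a hedged sketch of pulling out the raw IEEE 754 bit pattern (the helper name doubleToBits is mine) for comparison with Java's Double.doubleToLongBits:

function doubleToBits(x) {
    const buf = new DataView(new ArrayBuffer(8));
    buf.setFloat64(0, x);        // big-endian write of the 64-bit double
    return buf.getBigUint64(0);  // the same 64 bits as an unsigned BigInt
}

const x = Math.sin(45) * 0.5;
console.log(doubleToBits(x).toString(16)); // hex bit pattern to compare across platforms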
return num.ToString("G15");//0.42545176226705922
actually returns "0.425451762267059" (no significant digit + 15 decimal places in this example), and not the precision shown in the comment after.
So you would use:
return num.ToString("G16");
to get "0.4254517622670592"
(For your example, where the leading digit is always 0, G16 gives 16 decimal places.)

Large numbers - Math in JavaScript

I'm developing a 3D space game, which uses a lot of math formulas: navigation, ease effects, rotations, huge distances between planets, object masses, and so on...
My question is, what would be the best way of doing the math? Should I calculate everything as integers and end up with really large integers (over 20 digits), or use small numbers with decimals?
In my experience, math on decimal values is not accurate, causing strange behavior when large numbers with decimals are involved.
I would avoid using decimals. They have known issues with precision: http://floating-point-gui.de/
I would recommend using integers, though if you need to work with very large integers I would suggest using a big number or big integer library such as one of these:
http://jsfromhell.com/classes/bignumber
https://silentmatt.com/biginteger/
The downside is you have to use these number objects and their methods rather than the primitive Number type and standard JS operators, but you'll have a lot more flexibility with operating on large numbers.
Edit:
As le_m pointed out, another downside is speed. The library methods won't run as fast as the native operators. You'll have to test for yourself to see if the performance is acceptable.
Use the JavaScript Number Object
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number
Number.MAX_SAFE_INTEGER
The maximum safe integer in JavaScript (2^53 - 1).
Number.MIN_SAFE_INTEGER
The minimum safe integer in JavaScript (-(2^53 - 1)).
var biggestInt = 9007199254740991;
var smallestInt = -9007199254740991;
var biggestNum = Number.MAX_VALUE;
var smallestNum = Number.MIN_VALUE;
var infiniteNum = Number.POSITIVE_INFINITY;
var negInfiniteNum = Number.NEGATIVE_INFINITY;
var notANum = Number.NaN;
console.log(biggestInt);
console.log(smallestInt);
console.log(biggestNum); // http://www.miniwebtool.com/scientific-notation-to-decimal-converter/?a=1.79769313&b=308
console.log(smallestNum); // http://www.miniwebtool.com/scientific-notation-to-decimal-converter/?a=5&b=-32
console.log(infiniteNum);
console.log(negInfiniteNum);
console.log(notANum);
debugger;
I can only imagine that this is a sign of a bigger problem with your application complicating something that could be very simple.
Please read numerical literals
http://www.ecma-international.org/ecma-262/5.1/#sec-7.8.3
Once the exact MV for a numeric literal has been determined, it is
then rounded to a value of the Number type. If the MV is 0, then the
rounded value is +0; otherwise, the rounded value must be the Number
value for the MV (as specified in 8.5), unless the literal is a
DecimalLiteral and the literal has more than 20 significant digits, in
which case the Number value may be either the Number value for the MV
of a literal produced by replacing each significant digit after the
20th with a 0 digit or the Number value for the MV of a literal
produced by replacing each significant digit after the 20th with a 0
digit and then incrementing the literal at the 20th significant digit
position. A digit is significant if it is not part of an ExponentPart
and
it is not 0;
or there is a nonzero digit to its left and there is a nonzero digit, not in the ExponentPart, to its right.
Clarification
I should add that the Number object wrapper supposedly offers precision up to 100 significant digits in some browsers (going above this number will give you a RangeError); however, most environments currently only implement the precision to the required 21 significant digits.
Reading through the OP's original question, I believe skyline provided the best answer by recommending a library which offers well over 100 significant digits (some of the tests that I got to pass were using 250 significant digits). In reality, it would be interesting to see someone revive one of those projects again.
The distance from our Sun to Alpha Centauri is 4.153×10^18 cm. You can represent this value well with the Number datatype, which stores values up to 1.7977×10^308 with about 17 significant figures.
However, what if you want to model a spaceship stationed at Alpha Centauri?
Due to the limited precision of Number, you can either store the value 4153000000000000000 or 4153000000000000500, but nothing in between. This means that you would have a maximal spatial resolution of 500 cm at Alpha Centauri. Your spaceship would look really clunky.
Could we use a datatype other than Number? Of course you could use a library such as BigNumber.js, which provides support for nearly unlimited precision. You can park your spaceship one millimeter next to the hot core of Alpha Centauri without (numerical) issues:
pos_acentauri = new BigNumber(4153000000000000000);
pos_spaceship = pos_acentauri.add(0.1); // one millimeter from Alpha Centauri
console.log(pos_spaceship); // 4153000000000000000.1
<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/2.3.0/bignumber.min.js"></script>
However, not only would the captain of that ship burn to death, but your 3D engine would be slow as hell, too. That is because Number allows for really fast arithmetic computations in constant time, whereas e.g. the BigNumber addition computation time grows with the size of the stored value.
Solution: Use Number for your 3D engine. You could use different local coordinate systems, e.g. one for Alpha Centauri and one for our solar system. Only use BigNumber for things like the HUD, game stats and so on.
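A hedged sketch of that local-coordinate idea (the names ORIGIN_ACENTAURI and absoluteX are mine, purely for illustration): keep a coarse absolute anchor per star system and store per-frame positions as small Number offsets, combining them only for display.

// Sketch only: local coordinate systems keep per-frame math in full Number precision.
const ORIGIN_ACENTAURI = 4.153e18; // cm, coarse absolute anchor for the Alpha Centauri system

const spaceship = {
    system: 'alpha-centauri',
    offset: { x: 0.1, y: 250.5, z: -3.25 } // cm, relative to the local origin
};

// Only when displaying an absolute figure (HUD, stats) do you combine the two;
// the sub-500 cm detail lost here does not matter for a readout.
function absoluteX(ship) {
    return ORIGIN_ACENTAURI + ship.offset.x;
}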
The problem with BigNumber is:
"Precision loss from using numeric literals with more than 15 significant digits"
My solution would be a combination of BigNumber and web3.js:
var web3 = new Web3();
let one = new BigNumber("1234567890121234567890123456789012345");
let two = new BigNumber("1000000000000000000");
let three = new BigNumber("1000000000000000000");
const minus = two.times(three).minus(one);
const plus = one.plus(two.times(three));
const compare = minus.comparedTo(plus);
const results = {
    minus: web3.toBigNumber(minus).toString(10),
    plus: web3.toBigNumber(plus).toString(10),
    compare
};
console.log(results); // {minus: "-234567890121234567890123456789012345", plus: "2234567890121234567890123456789012345", compare: -1}

MDN example (of Math.random()): could it be parseInt() instead of Math.floor()?

I was reading a JavaScript tutorial and searching for functions on the MDN website when I stumbled across this example of Math.random():
function getRandomIntInclusive(min, max) {
    return Math.floor(Math.random() * (max - min + 1)) + min;
}
While I understand that Math.floor rounds down to an integer, thereby erasing the decimal part, I have already learnt another function called parseInt() which just deletes everything after the point. So what's the difference between those two? Couldn't I just use
function getRandomInclusive(min, max) {
    return parseInt(Math.random() * (max - min + 1)) + min;
}
instead?
I got the idea while writing this question that Math.floor() might give 5 when it's 4.5 and parseInt() would give 4, but that's not really important for a random number generator as I understand it (if you know any examples of when it would be important, please tell me!). So there's still not much of a difference in this case?
parseInt parses a string into an integer, reading only digits at the beginning of that string. It is not appropriate to use to round a number to an integer. The number will be converted to a string first, sometimes with unexpected results:
var num = 1000000000000000000000;
parseInt(num) // 1, because the string representation is 1e+21
I got the idea while writing this question that Math.floor() might give 5 when it's 4.5 and parseInt() would give 4
You’re thinking of rounding to the nearest whole number, which is what Math.round does. Math.floor always rounds down.
The clear difference is that Math.floor is a function that works on numbers while parseInt is a function that takes a string. You don't want to use the latter here as we deal with random numbers.
If you are unsure about the rounding mode, compare Math.floor, Math.round and Math.ceil.
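For example, on the 4.5 case from the question (with parseInt added for comparison):

console.log(Math.floor(4.5));  // 4 - always rounds down
console.log(Math.round(4.5));  // 5 - rounds to the nearest integer
console.log(Math.ceil(4.5));   // 5 - always rounds up
console.log(parseInt('4.5'));  // 4 - string parsing that stops at the '.'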
Here's an example where their behaviour differs drastically: negative numbers.
Math.floor(-2.5) -> -3
parseInt(-2.5) -> -2
The semantic difference pointed out in other answers, however, is probably more important. Even if it does work for your particular and specific use-case, it's best to pick the one with the correct semantics. (There are countless other ways to return a seemingly random integer from a function, you should pick the one that's the least likely to make your code hard to maintain.)

JavaScript flooring number to order of magnitude

I want to floor any integer >= 10 according to its order of magnitude. For instance,
15 -> 10
600 -> 100
8,547 -> 1,000
32,123 -> 10,000
3,218,748 -> 1,000,000
544,221,323,211 -> 100,000,000,000
....
I was thinking of parsing the int to a string, counting how many digits there are, then setting the new string to 1 followed by a bunch of zeros and converting back to a number.
function convert(n) {
    var nStr = n.toString();
    var nLen = nStr.length;
    var newStr = "1" + Array(nLen).join("0");
    return parseInt(newStr);
}
Is there a more mathematical way to do this? I want to avoid converting between int and str because it might waste a lot of memory and disk space when n is huge and if I want to run this function a million times.
So you're looking for the order of magnitude.
function convert(n) {
    var order = Math.floor(Math.log(n) / Math.LN10
        + 0.000000001); // because float math sucks like that
    return Math.pow(10, order);
}
Simple ^_^ Math is awesome! Floating point imprecisions, however, are not. Note that this won't be completely accurate in certain edge cases, but it will do its best.
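If ES2015's Math.log10 is available, the same idea reads a little more directly (the same floating-point caveat applies):

function convert(n) {
    var order = Math.floor(Math.log10(n) + 1e-9); // small epsilon to guard against float rounding
    return Math.pow(10, order);
}

console.log(convert(15));           // 10
console.log(convert(8547));         // 1000
console.log(convert(544221323211)); // 100000000000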
