JavaScript number size

I just want to know the size of a JavaScript number, because I'm going to send a lot of them over the network every frame and I need a measure of how many I'll be sending per second.
As I read:
According to the ECMAScript standard, there is only one number type: the double-precision 64-bit binary format IEEE 754 value (numbers between -(2^53 - 1) and 2^53 - 1).
So if I'm going to send a lot of different numbers (example later), and all numbers between -(2^53 - 1) and (2^53 - 1) use the same memory, I could just combine them into something like 567832332423556 and split it locally when received, instead of sending a lot of separate numbers. That single number "567832332423556" carries the same information as the separate 5, 6, 7, 8..., but in one value, so it should cost much less if it has the same size as a single 5.
Is this true, or am I just confused? Please explain.
var data = Array2d(obj.size); // size can be between 125 and 200

Array2d: function (rows) { // the number of rows and columns is the same
    var arr = [];
    for (var i = 0; i < rows; i++) arr[i] = [];
    return arr;
},
...
if (this.calculate()) {
    data[x][y] = 1;
} else {
    data[x][y] = 0;
}
and somewhere in the code I change those 1s to any number from 2 to 5, so the values can be anywhere from 0 to 5 depending on the situation.
Example:
[
[0,0,2,1,3,4,5,0,2,3,4,5,4(200 numbers)],
[0,5,2,1,5,1,0,2,3,0,0,0,0(200 numbers)]
...(200 times)
]
*And I really need all the numbers, I can't miss even one.
If in terms of size a 5 is the same as 34234, I could just do something like:
[
[0021345023454...(20 numbers 10 times)],
[0021345023454...(20 numbers 10 times)]
...(200 times)
]
and it might cost 20 times less, because if a 5 has the same size as a number near 2^53, I can just stack the numbers 20 by 20 and they should cost a lot less (of course, 20 times fewer numbers by stacking 20, at least on the network; the local split has some cost, but locally I do few things, so I can handle that).

Precise limits on numbers are covered in What is JavaScript's highest integer value that a Number can go to without losing precision? - 9007199254740991 for regular arithmetic operations, 2^32 for bit operations.
But it sounds like you are more interested in the network representation than in memory usage at run time. Below is a list of options from less to more compact. Make sure you understand your performance goals before moving away from the basic JSON solution - the cost and complexity of constructing the data grows with the more compact representations.
The most basic solution - a JSON representation of the existing array - gives a pretty decent ~2 characters per value:
[[0,1,5,0,0],[1,1,1,1,1],[0,0,0,0,0]]
Representing all the numbers in a row as one big string gives ~1 character per value:
["01500","11111","00000"]
Representing the same values as concatenated numbers does not bring much savings - "11111" as a string is about as long as the same 11111 as a number: you add a pair of quotes per row for the string, but one comma per 16 values when packing as numbers.
You can indeed pack the values into numbers in a more compact form: since the range is 0-5, standard base-6 encoding fits about 20 values per JavaScript number (6^20 is just below 2^53), which is not a significant saving over the 16 values per number you get by simply concatenating the digits.
Better packing would be to represent 2 or 3 values as one character - 2 values give 36 combinations (v1 * 6 + v2), which can be written with just [A-Z0-9]; 3 values give 216 combinations, which mostly fit into the range of regular printable characters.
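A sketch of the two-values-per-character idea (the packRow/unpackRow helpers are illustrative, not from the answer): each pair of values 0-5 maps to v1 * 6 + v2, which is 0-35 and fits in one base-36 digit.
// Pack a row of values 0-5 into a base-36 string, two values per character
function packRow(row) {
    var out = '';
    for (var i = 0; i < row.length; i += 2) {
        var v1 = row[i];
        var v2 = (i + 1 < row.length) ? row[i + 1] : 0; // pad odd-length rows
        out += (v1 * 6 + v2).toString(36);              // 0-35 -> '0'-'9', 'a'-'z'
    }
    return out;
}

// Unpack back into an array of values 0-5
function unpackRow(str, length) {
    var row = [];
    for (var i = 0; i < str.length; i++) {
        var v = parseInt(str.charAt(i), 36);
        row.push(Math.floor(v / 6), v % 6);
    }
    return row.slice(0, length); // drop the padding value if length was odd
}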
You can go to a strictly binary representation (3-4 bits per value) and send it over WebSockets to avoid the cost of converting to text that comes with regular requests.
Whether you go with a binary or a text representation, one more option is compression - basic RLE compression may be fine if your data has long runs of the same value; other compression algorithms may work better on more random data. There are libraries to perform compression in JavaScript too.
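As an illustration of the RLE idea only (a sketch; for real data a general-purpose compression library may do better), each run is stored as a [value, count] pair:
// Run-length encode an array of values into [value, count] pairs
function rleEncode(values) {
    var runs = [];
    for (var i = 0; i < values.length; i++) {
        var last = runs[runs.length - 1];
        if (last && last[0] === values[i]) {
            last[1]++;
        } else {
            runs.push([values[i], 1]);
        }
    }
    return runs;
}

// Expand [value, count] pairs back into the original array
function rleDecode(runs) {
    var values = [];
    runs.forEach(function (run) {
        for (var i = 0; i < run[1]; i++) values.push(run[0]);
    });
    return values;
}

// rleEncode([0, 0, 0, 5, 5, 1]) -> [[0, 3], [5, 2], [1, 1]]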

Related

Why do code points between U+D800 and U+DBFF generate a one-length string in ECMAScript 6?

I'm getting too confused. Why do code points from U+D800 to U+DBFF encode as a single (2 bytes) String element, when using the ECMAScript 6 native Unicode helpers?
I'm not asking how JavaScript/ECMAScript encodes Strings natively, I'm asking about an extra functionality to encode UTF-16 that makes use of UCS-2.
var str1 = '\u{D800}';
var str2 = String.fromCodePoint(0xD800);
console.log(
str1.length, str1.charCodeAt(0), str1.charCodeAt(1)
);
console.log(
str2.length, str2.charCodeAt(0), str2.charCodeAt(1)
);
Re-TL;DR: I want to know why the above approaches return a string of length 1. Shouldn't U+D800 generate a 2 length string, since my browser's ES6 implementation incorporates UCS-2 encoding in strings, which uses 2 bytes for each character code?
Both of these approaches return a one-element String for the U+D800 code point (char code: 55296, same as 0xD800). But for code points bigger than U+FFFF each one returns a two-element String, the lead and trail. lead would be a number between U+D800 and U+DBFF, and trail I'm not sure about, I only know it helps change the resulting code point. For me the return value doesn't make sense, it represents a lead without a trail. Am I understanding something wrong?
I think your confusion is about how Unicode encodings work in general, so let me try to explain.
Unicode itself just specifies a list of characters, called "code points", in a particular order. It doesn't tell you how to convert those to bits, it just gives them all a number between 0 and 1114111 (in hexadecimal, 0x10FFFF). There are several different ways these numbers from U+0 to U+10FFFF can be represented as bits.
In an earlier version, it was expected that a range of 0 to 65535 (0xFFFF) would be enough. This can be naturally represented in 16 bits, using the same convention as an unsigned integer. This was the original way of storing Unicode, and is now known as UCS-2. To store a single code point, you reserve 16 bits of memory.
Later, it was decided that this range was not large enough; this meant that there were code points higher than 65535, which you can't represent in a 16-bit piece of memory. UTF-16 was invented as a clever way of storing these higher code points. It works by saying "if you look at a 16-bit piece of memory, and it's a number between 0xD800 and 0xDBFF (a "lead surrogate"), then you need to look at the next 16 bits of memory as well". Any piece of code which is performing this extra check is processing its data as UTF-16, and not UCS-2.
It's important to understand that the memory itself doesn't "know" which encoding it's in, the difference between UCS-2 and UTF-16 is how you interpret that memory. When you write a piece of software, you have to choose which interpretation you're going to use.
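As a sketch of that rule (not part of the original answer), this is how a code point above 0xFFFF is split into the two 16-bit surrogate code units:
// Encode one code point in the range 0x10000..0x10FFFF as a UTF-16 surrogate pair
function toSurrogatePair(codePoint) {
    var offset = codePoint - 0x10000;      // 20 bits of payload
    var lead = 0xD800 + (offset >> 10);    // lead (high) surrogate: 0xD800..0xDBFF
    var trail = 0xDC00 + (offset & 0x3FF); // trail (low) surrogate: 0xDC00..0xDFFF
    return [lead, trail];
}

console.log(toSurrogatePair(0x1F4A9)); // [55357, 56489] - the pair for 💩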
Now, onto Javascript...
Javascript handles input and output of strings by interpreting its internal representation as UTF-16. That's great, it means that you can type in and display the famous 💩 character, which can't be stored in one 16-bit piece of memory.
The problem is that most of the built-in string functions actually handle the data as UCS-2 - that is, they look at 16 bits at a time, and don't care if what they see is a special "surrogate". The function you used, charCodeAt(), is an example of this: it reads 16 bits out of memory, and gives them to you as a number between 0 and 65535. If you feed it 💩, it will just give you back the first 16 bits; ask it for the next "character" after, and it will give you the second 16 bits (which will be a "trail surrogate", between 0xDC00 and 0xDFFF).
In ECMAScript 6 (2015), a new function was added: codePointAt(). Instead of just looking at 16 bits and giving them to you, this function checks if they represent one of the UTF-16 surrogate code units, and if so, looks for the "other half" - so it gives you a number between 0 and 1114111. If you feed it 💩, it will correctly give you 128169.
var poop = '💩';
console.log('Treat it as UCS-2, two 16-bit numbers: ' + poop.charCodeAt(0) + ' and ' + poop.charCodeAt(1));
console.log('Treat it as UTF-16, one value cleverly encoded in 32 bits: ' + poop.codePointAt(0));
// The surrogates are 55357 and 56489, which encode 128169 as follows:
// 0x010000 + ((55357 - 0xD800) << 10) + (56489 - 0xDC00) = 128169
Your edited question now asks this:
I want to know why the above approaches return a string of length 1. Shouldn't U+D800 generate a 2 length string?
The hexadecimal value D800 is 55296 in decimal, which is less than 65536, so given everything I've said above, this fits fine in 16 bits of memory. So if we ask charCodeAt to read 16 bits of memory, and it finds that number there, it's not going to have a problem.
Similarly, the .length property measures how many sets of 16 bits there are in the string. Since this string is stored in 16 bits of memory, there is no reason to expect any length other than 1.
The only unusual thing about this number is that in Unicode, that value is reserved - there isn't, and never will be, a character U+D800. That's because it's one of the magic numbers that tells a UTF-16 algorithm "this is only half a character". So a possible behaviour would be for any attempt to create this string to simply be an error - like opening a pair of brackets that you never close, it's unbalanced, incomplete.
The only way you could end up with a string of length 2 is if the engine somehow guessed what the second half should be; but how would it know? There are 1024 possibilities, from 0xDC00 to 0xDFFF, which could be plugged into the formula I show above. So it doesn't guess, and since it doesn't error, the string you get is 16 bits long.
Of course, you can supply the matching halves, and codePointAt will interpret them for you.
// Set up two 16-bit pieces of memory
var high=String.fromCharCode(55357), low=String.fromCharCode(56489);
// Note: String.fromCodePoint will give the same answer
// Glue them together (this + is string concatenation, not number addition)
var poop = high + low;
// Read out the memory as UTF-16
console.log(poop);
console.log(poop.codePointAt(0));
Well, it does this because the specification says it has to:
http://www.ecma-international.org/ecma-262/6.0/#sec-string.fromcodepoint
http://www.ecma-international.org/ecma-262/6.0/#sec-utf16encoding
Together these two say that if an argument is < 0 or > 0x10FFFF, a RangeError is thrown, but otherwise any codepoint <= 65535 is incorporated into the result string as-is.
As for why things are specified this way, I don't know. It seems like JavaScript doesn't really support Unicode, only UCS-2.
Unicode.org has the following to say on the matter:
http://www.unicode.org/faq/utf_bom.html#utf16-2
Q: What are surrogates?
A: Surrogates are code points from two special ranges of Unicode values, reserved for use as the leading, and trailing values of paired code units in UTF-16. Leading, also called high, surrogates are from D800₁₆ to DBFF₁₆, and trailing, or low, surrogates are from DC00₁₆ to DFFF₁₆. They are called surrogates, since they do not represent characters directly, but only as a pair.
http://www.unicode.org/faq/utf_bom.html#utf16-7
Q: Are there any 16-bit values that are invalid?
A: Unpaired surrogates are invalid in UTFs. These include any value in the range D800₁₆ to DBFF₁₆ not followed by a value in the range DC00₁₆ to DFFF₁₆, or any value in the range DC00₁₆ to DFFF₁₆ not preceded by a value in the range D800₁₆ to DBFF₁₆.
Therefore the result of String.fromCodePoint is not always valid UTF-16 because it can emit unpaired surrogates.
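A small demonstration of that last point (a sketch; the exact console output format depends on the environment):
var lone = String.fromCodePoint(0xD800); // no RangeError: 0xD800 <= 0x10FFFF
console.log(lone.length);                // 1 - a single unpaired code unit
console.log(lone.codePointAt(0));        // 55296 - still just the surrogate

// Anything that needs well-formed UTF-16/UTF-8 rejects it, e.g. percent-encoding throws:
try {
    encodeURIComponent(lone);
} catch (e) {
    console.log(e.name); // URIError
}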

Large numbers - Math in JavaScript

I'm developing a 3D space game, which uses a lot of math formulas: navigation, ease effects, rotations, huge distances between planets, object masses, and so on...
My question is what the best way of doing this math would be. Should I calculate everything as integers and end up with really large integers (over 20 digits), or use small numbers with decimals?
In my experience, math using digits with decimals is not accurate, causing strange behavior when using large numbers with decimals.
I would avoid using decimals. They have known issues with precision: http://floating-point-gui.de/
I would recommend using integers, though if you need to work with very large integers I would suggest using a big number or big integer library such as one of these:
http://jsfromhell.com/classes/bignumber
https://silentmatt.com/biginteger/
The downside is you have to use these number objects and their methods rather than the primitive Number type and standard JS operators, but you'll have a lot more flexibility with operating on large numbers.
Edit:
As le_m pointed out, another downside is speed. The library methods won't run as fast as the native operators. You'll have to test for yourself to see if the performance is acceptable.
Use the JavaScript Number Object
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number
Number.MAX_SAFE_INTEGER
The maximum safe integer in JavaScript (2^53 - 1).
Number.MIN_SAFE_INTEGER
The minimum safe integer in JavaScript (-(2^53 - 1)).
var biggestInt = 9007199254740991;           // Number.MAX_SAFE_INTEGER (2^53 - 1)
var smallestInt = -9007199254740991;         // Number.MIN_SAFE_INTEGER
var biggestNum = Number.MAX_VALUE;           // 1.7976931348623157e+308
var smallestNum = Number.MIN_VALUE;          // 5e-324
var infiniteNum = Number.POSITIVE_INFINITY;
var negInfiniteNum = Number.NEGATIVE_INFINITY;
var notANum = Number.NaN;
console.log(biggestInt);
console.log(smallestInt);
console.log(biggestNum);
console.log(smallestNum);
console.log(infiniteNum);
console.log(negInfiniteNum);
console.log(notANum);
debugger;
I can only imagine that this is a sign of a bigger problem with your application complicating something that could be very simple.
Please read numerical literals
http://www.ecma-international.org/ecma-262/5.1/#sec-7.8.3
Once the exact MV for a numeric literal has been determined, it is
then rounded to a value of the Number type. If the MV is 0, then the
rounded value is +0; otherwise, the rounded value must be the Number
value for the MV (as specified in 8.5), unless the literal is a
DecimalLiteral and the literal has more than 20 significant digits, in
which case the Number value may be either the Number value for the MV
of a literal produced by replacing each significant digit after the
20th with a 0 digit or the Number value for the MV of a literal
produced by replacing each significant digit after the 20th with a 0
digit and then incrementing the literal at the 20th significant digit
position. A digit is significant if it is not part of an ExponentPart
and
it is not 0;
or there is a nonzero digit to its left and there is a nonzero digit, not in the ExponentPart, to its right.
Clarification
I should add that the Number object wrapper supposedly offers precision up to 100 significant digits in some browsers (going above that gives you a RangeError); however, most environments currently only implement the precision to the required 21 significant digits.
Reading through the OP's original question, I believe skyline provided the best answer by recommending a library which offers well over 100 significant digits (some of the tests that I got to pass were using 250 significant digits). In reality, it would be interesting to see someone revive one of those projects again.
The distance from our Sun to Alpha Centauri is 4.153×10^18 cm. You can represent this value well with the Number datatype, which stores values up to 1.7977×10^308 with about 17 significant figures.
However, what if you want to model a spaceship stationed at Alpha Centauri?
Due to the limited precision of Number, you can either store the value 4153000000000000000 or 4153000000000000500, but nothing in between. This means that you would have a maximal spatial resolution of about 500 cm at Alpha Centauri. Your spaceship would look really clunky.
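You can see that resolution limit directly in the console (a quick check using the distance value from above):
var pos = 4153000000000000000;        // distance to Alpha Centauri in cm

console.log(pos + 100 === pos);       // true - a 1 m step is lost entirely
console.log(pos + 500);               // 4153000000000000500 - the next value Number can distinguish
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991 - exact integers end well below pos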
Could we use another datatype than Number? Of course you could use a library such as BigNumber.js which provides support for nearly unlimited precision. You can park your spaceship one millimeter from the hot core of Alpha Centauri without (numerical) issues:
pos_acentauri = new BigNumber(4153000000000000000);
pos_spaceship = pos_acentauri.add(0.1); // one millimeter (0.1 cm) from Alpha Centauri
console.log(pos_spaceship.toString()); // 4153000000000000000.1
<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/2.3.0/bignumber.min.js"></script>
However, not only would the captain of that ship burn to death, but your 3D engine would be slow as hell, too. That is because Number allows for really fast arithmetic computations in constant time, whereas e.g. BigNumber addition time grows with the size of the stored value.
Solution: Use Number for your 3D engine. You could use different local coordinate systems, e.g. one for Alpha Centauri and one for our solar system. Only use BigNumber for things like the HUD, game stats and so on.
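A rough sketch of that split (the object shape and names here are illustrative, not from the answer): keep one coarse frame label per object plus a local offset that stays well inside Number's exact range.
// Each object stores a coordinate frame id plus a local position in cm.
// Local offsets stay far below Number.MAX_SAFE_INTEGER, so plain Number math
// is exact enough for rendering; only cross-frame bookkeeping needs BigNumber.
var ship = {
    frame: 'alpha-centauri',                // coarse frame, resolved via a lookup table
    local: { x: 1200.5, y: -300.25, z: 42 } // cm within that frame
};

function moveLocal(obj, dx, dy, dz) {
    obj.local.x += dx;
    obj.local.y += dy;
    obj.local.z += dz;
}

moveLocal(ship, 0.1, 0, 0); // millimeter steps are no problem locally
console.log(ship.local.x);  // 1200.6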
The problem with BigNumber is the precision loss from using numeric literals with more than 15 significant digits.
My solution would be a combination of BigNumber and web3.js:
// Assumes bignumber.js and the pre-1.0 web3.js (which exposes web3.toBigNumber) are loaded
var web3 = new Web3();
let one = new BigNumber("1234567890121234567890123456789012345");
let two = new BigNumber("1000000000000000000");
let three = new BigNumber("1000000000000000000");
const minus = two.times(three).minus(one);
const plus = one.plus(two.times(three));
const compare = minus.comparedTo(plus);
const results = {
    minus: web3.toBigNumber(minus).toString(10),
    plus: web3.toBigNumber(plus).toString(10),
    compare
};
console.log(results); // {minus: "-234567890121234567890123456789012345", plus: "2234567890121234567890123456789012345", compare: -1}

Math.pow() not giving precise answer

I've used Math.pow() to calculate the exponential value in my project.
Now, for specific values like Math.pow(3, 40), it returns 12157665459056929000.
But when I tried the same calculation on a scientific calculator, it returns 12157665459056928801.
Then I tried computing it by looping up to the exponent:
function calculateExpo(base, power) {
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; //base + '^' + power + ' = ';
    for (var i = 0; i < power; i++) {
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on the int type in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, even the browser matters, on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so it cannot be accurately computed using Math.pow(3, 40) in JavaScript. Only numbers with 53 or fewer significant figures in their base-two expansion (the 52 stored fraction bits plus the implied leading 1, with a similar restriction on their order of magnitude fitting within the 11 exponent bits) have a chance to be represented accurately by a double-precision floating point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
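A few quick console checks illustrating this (just illustrative, not part of the original answer):
var x = Math.pow(3, 40);

console.log(Math.log2(x));            // ~63.4 - needs more bits than a double's 53-bit significand
console.log(Number.isSafeInteger(x)); // false - the result is above 2^53 - 1
console.log(x === x + 1);             // true - neighbouring integers collapse at this magnitude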
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 because it uses IEEE floating point for computation. You get 12157665459056928801 because it uses arbitrary precision (bignum) for computation.
JavaScript can only represent distinct integers to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE-754 base-2 doubles.
As a consequence, the result from Math.pow (even if it was accurate internally) is brutally "rounded" to the nearest value the double format can represent - and at this magnitude every representable value is an integer, so the resulting number is not the correct value, but the closest integer approximation of it JavaScript can handle.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff so it can be seen how this would affect the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From the The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.

Concatenate nested array to vector?

Given I have an array like this:
array = [Array[8], Array[8], Array[8], ...]
# array.length is 81; each octet represents a point on a 9x9 grid
where each nested array contains 8 numeric elements ranging from -2 to 2, how would I apply the following step to get a vector in Javascript?
Step 5. The signature of an image is simply the concatenation of the
8-element arrays corresponding to the grid points, ordered
left-to-right, top-to-bottom. Our signatures are thus vectors of
length 648. We store them in 648-byte arrays, but because some of the
entries for the first and last rows and columns are known to be zeros
and because each byte is used to hold only 5 values, signatures could
be represented by as few as ⌈544 log2 5⌉ = 1264
bits.
(Towards the end, those are supposed to be ceiling notations; best I could do given SO's lack of Latex formatting)
I have the array ready to go and ordered properly, but my knowledge of matricies and vectors is a little rusty, so I'm not sure how to tackle this next step. I'd appreciate any clarifications!
Background: I'm trying to create a JS implementation of an image processing algorithm published by the Xerox Palo Alto Research Center for a side-project I'm currently working on.
Conceptually you could convert this to a single 1264 bit number using the following algorithm:
Initialize an accumulator variable to zero
Iterate over all elements, but skip those which you know to be zero
For the other elements, add 2 to obtain values in the range [0,1,2,3,4]
For each such value, multiply the accumulator by 5 then add the corresponding value
When you have processed all elements, the accumulator will encode your arrays
To reverse that encoding, you'd do this:
Read the encoded value into the accumulator
Iterate over all elements, in reverse order, but skip those which you know to be zero
For each element, you obtain the corresponding value as the accumulator modulo 5
Subtract 2 from that value
Divide the accumulator by 5 using a truncating division
The problem with all of this is the fact that JS doesn't provide 1264 bit numbers out of the box. You might try one of the libraries suggested in How to deal with big numbers in javascript.
But unless you absolutely require an extremely small representation, I'd suggest an alternative approach: you can encode up to 13 such values in a 32-bit signed integer, since 5^13 = 1,220,703,125 < 2,147,483,648 = 2^31. So after encoding 13 values I'd write out the result as such a number, then reset the accumulator to zero. This way you'll need ⌈544/13⌉ ∙ 32 = 1344 bits, which is not that much worse in terms of space requirements, but will be a lot faster to implement.
Instead of iterating once in forward and once in reverse direction, it might be easier to not multiply the accumulator by 5, but instead multiply the value you add to that by a suitable power of 5. In other words, you maintain a factor which you initialize to 1, and multiply by 5 every time you add a value. So in this case, first data values will have less significant positions than later data values, both for encoding and decoding, which means you can use the same iteration order for both.
See the ideone link mentioned in my comment below for an example of this latter approach. It encodes the full 9×9×8 values in 50 integers, each of them using no more than 31 bits. It then decodes the original matrix from that encoded form, to show that all the information is still present. The example does not use any fixed zeros; in your case ⌈544/13⌉ = 42 integers should be enough.
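For reference, a sketch of that 13-values-per-integer packing (helper names are hypothetical; it assumes the flattened signature is an array of values in [-2, 2], with earlier values in less significant positions, as described above):
// Pack values in [-2, 2] into 32-bit-safe integers, 13 values per integer,
// since 5^13 = 1,220,703,125 < 2^31.
function packSignature(values) {
    var packed = [];
    for (var i = 0; i < values.length; i += 13) {
        var acc = 0, factor = 1;
        for (var j = i; j < Math.min(i + 13, values.length); j++) {
            acc += (values[j] + 2) * factor; // shift [-2, 2] to [0, 4]
            factor *= 5;
        }
        packed.push(acc);
    }
    return packed;
}

// Decode again, given the original number of values
function unpackSignature(packed, length) {
    var values = [];
    for (var i = 0; i < packed.length; i++) {
        var acc = packed[i];
        for (var j = 0; j < 13 && values.length < length; j++) {
            values.push((acc % 5) - 2);
            acc = Math.floor(acc / 5);
        }
    }
    return values;
}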

Javascript CS-PRNG - 64-bit random

I need to generate a cryptographically secure 64-bit unsigned random integer in Javascript. The first problem is that Javascript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use I think? To fix this I can use a big number library, no problem.
My Method:
var randNum = SHA256( randBigInt(128, 0) ) % 2^64;
Where SHA256() is a secure hash function and randBigInt() is defined below as a non-crypto PRNG; I'm giving it a 128-bit seed, so brute force shouldn't be a problem.
randBigInt(n,s) //return an n-bit random BigInt (n>=1). If s=1, then the most significant of those n bits is set to 1.
Is this a secure method to generate a cryptographically secure 64-bit random int? And importantly, does taking the mod 2^64 guarantee 100% that I have a 64-bit number?
An abstract example: say this number is prime (I know it isn't), I will use it in the Galois field GF(2^p), where p must be 64 bits so that every possible 1-63 bit number is a field element. In this case, my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the mod 2^64 of a 256-bit hash output.
Thanks (hope that makes sense)
You can't turn a non-crypto-secure PRNG into a secure one simply by hashing the output in this way. You've only got as much entropy as you provided as input to seed the PRNG. Sure, the output looks random, but if the attacker knows your scheme (and you ought to assume they do, taking Kerckhoffs's principle as axiomatic) then they can guess and/or brute force the inputs.
Also, you seem a little unclear over what you want by a "64-bit number". Do you want 64 bits of random data - in which case the chance of the highest bit being set should be 50% - or do you want some other property like a number between 2^63 and 2^64-1? What are you trying to do, anyway?
The output of a crypto-secure hash function (careful: I'm not sure that SHA256 on its own is ideal as a PRNG) is supposed to pass statistical tests, so you can be pretty sure that the probability of every individual bit being 1 is (very close to) 50%. This is great for generating symmetric keys, but that's not what you're hinting at here?
(As you go on to say, if you're talking about GF(2^p) you do indeed need a prime of a given size. If that's what you're doing, there are algorithms which generate probably and provably prime numbers, and you should look into those instead.)
All JavaScript numbers are actually IEEE 754 doubles, which means you can only exactly represent integers with magnitude <= 2^53. See this answer. So you will need a bignum library.
Taking the mod (%) gives you a number >= 0 and <= 2^64 - 1. Note that 2^64 - 1 is the largest 64-bit number.
Finally, if you feed non-random input into SHA256, you'll get non-random output. In the extreme, if randBigInt always returns 0, than SHA256 will always return the same thing.
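If your environment provides the Web Crypto API (an alternative the answers above don't cover, so treat this as a sketch rather than a fix for the asker's scheme), you can get 64 cryptographically secure random bits directly and skip the hash-a-PRNG construction:
// Fill 8 random bytes from the platform CSPRNG and view them as one unsigned 64-bit value.
// Assumes BigInt support; on older engines you would keep the two 32-bit halves instead.
var bytes = new Uint8Array(8);
crypto.getRandomValues(bytes);

var rand64 = 0n;
for (var i = 0; i < 8; i++) {
    rand64 = (rand64 << 8n) | BigInt(bytes[i]);
}
console.log(rand64.toString()); // uniformly distributed over [0, 2^64 - 1]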
Go to http://www.number.com.pt/index.html and, under PRECISION AND IMPLEMENTATION, get 1000 decimal pseudorandom numbers and see the code.
