How do you generate cryptographically secure floats in Javascript?
This should be a drop-in replacement for Math.random, returning values in [0, 1), but cryptographically secure. Example usage:
cryptoFloat.random();
0.8083966837153522
Secure random numbers in javascript? shows how to create a cryptographically secure Uint32Array. Maybe this could be converted to a float somehow?
The Mozilla Uint32Array documentation was not totally clear on how to convert from an int.
Google wasn't much help, either.
Float32Array.from(someUintBuf); always gave a whole number.
Since the following code is quite simple and functionally equivalent to the division method, here is the alternative method of altering the bits directly. (This code is copied and modified from T. J. Crowder's very helpful answer.)
// A buffer with just the right size to convert to Float64
let buffer = new ArrayBuffer(8);
// View it as an Int8Array and fill it with 8 random ints
let ints = new Int8Array(buffer);
window.crypto.getRandomValues(ints);
// Set the sign bit (bit 7 of ints[7]) to 0 and the
// exponent (bits 6-0 of ints[7] plus bits 7-4 of ints[6])
// to 1023 (all ones except for the highest bit)
ints[7] = 63;
ints[6] |= 0xf0;
// Now view it as a Float64Array, and read the one float from it
let float = new DataView(buffer).getFloat64(0, true) - 1;
document.body.innerHTML = "The number is " + float;
Explanation:
The format of an IEEE 754 double is 1 sign bit (bit 7 of ints[7]), 11 exponent bits (bit 6 of ints[7] down to bit 4 of ints[6]), and the rest as mantissa (which holds the value). The value is computed with the formula (-1)^sign * 1.mantissa * 2^(exponent - 1023).
To make the factor 2^(exponent - 1023) equal to 1, the exponent needs to be 1023. The exponent field has 11 bits, so its highest-order bit has the value 2^10 = 1024; that bit needs to be set to 0 and all the other bits to 1.
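For comparison, here is a minimal sketch of the division method mentioned above, assuming crypto.getRandomValues is available: gather 53 random bits (the full precision of a double) and divide by 2^53.
function cryptoRandom() {
    // Two cryptographically secure 32-bit values.
    let ints = new Uint32Array(2);
    crypto.getRandomValues(ints);
    // Keep 21 bits of one word and all 32 of the other: 53 bits total.
    let high = ints[0] & 0x1fffff;
    return (high * 4294967296 + ints[1]) / 9007199254740992; // uniform in [0, 1)
}
cryptoRandom(); // e.g. 0.8083966837153522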
Related
I found this snippet online along with this Stackoverflow post which converts it into a TypeScript class.
I basically copy and pasted it verbatim (because I am not qualified to modify this sort of cryptographic code), but I noticed that VS Code has a little underline in the very last function:
/**
 * generates a random number on [0,1) with 53-bit resolution
 */
nextNumber53(): number {
    let a = this._nextInt32() >>> 5;
    let b = this._nextInt32() >>> 6;
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0);
}
Specifically the 9007199254740992.0
VS Code says "Numeric literals with absolute values equal to 2^53 or greater are too large to be represented accurately as integers." ts(80008)
I notice that if I subtract one from that number and instead make it 9007199254740991.0, the warning goes away. But I don't want to modify the code and break it if the difference is significant.
Basically, I am unsure: my intuition says that numerical overflow is bad, but it also says that I shouldn't try to fix cryptographic code that has been posted in several places, as it is probably correct.
But is it? Or should one be subtracted from this number?
9007199254740992 is the right value to use if you want uniform values in [0,1), i.e. 0.0 <= x < 1.0.
This is just the automated check going awry: this value can be represented accurately by a JavaScript Number, i.e. a 64-bit float. It is exactly 2^53, and binary IEEE 754 floats have no trouble with numbers of this form (it would even be represented accurately by a 32-bit float).
Using 9007199254740991 would make the range [0,1], i.e. 0.0 <= x <= 1.0. Most libraries generate uniform values in [0,1) and derive other distributions from that, but you are obviously free to do whatever is best for your application.
Note that the actual chance of getting the maximum value back is 2^-53 (~1e-16), so you're unlikely to ever see it in practice.
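You can check in a console that 2^53 itself is exactly representable and that the trouble only starts above it, which is also why Number.MAX_SAFE_INTEGER is defined as 2^53 - 1:
Math.pow(2, 53) === 9007199254740992  // true: exactly representable
Number.MAX_SAFE_INTEGER               // 9007199254740991, i.e. 2^53 - 1
9007199254740992 + 1                  // 9007199254740992: 2^53 + 1 rounds back down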
How do you generate (in JavaScript or Node.js) a float in the range 0.00000000 - 100.00000000 from a given long hex hash, for example a SHA-256?
I am open to solutions with the crypto library, because I am using it to generate the given hash :)
If you're not concerned about precision loss, and your hex strings are of a fixed length (as with SHA-256), you could simply map from one value space to the other:
function hexStrToFraction(hexStr) {
    // Expresses a hexadecimal string of N characters as a fraction
    // of one above its maximum possible value (16^N).
    // Returns a number between 0 and 1.
    return parseInt(hexStr, 16) / Math.pow(16, hexStr.length);
}
function sha256ToPercent(sha256) {
    return 100 * hexStrToFraction(sha256);
}
Note that the precision loss is high enough to render the majority of a SHA-256 redundant:
var a = 'b2339969703a8c4b49e4a5c99cca6013ed455f52d06f8f03adb927aee7d9c3c0'
var b = 'b2339969703a8c8b8504772b860b9ed2cb6aa0186ff6750981e7ccd5344e4bf1'
// ^ differences start here
hexStrToFraction(a) === hexStrToFraction(b) // evaluates true
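Since a double holds only about 53 significant bits, a variant that parses just the first 13 hex digits (52 bits) avoids feeding parseInt the redundant tail. The helper name is illustrative, not from the answer above:
function hexPrefixToFraction(hexStr) {
    // 13 hex digits = 52 bits, which fit exactly in a double.
    return parseInt(hexStr.slice(0, 13), 16) / Math.pow(16, 13);
}
hexPrefixToFraction(a) // effectively the same value hexStrToFraction(a) yields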
I've used Math.pow() to calculate exponential values in my project.
Now, for specific values like Math.pow(3, 40), it returns 12157665459056929000.
But when I tried the same value on a scientific calculator, I got 12157665459056928801.
Then I tried computing it with a loop:
function calculateExpo(base, power) {
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; //base + '^' + power + ' = ';
    for (var i = 0; i < power; i++) {
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on the int type in JS?
This behavior is highly dependent on the platform you run this code on. Interestingly, even the browser matters on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so it cannot be accurately computed using Math.pow(3, 40) in JavaScript. Only numbers with 53 or fewer significant figures in their base-two expansion (the 52 stored fraction bits plus the implicit leading 1, with a similar restriction on their order of magnitude fitting within 11 bits) have a chance to be represented accurately by a double-precision floating-point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
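In modern JavaScript the expansion can be checked exactly with native BigInt:
(3n ** 40n).toString(2)
// "1010100010111000101101000101001000101001000111111110100000100001"
(3n ** 40n).toString(2).length // 64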
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 from JavaScript's Math.pow and Erlang's math:pow because they use IEEE floating point for the computation. You get 12157665459056928801 from Haskell and from Erlang's crypto:mod_exp because they use arbitrary-precision (bignum) arithmetic.
JavaScript can only represent distinct integers up to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE 754 base-2 doubles.
As a consequence, the result from Math.pow (even if it was accurate internally) is brutally "rounded" to the nearest representable double. Since every double of that magnitude is integer-valued, the result is still an integer, just not the correct value: it is the closest integer approximation of it JavaScript can handle.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff so it can be seen how this would affect the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From the The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
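In modern engines, native BigInt gives the same exact result without any library:
const exact = 3n ** 40n;
exact.toString() // "12157665459056928801"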
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values are turned into floats and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer at all, rather than scientific notation.
I'm writing a function to extend a number with sign to a wider bit length. This is a very frequently used action in the PowerPC instruction set. This is what I have so far:
function exts(value, from, to) {
    return (value | something_goes_here);
}
value is the integer input, from is the number of bits that the value is using, and to is the target bit length.
What is the most efficient way to create a number that has to - from bits set to 1, followed by from bits set to 0?
Ignoring the fact that JavaScript has no 0b number syntax, for example, if I called
exts(0b1010101010, 10, 14)
I would want the function to OR the value with 0b11110000000000, returning a sign-extended result of 0b11111010101010.
A number containing p one bits followed by q zero bits can be generated via
((1<<p)-1)<<q
thus in your case
((1<<(to-from))-1)<<from
or much shorter
(1<<to)-(1<<from)
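Putting this together, here is a minimal sketch of exts() built on that mask. The sign-bit test is an addition of mine (the mask should only be ORed in when the value is negative), and it assumes 0 < from <= to <= 31 so all shifts stay within JavaScript's 32-bit bitwise range:
function exts(value, from, to) {
    // If the sign bit (bit from - 1) is set, OR in (to - from) one-bits
    // just above the original width; otherwise leave the value unchanged.
    if (value & (1 << (from - 1))) {
        return value | ((1 << to) - (1 << from));
    }
    return value;
}
var x = parseInt("1010101010", 2); // the 0b1010101010 from the example
exts(x, 10, 14).toString(2)        // "11111010101010"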
If you have the number 2^q (= 1 shifted left by q) represented as an integer of width p + q bits, it has the representation:
0...010...0
p-1 q
then 2^q - 1 has the representation
0...01...1
p q
which is exactly the opposite of what you want. So just flip the bits:
hence what you want is NOT((1 LEFT SHIFT by q) - 1)
= ~((1 << q) - 1) in c notation
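In JavaScript's 32-bit bitwise arithmetic that looks like this (the >>> 0 just reinterprets the result as unsigned for display):
var q = 10;
(~((1 << q) - 1) >>> 0).toString(2) // "11111111111111111111110000000000", i.e. 32 - q ones followed by q zeros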
I am not overly familiar with binary mathematics in JavaScript... but if you need to OR a number with 0b11110000000000, then I assume you would just convert that to decimal (which gives you 15360) and do value | 15360.
Relevant info that you may find useful: parseInt("11110000000000", 2) converts a binary number (specified as a string) to a decimal number, and (15360).toString(2) converts a decimal number (15360 in this case) to a binary number (the result is a string).
Revised solution
There's probably a more elegant and mathematical method, but here's a quick-and-dirty solution:
var S = "";
for (var i = 0; i < p; i++)
    S += "1";
for (i = 0; i < q; i++)
    S += "0";
S = parseInt(S, 2); // convert to decimal
I am trying to perform something that is brain-dead simple in any other language but not in JavaScript: get the bits out of a float (and the other way around).
In C/C++ it would be something like
float a = 3.1415;
int b = *((int*)&a);
and vice versa
int a = 1000;
float b = *((float*)&a);
In C# you can use the BitConverter
...floatBits or something similar in Java... Even in VB6, for Christ's sake, you can memcpy a float32 into an int32. How on earth can I translate between an int and a float in JavaScript?
function DoubleToIEEE(f)
{
    var buf = new ArrayBuffer(8);
    (new Float64Array(buf))[0] = f;
    var ints = new Uint32Array(buf);
    return [ints[0], ints[1]];
}
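For example, 1.0 has the bit pattern 0x3FF0000000000000; on a little-endian platform (which is what practically every JavaScript engine runs on) the low word comes out first:
DoubleToIEEE(1) // [0, 1072693248], i.e. [0x00000000, 0x3FF00000]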
You certainly don't get anything low-level like that in JavaScript. It would be extremely dangerous to allow recasting and pointer-frobbing in a language that has to be safe for untrusted potential-attacker web sites to use.
If you want to get a 32-bit IEEE 754 representation of a single-precision value in a Number (which, remember, is not an int either; the only number type you get in JavaScript is double), you will have to make it yourself by fiddling the sign, exponent and mantissa bits together. There's example code here.
function FloatToIEEE(f)
{
var buf = new ArrayBuffer(4);
(new Float32Array(buf))[0] = f;
return (new Uint32Array(buf))[0];
}
Unfortunately, this doesn't work for doubles (hence the two 32-bit words in DoubleToIEEE above) or in old browsers.
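The reverse direction uses the same typed-array trick (the function name here is mine):
function IEEEToFloat(bits)
{
    var buf = new ArrayBuffer(4);
    (new Uint32Array(buf))[0] = bits;
    return (new Float32Array(buf))[0];
}
IEEEToFloat(0x3F800000) // 1, since 0x3F800000 is the float32 pattern for 1.0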
JavaScript uses double (IEEE 754) to represent all numbers.
A double consists of [sign, exponent (11 bits), mantissa (52 bits)] fields.
The value of a number is computed using the formula (-1)^sign * 1.mantissa * 2^(exponent - 1023). (1.mantissa means that we take the bits of the mantissa, add a 1 at the beginning, and treat that value as a number; e.g. if mantissa = 101, we get the number 1.101 (bin) = 1 + 1/2 + 1/8 (dec) = 1.625 (dec).)
We can get the value of the sign bit by testing whether the number is greater than zero. There is a small issue with 0 here because a double has both +0 and -0, but we can distinguish these two by computing 1/value and checking whether the result is +Inf or -Inf.
Since 1 <= 1.mantissa < 2, we can get the value of the exponent using Math.log2, e.g. Math.floor(Math.log2(666.0)) = 9, so exponent - 1023 = 9 and exponent = 1032, which in binary is (1032).toString(2) = "10000001000".
After we get the exponent, we can scale the number to a zero exponent without changing the mantissa: value = value / Math.pow(2, Math.floor(Math.log2(666.0))). Now value represents the number (-1)^sign * 1.mantissa. If we ignore the sign and multiply that by 2^52, we get an integer value that has the same bits as 1.mantissa: ((666 / Math.pow(2, Math.floor(Math.log2(666)))) * Math.pow(2, 52)).toString(2) = "10100110100000000000000000000000000000000000000000000" (we must ignore the leading 1).
After some string concatenation you will get what you want.
This is only a proof of concept; we didn't discuss denormalized numbers or special values such as NaN, but I think it can be expanded to account for these cases too.
@bensiu's answer is fine, but if you find yourself using some old JS interpreter you can use this approach.
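For reference, here is a sketch of the approach described above. It handles normal, nonzero numbers only (denormals, NaN and infinities are left out), and the helper names are mine:
function pad(bits, len) {
    while (bits.length < len) bits = "0" + bits;
    return bits;
}
function doubleBits(value) {
    // Sign bit: the 1/value trick from above distinguishes -0 from +0.
    var sign = (value < 0 || 1 / value < 0) ? "1" : "0";
    var abs = Math.abs(value);
    // Unbiased exponent e, chosen so that 1 <= abs / 2^e < 2.
    var e = Math.floor(Math.log2(abs));
    var exponentBits = pad((e + 1023).toString(2), 11);
    // Scale to [1, 2) and multiply by 2^52: an integer with the same bits
    // as 1.mantissa; drop the implicit leading 1.
    var mantissaBits = ((abs / Math.pow(2, e)) * Math.pow(2, 52)).toString(2).slice(1);
    return sign + exponentBits + mantissaBits;
}
doubleBits(666) // "0100000010000100110100000000000000000000000000000000000000000000"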
Like the other posters have said, JavaScript is loosely typed, so there is no differentiation in data types from float to int or vice versa.
However, what you're looking for is
float to int:
Math.floor( 3.9 ); // result: 3 (truncates everything past the decimal point)
Math.round( 3.9 ); // result: 4 (rounds to the nearest whole number)
Depending on which you'd like. (For comparison, a C/C++ float-to-int cast truncates toward zero, which Math.floor matches only for non-negative numbers; Math.trunc matches it in general.)
int to float:
var a = 10;
a.toFixed( 3 ); // result: "10.000" (a string)