I'm implementing the Keccak-f ι step in JavaScript. The formula is as follows:
# ι step
A[0,0] = A[0,0] xor RC
RC comes from an array of round constants in hex, and A[0,0] is a single bit in binary.
The problem occurs with the following values:
A[0,0] = 0, RC = 0x0000000000008082
When I convert RC to binary using this code:
let temp = (parseInt(RC, 16).toString(2)).padStart(8, '0');//110010100010011000
A[0][0] = A[0][0] ^ temp; //16975297
The result is not a single bit; rather, it is the large number 16975297, but I need a single bit.
I tried converting 16975297 to binary, but that is not a single bit either.
I need some help, thank you!
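For reference, a minimal sketch (my addition, not part of the post) of XORing a single bit of the 64-bit round constant into a single state bit, using BigInt so the full constant is preserved; the bit index z is an assumed value for illustration:
// Sketch only: extract bit z of the 64-bit round constant and XOR it into one state bit.
const RC = 0x0000000000008082n;               // round constant as a BigInt literal
const z = 7;                                  // assumed bit position within the lane
const rcBit = Number((RC >> BigInt(z)) & 1n); // bit z of RC, as 0 or 1
let bit = 0;                                  // A[0][0][z], a single bit
bit = bit ^ rcBit;                            // single-bit XOR
console.log(bit);                             // 1, since bit 7 of 0x8082 is set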
I don't want to share my primary key in the API, so I used UUIDv4 to generate a unique row id.
But I need to find that row by the generated UUID, which may cause performance issues since it's a string and quite long.
I tried converting this UUID from base 16 to decimal.
const uuid = UUIDV4() //57d419d7-8ab9-4edf-9945-f9a1b3602c93
const uuidToInt = parseInt(uuid, 16)
console.log(uuidToInt) //1473518039
By default it only converts the first chunk to decimal.
Is it safe to use it this way?
How likely is it to lose the uniqueness of the row?
I tried converting this UUID from base 16 to decimal.
Decimal or hexadecimal; a number can't be both. Besides that, the UUID is already in hexadecimal format.
Here's how you can convert it into a decimal value:
var uuid = "57d419d7-8ab9-4edf-9945-f9a1b3602c93";
// strip the dashes and prefix "0x" so BigInt parses the string as hexadecimal
var hex = "0x" + uuid.replace(/-/g, "");
var decimal = BigInt(hex).toString(); // don't convert this to a Number
// turn each hex byte into a character, then base64-encode the resulting string
var base64 = btoa(hex.slice(2).replace(/../g, v => String.fromCharCode(parseInt(v, 16))));

console.log({
  uuid,
  hex,
  decimal,
  base64
});
Careful, don't convert the BigInt value to a regular Number; JS Numbers cannot deal with values that big. They have only 53 bits of precision, so you'd lose the 75 least significant bits of your UUID.
Edit: added base64.
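To make the precision warning above concrete, a small illustration (my addition, not part of the original answer):
var hexValue = "0x57d419d78ab94edf9945f9a1b3602c93";
var big = BigInt(hexValue);            // exact 128-bit value
var asNumber = Number(big);            // only ~53 bits of precision survive this conversion
console.log(big.toString());           // exact decimal value of the UUID
console.log(BigInt(asNumber) === big); // false: the low bits were rounded away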
Is it safe to use it this way?
That depends on your definition of safe.
How likely is it to lose the uniqueness of the row?
A UUIDv4 has 128 bits, so there are 2^128 theoretically possible combinations (strictly, 122 of those bits are random; the version and variant fields are fixed).
So that's 340'282'366'920'938'463'463'374'607'431'768'211'456 possible UUIDs, roughly 3.4 × 10^38.
Taking only the first section of a UUID leaves you with 32 bits, which gives you 2^32 possible combinations: 4'294'967'296.
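As a rough back-of-the-envelope check (my own sketch, not part of the original answer), the birthday bound 1 - exp(-n^2 / (2 * 2^32)) estimates the chance of at least one collision among n truncated 32-bit ids:
// Birthday-bound approximation: probability of at least one collision
// among n random values drawn from a space of 2^bits possibilities.
function collisionProbability(n, bits) {
  var space = Math.pow(2, bits);
  return 1 - Math.exp(-(n * n) / (2 * space));
}

console.log(collisionProbability(1000, 32));    // ~0.0001 for 1,000 rows
console.log(collisionProbability(100000, 32));  // ~0.69 for 100,000 rows
console.log(collisionProbability(100000, 122)); // effectively 0 for a full UUIDv4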
In JavaScript, I have a 20-digit number (16287008619584270370) that I want to convert to 64-bit binary, but when I use the conversion code below, it doesn't show me the real 64-bit binary.
var tag = 16287008619584270370;
var binary = parseInt(tag, 10).toString(2);
After running the conversion code, I get:
-1110001000000111000101000000110000011000000010111011000000000000
The correct binary should be:
-1110001000000111000101000000110000011000000010111011000011000010
(the last 8 bits are different)
When I looked into the problem, I found that the code only keeps about 16 significant digits of the variable; beyond that it fills in zeros and shows the binary of 16287008619584270000 instead.
So I need code that converts my whole 20-digit number into its actual binary in JavaScript.
The problem arises because of the limited precision of 64-bit floating point representation. Already when you do:
var tag = 16287008619584270370;
... you have lost precision. If you output that number you'll notice it will be 370 less. JS cannot represent the given number in its number data type.
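A quick way to see this rounding in action (my addition, not part of the answer):
console.log(16287008619584270370);    // prints 16287008619584270000: the literal is rounded before any code runs
console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991; integers above this are not exact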
You can use the BigNumber library (or one of the many alternatives):
const tag = BigNumber("16287008619584270370");
console.log(tag.toString(2));
<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/8.0.1/bignumber.min.js"></script>
Make sure to pass large numbers as strings, as otherwise you lose precision before you even start.
Future
At the time of writing, the proposal for a native BigInt is at stage 3: "BigInt has been shipped in Chrome and is underway in Node, Firefox, and Safari."
That change includes a language extension introducing BigInt literals that have an "n" suffix:
var tag = 16287008619584270370n;
To handle more than ~16 significant digits, we use BigInt instead of a regular Number.
var tag = BigInt("16287008619584270370"); // as string
var binary = tag.toString(2);
console.log(binary);
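If a fixed 64-character result is needed (the question asks for 64-bit binary), the BigInt's binary string can be left-padded with zeros; a small sketch (my addition):
var tag = BigInt("16287008619584270370");
var binary64 = tag.toString(2).padStart(64, "0"); // pad to a fixed 64-bit width
console.log(binary64);                            // 64-character binary string
console.log(binary64.length);                     // 64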
How would I get this number with a large negative exponent, 6.849391775995509e-276, down to 6.84 in JavaScript?
I'm getting back numbers with large negative exponents from an API, and I'd like to use a shortened version, to 2 decimal places, in my UI. If it matters, I'm using the number in a d3 chart rendered in React.
I have been trying different techniques from the JavaScript doc sites but cannot seem to get it to work. Is using a library like Immutable.js an option? Any help would be greatly appreciated. All of the attempts in the code snippet that use the exponent notation return 0.00.
function financial(x) {
  return Number.parseFloat(x).toFixed(2);
}
console.log(financial(6.849391775995509e-276));
const num = 6.849391775995509e-276
console.log(num.toFixed(2));
const num2 = 6.84939;
console.log(num2.toFixed(2));
console.log(financial('6.849391775995509e-276'));
+num.toString().substr(0, 4)
Just convert it to a string and take the first digits.
Similar to Jonas' answer; however, you want to wrap it in parseFloat() to get an actual number rather than a string.
const num = 6.849391775995509e-276
var newNum = parseFloat(num.toString().substr(0,4))
console.log(newNum)
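An alternative sketch (my addition, not from the answers above) that truncates the mantissa to two decimals whether or not the Number prints in exponent notation:
function mantissa2(x) {
  var m = x.toExponential().split("e")[0];      // mantissa as a string, e.g. "6.849391775995509"
  return Math.trunc(parseFloat(m) * 100) / 100; // truncate (not round) to 2 decimals
}

console.log(mantissa2(6.849391775995509e-276)); // 6.84
console.log(mantissa2(0.0123456));              // 1.23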
In the following code, my understanding is that & should give a resulting binary string with ones wherever the corresponding digits in both strings are 1s. However, the result I got is 98435, while what I expected was "101011". Where is my misunderstanding, and how can I achieve what I am attempting to do?
const bool = "101011";
const bool2 = "111011";
const and = bool & bool2;
console.log("bool: "+bool+", bool2: "+bool2+", &: "+and);
JavaScript, like most languages, assumes that numbers written in code are in base 10.
Your code uses STRINGS, though.
When you use any mathematical operator (except +, which concatenates strings), JavaScript tries to be nice and makes a Number out of the string, but it's a base-10 number (a leading "0x" in the string would make it hexadecimal, but a plain string of digits is always read as decimal).
So the string "101011" is "coerced" to the Number 101011, which is 11000101010010011 in binary, and "111011" becomes 111011, which is 11011000110100011 in binary.
11000101010010011 (binary) &
11011000110100011 (binary)
-----------------
11000000010000011 (binary) = 98435 (decimal)
However, it's easy to fix:
const bool = "101011";
const bool2 = "111011";
const and = (parseInt(bool,2) & parseInt(bool2,2)).toString(2);
console.log("bool: "+bool+", bool2: "+bool2+", &: "+and);
When I write a float to a buffer, it does not read back the same value:
> var b = new Buffer(4);
undefined
> b.fill(0)
undefined
> b.writeFloatBE(3.14159,0)
undefined
> b.readFloatBE(0)
3.141590118408203
Why?
EDIT:
My working theory is that because JavaScript stores all numbers as double precision, it's possible that the buffer implementation does not properly zero the other 4 bytes of the double when it reads the float back in:
> var b = new Buffer(4)
undefined
> b.fill(0)
undefined
> b.writeFloatBE(0.1,0)
undefined
> b.readFloatBE(0)
0.10000000149011612
>
I think it's telling that we get zeros for 7 digits past the decimal point (well, 8 actually) and then there's garbage. I think there's a bug in the Node buffer code that reads these floats. This is Node version 0.10.26.
Single-precision floating point numbers ("floats") often cannot represent a decimal value exactly; this is a common limitation seen across languages, not just JavaScript / Node.js. For example, I encountered something similar in C# when using float instead of double.
Double-precision floating point numbers are more accurate and should better meet your expectations. Try changing the above code to write to the buffer as a double instead of a float:
var b = new Buffer(8);
b.fill(0);
b.writeDoubleBE(3.14159, 0);
b.readDoubleBE(0);
This will return:
3.14159
EDIT:
Wikipedia has some pretty good articles on floats and doubles, if you're interested in learning more:
http://en.wikipedia.org/wiki/Floating_point
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
SECOND EDIT:
Here is some code that illustrates the limitation of the single-precision vs. double-precision float formats, using typed arrays. Hopefully this can act as proof of the limitation, since I'm having a hard time explaining it in words:
var floats32 = new Float32Array(1),
    floats64 = new Float64Array(1),
    n = 3.14159;

floats32[0] = n;
floats64[0] = n;

console.log("float", floats32[0]);
console.log("double", floats64[0]);
This will print:
float 3.141590118408203
double 3.14159
Also, if my understanding is correct, single-precision floating point numbers store about 7 significant digits in total, not 7 digits after the decimal point. That lines up with your results: 3.14159 has 6 significant digits, and the first 7 significant digits of 3.141590118408203 are 3.141590, which equals 3.14159.
readFloat in Node is implemented in C++, and the bytes are interpreted exactly the way your compiler stores/reads them, so I doubt there is a bug here. What I think is that "7 digits" is an incorrect assumption for a float. This answer suggests 6 digits (and it's the value of std::numeric_limits<float>::digits10), so the result of readFloatBE is within the expected error.
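As a small check along the same lines (my addition, not from the answer): Math.fround rounds a Number to the nearest single-precision value, so it predicts exactly what readFloatBE gives back:
const buf = Buffer.alloc(4);                    // Buffer.alloc is the modern replacement for new Buffer(4)
buf.writeFloatBE(3.14159, 0);                   // stored as a 32-bit float
const readBack = buf.readFloatBE(0);
console.log(readBack);                          // 3.141590118408203
console.log(readBack === Math.fround(3.14159)); // true: no bug, just float32 rounding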