In JavaScript
I have a variable holding a 20-digit number (16287008619584270370) and I want to convert it to its 64-bit binary representation, but when I use the conversion code below, it doesn't produce the real 64-bit binary.
var tag = 16287008619584270370;
var binary = parseInt(tag, 10).toString(2);
After running the conversion code, I get:
-1110001000000111000101000000110000011000000010111011000000000000
The correct binary should be:
-1110001000000111000101000000110000011000000010111011000011000010
(the last 8 bits differ)
When I investigated the problem, I found that the number is only precise to about 16 digits; beyond that the engine pads with zeros, so the code actually converts 16287008619584270000.
So I need code that converts my whole 20-digit number into its actual binary representation in JavaScript.
The problem arises because of the limited precision of 64-bit floating point representation. Already when you do:
var tag = 16287008619584270370;
... you have already lost precision. If you output that number you'll notice it is 370 less than what you typed: JS cannot represent the given value in its number data type.
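You can see the rounding directly in the console (a quick sanity check, not part of the original answer):
var tag = 16287008619584270370;
console.log(tag);                          // 16287008619584270000 (precision already lost)
console.log(tag === 16287008619584270000); // true: both literals round to the same double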
You can use the BigNumber library (or the many alternatives):
const tag = BigNumber("16287008619584270370");
console.log(tag.toString(2));
<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/8.0.1/bignumber.min.js"></script>
Make sure to pass large numbers as strings, otherwise you lose precision before you've even started.
Future
At the time of writing, the proposal for a native BigInt is at stage 3: "BigInt has been shipped in Chrome and is underway in Node, Firefox, and Safari."
That change includes a language extension introducing BigInt literals that have an "n" suffix:
var tag = 16287008619584270370n;
To preserve more than ~16 significant digits we use BigInt instead of a plain number.
var tag = BigInt("16287008619584270370"); // as string
var binary = tag.toString(2);
console.log(binary);
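If you specifically need a fixed-width 64-bit string, you can left-pad the result; a minimal sketch (my addition; the width of 64 assumes the value fits in 64 bits):
var tag = BigInt("16287008619584270370");
var binary = tag.toString(2).padStart(64, "0"); // pad with leading zeros up to 64 bits
console.log(binary);
console.log(binary.length); // 64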
Related
I don't want to expose my primary key in the API, so I used UUIDv4 to generate a unique row ID.
But I need to find that row by the generated UUID, which may cause performance issues since it's a string and quite long.
I tried converting this uuid to decimal on base 16.
const uuid = UUIDV4() //57d419d7-8ab9-4edf-9945-f9a1b3602c93
const uuidToInt = parseInt(uuid, 16)
console.log(uuidToInt) //1473518039
By default it only converts the first chunk to decimal.
Is it safe to use it this way?
How likely is it that rows lose their uniqueness?
I tried converting this uuid to decimal on base 16.
Decimal or hexadecimal? A number can't be both. Besides that, the UUID is already in a hexadecimal format.
Here is how you can convert it into a decimal value:
var uuid = "57d419d7-8ab9-4edf-9945-f9a1b3602c93";

// strip the dashes and prefix "0x" so BigInt parses the string as hexadecimal
var hex = "0x" + uuid.replace(/-/g, "");

// the full 128-bit value as a decimal string; don't convert this to a number
var decimal = BigInt(hex).toString();

// pack each hex byte into one character, then base64-encode the result
var base64 = btoa(hex.slice(2).replace(/../g, v => String.fromCharCode(parseInt(v, 16))));

console.log({
  uuid,
  hex,
  decimal,
  base64
});
Careful: don't convert the BigInt value to a regular number, JS numbers cannot deal with values that big. They have only 53 bits of precision, so you'd lose the 75 least significant bits of your UUID.
Edit: added base64.
Is it safe to use it this way?
That depends on your definition of safe.
How likely is it that rows lose their uniqueness?
A UUIDv4 has 128 bits, so there are 2^128 theoretically possible combinations.
That's 340'282'366'920'938'463'463'374'607'431'768'211'456 possible UUIDs.
Taking only the first section of a UUID leaves you with 32 bits, which gives you 2^32 possible combinations: 4'294'967'296.
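For intuition, the birthday bound says collisions among random 32-bit values become likely after only a few tens of thousands of rows. A rough sketch using the standard birthday-problem approximation (my illustration, not from the original answer):
// P(collision) ~ 1 - exp(-n(n-1) / 2d) for n random values drawn from a space of size d
function collisionProbability(n, d) {
  return 1 - Math.exp(-n * (n - 1) / (2 * d));
}

console.log(collisionProbability(77000, 2 ** 32));  // about 0.5: a coin flip after 77k rows
console.log(collisionProbability(77000, 2 ** 128)); // about 0: the full UUID stays safe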
When I run Number(123456789012345.12).toFixed(3), it returns "123456789012345.125" as a string. Where is the last 5 (in the decimal) coming from? I would have expected it to return "123456789012345.120". I executed this code on a Mac with an Intel processor using Chrome version 68.
Your number is too long (has too many digits); it does not fit into the 64-bit floating point precision of a JavaScript number.
Below is an example using fewer digits:
Number(123456789012345.12).toFixed(3): '123456789012345.125'
Number(12345678901234.12).toFixed(3): '12345678901234.119'
Number(1234567890123.12).toFixed(3): '1234567890123.120'
Number(123456789012.12).toFixed(3): '123456789012.120'
Number(12345678901.12).toFixed(3): '12345678901.120'
JavaScript numbers are represented by a 64-bit floating point value.
It's not possible to represent the number you show using normal JavaScript numbers. You would need to use something like bignumber.js.
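You can ask the engine for more digits than toFixed shows to see the value it actually stored; a quick check using the standard toPrecision (nothing library-specific):
console.log((123456789012345.12).toPrecision(21)); // "123456789012345.125000"
// The nearest representable double is exactly 123456789012345.125,
// which is where the trailing 5 comes from.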
If using bignumber.js then you can do the same using the following:
let BigNumber = require('bignumber.js');
BigNumber('123456789012345.12').toFixed(3): '123456789012345.120'
BigNumber('12345678901234.12').toFixed(3): '12345678901234.120'
BigNumber('1234567890123.12').toFixed(3): '1234567890123.120'
BigNumber('123456789012.12').toFixed(3): '123456789012.120'
I've used Math.pow() to calculate exponential values in my project.
Now, for specific values like Math.pow(3,40), it returns 12157665459056929000.
But when I tried the same calculation on a scientific calculator, it returns 12157665459056928801.
Then I tried computing the power with a loop:
function calculateExpo(base, power) {
    base = parseInt(base);
    power = parseInt(power);
    var output = 1;
    gameObj.OutPutString = ''; //base + '^' + power + ' = ';
    for (var i = 0; i < power; i++) {
        output *= base;
        gameObj.OutPutString += base + ' x ';
    }
    // remove the trailing ' x '
    gameObj.OutPutString = gameObj.OutPutString.substring(0, gameObj.OutPutString.lastIndexOf('x'));
    gameObj.OutPutString += ' = ' + output;
    return output;
}
This also returns 12157665459056929000.
Is there any restriction on the int type in JS?
This behavior is highly dependent on the platform you are running this code on. Interestingly, the browser matters, even on the very same machine.
<script>
document.write(Math.pow(3,40));
</script>
On my 64-bit machine, here are the results:
IE11: 12157665459056928000
FF25: 12157665459056929000
CH31: 12157665459056929000
SAFARI: 12157665459056929000
52 bits of JavaScript's 64-bit double-precision number values are used to store the "fraction" part of a number (the main part of the calculations performed), while 11 bits are used to store the "exponent" (basically, the position of the decimal point), and the 64th bit is used for the sign. (Update: see this illustration: http://en.wikipedia.org/wiki/File:IEEE_754_Double_Floating_Point_Format.svg)
There are slightly more than 63 bits' worth of significant figures in the base-two expansion of 3^40 (63.3985... in a continuous sense, and 64 in a discrete sense), so it cannot be accurately computed using Math.pow(3, 40) in JavaScript. Only numbers with 53 or fewer significant figures in their base-two expansion (and a similar restriction on their order of magnitude fitting within 11 bits) can be represented accurately by a double-precision floating point value.
Take note that how large the number is does not matter as much as how many significant figures are used to represent it in base two. There are many numbers as large or larger than 3^40 which can be represented accurately by JavaScript's 64-bit double-precision number values.
Note:
3^40 = 1010100010111000101101000101001000101001000111111110100000100001 (base two)
(The length of the largest substring beginning and ending with a 1 is the number of base-two significant figures, which in this case is the entire string of 64 digits.)
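You can verify this expansion directly in modern JavaScript with BigInt (my addition; engines without BigInt support will throw):
console.log((3n ** 40n).toString(2));
// 1010100010111000101101000101001000101001000111111110100000100001
console.log((3n ** 40n).toString(2).length); // 64 significant bits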
Haskell (ghci) gives
Prelude> 3^40
12157665459056928801
Erlang gives
1> io:format("~f~n", [math:pow(3,40)]).
12157665459056929000.000000
2> io:format("~p~n", [crypto:mod_exp(3,40,trunc(math:pow(10,21)))]).
12157665459056928801
JavaScript
> Math.pow(3,40)
12157665459056929000
You get 12157665459056929000 from implementations that use IEEE floating point for the computation. You get 12157665459056928801 from implementations that use arbitrary precision (bignum) arithmetic.
JavaScript can only represent distinct integers up to 2^53 (or ~16 significant digits). This is because all JavaScript numbers have an internal representation of IEEE-754 base-2 doubles.
As a consequence, the result from Math.pow (even if it were accurate internally) is brutally "rounded" to the nearest value a JavaScript number can hold - at this magnitude, that value is still an integer - and the resulting number is thus not the correct value, but the closest approximation of it JavaScript can handle.
I have put underscores above the digits that don't [entirely] make the "significant digit" cutoff, so it can be seen how this affects the results.
................____
12157665459056928801 - correct value
12157665459056929000 - closest JavaScript integer
Another way to see this is to run the following (which results in true):
12157665459056928801 == 12157665459056929000
From the The Number Type section in the specification:
Note that all the positive and negative integers whose magnitude is no greater than 2^53 are representable in the Number type ..
.. but not all integers with large magnitudes are representable.
The only way to handle this situation in JavaScript (such that information is not lost) is to use an external number encoding and pow function. There are a few different options mentioned in https://stackoverflow.com/questions/287744/good-open-source-javascript-math-library-for-floating-point-operations and Is there a decimal math library for JavaScript?
For instance, with big.js, the code might look like this fiddle:
var z = new Big(3)
var r = z.pow(40)
var str = r.toString()
// str === "12157665459056928801"
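On modern engines, native BigInt gives the same exact result without any library, as long as you only need integer arithmetic (a brief sketch, not part of the original answer):
var r = 3n ** 40n;         // BigInt exponentiation is exact
console.log(r.toString()); // "12157665459056928801"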
Can't say I know for sure, but this does look like a range problem.
I believe it is common for mathematics libraries to implement exponentiation using logarithms. This requires that both values be turned into floats, and thus the result is also technically a float. This is most telling when I ask MySQL to do the same calculation:
> select pow(3, 40);
+-----------------------+
| pow(3, 40) |
+-----------------------+
| 1.2157665459056929e19 |
+-----------------------+
It might be a courtesy that you are actually getting back a large integer.
I'm trying to generate LLVM textual IR containing floating point literals. For this to work reliably, I need to be able to convert floats to their hexadecimal literal representation. For example, here is what the results should be:
f2hex(0.0001) -> "0x3F1A36E2E0000000"
f2hex(0.1) -> "0x3FB99999A0000000"
f2hex(1.1) -> "0x3FF19999A0000000"
f2hex(3.33333) -> "0x400AAAA8E0000000"
f2hex(4.9) -> "0x40139999A0000000"
f2hex(111.99998) -> "0x405BFFFFA0000000"
I would settle for a detailed description of the algorithm (that does not rely on libraries or machine code which is unavailable from Javascript), but working Javascript code is even better.
The LLVM language reference describes the format here: http://llvm.org/docs/LangRef.html#simple-constants
What you're trying to do is dump the binary representation of the double. Here's how it can be done in C (note that the bytes must be reinterpreted through an integer type; passing a floating point value straight to %llX is undefined behavior):
double d = ...;                  // for a float, assign it to a double first
uint64_t bits;
char str[19];
memcpy(&bits, &d, sizeof bits);  // reinterpret the double's bytes as a 64-bit integer
sprintf(str, "0x%llX", (unsigned long long)bits);
To do this in Javascript you need to extract the binary representation of the float. This isn't trivial, but fortunately it seems to already have a solution here on Stackoverflow: Read/Write bytes of float in JS (and specifically, this answer seems convenient)
I ended up leveraging the source for the IEEE-754 Floating-Point Conversion Page. I added the following two functions:
function llvm_double_hex(input) {
    ieee64 = new ieee(64)
    ieee64.Dec2Bin(input.toString())
    ieee64.BinString =
        ieee64.Convert2Bin(ieee64.DispStr, ieee64.StatCond64,
                           ieee64.Result[0], ieee64.BinaryPower, false)
    return '0x' + ieee64.Convert2Hex()
};
function llvm_float_hex(input) {
    var d64 = llvm_double_hex(input);
    return d64.replace(/(.*)........$/, "$1A0000000");
};
The second calls the first and zeros out the last half as expected for LLVM IR.
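On current engines you can get the same result without that external source by reading the raw bytes through a DataView. A sketch under the assumption (consistent with the examples above) that the "float" variant means the value rounded to single precision and then stored as a double:
function f2hex(value) {
  var buf = new ArrayBuffer(8);
  var view = new DataView(buf);
  // Math.fround rounds to the nearest single-precision value first;
  // setFloat64 defaults to big-endian byte order.
  view.setFloat64(0, Math.fround(value));
  var hex = "";
  for (var i = 0; i < 8; i++) {
    hex += view.getUint8(i).toString(16).toUpperCase().padStart(2, "0");
  }
  return "0x" + hex;
}

console.log(f2hex(0.1)); // "0x3FB99999A0000000"
console.log(f2hex(1.1)); // "0x3FF19999A0000000"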
Some of my data are 64-bit integers. I would like to send these to a JavaScript program running on a page.
However, as far as I can tell, integers in most JavaScript implementations are 32-bit signed quantities.
My two options seem to be:
Send the values as strings
Send the values as 64-bit floating point numbers
Option (1) isn't perfect, but option (2) seems far less perfect (loss of data).
How have you handled this situation?
There is in fact a precision limitation at the JavaScript/ECMAScript level: integers are only exact up to 53 bits (they are stored in the mantissa of a "double-like" 8-byte memory buffer). So big numbers transmitted as JSON won't be deserialized as expected by a JavaScript client, which rounds them to its 53-bit resolution.
> parseInt("10765432100123456789")
10765432100123458000
See the Number.MAX_SAFE_INTEGER constant and Number.isSafeInteger() function:
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
Due to the resolution of floats in JavaScript, using "64-bit floating point numbers" as you proposed would suffer from the very same restriction.
IMHO the best option is to transmit such values as text. It would still be perfectly readable JSON content, and would be easy to work with at the JavaScript level.
A "pure string" representation is what OData specifies, for its Edm.Int64 or Edm.Decimal types.
What the Twitter API does in this case, is to add a specific ".._str": field in the JSON, as such:
{
"id": 10765432100123456789, // for JSON compliant clients
"id_str": "10765432100123456789", // for JavaScript
...
}
I like this option very much, since it would be still compatible with int64 capable clients. In practice, such duplicated content in the JSON won't hurt much, if it is deflated/gzipped at HTTP level.
Once transmitted as string, you may use libraries like strint – a JavaScript library for string-encoded integers to handle such values.
Update: newer versions of JavaScript engines include a BigInt object class, which is able to handle more than 53 bits. In fact, it can be used for arbitrarily large integers, so it is a good fit for 64-bit integer values. Note, however, that JSON.stringify() refuses to serialize BigInt values by default (it throws a TypeError), so you still need to convert them to strings yourself.
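For instance, you can convert BigInt values to strings in a replacer when serializing (a minimal sketch; the property name is made up):
const payload = { id: 10765432100123456789n };
const json = JSON.stringify(payload, (key, value) =>
  typeof value === "bigint" ? value.toString() : value
);
console.log(json); // {"id":"10765432100123456789"}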
This seems to be less a problem with JSON and more a problem with Javascript itself. What are you planning to do with these numbers? If it's just a magic token that you need to pass back to the website later on, by all means simply use a string containing the value. If you actually have to do arithmetic on the value, you could possibly write your own Javascript routines for 64-bit arithmetic.
One way that you could represent values in Javascript (and hence JSON) would be by splitting the numbers into two 32-bit values, eg.
[ 12345678, 12345678 ]
To split a 64-bit value into two 32-bit values in your server-side code, do something like this:
output_values[0] = (input_value >> 32) & 0xffffffff;
output_values[1] = input_value & 0xffffffff;
Then to recombine the two 32-bit values into a 64-bit value:
input_value = (((int64_t) output_values[0]) << 32) | output_values[1];
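If you do want to split and recombine on the client, BigInt makes it straightforward, since plain Number bitwise operators truncate to 32 bits (a sketch, assuming BigInt support):
// split a 64-bit BigInt into two exact 32-bit Numbers
function split64(value) {
  return [Number(value >> 32n), Number(value & 0xFFFFFFFFn)];
}

// recombine the two halves into the original BigInt
function join64(high, low) {
  return (BigInt(high) << 32n) | BigInt(low);
}

var parts = split64(10765432100123456789n);
console.log(join64(parts[0], parts[1]).toString()); // "10765432100123456789"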
Javascript's Number type (64 bit IEEE 754) only has about 53 bits of precision.
But, if you don't need to do any addition or multiplication, then you could keep each 64-bit value as a 4-character string, since JavaScript uses UTF-16 (16 bits per character).
For example, 1 could be encoded as "\u0000\u0000\u0000\u0001". This has the advantage that value comparison (==, >, <) works on strings as expected. It also seems straightforward to write bit operations:
// bitwise AND of two 64-bit values encoded as 4-character UTF-16 strings
function and64(a, b) {
    var r = "";
    for (var i = 0; i < 4; i++)
        r += String.fromCharCode(a.charCodeAt(i) & b.charCodeAt(i));
    return r;
}
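A quick usage example (hypothetical values): masking with the low 16 bits leaves an encoded 1 untouched:
var one  = "\u0000\u0000\u0000\u0001"; // encodes 1
var mask = "\u0000\u0000\u0000\uffff"; // encodes the low 16 bits
console.log(and64(one, mask) === one); // true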
The JS number representation is a standard IEEE double, so you can't represent a 64-bit integer exactly. You get 53 bits of actual integer precision in a double, but all JS bitops reduce to 32-bit precision (that's what the spec requires. yay!), so if you really need a 64-bit int in JS you'll need to implement your own 64-bit int logic library.
JSON itself doesn't care about implementation limits.
Your problem is that JS can't handle your data; the protocol is not at fault.
In other words, your JS client code has to use one of those non-perfect options.
This happened to me. All hell broke loose when sending large integers via JSON into JSON.parse. I spent days trying to debug. The problem was immediately solved when I transmitted the values as strings.
Use
{ "the_sequence_number": "20200707105904535" }
instead of
{ "the_sequence_number": 20200707105904535 }
To make it worse, JSON.parse seemed to behave identically in Firefox, Chrome, and Opera, almost as if they all shared the same implementation; Opera's error messages even contained Chrome URL references, much like the WebKit heritage shared between browsers.
console.log('event_listen[' + global_weird_counter + ']: to be sure, server responded with [' + aresponsetxt + ']');
var response = JSON.parse(aresponsetxt);
console.log('event_listen[' + global_weird_counter + ']: after json parse: ' + JSON.stringify(response));
The behaviour I got was the sort of thing you see when pointer math goes horribly bad. Ghosts were flying out of my workstation, wreaking havoc in my sleep. They are all exorcised now that I've switched to strings.