Some of my data are 64-bit integers. I would like to send these to a JavaScript program running on a page.
However, as far as I can tell, integers in most JavaScript implementations are 32-bit signed quantities.
My two options seem to be:
1. Send the values as strings
2. Send the values as 64-bit floating-point numbers
Option (1) isn't perfect, but option (2) seems far less perfect (loss of data).
How have you handled this situation?
There is in fact a limitation at the JavaScript/ECMAScript level: integer precision is limited to 53 bits (integers are stored in the mantissa of a "double-like" 8-byte memory buffer). So big numbers transmitted as JSON won't be deserialized as expected by the JavaScript client, which truncates them to its 53-bit resolution.
> parseInt("10765432100123456789")
10765432100123458000
See the Number.MAX_SAFE_INTEGER constant and Number.isSafeInteger() function:
The MAX_SAFE_INTEGER constant has a value of 9007199254740991. The reasoning behind that number is that JavaScript uses double-precision floating-point format numbers as specified in IEEE 754 and can only safely represent numbers between -(2^53 - 1) and 2^53 - 1.
Safe in this context refers to the ability to represent integers exactly and to correctly compare them. For example, Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2 will evaluate to true, which is mathematically incorrect. See Number.isSafeInteger() for more information.
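You can check the quoted behaviour directly in a console:
> Number.MAX_SAFE_INTEGER
9007199254740991
> Number.MAX_SAFE_INTEGER + 1 === Number.MAX_SAFE_INTEGER + 2
true
> Number.isSafeInteger(Number.MAX_SAFE_INTEGER + 1)
false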
Due to the resolution of floats in JavaScript, using "64-bit floating point numbers" as you proposed would suffer from the very same restriction.
IMHO the best option is to transmit such values as text. It would still be perfectly readable JSON content, and would be easy to work with at the JavaScript level.
A "pure string" representation is what OData specifies, for its Edm.Int64 or Edm.Decimal types.
What the Twitter API does in this case, is to add a specific ".._str": field in the JSON, as such:
{
  "id": 10765432100123456789,        // for JSON-compliant clients
  "id_str": "10765432100123456789",  // for JavaScript
  ...
}
I like this option very much, since it stays compatible with int64-capable clients. In practice, such duplicated content in the JSON won't hurt much if it is deflated/gzipped at the HTTP level.
Once transmitted as a string, you may use a library like strint (a JavaScript library for string-encoded integers) to handle such values.
Update: Newer versions of JavaScript engines include a BigInt object class, which can handle integers wider than 53 bits; in fact it supports arbitrarily large integers, so it is a good fit for 64-bit integer values. But there is no built-in JSON support for it: JSON.stringify() throws a TypeError on a BigInt value, so you still have to convert it to a string yourself (e.g. with a toJSON method or a replacer function), presumably for compatibility purposes.
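A minimal sketch of the round trip with BigInt, assuming the value arrives as a JSON string (like the id_str field shown above):
const obj = JSON.parse('{ "id_str": "10765432100123456789" }');
const id = BigInt(obj.id_str);  // full 64-bit precision preserved
console.log(id + 1n);           // 10765432100123456790n
// JSON.stringify() throws on BigInt, so convert back to a string explicitly:
const out = JSON.stringify({ id_str: id.toString() });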
This seems to be less a problem with JSON and more a problem with Javascript itself. What are you planning to do with these numbers? If it's just a magic token that you need to pass back to the website later on, by all means simply use a string containing the value. If you actually have to do arithmetic on the value, you could possibly write your own Javascript routines for 64-bit arithmetic.
One way that you could represent values in Javascript (and hence JSON) would be by splitting the numbers into two 32-bit values, e.g.
[ 12345678, 12345678 ]
To split a 64-bit value into two 32-bit values (on the server side, in C-like code), do something like this:
output_values[0] = (input_value >> 32) & 0xffffffff;
output_values[1] = input_value & 0xffffffff;
Then to recombine two 32-bit values to a 64-bit value:
input_value = (((int64_t) output_values[0]) << 32) | output_values[1];
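On the JavaScript side you then need to recombine the pair. A sketch using BigInt (plain Number arithmetic would only be exact while the result stays below 2^53); combine64 is just an illustrative name:
function combine64(hi, lo) {
  // >>> 0 forces both halves to be treated as unsigned 32-bit values
  return (BigInt(hi >>> 0) << 32n) | BigInt(lo >>> 0);
}
var value = combine64(12345678, 12345678); // BigInt holding the original 64-bit value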
Javascript's Number type (a 64-bit IEEE 754 double) only has 53 bits of integer precision.
But if you don't need to do any addition or multiplication, you could keep 64-bit values as 4-character strings, since JavaScript uses UTF-16 (each character stores 16 bits).
For example, 1 could be encoded as "\u0000\u0000\u0000\u0001". This has the advantage that value comparison (==, >, <) works on strings as expected. It also seems straightforward to write bit operations:
function and64(a, b) {
  var r = "";
  for (var i = 0; i < 4; i++)
    r += String.fromCharCode(a.charCodeAt(i) & b.charCodeAt(i));
  return r;
}
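To build such a string in the first place, you could pack two 32-bit halves into four 16-bit code units; toStr64 is a hypothetical helper name:
function toStr64(hi, lo) {
  return String.fromCharCode(
    (hi >>> 16) & 0xffff, hi & 0xffff,
    (lo >>> 16) & 0xffff, lo & 0xffff);
}
toStr64(0, 1) === "\u0000\u0000\u0000\u0001" // true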
The JS number representation is a standard IEEE double, so you can't represent a 64-bit integer. You get 53 bits of actual int precision in a double, but all JS bitops reduce to 32-bit precision (that's what the spec requires, yay!), so if you really need a 64-bit int in JS you'll need to implement your own 64-bit int logic library.
JSON itself doesn't care about implementation limits. Your problem is that JS can't handle your data; the protocol is not at fault. In other words, your JS client code has to use one of those non-perfect options.
This happened to me. All hell broke loose when sending large integers via JSON into JSON.parse. I spent days trying to debug. The problem was immediately solved when I transmitted the values as strings.
Use
{ "the_sequence_number": "20200707105904535" }
instead of
{ "the_sequence_number": 20200707105904535 }
To make it worse, it would seem that JSON.parse is implemented in some library shared between Firefox, Chrome and Opera, because they all behaved exactly the same. Opera's error messages even have Chrome URL references in them, almost like WebKit code shared between browsers.
console.log('event_listen[' + global_weird_counter + ']: to be sure, server responded with [' + aresponsetxt + ']');
var response = JSON.parse(aresponsetxt);
console.log('event_listen[' + global_weird_counter + ']: after json parse: ' + JSON.stringify(response));
The behaviour I got was the sort of stuff where pointer math goes horribly bad. Ghosts were flying out of my workstation, wreaking havoc in my sleep. They are all exorcised now that I have switched to strings.
Related
Is it possible to keep trailing or leading zeroes on a number in javascript, without using e.g. a string instead?
const leading = 003; // literal, leading
const trailing = 0.10; // literal, trailing
const parsed = parseFloat('0.100'); // parsed or somehow converted
console.log(leading, trailing, parsed); // desired: 003 0.10 0.100
This question has been regularly asked (and still is), yet I don't have a place I'd feel comfortable linking to (did I miss it?).
Fully analogous would be keeping any other aspect of the representation a number literal was entered as, although that is asked nowhere near as often:
console.log(0x10); // 16 instead of potentially desired 0x10
console.log(1e1); // 10 instead of potentially desired 1e1
For disambiguation, this is not about the following topics, for some of which I'll add links, as they might be of interest as well:
Padding to a set amount of digits, formatting to some specific string representation, e.g. How can i pad a value with leading zeroes?, How to output numbers with leading zeros in JavaScript?, How to add a trailing zero to a price
Why a certain string representation will be produced for some number by default, e.g. How does JavaScript determine the number of digits to produce when formatting floating-point values?
Floating point precision/accuracy problems, e.g. console.log(0.1 + 0.2) producing 0.30000000000000004, see Is floating point math broken?, and How to deal with floating point number precision in JavaScript?
No. A number stores no information about the representation it was entered as, or parsed from. It only relates to its mathematical value. Perhaps reconsider using a string after all.
If I had to guess, much of the confusion comes from the thought that numbers and their textual representations are either the same thing, or at least tightly coupled, with some kind of bidirectional binding between them. This is not the case.
The representations like 0.1 and 0.10, which you enter in code, are only used to generate a number. They are convenient names for what you intend to produce, not the resulting value. In this case, they are names for the same number, which has a lot of other aliases, like 0.100, 1e-1, or 10e-2. The actual value contains no information about what it came from or where. The conversion is a one-way street.
When displaying a number as text, by default (Number.prototype.toString), javascript uses an algorithm to construct one of the possible representations from the number. It can only use what's available, the number value, which also means it will produce the same result for two equal numbers. This implies that 0.1 and 0.10 will produce the same output.
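A quick illustration that different literals collapse to the same value, and hence to the same default output:
console.log(0.1 === 0.10);       // true, the same number
console.log((0.10).toString());  // "0.1"
console.log((10e-2).toString()); // "0.1"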
Concerning the number[1] value, javascript uses IEEE754-2019 float64[2]. When source code is being evaluated[3] and a number literal is encountered, the engine will convert the mathematical value the literal represents to a 64-bit value, according to IEEE754-2019. This means any information about the original representation in code is lost[4].
There is another problem, which is somewhat unrelated to the main topic. Javascript used to have an octal notation, with a prefix of "0". This means, that 003 is being parsed as an octal, and would throw in strict-mode. Similarly, 010 === 8 (or an error in strict-mode), see Why JavaScript treats a number as octal if it has a leading zero
In conclusion, when trying to keep information about some representation for a number (including leading or trailing zeroes, whether it was written as decimal, hexadecimal, and so on), a number is not a good choice. For how to achieve some specific representation other than the default, which doesn't need access to the originally entered text (e.g. pad to some amount of digits), there are many other questions/articles, some of which were already linked.
[1]: Javascript also has BigInt, but while it uses a different format, the reasoning is completely analogous.
[2]: This is a simplification. Engines are allowed to use other formats internally (and do, e.g. to save space/time), as long as they are guaranteed to behave like an IEEE754-2019 float64 in any regard, when observed from javascript.
[3]: E.g. V8 would convert to bytecode earlier than evaluation, already exchanging the literal. The only relevant thing is, that the information is lost, before we could do anything with it.
[4]: Javascript gives the ability to operate on code itself (e.g. Function.prototype.toString), which I will not discuss here much. Parsing the code yourself and storing the representation is an option, but it has nothing to do with how number works (you would be operating on code, a string). Also, I don't immediately see any sane reason to do so over the alternatives.
MDN states:
Numbers in JavaScript are "double-precision 64-bit format IEEE 754 values", according to the spec. This has some interesting consequences. There's no such thing as an integer in JavaScript, so you have to be a little careful with your arithmetic if you're used to math in C or Java.
This suggests all numbers are floats. Is there any way to use integers, not float?
There are really only a few data types in Javascript: Objects, numbers, and strings. As you read, JS numbers are all 64-bit floats. There are no ints.
Firefox 4 will have support for Typed Arrays, where you can have arrays of real ints and such: https://developer.mozilla.org/en/JavaScript_typed_arrays
Until then, there are hackish ways to store integer arrays as strings, plucking out each integer using charCodeAt().
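Where typed arrays are available, a sketch of what they give you: the backing store holds real 32-bit integers, complete with C-style wraparound, even though each element still reads back as a Number:
var a = new Int32Array(1);
a[0] = 2147483647; // maximum 32-bit signed value
a[0] = a[0] + 1;   // wraps around like a C int
console.log(a[0]); // -2147483648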
I don't think it ever will support integers. It isn't a problem as every unsigned 32 bit integer can be accurately represented as a 64 bit floating point number.
Modern JavaScript engines could be smart enough to generate special code when the numbers are integer (with safeguard checks to make sure of it), but I'm not sure.
Use this:
function int(a) { return Math.floor(a); }
And you can use it like this:
var result = int(2.1 + 8.7 + 9.3); //int(20.1)
alert(result); //alerts 20
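One caveat: Math.floor rounds toward negative infinity, so for negative inputs it differs from C-style truncation; Math.trunc (or | 0, within 32-bit range) matches truncation:
console.log(Math.floor(-2.5)); // -3
console.log(Math.trunc(-2.5)); // -2
console.log(-2.5 | 0);         // -2 (works for 32-bit values only)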
There is zero support for integers. It should be a simple matter of rounding off a variable every time you need it. The performance overhead isn't bad at all.
If you really need to use integers, just use the floor method on each number you encounter.
Other than that, the loose typing of Javascript is one of its strengths.
I need to generate a cryptographically secure 64-bit unsigned random integer in Javascript. The first problem is that Javascript only allows 64-bit signed integers, so 9223372036854775808 is the biggest supported integer without going into floating point use I think? To fix this I can use a big number library, no problem.
My Method:
var randNum = SHA256( randBigInt(128, 0) ) % 2^64;
Where SHA256() is a secure hash function and randBigInt() is defined below as a non-crypto PRNG; I'm giving it a 128-bit seed, so brute force shouldn't be a problem.
randBigInt(n,s) //return an n-bit random BigInt (n>=1). If s=1, then the most significant of those n bits is set to 1.
Is this a secure method to generate a cryptographically secure 64-bit random int? And importantly does taking the 2^64 mod guarantee 100% I have a 64-bit number?
An abstract example: say this number is prime (I know it isn't), and I will use it in the Galois Field [2^p], where p must be 64 bits so that every possible 1-63 bit number is a field element. In this query, my random int must be larger than any 63-bit number. And I'm not sure I'm correct in taking the 2^64 mod of a 256-bit hash output.
Thanks (hope that makes sense)
You can't turn a non-crypto-secure PRNG into a secure one simply by hashing the output in this way. You've only got as much entropy as you provided as input to seed the PRNG. Sure, the output looks random, but if the attacker knows your scheme (and you ought to assume they do, taking Kerckhoffs's principle as axiomatic) then they can guess and/or brute-force the inputs.
Also, you seem a little unclear about what you want from a "64-bit number". Do you want 64 bits of random data, in which case the chance of the highest bit being set should be 50%? Or do you want some other property, like a number between 2^63 and 2^64-1? What are you trying to do, anyway?
The output of a crypto-secure hash function (careful: I'm not sure that SHA256 on its own is ideal as a PRNG) is supposed to pass statistical tests, so you can be pretty sure that the probability of every individual bit being 1 is (very close to) 50%. This is great for generating symmetric keys, but that's not what you're hinting at here?
(As you go on to say, if you're talking about GF(2^p) you do indeed need a prime of a given size. If that's what you're doing, there are algorithms which generate probably and provably prime numbers, and you should look into those instead.)
All JavaScript numbers are actually IEEE 754 doubles, which means you can only exactly represent integers with magnitude <= 2^53. See this answer. So you will need a bignum library.
Taking the mod (%) gives you a number >= 0 and <= 2^64 - 1, and 2^64 - 1 is the largest 64-bit (unsigned) number. Note that it does not guarantee the top bit is set, only that the result fits in 64 bits.
Finally, if you feed non-random input into SHA256, you'll get non-random output. In the extreme, if randBigInt always returns 0, then SHA256 will always return the same thing.
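For what it's worth, browsers now also expose a CSPRNG directly through the Web Crypto API, which sidesteps the hash-a-PRNG construction entirely. A minimal sketch producing an unsigned 64-bit BigInt:
var words = new Uint32Array(2);
crypto.getRandomValues(words); // fills the array with cryptographically secure randomness
var rand64 = (BigInt(words[0]) << 32n) | BigInt(words[1]);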
Go to http://www.number.com.pt/index.html and, under PRECISION AND IMPLEMENTATION, get 1000 decimal pseudorandom numbers and see the code.
I am currently storing data inside an XML doc as binary, 20 digits long, each representing a boolean value.
<matrix>
  <resource type="single">
    <map>10001010100011110000</map>
    <name>Resource Title</name>
    <url>http://www.yoursite.com</url>
  </resource>
</matrix>
I am parsing this with jQuery and am currently using a for loop and charAt() to determine whether to do stuff if the value is == "1".
for (var i = 0; i < _mapLength; i++) {
  if (map.charAt(i) == "1") {
    // perform something here
  }
}
This takes place a few times as part of a HUGE loop that has run sort of slow. Someone told me that I should use bitwise operators to process this and it would run faster.
My question is either:
Can someone offer me an example of how I could do this? I've tried to read tutorials online and they seem to be flying right over my head. (FYI: I am planning on creating a Ruby script that will convert my binary 0 & 1's into bits in my XML.)
Or does anyone know of a good, simple (maybe even dumbed down version) tutorial or something that could help me grasp these bitwise operator concepts?
Assuming you have no more than 32 bits, you can use JavaScript's built-in parseInt() function to convert your string of 1s and 0s into an integer, and then test the flags using the & (and) operator:
var flags = parseInt("10001010100011110000", 2); // base 2

if (flags & 0x1) {
  // do something
}
...
See also: How to check my byte flag?
(question is on the use in C, but applies to the same operators in JS as well)
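One caveat about bit ordering: parseInt treats the leftmost character as the most significant bit, so flags & 0x1 tests the last character of the original string. To test the flag that was at string index i (counting from the left, as in the question's loop):
var bit = _mapLength - 1 - i;  // bit position for string index i
if (flags & (1 << bit)) {
  // map.charAt(i) was "1"
}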
Single ampersand (&, as opposed to &&) does bit-wise comparison. But first you need to convert your strings to numbers using parseInt().
var map = parseInt("10010", 2); // the 2 tells it to treat the string as binary
var maskForOperation1 = parseInt("10000", 2);
var maskForOperation2 = parseInt("01000", 2);
// ...
if (map & maskForOperation1) { Operation1(); }
if (map & maskForOperation2) { Operation2(); }
// ...
Be extremely wary. Javascript does not have integers; numbers are stored as 64-bit floating point. You get exact conversion out to 53 bits. If you have more flags than that, bad things will happen as your "number" gets rounded to the nearest representable floating-point number. (ouch!)
Also, bitwise manipulation will not help performance, because the floating point number will be converted to an integer, tested, and then converted back.
If you have several places that you want to check the flags, I'd set the flags on an object, preferably with names, like so:
var flags = {};
flags.use_apples = map.charAt(4) === "1";   // charAt returns "0" or "1", and both
flags.use_bananas = map.charAt(10) === "1"; // are truthy strings, so compare explicitly
etc...
Then you can test those flags inside your loop:
if (flags.use_apples) {
  do_apple_thing();
}
An object slot test will be faster than a bitwise check, since Javascript is not optimized for bitwise operators. However, if your loop is slow, I fear that decoding these flags is probably not the source of the slowness.
Bitwise operators will certainly be faster but only linearly and not by much. You'll probably save a few milliseconds (unless you're processing HUGE amounts of data in Javascript, which is most likely a bad idea anyway).
You should think about profiling other code in your loop to see what's slowing it down the most. What other algorithms, data structures and allocations do you have in there that could use refactoring?
I have an interesting question. I have been doing some work with JavaScript, and a database ID came out as "3494793310847464221". This is being entered into JavaScript as a number, yet JavaScript uses a different value, both when output to an alert and when passed to another JavaScript function.
Here is some example code to show the error to its fullest.
<html><head><script language="javascript">
alert(3494793310847464221);
var rar = 3494793310847464221;
alert(rar);
</script></head></html>
This has completely baffled me, and for once Google is not my friend...
By the way, the number is 179 more than the number there...
Your number is larger than the maximum allowed integer value in javascript (2^53). This has previously been covered by What is JavaScript's highest integer value that a Number can go to without losing precision?
In JavaScript, all numbers (even integral ones) are stored as IEEE-754 floating-point numbers. However, FPs have limited "precision" (see the Wikipedia article for more info), so your number isn't able to be represented exactly.
You will need to either store your number as a string or use some other "bignum" approach (unfortunately, I don't know of any JS bignum libraries off the top of my head).
Edit: After doing a little digging, it doesn't seem as if there's been a lot of work done in the way of JavaScript bignum libraries. In fact, the only bignum implementation of any kind that I was able to find is Edward Martin's JavaScript High Precision Calculator.
Use a string instead.
179 more is one way to look at it. Another way is: after the first 16 digits, every further digit is 0. I don't know the details, but it looks like your variable only stores up to 16 significant digits.
That number exceeds 2^53 - 1, and that's the problem; JavaScript stores all numbers as 64-bit doubles, which can only represent integers exactly up to 2^53 (and its bitwise operators are even more limited, working on 32-bit signed integers, a range from -2,147,483,648 to 2,147,483,647). Your best choice is to use strings, and create functions to manipulate the strings as numbers.
I wouldn't be all too surprised, if there already was a library that does what you need.
One possible solution is to use a BigInt library such as: http://www.leemon.com/crypto/BigInt.html
This will allow you to store integers of arbitrary precision, but it will not be as fast as standard arithmetic.
Since it's too big to be stored as an int, it's converted to a float. In JavaScript there are no explicit integer and float types; there's only the universal Number type.
"Can't increment and decrement a string easily..."
Really?
function incr_num(x) {
  var lastdigit = Number(x.charAt(x.length - 1));
  if (lastdigit != 9) return x.substring(0, x.length - 1) + "" + (lastdigit + 1);
  if (x == "9") return "10";
  return incr_num(x.substring(0, x.length - 1)) + "0";
}

function decr_num(x) {
  if (x == "0") return "(error: cannot decrement zero)";
  var lastdigit = Number(x.charAt(x.length - 1));
  if (lastdigit != 0) return x.substring(0, x.length - 1) + "" + (lastdigit - 1);
  if (x == "10") return "9"; // delete this line if you like leading zeros
  return decr_num(x.substring(0, x.length - 1)) + "9";
}
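For example, with the ID from the question:
incr_num("3494793310847464221")  // "3494793310847464222"
decr_num("3494793310847464221")  // "3494793310847464220"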
Just guessing, but perhaps the number is stored as a floating-point type, and the difference might be due to a rounding error. If that is the case, it might work correctly in another interpreter (browser, or whatever you are running it in).