I have this Java code:
nextDouble = 0.2515933907977884;
long numerator = (long) (nextDouble * (1L << 53));
I would like to be able to produce the same output this line produces in Java but within JavaScript.
nextDouble = 0.2515933907977884;
const numerator = nextDouble * (1 << 53);
Has anybody got an idea for how to replicate a Java long within JavaScript? I know there is BigInt in JavaScript, but the thing is, it doesn't support floating point numbers, so I am a bit stuck on what to do. Does anybody know any interesting libraries that could solve this issue?
Thank you in advance !
The problem is what 1<<53 does in javascript. It doesn't do what you think it does.
Here, try this in the console:
1<<30
> 1073741824
// okay.. looks good
1<<31
> -2147483648
// a negative num.. wha??
1<<32
> 1
// WHAA????????
1<<53
> 2097152
// That seems VERY low to me
1<<21
> 2097152
// How is that the same? as 1<<53??
Numbers in javascript are forced into being doubles, and doing bit shifts on a double is utterly ridiculous. Javascript nevertheless lets you, because, well, javascript. When you do silly things in javascript, javascript will give you silly answers, and in that way javascript is rather worthless: a programmer doing crazy stuff that cannot reasonably be interpreted as having any particular meaning should be answered with a clear error, not a wild stab in the dark. But that's just how javascript is. The usual way to deal with this crazy behaviour is to never ask javascript silly things, because it will give you silly answers, such as 1<<32 being 1*.
You may be wondering: how is asking to bit shift 1 by 53 positions 'crazy'? The answer is that bit shifts, given that they make no sense on doubles, are interpreted as "you wish to emulate 32-bit signed int behaviour", and that is exactly what javascript does, notably including the weirdish Java/C-ism that only the bottom 5 bits of the number on the RHS count. In other words, <<32 is the same thing as <<0; after all, the bottom 5 bits of 32 are 0. Said differently: take the right hand side number, divide it by 32, toss the result, and keep the remainder ('modulo'). 32 divided by 32 leaves a remainder of 0; 53 divided by 32 leaves a remainder of 21, and that's why 1<<53 in javascript prints 2097152.
So, in javascript your code is effectively doing the double multiplied by 2 to the 21st power, or theDouble * 2097152, whereas in java it is doing the double multiplied by 2 to the 53rd power, or theDouble * 9007199254740992.
Rather obviously then your answers are wildly different.
The fix seems trivial. 1<<53 may look like a nice way to convey the notion of 2 to the 53rd power (in bits: a 1 bit followed by 53 zeroes), but as syntax goes it just does not work that way in javascript. You can't use that syntax for this purpose. Try literally 9007199254740992.
var d = 0.2515933907977884;
const n = d * 9007199254740992;
n
> 2266151802091599
so that works.
If you have a need to derive the value 9007199254740992 from the value 53:
Math.pow(2, 53)
> 9007199254740992
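To mirror the original Java line completely, including the (long) cast, something like the following should work in a modern engine. This is only a minimal sketch: Math.trunc stands in for the truncation the Java cast performs, and for a value produced by Java's nextDouble() (which is some k / 2^53) the product is already a whole number, so it changes nothing here.
var nextDouble = 0.2515933907977884;
// same idea as (long) (nextDouble * (1L << 53)) in the Java snippet
const numerator = Math.trunc(nextDouble * Math.pow(2, 53));
numerator
> 2266151802091599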
note that you're dancing on the edge of disaster here: standard IEEE doubles use 53 bits for the mantissa (the significand), so you're at the very edge. Soon you'll get into the territory where 'x + 1' is equal to 'x', because the gap between representable numbers is larger than 1. You'll need to get cracking on BigInt if you want to move away from the precipice.
*) It is specced behaviour. But surely you agree this is highly surprising. How many people do you know who just know off-hand that javascript's << is specced to convert the operand to a 32-bit signed integer, take the RHS modulo 32, then operate, and then convert back to a double afterwards?
I found this snippet online along with this Stack Overflow post, which converts it into a TypeScript class.
I basically copied and pasted it verbatim (because I am not qualified to modify this sort of cryptographic code), but I noticed that VS Code has a little underline in the very last function:
/**
* generates a random number on [0,1) with 53-bit resolution
*/
nextNumber53(): number {
    let a = this._nextInt32() >>> 5;  // top 27 bits of one 32-bit draw
    let b = this._nextInt32() >>> 6;  // top 26 bits of a second draw
    return (a * 67108864.0 + b) * (1.0 / 9007199254740992.0); // (a * 2^26 + b) / 2^53
}
Specifically the 9007199254740992.0
VS Code says: "Numeric literals with absolute values equal to 2^53 or greater are too large to be represented accurately as integers." ts(80008)
I notice that if I subtract one from that number and instead make it 9007199254740991.0, then the warning goes away. But I don't necessarily want to modify the code and break it if this is indeed a significant difference.
Basically, I am unsure, because while my intuition says that having a numerical overflow is bad, my intuition also says that I shouldn't try to fix cryptographic code that was posted in several places, as it is probably correct.
But is it? Or should this number be subtracted by one?
9007199254740992 is the right value to use if you want uniform values in [0,1), i.e. 0.0 <= x < 1.0.
This is just the automated check going awry; this value can be accurately represented by a JavaScript Number, i.e. a 64-bit float. It's just 2^53, and binary IEEE 754 floats have no trouble with numbers of this form (it would even be represented accurately as a 32-bit float).
Using 9007199254740991 would make the range [0,1], i.e. 0.0 <= x <= 1.0. Most libraries generate uniform values in [0,1) and other distributions are derived from that, but you are obviously free to do whatever is best for your application.
Note that the actual chance of getting the maximum value back is 2^-53 (~1e-16), so you're unlikely to actually see it in practice.
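As a quick sanity check of the [0,1) claim, here is the worst case worked out. This is just a sketch; aMax and bMax are simply the largest values that >>> 5 and >>> 6 can produce, they are not part of the original class.
const aMax = Math.pow(2, 27) - 1;              // largest possible a (32 bits shifted right by 5)
const bMax = Math.pow(2, 26) - 1;              // largest possible b (32 bits shifted right by 6)
const maxNumerator = aMax * 67108864.0 + bMax; // 2^53 - 1 = 9007199254740991
maxNumerator * (1.0 / 9007199254740992.0)      // 0.9999999999999999, still strictly below 1
Dividing by 9007199254740991 instead would make this worst case exactly 1.0, which is why the range would become [0,1].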
When I try to perform a bitwise XOR operation in php and js, they produce different results in some cases, for example:
2166136261 ^ 101 = -2128831072 on browsers (js)
2166136261 ^ 101 = 2166136224 (php)
My understanding is that this is because php is using 64-bit integers, as opposed to the 32-bit integers js uses for bitwise operations.
Can anyone tell me the exact reason, and whether this could be solved so that both operations result in the same value? Thanks!
2,147,483,647 is the biggest possible positive value for a signed integer in 32-bit computing (it's 2^31 - 1: one of the 32 bits is the sign bit, so half of the range is reserved for negative numbers).
Once you use a number bigger than that in a 32-bit signed operation you start getting screwy results, as the computer interprets it as a negative number. See https://en.wikipedia.org/wiki/Integer_(computer_science)
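If the goal is simply to get the same value in both languages, the usual JavaScript trick is to reinterpret the signed 32-bit result as unsigned with an extra >>> 0. A minimal sketch; it only matches PHP while the true result fits in 32 bits:
(2166136261 ^ 101) >>> 0
> 2166136224
// >>> 0 keeps the same 32 bits but reads them back as an unsigned integer,
// which matches what 64-bit PHP prints for this value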
I have this formula in Excel:
=(1130000000000*F11^1.85)/(F19^1.85*(F13*(1-2/F15))^4.8655)
when
F11 = q
F19 = 150 (Constant)
F13 = d
F15 = sdr
I convert it to this
Math.round((1130000000000*Math.pow(q,1.85))/(Math.pow(150,1.85)*Math.pow(d*(1-2/sdr)),4.8655))
but the results are wrong
when
q = 120
d = 200
sdr = 17
the result should be 8.76
but I am getting long numbers
any help ?
Thanks
From the YUI blog:
JavaScript has a single number type: IEEE 754 Double Precision floating point. Having a single number type is one of JavaScript’s best features. Multiple number types can be a source of complexity, confusion, and error. A single type is simplifying and stabilizing.
Unfortunately, a binary floating point type has some significant disadvantages. The worst is that it cannot accurately represent decimal fractions, which is a big problem because humanity has been doing commerce in decimals for a long, long time. There would be advantages to switching to a decimal-based number system, but that is not going to happen. As a consequence, 0.1 + 0.2 === 0.3 is false, which is the source of a lot of confusion.
http://www.yuiblog.com/blog/2009/03/10/when-you-cant-count-on-your-numbers/
Completely ignore my previous answer; although it describes a problem you will definitely encounter should you continue with these large numbers, your actual issue is a misplaced bracket in your code. If you use this (with variable names simply replaced by values):
Math.round((1130000000000*Math.pow(120,1.85))/(Math.pow(150,1.85)*Math.pow((200*(1-2/17)),4.8655)))
It returns 9 (a rounded 8.76); the bracket you misplaced is the one right after the (1-2/17) term, which closed Math.pow too early.
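For reference, here is the same corrected expression with the variable names from the question. This is just a sketch; leaving out Math.round shows the unrounded value the question expects:
var q = 120, d = 200, sdr = 17;
var result = (1130000000000 * Math.pow(q, 1.85)) /
             (Math.pow(150, 1.85) * Math.pow(d * (1 - 2 / sdr), 4.8655));
result             // approximately 8.76
Math.round(result) // 9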
For future reference, the largest integer JavaScript can represent without losing precision is +/- 9007199254740991 (2^53 - 1, i.e. Number.MAX_SAFE_INTEGER).
References:
What is JavaScript's highest integer value that a Number can go to without losing precision?
ECMA-262 - The Number Type
When I write a float to a buffer, it does not read back the same value:
> var b = new Buffer(4);
undefined
> b.fill(0)
undefined
> b.writeFloatBE(3.14159,0)
undefined
> b.readFloatBE(0)
3.141590118408203
>
(^C again to quit)
>
Why?
EDIT:
My working theory is that because javascript stores all numbers as double precision, it's possible that the buffer implementation does not properly zero the other 4 bytes of the double when it reads the float back in:
> var b = new Buffer(4)
undefined
> b.fill(0)
undefined
> b.writeFloatBE(0.1,0)
undefined
> b.readFloatBE(0)
0.10000000149011612
>
I think it's telling that we have zeros for 7 digits past the decimal (well, 8 actually) and then there's garbage. I think there's a bug in the node buffer code that reads these floats. That's what I think. This is node version 0.10.26.
Floating point numbers ("floats") often cannot represent a value exactly; this is a common limitation seen across multiple languages, not just JavaScript / NodeJS. For example, I encountered something similar in C# when using float instead of double.
Double-precision floating point numbers are more accurate and should better meet your expectations. Try changing the above code to write to the buffer as a double instead of a float:
var b = new Buffer(8);
b.fill(0);
b.writeDoubleBE(3.14159, 0);
b.readDoubleBE(0);
This will return:
3.14159
EDIT:
Wikipedia has some pretty good articles on floats and doubles, if you're interested in learning more:
http://en.wikipedia.org/wiki/Floating_point
http://en.wikipedia.org/wiki/Double-precision_floating-point_format
SECOND EDIT:
Here is some code that illustrates the limitation of the single-precision vs. double-precision float formats, using typed arrays. Hopefully this can act as proof of this limitation, as I'm having a hard time explaining it in words:
var floats32 = new Float32Array(1),
    floats64 = new Float64Array(1),
    n = 3.14159;
floats32[0] = n;
floats64[0] = n;
console.log("float", floats32[0]);
console.log("double", floats64[0]);
This will print:
float 3.141590118408203
double 3.14159
Also, if my understanding is correct, single-precision floating point numbers can store up to 7 total significant digits, not 7 digits after the decimal point. This means that they should be accurate up to 7 total significant digits, which lines up with your results (3.14159 has 6 significant digits; taking the first 7 digits of 3.141590118408203 gives 3.141590, which equals 3.14159).
readFloat in node is implemented in C++, and the bytes are interpreted exactly the way your compiler stores/reads them, so I doubt there is a bug here. What I think is that "7 digits" is an incorrect assumption for a float. This answer suggests 6 digits (it's the value of std::numeric_limits<float>::digits10), so the result of readFloatBE is within the expected error.
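In newer engines you can also see the same single-precision rounding without any Buffer involved: Math.fround rounds a Number to the nearest 32-bit float value. A small sketch, separate from the buffer code above:
Math.fround(3.14159)
> 3.141590118408203
// the same value readFloatBE returns, so the extra digits come from the
// float -> double conversion itself, not from a bug in node's buffer code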
I was bored, so I started fiddling around in the console, and stumbled onto this (ignore the syntax error):
Some variable "test" has a value, which I multiply by 10K; it suddenly changes into a different number (you could call it a rounding error, but that depends on how much accuracy you need). I then multiply that number by 10, and it changes back/again.
That raises a few questions for me:
How inaccurate is Javascript? Has this been determined, i.e. is there a known margin of error that can be taken into account?
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
I'm not sure whether this should be a separate question, but I was actually trying to round numbers to a certain number of digits after the decimal point. I've researched it a bit, and have found two methods:
> Method A
function roundNumber(number, digits) {
var multiple = Math.pow(10, digits);
return Math.floor(number * multiple) / multiple;
}
> Method B
function roundNumber(number, digits) {
return Number(number.toFixed(digits));
}
Intuitively I like method B more (it looks more efficient), but I don't know what's going on behind the scenes so I can't really judge. Anyone have an idea on that? Or a way to benchmark this? And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
How inaccurate is Javascript?
Javascript uses standard double precision floating point numbers, so the precision limitations are the same as for any other language that uses them, which is most languages. It's the native format used by the processor to handle floating point numbers.
Is there a way to fix this? I.e. to do math in Javascript with complete accuracy (within the limitations of its datatype).
No. The precision limitation lies in the way that the number is stored. Floating point numbers don't have complete accuracy, so no matter how you do the calculations you can't achieve absolute accuracy, as the result goes back into a floating point number.
If you want complete accuracy then you need to use a different data type.
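For example, if the values are really decimal amounts (money, say), one common approach is to work in whole units such as integer cents, possibly with BigInt. A minimal sketch of the idea, not tied to any particular library:
// floats: the classic problem
0.1 + 0.2 === 0.3        // false
// integers (here as BigInt; plain Numbers work too below 2^53): exact
10n + 20n === 30n        // true: "10 cents + 20 cents is 30 cents"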
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
It's changing again.
When a number is converted to text to be displayed, it's rounded to a certain number of digits. The numbers that look like they are exact aren't; it's just that the limitations in precision don't show up.
When the number "changes back" it's just because the rounding again hides the limitations in the precision. Each calculation adds or subtracts a small inaccuracy in the number, and sometimes it just happens to take the number closer to the value that you had originally. Even though it looks like it's more accurate, it's actually less accurate, as each calculation adds a bit of uncertainty.
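One way to see this for yourself is to ask for more digits than the default display rounds to, e.g. with toPrecision. A small sketch using the classic 0.1 + 0.2 case rather than the exact value from the question's screenshot:
(0.1 + 0.2).toPrecision(20)
> "0.30000000000000004441"
(0.3).toPrecision(20)
> "0.29999999999999998890"
// neither value is exactly 0.3; the default display just rounds the second one's tiny error away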
Internally, JavaScript uses 64-bit IEEE 754 floating-point numbers, which are a widely used standard and usually guarantee about 16 digits of accuracy. The error you witnessed was on the 17th significant digit of the number and was reeeally tiny.
Is there a way to [...] do math in Javascript with complete accuracy (within the limitations of its datatype).
I would say that JavaScript's math is completely accurate within the limitations of its datatype. The error you witnessed was outside of those limitations.
Are you working with calculations that require a higher degree of precision than that?
Should the changed number after the second operation be interpreted as 'changing back to the original number' or 'changing again, because of the inaccuracy'?
The number never really became more or less accurate than the original value. It was only when the value was converted into a decimal value that a rounding error became apparent. But this was not a case of the value "changing back" to an accurate number. The rounding error was just too small to display.
And why is there no native round_to_this_many_decimals function? (one that returns an integer, not a string)
"Why is the language this way" questions are not considered very productive here, but it is easy to get around this limitation (assuming you mean numbers and not integers). This answer has 337 upvotes: +numb.toFixed(digits);, but note that if you try to display a number produced with that expression, there's no guarantee that it will actually display with only six digits. That's probably one of the reasons why JavaScript's "round to N places" function produces a string and not a number.
I came across the same issue a few times, and with further research I was able to solve it by using the library below.
Math.js Library
Sample
import {
atan2, chain, derivative, e, evaluate, log, pi, pow, round, sqrt
} from 'mathjs'
// functions and constants
round(e, 3) // 2.718
atan2(3, -3) / pi // 0.75
log(10000, 10) // 4
sqrt(-4) // 2i
pow([[-1, 2], [3, 1]], 2) // [[7, 0], [0, 7]]
derivative('x^2 + x', 'x') // 2 * x + 1
// expressions
evaluate('12 / (2.3 + 0.7)') // 4
evaluate('12.7 cm to inch') // 5 inch
evaluate('sin(45 deg) ^ 2') // 0.5
evaluate('9 / 3 + 2i') // 3 + 2i
evaluate('det([-1, 2; 3, 1])') // -7
// chaining
chain(3)
.add(4)
.multiply(2)
.done() // 14
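If the underlying precision issues are the real problem (rather than missing functions), math.js can also be configured to compute with BigNumbers instead of plain 64-bit floats. A minimal sketch based on the library's documented create/config options; it assumes a math.js version that provides the create()/all entry points:
import { create, all } from 'mathjs'

// configure math.js to use arbitrary-precision BigNumbers
const math = create(all, { number: 'BigNumber', precision: 64 })

math.evaluate('0.1 + 0.2').toString() // '0.3', no binary rounding error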