Javascript: Precision strange behavior [duplicate]

Possible Duplicate:
Is JavaScript's Math broken?
Suppose,
var x = .6 - .5;
var y = 10.2 - 10.1;
var z = .2 - .1;
Comparison result
x == y; // false
x == .1; // false
y == .1; // false
but
z == .1; // true
Why does JavaScript show this behavior?

Because floating point is not perfectly precise. You can end up with slight differences.
(Side note: I think you meant var x = .6 - .5; Otherwise, you're comparing -0.1 with 0.1.)
JavaScript uses IEEE-754 double-precision (64-bit) floating point. This is an extremely good way of approximating real numbers, but there is no way to represent every decimal fraction exactly in binary.
Some discrepancies are easier to see than others. For instance:
console.log(0.1 + 0.2); // "0.30000000000000004"
There are some JavaScript libraries out there that do the "decimal" thing à la C#'s decimal type or Java's BigDecimal, where the number is actually stored as a series of decimal digits. But they're not a panacea; they just have a different class of problems (try to represent 1/3 accurately with one, for instance). "Decimal" types/libraries are fantastic for financial applications, because we're used to dealing with the style of rounding required in financial work, but they tend to be slower than IEEE floating point.
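As a rough sketch of the decimal idea (plain scaled integers here, not any particular library's API):
var tenths = 1 + 2;       // 0.1 + 0.2, expressed in whole tenths so the addition is exact
console.log(tenths / 10); // "0.3"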
Let's output your x and y values:
var x = .6 - .5;
console.log(x); // "0.09999999999999998"
var y = 10.2 - 10.1;
console.log(y); // "0.09999999999999964"
No great surprise that 0.09999999999999998 is != to 0.09999999999999964. :-)
You can rationalize those a bit to make the comparison work:
function roundTwoPlaces(num) {
  return Math.round(num * 100) / 100;
}
var x = roundTwoPlaces(0.6 - 0.5);
var y = roundTwoPlaces(10.2 - 10.1);
console.log(x); // "0.1"
console.log(y); // "0.1"
console.log(x === y); // "true"
Or a more generalized solution:
function round(num, places) {
  var mult = Math.pow(10, places);
  return Math.round(num * mult) / mult;
}
Note that it's still possible for accuracy crud to be in the resulting number, but at least two numbers that are very, very, very close to each other, if run through round with the same number of places, should end up being the same number (even if that number isn't perfectly accurate).

Related

Calculating π using a Monte Carlo Simulation limitations

I have asked a question very similar to this before, so I will mention the previous solutions at the end. I have a website that calculates π with the client's CPU while storing the results on a server. So far I've got 701,766,448,388 points inside the circle and 893,547,800,000 in total; these numbers are calculated using the code below (working example at: https://jsfiddle.net/d47zwvh5/2/).
let inside = 0;
let size = 500;

for (let i = 0; i < iterations; i++) {
  var Xpos = Math.random() * size;
  var Ypos = Math.random() * size;
  var dist = Math.hypot(Xpos - size / 2, Ypos - size / 2);

  if (dist < size / 2) {
    inside++;
  }
}
The problem
(4 * 701,766,448,388) / 893,547,800,000 = 3.141483638
This is the result we get, which is correct up to the fourth decimal place; the 4 should be a 5.
Previous problems:
I messed up the distance calculation.
I placed the circle from 0...499 when it should have been 0...500.
I didn't use float, which decreased the 'resolution'
Disclaimer
It might just be that I've reached a limit, but this demonstration used 1 million points and got 3.16; considering I've got about 900 billion, I think the result should be more precise.
I do understand that if I want to calculate π this isn't the right way to go about it, but I just want to make sure that everything is right, so I was hoping someone could spot something wrong. Or do I just need more 'dots'?
EDIT: There were quite a few comments about how unrealistic the numbers were; those comments were right, and I have now updated the numbers.
You can easily estimate what kind of error (error bars) you should get; that's the beauty of Monte Carlo. For this, you have to compute the second moment and estimate the variance and standard deviation. The good thing is that the collected value will be the same as what you collect for the mean, because you're just adding up 1 after 1 after 1.
Then you can get an estimate of the simulation sigma and error bars for the desired value. Sorry, I don't know enough JavaScript, so the code here is in C#:
using System;

namespace Pi
{
    class Program
    {
        static void Main(string[] args)
        {
            ulong N = 1_000_000_000UL;    // number of samples
            var rng = new Random(312345); // RNG

            ulong v  = 0UL; // collecting mean values here
            ulong v2 = 0UL; // collecting squares, should be the same as mean

            for (ulong k = 0; k != N; ++k)
            {
                double x = rng.NextDouble();
                double y = rng.NextDouble();

                var r = (x * x + y * y < 1.0) ? 1UL : 0UL;
                v  += r;
                v2 += r * r;
            }

            var mean = (double)v / (double)N;
            var varc = ((double)v2 / (double)N - mean * mean) * ((double)N / (N - 1UL)); // variance
            var stdd = Math.Sqrt(varc);     // std.dev, should be sqrt(Pi/4 * (1 - Pi/4))
            var errr = stdd / Math.Sqrt(N);

            Console.WriteLine($"Mean = {mean}, StdDev = {stdd}, Err = {errr}");

            mean *= 4.0;
            errr *= 4.0;

            Console.WriteLine($"PI (1 sigma) = {mean - 1.0 * errr}...{mean + 1.0 * errr}");
            Console.WriteLine($"PI (2 sigma) = {mean - 2.0 * errr}...{mean + 2.0 * errr}");
            Console.WriteLine($"PI (3 sigma) = {mean - 3.0 * errr}...{mean + 3.0 * errr}");
        }
    }
}
After 10^9 samples I've got
Mean = 0.785405665, StdDev = 0.410540627166729, Err = 1.29824345388086E-05
PI (1 sigma) = 3.14157073026184...3.14167458973816
PI (2 sigma) = 3.14151880052369...3.14172651947631
PI (3 sigma) = 3.14146687078553...3.14177844921447
which looks about right. It is easy to see that in the ideal case the variance would be equal to (Pi/4)*(1-Pi/4). It is really not necessary to compute v2; just set it to v after the simulation.
Frankly, I don't know why you're not getting what's expected. Precision loss in summation might be the answer, or, what I suspect, your simulation is not producing independent samples due to seeding and overlapping sequences (so the actual N is a lot lower than 900 billion).
But using this method you can control the error and check how the computation is going.
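For reference, a minimal JavaScript sketch of the same estimate (my own translation, not part of the original answer; insideCount and totalCount are just the question's two totals):
function piWithErrorBars(insideCount, totalCount) {
  var mean = insideCount / totalCount;
  // for 0/1 samples the sum of squares equals the sum, so v2 === v
  var variance = (mean - mean * mean) * (totalCount / (totalCount - 1));
  var stddev = Math.sqrt(variance);          // should be near sqrt(Pi/4 * (1 - Pi/4))
  var err = stddev / Math.sqrt(totalCount);  // standard error of the mean
  return {
    pi: 4 * mean,
    oneSigma: [4 * (mean - err), 4 * (mean + err)],
    threeSigma: [4 * (mean - 3 * err), 4 * (mean + 3 * err)]
  };
}

console.log(piWithErrorBars(701766448388, 893547800000));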
UPDATE
I've plugged in your numbers to show that you're clearly underestimating the value. Code
N = 893_547_800_000UL;
v = 701_766_448_388UL;
v2 = v;
var mean = (double)v / (double)N;
var varc = ((double)v2 / (double)N - mean * mean ) * ((double)N/(N-1UL));
var stdd = Math.Sqrt(varc); // should be sqrt(Pi/4 (1-Pi/4))
var errr = stdd / Math.Sqrt(N);
Console.WriteLine($"Mean = {mean}, StdDev = {stdd}, Err = {errr}");
mean *= 4.0;
errr *= 4.0;
Console.WriteLine($"PI (1 sigma) = {mean - 1.0 * errr}...{mean + 1.0 * errr}");
Console.WriteLine($"PI (2 sigma) = {mean - 2.0 * errr}...{mean + 2.0 * errr}");
Console.WriteLine($"PI (3 sigma) = {mean - 3.0 * errr}...{mean + 3.0 * errr}");
And output
Mean = 0.785370909522692, StdDev = 0.410564786603016, Err = 4.34332975349809E-07
PI (1 sigma) = 3.14148190075886...3.14148537542267
PI (2 sigma) = 3.14148016342696...3.14148711275457
PI (3 sigma) = 3.14147842609506...3.14148885008647
So, clearly you have a problem somewhere (in the code? accuracy lost in representation? accuracy lost in summation? repeated/non-independent sampling?).
Any FPU operation can decrease your accuracy. Why not do something like this:
let inside = 0;
for (let i = 0; i < iterations; i++)
{
  var X = Math.random();
  var Y = Math.random();
  if (X * X + Y * Y <= 1.0) inside += 4;
}
If we probe the first quadrant of the unit circle, we do not need to scale the dynamic range by size, and we can compare squared distances, which gets rid of the sqrt. These changes should increase both the precision and the speed.
I am not a JavaScript coder, so I do not know what datatypes you use, but you need to be sure you do not exceed their precision. In that case you need to add more counter variables to ease the load on each one. For more info see: [edit1] integration precision.
As your numbers are rather big, I bet you have crossed that boundary already (there should be no fractional part, and trailing zeros are also suspicious). For example, a 32-bit float can store only integers up to
2^23 = 8388608
and your 698,565,481,000,000 is way above that, so even a ++ operation on such a variable causes precision loss, and when the exponent gets too big it even stops adding...
With integers this is not a problem, but once you cross the boundary, then depending on the internal format the value either wraps around zero or goes negative... But I doubt that is the case here, as then the result would be way off from PI.
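For what it's worth, JavaScript numbers are IEEE-754 doubles, so the exact-integer boundary is 2^53 rather than 2^23; a quick sanity check (a sketch, assuming an ES6 environment):
console.log(Number.MAX_SAFE_INTEGER);            // 9007199254740991 (2^53 - 1)
console.log(Number.isSafeInteger(893547800000)); // true - the question's total is still exact
console.log(9007199254740992 + 1);               // 9007199254740992 - past the boundary, ++ stops adding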

Subtracting 2 decimal numbers javascript

Hi, I'm trying to subtract 2 decimal numbers and it keeps returning some weird number.
var x = 0.00085022
var y = 0.00085050
var answer = x - y
alert(answer)
This is the number it's returning: -2.8000000000007186e-7
JavaScript numbers carry at most about 17 significant digits, and floating point arithmetic is not always 100% accurate:
http://www.w3schools.com/js/js_numbers.asp
Try this:
var x = 0.00085022 * 100000000;
var y = 0.00085050 * 100000000;
var answer = (x - y) / 100000000;
alert(answer);
The result is negative because you are subtracting the larger number from the smaller one, and since the difference is so tiny, JavaScript displays it in exponential notation, complete with floating-point noise in the trailing digits.
If we scale them up to integers we get:
var x = 85022
var y = 85050
var answer = x - y
alert(answer); // = -28
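Scaling the integer difference back down then prints the value you would expect:
alert(answer / 100000000); // -2.8e-7, i.e. -0.00000028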

Customize chances of picking element randomly

I have two defined objects: x and y
If I do the following, the chances of getting either x or y are equal – 1 in 2:
var primary = [x, y];
var secondary = primary[Math.floor(Math.random() * primary.length)];
This would give a 1 in 3 (smaller) chance of getting y:
var primary = [x, x, y];
// secondary unchanged
etc.
But I believe this is bad practice, because if I wanted to set an infinitesimal chance (e.g. 1 in 1e9) of getting y, I would have to do something extremely wasteful like this:
var primary = new Array();
for (var i = 1e9 - 1; i--; ) primary.push(x);
primary.push(y);
var secondary = primary[Math.floor(Math.random() * primary.length)];
Is there a better way to do this in JavaScript?
Without digging too much into the ECMAScript specification and its actual implementations, Math.random() produces a number in the range [0, 1) with a uniform distribution. This means that the number is 50% likely to be less than 0.5, 25% likely to be less than 0.25, 10% likely to be less than 0.1, and so on.
To get x 17% of the time (and, conversely, y 83% of the time), one could use the corresponding number to be a gateway for Math.random()’s results:
const x = "x";
const y = "y";
function getRandom() {
  return Math.random() < 0.17 ? x : y;
}
const value = getRandom();
// 17% "x", 83% "y"
This works fine for two values, but working with lists of 3+ elements would require different thinking.
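For three or more elements, one common approach (a sketch, not taken from the answer above) is to pair each value with a weight and walk a cumulative sum against a single Math.random() draw:
function pickWeighted(entries) {
  // entries: array of { value, weight } objects with non-negative weights
  const total = entries.reduce((sum, e) => sum + e.weight, 0);
  let roll = Math.random() * total;
  for (const entry of entries) {
    roll -= entry.weight;
    if (roll < 0) return entry.value;
  }
  return entries[entries.length - 1].value; // guard against rounding at the upper edge
}

const secondary = pickWeighted([
  { value: "x", weight: 1e9 - 1 },
  { value: "y", weight: 1 } // roughly a 1 in 1e9 chance
]);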

Implementing an accurate cbrt() function without extra precision

In JavaScript, there is no native cbrt method available. In theory, you could use a method like this:
function cbrt(x) {
  return Math.pow(x, 1 / 3);
}
However, this fails because identities in mathematics don't necessarily apply to floating point arithmetic. For example, 1/3 cannot be accurately represented using a binary floating point format.
An example of when this fails is the following:
cbrt(Math.pow(4, 3)); // 3.9999999999999996
This gets worse as the number gets larger:
cbrt(Math.pow(165140, 3)); // 165139.99999999988
Is there any algorithm which is able to calculate a cube root value to within a few ULP (preferably 1 ULP if possible)?
This question is similar to Computing a correctly rounded / an almost correctly rounded floating-point cubic root, but keep in mind that JavaScript doesn't have any higher-precision number types to work with (there is only one number type in JavaScript), nor is there a built-in cbrt function to begin with.
You can port an existing implementation, like this one in C, to JavaScript. That code has two variants: an iterative one that is more accurate, and a non-iterative one.
Ken Turkowski's implementation relies on splitting up the radicand into mantissa and exponent and then reassembling it, but this is only used to bring it into the range between 1/8 and 1 for the first approximation by enforcing a binary exponent between -2 and 0. In Javascript, you can do this by repeatedly dividing or multiplying by 8, which should not affect accuracy, because it is just an exponent shift.
The implementation as shown in the paper is accurate for single-precision floating-point numbers, but Javascript uses double-precision numbers. Adding two more Newton iterations yields good accuracy.
Here's the Javascript port of the described cbrt algorithm:
Math.cbrt = function(x)
{
  if (x == 0) return 0;
  if (x < 0) return -Math.cbrt(-x);

  var r = x;
  var ex = 0;

  while (r < 0.125) { r *= 8; ex--; }
  while (r > 1.0) { r *= 0.125; ex++; }

  r = (-0.46946116 * r + 1.072302) * r + 0.3812513;

  while (ex < 0) { r *= 0.5; ex++; }
  while (ex > 0) { r *= 2; ex--; }

  r = (2.0 / 3.0) * r + (1.0 / 3.0) * x / (r * r);
  r = (2.0 / 3.0) * r + (1.0 / 3.0) * x / (r * r);
  r = (2.0 / 3.0) * r + (1.0 / 3.0) * x / (r * r);
  r = (2.0 / 3.0) * r + (1.0 / 3.0) * x / (r * r);

  return r;
}
I haven't tested it extensively, especially not in badly defined corner cases, but the tests and comparisons with pow I have done look okay. Performance is probably not so great.
Math.cbrt has been added to the ES6 / ES2015 specification, so at least first check to see whether it is defined. It can be used like:
Math.cbrt(64); //4
instead of
Math.pow(64, 1/3); // 3.9999999999999996
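So a reasonable pattern (just a sketch) is to use the native function when present and fall back to an approximation otherwise:
if (typeof Math.cbrt !== "function") {
  Math.cbrt = function (x) {
    // naive fallback; the Turkowski-style port above is more accurate
    return x < 0 ? -Math.pow(-x, 1 / 3) : Math.pow(x, 1 / 3);
  };
}

console.log(Math.cbrt(64)); // 4 natively, 3.9999999999999996 with the naive fallback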
You can use the standard formula for pow computation:
x^y = exp2(y * log2(x))
x^(1/3) = exp2(log2(x) * (1/3))
        = exp2(log2(x) / 3)
The base for log/exp can be anything, but 2 is directly implemented on most FPUs. Now you divide by 3... and 3.0 is represented exactly in floating point.
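A minimal sketch of that idea in JavaScript (there is no exp2, so Math.pow(2, e) stands in for it, and Math.log2 is assumed to be available):
function cbrtViaLog(x) {
  if (x === 0) return 0;
  var sign = x < 0 ? -1 : 1;
  // x^(1/3) = 2^(log2(|x|) / 3); dividing by 3 is a single, well-behaved rounding step
  return sign * Math.pow(2, Math.log2(Math.abs(x)) / 3);
}

console.log(cbrtViaLog(64)); // 4 here; in general the accuracy depends on the engine's log2/pow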
Or you can use a bit search:
1. find the exponent of the output (e ≈ 1/3 of the integer-part bit count of x)
2. create an appropriate fixed-point number y (mantissa = 0 and exponent = e)
3. start a binary search from the MSB bit of y:
   - toggle the bit to one
   - if (y*y*y > x), toggle the bit back to zero
   - loop #3 with the next bit (stop after the LSB)
The result of the binary search is as precise as it can be (no other method can beat it); you need mantissa-bit-count iterations for it. You have to use FP for the computation, so converting your y to float is just copying the mantissa bits and setting the exponent.
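A hedged JavaScript sketch of that search, working on power-of-two increments of the result rather than raw mantissa bits (JavaScript does not expose those directly):
function cbrtBitSearch(x) {
  if (x === 0) return 0;
  if (x < 0) return -cbrtBitSearch(-x);

  // smallest power of two whose cube exceeds x (plays the role of the MSB)
  var step = 1;
  while (step * step * step <= x) step *= 2;

  var y = 0;
  // halve the increment each round; keep it only if the cube stays <= x
  while (step > 0 && y + step !== y) {
    var candidate = y + step;
    if (candidate * candidate * candidate <= x) y = candidate;
    step /= 2;
  }
  return y;
}

console.log(cbrtBitSearch(165140 * 165140 * 165140)); // 165140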
See pow on integer arithmetics in C++

Round float values in JavaScript

I am trying to round a value in JS, but I am not getting a rounded value; this is what I have:
$(document).on("click", ".open-AddBookDialog", function () {
  var agentPercentage = parseFloat($('#<%=ddlSplitPerc.ClientID%>').val()).toFixed(2);
  var percMod = 1.0 - agentPercentage;
  percMod = Math.ceil(percMod * 100) / 100;

  var dropdownAgentPerc = $('#<%=ddlPercSplitAgent.ClientID %>');
  dropdownAgentPerc.val(percMod);
  dropdownAgentPerc.change();

  $('#AddNewSplitAgentLife').modal('show');
});
For example, agentPercentage is 0.7, and when I subtract 1 - 0.7 I get this value:
0.30000000000000004
What do you think I should change? I tried the Math.ceil example as well, but I am getting 0.31 as a result.
The solution is in another question already asked: Dealing with float precision in Javascript
(Math.floor(y/x) * x).toFixed(2);
It should work if you subtract .5 from the value you pass to Math.ceil:
percMod = Math.ceil((percMod * 100) - .5) / 100;
Math.ceil will round up for any fractional part above .000, so to simulate the typical rounding behavior of only rounding up fractional parts of .500 and above, you subtract .5.
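Applied to the question's numbers, outside the click handler (a sketch of just the arithmetic):
var agentPercentage = 0.7;
var percMod = 1.0 - agentPercentage;              // 0.30000000000000004
percMod = Math.ceil((percMod * 100) - .5) / 100;  // 0.3
console.log(percMod);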
