I'm translating some C++ code relating to PMP for attitude control, and part of the code uses FLT_EPSILON.
The code does the following:
while (angle > ((float)M_PI+FLT_EPSILON))
M_PI is simple enough, but I'm not sure what to do with FLT_EPSILON. A Google search told me:
This is the difference between 1 and the smallest floating point
number of type float that is greater than 1. It's supposed to be no
greater than 1E-5.
However other sources state values like 1.192092896e-07F.
I'm not 100% clear on why it's being used. I suspect it has to do with the granularity of float. If someone could clarify what it's attempting to do in C++, and whether this is a concern for JavaScript, that would be very helpful.
I'm also not sure how JavaScript handles values like these internally, so any help would be appreciated.
As an FYI, the code I'm translating is as follows (sourced from QGroundControl, it's open source):
float limitAngleToPMPIf(float angle) {
    if (angle > -20 * M_PI && angle < 20 * M_PI) {
        while (angle > ((float)M_PI + FLT_EPSILON)) {
            angle -= 2.0f * (float)M_PI;
        }
        while (angle <= -((float)M_PI + FLT_EPSILON)) {
            angle += 2.0f * (float)M_PI;
        }
    } else {
        // Approximate
        angle = fmodf(angle, (float)M_PI);
    }
    return angle;
}
--- edit ---
Just realised that fmodf isn't defined in the snippet. Apparently it's a standard library function that does the following:
The fmod() function computes the floating-point remainder of dividing
x by y. The return value is x - n * y, where n is the quotient of x /
y, rounded toward zero to an integer.
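For comparison, JavaScript's % operator computes the same truncated-division remainder for Numbers, so this part translates directly:

console.log(7.5 % 2);   // 1.5
console.log(-7.5 % 2);  // -1.5 (the sign follows the dividend, as with fmod)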
This code is attempting to keep angle within an interval around zero.
However, managing angles in this way is troublesome and requires considerable care. If it is not accompanied by documentation explaining what is being done, why, and the various errors and specifications that are involved, then it was done improperly.
It is impossible for this sort of angle reduction to keep accumulated changes accurately over a long sequence of changes, because M_PI is only an approximation to π. Therefore, this sort of reduction is generally only useful for aesthetic or interface effect. E.g., as some angle changes, reducing it can keep it from growing to a point where there may be large jumps in calculation results due to floating-point quantization or other calculation errors that would be annoying to a viewer. Thus, keeping the angle within an interval around zero makes the display look good, even though it diverges from what real physics would do over the long term.
The choice of FLT_EPSILON appears to be arbitrary. FLT_EPSILON is important for its representation of the fineness of the float format. However, at the magnitude of M_PI, the ULP (finest change) of a float is actually 2*FLT_EPSILON. Additionally, JavaScript performs the addition with double-precision arithmetic, and FLT_EPSILON is of no particular significance in this double format. I suspect the author simply chose FLT_EPSILON because it was a convenient “small” number. I expect the code would work just as well if angle > M_PI had been written, without the embellishment, and (float)M_PI were changed to M_PI everywhere it appears. (The addition of FLT_EPSILON may have been intended to add some hysteresis to the system, so that it did not frequently toggle between values near π and values near –π. However, the criterion I suggest, angle > M_PI, also includes some of the same effect, albeit a smaller amount. That might not be apparent to somebody inexperienced with floating-point arithmetic.)
Also, it looks like angle = fmodf(angle, (float) M_PI); may be a bug, since this is reducing modulo M_PI rather than 2*M_PI, so it will add 180º to some angles, producing a completely incorrect result.
It is possible that replacing the entire function body with return fmod(angle, 2*M_PI); would work satisfactorily.
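Translating that suggestion into JavaScript, where all arithmetic is double precision and Math.PI plays the role of (float)M_PI, a minimal sketch that wraps an angle into (-π, π] might look like:

function limitAngleToPMPI(angle) {
    // % is an fmod-style remainder, so a already lies in (-2*PI, 2*PI);
    // a single correction then brings it into (-PI, PI].
    var a = angle % (2 * Math.PI);
    if (a > Math.PI) {
        a -= 2 * Math.PI;
    } else if (a <= -Math.PI) {
        a += 2 * Math.PI;
    }
    return a;
}

No FLT_EPSILON appears here because, as noted above, it has no particular significance in the double format.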
I was looking for a way to perform a linear curve fit in Javascript. I found several libraries, but they don't propagate errors. What I mean is, I have data and associated measurement errors, like:
x = [ 1.0 +/- 0.1, 2.0 +/- 0.1, 3.1 +/- 0.2, 4.0 +/- 0.2 ]
y = [ 2.1 +/- 0.2, 4.0 +/- 0.1, 5.8 +/- 0.4, 8.0 +/- 0.1 ]
Where my notation a +/- b means { value : a, error : b }.
I want to fit this to y = mx + b, and find m and b with their propagated errors. I know the Least Squares Method algorithm, which I could implement, but it only takes errors on the y variable into account, and I have distinct errors in both.
I also could not find a library in Javascript to do that; but if there is an open source lib in other language, I can inspect it to find out how and implement it in JS.
Programs like Origin or plotly can do this, but I don't know how. The result for this example dataset is:
m = 1.93 +/- 0.11
b = 0.11 +/- 0.30
The very useful book Numerical Recipes provides a method to fit data to a straight line, with uncertainties in both X and Y coordinates. It can be found online in these two versions:
Numerical Recipes in C, in section 15-3
Numerical Recipes in Fortran 77, in section 15-3
The method is based on minimizing χ² (chi-square), which is similar to least squares but takes into account the individual uncertainty of each data point. When the uncertainty σᵢ is on the Y axis only, a weight proportional to 1/σᵢ² is assigned to the point in the calculations. When the data has uncertainties in the X and Y coordinates, given by σxᵢ and σyᵢ respectively, the fit to a straight line

y(x) = a + b · x

uses a χ² in which each point has a weight proportional to

$$w_i = \frac{1}{\sigma_{y_i}^2 + b^2 \sigma_{x_i}^2}$$
The detailed method and the code (in C or Fortran) can be found in the book. Due to copyright, I cannot reproduce them here.
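As an illustration of the objective only (not the book's code), the quantity being minimized can be written out in JavaScript as follows, where x, y, sx, sy are arrays of the values and their uncertainties:

// chi2(a, b) = sum_i (y_i - a - b*x_i)^2 / (sy_i^2 + b^2 * sx_i^2)
// Minimizing this over a and b (for instance with a 1-D search over b,
// solving for a in closed form at each step) gives the straight-line fit
// with uncertainties in both coordinates.
function chiSquare(a, b, x, y, sx, sy) {
    var chi2 = 0;
    for (var i = 0; i < x.length; i++) {
        var r = y[i] - (a + b * x[i]);                  // residual
        var w = sy[i] * sy[i] + b * b * sx[i] * sx[i];  // effective variance
        chi2 += (r * r) / w;
    }
    return chi2;
}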
It seems that the least squares (LS) method is indeed a good direction.
Given a list of x & y values, least squares returns the values of m & b that minimize
$$\sum_{i} (m x_{i} + b - y_{i})^{2}.$$
The benefits of the LS method are that you will find the optimal values for the parameters, the computation is fast, and you will probably be able to find an implementation in JavaScript, like this one.
Now you need to take care of the margins of error that you have. Note that the way you treat the margins of error is more of a "business question" than a mathematical one: different treatments can be chosen for different needs, and they are all equally valid from a mathematical point of view.
Without more knowledge about your needs, I suggest turning each point (x, y) into 4 points based on the margins:
(x+e,y+e), (x-e, y+e), (x+e, y-e), (x-e,y-e).
The benefits of this representation are that it is simple, it gives weight to the edges of the margin boundaries (which are typically the more sensitive parts), and, best of all, it is a reduction. Hence, once you generate the new values you can use a regular LS implementation without having to implement such an algorithm on your own.
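A sketch of that reduction, using the asker's { value, error } notation (the helper names are my own):

// Expand each measured point into the 4 corners of its error box,
// then fit the enlarged point set with ordinary least squares.
function expandCorners(xs, ys) {
    var px = [], py = [];
    for (var i = 0; i < xs.length; i++) {
        [-1, 1].forEach(function (sx) {
            [-1, 1].forEach(function (sy) {
                px.push(xs[i].value + sx * xs[i].error);
                py.push(ys[i].value + sy * ys[i].error);
            });
        });
    }
    return { x: px, y: py };
}

// Ordinary least squares: minimizes sum((m*x + b - y)^2).
function leastSquares(x, y) {
    var n = x.length, sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (var i = 0; i < n; i++) {
        sx += x[i];
        sy += y[i];
        sxx += x[i] * x[i];
        sxy += x[i] * y[i];
    }
    var m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    return { m: m, b: (sy - m * sx) / n };
}

var pts = expandCorners(
    [{ value: 1.0, error: 0.1 }, { value: 2.0, error: 0.1 },
     { value: 3.1, error: 0.2 }, { value: 4.0, error: 0.2 }],
    [{ value: 2.1, error: 0.2 }, { value: 4.0, error: 0.1 },
     { value: 5.8, error: 0.4 }, { value: 8.0, error: 0.1 }]
);
console.log(leastSquares(pts.x, pts.y)); // { m: ..., b: ... } for the expanded set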
I have a feeling that this might not be possible, but in case anyone has any ideas, it would be of great help.
For any floating-point arithmetic operation, is it possible to determine whether the operation caused a precision loss? E.g. if I calculate z = x/y, I want to determine whether z is exact or whether some precision was lost during the operation.
Why I need this:
I am doing some mathematical operations using interval arithmetic. If the result is not precise, I need to return a range in which the exact result lies, in the form [a,b]. Currently, for every operation I assume a precision loss: if the result is x, I return [previousClosestFPNumber(x), nextClosestFPNumber(x)]. This works flawlessly. However, for very complicated equations the range becomes too large. If I could determine when precision has been lost, I could widen the range only in that case.
If you're losing precision in the operation f(x), then it should be detectable by applying the inverse operation to the result and seeing if you get back something other than the original value: in mathematical parlance, checking whether f⁻¹(f(x)) != x.
For example, something like:
var x = y * z;
var y2 = x / z;
if (y2 !== y) {
    // precision was lost
}
If that turns out not to be suitable, you may want to consider a bignum library where you don't lose precision, such as javascript-bignum (though I'm sure there would be others around as well).
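As an aside, for addition specifically there is an exact test that avoids the round-trip: Knuth's TwoSum error-free transformation recovers the exact rounding error of a + b. A sketch:

// TwoSum: for s = a + b in floating point, compute e such that
// a + b = s + e holds exactly. The addition was exact iff e === 0.
function addIsExact(a, b) {
    var s = a + b;
    var bb = s - a;  // the part of s contributed by b
    var e = (a - (s - bb)) + (b - bb);
    return e === 0;
}

console.log(addIsExact(0.5, 0.25)); // true  (0.75 is representable)
console.log(addIsExact(0.1, 0.2));  // false (0.3 is not exactly representable)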
My Challenge
I am presently working my way through reddit's /r/dailyprogrammer challenges using Node.js and have caught a snag. Being that I'm finishing out day 3 with this single exercise, I've decided to look for help. I refuse to just move on without knowing how.
Challenge #6: Your challenge for today is to create a program that can calculate pi accurately to at least 30 decimal places.
My Snag
I've managed to obtain the precision arithmetic I was seeking via mathjs, but am left stumped on how to obtain 30 decimal places. Does anyone know a library, workaround or config that could help me reach my goal?
/*jslint node: true */
"use strict";
var mathjs = require('mathjs'),
    math = mathjs();

console.log(Math.PI);

function getPi(i, x, pi) {
    if (i === undefined) {
        // First two terms of the Nilakantha series: 3 + 4/(2*3*4)
        pi = math.eval('3 + (4/(2*3*4))');
        getPi(2, 4, pi);
    } else {
        // Each pass adds the next pair of terms: -4/(x(x+1)(x+2)) + 4/((x+2)(x+3)(x+4)).
        // The running total is passed in through the scope as 'p'; a bare 'pi'
        // inside eval() would be mathjs's built-in constant, not the accumulator.
        // (x + 1) etc. must be parenthesized, or '+' concatenates strings instead.
        pi = math.eval(
            'p - (4/(' + x + '*' + (x + 1) + '*' + (x + 2) + '))' +
            ' + (4/(' + (x + 2) + '*' + (x + 3) + '*' + (x + 4) + '))',
            { p: pi }
        );
        x += 4;
        i += 1;
        if (x < 20000) {
            getPi(i, x, pi);
        } else {
            console.log(pi);
        }
    }
}
getPi();
I have made my way through many iterations of this, and in this example I am using the Nilakantha series: π = 3 + 4/(2·3·4) − 4/(4·5·6) + 4/(6·7·8) − ...
This question uses some algorithm to compute digits of pi, apparently to arbitrary precision. Comments on that question indicate possible sources, in particular this paper. You could easily port that approach to JavaScript.
This algorithm has, as an alternating series, an error of about 4/n^3 if the last term is 4/((n-2)*(n-1)*n), that is, using n-3 fraction terms. To get an error smaller than 0.5*10^(-30), you would need (at least) n=2*10^10 terms of this series. With that number, you have to take care of floating point errors, especially of cancellation effects when adding a large number and a small number. The best way to avoid that is to start the summation with the smallest term and then go backwards. Or do the summation forward, but with a precision of 60 decimals, to then round the result to 30 decimals.
It would be better to use the faster converging Machin formula, or one of the Machin-like formulas, if you want to have some idea of what exactly you are computing. If not, then use one of the super fast formulas used for billions of digits, but for 30 digits this is likely overkill.
See wikipedia on the approximations of pi.
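For illustration, here is a sketch of the Machin formula π = 16·arctan(1/5) − 4·arctan(1/239) using JavaScript's native BigInt for fixed-point arithmetic (BigInt needs Node 10.4+, an assumption beyond the asker's mathjs setup):

// Numbers are held as integers scaled by 10^(30 + 10 guard digits).
const DIGITS = 30n;
const GUARD = 10n;
const SCALE = 10n ** (DIGITS + GUARD);

// arctan(1/x) * SCALE via the series 1/x - 1/(3x^3) + 1/(5x^5) - ...
function arctanInv(x) {
    let sum = 0n;
    let power = SCALE / x;  // scaled 1/x^(2k+1)
    for (let k = 0n; power !== 0n; k += 1n) {
        const term = power / (2n * k + 1n);
        sum += (k % 2n === 0n) ? term : -term;
        power /= x * x;
    }
    return sum;
}

const pi = 16n * arctanInv(5n) - 4n * arctanInv(239n);
console.log((pi / 10n ** GUARD).toString());
// -> 3141592653589793238462643383279 (pi * 10^30, truncated)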
I'm having problems generating normally distributed random numbers (mu=0, sigma=1) using JavaScript.
I've tried Box-Muller's method and the ziggurat method, but the mean of the generated series of numbers comes out as 0.0015 or -0.0018, which is very far from zero. Over 500,000 randomly generated numbers this is a big issue: it should be close to zero, something like 0.000000000001.
I cannot figure out whether it's a problem with the method, or whether JavaScript's built-in Math.random() generates numbers that are not exactly uniformly distributed.
Has someone found similar problems?
Here you can find the ziggurat function:
http://www.filosophy.org/post/35/normaldistributed_random_values_in_javascript_using_the_ziggurat_algorithm/
And below is the code for the Box-Muller:
function rnd_bmt() {
    var x = 0, y = 0, rds, c;

    // Get two random numbers from -1 to 1.
    // If the radius is zero or greater than 1, throw them out and pick two
    // new ones. Rejection sampling throws away about 20% of the pairs.
    do {
        x = Math.random() * 2 - 1;
        y = Math.random() * 2 - 1;
        rds = x * x + y * y;
    } while (rds === 0 || rds > 1);

    // This magic is the Box-Muller Transform
    c = Math.sqrt(-2 * Math.log(rds) / rds);

    // It always creates a pair of numbers. I'll return them in an array.
    // This function is quite efficient so don't be afraid to throw one away
    // if you don't need both.
    return [x * c, y * c];
}
If you generate n independent normal random variables, the standard deviation of the mean will be sigma / sqrt(n).
In your case n = 500000 and sigma = 1 so the standard error of the mean is approximately 1 / 707 = 0.0014. The 95% confidence interval, given 0 mean, would be around twice this or (-0.0028, 0.0028). Your sample means are well within this range.
Your expectation of obtaining 0.000000000001 (1e-12) is not mathematically grounded. To get within that range of accuracy, you would need to generate about 10^24 samples. At 10,000 samples per second that would still take about three trillion years; this is precisely why it's good to avoid computing things by simulation if possible.
On the other hand, your algorithm does seem to be implemented correctly :)
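To see this concretely, here is a quick sanity check using the asker's rnd_bmt(); the bound is the 95% interval described above:

// The sample mean of n standard normals should lie within about
// 2/sqrt(n) of zero roughly 95% of the time.
var n = 500000, sum = 0;
for (var i = 0; i < n; i += 2) {
    var pair = rnd_bmt();  // each call yields two independent normals
    sum += pair[0] + pair[1];
}
console.log('mean:', sum / n, 'expected |mean| <', 2 / Math.sqrt(n));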
I'm looking to create a basic Javascript implementation of a projectile that follows a parabolic arc (or something close to one) to arrive at a specific point. I'm not particularly well versed when it comes to complex mathematics and have spent days reading material on the problem. Unfortunately, seeing mathematical solutions is fairly useless to me. I'm ideally looking for pseudo code (or even existing example code) to try to get my head around it. Everything I find seems to only offer partial solutions to the problem.
In practical terms, I'm looking to simulate the flight of an arrow from one location (the location of the bow) to another. I have already simulated the effects of gravity on my projectile by updating its velocity at each logic interval. What I'm now looking to figure out is exactly how I figure out the correct trajectory/angle to fire my arrow at in order to reach my target in the shortest time possible.
Any help would be greatly appreciated.
Pointy's answer is a good summary of how to simulate the movement of an object given an initial trajectory (where a trajectory is considered to be a direction, and a speed, or in combination a vector).
However you've said in the question (if I've read you correctly) that you want to determine the initial trajectory knowing only the point of origin O and the intended point of target P.
The bad news is that, in practice, for any particular P there are infinitely many parabolic trajectories that will get you there from O. The angle and speed are interdependent.
If we translate everything so that O is at the origin (i.e. [0, 0]) then:
T_x = P_x - O_x // the X distance to travel
T_y = P_y - O_y // the Y distance to travel
s_x = speed * cos(angle) // the X speed
s_y = speed * sin(angle) // the Y speed
Then the position (x, y) at any point in time (t) is:
x = s_x * t
y = s_y * t - 0.5 * g * (t ^ 2)
so at impact you've got
T_x = s_x * t
T_y = -0.5 * g * (t ^ 2) + s_y * t
but you have three unknowns (t, s_x and s_y) and two simultaneous equations. If you fix one of those, that should be sufficient to solve the equations.
FWIW, fixing s_x or s_y is equivalent to fixing either speed or angle, that bit is just simple trigonometry.
Some combinations are of course impossible - if the speed is too low or the angle too high the projectile will hit the ground before reaching the target.
NB: this assumes that position is evaluated continuously. It doesn't quite match what happens when time passes in discrete increments, per Pointy's answer and your own description of how you're simulating motion. If you recalculate the position sufficiently frequently (i.e. 10s of times per second) it should be sufficiently accurate, though.
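For example, fixing the speed and solving for the angle gives the standard closed form tan(angle) = (v^2 ± sqrt(v^4 - g*(g*T_x^2 + 2*T_y*v^2))) / (g*T_x); a sketch (the function name is mine):

// Returns the two launch angles (radians) that hit (T_x, T_y) at the given
// speed, or null if the target is out of range at that speed. Assumes
// T_x > 0 and a y-up coordinate system. The smaller angle gives the
// flatter shot and the shorter flight time.
function launchAngles(Tx, Ty, speed, g) {
    var v2 = speed * speed;
    var disc = v2 * v2 - g * (g * Tx * Tx + 2 * Ty * v2);
    if (disc < 0) {
        return null;  // too slow to reach the target
    }
    var root = Math.sqrt(disc);
    return [
        Math.atan2(v2 - root, g * Tx),  // flat shot
        Math.atan2(v2 + root, g * Tx)   // lobbed shot
    ];
}

// s_x = speed * Math.cos(angle) and s_y = speed * Math.sin(angle), as above.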
I'm not a physicist so all I can do is tell you an approach based on really simple process.
Your "arrow" has an "x" and "y" coordinate, and "vx" and "vy" velocities. The initial position of the arrow is the initial "x" and "y". The initial "vx" is the horizontal speed of the arrow, and the initial "vy" is the vertical speed (well velocity really but those are just words). The values of those two, conceptually, depend on the angle your bowman will use when shooting the arrow off.
You're going to be simulating the progression of time with discrete computations at discrete time intervals. You don't have to worry about the equations for "smooth" trajectory arcs. Thus, you'll run a timer and compute updated positions every 100 milliseconds (or whatever interval you want).
At each time interval, you're going to add "vx" to "x" and "vy" to "y". (Thus, note that the initial choice of "vx" and "vy" is bound up with your choice of time interval.) You'll also update "vx" and "vy" to reflect the effect of gravity and (if you feel like it) wind. If "vx" doesn't change, you're basically simulating shooting an arrow on the moon :-) But "vy" will change because of gravity. That change should be a constant amount subtracted on each time interval. Call that "delta vy", and you'll have to tinker with things to get the values right based on the effect you want. (Math-wise, "vy" is like the "y" component of the first derivative, and the "delta vy" value is the second derivative.)
Because you're adding a small amount to "vy" every time, the incremental change will add up, correctly simulating "gravity's rainbow" as your arrow moves across the screen.
Now a nuance you'll need to work out is the sign of "vy". The initial sign of "vy" should be the opposite of "delta vy". Which should be positive and which should be negative depends on how the coordinate grid relates to the screen.
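A minimal sketch of that loop, in screen coordinates where "y" grows downward so gravity adds to "vy" (all the constants are placeholders to tinker with):

var arrow = { x: 0, y: 300, vx: 4, vy: -8 };  // placeholder start position and velocity
var deltaVy = 0.3;                            // gravity per tick; tune to taste

setInterval(function () {
    arrow.x += arrow.vx;  // "vx" never changes: the moon-shot horizontal motion
    arrow.y += arrow.vy;
    arrow.vy += deltaVy;  // gravity: "vy" drifts downward each tick
    // ...redraw the arrow at (arrow.x, arrow.y) here...
}, 100);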
edit: See @Alnitak's answer for something actually germane to your question.