I was looking for a way to perform a linear curve fit in JavaScript. I found several libraries, but they don't propagate errors. What I mean is, I have data with associated measurement errors, like:
x = [ 1.0 +/- 0.1, 2.0 +/- 0.1, 3.1 +/- 0.2, 4.0 +/- 0.2 ]
y = [ 2.1 +/- 0.2, 4.0 +/- 0.1, 5.8 +/- 0.4, 8.0 +/- 0.1 ]
Where my notation a +/- b means { value : a, error : b }.
I want to fit this to y = mx + b and find m and b with their propagated errors. I know the least-squares method, which I could implement, but it only takes errors on the y variable into account, and I have distinct errors in both.
I also could not find a JavaScript library that does this, but if there is an open-source lib in another language, I can inspect it to find out how it works and implement it in JS.
Programs like Origin or plotly can do this, but I don't know how. The result for this example dataset is:
m = 1.93 +/- 0.11
b = 0.11 +/- 0.30
The very useful book Numerical Recipes provides a method to fit data to a straight line, with uncertainties in both X and Y coordinates. It can be found online in these two versions:
Numerical Recipes in C, in section 15.3
Numerical Recipes in Fortran 77, in section 15.3
The method is based on minimizing χ² (chi-square), which is similar to least squares but takes the individual uncertainty of each data point into account. When the uncertainty $\sigma_i$ is on the Y axis only, each point is assigned a weight proportional to $1/\sigma_i^2$ in the calculations. When the data has uncertainties in both the X and Y coordinates, given by $\sigma_{x_i}$ and $\sigma_{y_i}$ respectively, the fit to a straight line
y(x) = a + b · x
uses a χ² in which each point has a weight proportional to
$$\frac{1}{\sigma_{y_i}^2 + b^2 \, \sigma_{x_i}^2}$$
The detailed method and the code (in C or Fortran) can be found in the book. Due to copyright, I cannot reproduce them here.
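That said, the underlying idea (often called the effective-variance method) can be sketched without the book's code. Below is a minimal, unoptimized JavaScript sketch, under the assumption that the data comes as plain arrays of values and one-sigma uncertainties; it scans the slope b over a range and uses the closed-form optimal intercept for each candidate slope. It is not the book's fitexy routine and does not compute the parameter errors:

// Rough effective-variance chi-square fit. Assumed data layout: arrays of
// values (x, y) and their uncertainties (sx, sy). Not the Numerical Recipes
// code; no error propagation on the fitted parameters here.
function chi2FitXY(x, y, sx, sy, bMin = -10, bMax = 10, steps = 20000) {
  function evaluate(b) {
    // Effective-variance weights: w_i = 1 / (sy_i^2 + b^2 * sx_i^2)
    const w = x.map((_, i) => 1 / (sy[i] ** 2 + b ** 2 * sx[i] ** 2));
    let sw = 0, swr = 0;
    for (let i = 0; i < x.length; i++) {
      sw += w[i];
      swr += w[i] * (y[i] - b * x[i]);
    }
    const a = swr / sw; // closed-form optimal intercept for this slope
    let chi2 = 0;
    for (let i = 0; i < x.length; i++) {
      chi2 += w[i] * (y[i] - a - b * x[i]) ** 2;
    }
    return { a, chi2 };
  }
  let best = { a: 0, b: bMin, chi2: Infinity };
  for (let k = 0; k <= steps; k++) {
    const b = bMin + (k * (bMax - bMin)) / steps;
    const { a, chi2 } = evaluate(b);
    if (chi2 < best.chi2) best = { a, b, chi2 };
  }
  return best; // a = intercept, b = slope
}

// The question's dataset; this lands near slope 1.9, intercept 0.1:
console.log(chi2FitXY(
  [1.0, 2.0, 3.1, 4.0], [2.1, 4.0, 5.8, 8.0],
  [0.1, 0.1, 0.2, 0.2], [0.2, 0.1, 0.4, 0.1]
));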
It seems that the least squares (LS) method is indeed a good direction.
Given a list of x & y values, least squares returns the values of m & b that minimize
$$\sum_{i} (m \cdot x_{i} + b - y_{i})^{2}$$
The benefits of the LS method are that you will find the optimal values for the parameters, the computation is fast, and you will probably be able to find an implementation in JavaScript, like this one.
Now you should take care of the margins of error that you have. Note that the way you treat the margins of error is more of a "business question" than a mathematical one, meaning that different people might choose different treatments based on their needs, and all of them would be equally valid from a mathematical point of view.
Without more knowledge about your needs, I suggest turning each point (x, y) into 4 points based on the margins:
(x+e, y+e), (x-e, y+e), (x+e, y-e), (x-e, y-e).
The benefits of this representation are that it is simple, it gives weight to the edges of the margin boundaries (which are typically more sensitive), and best of all, it is a reduction: once you generate the new points, you can use a regular LS implementation without having to implement such an algorithm on your own, as in the sketch below.
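A minimal sketch of that reduction, assuming each point carries separate x and y margins (here named ex and ey, my notation):

// Turn each { x, ex, y, ey } point into its four corner points.
function expandCorners(points) {
  return points.flatMap(({ x, ex, y, ey }) => [
    { x: x + ex, y: y + ey },
    { x: x - ex, y: y + ey },
    { x: x + ex, y: y - ey },
    { x: x - ex, y: y - ey },
  ]);
}

// Ordinary least squares over the expanded point set.
function leastSquares(points) {
  const n = points.length;
  let sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (const { x, y } of points) {
    sx += x; sy += y; sxx += x * x; sxy += x * y;
  }
  const m = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const b = (sy - m * sx) / n;
  return { m, b };
}

console.log(leastSquares(expandCorners([
  { x: 1.0, ex: 0.1, y: 2.1, ey: 0.2 },
  { x: 2.0, ex: 0.1, y: 4.0, ey: 0.1 },
  { x: 3.1, ex: 0.2, y: 5.8, ey: 0.4 },
  { x: 4.0, ex: 0.2, y: 8.0, ey: 0.1 },
])));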
I am curious how Stake.com managed to create the game "Limbo", where the odds of a multiplier occurring match the probability they've calculated. Here's the game: https://stake.com/casino/games/limbo
For example :
Multiplier -> x2
Probability -> 49.5% chance.
What it means is you have a 49.5% chance of winning because those are the odds that the multiplier will actually hit a number above x2.
If you set the multiplier all the way up to x1,000,000, you have a 0.00099% chance of actually hitting 1,000,000.
It's not a project I'm working on but I'm just extremely curious how we could achieve this.
Example:
Math.floor(Math.random()*1000000)
is not as random as we think, since Math.random() generates a number between 0 and 1. When paired with a huge multiplier like 1,000,000, we would actually generate a 6-figure number most of the time, so it's not as random as we thought.
I've read that we have to convert it into a power-law distribution, but I'm not sure how that works. I would love more material to read up on it.
It sounds like you need to define some function that gives the probability of winning for a given multiplier N. These probabilities don't have to add up to 1, because they are not part of the same random variable; there is a unique random variable for each N chosen and two events, win or lose; we can subscript them as win(N) and lose(N). We really only need to define win(N) since lose(N) = 1 - win(N).
Something like an exponential function would make sense here. Consider win(N) = 2^(1 - N). Then we get the following probabilities of winning:
n win(n)
1 1
2 1/2
3 1/4
4 1/8
etc
Or we could just use an inverse function: win(N) = 1/N
n win(n)
1 1
2 1/2
3 1/3
...
Then to actually see whether you win or lose for a given N, just choose a random number in some range - [0.0, 1.0) works fine for this purpose - and see whether that number is less than win(N). If so, it's a win; if not, it's a loss.
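A minimal sketch of that check, using the inverse function above (the 49.5% at x2 in the question suggests the real game also applies a house-edge factor, something like 0.99/N, but that is my guess):

function winProbability(n) {
  return 1 / n; // swap in 0.99 / n for a 1% house edge
}

function play(multiplier) {
  // Math.random() is uniform on [0, 1); we win when it falls below win(N).
  return Math.random() < winProbability(multiplier);
}

console.log(play(2));       // true roughly half the time
console.log(play(1000000)); // true roughly once in a million plays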
Yes, technically speaking, it is probably true that the floating point numbers are not really uniformly distributed over [0, 1) when calling standard library functions. If you really need that level of precision then you have a much harder problem. But, for a game, regular rand() type functions should be plenty uniform for your purposes.
I'm implementing spring physics in JavaScript, inspired by this blog post. My system uses Hooke's law to calculate spring forces:
F = -k(|x|-d)(x/|x|) - bv
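For reference, a minimal JavaScript sketch of that formula (the point structure and names here are mine, not the CodePen's actual code):

function springForce(p1, p2, k, d, b) {
  // x is the displacement vector from p2 to p1.
  const dx = p1.pos.x - p2.pos.x;
  const dy = p1.pos.y - p2.pos.y;
  const len = Math.sqrt(dx * dx + dy * dy) || 1; // guard against |x| = 0
  const mag = -k * (len - d);                    // -k(|x| - d)
  return {
    x: mag * (dx / len) - b * p1.vel.x,          // ...(x/|x|) - bv
    y: mag * (dy / len) - b * p1.vel.y,
  };
}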
I made a CodePen that shows the implementation of a spring between two points. The CodePen has two points connected by a spring, and every 2 seconds the point positions are randomized. You can see the points bounce on the spring towards each other.
If you look at the source, you can see I've defined a direction vector for my spring:
var spring = {
  length: 100,
  direction: {
    x: 1,
    y: 1
  }
};
I'm trying to make it so that the spring always "resolves" in this direction. Put another way, I'd like the spring to always be "pointing" up and to the right. If this were implemented in the CodePen, it means the resting orientation of the points would always be the green point on the bottom left and the blue point on the top right. No matter where the points start, they should end up in that orientation, matching the direction vector.
I've tried multiplying the normals by the spring vector:
norm1 = multiplyVectors( normalize( sub1 ), spring.direction ),
However, this is a no-op because the vector is (1, 1). I've been hacking on this system for a few days now and I'm stuck. How can I constrain my 2D spring to a specific direction?
Spring forces are central just like gravity, which means that the total angular momentum of the system is conserved. Since you start with zero initial velocities, the angular momentum of the system is initially zero. The spring interaction keeps it zero, therefore the final orientation of the spring equals its initial orientation - the weights only move along the line connecting them.
To have the system rotate into the desired final position, you should also apply torque. The easiest way is to give the blue weight a positive charge and the green weight a negative one and then apply a constant external field in direction (1,1). That way the two charges will form a dipole and the interaction with the external field will generate the desired torque.
I don't get along with JavaScript, but I tried to write something based on your initial code here. The force that an external field with intensity E exerts on charge q is F = q * E, with both F and E being vectors. By adjusting q and E you can control how quickly the dipole will orient in the direction of the external field.
The force now becomes F = -k(|x|-d)(x/|x|) + qE - bv.
This has the probably undesired side effect that the final length of the spring will be slightly longer by delta, where delta = 2 * |q||E| / k. You can always adjust for that by reducing the length of the spring. Also, there is a little problem with this approach: there are two equilibrium states, one with the dipole facing the direction of the field (stable equilibrium) and one with the dipole facing the opposite direction (unstable equilibrium). A bit of random noise in the initial steps of the simulation will prevent the dipole from being trapped in the latter state.
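A minimal sketch of the extra force term, assuming each point object carries a charge property and the field is a plain vector (the names are illustrative, not the CodePen's actual code):

var field = { x: 1, y: 1 };  // external field E, pointing up and right
var fieldStrength = 0.5;     // tune together with the charges

// F = q * E; give the blue point charge +1 and the green point -1.
function fieldForce(point) {
  return {
    x: point.charge * fieldStrength * field.x,
    y: point.charge * fieldStrength * field.y,
  };
}

// In the integration step the total force then becomes:
//   F_total = F_spring + fieldForce(point) - b * v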
I want to transform an image in 2 points perspective.
I guess I need to port to JS a formula from: http://web.iitd.ac.in/~hegde/cad/lecture/L9_persproj.pdf
But I'm a humanities-minded person and I faint when I see matrices.
Here's what I need exactly:
I have two vanishing points, X(X.x, X.y) and Z(Z.x, Z.y), and a rectangle ABCD (A.x, A.y, and so on).
And I want to find the new points nA, nB, nC and nD with which I can transform my rectangle accordingly (the point order doesn't really matter).
Right now I'm doing weird approximate calculations: I'm looking for the most distant point from X (1), then laying an interval over towards Z (2), then another interval towards X (3), and then again from Z (4).
The result is a bit off but is alright for the precision I need. However, this algorithm sometimes gives very weird results if I change the vanishing points, so if there's a proper solution I'll gladly use it. Thanks!
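For what it's worth, I assume any exact construction with two vanishing points reduces to intersecting lines drawn through the corners and the vanishing points, with a helper like this (hypothetical, not my current code):

// Intersection of the infinite lines p1-p2 and p3-p4.
function lineIntersection(p1, p2, p3, p4) {
  const d = (p1.x - p2.x) * (p3.y - p4.y) - (p1.y - p2.y) * (p3.x - p4.x);
  if (Math.abs(d) < 1e-12) return null; // lines are (nearly) parallel
  const t = ((p1.x - p3.x) * (p3.y - p4.y) - (p1.y - p3.y) * (p3.x - p4.x)) / d;
  return { x: p1.x + t * (p2.x - p1.x), y: p1.y + t * (p2.y - p1.y) };
}

// e.g. a corner that must lie on the line from nB towards X and on the
// line from nD towards Z:
// const nC = lineIntersection(nB, X, nD, Z);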
I'm translating some C++ code related to PMP for attitude controls, and part of the code uses FLT_EPSILON.
The code does the following:
while (angle > ((float)M_PI+FLT_EPSILON))
M_PI is simple, but I'm not sure what to do with FLT_EPSILON. A Google search told me:
This is the difference between 1 and the smallest floating point number of type float that is greater than 1. It's supposed to be no greater than 1E-5.
However, other sources state values like 1.192092896e-07F.
I'm not 100% clear on why it's being used. I suspect it has to do with the granularity of float. So if someone could clarify what it is attempting to do in C++, and whether this is a concern for JavaScript, that would be very helpful. I'm not sure how JavaScript handles values like these internally, so help would be appreciated.
As an FYI, the code I'm translating is as follows (sourced from QGroundControl, it's open source):
float limitAngleToPMPIf(float angle) {
    if (angle > -20 * M_PI && angle < 20 * M_PI) {
        while (angle > ((float)M_PI + FLT_EPSILON)) {
            angle -= 2.0f * (float)M_PI;
        }
        while (angle <= -((float)M_PI + FLT_EPSILON)) {
            angle += 2.0f * (float)M_PI;
        }
    } else {
        // Approximate
        angle = fmodf(angle, (float)M_PI);
    }
    return angle;
}
--- edit ---
Just realised that fmodf isn't defined in JavaScript. Apparently it's a C library function that does the following:
The fmod() function computes the floating-point remainder of dividing x by y. The return value is x - n * y, where n is the quotient of x / y, rounded toward zero to an integer.
This code is attempting to keep angle within an interval around zero.
However, managing angles in this way is troublesome and requires considerable care. If it is not accompanied by documentation explaining what is being done, why, and the various errors and specifications that are involved, then it was done improperly.
It is impossible for this sort of angle reduction to keep accumulated changes accurately over a long sequence of changes, because M_PI is only an approximation to π. Therefore, this sort of reduction is generally only useful for aesthetic or interface effect. E.g., as some angle changes, reducing it can keep it from growing to a point where there may be large jumps in calculation results due to floating-point quantization or other calculation errors that would be annoying to a viewer. Thus, keeping the angle within an interval around zero makes the display look good, even though it diverges from what real physics would do over the long term.
The choice of FLT_EPSILON appears to be arbitrary. FLT_EPSILON is important for its representation of the fineness of the float format. However, at the magnitude of M_PI, the ULP (finest change) of a float is actually 2*FLT_EPSILON. Additionally, JavaScript performs the addition with double-precision arithmetic, and FLT_EPSILON is of no particular significance in this double format. I suspect the author simply chose FLT_EPSILON because it was a convenient “small” number. I expect the code would work just as well as if angle > M_PI had been written, without the embellishment, and (float) M_PI were changed to M_PI everywhere it appears. (The addition of FLT_EPSILON may have been intended to add some hysteresis to the system, so that it did not frequently toggle between values near π and values near –π. However, the criterion I suggest, angle > M_PI, also includes some of the same effect, albeit a smaller amount. That might not be apparent to somebody inexperienced with floating-point arithmetic.)
Also, it looks like angle = fmodf(angle, (float) M_PI); may be a bug, since this is reducing modulo M_PI rather than 2*M_PI, so it will add 180° to some angles, producing a completely incorrect result.
It is possible that replacing the entire function body with return fmod(angle, 2*M_PI); would work satisfactorily.
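Along those lines, a minimal JavaScript sketch of the reduction might look like this (my translation, with the FLT_EPSILON embellishment dropped as suggested above; JavaScript numbers are doubles, so FLT_EPSILON has no special meaning here):

function limitAngleToPMPI(angle) {
  if (angle > -20 * Math.PI && angle < 20 * Math.PI) {
    while (angle > Math.PI) {
      angle -= 2 * Math.PI;
    }
    while (angle <= -Math.PI) {
      angle += 2 * Math.PI;
    }
  } else {
    // Approximate; JavaScript's % behaves like fmodf (result takes the
    // sign of the dividend, quotient truncated toward zero). Using
    // 2 * Math.PI avoids the suspected modulo-M_PI bug noted above.
    angle = angle % (2 * Math.PI);
  }
  return angle;
}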
I got interested in this question someone posted yesterday about the diamond-square algorithm: Node.js / coffeescript performance on a math-intensive algorithm.
I followed through with adapting his code and want to take the next step in generating some color maps for the resulting data grid.
My initial thoughts were to take the deepest point in the ocean to the height of Everest as my height ranges. That puts sea level at about 2076m and max height at around 10924m (Everest is 8848m). Anyway, so I'm generating my grid of data with values that are pretty close to what I want.
I was thinking of making an array of hex color values, starting with, say, a dark shade of blue to a light shade for water, then dark green to white for sea level to mountains. I can set up ranges of height values to correspond to regions of color.
What are the common techniques for doing this kind of thing? I think more specifically, how do you generate a specific hex color between 2 hex values for a given height value?
yes, what you suggest sounds good. for in-between values, typically you use linear interpolation. the following snippet shows how to interpolate 5% of the way between #00 and #ff (and also shows the conversions you need):
> ("00" + Math.floor(parseInt('00',16) + 5 / 100 * parseInt('ff',16)).toString(16)).slice(-2)
"0c"
obviously, you'd wrap that in a function - i'm just cut+pasting from a chrome command line where i checked it.
[edit:] is that clear? in the case above, i'm calculating the value at 5m if you want #00 at 0m and #ff at 100m, where the value is just for one channel (r, g or b). you'd need to repeat that for each channel, and join r, g and b together.
here's another example where you're going from #a0 down to #20 over a range of 50m, 10m from #a0. note that we have an extra subtraction because the initial value isn't zero.
> ("00" + Math.floor(parseInt('a0',16) + 10 / 50 * (parseInt('20',16) - parseInt('a0', 16))).toString(16)).slice(-2)
"86"
so, ignoring the conversion, the interpolation is:
start_value + (distance_from_start / total_distance) * (end_value - start_value)
and the conversions are
decimal_int = parseInt(hex_string, 16)
hex_string = decimal_int.toString(16)
ps i would imagine that processing.js has functions like this already written for you, if it's any help (not used it, but it's that kind of library...)
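putting the pieces together, a sketch of a full #rrggbb interpolator (the function names are made up, not from any library):

// interpolate one two-digit hex channel; t is in [0, 1]
function lerpChannel(startHex, endHex, t) {
  const a = parseInt(startHex, 16);
  const b = parseInt(endHex, 16);
  return ("00" + Math.floor(a + t * (b - a)).toString(16)).slice(-2);
}

// map a height within [minHeight, maxHeight] to a colour between
// startColor and endColor, both given as "#rrggbb" strings
function lerpColor(startColor, endColor, height, minHeight, maxHeight) {
  const t = (height - minHeight) / (maxHeight - minHeight);
  let out = "#";
  for (let i = 1; i < 7; i += 2) {
    out += lerpChannel(startColor.slice(i, i + 2), endColor.slice(i, i + 2), t);
  }
  return out;
}

console.log(lerpColor("#000080", "#87cefa", 1000, 0, 2076)); // deep -> light blue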