For example, if my function was called getlowestfraction(), this is what I expect it to do:
getlowestfraction(0.5) // returns 1, 2 or something along the lines of that
Another example:
getlowestfraction(0.125) // returns 1, 8 or something along the lines of that
Using Continued Fractions one can efficiently create a (finite or infinite) sequence of fractions hn/kn that are arbitrarily good approximations to a given real number x.
If x is a rational number, the process stops at some point with hn/kn == x. If x is not a rational number, the sequence hn/kn, n = 0, 1, 2, ... converges to x very quickly.
The continued fraction algorithm produces only reduced fractions (numerator and denominator are relatively prime), and the fractions are in some sense the "best rational approximations" to a given real number.
I am not a JavaScript person (I normally program in C), but I have tried to implement the algorithm with the following JavaScript function. Please forgive me if there are stupid errors. But I have checked the function and it seems to work correctly.
function getlowestfraction(x0) {
    var eps = 1.0E-15;
    var h, h1, h2, k, k1, k2, a, x;

    x = x0;
    a = Math.floor(x);
    h1 = 1;
    k1 = 0;
    h = a;
    k = 1;

    while (x - a > eps * k * k) {
        x = 1 / (x - a);
        a = Math.floor(x);
        h2 = h1; h1 = h;
        k2 = k1; k1 = k;
        h = h2 + a * h1;    // next numerator:   h_n = a_n * h_(n-1) + h_(n-2)
        k = k2 + a * k1;    // next denominator: k_n = a_n * k_(n-1) + k_(n-2)
    }
    return h + "/" + k;
}
The loop stops when the rational approximation is exact or has the given precision eps = 1.0E-15. Of course, you can adjust the precision to your needs. (The while condition is derived from the theory of continued fractions.)
Examples (with the number of iterations of the while-loop):
getlowestfraction(0.5) = 1/2 (1 iteration)
getlowestfraction(0.125) = 1/8 (1 iteration)
getlowestfraction(0.1+0.2) = 3/10 (2 iterations)
getlowestfraction(1.0/3.0) = 1/3 (1 iteration)
getlowestfraction(Math.PI) = 80143857/25510582 (12 iterations)
Note that this algorithm gives 1/3 as approximation for x = 1.0/3.0. Repeated multiplication of x by powers of 10 and canceling common factors would give something like 3333333333/10000000000.
Here is an example of different precisions:
With eps = 1.0E-15 you get getlowestfraction(0.142857) = 142857/1000000.
With eps = 1.0E-6 you get getlowestfraction(0.142857) = 1/7.
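If you want to experiment with the precision, here is a sketch of the same function with eps passed in as a parameter instead of hard-coded. This parameterised variant is only illustrative; the original function above always uses 1.0E-15:

function getlowestfraction(x0, eps) {
    eps = eps || 1.0E-15;              // default to the original precision
    var h1 = 1, k1 = 0, h2, k2, h, k, a, x;

    x = x0;
    a = Math.floor(x);
    h = a;
    k = 1;

    while (x - a > eps * k * k) {
        x = 1 / (x - a);
        a = Math.floor(x);
        h2 = h1; h1 = h;
        k2 = k1; k1 = k;
        h = h2 + a * h1;
        k = k2 + a * k1;
    }
    return h + "/" + k;
}

// getlowestfraction(0.142857)        -> "142857/1000000"
// getlowestfraction(0.142857, 1e-6)  -> "1/7"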
You could keep multiplying by ten until you have integer values for your numerator and denominator, then use the answers from this question to reduce the fraction to its simplest terms.
Try this program instead:
function toFrac(number) {
    var fractional = number % 1;
    if (fractional) {
        var real = number - fractional;
        var exponent = String(fractional).length - 2;
        var denominator = Math.pow(10, exponent);
        var mantissa = fractional * denominator;
        var numerator = real * denominator + mantissa;
        var divisor = gcd(numerator, denominator);   // the helper below is named gcd
        denominator /= divisor;
        numerator /= divisor;
        return [numerator, denominator];
    } else return [number, 1];
}
function gcd(numerator, denominator) {
    do {
        var modulus = numerator % denominator;
        numerator = denominator;
        denominator = modulus;
    } while (modulus);
    return numerator;
}
Then you may use it as follows:
var start = new Date;
var PI = toFrac(Math.PI);
var end = new Date;
alert(PI);
alert(PI[0] / PI[1]);
alert(end - start + " ms");
You can see the demo here: http://jsfiddle.net/MZaK9/1/
Was just fiddling around with code, and got the answer myself:
function getlowestfraction(num) {
    var i = 1;
    var mynum = num;
    var retnum = 0;

    while (true) {
        if (mynum * i % 1 == 0) {
            retnum = mynum * i;
            break;
        }
        // For exceptions, tuned down MAX value a bit
        if (i > 9000000000000000) {
            return false;
        }
        i++;
    }
    return retnum + ", " + i;
}
In case anybody needed it.
P.S: I'm not trying to display my expertise or range of knowledge. I actually did spend a long time in JSFiddle trying to figure this out (well not a really long time anyway).
Suppose the number is x = 0 . ( a_1 a_2 ... a_k ) ( a_1 a_2 ... a_k ) .... for simplicity (keep in mind that the first few digits may not fit the repeating pattern, and that we need a way to figure out what k is). If b is the base, then
b ^ k * x - x = ( b ^ k - 1 ) * x
on one hand, but
b ^ k * x - x = ( a_1 a_2 ... a_k )
(exact, ie this is an integer) on the other hand.
So
x = ( a_1 ... a_k ) / ( b ^ k - 1 )
Now you can use Euclid's algorithm to get the gcd and divide it out to get the reduced fraction.
You would still have to figure out how to determine the repeating sequence. There should be an answer to that question. EDIT - one answer: it's the length of \1 if there's a match to the pattern /([0-9]+)\1+$/ (you might want to throw out the last digit before matching, because of rounding). If there's no match, then there's no better "answer" than the "trivial" representation (x*base^precision/base^precision).
N.B. This answer makes some assumptions about what you expect of an answer, which may not be right for your needs. But it's the "textbook" way of reproducing the fraction from a repeating decimal representation - see e.g. here
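For what it's worth, here is a rough JavaScript sketch of this approach for base 10, assuming 0 < x < 1 and that the repetition starts right after the decimal point (as in the setup above). The helper names repeatingToFraction and reduce are purely illustrative:

function repeatingToFraction(x, precision) {
    // Fractional digits, minus the last one (rounding noise) and any trailing zeros.
    var digits = x.toFixed(precision).split(".")[1].slice(0, -1).replace(/0+$/, "");
    var match = digits.match(/([0-9]+)\1+$/);     // repeating block at the end, if any
    if (!match) {
        // No repetition found: fall back to the "trivial" representation.
        return reduce(Math.round(x * Math.pow(10, precision)), Math.pow(10, precision));
    }
    var k = match[1].length;                            // period length k
    var numerator = parseInt(digits.slice(0, k), 10);   // ( a_1 ... a_k )
    var denominator = Math.pow(10, k) - 1;              // b^k - 1
    return reduce(numerator, denominator);
}

// Euclid's algorithm, used to divide out the gcd.
function reduce(n, d) {
    function gcd(a, b) { return b ? gcd(b, a % b) : a; }
    var g = gcd(n, d);
    return [n / g, d / g];
}

// repeatingToFraction(1 / 3, 15)  -> [1, 3]
// repeatingToFraction(1 / 7, 15)  -> [1, 7]
// repeatingToFraction(0.5, 15)    -> [1, 2]   (falls back to the trivial form)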
A very old but golden question, which is at the same time an overlooked one. So I will go and mark this popular one as a duplicate, in the hope that new people end up at the correct place.
The accepted answer of this question is a gem of the internet. No library that I am aware of uses this magnificent technique, and they end up with rationals that are not wrong but silly. Having said that, the accepted answer is not totally correct, due to several issues such as:
What exactly is happening there?
Why does it still return '140316103787451/7931944815571' instead of '1769/100' when the input is 17.69?
How do you decide when to stop the while loop?
Now the most important question is: what's happening there, and how come this algorithm is so efficient?
We must know that any number can also be expressed as a continued fraction. Say you are given 0.5. You can express it like
      1
0 + ___     // the 0 here is in fact Math.floor(0.5)
      2     // the 2 here is in fact Math.floor(1/0.5)
So say you are given 2.175 then you end up with
             1
2 + _______________     // the 2 here is in fact Math.floor(2.175)
               1
    5 + ___________     // the 5 here is in fact Math.floor(1/0.175 = 5.714285714285714)
                 1
        1 + _______     // the 1 here is in fact Math.floor(1/0.714285714285714 = 1.4)
                  1
            2 + ___     // the 2 here is in fact Math.floor(1/0.4 = 2.5)
                  2     // the 2 here is in fact Math.floor(1/0.5)
We now have our continued fraction coefficients, [2; 5, 1, 2, 2] for 2.175. However, the beauty of this algorithm lies in how it produces the current approximation as soon as the next continued fraction coefficient is calculated, without requiring any further work. At that very moment we can compare the currently reached result with the given value and decide whether to stop or to iterate once more.
So far so good; however, it may still not make complete sense. Let us go through another solid example. The input value is 3.686635944700461. Now we are going to approach this from infinity and very quickly converge to the result. So our first rational is 1/0, a.k.a. Infinity. We denote this as a fraction with numerator p = 1 and denominator q = 0, i.e. 1/0. The previous approximation, p_/q_, is needed for the next stage; let us make it 0 to start with, so p_ is 0 and q_ is 1.
The important part is: once we know the two previous approximations (p, q, p_ and q_), we can then calculate the next coefficient m and also the next p and q to compare with the input. Calculating the coefficient m is as simple as Math.floor(x_), where x_ is the reciprocal of the next fractional part. The next approximation p/q is then (m * p + p_)/(m * q + q_), and the next p_/q_ is the previous p/q. (Theorem 2.4 in this paper.)
Given the above information, any decent programmer can easily work out the following snippet. For the curious: 3.686635944700461 is 800/217, and it gets calculated in just 5 iterations by the code below.
function toRational(x) {
    var m = Math.floor(x),
        x_ = 1 / (x - m),   // reciprocal of the fractional part
        p_ = 1,             // previous approximation p_/q_ starts as 1/0 (infinity)
        q_ = 0,
        p = m,              // current approximation p/q starts as m/1
        q = 1;

    if (x === m) return {n: p, d: q};

    while (Math.abs(x - p / q) > Number.EPSILON) {
        m = Math.floor(x_);
        x_ = 1 / (x_ - m);
        [p_, q_, p, q] = [p, q, m * p + p_, m * q + q_];
    }
    return isNaN(x) ? NaN : {n: p, d: q};
}
For practical purposes it would be ideal to store the coefficients in the fraction object as well, so that later you can use them to perform CFA (Continued Fraction Arithmetic) among rationals. This way you can avoid huge integers and possible BigInt usage by staying in the CF domain to perform inversion, negation, addition and multiplication operations. Sadly, CFA is a very overlooked topic, but it helps us avoid double precision errors when doing cascaded arithmetic operations on rational values.
For an odds calculator for a board game, I need to calculate how many rounds a battle will last on average. Because there is a possibility that both sides in the battle will miss, a battle can theoretically last forever. Therefore I cannot traverse all branches, but need to calculate a mathematical limit. By verifying with a simulator, I have found that the following function correctly approximates the average number of rounds left:
// LIMIT could be any number, the larger it is, the more accurate the result.
const LIMIT = 100;

// r is the number of rounds left if at least 1 of the sides hit
// x is the chance that both sides miss and the round count gets increased,
// but the battle state stays the same.
function approximateLimitForNumberOfRounds(r: number, x: number) {
  let approx = r / (1 - x);
  // n -> infinity
  for (let n = 1; n < LIMIT; n++) {
    approx += x ** n;
  }
  return approx;
}
How can I modify this function to exactly calculate the number of rounds left, instead of approximating it? (noting that since x is a chance, it is contained in (0, 1) or 0 < x < 1).
We can note that approx takes on the following values:
r / (1 - x) # I refer to this as 'a' below
a + x
a + x + x^2
a + x + x^2 + x^3
a + x + x^2 + ... + x^n
Thus, we can simplify the mathematical expression to be:
a + (the sum of x^k from k = 1 to k = n)
Next, we must note that the sequence x + x^2 + x^3 ... forms a geometric sequence with first term x and common ratio x. Since x is bounded by 0 < x < 1, this will have a limiting sum, namely:
x + x^2 + x^3 + ... x^inf = x/(1-x)
(This obviously fails when x = 1, as does the original function where r / (1 - x) is taken; in that case the sum is infinite and approx would escape to infinity if it were not undefined. So I am assuming that x != 1 in the following calculations, and that x = 1 can be / has been dealt with separately.)
Now, since we have a single expression for x + x^2 + ... to infinity, and a single expression for approx that includes x + x^2 + ..., we can combine the two facts to write approx as:
approx = r / (1 - x) + x / (1 - x)
approx = (r + x) / (1 - x)
And there you go! That is the mathematical equivalent of the logic you've outlined in your question, compressed to a single statement (which I believe is correct :)).
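If you want that as code, here is a minimal sketch of the closed form in plain JavaScript; the name exactNumberOfRounds is illustrative, and it assumes 0 < x < 1 as stated in the question:

// Exact limit of the series above: r / (1 - x) + x / (1 - x) = (r + x) / (1 - x).
// Assumes 0 < x < 1.
function exactNumberOfRounds(r, x) {
  return (r + x) / (1 - x);
}

// Example: exactNumberOfRounds(3, 0.25) === 4.333...,
// which approximateLimitForNumberOfRounds(3, 0.25) approaches as LIMIT grows.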
Let's say you have a function that takes both x and y, real numbers that are integers, as arguments.
What would you put inside that function, using only mathematical operators, so that no two given sequences of arguments could ever return the same value, be it any kind of value?
Example of a function that fails at doing this:
function myfunction(x, y) {
    return x * y;
}
// myfunction(2,6) and myfunction(3,4) will both return 12
// myfunction(2,6) and myfunction(6,2) also both return 12.
As already noted in comments, at the level of JavaScript numbers such a function can't exist, simply because assuming that we're working with integer-valued IEEE 754 binary64 floats there are more possible input pairs than possible output values.
But to the mathematical question of whether there is a simple, injective function from pairs of integers to a single integer, the answer is yes. Here's one such function that uses only addition and multiplication, so should fit the questioner's "using only mathematical operators" constraint.
First we map each of the inputs from the domain of integers to the domain of nonnegative integers. The polynomial map x ↦ 2*x*x + x will do that for us, and maps distinct values to distinct values. (Sketch of proof: if 2*x*x + x == 2*y*y + y for some integers x and y, then rearranging and factoring gives (x - y) * (2*x + 2*y + 1) == 0; the second factor can never be zero for integers x and y, so the first factor must be zero and x == y.)
Second, given a pair of nonnegative integers (a, b), we map that pair to a single (nonnegative) integer using (a, b) ↦ (a + b)*(a + b) + a. It's easy to see that this, too, is injective: given the value of (a + b)*(a + b) + a, I can recover the value of a + b by taking the integer square root, and from there recover a and b.
Here's some Python code demonstrating the above:
def encode_pair(x, y):
    """ Encode a pair of integers as a single (nonnegative) integer. """
    a = 2*x*x + x
    b = 2*y*y + y
    return (a + b)*(a + b) + a
We can easily check that there are no repetitions for small x and y: here we take all pairs (x, y) with -500 <= x < 500 and -500 <= y < 500, and find the set containing encode_pair(x, y) for each combination. If all goes well, we should end up with a set with exactly 1 million entries, one per input combination.
>>> all_outputs = {encode_pair(x, y) for x in range(-500, 500) for y in range(-500, 500)}
>>> len(all_outputs)
1000000
>>> min(all_outputs)
0
But perhaps a more convincing way to establish the injectivity is to give an explicit inverse, showing that the original (x, y) can be recovered from the output. Here's that inverse function. It makes use of Python's integer square root operation math.isqrt, which is available only for Python >= 3.8, but is easy to implement yourself if you need it.
from math import isqrt

def decode_pair(n):
    """ Decode an integer produced by encode_pair. """
    a_plus_b = isqrt(n)
    a = n - a_plus_b*a_plus_b
    b = a_plus_b - a
    c = isqrt(8*a + 1)
    d = isqrt(8*b + 1)
    return ((2 - c%4) * c - 1) // 4, ((2 - d%4) * d - 1) // 4
Example usage:
>>> encode_pair(3, 7)
15897
>>> decode_pair(15897)
(3, 7)
Depending on what you allow as a "mathematical operator" (which isn't really a particularly well-defined term), there are tighter functions possible. Here's a variant of the above that provides not just an injection but a bijection: every integer appears as the encoding of some pair of integers. It extends the set of mathematical operators used to include subtraction, division and absolute value. (Note that all divisions appearing in encode_pair are exact integer divisions, without any remainder.)
def encode_pair(x, y):
    """ Encode a pair of integers as a single integer.

    This gives a bijective map Z x Z -> Z.
    """
    ax = (abs(2 * x + 1) - 1) // 2   # x if x >= 0, -1-x if x < 0
    sx = (ax - x) // (2 * ax + 1)    # 0 if x >= 0, 1 if x < 0
    ay = (abs(2 * y + 1) - 1) // 2   # y if y >= 0, -1-y if y < 0
    sy = (ay - y) // (2 * ay + 1)    # 0 if y >= 0, 1 if y < 0
    xy = (ax + ay + 1) * (ax + ay) // 2 + ax   # encode ax and ay as xy
    an = 2 * xy + sx                           # encode xy and sx as an
    n = an - (2 * an + 1) * sy                 # encode an and sy as n
    return n
def decode_pair(n):
    """ Inverse of encode_pair. """
    # decode an and sy from n
    an = (abs(2 * n + 1) - 1) // 2
    sy = (an - n) // (2 * an + 1)
    # decode xy and sx from an
    sx = an % 2
    xy = an // 2
    # decode ax and ay from xy
    ax_plus_ay = (isqrt(8 * xy + 1) - 1) // 2
    ax = xy - ax_plus_ay * (ax_plus_ay + 1) // 2
    ay = ax_plus_ay - ax
    # recover x from ax and sx, and y from ay and sy
    x = ax - (1 + 2 * ax) * sx
    y = ay - (1 + 2 * ay) * sy
    return x, y
And now every integer appears as the encoding of exactly one pair, so we can start with an arbitrary integer, decode it to a pair, and re-encode to recover the same integer:
>>> n = -12345
>>> decode_pair(n)
(67, -44)
>>> encode_pair(67, -44)
-12345
The encode_pair function above is deliberately quite verbose, in order to explain all the steps involved. But the code and the algebra can be simplified: here's exactly the same computation expressed more compactly.
def encode_pair_cryptic(x, y):
    """ Encode a pair of integers as a single integer.

    This gives a bijective map Z x Z -> Z.
    """
    c = abs(2 * x + 1)
    d = abs(2 * y + 1)
    e = (2 * y + 1) * ((c + d)**2 * c + 2 * (c - d) * c - 4 * x - 2)
    return (e - 2 * c * d) // (4 * c * d)
encode_pair_cryptic gives exactly the same results as encode_pair. I'll give one example, and leave the reader to figure out the equivalence.
>>> encode_pair(47, -53)
-9995
>>> encode_pair_cryptic(47, -53)
-9995
I'm no math wiz, but I found this question kind of fun, so I gave it a shot. This is by no means scalable to large numbers, since I'm using prime numbers as exponents and it gets out of control really quickly. But I tested up to 90,000 combinations and found no duplicates.
The code below has a couple of extra functions, generateValues() and hasDuplicates(), that are just there to run and test multiple values coming from the output of myFunction().
BigNumber.config({ EXPONENTIAL_AT: 10 })

// This function is just to generate the array of prime numbers
function getPrimeArray(num) {
  const array = [];
  let isPrime;
  let i = 2;
  while (array.length < num + 1) {
    for (let j = 2; (isPrime = i === j || i % j !== 0) && j <= i / 2; j++) {}
    isPrime && array.push(i);
    i++;
  }
  return array;
}

function myFunction(a, b) {
  const primes = getPrimeArray(Math.max(a, b));
  // Using the prime array, primes[a]^primes[b]
  return BigNumber(primes[a]).pow(primes[b]).toString();
}

function generateValues(upTo) {
  const results = [];
  for (let i = 1; i < upTo + 1; i++) {
    for (let j = 1; j < upTo + 1; j++) {
      console.log(`${i},${j}`)
      results.push(myFunction(i, j));
    }
  }
  return results.sort();
}

function hasDuplicates(arr) {
  return new Set(arr).size !== arr.length;
}

const values = generateValues(50)
console.log(`Checked ${values.length} values; duplicates: ${hasDuplicates(values)}`)

<script src="https://cdnjs.cloudflare.com/ajax/libs/bignumber.js/8.0.2/bignumber.min.js"></script>
Explanation of what's going on:
Using the example of myFunction(1,3)
And the array of primes [2, 3, 5, 7]
This would take the 2nd and 4th items, 3 and 7 which would result in 3^7=2187
Using 300 as the max generated 90,000 combinations with no duplicates (however, it took quite some time). I tried using a max of 500, but the fan on my laptop sounded like a jet engine taking off, so I gave up on it.
If x and y are some fixed size integers (e.g. 8 bits), then what you want is possible if the return of f has at least as many bits as the sum of the number of bits of x and y (i.e. 16 in the example), and not otherwise.
In the 8 bit example f(x,y) = (x<<8)+y would do. This is because if g(z) = ((z>>8), z&255), then g(f(x,y)) = (x,y). The impossibility comes from the pigeonhole principle: if we want (in the example) to map the pairs (x,y) (of which there are 2^16) 1-1 to some integer type, then we must have at least 2^16 values of this type.
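For illustration, here is that 8-bit packing as a small JavaScript sketch; the names pack8 and unpack8 are made up, and it assumes x and y are integers in the range 0..255:

function pack8(x, y) {
  return (x << 8) + y;        // f(x, y): x in the high byte, y in the low byte
}

function unpack8(z) {
  return [z >> 8, z & 255];   // g(z): recovers the original pair
}

// pack8(2, 6) === 518 and pack8(6, 2) === 1538 — distinct pairs give distinct values,
// and unpack8(pack8(2, 6)) -> [2, 6].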
function myfunction(x, y) {
    x = 1 / x;
    y = 1 / y;
    let yLength = ("" + y).length
    for (let i = 0; i < yLength; i++) {
        x *= 10;
    }
    return (x + y)
}

console.log(myfunction(2, 12))
console.log(myfunction(21, 2))
Based on your question and your comments, I understood the following:
You want to pass 2 real numbers into a function. The function should use mathematical operators to generate a new result.
Your question is whether there is any kind of mathematical equation/function you could use that would ALWAYS deliver a unique result.
If that's so, then the answer is no. You can make your function as complicated as possible and get a result (c) using the two numbers (a & b).
In that case I would look for another combination which could give me the same result (c) using the same equation/function. Therefore I would use a system of linear equations to analyse this mathematical issue.
In general, a system with fewer equations than unknowns has infinitely many solutions, but it may have no solution. Such a system is known as an underdetermined system.
In our case we would have one equation, which gives us one result with two unknowns. It would therefore have infinitely many solutions: since we already have one solution, there is no way for the system to have no solutions at all.
More about this topic.
Edit:
I just realized that some of us understood the domain of the function differently. I was thinking about real numbers (R), but it seems many assumed you were talking about integers (Z) only.
Well, I guess
real integers
wasn't clear enough, at least for me.
So if we use integers only, I have no idea whether it is possible to always get different results. Some users suggested a topic about that here; I am also interested in taking a look at that.
I'd like to calculate the mathematical logarithm "by hand", i.e. log_base(x), where base stands for the logarithm base and x stands for the value.
Some examples (See Log calculator):
The base 2 logarithm of 10 is 3.3219280949
The base 5 logarithm of 15 is 1.6826061945
...
However, I do not want to use an already implemented function call like Math.ceil, Math.log, Math.abs, ..., because I want a clean native solution that just deals with +-*/ and some loops.
This is the code I got so far:
function myLog(base, x) {
  let result = 0;
  do {
    x /= base;
    result++;
  } while (x >= base)
  return result;
}

let x = 10,
    base = 2;

let result = myLog(base, x)
console.log(result)
But it doesn't seem like the above method is the right way to calculate the logarithm to base N, so any help with fixing this code would be really appreciated.
Thanks a million in advance jonas.
You could use a recursive approach:
const log = (base, n, depth = 20, curr = 64, precision = curr / 2) =>
  depth <= 0 || base ** curr === n
    ? curr
    : log(base, n, depth - 1, base ** curr > n ? curr - precision : curr + precision, precision / 2);
Usable as:
log(2, 4) // 2
log(2, 10) // 3.32196044921875
You can influence the precision by changing depth, and you can change the range of accepted values (currently ~180) with curr
How it works:
If we already reached the wanted depth or if we already found an accurate value:
depth <= 0 || base ** curr === n
Then it just returns curr and is done. Otherwise it checks if the logarithm we want to find is lower or higher than the current one:
base ** curr > n
It will then continue searching for a value recursively by:
1) lowering depth by one,
2) increasing / decreasing curr by the current precision, and
3) halving precision.
If you hate functional programming, here is an imperative version:
function log(base, n, depth = 20) {
  let curr = 64, precision = curr / 2;

  while (depth-- > 0 && base ** curr !== n) {
    if (base ** curr > n) {
      curr -= precision;
    } else {
      curr += precision;
    }
    precision /= 2;
  }
  return curr;
}
By the way, the algorithm I used is called "logarithmic search", commonly known as "binary search".
First method: with a table of constants.
First normalize the argument to a number between 1 and 2 (this is achieved by multiplying or dividing by 2 as many times as necessary - keep a count of these operations). For efficiency, if the values can span many orders of magnitude, instead of equal factors you can use a squared sequence, 2, 4, 16, 256..., followed by a dichotomic search when you have bracketed the value.
For instance, if the exponent 16=2^4 works but not 256=2^8, you try 2^6, then one of 2^5 and 2^7 depending on the outcome. If the final exponent is 2^d, the linear search takes O(d) operations and the geometric/dichotomic search only O(log d). To avoid divisions, it is advisable to keep a table of negative powers.
After normalization, you need to refine the mantissa. Compare the value to √2, and if larger multiply by 1/√2. This brings the value between 1 and √2. Then compare to √√2 and so on. As you go, you add the weights 1/2, 1/4, ... to the exponent when a comparison returns greater.
In the end, the exponent is the base 2 logarithm.
Example: lg 27
27 = 2^4 x 1.6875
1.6875 > √2 = 1.4142 ==> 27 = 2^4.5 x 1.1933
1.1933 > √√2 = 1.1892 ==> 27 = 2^4.75 x 1.0034
1.0034 < √√√2 = 1.0905 ==> 27 = 2^4.75 x 1.0034
...
The true value is 4.7549.
Note that you can work with other bases, in particular e. In some contexts, base 2 allows shortcuts, this is why I used it. Of course, the square roots should be tabulated.
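Here is a small JavaScript sketch of this first method, assuming base 2 and x > 0. Instead of a literal table of √2, √√2, ..., it uses the equivalent trick of squaring the mantissa (comparing the value with √2 is the same as comparing its square with 2), so only the four basic operations and loops are needed; the function name and iteration count are arbitrary choices:

function log2(x, bits) {
  var result = 0;

  // Normalize x into [1, 2), counting the halvings/doublings in the integer part.
  while (x >= 2) { x /= 2; result += 1; }
  while (x < 1)  { x *= 2; result -= 1; }

  // Refine the mantissa: each squaring decides one more binary digit of the exponent.
  var weight = 0.5;
  for (var i = 0; i < bits; i++) {
    x = x * x;          // x >= sqrt(2) exactly when x * x >= 2
    if (x >= 2) {
      x /= 2;
      result += weight;
    }
    weight /= 2;
  }
  return result;
}

// log2(27, 30) -> 4.754887..., matching the worked example above.
// For another base b, divide: log_b(x) = log2(x) / log2(b).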
Second method: with a Taylor series.
After the normalization step, you can use the standard series
log(1 + x) = x - x²/2 + x³/3 - ...
which converges for |x| < 1. (Caution: we now have natural logarithms.)
As convergence is too slow for values close to 1, it is advisable to use the above method to reduce to the range [1, √2). Then every new term brings a new bit of accuracy.
Alternatively, you can use the series for log((1 + x)/(1 - x)), which gives a good convergence speed even for the argument 2. See https://fr.wikipedia.org/wiki/Logarithme_naturel#D%C3%A9veloppement_en_s%C3%A9rie
Example: with x = 1.6875 and y = (x - 1)/(x + 1) = 0.2558,
2 x (0.2558 + 0.2558^3/3 + 0.2558^5/5) = 0.5232
lg 27 ~ 4 + 0.5232 / ln 2 = 4.7548
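And here is a matching sketch of the series method, again assuming base 2 and x > 0: it normalizes to [1, 2), applies the series for ln((1 + y)/(1 - y)) with y = (x - 1)/(x + 1), and divides by a tabulated constant ln 2. The function name and term count are arbitrary:

function log2Series(x, terms) {
  var exponent = 0;

  // Normalize x into [1, 2) as in the first method.
  while (x >= 2) { x /= 2; exponent += 1; }
  while (x < 1)  { x *= 2; exponent -= 1; }

  // ln(x) = 2 * (y + y^3/3 + y^5/5 + ...) with y = (x - 1)/(x + 1), so |y| < 1/3 here.
  var y = (x - 1) / (x + 1);
  var y2 = y * y;
  var term = y;
  var lnx = 0;
  for (var n = 1; n < 2 * terms; n += 2) {
    lnx += term / n;
    term *= y2;
  }
  lnx *= 2;

  var LN2 = 0.6931471805599453;   // tabulated constant, ln 2
  return exponent + lnx / LN2;
}

// log2Series(27, 10) -> 4.754887..., matching "lg 27 ~ 4 + 0.5232 / ln 2" above.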