For example, if my function was called getlowestfraction(), this is what I expect it to do:
getlowestfraction(0.5) // returns 1, 2 or something along those lines
Another example:
getlowestfraction(0.125) // returns 1, 8 or something along those lines
Using continued fractions one can efficiently create a (finite or infinite) sequence of fractions hn/kn that are arbitrarily good approximations to a given real number x.
If x is a rational number, the process stops at some point with hn/kn == x. If x is not a rational number, the sequence hn/kn, n = 0, 1, 2, ... converges to x very quickly.
The continued fraction algorithm produces only reduced fractions (numerator and denominator are relatively prime), and the fractions are in some sense the "best rational approximations" to a given real number.
I am not a JavaScript person (I normally program in C), but I have tried to implement the algorithm with the following JavaScript function. Please forgive me if there are silly errors, but I have checked the function and it seems to work correctly.
function getlowestfraction(x0) {
    var eps = 1.0E-15;
    var h, h1, h2, k, k1, k2, a, x;

    x = x0;
    a = Math.floor(x);
    h1 = 1;
    k1 = 0;
    h = a;
    k = 1;

    while (x - a > eps * k * k) {
        x = 1 / (x - a);
        a = Math.floor(x);
        h2 = h1; h1 = h;
        k2 = k1; k1 = k;
        h = h2 + a * h1;
        k = k2 + a * k1;
    }

    return h + "/" + k;
}
The loop stops when the rational approximation is exact or has the given precision eps = 1.0E-15. Of course, you can adjust the precision to your needs. (The while condition is derived from the theory of continued fractions.)
Examples (with the number of iterations of the while-loop):
getlowestfraction(0.5) = 1/2 (1 iteration)
getlowestfraction(0.125) = 1/8 (1 iteration)
getlowestfraction(0.1+0.2) = 3/10 (2 iterations)
getlowestfraction(1.0/3.0) = 1/3 (1 iteration)
getlowestfraction(Math.PI) = 80143857/25510582 (12 iterations)
Note that this algorithm gives 1/3 as approximation for x = 1.0/3.0. Repeated multiplication of x by powers of 10 and canceling common factors would give something like 3333333333/10000000000.
Here is an example of different precisions:
With eps = 1.0E-15 you get getlowestfraction(0.142857) = 142857/1000000.
With eps = 1.0E-6 you get getlowestfraction(0.142857) = 1/7.
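For convenience, here is a small variant of the same function (my own sketch, not part of the original) that takes the precision as a parameter and returns the numerator and denominator separately:
function getLowestFraction(x0, eps = 1.0E-15) {
    var x = x0, a = Math.floor(x);
    var h1 = 1, k1 = 0, h = a, k = 1;
    while (x - a > eps * k * k) {
        x = 1 / (x - a);
        a = Math.floor(x);
        [h1, h] = [h, a * h + h1];  // same convergent recurrence as above
        [k1, k] = [k, a * k + k1];
    }
    return [h, k];
}
// getLowestFraction(0.142857)       -> [142857, 1000000]
// getLowestFraction(0.142857, 1e-6) -> [1, 7]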
You could keep multiplying by ten until you have integer values for your numerator and denominator, then use the answers from this question to reduce the fraction to its simplest terms.
Try this program instead:
function toFrac(number) {
    var fractional = number % 1;
    if (fractional) {
        var real = number - fractional;
        var exponent = String(fractional).length - 2;
        var denominator = Math.pow(10, exponent);
        var mantissa = fractional * denominator;
        var numerator = real * denominator + mantissa;
        var gcd = GCD(numerator, denominator);
        denominator /= gcd;
        numerator /= gcd;
        return [numerator, denominator];
    } else return [number, 1];
}

function GCD(numerator, denominator) {
    do {
        var modulus = numerator % denominator;
        numerator = denominator;
        denominator = modulus;
    } while (modulus);
    return numerator;
}
Then you may use it as follows:
var start = new Date;
var PI = toFrac(Math.PI);
var end = new Date;
alert(PI);
alert(PI[0] / PI[1]);
alert(end - start + " ms");
You can see the demo here: http://jsfiddle.net/MZaK9/1/
Was just fiddling around with code, and got the answer myself:
function getlowestfraction(num) {
    var i = 1;
    var mynum = num;
    var retnum = 0;

    while (true) {
        if (mynum * i % 1 == 0) {
            retnum = mynum * i;
            break;
        }
        // For exceptions, tuned down MAX value a bit
        if (i > 9000000000000000) {
            return false;
        }
        i++;
    }

    return retnum + ", " + i;
}
In case anybody needed it.
P.S: I'm not trying to display my expertise or range of knowledge. I actually did spend a long time in JSFiddle trying to figure this out (well not a really long time anyway).
Suppose the number is x = 0 . ( a_1 a_2 ... a_k ) ( a_1 a_2 ... a_k ) .... for simplicity (keep in mind that the first few digits may not fit the repeating pattern, and that we need a way to figure out what k is). If b is the base, then
b ^ k * x - x = ( b ^ k - 1 ) * x
on one hand, but
b ^ k * x - x = ( a_1 a_2 ... a_k )
(exact, ie this is an integer) on the other hand.
So
x = ( a_1 ... a_k ) / ( b ^ k - 1 )
Now you can use Euclid's algorithm to get the gcd and divide it out to get the reduced fraction.
You would still have to figure out how to determine the repeating sequence. There should be an answer to that question. EDIT - one answer: it's the length of \1 if there's a match to the pattern /([0-9]+)\1+$/ (you might want to throw out the last digit before matching because of rounding). If there's no match, then there's no better "answer" than the "trivial" representation (x*base^precision/base^precision).
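A rough JavaScript sketch of this approach for base 10 (my own; the names and the handling of a non-repeating prefix are my assumptions, and it inherits the rounding caveat above):
function fromRepeating(x) {
    var whole = Math.floor(x);
    var digits = String(x - whole).split(".")[1] || "";
    // Throw out the last digit before matching, because of rounding.
    var trimmed = digits.slice(0, -1);
    var m = trimmed.match(/([0-9]+)\1+$/);
    var num, den;
    if (m) {
        var k = m[1].length;                     // length of the repeating block
        var prefix = trimmed.slice(0, m.index);  // non-repeating leading digits
        den = Math.pow(10, prefix.length) * (Math.pow(10, k) - 1);
        num = parseInt(prefix || "0", 10) * (Math.pow(10, k) - 1) + parseInt(m[1], 10);
    } else {
        // No repetition found: fall back to the "trivial" representation.
        den = Math.pow(10, digits.length);
        num = parseInt(digits || "0", 10);
    }
    num += whole * den;
    var g = gcdInt(num, den);                    // Euclid's algorithm
    return [num / g, den / g];
}
function gcdInt(a, b) { return b ? gcdInt(b, a % b) : a; }
// fromRepeating(1 / 3) -> [1, 3]; fromRepeating(0.125) -> [1, 8]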
N.B. This answer makes some assumptions about what you expect of an answer, which may not be right for your needs. But it's the "textbook" way of reproducing the fraction from a repeating decimal representation - see e.g. here
A very old but golden question, which is at the same time an overlooked one. So I will go and mark that popular one as a duplicate, with hopes that new people end up at the correct place.
The accepted answer of this question is a gem of the internet. No library that I am aware of uses this magnificent technique, and they end up with rationals that are not wrong but silly. Having said that, the accepted answer is not totally correct due to several issues, like:
What exactly is happening there?
Why does it still return '140316103787451/7931944815571' instead of '1769/100' when the input is 17.69?
How do you decide when to stop the while loop?
Now the most important question is: what's happening there, and how come this algorithm is so very efficient?
We must know that any number can also be expressed as a continued fraction. Say you are given 0.5. You can express it like
1
0 + ___ // the 0 here is in fact Math.floor(0.5)
2 // the 2 here is in fact Math.floor(1/0.5)
So say you are given 2.175 then you end up with
1
2 + _______________ // the 2 here is in fact Math.floor(2.175)
1
5 + ___________ // the 5 here is in fact Math.floor(1/0.175 = 5.714285714285714)
1
1 + _______ // the 1 here is in fact Math.floor(1/0.714285714285714 = 1.4)
1
2 + ___ // the 2 here is in fact Math.floor(1/0.4 = 2.5)
2 // the 2 here is in fact Math.floor(1/0.5)
We now have our continued fraction coefficients, [2;5,1,2,2] for 2.175. However, the beauty of this algorithm lies in how it calculates the approximation at once, as we calculate the next continued fraction coefficient, without requiring any further calculations. At this very moment we can compare the currently reached result with the given value and decide to stop or to iterate once more.
So far so good; however, it still doesn't quite make sense, right? Let us go with another solid example. The input value is 3.686635944700461. Now we are going to approach this from Infinity and very quickly converge to the result. So our first rational is 1/0, aka Infinity. We denote this as a fraction with a numerator p of 1 and a denominator q of 0, aka 1/0. The previous approximation would be p_/q_ for the next stage. Let us make it 0 to start with, so p_ is 0 and q_ is 1.
The important part is: once we know the two previous approximations (p, q, p_ and q_), we can then calculate the next coefficient m and also the next p and q to compare with the input. Calculating the coefficient m is as simple as Math.floor(x_), where x_ is the reciprocal of the next fractional part. The next approximation p/q would then be (m * p + p_)/(m * q + q_), and the next p_/q_ would be the previous p/q. (Theorem 2.4 # this paper)
Now, given the above information, any decent programmer can easily work out the following snippet. For the curious: 3.686635944700461 is 800/217 and gets calculated in just 5 iterations by the code below.
function toRational(x) {
    var m = Math.floor(x),
        x_ = 1 / (x - m),
        p_ = 1,
        q_ = 0,
        p = m,
        q = 1;

    if (x === m) return {n: p, d: q};

    while (Math.abs(x - p / q) > Number.EPSILON) {
        m = Math.floor(x_);
        x_ = 1 / (x_ - m);
        [p_, q_, p, q] = [p, q, m * p + p_, m * q + q_];
    }
    return isNaN(x) ? NaN : {n: p, d: q};
}
For practical purposes it would be ideal to store the coefficients in the fraction object as well, so that in the future you may use them to perform CFA (Continued Fraction Arithmetic) among rationals. This way you may avoid huge integers and possible BigInt usage by staying in the CF domain to perform inversion, negation, addition and multiplication operations. Sadly, CFA is a very overlooked topic, but it helps us avoid double precision errors when doing cascaded arithmetic operations on rational values.
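As a small illustration of that idea (my own variation, not part of the answer), toRational can record the coefficients while it converges:
function toRationalWithCF(x) {
    var m = Math.floor(x),
        x_ = 1 / (x - m),
        p_ = 1, q_ = 0,
        p = m, q = 1,
        cf = [m];                    // continued fraction coefficients found so far
    if (x === m) return {n: p, d: q, cf: cf};
    while (Math.abs(x - p / q) > Number.EPSILON) {
        m = Math.floor(x_);
        x_ = 1 / (x_ - m);
        cf.push(m);
        [p_, q_, p, q] = [p, q, m * p + p_, m * q + q_];
    }
    return isNaN(x) ? NaN : {n: p, d: q, cf: cf};
}
// Example: toRationalWithCF(2.175) -> {n: 87, d: 40, cf: [2, 5, 1, 2, ...]}
// (the tail of cf may differ slightly from the ideal [2;5,1,2,2] because of
// floating point rounding, but the resulting fraction is the same)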
Is it possible to seed the random number generator (Math.random) in JavaScript?
No, it is not possible to seed Math.random(). The ECMAScript specification is intentionally vague on the subject, providing no means for seeding nor requiring that browsers even use the same algorithm. So such a function must be provided externally, which thankfully isn't too difficult.
I've implemented a number of good, short and fast pseudorandom number generator (PRNG) functions in plain JavaScript. All of them can be seeded and provide high quality numbers. These are not intended for security purposes - if you need a seedable CSPRNG, look into ISAAC.
First of all, take care to initialize your PRNGs properly. To keep things simple, the generators below have no built-in seed generating procedure, but accept one or more 32-bit numbers as the initial seed state of the PRNG. Similar or sparse seeds (e.g. a simple seed of 1 and 2) have low entropy, and can cause correlations or other randomness quality issues, sometimes resulting in the output having similar properties (such as randomly generated levels being similar). To avoid this, it is best practice to initialize PRNGs with a well-distributed, high entropy seed and/or advancing past the first 15 or so numbers.
There are many ways to do this, but here are two methods. Firstly, hash functions are very good at generating seeds from short strings. A good hash function will generate very different results even when two strings are similar, so you don't have to put much thought into the string. Here's an example hash function:
function cyrb128(str) {
    let h1 = 1779033703, h2 = 3144134277,
        h3 = 1013904242, h4 = 2773480762;
    for (let i = 0, k; i < str.length; i++) {
        k = str.charCodeAt(i);
        h1 = h2 ^ Math.imul(h1 ^ k, 597399067);
        h2 = h3 ^ Math.imul(h2 ^ k, 2869860233);
        h3 = h4 ^ Math.imul(h3 ^ k, 951274213);
        h4 = h1 ^ Math.imul(h4 ^ k, 2716044179);
    }
    h1 = Math.imul(h3 ^ (h1 >>> 18), 597399067);
    h2 = Math.imul(h4 ^ (h2 >>> 22), 2869860233);
    h3 = Math.imul(h1 ^ (h3 >>> 17), 951274213);
    h4 = Math.imul(h2 ^ (h4 >>> 19), 2716044179);
    return [(h1 ^ h2 ^ h3 ^ h4) >>> 0, (h2 ^ h1) >>> 0, (h3 ^ h1) >>> 0, (h4 ^ h1) >>> 0];
}
Calling cyrb128 will produce a 128-bit hash value from a string which can be used to seed a PRNG. Here's how you might use it:
// Create cyrb128 state:
var seed = cyrb128("apples");
// Four 32-bit component hashes provide the seed for sfc32.
var rand = sfc32(seed[0], seed[1], seed[2], seed[3]);
// Only one 32-bit component hash is needed for mulberry32.
var rand = mulberry32(seed[0]);
// Obtain sequential random numbers like so:
rand();
rand();
Note: If you want a slightly more robust 128-bit hash, consider MurmurHash3_x86_128; it's more thorough, but intended for use with large arrays.
Alternatively, simply choose some dummy data to pad the seed with, and advance the generator beforehand a few times (12-20 iterations) to mix the initial state thoroughly. This has the benefit of being simpler, and is often used in reference implementations of PRNGs, but it does limit the number of initial states:
var seed = 1337 ^ 0xDEADBEEF; // 32-bit seed with optional XOR value
// Pad seed with Phi, Pi and E.
// https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number
var rand = sfc32(0x9E3779B9, 0x243F6A88, 0xB7E15162, seed);
for (var i = 0; i < 15; i++) rand();
Note: these PRNG functions produce a positive 32-bit number (0 to 2^32 - 1), which is then converted to a floating-point number between 0 and 1 (0 inclusive, 1 exclusive), equivalent to Math.random(). If you want random numbers of a specific range, read this article on MDN. If you only want the raw bits, simply remove the final division operation.
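For instance, a small helper of my own (not part of the generators below) that maps the [0, 1) output onto an integer range:
// Returns an integer in [min, max) using any of the rand() functions below.
function randInt(rand, min, max) {
    return min + Math.floor(rand() * (max - min));
}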
JavaScript numbers can only represent whole integers up to 53-bit resolution. And when using bitwise operations, this is reduced to 32. Modern PRNGs in other languages often use 64-bit operations, which require shims when porting to JS that can drastically reduce performance. The algorithms here only use 32-bit operations, as they are directly compatible with JS.
Now, onward to the generators. (I maintain the full list with references and license info here)
sfc32 (Simple Fast Counter)
sfc32 is part of the PractRand random number testing suite (which it passes of course). sfc32 has a 128-bit state and is very fast in JS.
function sfc32(a, b, c, d) {
    return function() {
        a >>>= 0; b >>>= 0; c >>>= 0; d >>>= 0;
        var t = (a + b) | 0;
        a = b ^ b >>> 9;
        b = c + (c << 3) | 0;
        c = (c << 21 | c >>> 11);
        d = d + 1 | 0;
        t = t + d | 0;
        c = c + t | 0;
        return (t >>> 0) / 4294967296;
    }
}
You may wonder what the | 0 and >>>= 0 are for. These are essentially 32-bit integer casts, used for performance optimizations. Numbers in JS are basically floats, but during bitwise operations, they switch into a 32-bit integer mode. This mode is processed faster by JS interpreters, but any multiplication or addition will cause it to switch back to a float, resulting in a performance hit.
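A tiny illustration (my own) of what those casts do:
console.log(2147483647 + 1 | 0);  // -2147483648 (wraps around as a signed 32-bit int)
console.log(-1 >>> 0);            // 4294967295  (reinterpreted as unsigned 32-bit)
console.log(2.9 | 0);             // 2           (fraction truncated)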
Mulberry32
Mulberry32 is a simple generator with a 32-bit state, but is extremely fast and has good quality randomness (the author states it passes all tests of the gjrand testing suite and has a full 2^32 period, but I haven't verified).
function mulberry32(a) {
    return function() {
        var t = a += 0x6D2B79F5;
        t = Math.imul(t ^ t >>> 15, t | 1);
        t ^= t + Math.imul(t ^ t >>> 7, t | 61);
        return ((t ^ t >>> 14) >>> 0) / 4294967296;
    }
}
I would recommend this if you just need a simple but decent PRNG and don't need billions of random numbers (see Birthday problem).
xoshiro128**
As of May 2018, xoshiro128** is the new member of the Xorshift family, by Vigna & Blackman (professor Vigna was also responsible for the Xorshift128+ algorithm powering most Math.random implementations under the hood). It is the fastest generator that offers a 128-bit state.
function xoshiro128ss(a, b, c, d) {
    return function() {
        var t = b << 9, r = a * 5; r = (r << 7 | r >>> 25) * 9;
        c ^= a; d ^= b;
        b ^= c; a ^= d; c ^= t;
        d = d << 11 | d >>> 21;
        return (r >>> 0) / 4294967296;
    }
}
The authors claim it passes randomness tests well (albeit with caveats). Other researchers have pointed out that it fails some tests in TestU01 (particularly LinearComp and BinaryRank). In practice, it should not cause issues when floats are used (such as in these implementations), but may cause issues if relying on the raw lowest order bit.
JSF (Jenkins' Small Fast)
This is JSF or 'smallprng' by Bob Jenkins (2007), who also made ISAAC and SpookyHash. It passes PractRand tests and should be quite fast, although not as fast as sfc32.
function jsf32(a, b, c, d) {
    return function() {
        a |= 0; b |= 0; c |= 0; d |= 0;
        var t = a - (b << 27 | b >>> 5) | 0;
        a = b ^ (c << 17 | c >>> 15);
        b = c + d | 0;
        c = d + t | 0;
        d = a + t | 0;
        return (d >>> 0) / 4294967296;
    }
}
No, it is not possible to seed Math.random(), but it's fairly easy to write your own generator, or better yet, use an existing one.
Check out: this related question.
Also, see David Bau's blog for more information on seeding.
NOTE: Despite (or rather, because of) succinctness and apparent elegance, this algorithm is by no means a high-quality one in terms of randomness. Look for e.g. those listed in this answer for better results.
(Originally adapted from a clever idea presented in a comment to another answer.)
var seed = 1;
function random() {
    var x = Math.sin(seed++) * 10000;
    return x - Math.floor(x);
}
You can set seed to be any number, just avoid zero (or any multiple of Math.PI).
The elegance of this solution, in my opinion, comes from the lack of any "magic" numbers (besides 10000, which represents about the minimum amount of digits you must throw away to avoid odd patterns - see results with values 10, 100, 1000). Brevity is also nice.
It's a bit slower than Math.random() (by a factor of 2 or 3), but I believe it's about as fast as any other solution written in JavaScript.
No, but here's a simple pseudorandom generator, an implementation of Multiply-with-carry I adapted from Wikipedia (has been removed since):
var m_w = 123456789;
var m_z = 987654321;
var mask = 0xffffffff;
// Takes any integer
function seed(i) {
    m_w = (123456789 + i) & mask;
    m_z = (987654321 - i) & mask;
}

// Returns number between 0 (inclusive) and 1.0 (exclusive),
// just like Math.random().
function random() {
    m_z = (36969 * (m_z & 65535) + (m_z >> 16)) & mask;
    m_w = (18000 * (m_w & 65535) + (m_w >> 16)) & mask;
    var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
    result /= 4294967296;
    return result;
}
Antti Sykäri's algorithm is nice and short. I initially made a variation that replaced JavaScript's Math.random when you call Math.seed(s), but then Jason commented that returning the function would be better:
Math.seed = function(s) {
    return function() {
        s = Math.sin(s) * 10000;
        return s - Math.floor(s);
    };
};
// usage:
var random1 = Math.seed(42);
var random2 = Math.seed(random1());
Math.random = Math.seed(random2());
This gives you another functionality that JavaScript doesn't have: multiple independent random generators. That is especially important if you want to have multiple repeatable simulations running at the same time.
Please see Pierre L'Ecuyer's work going back to the late 1980s and early 1990s. There are others as well. Creating a (pseudo) random number generator on your own, if you are not an expert, is pretty dangerous, because there is a high likelihood of either the results not being statistically random or in having a small period. Pierre (and others) have put together some good (pseudo) random number generators that are easy to implement. I use one of his LFSR generators.
https://www.iro.umontreal.ca/~lecuyer/myftp/papers/handstat.pdf
Combining some of the previous answers, this is the seedable random function you are looking for:
Math.seed = function(s) {
    var mask = 0xffffffff;
    var m_w = (123456789 + s) & mask;
    var m_z = (987654321 - s) & mask;

    return function() {
        m_z = (36969 * (m_z & 65535) + (m_z >>> 16)) & mask;
        m_w = (18000 * (m_w & 65535) + (m_w >>> 16)) & mask;

        var result = ((m_z << 16) + (m_w & 65535)) >>> 0;
        result /= 4294967296;
        return result;
    }
}
var myRandomFunction = Math.seed(1234);
var randomNumber = myRandomFunction();
It's not possible to seed the builtin Math.random function, but it is possible to implement a high quality RNG in Javascript with very little code.
Javascript numbers are 64-bit floating point precision, which can represent all positive integers less than 2^53. This puts a hard limit on our arithmetic, but within these limits you can still pick parameters for a high quality Lehmer / LCG random number generator.
function RNG(seed) {
    var m = 2**35 - 31;
    var a = 185852;
    var s = seed % m;
    return function () {
        return (s = s * a % m) / m;
    };
}
Math.random = RNG(Date.now())
If you want even higher quality random numbers, at the cost of being ~10 times slower, you can use BigInt for the arithmetic and pick parameters where m is just able to fit in a double.
function RNG(seed) {
    var m_as_number = 2**53 - 111;
    var m = 2n**53n - 111n;
    var a = 5667072534355537n;
    var s = BigInt(seed) % m;
    return function () {
        return Number(s = s * a % m) / m_as_number;
    };
}
See this paper by Pierre l'Ecuyer for the parameters used in the above implementations:
https://www.ams.org/journals/mcom/1999-68-225/S0025-5718-99-00996-5/S0025-5718-99-00996-5.pdf
And whatever you do, avoid all the other answers here that use Math.sin!
To write your own pseudo random generator is quite simple.
The suggestion of Dave Scotese is useful but, as pointed out by others, it is not quite uniformly distributed.
However, it is not because of the integer arguments of sin. It's simply because of the range of sin, which happens to be a one-dimensional projection of a circle. If you took the angle of the circle instead, it would be uniform.
So instead of sin(x), use arg(exp(i * x)) / (2 * PI).
If you don't like the linear order, mix it up a bit with xor. The actual factor doesn't matter that much either.
To generate n pseudo random numbers one could use the code:
function psora(k, n) {
    var r = Math.PI * (k ^ n);
    return r - Math.floor(r);
}
n = 42; for(k = 0; k < n; k++) console.log(psora(k, n))
Please also note that you cannot use pseudo random sequences when real entropy is needed.
Many people who need a seedable random-number generator in Javascript these days are using David Bau's seedrandom module.
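A minimal usage sketch (mine; check the seedrandom README for the authoritative API):
// npm install seedrandom
var seedrandom = require('seedrandom');
var rng = seedrandom('hello.');   // any string works as the seed
console.log(rng());               // same seed -> same sequence of numbers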
For Math.random, no, but the ran library solves this. It has almost all distributions you can imagine and supports seeded random number generation. Example:
ran.core.seed(0)
myDist = new ran.Dist.Uniform(0, 1)
samples = myDist.sample(1000)
Here's an adapted version of the Jenkins hash, borrowed from here:
export function createDeterministicRandom(): () => number {
    let seed = 0x2F6E2B1;
    return function() {
        // Robert Jenkins’ 32 bit integer hash function
        seed = ((seed + 0x7ED55D16) + (seed << 12)) & 0xFFFFFFFF;
        seed = ((seed ^ 0xC761C23C) ^ (seed >>> 19)) & 0xFFFFFFFF;
        seed = ((seed + 0x165667B1) + (seed << 5)) & 0xFFFFFFFF;
        seed = ((seed + 0xD3A2646C) ^ (seed << 9)) & 0xFFFFFFFF;
        seed = ((seed + 0xFD7046C5) + (seed << 3)) & 0xFFFFFFFF;
        seed = ((seed ^ 0xB55A4F09) ^ (seed >>> 16)) & 0xFFFFFFFF;
        return (seed & 0xFFFFFFF) / 0x10000000;
    };
}
You can use it like this:
const deterministicRandom = createDeterministicRandom()
deterministicRandom()
// => 0.9872818551957607
deterministicRandom()
// => 0.34880331158638
No, as others have said, it is not possible to seed Math.random(),
but you can install an external package that makes provision for that. I used the random-seed package, which can be installed using this command:
npm i random-seed
The following example is taken from the package documentation.
var seed = 'Hello World',
    rand1 = require('random-seed').create(seed),
    rand2 = require('random-seed').create(seed);
console.log(rand1(100), rand2(100));
Follow the link for the documentation: https://www.npmjs.com/package/random-seed
SIN(id + seed) is a very interesting replacement for RANDOM functions that cannot be seeded, as in SQLite:
https://stackoverflow.com/a/75089040/7776828
Most of the answers here produce biased results. So here's a tested function based on the seedrandom library from GitHub:
!function(f,a,c){var s,l=256,p="random",d=c.pow(l,6),g=c.pow(2,52),y=2*g,h=l-1;function n(n,t,r){function e(){for(var n=u.g(6),t=d,r=0;n<g;)n=(n+r)*l,t*=l,r=u.g(1);for(;y<=n;)n/=2,t/=2,r>>>=1;return(n+r)/t}var o=[],i=j(function n(t,r){var e,o=[],i=typeof t;if(r&&"object"==i)for(e in t)try{o.push(n(t[e],r-1))}catch(n){}return o.length?o:"string"==i?t:t+"\0"}((t=1==t?{entropy:!0}:t||{}).entropy?[n,S(a)]:null==n?function(){try{var n;return s&&(n=s.randomBytes)?n=n(l):(n=new Uint8Array(l),(f.crypto||f.msCrypto).getRandomValues(n)),S(n)}catch(n){var t=f.navigator,r=t&&t.plugins;return[+new Date,f,r,f.screen,S(a)]}}():n,3),o),u=new m(o);return e.int32=function(){return 0|u.g(4)},e.quick=function(){return u.g(4)/4294967296},e.double=e,j(S(u.S),a),(t.pass||r||function(n,t,r,e){return e&&(e.S&&v(e,u),n.state=function(){return v(u,{})}),r?(c[p]=n,t):n})(e,i,"global"in t?t.global:this==c,t.state)}function m(n){var t,r=n.length,u=this,e=0,o=u.i=u.j=0,i=u.S=[];for(r||(n=[r++]);e<l;)i[e]=e++;for(e=0;e<l;e++)i[e]=i[o=h&o+n[e%r]+(t=i[e])],i[o]=t;(u.g=function(n){for(var t,r=0,e=u.i,o=u.j,i=u.S;n--;)t=i[e=h&e+1],r=r*l+i[h&(i[e]=i[o=h&o+t])+(i[o]=t)];return u.i=e,u.j=o,r})(l)}function v(n,t){return t.i=n.i,t.j=n.j,t.S=n.S.slice(),t}function j(n,t){for(var r,e=n+"",o=0;o<e.length;)t[h&o]=h&(r^=19*t[h&o])+e.charCodeAt(o++);return S(t)}function S(n){return String.fromCharCode.apply(0,n)}if(j(c.random(),a),"object"==typeof module&&module.exports){module.exports=n;try{s=require("crypto")}catch(n){}}else"function"==typeof define&&define.amd?define(function(){return n}):c["seed"+p]=n}("undefined"!=typeof self?self:this,[],Math);
function randIntWithSeed(seed, max = 1) {
    /* Returns a random integer between [0, max], including zero and max;
       seed can be either a string or an integer. */
    return Math.round(new Math.seedrandom('seed' + seed)() * max);
}
test for true randomness of this code: https://es6console.com/kkjkgur2/
There are plenty of good answers here but I had a similar issue with the additional requirement that I would like portability between Java's random number generator and whatever I ended up using in JavaScript.
I found the java-random package
These two pieces of code had identical output assuming the seed is the same:
Java:
Random randomGenerator = new Random(seed);
int randomInt;
for (int i = 0; i < 10; i++) {
    randomInt = randomGenerator.nextInt(50);
    System.out.println(randomInt);
}
JavaScript:
let Random = require('java-random');
let rng = new Random(seed);
for (let i = 0; i < 10; i++) {
    let val = rng.nextInt(50);
    console.log(val);
}
I have written a function that returns a seeded random number. It uses Math.sin to produce a long random number and uses the seed to pick digits from that.
Use:
seedRandom("k9]:2#", 15)
It will return your seeded number.
The first parameter is any string value: your seed.
The second parameter is how many digits to return.
function seedRandom(inputSeed, lengthOfNumber) {
    var output = "";
    var seed = inputSeed.toString();
    var newSeed = 0;
    var characterArray = ['0','1','2','3','4','5','6','7','8','9','a','b','c','d','e','f','g','h','i','j','k','l','m','n','o','p','q','r','s','t','u','v','w','y','x','z','A','B','C','D','E','F','G','H','I','J','K','L','M','N','O','P','Q','U','R','S','T','U','V','W','X','Y','Z','!','#','#','$','%','^','&','*','(',')',' ','[','{',']','}','|',';',':',"'",',','<','.','>','/','?','`','~','-','_','=','+'];
    var longNum = "";
    var counter = 0;
    var accumulator = 0;

    for (var i = 0; i < seed.length; i++) {
        var a = seed.length - (i + 1);
        for (var x = 0; x < characterArray.length; x++) {
            var tempX = x.toString();
            var lastDigit = tempX.charAt(tempX.length - 1);
            var xOutput = parseInt(lastDigit);
            addToSeed(characterArray[x], xOutput, a, i);
        }
    }

    function addToSeed(character, value, a, i) {
        if (seed.charAt(i) === character) { newSeed = newSeed + value * Math.pow(10, a); }
    }

    newSeed = newSeed.toString();
    var copy = newSeed;

    for (var i = 0; i < lengthOfNumber * 9; i++) {
        newSeed = newSeed + copy;
        var x = Math.sin(20982 + i) * 10000;
        var y = Math.floor((x - Math.floor(x)) * 10);
        longNum = longNum + y.toString();
    }

    for (var i = 0; i < lengthOfNumber; i++) {
        output = output + longNum.charAt(accumulator);
        counter++;
        accumulator = accumulator + parseInt(newSeed.charAt(counter));
    }

    return output;
}
A simple approach for a fixed seed:
function fixedrandom(p) {
    const seed = 43758.5453123;
    return (Math.abs(Math.sin(p)) * seed) % 1;
}
In PHP, there is the function srand(seed), which generates a fixed random value for a particular seed.
But in JS, there is no such built-in function.
However, we can write a simple and short function.
Step 1: Choose some seed (a fixed number).
var seed = 100;
The number should be a positive integer greater than 1; further explanation in Step 2.
Step 2: Perform Math.sin() on the seed; it will give the sin value of that number. Store this value in a variable x.
var x;
x = Math.sin(seed); // Will return a fractional value between -1 & 1 (ex. 0.4059..)
The sin() method returns a fractional value between -1 and 1, and we don't want a negative value; therefore, in the first step choose a number greater than 1.
Step 3: The returned value is a fractional value between -1 and 1, so multiply this value by 10 to make it greater than 1.
x = x * 10; // 10 for a single-digit number
Step 4: Multiply the value by 10 for additional digits.
x = x * 10; // Will give a value between 10 and 99, OR
x = x * 100; // Will give a value between 100 and 999
Multiply as per the required number of digits.
The result will be a decimal.
Step 5: Remove the value after the decimal point with Math.round().
x = Math.round(x); // This will give an integer value.
Step 6: Turn negative values into positive (if any) with Math.abs.
x = Math.abs(x); // Convert negative values into positive (if any)
End of explanation. Final code:
var seed = 111; // Any Number greater than 1
var digit = 10 // 1 => single digit, 10 => 2 Digits, 100 => 3 Digits and so. (Multiple of 10)
var x; // Initialize the Value to store the result
x = Math.sin(seed); // Perform Mathematical Sin Method on Seed.
x = x * 10; // Convert that number into integer
x = x * digit; // Number of Digits to be included
x = Math.round(x); // Remove Decimals
x = Math.abs(x); // Convert Negative Number into Positive
Clean and Optimized Functional Code
function random_seed(seed, digit = 1) {
    var x = Math.abs(Math.round(Math.sin(seed++) * 10 * digit));
    return x;
}
Then call this function using:
random_seed(any_number, number_of_digits)
any_number is required and should be greater than 1. number_of_digits is an optional parameter; if nothing is passed, 1 digit will be returned.
random_seed(555); // 1 Digit
random_seed(234, 1); // 1 Digit
random_seed(7895656, 1000); // 4 Digit
For a number between 0 and 100.
Number.parseInt(Math.floor(Math.random() * 100))
I'm trying to solve all the lessons on codility but I failed to do so on the following problem: Ladder by codility
I've searched all over the internet and I'm not finding an answer that satisfies me, because no one answers why the max variable impacts the result so much.
So, before posting the code, I'll explain the thinking.
By looking at it, I didn't need much time to understand that the total number of combinations is a Fibonacci number, and by removing the 0 from the Fibonacci array, I'd find the answer really fast.
Now, afterwards, they tell us that we should return the number of combinations modulo 2^B[i].
So far so good, and I decided to submit it without the var max; then I got a score of 37%. I searched all over the internet, and the 100% result was similar to mine, but they added that max = Math.pow(2,30).
Can anyone explain to me how and why that max influences so much the score?
My Code:
// Powers 2 to num
function pow(num) {
    return Math.pow(2, num);
}

// Returns an array with all Fibonacci numbers except for 0
function fibArray(num) {
    // const max = pow(30); -> Adding this max to the Fibonacci array makes the answer be 100%
    const arr = [0, 1, 1];
    let current = 2;
    while (current <= num) {
        current++;
        // next = arr[current-1] + arr[current-2] % max;
        next = arr[current - 1] + arr[current - 2]; // Without this max it's 30 %
        arr.push(next);
    }
    arr.shift(); // remove 0
    return arr;
}

function solution(A, B) {
    let f = fibArray(A.length + 1);
    let res = new Array(A.length);
    for (let i = 0; i < A.length; ++i) {
        res[i] = f[A[i]] % (pow(B[i]));
    }
    return res;
}
console.log(solution([4,4,5,5,1],[3,2,4,3,1])); //5,1,8,0,1
// Note that the console.log wont differ in this solution having max set or not.
// Running the exercise on Codility shows the full log with all details
// of where it passed and where it failed.
The limits for input parameters are:
Assume that:
L is an integer within the range [1..50,000];
each element of array A is an integer within the range [1..L];
each element of array B is an integer within the range [1..30].
So the array f in fibArray can be 50,001 long.
Fibonacci numbers grow exponentially; according to this page, the 50,000th Fib number has over 10,000 digits.
Javascript does not have built-in support for arbitrary precision integers, and even doubles only offer ~14 significant figures of precision. So with your code as posted (without max), you will get "garbage" values for any significant value of L. This is why you only got 30%.
But why is max necessary? Modulo math tells us that:
(a + b) % c = ([a % c] + [b % c]) % c
So by applying % max to the iterative calculation step arr[current-1] + arr[current-2], every element in fibArray becomes its corresponding Fib number mod max, without any variable exceeding the value of max (or built-in integer types) at any time:
fibArray[2] = (fibArray[1] + fibArray[0]) % max = (F1 + F0) % max = F2 % max
fibArray[3] = (F2 % max + F1) % max = (F2 + F1) % max = F3 % max
fibArray[4] = (F3 % max + F2 % max) % max = (F3 + F2) % max = F4 % max
and so on ...
(Fn is the n-th Fib number)
Note that as B[i] will never exceed 30, pow(2, B[i]) <= max; therefore, since max is always divisible by pow(2, B[i]), applying % max does not affect the final result.
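For reference, a minimal JavaScript sketch of the corrected fibArray (essentially the commented-out max lines from the question put back in; my own naming):
function fibArrayMod(num) {
    const max = Math.pow(2, 30);
    const arr = [1, 1];                            // the 0 is already dropped
    for (let current = 2; current <= num; current++) {
        arr.push((arr[current - 1] + arr[current - 2]) % max);
    }
    return arr;
}
// Every stored value equals the true Fibonacci number mod 2^30, so it stays
// exactly representable as a double, and the final f[A[i]] % pow(2, B[i]) is
// unaffected because pow(2, B[i]) divides 2^30.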
Here is a Python 100% answer that I hope offers an explanation :-)
In a nutshell: the modulus operator % is equivalent to a 'bitwise and' & for certain numbers.
e.g. any number % 10 gives the rightmost digit.
284 % 10 = 4
1994 % 10 = 4
FACTS OF LIFE:
for powers of 2 -> X % Y is equivalent to X & ( Y - 1 )
precomputing (2**i)-1 for i in range(1, 31) is faster than computing everything in B when super large arrays are given as args for this particular lesson.
Thus fib(A[i]) & pb[B[i]] will be faster to compute than an X % Y style operation.
https://app.codility.com/demo/results/trainingEXWWGY-UUR/
And for completeness the code is here.
https://github.com/niall-oc/things/blob/master/codility/ladder.py
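To illustrate the % versus & point in JavaScript (my own quick check, not part of the linked solution):
// For powers of two, x % y equals x & (y - 1):
var x = 1994;
console.log(x % 8, x & 7);        // 2 2
console.log(x % 1024, x & 1023);  // 970 970
// Precomputed masks 2**b - 1 for b = 0..30, as suggested above:
var pb = Array.from({length: 31}, (_, b) => 2 ** b - 1);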
Here is my explanation and solution in C++:
Compute the first L Fibonacci numbers. Each calculation needs modulo 2^30, because the 50000th Fibonacci number is so big that it cannot be stored even in a long double. Since INT_MAX is 2^31 - 1, the sum of two numbers previously reduced modulo 2^30 cannot exceed that. Therefore, we do not need a bigger type and/or casting.
Go through the arrays executing the lookup and the modulos. We can be sure this gives the correct result, since modulo 2^30 does not take away any information needed later. E.g. modulo 100 does not take away any information for a subsequent modulo 10.
vector<int> solution(vector<int> &A, vector<int> &B)
{
    const int L = A.size();
    vector<int> fibonacci_numbers(L, 1);
    if (L > 1)
        fibonacci_numbers[1] = 2;
    static const int pow_2_30 = pow(2, 30);

    for (int i = 2; i < L; ++i) {
        fibonacci_numbers[i] = (fibonacci_numbers[i - 1] + fibonacci_numbers[i - 2]) % pow_2_30;
    }

    vector<int> consecutive_answers(L, 0);
    for (int i = 0; i < L; ++i) {
        consecutive_answers[i] = fibonacci_numbers[A[i] - 1] % static_cast<int>(pow(2, B[i]));
    }
    return consecutive_answers;
}
I'd like to calculate the mathematical logarithm "by hand"...
... log_b(x), where b stands for the logarithmBase and x stands for the value.
Some examples (See Log calculator):
The base 2 logarithm of 10 is 3.3219280949
The base 5 logarithm of 15 is 1.6826061945
...
However - I do not want to use an already implemented function call like Math.ceil, Math.log, Math.abs, ..., because I want a clean native solution that just deals with +-*/ and some loops.
This is the code I got so far:
function myLog(base, x) {
    let result = 0;
    do {
        x /= base;
        result++;
    } while (x >= base);
    return result;
}
let x = 10,
base = 2;
let result = myLog(base, x)
console.log(result)
But it doesn't seem like the above method is the right way to calculate the logarithm to base N - so any help with fixing this code would be really appreciated.
Thanks a million in advance jonas.
You could use a recursive approach:
const log = (base, n, depth = 20, curr = 64, precision = curr / 2) =>
    depth <= 0 || base ** curr === n
        ? curr
        : log(base, n, depth - 1, base ** curr > n ? curr - precision : curr + precision, precision / 2);
Usable as:
log(2, 4) // 2
log(2, 10) // 3.32196044921875
You can influence the precision by changing depth, and you can change the range of accepted values (currently ~180) with curr.
How it works:
If we already reached the wanted depth or if we already found an accurate value:
depth <= 0 || base ** curr === n
Then it just returns curr and is done. Otherwise it checks if the logarithm we want to find is lower or higher than the current one:
base ** curr > n
It will then continue searching for a value recursively by
1) lowering depth by one
2) increasing / decreasing curr by the current precision
3) lowering the precision
If you hate functional programming, here is an imperative version:
function log(base, n, depth = 20) {
    let curr = 64, precision = curr / 2;

    while (depth-- > 0 && base ** curr !== n) {
        if (base ** curr > n) {
            curr -= precision;
        } else {
            curr += precision;
        }
        precision /= 2;
    }
    return curr;
}
By the way, the algorithm I used is called "logarithmic search", commonly known as "binary search".
First method: with a table of constants.
First normalize the argument to a number between 1 and 2 (this is achieved by multiplying or dividing by 2 as many times as necessary - keep a count of these operations). For efficiency, if the values can span many orders of magnitude, instead of equal factors you can use a squared sequence, 2, 4, 16, 256..., followed by a dichotomic search when you have bracketed the value.
F.i. if the exponent 16=2^4 works but not 256=2^8, you try 2^6, then one of 2^5 and 2^7 depending on the outcome. If the final exponent is 2^d, the linear search takes O(d) operations and the geometric/dichotomic search only O(log d). To avoid divisions, it is advisable to keep a table of negative powers.
After normalization, you need to refine the mantissa. Compare the value to √2, and if larger multiply by 1/√2. This brings the value between 1 and √2. Then compare to √√2 and so on. As you go, you add the weights 1/2, 1/4, ... to the exponent when a comparison returns greater.
In the end, the exponent is the base 2 logarithm.
Example: lg 27
27 = 2^4 x 1.6875
1.6875 > √2 = 1.4142 ==> 27 = 2^4.5 x 1.1933
1.1933 > √√2 = 1.1892 ==> 27 = 2^4.75 x 1.0034
1.0034 < √√√2 = 1.0905 ==> 27 = 2^4.75 x 1.0034
...
The true value is 4.7549.
Note that you can work with other bases, in particular e. In some contexts, base 2 allows shortcuts, this is why I used it. Of course, the square roots should be tabulated.
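A rough JavaScript sketch of this first method (my own; it uses Math.sqrt only to build the table of square roots of 2, which, as noted above, would be precomputed constants in a fully "by hand" version):
function lg2(x, steps = 30) {
    // Assumes x > 0. Normalize x into [1, 2), counting powers of 2
    // (the integer part of the base-2 logarithm).
    let exponent = 0;
    while (x >= 2) { x /= 2; exponent += 1; }
    while (x < 1)  { x *= 2; exponent -= 1; }
    // Refine the mantissa against sqrt(2), sqrt(sqrt(2)), ... adding the
    // weights 1/2, 1/4, ... whenever the comparison says "greater".
    let root = Math.sqrt(2);      // would come from a precomputed table
    let weight = 0.5;
    for (let i = 0; i < steps; i++) {
        if (x >= root) { x /= root; exponent += weight; }
        root = Math.sqrt(root);   // next table entry
        weight /= 2;
    }
    return exponent;
}
// lg2(27) ≈ 4.7549, matching the worked example above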
Second method: with a Taylor series.
After the normalization step, you can use the standard series
log(1 + x) = x - x²/2 + x³/3 - ...
which converges for |x| < 1. (Caution: we now have natural logarithms.)
As convergence is too slow for values close to 1, it is advisable to use the above method to reduce to the range [1, √2). Then every new term brings a new bit of accuracy.
Alternatively, you can use the series for log((1 + x)/(1 - x)), which gives a good convergence speed even for the argument 2. See https://fr.wikipedia.org/wiki/Logarithme_naturel#D%C3%A9veloppement_en_s%C3%A9rie
Example: with x = 1.6875, y = (x - 1)/(x + 1) = 0.2558 and
2 x (0.2558 + 0.2558³/3 + 0.2558^5/5) = 0.5232
lg 27 ~ 4 + 0.5232 / ln 2 = 4.7548
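A small JavaScript sketch of this series (my own), assuming the argument has already been reduced as described; it computes the natural logarithm of the reduced value:
// ln((1 + y) / (1 - y)) = 2 (y + y^3/3 + y^5/5 + ...), with y = (x - 1) / (x + 1)
function lnSeries(x, terms = 20) {
    const y = (x - 1) / (x + 1);
    let sum = 0, power = y;
    for (let k = 0; k < terms; k++) {
        sum += power / (2 * k + 1);   // add the term y^(2k+1) / (2k+1)
        power *= y * y;
    }
    return 2 * sum;
}
// Reproducing the example: 4 + lnSeries(1.6875) / Math.LN2 ≈ 4.7549 (lg 27)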