Shuffling Algorithm Fair? (JavaScript)

I came across the algorithm below for shuffling an array in Javascript. It seems to differ from the Fisher–Yates shuffle in that the range of available "swaps" increases with the for-loop counter. This appears to be the opposite of how the Fisher-Yates version behaves. I'm curious as to whether this is a valid algorithm. Is it the Fisher-Yates in disguise? Is it biased?
If anyone could provide some code to test the frequency of the permutations it generates that would be a bonus.
<script>
var shuffle = function (myArray) {
    var random = 0;
    var temp = 0;
    console.log(myArray);
    for (var i = 1; i < myArray.length; i++) {
        random = Math.round(Math.random() * i);
        console.log(random + '\n');
        temp = myArray[i];
        myArray[i] = myArray[random];
        myArray[random] = temp;
        console.log(myArray);
    }
    return myArray;
}
var arr = [1, 2, 3, 4];
shuffle(arr);
</script>

No, it's not a fair shuffle.
Math.random() * i is a uniform random floating point value between 0 and i, but Math.round(Math.random() * i) does not pick the integers between 0 and i equally. For example, when i is 2, values in the range [0, 0.5) round to 0, values in [0.5, 1.5) round to 1, and values in [1.5, 2) round to 2. That means instead of picking 0, 1 and 2 equally often, 1 is picked with probability 0.5, and 0 and 2 are picked with probability 0.25 each.
Math.floor(Math.random() * (i + 1)) would be correct.
You can verify this statistically: shuffle the array [0, 1, 2] 10000 times and see how often 2 remains at the end of the array. It should be around 3333, but because of the bias it'll be more like 2500.
Other than that, the algorithm is correct and could be described as a Fisher-Yates in reverse. You can prove it correct by induction. The base case of n=1 is trivial. The induction step is also relatively easy: if you've got a uniformly random permutation of n items, and then insert the n+1'th item at a uniformly random index from 0 to n+1, then you've got a random permutation of n+1 items.
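To address the permutation-frequency test requested in the question, here is a sketch of a test harness (not part of the original code). It counts how often each permutation of [0, 1, 2] appears, using the corrected Math.floor(Math.random() * (i + 1)) draw; swap Math.round(Math.random() * i) back in to see the skew.

```javascript
// Count how often each permutation of [0, 1, 2] shows up over many runs.
// This uses the corrected index draw; replace it with
// Math.round(Math.random() * i) to observe the bias described above.
function shuffle(myArray) {
    for (var i = 1; i < myArray.length; i++) {
        var random = Math.floor(Math.random() * (i + 1));
        var temp = myArray[i];
        myArray[i] = myArray[random];
        myArray[random] = temp;
    }
    return myArray;
}

var counts = {};
var runs = 60000;
for (var n = 0; n < runs; n++) {
    var key = shuffle([0, 1, 2]).join('');
    counts[key] = (counts[key] || 0) + 1;
}
console.log(counts); // each of the 6 permutations should land near 10000
```

With the biased Math.round version, the counts for the permutations ending in the original last element drop to roughly a quarter of the runs, matching the 2500-out-of-10000 figure mentioned above.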

Related

Breaking numbers down into components

I have two arrays and for each number in the first array I need to find the largest number from the second array that fits into that number and break the first number into its components. Since 25 from the second array is the largest number that fits into 80 of the first, I need to transform 80 into two numbers - 75, 5. Likewise for 6 and 5 the result should be 5, 1. So the end result should be an array [75, 5, 5, 1]
let arr = [80, 6]
let arr2 = [25, 5]
for (let x of arr) {
    for (let y of arr2) {
        if (x / y > 1 && x / y < 4) {
            let mlt = Math.floor(x / y)
            largestFit = y * mlt
            arr.splice(arr.indexOf(x), 1, largestFit)
        }
    }
}
console.log(arr)
The code above gives [75, 5] so thought I could add one more splice operation to insert the remainders, but doing this arr.splice(arr.indexOf(x + 1), 0, x - largestFit) just crashes the code editor. Why isn't this working and what is the solution? Thank you.
It is not advisable to splice an array while it is being iterated; that is why your loop gets stuck sometimes.
Instead build a new array, so it doesn't affect the iteration on the input array.
If you first sort the second array in descending order, you can then find the first value in that array that fits the amount, and be sure it is the greatest one. For sorting numerically in descending order, you can use sort with a callback function.
Once you have found the value to subtract, you can use the remainder operator (%) to determine what is left over after this subtraction.
function breakDown(amounts, coins) {
    // Get a sorted copy (descending) of denominations:
    coins = [...coins].sort((a, b) => b - a);
    const result = []; // The results are stored here
    for (let amount of amounts) {
        for (let coin of coins) {
            if (coin <= amount) {
                result.push(amount - amount % coin);
                amount %= coin;
            }
        }
        if (amount) result.push(amount); // remainder
    }
    return result;
}
// Example 1
console.log(breakDown([80, 6], [25, 5]));
// Example 2
console.log(breakDown([100, 90, 6], [100, 75, 15, 5, 1]));
Explanations
The coins are sorted in descending order so that when we search for a fitting coin from left to right, we can be sure that if we find one, it will be the greatest one that fits, not just any. So for instance, if our amount is 7 and we have coins 2 and 5, we don't want to use coin 2 just yet -- we want to use coin 5 first. So if we sort those coins into [5, 2], and then start looking for the first coin that is smaller than our amount, we will be sure to find 5 first. The result would not be as expected if we had found 2 first.
We can calculate the remainder of a division with the % operator, and there is a shortcut for when we want to assign that remainder back to the amount: it is the %= operator. amount %= coin is short for amount = amount % coin.
When the inner loop completes, it might be that amount is not zero, i.e. there is still an amount that remains that cannot be consumed by any available coin. In that case we want to still have that remainder in the result array, so we push it.
Often, the amount will already be zero when the loop ends. This is ensured when the smallest coin is the smallest unit of amount one can expect. For instance, if the amount is expressed as an integer and the smallest coin is 1, then there will never be any remainder left when the inner loop has completed. If however the smallest coin were 2, and we were left with an amount of 1, then there would be no way to reduce that amount to zero, so after the loop ends we could be stuck with that remainder. And so we have this code to deal with that:
if (amount) result.push(amount)
Floating point
Be careful with using non-integers: floating point arithmetic does not always lead to expected results, because numbers like 0.1 cannot be accurately represented in floating point. You can end up with a non-zero amount after the inner loop finishes, when zero would have been expected. That amount will be tiny like 1e-15, and really indicates there was such an inaccuracy at play.
When calculating with monetary amounts, it is common practice to do that in number of cents, so to make all calculations based on integers. This will give reliable results.
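As a sketch of that advice (breakDownCents is an illustrative name, not from the answer above): convert everything to integer cents, run the same greedy loop, and convert back at the end.

```javascript
// Hypothetical variant of breakDown that does the same greedy loop in
// integer cents to sidestep floating point drift.
function breakDownCents(amounts, coins) {
    const toCents = x => Math.round(x * 100);
    const sorted = [...coins].map(toCents).sort((a, b) => b - a);
    const result = [];
    for (let amount of amounts.map(toCents)) {
        for (const coin of sorted) {
            if (coin <= amount) {
                result.push((amount - amount % coin) / 100);
                amount %= coin;
            }
        }
        if (amount) result.push(amount / 100); // remainder, back in euros
    }
    return result;
}

console.log(breakDownCents([0.80, 0.06], [0.25, 0.05]));
// [0.75, 0.05, 0.05, 0.01]
```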
I found the issue. After the first splice() operation indexOf(x) was returning -1, since x's are being replaced, so the solution is to assign indexOf(x) to a variable and use that variable for both splice() operations.
let arr = [80, 6]
let arr2 = [25, 5]
for (let x of arr) {
    for (let y of arr2) {
        if (x / y > 1 && x / y < 4) {
            let mlt = Math.floor(x / y)
            largestFit = y * mlt
            let idx = arr.indexOf(x)
            arr.splice(idx, 1, largestFit)
            arr.splice(idx + 1, 0, x - largestFit)
        }
    }
}
console.log(arr)

How to write an efficient algorithm that generates a list of unique random numbers? [duplicate]

This question already has answers here:
How do I shuffle a Javascript Array ensuring each Index is in a new position in the new Array?
I'm pretty new to algorithms and am trying to solve a problem that involves generating a list of 5,000 numbers in random order each time it is run. Each number in the list must be unique and be between 1 and 5,000 (inclusive).
function createRandomList() {
    let arr = [];
    while (arr.length < 5000) {
        const num = Math.floor(Math.random() * 5000) + 1;
        if (arr.indexOf(num) === -1) arr.push(num);
    }
    console.log(arr);
}
createRandomList()
createRandomList()
Here's the solution that I came up with. I wanted to know the Time/Space complexity of this solution. Would it just be O(1) for both space and time because the values are bounded?
Any feedback would be greatly appreciated as well better ways to optimize the solution.
Keep a sequential list around and shuffle it. Fisher-Yates shuffle in-place is O(n).
Mike Bostock suggests an implementation here.
function shuffle(array) {
    var m = array.length, t, i;

    // While there remain elements to shuffle…
    while (m) {
        // Pick a remaining element…
        i = Math.floor(Math.random() * m--);

        // And swap it with the current element.
        t = array[m];
        array[m] = array[i];
        array[i] = t;
    }

    return array;
}
const sequence = [1,2,3,4,5,6,7,8,9] // or gen this for any length you choose
let randomNonRepeatingSequence = shuffle(sequence)
console.log(randomNonRepeatingSequence)
function createRandomList() {
    let i = 0, numbers = [];
    while (i++ < 5000) numbers.push(i);
    numbers.sort(() => (Math.random() > .5) ? 1 : -1);
    return numbers
}
console.log(createRandomList())
Your approach has one big problem: towards the end, it will generate many random numbers that are already contained in your sequence. Suppose you have already generated 4999 numbers. For the last one, there is only one possibility left, but you still generate numbers in the range 1 to 5000, so on average you will need 2500 tries to hit the correct number. It is better to create a sequence of your allowed elements and then shuffle that sequence. An easy possibility is the following.
let arr = [1, 2, 3, 4 ..., 5000]
for (let i = arr.length - 1; i > 0; i--) {
    let r = Math.floor(Math.random() * (i + 1));
    let x = arr[i]; arr[i] = arr[r]; arr[r] = x;
}
How does it work: first you initialize the array with all your allowed values. Then you pick a random index from your array and swap that element with the last element of the array. In the next iteration, the range for your random index is decreased by one, and the chosen element is swapped with the second-to-last element. And so on and so forth. Once the loop is down to 1, you have a random sequence of your allowed values.
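Putting that description into runnable form (randomPermutation is just an illustrative wrapper name), with a quick check that every value appears exactly once:

```javascript
// Build 1..n, shuffle as described above, then verify uniqueness.
// randomPermutation is an illustrative name, not from the answer.
function randomPermutation(n) {
    const arr = Array.from({ length: n }, (_, k) => k + 1);
    for (let i = arr.length - 1; i > 0; i--) {
        const r = Math.floor(Math.random() * (i + 1));
        const x = arr[i]; arr[i] = arr[r]; arr[r] = x;
    }
    return arr;
}

const perm = randomPermutation(5000);
console.log(new Set(perm).size === 5000); // true: every value appears once
```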

Find the lowest combination of predefined numbers, which sum is higher than X

Let's say I have an array with different item prices.
var myItemsEuro = [0.34, 0.11, 0.5, 0.33, 0.05, 0.13, 0.23, 3.22, 1.94]
I would like to have function like this:
function getTradeItems(0.89) { // The price of the item I want to buy
    // Calculate which of my items should be used to buy the item for 0.89€
    return [0, 3, 6] // The positions of my items in the array, which added together equal 0.90€
}
To clear things up:
I have a box of items with price tags on them (myItemsEuro). I want to buy an item, using my items as payment. The other party will accept my trade if I overpay by at least one cent.
The function should work so I can pass the other guy's price to it (0.89 for example) and it returns which items I will have to give away. The combined value of these items must be above €0.89 (at least €0.90), but should be as low as possible!
I am quite new to JS, and I was thinking about calculating every single combination of my items and then using the one that has the lowest difference to the buy price. This seems really complicated to me, and I don't even know how I would calculate every single combination or keep track of which items were used in each calculation.
Is there any way to achieve this a bit more efficiently? I don't really expect any perfectly working code here; a little bit of help to get into the right direction would also be nice.
Any help is appreciated! :)
Edit:
Sorry for missing my own attempt. It's just that I have no idea how I should solve this at all. And no - not homework - this is supposed to be part of a Chrome extension I am working on!
var myItemsEuro = [0.34, 0.11, 0.5, 0.33, 0.05, 0.13, 0.23, 3.22, 1.94]

function getTradeItems(marketPrice) {
    var result = 0;
    var positions = [];
    for (i = 0; i < myItemsEuro.length; i++) {
        result += myItemsEuro[i]; // add numbers from the array
        positions.push(i); // save the used number's position
        if (result > marketPrice) { // if result is greater than marketPrice...
            console.log(result)
            console.log(positions)
            return positions; // return positions in the array
        }
    }
}
getTradeItems(1.31);
getTradeItems(1.31);
Edit:
Sorting the array and then adding up numbers doesn't give a solution.
var x = 1.18;
//Sorted by numbers
var myItemsEuro = [0.05, 0.11, 0.13, 0.20, 0.35, 0.50, 0.60, 0.69, 0.75];
//Add together and stop when sum > x:
0.05 + 0.11 + 0.13 + 0.20 + 0.35 + 0.50 = 1.34
//Best solution would be adding [6] and [8] from the array
0.50 + 0.69 = 1.19
You could use a brute force approach and test all combinations of the items to see whether they are greater than or equal to the target.
The basic idea is to take a counter from zero up to 2^values.length and check whether the bit 2^index is set in the counter with a bitwise AND (&). If so, take the value at that index and put it into the parts array.
Then build the sum of the parts and check if sum is greater than or equal to target and possibly smaller than the sum in the result array; if so, move the result to the result array. If sum is equal to result[0].sum, then push the actual parts and sum to the result.
This proposal works with unsorted values, but could be more efficient if the values which are greater than the target value were not included in the array to work on.
Another drawback is that bitwise arithmetic works only with 32 bits, which means arrays with more than 32 items cannot be used.
var values = [0.34, 0.11, 0.5, 0.33, 0.05, 0.13, 0.23, 3.22, 1.94],
    target = 0.90,
    i,
    l = 1 << values.length,
    result = [],
    parts,
    sum;

for (i = 0; i < l; i++) {
    parts = values.filter(function (_, j) {
        return i & 1 << j;
    });
    sum = parts.reduce(function (a, b) { return a + b; }, 0);
    if (sum >= target) {
        if (!result.length || sum < result[0].sum) {
            result = [{ parts: parts, sum: sum }];
            continue;
        }
        if (sum === result[0].sum) {
            result.push({ parts: parts, sum: sum });
        }
    }
}
console.log(result);
I would suggest some measures to take:
Fractional numbers can cause floating point precision issues, so it is better to first convert every value to an integer (i.e. value in cents)
Perform a recursive search, where at each recursion level you'll decide whether or not to take the item at the corresponding index.
Think of situations where you might have multiple solutions: maybe in those cases you want to give precedence to solutions that use fewer items. So you would need to keep track of the number of items selected.
The solution I propose here will backtrack as soon as it is clear there is no sense in continuing adding items. There are at least two situations where you can draw this conclusion (of stopping the recursive search):
When the sum of the values selected so far is already greater than the target value, then there is no sense in adding more items (via recursion)
If after adding an item at position i, it becomes clear that adding all of the remaining items (via recursion) leads to a sum that is lower than the target, then it makes no sense to not select the item at position i, and repeat the recursive search, as that certainly will not reach the target value either.
Here is the suggested code:
function getTradeItems(target, items) {
    var best = -1e20, // should be maximised up to -1 if possible
        bestTaken = [], // indices of the best selection
        bestCount = 0, // number of selected items in best solution
        // Multiply all amounts by 100 to avoid floating point inaccuracies:
        cents = items.map( euro => Math.round(euro * 100) );

    function recurse(target, i, count) {
        if (target < 0) { // Found potential solution
            // Backtrack if this is not better than the best solution
            // found so far. If a tie, then minimise the number of selected items
            if (target < best ||
                target === best && count > bestCount) return false;
            best = target;
            bestTaken = [];
            bestCount = count;
            return true;
        }
        // Give up when there's not enough item value available
        if (i >= cents.length) return null;
        // Include the item at index i:
        var res1 = recurse(target - cents[i], i + 1, count + 1);
        // If we could not reach the target, don't lose time retrying
        // without this item:
        if (res1 === null) return null;
        // Exclude the item at index i:
        var res0 = recurse(target, i + 1, count);
        // If neither led to a solution...
        if (!res0 && !res1) return false;
        // If the best was to include the item, remember that item
        if (!res0) bestTaken.push(i);
        return true;
    }

    recurse(Math.round(target * 100), 0, 0);
    return bestTaken.reverse();
}
// Sample input
var myItemsEuro = [0.05, 0.11, 0.13, 0.20, 0.35, 0.50, 0.60, 0.69, 0.75];
var x = 1.18
// Get the best item selection
var result = getTradeItems(x, myItemsEuro);
// Output result
console.log('Selected: ', result.join()); // 5, 7
// Show corresponding sum: (1.19)
console.log('Sum: ', result.reduce( (sum, i) => sum + myItemsEuro[i], 0).toFixed(2));

How do I optimally distribute values over an array of percentages?

Let's say I have the following code:
arr = [0.1,0.5,0.2,0.2]; //The percentages (or decimals) we want to distribute them over.
value = 100; //The amount of things we have to distribute
arr2 = [0,0,0,0] //Where we want how many of each value to go
To find out how to equally distribute a hundred over the array is simple, it's a case of:
0.1 * 100 = 10
0.5 * 100 = 50
...
Or doing it using a for loop:
for (var i = 0; i < arr.length; i++) {
    arr2[i] = arr[i] * value;
}
However, let's say each counter is an object and thus has to be whole. How can I distribute them as equally as I can for a different value? Let's say the value becomes 12.
0.1 * 12 = 1.2
0.5 * 12 = 6
...
How do I deal with the decimal when I need it to be whole? Rounding means that I could potentially not have the 12 pieces needed.
A correct algorithm would:
- Take an input/iterate through an array of values (for this example we'll be using the array defined above).
- Turn it into a set of whole values which, added together, equal the value (which will be 100 for this).
- Output an array of values which, for this example, will look something like [10, 50, 20, 20] (these add up to 100, which is what we need, and are all whole).
- If any value is not whole, it should make it whole so the whole array still adds up to the value needed (100).
TL;DR dealing with decimals when distributing values over an array and attempting to turn them into an integer
Note - Should this be posted on a different Stack Exchange site? My need is programming, but the actual question will likely be solved using mathematics. Also, I had no idea how to word this question, which makes googling incredibly difficult. If I've missed something incredibly obvious, please tell me.
You should round all values as you assign them, using a rounding method that is known to distribute the rounding error uniformly. Finally, the last value must be assigned differently, so that the sum comes out to the required total.
Let's start slowly or things get very confused. First, let's see how to assign the last value to have a total of the desired value.
// we will need this later on
sum = 0;

// assign all values but the last
for (i = 0; i < output.length - 1; i++)
{
    output[i] = input[i] * total;
    sum += output[i];
}

// last value must honor the total constraint
output[i] = total - sum;
That last line needs some explanation. The i will be one more than the last index allowed in the for (..) loop, so it will be:
output.length - 1 // last index
The value we assign will be such that the sum of all elements is equal to total. We already computed the sum in a single pass during the assignment of the values, and thus don't need to iterate over the elements a second time to determine it.
Next, we will approach the rounding problem. Let's simplify the above code so that it uses a function on which we will elaborate shortly after:
sum = 0;
for (i = 0; i < output.length - 1; i++)
{
    output[i] = u(input[i], total);
    sum += output[i];
}
output[i] = total - sum;
As you can see, nothing has changed but the introduction of the u() function. Let's concentrate on this now.
There are several approaches on how to implement u().
DEFINITION
u(c, total) ::= c * total
By this definition you get the same as above. It is precise and good, but as you have asked before, you want the values to be natural numbers (e.g. integers). So while for real numbers this is already perfect, for natural numbers we have to round it. Let's suppose we use the simple rounding rule for integers:
[ 0.0, 0.5 [ => round down
[ 0.5, 1.0 [ => round up
This is achieved with:
function u(c, total)
{
    return Math.round(c * total);
}
When you are unlucky, you may round up (or round down) so many values that the final correction on the last value will not be enough to honor the total constraint, and in general all values may seem off by too much. This is a well-known problem; a multi-dimensional version of its solution is used to draw lines in 2D and 3D space, and is called the Bresenham algorithm.
To make things easy I'll show you here how to implement it in 1 dimension (which is your case).
Let's first discuss a term: the remainder. This is what is left after you have rounded your numbers. It is computed as the difference between what you wish and what you really have:
DEFINITION
WISH ::= c * total
HAVE ::= Math.round(WISH)
REMAINDER ::= WISH - HAVE
Now think about it. The remainder is like the piece of paper that you discard when you cut a shape out of a sheet. That remaining paper is still there, but you throw it away. Instead of this, just add it to the next cut-out so it is not wasted:
WISH ::= c * total + REMAINDER_FROM_PREVIOUS_STEP
HAVE ::= Math.round(WISH)
REMAINDER ::= WISH - HAVE
This way you keep the error and carry it over to the next partition in your computation. This is called amortizing the error.
Here is an amortized implementation of u():
// amortized is defined outside u because we need a side-effect across calls of u
function u(c, total)
{
    var real, natural;

    real = c * total + amortized;
    natural = Math.round(real);
    amortized = real - natural;

    return natural;
}
On your own accord you may wish to use another rounding rule, such as Math.floor() or Math.ceil().
What I would advise you to do is to use Math.floor(), because it is proven to be correct with the total constraint. When you use Math.round() you will have smoother amortization, but you risk the last value not being positive. You might end up with something like this:
[ 1, 0, 0, 1, 1, 0, -1 ]
Only when ALL VALUES are far away from 0 can you be confident that the last value will also be positive. So, for the general case, the Bresenham algorithm uses flooring, resulting in this last implementation:
function u(c, total)
{
    var real, natural;

    real = c * total + amortized;
    natural = Math.floor(real); // just to be on the safe side
    amortized = real - natural;

    return natural;
}

sum = 0;
amortized = 0;
for (i = 0; i < output.length - 1; i++)
{
    output[i] = u(input[i], total);
    sum += output[i];
}
output[i] = total - sum;
Obviously, the input and output arrays must have the same size, and the values in input must be a partition (sum to 1).
This kind of algorithm is very common for probabilistical and statistical computations.
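Pulling the pieces above together, here is a compact, runnable version of the floor-based amortized scheme (the function name distribute and the sample call are illustrative, not from the answer text):

```javascript
// Runnable condensation of the floor-based amortized scheme above.
// distribute is an illustrative name; weights must sum to 1.
function distribute(weights, total) {
    const output = [];
    let sum = 0, amortized = 0;
    for (let i = 0; i < weights.length - 1; i++) {
        const real = weights[i] * total + amortized;
        const natural = Math.floor(real);
        amortized = real - natural;
        output.push(natural);
        sum += natural;
    }
    output.push(total - sum); // last value honors the total constraint
    return output;
}

console.log(distribute([0.1, 0.5, 0.2, 0.2], 12)); // [1, 6, 2, 3]
```

Note how the fractional 1.2 and 2.4 shares are floored, their leftovers are carried forward, and the last slot absorbs whatever is needed to reach exactly 12.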
Alternate implementation: it remembers a pointer to the value with the biggest rounding difference, and when the sum differs from 100, increments or decrements the value at this position.
const items = [1, 2, 3, 5];
const total = items.reduce((total, x) => total + x, 0);
let result = [], sum = 0, biggestRound = 0, roundPointer;

items.forEach((votes, index) => {
    let value = 100 * votes / total;
    let rounded = Math.round(value);
    let diff = value - rounded;
    if (diff > biggestRound) {
        biggestRound = diff;
        roundPointer = index;
    }
    sum += rounded;
    result.push(rounded);
});

if (sum === 99) {
    result[roundPointer] += 1;
} else if (sum === 101) {
    result[roundPointer] -= 1;
}
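For reuse, the same pointer-adjustment idea can be wrapped in a function (percentify is an illustrative name; it turns a list of counts into whole percentages summing to 100, assuming the rounded sum is off by at most one):

```javascript
// Wrapped version of the pointer-adjustment approach above.
// percentify is an illustrative name, not from the original answer.
function percentify(items) {
    const total = items.reduce((t, x) => t + x, 0);
    const result = [];
    let sum = 0, biggestRound = 0, roundPointer = 0;
    items.forEach((votes, index) => {
        const value = 100 * votes / total;
        const rounded = Math.round(value);
        const diff = value - rounded;
        if (diff > biggestRound) {
            biggestRound = diff;
            roundPointer = index;
        }
        sum += rounded;
        result.push(rounded);
    });
    // Nudge the entry with the largest rounding error to fix the sum:
    if (sum === 99) result[roundPointer] += 1;
    else if (sum === 101) result[roundPointer] -= 1;
    return result;
}

console.log(percentify([1, 2, 3, 5])); // [9, 18, 27, 46]
```

Here the raw shares 9.09, 18.18, 27.27 and 45.45 round to a sum of 99, so the entry with the largest error (45.45) is bumped to 46.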

Need a hint/advice as to how to factor very large numbers in JavaScript

My task is to produce an array containing all the prime numbers up to a 12-digit number.
I tried to emulate the Sieve of Eratosthenes by first making a function enumerate that produces an array containing every integer from 2 to num:
var enumerate = function(num) {
    var array = [];
    for (var i = 2; i <= num; i++) {
        array.push(i);
    }
    return array;
};
Then I made a function leaveOnlyPrimes which loops through and removes multiples of every array member up to 1/2 max from the array (this does not end up being every integer because the array become smaller with every iteration):
var leaveOnlyPrimes = function(max, array) {
    for (var i = 0; array[i] <= max / 2; i++) {
        (function(mult, array) {
            for (var i = mult * 2; i <= array[array.length - 1]; i += mult) {
                var index = array.indexOf(i);
                if (index !== -1) {
                    array.splice(index, 1);
                }
            }
        })(array[i], array);
    }
};
This works fine with numbers up to about 50000, but any higher than that and the browser seems to freeze up.
Is there some version of this approach which could be made to accommodate larger numbers, or am I barking up the wrong tree?
As #WillNess suggests, you should not make a single monolithic sieve of that size. Instead, use a segmented Sieve of Eratosthenes to perform the sieving in successive segments. At the first segment, the smallest multiple of each sieving prime that is within the segment is calculated, then multiples of the sieving primes are marked composite in the normal way; when all the sieving primes have been used, the remaining unmarked numbers in the segment are prime. Then, for the next segment, the smallest multiple of each sieving prime is the multiple that ended the sieving in the prior segment, and so the sieving continues until finished.
Consider the example of sieving from 100 to 200 in segments of 20; the 5 sieving primes are 3, 5, 7, 11 and 13. In the first segment from 100 to 120, the bitarray has 10 slots, with slot 0 corresponding to 101, slot k corresponding to 100 + 2k + 1, and slot 9 corresponding to 119. The smallest multiple of 3 in the segment is 105, corresponding to slot 2; slots 2+3=5 and 5+3=8 are also multiples of 3. The smallest multiple of 5 is 105 at slot 2, and slot 2+5=7 is also a multiple of 5. The smallest multiple of 7 is 105 at slot 2, and slot 2+7=9 is also a multiple of 7. And so on.
Function primes takes arguments lo, hi and delta; lo and hi must be even, with lo < hi, and lo must be greater than the square root of hi. The segment size is twice delta. Array ps of length m contains the sieving primes less than the square root of hi, with 2 removed since even numbers are ignored, calculated by the normal Sieve of Eratosthenes. Array qs contains the offset into the sieve bitarray of the smallest multiple in the current segment of the corresponding sieving prime. After each segment, lo advances by twice delta, so the number corresponding to an index i of the sieve bitarray is lo + 2i + 1.
function primes(lo, hi, delta)
    sieve := makeArray(0..delta-1)
    ps := tail(primes(sqrt(hi)))
    m := length(ps)
    qs := makeArray(0..m-1)
    for i from 0 to m-1
        qs[i] := (-1/2 * (lo + ps[i] + 1)) % ps[i]
    while lo < hi
        for i from 0 to delta-1
            sieve[i] := True
        for i from 0 to m-1
            for j from qs[i] to delta step ps[i]
                sieve[j] := False
            qs[i] := (qs[i] - delta) % ps[i]
        for i from 0 to delta-1
            t := lo + 2*i + 1
            if sieve[i] and t < hi
                output t
        lo := lo + 2*delta
For the sample given above, this is called as primes(100, 200, 10). In the example given above, qs is initially [2,2,2,10,8], corresponding to smallest multiples 105, 105, 105, 121 and 117, and is reset for the second segment to [1,2,6,0,11], corresponding to smallest multiples 123, 125, 133, 121 and 143.
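A JavaScript rendering of that pseudocode, with a small ordinary sieve bundled in so it is self-contained (the function names are mine; as stated above, lo and hi must be even and lo must exceed the square root of hi):

```javascript
// Ordinary Sieve of Eratosthenes, used to collect the sieving primes.
function simpleSieve(n) {
    const sieve = new Uint8Array(n + 1).fill(1);
    const primes = [];
    for (let p = 2; p <= n; p++) {
        if (sieve[p]) {
            primes.push(p);
            for (let i = p * p; i <= n; i += p) sieve[i] = 0;
        }
    }
    return primes;
}

// Segmented sieve over the odd numbers in [lo, hi), in segments of 2*delta.
function segmentedPrimes(lo, hi, delta) {
    const mod = (a, m) => ((a % m) + m) % m; // JS % can be negative
    const ps = simpleSieve(Math.floor(Math.sqrt(hi))).slice(1); // drop 2
    // Offset of the smallest multiple of each sieving prime in the segment:
    const qs = ps.map(p => mod(-(lo + p + 1) / 2, p));
    const sieve = new Uint8Array(delta);
    const out = [];
    while (lo < hi) {
        sieve.fill(1);
        for (let i = 0; i < ps.length; i++) {
            for (let j = qs[i]; j < delta; j += ps[i]) sieve[j] = 0;
            qs[i] = mod(qs[i] - delta, ps[i]); // carry into next segment
        }
        for (let i = 0; i < delta; i++) {
            const t = lo + 2 * i + 1;
            if (sieve[i] && t < hi) out.push(t);
        }
        lo += 2 * delta;
    }
    return out;
}

console.log(segmentedPrimes(100, 200, 10));
// [101, 103, 107, 109, 113, 127, ..., 197, 199]
```

Running the sample call reproduces the qs values worked out above: [2, 2, 2, 10, 8] for the first segment and [1, 2, 6, 0, 11] for the second.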
The value of delta is critical; you should make delta as large as possible so long at it fits in cache memory, for speed. Use your language's library for the bitarray, so that you only take a single bit for each sieve location. If you need a simple Sieve of Eratosthenes to calculate the sieving primes, this is my favorite:
function primes(n)
    sieve := makeArray(2..n, True)
    for p from 2 to n step 1
        if sieve(p)
            output p
            for i from p * p to n step p
                sieve[i] := False
I'll leave it to you to translate to JavaScript. You can see more algorithms involving prime numbers at my blog.
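As a starting point for that translation, a direct JavaScript version of the pseudocode might look like this (the Uint8Array is my choice for compact bitarray-like storage):

```javascript
// Sieve of Eratosthenes: all primes up to n.
function primes(n) {
    const sieve = new Uint8Array(n + 1).fill(1); // sieve[i] truthy = "maybe prime"
    const result = [];
    for (let p = 2; p <= n; p++) {
        if (sieve[p]) {
            result.push(p); // output p
            for (let i = p * p; i <= n; i += p) sieve[i] = 0;
        }
    }
    return result;
}

console.log(primes(30)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```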
A 12-digit number is at least 100,000,000,000. That's a lot of primes (~ N/log N = 3,948,131,653).
So, make a sieve up to 10^6, compress it into the array of ~78,500 core primes, and use them to sieve segment by segment all the way up to your target. Use primes up to the square root of the current upper limit of the segment. The size of the segment is usually chosen so that it fits into system cache. After sieving each segment, collect its primes.
This is known as segmented sieve of Eratosthenes.
