How to actually calculate if parity is even or odd? - javascript

I am working on an implementation of the 15-piece sliding puzzle, and I am stuck at the point where I must make sure I only shuffle into "solvable permutations" - in my case, with the empty tile in the bottom-right corner: even permutations.
I have read many similar threads such as How can I ensure that when I shuffle my puzzle I still end up with an even permutation? and understand that I need to "count the parity of the number of inversions in the permutation".
I am writing in JavaScript, and using the Fisher-Yates algorithm to randomize my numbers:
var allNrs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14];

for (var i = allNrs.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var temp1 = allNrs[i];
    var temp2 = allNrs[j];
    allNrs[i] = temp2;
    allNrs[j] = temp1;
}
How do I actually calculate this permutation or parity value that I have read about in so many posts?

Just count the number of swaps you're making. If the number of swaps is even then the permutation has an even parity.
For example, these are the even permutations for 3 numbers. Note that you need 0 or 2 swaps to get to them from [1,2,3]:
1,2,3
2,3,1
3,1,2
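For example, with the Fisher-Yates loop from the question you can count the swaps as you go. A minimal sketch (assuming the array starts out in the solved order; note that a "swap" where i equals j leaves the array unchanged, so it must not be counted):

var allNrs = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14];
var swaps = 0;

for (var i = allNrs.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    if (i !== j) {                 // only a real swap flips the parity
        var temp = allNrs[i];
        allNrs[i] = allNrs[j];
        allNrs[j] = temp;
        swaps++;
    }
}

var isEven = swaps % 2 === 0;      // even permutation, per the question's solvability condition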

Each swap of two numbers you do flips the parity. If you have an even number of them, you are good. If you have an odd number, you are not.
This is essentially what parity means and it is a (simple) theorem of group theory that any two ways to get to the same shuffle have the same parity.
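In code, counting the inversions mentioned in the question gives the same answer; a minimal sketch (an inversion being any pair of entries that appears in the wrong order):

function countInversions(arr) {
    var inversions = 0;
    for (var i = 0; i < arr.length; i++) {
        for (var j = i + 1; j < arr.length; j++) {
            if (arr[i] > arr[j]) inversions++;   // this pair is out of order
        }
    }
    return inversions;
}

countInversions([1, 2, 3]) % 2 === 0; // true  (0 inversions: even)
countInversions([2, 1, 3]) % 2 === 0; // false (1 inversion: odd)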

Related

Get every possible combination of numbers (fastest possible method)

I am trying to get every single combination of elements into an array. I can use the method below and remove the duplicates, but this way is far too slow for my use.
The code below finds every possible combination for 2 digits below 4. In the code I actually want to use this for, the minimum would be 6 for loops (nested within each other) with the amount being 18 (remember, this is the minimum).
The code below executes amount^(number of for loops), or amount^2, which in this case is 16. That means that in the code I want to use this for, it executes 18^6 times, or about 34 million times. And this is the minimum, which would get much higher.
After trying to run my code (with 6 for loops in which amount = 18), it crashed my browser... My question is: Is there any faster, more efficient way (not more elegant; I don't care how elegant it is) that won't crash my browser?
Note: This question is not a duplicate. The other questions simply ask for a way to do this; I already have a way. I am just trying to make it more efficient and faster so that it actually works.
let combinations = [];
let amount = 4;

for (let a = 0; a < amount; a++) {
    for (let b = 0; b < amount; b++) {
        combinations.push(`${a}${b}`);
    }
}

console.log(combinations);
Below is a snippet providing a possible example for how my code would work.
let possibilities = [];
let amount = 6; // Amount is set by me, so don't worry about it being incorrect

for (let a = 0; a < amount; a++) {
    for (let b = 0; b < amount; b++) {
        possibilities.push(a + b);
    }
}

possibilities = [...new Set(possibilities)];  // Removes duplicates
possibilities.sort((a, b) => b - a);          // Sorts in descending order
possibilities = possibilities.slice(0, 3);    // Gets top 3 values
console.log(possibilities);
OK, as discussed in the comments, if you need the top 3 values for a particular amount, you could just do something simple like this:
let amount = 6;
let highest = amount - 1,
    second_highest = amount - 2,
    third_highest = amount - 3;

let possibilities = [
    highest + highest,
    highest + second_highest,
    highest + third_highest
];

console.log(possibilities);
I don't know of a better solution for this, but there are some conditions you should check first.
if (amount <= 0) return 'Invalid amount, please enter a valid amount';
If somebody enters a negative or zero value, the loops will never run and you will silently get no combinations at all, so validate the input up front.
if (amount === 1) return '1 possible combination';
When amount is 1 the only available digit is 0, and there is exactly one combination, so you can answer in constant time instead of running the whole loop over 6 (or n) digits.
For amount greater than 1 you can create manual loops - here you created 2 loops for 2 digits, so you would create 6 loops for 6 digits - but it is better to build dynamic logic that creates the required number of loops automatically (see the sketch below).
Do you need to consider combinations like 1111 and 1112 as well?
Or are only scenarios like 1234, 2134, 2314 required? The latter can be done with much lower complexity.
For de-duplication you can store the combinations as keys of an object, and then Object.keys() will give you your combinations.
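A minimal sketch of that idea (my own illustration, not code from the question): one recursive function replaces the hand-written nested loops, and an object de-duplicates as you go:

function combinations(amount, digits) {
    var seen = {};                            // object keys de-duplicate for free
    function build(prefix, depth) {
        if (depth === digits) {
            seen[prefix] = true;
            return;
        }
        for (var d = 0; d < amount; d++) {
            build(prefix + d, depth + 1);     // one recursion level per former loop
        }
    }
    build('', 0);
    return Object.keys(seen);
}

console.log(combinations(4, 2)); // the same 16 strings as the two nested loops above

Note that this still enumerates amount^digits strings, so 18^6 will still be slow; for the asker's actual top-3 goal, the closed-form answer above avoids the enumeration entirely.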

Generate a random array in Javascript/jquery for Sudoku puzzle

I want to fill the 9 x 9 grid from the array while taking care of the following conditions:
A particular number should not be repeated in the same column.
A particular number should not be repeated in the same row.
When I execute the code below, it fills the whole 9 x 9 grid with random values without respecting the conditions above. How can I enforce those two conditions before inserting values into my 9 x 9 grid?
var sudoku_array = ['1','2','3','4','6','5','7','8','9'];

$('.smallbox input').each(function(index) {
    $(this).val(sudoku_array[Math.floor(Math.random() * sudoku_array.length)]);
});
My JSFIDDLE LINK
Generating and solving Sudokus is actually not as simple as other (wrong) answers might suggest, but it is not rocket science either. Instead of copying and pasting from Wikipedia I'd like to point you to this question.
However, since it is bad practice to just point to external links, I want to justify it by providing you at least with the intuition for why naive approaches fail.
If you start generating a Sudoku board by filling some fields with random numbers (taking your constraints into account), you obtain a partially filled board. Completing it is then equivalent to solving a Sudoku, which is nothing other than completing a partially filled board while adhering to the Sudoku rules. If you have ever tried it, you will know that this is not possible if you decide on each next number by choosing a value that is valid only with respect to the 3x3 box, the column and the row. For all but the simplest Sudokus there is some trial and error involved, so you need a form of backtracking.
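To make that concrete, here is a minimal sketch of the backtracking idea (my own illustration, not production code): fill the cells one at a time, and when no number fits, undo the previous choice and try the next candidate.

function fillBoard(board, cell) {            // board: 9x9 array filled with 0; cell: 0..80
    if (cell === 81) return true;            // every cell filled: done
    var row = Math.floor(cell / 9);
    var col = cell % 9;
    var candidates = shuffle([1, 2, 3, 4, 5, 6, 7, 8, 9]);
    for (var i = 0; i < candidates.length; i++) {
        if (isValid(board, row, col, candidates[i])) {
            board[row][col] = candidates[i];
            if (fillBoard(board, cell + 1)) return true;
            board[row][col] = 0;             // dead end: backtrack
        }
    }
    return false;                            // nothing fits here; the caller backtracks
}

function isValid(board, row, col, n) {
    for (var i = 0; i < 9; i++) {
        if (board[row][i] === n || board[i][col] === n) return false; // row/column check
        var r = 3 * Math.floor(row / 3) + Math.floor(i / 3);
        var c = 3 * Math.floor(col / 3) + (i % 3);
        if (board[r][c] === n) return false;                          // 3x3 box check
    }
    return true;
}

Here shuffle is assumed to be any unbiased shuffle (e.g. Fisher-Yates; implementations appear further down this page).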
I hope this helps.
To ensure that no number is repeated in a row, you might need a shuffling function. For columns, you'll just have to do it the hard way (checking previous rows to see if a number already exists in that column). I hope I am not confusing rows with columns; I tend to do that a lot.
It's similar to the eight queens problem in evolutionary computing. Backtracking, a pure random walk or an evolved solution would solve the problem.
This code will take a while, but it'll do the job.
You can then iterate through the returned two-dimensional array and fill the sudoku box. Holler if you need any help with that.
Array.prototype.shuffle = function() {
    var arr = this.valueOf();
    var ret = [];
    while (ret.length < arr.length) {
        var x = arr[Math.floor(Number(Math.random() * arr.length))];
        if (!(ret.indexOf(x) >= 0)) ret.push(x);
    }
    return ret;
}

function getSudoku() {
    var sudoku = [];
    var arr = [1, 2, 3, 4, 5, 6, 7, 8, 9];
    sudoku.push(arr);
    for (var i = 1; i < 9; i++) {
        while (sudoku.length <= i) {
            var newarr = arr.shuffle();
            var b = false;
            for (var j = 0; j < arr.length; j++) {
                for (var k = 0; k < i; k++) {
                    if (sudoku[k].indexOf(newarr[j]) == j) b = true;
                }
            }
            if (!b) {
                sudoku.push(newarr);
                document.body.innerHTML += `${newarr}<br/>`;
            }
        }
    }
    return sudoku;
}

getSudoku()
You need to keep track of what you have inserted before, for the following line:
$(this).val(sudoku_array[Math.floor(Math.random()*sudoku_array.length)]);
For example, you can have a jagged array (an array of arrays; it's like a 2-D array) instead of the sudoku_array you have created, to keep track of available numbers. In fact, you can create two jagged arrays: one for columns and one for rows. Your numbers are currently generated completely at random because you don't keep track of what you inserted before.
After you create an array that keeps available numbers, you do the following:
After you generate a number, remove it from the jagged array's respective row and column to mark it unavailable for that row and column.
Before generating any number, check whether it is still available in the jagged array (check both the column and the row). If it is not available, try another number.
Note: You can reduce the range of the random numbers you generate to the count of available numbers. If you do that, a random number x would mean the x-th available number for that cell. That way you never draw an unavailable number, and thus it works significantly faster.
Edit: As Lex82 pointed out in the comments and in his answer, you will also need backtracking to avoid dead ends, or you need to go deeper into the mathematics. I'm just going to keep my answer in case it gives you an idea.
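A sketch of that bookkeeping (my own illustration; the names are made up, and as the edit above says, a complete generator still needs backtracking on top of this):

var rowUsed = [], colUsed = [];              // one set of used numbers per row and per column
for (var i = 0; i < 9; i++) {
    rowUsed.push({});
    colUsed.push({});
}

function candidatesFor(r, c) {               // numbers still available for cell (r, c)
    var result = [];
    for (var n = 1; n <= 9; n++) {
        if (!rowUsed[r][n] && !colUsed[c][n]) result.push(n);
    }
    return result;
}

function place(r, c, n) {                    // mark n as used in row r and column c
    rowUsed[r][n] = true;
    colUsed[c][n] = true;
}

// Picking "the x-th available number", as the note above suggests:
// var cand = candidatesFor(r, c);
// var n = cand[Math.floor(Math.random() * cand.length)];
// place(r, c, n);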

Why isn't this shuffle algorithm biased

My coworker and I are arguing about why the shuffle algorithm given in this list of JS tips & tricks doesn't produce biased results like the sort Jeff Atwood describes for naive shuffles.
The array shuffle code in the tips is:
list.sort(function() { return Math.random() - 0.5; });
Jeff's naive shuffle code is:
for (int i = 0; i < cards.Length; i++)
{
    int n = rand.Next(cards.Length);
    Swap(ref cards[i], ref cards[n]);
}
I wrote this JS to test the shuffle:
var list = [1,2,3];
var result = {123:0, 132:0, 321:0, 213:0, 231:0, 312:0};

function shuffle() { return Math.random() - 0.5; }

for (var i = 0; i < 60000000; i++) {
    result[list.sort(shuffle).join('')]++;
}
For which I get results (from Firefox 5) like:
Order    Count       Diff from true avg
123      9997461     -0.0002539
132      10003451     0.0003451
213      10001507     0.0001507
231      9997563     -0.0002437
312      9995658     -0.0004342
321      10004360     0.0004360
Presumably Array.sort is walking the list array and performing swaps of (adjacent) elements similar to Jeff's example. So why don't the results look biased?
I found the reason it appears unbiased.
Array.sort() not only returns the array, it changes the array itself. If we re-initialize the array for each loop, we get results like:
123 14941
132 7530
321 7377
213 15189
231 7455
312 7508
Which shows a very significant bias.
For those interested, here's the modified code:
var result = {123:0, 132:0, 321:0, 213:0, 231:0, 312:0};
var iterations = 60000;
var comparisons = 0; // declaration added; this was an implicit global in the original

function shuffle() {
    comparisons++;
    return Math.random() - 0.5;
}

for (var i = 0; i < iterations; i++) {
    var list = [1,2,3];
    result[list.sort(shuffle).join('')]++;
}

console.log(result);
The problem with the naive shuffle is that the value might have already been swapped and you might swap it again later. Let's say you have three cards and you pick one truly at random for the first card. If you later can randomly swap that card with a latter one then you are taking away from the randomness of that first selection.
If the sort is quicksort, it continually splits the list about in half. The next iteration splits each of those groups into two groups randomly. This keeps going on until you are down to single cards, then you combine them all together. The difference is that you never take a card from the second randomly selected group and move it back to the first group.
The Knuth-Fisher-Yates shuffle is different from the naive shuffle because you only pick each card once. If you were picking random cards from a deck, would you put a card back and pick again? No, you take random cards one at a time. This is the first I've heard of it by name, but I did something similar back in high school, going from index 0 up. KFY is probably faster, because my version has an extra addition in the random statement.
for (int i = 0; i < cards.Length - 1; i++)
{
    int n = rand.Next(cards.Length - i) + i; // n is in the range (i to cards.Length - 1)
    Swap(ref cards[i], ref cards[n]);
}
Don't think of it as swapping, think of it as selecting random cards from a deck. For each element in the array (except the last because there is only one left) you pick a random card out of all the remaining cards and lay it down forming a new stack of cards that are randomly shuffled. It doesn't matter that your remaining cards are no longer in the original order if you've done any swapping already, you are still picking one random card from all the remaining cards.
The random quicksort is like taking a stack of cards and randomly dividing them into two groups, then taking each group and randomly dividing it into two smaller groups, and on and on until you have individual cards then putting them back together.
Actually, that doesn't implement his naïve random sort. His algorithm actually transposes array keys manually, while sort actively sorts a list.
sort uses quicksort or insertion sort (thanks to cwolves for pointing that out -- see comments) to do this (this will vary based on the implementation):
Is A bigger or smaller than B? Smaller? Decrement.
Is A bigger or smaller than C? Smaller? Decrement.
Is A bigger or smaller than D? Smaller? Insert A after D
Is B bigger or smaller than C? Smaller? Decrement.
Is B bigger or smaller than D? Smaller? Insert B after D and before A...
This means that your big O for the average case is O(n log n) and your big O for the worst case is O(n^2) for each loop iteration.
Meanwhile the Atwood naïve random sort is a simple:
Start at A. Find random value. Swap.
Go to B. Find random value. Swap.
Go to C. Find random value. Swap.
(Knuth-Fisher-Yates is almost the same, only backwards)
So his has a big O for the worst case of O(n) and a big O for the average case of O(n).

How is randomness achieved with Math.random in javascript?

How is randomness achieved with Math.random in javascript? I've made something that picks between around 50 different options randomly. I'm wondering how comfortable I should be with using Math.random to get my randomness.
From the specifications:
random():
Returns a Number value with positive sign, greater than or equal to 0 but less than 1, chosen randomly or pseudo randomly with approximately uniform distribution over that range, using an implementation-dependent algorithm or strategy. This function takes no arguments.
So the answer is that it depends on what JavaScript engine you're using.
I'm not sure whether all browsers use the same strategy, or what that strategy is, unfortunately.
It should be fine for your purposes. Only if you were generating a very large quantity of numbers would you begin to see a pattern.
Using Math.random() is fine as long as you're not centrally pooling and comparing the results, e.g. for OAuth nonces.
For example, our site used Math.random() to generate random "nonce" strings for use with OAuth. The original JavaScript library did this by choosing characters from a predetermined list using Math.random():
for (var i = 0; i < length; ++i) {
    var rnum = Math.floor(Math.random() * chars.length);
    result += chars.substring(rnum, rnum + 1);
}
The problem is, users were getting duplicate nonce strings (even with a 10-character length - theoretically ~10^18 combinations), usually within a few seconds of each other. My guess is that this is due to Math.random() seeding from the timestamp, as one of the other posters mentioned.
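A safer alternative for nonce generation, where supported, is to draw the random indices from the browser's cryptographically strong generator instead of Math.random(). A minimal sketch, assuming a browser with the Web Crypto API and a chars string like the one above:

function randomNonce(length, chars) {
    var buf = new Uint32Array(length);
    crypto.getRandomValues(buf);                 // cryptographically strong random values
    var result = '';
    for (var i = 0; i < length; i++) {
        // note: slight modulo bias unless chars.length divides 2^32 evenly
        result += chars[buf[i] % chars.length];
    }
    return result;
}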
The exact implementation can of course differ somewhat depending on the browser, but they all use some kind of pseudo random number generator. Although it's not really random, it's certainly good enough for all general purposes.
You should only be worried about the randomness if you are using it for something that needs exceptionally good randomness, like encryption or simulating a game of chance played for real money, but then you would hardly use JavaScript anyway.
It's 100% random enough for your purposes. It's seeded by time, so every time you run it, you'll get different results.
Paste this into your browsers address bar...
javascript:alert(Math.random() * 2 > 1);
and press [Enter] a few times... I got "true, false, false, true" - random enough :)
This is a little overkill...but, I couldn't resist doing this :)
You can execute this in your browser address bar. It generates a random number between 0 and 4, 100000 times. And outputs the number of times each number was generated and the number of times one random number followed the other.
I executed this in Firefox 3.5.2. All the numbers seem to be about equal - indicating no bias, and no obvious pattern in the way the numbers are generated.
javascript:
var max = 5;
var transitions = new Array(max);
var frequency = new Array(max);
for (var i = 0; i < max; i++)
{
    transitions[i] = new Array(max);
}
var old = 0, curr = 0;
for (var i = 0; i < 100000; i++)
{
    curr = Math.floor(Math.random() * max);
    if (frequency[curr] === undefined)
    {
        frequency[curr] = 0; // initialize to 0 (the original used -1, undercounting the first hit)
    }
    frequency[curr] += 1;
    if (transitions[old][curr] === undefined)
    {
        transitions[old][curr] = 0; // same off-by-one fix as above
    }
    transitions[old][curr] += 1;
    old = curr;
}
alert(frequency);
alert(transitions);

Is it correct to use JavaScript Array.sort() method for shuffling?

I was helping somebody out with his JavaScript code and my eyes were caught by a section that looked like that:
function randOrd() {
    return (Math.round(Math.random()) - 0.5);
}

coords.sort(randOrd);
alert(coords);
My first thought was: hey, this can't possibly work! But then I did some experimenting and found that it indeed at least seems to provide nicely randomized results.
Then I did some web search, and almost at the top found an article from which this code was most certainly copied. It looked like a pretty respectable site and author...
But my gut feeling tells me that this must be wrong. Especially as the sorting algorithm is not specified by the ECMA standard. I think different sorting algorithms will result in different non-uniform shuffles. Some sorting algorithms might even loop infinitely...
But what do you think?
And as another question... how would I now go and measure how random the results of this shuffling technique are?
update: I did some measurements and posted the results below as one of the answers.
After Jon has already covered the theory, here's an implementation:
function shuffle(array) {
    var tmp, current, top = array.length;

    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }

    return array;
}
The algorithm is O(n), whereas sorting should be O(n log n). Depending on the overhead of executing JS code compared to the native sort() function, this might lead to a noticeable difference in performance, which should increase with array size.
In the comments to bobobobo's answer, I stated that the algorithm in question might not produce evenly distributed probabilities (depending on the implementation of sort()).
My argument goes along these lines: A sorting algorithm requires a certain number c of comparisons, e.g. c = n(n-1)/2 for Bubblesort. Our random comparison function makes the outcome of each comparison equally likely, i.e. there are 2^c equally probable results. Now, each result has to correspond to one of the n! permutations of the array's entries, which makes an even distribution impossible in the general case: for instance, for n = 3 and Bubblesort, 2^3 = 8 equally likely outcomes must map onto 3! = 6 permutations, and since 8 is not a multiple of 6, some permutation has to occur more often than others. (This is a simplification, as the actual number of comparisons needed depends on the input array, but the assertion should still hold.)
As Jon pointed out, this alone is no reason to prefer Fisher-Yates over using sort(), as the random number generator will also map a finite number of pseudo-random values to the n! permutations. But the results of Fisher-Yates should still be better:
Math.random() produces a pseudo-random number in the range [0;1[. As JS uses double-precision floating point values, this corresponds to 2^x possible values where 52 ≤ x ≤ 63 (I'm too lazy to find the actual number). A probability distribution generated using Math.random() will stop behaving well if the number of atomic events is of the same order of magnitude.
When using Fisher-Yates, the relevant parameter is the size of the array, which should never approach 2^52 due to practical limitations.
When sorting with a random comparison function, the function basically only cares if the return value is positive or negative, so this will never be a problem. But there is a similar one: Because the comparison function is well-behaved, the 2^c possible results are, as stated, equally probable. If c ~ n log n then 2^c ~ n^(a·n) where a = const, which makes it at least possible that 2^c is of the same magnitude as (or even less than) n!, thus leading to an uneven distribution, even if the sorting algorithm were to map onto the permutations evenly. Whether this has any practical impact is beyond me.
The real problem is that the sorting algorithms are not guaranteed to map onto the permutations evenly. It's easy to see that Mergesort does as it's symmetric, but reasoning about something like Bubblesort or, more importantly, Quicksort or Heapsort, is not.
The bottom line: As long as sort() uses Mergesort, you should be reasonably safe except in corner cases (at least I'm hoping that 2^c ≤ n! is a corner case), if not, all bets are off.
It's never been my favourite way of shuffling, partly because it is implementation-specific as you say. In particular, I seem to remember that the standard library sorting from either Java or .NET (not sure which) can often detect if you end up with an inconsistent comparison between some elements (e.g. you first claim A < B and B < C, but then C < A).
It also ends up as a more complex (in terms of execution time) shuffle than you really need.
I prefer the shuffle algorithm which effectively partitions the collection into "shuffled" (at the start of the collection, initially empty) and "unshuffled" (the rest of the collection). At each step of the algorithm, pick a random unshuffled element (which could be the first one) and swap it with the first unshuffled element - then treat it as shuffled (i.e. mentally move the partition to include it).
This is O(n) and only requires n-1 calls to the random number generator, which is nice. It also produces a genuine shuffle - any element has a 1/n chance of ending up in each space, regardless of its original position (assuming a reasonable RNG). The sorted version approximates to an even distribution (assuming that the random number generator doesn't pick the same value twice, which is highly unlikely if it's returning random doubles) but I find it easier to reason about the shuffle version :)
This approach is called a Fisher-Yates shuffle.
I would regard it as a best practice to code up this shuffle once and reuse it everywhere you need to shuffle items. Then you don't need to worry about sort implementations in terms of reliability or complexity. It's only a few lines of code (which I won't attempt in JavaScript!)
The Wikipedia article on shuffling (and in particular the shuffle algorithms section) talks about sorting a random projection - it's worth reading the section on poor implementations of shuffling in general, so you know what to avoid.
I did some measurements of how random the results of this random sort are...
My technique was to take a small array [1,2,3,4] and create all (4! = 24) permutations of it. Then I would apply the shuffling function to the array a large number of times and count how many times each permutation is generated. A good shuffling algorithm would distribute the results quite evenly over all the permutations, while a bad one would not create that uniform result.
Using the code below, I tested in Firefox, Opera, Chrome and IE6/7/8.
Surprisingly for me, the random sort and the real shuffle both created equally uniform distributions. So it seems that (as many have suggested) the main browsers are using merge sort. This of course doesn't mean that there can't be a browser out there that does it differently, but I would say it means that this random-sort method is reliable enough to use in practice.
EDIT: This test didn't really measure the randomness (or lack thereof) correctly. See the other answer I posted.
But on the performance side the shuffle function given by Christoph was a clear winner. Even for small four-element arrays the real shuffle performed about twice as fast as the random sort!
// The shuffle function posted by Christoph.
var shuffle = function(array) {
    var tmp, current, top = array.length;
    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }
    return array;
};

// The random sort function.
var rnd = function() {
    return Math.round(Math.random()) - 0.5;
};
var randSort = function(A) {
    return A.sort(rnd);
};
var permutations = function(A) {
    if (A.length == 1) {
        return [A];
    }
    else {
        var perms = [];
        for (var i = 0; i < A.length; i++) {
            var x = A.slice(i, i + 1);
            var xs = A.slice(0, i).concat(A.slice(i + 1));
            var subperms = permutations(xs);
            for (var j = 0; j < subperms.length; j++) {
                perms.push(x.concat(subperms[j]));
            }
        }
        return perms;
    }
};

var test = function(A, iterations, func) {
    // init permutations
    var stats = {};
    var perms = permutations(A);
    for (var i in perms) {
        stats["" + perms[i]] = 0;
    }

    // shuffle many times and gather stats
    var start = new Date();
    for (var i = 0; i < iterations; i++) {
        var shuffled = func(A);
        stats["" + shuffled]++;
    }
    var end = new Date();

    // format result
    var arr = [];
    for (var i in stats) {
        arr.push(i + " " + stats[i]);
    }
    return arr.join("\n") + "\n\nTime taken: " + ((end - start) / 1000) + " seconds.";
};

alert("random sort: " + test([1,2,3,4], 100000, randSort));
alert("shuffle: " + test([1,2,3,4], 100000, shuffle));
Interestingly, Microsoft used the same technique in their pick-random-browser-page.
They used a slightly different comparison function:
function RandomSort(a, b) {
    return (0.5 - Math.random());
}
Looks almost the same to me, but it turned out to be not so random...
So I made some test runs again with the same methodology used in the linked article, and indeed, it turned out that the random-sort method produced flawed results. New test code here:
function shuffle(arr) {
    arr.sort(function(a, b) {
        return (0.5 - Math.random());
    });
}

function shuffle2(arr) {
    arr.sort(function(a, b) {
        return (Math.round(Math.random()) - 0.5);
    });
}

function shuffle3(array) {
    var tmp, current, top = array.length;
    if (top) while (--top) {
        current = Math.floor(Math.random() * (top + 1));
        tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }
    return array;
}

var counts = [
    [0,0,0,0,0],
    [0,0,0,0,0],
    [0,0,0,0,0],
    [0,0,0,0,0],
    [0,0,0,0,0]
];

var arr;
for (var i = 0; i < 100000; i++) {
    arr = [0,1,2,3,4];
    shuffle3(arr);
    arr.forEach(function(x, i) { counts[x][i]++; });
}

alert(counts.map(function(a) { return a.join(", "); }).join("\n"));
I have placed a simple test page on my website showing the bias of your current browser versus other popular browsers using different methods to shuffle. It shows the terrible bias of just using Math.random()-0.5, another 'random' shuffle that isn't biased, and the Fisher-Yates method mentioned above.
You can see that on some browsers there is as high as a 50% chance that certain elements will not change place at all during the 'shuffle'!
Note: you can make the implementation of the Fisher-Yates shuffle by @Christoph slightly faster for Safari by changing the code to:
function shuffle(array) {
    for (var tmp, cur, top = array.length; top--;) {
        cur = (Math.random() * (top + 1)) << 0;
        tmp = array[cur];
        array[cur] = array[top];
        array[top] = tmp;
    }
    return array;
}
Test results: http://jsperf.com/optimized-fisher-yates
I think it's fine for cases where you're not picky about distribution and you want the source code to be small.
In JavaScript (where the source is transmitted constantly), small makes a difference in bandwidth costs.
It's been four years, but I'd like to point out that the random comparator method won't be correctly distributed, no matter what sorting algorithm you use.
Proof:
For an array of n elements, there are exactly n! permutations (i.e. possible shuffles).
Every comparison during a shuffle is a choice between two sets of permutations. For a random comparator, there is a 1/2 chance of choosing each set.
Thus, for each permutation p, the chance of ending up with permutation p is a fraction with denominator 2^k (for some k), because it is a sum of such fractions (e.g. 1/8 + 1/16 = 3/16).
For n = 3, there are six equally-likely permutations. The chance of each permutation, then, is 1/6. 1/6 can't be expressed as a fraction with a power of 2 as its denominator.
Therefore, the coin flip sort will never result in a fair distribution of shuffles.
The only sizes that could possibly be correctly distributed are n=0,1,2.
As an exercise, try drawing out the decision tree of different sort algorithms for n=3.
There is a gap in the proof: If a sort algorithm depends on the consistency of the comparator, and has unbounded runtime with an inconsistent comparator, it can have an infinite sum of probabilities, which is allowed to add up to 1/6 even if every denominator in the sum is a power of 2. Try to find one.
Also, if a comparator has a fixed chance of giving either answer (e.g. (Math.random() < P)*2 - 1, for constant P), the above proof holds. If the comparator instead changes its odds based on previous answers, it may be possible to generate fair results. Finding such a comparator for a given sorting algorithm could be a research paper.
It is a hack, certainly. In practice, an infinitely looping algorithm is not likely.
If you're sorting objects, you could loop through the coords array and do something like:
for (var i = 0; i < coords.length; i++)
    coords[i].sortValue = Math.random();

coords.sort(useSortValue);

function useSortValue(a, b)
{
    return a.sortValue - b.sortValue;
}
(and then loop through them again to remove the sortValue)
Still a hack though. If you want to do it nicely, you have to do it the hard way :)
If you're using D3 there is a built-in shuffle function (using Fisher-Yates):
var days = ['Lundi','Mardi','Mercredi','Jeudi','Vendredi','Samedi','Dimanche'];
d3.shuffle(days);
And here is Mike going into details about it:
http://bost.ocks.org/mike/shuffle/
No, it is not correct. As other answers have noted, it will lead to a non-uniform shuffle and the quality of the shuffle will also depend on which sorting algorithm the browser uses.
Now, that might not sound too bad to you, because even if theoretically the distribution is not uniform, in practice it's probably nearly uniform, right? Well, no, not even close. The following charts show heat-maps of which indices each element gets shuffled to, in Chrome and Firefox respectively: if the pixel (i, j) is green, it means the element at index i gets shuffled to index j too often, and if it's red then it gets shuffled there too rarely.
These screenshots are taken from Mike Bostock's page on this subject.
As you can see, shuffling using a random comparator is severely biased in Chrome and even more so in Firefox. In particular, both have a lot of green along the diagonal, meaning that too many elements get "shuffled" somewhere very close to where they were in the original sequence. In comparison, a similar chart for an unbiased shuffle (e.g. using the Fisher-Yates algorithm) would be all pale yellow with just a small amount of random noise.
Here's an approach that uses a single array:
The basic logic is:
Starting with an array of n elements
Remove a random element from the array and push it onto the array
Remove a random element from the first n - 1 elements of the array and push it onto the array
Remove a random element from the first n - 2 elements of the array and push it onto the array
...
Remove the first element of the array and push it onto the array
Code:
for (var i = a.length; i--;) {
    a.push(a.splice(Math.floor(Math.random() * (i + 1)), 1)[0]);
}
Can you use the Array.sort() function to shuffle an array – Yes.
Are the results random enough – No.
Consider the following code snippet:
/*
 * The following code sample shuffles an array using the Math.random() trick.
 * After shuffling, the new position of each item is recorded.
 * The process is repeated 100 times.
 * The result is printed out, listing each item and the number of times
 * it appeared at a given position after shuffling.
 */
var array = ["a", "b", "c", "d", "e"];
var stats = {};
array.forEach(function(v) {
    stats[v] = Array(array.length).fill(0);
});

var i, clone;
for (i = 0; i < 100; i++) {
    clone = array.slice();
    clone.sort(function() {
        return Math.random() - 0.5;
    });
    clone.forEach(function(v, i) {
        stats[v][i]++;
    });
}

Object.keys(stats).forEach(function(v, i) {
    console.log(v + ": [" + stats[v].join(", ") + "]");
});
Sample output:
a: [29, 38, 20, 6, 7]
b: [29, 33, 22, 11, 5]
c: [17, 14, 32, 17, 20]
d: [16, 9, 17, 35, 23]
e: [ 9, 6, 9, 31, 45]
Ideally, the counts should be evenly distributed (for the above example, all counts should be around 20). But they are not. Apparently, the distribution depends on what sorting algorithm is implemented by the browser and how it iterates the array items for sorting.
There is nothing wrong with it.
The function you pass to .sort() usually looks something like
function sortingFunc(first, second)
{
    // example:
    return first - second;
}
Your job in sortingFunc is to return:
a negative number if first goes before second
a positive number if first should go after second
and 0 if they are completely equal
The above sorting function puts things in order.
If you return negative and positive values randomly, as the code above does, you get a random ordering.
Like in MySQL:
SELECT * from table ORDER BY rand()
