I am working on the Codility Peak problem:
Divide an array into the maximum number of same-sized blocks, each of which should contain an index P such that A[P - 1] < A[P] > A[P + 1].
My own solution is provided below, but it only scores 45%. So my question is:
How can I still improve my solution?
The code snippet may seem long, since I added some extra comments to make my intent clearer:
function solution(A) {
var storage = [], counter = 0;
// 1. So first I used a loop to find all the peaks
// and stored them all into an array called storage
for(var i = 1; i < A.length - 1; i++) {
if (A[i] > A[i-1] && A[i] > A[i+1]) {
storage.push(i);
}
}
// 2. Go and write the function canBeSeparatedInto
// 3. Use the for loop to check the counter
for(var j = 1; j < A.length; j++) {
if (canBeSeparatedInto(j, A, storage)) {
counter = j;
}
}
return counter;
}
/* this function tells if it is possible to divide the given array into given parts
* we will be passing our function with parameters:
* #param parts[number]: number of parts that we intend to divide the array into
* #param array[array]: the original array
* #param peaks[array]: a storage array that stores all the indices of the peaks
* #return [boolean]: true if the given array can be divided into given parts
*/
function canBeSeparatedInto(parts, array, peaks) {
var i = 1, result = false;
var blockSize = array.length / parts;
peaks.forEach(function(elem) {
// test to see if this peak belongs to the ith part
if ((elem+1)/blockSize <= i && (elem+1)/blockSize> i-1) {
i++;
}
});
// set the result to true if there are indeed peaks for every parts
if (i > parts) {
result = true;
}
return result;
}
The main problem with my code is that it does not pass the performance test. Could you give me some hints on that?
I would suggest this algorithm:
Sort the peaks by their distance from the previous peak. To do that, it may be more intuitive to identify "valleys", i.e. maximal ranges without peaks, and sort those by their size in descending order.
Identify the divisors of the array length, as the solution must be one of those. For example, it is a waste of time to test for solutions when the array length is prime: in that case the answer can only be 1 (or zero if it has no peaks).
Try each of the divisors in ascending order (representing the size of the array chunks), and for each valley check whether such a split would put one of the chunks completely inside that valley, i.e. the chunk would not contain a peak: in that case reject that size as a solution, and try the next size.
Implementation with interactive input of the array:
"use strict";
// Helper function to collect the integer divisors of a given n
function divisors(n) {
var factors = [],
factors2 = [],
sq = Math.sqrt(n);
for (var i = 1; i <= sq; i++) {
if (n % i === 0) {
factors.push(n / i);
// Save time by storing complementary factor as well
factors2.push(i);
}
}
// Eliminate possible duplicate when n is a square
if (factors[factors.length-1] === factors2[factors2.length-1]) factors.pop();
// Return them sorted in descending order, so smallest is at end
return factors.concat(factors2.reverse());
}
function solution(A) {
var valleys = [],
start = 0,
size, sizes, i;
// Collect the maximum ranges that have no peaks
for (i = 1; i < A.length - 1; i++) {
if (A[i] > A[i-1] && A[i] > A[i+1]) {
valleys.push({
start,
end: i,
size: i - start,
});
start = i + 1;
}
}
// Add final valley
valleys.push({
start,
end: A.length,
size: A.length - start
});
if (valleys.length === 1) return 0; // no peaks = no solution
// Sort the valleys by descending size
// to improve the rest of the algorithm's performance
valleys.sort( (a, b) => b.size - a.size );
// Collect factors of n, as all chunks must have same, integer size
sizes = divisors(A.length);
// For each valley, require that a solution must not
// generate a chunk that falls completely inside it
do {
size = sizes.pop(); // attempted solution (starting with small size)
for (i = 0;
i < valleys.length &&
// chunk must not fit entirely inside this valley
Math.ceil(valleys[i].start / size) * size + size > valleys[i].end; i++) {
}
} while (i < valleys.length); // keep going until all valleys pass the test
// Return the number of chunks
return A.length / size;
}
// Helper function: chops up a given array into an
// array of sub arrays, which all have given size,
// except maybe the last one, which could be smaller.
function chunk(arr, size) {
var chunks = [];
for (var i = 0; i < arr.length; i += size) {
chunks.push(arr.slice(i, i + size));
}
return chunks;
}
// I/O management
inp.oninput = function () {
// Get input as an array of positive integers (ignore non-digits)
if (!this.value) return;
var arr = this.value.match(/\d+/g).map(v => +v);
var parts = solution(arr);
// Output the array, chopped up into its parts:
outCount.textContent = parts;
outChunks.textContent = chunk(arr, arr.length / parts).join('\n');
}
Array (positive integers, any separator): <input id="inp" style="width:100%">
Chunks: <span id="outCount"></span>
<pre id="outChunks"></pre>
When checking whether the array can be split into K parts, you will in the worst case (an array like [1,2,1,2,1,...]) do N/2 checks, since you are looking at every peak.
This can be done in K steps by using a clever data structure:
Represent the peaks as a binary array (0 = no peak, 1 = peak) and calculate prefix sums over it. To check whether a block contains a peak, just compare the prefix sums at the start and the end of the block.
There is also another small problem: you should not check block counts that do not divide the size of the array.
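The prefix-sum check described here might be sketched as follows (a sketch only; `peakPrefixSums` and `blockHasPeak` are hypothetical helper names, not from any answer above):

```javascript
// Build prefix sums over the peaks of A:
// prefix[i] = number of peaks at indices < i.
function peakPrefixSums(A) {
  const prefix = [0];
  for (let i = 0; i < A.length; i++) {
    const isPeak =
      i > 0 && i < A.length - 1 && A[i] > A[i - 1] && A[i] > A[i + 1];
    prefix.push(prefix[i] + (isPeak ? 1 : 0));
  }
  return prefix;
}

// A block covering indices [start, end) contains a peak
// iff the prefix sums at its two ends differ.
function blockHasPeak(prefix, start, end) {
  return prefix[end] - prefix[start] > 0;
}
```

With these helpers, testing one candidate block count K costs K constant-time lookups instead of a scan over every peak.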
Maybe I have misunderstanding on Merge sort. Can someone explain to me if I break the toBeSorted list into array of sub arrays like this:
const toBeSorted = [2,4,5,1,6,8]
const brokenDown = [[2],[4],[5],[1],[6],[8]]
then I do the usual sort and merge thing with the subarrays inside brokenDown. What is the difference or drawbacks comparing to the classic original solution?
I understand merge sort as halving the original list until each sub-array contains only one item, then sorting and merging them back together. So instead of halving, I just iterate through the original array and turn it into an array of sub-arrays.
I tried both solutions: the classic one took around 3000 ms to sort 200,000 items, while my solution took 5000 ms to sort the same amount of data.
So I think I am lacking some understanding of merge sort.
My solution's full code:
(() => {
let i = 0
const data = []
const size = 200000
while (i < size) {
data.push(Math.round(Math.random() * 1000))
i++
}
function mergeSort(arr) {
if (arr.length < 2) return arr[0]
const output = []
for (let i=0; i<arr.length; i+=2) {
output.push(sortAndStitch(arr[i], arr[i+1]))
}
return mergeSort(output)
}
function breakDown(arr) {
const output = []
for (item of data) {
output.push([item])
}
return output
}
function sortAndStitch(sub1, sub2) {
const arr1 = sub1 || [], arr2 = sub2 || [], output = []
while(arr1.length && arr2.length) {
if (arr1[0] > arr2[0]) {
output.push(arr2.shift())
} else {
output.push(arr1.shift())
}
}
return output.concat(...arr1, ...arr2)
}
const start = new Date().getTime()
mergeSort(breakDown(data))
const interval = new Date().getTime() - start + 'ms'
console.log({ size, interval })
})()
the classic solution that I am comparing to:
(() => {
let i = 0
const data = []
const size = 200000
while (i < size) {
data.push(Math.round(Math.random() * 1000))
i++
}
const mergeSort = nums => {
if (nums.length < 2) {
return nums;
}
const length = nums.length;
const middle = Math.floor(length / 2);
const left = nums.slice(0, middle);
const right = nums.slice(middle);
return merge(mergeSort(left), mergeSort(right));
};
const merge = (left, right) => {
const results = [];
while (left.length && right.length) {
if (left[0] <= right[0]) {
results.push(left.shift());
}
else {
results.push(right.shift());
}
}
return results.concat(left, right);
};
const start = new Date().getTime()
mergeSort(data)
const interval = new Date().getTime() - start + 'ms'
console.log({ size, interval })
})()
The first example pushes n = 200,000 sub-arrays of size 1 to create an array of n = 200,000 sub-arrays. Then for each "pass", it indexes two sub-arrays from the array of sub-arrays, merges them into a single sub-array, and pushes that sub-array onto yet another array of sub-arrays. So the code ends up creating 19 arrays of sub-arrays (including the original), by pushing the sub-arrays onto arrays of sub-arrays, which ends up as 400000+ pushes of sub-arrays onto arrays of sub-arrays. Those arrays of sub-arrays are also consuming a lot of space.
The second example splits the array into two sub-arrays, then recursion follows only the left sub-arrays until a sub-array of size 1 is reached, then a right sub-array of size 1 is reached, and only then does merging begin, following the call chain up and down, depth first, left first. Only 18 levels of recursion occur, and the shifts and pushes are only performed on arrays of numbers, while the first example is also performing those 400000+ pushes of sub-arrays onto arrays of sub-arrays.
An index-based merge sort (top-down or bottom-up) would be much faster than either the first or second example, taking about 50 ms to sort 200,000 numbers. As commented above, Wikipedia has pseudocode examples:
https://en.wikipedia.org/wiki/Merge_sort
classic original solution
The "classic original solution" is bottom-up merge sort. The 1945 EDVAC that von Neumann described a merge sort for didn't have a stack. Prior to that, Hollerith card sorters, dating back to 1887, used radix sort. The IBM type 77 collators (1937) could merge two decks of sorted cards into a single merged deck, which is probably what gave von Neumann the idea of merge sort in 1945, followed up in 1948 with a description and analysis of bottom-up merge sort.
The first practical non-punched-card sorts were tape sorts (variations of bottom-up merge sort), used on the Univac 1 (early 1950s), which could have up to 10 Uniservo tape drives.
Most libraries use some variation of hybrid insertion sort (on small sub-arrays) + bottom-up merge sort for stable sorting. Run time for insertion + top-down merge sort is about the same, but the recursion could throw a stack overflow exception mid-sort. Below is an example of hybrid insertion + bottom-up merge sort; on my system (Intel 3770K 3.5 GHz, Windows 7 Pro 64-bit), sorting 200,000 numbers takes ~40 ms in Chrome and ~35 ms in IE 11.
function merge(a, b, bgn, mid, end) {
var i = bgn // left: a[bgn,mid)
var j = mid // right: a[mid,end)
var k = bgn // index for b[]
while(true){
if(a[i] <= a[j]){ // if left <= right
b[k++] = a[i++] // copy left
if(i < mid) // if not end of left
continue // continue back to while
do // else copy rest of right
b[k++] = a[j++]
while(j < end)
break // and break
} else { // else left > right
b[k++] = a[j++] // copy right
if(j < end) // if not end of right
continue // continue back to while
do // else copy rest of left
b[k++] = a[i++]
while(i < mid)
break // and break
}
}
}
function insertionsort(a, ll, rr)
{
var i = ll+1
while(i < rr){
var t = a[i]
var j = i
while((j > ll) && a[j-1] > t){
a[j] = a[j-1]
j -= 1}
a[j] = t
i += 1}
}
function getpasscount(n) // return # passes
{
var i = 0
for(var s = 1; s < n; s <<= 1)
i += 1
return(i)
}
function mergesort(a)
{
var n = a.length
if(n < 2) // if size < 2 return
return
var b = new Array(n) // allocate temp array
var s = 64 // set run size
if(0 != (1&getpasscount(n))) // for even # of passes
s = 32
for(var rr = 0; rr < n; ){ // do insertion sorts
var ll = rr;
rr += s;
if(rr > n)
rr = n;
insertionsort(a, ll, rr);
}
while(s < n){ // while not done
var ee = 0 // reset end index
while(ee < n){ // merge pairs of runs
var ll = ee // ll = start of left run
var rr = ll+s // rr = start of right run
if(rr >= n){ // if only left run
do // copy it
b[ll] = a[ll]
while(++ll < n)
break // end of pass
}
ee = rr+s // ee = end of right run
if(ee > n)
ee = n
merge(a, b, ll, rr, ee) // merge a[left],a[right] to b[]
}
var t = a // swap array references
a = b
b = t
s <<= 1 // double the run size
}
}
var a = new Array(200000)
for (var i = 0; i < a.length; i++) {
a[i] = Math.round(Math.random() * 1000000000)
}
console.time('measure')
mergesort(a)
console.timeEnd('measure')
for (var i = 1; i < a.length; i++) {
if(a[i-1] > a[i]){
console.log('error')
break
}
}
I need to do something like this: Let's say I have an array:
[3, 4, 1, 2]
I need to swap 3 and 4, and 1 and 2, so my array looks like [4, 3, 2, 1]. For that, I could just use sort(). But here I need to count how many iterations it takes to change the initial array into the final output. Example:
// I can sort one pair per iteration
let array = [3, 4, 1, 2, 5]
let counter = 0;
//swap 3 and 4
counter++;
// swap 1 and 2
counter++;
// 5 goes to first place
counter++
// now counter = 3 <-- what I need
EDIT: Here is what I tried; it doesn't always work, though. It is from this question: Bubble sort algorithm JavaScript
let counter = 0;
let swapped;
do {
swapped = false;
for (var i = 0; i < array.length - 1; i++) {
if (array[i] < array[i + 1]) {
const temp = array[i];
array[i] = array[i + 1];
array[i + 1] = temp;
swapped = true;
counter++;
}
}
} while (swapped);
EDIT: It is not correct all the time, because a swap can move an element from the last place to the first, for example. Look at the example code above; it is edited now.
This is the most optimal code I have tried so far; it was also accepted as an optimal answer by HackerRank:
function minimumSwaps(arr) {
var arrLength = arr.length;
// create two new Arrays
// one record value and key separately
// second to keep visited node count (default set false to all)
var newArr = [];
var newArrVisited = [];
for (let i = 0; i < arrLength; i++) {
newArr[i]= [];
newArr[i].value = arr[i];
newArr[i].key = i;
newArrVisited[i] = false;
}
// sort new array by value
newArr.sort(function (a, b) {
return a.value - b.value;
})
var swp = 0;
for (let i = 0; i < arrLength; i++) {
// check if already visited or swapped
if (newArr[i].key == i || newArrVisited[i]) {
continue;
}
var cycle = 0;
var j = i;
while (!newArrVisited[j]) {
// mark as visited
newArrVisited[j] = true;
j = newArr[j].key; //assign next key
cycle++;
}
if (cycle > 0) {
swp += (cycle > 1) ? cycle - 1 : cycle;
}
}
return swp;
}
reference
//You are given an unordered array consisting of consecutive integers [1, 2, 3, ..., n] without any duplicates.
//still not the best
function minimumSwaps(arr) {
let count = 0;
for(let i =0; i< arr.length; i++){
if(arr[i]!=i+1){
let temp = arr[i];
arr[arr.indexOf(i+1)] =temp;
arr[i] = i+1;
count =count+1;
}
}
return count;
}
I assume there are two reasons you want to measure how many iterations a sort takes. So I will supply you with some theory (if the mathematics is too dense, don't worry about it), then some practical application.
There are many sort algorithms; some have a predictable number of iterations based on the number of items you are sorting, while for others it is luck of the draw, depending on the order of the items to be sorted and on how you select what is called a pivot. So if optimisation is very important to you, select the right algorithm for the purpose of the sort. Otherwise go for a general-purpose algorithm.
Here are the most popular sorting algorithms for the purpose of learning; each of them has best-, worst- and average-case running times. Heapsort, radix sort and binary sort are worth looking at if this is more than just a theoretical/learning exercise.
Quicksort
Worst case: Θ(n²)
Best case: Θ(n lg n)
Average case: Θ(n lg n)
Here is a Quicksort implementation by Charles Stover
Merge sort
Worst case: Θ(n lg n)
Best case: Θ(n lg n)
Average Case: Θ(n lg n)
(note they're all the same)
Here is a merge sort implementation by Alex Kondov
Insertion sort
Worst case: Θ(n²)
Best case: Θ(n)
Average case: Θ(n²)
(Note that its worst and average case are the same, but its best case is the best of any algorithm)
Here is an insertion sort implementation by Kyle Jensen
Selection sort
Worst case: Θ(n²)
Best case: Θ(n²)
Average case: Θ(n²)
(note they're all the same, like a merge sort).
Here is a selection sort algorithm written by @dbdavid, updated by myself for ES6
You can quite easily add an iterator variable to any of these examples to count the number of swaps they make, and play around with them to see which algorithms work best in which circumstance.
If there's a very good chance the items will already be well sorted, insertion sort is your best choice. If you have absolutely no idea, of the four basic sorting algorithms quicksort is your best choice.
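As a sketch of the "add an iterator variable" idea, here is an insertion sort instrumented with a counter (the function name is my own; the counter tallies element shifts, which for insertion sort equals the number of inversions in the input):

```javascript
// Insertion sort that also counts how many element moves
// (i.e. inversions) it performs while sorting.
function insertionSortCounted(arr) {
  const a = arr.slice(); // work on a copy, don't mutate the input
  let moves = 0;
  for (let i = 1; i < a.length; i++) {
    const t = a[i];
    let j = i;
    while (j > 0 && a[j - 1] > t) {
      a[j] = a[j - 1]; // shift the larger element right
      j--;
      moves++; // one move per inversion resolved
    }
    a[j] = t;
  }
  return { sorted: a, moves };
}
```

For example, `insertionSortCounted([3, 1, 2])` reports 2 moves, matching the two inversions (3,1) and (3,2). The same counter can be dropped into any of the four algorithms above to compare them empirically.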
function minimumSwaps(arr) {
  var counter = 0;
  for (var i = arr.length; i > 0; i--) {
    var minval = Math.min(...arr);
    console.log("before", arr);
    var minIndex = arr.indexOf(minval);
    if (minval !== arr[0]) {
      var temp = arr[0];
      arr[0] = arr[minIndex];
      arr[minIndex] = temp;
      console.log("after", arr);
      arr.splice(0, 1);
      counter++;
    } else {
      console.log("in else case");
      arr.splice(0, 1);
    }
  }
  return counter;
}
This is how I call my swap function:
minimumSwaps([3, 7, 6, 9, 1, 8, 4, 10, 2, 5]);
It works with selection sort. The logic is as follows:
Loop through the array length.
Find the minimum element in the array and swap it with the first element of the array, if the 0th index doesn't already hold the minimum value found.
Now remove the first element.
If the swap in step 2 is not needed, just remove the first element (which already holds the minimum value).
Increase the counter whenever we swap values.
Return the counter value after the loop.
It works for all values.
However, it fails due to a timeout for values around 50,000.
The solution to this problem is not very intuitive unless you are already somewhat familiar with computer science or are a real math whiz, but it all comes down to the number of inversions and the resulting cycles.
If you are new to computer science I recommend the following resources to supplement this solution:
GeeksforGeeks Article
Informal Proof Explanation
Graph Theory Explanation
If we define an inversion as:
arr[i]>arr[j]
where "i" is the current index and "j" is the following index --
if there are no inversions the array is already in order and requires no sorting.
For Example:
[1,2,3,4,5]
So the number of swaps is related to the number of inversions, but not directly, because each inversion can lead to a series of swaps (as opposed to a single swap, e.g. [3,1,2]).
So if one considers the following array:
[4,5,2,1,3,6,10,9,7,8]
This array is composed of four cycles.
Cycle One - 4,1 (one swap)
Cycle Two - 5,3,2 (two swaps)
Cycle Three - 6 (zero swaps)
Cycle Four - 10,8,9,7 (three swaps)
Now here's where the CS and Math magic really kicks in: each cycle will only require one pass through to properly sort it, and this is always going to be true.
So another way to say this would be: the minimum number of swaps to sort any cycle is the number of elements in that cycle minus one, or more explicitly:
minimum swaps = (cycle length - 1)
So if we sum the minimum swaps from each cycle, that sum will equal the minimum number of swaps for the original array.
Here is my attempt to explain WHY this algorithm works:
If we consider that any sequential set of numbers is just a section of a number line, then in any set starting at zero each value should equal its own index when the set is expressed as a JavaScript array. This idea becomes the criterion for programmatically determining whether an element is already in the correct position based on its own value.
If the current value is not equal to its own index, then the program should detect the start of a cycle and record its length. Once the while loop reaches the original value in the cycle, it adds the minimum number of swaps for that cycle to a counter variable.
Anyway, here is my code; it is very verbose but should work:
export const minimumSwaps = (arr) => {
//This function returns the lowest value
//from the provided array.
//If one subtracts this value the from
//any value in the array it should equal
//that value's index.
const shift = (function findLowest(arr){
let lowest=arr[0];
arr.forEach((val,i)=>{
if(val<lowest){
lowest=val;
}
})
return lowest;
})(arr);
//Declare a counter variable
//to keep track of the swaps.
let swaps = 0;
//This function returns an array equal
//in size to the original array provided.
//However, this array is composed of
//boolean values with a value of false.
const visited = (function boolArray(n){
const arr=[];
for(let i = 0; i<n;i++){
arr.push(false);
}
return arr;
})(arr.length);
//Iterate through each element of the
//of the provided array.
arr.forEach((val, i) => {
//If the current value being assessed minus
//the lowest value in the original array
//is not equal to the current loop index,
//or, if the corresponding index in
//the visited array is equal to true,
//then the value is already sorted
if (val - shift === i || visited[i]) return;
//Declare a counter variable to record
//cycle length.
let cycleLength = 0;
//Declare a variable for to use for the
//while loop below, one should start with
//the current loop index
let x = i;
//While the corresponding value in the
//corresponding index in the visited array
//is equal to false, then we
while (!visited[x]) {
//Set the value of the current
//corresponding index to true
visited[x] = true;
//Reset the x iteration variable to
//the next potential value in the cycle
x = arr[x] - shift;
//Add one to the cycle length variable
cycleLength++;
};
//Add the minimum number of swaps to
//the swaps counter variable, which
//is equal to the cycle length minus one
swaps += cycleLength - 1;
});
return swaps
}
This solution is simple and fast.
function minimumSwaps(arr) {
let minSwaps = 0;
for (let i = 0; i < arr.length; i++) {
// at this position what is the right number to be here
// for example at position 0 should be 1
// add 1 to i if array starts with 1 (1->n)
const right = i+1;
// is current position does not have the right number
if (arr[i] !== right) {
// find the index of the right number in the array
// only look from the current position up passing i to indexOf
const rightIdx = arr.indexOf(right, i);
// replace the other position with this position value
arr[rightIdx] = arr[i];
// replace this position with the right number
arr[i] = right;
// increment the swap count since a swap was done
++minSwaps;
}
}
return minSwaps;
}
Here is my solution, but it times out on 3 test cases with very large inputs. With smaller inputs it works and is not terminated due to timeout.
function minimumSwaps(arr) {
let swaps = 0;
for (let i = 0; i < arr.length; i++) {
if (arr[i] === i + 1) continue;
arr.splice(i, 1, arr.splice(arr.indexOf(i + 1), 1, arr[i])[0]); //swap
swaps++;
}
return swaps;
}
I'm learning how to make it more performant, any help is welcome.
This is my solution to the Minimum Swaps 2 problem in JavaScript. It passed all the test cases. I hope someone finds it useful.
//this function calls the mainSwaps function..
function minimumSwaps(arr){
let swaps = 0;
for (var i = 0; i < arr.length; i++){
var current = arr[i];
var targetIndex = i + 1;
if (current != targetIndex){
swaps += mainSwaps(arr, i);
}
}
return swaps;
}
//this function is called by the minimumSwaps function
function mainSwaps(arr, index){
let swapCount = 0;
let currentElement = arr[index];
let targetIndex = currentElement - 1;
let targetElement = arr[currentElement - 1];
while (currentElement != targetElement){
//swap the elements
arr[index] = targetElement;
arr[currentElement - 1] = currentElement;
//increase the swapcount
swapCount++;
//store the currentElement, targetElement with their new values..
currentElement = arr[index];
targetElement = arr[currentElement - 1];
}
return swapCount;
}
var myarray = [2,3,4,1,5];
console.log(minimumSwaps(myarray));
You can also do it with a map, but it's O(n log n):
const minSwaps = (arr) =>{
let arrSorted = [...arr].sort((a,b)=>a-b);
let indexMap = new Map();
// fill the indexes
for(let i=0; i<arr.length; i++){
indexMap.set(arr[i],i);
}
let count = 0;
for(let i=0; i<arrSorted.length;i++){
if(arr[i] != arrSorted[i]){
count++;
// swap the index
let newIdx = indexMap.get(arrSorted[i]);
indexMap.set(arr[i],newIdx);
indexMap.set(arrSorted[i],i);
// swap the values
[arr[i],arr[newIdx]] =[arr[newIdx],arr[i]];
}
}
return count;
}
Problem (from Cracking the Coding Interview): Write a method to randomly generate a set of m integers from an array of size n.
Each element must have equal probability of being chosen.
I'm implementing my answer in JS. For the recursive function, the code sometimes returns undefined as one of the elements in the array.
My JS Code
var pickMRecursively = function(A, m, i) {
if (i === undefined) return pickMRecursively(A, m, A.length);
if (i+1 === m) return A.slice(0, m);
if (i + m > m) {
var subset = pickMRecursively(A, m, i-1);
var k = rand(0, i);
if (k < m) subset[k] = A[i];
return subset;
}
return null;
};
Given Java Solution
int[] pickMRecursively(int[] original, int m, int i) {
    if (i + 1 == m) { // Base case
        /* return first m elements of original */
    } else if (i + m > m) {
        int[] subset = pickMRecursively(original, m, i - 1);
        int k = random value between 0 and i, inclusive
        if (k < m) {
            subset[k] = original[i];
        }
        return subset;
    }
    return null;
}
I hate these questions because they're often deliberately vague - I'd ask "What type of data is in the array?". But if this is actually a question about randomly re-ordering an array, then in JavaScript, given that arr is an array of numbers, of which some/all might not be integers...
function generateM(arr) {
var hold = [];
var m = [];
var n = arr.length;
var grab;
// clone arr >> hold
while(n--) {
hold[n] = arr[n];
}
n = hold.length;
// select randomly from hold
while(n--) {
grab = hold.splice(Math.floor(Math.random()*n),1)[0];
// ensure integers
m.push(Math.round(grab));
}
return m;
}
The array arr is cloned here to cover scoping issues and to produce a fresh collection, not reorder an existing one.
ADDIT: Alternatively, if this is just asking for a set of m.length random integers generated from an array of n.length, then it doesn't matter what the content of the array actually is and the range of possible (randomly generated) values will be (could be?) 0 - n.length, so...
function generateM(arr, M) {
var aLen = arr.length;
var m = [];
while (M--) {
m.push(Math.round(Math.random() * aLen));
}
return m;
}
...but that seems like a stupid, pointless challenge. The data in the 'array of size n' is quite important here it seems to me.
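For the literal task of drawing m equally likely elements without replacement, a partial Fisher-Yates shuffle is the usual tool. This is a sketch of that alternative approach (my own function name, not the book's solution or the code above):

```javascript
// Pick m elements uniformly at random from arr, without
// replacement, by shuffling only the first m positions
// (a partial Fisher-Yates shuffle, O(m) swaps).
function pickM(arr, m) {
  const a = arr.slice(); // work on a copy, leave the input intact
  for (let i = 0; i < m; i++) {
    // choose a random index in [i, a.length)
    const k = i + Math.floor(Math.random() * (a.length - i));
    [a[i], a[k]] = [a[k], a[i]]; // swap it into position i
  }
  return a.slice(0, m);
}
```

Each of the n elements ends up in the returned prefix with equal probability m/n, which is exactly the requirement in the problem statement.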
I have tried to implement this knapsack problem solution algorithm in JavaScript, but the solution s_opt that I get has a total weight greater than L_max.
What am I doing wrong?
I suspect it could be something related to closures in recursion.
/*
GENERAL:
Assume we have a knapsack and we want to bring as much stuff as possible.
Of each thing we have several variants to choose from. Each of these variants have
different value and takes different amount of space.
DEFINITIONS:
L_max = integer, size of the knapsack for the entire problem having N items
l = matrix, having the elements l[i-1][j-1] representing the space taken
by variant j of item i (-1 since indexing the matrices has index starting on zero, i.e. item i is stored at position i-1)
p = matrix, having the elements p[i-1][j-1] representing the value given by
by variant j of item i
n = total number of items (used in a sub-problem)
N = total number of items (used in the full problem, N >= n)
s_opt = vector having the optimal combination of variant selections s_i, i.e. s_opt = arg max p_sum
*/
function knapsack(L_max,l,p) {
// constructing (initializing) - they are private members
var self = this; // in order for private functions to be able read variables
this.N = l.length;
var DCached = []; // this is only used by a private function, so it doesn't need to be made public using this.*
this.s_opt = [];
this.p_mean = null;
this.L_max = L_max;
// define public optimization function for the entire problem
// when this is completed the user can read
// s_opt to get the solution and
// p_mean to know the quality of the solution
this.optimize = function() {
self.p_mean = D(self.N,self.L_max) / Math.max(1,self.N);
}
// define private sub-problem optimization function
var D = function(n,r) {
if (r<0)
return -Infinity;
if (n==0)
return 0;
if(DCached[n-1] != null) {
if(DCached[n-1][r-1] != null) {
return DCached[n-1][r-1];
}
}
var p_max = -Infinity;
var p_sum;
var J = l[n-1].length;
for(var j = 0; j < J; j++) {
p_sum = p[n-1][j] + D( n-1 , r - l[n-1][j] );
if(p_sum>p_max) {
p_max = p_sum;
self.s_opt[n-1] = j;
}
}
DCached[n-1] = [];
DCached[n-1][r-1] = p_max;
return p_max;
}
}
The client using this knapsack solver does the following:
var knapsackSolution = new knapsack(5,l,p);
knapsackSolution.optimize();
// now the client can access knapsackSolution.s_opt containing the solution.
I found a solution. When solving a sub-problem D(n,r), the code in the question returned the optimized value, but it didn't really manage the array s_opt properly. In the modified solution, pasted below, I fixed this: instead of returning only the optimized value of the knapsack, an array of the chosen variants (i.e. the arg max) is also returned. The cache is also modified to manage these two parts of the solution (both the max value and the arg max).
The code below also contains an additional feature. The user can now pass a value maxComputingComplexity controlling the computational size of the problem in a somewhat heuristic manner.
/*
GENERAL:
Assume we have a knapsack and we want to bring as much stuff as possible.
Of each thing we have several variants to choose from. Each of these variants have
different value and takes different amount of space.
The quantity of each variant is one.
DEFINITIONS:
L_max = integer, size of the knapsack, e.g. max number of letters, for the entire problem having N items
l = matrix, having the elements l[i-1][j-1] representing the space taken
by variant j of item i (-1 since indexing the matrices has index starting on zero, i.e. item i is stored at position i-1)
p = matrix, having the elements p[i-1][j-1] representing the value given by
by variant j of item i
maxComputingComplexity = value limiting the product L_max*self.N*M_max in order to make the optimization
complete in limited amount of time. It has a serious implication, since it may cut the list of alternatives
so that only the first alternatives are used in the computation, meaning that the input should be well
ordered
n = total number of items (used in a sub-problem)
N = total number of items (used in the full problem, N >= n)
M_i = number of variants of item i
s_i = which variant is chosen to pack of item i
s = vector of elements s_i representing a possible solution
r = maximum total space in the knapsack, i.e. sum(l[i][s_i]) <= r
p_sum = sum of the values of the selected variants, i.e. sum(p[i][s_i]
s_opt = vector having the optimal combination of variant selections s_i, i.e. s_opt = arg max p_sum
In order to solve this, let us see p_sum as a function
D(n,r) = p_sum (just seeing it as a function of the sub-problem n combined with the maximum total space r)
RESULT:
*/
function knapsack(L_max,l,p,maxComputingComplexity) {
// constructing (initializing) - they are private members
var self = this; // in order for private functions to be able read variables
this.N = l.length;
var DCached = []; // this is only used by a private function, so it doesn't need to be made public using this.*
//this.s_opt = [];
//this.p_mean = null;
this.L_max = L_max;
this.maxComputingComplexity = maxComputingComplexity;
//console.log("knapsack: Creating knapsack. N=" + N + ". L_max=" + L_max + ".");
// object to store the solution (both big problem and sub-problems)
function result(p_max,s_opt) {
this.p_max = p_max; //max value
this.s_opt = s_opt; //arg max value
}
// define public optimization function for the entire problem
// when this is completed the user can read
// s_opt to get the solution and
// p_mean to know the quality of the solution
// computing complexity O(L_max*self.N*M_max),
// think O=L_max*N*M_max => M_max=O/L_max/N => 3=x/140/20 => x=3*140*20 => x=8400
this.optimize = function() {
var M_max = Math.max(maxComputingComplexity / (L_max*self.N),2); //totally useless if not at least two
console.log("optimize: Setting M_max =" + M_max);
return D(self.N,self.L_max,M_max);
//self.p_mean = mainResult.D / Math.max(1,self.N);
// console.log...
}
// Define private sub-problem optimization function.
// The function reads to "global" variables, p and l
// and as arguments it takes
// n delimiting the which sub-set of items to be able to include (from p and l)
// r setting the max space that this sub-set of items may take
// Based on these arguments the function optimizes D
// and returns
// D the max value that can be obtained by combining the things
// s_opt the selection (array of length n) of things optimizing D
var D = function(n,r,M_max) {
// Start by checking whether the value is already cached...
if(DCached[n-1] != null && DCached[n-1][r-1] != null) {
//console.log("knapsack.D: n=" + n + " r=" + r + " returning from cache.");
return DCached[n-1][r-1];
}
var D_result = new result(-Infinity, []); // here we will manage the result
//D_result.s_opt[n-1] = 0; // just put something there to start with
if (r<0) {
//D_result.p_max = -Infinity;
return D_result;
}
if (n==0) {
D_result.p_max = 0;
return D_result;
}
var p_sum;
//self.s_opt[n] = 0; not needed
var J = Math.min(l[n-1].length,M_max);
var D_minusOneResult; //storing the result when optimizing all previous items given a max length
for(var j = 0; j < J; j++) {
D_minusOneResult = D( n-1 , r - l[n-1][j] , M_max);
p_sum = p[n-1][j] + D_minusOneResult.p_max;
if(p_sum > D_result.p_max) {
D_result.p_max = p_sum;
// copy before extending, so the cached sub-result's s_opt is not mutated
D_result.s_opt = D_minusOneResult.s_opt.slice();
D_result.s_opt[n-1] = j;
}
}
if (DCached[n-1] == null) {
DCached[n-1] = []; // don't wipe entries already cached for other r values
}
DCached[n-1][r-1] = D_result;
//console.log("knapsack.D: n=" + n + " r=" + r + " p_max= "+ p_max);
return D_result;
}
}
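To make the D(n, r) recursion easier to test in isolation, here is a minimal self-contained sketch of the same idea: the M_max truncation and the result object are simplified away, and the item data at the bottom is made up for illustration.

```javascript
// Sketch of D(n, r): best value obtainable from the first n items within space r.
// For each variant j of item n, recurse on the remaining n-1 items with the
// space reduced by that variant's length l[n-1][j].
function solveKnapsack(l, p, L_max) {
    var cache = {};
    function D(n, r) {
        if (r < 0) return { p_max: -Infinity, s_opt: [] }; // overfull: invalid
        if (n === 0) return { p_max: 0, s_opt: [] };       // no items left
        var key = n + "," + r;
        if (cache[key]) return cache[key];
        var best = { p_max: -Infinity, s_opt: [] };
        for (var j = 0; j < l[n - 1].length; j++) {
            var sub = D(n - 1, r - l[n - 1][j]);
            var p_sum = p[n - 1][j] + sub.p_max;
            if (p_sum > best.p_max) {
                // copy the sub-solution so the cached entry is never mutated
                best = { p_max: p_sum, s_opt: sub.s_opt.slice() };
                best.s_opt[n - 1] = j;
            }
        }
        cache[key] = best;
        return best;
    }
    return D(l.length, L_max);
}

// Made-up data: two items with two variants each; lengths l, values p, space 4.
var best = solveKnapsack([[1, 2], [2, 3]], [[10, 25], [20, 35]], 4);
console.log(best.p_max, best.s_opt); // 45 [ 1, 0 ]
```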
I have:
function getRandomInt(min, max){
return Math.floor(Math.random() * (max - min + 1)) + min;
}
But the problem is that I want to populate something with the elements of an array in random order (so they do not appear in the same order every time), which means I need to ensure that each number returned is unique compared to the ones drawn so far.
So instead of:
for(var i = 0; i < myArray.length; i++) {
}
I have:
var i;
var count = 0;
while(count < myArray.length){
count++;
i = getRandomInt(0, myArray.length - 1); // TODO ensure value is unique
// do stuff with myArray[i];
}
It looks like rather than independent uniform random numbers you actually want a random permutation of the set {1, 2, 3, ..., N}. JavaScript arrays have no built-in shuffle method, but the Fisher-Yates shuffle does exactly that.
As requested, here's the code example:
function shuffle(array) {
var top = array.length;
while (top > 0) {
// pick a random index in the unshuffled range [0, top - 1] ...
var current = Math.floor(Math.random() * top);
top--;
// ... and swap it into the last unshuffled slot
var tmp = array[current];
array[current] = array[top];
array[top] = tmp;
}
return array;
}
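For a runnable sketch of how this answers the original question, the Fisher-Yates shuffle is repeated below so the example works on its own; myArray and its contents are just placeholders.

```javascript
// Fisher-Yates shuffle: swap a random element from the unshuffled part of
// the array into the last unshuffled slot, then shrink that part by one.
function shuffle(array) {
    var top = array.length;
    while (top > 0) {
        var current = Math.floor(Math.random() * top);
        top--;
        var tmp = array[current];
        array[current] = array[top];
        array[top] = tmp;
    }
    return array;
}

// Visit every element of myArray exactly once, in random order.
var myArray = ["a", "b", "c", "d", "e"];
shuffle(myArray).forEach(function (item) {
    console.log(item); // each element printed once; order varies per run
});
```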
Sometimes the best way to randomize something (say a card deck) is to not shuffle it before pulling it out, but to shuffle it as you pull it out.
Say you have:
var i,
endNum = 51,
array = new Array(52);
for(i = 0; i <= endNum; i++) {
array[i] = i;
}
Then you can write a function like this:
function drawNumber() {
// set index to draw from (anywhere in the still-undrawn range 0..endNum)
var swap,
drawIndex = Math.floor(Math.random() * (endNum + 1));
// swap the values at the drawn index and at the "end" of the deck
swap = array[drawIndex];
array[drawIndex] = array[endNum];
array[endNum] = swap;
endNum--;
return swap; // the drawn value
}
Since I decrement the end counter, the drawn items are "discarded" at the end of the stack, and the randomize function only treats the items from 0 to endNum as viable.
This is a common pattern I've used; I may have adapted it to JavaScript incorrectly, since the last time I used it was for a simple card game in C#. In fact, I just looked at that code and it had int ____ instead of var ____.
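A self-contained sketch of that draw-as-you-go pattern (makeDeck and its names are my own framing, not from the post above): the undrawn values live in array[0..endNum], and each draw swaps the pick to the end and shrinks the live range.

```javascript
// Build a "deck" of the integers 0..size-1 with a draw() that returns
// each value exactly once, in random order.
function makeDeck(size) {
    var array = [];
    var endNum = size - 1;
    for (var i = 0; i < size; i++) array[i] = i;
    return {
        draw: function () {
            // pick anywhere in the still-undrawn range 0..endNum
            var drawIndex = Math.floor(Math.random() * (endNum + 1));
            var swap = array[drawIndex];
            // move the drawn value to the end and shrink the live range
            array[drawIndex] = array[endNum];
            array[endNum] = swap;
            endNum--;
            return swap;
        }
    };
}

// Drawing 52 times yields every value 0..51 exactly once.
var deck = makeDeck(52);
var seen = {};
for (var k = 0; k < 52; k++) {
    seen[deck.draw()] = true;
}
console.log(Object.keys(seen).length); // 52
```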
If I understand correctly, you want an array of integers sorted in random order.
A way to do it is described here
First create a random comparator function:
function randOrd() {
return (Math.round(Math.random()) - 0.5);
}
Then, randomize your array. The following example shows how:
anyArray = new Array('1', '2', '3', '4', '5');
anyArray.sort(randOrd);
document.write('Random: ' + anyArray + '<br />');
Be aware that a random comparator like this does not produce a uniformly random order, since sort expects a consistent comparator, so prefer a Fisher-Yates shuffle when that matters.
Hope that will help,
Regards,
Max
You can pass a comparator function to the Array.prototype.sort method. If this function returns a value that is randomly above or below zero, then your array will come out in a random order.
myarray.sort(function() { return 0.5 - Math.random(); });
That should do the trick without you having to worry about whether or not every random number is unique.
No loops and very simple, though note that this comparator trick is biased; a Fisher-Yates shuffle gives a uniform result.