I'm looking to divide a rectangle into smaller cells while keeping the cells close to the same size. I'm trying to keep the cell widths and heights similar, with an aspect ratio of 1:1 or as close to that as possible.
Examples:
Number of blocks = 1
return rectangle
Number of blocks = 2
return splitCell(rectangle)
Number of blocks = 3
const cells = splitCell(rectangle)
return [...splitCell(cells[0]), cells[1]]
Number of blocks = 4
const cells = splitCell(rectangle)
return [...splitCell(cells[0]), ...splitCell(cells[1])]
Number of blocks = 5
This is where it starts getting tricky. Firstly, one row has a different number of cells than the other. Secondly, the cells in row 1 are a different size to those in row 2.
?
Number of blocks = 41
?
The split cell function:
const splitCell = (cell) => cell.width > cell.height ? splitByWidth(cell) : splitByHeight(cell);
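splitByWidth and splitByHeight aren't shown in the question; a minimal sketch of what they might look like, assuming a cell is an { x, y, width, height } object halved along one axis:

// Hypothetical helpers: each returns the two halves of the given cell.
const splitByWidth = (cell) => [
  { ...cell, width: cell.width / 2 },
  { ...cell, x: cell.x + cell.width / 2, width: cell.width / 2 }
];
const splitByHeight = (cell) => [
  { ...cell, height: cell.height / 2 },
  { ...cell, y: cell.y + cell.height / 2, height: cell.height / 2 }
];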
My initial solution was to find the cell with the largest side and divide that, but at a certain point most cells have the same dimensions, so it ends up with a group of smaller cells on one side:
Something like:
let cells = [rectangle];
for (let i = 1; i < n; i++) {
  const biggest = findBiggest(cells); // the cell with the largest side
  cells = cells.filter(c => c !== biggest).concat(splitCell(biggest));
}
But I am looking for a solution where most of the cells have a similar aspect ratio.
Ideally, you would like to use N squares of area W·H/N, i.e. of side √(W·H/N). So the number of squares per side should be U = √(N·W/H) in width and V = √(N·H/W) in height, or the closest integers.
So you can form the subdivision into U·V squares, and add or remove one cell per row or per column (and rearrange evenly) to adjust to N.
As a fine-tuning, you can also adjust the heights of the two different row types by using a penalty value (one that rates the discrepancy from the ideal ratio; for instance, the absolute difference between the cell area and the ideal area W·H/N) and minimizing the total penalty.
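A rough sketch of that idea (my own reading of the answer, not its code): round the ideal row count, spread the N cells as evenly as possible across the rows, and let each row's cells share the full width:

// Sketch: divide rect { width, height } into n cells of near-1:1 aspect ratio.
function gridCells(rect, n) {
  // Ideal row count if every cell were a square of side sqrt(W*H/n).
  const rows = Math.max(1, Math.round(Math.sqrt(n * rect.height / rect.width)));
  const rowHeight = rect.height / rows;
  const cells = [];
  for (let r = 0; r < rows; r++) {
    // Per-row counts telescope so they sum to exactly n.
    const count = Math.ceil(n * (r + 1) / rows) - Math.ceil(n * r / rows);
    const cellWidth = rect.width / count;
    for (let c = 0; c < count; c++) {
      cells.push({ x: c * cellWidth, y: r * rowHeight, width: cellWidth, height: rowHeight });
    }
  }
  return cells;
}

For n = 5 on a square rectangle this yields a 3-cell row over a 2-cell row; trying rows ± 1 and scoring each layout with the penalty described above would refine the result.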
Related
I have an ordered data set of decimal numbers. This data is always similar, but not always the same. The expected data is a few (0 - 5) large numbers, followed by several (10 - 90) average numbers, then followed by smaller numbers. There are cases where a large number may be mixed into the average numbers. See the following arrays.
let expectedData = [35.267,9.267,9.332,9.186,9.220,9.141,9.107,9.114,9.098,9.181,9.220,4.012,0.132];
let expectedData = [35.267,32.267,9.267,9.332,9.186,9.220,9.141,9.107,30.267,9.114,9.098,9.181,9.220,4.012,0.132];
I am trying to analyze the data by getting the average without the high numbers at the front and the low numbers at the back. The middle high/low values are fine to keep in the average. I have a partial solution below. Right now I am sort of brute-forcing it, but the solution isn't perfect: on smaller datasets the first average calculation is influenced by the large number.
My question is: Is there a way to handle this type of problem, which is identifying patterns in an array of numbers?
My algorithm is:
Get an average of the array
Calculate an above/below average value
Remove front (n) elements that are above average
Remove end elements that are below average
Recalculate average
In JavaScript I have (this is partial, leaving out the below-average step):
let total = expectedData.reduce((rt, cur) => rt + cur, 0);
let avg = total / expectedData.length;
let aboveAvg = avg * 0.1 + avg; // 10% above the average
let remove = -1;
for (let k = 0; k < expectedData.length; k++) {
  if (expectedData[k] > aboveAvg) {
    remove = k;
  } else {
    if (k == 0) {
      remove = -1; // no need to remove
    }
    // break because we don't want large values from the middle removed
    break;
  }
}
if (remove >= 0) {
  // remove the above-average values from the front
  expectedData.splice(0, remove + 1);
}
// remove belows
// recalculate average
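The omitted steps would mirror the front pass; a sketch of how they might look (my completion, not the question's code, using the same 10% band):

// Trim trailing below-average values, then recompute the average.
let belowAvg = avg - avg * 0.1;
let removeEnd = expectedData.length;
for (let k = expectedData.length - 1; k >= 0; k--) {
  if (expectedData[k] < belowAvg) {
    removeEnd = k;
  } else {
    break; // stop at the first normal value so middle lows are kept
  }
}
expectedData.splice(removeEnd); // removes nothing if there are no trailing lows
let newAvg = expectedData.reduce((rt, cur) => rt + cur, 0) / expectedData.length;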
I believe you are looking for an outlier detection algorithm. There are already a bunch of questions related to this on Stack Overflow.
However, each outlier detection algorithm has its own merits.
Here are a few of them
https://mathworld.wolfram.com/Outlier.html
High outliers are anything beyond the 3rd quartile + 1.5 * the inter-quartile range (IQR)
Low outliers are anything beneath the 1st quartile - 1.5 * IQR
Grubbs's test
You can check how it works for your expectations here
Apart from these 2, there is a comparison calculator here. You can visit it to use other algorithms per your need.
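For instance, the IQR rule above could be sketched like this in JavaScript (a simple index-based quartile estimate; statistics libraries interpolate differently):

// IQR trim: keep values within [Q1 - 1.5*IQR, Q3 + 1.5*IQR].
function trimOutliers(data) {
  const sorted = [...data].sort((a, b) => a - b);
  const q = p => sorted[Math.floor((sorted.length - 1) * p)]; // crude quartile
  const q1 = q(0.25), q3 = q(0.75);
  const iqr = q3 - q1;
  return data.filter(v => v >= q1 - 1.5 * iqr && v <= q3 + 1.5 * iqr);
}

let expectedData = [35.267, 9.267, 9.332, 9.186, 9.220, 9.141, 9.107, 9.114, 9.098, 9.181, 9.220, 4.012, 0.132];
console.log(trimOutliers(expectedData)); // drops 35.267, 4.012 and 0.132 here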
I would first have tried a sliding window coupled with a hysteresis / band filter in order to detect the high-value peaks.
Then, as your sliding window advances, you can add the previous first value (which is now the last of the analyzed values) to the global sum, and add 1 to the count of total values.
When you encounter a peak (= something that causes the hysteresis to move or overflow the band filter), you either remove the values (which may be costly), or better, set the value to NaN so you can safely ignore it.
You should keep computing a sliding average within your window in order to auto-correct the hysteresis/band filter, so that it rejects only the start values of a peak (the end values are the start values of the next one); once values have stabilized at a new level, they will be kept again.
The size of the sliding window sets how many consecutive "stable" values are needed for values to be kept, or in other words how many UNstable values are rejected when you reach a new level.
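A loose sketch of that idea (much simplified: a fixed-size window and a relative band around the window's running mean, assuming positive data; NaN marking as described above):

// Mark values that stray too far from the window mean as NaN.
function rejectPeaks(data, windowSize = 5, band = 0.5) {
  const out = [...data];
  for (let i = 0; i < out.length; i++) {
    const start = Math.max(0, i - windowSize);
    // Use already-accepted values only, so rejected peaks don't skew the mean.
    const window = out.slice(start, i).filter(v => !Number.isNaN(v));
    if (window.length === 0) continue; // nothing to compare against yet
    const mean = window.reduce((s, v) => s + v, 0) / window.length;
    if (Math.abs(out[i] - mean) > band * mean) out[i] = NaN;
  }
  return out;
}

Once a whole window has been rejected, the window empties and values are accepted again, which matches the "how many unstable values are rejected" behavior described above.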
For that, you can check the mode of the (rounded) values and then take all the numbers in a certain range around the mode. That range can be taken from the data itself, for example 10% of the max - min range. That helps you filter your data; you can select the percentage that fits your needs. Something like this:
let expectedData = [35.267,9.267,9.332,9.186,9.220,9.141,9.107,9.114,9.098,9.181,9.220,4.012,0.132];
expectedData.sort((a, b) => a - b);
/// Get the range of the data
const RANGE = expectedData[ expectedData.length - 1 ] - expectedData[0];
const WINDOW = 0.1; /// Window of selection 10% from left and right
/// Frequency of each number
let dist = expectedData.reduce((acc, e) => (acc[ Math.floor(e) ] = (acc[ Math.floor(e) ] || 0) + 1, acc), {});
let mode = +Object.entries(dist).sort((a, b) => b[1] - a[1])[0][0];
let newData = expectedData.filter(e => mode - RANGE * WINDOW <= e && e <= mode + RANGE * WINDOW);
console.log(newData);
I have 3 groups using group().reduceCount() that produce frequency counts.
What I'm trying to do is turn the counts into percentages by dividing each count by the total size of the group.
This would give me the size of the group:
var valueGroupCounter = value.groupAll().value();
This would divide each count by the total
.valueAccessor(function(d) { return (d.value/valueGroupCounter()); })
When I filter a section where the groups are the same or about the same size, I get the graph on the right, which is what I want; but when I filter a section where the groups have very different sizes, I get the graph on the left. I want the 3 histograms to be in about the same range by having them show percentages instead of counts.
The problem is that this gives me the same size for all 3 groups; some groups have empty values, and the 3 groups have different sizes.
I made a pen and added some data:
I was able to get the desired results by using the Number Display Example.
Now the histograms are in the same range.
This gives me the total size of the group, excluding empty values:
valueGroupCounter.value().n
Divide each count by the total
.valueAccessor(function(p) { return p.value/valueGroupCounter.value().n })
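For reference, the counting group probably looks something like this (my guess at the pattern from the Number Display Example, using crossfilter's groupAll().reduce; the empty-value test depends on your data shape):

// Hypothetical groupAll that counts only non-empty records.
// v.value !== 0 is a placeholder for your own empty-value test.
var valueGroupCounter = value.groupAll().reduce(
  function (p, v) { if (v.value !== 0) p.n++; return p; }, // add
  function (p, v) { if (v.value !== 0) p.n--; return p; }, // remove
  function () { return { n: 0 }; }                         // initial
);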
Forked pen corrected
I have been a lurker on this site for a while as I have been designing a small program in JavaScript. I have run into an issue that I can't seem to solve, though, and I need your help!
You can find my question at the bottom of the post. In layman's terms, I need help setting up a function loop to compare three values against three values in a nested array, where each value can be compared to all three values in the array.
I am working on a type of calculator to find the best way to cut a 3D rectangular piece of material to get the best yield, given a certain cut size I need. I have many different potential sizes of material to cut from, sorted in a nested array, like this:
data[i][x], where i = the number key of the material arrays available, and x (each array) is laid out like so:
[material ID#, height, width, length, volume]
What I have is a cut size I need to cut out of the parent material, which consists of: height, width, length.
The problem I have run into is the best way to align the cut size in the parent material (comparing each value against the other: height, width, length) to return the highest yield possible.
I will mention that I do have a few constraints.
First, whatever lines up in the height measurement in the parent material can only be 1 cut size tall. The other two dimensions (length and width) can be multiple sizes each if they fit.
So, to reiterate the constraints:
Height must be 1 unit.
Width can be multiple units
Length can be multiple units
The orientation of the cut piece does not matter, so long as these constraints are held to.
Length or width can be a decimal, but only whole cut sizes will count.
Below, I have placed some code I designed to do this, but the problem is that it is not nearly dynamic enough, as you can see: it only compares height to height, width to width, and so forth.
The input for this is as follows:
cutheight;
cutwidth;
cutlength;
data[i][x]; // each nested data array is like so: [material ID key, height of material, width of material, length of material, volume of material]
for (i = 0; i < data.length; i++) {
  if (data[i][1] > cutheight && data[i][2] > cutwidth && data[i][3] > cutlength) {
    // the last argument is the total material volume needed for the pieces:
    // material volume * how many materials the needed number of cuts requires
    insertSort(results, data[i],
      data[i][4] * Math.ceil(numCutPiecesNeeded /
        (Math.floor(data[i][2] / cutwidth) * Math.floor(data[i][3] / cutlength))));
  }
}
function insertSort(array, match, materialVolumeMin) {
  // iterate through the results and see where the new match should be inserted
  for (j = 0; j < array.length; j++) {
    // if the new materialVolumeMin is less than the one stored at position j
    if (array[j][5] > materialVolumeMin) {
      // insert the match at position j
      array.splice(j, 0, match);
      array[j].push(materialVolumeMin);
      return true;
    }
  }
  // if materialVolumeMin is not less than anything in the array, push the match
  // to the end; this also covers the case where this is the first match
  match.push(materialVolumeMin);
  array.push(match);
  return true;
}
What this code does is return all the nested arrays with a new value attached to the end, which is in effect the total volume of material needed to get the cuts you need out of it. I use this volume as a way of sorting the arrays to find the most cost-effective (highest-yield) material to use.
I believe this is more of a logic question, but I have no idea how to do this. I do know that it would be better to have multiple cuts available on the height axis as well, but due to various factors this is not possible.
I also believe that the height should be found first, by comparing all three dimensions of the cut size to the parent material and finding the one that leaves the least waste.
How I see this maybe happening:
Math.min((data[i][1]-cutheight), (data[i][1]-cutwidth), (data[i][1]-cutlength));
From there, I really don't know. I appreciate any and all help, advice, or suggestions. Thank you!
Edit: I don't think I was clear in my question. The code above is how I'm going about the problem now, but it is by no means how I want to do it in the finished program.
My question is:
How can I set up a function to compare my three sizes against each possible material size in my nested array, and return the best match, where data[i][5] (which is data[i][4] (the volume of the material) times the number of cut pieces I need divided by how many fit in said parent material) is the smallest of all my possible choices?
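A sketch of one way to make that comparison dynamic (my own sketch, not a drop-in: it tries each of the three cut dimensions on the material's height axis, and both arrangements of the remaining two on width and length, keeping the orientation that needs the least material volume):

// mat = [id, height, width, length, volume]; cut = { h, w, l } (the cut size).
function bestYield(mat, cut, numCutPiecesNeeded) {
  const dims = [cut.h, cut.w, cut.l];
  let best = null;
  for (let hi = 0; hi < 3; hi++) {
    const rest = dims.filter((_, i) => i !== hi);
    for (const [wd, ld] of [[rest[0], rest[1]], [rest[1], rest[0]]]) {
      if (dims[hi] > mat[1] || wd > mat[2] || ld > mat[3]) continue; // doesn't fit
      const perMaterial = Math.floor(mat[2] / wd) * Math.floor(mat[3] / ld);
      const volumeNeeded = mat[4] * Math.ceil(numCutPiecesNeeded / perMaterial);
      if (!best || volumeNeeded < best.volumeNeeded) {
        best = { volumeNeeded, perMaterial, orientation: [dims[hi], wd, ld] };
      }
    }
  }
  return best; // null if the cut fits in no orientation
}

Running this for each material entry and feeding best.volumeNeeded into insertSort would replace the fixed height-to-height comparison above.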
I have an array populated with 1s and 0s.
When "rendered" out, it looks like this:
Basically, I would like to select the lower sections (highlighted in red). Is there any way to select just the lowest 1s?
Thanks again!
[edit]
I should mention that the lowest points are random each time!
[edit2]
Currently I'm just selecting everything below a certain area and seeing if it's a 1 and doing what I want... Is there no better way?
You loop through the 2d array in reverse...
var lowest = [];
var threshold = 6; // find the 6 "lowest" 1's
for (var row = arr.length - 1; row >= 0; row--) {
  for (var col = arr[row].length - 1; col >= 0; col--) {
    if (arr[row][col] == 1 && threshold > 0) {
      threshold--;
      lowest.push({x: col, y: row});
    }
  }
}
Another way:
1) Compute the per-row density (the number of black pixels per row) and put this data inside a new 1D array.
2) Decide which rows count as 'leg' rows (possibly with an absolute threshold, or a relative one, e.g. less than 30% of the mean value of the non-empty rows), as sketched below.
3) Push all the (x, y) values in the 'leg' rows.
This avoids lots of small points 'eating' the pixel threshold before you come to the body of the monster.
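A sketch of that density approach (assuming arr is the 2D array of 1s and 0s, and using the 30% figure from step 2 as an example):

// 1) per-row density: number of 1s in each row
const density = arr.map(row => row.reduce((sum, v) => sum + v, 0));

// 2) 'leg' rows: non-empty rows well below the mean density of non-empty rows
const nonEmpty = density.filter(d => d > 0);
const mean = nonEmpty.reduce((sum, d) => sum + d, 0) / nonEmpty.length;
const isLeg = d => d > 0 && d < 0.3 * mean;

// 3) collect the (x, y) of every 1 in the 'leg' rows
const legPoints = [];
density.forEach((d, y) => {
  if (!isLeg(d)) return;
  arr[y].forEach((v, x) => { if (v === 1) legPoints.push({ x, y }); });
});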
Say I have a parent div with width 500px. It has 13 child elements that should fill its width.
If I give each child element a width of 500 / 13 = 38.46... pixels, the browser will floor the pixel values, so I end up with 13 elements that take up a total of 38 * 13 = 494 pixels. There will be 6 pixels on the right side of the parent div that are not filled.
Is there an easy way to dither the child element widths so that the remainder (6 pixels) is distributed among some of the child elements, resulting in a total width of 500 pixels?
If I have to do the calculations manually and there's no way to get the browser to manage it, what dithering algorithm might I use in this case?
EDIT: A clarification -- I'm doing these calculations on the client side using JavaScript. Also, the size of the parent div and the number of child divs vary at runtime; the figures above are just an example.
I'd suggest you just do everything with integer math yourself. You can then calculate what the uneven amount is and decide how you want to distribute it across the elements. My supposition is that the least noticeable way to distribute the extra pixels would be to keep as many like-width elements next to each other as possible.
One way of doing that would be to calculate how many extra pixels N you have and then give each of the first N elements, starting from the left, one extra pixel. If you were worried about things not being centered, you could allocate the first extra pixel to the far-left object, the second to the far-right, the third to the 2nd from the left, the fourth to the 2nd from the right, and so on. This would have one more boundary between dissimilar-width objects, but be more symmetric than the first algorithm.
Here's some code that shows how one could put the extra pixels on the end elements from outside in:
function distributeWidth(len, totalWidth) {
  var results = new Array(len);
  var coreWidth = Math.floor(totalWidth / len);
  var extraWidth = totalWidth - (coreWidth * len);
  var w, s;
  for (var i = 0; i < len; i++) {
    w = coreWidth;
    if (extraWidth > 0) {
      w++;
      extraWidth--;
    }
    if (i % 2 == 0) {
      s = i / 2; // even, index from front of array
    } else {
      s = len - ((i + 1) / 2); // odd, index from end of array
    }
    results[s] = w;
  }
  return results;
}
And here's a fiddle to see it in action: http://jsfiddle.net/jfriend00/qpFtT/2/