How can I expand the radius of a light bloom? - javascript

I am writing a software filter object and trying to implement a light bloom effect. I'm using a simple, two pass convolution approach which works fine except that the effect radius is tiny and I can't seem to control the radius. I've played with larger box filters and adjusted the weights of the various pixels, but none of that seems to have any effect. The effect seems to have a maximum size (which is not very big) and then all changes to the parameters just serve to make it smaller.
I'd like to be able to create a light bloom with an arbitrary radius. After a LOT of experimentation and searching online, I'm starting to wonder if this just can't be done. I've been thinking about alternate approaches--plasmas, gradients, and various seeding schemes--but I'd like to hound this path into the ground first. Does anyone out there know how to create an arbitrarily sized light bloom (in software)?
The JavaScript is as follows (this operates on an HTML5 canvas; I can add comments to the code if needed):
// the kernel functions are called via Array.map on this.backBuffer.data, a canvas surface color array
this.kernelFirstPass = function(val, index, array)
{
    // skip the first/last row, the first/last pixel of each row, and every alpha byte (index%4 == 3)
    if(index<pitch || index>=array.length-pitch || index%pitch<4 || index%pitch>pitch-5 || index%4==3)
        return;

    // tap weights: c is the centre tap; l1..l3 are reused for both the left and right taps
    // (r1..r3 are declared but never used)
    var c = 1,
        l1 = 1,
        l2 = 1,
        l3 = 1,
        r1 = 1,
        r2 = 1,
        r3 = 1;

    // horizontal 7-tap weighted average, read from the front buffer
    var avg =
    (
        c*this.frontBuffer.data[index]+
        l1*this.frontBuffer.data[index-4]+
        l2*this.frontBuffer.data[index-8]+
        l3*this.frontBuffer.data[index-12]+
        l1*this.frontBuffer.data[index+4]+
        l2*this.frontBuffer.data[index+8]+
        l3*this.frontBuffer.data[index+12]
    )/(c+l1+l2+l3+l1+l2+l3);
    //this.frontBuffer.data[index] = avg;
    array[index] = avg;
}
this.kernelSecondPass = function(val, index, array)
{
    // same border/alpha skip as the first pass
    if(index<pitch || index>=array.length-pitch || index%pitch<4 || index%pitch>=pitch-4 || index%4==3)
        return;

    var c = 1,
        l1 = 1,
        l2 = 1,
        l3 = 1,
        r1 = 1,
        r2 = 1,
        r3 = 1;

    // vertical 7-tap weighted average, read from and written back to the same array,
    // so this pass operates on the output of the first pass
    var avg =
    (
        c*array[index]+
        l1*array[index-pitch]+
        l2*array[index-(pitch*2)]+
        l3*array[index-(pitch*3)]+
        l1*array[index+pitch]+
        l2*array[index+(pitch*2)]+
        l3*array[index+(pitch*3)]
    )/(c+l1+l2+l3+l1+l2+l3);
    array[index] = avg;
}
Perhaps an important point that I missed in my original question was to explain that I'm not trying to simulate any real or particular phenomenon (and it probably doesn't help that I call it a "light" bloom). It could be that, when dealing with real light phenomena, in order to have a penumbra with an arbitrary radius you need a source (i.e. a "completely saturated area") with an arbitrary radius. If that were in fact the way a real light bloom behaved, then Jim's and tskuzzy's explanations would seem like reasonable approaches to simulating that. Regardless, that's not what I'm trying to accomplish. I want to control the radius of the "gradient" portion of the bloom independently from the size/intensity/etc. of the source. I'd like to be able to set a single, white (max value) pixel in the center of the screen and have the bloom grow out as far as I want it, to the edges of the screen or beyond if I feel like it.

In order to achieve a good bloom effect, you should be using high-dynamic-range (HDR) rendering. Otherwise, your whites will not be bright enough.
The reason for this is that pixel brightnesses are typically represented in the range [0,1], so the maximum brightness is clamped to 1. However, in a real-world situation there isn't really a maximum. And although really bright lights are all perceived as a "1", the visual side effects like bloom are not the same.
So what you have to do is allow for really bright areas to exceed the maximum brightness, at least for the bloom convolution. Then when you do the rendering, clamp the values as needed.
Once you have that done, you should be able to increase the bloom radius simply by increasing the size of the Airy disk used in the convolution.
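As a concrete, deliberately simplified sketch of that pipeline (a plain box blur with an adjustable radius stands in for a true Airy-disk kernel, and only a single luminance channel is bloomed; bloomPass, brightThreshold and gain are names of my own choosing):

// src is an ImageData.data array (Uint8ClampedArray) of size width * height * 4.
function bloomPass(src, width, height, radius, brightThreshold, gain) {
  var hdr = new Float32Array(width * height);   // headroom: values may exceed 255 here
  // 1. bright pass: keep only the energy above the threshold, unclamped
  for (var i = 0, p = 0; i < hdr.length; i++, p += 4) {
    var lum = 0.2126 * src[p] + 0.7152 * src[p + 1] + 0.0722 * src[p + 2];
    hdr[i] = Math.max(0, lum - brightThreshold) * gain;
  }
  // 2. separable blur with an arbitrary radius (rows, then columns)
  var tmp = new Float32Array(hdr.length);
  boxBlur1D(hdr, tmp, width, height, radius, true);
  boxBlur1D(tmp, hdr, width, height, radius, false);
  // 3. add the bloom back and clamp only now, at output time
  for (var j = 0, q = 0; j < hdr.length; j++, q += 4) {
    src[q]     = Math.min(255, src[q]     + hdr[j]);
    src[q + 1] = Math.min(255, src[q + 1] + hdr[j]);
    src[q + 2] = Math.min(255, src[q + 2] + hdr[j]);
  }
}

function boxBlur1D(input, output, width, height, radius, horizontal) {
  for (var y = 0; y < height; y++) {
    for (var x = 0; x < width; x++) {
      var sum = 0, count = 0;
      for (var k = -radius; k <= radius; k++) {
        var xx = horizontal ? x + k : x,
            yy = horizontal ? y : y + k;
        if (xx < 0 || xx >= width || yy < 0 || yy >= height) { continue; }
        sum += input[yy * width + xx];
        count++;
      }
      output[y * width + x] = sum / count;
    }
  }
}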

The simple summary of tskuzzy's answer is: use a floating-point buffer to store the pre-bloom image, and either convolve into a second floating-point buffer (then saturate the pixels back into integer format) or saturate on the fly, converting each output pixel back to integer before storing it directly in an integer output buffer.
The Airy convolution must be done with headroom (i.e. in either fixed-point or floating-point; these days the former usually isn't worth the hassle, given how common fast FPUs are) so that brighter spots in the image bleed correspondingly more over their neighbouring areas.
Note: on-the-fly saturation for colour isn't as simple as individually clipping the channels - if you do that, you might end up with hue distortion and contours around the spots that clip.
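One simple way to do that (a sketch of my own, not something prescribed here) is to rescale the whole pixel by its largest channel, so the channel ratios, and with them the hue, are preserved:

// clamp an HDR pixel whose channels may exceed 255, without shifting its hue
function saturatePreservingHue(r, g, b) {
  var m = Math.max(r, g, b);
  if (m > 255) {              // scale all three channels by the same factor
    var s = 255 / m;
    r *= s; g *= s; b *= s;
  }
  return [Math.round(r), Math.round(g), Math.round(b)];
}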

Related

Organizing felt tip pens: optimizing the arrangement of items in a 2D grid by similarity of adjacent items, using JS [updated]

UPD: the question has been updated with specifics and code, see below.
Warning: This question is about optimizing an arrangement of items in a matrix. It is not about comparing colors. Initially, I have decided that providing context about my problem would help. I now regret this decision because the result was the opposite: too much irrelevant talk about colors and almost nothing about actual algorithms. 😔
I've got a box of 80 felt tip pens for my kid, and it annoys me so much that they are not sorted.
I used to play a game called Blendoku on Android where you need to do just that: arrange colors in such a way that they form gradients, with nearby colors being the most similar:
It is easy and fun to organize colors in intersecting lines like a crossword. But with these sketch markers, I've got a full-fledged 2D grid. What makes it even worse, colors are not extracted from a uniform gradient.
This makes me unable to sort felt tip pens by intuition. I need to do it algorithmically!
Here's what I've got:
Solid knowledge of JavaScript
A flat array of color values of all pens
A function distance(color1, color2) that shows how similar a color pair is. It returns a float between 0 and 100 where 0 means that colors are identical.
All I'm lacking is an algorithm.
The factorial of 80 is a number with 119 digits, which rules out brute forcing.
There might be ways to make brute forcing feasible:
fix the position of a few pens (e.g. in corners) to reduce the number of possible combinations;
drop branches that contain at least one pair of very dissimilar neighbours;
stop after finding first satisfactory arrangement.
But I'm still lacking an actual algorithm even for that, not to mention a non-brute-forcey one.
PS Homework:
Sorting a matrix by similarity -- no answers.
Algorithm for optimal 2D color palette arrangement -- very similar question, no answers.
How to sort colors in two dimensions? -- more than 50% of cells already contain correctly organized colors; unfamiliar programming language; the actual sorting solution is not explained.
Sort Colour / Color Values -- single flat array.
Update
Goal
Arrange a predefined set of 80 colors in an 8×10 grid in such a way that the colors form nice gradients without tearing.
For reasons described below, there is no definitive solution to this question; possible solutions are prone to imperfect results and subjectivity. This is expected.
Note that I already have a function that compares two colors and tells how similar they are.
Color space is 3D
The human eye has three types of receptors to distinguish colors, so human color space is three-dimensional (trichromatic).
There are different models for describing colors, and they are all three-dimensional: RGB, HSL, HSV, XYZ, LAB, CMY (note that the "K" in CMYK is only required because colored ink is not fully opaque and is expensive).
For example, this palette:
...uses polar coordinates with hue on the angle and saturation on the radius. Without the third dimension (lightness), this palette is missing all the bright and dark colors: white, black, all the greys (except 50% grey in the center), and tinted greys.
This palette is only a thin slice of the HSL/HSV color space:
It is impossible to lay out all colors on a 2D grid in a gradient without tearing in the gradient.
For example, here are all the 32-bit RGB colors, enumerated in lexicographic order into a 2D grid. You can see that the gradient has a lot of tearing:
Thus, my goal is to find an arbitrary, "good enough" arrangement where neighbors are more or less similar. I'd rather sacrifice a bit of similarity than have a few very similar clusters with tearing between them.
This question is about optimizing the grid in JavaScript, not about comparing colors!
I have already picked a function to determine the similarity of colors: Delta E 2000. This function is specifically designed to reflect the subjective human perception of color similarity. Here is a whitepaper describing how it works.
This question is about optimizing the arrangement of items in a 2D grid in such a way that the dissimilarity (the distance) between each pair of adjacent items (vertical and horizontal) is as low as possible.
The word "optimizing" is used not in a sense of making an algorithm run faster. It is in a sense of Mathematical optimization:
In the simplest case, an optimization problem consists of maximizing or minimizing a real function by systematically choosing input values from within an allowed set and computing the value of the function.
In my case:
"The function" here means running the DeltaE.getDeltaE00(color1, color2) function for all adjacent items, the output is a bunch of numbers (142 of them... I think) reflecting how dissimilar all the adjacent pairs are.
"Maximizing or minimizing" — the goal is to minimize the output of "the function".
"An input value" — is a specific arrangement of 80 predefined items in the 8×10 grid. There are a total of 80! input values, which makes the task impossible to brute force on a home computer.
Note that I don't have a clear definition for the minimization criteria of "the function". If we simply use the smallest sum of all numbers, then the winning result might be a case where the sum is the lowest, but a few adjacent item pairs are very dissimilar.
Thus, "the function" should maybe take into account not only the sum of all comparisons, but also ensure that no comparisons are way off.
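To make that concrete, one possible formulation (just a strawman, and only one of many) would be to sum DeltaE over all 142 adjacent pairs and mix in a penalty for the single worst pair:

// grid: flat, row-major array of Color instances; cols = 8, rows = 10.
// deltaE(a, b) stands in for DeltaE.getDeltaE00(a.labObj, b.labObj).
function gridCost(grid, cols, rows, deltaE, outlierWeight) {
  let sum = 0, worst = 0;
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      const i = y * cols + x;
      if (x + 1 < cols) {                      // horizontal neighbour
        const d = deltaE(grid[i], grid[i + 1]);
        sum += d; worst = Math.max(worst, d);
      }
      if (y + 1 < rows) {                      // vertical neighbour
        const d = deltaE(grid[i], grid[i + cols]);
        sum += d; worst = Math.max(worst, d);
      }
    }
  }
  return sum + outlierWeight * worst;          // 142 pair terms for an 8×10 grid
}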
Possible paths for solving the issue
From my previous bounty attempt on this question, I've learned the following paths:
genetic algorithm
optimizer/solver library
manual sorting with some algorithmic help
something else?
The optimizer/solver library solution is what I initially was hoping for. But the mature libraries such as CPLEX and Gurobi are not in JS. There are some JS libraries but they are not well documented and have no newbie tutorials.
The genetic algorithm approach is very exciting. But it requires conceiving algorithms for mutating and mating specimens (grid arrangements). Mutating seems trivial: simply swap adjacent items. But I have no idea about mating. And I have little understanding of the whole thing in general.
Manual sorting suggestions seem promising at first glance, but fall short when looked into in depth. They also assume using algorithms to solve certain steps without providing the actual algorithms.
Code boilerplate and color samples
I have prepared a code boilerplate in JS: https://codepen.io/lolmaus/pen/oNxGmqz?editors=0010
Note: the code takes a while to run. To make working with it easier, do the following:
Login/sign up for CodePen in order to be able to fork the boilerplate.
Fork the boilerplate.
Go to Settings/Behavior and make sure automatic update is disabled.
Resize panes to maximize the JS pane and minimize other panes.
Go to Change view/Debug mode to open the result in a separate tab. This enables console.log(). Also, if code execution freezes, you can kill the render tab without losing access to the coding tab.
After making changes to code, hit save in the code tab, then refresh the render tab and wait.
In order to include JS libraries, go to Settings/JS. I use this CDN to link to code from GitHub: https://www.jsdelivr.com/?docs=gh
Source data:
const data = [
{index: 1, id: "1", name: "Wine Red", rgb: "#A35A6E"},
{index: 2, id: "3", name: "Rose Red", rgb: "#F3595F"},
{index: 3, id: "4", name: "Vivid Red", rgb: "#F4565F"},
// ...
];
Index is one-based numbering of colors, in the order they appear in the box, when sorted by id. It is unused in code.
Id is the number of the color from the pen manufacturer. Since some numbers are in the form of WG3, ids are strings.
Color class.
This class provides some abstractions to work with individual colors. It makes it easy to compare a given color with another color.
class Color {
index;
id;
name;
rgbStr;
collection;
constructor({index, id, name, rgb}, collection) {
this.index = index;
this.id = id;
this.name = name;
this.rgbStr = rgb;
this.collection = collection;
}
// Representation of the RGB color string in a format consumable by the `rgb2lab` function
#memoized
get rgbArr() {
return [
parseInt(this.rgbStr.slice(1,3), 16),
parseInt(this.rgbStr.slice(3,5), 16),
parseInt(this.rgbStr.slice(5,7), 16)
];
}
// LAB value of the color in a format consumable by the DeltaE function
#memoized
get labObj() {
const [L, A, B] = rgb2lab(this.rgbArr);
return {L, A, B};
}
// object where distances from current color to all other colors are calculated
// {id: {distance, color}}
#memoized
get distancesObj() {
return this.collection.colors.reduce((result, color) => {
if (color !== this) {
result[color.id] = {
distance: this.compare(color),
color,
};
}
return result;
}, {});
}
// array of distances from current color to all other colors
// [{distance, color}]
#memoized
get distancesArr() {
return Object.values(this.distancesObj);
}
// Number representing the sum of distances from this color to all other colors
#memoized
get totalDistance() {
return this.distancesArr.reduce((result, {distance}) => {
return result + distance;
}, 0);
}
// Accepts another color instance. Returns a number indicating the distance between the two colors.
// Lower number means more similarity.
compare(color) {
return DeltaE.getDeltaE00(this.labObj, color.labObj);
}
}
Collection: a class to store all the colors and sort them.
class Collection {
// Source data goes here. Do not mutate after setting in the constructor!
data;
constructor(data) {
this.data = data;
}
// Instantiates all colors
#memoized
get colors() {
const colors = [];
this.data.forEach((datum) => {
const color = new Color(datum, this);
colors.push(color);
});
return colors;
}
// Copy of the colors array, sorted by total distance
#memoized
get colorsSortedByTotalDistance() {
return this.colors.slice().sort((a, b) => a.totalDistance - b.totalDistance);
}
// Copy of the colors array, arranged by similarity of adjacent items
#memoized
get colorsLinear() {
// Create a copy of the colors array to manipulate
const colors = this.colors.slice();
// Pick starting color
const startingColor = colors.find((color) => color.id === "138");
// Remove starting color
const startingColorIndex = colors.indexOf(startingColor);
colors.splice(startingColorIndex, 1);
// Start populating ordered array
const result = [startingColor];
let i = 0;
while (colors.length) {
if (i++ >= 81) throw new Error('Too many iterations');
const color = result[result.length - 1];
colors.sort((a, b) => a.distancesObj[color.id].distance - b.distancesObj[color.id].distance);
const nextColor = colors.shift();
result.push(nextColor);
}
return result;
}
// Accepts name of a property containing a flat array of colors.
// Renders those colors into HTML. CSS makes color wrap into 8 rows, with 10 colors in every row.
render(propertyName) {
const html =
this[propertyName]
.map((color) => {
return `
<div
class="color"
style="--color: ${color.rgbStr};"
title="${color.name}\n${color.rgbStr}"
>
<span class="color-name">
${color.id}
</span>
</div>
`;
})
.join("\n\n");
document.querySelector('#box').innerHTML = html;
document.querySelector('#title').innerHTML = propertyName;
}
}
Usage:
const collection = new Collection(data);
console.log(collection);
collection.render("colorsLinear"); // Implement your own getter on Collection and use its name here
Sample output:
I managed to find a solution with objective value 1861.54 by stapling a couple ideas together.
Form unordered color clusters of size 8 by finding a min-cost matching and joining matched subclusters, repeated three times. We use d(C1, C2) = Σ_{c1 ∈ C1} Σ_{c2 ∈ C2} d(c1, c2) as the distance function for subclusters C1 and C2.
Find the optimal 2 × 5 arrangement of clusters according to the above distance function. This involves brute forcing 10! permutations (really 10!/4 if one exploits symmetry, which I didn't bother with).
Considering each cluster separately, find the optimal 4 × 2 arrangement by brute forcing 8! permutations. (More symmetry breaking possible, I didn't bother.)
Brute force the 4^10 possible ways to flip the clusters. (Even more symmetry breaking possible, I didn't bother.)
Improve this arrangement with local search. I interleaved two kinds of rounds: a 2-opt round where each pair of positions is considered for a swap, and a large-neighborhood round where we choose a random maximal independent set and reassign optimally using the Hungarian method (this problem is easy when none of the things we're trying to move can be next to each other).
The output looks like this:
Python implementation at https://github.com/eisenstatdavid/felt-tip-pens
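The linked implementation is in Python; purely as an illustration of the 2-opt round described above, a JavaScript version could look like the sketch below, where cost() is assumed to be a full evaluation of the objective (e.g. the sum of adjacent-pair distances).

// One 2-opt sweep: try every pairwise swap, keep it only if the objective improves.
function twoOptRound(grid, cost) {
  let best = cost(grid);
  let improved = false;
  for (let i = 0; i < grid.length - 1; i++) {
    for (let j = i + 1; j < grid.length; j++) {
      let t = grid[i]; grid[i] = grid[j]; grid[j] = t;   // try the swap
      const c = cost(grid);
      if (c < best) {
        best = c;                                        // keep the improvement
        improved = true;
      } else {
        t = grid[i]; grid[i] = grid[j]; grid[j] = t;     // undo
      }
    }
  }
  return improved;                                       // caller repeats until this is false
}

In practice you would recompute only the handful of adjacent-pair terms a swap touches instead of calling cost() from scratch; that is what makes it cheap enough to interleave with the large-neighborhood rounds.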
The trick for this is to stop thinking about it as an array for a moment and anchor yourself to the corners.
First, you need to define what problem you are trying to solve. Normal colors have three dimensions: hue, saturation, and value (darkness), so you're not going to be able to consider all three dimensions on a two dimensional grid. However, you can get close.
If you want to arrange from white->black and red->purple, you can define your distance function to treat differences in darkness as distance, as well as differences in hue value (no warping!). This will give you a set, four-corner-compatible sorting for your colors.
Now, anchor each of your colors to the four corners, defining (0,0) as black, (1,1) as white, (0,1) as red (0 hue), and (1,0) as purple-red (350+ hue), like so (let's say purple-red is purple for simplicity):
Now, you have two metrics of extremes: darkness and hue. But wait... if we rotate the box by 45 degrees...
Do you see it? No? The X and Y axes have aligned with our two metrics! Now all we need to do is divide each color's distance from white with the distance of black from white, and each color's distance from purple with the distance of red from purple, and we get our Y and X coordinates, respectively!
Let's add us a few more pens:
Now iterate over all the pens in O(n²), finding the closest distance between any pen and a final pen position, distributed uniformly through the rotated grid. We can keep a mapping of these distances, replacing any distances if the respective pen position has been taken. This will allow us to stick pens into their closest positions in polynomial time, O(n³).
However, we're not done yet. HSV is 3 dimensional, and we can and should weigh the third dimension into our model too! To do this, we extend the previous algorithm by introducing a third dimension into our model before calculating closest distances. We put our 2d plane into a 3d space by intersecting it with the two color extremes and the horizontal line between white and black. This can be done simply by finding the midpoint of the two color extremes and nudging darkness slightly. Then, generate our pen slots fitted uniformly onto this plane. We can place our pens directly in this 3D space based off their HSV values - H being X, V being Y, and S being Z.
Now that we have the 3d representation of the pens with saturation included, we can once again iterate over the position of pens, finding the closest one for each in polynomial time.
There we go! Nicely sorted pens. If you want the result in an array, just generate the coordinates for each array index uniformly again and use those in order!
Now stop sorting pens and start making code!
As it was pointed out to you in some of the comments, you seem to be interested in finding one of the global minima of a discrete optimization problem. You might need to read up on that if you don't know much about it already.
Imagine that you have an error (objective) function that is simply the sum of distance(c1, c2) for all (c1, c2) pairs of adjacent pens. An optimal solution (arrangement of pens) is one whose error function is minimal. There might be multiple optimal solutions. Be aware that different error functions may give different solutions, and you might not be satisfied with the results provided by the simplistic error function I just introduced.
You could use an off-the-shelf optimizer (such as CPLEX or Gurobi) and just feed it a valid formulation of your problem. It might find an optimal solution. However, even if it does not, it may still provide a sub-optimal solution that is quite good for your eyes.
You could also write your own heuristic algorithm (such as a specialized genetic algorithm) and get a solution that is better than what the solver could find for you within the time and space limit it had. Given that your weapons seem to be input data, a function to measure color dissimilarity, and JavaScript, implementing a heuristic algorithm is probably the path that will feel most familiar to you.
My answer originally had no code with it because, as is the case with most real-world problems, there is no simple copy-and-paste solution for this question.
Doing this sort of computation using JavaScript is weird, and doing it on the browser is even weirder. However, because the author explicitly asked for it, here is a JavaScript implementation of a simple evolutionary algorithm hosted on CodePen.
Because the input is larger than the 5x5 grid I originally demonstrated this algorithm with, because of how many generations the algorithm runs for, and because of how slow code execution is, it takes a while to finish. I updated the mutation code to prevent mutations from causing the solution cost to be recomputed, but the iterations still take quite some time. The following solution took about 45 minutes to run in my browser through CodePen's debug mode.
Its objective function is slightly less than 2060 and was produced with the following parameters.
const SelectionSize = 100;
const MutationsFromSolution = 50;
const MutationCount = 5;
const MaximumGenerationsWithoutImprovement = 5;
It's worth pointing out that small tweaks to parameters can have a substantial impact on the algorithm's results. Increasing the number of mutations or the selection size will both increase the running time of the program significantly, but may also lead to better results. You can (and should) experiment with the parameters in order to find better solutions, but they will likely take even more compute time.
In many cases, the best improvements come from algorithmic changes rather than just more computing power, so clever ideas about how to perform mutations and recombinations will often be the way to get better solutions while still using a genetic algorithm.
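As one illustration (a common baseline for permutation encodings; the CodePen linked above may do it differently), a swap mutation and an order crossover could look like this:

// An arrangement is a permutation of colour indices (length 80).
function mutateSwap(perm) {
  const child = perm.slice();
  const a = Math.floor(Math.random() * child.length);
  const b = Math.floor(Math.random() * child.length);
  [child[a], child[b]] = [child[b], child[a]];   // swap two cells
  return child;
}

// Order crossover (OX): keep a random slice of parentA, then fill the remaining
// positions with parentB's genes in the order they appear in parentB.
function crossoverOX(parentA, parentB) {
  const n = parentA.length;
  const start = Math.floor(Math.random() * n);
  const end = start + Math.floor(Math.random() * (n - start));
  const child = new Array(n).fill(null);
  const used = new Set();
  for (let i = start; i <= end; i++) {
    child[i] = parentA[i];
    used.add(parentA[i]);
  }
  let j = 0;
  for (let k = 0; k < n; k++) {
    if (child[k] !== null) continue;
    while (used.has(parentB[j])) j++;
    child[k] = parentB[j];
    used.add(parentB[j]);
  }
  return child;
}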
Using an explicitly seeded and reproducible PRNG (rather than Math.random()) is great, as it will allow you to replay your program as many times as necessary for debugging and reproducibility proofs.
You might also want to set up a visualization for the algorithm (rather than just console.log(), as you hinted to) so that you can see its progress and not just its final result.
Additionally, allowing for human interaction (so that you can propose mutations to the algorithm and guide the search with your own perception of color similarity) may also help you to get the results you want. This will lead you to an Interactive Genetic Algorithm (IGA). The article J. C. Quiroz, S. J. Louis, A. Shankar and S. M. Dascalu, "Interactive Genetic Algorithms for User Interface Design," 2007 IEEE Congress on Evolutionary Computation, Singapore, 2007, pp. 1366-1373, doi: 10.1109/CEC.2007.4424630. is a good example of such approach.
If you could define a total ordering function between two colors that tells you which one is the 'darker' color, you can sort the array of colors using this total ordering function from dark to light (or light to dark).
You start at the top left with the first color in the sorted array, keep going diagonally across the grid and fill the grid with the subsequent elements. You will get a gradient filled rectangular grid where adjacent colors would be similar.
Do you think that would meet your objective?
You can change the look by changing the behavior of the total ordering function. For example, if the colors are arranged by similarity using a color map as shown below, you can define the total ordering as a traversal of the map from one cell to the next. By changing which cell gets picked next in the traversal, you can get different color-similar gradient grid fills.
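A sketch of the diagonal fill (assuming sortedColors is already ordered by the chosen total ordering function and the grid is cols × rows, stored row-major):

// Walk the anti-diagonals from the top-left corner to the bottom-right corner,
// placing the sorted colors one after another.
function fillDiagonally(sortedColors, cols, rows) {
  const grid = new Array(cols * rows);
  let k = 0;
  for (let d = 0; d <= cols + rows - 2; d++) {   // d = x + y identifies a diagonal
    for (let y = 0; y < rows; y++) {
      const x = d - y;
      if (x < 0 || x >= cols) continue;
      grid[y * cols + x] = sortedColors[k++];
    }
  }
  return grid;   // ready to be rendered row by row
}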
I think there might be a simple approximate solution to this problem based on placing each color where it is approximately the average of the surrounding colors. Something like:
C[j] ≈ (1/8) * Σ_{i=1..8} C[i]
This is the discrete Laplace operator, i.e., solving this equation is equivalent to defining a discrete harmonic function over the color vector space. Harmonic functions have the mean-value property, which states that the average value of the function in a neighborhood is equal to its value at the center.
In order to find a particular solution we need to set up boundary conditions, i.e., we must fix at least two colors in the grid. In our case it looks convenient to pick 4 extreme colors and fix them to the corners of the grid.
One simple way to solve Laplace's equation is the relaxation method (this amounts to solving a linear system of equations). The relaxation method is an iterative algorithm that solves one linear equation at a time. Of course, in this case we cannot use a relaxation method (e.g., Gauss-Seidel) directly, because this is really a combinatorial problem more than a numerical problem. But we can still try to use relaxation to solve it.
The idea is the following. Start by fixing the 4 corner colors (we will discuss those colors later) and fill the grid with the bilinear interpolation of those colors. Then pick a random color C_j and compute the corresponding Laplacian color L_j, i.e., the average color of the surrounding neighbors. Find the color closest to L_j from the set of input colors. If that color is different from C_j, replace C_j with it. Repeat the process until all colors C_j have been searched and no color replacements are needed (the convergence criterion).
The function that finds the closest color from the input set must obey some rules in order to avoid trivial solutions (like having the same color in all neighbors and thus also in the center).
First, the color to find must be the closest to L_j in terms of the Euclidean metric. Second, that color cannot be the same as any neighbor color, i.e., exclude the neighbors from the search. You can see this match as a projection operator onto the input set of colors.
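In pseudo-JavaScript, one relaxation pass could then look like this (untested; grid cells and palette entries are assumed to carry a .lab value, and isFixed() marks the corner cells):

function relaxOnce(grid, palette, cols, rows, labDistance, isFixed) {
  const offsets = [[1,0],[-1,0],[0,1],[0,-1],[1,1],[1,-1],[-1,1],[-1,-1]];
  let changed = false;
  for (let i = 0; i < grid.length; i++) {
    if (isFixed(i)) continue;                    // keep the boundary colors in place
    const x = i % cols, y = Math.floor(i / cols);
    const sum = [0, 0, 0], neighbours = [];
    offsets.forEach(([dx, dy]) => {
      const nx = x + dx, ny = y + dy;
      if (nx < 0 || nx >= cols || ny < 0 || ny >= rows) return;
      const c = grid[ny * cols + nx];
      neighbours.push(c);
      sum[0] += c.lab[0]; sum[1] += c.lab[1]; sum[2] += c.lab[2];
    });
    const target = sum.map((v) => v / neighbours.length);   // the "Laplacian color" L_j
    let best = null, bestD = Infinity;
    palette.forEach((c) => {
      if (neighbours.indexOf(c) !== -1) return;  // rule: never copy a neighbor
      const d = labDistance(c.lab, target);
      if (d < bestD) { bestD = d; best = c; }
    });
    if (best && best !== grid[i]) {
      grid[i] = best;
      changed = true;
    }
  }
  return changed;   // a full pass with no replacements is the convergence criterion
}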
It is expected that convergence won't be reached in the strict sense, so limiting the number of iterations to a large number is acceptable (like 10 times the number of cells in the grid). Since colors C_j are picked randomly, there might be colors in the input that were never placed in the grid (which corresponds to discontinuities in the harmonic function). Also, there might be colors in the grid which are not from the input (i.e., colors from the initial interpolation guess), and there might be repeated colors in the grid as well (if the function is not a bijection).
Those cases must be addressed as special cases (as they are singularities). So we must replace colors from the initial guess, and repeated colors, with input colors that were not placed in the grid. That is a search sub-problem for which I don't have a clear heuristic to follow beyond using the distance function to guess the replacements.
Now, how to pick the first 2 or 4 corner colors. One possible way is to pick the most distinct colors based on the Euclidean metric. If you treat colors as points in a vector space then you can perform regular PCA (Principal Component Analysis) on the point cloud. That amounts to computing the eigenvectors and corresponding eigenvalues of the covariance matrix. The eigenvector corresponding to the largest eigenvalue is a unit vector that points in the direction of greatest color variance. The other two eigenvectors point in the second and third directions of greatest color variance, in that order. The eigenvectors are orthogonal to each other, and the eigenvalues are like the "lengths" of those vectors in a sense. Those vectors and lengths can be used to determine an ellipsoid (an egg-shaped surface) that approximately surrounds the point cloud (ignoring outliers). So we can pick 4 colors at the extrema of that ellipsoid as the boundary conditions of the harmonic function.
I haven't tested the approach, but my intuition is that it should give you a good approximate solution if the input colors vary smoothly (i.e., the colors correspond to a smooth surface in color vector space); otherwise the solution will have "singularities", meaning some colors will jump abruptly from their neighbors.
EDIT:
I have (partially) implemented my approach; the visual comparison is in the image below. My handling of singularities is quite bad, as you can see from the jumps and outliers. I haven't used your JS plumbing (my code is in C++); if you find the result useful I will try to rewrite it in JS.
I would define a concept of color regions, that is, a group of colors where distance(P1, P2) <= tolerance. In the middle of such a region you would find the point which is closest to all others by average.
Now, you start with a presumably unordered grid of colors. The first thing my algorithm would do is identify items which fit together as color regions. By definition each region fits well together internally, so we arrive at the second problem: inter-region compatibility. Because a region is arranged around its middle color, its edges will be "sharp", that is, varied. So region1 and region2 might be much more compatible when joined along one side than along another. We therefore need to identify along which sides the regions should be glued together, and if for some reason "connecting" those sides is impossible (for example, region1 should be "above" region2, but the boundaries and the planned positions of other regions do not allow it), then one could "rotate" one or both of the regions.
The third step is to check the boundaries between regions after the necessary rotations have been made. Some repositioning of the items on the boundaries might still be needed.

canvas - change perspective of the camera in a 2d setup

TLDR:
I need to change the perspective over an object in a 2D canvas.
Let's say I take a photo of my desk while I sit next to it. I want to change that image to look as if I were seeing it from above. Basically the view would change like in the image below (yes, my paint skills are awesome :)) ).
I am using canvas and easeljs (createjs).
I am totally aware that this is not a 3D object, and that I don't have a stage and a camera. I am also aware that easeljs doesn't support this as a feature.
What I am actually trying to do is emulate this (I am also not interested in the quality of the result at this point).
What I tried, and it worked (I noticed I am not the first to try this and I didn't actually find a real answer, so this is my current approach):
I take the original image and divide it into a high number of "mini-images" (the higher the better), as in the image below.
Then I scale each of the "mini-images" on the x axis with a different factor (which I compute depending on the angle the picture was taken at). Part of the relevant code is below (excuse the hardcodings and creepy code; it's from a proof of concept I made in like 10 minutes):
for (var i = 0; i < 400; i++) {
crops[i] = new createjs.Bitmap(baseImage);
crops[i].sourceRect = new createjs.Rectangle(0, i, 700, i + 1);
crops[i].regX = 350;
crops[i].y = i - 1;
crops[i].x = 100;
crops[i].scaleX = (400 - i / 2) * angleFactor;
stage.addChild(crops[i]);
}
Then you crop again only the relevant part.
Ok... so this works, but the performance is... terrible. You basically generate 400 images (in this case) and then you put them on a canvas. Yes, I know it can be optimized a bit, but it is still pretty bad. So I was wondering if you have any other (preferably better) approaches.
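For reference, the same strip idea can also be written against the raw 2D context, with one drawImage call per one-pixel-high source row and no Bitmap objects; this is only a sketch (the per-row falloff below is a placeholder, not the angleFactor formula above):

// img: a loaded Image; ctx: a CanvasRenderingContext2D; 0 < squeeze < 1.
function drawPseudoPerspective(ctx, img, squeeze) {
  var w = img.width, h = img.height;
  for (var row = 0; row < h; row++) {
    // rows nearer the "far" edge get squeezed more
    var scale = 1 - (row / h) * squeeze;
    var destWidth = w * scale;
    ctx.drawImage(
      img,
      0, row, w, 1,                              // source: a one-pixel-high strip
      (ctx.canvas.width - destWidth) / 2, row,   // destination: centred horizontally
      destWidth, 1
    );
  }
}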
I also tried combining a skewing transformation with a scale and a rotation, but I could not achieve the right result (I actually still think there may be something here... I just can't put my finger on it, and my "real" math is a bit rusty).

render a tile map using javascript

I'm looking for a logical understanding with sample implementation ideas on taking a tilemap such as this:
http://thorsummoner.github.io/old-html-tabletop-test/pallete/tilesets/fullmap/scbw_tiles.png
And rendering in a logical way such as this:
http://thorsummoner.github.io/old-html-tabletop-test/
I see all of the tiles are there, but I don't understand how they are placed in a way that forms shapes.
My understanding of rendering tiles so far is simple, and very manual: loop through the map array and, where there are numbers (1, 2, 3, whatever), render the specified tile.
var mapArray = [
[0, 0, 0, 0, 0],
[0, 1, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 0, 0, 0],
[0, 0, 1, 1, 0]
];
function drawMap() {
background = new createjs.Container();
for (var y = 0; y < mapArray.length; y++) {
for (var x = 0; x < mapArray[y].length; x++) {
if (parseInt(mapArray[y][x]) == 0) {
var tile = new createjs.Bitmap('images/tile.png');
}
if (parseInt(mapArray[y][x]) == 1) {
var tile = new createjs.Bitmap('images/tile2.png');
}
tile.x = x * 28;
tile.y = y * 28;
background.addChild(tile);
}
}
stage.addChild(background);
}
Gets me:
But this means I have to manually figure out where each tile goes in the array so that logical shapes are made (rock formations, grass patches, etc.).
Clearly, the guy who made the GitHub code above used a different method. Any guidance on understanding the logic (with simple pseudo code) would be very helpful.
There isn't any logic there.
If you inspect the page's source, you'll see that the last script tag, in the body, has a huge array of tile coordinates.
There is no magic in that example which demonstrates an "intelligent" system for figuring out how to form shapes.
Now, that said, there are such things... ...but they're not remotely simple.
What is more simple, and more manageable, is a map-editor.
Tile Editors
out of the box:
There are lots of ways of doing this... There are free or cheap programs which will allow you to paint tiles, and will then spit out XML or JSON or CSV or whatever the given program supports/exports.
Tiled ( http://mapeditor.org ) is one such example.
There are others, but Tiled is the first I could think of, is free, and is actually quite decent.
pros:
The immediate upside is that you get an app that lets you load image tiles, and paint them into maps.
These apps might even support adding collision-layers and entity-layers (put an enemy at [2,1], a power-up at [3,5] and a "hurt-player" trigger, over the lava).
cons:
...the downside is that you need to know exactly how these files are formatted, so that you can read them into your game engines.
Now, the outputs of these systems are relatively-standardized... so that you can plug that map data into different game engines (what's the point, otherwise?), and while game-engines don't all use tile files that are exactly the same, most good tile-editors allow for export into several formats (some will let you define your own format).
...so that said, the alternative (or really, the same solution, just hand-crafted), would be to create your own tile-editor.
DIY
You could create it in Canvas, just as easily as creating the engine to paint the tiles.
The key difference is that you have your map of tiles (like the tilemap .png from StarCr... erm... the "found-art" from the example, there).
Instead of looping through an array, finding the coordinates of the tile and painting them at the world-coordinates which match that index, what you would do is choose a tile from the map (like choosing a colour in MS Paint), and then wherever you click (or drag), figure out which array point that relates to, and set that index to be equal to that tile.
pros:
The sky is the limit; you can make whatever you want, make it fit any file-format you want to use, and make it handle any crazy stuff you want to throw at it...
cons:
...this of course, means you have to make it, yourself, and define the file-format you want to use, and write the logic to handle all of those zany ideas...
basic implementation
While I'd normally try to make this tidy, and JS-paradigm friendly, that would result in a LOT of code, here.
So I'll try to denote where it should probably be broken up into separate modules.
// assuming images are already loaded properly
// and have fired onload events, which you've listened for
// so that there are no surprises, when your engine tries to
// paint something that isn't there, yet
// this should all be wrapped in a module that deals with
// loading tile-maps, selecting the tile to "paint" with,
// and generating the data-format for the tile, for you to put into the array
// (or accepting plug-in data-formatters, to do so)
var selected_tile = null,
selected_tile_map = get_tile_map(), // this would be an image with your tiles
tile_width = 64, // in image-pixels, not canvas/screen-pixels
tile_height = 64, // in image-pixels, not canvas/screen-pixels
num_tiles_x = selected_tile_map.width / tile_width,
num_tiles_y = selected_tile_map.height / tile_height,
select_tile_num_from_map = function (map_px_X, map_px_Y) {
// there are *lots* of ways to do this, but keeping it simple
var tile_y = Math.floor(map_px_Y / tile_height), // 4 = floor(280/64)
tile_x = Math.floor(map_px_X / tile_width ),
tile_num = tile_y * num_tiles_x + tile_x;
// 23 = 4 down * 5 per row + 3 over
return tile_num;
};
// won't go into event-handling and coordinate-normalization
selected_tile_map.onclick = function (evt) {
// these are the coordinates of the click,
//as they relate to the actual image at full scale
map_x, map_y;
selected_tile = select_tile_num_from_map(map_x, map_y);
};
Now you have a simple system for figuring out which tile was clicked.
Again, there are lots of ways of building this, and you can make it more OO,
and make a proper "tile" data-structure, that you expect to read and use throughout your engine.
Right now, I'm just returning the zero-based number of the tile, reading left to right, top to bottom.
If there are 5 tiles per row, and someone picks the first tile of the second row, that's tile #5.
Then, for "painting", you just need to listen to a canvas click, figure out what the X and Y were,
figure out where in the world that is, and what array spot that's equal to.
From there, you just dump in the value of selected_tile, and that's about it.
// this might be one long array, like I did with the tile-map and the number of the tile
// or it might be an array of arrays: each inner-array would be a "row",
// and the outer array would keep track of how many rows down you are,
// from the top of the world
var world_map = [],
selected_coordinate = 0,
world_tile_width = 64, // these might be in *canvas* pixels, or "world" pixels
world_tile_height = 64, // this is so you can scale the size of tiles,
// or zoom in and out of the map, etc
world_width = 320,
world_height = 320,
num_world_tiles_x = world_width / world_tile_width,
num_world_tiles_y = world_height / world_tile_height,
get_map_coordinates_from_click = function (world_px_x, world_px_y) {
// divide by the tile size (not the tile count) to get column/row indices
var coord_x = Math.floor(world_px_x / world_tile_width),
coord_y = Math.floor(world_px_y / world_tile_height),
array_coord = coord_y * num_world_tiles_x + coord_x;
return array_coord;
},
set_map_tile = function (index, tile) {
world_map[index] = tile;
};
canvas.onclick = function (evt) {
// convert screen x/y to canvas, and canvas to world
world_px_x, world_px_y;
selected_coordinate = get_map_coordinates_from_click(world_px_x, world_px_y);
set_map_tile(selected_coordinate, selected_tile);
};
As you can see, the procedure for doing one is pretty much the same as the procedure for doing the other (because it is -- given an x and y in one coordinate-set, convert it to another scale/set).
The procedure for drawing the tiles, then, is nearly the exact opposite.
Given the world-index and tile-number, work in reverse to find the world-x/y and tilemap-x/y.
You can see that part in your example code, as well.
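For completeness, a minimal sketch of that reverse mapping, reusing the variable names from the snippets above (the nine-argument form of drawImage takes a source rectangle and a destination rectangle):

// given a world index and the tile number stored there, recover the
// destination x/y in the world and the source x/y in the tile-map image
var draw_world = function (ctx, tile_map_image, world_map) {
    for (var i = 0; i < world_map.length; i++) {
        var tile_num = world_map[i];
        if (tile_num === undefined) { continue; }   // nothing painted here yet
        var world_x = (i % num_world_tiles_x) * world_tile_width,
            world_y = Math.floor(i / num_world_tiles_x) * world_tile_height,
            src_x = (tile_num % num_tiles_x) * tile_width,
            src_y = Math.floor(tile_num / num_tiles_x) * tile_height;
        ctx.drawImage(tile_map_image,
                      src_x, src_y, tile_width, tile_height,
                      world_x, world_y, world_tile_width, world_tile_height);
    }
};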
This tile-painting is the traditional way of making 2d maps, whether we're talking about StarCraft, Zelda, or Mario Bros.
Not all of them had the luxury of having a "paint with tiles" editor (some were by hand in text-files, or even spreadsheets, to get the spacing right), but if you load up StarCraft or even WarCraft III (which is 3D), and go into their editors, a tile-painter is exactly what you get, and is exactly how Blizzard made those maps.
additions
With the basic premise out of the way, you now have other "maps" which are also required:
you'd need a collision-map to know which of those tiles you could/couldn't walk on, an entity-map, to show where there are doors, or power-ups or minerals, or enemy-spawns, or event-triggers for cutscenes...
Not all of these need to operate in the same coordinate-space as the world map, but it might help.
Also, you might want a more intelligent "world".
The ability to use multiple tile-maps in one level, for instance...
And a drop-down in a tile-editor to swap tile-maps.
...a way to save out both tile-information (not just X/Y, but also other info about a tile), and to save out the finished "map" array, filled with tiles.
Even just copying JSON, and pasting it into its own file...
Procedural Generation
The other way of doing this, the way you suggested earlier ("knowing how to connect rocks, grass, etc") is called Procedural Generation.
This is a LOT harder and a LOT more involved.
Games like Diablo use this, so that you're in a different randomly-generated environment, every time you play. Warframe is an FPS which uses procedural generation to do the same thing.
premise:
Basically, you start with tiles, and instead of just a tile being an image, a tile has to be an object that has an image and a position, but ALSO has a list of things that are likely to be around it.
When you put down a patch of grass, that grass will then have a likelihood of generating more grass beside it.
The grass might say that there's a 10% chance of water, a 20% chance of rocks, a 30% chance of dirt, and a 40% chance of more grass, in any of the four directions around it.
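As a toy illustration, using the example weights from the sentence above (the tile names are placeholders):

// weighted pick of the next tile type, e.g. for the cell to the right of a grass tile
var grass_neighbour_weights = { water: 0.10, rocks: 0.20, dirt: 0.30, grass: 0.40 };

function pick_weighted(weights) {
    var r = Math.random(), acc = 0;
    for (var key in weights) {
        acc += weights[key];
        if (r < acc) { return key; }
    }
    return null;   // only reached if the weights don't sum to 1
}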
Of course, it's really not that simple (or it could be, if you're wrong).
While that's the idea, the tricky part of procedural generation is actually in making sure everything works without breaking.
constraints
You couldn't, for example, have the cliff wall in that example appear on the inside of the high ground. It can only appear where there's high ground above and to the right, and low ground below and to the left (and the StarCraft editor did this automatically, as you painted). Ramps can only connect tiles that make sense. You can't wall off doors, or wrap the world in a river/lake that prevents you from moving (or worse, prevents you from finishing a level).
pros
Really great for longevity, if you can get all of your pathfinding and constraints to work -- not only for pseudo-randomly generating the terrain and layout, but also enemy-placement, loot-placement, et cetera.
People are still playing Diablo II, nearly 14 years later.
cons
Really difficult to get right, when you're a one-man team (who doesn't happen to be a mathematician/data-scientist in their spare time).
Really bad for guaranteeing that maps are fun/balanced/competitive...
StarCraft could never have used 100% random-generation for fair gameplay.
Procedural-generation can be used as a "seed".
You can hit the "randomize" button, see what you get, and then tweak and fix from there, but there'll be so much fixing for "balance", or so many game-rules written to constrain the propagation, that you'll end up spending more time fixing the generator than just painting a map, yourself.
There are some tutorials out there, and learning genetic-algorithms, pathfinding, et cetera, are all great skills to have... ...buuuut, for purposes of learning to make 2D top-down tile-games, are way-overkill, and rather, are something to look into after you get a game/engine or two under your belt.

JavaScript "pixel"-perfect collision detection for rotating sprites using math (probably linear algebra)

I'm making a 2D game in JavaScript. For it, I need to be able to "perfectly" check collision between two sprites which have x/y positions (corresponding to their centre), a rotation in radians, and of course known width/height.
After spending many weeks of work (yeah, I'm not even exaggerating), I finally came up with a working solution, which unfortunately turned out to be about 10,000x too slow and impossible to optimize in any meaningful manner. I have entirely abandoned the idea of actually drawing and reading pixels from a canvas. That's just not going to cut it, but please don't make me explain in detail why. This needs to be done with math and an "imaginated" 2D world/grid, and from talking to numerous people, the basic idea became obvious. However, the practical implementation is not. Here's what I do and want to do:
What I already have done
At the beginning of the program, each sprite's pixels are looked through in the sprite's default upright position, and a 1-dimensional array is filled with data corresponding to the alpha channel of the image: solid pixels are represented by a 1, and transparent ones by a 0. See figure 3.
The idea behind that is that those 1s and 0s no longer represent "pixels", but "little math orbs positioned in perfect distances to each other", which can be rotated without "losing" or "adding" data, as happens with pixels if you rotate images in anything but 90 degrees at a time.
I naturally do the quick "bounding box" check first to see if I should bother calculating accurately. This is done. The problem is the fine/"for-sure" check...
What I cannot figure out
Now that I need to figure out whether the sprites collide for sure, I need to construct a math expression of some sort using "linear algebra" (which I do not know) to determine if these "rectangles of data points", positioned and rotated correctly, both have a "1" in an overlapping position.
Although the theory is very simple, the practical code needed to accomplish this is simply beyond my capabilities. I've stared at the code for many hours, asking numerous people (and had massive problems explaining my problem clearly) and really put in an effort. Now I finally want to give up. I would very, very much appreciate getting this done with. I can't even give up and "cheat" by using a library, because nothing I find even comes close to solving this problem from what I can tell. They are all impossible for me to understand, and seem to have entirely different assumptions/requirements in mind. Whatever I'm doing always seems to be some special case. It's annoying.
This is the pseudo code for the relevant part of the program:
function doThisAtTheStartOfTheProgram()
{
makeQuickVectorFromImageAlpha(sprite1);
makeQuickVectorFromImageAlpha(sprite2);
}
function detectCollision(sprite1, sprite2)
{
// This easy, outer check works. Please ignore it as it is unrelated to the problem.
if (bounding_box_match)
{
/*
This part is the entire problem.
I must do a math-based check to see if they really collide.
These are the relevant variables as I have named them:
sprite1.x
sprite1.y
sprite1.rotation // in radians
sprite1.width
sprite1.height
sprite1.diagonal // might not be needed, but is provided
sprite2.x
sprite2.y
sprite2.rotation // in radians
sprite2.width
sprite2.height
sprite2.diagonal // might not be needed, but is provided
sprite1.vectorForCollisionDetection
sprite2.vectorForCollisionDetection
Can you please help me construct the math expression, or the series of math expressions, needed to do this check?
To clarify, using the variables above, I need to check if the two sprites (which can rotate around their centre and have any position and any dimensions) are colliding. A collision happens when at least one "unit" (an imagined sphere) of BOTH sprites is on the same unit in our imagined 2D world (starting from 0,0 in the top-left).
*/
if (accurate_check_goes_here)
return true;
}
return false;
}
In other words, "accurate_check_goes_here" is what I wonder what it should be. It doesn't need to be a single expression, of course, and I would very much prefer seeing it done in "steps" (with comments!) so that I have a chance of understanding it, but please don't see this as "spoon feeding". I fully admit I suck at math and this is beyond my capabilities. It's just a fact. I want to move on and work on the stuff I can actually solve on my own.
To clarify: the 1D arrays are 1D and not 2D due to performance. As it turns out, speed matters very much in JS World.
Although this is a non-profit project, entirely made for private satisfaction, I just don't have the time and energy to order and sit down with some math book and learn about that from the ground up. I take no pride in lacking the math skills which would help me a lot, but at this point, I need to get this game done or I'll go crazy. This particular problem has prevented me from getting any other work done for far too long.
I hope I have explained the problem well. However, one of the most frustrating feelings is when people send well-meaning replies that unfortunately show that the person helping has not read the question. I'm not pre-insulting you all -- I just wish that won't happen this time! Sorry if my description is poor. I really tried my best to be perfectly clear.
Okay, so I need "reputation" to be able to post the illustrations I spent time to create to illustrate my problem. So instead I link to them:
Illustrations
(censored by Stackoverflow)
(censored by Stackoverflow)
OK. This site won't let me even link to the images. Only one. Then I'll pick the most important one, but it would've helped a lot if I could link to the others...
First you need to understand that detecting such collisions cannot be done with a single/simple equation, because the shapes of the sprites matter and these are described by an array of Width x Height = Area bits. So the worst-case complexity of the algorithm must be at least O(Area).
Here is how I would do it:
Represent the sprites in two ways:
1) a bitmap indicating where pixels are opaque,
2) a list of the coordinates of the opaque pixels. [Optional, for speedup, in case of hollow sprites.]
Choose the sprite with the shortest pixel list. Find the rigid transform (translation + rotation) that transforms the local coordinates of this sprite into the local coordinates of the other sprite (this is where linear algebra comes into play - the rotation is the difference of the angles, the translation is the vector between upper-left corners - see http://planning.cs.uiuc.edu/node99.html).
Now scan the opaque pixel list, transforming the local coordinates of the pixels to the local coordinates of the other sprite. Check if you fall on an opaque pixel by looking up the bitmap representation.
This takes at worst O(Opaque Area) coordinate transforms + pixel tests, which is optimal.
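As an illustration of steps 2 and 3 (my own sketch, not a drop-in for the variables in the question; it assumes the opaque pixel list is stored relative to each sprite's centre, since the question's sprites rotate about their centres, rather than relative to the upper-left corner mentioned in the link):

// spriteA: { x, y, rotation, opaquePixels: [[px, py], ...] }   (centre-relative local coords)
// spriteB: { x, y, rotation, width, height, bitmap }           (bitmap[y * width + x] is 0 or 1)
function pixelCollision(spriteA, spriteB) {
  var dx = spriteA.x - spriteB.x,
      dy = spriteA.y - spriteB.y,
      cosA = Math.cos(spriteA.rotation), sinA = Math.sin(spriteA.rotation),
      cosB = Math.cos(-spriteB.rotation), sinB = Math.sin(-spriteB.rotation);
  for (var i = 0; i < spriteA.opaquePixels.length; i++) {
    var px = spriteA.opaquePixels[i][0],
        py = spriteA.opaquePixels[i][1];
    // A local -> world: rotate by A's angle, then translate by the centre offset
    var wx = cosA * px - sinA * py + dx,
        wy = sinA * px + cosA * py + dy;
    // world -> B local: undo B's rotation, then shift into B's array indexing
    var bx = Math.round(cosB * wx - sinB * wy + (spriteB.width  - 1) / 2),
        by = Math.round(sinB * wx + cosB * wy + (spriteB.height - 1) / 2);
    if (bx < 0 || bx >= spriteB.width || by < 0 || by >= spriteB.height) { continue; }
    if (spriteB.bitmap[by * spriteB.width + bx]) { return true; }   // opaque in both sprites
  }
  return false;
}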
If your sprites are zoomed in (big pixels), as a first approximation you can ignore the zooming. If you need more accuracy, you can think of sampling a few points per pixel. Exact computation will involve a square/square collision intersection algorithm (with rotation), which is more complex and costly. See http://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm.
Here is an exact solution that will work regardless the size of the pixels (zoomed or not).
Use both a bitmap representation (1 opacity bit per pixel) and a decomposition into squares or rectangles (rectangles are optional, just an optimization; single pixels are ok).
Process all rectangles of the (source) sprite in turn. By means of rotation/translation, map the rectangles to the coordinate space of the other sprite (target). You will obtain a rotated rectangle overlaid on a grid of pixels.
Now you will perform a filling of this rectangle with a scanline algorithm: first split the rectangle in three (two triangles and one parallelogram), using horizontal lines through the rectangle vertices. For the three shapes independently, find all horizontal between-pixel lines that cross them (this is simply done by looking at the ranges of Y values). For every such horizontal line, compute the two intersection points. Then find all pixel corners that fall between the two intersections (range of X values). For any pixel having a corner inside the rectangle, look up the corresponding bit in the (target) sprite bitmap.
Not too difficult to program, and no complicated data structures. The computational effort is roughly proportional to the number of target pixels covered by every source rectangle.
Although you have already stated that you don't feel rendering to the canvas and checking that data is a viable solution, I'd like to present an idea which may or may not have already occurred to you and which ought to be reasonably efficient.
This solution relies on the fact that rendering any pixel to the canvas with half-opacity twice will result in a pixel of full opacity. The steps follow:
Size the test canvas so that both sprites will fit on it (this will also clear the canvas, so you don't have to create a new element each time you need to test for collision).
Transform the sprite data such that any pixel that has any opacity or color is set to be black at 50% opacity.
Render the sprites at the appropriate distance and relative position to one another.
Loop through the resulting canvas data. If any pixels have an opacity of 100%, then a collision has been detected. Return true.
Else, return false.
Wash, rinse, repeat.
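A rough sketch of the idea, with two liberties of my own: additive compositing ('lighter'), so that two 50%-alpha draws really do sum to full alpha, and a threshold rather than an exact 100% test to be safe around anti-aliased edges. Positioning/rotation of each sprite (save/translate/rotate around each drawImage) is omitted for brevity.

// testCtx: a scratch canvas 2D context big enough for both sprites;
// spriteACanvas / spriteBCanvas: offscreen canvases holding solid-alpha silhouettes.
function canvasOverlapTest(testCtx, spriteACanvas, spriteBCanvas, ax, ay, bx, by) {
  var w = testCtx.canvas.width, h = testCtx.canvas.height;
  testCtx.clearRect(0, 0, w, h);
  testCtx.globalCompositeOperation = 'lighter';   // alpha adds: 0.5 + 0.5 = 1.0
  testCtx.globalAlpha = 0.5;
  testCtx.drawImage(spriteACanvas, ax, ay);
  testCtx.drawImage(spriteBCanvas, bx, by);
  testCtx.globalAlpha = 1.0;
  testCtx.globalCompositeOperation = 'source-over';
  var data = testCtx.getImageData(0, 0, w, h).data;
  for (var i = 3; i < data.length; i += 4) {      // walk the alpha channel only
    if (data[i] > 200) { return true; }           // only pixels covered by BOTH sprites get here
  }
  return false;
}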
This method should run reasonably fast. Now, for optimization--the bottleneck here will likely be the final opacity check (although rendering the images to the canvas could be slow, as might be clearing/resizing it):
reduce the resolution of the opacity detection in the final step, by changing the increment in your loop through the pixels of the final data.
Loop from middle up and down, rather than from the top to bottom (and return as soon as you find any single collision). This way you have a higher chance of encountering any collisions earlier in the loop, thus reducing its length.
I don't know what your limitations are and why you can't render to canvas, since you have declined to comment on that, but hopefully this method will be of some use to you. If it isn't, perhaps it might come in handy to future users.
Please see if the following idea works for you. Here I create a linear array of points corresponding to pixels set in each of the two sprites. I then rotate/translate these points, to give me two sets of coordinates for individual pixels. Finally, I check the pixels against each other to see if any pair are within a distance of 1 - which is "collision".
You can obviously add some segmentation of your sprite (only test "boundary pixels"), test for bounding boxes, and do other things to speed this up - but it's actually pretty fast (once you take all the console.log() statements out that are just there to confirm things are behaving…). Note that I test for dx - if that is too large, there is no need to compute the entire distance. Also, I don't need the square root for knowing whether the distance is less than 1.
I am not sure whether the use of new Array() inside the pixLocs function will cause a problem with memory leaks. Something to look at if you run this function 30 times per second...
<html>
<script type="text/javascript">
var s1 = {
'pix': new Array(0,0,1,1,0,0,1,0,0,1,1,0),
'x': 1,
'y': 2,
'width': 4,
'height': 3,
'rotation': 45};
var s2 = {
'pix': new Array(1,0,1,0,1,0,1,0,1,0,1,0),
'x': 0,
'y': 1,
'width': 4,
'height': 3,
'rotation': 90};
pixLocs(s1);
console.log("now rotating the second sprite...");
pixLocs(s2);
console.log("collision detector says " + collision(s1, s2));
function pixLocs(s) {
var i;
var x, y;
var l1, l2, r1, r2, p1, p2;
var ca, sa;
var pi;
s.locx = new Array();
s.locy = new Array();
pi = Math.acos(0.0) * 2;
var l = new Array();
ca = Math.cos(s.rotation * pi / 180.0);
sa = Math.sin(s.rotation * pi / 180.0);
i = 0;
for(x = 0; x < s.width; ++x) {
for(y = 0; y < s.height; ++y) {
// offset to center of sprite
if(s.pix[i++]==1) {
l1 = x - (s.width - 1) * 0.5;
l2 = y - (s.height - 1) * 0.5;
// rotate:
r1 = ca * l1 - sa * l2;
r2 = sa * l1 + ca * l2;
// add position:
p1 = r1 + s.x;
p2 = r2 + s.y;
console.log("rotated pixel [ " + x + "," + y + " ] is at ( " + p1 + "," + p2 + " ) " );
s.locx.push(p1);
s.locy.push(p2);
}
else console.log("no pixel at [" + x + "," + y + "]");
}
}
}
function collision(s1, s2) {
var i, j;
var dx, dy;
for (i = 0; i < s1.locx.length; i++) {
for (j = 0; j < s2.locx.length; j++) {
dx = Math.abs(s1.locx[i] - s2.locx[j]);
if(dx < 1) {
dy = Math.abs(s1.locy[i] - s2.locy[j]);
if (dx*dx + dy*dy < 1) return 1;
}
}
}
return 0;
}
</script>
</html>

How to subdivide a shape into sections of a given size

I'm currently trying to build a kind of pie chart / Voronoi diagram hybrid (in canvas/JavaScript). I don't know if it's even possible. I'm very new to this, and I haven't tried any approaches yet.
Assume I have a circle, and a set of numbers 2, 3, 5, 7, 11.
I want to subdivide the circle into sections equivalent to the numbers (much like a pie chart) but forming a lattice / honeycomb like shape.
Is this even possible? Is it ridiculously difficult, especially for someone who's only done some basic pie chart rendering?
This is my view on this after a quick look.
A general solution, assuming there are to be n polygons with k vertices/edges, will depend on the solution to n equations, where each equation has no more than 2nk (but exactly 2k non-zero) variables. The variables in each polygon's equation are the same x_1, x_2, x_3... x_nk and y_1, y_2, y_3... y_nk variables. Exactly four of x_1, x_2, x_3... x_nk have non-zero coefficients and exactly four of y_1, y_2, y_3... y_nk have non-zero coefficients for each polygon's equation. x_i and y_i are bounded differently depending on the parent shape. For the sake of simplicity, we'll assume the shape is a circle. The boundary condition is: (x_i)^2 + (y_i)^2 <= r^2
Note: I say no more than 2nk, because I am unsure of the lowerbound, but know that it can not be more than 2nk. This is a result of polygons, as a requirement, sharing vertices.
The equations are the collection of definite, but variable-bounded, integrals representing the area of each polygon, with the area of the ith polygon equal to its share of the circle:
A_i = pi*r^2 * S_i / (S_1 + S_2 + ... + S_n)
where r is the radius of the parent circle and S_i is the number assigned to the polygon, as in your diagram.
The four separate pairs of (x_j,y_j), both with non-zero coefficients in a polygon's equation will yield the vertices for the polygon.
This may prove to be considerably difficult.
Is the boundary fixed from the beginning, or can you deform it a bit?
If I had to solve this, I would sort the areas from large to small. Then, starting with the largest area, I would first generate a random convex polygon (vertices along a circle) with the required size. The next area would share an edge with the first area, but would be otherwise also random and convex. Each polygon after that would choose an existing edge from already-present polygons, and would also share any 'convex' edges that start from there (where 'convex edge' is one that, if used for the new polygon, would result in the new polygon still being convex).
By evaluating different prospective polygon positions for 'total boundary approaches desired boundary', you can probably generate a cheap approximation to your initial goal. This is quite similar to what word-clouds do: place things incrementally from largest to smallest while trying to fill in a more-or-less enclosed space.
Given a set of Voronoi centres (i.e. a list of the coordinates of the centre for each one), we can calculate the area closest to each centre:
area[i] = areaClosestTo(i,positions)
Assume these are a bit wrong, because we haven't got the centres in the right place. So we can calculate the error in our current set by comparing the areas to the ideal areas:
var areaIndexSq = 0;
var desiredAreasMagSq = 0;
for(var i = 0; i < areas.length; ++i) {
var contrib = (areas[i] - desiredAreas[i]);
areaIndexSq += contrib*contrib;
desiredAreasMagSq += desiredAreas[i]*desiredAreas[i];
}
var areaIndex = Math.sqrt(areaIndexSq/desiredAreasMagSq);
This is the vector norm of the difference vector between the areas and the desiredAreas. Think of it like a measure of how good a least squares fit line is.
We also want some kind of honeycomb pattern, so we can call that honeycombness(positions), and get an overall measure of the quality of the thing (this is just a starter, the weighting or form of this can be whatever floats your boat):
var overallMeasure = areaIndex + honeycombnessIndex;
Then we have a mechanism to know how bad a guess is, and we can combine this with a mechanism for modifying the positions; the simplest is just to add a random amount to the x and y coords of each centre. Alternatively you can try moving each point towards neighbour areas which have an area too high, and away from those with an area too low.
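A rough hill-climbing loop on top of that would then look like the sketch below, where measure(positions, desiredAreas) is assumed to wrap the areaIndex + honeycombness computation above and the perturbation is plain random jitter:

// keep a random perturbation of the centres only if the overall measure improves
function improvePositions(positions, desiredAreas, measure, iterations, stepSize) {
  var best = positions.map(function (p) { return { x: p.x, y: p.y }; });
  var bestScore = measure(best, desiredAreas);
  for (var i = 0; i < iterations; i++) {
    var candidate = best.map(function (p) {
      return { x: p.x + (Math.random() - 0.5) * stepSize,
               y: p.y + (Math.random() - 0.5) * stepSize };
    });
    var score = measure(candidate, desiredAreas);
    if (score < bestScore) {   // lower is better
      best = candidate;
      bestScore = score;
    }
  }
  return best;
}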
This is not a straight solve, but it requires minimal maths apart from calculating the area closest to each point, and it's approachable. The difficult part may be recognising local minima and dealing with them.
Incidentally, it should be fairly easy to get the start points for the process; the centroids of the pie slices shouldn't be too far from the truth.
A definite plus is that you could use the intermediate calculations to animate a transition from pie to voronoi.
