Given an array of circles (x,y,r values), I want to place a new point, such that it has a fixed/known Y-coordinate (shown as the horizontal line), and is as close as possible to the center BUT not within any of the existing circles. In the example images, the point in red would be the result.
Circles have a known radius and Y-axis attribute, so it's easy to calculate the points where they intersect the horizontal line at the known Y value. Efficiency is important; I don't want to have to try a bunch of X coords and test them all against each item in the circles array. Is there a way to work out this optimal X coordinate mathematically? Any help greatly appreciated. By the way, I'm writing it in JavaScript using the Raphael.js library (because it's the only one that supports IE8) - but this is more of a logic problem so the language doesn't really matter.
I'd approach your problem as follows:
Initialize a set of intervals S, sorted by the X coordinate of the interval, to the empty set
For each circle c, calculate the interval of intersection Ic of c with the X axis. If c does not intersect, go on to the next circle. Otherwise, test whether Ic overlaps with any interval(s) in S (this is quick because S is sorted); if so, remove all intersecting intervals from S, collapse Ic and all removed intervals into a new interval I'c and add I'c to S. If there are no intersections, add Ic to S.
Check whether any interval in S includes the center (again, fast because S is sorted). If so, select the interval endpoint closest to the center; if not, select the center as the closest point.
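A minimal sketch of this interval approach in JavaScript (the function and parameter names are mine, not from the answer): circles is the array of {x, y, r} objects, lineY is the fixed Y of the horizontal line, and centerX is the preferred X:
function closestFreeX(circles, lineY, centerX) {
    // 1. intersection interval of each circle with the horizontal line
    var intervals = [];
    circles.forEach(function (c) {
        var dy = lineY - c.y;
        var d2 = c.r * c.r - dy * dy;
        if (d2 <= 0) return;                    // circle does not reach the line
        var d = Math.sqrt(d2);
        intervals.push({ from: c.x - d, to: c.x + d });
    });
    // 2. sort by left endpoint and merge overlapping intervals
    intervals.sort(function (a, b) { return a.from - b.from; });
    var merged = [];
    intervals.forEach(function (iv) {
        var last = merged[merged.length - 1];
        if (last && iv.from <= last.to) {
            last.to = Math.max(last.to, iv.to);
        } else {
            merged.push({ from: iv.from, to: iv.to });
        }
    });
    // 3. if the preferred X is covered, pick the nearer endpoint of that interval
    for (var i = 0; i < merged.length; i++) {
        if (merged[i].from < centerX && centerX < merged[i].to) {
            return (centerX - merged[i].from <= merged[i].to - centerX)
                ? merged[i].from : merged[i].to;
        }
    }
    return centerX;                             // not covered: the center itself works
}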
Basically the equation of a circle is (x − cx)² + (y − cy)² = r². Therefore you can easily find the intersection points between the circle and the X axis by substituting y with 0. After that you just have a simple quadratic equation to solve: x² − 2·cx·x + (cx² + cy² − r²) = 0. There are 3 possible outcomes:
No intersection - the discriminant is negative, so its square root is NaN in JavaScript; ignore this result;
One intersection - both solutions match, use [value, value];
Two intersections - both solutions are different, use [value1, value2].
Sort the newly calculated intersection intervals, then try to merge them where possible. Keep in mind, however, that every programming language works with limited floating-point precision, so you need to define a delta value for your point comparisons and take it into consideration when merging the intervals.
Once the intervals are merged you can generate candidate x coordinates by subtracting/adding that same delta value from the beginning/to the end of every interval. Finally, of all those points, the one closest to zero is your answer.
Here is an example with O(n log n) complexity, oriented more towards readability than raw speed. I've used 1×10⁻¹⁰ (1e-10) for delta:
var circles = [
{x:0, y:0, r:1},
{x:2.5, y:0, r:1},
{x:-1, y:0.5, r:1},
{x:2, y:-0.5, r:1},
{x:-2, y:0, r:1},
{x:10, y:10, r:1}
];
console.log(getClosestPoint(circles, 1e-10));
function getClosestPoint(circles, delta)
{
var intervals = [],
len = circles.length,
i, result;
for (i = 0; i < len; i++)
{
result = getXIntersection(circles[i])
if (result)
{
intervals.push(result);
}
}
intervals = intervals.sort(function(a, b){
return a.from - b.from;
});
if (intervals.length <= 0) return 0;
intervals = mergeIntervals(intervals, delta);
var points = getClosestPoints(intervals, delta);
points = points.sort(function(a, b){
return Math.abs(a) - Math.abs(b);
});
return points[0];
}
function getXIntersection(circle)
{
var d = Math.sqrt(circle.r * circle.r - circle.y * circle.y);
return isNaN(d) ? null : {from: circle.x - d, to: circle.x + d};
}
function mergeIntervals(intervals, delta)
{
var curr = intervals[0],
result = [],
len = intervals.length, i;
for (i = 1 ; i < len ; i++)
{
if (intervals[i].from <= curr.to + delta)
{
curr.to = Math.max(curr.to, intervals[i].to);
} else {
result.push(curr);
curr = intervals[i];
}
}
result.push(curr);
return result;
}
function getClosestPoints(intervals, delta)
{
var result = [],
len = intervals.length, i;
for (i = 0 ; i < len ; i++)
{
result.push( intervals[i].from - delta );
result.push( intervals[i].to + delta );
}
return result;
}
1. Create the intersect_segments array (normalizing at x=0, y=0)
2. Sort intersect_segments by upper limit and remove those with upper limit < 0
3. Initialize point1 = 0 and segment = 0
4. Loop while point1 is inside intersect_segments[segment]:
4.1. Move point1 to the upper limit of intersect_segments[segment]
4.2. Increment segment
5. Sort intersect_segments by lower limit and remove those with lower limit > 0
6. Initialize point2 = 0 and segment = 0
7. Loop while point2 is inside intersect_segments[segment]:
7.1. Move point2 to the lower limit of intersect_segments[segment]
7.2. Increment segment
8. The point is whichever of point1 and point2 has the minimum absolute value
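A sketch of that walk under my interpretation of the steps above (the helper name and closed-interval handling are my own choices); segments are the circle/line intersection intervals, already normalized so the preferred X is at 0:
function closestFreeXByWalking(segments) {
    // segments: [{lo, hi}] portions of the line covered by circles, relative to 0
    // steps 2-4: walk right from 0 through every segment that still covers point1
    var point1 = 0;
    segments.slice().sort(function (a, b) { return a.hi - b.hi; })   // nearest upper limits first
            .filter(function (s) { return s.hi >= 0; })
            .forEach(function (s) { if (s.lo <= point1 && point1 <= s.hi) point1 = s.hi; });
    // steps 5-7: walk left from 0 the same way
    var point2 = 0;
    segments.slice().sort(function (a, b) { return b.lo - a.lo; })   // nearest lower limits first
            .filter(function (s) { return s.lo <= 0; })
            .forEach(function (s) { if (s.lo <= point2 && point2 <= s.hi) point2 = s.lo; });
    // step 8: the nearer of the two escape points
    return Math.abs(point1) <= Math.abs(point2) ? point1 : point2;
}
// two overlapping segments covering 0: the walk ends at -3 (left) or 4 (right)
console.log(closestFreeXByWalking([{lo: -3, hi: 2}, {lo: 1, hi: 4}])); // -3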
Let's say I have a function called bars()
bars () {
const bars = []
for (let i = 0; i < this.numberOfBars; i++) {
bars.push(Math.sqrt(this.numberOfBars * this.numberOfBars - i * i))
}
return bars
}
If I'm reducing the bars array to approximate PI, what should be on the right side of the arrow function?
PI = bars().reduce((a, b) =>
I tried adding the values and dividing by the number of bars, but I'm not getting anywhere near the approximation of Pi. I feel like there's a simple trick that I'm missing.
Your function seems to list the lengths of the "bars" in a quarter of a circle, so we have to add them all up (to get the area of the quarter circle), then multiply by 4 (because there are 4 quarters) and then divide by this.numberOfBars ^ 2, because area = π * r^2. But since we need to know the radius, it is better to use a pure function:
// Your function rewritten as a pure one
const bars = numberOfBars => {
const bars = []
for (let i = 0; i < numberOfBars; i++) {
bars.push(Math.sqrt(numberOfBars * numberOfBars - i * i))
}
return bars
}
// Here we take 1000 bars as an example but in your case you replace it by this.numberOfBars
// Sum them all up, multiply by 4, divide by the square of the radius
const PI = bars(1000).reduce((g, c) => g + c) * 4 / Math.pow(1000, 2)
console.log(PI)
/** Approximates PI using geometry
* You get a better approximation using more bars and a smaller step size
*/
function approximatePI(numberOfBars, stepSize) {
const radius = numberOfBars * stepSize;
// Generate bars (areas of points on quarter circle)
let bars = [];
// You can think of i as some point along the x-axis
for (let i = 0; i < radius; i += stepSize) {
let height = Math.sqrt(radius*radius - i*i)
bars.push(height * stepSize);
}
// Add up all the areas of the bars
// (This is approximately the area of a quarter circle if stepSize is small enough)
const quarterArea = bars.reduce((a, b) => a + b);
// Calculate PI using area of circle formula
const PI = 4 * quarterArea / (radius*radius)
return PI;
}
console.log(`PI is approximately ${approximatePI(100_000, 0.001)}`);
There is no reason to push all terms to an array, then to reduce the array by addition. Just use an accumulator variable and add all terms to it.
Notice that the computation becomes less and less accurate the closer you get to the end of the radius. If you sum only up to half of the radius, you obtain r²(3√3 + 2π)/24, from which you can derive π.
(Though in any case, this is one of the worst methods to evaluate π.)
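For illustration, a minimal sketch of that accumulator suggestion (the function name is mine):
function approximatePiSum(numberOfBars) {
    var sum = 0;                                  // accumulate instead of pushing to an array
    for (var i = 0; i < numberOfBars; i++) {
        sum += Math.sqrt(numberOfBars * numberOfBars - i * i);
    }
    return 4 * sum / (numberOfBars * numberOfBars);
}
console.log(approximatePiSum(1000)); // same value as the reduce-based version above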
I have a variable obj that holds the element count for which I need cartesian coordinates.
So I want to generate the following matrix.
obj = 9, Square root of obj = 3, 3x3 matrix
(-1,1)  (0,1)  (1,1)
(-1,0)  (0,0)  (1,0)
(-1,-1) (0,-1) (1,-1)
obj = 25, Square root of obj = 5, 5x5 matrix
(-2,2)  (-1,2)  (0,2)  (1,2)  (2,2)
(-2,1)  (-1,1)  (0,1)  (1,1)  (2,1)
(-2,0)  (-1,0)  (0,0)  (1,0)  (2,0)
(-2,-1) (-1,-1) (0,-1) (1,-1) (2,-1)
(-2,-2) (-1,-2) (0,-2) (1,-2) (2,-2)
obj = 49, Square root of obj = 7, 7x7 matrix
(-3,3)  (-2,3)  (-1,3)  (0,3)  (1,3)  (2,3)  (3,3)
(-3,2)  (-2,2)  (-1,2)  (0,2)  (1,2)  (2,2)  (3,2)
(-3,1)  (-2,1)  (-1,1)  (0,1)  (1,1)  (2,1)  (3,1)
(-3,0)  (-2,0)  (-1,0)  (0,0)  (1,0)  (2,0)  (3,0)
(-3,-1) (-2,-1) (-1,-1) (0,-1) (1,-1) (2,-1) (3,-1)
(-3,-2) (-2,-2) (-1,-2) (0,-2) (1,-2) (2,-2) (3,-2)
(-3,-3) (-2,-3) (-1,-3) (0,-3) (1,-3) (2,-3) (3,-3)
What I did was hardcode the first set (the one for when the obj value is 9) to be created inside a loop, and pushed those into a list called coordinates.
All I then did was call the loop, passing in Math.sqrt(obj).
Problem:
There are missing coordinates, when the obj value is greater than 9.
For eg: when the obj value is 49. It would create the adjacent previous element, but it won't create the previous element of the previous element (coordinates like (-1, 3), (1, 3), (-3, 1), (3, 1), (-3, -1), (3, -1), (-1, -3), (1, -3)).
This is happening because I hardcoded the logic to create the previous coordinate by subtracting with 1. As the obj value increases the current number of missing coordinates is twice the previous number of missing elements (not sure).
I can't seem to figure out a way to create the logic to create the missing elements.
Another problem is repeating coordinates, which happened because my logic for creating the missing elements is wrong.
It is also hard to check whether all coordinates are correct once the count (obj) value increases.
Note:
I would like to know different approaches to create the cartesian coordinates around (0, 0). Apparently all my efforts at building the logic end up with missing elements or repeating elements. And it is hard to actually check whether all the coordinates are correct when the obj value increases.
I want to create a cartesian coordinate matrix with any value. Currently I'm stuck with using the squares of odd numbers (I plan to substitute the 0 axis for when the number is less than or greater than squares of odd numbers).
Approach ideas/concepts to test:
As I'm a beginner in graphics programming, I would like to know better approaches to do this. Also here are some approaches I just came up with. I am not sure if this works yet, but I'll add an update.
I could maybe create a cross for just the 0's (x,y) axis. And then try to create the rest of the elements by subtracting or adding to each coordinate in the axis.
As there are 4 quadrants, I could create 4 individual loops that creates just that particular quadrant's missing coordinates.
       (0,1)
(-1,0) (0,0) (1,0)
       (0,-1)
Another approach could be to sort the coordinates and then check the distance between 2 adjacent coordinates: if it is greater than 1, create a new element, else continue checking.
Current Code:
My demo code at JSFiddle
const speak = 'these are the COORDINATES you are looking for!'
// 9, 25, 49, 81, 121 => substitutable values for variable 'obj'
const obj = 49 // loop using this variable
const coordinates = []
// hardcodes
const start = [0,0]
const points = []
/* points.push(start) */
/**
* FIX!.
*
* needs to also create coordinates from the initial coordinate subtracted
* by more than 1; currently it gets the previous element by subtracting 1,
* we need to get previous elements of the previous elements based on number
* of elements.
*/
// creating array from coordinates in all quadrants
function demo (n) {
// pushing initial coordinates
for (let i = 1; i <= Math.sqrt(n); i++) {
coordinates.push([-i, i], [i-1, i], [i, i], [-i, i-1], [i-1, i-1], [i, i-1], [-i, -i], [i-1, -i], [i, -i])
for (let j = 3; j < Math.sqrt(n); j++) {
coordinates.push([-i, i-j], [i-j, i-j], [i, i-j], [i-j, -i])
}
}
// pushing missing coordinates
/* for (let i = 1; i <= Math.sqrt(n); i++) {
coordinates.push([i-2, i], [-i, i-2], [i-2, i-2], [i, i-2])
} */
for (let i = 0; i < obj; i++) {
points.push(coordinates[i])
}
}
demo(obj)
// sorting multidimensional array
points.sort(function (a, b) {
return a[1] - b[1]
})
/* // print array as row and column of coordinates
for (let x = 0; x < Math.sqrt(obj); x++) {
let el = []
for (let i = 0; i < Math.sqrt(obj); i++){
el.push(points[i + Math.sqrt(obj) * x])
}
console.log(el)
}
*/
If I understand you correctly you want to have the coordinates in an order so that the left upper corner is first and right lower corner is last, right?
You can try it this way
let
size = 81, //ie a 9x9 grid
rc = Math.floor(Math.sqrt(size)), //number of rows/columns
max = Math.floor(rc / 2), //maximum x and y coordinates
min = -1 * max, //minimum x and y coordinates
coords = []; //the array of coordinates
//as the positive y coordinates should be first, iterate from max down to min
for (let y = max; y >= min; y--)
//for each row, iterate the x cooridinates from min up to max
for (let x = min; x <= max; x++)
coords.push([x,y]);
for (let i = 0; i < rc; i++) {
let row = coords.slice(i*rc, (i+1)*rc); //get one full row of coordinates
console.log(row.map(x => formatCoordinate(x)).join("")); //and display it
}
function formatCoordinate(x) {
return "|" + `${x[0]}`.padStart(3, " ") + "/" + `${x[1]}`.padStart(3, " ") + "|"
}
Another way, is, just put your coordinates in the array in any order, and sort the values afterwards. But you have to sort by x and y coordinate,
let
size = 81, //ie a 9x9 grid
rc = Math.floor(Math.sqrt(size)), //number of rows/columns
max = Math.floor(rc / 2), //maximum x and y coordinates
min = -1 * max, //minimum x and y coordinates
coords = []; //the array of coordinates
//coords will be [[-3, -3], [-3, -2], [-3, -1] ..., [3, 3]]
for (let i = min; i <= max; i++)
for (let j = min; j <= max; j++)
coords.push([i,j]);
//sort coords to be [[-3, 3], [-2, 3], [-1, 3], ... [3, -3]]
coords.sort((a, b) => {
if (a[1] != b[1]) //if y coordinates are different
return b[1] - a[1]; //higher y coordinates come first
return a[0] - b[0]; //lower x coordinates come first
})
for (let i = 0; i < rc; i++) {
let row = coords.slice(i*rc, (i+1)*rc); //get one full row of coordinates
console.log(row.map(x => formatCoordinate(x)).join("")); //and display it
}
function formatCoordinate(x) {
return "|" + `${x[0]}`.padStart(3, " ") + "/" + `${x[1]}`.padStart(3, " ") + "|"
}
Both approaches assume that size is the square of an odd number, but you can of course adapt them any way you want, ie in principle you just need to set min and max to any values you want, and both approaches will create a square of coordinates from [[min/max] ... [max/min]].
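As a rough illustration of that last point (the function name is mine, not from the answer), the whole thing can be parameterized by min and max directly:
//build the coordinate rows for arbitrary min/max bounds, top-left to bottom-right
function gridCoords(min, max) {
    let rows = [];
    for (let y = max; y >= min; y--) {
        let row = [];
        for (let x = min; x <= max; x++)
            row.push([x, y]);
        rows.push(row);
    }
    return rows;
}
//gridCoords(-1, 1) reproduces the 3x3 example from the question,
//gridCoords(-3, 3) the 7x7 one, and non-square bounds work just as well
console.log(gridCoords(-1, 1));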
I am trying to figure out a way to have a fixed scale for the:
https://en.wikipedia.org/wiki/Diamond-square_algorithm
I see that the algorithm requires a power of 2 (+1) size of the array.
The problem I am having is that I would like the same heightmap to be produced regardless of the resolution. So if I have a resolution of 512 it would look the same as with resolution 256, just with less detail. I just can't figure out how to do this.
My initial thought was to always create the heightmap in a certain dimension e.g. 1024 and downsample to the res I would like. Problem is I would like the upper resolution to be quite high (say 4096) and this severely reduces the performance at lower resolutions as we have to run the algo at the highest possible resolution.
Currently the algorithm is in javascript here is a snippet:
function Advanced() {
var adv = {},
res, max, heightmap, roughness;
adv.heightmap = function() {
// the heightmap has one extra pixel; this is to remove it.
var hm = create2DArray(res-1, res-1);
for(var x = 0;x< res-1;x++) {
for(var y = 0;y< res-1;y++) {
hm[x][y] = heightmap[x][y];
}
}
return hm;
}
adv.get = function(x,y) {
if (x < 0 || x > max || y < 0 || y > max) return -1;
return heightmap[x][y];
}
adv.set = function(x,y,val) {
if(val < 0) {
val = 0;
}
heightmap[x][y] = val;
}
adv.divide = function(size) {
var x, y, half = size / 2;
var scale = roughness * size;
if (half < 1) return;
for (y = half; y < max; y += size) {
for (x = half; x < max; x += size) {
adv.square(x, y, half, Math.random() * scale * 2 - scale);
}
}
for (y = 0; y <= max; y += half) {
for (x = (y + half) % size; x <= max; x += size) {
adv.diamond(x, y, half, Math.random() * scale * 2 - scale);
}
}
adv.divide(size / 2);
}
adv.average = function(values) {
var valid = values.filter(function(val) {
return val !== -1;
});
var total = valid.reduce(function(sum, val) {
return sum + val;
}, 0);
return total / valid.length;
}
adv.square = function(x, y, size, offset) {
var ave = adv.average([
adv.get(x - size, y - size), // upper left
adv.get(x + size, y - size), // upper right
adv.get(x + size, y + size), // lower right
adv.get(x - size, y + size) // lower left
]);
adv.set(x, y, ave + offset);
}
adv.diamond = function(x, y, size, offset) {
var ave = adv.average([
adv.get(x, y - size), // top
adv.get(x + size, y), // right
adv.get(x, y + size), // bottom
adv.get(x - size, y) // left
]);
adv.set(x, y, Math.abs(ave + offset));
}
adv.generate = function(properties, resolution) {
Math.seedrandom(properties.seed);
res = resolution + 1;
max = res - 1;
heightmap = create2DArray(res, res);
roughness = properties.roughness;
adv.set(0, 0, max);
adv.set(max, 0, max / 2);
adv.set(max, max, 0);
adv.set(0, max, max / 2);
adv.divide(max);
}
function create2DArray(d1, d2) {
var x = new Array(d1),
i = 0,
j = 0;
for (i = 0; i < d1; i += 1) {
x[i] = new Array(d2);
}
for (i=0; i < d1; i += 1) {
for (j = 0; j < d2; j += 1) {
x[i][j] = 0;
}
}
return x;
}
return adv;
}
Anyone ever done this before ?
Not quite sure if I understand your question yet but I'll provide further clarification if I can.
You've described a case where you want a diamond-square heightmap with a resolution of 256 to be used at a size of 512 without scaling it up. I'll go through an example using a 2x2 heightmap to a "size" of 4x4.
A diamond-square heightmap is really a set of vertices rather than tiles or squares, so a heightmap with a size of 2x2 is really a set of 3x3 vertices.
You could either render this using the heights of the corners, or you might turn it into a 2x2 set of squares by taking the average of the four surrounding points - really this is just the "square" step of the algorithm without the displacement step.
So in this case the "height" of the top-left square would be the average of the (0, 0), (0, 1), (1, 1) and (1, 0) points.
If you wanted to draw this at a higher resolution, you could split each square up into a smaller set of 4 squares, adjusting the average based on how close it is to each point.
So now the value of the top-left-most square would be a sample of the 4 sub-points around it or a sample of its position relative to the points around it. But really this is just the diamond square algorithm applied again without any displacement (no roughness) so you may as well apply the algorithm again and go to the larger size.
You've said that going to the size you wish to go to would be too much for the processor to handle, so you may want to go with this sampling approach on the smaller size. An efficient way would be to render the heightmap to a texture and sample from it and the position required.
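As a rough sketch of that sampling idea (the function name, the hm[x][y] layout and the clamping are my own assumptions), bilinear interpolation lets you read a low-resolution heightmap at any display resolution:
// sample a res x res heightmap "hm" at normalized coordinates u, v in [0, 1]
function sampleHeight(hm, res, u, v) {
    var x = u * (res - 1), y = v * (res - 1);
    var x0 = Math.floor(x), y0 = Math.floor(y);
    var x1 = Math.min(x0 + 1, res - 1), y1 = Math.min(y0 + 1, res - 1);
    var fx = x - x0, fy = y - y0;
    // interpolate along x on both rows, then along y
    var top    = hm[x0][y0] * (1 - fx) + hm[x1][y0] * fx;
    var bottom = hm[x0][y1] * (1 - fx) + hm[x1][y1] * fx;
    return top * (1 - fy) + bottom * fy;
}
// e.g. drawing a 256 heightmap at 512: sampleHeight(hm, 256, px / 511, py / 511)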
A properly implemented diamond & square algorithm performs the same first N steps regardless of map resolution, so the only thing needed to ensure the same look is a specified seed for the pseudo-random generator.
To make this work you need:
1. set seed
2. allocate arrays and set base randomness magnitude
3. Diamond
4. Square
5. lower base randomness magnitude
6. loop #3 until lowest resolution hit
If you are not lowering the randomness magnitude properly then the lower recursion/iteration layers can override the shape of the result of the upper layers making this not work.
Here, see how I do it; just add the seed:
Diamond-square algorithm not working
see the line:
r=(r*220)>>8; if (r<2) r=2;
The r is the base randomness magnitude. The way you lower it determines the shape of the result: as you can see, I am not dividing it by two but multiplying by 220/256 instead, so the lower resolutions have bigger bumps, which suits my needs.
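A small sketch (my own, not from the linked generator) of why this gives the same look at every resolution: the magnitude schedule depends only on the recursion level, never on the map size, so with a fixed seed the coarse levels come out identical and higher resolutions merely append finer ones:
function magnitudeSchedule(resolution, base) {
    var levels = Math.log2(resolution - 1);   // resolution is assumed to be 2^n + 1
    var out = [];
    var r = base;
    for (var i = 0; i < levels; i++) {
        out.push(r);
        r = (r * 220) >> 8;                   // same lowering rule as above
        if (r < 2) r = 2;
    }
    return out;
}
console.log(magnitudeSchedule(257, 256));     // 8 levels
console.log(magnitudeSchedule(1025, 256));    // 10 levels, the first 8 are identical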
Now if you want to use non-2^x+1 resolutions, choose the closest bigger resolution and then scale down to make this work for them too. The scaling down should be done carefully to preserve the main grid points of the first few recursion/iteration steps, or use bi-cubic ...
If you're interested take a look on more up to date generator based on the linked one:
Diamond&Square Island generator
If I had an array of numbers such as [3, 5, 0, 8, 4, 2, 6], is there a way to “smooth out” the values so they’re closer to each other and display less variance?
I’ve looked into windowing the data using something called the Gaussian function for a 1-dimensional case, which is my array, but am having trouble implementing it. This thread seems to solve exactly what I need but I don’t understand how user naschilling (second post) came up with the Gaussian matrix values.
Context: I’m working on a music waveform generator (borrowing from SoundCloud’s design) that maps the amplitude of the song at time t to a corresponding bar height. Unfortunately there’s a lot of noise, and it looks particularly ugly when the program maps a tiny amplitude which results in a sudden decrease in height. I basically want to smooth out the bar heights so they aren’t so varied.
The language I'm using is Javascript.
EDIT: Sorry, let me be more specific about "smoothing out" the values. According to the thread linked above, a user took an array
[10.00, 13.00, 7.00, 11.00, 12.00, 9.00, 6.00, 5.00]
and used a Gaussian function to map it to
[ 8.35, 9.35, 8.59, 8.98, 9.63, 7.94, 5.78, 7.32]
Notice how the numbers are much closer to each other.
EDIT 2: It worked! Thanks to user Awal Garg's algorithm, here are the results:
No smoothing
Some smoothing
Maximum smoothing
EDIT 3: Here's my final code in JS. I tweaked it so that the first and last elements of the array find their neighbors by wrapping around the array, rather than reusing themselves.
var array = [10, 13, 7, 11, 12, 9, 6, 5];
function smooth(values, alpha) {
var weighted = average(values) * alpha;
var smoothed = [];
for (var i in values) {
var curr = values[i];
var prev = smoothed[i - 1] || values[values.length - 1];
var next = values[Number(i) + 1] || values[0];
var improved = Number(average([weighted, prev, curr, next]).toFixed(2));
smoothed.push(improved);
}
return smoothed;
}
function average(data) {
var sum = data.reduce(function(sum, value) {
return sum + value;
}, 0);
var avg = sum / data.length;
return avg;
}
smooth(array, 0.85);
Interesting question!
The algorithm to smooth out the values obviously could vary a lot, but here is my take:
"use strict";
var array = [10, 13, 7, 11, 12, 9, 6, 5];
function avg (v) {
return v.reduce((a,b) => a+b, 0)/v.length;
}
function smoothOut (vector, variance) {
var t_avg = avg(vector)*variance;
var ret = Array(vector.length);
for (var i = 0; i < vector.length; i++) {
(function () {
var prev = i>0 ? ret[i-1] : vector[i];
var next = i<vector.length-1 ? vector[i+1] : vector[i];
ret[i] = avg([t_avg, avg([prev, vector[i], next])]);
})();
}
return ret;
}
function display (x, y) {
console.clear();
console.assert(x.length === y.length);
x.forEach((el, i) => console.log(`${el}\t\t${y[i]}`));
}
display(array, smoothOut(array, 0.85));
NOTE: It uses some ES6 features like fat-arrow functions and template strings. Firefox 35+ and Chrome 45+ should work fine. Please use the babel repl otherwise.
My method basically computes the average of all the elements in the array in advance, and uses that as a major factor to compute the new value along with the current element value, the one prior to it, and the one after it. I am also using the prior value as the one newly computed and not the one from the original array. Feel free to experiment and modify according to your needs. You can also pass in a "variance" parameter to control the difference between the elements. Lowering it will bring the elements much closer to each other since it decreases the value of the average.
A slight variation to loosen out the smoothing would be this:
"use strict";
var array = [10, 13, 7, 11, 12, 9, 6, 5];
function avg (v) {
return v.reduce((a,b) => a+b, 0)/v.length;
}
function smoothOut (vector, variance) {
var t_avg = avg(vector)*variance;
var ret = Array(vector.length);
for (var i = 0; i < vector.length; i++) {
(function () {
var prev = i>0 ? ret[i-1] : vector[i];
var next = i<vector.length-1 ? vector[i+1] : vector[i];
ret[i] = avg([t_avg, prev, vector[i], next]);
})();
}
return ret;
}
function display (x, y) {
console.clear();
console.assert(x.length === y.length);
x.forEach((el, i) => console.log(`${el}\t\t${y[i]}`));
}
display(array, smoothOut(array, 0.85));
which doesn't take the averaged value as a major factor.
Feel free to experiment, hope that helps!
The technique you describe sounds like a 1D version of a Gaussian blur. Multiply the values of the 1D Gaussian array times the given window within the array and sum the result. For example
1. Assume a Gaussian array {.242, .399, .242}
2. To calculate the new value at position n of the input array, multiply the values at n-1, n and n+1 of the input array by those in (1) and sum the result. E.g. for [3, 5, 0, 8, 4, 2, 6], n = 1:
n1 = 0.242 * 3 + 0.399 * 5 + 0.242 * 0 = 2.721
You can alter the variance of the Gaussian to increase or reduce the effect of the blur.
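A minimal sketch of that 1D blur (the function name and the edge handling, which clamps at the array ends, are my own choices):
function gaussianBlur1D(values, kernel) {
    var half = Math.floor(kernel.length / 2);
    return values.map(function (_, n) {
        var sum = 0;
        for (var k = 0; k < kernel.length; k++) {
            // clamp the index so the first and last elements reuse their edge value
            var i = Math.min(values.length - 1, Math.max(0, n + k - half));
            sum += kernel[k] * values[i];
        }
        return sum;
    });
}
console.log(gaussianBlur1D([3, 5, 0, 8, 4, 2, 6], [0.242, 0.399, 0.242]));
// position 1 works out to 0.242*3 + 0.399*5 + 0.242*0 = 2.721, as in the example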
I stumbled upon this post having the same problem: trying to achieve smooth circular waves from FFT averages.
I've tried normalizing, smoothing and the wildest math to spread the dynamics of an array of averages between 0 and 1. It is of course possible, but the sharp increases in averaged values remain a bother that basically makes these values unfeasible for direct display.
Instead I use the FFT average to drive the amplitude, frequency and wavelength of a separately structured, clean sine.
Imagine a sine curve across the screen that moves right to left at a given speed (frequency) times the current average, and has an amplitude of the current average times whatever will then be mapped to [0, 1] in order to eventually determine 'the wave's' z.
The function for calculating the size, color, shift of elements or whatever visualizes 'the wave' will have to be based on distance from the center and some array that holds values for each distance, e.g. a certain number of average values.
That very same array can instead be fed with values from a sine - one that is influenced by the FFT averages - which themselves thus need no smoothing and can remain unaltered.
The effect is pleasingly clean sine waves that appear to be driven by the 'energy' of the sound.
Like this - where 'rings' is an array that a distance function uses to read the 'z' values of 'the wave's' x,y positions.
const wave = {
y: height / 2,
length: 0.02,
amplitude: 30,
frequency: 0.5
}
//var increment = wave.frequency;
var increment = 0;
function sinewave(length,amplitude,frequency) {
ctx.strokeStyle = 'red';
ctx.beginPath();
ctx.moveTo(0, height / 2);
for (let i = 0; i < width; i+=cellSize) {
//ctx.lineTo(i, wave.y + Math.sin(i * wave.length + increment) * wave.amplitude)
ctx.lineTo(i, wave.y + Math.sin(i * length + increment) * amplitude);
rings.push( map( Math.sin(i * length + increment) * amplitude,0,20,0.1,1) );
rings.shift();
}
ctx.stroke();
increment += frequency;
}
The function is called each frame (from draw) with the current average FFT value driving the sine function like this - assuming that value is mapped to [0, 1]:
sinewave(0.006,averg*20,averg*0.3)
Allowing fluctuating values to determine wavelength or frequency can have some visually appealing effects. However, the movement of 'the wave' will never seem natural.
I've accomplished a near enough result in my case.
For making the sine appear to be driven by each 'beat' you'd need beat detection to determine the exact tempo of 'the sound' that 'the wave' is supposed to visualize.
Continuous averaging of the distance between larger peaks in the lower range of the FFT spectrum might work there, together with a semi-fixed frequency - with EDM...
I know, the question was about smoothing array values.
Forgive me for changing the subject. I just thought that the objective 'sound wave' is an interesting one that could be achieved differently.
And just so this is complete, here's a bit that simply draws circles for each FFT bin and assigns a colour according to volume.
With line widths relative to the total radius and the sum of volumes this is quite nice:
//col generator
function getCol(n,m,f){
var a = (PIx5*n)/(3*m) + PIdiv2;
var r = map(sin(a),-1,1,0,255);
var g = map(sin(a - PIx2/3),-1,1,0,255);
var b = map(sin(a - PIx4/3),-1,1,0,255);
return ("rgba(" + r + "," + g + "," + b + "," + f + ")");
}
//draw circles for each fft with linewidth and colour relative to value
function drawCircles(arr){
var nC = 20; //number of elem from array we want to use
var cAv = 0;
var cAvsum = 0;
//get the sum of all values so we can map a single value with regard to this
for(var i = 0; i< nC; i++){
cAvsum += arr[i];
}
cAv = cAvsum/nC;
var lastwidth = 0;
//draw a circle for each elem from array
//compute linewith a fraction of width relative to value of elem vs. sum of elems
for(var i = 0; i< nC; i++){
ctx.beginPath();
var radius = lastwidth;//map(arr[i]*2,0,255,0,i*300);
//use a small col generator to assign col - map value to spectrum
ctx.strokeStyle = getCol(map(arr[i],0,255,0,1280),1280,0.05);
//map elem value as fraction of elem sum to linewidth/total width of outer circle
ctx.lineWidth = map(arr[i],0,cAvsum,0,width);
//draw
ctx.arc(centerX, centerY, radius, 0, Math.PI*2, false);
ctx.stroke();
//add current radius and linewidth to lastwidth
var lastwidth = radius + ctx.lineWidth/2;
}
}
codepen here: https://codepen.io/sumoclub/full/QWBwzaZ
Always happy about suggestions.
The problem is currently solved. In case someone wants to see the colored fractal, the code is here.
Here is the previous problem:
Although the algorithm is straightforward, I seem to have a small error (some fractals draw correctly and some do not). You can quickly check in jsFiddle that for c = -1 or c = 1/4 the fractal draws correctly, but if I take c = i, the image is totally wrong.
Here is implementation.
HTML
<canvas id="a" width="400" height="400"></canvas>
JS
function point(pos, canvas){
canvas.fillRect(pos[0], pos[1], 1, 1); // there is no drawpoint in JS, so I simulate it
}
function conversion(x, y, width, R){ // transformation from canvas coordinates to XY plane
var m = R / width;
var x1 = m * (2 * x - width);
var y2 = m * (width - 2 * y);
return [x1, y2];
}
function f(z, c){ // calculate the value of the function with complex arguments.
return [z[0]*z[0] - z[1] * z[1] + c[0], 2 * z[0] * z[1] + c[1]];
}
function abs(z){ // absolute value of a complex number
return Math.sqrt(z[0]*z[0] + z[1]*z[1]);
}
function init(){
var length = 400,
width = 400,
c = [-1, 0], // all complex number are in the form of [x, y] which means x + i*y
maxIterate = 100,
R = (1 + Math.sqrt(1+4*abs(c))) / 2,
z;
var canvas = document.getElementById('a').getContext("2d");
var flag;
for (var x = 0; x < width; x++){
for (var y = 0; y < length; y++){ // for every point in the canvas plane
flag = true;
z = conversion(x, y, width, R); // convert it to XY plane
for (var i = 0; i < maxIterate; i++){ // I know I can change it to while and remove this flag.
z = f(z, c);
if (abs(z) > R){ // if during every one of the iterations we have value bigger then R, do not draw this point.
flag = false;
break;
}
}
// if the
if (flag) point([x, y], canvas);
}
}
}
Also, while it took me only a few minutes to write, I spent much more time trying to find out why it does not work for all the cases. Any idea where I screwed up?
Good news! (or bad news)
Your implementation is completely correct. Unfortunately, with c = [0, 1], the Julia set has very few points. I believe it is measure zero (unlike, say, the Mandelbrot set). So the probability of a random point being in that Julia set is 0.
If you reduce your iterations to 15 (JSFiddle), you can see the fractal. One hundred iterations is more "accurate", but as the number of iterations increase, the chance that a point on your 400 x 400 grid will be included in your fractal approximation decreases to zero.
Often, you will see the Julia fractal with multiple colors, where the color indicates how quickly the point diverges (or that it does not diverge at all), like in this Flash demonstration. This allows the Julia fractal to be somewhat visible even in cases like c = i.
Your choices are
(1) Reduce your # of iterations, possibly depending on c.
(2) Increase the size of your sampling (and your canvas), possibly depending on c.
(3) Color the points of your canvas according to the iteration # at which R was exceeded.
The last option will give you the most robust result.
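A rough sketch of option (3), reusing the f, abs and conversion helpers from the question's code (the gray-scale mapping is just one illustrative choice):
// return the iteration at which |z| exceeded R, or maxIterate if it never did
function escapeCount(z, c, R, maxIterate) {
    for (var i = 0; i < maxIterate; i++) {
        z = f(z, c);
        if (abs(z) > R) return i;
    }
    return maxIterate;
}
function drawColored(ctx, width, c, R, maxIterate) {
    for (var x = 0; x < width; x++) {
        for (var y = 0; y < width; y++) {
            var n = escapeCount(conversion(x, y, width, R), c, R, maxIterate);
            var shade = Math.floor(255 * n / maxIterate);   // slow divergence = brighter
            ctx.fillStyle = "rgb(" + shade + "," + shade + "," + shade + ")";
            ctx.fillRect(x, y, 1, 1);
        }
    }
}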