Find point using distance from any other point with minimum steps - javascript

I need to find a point in 3D space using a function that returns the distance between that point and any other point of my choosing. All coordinates range from 0 to 100 and can only be integers. Of course, I could use brute force, but the complexity would be n^3, and that's way too far from the minimum number of steps.
I was trying to reach a solution using binary search, because we can imagine an array from 0 to 100 where each index represents a position on one axis, the x-axis for example. By moving either left or right I should move closer to the point, but my solution does not work correctly.
This is the code I came up with. It works for some points, but not all:
const findClosestX = (low, high, destinationPoint) => {
    const middle = Math.floor((low + high) / 2);
    // Point is a simple class with x, y, z coordinates and a method
    // for getting the distance between two points
    const starting_distance = new Point(middle, 0, 0).getDistance(destinationPoint);
    const distance_1 = new Point(low, 0, 0).getDistance(destinationPoint);
    const distance_2 = new Point(high, 0, 0).getDistance(destinationPoint);
    const minPoint = Math.min(starting_distance, distance_1, distance_2);
    if (minPoint === starting_distance) {
        return middle;
    } else if (minPoint === distance_1) {
        return findClosestX(low, middle - 1, destinationPoint);
    } else if (minPoint === distance_2) {
        return findClosestX(middle + 1, high, destinationPoint);
    }
};

For the one-dimensional case there's the bisection method, which is the continuous analogue of the well-known binary search algorithm. It is used to find the solution of an equation, that is, a zero of a function, f(x) = 0, in an interval [a, b] where the function is monotonic (increasing or decreasing) and f(a) > 0 and f(b) < 0 (or f(a) < 0 and f(b) > 0), which guarantees that there is exactly one solution in the interval [a, b].
It consists of a loop that maintains three points: left, right and middle. Initially they are a, b and (a+b)/2. At each step it updates the three points by either choosing the middle point as the next left point (choosing the right half; the new interval is [middle, right]) or choosing the middle point as the next right point (choosing the left half; the new interval is [left, middle]). The next middle is computed as the midpoint of the new interval.
The choice between the two variants is made in such a way that the sign of f(left) stays the same as the sign of f(a), and the sign of f(right) stays the same as the sign of f(b). This guarantees that the solution is always inside the interval [left, right], and since the interval keeps shrinking (each step halves its length), the middle gets ever closer to the true solution. The iteration stops when the current middle point is close enough to the solution: |f(middle)| < eps.
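As a sketch, the root-finding loop described above might look like this in JavaScript (the example function and tolerance here are my own illustration, not from the question):

```javascript
// Find a zero of f in [a, b], assuming f(a) and f(b) have opposite signs.
function bisect(f, a, b, eps = 1e-9) {
  let left = a, right = b;
  let middle = (left + right) / 2;
  while (Math.abs(f(middle)) >= eps) {
    // Keep the half-interval whose endpoints still bracket the zero,
    // i.e. where the signs of f at the two ends differ.
    if (Math.sign(f(left)) === Math.sign(f(middle))) {
      left = middle;   // zero is in [middle, right]
    } else {
      right = middle;  // zero is in [left, middle]
    }
    middle = (left + right) / 2;
  }
  return middle;
}

// Example: the zero of x^2 - 2 in [0, 2] is sqrt(2).
console.log(bisect(x => x * x - 2, 0, 2)); // ≈ 1.41421356
```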
The first step in adapting this algorithm to the problem at hand is the most delicate: making the algorithm work for an optimization problem. I'll restrict the analysis to minimization, as is the case here. The change to the algorithm is trivial: from the points {left, middle, right}, we keep the two with the minimal values of f. That is, if f(left) < f(right) we choose [left, middle], and if f(left) > f(right) we choose [middle, right].
However, take care: this algorithm is convergent for a minimization problem only in very special cases. A sufficient condition for convergence (fortunately satisfied by the Euclidean distance) is that the function f is symmetric with respect to its minimum in the interval we work with: f(xmin - d) = f(xmin + d) whenever xmin - d and xmin + d are both inside the interval [a, b].
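This one-dimensional minimization variant can be sketched as follows (the loop, distance function and tolerance are my own illustration, assuming a function symmetric about its minimum as discussed):

```javascript
// Minimize f on [a, b], assuming f is symmetric about its minimum there.
function minimize(f, a, b, eps = 1e-9) {
  let left = a, right = b;
  while (right - left > eps) {
    const middle = (left + right) / 2;
    // Keep the two of {left, middle, right} with the smaller f values.
    if (f(left) < f(right)) {
      right = middle; // minimum is in [left, middle]
    } else {
      left = middle;  // minimum is in [middle, right]
    }
  }
  return (left + right) / 2;
}

// Example: distance to a hidden point x0 = 37.25 on the segment [0, 100].
const x0 = 37.25;
console.log(minimize(x => Math.abs(x - x0), 0, 100)); // ≈ 37.25
```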
The second step is to extend the above algorithm to a 3-dimensional minimization problem. Instead of the interval [a, b], we start with the initial cube (its eight corner points) and its center. At each step we choose the corner among the eight that has the minimum f(x, y, z); that corner and the center, taken as diagonally opposite vertices, define a new cube. In other words, we choose one of the eight "half-cubes" of the previous cube determined by its center. The symmetry condition in this case can be written as f(xmin ± dx, ymin ± dy, zmin ± dz) = f(xmin ∓ dx, ymin ∓ dy, zmin ∓ dz).
Here's the code for the continuous case (i.e., the coordinates are floating-point numbers):
const x0 = Math.random()*100, y0 = Math.random()*100, z0 = Math.random()*100;
console.log('Original point', [x0, y0, z0]);

function dist(x, y, z){
    return Math.sqrt((x-x0)*(x-x0) + (y-y0)*(y-y0) + (z-z0)*(z-z0));
}

// utility function
function makeCube([xA, yA, zA], [xB, yB, zB]){
    return {
        cube: [[xA, yA, zA], [xB, yA, zA], [xA, yB, zA], [xA, yA, zB],
               [xA, yB, zB], [xB, yB, zA], [xB, yA, zB], [xB, yB, zB]],
        center: [(xA+xB)/2, (yA+yB)/2, (zA+zB)/2]
    };
}

// utility function
function minIndex(a){
    return a.reduce(([minValue, minIndex], xi, i) => xi < minValue ? [xi, i] : [minValue, minIndex], [1/0, -1]);
}

function findPoint(eps = 1e-5, NMAX = 10000){
    let A = [0, 0, 0], B = [100, 100, 100];
    for(let i = 0; i < NMAX; i++){
        let {cube, center} = makeCube(A, B);
        const d = cube.map(p => dist(...p));
        const [dMin, iMin] = minIndex(d);
        if(dMin < eps){
            return center;
        }
        A = cube[iMin];
        B = center;
    }
    return null; // failed after NMAX iterations
}

console.log('Found point', findPoint())
And here is its adaptation for the case where the coordinates are integers:
const x0 = Math.floor(Math.random()*101), y0 = Math.floor(Math.random()*101), z0 = Math.floor(Math.random()*101);
console.log('Original point', [x0, y0, z0]);

function dist(x, y, z){
    return Math.sqrt((x-x0)*(x-x0) + (y-y0)*(y-y0) + (z-z0)*(z-z0));
}

// utility function
function makeCube([xA, yA, zA], [xB, yB, zB]){
    return {
        cube: [[xA, yA, zA], [xB, yA, zA], [xA, yB, zA], [xA, yA, zB],
               [xA, yB, zB], [xB, yB, zA], [xB, yA, zB], [xB, yB, zB]],
        center: [Math.round((xA+xB)/2), Math.round((yA+yB)/2), Math.round((zA+zB)/2)]
    };
}

// utility function
function minIndex(a){
    return a.reduce(([minValue, minIndex], xi, i) => xi < minValue ? [xi, i] : [minValue, minIndex], [1/0, -1]);
}

function findPoint(NMAX = 10000){
    let A = [0, 0, 0], B = [100, 100, 100];
    for(let i = 0; i < NMAX; i++){
        let {cube, center} = makeCube(A, B);
        const d = cube.map(p => dist(...p));
        const [dMin, iMin] = minIndex(d);
        if(dMin === 0){
            console.log(i + ' steps');
            return cube[iMin];
        }
        A = cube[iMin];
        B = center;
    }
    return null; // failed after NMAX iterations
}

console.log('Found point', findPoint())
Note that in both cases an optimization could be made by reusing the previously computed distance for the corner of the new cube that is inherited from the previous cube, but I feel that would make the code less readable.


How can I avoid exceeding the max call stack size during a flood fill algorithm?

I am using a recursive flood fill algorithm in javascript and I am not sure how to avoid exceeding the max call stack size. This is a little project that runs in the browser.
I got the idea from here: https://guide.freecodecamp.org/algorithms/flood-fill/
I chose this algorithm because it's easy to understand and so far I like it because it's pretty quick.
x and y are the 2D coordinates from the top left; targetColor and newColor are each a Uint8ClampedArray; and id = ctx.createImageData(1,1), which gets its data from newColor.
function floodFill2(x, y, targetColor, newColor, id) {
    let c = ctx.getImageData(x, y, 1, 1).data;
    // if the pixel doesn't match the target color, exit
    if (c[0] !== targetColor[0] || c[1] !== targetColor[1] || c[2] !== targetColor[2]) {
        return;
    }
    // if the pixel is already the newColor, exit
    if (c[0] === newColor[0] && c[1] === newColor[1] && c[2] === newColor[2]) {
        // this 'probably' means we've already been here, so ignore the pixel
        return;
    }
    // otherwise change the color of the pixel
    ctx.putImageData(id, x, y);
    // check neighbors
    floodFill2(x - 1, y, targetColor, newColor, id);
    floodFill2(x + 1, y, targetColor, newColor, id);
    floodFill2(x, y - 1, targetColor, newColor, id);
    floodFill2(x, y + 1, targetColor, newColor, id);
    return;
}
If the section is small, this code works fine. If the section is big, only a portion gets filled in and then I get the max call stack size error.
Questions
Is there something that doesn't make sense in the above code? (i.e. maybe an issue for code review?)
If the code looks OK, is it possible that I am simply using an algorithm that is inappropriate for flood filling a large section?
What I hope for from this question is a simple function, similar to the one above, that works even for a very large, oddly shaped region, but I suppose that is contingent on the generality of the algorithm. Like, am I trying to drive a nail with a screwdriver, kind of thing?
Use a stack, or: why recursion in JavaScript sucks.
Recursion is just a lazy man's stack. Not only is it lazy, it uses more memory and is far slower than an explicit stack. To top it off (as you have discovered), in JavaScript recursion is risky because the call stack is very small and you can never know how much of it has already been used when your function is called.
Some bottlenecks while we're here:
Getting image data via getImageData is an intensive task on many devices. It can take just as long to get 1 pixel as 65,000 pixels, so calling getImageData for every pixel is a very bad idea. Get all the pixels once and access them directly from RAM.
Use a Uint32Array so you can process a pixel in one step rather than checking each channel in turn.
Example
Using a plain array as a stack, each item pushed to the stack is the index of a new pixel to fill. Rather than creating a new execution context, a new local scope with associated variables, closure, and more, a single number takes the place of a call-stack entry.
See the demo for an alternative flood fill pixel search method.
See demo for an alternative flood fill pixel search method
function floodFill(x, y, targetColor, newColor) {
    const w = ctx.canvas.width, h = ctx.canvas.height;
    const imgData = ctx.getImageData(0, 0, w, h);
    const p32 = new Uint32Array(imgData.data.buffer);
    const channelMask = 0xFFFFFF;  // masks out alpha; NOTE order of channels is ABGR
    const cInvMask = 0xFF000000;   // masks out BGR
    const canFill = idx => (p32[idx] & channelMask) === targetColor;
    const setPixel = idx => p32[idx] = (p32[idx] & cInvMask) | newColor;
    const stack = [x + y * w];     // add starting pos to stack
    while (stack.length) {
        const idx = stack.pop();
        setPixel(idx);
        // for each direction, check whether that pixel can be filled and if so push it
        canFill(idx + 1) && stack.push(idx + 1); // check right
        canFill(idx - 1) && stack.push(idx - 1); // check left
        canFill(idx - w) && stack.push(idx - w); // check up
        canFill(idx + w) && stack.push(idx + w); // check down
    }
    // all done when the stack is empty, so put the pixels back on the canvas
    ctx.putImageData(imgData, 0, 0);
}
Usage
Using the function is slightly different: id is not used, and the colors targetColor and newColor need to be 32-bit words with the red, green, blue byte order reversed (the word layout is ABGR).
For example, if targetColor was yellow = [255, 255, 0] and newColor was blue = [0, 0, 255], then reverse the RGB of each and call fill with:
const yellow = 0xFFFF;
const blue = 0xFF0000;
floodFill(x, y, yellow, blue);
Note that I am matching your function and completely ignoring alpha
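A small helper for building those reversed 32-bit words from an [r, g, b] triplet may make this less error-prone (a sketch; the name rgbToPixel is my own, not from the answer):

```javascript
// Pack [r, g, b] into a 32-bit ABGR word with alpha left at 0,
// matching the channel order seen through a little-endian Uint32Array view.
// Alpha can stay 0 here because the fill function masks it out anyway.
function rgbToPixel([r, g, b]) {
  return (b << 16) | (g << 8) | r;
}

console.log(rgbToPixel([255, 255, 0]).toString(16)); // "ffff"   (yellow)
console.log(rgbToPixel([0, 0, 255]).toString(16));   // "ff0000" (blue)
```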
Inefficient algorithm
Note that this style of fill (marking up to 4 neighbours) is very inefficient, as many pixels will be marked for filling and, by the time they are popped from the stack, will already have been filled via another neighbour.
The following GIF best illustrates the problem, filling a 4 by 3 area with green:
First set the pixel green.
Then push the right, left, up and down neighbours to the stack if they are not green [illustrated as red, orange, cyan and purple boxes].
Pop the bottom item and set it to green.
Repeat.
When a location that is already on the stack is added again, it is shown inset (just for illustration purposes).
Note that when all the pixels are green there are still 6 items on the stack waiting to be popped. I estimate that on average you will be processing about 1.6 times the number of pixels needed. For a large image (say 2000 by 2000, i.e. 4 million pixels) that is a lot of wasted work.
Using an array stack rather than the call stack means:
No more call stack overflows.
Inherently faster code.
Room for many optimizations.
Demo
The demo is a slightly different version, as your logic has some problems. It still uses a stack, but limits the number of entries pushed to the stack to the number of unique columns in the fill area. It:
Includes alpha in the pixel fill test and the pixel write color, simplifying the pixel read and write code.
Checks against the edges of the canvas rather than filling outside the canvas width (wrapping around, Asteroids style).
Reads the target color from the canvas at the first x, y pixel.
Fills columns from the topmost pixel of each column, only branching left or right if the previous left or right pixel was not the target color. This reduces the number of pixels pushed to the stack by orders of magnitude.
Click to flood fill
function floodFill(x, y, newColor) {
    var left, right, leftEdge, rightEdge;
    const w = ctx.canvas.width, h = ctx.canvas.height, pixels = w * h;
    const imgData = ctx.getImageData(0, 0, w, h);
    const p32 = new Uint32Array(imgData.data.buffer);
    const stack = [x + y * w];       // add starting pos to stack
    const targetColor = p32[stack[0]];
    if (targetColor === newColor || targetColor === undefined) { return } // avoid endless loop
    while (stack.length) {
        let idx = stack.pop();
        while (idx >= w && p32[idx - w] === targetColor) { idx -= w } // move to top edge
        right = left = false;
        leftEdge = (idx % w) === 0;
        rightEdge = ((idx + 1) % w) === 0;
        while (p32[idx] === targetColor) {
            p32[idx] = newColor;
            if (!leftEdge) {
                if (p32[idx - 1] === targetColor) { // check left
                    if (!left) {
                        stack.push(idx - 1); // found new column to left
                        left = true;
                    }
                } else if (left) { left = false }
            }
            if (!rightEdge) {
                if (p32[idx + 1] === targetColor) { // check right
                    if (!right) {
                        stack.push(idx + 1); // found new column to right
                        right = true;
                    }
                } else if (right) { right = false }
            }
            idx += w;
        }
    }
    ctx.putImageData(imgData, 0, 0);
}
var w = canvas.width;
var h = canvas.height;
const ctx = canvas.getContext("2d");
var i = 400;
const fillCol = 0xFF0000FF;
const randI = v => Math.random() * v | 0;
ctx.fillStyle = "#FFF";
ctx.fillRect(0, 0, w, h);
ctx.fillStyle = "#000";
while (i--) {
    ctx.fillRect(randI(w), randI(h), 20, 20);
    ctx.fillRect(randI(w), randI(h), 50, 20);
    ctx.fillRect(randI(w), randI(h), 10, 60);
    ctx.fillRect(randI(w), randI(h), 180, 2);
    ctx.fillRect(randI(w), randI(h), 2, 182);
    ctx.fillRect(randI(w), randI(h), 80, 6);
    ctx.fillRect(randI(w), randI(h), 6, 82);
    ctx.fillRect(randI(w), randI(h), randI(40), randI(40));
}
i = 400;
ctx.fillStyle = "#888";
while (i--) {
    ctx.fillRect(randI(w), randI(h), randI(40), randI(40));
    ctx.fillRect(randI(w), randI(h), randI(4), randI(140));
}
var fillIdx = 0;
const fillColors = [0xFFFF0000, 0xFFFFFF00, 0xFF00FF00, 0xFF00FFFF, 0xFF0000FF, 0xFFFF00FF];
canvas.addEventListener("click", (e) => {
    floodFill(e.pageX | 0, e.pageY | 0, fillColors[(fillIdx++) % fillColors.length]);
});
canvas {
position: absolute;
top: 0px;
left: 0px;
}
<canvas id="canvas" width="2048" height="2048">
Flood fill is a problematic process with respect to stack size requirements (be it the system stack or one managed on the heap): in the worst case you will need a recursion depth on the order of the image size. Such cases can occur when you binarize random noise, they are not so improbable.
There is a version of flood filling that is based on filling whole horizontal runs in a single go (https://en.wikipedia.org/wiki/Flood_fill#Scanline_fill). It is advisable in general because it roughly divides the recursion depth by the average length of the runs and is faster in the "normal" cases. Anyway, it doesn't solve the worst-case issue.
There is also an interesting truly stackless algorithm as described here: https://en.wikipedia.org/wiki/Flood_fill#Fixed-memory_method_(right-hand_fill_method). But the implementation looks cumbersome.

Smoothing out values of an array

If I had an array of numbers such as [3, 5, 0, 8, 4, 2, 6], is there a way to “smooth out” the values so they’re closer to each other and display less variance?
I’ve looked into windowing the data using something called the Gaussian function for a 1-dimensional case, which is my array, but am having trouble implementing it. This thread seems to solve exactly what I need but I don’t understand how user naschilling (second post) came up with the Gaussian matrix values.
Context: I’m working on a music waveform generator (borrowing from SoundCloud’s design) that maps the amplitude of the song at time t to a corresponding bar height. Unfortunately there’s a lot of noise, and it looks particularly ugly when the program maps a tiny amplitude which results in a sudden decrease in height. I basically want to smooth out the bar heights so they aren’t so varied.
The language I'm using is Javascript.
EDIT: Sorry, let me be more specific about "smoothing out" the values. According to the thread linked above, a user took an array
[10.00, 13.00, 7.00, 11.00, 12.00, 9.00, 6.00, 5.00]
and used a Gaussian function to map it to
[ 8.35, 9.35, 8.59, 8.98, 9.63, 7.94, 5.78, 7.32]
Notice how the numbers are much closer to each other.
EDIT 2: It worked! Thanks to user Awal Garg's algorithm, here are the results:
No smoothing
Some smoothing
Maximum smoothing
EDIT 3: Here's my final code in JS. I tweaked it so that the first and last elements of the array find their neighbors by wrapping around the array.
var array = [10, 13, 7, 11, 12, 9, 6, 5];

function smooth(values, alpha) {
    var weighted = average(values) * alpha;
    var smoothed = [];
    for (var i = 0; i < values.length; i++) {
        var curr = values[i];
        var prev = smoothed[i - 1] || values[values.length - 1];
        var next = values[i + 1] || values[0];
        var improved = Number(average([weighted, prev, curr, next]).toFixed(2));
        smoothed.push(improved);
    }
    return smoothed;
}

function average(data) {
    var sum = data.reduce(function(sum, value) {
        return sum + value;
    }, 0);
    return sum / data.length;
}

smooth(array, 0.85);
Interesting question!
The algorithm to smooth out the values obviously could vary a lot, but here is my take:
"use strict";
var array = [10, 13, 7, 11, 12, 9, 6, 5];

function avg(v) {
    return v.reduce((a, b) => a + b, 0) / v.length;
}

function smoothOut(vector, variance) {
    var t_avg = avg(vector) * variance;
    var ret = Array(vector.length);
    for (var i = 0; i < vector.length; i++) {
        var prev = i > 0 ? ret[i - 1] : vector[i];
        var next = i < vector.length - 1 ? vector[i + 1] : vector[i];
        ret[i] = avg([t_avg, avg([prev, vector[i], next])]);
    }
    return ret;
}

function display(x, y) {
    console.clear();
    console.assert(x.length === y.length);
    x.forEach((el, i) => console.log(`${el}\t\t${y[i]}`));
}

display(array, smoothOut(array, 0.85));
NOTE: It uses some ES6 features like fat-arrow functions and template strings. Firefox 35+ and Chrome 45+ should work fine; please use the Babel REPL otherwise.
My method basically computes the average of all the elements in the array in advance and uses that as a major factor, along with the current element's value, the one before it, and the one after it, to compute the new value. I am also using the prior value as the one newly computed, not the one from the original array. Feel free to experiment and modify according to your needs. You can also pass in a "variance" parameter to control the difference between the elements: lowering it brings the elements much closer to each other, since it decreases the weight of the average.
A slight variation to loosen out the smoothing would be this:
"use strict";
var array = [10, 13, 7, 11, 12, 9, 6, 5];

function avg(v) {
    return v.reduce((a, b) => a + b, 0) / v.length;
}

function smoothOut(vector, variance) {
    var t_avg = avg(vector) * variance;
    var ret = Array(vector.length);
    for (var i = 0; i < vector.length; i++) {
        var prev = i > 0 ? ret[i - 1] : vector[i];
        var next = i < vector.length - 1 ? vector[i + 1] : vector[i];
        ret[i] = avg([t_avg, prev, vector[i], next]);
    }
    return ret;
}

function display(x, y) {
    console.clear();
    console.assert(x.length === y.length);
    x.forEach((el, i) => console.log(`${el}\t\t${y[i]}`));
}

display(array, smoothOut(array, 0.85));
which doesn't take the averaged value as a major factor.
Feel free to experiment, hope that helps!
The technique you describe sounds like a 1D version of a Gaussian blur. Multiply the values of the 1D Gaussian kernel by the corresponding window of the input array and sum the result. For example:
Assume a Gaussian kernel [0.242, 0.399, 0.242].
To calculate the new value at position n of the input array, multiply the values at n-1, n, and n+1 of the input array by the kernel values and sum the results. E.g. for [3, 5, 0, 8, 4, 2, 6], n = 1:
new_value = 0.242 * 3 + 0.399 * 5 + 0.242 * 0 = 2.721
You can alter the variance of the Gaussian to increase or reduce the effect of the blur.
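This windowed sum can be sketched in JavaScript like so (the clamped edge handling is my own assumption; the kernel values are the example ones above):

```javascript
// Convolve a 1D array with a small Gaussian kernel, clamping at the edges.
function gaussianSmooth(values, kernel = [0.242, 0.399, 0.242]) {
  const half = Math.floor(kernel.length / 2);
  return values.map((_, n) =>
    kernel.reduce((sum, k, j) => {
      // Clamp out-of-range neighbours to the nearest valid index.
      const idx = Math.min(values.length - 1, Math.max(0, n + j - half));
      return sum + k * values[idx];
    }, 0)
  );
}

const smoothed = gaussianSmooth([3, 5, 0, 8, 4, 2, 6]);
console.log(smoothed[1].toFixed(3)); // "2.721", matching the worked example
```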
I stumbled upon this post having the same problem while trying to achieve smooth circular waves from FFT averages.
I've tried normalizing, smoothing and the wildest math to spread the dynamics of an array of averages between 0 and 1. It is of course possible, but the sharp increases in the averaged values remain a bother that basically makes those values unfeasible for direct display.
Instead, I use the FFT average to drive the amplitude, frequency and wavelength of a separately structured, clean sine.
Imagine a sine curve across the screen that moves right to left at a given speed (frequency) times the current average, and has an amplitude of the current average times whatever will then be mapped to [0, 1] in order to eventually determine the wave's z.
The function for calculating the size, color, shift of elements, or whatever visualizes the wave, has to be based on the distance from the center and some array that holds values for each distance, e.g. a certain number of average values.
That very same array can instead be fed with values from a sine that is influenced by the FFT averages, which themselves thus need no smoothing and can remain unaltered.
The effect is pleasingly clean sine waves that appear to be driven by the 'energy' of the sound.
Like this, where 'rings' is an array that a distance function uses to read the z values of the wave's x, y positions:
const wave = {
    y: height / 2,
    length: 0.02,
    amplitude: 30,
    frequency: 0.5
};
//var increment = wave.frequency;
var increment = 0;

function sinewave(length, amplitude, frequency) {
    ctx.strokeStyle = 'red';
    ctx.beginPath();
    ctx.moveTo(0, height / 2);
    for (let i = 0; i < width; i += cellSize) {
        //ctx.lineTo(i, wave.y + Math.sin(i * wave.length + increment) * wave.amplitude)
        ctx.lineTo(i, wave.y + Math.sin(i * length + increment) * amplitude);
        rings.push(map(Math.sin(i * length + increment) * amplitude, 0, 20, 0.1, 1));
        rings.shift();
    }
    ctx.stroke();
    increment += frequency;
}
The function is called each frame (from draw), with the current average FFT value driving the sine function like this, assuming that value is mapped to [0, 1]:
sinewave(0.006, averg * 20, averg * 0.3)
Allowing fluctuating values to determine the wavelength or frequency can have a visually appealing effect; however, the movement of the wave will never seem natural. I've accomplished a near enough result in my case.
To make the sine appear to be driven by each beat you'd need beat detection to determine the exact tempo of the sound the wave is supposed to visualize. Continuous averaging of the distance between larger peaks in the lower range of the FFT spectrum might work there, together with a semi-fixed frequency - at least for EDM.
I know the question was about smoothing array values; forgive me for changing the subject. I just thought that the objective 'sound wave' is an interesting one that could be achieved differently.
And just so this is complete, here's a bit that simply draws circles for each FFT value and assigns a colour according to volume. With line widths relative to the total radius and the sum of volumes this is quite nice:
// colour generator
function getCol(n, m, f) {
    var a = (PIx5 * n) / (3 * m) + PIdiv2;
    var r = map(sin(a), -1, 1, 0, 255);
    var g = map(sin(a - PIx2 / 3), -1, 1, 0, 255);
    var b = map(sin(a - PIx4 / 3), -1, 1, 0, 255);
    return ("rgba(" + r + "," + g + "," + b + "," + f + ")");
}

// draw circles for each fft value with line width and colour relative to value
function drawCircles(arr) {
    var nC = 20; // number of elements from the array we want to use
    var cAvsum = 0;
    // get the sum of all values so we can map a single value against it
    for (var i = 0; i < nC; i++) {
        cAvsum += arr[i];
    }
    var cAv = cAvsum / nC;
    var lastwidth = 0;
    // draw a circle for each element of the array; compute the line width as a
    // fraction of the total width, relative to the element's value vs. the sum
    for (var i = 0; i < nC; i++) {
        ctx.beginPath();
        var radius = lastwidth; //map(arr[i]*2,0,255,0,i*300);
        // use the small colour generator to assign a colour - map the value to the spectrum
        ctx.strokeStyle = getCol(map(arr[i], 0, 255, 0, 1280), 1280, 0.05);
        // map the element value, as a fraction of the element sum, to the line width
        ctx.lineWidth = map(arr[i], 0, cAvsum, 0, width);
        // draw
        ctx.arc(centerX, centerY, radius, 0, Math.PI * 2, false);
        ctx.stroke();
        // add the current radius and line width to lastwidth
        lastwidth = radius + ctx.lineWidth / 2;
    }
}
codepen here: https://codepen.io/sumoclub/full/QWBwzaZ
Always happy about suggestions.

Calculate minimum value not between set of ranges

Given an array of circles (x, y, r values), I want to place a new point such that it has a fixed/known Y coordinate (shown as the horizontal line) and is as close as possible to the center BUT not within any of the existing circles. In the example images, the point in red would be the result.
The circles have known radius and Y-axis attributes, so it is easy to calculate the points where they intersect the horizontal line at the known Y value. Efficiency is important; I don't want to try a bunch of X coordinates and test them all against each item in the circles array. Is there a way to work out this optimal X coordinate mathematically? Any help greatly appreciated. By the way, I'm writing it in JavaScript using the Raphael.js library (because it's the only one that supports IE8), but this is more of a logic problem, so the language doesn't really matter.
I'd approach your problem as follows:
Initialize a set of intervals S, sorted by the X coordinate of the interval, to the empty set
For each circle c, calculate the interval Ic of the intersection of c with the X axis. If c does not intersect, go on to the next circle. Otherwise, test whether Ic overlaps any interval(s) in S (this is quick because S is sorted); if so, remove all intersecting intervals from S, collapse Ic and all removed intervals into a new interval I'c, and add I'c to S. If there are no intersections, add Ic to S.
Check whether any interval in S includes the center (again, fast because S is sorted). If so, select the interval endpoint closest to the center; if not, select the center itself as the closest point.
Basically, the equation of a circle is (x - cx)² + (y - cy)² = r². Therefore you can easily find the intersection points between the circle and the X axis by substituting y with 0. After that you just have a simple quadratic equation to solve: x² - 2·cx·x + cx² + cy² - r² = 0. It has 3 possible outcomes:
No intersection - the discriminant is negative (its square root comes out as NaN in JavaScript); ignore this result.
One intersection - both solutions match; use [value, value].
Two intersections - the solutions differ; use [value1, value2].
Sort the newly calculated intersection intervals, then merge them where possible. However, keep in mind that every programming language works with floating-point approximations, so you need to define a delta value for your point comparison and take it into consideration when merging the intervals.
When the intervals are merged, you can generate your candidate x coordinates by subtracting/adding that same delta value from/to the beginning/end of every interval. Lastly, of all the points, the one closest to zero is your answer.
Here is an example with O(n log n) complexity, oriented rather towards readability. I've used 1e-10 for delta:
var circles = [
    {x: 0,   y: 0,    r: 1},
    {x: 2.5, y: 0,    r: 1},
    {x: -1,  y: 0.5,  r: 1},
    {x: 2,   y: -0.5, r: 1},
    {x: -2,  y: 0,    r: 1},
    {x: 10,  y: 10,   r: 1}
];

console.log(getClosestPoint(circles, 1e-10));

function getClosestPoint(circles, delta)
{
    var intervals = [],
        len = circles.length,
        i, result;
    for (i = 0; i < len; i++)
    {
        result = getXIntersection(circles[i]);
        if (result)
        {
            intervals.push(result);
        }
    }
    intervals = intervals.sort(function(a, b){
        return a.from - b.from;
    });
    if (intervals.length <= 0) return 0;
    intervals = mergeIntervals(intervals, delta);
    // if the center is not inside any merged interval, it is already a valid answer
    var covered = intervals.some(function(iv){
        return iv.from - delta <= 0 && 0 <= iv.to + delta;
    });
    if (!covered) return 0;
    var points = getClosestPoints(intervals, delta);
    points = points.sort(function(a, b){
        return Math.abs(a) - Math.abs(b);
    });
    return points[0];
}

function getXIntersection(circle)
{
    var d = Math.sqrt(circle.r * circle.r - circle.y * circle.y);
    return isNaN(d) ? null : {from: circle.x - d, to: circle.x + d};
}

function mergeIntervals(intervals, delta)
{
    var curr = intervals[0],
        result = [],
        len = intervals.length, i;
    for (i = 1; i < len; i++)
    {
        if (intervals[i].from <= curr.to + delta)
        {
            curr.to = Math.max(curr.to, intervals[i].to);
        } else {
            result.push(curr);
            curr = intervals[i];
        }
    }
    result.push(curr);
    return result;
}

function getClosestPoints(intervals, delta)
{
    var result = [],
        len = intervals.length, i;
    for (i = 0; i < len; i++)
    {
        result.push(intervals[i].from - delta);
        result.push(intervals[i].to + delta);
    }
    return result;
}
1. Create the intersect_segments array (normalizing at x = 0, y = 0).
2. Sort intersect_segments by upper limit and remove those with upper limit < 0.
3. Initialize point1 = 0 and segment = 0.
4. Loop while point1 is inside intersect_segments[segment]:
4.1. Move point1 up to the upper limit of intersect_segments[segment].
4.2. Advance to the next segment.
5. Sort intersect_segments by lower limit and remove those with lower limit > 0.
6. Initialize point2 = 0 and segment = 0.
7. Loop while point2 is inside intersect_segments[segment]:
7.1. Move point2 down to the lower limit of intersect_segments[segment].
7.2. Advance to the next segment.
8. The answer is whichever of point1 and point2 has the minimum absolute value.

JS canvas implementation of Julia set

The problem is currently solved. In case some one wants to see the colored fractal, the code is here.
Here is the previous problem:
Even though the algorithm is straightforward, I seem to have a small error (some fractals are drawn correctly and some are not). You can quickly check in jsFiddle that for c = -1 and c = 1/4 the fractal is drawn correctly, but if I take c = i, the image is totally wrong.
Here is the implementation.
HTML
<canvas id="a" width="400" height="400"></canvas>
JS
function point(pos, canvas){
    canvas.fillRect(pos[0], pos[1], 1, 1); // there is no drawPoint in JS, so I simulate it
}

function conversion(x, y, width, R){ // transformation from canvas coordinates to the XY plane
    var m = R / width;
    var x1 = m * (2 * x - width);
    var y2 = m * (width - 2 * y);
    return [x1, y2];
}

function f(z, c){ // calculate the value of the function with complex arguments
    return [z[0]*z[0] - z[1]*z[1] + c[0], 2 * z[0] * z[1] + c[1]];
}

function abs(z){ // absolute value of a complex number
    return Math.sqrt(z[0]*z[0] + z[1]*z[1]);
}

function init(){
    var length = 400,
        width = 400,
        c = [-1, 0], // all complex numbers are in the form [x, y], which means x + i*y
        maxIterate = 100,
        R = (1 + Math.sqrt(1 + 4*abs(c))) / 2,
        z;
    var canvas = document.getElementById('a').getContext("2d");
    var flag;
    for (var x = 0; x < width; x++){
        for (var y = 0; y < length; y++){ // for every point of the canvas plane
            flag = true;
            z = conversion(x, y, width, R); // convert it to the XY plane
            for (var i = 0; i < maxIterate; i++){ // I know I could change this to a while and remove the flag
                z = f(z, c);
                if (abs(z) > R){ // if any iterate exceeds R, do not draw this point
                    flag = false;
                    break;
                }
            }
            if (flag) point([x, y], canvas);
        }
    }
}
Although it took me only a few minutes to write, I spent much more time trying to find out why it does not work for all the cases. Any idea where I screwed up?
Good news! (or bad news)
Your implementation is completely correct. Unfortunately, with c = [0, 1], the Julia set has very few points. I believe it has measure zero (unlike, say, the Mandelbrot set). So the probability of a random point being in that Julia set is 0.
If you reduce your iterations to 15 (JSFiddle), you can see the fractal. One hundred iterations is more "accurate", but as the number of iterations increases, the chance that a point on your 400 x 400 grid will be included in your fractal approximation decreases to zero.
Often you will see the Julia fractal rendered with multiple colors, where the color indicates how quickly the orbit diverges (or whether it diverges at all), like in this Flash demonstration. This allows the Julia fractal to remain visible even in cases like c = i.
Your choices are
(1) Reduce your # of iterations, possibly depending on c.
(2) Increase the size of your sampling (and your canvas), possibly depending on c.
(3) Color the points of your canvas according to the iteration # at which R was exceeded.
The last option will give you the most robust result.
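A minimal sketch of option (3): instead of a boolean "escaped or not" flag, record the iteration at which |z| first exceeds R and map that count to a color. The function names `iterationCount` and `colorFor` are illustrative, not from the original post.

```javascript
// Escape-time count: returns the iteration at which |z| first exceeds R,
// or maxIterate if the orbit never escapes (point treated as "inside").
function iterationCount(z, c, R, maxIterate) {
    for (var i = 0; i < maxIterate; i++) {
        z = [z[0]*z[0] - z[1]*z[1] + c[0], 2*z[0]*z[1] + c[1]]; // z -> z^2 + c
        if (Math.sqrt(z[0]*z[0] + z[1]*z[1]) > R) return i;     // escaped at step i
    }
    return maxIterate;
}

// One possible mapping from iteration count to a CSS color string:
// bounded orbits are black, escaping orbits get a hue by escape speed.
function colorFor(count, maxIterate) {
    if (count === maxIterate) return "black";
    var hue = Math.floor(360 * count / maxIterate);
    return "hsl(" + hue + ",100%,50%)";
}
```

In the drawing loop you would then set `canvas.fillStyle = colorFor(iterationCount(z, c, R, maxIterate), maxIterate)` before calling `fillRect`, instead of skipping escaped points entirely.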

Rotating 2D Vector by unknown angle such that its direction vector is [1,0]

I am trying to rotate a vector [x,y] around the origin such that when the rotation is completed it lies on the X axis. In order to do this, I'm first computing the angle between [x,y] and [1,0], then applying a simple 2D rotation matrix to it. I'm using numericjs to work with the vectors.
math.angleBetween = function(A, B) {
var x = numeric.dot(A, B) / (numeric.norm2(A) * numeric.norm2(B));
if(Math.abs(x) <= 1) {
return Math.acos(x);
} else {
throw "Bad input to angleBetween";
}
};
math.alignToX = function(V) {
var theta = -math.angleBetween([1,0], V);
var R = [[Math.cos(theta), -Math.sin(theta)],
[Math.sin(theta), Math.cos(theta)]];
return numeric.dot(R, V);
};
(Note: math is a namespace object within my project. Math is ye olde math object.)
This code works sometimes, however there are occasions where no matter how many times I run math.alignToX the vector never even gets close to aligning with the X axis. I'm testing this by checking if the y coordinate is less than 1e-10.
I've also tried using Math.atan2 with an implicit z coordinate of 0, but the results have been the same. Errors are not being thrown. Some example results:
math.alignToX([152.44444444444434, -55.1111111111111])
// result: [124.62691466033475, -103.65652585400568]
// expected: [?, 0]
math.alignToX([372, 40])
// result: [374.14435716712336, -2.0605739337042905e-13]
// expected: [?, 0]
// this value has abs(y coordinate) < 1e-10, so its considered aligned
What am I doing wrong?
If you're rotating something other than your vector, then you'll need to use your R matrix. But if you just need to rotate your vector, the result will be [Math.sqrt(x*x+y*y),0].
Actually, the task of building a rotation matrix that aligns a known 2d vector with [1, 0] doesn't require any trigonometric functions at all.
In fact, if [x y] is your vector and s is its length (s = Sqrt(x*x + y*y)), then the transformation that maps [x y] to align with [1 0] (pure rotation, no scaling) is just:

T = (1/s) [ x  y]
          [-y  x]

so that T applied to [x y] gives [s 0].
For example, suppose your vector is [Sqrt(3)/2, 1/2]. This is a unit vector as you can easily check so s = 1.
T = [Sqrt(3)/2    1/2    ]
    [  -1/2    Sqrt(3)/2 ]
Multiplying T by our vector we get:
[Sqrt(3)/2    1/2    ] [Sqrt(3)/2]   [1]
[  -1/2    Sqrt(3)/2 ] [   1/2   ] = [0]
So in finding the rotation angle (which in this case is Pi/6) and then creating the rotation matrix, you've just come full circle back to what you started with. The rotation angle for [Sqrt(3)/2, 1/2] is Pi/6, and cos(Pi/6) is Sqrt(3)/2 = x, sin(Pi/6) is 1/2 = y.
Put another way, if you know the vector, you ALREADY know the sine and cosine of its angle with the x axis from the definition of sine and cosine:

cos a = x/s
sin a = y/s

where s = || [x, y] || is the length of the vector.
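A sketch of this trig-free approach in code: the rotation is built directly from the vector's components, with no acos or atan2 (and hence no range problems). The name `alignToXNoTrig` is illustrative, not part of the original `math` namespace, and plain arrays are used instead of numericjs.

```javascript
// Rotate V onto the positive x axis using cos a = x/s, sin a = y/s
// directly, where s is the length of V. Returns [s, 0] up to rounding.
function alignToXNoTrig(V) {
    var x = V[0], y = V[1];
    var s = Math.sqrt(x * x + y * y);   // length of V
    var c = x / s, sn = y / s;          // cos and sin of V's angle with the x axis
    // Apply R(-a) = [ cos a  sin a; -sin a  cos a ] to V:
    return [ c * x + sn * y,            // = s
            -sn * x + c * y];           // = 0 (up to floating point)
}
```

This works in every quadrant, because the signs of x and y carry the quadrant information that acos throws away.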
My problem is so mind-bendingly obvious that I cannot believe I didn't see it. While I'm checking the domain of Math.acos, I'm not checking the range at all! The problem occurs when the required rotation angle lies outside of acos's range (which is [0, PI]). Here is what I did to fix it:
math.alignToX = function(V) {
    var theta = -math.angleBetween([1,0], V);
    var R = [[Math.cos(theta), -Math.sin(theta)],
             [Math.sin(theta),  Math.cos(theta)]];
    var result = numeric.dot(R, V);
    if(Math.abs(result[1]) < ZERO_THRESHOLD) {
        return result;
    } else {
        V = numeric.dot([[-1, 0], [0, -1]], V); // rotate by PI
        theta = -math.angleBetween([1,0], V);
        R = [[Math.cos(theta), -Math.sin(theta)],
             [Math.sin(theta),  Math.cos(theta)]];
        result = numeric.dot(R, V);
        if(Math.abs(result[1]) < ZERO_THRESHOLD) {
            return result;
        } else {
            throw "Unable to align " + V; // still don't trust it 100%
        }
    }
};
For the broken example I gave above, this produces:
[162.10041088743887, 2.842170943040401e-14]
The Y coordinate on this result is significantly less than my ZERO_THRESHOLD (1e-10). I almost feel bad that I solved it myself, but I don't think I would have done so nearly as quickly had I not posted here. I saw the problem when I was checking over my post for typos.
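As a footnote, the retry-after-rotating-by-PI workaround can be avoided entirely by computing a signed angle with Math.atan2, whose range (-PI, PI] covers all quadrants. A minimal sketch (the name `alignToXAtan2` is illustrative; plain arrays are used instead of numericjs):

```javascript
// Rotate V onto the positive x axis using the signed angle from atan2,
// so no quadrant fix-up or second attempt is needed.
function alignToXAtan2(V) {
    var theta = -Math.atan2(V[1], V[0]); // negative of V's angle with the x axis
    var c = Math.cos(theta), s = Math.sin(theta);
    // Standard 2D rotation matrix applied to V:
    return [c * V[0] - s * V[1],
            s * V[0] + c * V[1]];
}
```

For the broken example above, this yields a y coordinate on the order of 1e-14 in a single pass.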
