I'm not using any engine, but instead trying to build my own softbody dynamics for fun using Verlet integration. I made a cube defined by 4x4 points, with segments keeping its shape like so:
I have the points collide against the edges of the scene and it seems to work fine. Though I do get some cases where the points collapse in on themselves and create a dent instead of maintaining the box shape. For example, if it hits at a high enough velocity and lands on a corner, it tends to crumble:
I must be doing something wrong or out of order when solving the collision.
Here's how I'm handling it. It's in JavaScript, though the language doesn't matter; feel free to reply in any language:
sim = function() {
    // Sim all points.
    for (let i = 0; i < this.points.length; i++) {
        this.points[i].sim();
    }

    // Keep in bounds.
    let border = 100;
    for (let i = 0; i < this.points.length; i++) {
        let p = this.points[i];
        let vx = p.pos.x - p.oldPos.x;
        let vy = p.pos.y - p.oldPos.y;
        if (p.pos.y > height - border) {
            // Bottom screen
            p.pos.y = height - border;
            p.oldPos.y = p.pos.y + vy;
        } else if (p.pos.y < 0 + border) {
            // Top screen
            p.pos.y = 0 + border;
            p.oldPos.y = p.pos.y + vy;
        }
        if (p.pos.x < 0 + border) {
            // Left screen
            p.pos.x = 0 + border;
            p.oldPos.x = p.pos.x + vx;
        } else if (p.pos.x > width - border) {
            // Right screen
            p.pos.x = width - border;
            p.oldPos.x = p.pos.x + vx;
        }
    }

    // Sim its segments.
    let timesteps = 20;
    for (let ts = 0; ts < timesteps; ts++) {
        for (let i = 0; i < this.segments.length; i++) {
            this.segments[i].sim();
        }
    }
}
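For reference, the per-point and per-segment sim() calls above are the usual Verlet ones. A minimal sketch of what they're assumed to look like (the Point/Segment prototypes and the gravity constant are placeholders, not code from the question):

Point.prototype.sim = function() {
    // Implied velocity is the distance travelled last step.
    let vx = this.pos.x - this.oldPos.x;
    let vy = this.pos.y - this.oldPos.y;
    this.oldPos.x = this.pos.x;
    this.oldPos.y = this.pos.y;
    // Verlet update: keep moving by the implied velocity, plus gravity.
    this.pos.x += vx;
    this.pos.y += vy + 0.5; // gravity, tune as needed
};

Segment.prototype.sim = function() {
    // Relax a distance constraint between endpoints a and b.
    let dx = this.b.pos.x - this.a.pos.x;
    let dy = this.b.pos.y - this.a.pos.y;
    let dist = Math.sqrt(dx * dx + dy * dy);
    let diff = (dist - this.restLength) / dist; // signed error fraction
    // Move each endpoint half the error towards the rest length.
    this.a.pos.x += dx * 0.5 * diff;
    this.a.pos.y += dy * 0.5 * diff;
    this.b.pos.x -= dx * 0.5 * diff;
    this.b.pos.y -= dy * 0.5 * diff;
};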
Please let me know if I need to post any other details.
This may be better answered on a physics or game-dev exchange (and likely has already been), but I'll give it a crack because it's nice to revisit this stuff...
Verlet integration is a fantastically stable, if not physically accurate, method, but the problem here is not the integration method, nor anything you've done wrong as far as I can see. It's the type of simulation: mass-aggregate physics (building geometry out of dynamic constraints), which is really nice and simple :) but has some inherent deficiencies and limitations, and this particular problem is one of them.
First, look carefully at the arrangement of constraints in the collapsed box - they are just as valid as the initial one. Although individual constraints may not be as well satisfied, in combination they are still in a local equilibrium with each other - there's nothing compelling them to return to their original arrangement.
Second, the external force (collision with an immovable plane) is what overcame the constraint forces in the first place. Even if the constraints responded with forces growing towards infinity as they compress, the simulation can never match reality, because it works a frame at a time rather than continuously, and the longer the frame, the more error.
To retain shape more reliably, a more explicit angular constraint is needed, which usually involves quaternions. These are quite a bit harder to implement than distance constraints, and once you have them you will be pretty far along the road to implementing rigid-body physics anyway. But there are ways to mitigate instead:
1. Use a smaller interval
All a posteriori simulations, and numerical integration in general, have some inherent instability. While different integration methods (e.g. Verlet) can mitigate this, generally the smaller the interval, the better the stability. A smaller interval alone will give the constraints more opportunity to push back against external forces, and it will also increase the maximum stable constraint stiffness.
This will probably require you to optimise your engine more. Additionally, make sure you don't couple your render step to the simulation: you want to be able to render at multiples of the simulation interval. That allows you to run the simulation faster for stability without unnecessarily rendering more frames than is useful, which would just slow the simulation down.
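A minimal sketch of such a decoupled loop, assuming hypothetical simulate() and render() functions that wrap the code above:

let fixedDt = 1 / 120;  // simulation interval in seconds (smaller = more stable)
let accumulator = 0;
let lastTime = performance.now();

function frame(now) {
    accumulator += (now - lastTime) / 1000;
    lastTime = now;
    // Run as many fixed-size simulation steps as the elapsed time requires.
    while (accumulator >= fixedDt) {
        simulate(fixedDt); // your sim() from the question
        accumulator -= fixedDt;
    }
    render(); // draw once per display frame, not once per sim step
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);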
2. Try more stable shapes
For your box-of-boxes shape, see what happens when you add more constraints between far vertices; it will add more global stability to the shape (see the sketch at the end of this section).
A common type of shape people make with mass-aggregate physics is the polygon, because it's straightforward to make polygons highly interconnected (a little like a bicycle wheel, but where each spoke point connects to every other spoke point). In fact, spoke-like designs are among the most stable, but you can usually apply the same principles to more irregular shapes once you get an intuition for it.
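For instance, fully cross-bracing the point grid is a short loop. A sketch, assuming points is the flat array of grid points and a hypothetical makeSegment(a, b) helper that creates a distance constraint at the points' current separation:

// Connect every point to every other point, spoke-style.
for (let i = 0; i < points.length; i++) {
    for (let j = i + 1; j < points.length; j++) {
        makeSegment(points[i], points[j]);
    }
}

For a 4x4 grid this adds more segments than you strictly need; connecting only the longer diagonals gives most of the benefit for less work per frame.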
3. A different type of constraint
Quaternions are not the only possible constraint that will help retain configuration. The main problem is not so much that an explicit angle isn't being maintained, but rather that when points are forced past a certain position relative to their siblings, their distance constraints flip and start working in reverse - keeping them on that side.
There could be many different ways to solve this without something as complex as quaternions - in fact I will give it some thought and edit this post if I come up with anything; I have a bit more ammunition than when I last explored mass-aggregate physics...
I'm making a 2D game in JavaScript. For it, I need to be able to "perfectly" check collision between two sprites which have x/y positions (corresponding to their centre), a rotation in radians, and of course known width/height.
After spending many weeks of work (yeah, I'm not even exaggerating), I finally came up with a working solution, which unfortunately turned out to be about 10,000x too slow and impossible to optimize in any meaningful way. I have entirely abandoned the idea of actually drawing and reading pixels from a canvas; that's just not going to cut it, but please don't make me explain in detail why. This needs to be done with math and an imagined 2D world/grid, and from talking to numerous people the basic idea became obvious. However, the practical implementation is not. Here's what I do and want to do:
What I already have done
At the beginning of the program, each sprite is scanned pixel by pixel in its default upright position, and a 1-dimensional array is filled with data corresponding to the alpha channel of the image: solid pixels are represented by a 1, transparent ones by 0. See figure 3.
The idea behind that is that those 1s and 0s no longer represent "pixels", but "little math orbs positioned at perfect distances from each other", which can be rotated without "losing" or "adding" data, as happens with pixels when you rotate images by anything but 90 degrees at a time.
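For reference, that startup pass can be done with one offscreen-canvas read. A sketch, assuming each sprite carries its source Image in a hypothetical sprite.image property (the canvas is only touched once here, never per frame):

function makeQuickVectorFromImageAlpha(sprite) {
    let canvas = document.createElement("canvas");
    canvas.width = sprite.width;
    canvas.height = sprite.height;
    let ctx = canvas.getContext("2d");
    ctx.drawImage(sprite.image, 0, 0);
    let data = ctx.getImageData(0, 0, sprite.width, sprite.height).data;
    sprite.vectorForCollisionDetection = new Array(sprite.width * sprite.height);
    for (let i = 0; i < sprite.width * sprite.height; i++) {
        // Every 4th byte is the alpha channel: 1 = solid, 0 = transparent.
        sprite.vectorForCollisionDetection[i] = data[i * 4 + 3] > 0 ? 1 : 0;
    }
}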
I naturally do the quick "bounding box" check first to see if I should bother calculating accurately. This is done. The problem is the fine/"for-sure" check...
What I cannot figure out
Now that I need to figure out whether the sprites collide for sure, I need to construct a math expression of some sort using "linear algebra" (which I do not know) to determine if these "rectangles of data points", positioned and rotated correctly, both have a "1" in an overlapping position.
Although the theory is very simple, the practical code needed to accomplish this is simply beyond my capabilities. I've stared at the code for many hours, asking numerous people (and had massive problems explaining my problem clearly) and really put in an effort. Now I finally want to give up. I would very, very much appreciate getting this done with. I can't even give up and "cheat" by using a library, because nothing I find even comes close to solving this problem from what I can tell. They are all impossible for me to understand, and seem to have entirely different assumptions/requirements in mind. Whatever I'm doing always seems to be some special case. It's annoying.
This is the pseudo code for the relevant part of the program:
function doThisAtTheStartOfTheProgram()
{
    makeQuickVectorFromImageAlpha(sprite1);
    makeQuickVectorFromImageAlpha(sprite2);
}
function detectCollision(sprite1, sprite2)
{
    // This easy, outer check works. Please ignore it as it is unrelated to the problem.
    if (bounding_box_match)
    {
        /*
        This part is the entire problem.
        I must do a math-based check to see if they really collide.
        These are the relevant variables as I have named them:

        sprite1.x
        sprite1.y
        sprite1.rotation // in radians
        sprite1.width
        sprite1.height
        sprite1.diagonal // might not be needed, but is provided

        sprite2.x
        sprite2.y
        sprite2.rotation // in radians
        sprite2.width
        sprite2.height
        sprite2.diagonal // might not be needed, but is provided

        sprite1.vectorForCollisionDetection
        sprite2.vectorForCollisionDetection

        Can you please help me construct the math expression, or the series of
        math expressions, needed to do this check?

        To clarify, using the variables above, I need to check whether the two
        sprites (which can rotate around their centre and have any position and
        any dimensions) are colliding. A collision happens when at least one
        "unit" (an imagined sphere) of BOTH sprites is on the same unit in our
        imagined 2D world (starting from 0,0 in the top-left).
        */
        if (accurate_check_goes_here)
            return true;
    }
    return false;
}
In other words, "accurate_check_goes_here" is the part I'm asking about. It doesn't need to be a single expression, of course, and I would very much prefer seeing it done in "steps" (with comments!) so that I have a chance of understanding it, but please don't see this as "spoon feeding". I fully admit I suck at math and this is beyond my capabilities. It's just a fact. I want to move on and work on the stuff I can actually solve on my own.
To clarify: the 1D arrays are 1D and not 2D due to performance. As it turns out, speed matters very much in JS World.
Although this is a non-profit project, entirely made for private satisfaction, I just don't have the time and energy to order and sit down with some math book and learn about that from the ground up. I take no pride in lacking the math skills which would help me a lot, but at this point, I need to get this game done or I'll go crazy. This particular problem has prevented me from getting any other work done for far too long.
I hope I have explained the problem well. However, one of the most frustrating feelings is when people send well-meaning replies that unfortunately show that the person helping has not read the question. I'm not pre-insulting you all -- I just wish that won't happen this time! Sorry if my description is poor. I really tried my best to be perfectly clear.
Illustrations: (image links removed)
First you need to understand that detecting such collisions cannot be done with a single simple equation, because the shapes of the sprites matter and those are described by an array of Width x Height = Area bits. So the worst-case complexity of the algorithm must be at least O(Area).
Here is how I would do it:
Represent the sprites in two ways:
1) a bitmap indicating where pixels are opaque,
2) a list of the coordinates of the opaque pixels. [Optional, for speedup, in case of hollow sprites.]
Choose the sprite with the shortest pixel list. Find the rigid transform (translation + rotation) that transforms the local coordinates of this sprite into the local coordinates of the other sprite (this is where linear algebra comes into play - the rotation is the difference of the angles, the translation is the vector between upper-left corners - see http://planning.cs.uiuc.edu/node99.html).
Now scan the opaque pixel list, transforming the local coordinates of the pixels to the local coordinates of the other sprite. Check if you fall on an opaque pixel by looking up the bitmap representation.
This takes at worst O(Opaque Area) coordinate transforms + pixel tests, which is optimal.
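A sketch of steps 2-3 in the question's own terms: positions are sprite centres, vectorForCollisionDetection is the 0/1 array, and sprite1.pixelList is a hypothetical precomputed list of [x, y] pairs for sprite1's opaque pixels:

function preciseCollision(sprite1, sprite2) {
    // Combined rigid transform: sprite1 local -> world -> sprite2 local.
    let cos1 = Math.cos(sprite1.rotation), sin1 = Math.sin(sprite1.rotation);
    let cos2 = Math.cos(-sprite2.rotation), sin2 = Math.sin(-sprite2.rotation);
    for (let k = 0; k < sprite1.pixelList.length; k++) {
        // Local coordinates relative to sprite1's centre.
        let lx = sprite1.pixelList[k][0] - (sprite1.width - 1) / 2;
        let ly = sprite1.pixelList[k][1] - (sprite1.height - 1) / 2;
        // Rotate into world space and translate to sprite1's position.
        let wx = cos1 * lx - sin1 * ly + sprite1.x;
        let wy = sin1 * lx + cos1 * ly + sprite1.y;
        // Undo sprite2's translation and rotation.
        let dx = wx - sprite2.x, dy = wy - sprite2.y;
        let tx = cos2 * dx - sin2 * dy + (sprite2.width - 1) / 2;
        let ty = sin2 * dx + cos2 * dy + (sprite2.height - 1) / 2;
        // Snap to the nearest cell of sprite2's grid and look it up.
        let ix = Math.round(tx), iy = Math.round(ty);
        if (ix >= 0 && ix < sprite2.width && iy >= 0 && iy < sprite2.height &&
            sprite2.vectorForCollisionDetection[iy * sprite2.width + ix] === 1) {
            return true;
        }
    }
    return false;
}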
If your sprites are zoomed in (big pixels), as a first approximation you can ignore the zooming. If you need more accuracy, you can sample a few points per pixel. Exact computation will involve a square/square intersection algorithm (with rotation), which is more complex and costly. See http://en.wikipedia.org/wiki/Sutherland%E2%80%93Hodgman_algorithm.
Here is an exact solution that will work regardless of the size of the pixels (zoomed or not).
Use both a bitmap representation (1 opacity bit per pixel) and a decomposition into squares or rectangles (rectangles are optional, just an optimization; single pixels are ok).
Process all rectangles of the (source) sprite in turn. By means of rotation/translation, map the rectangles to the coordinate space of the other sprite (target). You will obtain a rotated rectangle overlaid on a grid of pixels.
Now you will fill this rectangle with a scanline algorithm: first split the rectangle in three (two triangles and one parallelogram) using horizontal lines through the rectangle's vertices. For each of the three shapes independently, find all horizontal between-pixel lines that cross it (simply done by looking at the range of Y values). For every such horizontal line, compute the two intersection points. Then find all pixel corners that fall between the two intersections (range of X values). For any pixel having a corner inside the rectangle, look up the corresponding bit in the (target) sprite's bitmap.
Not too difficult to program, and no complicated data structures. The computational effort is roughly proportional to the number of target pixels covered by each source rectangle.
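A simplified sketch of that coverage computation: instead of the three-way scanline split, it sweeps the mapped rectangle's bounding box and tests each pixel corner with the inverse transform - same result, somewhat less efficient, and every name here is an assumption:

// rect = {x, y, w, h} in source coordinates; cos/sin/tx/ty describe the
// source -> target rigid transform; hit(x, y) receives each covered corner.
function coveredCorners(rect, cos, sin, tx, ty, targetW, targetH, hit) {
    // Map the four corners to target space to get a bounding box.
    let corners = [[rect.x, rect.y], [rect.x + rect.w, rect.y],
                   [rect.x, rect.y + rect.h], [rect.x + rect.w, rect.y + rect.h]];
    let xs = [], ys = [];
    for (let [u, v] of corners) {
        xs.push(cos * u - sin * v + tx);
        ys.push(sin * u + cos * v + ty);
    }
    let x0 = Math.max(0, Math.floor(Math.min(...xs)));
    let x1 = Math.min(targetW, Math.ceil(Math.max(...xs)));
    let y0 = Math.max(0, Math.floor(Math.min(...ys)));
    let y1 = Math.min(targetH, Math.ceil(Math.max(...ys)));
    for (let y = y0; y <= y1; y++) {
        for (let x = x0; x <= x1; x++) {
            // Inverse rigid transform (a rotation's inverse is its transpose).
            let dx = x - tx, dy = y - ty;
            let u = cos * dx + sin * dy;
            let v = -sin * dx + cos * dy;
            if (u >= rect.x && u <= rect.x + rect.w &&
                v >= rect.y && v <= rect.y + rect.h) {
                hit(x, y); // this corner falls inside the mapped rectangle
            }
        }
    }
}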
Although you have already stated that you don't feel rendering to the canvas and checking that data is a viable solution, I'd like to present an idea which may or may not have already occurred to you and which ought to be reasonably efficient.
This solution relies on the fact that compositing a pixel at half opacity twice yields a noticeably higher opacity than a single layer (with standard source-over blending, 50% over 50% gives 75%). The steps follow:
Size the test canvas so that both sprites will fit on it (this will also clear the canvas, so you don't have to create a new element each time you need to test for collision).
Transform the sprite data such that any pixel that has any opacity or color is set to be black at 50% opacity.
Render the sprites at the appropriate distance and relative position to one another.
Loop through the resulting canvas data. If any pixel's alpha exceeds the single-sprite level of 50% (i.e. > 128 of 255), then a collision has been detected. Return true.
Else, return false.
Wash, rinse, repeat.
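A sketch of those steps, assuming a reusable scratch canvas and a hypothetical drawSprite() that renders a sprite's blacked-out 50%-opacity mask at its position and rotation:

function canvasCollision(scratch, sprite1, sprite2) {
    let ctx = scratch.getContext("2d");
    ctx.clearRect(0, 0, scratch.width, scratch.height);
    ctx.globalAlpha = 0.5; // each sprite contributes 50% opacity
    drawSprite(ctx, sprite1);
    drawSprite(ctx, sprite2);
    let data = ctx.getImageData(0, 0, scratch.width, scratch.height).data;
    // 0.5 over 0.5 composites to 0.75 alpha (~191 of 255), so anything past
    // the single-layer value of ~128 means both sprites covered that pixel.
    for (let i = 3; i < data.length; i += 4) {
        if (data[i] > 128) return true;
    }
    return false;
}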
This method should run reasonably fast. Now, for optimization - the bottleneck here will likely be the final opacity check (although rendering the images to the canvas could be slow, as might clearing/resizing it):
Reduce the resolution of the opacity detection in the final step by changing the increment in your loop through the pixels of the final data.
Loop from middle up and down, rather than from the top to bottom (and return as soon as you find any single collision). This way you have a higher chance of encountering any collisions earlier in the loop, thus reducing its length.
I don't know what your limitations are and why you can't render to canvas, since you have declined to comment on that, but hopefully this method will be of some use to you. If it isn't, perhaps it might come in handy to future users.
Please see if the following idea works for you. Here I create a linear array of points corresponding to pixels set in each of the two sprites. I then rotate/translate these points, to give me two sets of coordinates for individual pixels. Finally, I check the pixels against each other to see if any pair are within a distance of 1 - which is "collision".
You can obviously add some segmentation of your sprite (only test "boundary pixels"), test for bounding boxes, and do other things to speed this up - but it's actually pretty fast (once you take out all the console.log() statements that are just there to confirm things are behaving…). Note that I test dx first - if that is too large, there is no need to compute the full distance. Also, I don't need the square root to know whether the distance is less than 1.
I am not sure whether the use of new Array() inside the pixLocs function will cause a problem with memory leaks. Something to look at if you run this function 30 times per second...
<html>
<script type="text/javascript">
var s1 = {
    'pix': new Array(0,0,1,1,0,0,1,0,0,1,1,0),
    'x': 1,
    'y': 2,
    'width': 4,
    'height': 3,
    'rotation': 45};

var s2 = {
    'pix': new Array(1,0,1,0,1,0,1,0,1,0,1,0),
    'x': 0,
    'y': 1,
    'width': 4,
    'height': 3,
    'rotation': 90};

pixLocs(s1);
console.log("now rotating the second sprite...");
pixLocs(s2);
console.log("collision detector says " + collision(s1, s2));

function pixLocs(s) {
    var i;
    var x, y;
    var l1, l2;
    var r1, r2;
    var p1, p2;
    var ca, sa;
    var pi;
    s.locx = new Array();
    s.locy = new Array();
    pi = Math.acos(0.0) * 2;
    ca = Math.cos(s.rotation * pi / 180.0);
    sa = Math.sin(s.rotation * pi / 180.0);
    i = 0;
    for(x = 0; x < s.width; ++x) {
        for(y = 0; y < s.height; ++y) {
            if(s.pix[i++] == 1) {
                // offset to center of sprite
                l1 = x - (s.width - 1) * 0.5;
                l2 = y - (s.height - 1) * 0.5;
                // rotate:
                r1 = ca * l1 - sa * l2;
                r2 = sa * l1 + ca * l2;
                // add position:
                p1 = r1 + s.x;
                p2 = r2 + s.y;
                console.log("rotated pixel [ " + x + "," + y + " ] is at ( " + p1 + "," + p2 + " ) ");
                s.locx.push(p1);
                s.locy.push(p2);
            }
            else console.log("no pixel at [" + x + "," + y + "]");
        }
    }
}

function collision(s1, s2) {
    var i, j;
    var dx, dy;
    for (i = 0; i < s1.locx.length; i++) {
        for (j = 0; j < s2.locx.length; j++) {
            dx = Math.abs(s1.locx[i] - s2.locx[j]);
            if(dx < 1) {
                dy = Math.abs(s1.locy[i] - s2.locy[j]);
                if (dx*dx + dy*dy < 1) return 1;
            }
        }
    }
    return 0;
}
</script>
</html>
Background
I have a pet project that I love to overthink from time to time. The project has to do with an RC aircraft control input device. People familiar with that hobby are probably also familiar with "stick expo", a common feature of RC transmitters where the control sticks are more (or less) sensitive near the neutral center position and become less (or more) sensitive as the stick moves toward its minimum or maximum values.
I've read some papers that I don't fully understand. I clearly don't have the math background to solve this, so I'm hoping that perhaps one of you might.
Problem
I have decided to approximate the curve by taking a pre-determined number of samples and use linear interpolation to determine output values for any input values between the sample points. I'm trying to find a way to determine the most optimal set of sample points.
If you look at this example of a typical growth curve for this application, you will notice that some sections are more linear (straighter), and some are less linear (more curvy).
These samples are equally spaced, but they don't have to be. It would be smart to increase the sample density where there is more change, increasing the resolution in the curvy segments by borrowing redundant points from the straight segments.
Is it possible to quantify the degree of error? If it is, then is it also possible to determine the optimal set of samples for a given function and a pre-determined number of samples?
Reference Code
Snippet from the class that uses a pre-calculated set of points to approximate an output value.
/* This makes the following assumptions:
* 1. The _points[] data member contians at least 2 defined Points.
* 2. All defined Points have x and y values between MIN_VALUE and MAX_VALUE.
* 3. The Points in the array are ordered by ascending values of x.
*/
int InterpolatedCurve::value( int x ) {
    if( _points[0].x >= x ) { return _points[0].y; }
    for( unsigned int i = 1; i < _point_count; i++ ) {
        if( _points[i].x >= x ) {
            return map( x, _points[i-1].x, _points[i].x,
                        _points[i-1].y, _points[i].y );
        }
    }
    // This is an error condition that is not otherwise reported.
    // It won't happen as long as the points are set up correctly.
    return x;
}

// Example map function (borrowed from Arduino site)
long map( long x, long x1, long x2, long y1, long y2 ) {
    return (x - x1) * (y2 - y1) / (x2 - x1) + y1;
}
Although my project is actually in C++, I'm using a Google spreadsheet to produce some numbers while I ponder this problem.
// x: Input value between -1 and 1
// s: Scaling factor for curve between 0 (linear) and 1 (maximum curve)
// c: Tunable constant
function expo_fn( x, s, c ) {
    s = typeof s !== 'undefined' ? s : 1.0;
    c = typeof c !== 'undefined' ? c : 4.0;
    var k = c * ((c - 1.0) * s*s*s + s) / c + 1.0;
    return ((k - 1.0) * x*x*x*x*x + x) / k;
};
The following creates a set of evenly spaced (non-optimal) points between input values -1 and 1. These output values were expanded to integers between -16383 and 16383 for the above example spreadsheet. Factor is a value between 0 and 1 that determines the "curviness" - zero being a flat, linear curve and 1 being the least-linear curve I care to generate.
function Point( x, y ) {
    this.x = x;
    this.y = y;
};

function compute_points_iso( count, factor ) {
    var points = [];
    for( var i = 0; i < count; ++i ) {
        var x = 2.0/(count - 1.0) * i - 1.0;
        var y = expo_fn(x, factor);
        points.push(new Point(x, y));
    }
    return points;
};
Relevant Academic Work
I have been studying this paper describing an algorithm for selecting significant data points, but my program doesn't quite work right yet. I will report back if I ever get this thing working.
The key here is to realize that you can bound the error of your linear interpolation in terms of the second derivative of the function. That is, if we approximate f(x) ≈ f(x_0) + f'(x_0)*(x - x_0), then the error of this approximation is less than abs[ 0.5*f''(x_0)*(x - x_0)^2 ].
The outline of an iterative approach could look like this:
Construct an initial, e.g. uniformly spaced, grid
Compute the second derivative of the function on this grid.
Compute the bound on the error using the second-derivative and the inter-sample spacing
Move the samples closer together where the error is large; move them further apart where the error is small.
I'd expect this to be an iterative solution that loops on steps 2,3,4.
Most of the details are in step 4. For a fixed number of sample points, one could use the median of the error bounds to select where finer/coarser sampling is required (i.e. those locations where the error is larger than the median error will have the sample points pulled closer together). Let E_0 be this median of the error bounds; then, for each sample point, compute a new desired sample spacing (dx')^2 = 2*E_0/abs(f''(x)); you'd then need some logic to go through and change the grid spacing so that it is closer to these ideal spacings.
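A sketch of steps 2 and 3 over the question's point arrays (central differences estimate f''; the step-4 redistribution logic is omitted):

// Bound the linear-interpolation error around each interior point by
// 0.5 * |f''| * h^2, estimating f'' with a non-uniform central difference.
function errorBounds(points) {
    var bounds = [];
    for (var i = 1; i < points.length - 1; i++) {
        var hL = points[i].x - points[i - 1].x;
        var hR = points[i + 1].x - points[i].x;
        var d2 = 2 * (hL * points[i + 1].y - (hL + hR) * points[i].y
                      + hR * points[i - 1].y) / (hL * hR * (hL + hR));
        var h = Math.max(hL, hR);
        bounds.push(0.5 * Math.abs(d2) * h * h);
    }
    return bounds;
}

For example, errorBounds(compute_points_iso(17, 1.0)) yields one bound per interior point; points whose bound exceeds the median are the ones to pull closer together.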
My answer is influenced by having used the "Self-Organizing Map" algorithm on data; this or related algorithms may be relevant for your problem. However, I can't recall ever seeing a problem like yours where the goal is to make the error estimates uniform across the grid.
I'm trying to implement Stereo Phase as it is described here: http://www.image-line.com/support/FLHelp/html/plugins/3x%20OSC.htm
"Stereo Phase (SP) - Allows you to set different phase offset for the left and right channels of the generator. The offset results in the oscillator starting at a different point on the oscillator's shape (for example, start at the highest value of the sine function instead at the zero point). Stereo phase offset adds to the richness and stereo panorama of the sound produced."
I'm trying to achieve this for an OscillatorNode. My only idea is to use createPeriodicWave (https://dvcs.w3.org/hg/audio/raw-file/tip/webaudio/specification.html#dfn-createPeriodicWave). However, the description of createPeriodicWave in the specification is above my understanding, and I have not found any examples via Google.
Any help in deciphering the description of createPeriodicWave would be helpful as would any other ideas about how to achieve this effect.
Thanks!
Mcclellan and others,
This answer helped and subsequently warped me into the world of Fourier. With the help of a page on the subject and some Wikipedia, I think I got the square and sawtooth patterns down, but the triangle pattern still eludes me. Does anyone know?
It indeed gives you the ability to phase-shift, as this article by Nick Thompson explains (he names the AudioContext methods differently, but the principle is the same).
As far as the square and sawtooth patterns go:
var n = 4096;
var real = new Float32Array(n);
var imag = new Float32Array(n);
var ac = new AudioContext();
var osc = ac.createOscillator();

/* Pick a wave pattern */

/* Sine
imag[1] = 1; */

/* Sawtooth
for(var x = 1; x < n; x++)
    imag[x] = 2.0 / (Math.pow(-1, x) * Math.PI * x); */

/* Square */
for(var x = 1; x < n; x += 2)
    imag[x] = 4.0 / (Math.PI * x);

var wave = ac.createPeriodicWave(real, imag);
osc.setPeriodicWave(wave);
osc.connect(ac.destination);
osc.start();
osc.stop(2); /* Have it stop after 2 seconds */
This will play the activated pattern, the square pattern in this case. What would the triangle formula look like?
A simple way to fake it would be to add separate delay nodes to the left and right channels, and give them user-controlled delay values. This would be my approach and will have more-or-less the same effect as a phase setting.
If you want to use createPeriodicWave, unfortunately you'll probably have to understand the somewhat difficult math behind it.
Basically, you'll first have to represent your waveform as a sum of sine wave "partials". All periodic waves have some representation of this form. Then, once you've found the relative magnitudes of each partial, you'll have to phase shift them separately for left and right channels by multiplying each by a complex number. You can read more details about representing periodic waves as sums of sine waves here: http://music.columbia.edu/cmc/musicandcomputers/chapter3/03_03.php
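For example, with sawtooth partials, shifting the whole waveform by phi radians means shifting partial n by n*phi. Since createPeriodicWave builds the waveform as a sum of real[n]*cos(n*w*t) + imag[n]*sin(n*w*t), each shifted sine partial splits across the two arrays. A sketch:

function shiftedSawtooth(ac, phi, n) {
    var real = new Float32Array(n);
    var imag = new Float32Array(n);
    for (var x = 1; x < n; x++) {
        // Sawtooth partial magnitude, then b*sin(x*w*t + x*phi) expanded:
        var b = 2.0 / (Math.pow(-1, x) * Math.PI * x);
        real[x] = b * Math.sin(x * phi); // cosine component
        imag[x] = b * Math.cos(x * phi); // sine component
    }
    return ac.createPeriodicWave(real, imag);
}

You would build one wave per channel - say, shiftedSawtooth(ac, 0, 4096) for the left and shiftedSawtooth(ac, Math.PI / 4, 4096) for the right - and feed two oscillators into a ChannelMergerNode.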
Using createPeriodicWave has a significant advantage over using a BufferSourceNode: createPeriodicWave waveforms will automatically avoid aliasing. It's rather difficult to avoid aliasing if you're generating the waveforms "by hand" in a buffer.
I do not think it is possible to have a phase offset using an OscillatorNode.
A way to do it would be to use context.createBuffer to generate a sine-wave buffer (or any type of wave that you want), set it as the buffer of a BufferSourceNode, and then use the offset parameter of its start() method. But you need to calculate the sample offset amount in seconds.
var buffer = context.createBuffer(1, 1024, 44100);
var data = buffer.getChannelData(0);
for(var i = 0; i < data.length; i++){
    // generate waveform, e.g. one cycle of a sine across the buffer:
    data[i] = Math.sin(2 * Math.PI * i / data.length);
}
var osc = context.createBufferSource();
osc.buffer = buffer;
osc.loop = true;
osc.connect(context.destination);
osc.start(0, offsetSeconds); // second argument: offset in seconds
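The phase-to-seconds conversion mentioned above would then be (a sketch, with the frequency implied by looping the buffer once per cycle):

// A phase of phi radians is phi/(2*pi) of one period.
var frequency = context.sampleRate / buffer.length; // one cycle per buffer
var phi = Math.PI / 2; // e.g. start at the sine's peak
var offsetSeconds = (phi / (2 * Math.PI)) / frequency;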
By the looks of this article on Wolfram the triangle wave can be established like this:
/* Triangle */
for(var x = 1; x < n; x += 2)
    imag[x] = 8.0 / Math.pow(Math.PI, 2) * Math.pow(-1, (x - 1) / 2) / Math.pow(x, 2);
Also helpful by the way is the Wikipedia page that actually shows how the Fourier constructions work.
function getTriangleWave(imag, n) {
    for (var i = 1; i < n; i += 2) {
        imag[i] = (8 * Math.sin(i * Math.PI / 2)) / Math.pow(Math.PI * i, 2);
    }
    return imag;
}
With Chrome 66 adding AudioWorklets, you can write sound-processing programs in the same way as with the now-deprecated ScriptProcessorNode.
I made a convenience library using this that's a normal WebAudio API OscillatorNode that can also have its phase (among other things) varied. You can find it here
const context = new AudioContext()
context.audioWorklet.addModule("worklet.js").then(() => {
    const osc = new BetterOscillator(context)
    osc.parameters.get("phase").value = Math.PI / 4
    osc.connect(context.destination)
    osc.start()
})
Background
This picture illustrates the problem:
I can control the red circle. The targets are the blue triangles. The black arrows indicate the direction that the targets will move.
I want to collect all targets in the minimum number of steps.
Each turn I must move 1 step either left/right/up or down.
Each turn the targets will also move 1 step according to the directions shown on the board.
Demo
I've put up a playable demo of the problem here on Google appengine.
I would be very interested if anyone can beat the target score as this would show that my current algorithm is suboptimal. (A congratulations message should be printed if you manage this!)
Problem
My current algorithm scales really badly with the number of targets. The time goes up exponentially and for 16 fish it is already several seconds.
I would like to compute the answer for board sizes of 32*32 and with 100 moving targets.
Question
What is an efficient algorithm (ideally in Javascript) for computing the minimum number of steps to collect all targets?
What I've tried
My current approach is based on memoisation but it is very slow and I don't know whether it will always generate the best solution.
I solve the subproblem of "what is the minimum number of steps to collect a given set of targets and end up at a particular target?".
The subproblem is solved recursively by examining each choice for the previous target to have visited.
I assume that it is always optimal to collect the previous subset of targets as quickly as possible and then move from the position you ended up at to the current target as quickly as possible (although I don't know whether this is a valid assumption).
This results in n*2^n states to be computed which grows very rapidly.
The current code is shown below:
var DX = [1, 0, -1, 0];
var DY = [0, 1, 0, -1];

// Return the location of the given fish at time t
function getPt(fish, t) {
    var i;
    var x = pts[fish][0];
    var y = pts[fish][1];
    for (i = 0; i < t; i++) {
        var b = board[x][y];
        x += DX[b];
        y += DY[b];
    }
    return [x, y];
}

// Return the number of steps to track down the given fish.
// Works by iterating and picking the first time at which the
// Manhattan distance equals the elapsed time.
function fastest_route(peng, dest) {
    var myx = peng[0];
    var myy = peng[1];
    var x = dest[0];
    var y = dest[1];
    var t = 0;
    while ((Math.abs(x - myx) + Math.abs(y - myy)) != t) {
        var b = board[x][y];
        x += DX[b];
        y += DY[b];
        t += 1;
    }
    return t;
}

// Try to compute the shortest path to reach each fish and a certain subset of the others.
// key is current fish followed by N bits of bitmask
// value is shortest time
function computeTarget(start_x, start_y) {
    cache = {};
    // Compute the fewest steps to have visited all fish in bitmask,
    // with the last visit being to the fish with index equal to last.
    function go(bitmask, last) {
        var i;
        var best = 100000000;
        var key = (last << num_fish) + bitmask;
        if (key in cache) {
            return cache[key];
        }
        // Consider all previous positions
        bitmask -= 1 << last;
        if (bitmask == 0) {
            best = fastest_route([start_x, start_y], pts[last]);
        } else {
            for (i = 0; i < pts.length; i++) {
                var bit = 1 << i;
                if (bitmask & bit) {
                    var s = go(bitmask, i); // least cost if our previous fish was i
                    s += fastest_route(getPt(i, s), getPt(last, s));
                    if (s < best) best = s;
                }
            }
        }
        cache[key] = best;
        return best;
    }
    var t = 100000000;
    for (var i = 0; i < pts.length; i++) {
        t = Math.min(t, go((1 << pts.length) - 1, i));
    }
    return t;
}
What I've considered
Some options that I've wondered about are:
Caching of intermediate results. The distance calculation repeats a lot of simulation and intermediate results could be cached.
However, I don't think this would stop it having exponential complexity.
An A* search algorithm although it is not clear to me what an appropriate admissible heuristic would be and how effective this would be in practice.
Investigating good algorithms for the travelling salesman problem and see if they apply to this problem.
Trying to prove that the problem is NP-hard and hence unreasonable to be seeking an optimal answer for it.
Have you searched the literature? I found these papers, which seem to analyse your problem:
"Tracking moving targets and the non-stationary traveling salesman problem": http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.85.9940
"The moving-target traveling salesman problem": http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.57.6403
UPDATE 1:
The above two papers seem to concentrate on linear movement under the Euclidean metric.
Greedy method
One approach suggested in the comments is to go to the closest target first.
I've put up a version of the demo which includes the cost calculated via this greedy method here.
The code is:
function greedyMethod(start_x, start_y) {
    var still_to_visit = (1 << pts.length) - 1;
    var pt = [start_x, start_y];
    var s = 0;
    while (still_to_visit) {
        var besti = -1;
        var bestc = 0;
        for (var i = 0; i < pts.length; i++) {
            var bit = 1 << i;
            if (still_to_visit & bit) {
                var c = fastest_route(pt, getPt(i, s));
                if (besti < 0 || c < bestc) {
                    besti = i;
                    bestc = c;
                }
            }
        }
        s += bestc;
        still_to_visit -= 1 << besti;
        pt = getPt(besti, s);
    }
    return s;
}
For 10 targets it is around twice the optimal distance, but sometimes much more (e.g. 4x), and occasionally it even hits the optimum.
This approach is very efficient so I can afford some cycles to improve the answer.
Next I'm considering using ant colony methods to see if they can explore the solution space effectively.
Ant colony method
An ant colony method seems to work remarkably well for this problem. The link in this answer now compares the results when using both the greedy and the ant colony method.
The idea is that ants choose their route probabilistically based on the current level of pheromone. After every 10 trials, we deposit additional pheromone along the shortest trail they found.
function antMethod(start_x, start_y) {
    // First establish a baseline based on greedy
    var L = greedyMethod(start_x, start_y);
    var n = pts.length;
    var m = 10; // number of ants
    var numrepeats = 100;
    var alpha = 0.1;
    var q = 0.9;
    var t0 = 1 / (n * L);
    var pheromone = new Array(n + 1); // entry n used for starting position
    for (var i = 0; i <= n; i++) {
        pheromone[i] = new Array(n);
        for (var j = 0; j < n; j++)
            pheromone[i][j] = t0;
    }
    var h = new Array(n);
    var bestroute, bestantscore;
    var overallBest = 10000000;
    for (var repeat = 0; repeat < numrepeats; repeat++) {
        for (var ant = 0; ant < m; ant++) {
            var route = new Array(n);
            var still_to_visit = (1 << n) - 1;
            var pt = [start_x, start_y];
            var s = 0;
            var last = n;
            var step = 0;
            while (still_to_visit) {
                var besti = -1;
                var bestc = 0;
                var totalh = 0;
                for (var i = 0; i < pts.length; i++) {
                    var bit = 1 << i;
                    if (still_to_visit & bit) {
                        var c = pheromone[last][i] / (1 + fastest_route(pt, getPt(i, s)));
                        h[i] = c;
                        totalh += h[i];
                        if (besti < 0 || c > bestc) {
                            besti = i;
                            bestc = c;
                        }
                    }
                }
                if (Math.random() > q) {
                    var thresh = totalh * Math.random();
                    for (var i = 0; i < pts.length; i++) {
                        var bit = 1 << i;
                        if (still_to_visit & bit) {
                            thresh -= h[i];
                            if (thresh < 0) {
                                besti = i;
                                break;
                            }
                        }
                    }
                }
                s += fastest_route(pt, getPt(besti, s));
                still_to_visit -= 1 << besti;
                pt = getPt(besti, s);
                route[step] = besti;
                step++;
                pheromone[last][besti] = (1 - alpha) * pheromone[last][besti] + alpha * t0;
                last = besti;
            }
            if (ant == 0 || s < bestantscore) {
                bestroute = route;
                bestantscore = s;
            }
        }
        last = n;
        var d = 1 / (1 + bestantscore);
        for (var i = 0; i < n; i++) {
            var besti = bestroute[i];
            pheromone[last][besti] = (1 - alpha) * pheromone[last][besti] + alpha * d;
            last = besti;
        }
        overallBest = Math.min(overallBest, bestantscore);
    }
    return overallBest;
}
Results
This ant colony method using 100 repeats of 10 ants is still very fast (37ms for 16 targets compared to 3700ms for the exhaustive search) and seems very accurate.
The table below shows the results for 10 trials using 16 targets:
Greedy Ant Optimal
46 29 29
91 38 37
103 30 30
86 29 29
75 26 22
182 38 36
120 31 28
106 38 30
93 30 30
129 39 38
The ant method seems significantly better than greedy and often very close to optimal.
The problem may be represented in terms of the Generalized Traveling Salesman Problem, and then converted to a conventional Traveling Salesman Problem. This is a well-studied problem. It is possible that the most efficient solutions to the OP's problem are no more efficient than solutions to the TSP, but by no means certain (I am probably failing to take advantage of some aspects of the OP's problem structure that would allow for a quicker solution, such as its cyclical nature). Either way, it is a good starting point.
From C. Noon & J.Bean, An Efficient Transformation of the Generalized Traveling Salesman Problem:
The Generalized Traveling Salesman Problem (GTSP) is a useful model for problems involving decisions of selection and sequence. The asymmetric version of the problem is defined on a directed graph with nodes N, connecting arcs A and a vector of corresponding arc costs c. The nodes are pregrouped into m mutually exclusive and exhaustive nodesets. Connecting arcs are defined only between nodes belonging to different sets, that is, there are no intraset arcs. Each defined arc has a corresponding non-negative cost. The GTSP can be stated as the problem of finding a minimum cost m-arc cycle which includes exactly one node from each nodeset.
For the OP's problem:
Each member of N is a particular fish's location at a particular time. Represent this as (x, y, t), where (x, y) is a grid coordinate, and t is the time at which the fish will be at this coordinate. For the leftmost fish in the OP's example, the first few of these (1-based) are: (3, 9, 1), (4, 9, 2), (5, 9, 3) as the fish moves right.
For any member of N let fish(n_i) return the ID of the fish represented by the node. For any two members of N we can calculate manhattan(n_i, n_j) for the manhattan distance between the two nodes, and time(n_i, n_j) for the time offset between the nodes.
The number of disjoint subsets m is equal to the number of fish. The disjoint subset S_i will consist only of the nodes for which fish(n) == i.
If for two nodes i and j fish(n_i) != fish(n_j) then there is an arc between i and j.
The cost between node i and node j is time(n_i, n_j), or undefined if time(n_i, n_j) < distance(n_i, n_j) (i.e. the location can't be reached before the fish gets there, perhaps because it is backwards in time). Arcs of this latter type can be removed.
An extra node will need to be added representing the location of the player with arcs and costs to all other nodes.
Solving this problem would then result in a single visit to each node subset (i.e. each fish is obtained once) for a path with minimal cost (i.e. minimal time for all fish to be obtained).
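A sketch of generating that node space up to a fixed horizon, together with the feasible arcs, reusing getPt from the question (numFish and horizon are assumed inputs):

function buildGtsp(numFish, horizon) {
    var nodes = [];
    for (var f = 0; f < numFish; f++)
        for (var t = 0; t <= horizon; t++)
            nodes.push({fish: f, t: t, pos: getPt(f, t)});
    var arcs = [];
    for (var i = 0; i < nodes.length; i++) {
        for (var j = 0; j < nodes.length; j++) {
            var a = nodes[i], b = nodes[j];
            if (a.fish === b.fish) continue; // no intraset arcs
            var dt = b.t - a.t;
            var dist = Math.abs(a.pos[0] - b.pos[0]) + Math.abs(a.pos[1] - b.pos[1]);
            // Keep the arc only if the player can reach b's cell by time b.t.
            if (dt >= dist) arcs.push({from: i, to: j, cost: dt});
        }
    }
    return {nodes: nodes, arcs: arcs};
}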
The paper goes on to describe how the above formulation may be transformed into a traditional Traveling Salesman Problem and subsequently solved or approximated with existing techniques. I have not read through the details, but another paper that claims to do this efficiently is this one.
There are obvious issues with complexity. In particular, the node space is infinite! This can be alleviated by only generating nodes up to a certain time horizon. If t is the number of timesteps to generate nodes for and f is the number of fish, then the size of the node space will be t * f. A node at time j will have at most (f - 1) * (t - j) outgoing arcs (as it can't move back in time or to its own subset), so the total number of arcs will be on the order of t^2 * f^2. The arc structure can probably be tidied up to take advantage of the fact that the fish paths are eventually cyclical: the fish will repeat their configuration once every least common multiple of their cycle lengths, so perhaps this fact can be used.
I don't know enough about the TSP to say whether this is feasible or not, and I don't think it means that the problem posted is necessarily NP-hard... but it is one approach towards finding an optimal or bounded solution.
I think another approach would be to:
calculate the paths of the targets - predictively,
then use Voronoi diagrams.
Quote wikipedia:
In mathematics, a Voronoi diagram is a way of dividing space into a number of regions. A set of points (called seeds, sites, or generators) is specified beforehand and for each seed there will be a corresponding region consisting of all points closer to that seed than to any other.
So: you choose a target, follow its path for some steps, and set a seed point there. Do this with all the other targets as well, and you get a Voronoi diagram. Depending on which region you are in, you move to its seed point. Voilà, you've got the first fish. Now repeat until you've caught them all.