I needed to show a simple code example to a friend: moving a button along the edges of the screen, clockwise. Sure, this is something very simple - no, this is the simplest. Yet I found myself spending almost 30 minutes on it. I was embarrassed, because I have been a professional programmer for 20+ years, and I still had to run the program many times before the button was flying the right way. And I was thinking: why? My guess is that this kind of code is difficult to get right immediately because it is written in a very old style: each case is typed out separately, each check must be entered manually, and you have to walk through each iteration and carefully make sure all the numbers and checks are exactly, precisely correct. That is hard to do because of the nature of the code style, which is, eehh, spaghetti. Messy?
So I wondered: is there a way to convert it to a more "modern" style - use a loop instead of separate cases, use templates or other metaprogramming, use a functional approach, or at least use arrays? And I cannot seem to find any good way to do it.
var mx = screen.width - b.w, my = screen.height - b.h, state = 1
setInterval(function() {
var step = 3
if (state == 1) {
b.x += step
if (b.x >= mx) b.x = mx, state++
} else if (state == 2) {
b.y += step
if (b.y >= my) b.y = my, state++
} else if (state == 3) {
b.x -= step
if (b.x <= 0) b.x = 0, state++
} else if (state == 4) {
b.y -= step
if (b.y <= 0) b.y = 0, state = 1
}
b.apply()
}, 1)
This is JavaScript; in C it would be even harder to get right quickly, because you also need to think about types.
Here is what I came up with myself. Maybe this demonstrates what I am trying to achieve: I am not talking about choosing a different algorithm, but rather about exploring language features and programming techniques.
var D = 3, name = ['x','y'], delta = [D,D,-D,-D],
    limit = [screen.width - b.w, screen.height - b.h, 0, 0]
setInterval(function() {
    b[name[0]] += delta[0]                 // always move along the current axis
    if (delta[0] > 0 && b[name[0]] > limit[0] || b[name[0]] <= 0)
        b[name[0]] = limit[0],             // clamp to the edge we just reached,
        [name,delta,limit].some(function(x){ x.push(x.shift()) }) // then rotate all three arrays to the next edge
    b.apply()
},1)
This one at least separates data from code, making it easier to get right. It worked for me on the first attempt. But I am still not completely satisfied :)
The average American newspaper article is written at something like a 6th-grade reading level. This isn't because the person writing it never went beyond 6th grade; rather, they've learned that it's better to write in a manner that everyone can understand. I bring this up because you're calling your code childish or unsophisticated, yet it's the clearest, most concise code there is for the task you had, and therefore the best.
If you want to go around a square, you're really not going to have any choice besides what you did: have four different directions to go, and keep track of where you're at. Your code shows off the basics of a state machine, because that's what you need to go around in four different straight lines.
If you're willing to fake it with a slightly different movement, you can remove all states and just go around in an ellipse using some trigonometry. (It actually should be a circle, but since the screen is rectangular, you'll get an ellipse, with different speeds on the long and short sides of the screen.)
Here's the basics. It may need some tweaking to make sure you have the edges hitting the right spot. Truthfully, I think by the time you work out the special cases on this version, you'll find your solution was more elegant.
// find the center x and y
var centerX = screen.width / 2;
var centerY = screen.height / 2;
// first, find the radius. If you want to cover everything, you need half the diagonal
var radius = Math.sqrt(centerX * centerX + centerY * centerY);
var increment = 0.01; // higher values, of course, will move you faster
var theta = 0;
setInterval(function() {
b.x = Math.min(Math.max(centerX + radius * Math.cos(theta), 0), screen.width);
b.y = Math.min(Math.max(centerY + radius * Math.sin(theta), 0), screen.height);
theta -= increment;
b.apply();
}, 1);
As I mentioned, this will almost certainly need some tweaking to come anywhere as close to looking good as the code you made. My code may be less childish, but it's less easy to understand - and it actually does less well at accomplishing your task.
Don't worry about how fancy your code is. It works well and it's easy to understand, and that's really what matters.
Edit
I realized later on that I forgot to base everything from the center, and the code I had posted would do a circle from the top left. I've added in the center stuff above. See? Adding complexity doesn't at all imply the code will be better... :-)
Also, I did figure out one change I would recommend to your original algorithm: name your states! Make them strings, and have state be "top", "right", "bottom" and "left", and actively set them rather than using state++. That will help make your code even more readable.
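For what it's worth, here is a minimal sketch of that change, reusing the b, mx, and my from your first snippet (the state names are just the ones suggested above):
var state = 'top';
setInterval(function() {
    var step = 3;
    if (state === 'top') {                 // moving right along the top edge
        b.x += step;
        if (b.x >= mx) { b.x = mx; state = 'right'; }
    } else if (state === 'right') {        // moving down along the right edge
        b.y += step;
        if (b.y >= my) { b.y = my; state = 'bottom'; }
    } else if (state === 'bottom') {       // moving left along the bottom edge
        b.x -= step;
        if (b.x <= 0) { b.x = 0; state = 'left'; }
    } else if (state === 'left') {         // moving up along the left edge
        b.y -= step;
        if (b.y <= 0) { b.y = 0; state = 'top'; }
    }
    b.apply();
}, 1);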
I'm not using any engine, but instead trying to build my own soft-body dynamics for fun using Verlet integration. I made a cube defined by 4x4 points, with segments keeping its shape, like so:
I have the points collide against the edges of the scene, and it seems to work fine. However, I get some cases where the points collapse in on themselves, creating a dent instead of maintaining the box shape. For example, if it lands on a corner with high enough velocity, it tends to crumble:
I must be doing something wrong or out of order when solving the collision.
Here's how I'm handling it. It's in JavaScript, though the language doesn't matter; feel free to reply in any language:
sim = function() {
// Sim all points.
for (let i = 0; i < this.points.length; i++) {
this.points[i].sim();
}
// Keep in bounds.
let border = 100;
for (let i = 0; i < this.points.length; i++) {
let p = this.points[i];
let vx = p.pos.x - p.oldPos.x;
let vy = p.pos.y - p.oldPos.y;
if (p.pos.y > height - border) {
// Bottom screen
p.pos.y = height - border;
p.oldPos.y = p.pos.y + vy;
} else if (p.pos.y < 0 + border) {
// Top screen
p.pos.y = 0 + border;
p.oldPos.y = p.pos.y + vy;
}
if (p.pos.x < 0 + border) {
// Left screen
p.pos.x = 0 + border;
p.oldPos.x = p.pos.x + vx;
} else if (p.pos.x > width - border) {
// Right screen
p.pos.x = width - border;
p.oldPos.x = p.pos.x + vx;
}
}
// Sim its segments.
let timesteps = 20;
for (let ts = 0; ts < timesteps; ts++) {
for (let i = 0; i < this.segments.length; i++) {
this.segments[i].sim();
}
}
}
Please let me know if I need to post any other details.
This may be better answered on a physics or game-dev exchange (and likely already has been), but I'll give it a crack, because it's nice to revisit this stuff...
Verlet integration is a fantastically stable, if not physically accurate, method, but the problem here is not the integration method, nor anything you've done wrong as far as I can see; it's the type of simulation: mass-aggregate physics (building geometry out of dynamic constraints), which is really nice and simple :) but has some inherent deficiencies and limitations, and this particular problem is inherent to the simulation type.
First, look carefully at the arrangement of constraints in the collapsed box - they are just as valid as the initial one. Although individual constraints may not be as well satisfied, in combination they are still in a local equilibrium with each other - there is nothing compelling them to return to their original arrangement.
Second, the external force (collision with an immovable plane) is what overcame the constraint forces in the first place. Even if the constraints respond proportionally towards infinity as they compress, the simulation can never match reality, because it works a frame at a time rather than continuously - and the longer the frame, the more error.
To retain shape more reliably, a more explicit angular constraint is needed, which usually involves quaternions - these are quite a bit harder to implement than distance constraints, and once you have them you will be pretty far along the road to implementing rigid body physics anyway. But there are ways to mitigate instead:
1. Use a smaller interval
All a posteriori simulations, and numerical integration in general, have some inherent instability. While different integration methods (e.g. Verlet) can mitigate this, generally the smaller the interval, the better the stability. This alone will give the constraints more opportunity to push back against external forces, and it will also increase the maximum stable constraint stiffness.
This will probably require you to optimise your engine more. Additionally, make sure you don't couple your render step to the simulation: you want to be able to render at multiples of the simulation interval, which allows you to run the simulation faster for stability without rendering more frames than is useful (that would just slow the simulation down).
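A minimal sketch of such a loop, assuming your sim() from above is callable as a plain step function and render() is a hypothetical draw routine:
const SIM_DT = 1000 / 120;   // ms per physics step; smaller = more stable
let accumulator = 0;
let lastTime = performance.now();
function frame(now) {
    accumulator += now - lastTime;
    lastTime = now;
    // run as many fixed-size physics steps as the elapsed time requires...
    while (accumulator >= SIM_DT) {
        sim();
        accumulator -= SIM_DT;
    }
    // ...but draw only once per animation frame
    render();
    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
This way you can shrink SIM_DT for stability without touching the render rate.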
2. Try more stable shapes
For your box-of-boxes shape, see what happens when you add more constraints between far vertices; it will add more global stability to the shape.
A common type of shape people make with mass-aggregate physics is the polygon, because it's straightforward to make polygons highly interconnected (a little like a bicycle wheel, but where each spoke point connects to every other spoke point). In fact, spoke-like designs are among the most stable, but you can usually apply the same principles to more irregular shapes once you get an intuition for it.
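As a sketch of that fully interconnected idea - assuming a distance-constraint constructor along the lines of the segments in your post (the Segment name here is hypothetical):
function crossBrace(points, segments) {
    // connect every point to every other point, not just neighbours;
    // the long diagonal pairs act as braces that resist collapse
    for (let i = 0; i < points.length; i++) {
        for (let j = i + 1; j < points.length; j++) {
            segments.push(new Segment(points[i], points[j]));
        }
    }
}
Note this grows quadratically with the point count, so for larger shapes you would keep only a subset of the long-range pairs.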
3. A different type of constraint
Quaternions are not the only possible constraint that will help retain configuration. The main problem is not so much that an explicit angle isn't being maintained, but rather that when points are forced past a certain position relative to their siblings, their distance constraints flip and start working in reverse - keeping them on that side.
There could be many ways to solve this without something as complex as quaternions - in fact I will give it some thought and edit this post if I come up with anything; I have a bit more ammunition than when I last explored mass-aggregate physics...
I have a fiddle with the following code (a ColorWalk clone).
Here is the js code:
function floodFill(x, y, selectedColor, grayColor) {
if (x < 0 || x >= 600) return;
if (y < 0 || y >= 400) return;
let square = $('.blockattribute').filter(function(ind, el) {
return $(el).css('left') == x + 'px' && $(el).css('top') == y + 'px'
});
let squareColor = square.css('background-color');
if (squareColor === grayColor || squareColor === selectedColor) {
square.removeClass().addClass('blockattribute gray');
floodFill(x + 20, y, selectedColor, grayColor);
floodFill(x, y + 20, selectedColor, grayColor);
floodFill(x - 20, y, selectedColor, grayColor);
floodFill(x, y - 20, selectedColor, grayColor);
}
else {
return;
}
}
I've been working on learning JavaScript/jQuery and algorithms, and I pretty much have this clone working - except that the deeper I get into the grid, the slower the code becomes. I've been reading about memoization and trying to use it on the grid, but I'm getting stuck on how to approach it. All I'm really looking for is a little push in the right direction. Maybe memoization isn't the way to go and I can optimize my code some other way. My current thinking is that I need to grab the last gray square and then proceed from there. Am I on the right track?
Edit
I realized that I can combine the if/else checks to test for either the matching gray color or the selected color.
Reading from and writing to the DOM are very expensive operations in JavaScript. You also should never use the DOM as the source of your data.
To speed up your algorithm, store the board state offline as regular JavaScript data, manipulate only that data, then update the visuals once at the end. This way you minimize the number of DOM operations.
Additionally, JavaScript is not "tail call optimized", meaning you can't recurse forever, and every level of recursion slows the program down to some degree. If you can use a non-recursive flood fill algorithm in this case, it will likely be faster.
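A sketch of what that could look like, holding the board in a plain 2D array (the 30x20 dimensions follow from your 600x400 board of 20px squares; all names here are illustrative):
const COLS = 30, ROWS = 20;   // 600 / 20 and 400 / 20
function floodFill(grid, startCol, startRow, selectedColor, grayColor) {
    const stack = [[startCol, startRow]];
    const seen = new Set();
    const filled = [];                         // cells to repaint afterwards
    while (stack.length) {
        const [c, r] = stack.pop();
        if (c < 0 || c >= COLS || r < 0 || r >= ROWS) continue;
        const key = c + ',' + r;
        if (seen.has(key)) continue;           // visit each cell at most once
        seen.add(key);
        const color = grid[r][c];
        if (color !== grayColor && color !== selectedColor) continue;
        grid[r][c] = grayColor;                // update the data, not the DOM
        filled.push([c, r]);
        stack.push([c + 1, r], [c - 1, r], [c, r + 1], [c, r - 1]);
    }
    return filled;                             // paint these gray in one DOM pass
}
The explicit stack replaces the recursion, and the DOM is only touched once at the end, using the returned list.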
I have this code to calculate whether one circle intersects another circle. I want a faster version; is that possible?
this.CheckIntersection = function(another){
var xC = this.x;
var yC = this.y;
var GxC = another.x;
var GyC = another.y;
var distSq = (xC - GxC) * (xC - GxC) + (yC - GyC) * (yC - GyC);
return distSq < (this.r + another.r) * (this.r + another.r);
}
Well you can try to improve this a little bit as follows:
this.CheckIntersection = function(another){
var dx = this.x-another.x;
var dy = this.y-another.y;
dx = dx*dx+dy*dy;
dy = this.r+another.r;
return dx < dy*dy;
}
This will be a bit faster, since you save some subtractions, and you use fewer variables, so the runtime environment will have an easier job with register allocation/caching.
But in terms of time complexity there is not much you can do, so you are limited to peephole optimizations - for instance, looking for duplicated computations and computing them only once.
If you have many such checks, it might make sense to offload them to the GPU. The GPU can do more of them in parallel, but you pay a bit extra because you need to copy the data to and from the GPU. Complexity-wise, there's not much to do.
NVIDIA's CUDA is a good starting point, but there are other libraries.
If you ask for a faster test, you probably identified a bottleneck there, meaning that you must be using this function intensively.
For a test of a single circle against a single other, there is essentially nothing you can shave off. The test takes 3 additions/subtractions and 2 multiplications to compute a squared Euclidean distance, plus a comparison against a term obtained with 1 addition and 1 multiplication. There is nothing you can remove, and the code is so tiny that you probably pay more for the interpreter overhead than for the operations themselves.
Things become more interesting, and the potential gains much higher, if you have to test a moving circle against N fixed circles or, better, N circles against each other; see the broad-phase sketch below.
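For the N-against-N case, the usual trick is a broad phase that avoids testing every pair. A sketch using a coarse uniform grid (names are illustrative; the cell size should be at least the largest circle's diameter so that the 3x3 neighbourhood check below is sufficient):
function buildGrid(circles, cellSize) {
    const grid = new Map();
    for (const c of circles) {
        const key = Math.floor(c.x / cellSize) + ',' + Math.floor(c.y / cellSize);
        if (!grid.has(key)) grid.set(key, []);
        grid.get(key).push(c);
    }
    return grid;
}
function candidates(grid, circle, cellSize) {
    // only circles in the 3x3 block of cells around this one can touch it
    const cx = Math.floor(circle.x / cellSize);
    const cy = Math.floor(circle.y / cellSize);
    const out = [];
    for (let dx = -1; dx <= 1; dx++) {
        for (let dy = -1; dy <= 1; dy++) {
            const cell = grid.get((cx + dx) + ',' + (cy + dy));
            if (cell) out.push(...cell);
        }
    }
    return out;   // run CheckIntersection only against these
}
With reasonably spread-out circles, this turns the all-pairs O(N^2) pass into something close to linear.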
I'm working on an HTML5-canvas game where the map is randomly generated from 10px by 10px tiles which the player can then dig and build upon. The tiles are stored in an array of objects, and a small map contains about 23,000 tiles. My collision detection function checks the player's position against all non-air tiles on every run through (using requestAnimationFrame()), and it works perfectly, but I feel like it's CPU-intensive. The collision function is as follows (the code came from an online tutorial):
function colCheck(shapeA, shapeB) {
var vX = (shapeA.x + (shapeA.width / 2)) - (shapeB.x + (shapeB.width / 2)),
vY = (shapeA.y + (shapeA.height / 2)) - (shapeB.y + (shapeB.height / 2)),
hWidths = (shapeA.width / 2) + (shapeB.width / 2),
hHeights = (shapeA.height / 2) + (shapeB.height / 2),
colDir = null;
// if the x and y distances between centers are less than the combined half widths/half heights, the shapes must overlap, causing a collision
if (Math.abs(vX) < hWidths && Math.abs(vY) < hHeights) {
// figures out on which side we are colliding (top, bottom, left, or right)
var oX = hWidths - Math.abs(vX),
oY = hHeights - Math.abs(vY);
if (oX >= oY) {
if (vY > 0) {
colDir = "t";
shapeA.y += oY;
} else {
colDir = "b";
shapeA.y -= oY;
}
} else {
if (vX > 0) {
colDir = "l";
shapeA.x += oX;
} else {
colDir = "r";
shapeA.x -= oX;
}
}
}
return colDir;
};
Then within my update function I run this function with the player and tiles as arguments:
for (var i = 0; i < tiles.length; i++) {
//the tiles tag attribute determines rendering colour and how the player can interact with it ie. dirt, rock, etc.
//anything except "none" is solid and therefore needs collision
if (tiles[i].tag !== "none") {
dir = colCheck(player, tiles[i]);
if (dir === "l"){
player.velX = 0;
player.jumping = false;
} else if (dir === "r") {
player.velX = 0;
player.jumping = false;
} else if (dir === "b") {
player.grounded = true;
player.jumping = false;
} else if (dir === "t") {
player.velY *= -0.3;
}
}
};
So what I'm wondering is: if I only check tiles within a certain distance of the player, using a condition like Math.abs(tiles[i].x - player.x) < 100 (and the same for y), would that make the code more efficient because it checks collisions against fewer tiles, or is it less efficient because it evaluates extra conditions?
And if that's difficult to say without testing, how do I go about finding out how well my code is running?
but I feel like it's CPU intensive
CPUs are intended to do a lot of stuff very fast. There is math to determine the efficiency of your algorithm, and your current implementation is O(n). If you reduce the number of tiles checked to a constant number, you would achieve O(1), which is better, but may not be noticeable for your application. To achieve O(1), you would keep an index of the X closest tiles and incrementally update that index when the closest tiles change. I.e. if the player moves to the right, you would modify the index so that the leftmost column of tiles is removed and a new column is added on the right. When checking for collisions, you would then iterate through the fixed number of tiles in the index instead of the entire set; a sketch of the same idea follows.
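Since the tiles sit on a regular 10px grid, one sketch of that constant-time lookup is to compute the overlapping cells directly instead of maintaining the index by hand (this assumes the tiles are stored row-major and that mapWidth/mapHeight give the map size in tiles - both assumptions about your data layout):
var TILE = 10;
function nearbyTiles(player, tiles, mapWidth, mapHeight) {
    // grid cells overlapped by the player's bounding box
    var x0 = Math.max(0, Math.floor(player.x / TILE));
    var x1 = Math.min(mapWidth - 1, Math.floor((player.x + player.width) / TILE));
    var y0 = Math.max(0, Math.floor(player.y / TILE));
    var y1 = Math.min(mapHeight - 1, Math.floor((player.y + player.height) / TILE));
    var result = [];
    for (var ty = y0; ty <= y1; ty++) {
        for (var tx = x0; tx <= x1; tx++) {
            result.push(tiles[ty * mapWidth + tx]);   // a constant handful of tiles
        }
    }
    return result;   // run colCheck only against these
}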
...should that make the code more efficient because it will be checking collision against fewer tiles or or is it less efficient to be checking extra parameters?
The best way to answer that is with a profiler, but I expect it would improve performance, especially on larger maps. It is still an O(n) solution, because you still iterate over the entire tile set, and you can imagine that as your tile set approaches infinity, performance would start to degrade again. Still, your proposed solution may be a good compromise between your current code and the O(1) solution I suggested above.
The thing you don't want to do is prematurely optimize code. It's best to optimize when you're actually experiencing performance problems, and you should do so systematically so that you get the most bang for your buck. In other words even if you did have performance problems, the collision detection may not be the source.
how do I go about finding how well my code is running?
The best way to go about optimizing code is to attach a profiler and measure which parts of your code are the most CPU-intensive. When you figure out what part of your code is too slow, either work out a solution yourself, or head over to https://codereview.stackexchange.com/ and ask a very specific question about how to improve the section of code that isn't performing well, including your profiler information and the associated code.
In response to Samuel's answer suggesting I use a profiler:
With a map made up of ~23 000 tiles in an array:
The original collision code was taking 48% of the running time. By changing if (tiles[i].tag !== "none") to the following, the time spent checking for collisions dropped to 5%:
if (tiles[i].tag !== "none"
&& Math.abs(tiles[i].x - player.x) < 200
&& Math.abs(tiles[i].y - player.y) < 200)
With a map made up of ~180 000 tiles:
The original collision code was running 60-65% of the time, and performance was so low the game couldn't be played. With the updated code the collision function only runs 0.5% of the time, but performance is still low, so I assume that even though fewer tiles are being checked for collision, there are so many tiles that merely checking their positions relative to the player is causing the game to run slowly.
I'm building an image-preload animation: a circle/pie that gets drawn, where each 'slice' represents one loaded image out of the total. So if there are four images and two have loaded, it should draw to 180 degrees over time.
I'm using requestAnimFrame, which is working great, and I've got a delta-time setup to restrict the animation to time. However, I'm having trouble getting my head around the maths. The closest I can get is that it animates and eases to near where it's supposed to be, but the value increments become smaller and smaller, so it essentially never reaches the completed value (90 degrees if one image has loaded, for example).
var totalImages = 4;
var imagesLoaded = 1;
var currentArc = 0;
function drawPie(){
var newArc = 360 / totalImages * this.imagesLoaded // Get the value in degrees that we should animate to
var percentage = (isNaN(currentArc / newArc) || !isFinite(currentArc / newArc)) || (currentArc / newArc) > 1 ? 1 : (currentArc / newArc); // Dividing these two numbers sometimes returned NaN or Infinity/-Infinity, so this is a fallback
var easedValue = easeInOutExpo(percentage, 0, newArc, 1);
//This animates continuously (Obviously), because it's just constantly adding to itself:
currentArc += easedValue * this.time.delta;
// OR
//This never reaches the full amount, as increments get infinitely smaller
currentArc += (newArc - easedValue) * this.time.delta;
}
function easeInOutExpo(t, b, c, d){
return c*((t=t/d-1)*t*t + 1) + b;
}
I feel like I've got all the right elements and values. I'm just putting them together incorrectly.
Any and all help appreciated.
You've got the idea of easing. The reality is that at some point you have to cap the value.
If you're up for a little learning, you can brush up on Zeno's paradoxes (the appropriate one here being Achilles and the Tortoise) -- it's really short... The Dichotomy Paradox is the other side of the same coin.
Basically, you're only ever half-way there, regardless of where "here" or "there" may be, and thus you can never take a "final"-step.
And when dealing with fractions, such as with easing, that's very true. You can always get smaller.
So the solution is just to clamp it: set a minimum amount that you can move in an update (2px, 1px, or 0.5px... play around with it).
The alternative (which ends up being similar, but perhaps a bit jumpier) is to set a threshold distance - to say "as soon as it's within 4px of home, snap to the end" - or to switch to a linear model rather than continuing the easing. Something like this:
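A simplified sketch of the snap version, reusing your variables but replacing the easing curve with a "move a fraction of the remaining distance" step (the 1-degree threshold and the 0.1 factor are just starting points to play with):
function drawPie() {
    var newArc = 360 / totalImages * imagesLoaded;   // target angle in degrees
    var remaining = newArc - currentArc;
    if (Math.abs(remaining) < 1) {
        currentArc = newArc;             // close enough: snap to the target
    } else {
        currentArc += remaining * 0.1;   // otherwise close a fraction of the gap
    }
}
Moving by a fraction of what remains gives the ease-out feel, and the snap guarantees currentArc actually lands on newArc.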