I have a hexagonal map, but I realized that my coordinate system is bad for some pathfinding algorithms, so I want to reform it.
I chose a system that fully satisfies me. You can find it here. But in the referenced example, the whole map is rotated differently than I need.
The old version of my map is here:
http://dark-project.cz/wesnoth/map-view/1
My question is: how do I render the map in HTML so that it looks the same as it does now, but uses the new coordinate system?
(I render it with a PHP loop. For each field I have this information:
coordinates, field type (grass, village, ...), and the dimensions of the field image.)
Thank you for your answers!
PS: I think it could be done using HTML5 Canvas, but I want good browser support and I have no experience with HTML5 (I'm not against rendering on the client side if it is fast and well supported), so I would prefer a server-side (PHP) solution!
Your coordinate system is not compatible with the one used in the algorithm demo.
I think your best bet is to alter the algorithm you have found to use your coordinate system.
AFAIK you essentially have to change:
the part that takes a given coordinate and determines the 6 neighbouring coordinates.
the function that determines if a given coordinate is inside the map boundaries.
(sort of) the function that calculates the cost/distance
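For the neighbour lookup in particular, here is a minimal sketch. It assumes an axial coordinate system; the exact offsets depend on which system you actually chose, so treat them as an illustration:
function hex_neighbours(x, y) {
    // The six neighbours of (x, y) in an axial hex coordinate system.
    return [
        [x + 1, y], [x - 1, y],        // along the first axis
        [x, y + 1], [x, y - 1],        // along the second axis
        [x + 1, y - 1], [x - 1, y + 1] // the two remaining diagonals
    ];
}
Filtering these candidates through the map-boundary check gives you the legal moves for the search.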
I notice the demo code goes:
function hex_distance(x1, y1, x2, y2) {
    var dx = Math.abs(x1 - x2);
    var dy = Math.abs(y2 - y1);
    return Math.sqrt((dx * dx) + (dy * dy));
}
But that's an inaccurate estimate, as the axes aren't perpendicular. It could produce non-optimal results: the heuristic function in an A* search must be admissible, i.e. it must never return a value higher than the real cost, and this function may violate that rule.
Your coordinate system would actually make that function more accurate, but you could also get away with just the Manhattan distance:
function hex_distance(x1, y1, x2, y2) {
    return Math.abs(x2 - x1) + Math.abs(y2 - y1);
}
If I am not mistaken, that works out to the number of tile steps needed to get from (x1,y1) to (x2,y2).
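One hedged caveat: if the system you chose is the common axial layout, where a single diagonal step changes both coordinates at once, the Manhattan distance can overestimate the true step count and break admissibility again. In that case the exact step count is:
function hex_distance(x1, y1, x2, y2) {
    // Exact step count between two hexes in axial coordinates.
    var dx = x2 - x1;
    var dy = y2 - y1;
    return (Math.abs(dx) + Math.abs(dy) + Math.abs(dx + dy)) / 2;
}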
I am trying to build an HTML5 canvas game, and I need to detect collisions between characters with different shapes.
If you have experience with this type of collision detection, please share your knowledge.
It depends on personal preference.
One common approach is distance detection:
check the distance between two objects (or the coordinates of said objects), and if they are close enough, count it as a hit.
You could also check for overlap this way, which is a definite hit. Then just apply whatever physics you wanted to apply.
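A minimal sketch of the distance check, assuming circular objects with x, y, and radius fields (the shapes are an assumption for illustration):
function isColliding(a, b) {
    var dx = b.x - a.x;
    var dy = b.y - a.y;
    var distance = Math.sqrt(dx * dx + dy * dy);
    // Overlap (a definite hit) when the centres are closer
    // than the combined radii.
    return distance < a.radius + b.radius;
}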
The HTML Canvas is only in charge of displaying your "game" through a web browser. You'll need to utilize a bit of programming logic to decide what you want to render through the canvas. As for the collision implementation, it'll depend on what you're looking to create.
We'll assume this is a 2d plane with objects moving in all directions.
Given this assumption, one implementation might be to attach positional coordinates to each object in your game and keep track of every object's coordinates in a "global" game state. You can then perform calculations on each object's position and trigger collisions between objects that come within a certain threshold.
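A hedged sketch of that idea; the object shape, the threshold, and the callback are assumptions for illustration:
var gameObjects = []; // "global" game state: each entry is { x, y }

function checkCollisions(threshold, onCollision) {
    // Compare every pair of tracked objects each frame.
    for (var i = 0; i < gameObjects.length; i++) {
        for (var j = i + 1; j < gameObjects.length; j++) {
            var dx = gameObjects[j].x - gameObjects[i].x;
            var dy = gameObjects[j].y - gameObjects[i].y;
            if (Math.sqrt(dx * dx + dy * dy) < threshold) {
                onCollision(gameObjects[i], gameObjects[j]);
            }
        }
    }
}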
Implementation might differ depending on what you're looking to build, but this link might be useful. Hope this helps!
The correct way is to go with a physics engine, something like:
https://brm.io/matter-js/
However, it might be overkill for a simple game. Here are the basics of collision detection.
You know the coordinates and size of all the objects you control.
After each movement, get the perimeter coordinates of each object. Each object will have an array of points marking its perimeter.
Compare the arrays and see if the same coordinate is present in both arrays (see the sketch below).
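A naive sketch of those steps for axis-aligned rectangles, where each object is assumed to be { x, y, width, height } with integer coordinates:
function getPerimeterPoints(obj) {
    // Walk the rectangle's outline at integer steps, encoding
    // each point as an "x,y" string for easy comparison.
    var points = [];
    for (var x = obj.x; x <= obj.x + obj.width; x++) {
        points.push(x + "," + obj.y);
        points.push(x + "," + (obj.y + obj.height));
    }
    for (var y = obj.y; y <= obj.y + obj.height; y++) {
        points.push(obj.x + "," + y);
        points.push((obj.x + obj.width) + "," + y);
    }
    return points;
}

function collides(a, b) {
    var seen = {};
    getPerimeterPoints(a).forEach(function (p) { seen[p] = true; });
    // A hit is flagged when any perimeter point appears in both arrays.
    return getPerimeterPoints(b).some(function (p) { return seen[p]; });
}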
These are the basics. You will need a lot of enhancements to this code for a practical game, but you can figure them out after this step.
I'm working on an application for generating paths for a custom-made CNC machine. It is based on a PLC controller that does not support G-code, therefore I need to define the whole path as a list of commands.
I'm having trouble defining the toolpath for pocket milling. As input, I use DXF files with different kinds of shapes in them. Each shape is located on a different layer and built of simple elements such as LINE, ARC, etc. What I need is to analyze these simple elements as a closed contour and generate a toolpath for milling all the material inside this contour. Do you know of any library or simple algorithm where I can define the shape (in this case, based on the DXF data) and the lib/algorithm would generate the whole toolpath, taking the tool diameter into consideration?
For simple shapes like circles or rectangles, I'm able to generate such a toolpath manually, but when the shape is more complex (e.g. like below) I'm running out of ideas how to do so.
There is a lot of freeware CAM software on the internet, and each of them generates the toolpath in the form of G-code, so I assume such an algorithm is implemented there somehow. I thought about using such CAM software, but the G-code output is not usable for me, and besides, I do not need any GUI. Most of it is also written in higher-level languages, whilst I'm writing my app in JavaScript running under node.js.
Do you mean that you know how to process each entity individually, but don't know how to combine them? Since the entities touch, you just need to find the next entity whose starting or ending point (1) matches the current entity's ending point. If the point (1) was the ending point of that entity, you will need to process the found entity in reverse, or process it in normal order and reverse the resulting line. Of course, take care to offset it in the correct direction.
For faster neighbour search, sort the entities first by either the X or Y coordinate of both their starting and ending points.
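A minimal sketch of the chaining step (without the offsetting), assuming each entity has been reduced to { start: [x, y], end: [x, y] } and that a small tolerance is used to match endpoints; both are assumptions for illustration:
function samePoint(a, b, tol) {
    return Math.abs(a[0] - b[0]) <= tol && Math.abs(a[1] - b[1]) <= tol;
}

function chainEntities(entities, tol) {
    var remaining = entities.slice();
    var chain = [remaining.shift()];
    while (remaining.length > 0) {
        var tail = chain[chain.length - 1].end;
        var idx = remaining.findIndex(function (e) {
            return samePoint(e.start, tail, tol) || samePoint(e.end, tail, tol);
        });
        if (idx === -1) break; // contour is not closed, or an entity is missing
        var next = remaining.splice(idx, 1)[0];
        if (samePoint(next.end, tail, tol)) {
            // Matched on the ending point: traverse this entity in reverse.
            next = { start: next.end, end: next.start };
        }
        chain.push(next);
    }
    return chain;
}
The linear scan inside findIndex is exactly where the sort suggested above would speed things up.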
Intended effect
When a user clicks on a polygon feature (a county, region, or neighborhood/municipality) or uses the "Draw" widget, a dashboard card displays the number of intersected point features returned by queryFeatures() (see below).
localitiesLayer.queryFeatures(query).then(function(results) {
    var queriedLocalities = results.features;
    if (queriedLocalities.length > 0) {
        var fossilsFound = queriedLocalities.length;
    }
});
Issue
The maximum number of returned intersected features is 2,000 even when more than 2,000 point features have been selected.
In the photo below, there are only "2000 fossil sites in the area!" when there should be over 3,000 features returned.
Troubleshooting
The issue is fixed when, instead of querying the localitiesLayer feature layer, a feature layer view is queried. This introduces the unsolvable issue of the number of localities returned by queryFeatures changing depending on the zoom level (as detailed in the API reference for queryFeatures of FeatureLayerView).
Since it seems I'm stuck using a server-side query, I need to understand why this is happening at such a seemingly arbitrary number.
At first I thought it was related to possible topology issues between features, but why would that affect the polygon generated by the Draw widget? Before writing this question I also ran the Integrate tool on all feature layers just to make sure there weren't any non-coincident polygons.
Question
Why is the upper limit of features returned by the queryFeatures() on the localitiesLayer 2,000 even when more than 2,000 point features intersect with a selected polygon?
Why does querying with a feature layer view fix this issue (though as detailed above is not a valid solution to this problem)?
CodePen of app with bug
Feature services usually have a maximum number of features they will return in one query. That is what is happening here.
You can check the service endpoint of the layer (LAU_Localities_View - 0) to find this value under Max Record Count, here set to 2000.
So you will have to use some other technique in order to get all the values. One simple way is to iterate, querying with an extra condition that uses a field as the last index, for example OBJECTID. You will have to order the results by the index field.
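A hedged sketch of that pagination pattern with the ArcGIS JS API, using OBJECTID as the index field (treat the exact query plumbing as an assumption to adapt to your app):
function queryAll(layer, baseQuery, lastId, collected) {
    var query = baseQuery.clone();
    // Keep the original spatial filter, but only fetch rows past the
    // last OBJECTID we have already seen.
    query.where = "OBJECTID > " + lastId;
    query.orderByFields = ["OBJECTID"];
    return layer.queryFeatures(query).then(function (results) {
        var features = results.features;
        if (features.length === 0) {
            return collected; // no more pages
        }
        var maxId = features[features.length - 1].attributes.OBJECTID;
        return queryAll(layer, baseQuery, maxId, collected.concat(features));
    });
}

// Usage:
// queryAll(localitiesLayer, query, 0, []).then(function (all) {
//     var fossilsFound = all.length;
// });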
I am trying to implement a collision system for my three.js racing game. I am following this guide to implement the system, which calculates the final linear and angular velocities following a collision between two cars.
https://www.myphysicslab.com/engine2D/collision-en.html#resting_contact
However, I am having trouble finding the correct direction for the normal. According to the link: "Let the vector n be normal (perpendicular) to the edge of body B that is being impacted, and pointing outward from body B." I am using the following method for finding this normal.
How do I calculate the normal vector of a line segment?
Finding the numerical value of the normal is easy, but I have trouble making my program use the correct direction. For instance, I want the blue normal and not the red one.
Here is a picture that explains what I mean more clearly, I hope:
No formula can guess what side of the surface you are interested in, so it is up to you to provide this hint.
You can select the right orientation by using either Rap x Rbp or Rbp x Rap, but it is up to you to choose, depending on the orientation conventions used in your model. (With the little information provided, I can't tell you more.)
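One common way to encode that hint, sketched here with the three.js Vector3 API, is to flip the candidate normal whenever it points toward body B's centre; the variable names are assumptions:
function orientNormal(normal, contactPoint, bodyCenterB) {
    // Vector from body B's centre to the contact point.
    var outward = contactPoint.clone().sub(bodyCenterB);
    if (normal.dot(outward) < 0) {
        normal.negate(); // the normal was pointing into body B: flip it
    }
    return normal;
}
This works as long as body B is convex; for concave shapes you need a per-edge convention instead.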
I'm trying to cluster photos (GPS + timestamp) around known GPS locations.
3d points = 2d + time stamp.
For example:
I walk along the road and take photos of lampposts; some are interesting, so I take 10 photos, and others are not, so I don't take any.
I'd like to cluster my photos around the lampposts, allowing me to see which lamppost was being photographed.
I've been looking at something like k-means clustering, and I wanted something more intelligent than just snapping the photos to the nearest lamppost.
(I'm going to write the code in JavaScript for a client-side app handling about (2000, 500) points at a time.)
K-means clustering is indeed a popular and easy-to-implement algorithm, but it has a couple of problems.
You need to feed it the number of clusters N as an input variable. Since I assume you don't know how many "things" you want to photograph, finding the right N is a problem in itself. Using iterative k-means or similar variations only shifts the problem to finding a proper evaluation function for multi-cluster partitions, which is in no way easier than finding N itself.
It can only detect linearly separable shapes. Let's say you are walking around Versailles, and you take a lot of pictures of the external walls. Then you move inside and take pictures of the inner garden. The two shapes you obtain are a torus with a disk inside it, but k-means can't distinguish them.
Personally, I'd go with some sort of density-based clustering: you'll still have to feed the algorithm some variables, but, since we assume the space is Euclidean, finding them shouldn't take too much effort. Plus, it gives you the ability to distinguish noise points from cluster points and lets you treat them differently.
Furthermore, it can distinguish between most shapes, and you don't need to give the number of clusters beforehand.
Density based clustering, such as DBSCAN, definitely is the way to go.
The two parameters of DBSCAN should be quite obvious to set:
epsilon: this is the radius for clustering, so e.g. you could use 10 meters, assuming that no two lampposts are closer than 10 meters to each other. (You should be using geodetic distance, not Euclidean!)
minPts: essentially the minimum size of a cluster. You can use 1 or 2, even.
distance: this parameter is implicit, but probably the most important. You can use a combination of space and time here, e.g. 10 meters spatially and 1 year in the time domain. See Generalized DBSCAN for the more flexible version, which makes it obvious how to use multiple domains.
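A compact sketch of DBSCAN with those parameters combined into one space/time neighbourhood predicate; it uses plain Euclidean metres for brevity, whereas a real implementation should use geodetic distance as noted above:
function neighbours(points, i, epsMetres, epsSeconds) {
    var result = [];
    for (var j = 0; j < points.length; j++) {
        var dx = points[j].x - points[i].x;
        var dy = points[j].y - points[i].y;
        var dt = Math.abs(points[j].t - points[i].t);
        if (Math.sqrt(dx * dx + dy * dy) <= epsMetres && dt <= epsSeconds) {
            result.push(j);
        }
    }
    return result;
}

function dbscan(points, epsMetres, epsSeconds, minPts) {
    var labels = new Array(points.length); // undefined = unvisited
    var cluster = 0;
    for (var i = 0; i < points.length; i++) {
        if (labels[i] !== undefined) continue;
        var seeds = neighbours(points, i, epsMetres, epsSeconds);
        if (seeds.length < minPts) { labels[i] = -1; continue; } // noise
        cluster++;
        labels[i] = cluster;
        for (var k = 0; k < seeds.length; k++) {
            var q = seeds[k];
            if (labels[q] === -1) labels[q] = cluster; // border point
            if (labels[q] !== undefined) continue;
            labels[q] = cluster;
            var qSeeds = neighbours(points, q, epsMetres, epsSeconds);
            if (qSeeds.length >= minPts) seeds = seeds.concat(qSeeds);
        }
    }
    return labels; // -1 = noise, 1..n = cluster id
}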
You can use a Delaunay triangulation to look for nearest points. It gives you a nearest-neighbour graph in which each point's nearest neighbour is connected to it by a Delaunay edge. Or you can cluster by colour, as in a photo mosaic; that uses an antipole tree. Here is a similar answer: Algorithm to find for all points in set A the nearest neighbor in set B