I am using Node.js and a concave hull algorithm to isolate UK postcode sectors.
This is what I get for now:
So now, I need to smooth boundaries to look like this:
Does anyone have any idea which algorithm I should use?
There seem to be a lot of ways of doing this. I'd point to some kind of Bézier interpolation (http://www.antigrain.com/research/bezier_interpolation/).
@amit gave another great clue about how to solve the problem: splines are actually pretty useful for smoothing polygons. See the related question: https://gis.stackexchange.com/questions/24827/how-to-smooth-the-polygons-in-a-contour-map
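One simple spline-flavoured option is Chaikin's corner-cutting algorithm, which converges to a quadratic B-spline. A minimal sketch, assuming the polygon is a closed ring of plain {x, y} points (the function name and signature are mine, not from any library):

```javascript
// One pass of Chaikin's corner-cutting: each edge contributes two new
// points at 1/4 and 3/4 along it. Repeating the pass rounds off the
// corners, converging to a quadratic B-spline.
function chaikin(points, iterations = 2) {
  let pts = points;
  for (let it = 0; it < iterations; it++) {
    const next = [];
    for (let i = 0; i < pts.length; i++) {
      const a = pts[i];
      const b = pts[(i + 1) % pts.length]; // wrap around: closed polygon
      next.push({ x: 0.75 * a.x + 0.25 * b.x, y: 0.75 * a.y + 0.25 * b.y });
      next.push({ x: 0.25 * a.x + 0.75 * b.x, y: 0.25 * a.y + 0.75 * b.y });
    }
    pts = next;
  }
  return pts;
}
```

Each iteration doubles the point count, so two or three passes are usually enough before the result looks smooth at screen resolution.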
Hope it helps!
There are at least 2 approaches to this:
Curve-fitting algorithms (best for your use case)
Ramer–Douglas–Peucker algorithm (simpler to implement)
The Ramer–Douglas–Peucker algorithm reduces the node count of a polygon. It will not smooth it in the sense of making it curvy; it simply reduces the number of nodes (the roughness) while keeping the polygon as close to its original shape as possible.
What you are most probably after, though, is a curve-fitting algorithm through a series of points.
See this answer I've made (and the answer above, which is more descriptive) for solutions to this.
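For reference, the node-reduction step described above can be sketched like this (a minimal Ramer–Douglas–Peucker over {x, y} points; the helper names are mine):

```javascript
// Perpendicular distance from point p to the infinite line through a and b.
function perpDist(p, a, b) {
  const dx = b.x - a.x, dy = b.y - a.y;
  const len = Math.hypot(dx, dy) || 1; // guard against a === b
  return Math.abs(dy * p.x - dx * p.y + b.x * a.y - b.y * a.x) / len;
}

// Ramer–Douglas–Peucker: keep the point farthest from the chord if it
// deviates more than epsilon, and recurse on both halves.
function rdp(points, epsilon) {
  if (points.length < 3) return points.slice();
  const a = points[0], b = points[points.length - 1];
  let maxDist = 0, index = 0;
  for (let i = 1; i < points.length - 1; i++) {
    const d = perpDist(points[i], a, b);
    if (d > maxDist) { maxDist = d; index = i; }
  }
  if (maxDist <= epsilon) return [a, b]; // everything is close to the chord
  const left = rdp(points.slice(0, index + 1), epsilon);
  const right = rdp(points.slice(index), epsilon);
  return left.slice(0, -1).concat(right); // drop the duplicated split point
}
```

Larger epsilon values remove more nodes; the endpoints are always preserved.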
I have a chart whose line I wish to make as smooth as possible. The line should still keep the overall pattern and stay as close to the original line as possible - but I need to be able to smooth all "bumps" away 100%, or to whatever degree I wish.
When I say "100% smooth" I mean something like this (try drawing a curved line in the square): http://soswow.github.io/fit-curve/demo/
The line must only go up or down (while the main trend is up- or downwards) - e.g. like a sine curve. Now imagine you added a lot of noise/bumps of different sizes and frequencies to the sine curve, but you would like to "restore" the curve without changing its overall pattern. That is exactly my need. The ideal: being able to filter away exactly the selected level of noise/frequency I wish to remove from the main trend.
SMA is lagging in nature and I need something that stays a lot closer to the actual data points in time.
I know the lag of SMA is normally accepted - but I don't accept it ;) I strongly believe it is possible to do better. DMA can shift the data points themselves, but it has no effect on the data-point information in real time, which is what I'm looking for as well. I know I have to hack/compensate, and I could come up with hundreds of approaches myself (mixing all the algorithms I know, running them multiple times, etc.) - but I guess someone out there is way smarter than me and this has already been solved. I would definitely be surprised if a standard algorithm for exactly this issue didn't exist.
I have looked into many different algorithms (moving averages, median filters, polynomial regression, Savitzky-Golay, etc.), but none of them worked satisfactorily: either the result is still way too "bumpy" and "pixelated", or it becomes too laggy.
Lastly, I found cubic and quadratic Bézier curves, which seem pretty interesting, but I don't know how to apply them to all my data points, and I can't find a suitable NPM package (I can only find libraries like https://www.npmjs.com/package/bezier-easing, which takes only one data point - not what I'm looking for).
Savitzky-Golay is better than a regular MA, but I still believe it lags too much by the time it is as smooth as I consider acceptable.
The task is pre-processing and noise reduction of temperature, price and similar charts in real time, before the data is handed over to an AI that looks for abnormalities (too much noise seems to confuse the AI and is also unnecessary for the most part). The drawing was only an example, just like my mentioning a "sine curve" (to illustrate my point). The chart is in general very arbitrary and doesn't follow any predefined patterns.
I would like to emphasize again that the primary requirement for the selected algorithm/procedure is that it generates a chart line whose lag behind the main chart's overall trend is kept to an absolute minimum, while making it possible to adjust the level at which the noise reduction takes place :-)
I have also made a small drawing in Paint, just so you can easily see my point :-) screencast.com/t/jFq2sCAOu The algorithm should remove and replace all instances/areas of a given chart that match the selected frequency - the drawing shows only one of each, but normally there would be many different areas of the chart with the same level of noise.
Please let me know if all this makes sense to you - otherwise please pinpoint what I need to elaborate on.
All help, ideas and suggestions are highly appreciated.
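To make the low-lag requirement concrete, here is roughly the kind of behaviour I mean: an adaptive exponential smoother whose smoothing factor grows with the local rate of change, in the spirit of the 1€ filter. This is only an illustrative sketch with parameter names I made up, not something I'm already using:

```javascript
// Adaptive exponential smoother: the weight given to the newest sample
// increases when the signal moves quickly, so fast trends are tracked
// with little lag while small high-frequency noise is averaged away.
// baseAlpha sets the noise-reduction level; speedGain sets how strongly
// large moves override the smoothing.
function adaptiveSmooth(values, baseAlpha = 0.05, speedGain = 0.5) {
  const out = [values[0]];
  let prev = values[0];
  for (let i = 1; i < values.length; i++) {
    const speed = Math.abs(values[i] - prev); // crude derivative estimate
    const alpha = Math.min(1, baseAlpha + speedGain * speed);
    prev = alpha * values[i] + (1 - alpha) * prev;
    out.push(prev);
  }
  return out;
}
```

With baseAlpha low and speedGain high, small wiggles are flattened but a genuine jump in the data is followed almost immediately - which is exactly the trade-off I want to be able to tune.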
I hope I am being clear here; I will try to add images to help people understand my problem. I have a very simple Perlin-noise lake generator:
that outputs a lake as an array of points:
[
{x: 0, y: 0},
...
]
I am using a simple tracer to generate a polygon from those points. It worked very well and I was happy with it until I thought of an issue that might happen - and then found that it actually did. When there are two separate lakes in one chunk (like this),
the polygon tracer fails to create a valid polygon. I believe the solution is to first separate the points into groups. I looked up how to do this, but all I found was an algorithm that needed to know how many groups there were supposed to be before it could work (which I do not know). I am completely stumped and would like some advice on where to start. I don't need a full code answer (I can implement it myself), but a concept would be nice.
I will keep looking around and trying things while I wait for answers, although I doubt that with my knowledge I will find anything useful. Oh, and my polygon tracer uses marching squares, if that is important.
What you need is a clustering algorithm. And since you say that you don't know the number of clusters beforehand, I would recommend the Mean Shift algorithm.
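Mean shift doesn't need the cluster count up front: each point is iteratively shifted to the mean of its neighbours within a bandwidth until it stops moving, and points that converge to the same mode belong to the same cluster. A minimal sketch over plain {x, y} points (the O(n²) neighbour search and the bandwidth value are simplifying assumptions you would tune to your chunk size):

```javascript
// Assign a cluster label to each point via mean shift.
// bandwidth: radius of the neighbourhood used for the mean.
function meanShiftCluster(points, bandwidth, maxIter = 50, tol = 1e-3) {
  // 1. Shift a copy of each point toward the local mean until convergence.
  const modes = points.map(p => {
    let x = p.x, y = p.y;
    for (let i = 0; i < maxIter; i++) {
      let sx = 0, sy = 0, n = 0;
      for (const q of points) {
        const dx = q.x - x, dy = q.y - y;
        if (dx * dx + dy * dy <= bandwidth * bandwidth) {
          sx += q.x; sy += q.y; n++;
        }
      }
      const nx = sx / n, ny = sy / n;
      const moved = Math.hypot(nx - x, ny - y);
      x = nx; y = ny;
      if (moved < tol) break; // converged to a mode
    }
    return { x, y };
  });
  // 2. Merge modes that landed (almost) on the same spot into one label.
  const centers = [];
  return modes.map(m => {
    let k = centers.findIndex(
      c => Math.hypot(c.x - m.x, c.y - m.y) < bandwidth / 2);
    if (k === -1) { centers.push(m); k = centers.length - 1; }
    return k;
  });
}
```

Once every point has a label, you can run your marching-squares tracer once per group and each lake gets its own valid polygon.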
Given the start and end points and the two control points of a Bézier curve, I would like to calculate the subdivisions (in JavaScript) needed to approximate the curve with straight line segments within an angular tolerance (avoiding too sharp an angle between segments). I mainly want to see if an efficient open-source algorithm already exists before I try to write my own.
Here is what I have found that comes close to doing this:
https://github.com/turf-junkyard/turf-bezier - although it's not quite the same, I could use some of the code, since I already have the spline.
https://github.com/seanchas116/bezier-subdivide - this seems to do exactly what I want, although it looks like a recursive algorithm that would be costly to performance.
https://pomax.github.io/bezierjs/ - getLUT() could be useful but I would need a way to decide how many steps.
http://ciechanowski.me/blog/2014/02/18/drawing-bezier-curves/ - pretty much what I want, but it isn't in JavaScript.
http://antigrain.com/research/adaptive_bezier/ - helpful theory.
This module should do what is needed: https://github.com/mattdesl/adaptive-bezier-curve
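The core idea behind that module (and the antigrain article) is recursive de Casteljau splitting with a flatness test. A minimal sketch - note it uses a distance-based flatness tolerance rather than the angular tolerance the question asks for, and all names are mine:

```javascript
// Recursively flatten a cubic Bézier (p0, p1, p2, p3 as {x, y}) into a
// polyline. A sub-curve is emitted as one segment once its control
// points sit close enough to the chord, relative to tol.
function flattenCubic(p0, p1, p2, p3, tol, out = [p0]) {
  // Flatness test: cross-product distances of the controls from the chord.
  const dx = p3.x - p0.x, dy = p3.y - p0.y;
  const d1 = Math.abs((p1.x - p3.x) * dy - (p1.y - p3.y) * dx);
  const d2 = Math.abs((p2.x - p3.x) * dy - (p2.y - p3.y) * dx);
  if ((d1 + d2) * (d1 + d2) <= tol * (dx * dx + dy * dy)) {
    out.push(p3); // flat enough: one straight segment
    return out;
  }
  // De Casteljau split at t = 0.5 into two half-curves.
  const mid = (a, b) => ({ x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 });
  const p01 = mid(p0, p1), p12 = mid(p1, p2), p23 = mid(p2, p3);
  const p012 = mid(p01, p12), p123 = mid(p12, p23);
  const m = mid(p012, p123); // point on the curve at t = 0.5
  flattenCubic(p0, p01, p012, m, tol, out);
  flattenCubic(m, p123, p23, p3, tol, out);
  return out;
}
```

Because the recursion only goes deeper where the curve actually bends, flat regions cost a single segment, which is what makes adaptive subdivision cheaper than a fixed-step getLUT().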
I'm working on a problem that requires randomly generating convex polyhedrons and placing them into a cube/cylinder container at randomly chosen points without overlapping. I'm using three.js to get graphical output.
A demo.
While placing a polyhedron, how do I check whether it intersects the other polyhedrons?
The convex polyhedrons involved are simply tetrahedrons or hexahedrons, constructed using THREE.ConvexGeometry. As I need a precise check, a bounding box is not enough; I only use it as a quick first pass to rule out pairs that clearly don't intersect.
I've done a lot of research and found many complicated theories and methods. What I need is a Boolean result that tells whether an intersection exists between two convex polyhedrons. SAT (the Separating Axis Theorem) in 3D would be good enough, but three.js doesn't seem capable of this out of the box. Can anyone tell me how to do this kind of check in a simple way, or explain how to do it with SAT in 3D?
You can take a look at http://www.realtimerendering.com/intersections.html. Even though the site is from 2011, the intersection algorithms have not changed much since. From the demo it seems that once the polyhedrons are placed in the cube they don't move, so SAT may not be the best fit, as it is typically used for moving polyhedra.
The Gilbert–Johnson–Keerthi (GJK) algorithm is a powerful algorithm that lets you measure the distance between, and check for intersection of, convex polyhedrons. Still, I believe it is best used on simple polyhedrons, otherwise the computation in the support function can take some time. A possible drawback is that you need functions to measure the distance between a point and another point/segment/triangle; I do not know whether these are available in three.js.
http://en.wikipedia.org/wiki/Gilbert%E2%80%93Johnson%E2%80%93Keerthi_distance_algorithm
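If you do want to try the SAT route the question mentions, the test is short once you have each polyhedron's vertices, face normals and edge directions. The candidate axes are the face normals of both shapes plus the cross products of every edge-direction pair; two convex shapes intersect iff their projections overlap on every candidate axis. A sketch with plain {x, y, z} objects (THREE.Vector3 would work just as well; the shape format here is my own assumption):

```javascript
// Each shape: { vertices: [{x,y,z}, ...], faceNormals: [...], edgeDirs: [...] }
function dot(a, b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
function cross(a, b) {
  return { x: a.y * b.z - a.z * b.y,
           y: a.z * b.x - a.x * b.z,
           z: a.x * b.y - a.y * b.x };
}
// Min/max of the vertices projected onto an axis.
function projectRange(verts, axis) {
  let min = Infinity, max = -Infinity;
  for (const v of verts) {
    const d = dot(v, axis);
    if (d < min) min = d;
    if (d > max) max = d;
  }
  return [min, max];
}
function satIntersect(a, b) {
  // Candidate axes: face normals of A and B, plus edge-pair cross products.
  const axes = [...a.faceNormals, ...b.faceNormals];
  for (const ea of a.edgeDirs)
    for (const eb of b.edgeDirs) {
      const c = cross(ea, eb);
      if (dot(c, c) > 1e-12) axes.push(c); // skip (near-)parallel edge pairs
    }
  // Intersecting iff no axis separates the projected intervals.
  return axes.every(axis => {
    const [minA, maxA] = projectRange(a.vertices, axis);
    const [minB, maxB] = projectRange(b.vertices, axis);
    return minA <= maxB && minB <= maxA;
  });
}
```

For the tetrahedra/hexahedra in the question the axis count stays small, so this brute-force form is perfectly usable after the bounding-box pre-check.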
Does anybody have an idea how I can create shapes from Canny edge detection on a canvas?
I assume here that you already have the Canny edge detection implemented, based on the way the question is formulated.
You can use an approach such as this (written in Java, but it should be easy enough to translate into JavaScript) and/or perhaps some limited use of (statistical) line-fitting approaches.
The essence is that you will have to find out which pixels are connected and build the polygon objects/arrays yourself from the result of the edge detection.
Once you have the connected pixels, you can use point-reduction algorithms such as the Ramer–Douglas–Peucker algorithm (JavaScript implementation here) so that the polygons don't end up containing every single point along similarly sloped lines and so forth.
You will run into a variety of challenges though, such as short segmented lines due to too much noise in the original image or "weak" lines, and clusters of "lines" that make it hard to work out how to connect them as polygons.
I don't know of any libraries for this, however you could:
Use getImageData() to access a byte[] of pixel data
Implement your own convolution filter on top of that data (examples for this may exist online)
In this way you can find areas of high contrast (edges).
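As an illustration of the convolution step, a Sobel-style edge filter over a grayscale buffer could look like this (a sketch, assuming you have already collapsed the getImageData() RGBA bytes into one gray value per pixel; threshold is a made-up tuning parameter):

```javascript
// Mark pixels whose Sobel gradient magnitude exceeds threshold.
// gray: one value per pixel, row-major, width * height entries.
function sobelEdges(gray, width, height, threshold = 128) {
  const gx = [-1, 0, 1, -2, 0, 2, -1, 0, 1]; // horizontal-gradient kernel
  const gy = [-1, -2, -1, 0, 0, 0, 1, 2, 1]; // vertical-gradient kernel
  const edges = new Uint8Array(width * height);
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      let sx = 0, sy = 0, k = 0;
      for (let j = -1; j <= 1; j++)
        for (let i = -1; i <= 1; i++, k++) {
          const v = gray[(y + j) * width + (x + i)];
          sx += gx[k] * v;
          sy += gy[k] * v;
        }
      edges[y * width + x] = Math.hypot(sx, sy) >= threshold ? 255 : 0;
    }
  }
  return edges;
}
```

The resulting binary edge map is then the input to the pixel-connectivity / polygon-building step described in the other answer.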
EDIT: I agree with Ken - I may have misread the question.
In addition to Ken's answer: if you know what kinds of shapes you're looking for, you might like to look at the Hough transform, which is well suited to detecting lines, ellipses and other shapes that can be geometrically defined using only a few parameters.
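For the line case, the transform is small enough to sketch: every edge pixel votes for all (theta, rho) lines passing through it, and peaks in the accumulator are the detected lines. A bare-bones version (assuming `points` is an array of {x, y} edge pixels; returning only the single strongest line for brevity):

```javascript
// Minimal Hough transform for straight lines.
// Parameterization: rho = x * cos(theta) + y * sin(theta).
function houghLines(points, width, height, thetaSteps = 180) {
  const maxRho = Math.ceil(Math.hypot(width, height));
  const rhoBins = 2 * maxRho + 1; // rho can be negative
  const acc = new Uint32Array(thetaSteps * rhoBins);
  for (const { x, y } of points) {
    for (let t = 0; t < thetaSteps; t++) {
      const theta = (t * Math.PI) / thetaSteps;
      const rho = Math.round(x * Math.cos(theta) + y * Math.sin(theta));
      acc[t * rhoBins + (rho + maxRho)]++; // one vote per (theta, rho) cell
    }
  }
  // Pick the accumulator cell with the most votes.
  let best = 0, bestIdx = 0;
  acc.forEach((v, i) => { if (v > best) { best = v; bestIdx = i; } });
  const t = Math.floor(bestIdx / rhoBins);
  return { theta: (t * Math.PI) / thetaSteps,
           rho: bestIdx % rhoBins - maxRho,
           votes: best };
}
```

In practice you would keep every cell above a vote threshold rather than just the maximum, and for ellipses or other shapes the accumulator simply gains more parameter dimensions.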