d3js v5 + Topojson v3 Map with border rendering - javascript

I'm back with another problem. I've only been learning d3.js for 5 days, and I'm having trouble with border rendering.
Why do I get this?
The problem occurs when I add region borders.
svgMap.append("path")
    .attr("class", "reg_contour")
    .datum(topojson.mesh(fr[0], fr[0].objects.reg_GEN_WGS84_UTF8, function(a, b) { return a !== b; }))
    .attr("d", path);
Here is my code : https://plnkr.co/edit/mD1PzxtedWGrZd5ave28?p=preview
Just for the record, the JSON file combines two layers (departments and regions) created from the same shapefile and exported from QGIS as GeoJSON. I then converted it to TopoJSON with mapshaper.

The error lies in your topojson: the two feature types, departments and regions, do not share the same coordinates along their common boundaries.
First, in this sort of situation it is worth checking that there is no layering issue (features are often drawn on top of others, hiding them or portions of them). We can do this by showing only the regional boundaries:
(plunkr)
So the problem isn't layering. If we look at a particular feature in the topojson, say the department of Creuse:
{"arcs":[[-29,-28,-27,-26,202,-297,-296,205,-295,-410,419]],"type":"Polygon","properties":{"codgeo":"23","pop":120581,"libgeo":"Creuse","libgeo_m":"CREUSE","codreg":"75","libreg":"Nouvelle Aquitaine"}}
We see that the department is drawn using 11 arcs. Each arc represents a portion of the boundary shared with neighbouring features, so that shared boundaries are only stored once in the data.
If we zoom in on Creuse we can see those 11 arc segments shared between either other departments, regions, or with nothing at all:
The thick portions of the boundary correspond to the thick white boundaries in the image in the question; I've only changed the styling and zoom from the original plunkr.
This looks problematic: the department should only have 6 arcs:
Why are there additional arcs? Because the boundaries are not aligned properly: boundaries shared between the departments and regions do not always use the same arcs in your topojson. Chances are the departments use a different scale, precision, or projection than the regions, or were generated differently in some way. This produces minute, nearly imperceptible differences, so boundaries that share coordinates in reality do not share the same coordinates in the data, and the shared arcs go unrecognized.
Since you are generating the mesh like this:
topojson.mesh(fr[0], fr[0].objects.reg_GEN_WGS84_UTF8, function(a, b) { return a !== b; })
Only shared boundaries are drawn, which explains the gaps.
We can rectify this in a few ways; the easiest would be to remove the regions altogether. The departments record which region they are in, so we can draw a boundary only when the departments on each side of it are in different regions:
.datum(topojson.mesh(fr[0], fr[0].objects.dep_GEN_WGS84_UTF8, function(a, b) { return a.properties.libreg !== b.properties.libreg; }))
Which gives us:
(plunkr)
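For reference, a minimal sketch of how that filter slots into the append call from the question (assuming the department object is named dep_GEN_WGS84_UTF8, as above):

svgMap.append("path")
    .attr("class", "reg_contour")
    // Build the region borders from the department layer: keep a boundary segment
    // only when the departments on either side belong to different regions.
    .datum(topojson.mesh(fr[0], fr[0].objects.dep_GEN_WGS84_UTF8, function(a, b) {
        return a.properties.libreg !== b.properties.libreg;
    }))
    .attr("d", path);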
Alternatively, we can rehabilitate the regional boundaries by importing them into a GIS platform such as ArcGIS and repairing the geometry. We could also import the departments and dissolve them into a new layer based on the region property. Using the repair geometry tool in Arc, I get a nice boundary (when shown with the same code as the first image here):
There are other methods, such as aligning arcs using a snapping tolerance, but these might be more difficult than the above.

Related

Triangulating contours into 2d mesh with color data intact

I am using a JavaScript library, Tess2, to triangulate a series of contours.
https://github.com/memononen/tess2.js/blob/master/src/tess2.js
It generates a perfect 2d mesh of any shape consisting of multiple contours:
A contour consists of a series of points (in a negative winding order for solid fills, in a positive winding order for holes)
However, the resulting triangles output by the algorithm are no longer tied to a contour and its fill color.
How would I alter Tess2 (or any other javascript library that tesselates contours) to allow for the retention of color data in the resulting triangles?
I've tried looking everywhere and I cannot find a solution.
From what I've seen in the source code, the tessellation function includes vertex indices in the returned object:
Tess2.tesselate = function(opts) {
    ...
    return {
        vertices: tess.vertices,
        vertexIndices: tess.vertexIndices,
        vertexCount: tess.vertexCount,
        elements: tess.elements,
        elementCount: tess.elementCount,
        mesh: debug ? tess.mesh : undefined
    };
};
You can create a new array with the color of each input vertex, and then use vertexIndices from the returned object to look up the color of each output vertex.
If you would like a color per face, you would generate an array like the one above, assigning the same color to every vertex of the face. You may also want to wrap all of this data in some kind of convenient object or class.
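A minimal sketch of that idea, assuming contours is an array of flat [x0, y0, x1, y1, ...] arrays and contourColors[i] holds the fill color of contours[i] (both names are hypothetical):

// Build one color entry per input vertex, in the order Tess2 receives them.
var inputVertexColors = [];
contours.forEach(function(contour, ci) {
    for (var v = 0; v < contour.length / 2; v++) {
        inputVertexColors.push(contourColors[ci]);
    }
});

var res = Tess2.tesselate({
    contours: contours,
    windingRule: Tess2.WINDING_ODD,
    elementType: Tess2.POLYGONS,
    polySize: 3,
    vertexSize: 2
});

// res.vertexIndices[i] is the index of the input vertex behind output vertex i,
// so we can carry the color across to the tessellated mesh.
var outputVertexColors = res.vertexIndices.map(function(srcIndex) {
    return inputVertexColors[srcIndex];
});

(As the edit below notes, vertices at identical positions get merged, so this mapping can be ambiguous where differently coloured contours touch.)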
[EDIT]
It turns out that the tessellation algorithm merges vertices at the same position, meaning that it reorganizes the vertex array completely. There is a way to explicitly avoid merging different contours with overlapping vertices:
Tess2.tesselate({ contours: yourContours, elementType: Tess2.BOUNDARY_CONTOURS });
That should preserve the original vertices, though not in their original order; use vertexIndices to recover their original positions.
After many failed attempts I finally got there.
All this time I had been trying to process a huge number of contours at once, in a single tessellation pass.
I tried editing the tessellation library to make each half-edge retain its original contour ID. I had several eureka moments when it finally seemed to work, only to be disappointed when I stress-tested it and found it less than perfect.
But it turns out I've been incredibly daft...
All I had to do was group the contours by fill, and then tessellate each group independently.
I hadn't understood that for every interior fill there is always an opposite contour in the enclosing fill, effectively cutting a hole in the outer contour loop.
For instance, to represent a red box with a blue box inside, there would be 2 red contours and 1 blue. I thought it could be represented with only 1 blue contour and 1 red, where the blue contour would also represent the red contour's hole, and so processing each colour group of contours independently didn't make sense to me.
When I finally realised this, I figured it out.
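A minimal sketch of that grouping approach; the shape.contours structure (each contour carrying a fill and a flat points array) is hypothetical, so adapt it to your own data:

function tessellateByFill(shape) {
    // Bucket the contours by fill color.
    var groups = {};
    shape.contours.forEach(function(contour) {
        (groups[contour.fill] = groups[contour.fill] || []).push(contour.points);
    });

    // Tessellate each color group independently, so every triangle
    // unambiguously belongs to a single fill.
    return Object.keys(groups).map(function(fill) {
        return {
            fill: fill,
            mesh: Tess2.tesselate({
                contours: groups[fill],
                windingRule: Tess2.WINDING_ODD,
                elementType: Tess2.POLYGONS,
                polySize: 3,
                vertexSize: 2
            })
        };
    });
}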
I've published a solution on github but I'm not sure how much use it is to anyone:
https://github.com/hedgehog90/triangulate-contours-colour-example
I've included a pretty comprehensive exporter script for converting contours (including curves) into polygonal paths for Adobe Flash/Animate which might be useful for someone.
I will be writing an OBJ exporter on top of this shortly, so I can represent vector graphics in a 3D engine.

Plotting custom json maps with D3.js

I am creating a map with D3.js.
I began by downloading the country (Canada) shapefile here:
https://www.arcgis.com/home/item.html?id=dcbcdf86939548af81efbd2d732336db
...and converted it into GeoJSON here (link to file below):
http://mapshaper.org/
So far all I see is a coloured block, without any errors on the console. My question is, how can I tell if my json file or my code is incorrect?
Here is my code and on bottom is a link to json file.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <title>D3: Setting path fills</title>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.5.6/d3.min.js"></script>
    <!-- <script src="https://d3js.org/topojson.v1.min.js"></script> -->
    <style type="text/css">
        /* styles */
    </style>
</head>
<body>
    <script type="text/javascript">
        var canvas = d3.select("body").append("svg")
            .attr("width", 760)
            .attr("height", 700);

        d3.json("canada.geo.json", function(data) {
            var group = canvas.selectAll("g")
                .data(data.features)
                .enter()
                .append("g");

            var projection = d3.geo.mercator();
            var path = d3.geo.path().projection(projection);

            var areas = group.append("path")
                .attr("d", path)
                .attr("class", "area");
        });
    </script>
</body>
</html>
Link to json file:
https://github.com/returnOfTheYeti/CanadaJSON/blob/master/canada.geo.json
A d3 geoProjection uses unprojected coordinates - coordinates on a three dimensional globe. The geoProjection takes those coordinates and projects them onto a two dimensional plane. The units of unprojected coordinates are generally degrees longitude and latitude, and a d3 geoProjection expects this. The problem is that your data is already projected.
How can I tell if the data is projected?
There are two quick methods to determine if your data is projected:
look at the meta data of the data
look at the geographic coordinates themselves
Look at the Geographic Metadata
The projection your data uses is defined in the .prj file that forms part of the collection of files that makes up a shapefile:
PROJCS["Canada_Albers_Equal_Area_Conic",
GEOGCS["GCS_North_American_1983",
DATUM["D_North_American_1983",
SPHEROID["GRS_1980",6378137.0,298.257222101]],
PRIMEM["Greenwich",0.0],
UNIT["Degree",0.0174532925199433]],
PROJECTION["Albers"],
PARAMETER["False_Easting",0.0],
PARAMETER["False_Northing",0.0],
PARAMETER["Central_Meridian",-96.0],
PARAMETER["Standard_Parallel_1",50.0],
PARAMETER["Standard_Parallel_2",70.0],
PARAMETER["Latitude_Of_Origin",40.0],
UNIT["Meter",1.0]]
Your data is already projected with an Albers projection, and the unit of measurement is the meter. Projecting this data as though it consists of lat/long pairs will not work.
If you only have a geojson file and no reference shapefile, some geojson files specify an EPSG number in a projection property; if this number is something other than 4326, you probably have projected data.
Look at the Coordinates
You can tell your data isn't unprojected because the values of each coordinate are outside the bounds of longitude and latitude (+/-180 degrees east/west, +/-90 degrees north/south):
"coordinates":[[[[899144.944639163,2633537.
Your coordinates wrap around the globe several times: this is why your projection results in an SVG filled entirely with features.
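If you'd rather check programmatically, here's a rough sketch (the function name is mine) that flags a loaded GeoJSON FeatureCollection whose coordinates fall outside valid longitude/latitude ranges:

// Quick heuristic: any coordinate outside +/-180 longitude or +/-90 latitude
// cannot be an unprojected long/lat pair.
function looksProjected(geojson) {
    var projected = false;
    function walk(coords) {
        if (typeof coords[0] === "number") {
            if (Math.abs(coords[0]) > 180 || Math.abs(coords[1]) > 90) projected = true;
        } else {
            coords.forEach(walk);
        }
    }
    geojson.features.forEach(function(f) { walk(f.geometry.coordinates); });
    return projected;
}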
Ok, Now What?
There are two primary solutions available for you:
Convert the projection so that the geojson consists of latitude and longitude points
Use d3.geoTransform or d3.geoIdentity to transform your projected data.
Convert the Projection
To do this you want to "unproject" your data, or alternatively, project it so that it consists of longitude, latitude points.
Most GIS software offers the ability to reproject data. It's much easier with a shapefile than a geojson, as shapefiles are much more common in GIS software. GDAL, QGIS, ArcMap offer relatively easy conversion.
There are also online converters; mapshaper.org is probably the easiest for this, and has added benefits when dealing with d3, such as simplification (many shapefiles contain far too much detail for the purposes of web mapping). Drag all the files of the shapefile into the mapshaper window, open the console, and type: proj wgs84. Export as GeoJSON (after simplification), and you've got a geojson ready for d3.
After reprojecting, you may notice that your data looks awkward. Don't worry: it's unprojected (well, kind of unprojected; it's shown in 2D, but with a very simple projection that assumes Cartesian input data).
With your unprojected data, you are now ready to project your data in d3.
Here's an example with your data (d3 v4; the data is simplified and reprojected in mapshaper, with which I have no affiliation).
Using d3.geoIdentity or d3.geoTransform
For this I would recommend using d3 v4 (I see your code is v3). While geo.transform is available in v3, it is much more cumbersome without the new methods available in v4, namely d3.geoIdentity and projection.fitSize. I will address the v4 method of using projected data here.
With your data you can define a different sort of projection:
var projection = d3.geoIdentity();
However, this type of "projection" will give you trouble if you aren't careful: it basically spits out the x,y values it is given. Geographic projected coordinate spaces typically have [0,0] somewhere in the bottom left, while SVG coordinate space has [0,0] in the top left. In SVG coordinate space, y values increase as you go down the plane; in the projected coordinate space of your data, y values increase as you go up. Using an identity will therefore project your data upside down.
Luckily we can use:
var projection = d3.geoIdentity()
.reflectY(true);
One last problem remains: the coordinates in the geojson are not scaled or translated so that the features are properly centered. For this there is the fitSize method:
var projection = d3.geoIdentity()
.reflectY(true)
.fitSize([width,height],geojsonObject)
Here width and height are the width and height of the SVG (or the parent container we want to display the feature in), and geojsonObject is a geojson feature. Note that it won't take an array of features; if you have an array of features, place them in a FeatureCollection.
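Putting those pieces together, a minimal sketch (svg, width, height, and the loaded data object are assumed to exist; data should be a FeatureCollection):

var projection = d3.geoIdentity()
    .reflectY(true)                       // flip the y axis to match SVG space
    .fitSize([width, height], data);      // scale and translate to fit the SVG

var path = d3.geoPath().projection(projection);

svg.selectAll("path")
    .data(data.features)
    .enter()
    .append("path")
    .attr("d", path)
    .attr("class", "area");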
Here's your data shown taking this approach (I still simplified the geojson).
You can also use a geoTransform. This is a bit more complex, but it allows you to specify your own transform equation. For most situations it is probably overkill; stick with geoIdentity.
Pros and Cons of Each Option:
Unprojecting the data:
Beyond the initial legwork to unproject the data, having it as longitude/latitude pairs means extra processing (spherical math) each time you show the data.
But you also gain a high degree of flexibility in how you show that data, since any d3 geoProjection is available. Using unprojected data also allows you to align different layers more easily: you don't have to worry about rescaling and transforming multiple layers individually.
Keeping the Projected Data
By keeping the projection the data comes in, you save computing time by not having to do spherical math. The downsides are the inverse of the upsides listed above: it's difficult to match data that doesn't share this projection (which is fine if you export everything using this projection), and you're stuck with the representation; a d3.geoTransform doesn't offer much in the way of converting your projection from, say, a Mercator to an Albers.
Note that the fitSize() method I used for option two above is available for all geoProjections (v4).
In the two examples, I used your code where possible. A couple of caveats, though: I changed to d3 v4 (d3.geo.path -> d3.geoPath and d3.geo.mercator -> d3.geoMercator, for example). I also renamed your variable canvas to svg, since it is a selection of an svg and not a canvas. Lastly, in the first example I didn't modify your projection, and a Mercator's center defaults to [0,0] (in long/lat), which explains the odd positioning.
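For reference, a minimal v4 sketch of the question's loading code with those renames applied (the Mercator projection is left at its defaults, so the positioning issue described above remains):

var svg = d3.select("body").append("svg")
    .attr("width", 760)
    .attr("height", 700);

d3.json("canada.geo.json", function(error, data) {
    if (error) return console.error(error);

    var projection = d3.geoMercator();                // was d3.geo.mercator()
    var path = d3.geoPath().projection(projection);   // was d3.geo.path()

    svg.selectAll("path")
        .data(data.features)
        .enter()
        .append("path")
        .attr("d", path)
        .attr("class", "area");
});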
If by correct you mean that it is valid JSON, then you can simply parse it with JavaScript and check for any errors.
In this example any errors will be logged to the console; otherwise the parsed object will be logged.
// where data is your JSON data
try {
    console.log(JSON.parse(data));
} catch (e) {
    console.error(e);
}
However, as you are already using D3, you can simply use the d3.json method to test for errors. Add an error parameter to the callback:
d3.json("canada.geo.json", function(error, canada) {
if (error) return console.error(error);
console.log(canada);
});
See: https://github.com/d3/d3-3.x-api-reference/blob/master/Requests.md

Render topojson mesh in D3 as distinct paths

I have code like:
d3.json("topo-census-regions.json", function(error, topoJ) {
g.append("path")
.datum(topojson.mesh(topoJ, topoJ.objects.divis, function(a, b) { return a !== b; }))
.attr("class", "division-borders")
.attr("d", path);
});
It works, basically. (This is pretty much straight from Bostock's zoom demo, just using a different source file.)
My problem is that the boundaries between census regions render out as a single SVG path. TopoJson's mesh method collapses all shared internal boundaries into a single compound path. But I need to render different parts of the path with different styles.
For a visual reference, see this.
The boundary between "Pacific" and "Mountain" within the "Western" division should be one path element, the boundary between "West North Central" and "East North Central" should be another, etc. I could manufacture these paths manually in Illustrator fairly easily but need to be able to do this programmatically across a large data set. I want the efficient de-duplication that mesh performs, but with continuous segments as separate elements.
Thanks in advance for any help.

How can I associate multiple rectangles to the entity in Cesiumjs?

In the documentation I see that an entity seems to be able to have different shapes associated with it (point, polygon, polyline, rectangle, billboard, etc.). But how can I add, e.g., multiple rectangles or polygons with different colors, shapes, etc.?
You need to create separate entities. A single entity has a lot of graphics options (point, label, polygon, etc) but only one of each per entity. So if you want three separate labels, you need three entities. They can all be in the same position if need be, with different label pixel offsets.
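For example, a minimal sketch of two independent rectangle entities with different colors (the coordinates and colors are arbitrary):

// Each rectangle is its own entity, so each can have its own extent and material.
viewer.entities.add({
    rectangle : {
        coordinates : Cesium.Rectangle.fromDegrees(-120.0, 20.0, -110.0, 30.0),
        material : Cesium.Color.RED.withAlpha(0.5)
    }
});

viewer.entities.add({
    rectangle : {
        coordinates : Cesium.Rectangle.fromDegrees(-100.0, 25.0, -90.0, 35.0),
        material : Cesium.Color.BLUE.withAlpha(0.5)
    }
});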
Updating my answer to include some "Primitive" code, in response to a comment below.
var rectangle = viewer.scene.primitives.add(new Cesium.RectanglePrimitive({
rectangle : Cesium.Rectangle.fromDegrees(-120.0, 20.0, -60.0, 40.0)
}));

JavaScript Separating Axis Theorem

I'm trying to wrap my head around using the Separating Axis Theorem in JavaScript to detect two squares colliding (one rotated, one not). As hard as I try, I can't figure out what this would look like in JavaScript, nor can I find any JavaScript examples. Please help; an explanation with plain numbers or JavaScript code would be most useful.
Update: After researching lots of geometry and math theories I've decided to roll out a simplified SAT implementation in a GitHub repo. You can find a working copy of SAT in JavaScript here: https://github.com/ashblue/canvas-sat
Transforming polygons
First you have to transform all the points of your convex polygons (squares in this case) so they are all in the same space, by applying each polygon's rotation angle.
For future support of scaling, translation, etc. I recommend doing this through matrix transforms. You'll have to code your own Matrix class or find some library that has this functionality already (I'm sure there are plenty of options).
Then you'll use code in the vein of:
var transform = new Matrix();
transform.appendRotation(alpha);
points = transform.transformPoints(points);
Where points is an array of Point objects or so.
Collision algorithm overview
So that's all before you get to any collision stuff. Regarding the collision algorithm, it's standard practice to try and separate 2 convex polygons (squares in your case) using the following steps:
For each polygon edge (edges of both polygon 0 and polygon 1):
Classify both polygons as "in front", "spanning" or "behind" the edge.
If both polygons are on different sides (1 "in front" and 1 "behind"), there is no collision, and you can stop the algorithm (early exit).
If you get here, no edge was able to separate the polygons: the polygons intersect/collide.
Note that conceptually, the "separating axis" is the axis perpendicular to the edge we're classifying the polygons with.
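A minimal sketch of that outer loop, assuming each polygon exposes an edges array and that classifyPolygon(polygon, edge) returns "front", "behind" or "spanning" (a sketch of it appears further below):

function polygonsIntersect(poly0, poly1) {
    // Test the edges of both polygons as candidate separating axes.
    var edges = poly0.edges.concat(poly1.edges);
    for (var i = 0; i < edges.length; i++) {
        var side0 = classifyPolygon(poly0, edges[i]);
        var side1 = classifyPolygon(poly1, edges[i]);
        // Opposite sides means this edge's normal is a separating axis.
        if ((side0 === "front" && side1 === "behind") ||
            (side0 === "behind" && side1 === "front")) {
            return false; // early exit: no collision
        }
    }
    return true; // no separating edge found: the polygons intersect
}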
Classifying polygons with regards to an edge
In order to do this, we'll classify a polygon's points/vertices with regards to the edge. If all points are on one side, the polygon's on that side. Otherwise, the polygon's spanning the edge (partially on one side, partially on the other side).
To classify points, we first need to get the edge's normal:
// this code assumes p0 and p1 are instances of some Vector3D class
var p0 = edge[0]; // first point of edge
var p1 = edge[1]; // second point of edge
var v = p1.subtract(p0);
var normal = new Vector3D(0, 0, 1).crossProduct(v);
normal.normalize();
The above code uses the cross-product of the edge direction and the z-vector to get the normal. Of course, you should pre-calculate this for each edge instead.
Note: The normal represents the separating axis from the SAT.
Next, we can classify an arbitrary point by first making it relative to the edge (subtracting an edge point), and using the dot-product with the normal:
// point is the point to classify as "in front" or "behind" the edge
var point = point.subtract(p0);
var distance = point.dotProduct(normal);
var inFront = distance >= 0;
Now, inFront is true if the point is in front or on the edge, and false otherwise.
Note that, when you loop over a polygon's points to classify the polygon, you can also exit early if you have at least 1 point in front and 1 behind, since then it's already determined that the polygon is spanning the edge (and not in front or behind).
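Putting the last two snippets together, a sketch of that polygon classification (assuming each edge carries its pre-computed p0 and normal from above, and polygon.points is an array of Vector3D-like points):

function classifyPolygon(polygon, edge) {
    var front = false, behind = false;
    for (var i = 0; i < polygon.points.length; i++) {
        // Make the point relative to the edge and project it onto the normal.
        var distance = polygon.points[i].subtract(edge.p0).dotProduct(edge.normal);
        if (distance >= 0) front = true;
        else behind = true;
        // Early exit: points on both sides means the polygon spans the edge.
        if (front && behind) return "spanning";
    }
    return front ? "front" : "behind";
}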
So as you can see, you still have quite a bit of coding to do. Find some js library with Matrix and Vector3D classes or so and use that to implement the above. Represent your collision shapes (polygons) as sequences of Point and Edge instances.
Hopefully, this will get you started.
