syncing d3.js with THREE.js earth - javascript

I am trying to combine WebGL earth with the d3.geo.satellite projection.
I have managed to overlay the two projections on top of each other and sync their rotation, but I am having trouble syncing the zoom. When I scale them to match in size, the WebGL globe gets deformed while the d3.geo.satellite projection stays the same. I have tried different combinations of projection.scale and projection.distance without much success.
Here is a JS fiddle (it takes a little while to load the resources). You can drag to rotate (that works well), but if you zoom in (use the mouse wheel) you can see the problem.
https://jsfiddle.net/nxtwrld/7x7dLj4n/2/
The important code is at the bottom of the script: the scale function.
function scale() {
    var scale = d3.event.scale;
    var ratio = scale / scale0;

    // scale projection
    projection.scale(scale);

    // scale Three.js earth
    earth.scale.x = earth.scale.y = earth.scale.z = ratio;
}
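For context, here is a minimal sketch of how the zoom behaviour might be wired up so that scale0 and d3.event.scale exist. This is an assumption about the fiddle's setup (D3 v3 API), and canvas is a placeholder for whatever element receives the zoom events.

// Assumed setup, not taken from the fiddle: capture the projection's
// initial scale as the baseline for the zoom ratio used in scale().
var scale0 = projection.scale();

var zoom = d3.behavior.zoom()              // D3 v3 zoom behaviour
    .scale(scale0)
    .scaleExtent([scale0 / 4, scale0 * 8]) // illustrative zoom limits
    .on("zoom", scale);                    // the scale() function above

d3.select(canvas).call(zoom);              // 'canvas' is a placeholder for the zoom target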

I am not using WebGL earth either, and your jsfiddle no longer works when I check it, but my understanding of your problem is that you want to integrate D3.js with Three.js as a solution for a 3D globe.
May I suggest you try earthjs as your solution? Under the hood it uses D3.js v4 and Three.js revision 8x, both the latest at the time of writing, and it can combine SVG, canvas and Three.js (WebGL) rendering.
const g = earthjs({padding: 60})
    .register(earthjs.plugins.mousePlugin())
    .register(earthjs.plugins.threejsPlugin())
    .register(earthjs.plugins.autorotatePlugin())
    .register(earthjs.plugins.dropShadowSvg(), 'dropshadow')
    .register(earthjs.plugins.worldSvg('../d/world-110m.json'))
    .register(earthjs.plugins.globeThreejs('../globe/world.jpg'));

g._.options.showLakes = false;

g.ready(function() {
    g.create();
});
You can run the code snippet above from here.

Related

2D/3D CAD design in JavaScript

I have a 2D design in MicroStation and I want to represent this design on the web using some tool (JavaScript, Unity 3D, or any other), where the web tool will not have all the functionality, but basic functionality like reshaping or adding a new shape should be available.
As of now, my approach is: once I have created a design in MicroStation, I capture the properties of the shapes, such as the coordinates of a line, and use those coordinates to represent the design in the browser. Since this is a 2D design, everything is plotted at some location (x, y). For example, I created a straight line in MicroStation from (2, 2) to (10, 10) and I have all of its coordinates. I tried redrawing it in Unity, which I am able to do, but I am struggling to change the line by mouse click, for example stretching it so it runs from (2, 2) to (20, 20). My goal is to do this at runtime, not in the Unity editor.
This is just the example of a straight line; I want to do this for all geometric shapes. Any guidance would be appreciated.
As of now I am trying Unity, but I am struggling with the editing part. Is there a way to achieve this in Unity?
I have also looked at various JavaScript libraries like Konva.js, Maker.js, Three.js, etc., but apart from Konva none of them provide reshaping facilities, and even in Konva I have not found a way to create shapes with the mouse.
Can this be achieved with either of the two approaches? Of course, I am not looking for all the functionality, only a few custom features. If yes, which approach is best, and which tool should I proceed with?
Any guidance will be helpful.
To draw a line segment, you can use a LineRenderer.
// the two end points of the line segment are known
// (or taken from the Transform of a GameObject)
Vector3 start;
Vector3 end;
Color color = Color.red;   // 'color' was undeclared in the original snippet

GameObject myLine = new GameObject();
myLine.transform.position = start;
myLine.AddComponent<LineRenderer>();

LineRenderer lr = myLine.GetComponent<LineRenderer>();
lr.material = new Material(Shader.Find("Particles/Alpha Blended Premultiply"));
lr.SetColors(color, color);   // startColor/endColor in newer Unity versions
lr.SetWidth(0.1f, 0.1f);      // startWidth/endWidth in newer Unity versions
lr.SetPosition(0, start);
lr.SetPosition(1, end);

// to change the end points of this line later
myLine.transform.position = another_start;
lr.SetPosition(0, another_start);
lr.SetPosition(1, another_end);
There are also other solutions:
Use a scaled cube or capsule primitive.
Third-party plugins such as Vectrosity.
To get the clicked mouse position, use Camera.main.ScreenToWorldPoint(Input.mousePosition).
To determine when the mouse is clicked (button released), use Input.GetMouseButtonUp.

How to create a Three.js 3D line series with width and thickness?

Is there a way to create a Three.js 3D line series with width and thickness?
Even though the Three.js line object supports linewidth, this attribute is not yet supported in all browsers on all platforms in WebGL.
Here's where you set linewidth in Three.js:
var material = new THREE.LineBasicMaterial({
    color: 0xff0000,
    linewidth: 5
});
The Three.js ribbon object - which had width - has recently been dropped.
The Three.js tube object generates 3D extrusions but - being Bezier-based - the lines do not pass through the control points.
Can anybody think of a method of drawing a line series (polylines, plotlines) in Three.js that has some sort of user definable 'bulk' such as width, thickness or radius?
This question may be a restating of this question:
Extruding a graph in three.js.
Given that I do not think that there is a readily available method, I would be happy to participate in an effort to create a simple function that responds to this question.
But a response that points to an existing workable method would be cool...
As WestLangley suggests, one possible solution includes the polyline being of constant pixel width - as is currently available with the Three.js canvas renderer.
A comparison of the two renderers is shown here:
Canvas and WebGL Lines Compared via GitHub Pages
Canvas and WebGL Lines Compared via jsFiddle
A solution where you could specify linewidth and similar results occurred on both renderers would be very cool.
There are, however, other ways of thinking of 3D lines where lines have actual physical constructs. They cast shadows, they respond to events. These also need to be looked into.
Here are links to GitHub Pages with two demos of lines made up of multiple meshes:
Sphere and Cylinder Polylines
An 'expensive' solution: each joint is made up of a full sphere.
Cubes Polylines
My guess is that building either of these as smooth single meshes will be a complex problem to solve. So in the meantime, here is a link to a partial visualization of 3D lines that are wide and have height:
3D Box Line on jsFiddle
The goal is to have code with 'a low level of complexity, in other words, for dummies'. Thus a 3D line should be as easy and as familiar as adding a sphere or cube: geometry + material = mesh > scene. And the geometry should be quite economical in terms of creating vertices and faces.
The lines should have width and height. Up is always in the Y direction. The demo shows this. What the demo does not show is corners being mitred nicely...
I cooked up a possible solution which I believe meets most of your requirements:
http://codepen.io/garciahurtado/pen/AGEsf?editors=001
The concept is fairly simple: render any arbitrary geometry in "wireframe mode", then apply a full screen GLSL shader to it to add thickness to the wireframe lines.
The shader is inspired by the blur shaders in the ThreeJS distro, which essentially copy the image a bunch of times along the horizontal and vertical axis. I automated that process and made the number of copies a user defined parameter, while ensuring that the copies were offset by 1 pixel.
I used a 3D cube mesh in my demo (with an ortho camera), but it should be trivial to convert it to a poly line.
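For reference, here is a minimal sketch of how such a wireframe-plus-full-screen-pass setup might be wired up. It assumes the EffectComposer, RenderPass and ShaderPass helpers that ship with the three.js examples, and a thickenShader object whose fragment shader is the one listed below; scene, camera and renderer are placeholders.

// Sketch only: render the geometry as a 1px wireframe, then thicken it
// with a full screen post-processing pass.
var wireMaterial = new THREE.MeshBasicMaterial({ color: 0xffffff, wireframe: true });
var cube = new THREE.Mesh(new THREE.BoxGeometry(100, 100, 100), wireMaterial);
scene.add(cube);

var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

// 'thickenShader' holds { uniforms, vertexShader, fragmentShader };
// its fragment shader is the one shown below.
var thickenPass = new THREE.ShaderPass(thickenShader);
thickenPass.uniforms.edgeWidth.value = 4;                         // desired line width in pixels
thickenPass.uniforms.totalWidth.value = renderer.domElement.width;
thickenPass.uniforms.totalHeight.value = renderer.domElement.height;
thickenPass.renderToScreen = true;
composer.addPass(thickenPass);

// in the render loop, call composer.render() instead of renderer.render()
composer.render();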
The real meat and potatoes of this thing is in the custom shader (fragment shader portion):
uniform sampler2D tDiffuse;
uniform int edgeWidth;
uniform int diagOffset;
uniform float totalWidth;
uniform float totalHeight;

const int MAX_LINE_WIDTH = 30; // needed due to weird limitations in GLSL around for loops

varying vec2 vUv;

void main() {
    int offset = int( floor(float(edgeWidth) / float(2) + 0.5) );
    vec4 color = vec4( 0.0, 0.0, 0.0, 0.0 );

    // Horizontal copies of the wireframe first
    for (int i = 0; i < MAX_LINE_WIDTH; i++) {
        float uvFactor = (float(1) / totalWidth);
        float newUvX = vUv.x + float(i - offset) * uvFactor;
        float newUvY = vUv.y + (float(i - offset) * float(diagOffset)) * uvFactor; // only modifies vUv.y if diagOffset > 0

        color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));

        // GLSL does not allow loop comparisons against dynamic variables. Workaround below
        if (i == edgeWidth) break;
    }

    // Now we create the vertical copies
    for (int i = 0; i < MAX_LINE_WIDTH; i++) {
        float uvFactor = (float(1) / totalHeight);
        float newUvX = vUv.x + (float(i - offset) * float(-diagOffset)) * uvFactor; // only modifies vUv.x if diagOffset > 0
        float newUvY = vUv.y + float(i - offset) * uvFactor;

        color = max(color, texture2D( tDiffuse, vec2( newUvX, newUvY ) ));

        if (i == edgeWidth) break;
    }

    gl_FragColor = color;
}
Pros:
No need for additional geometry beyond the line vertices
Line thickness is user definable
A full screen shader should be relatively gentle on the GPU
Can be implemented fully within the WebGL canvas
Cons:
Line thickness is close to pixel perfect on horizontal and vertical edges, but slightly off on diagonal edges. This is due to the algorithm used and is a limitation of the solution. Having said that, for low line thickness and complex geometries, this is barely noticeable with the naked eye.
The joints between lines will show gaps for large enough line thickness. You can play with the Codepen demo to see what I mean. I started to implement a solution to this by adding a second "diagonal pass", but it got a little hairy and I think this would only be an issue for higher line thicknesses (+8 pixels) or extreme line angles. If you are interested in this solution, you can look at the original source to see where I was going with it.
Since this uses a full screen filter, you can only use the WebGL context for displaying objects of this thickness. Showing various line widths would require additional rendering passes.
As a potential solution, you could take your 3D points, use the THREE.Vector3.project method to figure out screen-space coordinates, and then simply use a canvas with its lineTo and moveTo operations. The canvas 2D context does support variable line thickness.
// size of the overlay canvas (a canvas element has no innerWidth/innerHeight,
// so use clientWidth/clientHeight instead)
var w = renderer.domElement.clientWidth;
var h = renderer.domElement.clientHeight;

// project the 3D point into normalized device coordinates (-1..1)
vector.project(camera);

// convert NDC to 2D canvas pixel coordinates (the y axis is flipped)
var x = (vector.x + 1) * (w / 2);
var y = h - (vector.y + 1) * (h / 2);

context2d.lineWidth = 3;
context2d.lineTo(x, y);
Also, I don't think you can use the same canvas for that, so it would have to be a layer (another canvas) above your GL rendering context canvas.
If you have infrequent camera changes, it is also possible to construct the line out of polygons and update its vertex positions based on the camera transform. For an orthographic camera this would work best, as only rotations would require vertex position manipulation.
Lastly, you could disable canvas clearing and draw your lines several times with offsets inside a circle or a box, then re-enable clearing (see the sketch below). This requires several extra draw operations, but it's probably the most scalable approach.
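A rough sketch of that last idea, under the assumption that the thin line is a THREE.Line living in its own scene and that the offsets are applied by nudging the line object; the offset pattern and the pixel-to-world conversion are purely illustrative.

// Sketch only: fake thickness by re-rendering the same 1px line several
// times with tiny offsets while automatic clearing is disabled.
var offsets = [
    [0, 0], [1, 0], [-1, 0], [0, 1], [0, -1],
    [1, 1], [1, -1], [-1, 1], [-1, -1]           // a small box of pixel offsets
];
// rough world-units-per-pixel; assumes an orthographic camera two units tall
var pixelSize = 2 / renderer.domElement.height;

renderer.autoClear = false;
renderer.clear();                                // clear once by hand

for (var i = 0; i < offsets.length; i++) {
    line.position.x = offsets[i][0] * pixelSize;
    line.position.y = offsets[i][1] * pixelSize;
    renderer.render(scene, camera);              // draw the same line again, slightly offset
}

renderer.autoClear = true;                       // restore normal clearing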
The reason lines don't work as you'd expect out of the box is due to how ANGLE works; it's used in Chrome and in Firefox (to my knowledge) and emulates OpenGL via DirectX. The ANGLE developers state that the WebGL spec only requires support of line thickness up to 1, so they do not see this as a bug and don't intend to "fix" it. Line thickness should work on non-Windows OSs, though, where ANGLE is not used.

BabylonJS Radial vs Rectangular Textures (Conversion or Code Change)

I am working with the planetary textures from this site. They are all in rectangular form.
However, in my BabylonJS application, textures are expected to be like this.
I have tried setting the coordinates mode, but it doesn't seem to do anything.
// These didn't have an effect
material.diffuseTexture.coordinatesMode = BABYLON.Texture.SPHERICAL_MODE;
material.diffuseTexture.coordinatesMode = BABYLON.Texture.EXPLICIT_MODE;
material.diffuseTexture.coordinatesMode = BABYLON.Texture.PLANAR_MODE;
material.diffuseTexture.coordinatesMode = BABYLON.Texture.CUBIC_MODE;
material.diffuseTexture.coordinatesMode = BABYLON.Texture.PROJECTION_MODE;
material.diffuseTexture.coordinatesMode = BABYLON.Texture.SKYBOX_MODE;
Is there a way to convert between these two kinds of textures? Alternatively, are there planet textures available that already look like the bottom one?
In fact this is related to the texture coordinates embedded in your mesh. You can use Blender to export different coordinates, or you can also play with texture.uOffset, texture.vOffset, texture.uScale and texture.vScale to move your texture on your mesh.
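For illustration, a minimal sketch of the second suggestion is shown below; the texture path and the offset/scale values are placeholders to experiment with, not values known to fix this particular case.

// Sketch only: shift/stretch an equirectangular texture on a Babylon.js sphere.
var material = new BABYLON.StandardMaterial("planet", scene);
material.diffuseTexture = new BABYLON.Texture("textures/earth.jpg", scene); // placeholder path

// placeholder values; adjust until the texture lines up on the mesh
material.diffuseTexture.uScale  = 1.0;   // horizontal repeat
material.diffuseTexture.vScale  = 1.0;   // vertical repeat
material.diffuseTexture.uOffset = 0.25;  // shift around the equator
material.diffuseTexture.vOffset = 0.0;   // shift towards the poles

sphere.material = material;              // 'sphere' is the planet mesh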

D3 + Leaflet: d3.geo.path() resampling

We've adapted Mike Bostock's original D3 + Leaflet example:
http://bost.ocks.org/mike/leaflet/
so that it does not redraw all paths on each zoom in Leaflet.
Our code is here: https://github.com/madeincluj/Leaflet.D3/blob/master/js/leaflet.d3.js
Specifically, the projection from geographical coordinates to pixels happens here:
https://github.com/madeincluj/Leaflet.D3/blob/master/js/leaflet.d3.js#L30-L35
We draw the SVG paths on the first load, then simply scale/translate the SVG to match the map.
This works very well, except for one issue: D3's path resampling, which looks great at the first zoom level, but looks progressively more broken once you start zooming in.
Is there a way to disable the resampling?
As to why we're doing this: We want to draw a lot of shapes (thousands) and redrawing them all on each zoom is impractical.
Edit
After some digging, it seems that resampling happens here:
function d3_geo_pathProjectStream(project) {
    var resample = d3_geo_resample(function(x, y) {
        return project([ x * d3_degrees, y * d3_degrees ]);
    });
    return function(stream) {
        return d3_geo_projectionRadians(resample(stream));
    };
}
Is there a way to skip the resampling step?
Edit 2
What a red herring! We had switched back and forth between sending a raw function to d3.geo.path().projection and a d3.geo.transform object, to no avail.
But in fact the problem is with Leaflet's latLngToLayerPoint, which (obviously!) rounds point.x and point.y to integers. This means that the more zoomed out you are when you initialize the SVG rendering, the more precision you lose.
The solution is to use a custom function like this:
function latLngToPoint(latlng) {
    return map.project(latlng)._subtract(map.getPixelOrigin());
}

var t = d3.geo.transform({
    point: function(x, y) {
        var point = latLngToPoint(new L.LatLng(y, x));
        return this.stream.point(point.x, point.y);
    }
});

this.path = d3.geo.path().projection(t);
It's similar to Leaflet's own latLngToLayerPoint, but without the rounding. (Note that map.getPixelOrigin() is rounded as well, so you'll probably need to rewrite it.)
You learn something every day, don't you.
Coincidentally, I updated the tutorial recently to use the new d3.geo.transform feature, which makes it easy to implement a custom geometric transform. In this case the transform uses Leaflet’s built-in projection without any of D3’s advanced cartographic features, thus disabling adaptive resampling.
The new implementation looks like this:
var transform = d3.geo.transform({point: projectPoint}),
    path = d3.geo.path().projection(transform);

function projectPoint(x, y) {
    var point = map.latLngToLayerPoint(new L.LatLng(y, x));
    this.stream.point(point.x, point.y);
}
As before, you can continue to pass a raw projection function to d3.geo.path, but you’ll get adaptive resampling and antimeridian cutting automatically. So to disable those features, you need to define a custom projection, and d3.geo.transform is an easy way to do this for simple point-based transformations.
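For completeness, here is a rough sketch of how such a transform-based path generator is typically hooked up to the Leaflet overlay, following the pattern from the tutorial linked above; the svg, g and collection variables are assumed from that setup.

// Sketch only: recompute the overlay whenever Leaflet finishes a zoom.
var feature = g.selectAll("path")
    .data(collection.features)
    .enter().append("path");

map.on("viewreset", reset);
reset();

// reposition the SVG to cover the features and recompute every "d" attribute
function reset() {
    var bounds = path.bounds(collection),
        topLeft = bounds[0],
        bottomRight = bounds[1];

    svg.attr("width", bottomRight[0] - topLeft[0])
        .attr("height", bottomRight[1] - topLeft[1])
        .style("left", topLeft[0] + "px")
        .style("top", topLeft[1] + "px");

    g.attr("transform", "translate(" + -topLeft[0] + "," + -topLeft[1] + ")");

    feature.attr("d", path);
}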

Rendering spheres (or points) in a particle system

I am using the Three.JS library to display a point cloud in a web browser. The point cloud is generated once at start up and no further points are added or removed, but it does need to be rotated, panned and zoomed. I've gone through the tutorial about creating particles in three.js here.
Using the example I can create particles that are squares or use an image of a sphere to create a texture. The image is closer to what I want, but is it possible to generate the point clouds without using the image? The sphere geometry for example.
The problem with the image is that when you have thousands of points it seems they sometimes obscure each other around the edges. From what I can gather it seems like the black region in a point's png file blocks the image immediately behind the current point. (But it is transparent to points further behind)
This obscuring of the images is the reason I would like to generate the points using shapes. I have tried replacing particles = new THREE.Geometry() with THREE.SphereGeometry(radius, segments, rings) and tried to change the vertices to spheres.
So my question is: how do I modify the example code so that it renders spheres (or points) instead of squares? Also, is a particle system the most efficient choice for my particular case, or should I just generate the particles and set their individual positions? As I mentioned, I only generate the points once, but then rotate, zoom and pan them. (I used the TrackBall sample code to get the mouse events working.)
Thanks for your help
I don't think rendering a point cloud with spheres is very efficient. You should be able to get away with a particle system and use a texture or a small canvas program to draw a circle.
One of the first three.js samples uses a canvas program; here are the important bits:
var PI2 = Math.PI * 2;

var program = function ( context ) {
    context.beginPath();
    context.arc( 0, 0, 1, 0, PI2, true );
    context.closePath();
    context.fill();
};

var particle = new THREE.Particle( new THREE.ParticleCanvasMaterial( {
    color: Math.random() * 0x808008 + 0x808080,
    program: program
} ) );
Feel free to adapt the code for the WebGL renderer.
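If you do want the WebGL renderer, here is a rough sketch of the same idea using a canvas-generated circle as a point sprite. It assumes a reasonably recent three.js where THREE.Points and THREE.PointsMaterial exist (older revisions used THREE.ParticleSystem and its material), and geometry is the point cloud's geometry.

// Sketch only: draw a filled circle on a small canvas and use it as a point sprite.
function makeCircleTexture(size) {
    var canvas = document.createElement("canvas");
    canvas.width = canvas.height = size;
    var ctx = canvas.getContext("2d");
    ctx.fillStyle = "#ffffff";
    ctx.beginPath();
    ctx.arc(size / 2, size / 2, size / 2, 0, Math.PI * 2, true);
    ctx.fill();
    return new THREE.Texture(canvas);
}

var texture = makeCircleTexture(64);
texture.needsUpdate = true;

var material = new THREE.PointsMaterial({
    size: 2,
    map: texture,
    transparent: true,
    alphaTest: 0.5          // discard the transparent corners of the sprite
});

var points = new THREE.Points(geometry, material);   // 'geometry' holds the cloud's vertices
scene.add(points);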
Another clever solution I've seen in the examples is using an encoded webm video to store the data and pass that to a GLSL shader which is rendered through a particle system in three.js
If your point cloud comes from a Kinect, these resources might be useful:
DepthCam
KinectJS
When comparing my code to http://threejs.org/examples/#webgl_custom_attributes_particles3
I saw the only difference was:
vec4 outColor = texture2D( texture, gl_PointCoord );
if ( outColor.a < 0.5 ) discard;
gl_FragColor = outColor;
Added to the fragment shader, this fixed the problem for me.
It wasn't z-fighting, because some corners would randomly overlap distant particles.
material.alphaTest = 0.5 didn't work and turning off depth writes/tests messed up the viewing order.
The problem with the image is that when you have thousands of points
it seems they sometimes obscure each other around the edges. From what
I can gather it seems like the black region in a point's png file
blocks the image immediately behind the current point. (But it is
transparent to points further behind)
You can get rid of the transparency overlapping problem of the underlying square structure by turning off depth testing on the material:
depthTest: false
The problem then is that if you are adding additional objects to the scene, the depth testing will fail and the PointCloud will be rendered in front of the other objects, ignoring the actual order. To get around that you can additionally turn off depth writing:
depthWrite: false
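Putting the flags from this answer together, a minimal material sketch; spriteTexture and cloudGeometry are placeholders, and the flags simply mirror the suggestion above.

// Sketch only: point material with the depth flags suggested in this answer.
var material = new THREE.PointsMaterial({
    size: 2,
    map: spriteTexture,    // circular sprite with transparent corners
    transparent: true,
    depthTest: false,      // per this answer: stops sprite corners from hiding points behind them
    depthWrite: false      // per this answer: keeps the cloud from wrongly occluding other objects
});

var cloud = new THREE.Points(cloudGeometry, material);
scene.add(cloud);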
