I'm currently building a force network visualization that involves a large number of nodes and edges (50k+) using a fairly new library called stardust.js. Stardust uses WebGL to render nodes and edges much faster than Canvas/D3.
However, I am unable to figure out how to add zoom and pan to this visualization.
In a thread on Stardust's Google group, the creator of the library mentions that there is no built-in support for zoom and pan right now, but that it can be implemented by adding zoom and pan parameters to the mark specification:
import { Circle } from P2D;
mark MyMark(translateX: float, translateY: float, scale: float, x: float, y: float, radius: float) {
Circle(Vector2(x * scale + translateX, y * scale + translateY), radius);
}
// In the js code you can do something like:
marks.attr("translateX", 10).attr("translateY", 15).attr("scale", 2);
The library uses a TypeScript-like mark specification language in which one defines "marks" (which is what all the nodes and edges are), and it should be possible to define these marks with the above parameters. But how exactly can this be implemented?
Is there an easier way to do this? Can one add a library like Pixi.js on top of this visualization to make it zoom and pan?
There is no need to define custom marks (although it can also be done that way).
The position of the objects is controlled by a Stardust.scale().
var positions = Stardust.array("Vector2")
    .value(d => [d.x, d.y])
    .data(nodes);

var positionScale = Stardust.scale.custom("array(pos, value)")
    .attr("pos", "Vector2Array", positions);
By modifying the value function you can zoom and translate.
By attaching the zoom to the canvas, the Stardust drag no longer works, but that is a separate problem.
I started from the example at https://stardustjs.github.io/examples/graph/
In the zoom callback, save the zoom parameters and request a new render of the graph.
var fps = new FPS();
var zoom_scale = 1.0, zoom_t_x = 0.0, zoom_t_y = 0.0;
d3.select(canvas).call(d3.zoom().on("zoom", zoomed));
function zoomed() {
    zoom_scale = d3.event.transform.k;
    zoom_t_x = d3.event.transform.x;
    zoom_t_y = d3.event.transform.y;
    requestRender();
}

function render() {
    positions.value(d => [d.x * zoom_scale + zoom_t_x, d.y * zoom_scale + zoom_t_y]);
    // ... rest of the render code from the example ...
}
The example contains an error: when you use a slider, the simulation never stops, because alphaTarget is set to 0.3.
force.alphaTarget(0.3).restart();
It should be changed to
force.alpha(0.3).alphaTarget(0).restart();
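For context, alphaTarget(0.3) is the value the usual d3 v4 drag handlers raise temporarily while a node is being dragged, and reset to 0 afterwards so that alpha can decay and the simulation can stop. A sketch of that standard pattern (not code from the Stardust example):
// Typical d3 v4 force drag handlers: heat the simulation while dragging, let it cool afterwards.
function dragstarted(d) {
    if (!d3.event.active) force.alphaTarget(0.3).restart(); // raise the target only for the drag
    d.fx = d.x;
    d.fy = d.y;
}
function dragged(d) {
    d.fx = d3.event.x;
    d.fy = d3.event.y;
}
function dragended(d) {
    if (!d3.event.active) force.alphaTarget(0); // back to 0 so alpha decays and the simulation stops
    d.fx = null;
    d.fy = null;
}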
TL;DR: I wrote a plotly-based JavaScript simulation of a mathematical pendulum. It runs very slowly, and I'm looking for ideas on how to optimize it. I'm currently trying "bare" d3.js and struggling with the problem of transforming coordinates between SVG coordinates and my own logical coordinates.
I'm writing a web textbook on ordinary differential equations and want to include an interactive simulation and visualization of a mathematical pendulum. The visualization should contain the pendulum itself, its potential energy graph, and a contour plot of the full energy. The user chooses an initial condition by clicking on the energy contour plot, and the animation then shows how the point moves in phase space.
I wrote an example of such simulation:
https://jsfiddle.net/ischurov/p1krqnt6/
It's plotly-based: I create three axes and put the necessary graphs there. The point that represents the current state of the system is also a plotly trace (i.e. a scatter plot with only one point).
The animation works as follows: I get the current coordinates of the point in phase space, calculate the new position of the point after a small time step, then update the graph accordingly. The corresponding code:
var div = document.getElementById('myDiv');

function updateState(phi, v) {
    var update = {
        x: [[phi], [phi], [0, Math.sin(phi)]],
        y: [[v], [PotentialEnergy(phi)], [0, -Math.cos(phi)]]
    };
    Plotly.restyle(div, update, [phaseDotIndex, 3, 4]);
}
myPlot.on('plotly_click', function(data) {
    if (data.points[0].data.type == 'contour') {
        updateState(data.points[0].x, data.points[0].y);
    }
});
var animate = null;

$('.animate_button').click(function() {
    var div = document.getElementById('myDiv');
    if (animate === null) {
        var phi = div.data[phaseDotIndex].x[0],
            v = div.data[phaseDotIndex].y[0],
            E = FullEnergy(phi, v);
        animate = setInterval(function() {
            var phi = div.data[phaseDotIndex].x[0],
                v = div.data[phaseDotIndex].y[0],
                step = 0.1, newphi, newv, update;
            newphi = phi + v * step;
            newv = v + step * Force(phi);
            /* skip some tweaks here */
            updateState(phi, v);
        }, 100);
    } else {
        clearInterval(animate);
        animate = null;
    }
});
This code works almost as expected, but it is really slow and not smooth, at least under Firefox (if I decrease the update interval it works even worse).
I'm looking for ways to optimize this.
I believe the performance problems are due to plotly's update process: in order to move one point, it has to recalculate the whole picture, which is slow. So I'm looking for a different way to do it.
Are there any ideas?
I've looked at a more direct d3.js approach, which could be faster. I see the following steps:
1. Draw a graph of the potential energy and a contour plot of the full energy.
2. Draw the pendulum itself.
3. Put small circles on the potential energy graph and the contour plot.
4. Add an 'onclick' event handler that lets the user choose the initial state.
5. Run the animation loop, updating the positions of the circles and the pendulum according to the current state.
To handle step 1, I can use third-party d3.js libraries like conrec for contour plots and/or maurizzzio's excellent function plot, or even plotly itself (but I'm not going to use plotly to update the graph). Step 2 seems doable, but I haven't tried it yet. The most difficult parts for now are steps 3 and 4, as I don't understand how to transform SVG coordinates into my graph's logical coordinates (which are plotted with some library) and vice versa.
Or maybe there is a simpler way to do it?
I'm the author of function plot, which is built on top of d3. Luckily, d3 has methods to perform these mappings (d3's scales). Assuming that you have a canvas of width x height pixels that should map linearly to the rectangle [xMin, xMax] x [yMin, yMax] in 2D Euclidean space, you need to create two linear scales:
var xScale = d3.scale.linear()
.domain([xMin, xMax])
.range([0, width])
var yScale = d3.scale.linear()
.domain([yMin, yMax])
.range([height, 0])
Note that in SVG the y axis is flipped, so the yScale range is flipped too. Any 2D Euclidean point is then transformed to SVG coordinates as follows:
var xCanvas = xScale(point.x)
var yCanvas = yScale(point.y)
The inverse transformation is given by:
var xLogical = xScale.invert(point.x)
var yLogical = yScale.invert(point.y)
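As a usage example (this is exactly what step 4 of the question needs), the inverse mapping lets you turn a click back into logical coordinates. A small sketch, assuming the xScale/yScale above and d3 v3's d3.mouse; the selector and the reuse of the question's updateState are illustrative:
d3.select('svg').on('click', function () {
    var mouse = d3.mouse(this)          // [x, y] in pixel coordinates
    var phi = xScale.invert(mouse[0])   // back to the logical x coordinate
    var v = yScale.invert(mouse[1])     // back to the logical y coordinate (flip handled by the scale)
    updateState(phi, v)                 // e.g. the question's own update function
})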
A possible solution to your problem using the above is:
var instance = functionPlot({
    target: '#demo',
    disableZoom: true,
    data: [{
        fn: 'sin(10*(-cos(x) + y^2/2-1))',
        fnType: 'implicit'
    }]
})
var xScale = instance.meta.xScale
var yScale = instance.meta.yScale
var canvas = instance.canvas
var circle = canvas.append('circle')
.attr("r", 5)
.style("fill", "purple")
var start = Date.now()

function animate() {
    // move the point along the circle of radius 1
    var t = (Date.now() - start) * 0.003
    var xLogical = Math.cos(t)
    var yLogical = Math.sin(t)
    var xCanvas = xScale(xLogical)
    var yCanvas = yScale(yLogical)
    circle
        .attr('cx', xCanvas)
        .attr('cy', yCanvas)
    requestAnimationFrame(animate)
}
animate()
<script src="https://cdnjs.cloudflare.com/ajax/libs/d3/3.4.11/d3.min.js"></script>
<script src="https://maurizzzio.github.io/function-plot/js/function-plot.js"></script>
<div id="demo"></div>
Issue in function plot's GitHub
I realise that this isn't exactly what you want but it does demonstrate that your code can be compact and still do what you require.
The following code demonstrates the maths needed to animate a pendulum. It's extracted from one of my JavaScript widgets, but the math logic should still be usable in your own project.
Create an image object, pendulumSet; a nice round ball will do.
// variables set for the pendulum
var gravity = -0.0110808; // tuned so that the swing approximates a 1 second interval
var acceleration = 0.1; //0.1
var velocity = 0.18; //.18
var angle = 8; // 8 (.4 radians = 22.91 degrees)
Create a timer with an interval of 0.01 seconds
Put this in the timer:
acceleration = gravity * angle;
velocity += acceleration;
angle += velocity;
pendulumSet.rotation = angle +180; // rotation as per a widget engine
That's it...
You might have to use something like the following to achieve the rotation in native javascript:
pendulumSet.style.transform = "rotate(90deg)"; // change the 90deg
The code above simulates a pendulum in a much more compact fashion. Running graphic animation and mathematical calculation every 1/100th of a second will take some CPU; that is unavoidable. Nevertheless, the code is compact and the impact is minimal. Depending on the engine interpreting the code, CPU usage will be roughly 20-25% of a 2.5 GHz Core 2 Duo from 2009, easily handled by more modern, faster CPUs. Running similar code in Firefox may be noticeably slower, as Firefox, in my experience, performs animations like this less efficiently. You just have to try it and see.
The code is taken from this example javascript widget:
A steampunk clock for the desktop
So you can see the exact same code in operation.
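Putting the timer, the math above and the native JavaScript rotation together, a minimal standalone sketch could look like the following (the element id, the transform-origin and the interval are illustrative, not part of the original widget):
// Same math as above, driven by setInterval and a CSS transform.
var gravity = -0.0110808;
var velocity = 0.18;
var angle = 8;
var pendulumEl = document.getElementById('pendulum'); // hypothetical element, pivoted at its top
pendulumEl.style.transformOrigin = '50% 0';

setInterval(function () {
    var acceleration = gravity * angle; // restoring force proportional to the angle
    velocity += acceleration;
    angle += velocity;
    pendulumEl.style.transform = 'rotate(' + (angle + 180) + 'deg)';
}, 10); // 0.01 second interval, as suggested above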
We have a web application that displays an SVG map of an office. The map has small icons that represent users walking around with RF tags. This allows administrators of the system to see what rooms users are in. We are using Snap.svg to load the office SVG file and manipulate it to display the user icons. The challenge is that the map scales to the size of the browser. Using JavaScript to determine the coordinates is not always accurate because the position of the SVG changes based on the browser size.
Here is an example of the map with the icons:
The icons are placed on the map based on X/Y coordinates coming from our database. The values for the X/Y coordinates are set for each location and were determined using Adobe Illustrator. Currently, we can only place one icon in a room at a time. Because we only have one set of coordinates per location, the icons overlap if more than one person is in a room at a time.
The second phase of this project is to allow users to draw on top of the map to specify locations. Essentially, the user will set points and create a polygon to represent each location on the map. We would use the coordinates of the polygon along with the total area of the polygon to know where on the map we can place icons. This would allow users to define areas without a developer getting involved.
Here is an example of what we want to achieve.
I have been researching how to do this, but have not found anything outside of using something like the Google Maps API to draw polygons on a map. I did find this article that outlines how to dynamically pull points. We thought about using a grid system overlaid on the map, where the user defines which grid cells belong to which locations, e.g. [A1, A2, B1, B2]. I personally prefer the polygon approach, as it is more visually appealing and easier for a user to adopt.
We need some advice on where to start with this, and on whether something like Snap.svg is all we need or whether we have to rely on other libraries in conjunction with Snap.
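For what it's worth, once the user-drawn polygon's vertices are stored, deciding whether a candidate icon position falls inside that location is a standard ray-casting test. A sketch in plain JavaScript (the data shapes here are illustrative, not tied to Snap.svg):
// Ray-casting point-in-polygon test: count crossings of a horizontal ray from (px, py).
// polygon is an array of {x, y} vertices; returns true if the point lies inside.
function pointInPolygon(px, py, polygon) {
    var inside = false;
    for (var i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
        var xi = polygon[i].x, yi = polygon[i].y;
        var xj = polygon[j].x, yj = polygon[j].y;
        var crosses = ((yi > py) !== (yj > py)) &&
                      (px < (xj - xi) * (py - yi) / (yj - yi) + xi);
        if (crosses) inside = !inside;
    }
    return inside;
}
// e.g. only place an icon at (iconX, iconY) if pointInPolygon(iconX, iconY, roomPolygon) is true.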
Update:
With Ian's advice I found a fiddle that describes what he was talking about.
var S;
var pt;
var svg;
var box;

window.onload = function() {
    svg = $('#mysvg')[0];
    S = Snap(svg);
    console.log(S);

    pt = svg.createSVGPoint(); // create the point

    // add the rectangle
    box = S.rect(12, 12, 12, 12);
    box.attr({ fill: 'red', stroke: 'none' });

    S.drag(
        function(dx, dy, posX, posY, e) {
            // onmove
            pt.x = posX - S.node.offsetLeft;
            pt.y = posY - S.node.offsetTop;
            console.log(pt.x + "," + pt.y);
            // convert the mouse X and Y
            // so that it's relative to the svg element
            var transformed = pt.matrixTransform(svg.getCTM().inverse());
            box.attr({ x: transformed.x, y: transformed.y });
        },
        function() {
            // onstart
        },
        function() {
            // onend
        }
    );
};
The Fiddle
Is there a way to create a Three.js 3D line series with width and thickness?
Even though the Three.js line object supports linewidth, this attribute is not yet supported in all browsers on all platforms in WebGL.
Here's where you set linewidth in Three.js:
var material = new THREE.LineBasicMaterial({
color: 0xff0000,
linewidth: 5
});
The Three.js ribbon object - which had width - has recently been dropped.
The Three.js tube object generates 3D extrusions but - being Bezier-based - the lines do not pass through the control points.
Can anybody think of a method of drawing a line series (polylines, plotlines) in Three.js that has some sort of user definable 'bulk' such as width, thickness or radius?
This question may be a restating of this question:
Extruding a graph in three.js.
Given that I do not think that there is a readily available method, I would be happy to participate in an effort to create a simple function that responds to this question.
But a response that points to an existing workable method would be cool...
As WestLangley suggests, one possible solution includes the polyline being of constant pixel width - as is currently available with the Three.js canvas renderer.
A comparison of the two renderers is shown here:
Canvas and WebGL Lines Compared via GitHub Pages
Canvas and WebGL Lines Compared via jsFiddle
A solution where you could specify linewidth and similar results occurred on both renderers would be very cool.
There are, however, other ways of thinking about 3D lines, where lines are actual physical constructs: they cast shadows and respond to events. These also need to be looked into.
Here are links to GitHub Pages with two demos of lines made up of multiple meshes:
Sphere and Cylinder Polylines
An 'expensive' solution: each joint is made up of a full sphere (see the sketch after these links).
Cubes Polylines
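A sphere-and-cylinder polyline like the first demo can be assembled from standard three.js primitives. A rough sketch (not the demo's actual code, and it assumes a three.js build that has Quaternion.setFromUnitVectors):
// Build a 'fat' polyline from spheres at the joints and cylinders along the segments.
function addFatPolyline(scene, points, radius, material) {
    for (var i = 0; i < points.length; i++) {
        // joint
        var joint = new THREE.Mesh(new THREE.SphereGeometry(radius, 16, 16), material);
        joint.position.copy(points[i]);
        scene.add(joint);
        if (i === 0) continue;
        // segment between points[i - 1] and points[i]
        var dir = new THREE.Vector3().subVectors(points[i], points[i - 1]);
        var segment = new THREE.Mesh(
            new THREE.CylinderGeometry(radius, radius, dir.length(), 16), material);
        segment.position.copy(points[i - 1]).add(points[i]).multiplyScalar(0.5); // midpoint
        segment.quaternion.setFromUnitVectors(new THREE.Vector3(0, 1, 0), dir.normalize()); // align Y axis with the segment
        scene.add(segment);
    }
}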
My guess is that building either of these as smooth single meshes will be a complex problem to solve. So in the meantime, here is a link to a partial visualization of 3D lines that are wide and have height:
3D Box Line on jsFiddle
The goal is to have code 'with a low level of complexity - in other words - for dummies'. Thus a 3D line should be as easy and as familiar to add as a sphere or cube: geometry + material = mesh, then add it to the scene. And the geometry should be quite economical in terms of the vertices and faces it creates.
The lines should have width and height. Up is always in the Y direction. The demo shows this. What the demo does not show is corners being mitred nicely...
I cooked up a possible solution which I believe meets most of your requirements:
http://codepen.io/garciahurtado/pen/AGEsf?editors=001
The concept is fairly simple: render any arbitrary geometry in "wireframe mode", then apply a full screen GLSL shader to it to add thickness to the wireframe lines.
The shader is inspired by the blur shaders in the ThreeJS distro, which essentially copy the image a bunch of times along the horizontal and vertical axis. I automated that process and made the number of copies a user defined parameter, while ensuring that the copies were offset by 1 pixel.
I used a 3D cube mesh in my demo (with an ortho camera), but it should be trivial to convert it to a poly line.
The real meat and potatoes of this thing is in the custom shader (fragment shader portion):
uniform sampler2D tDiffuse;
uniform int edgeWidth;
uniform int diagOffset;
uniform float totalWidth;
uniform float totalHeight;
const int MAX_LINE_WIDTH = 30; // Needed due to weird limitations in GLSL around for loops
varying vec2 vUv;
void main() {
    int offset = int( floor(float(edgeWidth) / float(2) + 0.5) );
    vec4 color = vec4(0.0, 0.0, 0.0, 0.0);

    // Horizontal copies of the wireframe first
    for (int i = 0; i < MAX_LINE_WIDTH; i++) {
        float uvFactor = (float(1) / totalWidth);
        float newUvX = vUv.x + float(i - offset) * uvFactor;
        float newUvY = vUv.y + (float(i - offset) * float(diagOffset)) * uvFactor; // only modifies vUv.y if diagOffset > 0
        color = max(color, texture2D(tDiffuse, vec2(newUvX, newUvY)));

        // GLSL does not allow loop comparisons against dynamic variables. Workaround below
        if (i == edgeWidth) break;
    }

    // Now we create the vertical copies
    for (int i = 0; i < MAX_LINE_WIDTH; i++) {
        float uvFactor = (float(1) / totalHeight);
        float newUvX = vUv.x + (float(i - offset) * float(-diagOffset)) * uvFactor; // only modifies vUv.x if diagOffset > 0
        float newUvY = vUv.y + float(i - offset) * uvFactor;
        color = max(color, texture2D(tDiffuse, vec2(newUvX, newUvY)));

        if (i == edgeWidth) break;
    }

    gl_FragColor = color;
}
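For reference, a fragment shader like this is normally wired up as a full screen post-processing pass. A rough sketch, assuming the EffectComposer/RenderPass/ShaderPass helpers from the three.js examples folder; passThroughVertexShader and thickenFragmentShader are placeholders for the standard "vUv = uv" vertex shader and the GLSL above:
// Render the wireframe scene, then run the thickening shader over the whole frame.
var composer = new THREE.EffectComposer(renderer);
composer.addPass(new THREE.RenderPass(scene, camera));

var thickenPass = new THREE.ShaderPass({
    uniforms: {
        tDiffuse:    { type: 't', value: null },              // filled in by the composer
        edgeWidth:   { type: 'i', value: 4 },                 // desired thickness in pixels
        diagOffset:  { type: 'i', value: 0 },
        totalWidth:  { type: 'f', value: window.innerWidth },
        totalHeight: { type: 'f', value: window.innerHeight }
    },
    vertexShader: passThroughVertexShader,
    fragmentShader: thickenFragmentShader
});
thickenPass.renderToScreen = true;
composer.addPass(thickenPass);

// in the render loop, call composer.render() instead of renderer.render(scene, camera)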
Pros:
No need for additional geometry beyond the line vertices
Line thickness is user definable
A full screen shader should be relatively gentle on the GPU
Can be implemented fully within the WebGL canvas
Cons:
Line thickness is close to pixel perfect on horizontal and vertical edges, but slightly off on diagonal edges. This is due to the algorithm used and is a limitation of the solution. Having said that, for low line thickness and complex geometries, this is barely noticeable with the naked eye.
The joints between lines will show gaps for large enough line thickness. You can play with the Codepen demo to see what I mean. I started to implement a solution to this by adding a second "diagonal pass", but it got a little hairy and I think this would only be an issue for higher line thicknesses (+8 pixels) or extreme line angles. If you are interested in this solution, you can look at the original source to see where I was going with it.
Since this uses a full screen filter, you can only use the WebGL context for displaying objects of this thickness. Showing various line widths would require additional rendering passes.
As a potential solution, you could take your 3D points and use the THREE.Vector3.project method to figure out screen-space coordinates. Then simply use a 2D canvas and its lineTo and moveTo operations; the canvas 2D context does support variable line thickness.
// Project a 3D point to normalized device coordinates, then map it to the
// 2D overlay canvas and draw with the usual canvas API.
var w = renderer.domElement.clientWidth;
var h = renderer.domElement.clientHeight;
vector.project(camera);                 // vector is a THREE.Vector3 in world space
context2d.lineWidth = 3;
var x = (vector.x + 1) * (w / 2);       // NDC [-1, 1] -> pixels
var y = h - (vector.y + 1) * (h / 2);   // flip y for canvas coordinates
context2d.lineTo(x, y);
Also, I don't think you can use the same canvas for that, so it would have to be a layer (another canvas) above your GL rendering context canvas.
If camera changes are infrequent, it is also possible to construct the line out of polygons and update its vertex positions based on the camera transform. This works best with an orthographic camera, as only rotations would require vertex position manipulation.
Lastly, you could disable canvas clearing and draw your lines several times with offset inside a circle or a box. After that you can re-enable clearing. This would require several extra draw operations, but it's probably the most scalable approach.
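A very rough sketch of that last idea, interpreted for an orthographic camera (the offset value and the number of passes are arbitrary; the offsets are applied in world units, which only maps cleanly to pixels for an ortho camera):
// Draw the same line several times with tiny offsets arranged in a circle.
renderer.autoClear = false;
renderer.clear();
var passes = 8, offset = 0.01; // tune to taste
for (var s = 0; s < passes; s++) {
    var a = (s / passes) * Math.PI * 2;
    line.position.set(Math.cos(a) * offset, Math.sin(a) * offset, 0);
    renderer.render(scene, camera);
}
line.position.set(0, 0, 0);
renderer.autoClear = true;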
The reason lines don't work as you'd expect out of the box is due to how ANGLE works; it's used in Chrome and Firefox (to my knowledge) on Windows and emulates OpenGL via DirectX. The ANGLE developers state that the WebGL spec only requires support of line thickness up to 1, so they do not see it as a bug and don't intend to "fix" it. Line thickness should work on non-Windows OSs, though, where ANGLE is not used.
We've adapted Mike Bostock's original D3 + Leaflet example:
http://bost.ocks.org/mike/leaflet/
so that it does not redraw all paths on each zoom in Leaflet.
Our code is here: https://github.com/madeincluj/Leaflet.D3/blob/master/js/leaflet.d3.js
Specifically, the projection from geographical coordinates to pixels happens here:
https://github.com/madeincluj/Leaflet.D3/blob/master/js/leaflet.d3.js#L30-L35
We draw the SVG paths on the first load, then simply scale/translate the SVG to match the map.
This works very well, except for one issue: D3's path resampling, which looks great at the first zoom level, but looks progressively more broken once you start zooming in.
Is there a way to disable the resampling?
As to why we're doing this: We want to draw a lot of shapes (thousands) and redrawing them all on each zoom is impractical.
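For reference, the "project once, then transform" approach described above boils down to applying a single scale/translate to the group that holds the paths on each zoom. A rough sketch (not the repository's actual code; g, initialZoom and the use of the pixel origin are assumptions for illustration):
// Paths were projected once at initialZoom; on zoom, reuse them via one group transform.
var initialZoom = map.getZoom();
var initialOrigin = map.getPixelOrigin(); // pixel origin at the zoom the paths were projected at

map.on('viewreset', function () {
    var scale = Math.pow(2, map.getZoom() - initialZoom);
    var origin = map.getPixelOrigin();
    // layerPoint_now = scale * layerPoint_initial + (scale * initialOrigin - origin)
    var tx = scale * initialOrigin.x - origin.x;
    var ty = scale * initialOrigin.y - origin.y;
    g.attr('transform', 'translate(' + tx + ',' + ty + ') scale(' + scale + ')');
});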
Edit
After some digging, seems that resampling happens here:
function d3_geo_pathProjectStream(project) {
var resample = d3_geo_resample(function(x, y) {
return project([ x * d3_degrees, y * d3_degrees ]);
});
return function(stream) {
return d3_geo_projectionRadians(resample(stream));
};
}
Is there a way to skip the resampling step?
Edit 2
What a red herring! We had switched back and forth between sending a raw function to d3.geo.path().projection and a d3.geo.transform object, to no avail.
But in fact the problem is with leaflet's latLngToLayerPoint, which (obviously!) rounds point.x & point.y to integers. Which means that the more zoomed out you are when you initialize the SVG rendering, the more precision you will lose.
The solution is to use a custom function like this:
function latLngToPoint(latlng) {
    return map.project(latlng)._subtract(map.getPixelOrigin());
}

var t = d3.geo.transform({
    point: function(x, y) {
        var point = latLngToPoint(new L.LatLng(y, x));
        return this.stream.point(point.x, point.y);
    }
});

this.path = d3.geo.path().projection(t);
It's similar to leaflet's own latLngToLayerPoint, but without the rounding. (Note that map.getPixelOrigin() is rounded as well, so probably you'll need to rewrite it)
You learn something every day, don't you.
Coincidentally, I updated the tutorial recently to use the new d3.geo.transform feature, which makes it easy to implement a custom geometric transform. In this case the transform uses Leaflet’s built-in projection without any of D3’s advanced cartographic features, thus disabling adaptive resampling.
The new implementation looks like this:
var transform = d3.geo.transform({point: projectPoint}),
path = d3.geo.path().projection(transform);
function projectPoint(x, y) {
var point = map.latLngToLayerPoint(new L.LatLng(y, x));
this.stream.point(point.x, point.y);
}
As before, you can continue to pass a raw projection function to d3.geo.path, but you’ll get adaptive resampling and antimeridian cutting automatically. So to disable those features, you need to define a custom projection, and d3.geo.transform is an easy way to do this for simple point-based transformations.
I am looking for a generic piece of code (JavaScript) that would work with jQuery UI to constrain the movement (drag) of a div within a triangle.
Similar to this question (http://stackoverflow.com/questions/8515900/how-to-constrain-movement-within-the-area-of-a-circle), but for a triangle rather than a circle.
I would prefer the triangle to be defined as a Raphael SVG, like this:
(function() {
    Raphael.fn.triangle = function(cx, cy, r) {
        r *= 1.75;
        return this.path("M".concat(cx, ",", cy, "m0-", r * .58, "l", r * .5, ",", r * .87, "-", r, ",0z"));
    };

    var paper = Raphael(document.getElementById("triangle"), "100%", "100%");
    var triangle = paper.triangle(100, 100, 90);
    triangle.attr("fill", "#444444");
    triangle.attr("stroke", "#444444");

    $("#draggable").draggable({ containment: "#triangle svgnode", scroll: false });
})();
Looking forward to solutions.
I would like to note that the draggable element could also be an SVG node if that is easier.
See this answer, which shows how to constrain a jQuery draggable to an arbitrary path.
The trick is to alter the ui.position variables within the drag event, to constrain the movement.
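A minimal illustration of that mechanism (clamping to a rectangle just to show where the constraint goes; a real triangle constraint is shown in the next answer):
$('#draggable').draggable({
    drag: function (event, ui) {
        // ui.position is where jQuery UI is about to place the element;
        // overwrite it to keep the element inside an arbitrary region.
        ui.position.left = Math.max(0, Math.min(ui.position.left, 200));
        ui.position.top  = Math.max(0, Math.min(ui.position.top, 200));
    }
});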
Since none of the given answers actually show how to constrain a draggable to a triangular area, I thought I'd share this jsfiddle which demonstrates an actual working example.
I think the key here is not to focus so much on the "triangle" aspect but more importantly to realize a triangle is a polygon. This allows us to address the issue head-on using many existing algorithms that relate to a point and a polygon.
This 2D graph library (JavaScript 2D Graph Library) provides all the tools we need to solve this problem. Mainly, each Shape has an associated constrain function, which will constrain a Point to the inner area of the Shape (edge included) via a LineSegment that connects to the centroid of the Shape. (It also looks like you can set the center point for the Shape as a second argument, which would prove handy for concave Polygons.)
This jsFiddle, Triangle Constraint via jQuery UI Draggable, uses the jQuery UI Draggable drag callback in conjunction with the graph library to do the constraint. It uses the coordinates of the SVG polygon to construct the Graph Polygon, only inverting the y-axis to switch between screen and Cartesian coordinates.
The set-up that takes place in document ready is fairly simple:
var points = $('polygon').attr('points').trim().split(' ').map(function(vertex) {
        var coordinates = vertex.split(',');
        return new aw.Graph.Point(Number(coordinates[0]), Number(-coordinates[1]));
    }),
    triangle = new aw.Graph.Polygon(points);
$('.map-selector').draggable({
    containment: '.map',
    drag: function(event, ui) {
        var left = ui.position.left,
            top = -ui.position.top;
        var constrained = triangle.constrain(new aw.Graph.Point(left, top));
        ui.position.left = constrained.x;
        ui.position.top = -constrained.y;
    }
});
Cheers!