I wanted to produce a visualization containing a large number of nodes with the d3 force layout (more than 500 nodes). While it works correctly with up to about 200 nodes, it gets very slow at around 500: the layout hiccups from one frame to the next, and events like mouseover on nodes are far from responsive. This made me ask several questions.
Is there some kind of limit on the number of nodes beyond which the force layout is not recommended? If so, is there any other library that could handle the job?
If I wanted to speed up the process with d3, which parts should be optimized? I tried keeping the CSS/attribute markup minimal (just a radius and a fill color on nodes, plus a stroke-width and stroke color on links) and reducing the use of interaction (mouseover events), but could any more optimization be done to the force object that holds all of the information? The size of the data must play a certain role...
Thank you for your input!
One way of doing this would be to handle not every tick event but only a fraction of them, e.g. by skipping a fixed number of ticks between renders or by adapting that number dynamically based on other considerations.
If you want smooth movements, add a transition between the positions set in the handled tick events. You can of course combine these ideas as well and skip events while the transition is running, handling the first one after it has completed.
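For illustration, here is a minimal sketch of the tick-skipping idea, assuming a d3 v3 force layout and selections with the hypothetical names force, node, and link:

var tickCount = 0;
force.on("tick", function() {
  // render only every third tick; the simulation itself still advances each tick
  if (++tickCount % 3 !== 0) return;
  link.attr("x1", function(d) { return d.source.x; })
      .attr("y1", function(d) { return d.source.y; })
      .attr("x2", function(d) { return d.target.x; })
      .attr("y2", function(d) { return d.target.y; });
  node.attr("cx", function(d) { return d.x; })
      .attr("cy", function(d) { return d.y; });
});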
I'm developing a D3 application that uses a lot of text. Since there are a lot of text elements, panning around using D3 zoom causes some lag. Is there a way I can improve the performance of my application? I'm happy to hear any suggestions you might have. I've been thinking about paginating my data and detecting pan events, but my UI is very complex and the user has the freedom to place things the way they want, so I am not sure how to go about implementing a pan/pagination solution.
Such an open question is bound to get very opinion-based answers, and without more context you're also bound to get some useless ones, but here you go: it depends on what you want and what you mean by "a significant amount".
Supposing that significant amount is > 1000 elements, consider using canvas instead of SVG. Sure, if you need individual text nodes to be clickable, that's more of a hassle, but it should be really good at panning/zooming.
If that is not possible, look at your code. Are you repositioning all text nodes individually? If so, place them all inside one g node and apply the pan/zoom transform to that node. In other words, make that node responsible for all global movement, and position the text nodes only relative to each other.
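A minimal sketch of that idea, assuming the d3 v4 zoom behavior and hypothetical svg and g selections:

var g = svg.append("g");
// append all text elements to g once, positioned only relative to each other
svg.call(d3.zoom().on("zoom", function() {
  // a single transform on the group replaces repositioning every text node
  g.attr("transform", d3.event.transform);
}));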
If that is also not possible, consider removing text nodes when they're outside the bounds of the SVG. Repositioning invisible nodes takes a lot of computation power, so be smart about it. This is probably the most complex solution, so try it last.
I am rendering an SVG using D3 that shows circles from JSON data. I want to support zooming and dragging. The JSON structure can get very large. Here is my main issue:
Appending circles for all JSON entries doesn't really work. The page becomes way too slow as there might be thousands of <circle> elements in the DOM.
How I'm solving it:
I keep a reduced copy of the data set that I update in the drag function. On each drag event, I declare an empty data set:
var reducedData = [];
I go over the entire data set and push to reducedData only the circles whose center coordinates are visible given the current axes. I then erase the SVG and redraw it using reducedData. I do the same on every zoom event, appending to reducedData only the circles whose radius is greater than 5 pixels at the current zoom ratio.
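Roughly, the filtering step looks like this (a sketch; currentXScale, currentYScale, and the cx/cy fields are placeholder names):

var xDomain = currentXScale.domain(),
    yDomain = currentYScale.domain();
data.forEach(function(d) {
  // keep only circles whose center lies inside the visible x and y ranges
  if (d.cx >= xDomain[0] && d.cx <= xDomain[1] &&
      d.cy >= yDomain[0] && d.cy <= yDomain[1]) {
    reducedData.push(d);
  }
});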
Although the page is very responsive and seems to work well, this approach is very inefficient and I'm sure it isn't the best way to do this. What are some alternative solutions to my problem? Thanks.
Of course there's always room for improvement, but I think that your approach is already good enough, and maybe it doesn't get much better than that. However, here are some points for you to consider and/or test yourself if you wish.
First off, I would recommend checking whether any optimization is really needed. In recent versions of Google Chrome, in DevTools under the "Performance" tab, you can use CPU throttling to simulate a slower device. Then, using the timeline tool, you can verify whether your data reduction or your DOM manipulation is causing a bottleneck and dropping your frame rate. If not, don't sweat it: you're good to go.
If your analysis shows that the data reduction is slowing down your rendering, you can use the timeline tool to find exactly where it is slow and research faster alternatives.
On the other hand, if it is your DOM manipulation that is causing trouble, make sure that you're using the general update pattern, which ensures that you create or remove elements only when really needed. Furthermore, you may speed up the creation of circles by cloning existing ones instead of creating new ones.
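For reference, a minimal sketch of the general update pattern in d3 v4 style (svg, reducedData, and the d.id key are assumed names):

var circles = svg.selectAll("circle")
    .data(reducedData, function(d) { return d.id; }); // key by a stable id
circles.exit().remove();               // remove circles that left the view
circles.enter().append("circle")       // create only the circles that entered
    .attr("r", function(d) { return d.r; })
  .merge(circles)                      // then update new and existing together
    .attr("cx", function(d) { return d.cx; })
    .attr("cy", function(d) { return d.cy; });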
Usually, when too many data items need to be visualized, we switch from SVG to a canvas-based visualization as a last resort, but I think that would be overkill in your context.
Hope it helps and let us know if you have any questions.
I ended up using Crossfilter.js for fast filtering of the data. That way I don't need to manually keep a reduced copy of the data set; I can simply filter it quickly on each drag and zoom event. Thank you to everyone who answered.
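The Crossfilter usage is along these lines (a sketch; currentXScale and redraw are placeholder names):

var cf = crossfilter(data),
    xDim = cf.dimension(function(d) { return d.x; });
// on each drag/zoom event, filter to the visible x range and redraw
xDim.filterRange(currentXScale.domain());
redraw(xDim.top(Infinity)); // top(Infinity) returns every record passing the filter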
The way I solved this issue was to update only the visible SVG elements as the user pans/zooms.
function pointInDomain(d, domain) {
  // a point is visible when its x value lies inside the current x domain
  return d.x < domain[1] && d.x > domain[0];
}

function zoomed() {
  // rescale the x scale to the current zoom transform and redraw the axis
  xz = d3.event.transform.rescaleX(x);
  xGroup.call(xAxis.scale(xz));
  var domain = xz.domain();
  // hide off-screen circles and reposition only the visible ones
  clippedArea.selectAll("circle")
      .style("visibility", d => pointInDomain(d, domain) ? "visible" : "hidden")
      .filter(d => pointInDomain(d, domain))
      .attr("cx", d => xz(d.x));
}
I'm trying to improve my D3 force-directed graph's performance. Currently it uses SVG elements, but as soon as the number of nodes reaches ~500 and the number of links ~2000, it becomes almost impossible to use. I'm looking at some alternative ways of rendering the graph.
Canvas seems to be a nice option:
http://bl.ocks.org/mbostock/3180395
But is it possible to attach images to nodes on canvas, as is done here:
http://bl.ocks.org/eesur/be2abfb3155a38be4de4
Thanks
You can use drawImage(), which is part of the standard canvas 2D context API.
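For example, inside a canvas tick handler it could look like this (a sketch; context, width, height, nodes, and the icon path are assumed names):

var img = new Image();
img.src = "node-icon.png"; // hypothetical image path
force.on("tick", function() {
  context.clearRect(0, 0, width, height);
  // ...draw links here...
  nodes.forEach(function(d) {
    // draw a 16x16 icon centered on each node
    context.drawImage(img, d.x - 8, d.y - 8, 16, 16);
  });
});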
And yeah, canvas is a good approach for speeding things up, within reason. The force layout itself takes a bunch of CPU cycles no matter how you render it, and your numbers are already quite high for that. Also, while rendering 500 circles into a canvas at 60 frames per second should be doable, rendering 2000 links on top of that will already start slowing things down. Still, it'll be much better than SVG.
In order to know whether, and by how much, your optimizations are improving performance, consider using something like stats.js.
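A sketch of the usual stats.js setup, wrapping the rendering work done on each tick:

var stats = new Stats();
document.body.appendChild(stats.dom); // older builds expose stats.domElement instead
force.on("tick", function() {
  stats.begin();
  // ...render nodes and links...
  stats.end(); // measures the time spent between begin() and end()
});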
My page runs a touchmove event handler that captures the position of the user's finger on the screen via:
xPos = e.originalEvent.touches[0].pageX;
yPos = e.originalEvent.touches[0].pageY;
The page has many layers (created with position:absolute divs), and at this point I want to calculate how many such layers exist below the user's current position on the screen.
The only method I can think of is to keep an array of all the layers' positions and loop through that. However, that seems rather processor-intensive when there may be hundreds of layers on screen at once.
Is there a simple way in JS or jQuery to count the items that exist at a position, or a better-practice way to do it than my array suggestion?
As far as I know, there is no such way. My approach would be to give all layers a certain class, select them all, and iterate through them. Depending on what you are trying to achieve and how often you'll have to perform this calculation, it may be possible to use caching and approximation (e.g. checking not a single pixel but an area of x^2 pixels and caching the result) to make things run faster.
That being said, I encourage you to first try the solution you've thought of and see how fast it actually runs. Browsers are pretty fast at such standard operations (think layer drag & drop with boundary checks), so I'm pretty confident that it won't be as slow as you think it will be :)
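A sketch of that approach, assuming all layers carry a hypothetical class named layer:

function countLayersAt(pageX, pageY) {
  // convert page coordinates to viewport coordinates for getBoundingClientRect
  var x = pageX - window.pageXOffset,
      y = pageY - window.pageYOffset,
      count = 0;
  $(".layer").each(function() {
    var r = this.getBoundingClientRect();
    if (x >= r.left && x <= r.right && y >= r.top && y <= r.bottom) count++;
  });
  return count;
}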
I have a D3 setup using "nodes" and "lines". When the graph first appears, it bounces in with gravity until it settles in the middle. Does anyone know of a way to have it appear in the middle automatically, without the "bounce" effect?
P.S. I am using the force layout.
Calling start resets the cooling parameter, alpha; alpha decays exponentially as the layout converges on its solution, and then stops so as to avoid wasting the CPU. There's no jittering on start (other than for coincident nodes, where it is necessary to avoid a divide-by-zero). However, anytime you have conflicting forces and geometric constraints (links), it's natural to expect the layout to adjust when starting.
If you want to avoid this bounce, you either need to keep the graph permanently hot (say, by calling d3.timer(function() { force.resume(); })) or you need to do something else, like adjusting the alpha parameter manually to reheat gradually instead of instantaneously.
Edit: In 2.8.x, you can avoid the first bounce entirely by running the force layout synchronously on startup. For example: http://bl.ocks.org/1667139
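The synchronous approach boils down to ticking the layout manually before rendering anything. A sketch, using the force layout API of that era:

force.start();
for (var i = 0; i < 100; ++i) force.tick(); // advance the simulation without drawing
force.stop();
// now position nodes and links once, at their settled coordinates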
Another strategy I've used before is to gradually increase the radius of each node from zero over the first, say, 50 or 100 force ticks. You can see what that looks like (in Protovis, but it would behave the same way in d3) in the Dorling cartograms on the one.org data site.
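In d3, that strategy could look like this (a sketch; node and d.targetRadius are hypothetical names):

var ticks = 0;
force.on("tick", function() {
  var t = Math.min(1, ++ticks / 100); // ramp from 0 to 1 over the first 100 ticks
  node.attr("r", function(d) { return t * d.targetRadius; })
      .attr("cx", function(d) { return d.x; })
      .attr("cy", function(d) { return d.y; });
});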