I have an Ext.tree.TreePanel used with AsyncTreeNodes. The problem is that initially the
root node needs to have more than 1000 descendants. I succeeded at optimizing the DB performance, but the JavaScript performance is terrible - 25 seconds for adding and rendering 1200 nodes. I understand that manipulating the page's DOM is a slow operation, but perhaps there is some way to optimize the initial rendering process.
You could create a custom tree node UI that has a lower DOM footprint. In other words, change the HTML that is used to create each node of the tree to some less verbose (and likely less flexible) HTML.
Here are some references for doing that:
http://github.com/jjulian/ext-extensions/blob/master/IndentedTreeNodeUI.js
http://david-burger.blogspot.com/2008/09/ext-js-custom-treenodeui.html
Enjoy.
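For illustration, here is a rough, untested sketch of what a slimmer node UI could look like, assuming Ext JS 3.x. It is loosely based on the default renderElements but drops the checkbox/qtip/href extras; the class name MyApp.SlimTreeNodeUI and the wiring comment at the end are placeholders.

```js
// Rough sketch, Ext JS 3.x assumed. Keeps the child-element order the base
// TreeNodeUI relies on (indent span, expand/collapse img, icon img, anchor),
// but trims the per-node markup.
MyApp = window.MyApp || {};

MyApp.SlimTreeNodeUI = Ext.extend(Ext.tree.TreeNodeUI, {
    renderElements: function (n, a, targetNode, bulkRender) {
        this.indentMarkup = n.parentNode ? n.parentNode.ui.getChildIndent() : '';

        var buf = [
            '<li class="x-tree-node">',
              '<div ext:tree-node-id="', n.id, '" class="x-tree-node-el x-tree-node-leaf x-unselectable ', a.cls || '', '" unselectable="on">',
                '<span class="x-tree-node-indent">', this.indentMarkup, '</span>',
                '<img alt="" src="', this.emptyIcon, '" class="x-tree-ec-icon x-tree-elbow" />',
                '<img alt="" src="', a.icon || this.emptyIcon, '" class="x-tree-node-icon" unselectable="on" />',
                '<a hidefocus="on" class="x-tree-node-anchor" href="#" tabIndex="1"><span unselectable="on">', n.text, '</span></a>',
              '</div>',
              '<ul class="x-tree-node-ct" style="display:none;"></ul>',
            '</li>'
        ].join('');

        var nel;
        if (bulkRender !== true && n.nextSibling && (nel = n.nextSibling.ui.getEl())) {
            this.wrap = Ext.DomHelper.insertHtml('beforeBegin', nel, buf);
        } else {
            this.wrap = Ext.DomHelper.insertHtml('beforeEnd', targetNode, buf);
        }

        // The base class reads these references later, so keep them intact.
        this.elNode = this.wrap.childNodes[0];
        this.ctNode = this.wrap.childNodes[1];
        var cs = this.elNode.childNodes;
        this.indentNode = cs[0];
        this.ecNode = cs[1];
        this.iconNode = cs[2];
        this.anchor = cs[3];
        this.textNode = cs[3].firstChild;
    }
});

// Wiring it up (placeholder): pass it as the node's uiProvider, e.g.
// new Ext.tree.AsyncTreeNode({ text: 'Root', uiProvider: MyApp.SlimTreeNodeUI });
```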
I don't think you'll have much luck optimizing a tree with that many nodes. Is there any way you could use a grid to deliver the information instead? You could at least set up paging with that, and it would probably be a lot faster. You could also add the RowExpander user extension (ux) to the grid, which behaves somewhat like a tree for each row; see the sketch below.
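For what it's worth, here is a rough Ext 3-era sketch of that grid-with-paging setup. RowExpander comes from the examples/ux folder; the URL, store fields and element id are placeholders.

```js
// Rough Ext 3-era sketch; URL, fields and ids are placeholders.
var store = new Ext.data.JsonStore({
    url: '/nodes',                 // hypothetical endpoint returning { total, rows: [...] }
    root: 'rows',
    totalProperty: 'total',
    fields: ['name', 'details']
});

var expander = new Ext.ux.grid.RowExpander({
    tpl: new Ext.Template('<p>{details}</p>')   // shown when a row is expanded
});

var grid = new Ext.grid.GridPanel({
    store: store,
    plugins: expander,
    columns: [
        expander,
        { header: 'Name', dataIndex: 'name', width: 300 }
    ],
    bbar: new Ext.PagingToolbar({ store: store, pageSize: 100 }),
    height: 400,
    renderTo: 'grid-container'
});

store.load({ params: { start: 0, limit: 100 } });
```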
Virtual DOM/DOM tree state in steps
So I've stumbled upon this image recently and I'm a bit confused about the part where the Virtual DOM state is applied to the real DOM. Let's say we changed a <div> to a <header> in our React code; this change would be applied to the Virtual DOM and then the whole subtree would be re-rendered. But in vanilla JS I could do the very same thing and the subtree would be re-rendered as well? So do I understand correctly that the Virtual DOM's duty, among other things, is to abstract the real DOM operations we would otherwise need to do?
Would correct/efficient HTML DOM manipulation have the same effect as working with a Virtual DOM in complex applications?
So do I understand correctly that the Virtual DOM's duty, among other things, is to abstract the real DOM operations we would otherwise need to do?
Yes.
Would correct/efficient HTML DOM manipulation have the same effect as working with a Virtual DOM in complex applications?
Yes.
You're right. A virtual DOM ultimately translates into calls to the real DOM. Therefore, the virtual DOM can only ever be slower than optimal use of the real DOM. If you add, remove and move elements on the actual DOM in a way that directly reflects the changes you're making, it will be efficient.
Then why does Virtual DOM exist?
People invented the virtual DOM to abstract away "how things are changing" and instead let you think about "how things currently are." The architecture of frameworks like React and Vue is basically that components contain the code declaring how the DOM should currently look. The framework then does a big re-computation of how it thinks things should be, diffs that against how the components said things should be the last time it asked them, and applies that diff to the real DOM.
So, virtual DOM is a way to ameliorate the inherent slowness of this design, under the belief that its benefits will outweigh the costs.
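To make that declare/diff/patch cycle concrete, here is a deliberately tiny sketch. None of this is React's or Vue's actual code; h, create, changed and patch are made-up names, and real frameworks do far more (keys, events, components, batching).

```js
// Build a virtual node: a plain object describing a tag, its props and children.
function h(tag, props, ...children) {
  return { tag, props: props || {}, children };
}

// Turn a virtual node into a real DOM node.
function create(vnode) {
  if (typeof vnode === 'string') return document.createTextNode(vnode);
  const el = document.createElement(vnode.tag);
  Object.entries(vnode.props).forEach(([k, v]) => el.setAttribute(k, v));
  vnode.children.forEach(c => el.appendChild(create(c)));
  return el;
}

// Two virtual nodes are "different" if their type, text or tag differs.
function changed(a, b) {
  return typeof a !== typeof b ||
         (typeof a === 'string' ? a !== b : a.tag !== b.tag);
}

// Apply the difference between the old and new virtual trees to the real node `el`.
function patch(el, oldV, newV) {
  if (changed(oldV, newV)) {             // different tag or text: replace the subtree
    el.replaceWith(create(newV));
    return;
  }
  if (typeof newV === 'string') return;  // identical text node, nothing to do
  // sync attributes
  Object.entries(newV.props).forEach(([k, v]) => {
    if (oldV.props[k] !== v) el.setAttribute(k, v);
  });
  Object.keys(oldV.props).forEach(k => {
    if (!(k in newV.props)) el.removeAttribute(k);
  });
  // recurse into children, append extras, drop leftovers
  newV.children.forEach((childV, i) => {
    if (i < oldV.children.length) patch(el.childNodes[i], oldV.children[i], childV);
    else el.appendChild(create(childV));
  });
  while (el.childNodes.length > newV.children.length) {
    el.removeChild(el.lastChild);
  }
}

// The <div> -> <header> example from the question: only the changed child is replaced.
const v1 = h('div', { id: 'app' }, h('div', {}, 'hello'));
const v2 = h('div', { id: 'app' }, h('header', {}, 'hello'));
document.body.appendChild(create(v1));
patch(document.body.lastChild, v1, v2);
```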
Sources
https://svelte.dev/blog/virtual-dom-is-pure-overhead
https://medium.com/#hayavuk/why-virtual-dom-is-slower-2d9b964b4c9e
I'm not sure I fully understand your questions but I'll try to answer as fully as I can.
React works with a Virtual DOM, which means it keeps a copy of the real DOM, and whenever it detects a change it works out the difference and re-renders only that component and its children in the subtree, NOT the whole tree.
The way React detects change is through props (and state). If the props of a component have changed, it re-renders the component.
One way to avoid unnecessary re-renders of the children is to use PureComponent, shouldComponentUpdate, or the newer React.memo and the useMemo hook (memoization).
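For example, React.memo skips re-rendering a child whose props haven't changed (the component names here are just for illustration):

```js
import React, { memo, useState } from 'react';

// Re-renders only when its `label` prop actually changes.
const Child = memo(function Child({ label }) {
  console.log('Child rendered');
  return <li>{label}</li>;
});

function Parent() {
  const [count, setCount] = useState(0);
  return (
    <div>
      <button onClick={() => setCount(count + 1)}>clicked {count} times</button>
      <Child label="static label" /> {/* skipped on every count update */}
    </div>
  );
}

export default Parent;
```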
Also, one other difference between the real DOM and the virtual DOM is that virtual DOM nodes are plain JavaScript objects, which are much cheaper to create and compare than real DOM nodes; that is a big part of why diffing against the virtual DOM is so fast.
I hope this clarifies your doubts about the virtual DOM without going into too much detail.
Real DOM
Updates are slow.
Can directly update the HTML.
Creates a new DOM node when an element updates.
Uses a lot of memory.
Virtual DOM
Updates are faster and lightweight.
Can't directly update the HTML.
Updates the JSX (the virtual representation) when an element updates.
Better performance because it uses less memory.
My institution is dealing with several fairly large networks:
around 1000 nodes and 2000 edges (so smaller than this one),
fixed node positions,
potentially dozens of attributes per element,
and a minified network file size of 1 MB and up.
We visualize the networks in custom-built HTML pages (one network per page) with cytoscape.js. The interface allows the user to select different pre-defined coloring options for the nodes (the main functionality); colors are calculated from one of a set of numeric node attributes. Accessory functions include node qtips, finding a node and centering on it (via a select2 dropdown), and highlighting nodes based on their assignment to pre-defined groups (via another select2 dropdown).
We face the problem that the page (and so necessarily the whole browser) becomes unresponsive for at least 5 seconds whenever a color change is initiated, and we are looking for strategies to remedy that. Here are some of the things that we have tried or are contemplating:
Transplant node attributes to separate files that are fetched on demand via XHRs (done, but performance impact unclear).
Offload cytoscape.js to a Web worker (produced errors - probably due to worker DOM restrictions - or did not improve performance).
Cache color hue calculation results with lodash's memoize() or similar (pending).
Export each colorized network as an image and put some fixed-position HTML elements (or a canvas?) on top of the image stack for node qtips and such. So we would basically replace cytoscape.js with static images and custom JavaScript.
I appreciate any suggestions on alternative or complementary strategies for improving performance, as well as comments on our attempts so far. Thanks!
Changing the style for 1000 nodes and 2000 edges takes ~150 ms on the machine I'm using at the moment. It follows that the issue is very likely in your own code.
You haven't posted an example or a sample of what you've currently written, so it's not possible for me to say what the performance issue is in your code.
You've hinted that you're using widgets like <select>. I suspect you're reading from the DOM (e.g. widget state) for each element. DOM operations (even reads) are slow.
Whatever your performance issue is, you've decided to use custom functions for styling. The docs note this several times, but I'll reiterate here: Custom functions can be expensive. Using custom functions is like flying a plane without autopilot. You can do it, but now you have to pay attention to all the little details yourself. If you stick to the in-built stylesheets features, style application is handled for you quickly and automatically (like autopilot).
If you want to continue to use expensive custom functions, you'll have to use caching. Lodash makes this easy. Even if you forgo custom functions for setting .data() with mappers, you still may find caching with Lodash useful for your calculations.
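As a rough sketch of that caching idea: memoize the expensive calculation, write the result into element data inside a batch, and let a plain data() mapper in the stylesheet do the painting. Here cy is assumed to be your Cytoscape instance, and computeHue plus the 'hue' data field are made-up names.

```js
// Memoize the expensive colour calculation so each distinct value is computed once.
var computeHue = _.memoize(function (value) {
  return 'hsl(' + Math.round(value * 240) + ', 80%, 50%)';  // stand-in for the real calc
});

function recolor(cy, attrName) {
  cy.batch(function () {                 // defer style updates and redraws to one pass
    cy.nodes().forEach(function (node) {
      node.data('hue', computeHue(node.data(attrName)));
    });
  });
}

// Stylesheet side: no custom style functions, just a data() mapper.
cy.style()
  .selector('node')
  .style({ 'background-color': 'data(hue)' })
  .update();
```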
You may find these materials useful:
Chrome JS profiling
Firefox performance profiling
Cytoscape.js performance hints
When you have 2 <div> elements, both with a fair amount of graphic content, and only one is shown at a time, is it better to hide the one not shown, or to empty it and re-insert the HTML again when needed?
When you hide, does everything stay in memory?
The answer is most probably: it depends.
As humans we have some intuition about the performance cost of different operations, but in a case like this where you are not sure, the best practice is to estimate and benchmark. The idea is to understand the compromise each method makes:
First, and easier: benchmark the memory impact of your hidden div. Note that a hidden <div> is kept in memory but is not rendered on the page, so it has a smaller memory footprint than a rendered element. Measuring this is easy, even with just Chrome's Task Manager. Switch between the two several times and measure the memory footprint of each method. Is it really as significant as you expected?
The second, a little more complicated: measure the impact of reloading and re-rendering on your client's system and on the user experience of your app. It's best to use a weak machine, and maybe even a slow connection. Measure any delay, with code and reporting if possible, or at least by feel, and try to spot CPU spikes and process slowdowns. Switch between the divs multiple times, both slowly and rapidly. Does it still feel slick?
In this case, I tend to guess the memory footprint is much smaller than you assume, but that's just my experience. I believe hiding and showing will require less effort than emptying and reloading.
That said, I'm certain that after trying both it will become very clear which method is right for you: hiding if the memory footprint is small, reloading if it's so large that it's worth the slowness of reloading. Only you can measure and figure out where that line is.
Side note: when hiding, best practice is to use display: none;. This removes the element from the render tree, which performs better than opacity: 0; or visibility: hidden;. If you need one of those for specific functionality, use them; if not, use display: none;. Also note that jQuery's .hide() uses display: none;. From the jQuery .hide() documentation:
This is roughly equivalent to calling .css( "display", "none" )
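To make the two strategies concrete, a small sketch (the panel ids #viewA and #viewB are placeholders):

```js
// Strategy 1: keep both panels in the DOM and toggle visibility.
// display: none removes the hidden panel from the render tree but keeps it in memory.
function showA() {
  $('#viewB').hide();
  $('#viewA').show();
}

// Strategy 2: keep the markup as strings, empty the hidden panel and re-insert on demand.
// Lower memory while hidden, but you pay the re-parse/re-render cost on every switch.
var markup = {
  viewA: $('#viewA').html(),
  viewB: $('#viewB').html()
};
function showOnlyA() {
  $('#viewB').empty();
  $('#viewA').html(markup.viewA);
}
```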
Sources:
How Web Browsers Work: Render Tree Construction
jQuery documentation: .hide()
jQuery documentation: .empty()
I am using a jsTree with around 1500 nodes, nested to a maximum of 4 levels (most are only 1 level deep), and I'm getting Internet Explorer's "this script is running slowly" error. I began with a straight html_data <li> structure generated by ASP.NET. The tree wouldn't finish loading at all. Then I tried xml_data and json_data, which was a little better but eventually errored out. My last-ditch effort was async loading. This fixed the initial load problem, but now I get IE's error when I expand one of the larger branches.
More details: I'm using the checkbox plugin, and I will also need the ability to search. Unfortunately, when searching, the user could enter as little as one character, so I could be looking at a very large set of search results.
Has anybody done something similar with such a large data set? Any suggestions on speeding up the jsTree? Or, am I better off exploring other options for my GUI?
I realize I haven't posted any code, but any general techniques/gotchas are welcome.
I haven't completely solved my problem, but I made some improvements so that I think it might be usable (I am still testing). I thought it could be useful for other people:
First, I was using jsTree in a jQuery dialog, but that seems to hurt performance. If possible, don't mix large jsTrees and Dialogs.
Lazy loading is definitely the way to go with large trees. I tried json_data and xml_data, and they were both easy to implement. They seem to perform about the same, but that's just based on casual observation.
Last, I implemented a poor man's paging. In my server-side JSON request handler, if a node has more than X children, I simply split it into several nodes, each holding a portion of those children. For instance, if node X has, say, 1000 children, I give X child nodes X1, X2, X3, ..., X10, where X1 holds the first 100 children, X2 the next 100, and so on. This may not make sense for everyone since it modifies the tree structure, but I think it will work for me.
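A rough sketch of that server-side chunking (the node shape here is made up; the tree only ever sees the synthetic X1, X2, ... nodes):

```js
var PAGE_SIZE = 100;

// Wraps an oversized child list in synthetic "page" nodes of at most PAGE_SIZE each.
function pageChildren(node) {
  if (!node.children || node.children.length <= PAGE_SIZE) {
    return node;
  }
  var pages = [];
  for (var i = 0; i < node.children.length; i += PAGE_SIZE) {
    pages.push({
      id: node.id + '_' + (pages.length + 1),
      text: node.text + ' [' + (i + 1) + '-' + Math.min(i + PAGE_SIZE, node.children.length) + ']',
      children: node.children.slice(i, i + PAGE_SIZE)
    });
  }
  return { id: node.id, text: node.text, children: pages };
}
```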
jsTree supports all your needs:
Use the json_data plugin with AJAX support where the branch would be too big.
The search plugin supports AJAX calls too.
I'm a bit disappointed in its performance myself.
Sounds like you need to try lazy loading: instead of loading the whole tree all at once, only load as needed.
That is, initially load only the trunk of the tree (so all nodes are "closed"), then only load a node's children when user clicks to open it.
JsTree can do this, see the documentation.
(Is that what you mean by "async loading"?)
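For reference, a rough sketch of lazy loading with the 1.0-era json_data plugin. The URLs and the id parameter are placeholders; the server should return only the children of the requested node.

```js
$('#tree').jstree({
    plugins: ['themes', 'json_data', 'ui', 'checkbox', 'search'],
    json_data: {
        ajax: {
            url: '/tree/children',
            data: function (node) {
                // jsTree passes -1 when it asks for the root level
                return { id: node === -1 ? 0 : node.attr('id') };
            }
        }
    },
    search: {
        // the search plugin can also ask the server for the path to matching nodes
        ajax: { url: '/tree/search' }
    }
});
```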
jsTree sucks - it is the "refresh" that is slow: 10 seconds for 1000 child nodes being added, and over a minute to load a tree with 10,000 items spread among 40 nodes. After days of development I have told my colleague to look at SlickGrid instead, as everyone will refuse to use a page that takes this long to do anything. It is quicker if you do not structure it correctly, e.g. 3 seconds for 1000 nodes, but then the arrow has no effect when you try to collapse the node.
This was meant to replace a combination of the MS TreeView and MS ImageList, which loads the same 10,000 items across forty parent nodes in 3 seconds.
I'm trying to optimize a sortable table I've written. The bottleneck is the DOM manipulation: I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the existing rows rather than recreating the nodes. For this to make a significant difference, rearranging DOM nodes would have to be a lot snappier than creating them. Is this the case?
thanks,
-Morgan
I don't know whether creating or manipulating is faster, but I do know that it'll be faster if you manipulate the entire table while it's not on the page and then place it back all at once. Along those lines, it'll probably be slower to rearrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table on the DOM.
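A small sketch of that approach: clone the <tbody>, reorder the rows while it is detached, then swap it back in as a single DOM change (the table id and comparator are just examples):

```js
function sortTable(table, compareRows) {
  var oldBody = table.tBodies[0];
  var newBody = oldBody.cloneNode(true);     // work on a detached copy
  var rows = Array.prototype.slice.call(newBody.rows);
  rows.sort(compareRows);
  rows.forEach(function (row) {
    newBody.appendChild(row);                // appendChild moves the row, reordering in place
  });
  table.replaceChild(newBody, oldBody);      // one reflow instead of one per row
}

// Example: sort by the numeric value of the first cell.
sortTable(document.getElementById('myTable'), function (a, b) {
  return Number(a.cells[0].textContent) - Number(b.cells[0].textContent);
});
```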
I'm drawing this table about twice as quickly now by using innerHTML and building the entire contents as a string, rather than inserting nodes one by one.
You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html
I was looking for an answer to this and decided to set up a quick benchmark http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it is not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes every time.
If you can, it is better not to manipulate the DOM node by node, but to build up the new content inside your script first and only then touch the DOM. Rather than triggering what is called a repaint for every single node, you attach the new nodes to a detached parent (or a document fragment) and then attach that parent to the actual DOM, resulting in just two repaints instead of hundreds. I say two because you also need to clean out what is already in the DOM before inserting your new data.
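For example, with a DocumentFragment (the table selector and data are placeholders):

```js
// Build the new rows off-DOM, then clear and insert with two DOM touches in total.
function renderRows(tbody, values) {
  var fragment = document.createDocumentFragment();
  values.forEach(function (value) {
    var tr = document.createElement('tr');
    var td = document.createElement('td');
    td.textContent = value;
    tr.appendChild(td);
    fragment.appendChild(tr);       // no repaint: the fragment is not in the document
  });
  tbody.textContent = '';           // touch 1: drop the old rows
  tbody.appendChild(fragment);      // touch 2: insert all new rows at once
}

renderRows(document.querySelector('#myTable tbody'), ['alpha', 'beta', 'gamma']);
```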