How do I speed it up?
I get the results from a JSON web service (lightning fast), then add nodes to the tree using something like:
parentNode.addChild({
key: key,
title: value,
addClass: cssClass
});
Unfortunately, a tree with 100+ elements takes 1.5 minutes to load.
I am disappointed... is it not meant to be used with that many nodes? Is there anything I can do at this point, aside from switching to another component?
Thanks!
This benchmark shows that it loads pretty fast:
http://wwwendt.de/tech/dynatree/doc/test-bench.html
(There's always room for improvement, though...)
Your problem might be that you load and add the nodes separately.
In this case the tree is also rendered 100+ times, and that is slow indeed.
Have a look at the sample to see how to load a batch of nodes with one call:
http://wwwendt.de/tech/dynatree/doc/sample-lazy.html
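For illustration, here is a minimal sketch of the difference, assuming the service results are already in a plain array (called data here, with key/title/cssClass fields - those names are just placeholders) and that addChild accepts an array of node definitions, the way the batch-loading sample does:

// Slow: one addChild() call per record - the tree may re-render after every call.
$.each(data, function (i, rec) {
    parentNode.addChild({ key: rec.key, title: rec.title, addClass: rec.cssClass });
});

// Faster: build the whole child list first, then add it with a single addChild() call.
var children = $.map(data, function (rec) {
    return { key: rec.key, title: rec.title, addClass: rec.cssClass };
});
parentNode.addChild(children);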
Looks like there is an example of lazy loading the tree. Might try that out: http://wwwendt.de/tech/dynatree/doc/samples.html
So I'm trying to create an infinite scrolling table using AngularJS, similar to this: http://jsfiddle.net/vojtajina/U7Bz9/
The problem I'm having is that in the jsfiddle example, if I keep scrolling until I have a million results, the browser is going to slow to a crawl, isn't it? Because there would now be 1,000,000 results in $scope.items. It would be better if I only ever had, for example, 1000 results at a time inside $scope.items, and the results I was viewing happened to be within those 1000.
Example use case: page loads and I see the first 10 results (out of 1,000,000). Even though I only see 10, the first 1000 results are actually loaded. I then scroll to the very bottom of the list to see the last 10 items. If I scroll back up to the top again, I would expect that the top 10 results will have to be loaded again from the server.
I have a project I did with ExtJS that had a similar situation: an infinite scrolling list with several thousand results in it. The ExtJS way to handle this was to load the current page of results, then pre-load a couple of extra pages of results as well. At any one time, though, there were only ever 10 pages of results stored locally.
So I guess my question is how I would go about implementing this in AngularJS. I know I haven't provided much code, so I'm not expecting people to just give a coded answer, but at least some advice on which direction to go.
A couple of things:
"Infinite scrolling" to "1,000,000" rows is likely to have issues regardless of the framework, just because you've created millions and millions of DOM nodes (presuming you have more than one element in each record)
The implementation you're looking at doing with <li ng-repeat="item in items">{{item.foo}}</li> or anything like that will have issues very quickly for one big reason: {{item.foo}} or any ngBind like that will set up a $watch on that field, which creates a lot of overhead in the form of function references, etc. So while 10,000 small objects in an "array" isn't going to be that bad... 10,000-20,000 additional function references for those 10,000 items will be.
What you'd want to do in this case would be create a directive that handles the adding and removing of DOM elements that are "too far" out of view as well as keeping the data up to date. That should mitigate most performance issues you might have.
... good infinite scrolling isn't simple, I'm sorry to say.
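To make the idea concrete, here is a very rough sketch (not a drop-in solution) of keeping only a bounded window of rows in $scope.items and swapping that window as the user scrolls; the /items endpoint, loadPage, and the page size are all assumptions:

// Directive: watch the element's own scrollbar and ask the controller for the
// previous/next window of rows when the user nears the top or bottom.
angular.module('app', [])
  .directive('windowedScroll', function () {
    return {
      link: function (scope, element) {
        element.bind('scroll', function () {
          var el = element[0];
          if (el.scrollTop + el.clientHeight >= el.scrollHeight - 100) {
            scope.$apply(function () { scope.loadPage(scope.page + 1); });
          } else if (el.scrollTop <= 100 && scope.page > 0) {
            scope.$apply(function () { scope.loadPage(scope.page - 1); });
          }
        });
      }
    };
  })
  .controller('ListCtrl', function ($scope, $http) {
    var PAGE_SIZE = 1000;               // keep at most this many items in memory
    $scope.page = 0;
    $scope.items = [];
    $scope.loadPage = function (page) {
      $http.get('/items', { params: { offset: page * PAGE_SIZE, limit: PAGE_SIZE } })
        .then(function (response) {
          $scope.page = page;
          $scope.items = response.data; // replace rather than append, so the array stays bounded
        });
    };
    $scope.loadPage(0);
  });

The markup would then be something like <div windowed-scroll style="height: 400px; overflow-y: auto"><div ng-repeat="item in items">{{item.name}}</div></div>, and scrolling back to the top window hits the server again, as described in the question.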
I like the angular-ui implementation ui-scroll...
https://github.com/angular-ui/ui-scroll
... over ngInfiniteScroll. The main difference between ui-scroll and a standard infinite scroll is that previous elements are removed when they leave the viewport.
From their readme...
The common way to present to the user a list of data elements of undefined length is to start with a small portion at the top of the list - just enough to fill the space on the page. Additional rows are appended to the bottom of the list as the user scrolls down the list.
The problem with this approach is that even though rows at the top of the list become invisible as they scroll out of the view, they are still a part of the page and still consume resources. As the user scrolls down, the list grows and the web app slows down.
This becomes a real problem if the html representing a row has event handlers and/or angular watchers attached. A web app of average complexity can easily introduce 20 watchers per row, which for a list of 100 rows gives you a total of 2,000 watchers and a sluggish app.
Additionally, ui-scroll is actively maintained.
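For reference, usage looks roughly like this (a sketch only: the datasource contract shown here is get(index, count, success) as described in the README, and the field names are made up):

<!-- The viewport has a fixed height; ui-scroll creates and destroys rows as you scroll. -->
<div ui-scroll-viewport style="height: 400px; overflow-y: auto">
  <div ui-scroll="item in datasource">{{item.text}}</div>
</div>

// On the scope: get() is called with the index of the first row wanted and how many rows
// to fetch; success() receives the array (this is where the ajax call would normally go).
$scope.datasource = {
  get: function (index, count, success) {
    var result = [];
    for (var i = index; i < index + count; i++) {
      result.push({ text: 'item #' + i });
    }
    success(result);
  }
};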
It seems that http://kamilkp.github.io/angular-vs-repeat would be what you are looking for. It is a virtual scrolling directive.
So it turns out that the ng-grid module for AngularJS has pretty much exactly what I needed. If you look at the examples page, the Server-Side Processing Example is also an infinite scrolling list that only loads the data that is needed.
Thanks to those who commented and answered anyway.
Latest URL: ng-grid
You may try using ng-infinite-scroll:
http://binarymuse.github.io/ngInfiniteScroll/
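A rough sketch of its usage (the loadMore name, endpoint, and page size are placeholders); note that it only appends, so the list still grows as you scroll:

<div infinite-scroll="loadMore()" infinite-scroll-distance="1">
  <div ng-repeat="item in items">{{item.name}}</div>
</div>

// loadMore() fetches the next page and appends it to $scope.items.
$scope.items = [];
$scope.loadMore = function () {
  $http.get('/items', { params: { offset: $scope.items.length, limit: 50 } })
    .then(function (response) {
      $scope.items = $scope.items.concat(response.data);
    });
};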
Check out virtualRepeat from Angular Material
It implements dynamic reuse of rows visible in the viewport area, just like ui-scroll.
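Usage is roughly like this (the container needs a fixed height; item and the field name are placeholders):

<md-virtual-repeat-container style="height: 400px">
  <div md-virtual-repeat="item in items" class="repeated-item">
    {{item.name}}
  </div>
</md-virtual-repeat-container>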
I am using a jsTree with around 1500 nodes, nested to a max of 4 levels (most are only 1 level deep), and I'm getting Internet Explorer's "this script is running slowly" error. I began with just a straight html_data <li> structure, generated by ASP.NET. The tree wouldn't finish loading at all. Then I tried xml_data and json_data, which were a little better but eventually errored out. My last-ditch effort was async loading. This fixed the initial load problem, but now I get IE's error when I expand one of the larger branches.
More details: I'm using the checkbox plugin, and I will also need the ability to search. Unfortunately, when searching, the user could potentially enter as little as one character so I'm looking at some large set of search results.
Has anybody done something similar with such a large data set? Any suggestions on speeding up the jsTree? Or, am I better off exploring other options for my GUI?
I realize I haven't posted any code, but any general techniques/gotchas are welcome.
I haven't completely solved my problem, but I made some improvements and I think it might now be usable (I am still testing). I thought this could be useful for other people:
First, I was using jsTree in a jQuery dialog, but that seems to hurt performance. If possible, don't mix large jsTrees and Dialogs.
Lazy loading is definitely the way to go with large trees. I tried json_data and xml_data, and they were both easy to implement. They seem to perform about the same, but that's just based on basic observation.
Last, I implemented a poor man's paging. In my server-side JSON request handler, if a node has more than X children, I simply split them into several nodes, each holding a portion of those children. For instance, if node X has, say, 1,000 children, I gave X child nodes X1, X2, X3, ..., X10, where X1 holds the first 100 children, X2 the next 100, and so on. This may not make sense for some people since you're modifying the tree structure, but I think it will work for me.
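A rough sketch of that splitting step (written in JavaScript here purely for illustration; the function and field names are made up), assuming each child is already a jsTree-style json_data node object:

// Split an oversized child list into synthetic parent nodes of at most chunkSize children each.
function chunkChildren(parentTitle, children, chunkSize) {
    if (children.length <= chunkSize) {
        return children;                                 // small enough, leave the structure alone
    }
    var chunks = [];
    for (var i = 0; i < children.length; i += chunkSize) {
        chunks.push({
            data: parentTitle + ' ' + (i / chunkSize + 1),   // node title, e.g. "X 1", "X 2", ...
            children: children.slice(i, i + chunkSize)
        });
    }
    return chunks;
}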
jsTree supports all your needs:
Use the json_data plugin with ajax support where the branch would be too big.
The search plugin supports ajax calls too.
I'm a bit disappointed in its performance myself.
Sounds like you need to try lazy loading: instead of loading the whole tree all at once, only load as needed.
That is, initially load only the trunk of the tree (so all nodes are "closed"), then only load a node's children when the user clicks to open it.
jsTree can do this; see the documentation.
(Is that what you mean by "async loading"?)
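For reference, lazy loading with the json_data plugin looks roughly like this (the URL and the id handling are assumptions; check the jsTree documentation for the exact contract in your version):

$("#tree").jstree({
    plugins: ["themes", "json_data", "checkbox", "search"],
    json_data: {
        ajax: {
            url: "/nodes",
            data: function (node) {
                // jsTree passes -1 for the root; otherwise send the clicked node's id
                // so the server returns only that node's children.
                return { id: node === -1 ? 0 : node.attr("id") };
            }
        }
    }
});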
jsTree sucks - it is the "refresh" that is slow: 10 seconds for 1,000 child nodes being added, and over a minute to load a tree with 10,000 items spread across 40 nodes. After days of development I have told my colleague to look at SlickGrid instead, as everyone will refuse to use a page that takes this long to do anything. It is quicker if you do not structure it correctly (e.g. 3 seconds for 1,000 nodes), but then the arrow will not collapse the node.
This is meant to replace a combination of the MS TreeView and MS ImageList, which loads the same 10,000 items across forty parent nodes in 3 seconds.
I am appending large numbers of table row elements at a time and experiencing some major bottlenecks. At the moment I am using jQuery, but I'm open to a plain JavaScript solution if it gets the job done.
I have the need to append anywhere from 0-100 table rows at a given time (it's actually potentially more, but I'll be paginating anything over 100).
Right now I am appending each table row individually to the DOM...
loop {
..build html str...
$("#myTable").append(row);
}
Then I fade them all in at once
$("#myTable tr").fadeIn();
There are a couple of things to consider here...
1) I am binding data to each individual table row, which is why I switched from a mass append to appending individual rows in the first place.
2) I really like the fade effect. Although it's not essential to the application, I am very big on aesthetics and animations (ones that, of course, don't distract from the use of the application). There simply has to be a good way to apply a modest fade effect to larger amounts of data.
(edit)
3) A major reason for me approaching this in the smaller-chunk/recursive way is that I need to bind specific data to each row. Am I binding my data wrong? Is there a better way to keep track of this data than binding it to the respective tr?
Is it better to apply effects/DOM manipulations in large chunks, or in smaller chunks in recursive functions?
Are there situations where it's better to do one or the other? If so, what are the indicators for choosing the appropriate method?
Take a look at this post by John Resig, it explains the benefit of using DocumentFragments when doing large additions to the DOM.
A DocumentFragment is a container that you can add nodes to without actually altering the DOM in any way. When you are ready, you add the entire fragment to the DOM, and this places its content into the DOM in a single operation.
Also, doing $("#myTable") on each iteration is really not recommended - do it once before the loop.
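A sketch of that approach for this case, keeping the per-row data binding and the fade (rowsData and buildRowHtml are placeholders for your own data and markup):

var fragment = document.createDocumentFragment();
$.each(rowsData, function (i, rowData) {
    var $row = $(buildRowHtml(rowData)).hide();   // build the <tr> off-DOM, hidden for the fade
    $row.data("record", rowData);                 // per-row data binding still works
    fragment.appendChild($row[0]);
});
var $table = $("#myTable");                       // query the table once, outside the loop
$table.append(fragment);                          // a single DOM insertion
$table.find("tr").fadeIn();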
I suspect your performance problems are because you are modifying the DOM multiple times in your loop.
Instead, try modifying it once after you get all your rows. Browsers are really good at innerHTML replaces. Try something like
$("#myTable").html("all the rows dom here");
Note that you might have to play with the exact selector to get the DOM in the correct place, but the main idea is to use innerHTML, and to use it as few times as possible.
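For example (rowsData and buildRowHtml are placeholders again): build one string, write it once. Note that with this approach you lose the ability to attach jQuery .data() while building, so you would either re-bind data after insertion or keep it in a lookup keyed by a row id.

var html = [];
$.each(rowsData, function (i, rowData) {
    html.push(buildRowHtml(rowData));             // e.g. "<tr><td>...</td></tr>"
});
$("#myTable tbody").html(html.join(""));          // one innerHTML replace for all the rows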
I am wondering if anyone can give some safe guidelines on the maximum amount of DOM manipulation you can do with jQuery without freezing the browser.
Also the best methods for mass DOM manipulation.
Basically at any one time I could be dealing with lists of up to 40k li's
All I am really doing is showing one, hiding the other.
So here are my questions
How many li's could I theoretically manipulate at a time without crashing the browser?
How should I work with the li's?
Manipulate as a single object (ul level)
Loop over each li in the first list, removing them, then loop over the new list, inserting each new li.
Loop over chunks of li's (if so, how big: 10, 100, 1,000, 10,000 at a time?)
How should I fetch the data? (it's in JSON format)
Grab the data for the entire list (40k li's worth) in one ajax call.
Insert the data for every list at page creation (could be upwards of 20 lists = 800,000 li's).
Do several ajax calls to fetch the data (if so, how big: 10, 100, 1,000, 10,000 at a time?)
If you really want to be manipulating that many, then you should probably adopt something like http://developer.yahoo.com/yui/3/async-queue/
As for how many you should be working with at a time, you could build in a calibration that looks at how quickly the last set completed and works with more or less accordingly. This could get you something that works in everything from desktop Chrome to mobile IE.
The same goes for the ajax calls: make them able to ramp up according to the network speed.
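A minimal sketch of that calibration idea (renderChunk, the item list, and the 50 ms budget are all assumptions): process the li's in chunks on a timer, timing each chunk and adjusting the size of the next one so the page never freezes for long.

var chunkSize = 500;
function processNext(items, index) {
    if (index >= items.length) { return; }
    var size = chunkSize;
    var start = new Date().getTime();
    renderChunk(items.slice(index, index + size));     // do one chunk of the DOM work
    var elapsed = new Date().getTime() - start;
    // Calibrate: aim for roughly 50 ms of work per chunk.
    if (elapsed > 50 && chunkSize > 50) { chunkSize = Math.floor(chunkSize / 2); }
    else if (elapsed < 25) { chunkSize = chunkSize * 2; }
    // Yield to the browser before the next chunk so it can repaint and handle input.
    setTimeout(function () { processNext(items, index + size); }, 0);
}
processNext(allListItems, 0);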
As a warning: this is extremely dependent on your computer performance. Frankly - anything approaching 100 elements in a DOM manipulation starts getting a little silly and expensive. That said:
1) It depends on your system; my older system tops out at about 30 and my newer one can get up to 120 before I break things.
2) Work with them on as large a level as possible. Manipulating a ul with a bunch of li's in it is much faster than manipulating a bunch of li's. Use jQuery and store the object in a variable (so you're not querying the DOM each time it's used) to enhance performance.
3) Initially load the first data the user will see, then fetch the rest in similarly sized batches. If you can only see 20 li elements at a time, there is no reason to load any more than that plus a little buffer (30?).
I'm trying to optimize a sortable table I've written. The bottleneck is in the DOM manipulation. I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the rows rather than recreating the nodes. For this to make a significant difference, DOM node rearranging would have to be a lot snappier than node creation. Is this the case?
thanks,
-Morgan
I don't know whether creating or manipulating is faster, but I do know that it'll be faster if you manipulate the entire table while it's not on the page and then place it back all at once. Along those lines, it'll probably be slower to rearrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table on the DOM.
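A rough sketch of the rearrange-off-DOM variant (sortKey is a placeholder for whatever value the sorted column provides):

var $tbody = $("#myTable tbody");
var rows = $tbody.children("tr").get();          // the existing <tr> nodes, not copies
rows.sort(function (a, b) {
    return sortKey(a) < sortKey(b) ? -1 : 1;
});
$tbody.detach();                                 // take it out of the DOM; data/events are kept
$.each(rows, function (i, row) {
    $tbody.append(row);                          // appending an existing node moves it
});
$("#myTable").append($tbody);                    // one reflow when it goes back in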
I'm drawing this table about twice as quickly now, using innerHTML and building the entire contents as a string, rather than inserting nodes one by one.
You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html
I was looking for an answer to this and decided to set up a quick benchmark http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it is not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes each time.
If you can, it is better not to do the DOM manipulation node by node, but to build the new content within your script first and only then touch the DOM. Rather than triggering what amounts to a repaint for every single node, attach the new nodes to a detached parent and then attach that parent to the actual DOM, resulting in just two repaints instead of hundreds. I say two because you also need to clean up what is already in the DOM before inserting your new data.