Mass DOM manipulation - JavaScript

I am wondering if anyone can give some safe guidelines on the maximum amount of DOM manipulation you can do with jQuery without freezing the browser.
Also, what are the best methods for mass DOM manipulation?
Basically, at any one time I could be dealing with lists of up to 40k li's.
All I am really doing is showing one list and hiding the other.
So here are my questions:
How many li's could I theoretically manipulate at a time without crashing the browser?
How should I work with the li's?
Manipulate as a single object (ul level)
Loop over each li in the first list, removing them, then loop over the new list, inserting each new li.
Loop over chunks of li's (if so, how big: 10, 100, 1000, 10000 at a time?)
How should I fetch the data? (it's in JSON format)
Grab the data for the entire list (40k li's worth) in one AJAX call.
Insert the data for every list at page creation (could be upwards of 20 lists = 800,000 li's).
Do several AJAX calls to fetch the data (if so, how big: 10, 100, 1000, 10000 at a time?)

If you really want to manipulate that many, then you should probably adopt something like http://developer.yahoo.com/yui/3/async-queue/
As for how many you should work with at a time, you could build in a calibration step that looks at how quickly the last batch completed and works with more or fewer accordingly. This could get you something that works in everything from desktop Chrome to mobile IE.
The same goes for the AJAX calls: make them able to ramp up according to the network speed.
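For illustration, here is a minimal sketch of that calibration idea in plain JavaScript (the function names, the 50 ms budget, and the starting chunk size are assumptions, not something from the answer): each chunk of li data is rendered inside a setTimeout, the time it took is measured, and the next chunk grows or shrinks accordingly.

    // Sketch only: process `items` in chunks, resizing the chunk based on how
    // long the previous chunk took (all thresholds here are guesses).
    function processInChunks(items, renderChunk, done) {
      var index = 0;
      var chunkSize = 200;      // initial guess
      var budgetMs = 50;        // target time per chunk

      function step() {
        var start = Date.now();
        renderChunk(items.slice(index, index + chunkSize)); // e.g. build li's and append them to the ul
        index += chunkSize;

        var elapsed = Date.now() - start;
        // Calibrate: slower than the budget -> smaller chunks, faster -> larger.
        if (elapsed > budgetMs) {
          chunkSize = Math.max(10, Math.floor(chunkSize / 2));
        } else if (elapsed < budgetMs / 2) {
          chunkSize = Math.min(10000, chunkSize * 2);
        }

        if (index < items.length) {
          setTimeout(step, 0);  // yield so the browser can repaint and stay responsive
        } else if (done) {
          done();
        }
      }

      step();
    }

The same shape works for the AJAX side: time each response and grow or shrink the page size you request from the server.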

As a warning: this is extremely dependent on your computer's performance. Frankly, anything approaching 100 elements in a DOM manipulation starts getting a little silly and expensive. That said:
1) It depends on your system; my older system tops out at about 30 and my newer one can get up to 120 before I break things.
2) Work with them on as large a level as possible. Manipulating a ul with a bunch of li's in it is much faster than manipulating a bunch of li's individually. Use jQuery and store the object in a variable (so you're not querying the DOM each time it's used) to enhance performance (a sketch follows this list).
3) Initially load the first data the user will see, then fetch the rest in similarly sized batches. If you can only see 20 li elements at a time, there is no reason to load any more than that plus a little buffer (30?).
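As a rough illustration of point 2 (the selector, the data shape, and the assumption that the text is already escaped are all mine), building the replacement list as one string and swapping it in at the ul level touches the DOM once instead of once per li:

    // Sketch: cache the jQuery object once and replace the whole list in one write.
    var $list = $('#mylist');                        // assumed container; query the DOM only once

    function showList(items) {
      var html = '';
      for (var i = 0; i < items.length; i++) {
        html += '<li>' + items[i].label + '</li>';   // assumes pre-escaped text
      }
      $list.html(html);                              // one DOM operation for the whole ul
    }

    // Showing one list and hiding another is also a ul-level operation:
    // $('#oldList').hide(); $('#newList').show();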

Related

Performance of searching through a list of items in JavaScript

I have a list of approx. 2000 questions and I am trying to create an interface where you can filter through them all using a text input.
I tried going through this React tutorial since I thought it would perform well enough, but there is considerable lag. Or at least there is when I run the code in an Electron container (perhaps I'd get better performance compiling it with Webpack). I just tried putting my code into a jsfiddle, and with 3000 elements the performance starts to suffer.
Is it futile trying to search through this many objects with HTML and JS, or is there a simpler way with better performance?
So the lag is not because of the filtering, but because you are trying to render too many objects in one hit. You can see this by typing a sequence of zeros into the filter input. Each zero typed requires less time, as obviously the result size gets smaller and smaller.
I have updated your fiddle here to show the performance if you only render the first 100 items in the result set (even though all 3000 are processed on each input change).
Essentially I am just generating the full rows variable, and then using .slice(0, 100) to generate a truncated version before rendering.
What you should do in this situation is think about the UI/UX and accept that it really isn't necessary to render thousands of items at the same time. You could implement some sort of pagination or infinite scroll, etc.
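A minimal sketch of that idea (the component, props, and field names are assumptions, not the asker's actual code): filter the full array on every change, but slice before rendering.

    // Sketch: all 3000 questions are filtered, but only the first 100 matches are rendered.
    function QuestionList(props) {
      var needle = props.filter.toLowerCase();
      var rows = props.questions.filter(function (q) {
        return q.text.toLowerCase().indexOf(needle) !== -1;
      });
      var visible = rows.slice(0, 100);       // truncate before touching the DOM
      return React.createElement(
        'ul',
        null,
        visible.map(function (q) {
          return React.createElement('li', { key: q.id }, q.text);
        })
      );
    }

Showing a "displaying 100 of N matches" note alongside the truncated list keeps the behaviour understandable for the user.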

AngularJS Infinite Scrolling with lots of data

So I'm trying to create an infinite scrolling table using AngularJS, similar to this: http://jsfiddle.net/vojtajina/U7Bz9/
The problem I'm having is that in the jsfiddle example, if I keep scrolling until I have a million results, the browser is going to slow to a crawl, isn't it? Because there would now be 1,000,000 results in $scope.items. It would be better if I only ever had, for example, 1000 results at a time inside $scope.items, and the results I was viewing happened to be within those 1000.
Example use case: page loads and I see the first 10 results (out of 1,000,000). Even though I only see 10, the first 1000 results are actually loaded. I then scroll to the very bottom of the list to see the last 10 items. If I scroll back up to the top again, I would expect that the top 10 results will have to be loaded again from the server.
I have a project I did with ExtJS that had a similar situation: an infinite scrolling list with several thousand results in it. The ExtJS way to handle this was to load the current page of results, then pre-load a couple of extra pages of results as well. At any one time, though, there were only ever 10 pages of results stored locally.
So I guess my question is how I would go about implementing this in AngularJS. I know I haven't provided much code, so I'm not expecting people to just give the coded answer, but at least some advice on which direction to go.
A couple of things:
"Infinite scrolling" to "1,000,000" rows is likely to have issues regardless of the framework, just because you've created millions and millions of DOM nodes (presuming you have more than one element in each record)
The implementation you're looking at doing with <li ng-repeat="item in items">{{item.foo}}</li> or anything like that will have issues very quickly for one big reason: {{item.foo}}} or any ngBind like that will set up a $watch on that field, which creates a lot of overhead in the form of function references, etc. So while 10,000 small objects in an "array" isn't going to be that bad... 10,000-20,000 additional function references for each of those 10,000 items will be.
What you'd want to do in this case would be create a directive that handles the adding and removing of DOM elements that are "too far" out of view as well as keeping the data up to date. That should mitigate most performance issues you might have.
... good infinite scrolling isn't simple, I'm sorry to say.
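As a very rough sketch of the kind of directive described above (the names, the fixed row height, the 400px viewport, and the buffer size are all assumptions): keep only the rows near the viewport in the DOM and pad the rest with spacer divs so the scrollbar stays correct.

    // Sketch: windowed rendering for AngularJS 1.x with a fixed row height.
    angular.module('app').directive('windowedList', function () {
      return {
        restrict: 'E',
        scope: { items: '=', rowHeight: '=' },
        template:
          '<div style="overflow-y: auto; height: 400px;">' +
          '  <div ng-style="{height: topPad + \'px\'}"></div>' +
          '  <div ng-repeat="item in visible" ng-style="{height: rowHeight + \'px\'}">{{item.name}}</div>' +
          '  <div ng-style="{height: bottomPad + \'px\'}"></div>' +
          '</div>',
        link: function (scope, element) {
          var viewport = element.children()[0];
          var buffer = 10;   // extra rows rendered above and below the visible area

          function update() {
            var first = Math.max(0, Math.floor(viewport.scrollTop / scope.rowHeight) - buffer);
            var count = Math.ceil(viewport.clientHeight / scope.rowHeight) + 2 * buffer;
            scope.visible = scope.items.slice(first, first + count);
            scope.topPad = first * scope.rowHeight;
            scope.bottomPad = (scope.items.length - first - scope.visible.length) * scope.rowHeight;
          }

          viewport.addEventListener('scroll', function () { scope.$apply(update); });
          scope.$watchCollection('items', update);
        }
      };
    });

Only the handful of rendered rows carry watchers, so a million-row array never translates into a million DOM nodes or bindings.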
I like the angular-ui implementation ui-scroll...
https://github.com/angular-ui/ui-scroll
... over ngInfiniteScroll. The main difference between ui-scroll and a standard infinite scroll is that previous elements are removed as they leave the viewport.
From their readme...
The common way to present to the user a list of data elements of undefined length is to start with a small portion at the top of the list - just enough to fill the space on the page. Additional rows are appended to the bottom of the list as the user scrolls down the list.
The problem with this approach is that even though rows at the top of the list become invisible as they scroll out of the view, they are still a part of the page and still consume resources. As the user scrolls down, the list grows and the web app slows down.
This becomes a real problem if the html representing a row has event handlers and/or angular watchers attached. A web app of average complexity can easily introduce 20 watchers per row, which for a list of 100 rows gives you a total of 2000 watchers and a sluggish app.
Additionally, ui-scroll is actively maintained.
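A minimal usage sketch based on the project's readme (the endpoint, field names, and one-based indexing here are assumptions): ui-scroll asks the datasource for the rows it is about to show and discards the ones that scroll far out of view.

    // Markup (in the page template):
    // <div ui-scroll-viewport style="height: 400px;">
    //   <div ui-scroll="item in datasource">{{item.name}}</div>
    // </div>
    angular.module('app', ['ui.scroll']).controller('ListCtrl', function ($scope, $http) {
      $scope.datasource = {
        // ui-scroll calls get() with the first index it needs and how many items it wants.
        get: function (index, count, success) {
          var first = Math.max(index, 1);           // our data starts at index 1
          var last = index + count - 1;
          if (last < first) { success([]); return; }
          $http.get('/items', { params: { offset: first - 1, limit: last - first + 1 } })
            .then(function (response) {
              success(response.data);               // fewer items than requested signals the end of data
            });
        }
      };
    });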
It seems that http://kamilkp.github.io/angular-vs-repeat would be what you are looking for. It is a virtual scrolling directive.
So it turns out that the ng-grid module for AngularJS has pretty much exactly what I needed. If you look at the examples page, the Server-Side Processing Example is also an infinite scrolling list that only loads the data that is needed.
Thanks to those who commented and answered anyway.
Latest URL: ng-grid
You may try using ng-infinite-scroll:
http://binarymuse.github.io/ngInfiniteScroll/
Check out virtualRepeat from Angular Material
It implements dynamic reuse of the rows visible in the viewport area, just like ui-scroll.
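A minimal usage sketch (assumed names; Angular Material's virtual repeat needs a fixed item height): the container creates only enough rows to fill itself and reuses them as you scroll.

    // Markup (in the page template):
    // <md-virtual-repeat-container style="height: 400px;">
    //   <div md-virtual-repeat="item in ctrl.items" style="height: 48px;">{{item.name}}</div>
    // </md-virtual-repeat-container>
    angular.module('app', ['ngMaterial']).controller('ListCtrl', function () {
      this.items = [];
      for (var i = 0; i < 100000; i++) {
        this.items.push({ name: 'Item ' + i });     // 100k items, but only a handful of DOM rows
      }
    });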

jstree performance issues

I am using a jsTree with around 1500 nodes, nested to a max of 4 levels (most are only 1 level deep), and I'm getting Internet Explorer's "this script is running slowly" error. I began with just a straight html_data <li> structure, generated by ASP.NET. The tree wouldn't finish loading at all. Then I tried xml_data and json_data, which was a little better but eventually errored out. My last-ditch effort was async loading. This fixed the initial load problem, but now I get IE's error when I expand one of the larger branches.
More details: I'm using the checkbox plugin, and I will also need the ability to search. Unfortunately, when searching, the user could potentially enter as little as one character, so I'm looking at some large sets of search results.
Has anybody done something similar with such a large data set? Any suggestions on speeding up the jsTree? Or, am I better off exploring other options for my GUI?
I realize I haven't posted any code, but any general techniques/gotchas are welcome.
I haven't completely solved my problem, but I made some improvements so that I think it might be usable (I am still testing). I thought it could be useful for other people:
First, I was using jsTree in a jQuery dialog, but that seems to hurt performance. If possible, don't mix large jsTrees and Dialogs.
Lazy loading is definitely the way to go with large trees. I tried json_data and xml_data, and they were both easy to implement. They seem to perform about the same, but that's just based on basic observation.
Last, I implemented a poor man's paging. In my server-side JSON request handler, if a node has more than X children, I simply split them into several nodes, each having a portion of those children. For instance, if node X has, say, 1000 children, I gave X child nodes X1, X2, X3, ..., X10, where X1 has the first 100 children, X2 has the next 100 children, and so on. This may not make sense for some people since you're modifying the tree structure, but I think it will work for me.
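A small sketch of that "poor man's paging" on the server side (plain JavaScript with assumed names; the {data, children} shape follows jsTree 1.x's json_data format): if a node has too many children, wrap them in synthetic group nodes before returning the JSON.

    // Sketch: split an oversized child list into synthetic group nodes (X1, X2, ...).
    function pageChildren(node, maxChildren) {
      if (node.children.length <= maxChildren) {
        return node.children;
      }
      var groups = [];
      for (var i = 0; i < node.children.length; i += maxChildren) {
        groups.push({
          data: node.data + ' ' + (groups.length + 1),        // e.g. "X 1", "X 2", ...
          children: node.children.slice(i, i + maxChildren)   // this group's slice of children
        });
      }
      return groups;
    }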
jsTree supports all your needs
Use the json_data plugin with AJAX support where the branch would be too big.
The search plugin supports AJAX calls too.
I'm a bit disappointed in its performance myself.
Sounds like you need to try lazy loading: instead of loading the whole tree all at once, only load as needed.
That is, initially load only the trunk of the tree (so all nodes are "closed"), then only load a node's children when user clicks to open it.
jsTree can do this; see the documentation.
(Is that what you mean by "async loading"?)
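For reference, a lazy-loading configuration along those lines for jsTree 1.x's json_data plugin (the endpoint and the use of the node's id attribute are assumptions): only the clicked node's children are fetched.

    // Sketch: children are requested on demand; jsTree passes -1 when it wants the root level.
    $('#tree').jstree({
      plugins: ['themes', 'json_data', 'checkbox', 'search'],
      json_data: {
        ajax: {
          url: '/tree/children',                              // assumed endpoint
          data: function (node) {
            return { id: node === -1 ? 0 : node.attr('id') }; // which branch to expand
          }
        }
      }
    });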
jsTree sucks - it is the "refresh" which is slow: 10 seconds for 1000 child nodes being added, and over a minute to load a tree with 10,000 items spread among 40 nodes. After days of development I have told my colleague to look at SlickGrid instead, as everyone will refuse to use a page which takes so long to do anything. It is quicker if you do not structure it correctly (e.g. 3 seconds for 1000 nodes), but then the arrow will not have any effect to close the branch down.
This is to replace a combination of the MS TreeView and MS ImageList, which loads the same 10,000 items across forty parent nodes in 3 seconds.

JavaScript Efficiency with Constants passed as Parameters?

I have a general JavaScript question. I'll give you my scenario and then ask you the question.
Scenario
I am making a table with (currently) over 3000 rows, and it is growing by 5-10 every day. I am using a JavaScript plugin to style this table and add useful functionality. It currently takes 15 seconds to fully load the page, after which everything runs smoothly (sorting, paging, etc.). That is a very slow initial load, though. The plugin offers a way with less DOM parsing, where you pass it an array of the information to be placed inside the table, which I am very intrigued by. However, I want to make this as fast as possible, because there will still be an array of 3000 rows (each with 11 columns of, on average, 10 characters).
Question
Would it be significantly faster to use a JavaScript const to store this giant array? Specifically, does JavaScript know not to put a const on the stack when passed as a parameter?
Furthermore, is this simply too much for JavaScript to handle? Should I dismiss this idea and start with AJAX now (which would mean much slower functionality but much faster page load)?
Thanks!
Because you say interaction is fast once the page is loaded, I guess your biggest bottleneck is transferring the data over the wire.
I would send everything as JSON (compressed with gzip), which is very lightweight and fast to load.
I think styling should be done with CSS, not JS. Also, if you want the best UX, initialize your table with fewer rows (1-200 elements) and then deal with the rest. It is better for the user if you show something right at the beginning.
Storing the array can't be a problem because the GC will clean it up.
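A small sketch of that "show something first" idea (the names and batch sizes are assumptions): render an initial slice synchronously, then feed the remaining rows to the table in deferred batches.

    // Sketch: show the first rows immediately, append the rest in the background.
    function populateTable(addRows, allRows) {
      var INITIAL = 200;
      addRows(allRows.slice(0, INITIAL));        // the user sees these almost immediately

      var index = INITIAL;
      (function addBatch() {
        if (index >= allRows.length) return;
        addRows(allRows.slice(index, index + 500));
        index += 500;
        setTimeout(addBatch, 0);                 // yield between batches so the page stays responsive
      })();
    }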

How to handle a large dataset like SproutCore

I really don't have any substantial code to show here; actually, that's kind of why I am writing: I looked at the SproutCore demo, especially the Collection demo, on http://demo.sproutcore.com/sample_controls/, and am amazed that it loads 200,000 records into the page so easily. I tried using Rails to provide 200,000 records and render them in a completely blank HTML page with
<% @projects.each do |p| %>
<%= p.title %>
<% end %>
and that freezes the browser for seconds on my M1530 laptop with 4 GB of RAM, a T7700 CPU, and a 256 GB SSD.
Yet the sproutcore demo does not freeze and takes less than 3 seconds to load.
What do you think is the one technique they are using to enable this?
Thanks!
The technology that SproutCore uses to display and scroll smoothly through "infinite" lists of data has very little to do with where the data comes from and almost everything to do with the integration of special SproutCore view classes, SC.CollectionView (the parent class of SC.ListView and SC.GridView) and SC.ScrollView; the collection of powerful client-side datastore classes: SC.Store and SC.SparseArray; and the SproutCore runtime and controller architecture.
The fact is that you simply cannot render a list with several hundred thousand items in it and expect the browser not to grind to a halt. That is too many elements to insert into the DOM tree, and that is why SC.CollectionView is optimized to only generate elements for the currently showing items in the list (ex. if only 20 items are visible out of 20 million, only 20 elements are in the DOM). It gets even better than that though, because by default, as items scroll in and out of view, the few existing elements are updated in place with the new item information so that the DOM tree is not even touched. This would not be possible though without the integration of SC.ScrollView, which allows the collection to be aware of its visible rect and of when a scroll is about to happen.
On top of that, there is the entire SproutCore runtime architecture which is used to ensure that all DOM manipulations are queued up so that you only touch the DOM once per run loop if needed when a display property changes (ex. toggling a display property 50 times in one run loop only touches the DOM once with the final value). This is an important factor in extreme performance that affects all SproutCore views including SC.CollectionView.
Finally, to make the list really scream, you cannot load several million items into the client in one request, nor can you even store them all in client memory. This leads me to another optimization of SC.CollectionView and the SproutCore data store, which is to work with sparse data. SC.CollectionView will never try to iterate over every item in its content, so it doesn't need all the data present, only what is being shown. When we load data into the client, we would use an SC.SparseArray to page in a bit of data at a time as needed. The whole system is very elegantly designed so that when the collection view requests an item that the sparse array doesn't yet have, the sparse array fetches it (or the next page of items) in the background. Because of bindings and observers, when the new data comes in we can update the list in place, which means that the scrolling doesn't block while data is being brought in.
That demo above is very outdated; here is a new one that uses the technologies I mentioned above: http://showcase.sproutcore.com/#demos/Big%20Data (source is here: https://github.com/sproutcore/demos/tree/master/apps/big_data). In this demo, I scroll through 50,000 names, which is all I could generate, split into 500 JSON files of 100 names each that are loaded remotely from the server. Once you scroll past 100 names, you will see that the next 100 names are paged in and there is a brief flash of placeholder text "…" (how long you see the placeholder text depends on your Internet connection).
I used 50,000 names, but I don't see any problem showing a list of 500,000 or 5 million names though. However, at that scale you would want to also 'un-page' data as you bring in new data using SC.Store#unloadRecords to keep the memory use down.
There's a few other technologies in play to make this whole thing possible that I've missed, but those are the main ones at least.
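To make the sparse-array idea concrete outside of SproutCore (this is plain JavaScript illustrating the concept, not SproutCore's actual API; the page size and fetch callback are assumptions): the list only ever asks for the items it is about to display, and missing pages are fetched in the background.

    // Sketch: a sparse, paged list. Reading a missing index returns a placeholder
    // and kicks off a background fetch of that page.
    function SparseList(totalLength, pageSize, fetchPage) {
      this.length = totalLength;
      this.pageSize = pageSize;
      this.fetchPage = fetchPage;    // function(pageIndex, done(itemsArray))
      this.pages = {};               // pageIndex -> array of items
      this.loading = {};             // pageIndex -> true while a request is in flight
    }

    SparseList.prototype.itemAt = function (index, onChange) {
      var page = Math.floor(index / this.pageSize);
      var items = this.pages[page];
      if (items) {
        return items[index % this.pageSize];
      }
      if (!this.loading[page]) {
        this.loading[page] = true;
        var self = this;
        this.fetchPage(page, function (fetched) {
          self.pages[page] = fetched;
          delete self.loading[page];
          if (onChange) onChange();  // e.g. update the visible rows in place
        });
      }
      return '...';                  // placeholder until the page arrives, like the flash in the demo
    };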
I imagine the demo provided isn't being generated dynamically - it's static data.
Very few systems would be able to iterate a collection of live data that size. There are a number of techniques, ranging from streaming the dataset (using batch iteration through the records) through to caching and AJAX partial-loading strategies.
More on SproutCore here: http://ostatic.com/blog/sproutcore-raises-the-bar-for-client-side-programming
If, on the other hand, you are looking for concurrency, then Node.js is the way to go.
