I am appending large numbers of table row elements at a time and experiencing some major bottlenecks. At the moment I am using jQuery, but I'm open to a plain JavaScript solution if it gets the job done.
I need to append anywhere from 0 to 100 table rows at a given time (potentially more, but I'll be paginating anything over 100).
Right now I am appending each table row individually to the DOM...
for (var i = 0; i < rows.length; i++) {
    // ...build the HTML string for this row...
    $("#myTable").append(row);
}
Then I fade them all in at once
$("#myTable tr").fadeIn();
There are a couple of things to consider here...
1) I am binding data to each individual table row, which is why I switched from a mass append to appending individual rows in the first place.
2) I really like the fade effect. Although it's not essential to the application, I am very big on aesthetics and animations (that of course don't distract from the use of the application). There simply has to be a good way to apply a modest fade effect to larger amounts of data.
(edit)
3) A major reason for me approaching this in the smaller chunk/recursive way is that I need to bind specific data to each row. Am I binding my data wrong? Is there a better way to keep track of this data than binding it to each row's tr?
Is it better to apply effects/DOM manipulations in large chunks or in smaller chunks via recursive functions?
Are there situations where it's better to do one or the other? If so, what are the indicators for choosing the appropriate method?
Take a look at this post by John Resig, it explains the benefit of using DocumentFragments when doing large additions to the DOM.
A DocumentFragment is a container that you can add nodes to without actually altering the DOM in any way. When you are ready, you add the entire fragment to the DOM and this places its contents into the DOM in a single operation.
Also, doing $("#myTable") on each iteration is really not recommended - do it once before the loop.
I suspect your performance problems are because you are modifying the DOM multiple times in your loop.
Instead, try modifying it once after you get all your rows. Browsers are really good at innerHTML replaces. Try something like
$("#myTable").html("all the rows dom here");
Note that you might have to play with the exact selector to get the DOM in the correct place. But the main idea is to use innerHTML, and to use it as few times as possible.
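A minimal sketch of that idea, assuming the table has a tbody, the data lives in an array called rowData with made-up name/value fields, and re-attaching the per-row data in a second pass since an innerHTML replace can't carry it:
var html = "";
$.each(rowData, function (i, item) {
    html += "<tr><td>" + item.name + "</td><td>" + item.value + "</td></tr>";
});
$("#myTable tbody").html(html); // one innerHTML replace for every row
$("#myTable tbody tr").each(function (i) {
    $(this).data("item", rowData[i]); // second pass to restore the data binding
});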
Related
I'm building a browser based game in JavaScript.
It contains a lot of information visualized via tables.
The game is turn-based, so whenever a turn is completed, I need to update a lot of the innerHTML in those tables.
My question is:
Is it smarter to give IDs to all the <td> elements and update their innerHTML, or is it smarter to wrap the tables inside a div, clear the div, rebuild all the tables from scratch, and then append them?
It depends on how long a view stays active, if the view is shared, how many cells change and how frequently.
If you have a high number users looking at different views/pages that stay active for a long time, then it might produce less load on your servers if you can make infrequent updates to individual cells.
If the changes happen less frequently and a high proportion of cells change, then it may be best to refresh the whole table. This would be 'less chatty' and use less network bandwidth overall.
However if you have a high number of users, all looking at the same view/page, you may wish to look into CQRS and caching your views or view data.
I'd rather replace the innerHTML; the code will look nicer and it will take a lot less effort, because instead of recreating the whole thing you are just replacing a string on an object, which is obviously a lighter task. So in most cases it makes sense to do that.
Consider using a framework or templates, though.
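For the id-per-cell approach, one way it could look (the cell ids and the player object are made up for illustration):
function updateCell(id, value) {
    document.getElementById(id).innerHTML = value; // swap the text, keep the existing node
}
// after a turn completes:
updateCell("gold-count", player.gold);
updateCell("wood-count", player.wood);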
An app I am working on has a 50 row x 100 col grid, where each cell is a div containing one textbox, with one outer div containing the entire grid to handle scrolling. Exact dimensions can vary, but that is a typical max size. So, that works out to about 5000 divs, each with its own textbox, for a total of 10,000 elements.
The contents of the grid need to be updated via ajax calls to load data for different periods.
My question is, which of these 2 approaches would be more efficient:
1) Have the ajax call return the full formatted HTML for the grid, and simply set the innerHTML of the containing div to the new contents.
2) Have the ajax call return JSON or similar, then use JavaScript/jQuery to loop through and update the values of each grid cell in the existing divs. I might need to add or delete a few columns in this case, but the number of rows will remain constant.
For smaller grids/tables, returning the complete HTML works great and requires very little client-side JS code, but I have heard about performance issues when manipulating large numbers of DOM elements. With the huge number of attributes and properties associated with each element, I can see where it could add up to a lot of overhead to create/destroy thousands of them. So, I thought I would ask for advice here before deciding which way to go on this. Thanks.
I've been through the very same situation. For tables of a certain size and complexity, it is much faster to render pregenerated HTML over and over again than to hunt for and update the data, but you wind up with a server communication lag to contend with.
The key here is "manipulate". If you're just painting the grid, it's not very costly in terms of time unless there are a lot of events that fire on completion, like event bindings and so forth. Even those aren't too bad. At some point there is a diminishing return, and single-cell updates become more tolerable.
It really depends on what the users do with the grid. If they are updating many rows/cells, you might want a bulk update function for applying changes to multiple selected rows at once, so they don't have to wait for each individual cell change to refresh. This was our solution for large tables: individual cell updates were too time-consuming when making lots of changes, and reloading the whole table from the server was not too bad if you're only doing it once per batch of updates.
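For option 2, a sketch of what applying a JSON delta to the existing cells might look like (the response shape and cell ids are invented; you'd key them however your grid is addressed):
// e.g. the server returns { "r12c34": "42.5", "r12c35": "17.0", ... }
function applyUpdates(changes) {
    $.each(changes, function (cellId, value) {
        var input = document.getElementById(cellId); // each cell's textbox carries a matching id
        if (input) {
            input.value = value; // update in place, no nodes created or destroyed
        }
    });
}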
Is it better to depend on the DOM (CSS :hover) to detect when I'm over another div, or to figure that out from the already stored array of all the divs' positions and dimensions on the page?
Keep in mind that said array updates every time an element changes position or dimensions.
I want the process to be efficient. Part of me wants to depend on that array to detect when I'm over another div, but I'm afraid that would be extra processing.
Can anybody please help me? (Thanks in advance.)
To my knowledge, relying on the DOM would be the more advantageous of the two. Like arbitter said, relying on CSS will probably give a very slight performance advantage, but really this type of process wouldn't slow down your program all that much, if at all.
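If you do go the DOM/CSS route and also need to react in JavaScript, a single delegated listener keeps the cost low no matter how many divs there are (the .tracked class is just a placeholder):
$(document).on("mouseenter", ".tracked", function () {
    // "this" is the div currently hovered; no manual hit-testing against a position array
    console.log("now over", this.id);
});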
I am wondering if anyone can give some safe guidelines on the maximum level of dom manipulation you can do with jquery without freezing the browser.
Also the best methods for mass DOM manipulation.
Basically at any one time I could be dealing with lists of up to 40k li's
All I am really doing is showing one, hiding the other.
So here are my questions
How many li's could I theoretically manipulate at a time without crashing the browser?
How should I work with the li's?
Manipulate as a single object (ul level)
Loop over each li in the first list, removing them, then loop over and insert each new li.
Loop over chunks of li's (if so, how big: 10, 100, 1000, 10000 at a time?)
How should I fetch the data? (It's in JSON format.)
Grab the data for the entire list (40k li's worth) in one ajax call.
Insert the data for every list at page creation (could be upwards of 20 lists = 800,000 li's).
Do several ajax calls to fetch the data (if so, how big: 10, 100, 1000, 10000 at a time?)
If you really want to be manipulating that many then you should probably adopt something like http://developer.yahoo.com/yui/3/async-queue/
As for how many you should work with at a time, you could build in a calibration that looks at how quickly the last set completed and works with more or fewer accordingly. This could get you something that works in everything from desktop Chrome to mobile IE.
The same goes for the ajax calls: make them able to ramp up according to the network speed.
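A rough sketch of that calibration idea, with invented names and thresholds (addLi stands in for whatever builds and attaches one item):
var chunkSize = 500;
function processChunk(items, start) {
    var t0 = Date.now();
    var end = Math.min(start + chunkSize, items.length);
    for (var i = start; i < end; i++) {
        addLi(items[i]); // placeholder: build/attach one li
    }
    // grow or shrink the next chunk based on how long this one took
    var elapsed = Date.now() - t0;
    chunkSize = elapsed > 50 ? Math.max(50, Math.floor(chunkSize / 2)) : chunkSize * 2;
    if (end < items.length) {
        setTimeout(function () { processChunk(items, end); }, 0); // yield so the browser doesn't freeze
    }
}
processChunk(allItems, 0);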
As a warning: this is extremely dependent on your computer's performance. Frankly, anything approaching 100 elements in a DOM manipulation starts getting a little silly and expensive. That said:
1) It depends on your system; my older system tops out at about 30 and my newer one can get up to 120 before things break.
2) Work with them on as large a level as possible. Manipulating a ul with a bunch of li's in it is much faster than manipulating a bunch of li's. Use jQuery and store the object in a variable (so you're not querying the DOM each time it's used) to enhance performance; there's a small sketch of this after the list.
3) Initially load the first data the user will see, then fetch it in similarly sized batches. If you can only see 20 li elements at a time there is no reason loading any more than that plus a little buffer area (30?).
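A tiny sketch of point 2, caching the jQuery objects and toggling at the ul level (the ids are made up):
var $listA = $("#listA"), $listB = $("#listB"); // queried once, reused everywhere
function showSecondList() {
    $listA.hide(); // one operation on the parent affects all of its li children
    $listB.show();
}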
I'm trying to optimize a sortable table I've written. The bottleneck is in the DOM manipulation. I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the rows instead of recreating the nodes. For this to make a significant difference, DOM node rearranging would have to be a lot snappier than node creation. Is this the case?
thanks,
-Morgan
I don't know whether creating or manipulating is faster, but I do know that it'll be faster if you manipulate the entire table while it's not on the page and then place it back all at once. Along those lines, it'll probably be slower to rearrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table on the DOM.
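Sketched out, the clone-and-swap approach might look like this (sortRows stands in for your comparison/sorting code):
var $old = $("#myTable");
var $clone = $old.clone(true); // true copies data and event handlers to the clone
sortRows($clone.find("tbody")); // rearrange rows on the detached copy
$old.replaceWith($clone); // one swap in the live DOM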
I'm drawing this table about twice as quickly now, using innerHTML, building the entire contents as a string, rather than inserting nodes one by one.
You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html
I was looking for an answer to this and decided to set up a quick benchmark http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it is not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes every time.
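Moving the existing rows is also simple to do, since appending a node that is already in the document just relocates it; a minimal sketch (compare is a placeholder for the column comparison function):
var tbody = document.querySelector("#myTable tbody");
var rows = Array.prototype.slice.call(tbody.rows); // snapshot of the existing <tr> nodes
rows.sort(compare); // compare: your column comparison function
for (var i = 0; i < rows.length; i++) {
    tbody.appendChild(rows[i]); // re-appending moves the row; nothing is created or destroyed
}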
If you can, it is better to do the DOM manipulation not against the live DOM, but within your script first, and only then touch the DOM. Rather than triggering what is called a repaint on every single node, you clump what would have been a per-node repaint into its own method, attach those nodes to a parent element, and then attach that parent to the actual DOM, resulting in just two repaints instead of hundreds. I say two because you need to clean up what is in the DOM already before updating it with your new data.