javascript/dom -- how expensive is creating vs rearranging dom nodes?

I'm trying to optimize a sortable table I've written. The bottleneck is in the DOM manipulation. I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the rows rather than recreating the nodes. For this to make a significant difference, rearranging DOM nodes would have to be a lot snappier than creating them. Is this the case?
thanks,
-Morgan

I don't know whether creating or manipulating is faster, but I do know that it'll be faster if you manipulate the entire table while it's not on the page and then put it back all at once. Along those lines, it'll probably be slower to rearrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table in the DOM.
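As a rough illustration of that clone-and-swap approach (a sketch, assuming a plain table with id "myTable" sorted by the text of its first cell):

function sortTableOffDom() {
    var oldTable = document.getElementById("myTable");
    var clone = oldTable.cloneNode(true); // deep clone, not yet part of the document

    var tbody = clone.tBodies[0];
    var rows = Array.prototype.slice.call(tbody.rows);
    rows.sort(function (a, b) {
        return a.cells[0].textContent.localeCompare(b.cells[0].textContent);
    });

    // Re-appending an existing row moves it, and the clone is off-DOM, so no reflows yet
    rows.forEach(function (row) { tbody.appendChild(row); });

    // Swap the clone in: a single DOM operation
    oldTable.parentNode.replaceChild(clone, oldTable);
}

One caveat: cloneNode doesn't copy listeners attached with addEventListener, so they'd need to be re-bound or handled via delegation.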

I'm drawing this table about twice as quickly now by using innerHTML: I build the entire contents as a string, rather than inserting nodes one by one.
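Something along these lines (a sketch; the data array and its fields are made up for illustration, and untrusted values should be escaped before being concatenated into markup):

function renderRows(data) {
    var html = [];
    for (var i = 0; i < data.length; i++) {
        html.push("<tr><td>", data[i].name, "</td><td>", data[i].value, "</td></tr>");
    }
    // One innerHTML assignment instead of one insertion per row
    document.getElementById("myTableBody").innerHTML = html.join("");
}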

You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html

I was looking for an answer to this and decided to set up a quick benchmark http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it's not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes every time.
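For reference, "moving" here just means re-inserting nodes that already exist: appendChild() on a node that is already in the document relocates it rather than copying it. A minimal sketch of sorting that way:

function sortByMovingRows(tbody, compare) {
    var rows = Array.prototype.slice.call(tbody.rows);
    rows.sort(compare);
    rows.forEach(function (row) {
        tbody.appendChild(row); // moves the existing <tr>; nothing is created or destroyed
    });
}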

If you can, it's better not to perform the DOM manipulation node by node, but to build up the changes within your script first and only then touch the DOM. Rather than triggering what's called a repaint on every single node, you clump all of that work together: attach the new nodes to a detached parent, then attach that parent to the actual DOM. This results in just two repaints instead of hundreds; I say two because you also need to clean up what's already in the DOM before inserting your new data.
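A sketch of that pattern (illustrative names; it rebuilds a table body off-DOM and swaps it in):

function redrawTable(table, items) {
    var newBody = document.createElement("tbody"); // detached parent: no repaints yet
    items.forEach(function (item) {
        var tr = document.createElement("tr");
        var td = document.createElement("td");
        td.textContent = item;
        tr.appendChild(td);
        newBody.appendChild(tr);
    });

    var oldBody = table.tBodies[0];
    if (oldBody) table.removeChild(oldBody); // the cleanup repaint
    table.appendChild(newBody);              // the insertion repaint
}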

Related

Rendering multiple (hundreds or even thousands) web-components

The problem I am facing is that rendering a lot of web components is slow. Scripting takes around 1.5 s and then another 3 s for rendering (mostly Layout + Recalculate Styles) for ~5k elements, and I plan to put much more than that into the DOM. My script to prepare those elements takes around 100-200 ms; the rest comes from the constructor and other callbacks.
For normal HTML elements, a perf gain can be achieved with a DocumentFragment: you basically prepare a batch of elements, and only when you're done do you attach them to the DOM.
Unfortunately, each web component will call its constructor and other callbacks like connectedCallback, attributeChangedCallback, etc. With a lot of such components, that's not really optimal.
Is there a way to "prerender" web-components before inserting them into the DOM?
I've tried putting them inside template elements and cloning the contents, but the constructor is still called for each instance of my-component. One thing that did improve performance is putting the content that gets attached to the shadow DOM inside a template defined outside the component and cloning that, instead of using this.attachShadow({ mode: 'open' }).innerHTML = ..., but that's not enough.
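For anyone reading along, the shared-template optimization described above looks roughly like this (a sketch with illustrative markup):

const template = document.createElement("template");
template.innerHTML = "<style>:host { display: block; }</style><span></span>";

class MyComponent extends HTMLElement {
    constructor() {
        super();
        // Clone the pre-parsed template instead of re-parsing an innerHTML string per instance
        this.attachShadow({ mode: "open" })
            .appendChild(template.content.cloneNode(true));
    }
}
customElements.define("my-component", MyComponent);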
Do you really need to render all ~5k elements at once? You will face performance problems rendering that many elements in the DOM regardless of whether you manage to pre-initialize the components in memory.
A common approach for this scenario is a technique called "windowing" or "lazy rendering": the idea is to render only a small subset of your components at any given time, depending on what's in the user's viewport.
After a very quick search, I didn't find a library for web components that implements this, but for reference, in React you have react-window and in Vue vue-windowing.
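The core of the idea is small enough to sketch without a library (assuming fixed-height rows and a scrollable container; real implementations handle variable heights, buffering, and so on):

function windowed(container, items, rowHeight) {
    // container is expected to have a fixed height and overflow: auto
    var spacer = document.createElement("div");
    spacer.style.cssText = "position:relative;height:" + items.length * rowHeight + "px";
    container.appendChild(spacer);

    function render() {
        var first = Math.floor(container.scrollTop / rowHeight);
        var count = Math.ceil(container.clientHeight / rowHeight) + 1;
        spacer.innerHTML = ""; // drop the rows that scrolled away
        items.slice(first, first + count).forEach(function (item, i) {
            var row = document.createElement("div");
            row.textContent = item;
            row.style.cssText = "position:absolute;height:" + rowHeight +
                "px;top:" + (first + i) * rowHeight + "px";
            spacer.appendChild(row);
        });
    }

    container.addEventListener("scroll", render);
    render();
}

Only the handful of rows in view exist in the DOM at any moment, which keeps layout and style recalculation cheap regardless of the total item count.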

Performance implications of removing element from the DOM?

I have an animation loop which adds a new element to the DOM and animates it, and this can repeat basically forever. My question is: when each animation cycle has finished, is it better to remove the element from the DOM or to just leave it hidden on the page? I can't reuse the element since, the way the loop works, a new animation may begin while another one is finishing, so there could be multiple elements on the page at a given time. I realize this question is rather elementary, but I would appreciate some insight. Thanks.
Performance implications
Hardly any. However, there are memory implications - filling up the DOM with (hidden) elements and never stopping to do so is evil. Of course, at some point this slows down the whole process.
is it better to remove the element from the DOM or to just leave it hidden on the page?
Definitely remove it.
I can't reuse the element since the way the loop works, a new animation may begin while the other one is finishing, so there could be multiple elements on the page at a given time.
You still could reuse them by maintaining an element pool, but that's probably not necessary. Removing old ones and creating new ones is fine.
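If you did want a pool, it's only a few lines (a sketch; Element.prototype.remove assumes a reasonably modern browser):

var pool = [];

function acquire() {
    return pool.pop() || document.createElement("div");
}

function release(el) {
    el.remove();   // detach from the document...
    pool.push(el); // ...but keep the node around for the next animation
}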

depend on a DOM hover or on an array of div positions and dimensions?

Is it good to depend on the DOM's (CSS) hover for detecting when I'm over another div, or to figure that out from an already-stored array of all the divs' positions and dimensions on the page?
Keep in mind that said array updates every time an element changes position or dimensions.
I want the process to be efficient. Part of me wants to depend on that array for detecting when I'm over another div, but I'm afraid that will be extra processing.
Can anybody please help me? (Thanks in advance.)
To my knowledge, relying on the DOM would be the more advantageous of the two. Like arbitter said, relying on CSS will probably have a very slight performance advantage, but really this type of process wouldn't slow down your program all that much, if at all.
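To make the two options concrete, here is a sketch of each (the class name and cache shape are made up for illustration):

// 1) Let the browser do the hit-testing via DOM events:
document.querySelectorAll("div.tracked").forEach(function (div) {
    div.addEventListener("mouseenter", function () {
        // the cursor is now over this div
    });
});

// 2) Manual hit-testing against a cached array of { el, rect } entries,
//    where rect is refreshed whenever an element moves or resizes:
function findDivUnderCursor(cache, x, y) {
    return cache.find(function (entry) {
        var r = entry.rect;
        return x >= r.left && x <= r.right && y >= r.top && y <= r.bottom;
    });
}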

Performance for appending large element/datasets to the dom

I am appending large amounts of table row elements at a time and experiencing some major bottlenecks. At the moment I am using jQuery, but I'm open to a JavaScript-based solution if it gets the job done.
I have the need to append anywhere from 0-100 table rows at a given time (it's actually potentially more, but I'll be paginating anything over 100).
Right now I am appending each table row individually to the DOM, roughly like this:
for (var i = 0; i < data.length; i++) {
    var row = "<tr><td>" + data[i] + "</td></tr>"; // ...build html str...
    $("#myTable").append(row); // one DOM append per row
}
Then I fade them all in at once
$("#myTable tr").fadeIn();
There are a couple things to consider here...
1) I am binding data to each individual table row, which is why I switched from a mass append to appending individual rows in the first place.
2) I really like the fade effect. Although not essential to the application, I am very big on aesthetics and animations (that, of course, don't distract from the use of the application). There simply has to be a good way to apply a modest fade effect to larger amounts of data.
(edit)
3) A major reason for me approaching this in the smaller chunk/recursive way is I need to bind specific data to each row. Am I binding my data wrong? Is there a better way to keep track of this data than binding it to their respective tr?
Is it better to apply effects/DOM manipulations in large chunks, or in smaller chunks in recursive functions?
Are there situations where it's better to do one or the other? If so, what are the indicators for choosing the appropriate method?
Take a look at this post by John Resig, it explains the benefit of using DocumentFragments when doing large additions to the DOM.
A DocumentFragment is a container that you can add nodes to without actually altering the DOM in any way. When you are ready, you add the entire fragment to the DOM, and this places its content into the DOM in a single operation.
Also, doing $("#myTable") on each iteration is really not recommended - do it once before the loop.
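Putting both suggestions together, a sketch (the row data and field names are made up for illustration):

var $table = $("#myTable"); // look the table up once, not per iteration
var fragment = document.createDocumentFragment();

rowData.forEach(function (record) {
    var tr = document.createElement("tr");
    var td = document.createElement("td");
    td.textContent = record.name;
    tr.appendChild(td);
    fragment.appendChild(tr); // nothing touches the live DOM yet
});

$table.append(fragment); // one insertion, one reflow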
I suspect your performance problems arise because you are modifying the DOM multiple times in your loop.
Instead, try modifying it once, after you have built all your rows. Browsers are really good at innerHTML replaces. Try something like
$("#myTable").html("all the rows dom here");
Note that you might have to play with the exact selector to get the DOM in the correct place. But the main idea is: use innerHTML, and use it as few times as possible.
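If the fade and the per-row data are the sticking points, one way to combine them with a single innerHTML write (a sketch; names are illustrative and untrusted values should be escaped):

var html = records.map(function (r) {
    return "<tr style='display:none'><td>" + r.name + "</td></tr>";
}).join("");

$("#myTable tbody").html(html); // one DOM replacement

$("#myTable tbody tr").each(function (i) {
    $(this).data("record", records[i]); // re-attach the bound data per row
}).fadeIn();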

Optimizing Ext.tree.TreePanel performance

I have an Ext.tree.TreePanel used with AsyncTreeNodes. The problem is that the root node initially needs to have more than 1000 descendants. I succeeded in optimizing the DB performance, but the JavaScript performance is terrible: 25 seconds for adding and rendering 1200 nodes. I understand that manipulating the page's DOM is a slow operation, but perhaps there is some way to optimize the initial rendering process.
You could create a custom tree node UI that has a lower DOM footprint. In other words, change the HTML that is used to create each node of the tree to some less verbose (and likely less flexible) HTML.
Here are some references for doing that:
http://github.com/jjulian/ext-extensions/blob/master/IndentedTreeNodeUI.js
http://david-burger.blogspot.com/2008/09/ext-js-custom-treenodeui.html
Enjoy.
I don't think you'll have much luck optimizing a tree with that many nodes. Is there any way you could use a grid to deliver the information? You could at least set up paging with that, and it would probably be a lot faster. You could also implement the row expander UX on the grid, which behaves like a tree, sort of, for each row.
