Performance implications of removing an element from the DOM? - javascript

I have an animation loop which adds a new element to the DOM and animates it; this can repeat basically forever. My question is: when each animation cycle has finished, is it better to remove the element from the DOM or to just leave it hidden on the page? I can't reuse the element since, the way the loop works, a new animation may begin while another one is finishing, so there could be multiple elements on the page at a given time. I realize this question is rather elementary, but I would appreciate some insight. Thanks.

Performance implications
Hardly any. However, there are memory implications: filling up the DOM with (hidden) elements and never stopping is evil. Of course, at some point this slows down the whole process.
is it better to remove the element from the DOM or to just leave it hidden on the page?
Definitely remove it.
I can't reuse the element since, the way the loop works, a new animation may begin while another one is finishing, so there could be multiple elements on the page at a given time.
You still could reuse them by maintaining an element pool, but that's probably not necessary. Removing old ones and creating new ones is fine.
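A minimal sketch of such a pool, in case it ever becomes necessary (it assumes the animated elements are plain divs and that release() is called when an animation cycle ends; adapt to your markup):

var pool = [];

function acquire() {
    // Reuse a parked element if one is free, otherwise create a new one.
    var el = pool.pop() || document.createElement('div');
    document.body.appendChild(el);
    return el;
}

function release(el) {
    // Detach rather than hide, so the render tree stays small,
    // then park the node for the next cycle.
    el.parentNode.removeChild(el);
    pool.push(el);
}

Overlapping cycles just make the pool grow to the maximum number of concurrent animations; nothing is ever left hidden in the DOM.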

Related

Does animating an element deep in the DOM really hurt performance?

I read the following and it got me thinking
Help the browser to render
The browser manages the rendering tree, and elements depend on each other. If the animated element is deep in the DOM, then other elements depend on its geometry and position. Even if the animation doesn't actually shift them, the browser has to perform additional calculations.
To make the animation consume less CPU (and be smoother), don't animate an element deep in the DOM.
This is referring to vanilla JS - is it outdated advice? Does jQuery have this issue or does it do something clever to avoid it?
It does make sense. When you change an element, the browser has to verify whether the change affects anything 'up the chain'. You can bypass this by making the element independent of the layout: position it absolutely, or animate a transform property. In that case the animated element shouldn't affect anything else on the page.
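For illustration, a minimal requestAnimationFrame loop that only ever touches transform on an absolutely positioned element, so the browser never has to re-lay-out its neighbours (the #box id is a placeholder):

var el = document.getElementById('box');
el.style.position = 'absolute';               // take the element out of normal flow
var start = null;
function step(ts) {
    if (start === null) start = ts;
    var x = Math.min((ts - start) / 10, 200); // ~200px over two seconds
    el.style.transform = 'translateX(' + x + 'px)';
    if (x < 200) requestAnimationFrame(step);
}
requestAnimationFrame(step);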
Yes, jQuery has that issue. If you want performant animations you should use CSS or the native element.animate where available: http://updates.html5rocks.com/2014/05/Web-Animations---element-animate-is-now-in-Chrome-36
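A minimal element.animate() call of the kind the linked article describes, with a feature check for browsers that don't support it yet (the #box element is a placeholder):

var box = document.getElementById('box');
if (box.animate) {
    // Keyframes animate transform only, so no layout work is triggered.
    box.animate(
        [ { transform: 'translateX(0)' }, { transform: 'translateX(200px)' } ],
        { duration: 1000, fill: 'forwards' }
    );
}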

depend on a dom hover or an array of divs, positions and dimensions

Is it better to depend on the DOM's (CSS) :hover for detecting when I'm over another div, or to figure that out from an already-stored array of all the divs' positions and dimensions on the page? Keep in mind that the said array updates every time an element changes position or dimension.
I want the process to be efficient. Part of me wants to depend on that array for detecting when I'm over another div, but I'm afraid that would mean extra processing.
Can anybody please help me? (Thanks in advance.)
To my knowledge, relying on the DOM would be the more advantageous of the two. As arbitter said, relying on CSS will probably have a very slight performance advantage, but really this type of process wouldn't slow down your program much, if at all.
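For what it's worth, the DOM-based detection can be a single delegated listener: the browser does the hit-testing itself, so no rect array has to be scanned on every mouse move (the .tile class is a placeholder for the divs in question):

document.addEventListener('mouseover', function (e) {
    if (e.target.classList.contains('tile')) {
        // The browser already resolved which div the pointer entered.
    }
});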

tab switch in js, which is faster?

I always remove the 'current' class from all siblings, then add the 'current' class to the clicked one. I want to know whether it would be faster to remove 'current' only from the element that actually has it. It seems to be a simple question, but I really want to know.
Yes, filtering the query to a smaller set of elements will perform faster, because there are fewer elements to check.
In modern browsers, jQuery will use native methods to query the DOM, so adding the selector has a negligible performance impact.
I don't think there's much difference, since there's only one "current" element. Querying one element more or one less doesn't matter much.
Usually I'll first look up the outer element to narrow the search down:
$('#selectionDiv').find('.current') ...
Depending on how many elements you are re-classing, the impact of the optimization will of course vary.
I tested it, http://jsperf.com/reclassing-all-or-one, using 7 divs (which seemed reasonable for, say, navigation tabs), and the difference was significant percentage-wise: reclassing all was about 30% slower than reclassing only one. In absolute time it may not matter, but I can't really see any reason not to be specific.
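For reference, the two variants being compared look like this (assuming the tabs are li elements inside a #tabs container):

// Inside the tab's click handler:
$('#tabs li').removeClass('current');          // variant 1: re-class every sibling
$('#tabs li.current').removeClass('current');  // variant 2: touch only the current one
$(this).addClass('current');                   // then mark the clicked tab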

Performance for appending large element/datasets to the dom

I am appending large numbers of table row elements at a time and experiencing some major bottlenecks. At the moment I am using jQuery, but I'm open to a plain JavaScript solution if it gets the job done.
I need to append anywhere from 0-100 table rows at a given time (potentially more, but I'll be paginating anything over 100).
Right now I am appending each table row individually to the DOM...
for (var i = 0; i < data.length; i++) {   // data: the rows to render
    var row = buildRowHtml(data[i]);      // stands in for "..build html str..."
    $("#myTable").append(row);            // one DOM insertion per iteration
}
Then I fade them all in at once
$("#myTable tr").fadeIn();
There are a couple of things to consider here...
1) I am binding data to each individual table row, which is why I switched from a mass append to appending individual rows in the first place.
2) I really like the fade effect. Although not essential to the application I am very big on aesthetics and animations (that of course don't distract from the use of the application). There simply has to be a good way to apply a modest fade effect to larger amounts of data.
(edit)
3) A major reason for approaching this in the smaller-chunk/recursive way is that I need to bind specific data to each row. Am I binding my data wrong? Is there a better way to keep track of this data than binding it to each row's tr?
Is it better to apply effects/DOM manipulations in large chunks, or in smaller chunks via recursive functions?
Are there situations where it's better to do one or the other? If so, what are the indicators for choosing the appropriate method?
Take a look at this post by John Resig, it explains the benefit of using DocumentFragments when doing large additions to the DOM.
A DocumentFragment is a container that you can add nodes to without actually altering the DOM in any way. When you are ready, you add the entire fragment to the DOM, and this places its content into the DOM in a single operation.
Also, doing $("#myTable") on each iteration is really not recommended - do it once before the loop.
I suspect your performance problems are because you are modifying the DOM multiple times in your loop.
Instead, try modifying it once after you get all your rows. Browsers are really good at innerHTML replaces. Try something like
$("#myTable").html("all the rows dom here");
Note that you might have to play with the exact selector to get the DOM in the correct place. But the main idea is to use innerHTML, and to use it as few times as possible.
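A sketch of that single-write version; the row markup is illustrative, and each row's id is carried in a data-* attribute so the data binding survives the innerHTML replacement:

var html = '';
for (var i = 0; i < rows.length; i++) {
    // Values concatenated into markup should be escaped in real code.
    html += '<tr data-id="' + rows[i].id + '"><td>' + rows[i].name + '</td></tr>';
}
$('#myTable tbody').html(html); // one innerHTML write, one reflow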

javascript/dom -- how expensive is creating vs rearranging dom nodes?

I'm trying to optimize a sortable table I've written. The bottleneck is in the DOM manipulation. I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the rows instead of recreating the nodes. For this to make a significant difference, DOM node rearranging would have to be a lot snappier than node creation. Is this the case?
thanks,
-Morgan
I don't know whether creating or manipulating is faster, but I do know that it'll be faster if you manipulate the entire table while it's not on the page and then place it back all at once. Along those lines, it'll probably be slower to rearrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table on the DOM.
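A sketch of that clone-and-swap pattern (sortRows is a hypothetical function that reorders the <tr> nodes of whatever table it is given):

var table = document.getElementById('myTable'); // assumes the table's id
var clone = table.cloneNode(true);              // deep copy, not in the document
sortRows(clone);                                // manipulate freely: no repaints yet
table.parentNode.replaceChild(clone, table);    // one swap, one reflow

One caveat: cloneNode does not copy event handlers or jQuery data bound to the original rows, so those would need to be re-attached (or delegated from an ancestor).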
I'm drawing this table about twice as quickly now, using innerHTML and building the entire contents as a string, rather than inserting nodes one-by-one.
You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html
I was looking for an answer to this and decided to set up a quick benchmark: http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it is not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes every time.
If you can, it is better to do the DOM manipulation not against the live DOM, but within your script, and only then touch the real DOM. Rather than triggering what amounts to a repaint for every single node, clump that work together: attach the nodes to a detached parent, then attach that parent to the actual DOM, resulting in just two repaints instead of hundreds. I say two because you also need to clean up what is already in the DOM before inserting your new data.
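A sketch of that idea applied to the sort: appendChild on a node that is already in the document moves it rather than copying it, so the existing <tr> elements can be reordered through a detached fragment without being recreated (compareRows is a placeholder comparator):

var tbody = document.querySelector('#myTable tbody');
var rows = Array.prototype.slice.call(tbody.rows);
rows.sort(compareRows); // hypothetical comparator

var fragment = document.createDocumentFragment();
for (var i = 0; i < rows.length; i++) {
    fragment.appendChild(rows[i]); // moves the existing row out of the table
}
tbody.appendChild(fragment); // re-attach all rows in one operation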
