Virtual DOM/DOM tree state in steps
So I've stumbled upon this image recently and I'm a bit confused about the part where the Virtual DOM state is applied to the real DOM. Let's say we changed a <div> to a <header> in our React code; this change would be applied to the Virtual DOM and then the whole subtree would be re-rendered. But in vanilla JS I could do the very same thing and the subtree would be re-rendered as well. So do I understand correctly that the Virtual DOM's duty, among other things, is to abstract the real DOM operations we would otherwise need to do ourselves?
Would correct/efficient HTML DOM manipulation have the same effect as working with the Virtual DOM in complex applications?
So do I understand correctly that the Virtual DOM's duty, among other things, is to abstract the real DOM operations we would otherwise need to do ourselves?
Yes.
Would correct/efficient HTML DOM manipulation have the same effect as working with the Virtual DOM in complex applications?
Yes.
You're right. A virtual DOM ultimately translates into calls to the real DOM, so it can only ever be slower than optimal use of the real DOM. If you add, remove, and move elements on the actual DOM in a way that directly reflects the changes you're making, that will already be efficient.
Then why does Virtual DOM exist?
People invented the virtual DOM to abstract away "how things are changing" and instead think about "how things should currently look." The architecture of frameworks like React and Vue is basically that components contain the code declaring how the DOM should currently look. The framework then does a big re-computation of how it thinks things should be, diffs that against how the components said things should look the last time it asked them, and applies the diff to the real DOM.
So the virtual DOM is a way to ameliorate the inherent slowness of this design, under the belief that its benefits will outweigh its costs.
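For a concrete sense of the model, here is a minimal, hypothetical React component written in this declarative style: it only says what the UI should look like for the current props, and the framework works out the DOM changes.

    function Status({ online }) {
      // The component declares the current UI; when `online` flips,
      // React diffs the new tree against the old one and patches only
      // the <span>'s class and text, not the whole subtree.
      return (
        <div>
          <h1>Account</h1>
          <span className={online ? 'on' : 'off'}>
            {online ? 'Online' : 'Offline'}
          </span>
        </div>
      );
    }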
Sources
https://svelte.dev/blog/virtual-dom-is-pure-overhead
https://medium.com/@hayavuk/why-virtual-dom-is-slower-2d9b964b4c9e
I'm not sure I fully understand your questions but I'll try to answer as fully as I can.
React works with a Virtual DOM, which means it keeps a copy of the real DOM. Whenever it detects a change, it works out what the difference is and re-renders only the affected component and its children in the subtree, NOT the whole tree.
The way React detects change is through props and state: if a component's props or state have changed, it re-renders the component.
One way you could avoid unnecessary re-renders of the children is to use PureComponent, shouldComponentUpdate, or the newer React.memo and useMemo hooks (memoization).
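A minimal sketch of the memoization approach (component names are made up): React.memo skips re-rendering a child whose props are shallow-equal to the previous render's props.

    import React, { memo } from 'react';

    // Memoized child: re-renders only when `label` actually changes.
    const Row = memo(function Row({ label }) {
      return <li>{label}</li>;
    });

    function List({ items, selected }) {
      // When only `selected` changes, the memoized rows reuse their
      // previous output instead of re-rendering the whole subtree.
      return (
        <>
          <p>Selected: {selected}</p>
          <ul>
            {items.map((label) => <Row key={label} label={label} />)}
          </ul>
        </>
      );
    }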
Also, one other difference between the normal (real) DOM and the virtual DOM is that virtual DOM nodes are plain JavaScript objects, which are cheap to create and compare, while real DOM nodes are heavyweight browser objects whose mutation can trigger style recalculation, layout, and repaint. That's why diffing against the virtual DOM first, and touching the real DOM only where needed, is so much faster.
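For a sense of what "plain objects" means here, a virtual DOM node is roughly a literal like this (shape simplified; real libraries add bookkeeping fields):

    const vnode = {
      type: 'header',                  // tag or component
      props: { className: 'banner' },  // attributes/props
      children: [
        { type: 'h1', props: {}, children: ['Hello'] },
      ],
    };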
I hope this clarifies your doubts about the virtual DOM without going into too much detail.
Real DOM
Updates are slow: touching it can trigger style recalculation, layout, and repaint.
Can directly update the HTML.
A naive update (e.g. via innerHTML) recreates the DOM nodes from scratch.
Uses a lot of memory, since every node is a heavyweight browser object.
Virtual DOM
Updates are faster, and the nodes are lightweight.
Can't directly update the HTML.
An update re-renders the JSX into a new virtual tree, then patches only the real nodes that changed.
Better performance, since plain objects use less memory.
Related
The problem I am facing is that rendering a lot of web components is slow. Scripting takes around 1.5s and then another 3s for rendering (mostly Layout + Recalculate Style) for ~5k elements, and I plan to put much more than that into the DOM. My script that prepares those elements takes around 100-200ms; the rest comes from the constructor and other callbacks.
For normal HTML elements, a performance gain can be achieved with a DocumentFragment: you prepare a batch of elements and only attach them to the DOM once you're done.
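A minimal sketch of that technique (the #list container is assumed):

    // Build all the elements off-DOM in a DocumentFragment; only the
    // final appendChild touches the live document.
    const fragment = document.createDocumentFragment();
    for (let i = 0; i < 5000; i++) {
      const el = document.createElement('div');
      el.textContent = `Item ${i}`;
      fragment.appendChild(el);
    }
    document.getElementById('list').appendChild(fragment);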
Unfortunately, each web component still calls its constructor and other callbacks such as connectedCallback, attributeChangedCallback, etc. With a lot of such components, that's not really optimal.
Is there a way to "prerender" web-components before inserting them into the DOM?
I've tried putting them inside template elements and cloning the contents, but the constructor is still called for each instance of my-component. One thing that did improve performance is putting the content that gets attached to the shadow DOM inside a template outside the component and cloning it, instead of using this.attachShadow({ mode: 'open' }).innerHTML = ..., but that's not enough.
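For reference, the template-cloning optimization described above looks roughly like this (element and content names are made up):

    // The template is parsed once, outside the component.
    const tpl = document.createElement('template');
    tpl.innerHTML = '<style>p { margin: 0 }</style><p>content</p>';

    class MyComponent extends HTMLElement {
      constructor() {
        super();
        // Each instance only clones the prepared content instead of
        // re-parsing an innerHTML string every time.
        this.attachShadow({ mode: 'open' })
            .appendChild(tpl.content.cloneNode(true));
      }
    }
    customElements.define('my-component', MyComponent);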
Do you really need to render all ~5k elements at once? You will face performance problems rendering that many elements in the DOM regardless of whether you manage to pre-initialize the components in memory.
A common approach for this scenario is called "windowing" or "lazy rendering": the idea is to render only a small subset of your components at any given time, depending on what's in the user's viewport (a plain-JS sketch follows the library references below).
After a very quick search I didn't find a library for web components that implements this, but for reference, in React you have react-window and in Vue vue-windowing.
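Here is a minimal windowing sketch in plain JS, assuming fixed-height rows and a scrollable #viewport container (both assumptions, not from the question):

    // Only the rows intersecting the viewport exist in the DOM.
    const container = document.getElementById('viewport'); // assumed: overflow-y auto
    const ROW_HEIGHT = 24;
    const allItems = Array.from({ length: 5000 }, (_, i) => `Item ${i}`);

    // A tall relative spacer keeps the scrollbar size correct; the
    // visible rows are absolutely positioned inside it.
    const spacer = document.createElement('div');
    spacer.style.cssText = `position:relative; height:${allItems.length * ROW_HEIGHT}px`;
    container.appendChild(spacer);

    function renderWindow() {
      const first = Math.floor(container.scrollTop / ROW_HEIGHT);
      const count = Math.ceil(container.clientHeight / ROW_HEIGHT) + 1;
      spacer.replaceChildren(...allItems.slice(first, first + count).map((item, i) => {
        const row = document.createElement('div');
        row.style.cssText = `position:absolute; top:${(first + i) * ROW_HEIGHT}px`;
        row.textContent = item;
        return row;
      }));
    }

    container.addEventListener('scroll', renderWindow);
    renderWindow();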
When making a state change, React will run a reconciliation algorithm and may completely re-create parts of the DOM.
If I have a CSS animation or transition being applied to a DOM node, how do React developers typically avoid interfering with the DOM during the animation (which might stop the animation)?
Does this "just work", because developers know to only start the animation (e.g. by adding or removing a class) when a specific part of the component lifecycle has completed (e.g. onComponentDidUpdate) - i.e. after they can be sure no further major DOM subtree changes will occur?
Usually this should "just work" with React's DOM diffing. A more controlled approach is to move whatever DOM is being animated into its own component and use the shouldComponentUpdate method to control when you let React perform a re-render.
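A sketch of that more controlled approach (the class and CSS class name are made up): the wrapper simply refuses to re-render, so React never touches the animated node mid-animation.

    import React from 'react';

    class AnimatedBox extends React.Component {
      // Once mounted, never re-render: parent updates leave this
      // subtree (and any CSS animation running on it) alone.
      shouldComponentUpdate() {
        return false;
      }
      render() {
        return <div className="slide-in">{this.props.children}</div>;
      }
    }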
How does D3's DOM-manipulation mechanism influence React's virtual DOM, if at all?
I've found many examples showing that the two libraries can work together nicely, but none of them address this issue.
It might not be an issue at all, by the way; it's just a big question I raised but couldn't find an answer to.
EDIT:
I've just learned that the DOM only gets updated when you "write" to the virtual DOM, and that "reading" is always done from the virtual DOM, never the actual DOM.
So when I use D3 to update the DOM directly, the virtual DOM has no idea about it, and I won't be able to read the new changes from the virtual DOM.
That's what I was afraid of, and now I wonder how React helps me when I have to use D3.
You follow the rules of each library when interacting with it. With regard to React, you wrap the D3 DOM manipulation in a component, and that's it.
Depending on the components you use, you can either have components that do everything in D3, or write some primitives that let you use React components instead of low-level D3.
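A sketch of such a wrapper using hooks (the names and the chart itself are made up): React renders one empty <svg> and never touches its children again, so D3 owns everything inside it.

    import React, { useRef, useEffect } from 'react';
    import * as d3 from 'd3';

    function BarChart({ data }) {
      const svgRef = useRef(null);

      useEffect(() => {
        // D3 manipulates the real nodes inside the <svg> directly.
        d3.select(svgRef.current)
          .selectAll('rect')
          .data(data)
          .join('rect')
          .attr('x', (d, i) => i * 22)
          .attr('y', (d) => 100 - d)
          .attr('width', 20)
          .attr('height', (d) => d);
      }, [data]);

      // React's virtual DOM only ever sees an empty <svg>, so the two
      // libraries never fight over the same nodes.
      return <svg ref={svgRef} width={400} height={100} />;
    }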
I have an Ext.tree.TreePanel used with AsyncTreeNodes. The problem is that initially the root node needs to have more than 1000 descendants. I succeeded at optimizing the DB performance, but the JavaScript performance is terrible: 25 seconds for adding and rendering 1200 nodes. I understand that manipulating the page's DOM is a slow operation, but perhaps there is some way to optimize the initial rendering process.
You could create a custom tree node UI that has a smaller DOM footprint. In other words, change the HTML used to create each node of the tree to something less verbose (and likely less flexible).
Here are some references for doing that:
http://github.com/jjulian/ext-extensions/blob/master/IndentedTreeNodeUI.js
http://david-burger.blogspot.com/2008/09/ext-js-custom-treenodeui.html
Enjoy.
I don't think you'll have much luck optimizing a tree with that many nodes. Is there any way you could use a grid to deliver the information instead? You could at least set up paging with a grid, and it would probably be a lot faster. You could also implement the row-expander UX on the grid, which behaves somewhat like a tree for each row.
I'm trying to optimize a sortable table I've written. The bottleneck is the DOM manipulation: I'm currently creating new table rows and inserting them every time I sort the table. I'm wondering if I might be able to speed things up by simply rearranging the rows rather than recreating the nodes. For this to make a significant difference, rearranging DOM nodes would have to be a lot snappier than creating them. Is this the case?
thanks,
-Morgan
I don't know whether creating or moving is faster, but I do know that it'll be faster if you manipulate the entire table while it's not on the page and then place it back all at once. Along those lines, it'll probably be slower to re-arrange the existing rows in place unless the whole table is removed from the DOM first.
This page suggests that it'd be fastest to clone the current table, manipulate it as you wish, then replace the table on the DOM.
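If it helps, a minimal sketch of that clone-then-swap idea (the #sortable id and the reverse manipulation are just placeholders):

    const table = document.getElementById('sortable');
    const clone = table.cloneNode(true);
    const body = clone.tBodies[0];

    // Any amount of row shuffling happens off-DOM, on the clone.
    [...body.rows].reverse().forEach((row) => body.appendChild(row));

    table.replaceWith(clone); // a single live-DOM operation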
I'm now drawing this table about twice as quickly by using innerHTML, building the entire contents as a string rather than inserting nodes one by one.
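Roughly, the string-building approach looks like this (sample data made up; escape untrusted content before interpolating it into HTML):

    const data = [{ name: 'a', value: 1 }, { name: 'b', value: 2 }];

    // One innerHTML assignment instead of one insertion per row.
    document.querySelector('#sortable tbody').innerHTML = data
      .map((item) => `<tr><td>${item.name}</td><td>${item.value}</td></tr>`)
      .join('');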
You may find this page handy for some benchmarks:
http://www.quirksmode.org/dom/innerhtml.html
I was looking for an answer to this and decided to set up a quick benchmark http://jsfiddle.net/wheresrhys/2g6Dn/6/
It uses jQuery, so it's not a pure benchmark, and it's probably skewed in other ways too. But the result it gives is that moving DOM nodes is about twice as fast as creating and destroying DOM nodes each time.
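That matches how appendChild behaves: appending a node that is already in the document moves it rather than copying it, so a sort can be done without creating or destroying any rows. A sketch (table id assumed):

    const tbody = document.querySelector('#sortable tbody');

    // Sort by the first cell's text; re-appending each row in order
    // moves the existing nodes instead of recreating them.
    [...tbody.rows]
      .sort((a, b) => a.cells[0].textContent.localeCompare(b.cells[0].textContent))
      .forEach((row) => tbody.appendChild(row));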
If you can, it's better not to perform the DOM manipulation as individual operations on the live document. Rather than triggering a repaint for every single node, clump that work into its own method in your script: build the nodes, attach them to a detached parent, and then attach that parent to the actual DOM. That results in just two repaints instead of hundreds; I say two because you also need to clean up what's already in the DOM before inserting your new data.
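A sketch of that batching idea, reusing the hypothetical data array and #sortable table from the sketches above: the detached tbody absorbs all the per-node work, leaving exactly two live-DOM operations.

    const table = document.getElementById('sortable');
    const fresh = document.createElement('tbody');

    // All the per-row work happens on the detached tbody.
    for (const item of data) {
      const row = fresh.insertRow();
      row.insertCell().textContent = item.name;
      row.insertCell().textContent = String(item.value);
    }

    table.tBodies[0].remove(); // repaint 1: clean up the old rows
    table.appendChild(fresh);  // repaint 2: attach the whole batch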