When making a state change, React will run a reconciliation algorithm and may completely re-create parts of the DOM.
If I have a CSS animation or transition being applied to a DOM node, how do React developers typically avoid interfering with the DOM during the animation (which might stop the animation)?
Does this "just work", because developers know to only start the animation (e.g. by adding or removing a class) once a specific part of the component lifecycle has completed (e.g. componentDidUpdate) - i.e. after they can be sure no further major DOM subtree changes will occur?
Usually this should "just work" with React's DOM diffing. A more controlled approach is to move whatever DOM is being animated into its own component and use the shouldComponentUpdate method to control when you let React perform a re-render.
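That controlled approach can be sketched as follows. The predicate below is the decision logic a shouldComponentUpdate implementation on a hypothetical wrapper component might use (animating and className are assumed props), written as a plain function so it is easy to see:

```javascript
// Sketch: the body of shouldComponentUpdate(nextProps) for a class
// component that wraps the animated DOM node.
function shouldAnimatedPanelUpdate(props, nextProps) {
  // While the CSS animation/transition is running, refuse every update
  // so React never rebuilds this part of the DOM mid-animation.
  if (props.animating) return false;
  // Otherwise re-render only when something visible actually changed.
  return props.className !== nextProps.className;
}
```

In the real component you would return this value from shouldComponentUpdate (or pass an equivalent comparison to React.memo for a function component).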
React has just released the startTransition API in v18. While it seems like a "magic spell" to bring more responsiveness, the doc is quite vague about its behavior under the hood. The only sentence related to its behavior in the doc is "Updates in a transition yield to more urgent updates such as clicks."
As a React programmer, I am concerned about whether it will cause glitches to my components. E.g.:
Will the state split into two branches during a transition? Let's assume we have a current state. When a transition is turning it into slow_update(state), another event occurs that turns the state into fast_update(state). We will have both slow_update(state) and fast_update(state) calculated in this process. This never happens in older React, where state updates are applied in order.
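The branching concern can be made concrete with plain functions (slow_update and fast_update are the hypothetical updaters from the example; this sketches the sequence of computations, not React's internals):

```javascript
// A transition starts rendering with slow_update(state), an urgent
// event interrupts it, and the in-progress tree is thrown away.
const state = { list: [1, 2, 3], query: '' };
const slow_update = s => ({ ...s, list: s.list.filter(n => n > 1) }); // expensive
const fast_update = s => ({ ...s, query: s.query + 'a' });            // urgent

const abandoned = slow_update(state);     // 1. transition render begins
const committed = fast_update(state);     // 2. urgent update wins and commits
const retried   = slow_update(committed); // 3. transition restarts on top of it

// Both branches were computed, but only `committed` and then `retried`
// ever reach the screen; `abandoned` is discarded.
```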
If some useEffect hooks are triggered by the state update, will the callback execute twice if the transition is interrupted? Will the callback receive some inconsistent data since the state is branched during transition?
I am also curious whether it will make my app any more responsive at all. E.g.:
Does startTransition apply to the render phase, or does it only optimize reconciliation? Does it apply to just the callback itself, or also to the follow-up renders, useMemos and useEffects on the updated state? It seems that CPU-heavy logic blocks the main thread anyway, so React has no way to interrupt it. Therefore, if my component filters through a long list in a startTransition callback, is it optimized at all?
If urgent updates occur repeatedly, will the slow update be blocked forever? For example, if the component uses requestAnimationFrame to update its state every 16ms, and the slow transition takes longer than that, will the transition be interrupted again and again?
I have read the doc, the release notes, the reactwg discussion and the official example, but they do not answer the questions above. Since concurrency-related bugs are hard to reproduce or fix, it would be great if there were some rules of thumb (like the Rules of Hooks) that keep transition code correct.
In general, what code will benefit from startTransition, and what is needed to avoid glitches?
The problem I am facing is that rendering a lot of web components is slow. Scripting takes around 1.5s, and then another 3s goes to rendering (mostly Layout + Recalculate Style) for ~5k elements - and I plan to put much more than that into the DOM. My script to prepare those elements takes around 100-200ms; the rest comes from the constructor and other callbacks.
For normal HTML elements, a perf gain can be achieved with a DocumentFragment: you prepare a batch of elements, and only when you're done do you attach them to the DOM.
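For reference, the DocumentFragment batching mentioned above looks roughly like this (browser-only sketch; the #list container id is an assumption):

```javascript
// Build the elements off-DOM, then attach them all in one operation,
// so the browser performs a single layout pass instead of one per element.
const fragment = document.createDocumentFragment();
for (let i = 0; i < 5000; i++) {
  const el = document.createElement('div');
  el.textContent = `Item ${i}`;
  fragment.appendChild(el);
}
// One insertion; the fragment itself never appears in the DOM.
document.getElementById('list').appendChild(fragment);
```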
Unfortunately, each web component will still call its constructor and other callbacks like connectedCallback, attributeChangedCallback, etc. With a lot of such components, that's not really optimal.
Is there a way to "prerender" web-components before inserting them into the DOM?
I've tried to put them inside template elements and clone the contents, but the constructor is still called for each instance of my-component. One thing that did improve performance was putting the content destined for the shadow DOM inside a template outside the component and cloning that, instead of using this.attachShadow({ mode: 'open' }).innerHTML = ..., but that's not enough.
Do you really need to render all ~5k elements at once? You will face performance problems rendering that many elements in the DOM regardless of whether you manage to pre-initialize the components in memory.
A common approach for this scenario is a technique called "windowing" or "lazy rendering": the idea is to render only a small subset of your components at any given time, depending on what's in the user's viewport.
After a very quick search, I didn't find a library for web components that implements this, but for reference, in React you have react-window and in Vue vue-windowing.
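Whichever library you end up with, the heart of windowing is just index arithmetic. A framework-agnostic sketch, assuming a fixed row height (the function name and overscan parameter are made up):

```javascript
// Given the scroll position, compute which rows intersect the viewport;
// only those get rendered, plus a small overscan buffer on each side.
function visibleRange(scrollTop, viewportHeight, itemHeight, total, overscan = 3) {
  const first = Math.max(0, Math.floor(scrollTop / itemHeight) - overscan);
  const last = Math.min(total - 1,
    Math.ceil((scrollTop + viewportHeight) / itemHeight) + overscan);
  return [first, last];
}
```

With 5000 items, 20px rows and a 600px viewport, only a few dozen components exist in the DOM at any moment, no matter how long the list gets.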
Virtual DOM/DOM tree state in steps
So I've stumbled upon this image recently and I'm a bit confused about the part where the Virtual DOM state is applied to the real DOM. Let's say we changed a <div> to a <header> in our React code; this change would be applied to the Virtual DOM and then the whole subtree would be re-rendered. But in vanilla JS I could do the very same thing and the subtree would be re-rendered as well? So do I understand correctly that the Virtual DOM's duty, among other things, is to abstract the real DOM operations we would need to do?
Would correct/efficient HTML DOM manipulations have same effect as working with Virtual DOM in complex applications?
So do I understand correctly that Virtual DOM duty among other things is to abstract real DOM operations we would need to do?
Yes.
Would correct/efficient HTML DOM manipulations have same effect as working with Virtual DOM in complex applications?
Yes.
You're right. A virtual DOM ultimately translates to calls to the real DOM. Therefore, the virtual DOM can only ever be slower than optimal usage of the real DOM. If you're adding, removing, and moving elements on the actual DOM in a way that directly represents the changes you're making, it will be efficient.
Then why does Virtual DOM exist?
People invented virtual DOM to abstract away "how things are changing" and instead think about "how things are currently." The architecture of things like React and Vue is basically that components have the code to declare the way that the DOM should currently look. Then the code in these frameworks does a big re-computation of how they think things should be, then it diffs it against how the components said things should be the last time it asked them, then it applies that diff to the real DOM.
So, virtual DOM is a way to ameliorate the inherent slowness of this design, under the belief that its benefits will outweigh the costs.
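To make that diff-then-patch loop concrete, here is a toy diff over a made-up virtual-node shape ({ tag, text, children }); it bears no resemblance to React's actual implementation:

```javascript
// Compare the previous and next virtual trees and emit the minimal
// list of real-DOM operations. Changing a tag replaces the whole
// subtree, which is exactly the <div> -> <header> case from the question.
function diff(prev, next, path = 'root') {
  if (prev === undefined) return [{ op: 'create', path, node: next }];
  if (next === undefined) return [{ op: 'remove', path }];
  if (prev.tag !== next.tag) return [{ op: 'replace', path, node: next }];
  const ops = prev.text !== next.text
    ? [{ op: 'setText', path, text: next.text }]
    : [];
  const prevKids = prev.children || [];
  const nextKids = next.children || [];
  for (let i = 0; i < Math.max(prevKids.length, nextKids.length); i++) {
    ops.push(...diff(prevKids[i], nextKids[i], `${path}.${i}`));
  }
  return ops;
}
```

Identical trees produce an empty list, so nothing touches the real DOM; that is the whole trade: extra computation up front in exchange for fewer (and only necessary) DOM mutations.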
Sources
https://svelte.dev/blog/virtual-dom-is-pure-overhead
https://medium.com/@hayavuk/why-virtual-dom-is-slower-2d9b964b4c9e
I'm not sure I fully understand your questions but I'll try to answer as fully as I can.
React works with a Virtual DOM, which means it keeps a copy of the real DOM, and whenever it detects a change, it works out the difference and only re-renders that component and its children in the subtree, NOT the whole tree.
The way React detects change is through props and state: if a component's props or state have changed, it re-renders the component.
One way you can avoid unnecessary re-renders of the children is to use PureComponent, shouldComponentUpdate, or the newer React.memo and the useMemo/useCallback hooks (memoization).
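The "has it changed" check those tools perform is a shallow comparison of props. A minimal sketch of the idea (not React's exact code):

```javascript
// Compare two props objects key by key, one level deep. If every
// value is identical (Object.is), the re-render can be skipped.
function shallowEqual(prevProps, nextProps) {
  const prevKeys = Object.keys(prevProps);
  const nextKeys = Object.keys(nextProps);
  if (prevKeys.length !== nextKeys.length) return false;
  return prevKeys.every(key => Object.is(prevProps[key], nextProps[key]));
}
```

This is also why passing a freshly created object or inline arrow function as a prop defeats memoization: it is a new reference on every render, so the shallow check always fails.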
Also, the one other difference between the normal (JavaScript) DOM and the virtual DOM is that the virtual DOM is made up of plain JavaScript objects, which are cheap to create and compare, while real DOM nodes are heavyweight browser objects whose mutation can trigger style recalculation and layout - and that's why working against the virtual copy first is faster.
I hope this will clarify your doubts about the virtual DOM without going too much in detail.
Real DOM
It updates slowly.
Can directly update the HTML.
Re-creates DOM nodes when an element updates.
Uses a lot of memory.
Virtual DOM
Updates faster and is lightweight.
Can't directly update the HTML.
Updates the virtual tree when an element changes, then patches the real DOM.
Better performance, since it uses less memory.
I'm trying to implement a generic Drag-and-Drop module that I may or may not release on NPM.
I took inspiration from an already existing DnD module which consisted of essentially two main components: Draggables and DropZones.
The Draggable components would enable any content that they enclosed to be dragged around, and DropZones were the components that would react to when a Draggable component was dragged in/out of them and/or dropped onto them.
The module's functionality was too basic for what I needed, so I set off on making my own.
Everything has been going fine except for one aspect: measuring the absolute position of a large number of components in order to find their corresponding bounding shapes for collision detection.
Currently, I've been using the View's measure method to measure the absolute position and size of my components. However, if you call this method too many times within a short period of time, React Native will throw this error: "Warning: Please report: Excessive number of pending callbacks: 501. Some pending callbacks that might have leaked by never being called from native code", and it lists the offending method as {"module":"UIManager","method":"measure"}.
If I make a basic for loop that repeatedly calls the measure method, I can get it up to just over 500 iterations before React Native complains (this number may be hardware-dependent; I'm not sure).
Also, the same problem happens when using the View's other measuring methods (e.g. measureInWindow, measureLayout).
What's even worse is that up until now, I have been calling the measure method in a View's onLayout callback, however, I recently discovered that onLayout does not get called when a style changes that doesn't directly affect the given View (such as when a parent component's margin changes).
This means that the given components can be re-positioned (relative to the screen) without them knowing about it.
Because of this I will have to use a different means of notifying my components that they need to remeasure themselves.
I thought about using some of the life-cycle methods that pertain to component updates, but that would most likely be even worse, because most of them get called every time the component updates which is far more often than a layout happens.
Edit: One other problem with using the life-cycle methods that pertain to component updates is that a parent component can prevent its child components from being updated (such as through shouldComponentUpdate).
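Whatever ends up triggering the remeasure, one way to keep the number of pending native callbacks bounded is to push measurements through a small queue with a concurrency limit. A framework-agnostic sketch (createMeasureQueue and its limit are made up; measureFn stands in for a call like viewRef.measure(cb)):

```javascript
// Allow at most `limit` measure calls in flight at once; the rest wait,
// so pending callbacks never pile up toward the ~500 warning threshold.
function createMeasureQueue(limit = 100) {
  let inFlight = 0;
  const waiting = [];
  function pump() {
    while (inFlight < limit && waiting.length > 0) {
      const { measureFn, resolve } = waiting.shift();
      inFlight += 1;
      measureFn(result => { // measureFn invokes its callback when done
        inFlight -= 1;
        resolve(result);
        pump(); // start the next queued measurement
      });
    }
  }
  return function enqueue(measureFn) {
    return new Promise(resolve => {
      waiting.push({ measureFn, resolve });
      pump();
    });
  };
}
```

Each component would then call something like enqueue(cb => this.viewRef.measure(cb)) instead of measuring directly (wrapping measure's multiple callback arguments into one object as needed).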
There are around 300 components rendered inside the wrapper component, and it's taking too much time to render. But I need a thousand components rendered inside the wrapper container. How can I achieve this without performance issues while rendering the components?
The image shows the rendering time taken by 300 components, which is too much.
If you have a scroll and not all of your components are in the viewport at the same time, you may use the Proxy Pattern.
There is an ember addon called ember-in-viewport to detect whether your component is in viewport or not. By using it, you may implement the proxy pattern.
Here is a sample twiddle. In application.hbs, if you use my-proxy-component instead of my-component, page rendering is nearly 3 times faster.
These tips are kinda hacky but might help:
You might want to lazy-load some of the components: for example, load the most critical ones first, then load the rest by changing a computed property after a fixed timeout in the didRender event (during the idle state).
For example:
onDidRender: Ember.on('didRender', function() {
  Ember.run.later(() => {
    this.set('displayLazyItems', true);
  }, 2000);
})
Another thing you might want to do is inline the markup instead of creating a new component for small items. Rendering those components might take some serious time.
Record a timeline and ensure that the cost is actually coming from render time; sometimes it's actually JavaScript execution or loading that is messing things up.
Lastly, Ember 2.10 includes Glimmer 2. This new rendering engine can cut render time by up to 50%, so you might want to consider upgrading to it.