I have a react-virtualized infinite scroll list (most of the setup is copied from this example). I'm providing it with a rowRenderer function, as the spec requires. This works fine as long as the rowRenderer function is very lightweight (i.e. returns a very basic component as the row).
But rendering my RowComponent involves an Array.map over some properties. That shouldn't be a problem in itself, except that the rowRenderer function is called tens or even hundreds of times while scrolling, which causes a performance issue and makes scrolling not smooth enough.
So far I tried:
Caching the output of my rowRenderer (see the sketch after this list). This works, but I don't like the solution as it may cause problems in the future.
Making my RowComponent's render function pure and implementing shouldComponentUpdate using react-addons-shallow-compare. This slightly improved performance, but not enough.
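For reference, a rough sketch of the caching workaround from the first bullet; rowCache, rows and the invalidation helper are my own names, RowComponent is my row, and the exact rowRenderer signature depends on the react-virtualized version you use:

const rowCache = {};

function rowRenderer(index) {
  // Build the (relatively expensive) row element only once per index.
  if (!rowCache[index]) {
    rowCache[index] = <RowComponent {...rows[index]} key={index} />;
  }
  return rowCache[index];
}

// Called from the Redux store subscription whenever row data changes,
// so stale rows are never served from the cache.
function invalidateRowCache() {
  Object.keys(rowCache).forEach(key => delete rowCache[key]);
}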
In this example, the rowRenderer function is also called many times per scroll (with no perf issues there, as the function is very lightweight), which makes me believe this behavior is by design. So:
Is caching a good solution? Any advice on how to keep it in sync with my app's state (I use Redux for state management)? Is there something I missed in the docs that can reduce calls to rowRenderer (there's no reason for my rows to change while scrolling)?
Author of react-virtualized here.
Your rowRenderer method should be lightweight because, as you've found, it may be called rapidly while the user is scrolling. The good news is that, since browsers manage scrolling in a separate thread from the UI, this usually doesn't cause any performance problems. If anything, you may notice some empty/white space at the edge of the list in the direction you're scrolling, indicating that your renderers aren't able to keep up with the user's scrolling speed.
One caveat to be aware of though is that if you attach touch or wheel event handlers to a react-virtualized component or one of its DOM ancestors, this will force the browser to scroll in the main/UI thread. That can definitely cause slowness.
I'm currently in the middle of a major update (version 7) which, among other things, will pass named arguments to user functions like rowRenderer. This will let me pass meta information (like whether or not the list is currently scrolling), which could enable you to defer "heavy" logic while a scroll is in progress. Unfortunately this isn't possible in version 6 unless you're willing to use a timeout, as doron-zavelevsky mentions.
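For example, once that meta information is available, a renderer could fall back to a cheap placeholder mid-scroll. This is only a sketch of what the version 7 arguments might look like, not the final API; RowComponent and rows are the question's own names:

function rowRenderer({ index, isScrolling }) {
  if (isScrolling) {
    // Cheap placeholder while the user is actively scrolling.
    return <div key={index} className="row-placeholder">...</div>;
  }
  // Full (heavier) row once scrolling has settled.
  return <RowComponent {...rows[index]} key={index} />;
}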
Edit: You may be happy to learn that, with this commit, cell caching has made its way into the upcoming version 7 release.
From my experience with this library (though I'm not using the latest version), this is by design.
It makes sense: in order to avoid rendering the whole list at once, and to allow infinite scroll, it asks you each time to render the currently viewed items.
Your goal is to optimize the render function, as you yourself mentioned.
One more thing that can improve your overall experience is to check whether your item contains complex code in its componentDidMount lifecycle method, or any other code that runs post-render. If so, you can optimize for fast scrolling by delaying those calculations with a timeout, and only letting them run if the component is still mounted when the timeout fires.
Consider the case where you scroll quickly past items to get to the bottom: there's no sense in fully populating all the items you scroll past on the way there. So you return the render result as fast as you can, and inside the item you wait ~200 ms, then check whether the component is still mounted and do the real work.
Since isMounted is deprecated, you can simply set a flag to true in componentDidMount and back to false in componentWillUnmount (see the sketch below).
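A minimal sketch of that idea, assuming a class-based RowComponent whose heavy post-render work can wait ~200 ms; the doExpensiveWork name is made up:

import React from 'react';

class RowComponent extends React.Component {
  componentDidMount() {
    // isMounted() is deprecated, so track mounted state manually.
    this._isStillMounted = true;
    this._timer = setTimeout(() => {
      if (this._isStillMounted) {
        this.doExpensiveWork(); // hypothetical heavy post-render logic
      }
    }, 200);
  }

  componentWillUnmount() {
    this._isStillMounted = false;
    clearTimeout(this._timer);
  }

  doExpensiveWork() {
    // e.g. measurements, data massaging, kicking off animations...
  }

  render() {
    // Keep this as cheap as possible so fast scrolling stays smooth.
    return <div className="row">{this.props.title}</div>;
  }
}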
Related
React has just released the startTransition API in v18. While it seems like a "magic spell" that brings more responsiveness, the docs are quite vague about its behavior under the hood. The only sentence about its behavior in the docs is "Updates in a transition yield to more urgent updates such as clicks."
As a React programmer, I am concerned about whether it will cause glitches to my components. E.g.:
Will the state split into two branches during a transition? Let's assume we have a current state. While a transition is turning it into slow_update(state), another event occurs that turns the state into fast_update(state). Both slow_update(state) and fast_update(state) will be calculated in this process. This never happens in the old React, where state updates are applied in order.
If some useEffect hooks are triggered by the state update, will the callback execute twice if the transition is interrupted? Will the callback receive some inconsistent data since the state is branched during transition?
I am also curious about whether it will make my app any more responsive at all. E.g.:
Does startTransition apply to the render phase, or does it only optimize reconciliation? Does it apply to just the callback itself, or also to the follow-up renders, useMemos and useEffects on the updated state? It seems that CPU-heavy logic blocks the main thread anyway, so React has no way to interrupt it. Therefore, if my component filters through a long list in a startTransition callback (a minimal sketch of what I mean follows this list), is it optimized at all?
If urgent updates occur repeatedly, will the slow update be blocked forever? For example, if the component uses requestAnimationFrame to update its state every 16 ms, and the slow transition takes longer than that, will the slow transition be interrupted again and again?
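The minimal sketch I am referring to above; SearchableList, the filter logic, Spinner and List are placeholders of my own:

import { useState, useTransition } from 'react';

function SearchableList({ items }) {
  const [query, setQuery] = useState('');
  const [filtered, setFiltered] = useState(items);
  const [isPending, startTransition] = useTransition();

  function handleChange(e) {
    const next = e.target.value;
    setQuery(next); // urgent update: keeps the input responsive
    startTransition(() => {
      // non-urgent update: may be interrupted by more urgent ones
      setFiltered(items.filter(item => item.name.includes(next)));
    });
  }

  return (
    <>
      <input value={query} onChange={handleChange} />
      {isPending ? <Spinner /> : <List items={filtered} />}
    </>
  );
}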
I have read the docs, the release notes, the reactwg discussion and the official example, but they do not answer the questions above. Since concurrency-related bugs are hard to reproduce or fix, it would be great if there were some rules of thumb (like the rules of hooks) that keep transition code always correct.
In general, what code will benefit from startTransition, and what is needed to avoid glitches?
The bulk of my page is essentially a list of 40-100~ish components, each containing about 11 KB of data (in JSON format). The problem is that 100 times 11 KB makes 1.1 MB, which seems to be a little too memory-intensive to store in the Redux state for browsers on older mobile devices. It makes my gorgeous CSS animations look choppy and buttons take about a second to toggle on/off state.
Because each component is exactly 148px tall, my first thought was to use a virtual list (Virtuoso) which only renders as many items at a time as can fit on the screen (which is like 5-8 tops). This made the first time render much faster, but did nothing to make animations smoother, which definitely confirmed it's mostly a memory issue.
So, if I can't store and keep all my data in the state object, then I need to do something similar to Virtuoso and only keep as much data as I need to render the current screen. Now, I'm not exactly sure how Redux works internally, but if the state is immutable, doesn't that mean that everything that ever goes there stays there? And wouldn't that mean I'm trying to do something impossible, or at the very least anti-pattern?
Oh, and to make things worse, the data will need to be updated every 3 seconds, which means a component will sometimes disappear, only to reappear with the next update. I haven't tested how this would affect Virtuoso scrolling yet, but I don't exactly expect perfect plug-and-play behaviour.
I would appreciate ideas on how to solve this with redux and (possibly) its middleware, since it's the only architecture I'm familiar with atm and switching to e.g. Flux would mean having to learn it from scratch AND rewrite about 2000 lines of redux code.
I have never used Virtuoso, react-virtualized or react-window, but this looks like an issue that you can tackle with one of those libraries.
Do other parts of your React application care about the data you are rendering in the list?
If not, put the data in this component's local state and not in Redux.
If they do, maybe try storing this huge list in localStorage when the component mounts, removing it when the component unmounts, and using the component's local state to hold start/end indexes that pick out a slice of the data.
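A rough sketch of that idea, assuming a hooks-based component; the 'bigList' key, the slice size, and the VirtualList/onRangeChange names are placeholders of mine, not Virtuoso's actual API:

import React from 'react';

function ListContainer() {
  const [range, setRange] = React.useState({ start: 0, end: 20 });
  const [visibleItems, setVisibleItems] = React.useState([]);

  React.useEffect(() => {
    // The full list lives in localStorage, not in Redux or component state;
    // only the slice that is currently on screen is kept in memory.
    const all = JSON.parse(localStorage.getItem('bigList') || '[]');
    setVisibleItems(all.slice(range.start, range.end));
  }, [range]);

  return (
    <VirtualList
      items={visibleItems}
      onRangeChange={(start, end) => setRange({ start, end })}
    />
  );
}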
See also: https://blog.jakoblind.no/is-using-a-mix-of-redux-state-and-react-local-component-state-ok/
There are around three hundred components rendered inside the wrapper component, and it's taking too much time to render. But I need a thousand components to be rendered inside the wrapper container. How can I achieve this without performance issues while rendering the components?
(Image: the rendering time taken by 300 components, which is too much.)
If your list scrolls and not all of your components are in the viewport at the same time, you can use the proxy pattern.
There is an Ember addon called ember-in-viewport that detects whether your component is in the viewport or not. Using it, you can implement the proxy pattern.
Here is a sample twiddle. In application.hbs, if you use my-proxy-component instead of my-component, page rendering is nearly 3 times faster.
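Roughly, the proxy component might look like this; the mixin and its didEnterViewport hook are taken from ember-in-viewport's docs of that era, so double-check them against the version you install:

// components/my-proxy-component.js
import Ember from 'ember';
import InViewportMixin from 'ember-in-viewport';

export default Ember.Component.extend(InViewportMixin, {
  showRealComponent: false,

  didEnterViewport() {
    // Swap the cheap placeholder for the real, expensive component
    // only when this proxy scrolls into view.
    this.set('showRealComponent', true);
  }
});

// The corresponding template renders {{my-component}} only when
// showRealComponent is true, and a fixed-height placeholder otherwise.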
These tips are kinda hacky but might help:
You might want to lazy-load some of the components: for example, load the most critical ones first, then load the rest by flipping a computed property after a fixed timeout in the didRender event (during idle time).
For example:
onDidRender: Ember.on('didRender', function() {
  // Wait until the initial render has settled, then flip the flag that
  // lets the template render the non-critical (lazy) items.
  Ember.run.later(() => {
    this.set('displayLazyItems', true);
  }, 2000);
})
Another thing you might want to do is inline the markup instead of creating a new component for small items; rendering lots of tiny components can take some serious time.
Record a timeline and make sure the cost is actually coming from render time; sometimes it's really JavaScript execution or loading that is slowing things down.
Lastly, Ember 2.10 includes Glimmer 2; this new render engine can cut render time by up to 50%, so you might want to consider upgrading.
DOM blocking is something many people not familiar with JavaScript's strictly single-threaded synchronous execution model find out about the hard way, and it's usually just something we want to work around somehow (using timeouts, web-workers, etc). All well and good.
However, I would like to know if blocking of the actual user-visible rendering is something you can actually rely on. I'm 90% sure it is de facto the case in most browsers, but I am hoping this isn't just a happily consistent accident. I can't seem to find any definitive statements from DOM specifications or even vendor documentation like MDN.
What worries me slightly is that while changes to the DOM are indeed not visible looking at the page, the internal DOM geometry (including CSS transforms and filters) does actually update during synchronous execution. For example:
console.log(element.getBoundingClientRect().width);
element.classList.add("scale-and-rotate");
console.log(element.getBoundingClientRect().width);
element.classList.remove("scale-and-rotate");
... will indeed report two different width values, though the page does not appear to flash. Synchronously waiting after the class is added (using a while loop) doesn't make the temporary changes visible either. Doing a Timeline trace in Chrome reveals that internally paint and re-paint is taking place just the same, which makes sense...
My concern is that, lacking a specific reason not to, some browsers, say those dealing with underpowered mobile CPUs, may choose to actually reflect those internal calculations in the user-visible layout during that function's execution, which would result in an ugly "flash" during such temporary operations. So, more concretely, what I'm asking is: do they have a specific reason not to?
(If you are wondering why I care about this at all, I sometimes need to measure calculated dimensions using getBoundingClientRect for elements in a certain state to plan out spacing or animations or other such things, without actually putting them in that state or animating them first...)
According to various sources, getting the position or size of a DOM element will trigger a reflow of the output if necessary, so that the returned values are correct. As a matter of fact, reading the offsetHeight of an element has become a way to force a reflow, as reported by Alexander Skutin and Daniel Norton.
Paul Irish gives a list of several actions that cause a reflow. Among them are these element box metrics methods and properties:
elem.offsetLeft, elem.offsetTop, elem.offsetWidth, elem.offsetHeight, elem.offsetParent
elem.clientLeft, elem.clientTop, elem.clientWidth, elem.clientHeight
elem.getClientRects(), elem.getBoundingClientRect()
Stoyan Stefanov describes strategies used by browsers to optimize reflows (e.g. queueing DOM changes and performing them in batches), and adds the following remark:
But sometimes the script may prevent the browser from optimizing the reflows, and cause it to flush the queue and perform all batched changes. This happens when you request style information, such as:
offsetTop, offsetLeft, offsetWidth, offsetHeight
scrollTop/Left/Width/Height
clientTop/Left/Width/Height
getComputedStyle(), or currentStyle in IE
All of these are essentially requesting style information about a node, and any time you do it, the browser has to give you the most up-to-date value. In order to do so, it needs to apply all scheduled changes, flush the queue, bite the bullet and do the reflow.
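To make the batching point concrete, here is a small sketch; elements is assumed to be an array of nodes already in the document:

// Slow: each offsetWidth read forces the browser to flush the style
// change queued by the previous iteration, causing repeated reflows.
elements.forEach(el => {
  el.style.width = (el.offsetWidth + 10) + 'px';
});

// Faster: batch all reads first, then all writes, so the reflow queue
// is flushed at most once.
const widths = elements.map(el => el.offsetWidth);
elements.forEach((el, i) => {
  el.style.width = (widths[i] + 10) + 'px';
});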
There is nothing in JavaScript related to concurrency that is anything but de facto. JS simply does not define a concurrency model. Everything is a happy accident or the result of years of consensus.
That said, if your function does not make any calls to weird things like XMLHttpRequest or "alert" or something like that, you can basically treat it as single-threaded with no interrupts.
Currently, I am rendering WebGL content using requestAnimationFrame which runs at (ideally) 60 FPS. I'm also concurrently scheduling an "update" process, which handles AI, physics, and so on using setTimeout. I use the latter because I only really need to update objects roughly 30 times per second, and it's not really part of the draw sequence; it seemed like a good idea to save the remaining CPU for actual render passes, since most of my animations are fairly hardware intensive.
My question is one of best practices. setTimeout and setInterval are not particularly kind to battery life and CPU consumption, especially when the browser is not in focus. On the other hand, using requestAnimationFrame (or tying the updates directly into the existing render phase) will potentially enforce far more updates every second than are strictly necessary, and may stop updating altogether when the browser is not in focus or at other times the browser deems unnecessary for "animation".
What is the best course of action for updating, but not rendering content?
setTimeout and setInterval are not particularly kind to battery life and CPU consumption
Let's be honest: Neither is requestAnimationFrame. The difference is that RAF automatically turns off when you leave the tab. That behavior can be emulated with setTimeout if you use the Page Visibility API, though, so in reality the power consumption problems between the two are about on par if used intelligently.
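For instance, a sketch of that emulation using the Page Visibility API; the ~30 Hz interval and function names are my own choices, and update() is assumed to exist elsewhere:

let updateTimer = null;

function startUpdates() {
  if (updateTimer === null) {
    updateTimer = setInterval(update, 1000 / 30); // ~30 updates per second
  }
}

function stopUpdates() {
  clearInterval(updateTimer);
  updateTimer = null;
}

// Pause the update loop while the tab is hidden and resume when it is
// visible again, mirroring what requestAnimationFrame does automatically.
document.addEventListener('visibilitychange', () => {
  if (document.hidden) {
    stopUpdates();
  } else {
    startUpdates();
  }
});

startUpdates();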
Beyond that, though, setTimeout/setInterval is perfectly appropriate for your case. The one thing to be aware of is that you'll be hard-pressed to keep it perfectly in sync with the render loop. There will be cases where you draw one extra time before your animation update hits, which can lead to minor stuttering. If you're rendering at 60 Hz and updating at 30 Hz it shouldn't be a big issue, but you'll want to be aware of it.
If staying perfectly in sync with the render loop is important to you, you could simply have an if (frameCount % 2) { updateLogic(); } at the top of your RAF callback, which effectively limits your updates to 30 Hz (every other frame) and keeps them always in sync with the draw.
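A sketch of that approach, assuming updateLogic() and render() are defined elsewhere:

let frameCount = 0;

function tick() {
  requestAnimationFrame(tick);
  frameCount++;
  if (frameCount % 2 === 0) {
    updateLogic(); // AI, physics, etc. at ~30 Hz (every other frame)
  }
  render(); // WebGL draw pass at the full display rate
}

requestAnimationFrame(tick);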