I use Aurelia on a daily basis. Recently, I have been looking into using Redux (i.e. I built a couple of small trial apps using Aurelia+Redux) and have been really impressed (my development workflow and clarity of reasoning about my application are greatly improved). I have decided that I would like to start working it into real-world applications.
With that said, I have a concern regarding performance (I have looked around at posts about performance but haven't seen my issue addressed directly). I think this question is not specific to Aurelia; it is more a question about Redux and using it with non-React libraries.
Let me preface my question with my understanding of Redux (perhaps my question is really arising from a flawed understanding?). Essentially, the way I understand Redux is that there is a store (a javascript object) and a reducing function. Now, the reducing function may be defined as a tree of functions (each responsible for modifying a specific branch of the overall store), however, in reality, Redux receives a single reducing function (it has no way of knowing how many functions were composed to create this single function).
The way I am using Redux is like so (just an example):
import { inject } from 'aurelia-framework';
import { Store } from './store';         // wrapper exposing the Redux store (path illustrative)
import { UPDATE_TODO } from './actions';  // action type constant (path illustrative)

@inject(Store)
export class TodosListCustomElement {
  constructor(store) {
    this.store = store;
  }

  activate() {
    this.update();
    // Re-read the slice of state this component cares about on every dispatch.
    this.unsubscribe = this.store.subscribe(this.update.bind(this));
  }

  deactivate() {
    this.unsubscribe();
  }

  update() {
    const newState = this.store.getState();
    this.todos = newState.todos;
  }

  toggleCompleted(index) {
    this.store.dispatch({
      type: UPDATE_TODO,
      payload: {
        index,
        values: {
          isCompleted: !this.todos[index].isCompleted
        }
      }
    });
  }
}
Essentially, each component down the component tree subscribes itself to store changes and refreshes the data it needs from the store.
My concern is that there seems to be a lot happening on each dispatched action. For example, say I have a large application with a similarly large store and reducer tree. Suppose there is some throttled textbox that dispatches changes to a single text field (in one item of a list) in the store every 250 ms. That would mean that as a user types, the entire reducer function is executed every 250 ms (which could mean executing quite a large number of its descendant reducers), and all of the subscribing functions are executed as well. Basically, it seems like there is a lot of overhead to change even the smallest part of the store.
Contrast this with a standard binding (in Aurelia) where there is just a single bound function (mutation observer) that needs to execute every 250ms to update the model...
Since I am new to Redux, I guess there is a good chance that I am naively misunderstanding something etc. I apologize in advance and hope to be corrected/put on the right track (because my limited experience using Redux has been very enjoyable).
Thanks in advance
You're actually describing the situation pretty well, on multiple levels.
First, the React-Redux bindings do a lot of work to ensure that connected components only actually re-render when some of the data relevant to a given component has changed. This is done by having a connected component supply a function called mapStateToProps, which extracts the data that component wants from the store state. The wrapper components generated by connect will re-run their mapState functions after each dispatch, and do shallow comparisons between the latest returned values and the previous returned values to see if the data has changed. That cuts down on the amount of actual UI updates that need to be done.
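For example, a minimal sketch of a connected component might look like this (the TodoList component and the state shape are hypothetical):
import { connect } from 'react-redux';
import TodoList from './TodoList'; // hypothetical presentational component

// connect re-runs this after every dispatch and shallow-compares the returned
// object with the previous one; TodoList only re-renders if a field changed.
const mapStateToProps = (state) => ({
  todos: state.todos,
});

export default connect(mapStateToProps)(TodoList);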
There are also tradeoffs involved in how you handle connected forms. Yes, dispatching an action for every single keystroke is likely to be inefficient overall. I personally use a React form wrapper component that buffers those text input changes locally, and only dispatches a debounced Redux action after the user is done typing.
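As a rough sketch of that idea (the component, the hypothetical updateField action creator, and the 300 ms delay are all illustrative, not the actual wrapper):
import React, { useState, useRef } from 'react';
import { useDispatch } from 'react-redux';
import { updateField } from './todoActions'; // hypothetical action creator

export function BufferedTextInput({ index, initialValue }) {
  // Keep every keystroke in local component state...
  const [value, setValue] = useState(initialValue);
  const dispatch = useDispatch();
  const timer = useRef(null);

  const handleChange = (e) => {
    const next = e.target.value;
    setValue(next);
    // ...and only dispatch to Redux once the user pauses typing.
    clearTimeout(timer.current);
    timer.current = setTimeout(() => {
      dispatch(updateField(index, next));
    }, 300);
  };

  return <input value={value} onChange={handleChange} />;
}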
The React-Redux bindings were recently rewritten, and are now primarily based on memoized selector functions rather than having most of the logic inside of React components. I don't know how the Aurelia bindings are put together, but I suspect that they could probably leverage a lot of the work that's been done to optimize the React bindings.
You may be interested in some of the articles I have on Redux-related performance. See the Redux FAQ question at http://redux.js.org/docs/faq/Performance.html#performance-scaling , as well as the articles in my React/Redux links list at https://github.com/markerikson/react-redux-links/blob/master/react-performance.md#redux-performance .
Related
Can you globally instantiate a class so that it will be reliable in react-native? i.e.
// logs.ts
const instance = new Instance()
export { instance }

// something.tsx
import { instance } from './logs'

const Component = () => {
  instance.method()
  ...
}
If method were to increment a property by 1, would that property always have been incremented the number of times method was called throughout the project?
The primary benefit here is simplicity. For example, it's easier to define
class SomeClass {
  constructor(properties) { this.properties = properties || {} }
  addProperty(key, value) { this.properties[key] = value }
}
than it is to do the equivalent in redux. Why don't people do the above more often?
Just changing the value on some object will not result in a state change or cause a component to re-render. You still need a state provider to hold that state. If you want to change the state of that provider you'll need to call setState (or dispatch with useReducer), which means you'll need to pass that function down to all of its children. As your app grows larger you'll definitely want or need useReducer, and perhaps even several providers, at which point you'll be re-implementing Redux, which is only about 200 lines of code anyway. So the question becomes: why re-implement Redux - a popular library that most people know exactly how to use and that has plenty of documentation - in favor of a homegrown version that doesn't provide much additional value?
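To make the contrast concrete, here is a minimal hypothetical sketch: mutating a plain shared object is invisible to React, while dispatching through useReducer produces a new state object and a re-render.
import React, { useReducer } from 'react';

const instance = { count: 0 }; // plain shared object, not tracked by React

function reducer(state, action) {
  switch (action.type) {
    case 'increment':
      return { ...state, count: state.count + 1 };
    default:
      return state;
  }
}

function Counter() {
  const [state, dispatch] = useReducer(reducer, { count: 0 });

  // instance.count changes, but React never re-renders because of it.
  const mutate = () => { instance.count += 1; };

  // dispatch produces a new state object, so React re-renders with it.
  const increment = () => dispatch({ type: 'increment' });

  return (
    <div>
      <span>{state.count}</span>
      <button onClick={mutate}>Mutate plain object (no update)</button>
      <button onClick={increment}>Dispatch (updates)</button>
    </div>
  );
}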
In Redux a primary benefit is the tooling: redux-logger, redux-thunk, the Redux DevTools with time travel, and so on, which let you replay the changes made or undo them. Sure, it is easy to modify an object, but using Redux lets you testably and consistently see how an object (the Redux state) changes over time, and undo those changes. You can test that action creators return the correct actions. You can separately test that, given specific actions, the reducer behaves as expected, and replay that with a mock store. In short, with Redux you get the enterprise version of the simple class you demoed, supported by a large community of experts who have helped improve and implement it.
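For example, testing a reducer can be as simple as calling it with a known state and action and asserting on the plain object it returns (the todosReducer here is made up, and a real test would use a test runner rather than console.assert):
// Hypothetical reducer under test
function todosReducer(state = [], action) {
  switch (action.type) {
    case 'ADD_TODO':
      return [...state, { text: action.text, completed: false }];
    default:
      return state;
  }
}

// No store or UI needed: given a state and an action, assert on the result.
const result = todosReducer([], { type: 'ADD_TODO', text: 'write tests' });
console.assert(result.length === 1 && result[0].completed === false);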
Although you can do this, it breaks away from the point of Redux. Redux is meant to be a centralized state management store, and therefore allows components to:
access shared state from a single location and analyze shared state, because it comes from one place
track the history of actions taken to get the state there
This makes maintaining, debugging and collaborating on a codebase much easier.
Doing the first option, all these benefits are lost.
That said, if you have multiple components in one file, and you want them all to share that same global state (not exporting it), using the class constructor to do so isn't a big deal, as long as the project isn't being worked on by other developers.
However, it would make more sense in that case to make an overall component class and pass state down, as that is the point of React. Declaring a class outside and trying to manage state in the first way is an imperative approach, which is what React steers developers away from (i.e. there is a better, declarative way, like creating a hierarchy of components). That way, components re-render on a change in value, and no weird bugs arise.
The official RFC
Here is one of its examples involving effects:
import { effectScope, onScopeDispose } from 'vue';

function createSharedComposable(composable) {
  let subscribers = 0
  let state, scope

  const dispose = () => {
    // Stop the shared scope once the last subscriber is disposed.
    if (scope && --subscribers <= 0) {
      scope.stop()
      state = scope = null
    }
  }

  return (...args) => {
    subscribers++
    if (!state) {
      // Run the composable once, inside a detached effect scope.
      scope = effectScope(true)
      state = scope.run(() => composable(...args))
    }
    onScopeDispose(dispose)
    return state
  }
}
I know what it will do: it will force all components to perform the calculation only once when we use the useMouse API.
But I can't understand the concept of effect, and how it works.
Especially some APIs for effects, like getCurrentScope. I tried to inspect the return value of getCurrentScope, but I haven't gained anything from it.
Please help me!
effect is a common term used in reactive frameworks (both VueJS and React) to refer to (I believe) a side effect. If you are familiar with functional programming, you probably already know that it is called a side effect because it is not a "pure function": it mutates shared or global state.
Ignore the academic terminology; effect in these systems merely refers to any application-defined method that does something bespoke, like
const foo = () => {
  // I do something bespoke
}
The meaning of effect is really that broad. What your method actually does in its body does not matter to the framework. All the framework knows is that foo does some unstructured things. What VueJS does in addition is monitor, through its reactivity system, whether your effect depends on any reactive data. If it does, VueJS will re-run your effect every time the data it depends on changes.
effect (or side effect) isn't something bad or special or advanced. In fact, your application is all about making effects/side effects. For example, the most common effect in a VueJS application is DOM manipulation. It is so common that VueJS extracts it into a different abstraction: the template. Behind the scenes, templates are compiled into render functions - which look a lot like the foo above - that get re-evaluated every time some dependent reactive data changes. That is how VueJS keeps your UI up to date.
At the other extreme of common effects are the really bespoke ones, e.g. you just want to do some old-fashioned imperative things (jQuery style) whenever your data changes. VueJS lets you do that through watchEffect: you give VueJS a lambda, and VueJS will blindly call it every time its dependencies change, without asking what it is doing.
VueJS discovers your dependency on reactive data by running your effect. As long as your effect accesses any reactive data (say, yourState.bar) during its execution, VueJS will notice that and record a dependency of your effect on yourState.bar.
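A minimal sketch of that tracking, using reactive and watchEffect (the state shape here is made up):
import { reactive, watchEffect } from 'vue';

const yourState = reactive({ bar: 0, unrelated: 'ignored' });

// While this effect runs, it reads yourState.bar, so Vue records
// a dependency of the effect on that property.
watchEffect(() => {
  console.log('bar is now', yourState.bar);
});

yourState.bar++;           // re-runs the effect
yourState.unrelated = '!'; // does not, because it was never read inside the effect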
In its essence, the reactivity system is just a modern version of the good old observable/subscriber pattern. Reactive states are the observables, and effects are the subscribers/observers. If you look beyond the magic layer and think of VueJS in terms of the subscriber pattern, there is one issue it cannot avoid: wherever you have subscribe, you have to deal with unsubscribe, otherwise you will have memory or resource leaks, simply because you keep holding on to subscribers (which in turn hold on to other things) and nothing can be freed. This unsubscribe part is what the RFC calls "effect dispose".
Typically you will have two challenges when dealing with this unsubscribing/disposing/cleaning up/cancelling business:
deciding when to unsubscribe/dispose
knowing how to unsubscribe/dispose
In a typical reactive framework, both of the above are the application's responsibility. As the application dev, you are the only one who knows when a subscription is no longer needed and how to reverse any additional resource allocation you made at the time of creating the subscription.
But in a typical VueJS app you rarely need to manually deal with any kind of cleanup (stopping the DOM patching, a watch, or a computed, etc.). That is because VueJS takes care of it automatically. The reactive effects established within a component's setup method will be automatically disposed of (whatever is needed for a proper clean-up) when the component is unmounted. How does that happen? Let's just say some other magic exists inside VueJS to associate all your effects with the life cycle of the corresponding component. Technically, as the RFC says, that magic is effectScope.
Conceptually, each component creates an effectScope. All the effects defined inside the component's setup method will be associated with that scope. When the component is destroyed, VueJS automatically destroys the scope, which in turn cleans up the associated effects.
The RFC proposes to make effectScope a public API so that people can use it without a VueJS component. This is possible because Vue 3 is built in a modular way: you can use Vue's reactivity module without using the rest of VueJS. But without the underlying effectScope, you would then have to manually dispose of all your effects.
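A rough sketch of what using that public API outside a component could look like (illustrative only):
import { effectScope, reactive, watchEffect } from 'vue';

const scope = effectScope();
const state = reactive({ count: 0 });

// Effects created inside run() are collected by the scope.
scope.run(() => {
  watchEffect(() => console.log('count:', state.count));
});

state.count++; // the effect re-runs as usual

// One call disposes every effect created inside the scope,
// which is what a component normally does for you on unmount.
scope.stop();

state.count++; // no longer logs anything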
What would making a coffee look like in code?
snowingfox.getCupsOutOfCupboard();
snowingfox.getCoffeeOffShelf();
snowingfox.getMilkOutOfFridge();
snowingfox.boilingWater();
// ...
Now imagine each morning I wake up and have a coffee. You could say I'm making
a coffee in reaction to waking up. How would I run this code repeatedly in
response to an isMorning variable becoming true?
This is what effect solves in Vue 3. It wraps around a chunk of
code that should be executed in response to reactive data being changed. In practice you most likely won't use effect directly, and will instead rely on
things like computed and watchEffect (which use effect in their
implementations).
In short: effect is one part of Vue's reactivity system and is Vue's way of
marking and locating code that should be re-run in response to data updates.
Docs: https://v3.vuejs.org/guide/reactivity.html
Course: https://www.vuemastery.com/courses/vue-3-reactivity/vue3-reactivity/
Here's how the initial code could be implemented to be reactive:
import { ref, watchEffect } from 'vue';

const isMorning = ref(false);

watchEffect(() => {
  if (!isMorning.value) return;
  snowingfox.getCupsOutOfCupboard();
  snowingfox.getCoffeeOffShelf();
  snowingfox.getMilkOutOfFridge();
  snowingfox.boilingWater();
});
From what I can tell, redux will notify all subscribers to the store when anything in the store changes no matter if it's a subscription to a deeply nested leaf or a subscription to the top level of the state.
In an application where you follow the guiding principle:
many individual components should be connected to the store instead of just a few... [docs]
You could end up with lots of listeners and potentially performance issues?
Disclaimer: I understand that the selector functions will only cause a re-render if the result of the selector function changes. I understand that just because the listener function is evaluated, it doesn't mean the subscribing component will re-render. I understand that evaluating a selector function is cheap compared to rendering a React component.
However, I just would like to confirm that this is indeed how redux works?
e.g. given the following example listener
const result = useSelector(state => state.a.b.c.d.e.f.g.h.i.j.k)
if we update some other value down some other path, not relevant to the above listener e.g.
const exampleReducer = (state) => {
  return { ...state, asdf: 'asdf' }
}
From my understanding, all listeners, including the example above, will be invoked.
For context, my actual use case is I'm using https://easy-peasy.now.sh/ which is built on redux. To be clear, I don't have any current performance issues in production related to binding too many listeners. However, each time I attach a listener via the useStoreState hook, I'm wondering whether I should minimize binding yet another listener to the store.
Also if you're curious, inspired by this thinking, I implemented a state tree which only notifies the relevant listeners.
Perhaps this is a premature optimization for a state library... but if so why? Is there an assumption that applications using redux will have simple and fast selectors and that the application bottleneck will be elsewhere?
I'm a Redux maintainer and author of React-Redux v7. The other couple answers are actually pretty good, but I wanted to provide some additional info.
Yes, the Redux store will always run all store.subscribe() listener callbacks after every dispatched action. However, that does not mean that all React components will re-render. Per your useSelector(state => state.a.b.c.d) example, useSelector will compare the current selector result with the previous selector result, and only force this component to re-render if the value has changed.
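For example, a component like this (reusing your hypothetical state shape) has its selector run after every dispatch, but it only re-renders when that one leaf value is no longer === the previous result:
import React from 'react';
import { useSelector } from 'react-redux';

function DeepLeafDisplay() {
  // The selector runs after every dispatched action, but the component only
  // re-renders when the selected value is different from the previous one.
  const leaf = useSelector(state => state.a.b.c.d.e.f.g.h.i.j.k);
  return <span>{leaf}</span>;
}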
There are multiple reasons why we suggest "connecting more components to read from Redux":
Each component will be reading a smaller scoped value from the Redux store, because there's less overall data that it cares about
This means that fewer components will be forced to re-render after a given action because there's less of a chance that this specific piece of data was actually updated
Conversely, it means that you don't have cases where one large parent component is reading many values at once, is always forced to re-render, and thus always causes all of its children to re-render as well.
So, it's not the number of listener callbacks that's really the issue - it's how much work those listener callbacks do, and how many React components are forced to re-render as a result. Overall, our performance tests have shown that having more listeners reading less individual data results in fewer React components being re-rendered, and the cost of more listeners is noticeably less than the cost of more React components rendering.
For more info, you should read my posts on how both React and React-Redux work, which cover these topics in extensive detail:
A (Mostly) Complete Guide to React Rendering Behavior
The History and Implementation of React-Redux
ReactNext 2019: A Deep Dive into React-Redux
You may also want to read the GitHub issue that discussed the development of React-Redux v7, where we dropped the attempt to use React Context for state propagation in v6 because it wasn't sufficiently performant, and returned to using direct store subscriptions in components in v7.
But yes, you're worrying too much about performance ahead of time. There's a lot of nuances to how both React and React-Redux behave. You should actually benchmark your own app in a production setting to see if you actually have any meaningful performance issues, then optimize that as appropriate.
From what I can tell, redux will notify all subscribers to the store when anything in the store changes no matter if it's a subscription to a deeply nested leaf or a subscription to the top level of the state.
Yes, all subscribers are notified. But notice the difference between Redux and its React-Redux utils.
You could end up with lots of listeners and potentially performance issues?
With React-Redux you subscribe to the (Redux) store by providing a selector (useSelector/connect).
By default, every subscribed component in React-Redux would re-render whenever the store changes; to handle this you pass a selector, which lets the component bail out of renders when its selected portion hasn't changed.
But for Redux itself:
The notification itself is handled by Redux (outside React).
Redux doesn't handle the "deeply nested leaf" part (the React Context API handles that as part of the React-Redux implementation); it doesn't know about locations in the tree - it just calls callbacks.
The notifications are batched in a while loop outside the React context (optimized).
// There is a single Subscription instance per store
// Code inside Subscription.js
notify() {
  batch(() => {
    let listener = first
    while (listener) {
      listener.callback()
      listener = listener.next
    }
  })
}
In conclusion, if it's not a case of premature optimization:
Premature optimization is spending a lot of time on something that you
may not actually need. “Premature optimization is the root of all
evil” is a famous saying among software developers.
All subscribers in Redux will be notified, but that by itself does not influence the UI.
All subscribed components will be re-rendered only if their portion of the state changed (as determined by their selectors) - that is what influences the UI. Therefore, when thinking about optimizations, you should subscribe to small, cheaply comparable portions of the store.
I assume you are talking about React components that get state from Redux (per the tags in your question) using react-redux. The react-redux tag is missing, but that is what is mostly used, and it is used in the standard create-react-app template.
You can use the useSelector hook or mapStateToProps with connect. Both more or less work the same way.
If an action causes a new state to be created, then all functions passed to useSelector or mapStateToProps will be executed, and the component will be re-rendered only when they return a value that is not referentially the same as the previous value. For mapStateToProps it works a little differently, as it does a shallow-equality comparison on the returned object.
You can use reselect to compose selectors, reuse the logic for getting certain branches, and/or adapt the data returned from the state, and to memoize the adapted data so JSX is not needlessly re-created.
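As an illustrative sketch with reselect (the state shape and the derived data are made up):
import { createSelector } from 'reselect';

const selectTodos = state => state.todos;

// Recomputes only when state.todos changes identity; otherwise the memoized
// result is returned, so downstream comparisons and JSX stay stable.
const selectCompletedTodos = createSelector(
  [selectTodos],
  todos => todos.filter(todo => todo.completed)
);

// usage in a component: const completed = useSelector(selectCompletedTodos);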
Note that when a component re-creates JSX, that does not mean the DOM is re-created, since React will do a virtual DOM comparison of the current JSX with the previous JSX; but you can optimize further by not re-creating JSX at all, using memoized selectors (via reselect) and pure components.
The worst thing you can do is pass a handler function that is re-created on every render, since that will cause JSX to be re-created (the props changed) and DOM updates to be performed (the new handler function makes the virtual DOM comparison fail). For example:
{people.map((person) => (
  <Person
    key={person.id}
    onClick={() => someAction(person.id)}
    person={person}
  />
))}
You could prevent this from happening by using the useCallback hook from React, or by creating a PersonContainer that creates the () => someAction(person.id) callback only when it actually re-renders.
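One possible shape of the useCallback version (this assumes Person is a pure/memoized component that is changed to call onClick(person.id) itself, which is a change to its props contract):
// inside a function component, with useCallback imported from 'react'
const handlePersonClick = useCallback((id) => someAction(id), [someAction]);

// ...

{people.map((person) => (
  <Person
    key={person.id}
    onClick={handlePersonClick} // same function reference on every render
    person={person}
  />
))}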
This is not exactly an answer to the original question, but it is related enough to be mentioned.
Sometimes, in very complicated React systems, Redux notifications may actually cause trouble. If every other optimization technique seems exhausted, you might want to hack Redux itself a little bit.
See:
https://www.npmjs.com/search?q=redux-notification-enhancer
This enhancer reduces the number of renders in React by reducing the notifications coming from Redux. It does so by enabling throttling and/or marking certain actions as passive (updating state but not causing a render). Both optimization techniques increase complexity along with performance - use with care.
So I started learning React a week ago and I inevitably got to the problem of state and how components are supposed to communicate with the rest of the app. I searched around and Redux seems to be the flavor of the month. I read through all the documentation and I think it's actually a pretty revolutionary idea. Here are my thoughts on it:
State is generally agreed to be pretty evil and a large source of bugs in programming. Instead of scattering it all throughout your app Redux says why not just have it all concentrated in a global state tree that you have to emit actions to change? Sounds interesting. All programs need state so let's stick it in one impure space and only modify it from within there so bugs are easy to track down. Then we can also declaratively bind individual state pieces to React components and have them auto-redraw and everything is beautiful.
However, I have two questions about this whole design. For one, why does the state tree need to be immutable? Say I don't care about time travel debugging, hot reload, and have already implemented undo/redo in my app. It just seems so cumbersome to have to do this:
case COMPLETE_TODO:
  return [
    ...state.slice(0, action.index),
    Object.assign({}, state[action.index], {
      completed: true
    }),
    ...state.slice(action.index + 1)
  ];
Instead of this:
case COMPLETE_TODO:
  state[action.index].completed = true;
Not to mention I am making an online whiteboard just to learn and every state change might be as simple as adding a brush stroke to the command list. After a while (hundreds of brush strokes) duplicating this entire array might start becoming extremely expensive and time-consuming.
I'm ok with a global state tree that is independent from the UI that is mutated via actions, but does it really need to be immutable? What's wrong with a simple implementation like this (very rough draft. wrote in 1 minute)?
var store = { items: [] };

export function getState() {
  return store;
}

export function addTodo(text) {
  store.items.push({ text: text, completed: false });
}

export function completeTodo(index) {
  store.items[index].completed = true;
}
It's still a global state tree mutated via actions emitted but extremely simple and efficient.
Isn't Redux just glorified global state?
Of course it is. But the same holds for every database you have ever used. It is better to treat Redux as an in-memory database - which your components can reactively depend upon.
Immutability makes checking whether any sub-tree has been altered very efficient, because it reduces to an identity check.
Yes, your implementation is efficient, but the entire virtual DOM will have to be re-rendered each time the tree is manipulated in any way.
If you are using React, it will eventually do a diff against the actual DOM and perform minimal batch-optimized manipulations, but the full top-down re-rendering is still inefficient.
For an immutable tree, stateless components just have to check whether the subtree(s) they depend on differ in identity from the previous value(s), and if they don't, re-rendering can be skipped entirely.
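Sketched by hand, that identity check is a single === comparison on the sub-tree a component depends on (store and renderTodoList are stand-ins here; React-Redux and React.memo do the equivalent for you):
let previousTodos;

store.subscribe(() => {
  const todos = store.getState().todos;
  // With immutable updates, an untouched sub-tree keeps its identity,
  // so one === comparison tells us whether anything below it changed.
  if (todos === previousTodos) return; // nothing to re-render
  previousTodos = todos;
  renderTodoList(todos); // hypothetical render function
});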
Yes it is!!!
Since there is no governance of who is allowed to write a specific property/variable/entry to the store, and since practically you can dispatch any action from anywhere, the code tends to become harder to maintain and even turn into spaghetti as your code base grows and/or is managed by more than one person.
I had the same questions and issues with Redux when I started using it, so I created a library that fixes these issues.
It is called Yassi:
Yassi solves the problems you mentioned by defining a globally readable and privately writable store. This means that anyone can read a property from the store (as in Redux, but simpler).
However, only the owner of the property - meaning the object that declares the property - can write/update that property in the store.
In addition, Yassi has other perks, such as zero boilerplate to declare an entry in the store by using annotations (use #yassit('someName')).
Updating the value of that entry does not require actions/reducers or other cumbersome code snippets; instead, you just update the variable as you would on a regular object.
Having spent some time working with flux (both 'vanilla' and with various frameworks including alt and fluxible), I am left with a question about best practice with regard to loading the initial state of components - more specifically, about components directly accessing the store to do it.
The flux 'model' prescribes a unidirectional flow of data from Action > Dispatcher > Store > View in a loop, yet it seems this convention is eschewed when loading the initial state of components: most docs/tutorials contain examples where, rather than firing an action to get the data, the component calls a function on the store directly (examples below).
It seems to me that components should have little/no knowledge of the store, only of the actions that they can fire, so introducing this link seems both unintuitive and potentially dangerous, as it may encourage future developers to jump straight to the store from the component instead of going via the dispatcher. It also runs counter to the 'Law of Demeter', which Flux is supposed to adhere to very strongly.
What is best practice for this? Is there a reason that this always seems to be the case? It's quite possible that I have missed something fundamental, so please let me know if so!
Thanks.
Examples of components calling the store directly.
Flux React example from the fb flux repo example chat app (https://github.com/facebook/flux/tree/master/examples/flux-chat)
MessageSection.react.js
getInitialState: function() {
  return getStateFromStores();
},

function getStateFromStores() {
  return {
    messages: MessageStore.getAllForCurrentThread(),
    thread: ThreadStore.getCurrent()
  };
}
Another example from the same repo for the TODOapp
(https://github.com/facebook/flux/tree/master/examples/flux-todomvc)
TodoApp.react.js
function getTodoState() {
  return {
    allTodos: TodoStore.getAll(),
    areAllComplete: TodoStore.areAllComplete()
  };
}
Example of the alt implementation of the above todo app: (https://github.com/goatslacker/alt/tree/master/examples/todomvc)
TodoApp.js
function getTodoState() {
  return {
    allTodos: TodoStore.getState().todos,
    areAllComplete: TodoStore.areAllComplete()
  };
}
and finally the alt specific tutorial:
(https://github.com/goatslacker/alt-tutorial/blob/master/src/components/Locations.jsx)
Locations.js
componentDidMount() {
  LocationStore.fetchLocations();
},
It depends on what the structure of your app looks like. Often you want to fetch some data before showing anything to the user. What I have found to be good practice is to have a high-level component which, on mount, fires an action that fetches any initial data from an API. When this action is done fetching, it calls the store, which caches the data and then emits a change.
This change event then sets in motion the re-rendering of the whole application.
This way you keep the unidirectional data flow. The whole point of Flux is to extract the data-flow functionality out of components to keep them cleaner, to discourage components from communicating directly with each other, and to decrease the number of unnecessary properties that have to be passed around the application.
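A rough sketch of that pattern, in the same style as the examples above (the action creator, store, and their methods are hypothetical):
// App.react.js - a high-level component kicks off the initial fetch
var React = require('react');
var ItemActions = require('../actions/ItemActions'); // hypothetical
var ItemStore = require('../stores/ItemStore');      // hypothetical

var App = React.createClass({
  getInitialState: function() {
    return { items: [] };
  },
  componentDidMount: function() {
    ItemStore.addChangeListener(this._onChange);
    // Fire an action instead of reading the store directly; the action
    // fetches from the API, the store caches the data and emits a change.
    ItemActions.fetchItems();
  },
  componentWillUnmount: function() {
    ItemStore.removeChangeListener(this._onChange);
  },
  _onChange: function() {
    this.setState({ items: ItemStore.getAll() });
  },
  render: function() {
    return <div>{this.state.items.length} items loaded</div>;
  }
});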
In the examples, the initial state fetches some initial values from the store. This is a common way to get the initial value, but you could set it in the component as well. Both ways, I would say, are good practice. This does not mean that every component is free to fetch whatever it likes from the store.
Anyway, the goal is to keep the code and data flow as intuitive as possible with Flux. All of this is part of the reason why there are so many implementations of Flux.