I am looking for a way to turn a middleware on and off. I introduced tutorial functionality: I listen to what the user is doing with the UI by checking each action in a "guidance" middleware, and if the user clicks on the right place they move to the next step in the tutorial. However, this behaviour is only needed when tutorial mode is on. Any ideas?
const store = createStore(holoApp, compose(
  applyMiddleware(timestamp, ReduxThunk, autosave, guidance),
  window.devToolsExtension ? window.devToolsExtension() : f => f
));
For now my solution was to keep the "on" switch in a guidanceState reducer and dirty-check it in the middleware:
const guidance = store => next => action => {
  const result = next(action);
  const state = store.getState();
  const { guidanceState } = state;
  const { on } = guidanceState;
  if (on) {
    // ...
However, ~95% of the time tutorial mode would be off, so dirty-checking every action all the time feels a bit, well, dirty... ;) Any other ways?
Don't do stateful things in middleware (unless you have a good pattern for managing that state, like Sagas). Don't do stateful things with your middleware stack at all if you can avoid it. (If you must do so, @TimoSta's solution is the correct one).
Instead, manage your tours with a reducer:
const finalReducer = combineReducers({
  // Your other reducers
  tourState: tourReducer
});

function tourReducer(state = initialTourState, action) {
  switch (action.type) {
    case TOUR_LAST_STEP:
      return /* compose last tour step state here */;
    case TOUR_NEXT_STEP:
      return /* compose next tour step state here */;
    case TOUR_CLOSE:
      return null; // Clearing the state turns the tour off (combineReducers disallows returning undefined)
    default:
      return state;
  }
}
Then, in your application use the current state of tourState to move the highlighting, and if there is nothing in tourState, turn the tour off.
store.subscribe(() => {
  const state = store.getState();
  if (state.tourState) {
    tourManager.setTourStep(state.tourState);
  } else {
    tourManager.close();
  }
});
You don't have to use a stateful tour manager either - if you're using React it could just be a component that pulls out tourState with a connect wrapper and renders null if there is no state:
// waves hands vigorously
const TourComponent = (props) => {
  if (props.currentStep) return <TourStep {...props.currentStep} />;
  return null;
};
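A connect wrapper for that could look something like this (a sketch, assuming the tourState slice from the reducer above holds the current step):

const ConnectedTour = connect(
  state => ({ currentStep: state.tourState })
)(TourComponent);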
I don't know of any way to replace middlewares on the fly via redux's API.
Instead, you could create a completely new store with the old store's state as initial state and the new set of middlewares. This may work seamlessly with your application.
Three ideas you could consider:
Have the middleware listen for "GUIDANCE_START" and "GUIDANCE_STOP" actions. When those come through, toggle some internal flag, and don't actually pass them on to next (rough sketch after this list).
You could write a middleware that constructs its own middleware pipeline internally, and dynamically adds and removes the guidance middleware as needed (somewhat related discussion at replaceMiddleware feature for use with lazy-loaded modules)
This might be a good use case for something like a saga, rather than a middleware. I know I've seen discussions of using sagas for onboarding workflows, such as the Key&Pad app (source:key-and-pad)
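A minimal sketch of the first idea might look like this (the action type names come from the list above; the actual guidance logic is a placeholder):

const guidance = store => next => {
  let enabled = false;
  return action => {
    // Swallow the control actions instead of forwarding them to the reducers.
    if (action.type === 'GUIDANCE_START') {
      enabled = true;
      return;
    }
    if (action.type === 'GUIDANCE_STOP') {
      enabled = false;
      return;
    }
    const result = next(action);
    if (enabled) {
      // Inspect the action and/or store.getState() here to advance the tutorial.
    }
    return result;
  };
};

This keeps the on/off flag local to the middleware, so every other action only pays for a single boolean check while the tutorial is off.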
Related
My React functional component takes a snapshot of state at the time of subscription.
For example, see the code below.
If I click the setSocketHandler button and then press the setWelcomeString button, and then receive a message over the socket, welcomeString is empty when I log it.
But if I click the setWelcomeString button first and then click the setSocketHandler button, "Welcome" gets logged to the console when a message arrives on the socket.
I have seen the same behaviour in a project, so I created this simple app to demonstrate it.
If I use the class component that is commented out below, everything works fine.
So my question is: why does the functional component work with the state as it was at the time the handler was registered, and not the actual state at the time the message is received?
This is very weird. How do I make it work correctly in a functional component?
import React, { useEffect, useState } from 'react';
import logo from './logo.svg';
import './App.css';

const io = require('socket.io-client');
const socket = io.connect('http://localhost:3000/');

const App: React.FunctionComponent = () => {
  const [welcomeString, setWelcomeString] = useState("");

  const buttonClicked = () => {
    console.log("clicked button");
    setWelcomeString("Welcome");
  };

  const onSocketHandlerClicked = () => {
    console.log("socket handler clicked");
    socket.on('out', () => {
      console.log("Received message");
      console.log(welcomeString);
    });
  };

  return (
    <div>
      <header className="component-header">User Registration</header>
      <label>{welcomeString}</label>
      <button onClick={buttonClicked}>setWelcomeString</button>
      <button onClick={onSocketHandlerClicked}>setSocketHandler</button>
    </div>
  );
};
/*
class App extends React.Component {
  constructor(props) {
    super(props);
    this.state = {
      welcomeString: ""
    };
  }

  buttonClicked = () => {
    console.log("clicked button");
    this.setState({ welcomeString: "Welcome" });
  }

  onSocketHandlerClicked = () => {
    console.log("socket handler clicked");
    socket.on('out', () => {
      console.log("Received message");
      console.log(this.state.welcomeString);
    });
  }

  render() {
    return (
      <div>
        <header className="component-header">User Registration</header>
        <label>{this.state.welcomeString}</label>
        <button onClick={this.buttonClicked}>setWelcomeString</button>
        <button onClick={this.onSocketHandlerClicked}>setSocketHandler</button>
      </div>
    );
  }
}
*/
export default App;
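One way this is commonly handled (a sketch, not part of the original question) is to register the socket handler inside a useEffect that lists welcomeString as a dependency, so the callback is re-created whenever the value changes and always closes over the current one:

useEffect(() => {
  const handler = () => {
    console.log("Received message");
    console.log(welcomeString); // fresh value, because the effect re-ran
  };
  socket.on('out', handler);
  // Remove the previous handler before registering a new one on the next run.
  return () => socket.off('out', handler);
}, [welcomeString]);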
For those of us coming from a Redux background, useReducer can seem deceptively complex and unnecessary. Between useState and context, it’s easy to fall into the trap of thinking that a reducer adds unnecessary complexity for the majority of simpler use cases; however, it turns out useReducer can greatly simplify state management. Let’s look at an example.
As with my other posts, this code is from my booklist project. The use case is that a screen allows users to scan in books. The ISBNs are recorded, and then sent to a rate-limited service that looks up the book info. Since the lookup service is rate limited, there’s no way to guarantee your books will get looked up anytime soon, so a web socket is set up; as updates come in, messages are sent down the ws, and handled in the ui. The ws’s api is dirt simple: the data packet has a _messageType property on it, with the rest of the object serving as the payload. Obviously a more serious project would design something sturdier.
With component classes, the code to set up the ws was straightforward: in componentDidMount the ws subscription was created, and in componentWillUnmount it was torn down. With this in mind, it’s easy to fall into the trap of attempting the following with hooks
const BookEntryList = props => {
  const [pending, setPending] = useState(0);
  const [booksJustSaved, setBooksJustSaved] = useState([]);

  useEffect(() => {
    const ws = new WebSocket(webSocketAddress("/bookEntryWS"));

    ws.onmessage = ({ data }) => {
      let packet = JSON.parse(data);
      if (packet._messageType == "initial") {
        setPending(packet.pending);
      } else if (packet._messageType == "bookAdded") {
        setPending(pending - 1 || 0);
        setBooksJustSaved([packet, ...booksJustSaved]);
      } else if (packet._messageType == "pendingBookAdded") {
        setPending(+pending + 1 || 0);
      } else if (packet._messageType == "bookLookupFailed") {
        setPending(pending - 1 || 0);
        setBooksJustSaved([
          {
            _id: "" + new Date(),
            title: `Failed lookup for ${packet.isbn}`,
            success: false
          },
          ...booksJustSaved
        ]);
      }
    };
    return () => {
      try {
        ws.close();
      } catch (e) {}
    };
  }, []);

  //...
};
We put the ws creation in a useEffect call with an empty dependency list, which means it’ll never re-fire, and we return a function to do the teardown. When the component first mounts, our ws is set up, and when the component unmounts, it’s torn down, just like we would with a class component.
The problem
This code fails horribly. We’re accessing state inside the useEffect closure, but not including that state in the dependency list. For example, inside of useEffect the value of pending will absolutely always be zero. Sure, we might call setPending inside the ws.onmessage handler, which will cause that state to update, and the component to re-render, but when it re-renders our useEffect will not re-fire (again, because of the empty dependency list)—as a result that closure will go on closing over the now-stale value for pending.
To be clear, using the Hooks linting rule, discussed below, would have caught this easily. More fundamentally, it’s essential to break with old habits from the class component days. Do not approach these dependency lists from a componentDidMount / componentDidUpdate / componentWillUnmount frame of mind. Just because the class component version of this would have set up the web socket once, in componentDidMount, does not mean you can do a direct translation into a useEffect call with an empty dependency list.
Don’t overthink, and don’t be clever: any value from your render function’s scope that’s used in the effect callback needs to be added to your dependency list: this includes props, state, etc. That said—
The solution
While we could add every piece of needed state to our useEffect dependency list, this would cause the web socket to be torn down, and re-created on every update. This would hardly be efficient, and might actually cause problems if the ws sends down a packet of initial state on creation, that might already have been accounted for, and updated in our ui.
If we look closer, however, we might notice something interesting. Every operation we’re performing is always in terms of prior state. We’re always saying something like “increment the number of pending books,” “add this book to the list of completed,” etc. This is precisely where a reducer shines; in fact, sending commands that project prior state to a new state is the whole purpose of a reducer.
Moving this entire state management to a reducer would eliminate any references to local state within the useEffect callback; let’s see how.
function scanReducer(state, [type, payload]) {
  switch (type) {
    case "initial":
      return { ...state, pending: payload.pending };
    case "pendingBookAdded":
      return { ...state, pending: state.pending + 1 };
    case "bookAdded":
      return {
        ...state,
        pending: state.pending - 1,
        booksSaved: [payload, ...state.booksSaved]
      };
    case "bookLookupFailed":
      return {
        ...state,
        pending: state.pending - 1,
        booksSaved: [
          {
            _id: "" + new Date(),
            title: `Failed lookup for ${payload.isbn}`,
            success: false
          },
          ...state.booksSaved
        ]
      };
  }
  return state;
}
const initialState = { pending: 0, booksSaved: [] };

const BookEntryList = props => {
  const [state, dispatch] = useReducer(scanReducer, initialState);

  useEffect(() => {
    const ws = new WebSocket(webSocketAddress("/bookEntryWS"));

    ws.onmessage = ({ data }) => {
      let packet = JSON.parse(data);
      dispatch([packet._messageType, packet]);
    };
    return () => {
      try {
        ws.close();
      } catch (e) {}
    };
  }, []);

  //...
};
While it's slightly more lines, we no longer have multiple update functions, our useEffect body is much simpler and more readable, and we no longer have to worry about stale state being trapped in a closure: all of our updates happen via dispatches against our single reducer. This also aids testability, since our reducer is incredibly easy to test; it's just a vanilla JavaScript function. As Sunil Pai from the React team puts it, using a reducer helps separate reads from writes. Our useEffect body now only worries about dispatching actions, which produce new state; before, it was concerned with both reading existing state and writing new state.
You may have noticed actions being sent to the reducer as an array, with the type in the zero slot, rather than as an object with a type key. Either is allowed with useReducer; this is just a trick Dan Abramov showed me to reduce the boilerplate a bit :)
What about functional setState()
Lastly, some of you may be wondering why, in the original code, I didn’t just do this
setPending(pending => pending - 1 || 0);
rather than
setPending(pending - 1 || 0);
This would have removed the closure problem, and worked fine for this particular use case; however, the minute updates to booksJustSaved needed access to the value of pending, or vice versa, this solution would have broken down, leaving us right where we started. Moreover, I find the reducer version to be a bit cleaner, with the state management nicely separated in its own reducer function.
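To make that concrete, here is a contrived sketch (not from the original code; pendingWhenSaved is invented for illustration) of where per-field functional updates stop helping: as soon as one update needs the other piece of state, that other value still comes from the stale closure.

setPending(pending => pending - 1 || 0);           // fine: the updater sees the current pending
setBooksJustSaved(books => [packet, ...books]);    // fine: the updater sees the current list
// But if the new entry needed the current pending count, we'd be stuck:
setBooksJustSaved(books => [
  { ...packet, pendingWhenSaved: pending },        // `pending` here is the stale closed-over value
  ...books
]);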
All in all, I think useReducer() is incredibly under-utilized at present. It’s nowhere near as scary as you might think. Give it a try!
Happy coding!
So I'm working on implementing an application in React with Redux Saga and I'm kind of baffled at how little information there is out there for my particular use case, as it doesn't seem that strange. Quite possibly I am using the wrong terms or thinking about the problem in the wrong way, as I am rather new to React/Redux. In any event, I have been stymied by all my attempts to google this issue and would appreciate some insight from someone more experienced in the framework than I am.
My application state has a userSettings property on it which manages a few configuration options for the logged in user. At one point in the application, a user can flip a switch to disable the display of an "at a glance" dashboard widget, and I need to pass this information off to a backend API to update their settings info in the database, and then update the state according to whether this backend update was successful.
My code as it stands currently has a main saga for all user settings updates, which I intend to reach via a more specific saga for this setting in particular, thus:
Dashboard.js
function mapStateToProps(state) {
  const { userSettings } = state;
  return { userSettings };
}

...

class Dashboard extends Component {
  ...
  ...
  hasDashboardAtAGlanceHiddenToggle() {
    const { dispatch, userSettings } = this.props;
    dispatch(setHasDashboardAtAGlanceHidden(!userSettings.hasDashboardAtAGlanceHidden));
  }
}

export default connect(mapStateToProps)(Dashboard);
updateUserSettingsSaga.js
import { take, put, call } from 'redux-saga/effects';
import axios from 'axios';
import {
  UPDATE_USER_SETTINGS,
  SET_HAS_DASHBOARD_AT_A_GLANCE_HIDDEN,
  updateUserSettings,
  updatedUserSettingsSuccess,
  updatedUserSettingsFailure
} from '../../actions';

export function* setHasDashboardAtAGlanceHiddenSaga() {
  const action = yield take(SET_HAS_DASHBOARD_AT_A_GLANCE_HIDDEN);
  const newValue = action.data;
  //QUESTION HERE -- how to get full object to pass to updateUserSettings
  yield put(updateUserSettings(stateObjectWithNewValuePopulated));
}

export default function* updateUserSettingsSaga(data) {
  yield take(UPDATE_USER_SETTINGS);
  try {
    const response = yield call(axios.put, 'http://localhost:3001/settings', data);
    yield put(updatedUserSettingsSuccess(response.data));
  } catch (e) {
    yield put(updatedUserSettingsFailure());
  }
}
My question, as noted in the code, is that I'm not sure where/how the logic to merge the updated value into the state should occur. As near as I can figure, I have three options:
Build the updated state in the component before dispatching the initial action, ie:
hasDashboardAtAGlanceHiddenToggle() {
  const { dispatch, userSettings } = this.props;
  const newState = Object.assign({}, userSettings, {
    hasDashboardAtAGlanceHidden: !userSettings.hasDashboardAtAGlanceHidden
  });
  dispatch(setHasDashboardAtAGlanceHidden(newState));
}
Use redux-saga's select effect and build the full state object in the more specific initial saga, ie:
export function* setHasDashboardAtAGlanceHiddenSaga() {
  const action = yield take(SET_HAS_DASHBOARD_AT_A_GLANCE_HIDDEN);
  const newValue = action.data;
  const existingState = yield select(state => state.userSettings);
  const updatedState = Object.assign({}, existingState, {
    hasDashboardAtAGlanceHidden: newValue
  });
  yield put(updateUserSettings(updatedState));
}
Retrieve the server's copy of the user settings object before updating it, ie:
export default function* updateUserSettingsSaga() {
  const action = yield take(UPDATE_USER_SETTINGS);
  try {
    const current = yield call(axios.get, 'http://localhost:3001/settings');
    const newState = Object.assign({}, current.data, action.data);
    const response = yield call(axios.put, 'http://localhost:3001/settings', newState);
    yield put(updatedUserSettingsSuccess(response.data));
  } catch (e) {
    yield put(updatedUserSettingsFailure());
  }
}
All of these will (I think) work as options, but I'm not at all clear on which would be the idiomatic/accepted/preferable approach within the context of Redux Saga, and there is a bewildering lack of examples (at least that I've been able to find) featuring POST/PUT instead of GET when interfacing with outside APIs. Any help or guidance would be appreciated -- even if it's just that I'm thinking about this in the wrong way. :D
The GET/PUT/POST aspect isn't relevant to the question. Overall, your question really comes down to the frequently asked question "How do I split logic between action creators and reducers?". Quoting that answer:
There's no single clear answer to exactly what pieces of logic should go in a reducer or an action creator. Some developers prefer to have “fat” action creators, with “thin” reducers that simply take the data in an action and blindly merge it into the corresponding state. Others try to emphasize keeping actions as small as possible, and minimize the usage of getState() in an action creator. (For purposes of this question, other async approaches such as sagas and observables fall in the "action creator" category.)
There are some potential benefits from putting more logic into your reducers. It's likely that the action types would be more semantic and more meaningful (such as "USER_UPDATED" instead of "SET_STATE"). In addition, having more logic in reducers means that more functionality will be affected by time travel debugging.
This comment sums up the dichotomy nicely:
Now, the problem is what to put in the action creator and what in the reducer, the choice between fat and thin action objects. If you put all the logic in the action creator, you end up with fat action objects that basically declare the updates to the state. Reducers become pure, dumb, add-this, remove that, update these functions. They will be easy to compose. But not much of your business logic will be there. If you put more logic in the reducer, you end up with nice, thin action objects, most of your data logic in one place, but your reducers are harder to compose since you might need info from other branches. You end up with large reducers or reducers that take additional arguments from higher up in the state.
I also wrote my own thoughts on "thick and thin reducers" a while back.
So, ultimately it's a matter of how you prefer to structure the logic.
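For concreteness, here is a rough sketch of the "more logic in the reducer" flavor for this particular toggle; the action types and names are illustrative, not taken from the question's code, and the reducer toggles optimistically before reconciling with the server response:

import { take, put, call, select } from 'redux-saga/effects';
import axios from 'axios';

// The reducer owns the toggle logic; the action only says what happened.
function userSettingsReducer(state = {}, action) {
  switch (action.type) {
    case 'HAS_DASHBOARD_AT_A_GLANCE_HIDDEN_TOGGLED':
      return { ...state, hasDashboardAtAGlanceHidden: !state.hasDashboardAtAGlanceHidden };
    case 'USER_SETTINGS_UPDATED':
      return { ...state, ...action.settings };
    default:
      return state;
  }
}

// The saga only persists whatever the store now holds.
export function* persistUserSettingsSaga() {
  while (true) {
    yield take('HAS_DASHBOARD_AT_A_GLANCE_HIDDEN_TOGGLED');
    const settings = yield select(state => state.userSettings);
    try {
      const response = yield call(axios.put, 'http://localhost:3001/settings', settings);
      yield put({ type: 'USER_SETTINGS_UPDATED', settings: response.data });
    } catch (e) {
      yield put({ type: 'USER_SETTINGS_UPDATE_FAILED' });
    }
  }
}

If you would rather not update the UI until the server confirms, keep the toggle out of the reducer and only apply it when USER_SETTINGS_UPDATED comes back.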
In my Redux project, I want to check something (for example, the network connection) on every action dispatch. Should I implement this using a reducer that accepts all types of actions (without type checking), as given below
export default (state = defaultState) => ({
  ...state,
  networkStatus: navigator.onLine
})
or with a middleware.
const NetworkMiddleware = store => next => (action) => {
  const result = next(action)
  const state = store.getState()
  if (navigator.onLine && !state.NetworkDetector.networkStatus) next({ type: 'NETWORK_SUCCESS' })
  if (!navigator.onLine && state.NetworkDetector.networkStatus) next({ type: 'NETWORK_ERROR' })
  return result
}

export default NetworkMiddleware;
What is the difference between these two implementations?
It provides a third-party extension point between dispatching an action, and the moment it reaches the reducer. People use Redux middleware for logging, crash reporting, talking to an asynchronous API, routing, and more.
I think it would be better to use a middleware to analyse network activity. Read these Redux docs for further information.
A middleware in Redux intercepts actions and performs some specific activity before the action goes to the reducer to update the state. Middleware is meant to perform such work without making changes to the state in the store. If you do this kind of tracking or modification by writing a reducer, you end up maintaining state in the store for an activity that may have nothing to do with your component updates or re-rendering. I don't think that is good practice, and it doesn't follow the framework's design, so it is better to achieve this with a middleware.
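For what it's worth, a small variant of the question's middleware dispatches the status actions through the store instead of calling next directly, so they pass through the whole middleware chain (a sketch, reusing the NetworkDetector slice assumed in the question):

const NetworkMiddleware = store => next => action => {
  const result = next(action);
  const { networkStatus } = store.getState().NetworkDetector;
  // store.dispatch (rather than next) sends the status action through every middleware.
  if (navigator.onLine && !networkStatus) store.dispatch({ type: 'NETWORK_SUCCESS' });
  if (!navigator.onLine && networkStatus) store.dispatch({ type: 'NETWORK_ERROR' });
  return result;
};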
Example code: https://github.com/d6u/example-redux-update-nested-props/blob/master/one-connect/index.js
View live demo: http://d6u.github.io/example-redux-update-nested-props/one-connect.html
How to optimize small updates to props of nested component?
I have the above components, Repo and RepoList. I want to update the tag of the first repo (Line 14), so I dispatched an UPDATE_TAG action. Before I implemented shouldComponentUpdate, the dispatch took about 200ms, which is expected since we are wasting lots of time diffing <Repo/>s that haven't changed.
After adding shouldComponentUpdate, the dispatch takes about 30ms. With a production build of React.js, the updates only cost about 17ms. This is much better, but the timeline view in the Chrome dev console still indicates jank frames (longer than 16.6ms).
Imagine if we had many updates like this, or if <Repo/> were more complicated than the current one; we wouldn't be able to maintain 60fps.
My question is, for such small updates to a nested component's props, is there a more efficient and canonical way to update the content? Can I still use Redux?
I got a solution by replacing each repo's tags with an observable inside the reducer. Something like
// inside reducer when handling UPDATE_TAG action
// repos[0].tags of state is already replaced with a Rx.BehaviorSubject
get('repos[0].tags', state).onNext([{
  id: 213,
  text: 'Node.js'
}]);
Then I subscribe to their values inside the Repo component using https://github.com/jayphelps/react-observable-subscribe. This worked great. Every dispatch only costs 5ms, even with a development build of React.js. But I feel like this is an anti-pattern in Redux.
Update 1
I followed the recommendation in Dan Abramov's answer, normalized my state, and updated the connected components.
The new state shape is:
{
  repoIds: ['1', '2', '3', ...],
  reposById: {
    '1': {...},
    '2': {...}
  }
}
I added console.time around ReactDOM.render to time the initial rendering.
However, the performance is worse than before (both initial rendering and updating). (Source: https://github.com/d6u/example-redux-update-nested-props/blob/master/repo-connect/index.js, Live demo: http://d6u.github.io/example-redux-update-nested-props/repo-connect.html)
// With dev build
INITIAL: 520.208ms
DISPATCH: 40.782ms
// With prod build
INITIAL: 138.872ms
DISPATCH: 23.054ms
I think connect on every <Repo/> has lots of overhead.
Update 2
Based on Dan's updated answer, we have to make connect's mapStateToProps argument return a function instead. You can check out Dan's answer. I also updated the demos.
Below, the performance is much better on my computer. And just for fun, I also added the side-effect-in-reducer approach I talked about (source, demo) (seriously, don't use it; it's for experimentation only).
// in prod build (not average, very small sample)
// one connect at root
INITIAL: 83.789ms
DISPATCH: 17.332ms
// connect at every <Repo/>
INITIAL: 126.557ms
DISPATCH: 22.573ms
// connect at every <Repo/> with memoization
INITIAL: 125.115ms
DISPATCH: 9.784ms
// observables + side effect in reducers (don't use!)
INITIAL: 163.923ms
DISPATCH: 4.383ms
Update 3
Just added a react-virtualized example based on "connect at every <Repo/> with memoization"
INITIAL: 31.878ms
DISPATCH: 4.549ms
I’m not sure where const App = connect((state) => state)(RepoList) comes from.
The corresponding example in React Redux docs has a notice:
Don’t do this! It kills any performance optimizations because TodoApp will rerender after every action. It’s better to have more granular connect() on several components in your view hierarchy that each only listen to a relevant slice of the state.
We don’t suggest using this pattern. Rather, connect each <Repo> specifically so it reads its own data in its mapStateToProps. The “tree-view” example shows how to do it.
If you make the state shape more normalized (right now it’s all nested), you can separate repoIds from reposById, and then only have your RepoList re-render if repoIds change. This way changes to individual repos won’t affect the list itself, and only the corresponding Repo will get re-rendered. This pull request might give you an idea of how that could work. The “real-world” example shows how you can write reducers that deal with normalized data.
Note that in order to really benefit from the performance offered by normalizing the tree you need to do exactly like this pull request does and pass a mapStateToProps() factory to connect():
const makeMapStateToProps = (initialState, initialOwnProps) => {
  const { id } = initialOwnProps
  const mapStateToProps = (state) => {
    const { todos } = state
    const todo = todos.byId[id]
    return {
      todo
    }
  }
  return mapStateToProps
}

export default connect(
  makeMapStateToProps
)(TodoItem)
The reason this is important is because we know IDs never change. Using ownProps comes with a performance penalty: the inner props have to be recalculated any time the outer props change. However, using initialOwnProps does not incur this penalty because it is only used once.
A fast version of your example would look like this:
import React from 'react';
import ReactDOM from 'react-dom';
import {createStore} from 'redux';
import {Provider, connect} from 'react-redux';
import set from 'lodash/fp/set';
import pipe from 'lodash/fp/pipe';
import groupBy from 'lodash/fp/groupBy';
import mapValues from 'lodash/fp/mapValues';

const UPDATE_TAG = 'UPDATE_TAG';

const reposById = pipe(
  groupBy('id'),
  mapValues(repos => repos[0])
)(require('json!../repos.json'));
const repoIds = Object.keys(reposById);

const store = createStore((state = {repoIds, reposById}, action) => {
  switch (action.type) {
    case UPDATE_TAG:
      return set('reposById.1.tags[0]', {id: 213, text: 'Node.js'}, state);
    default:
      return state;
  }
});

const Repo = ({repo}) => {
  const [authorName, repoName] = repo.full_name.split('/');
  return (
    <li className="repo-item">
      <div className="repo-full-name">
        <span className="repo-name">{repoName}</span>
        <span className="repo-author-name"> / {authorName}</span>
      </div>
      <ol className="repo-tags">
        {repo.tags.map((tag) => <li className="repo-tag-item" key={tag.id}>{tag.text}</li>)}
      </ol>
      <div className="repo-desc">{repo.description}</div>
    </li>
  );
}

const ConnectedRepo = connect(
  (initialState, initialOwnProps) => (state) => ({
    repo: state.reposById[initialOwnProps.repoId]
  })
)(Repo);

const RepoList = ({repoIds}) => {
  return <ol className="repos">{repoIds.map((id) => <ConnectedRepo repoId={id} key={id}/>)}</ol>;
};

const App = connect(
  (state) => ({repoIds: state.repoIds})
)(RepoList);

console.time('INITIAL');
ReactDOM.render(
  <Provider store={store}>
    <App/>
  </Provider>,
  document.getElementById('app')
);
console.timeEnd('INITIAL');

setTimeout(() => {
  console.time('DISPATCH');
  store.dispatch({
    type: UPDATE_TAG
  });
  console.timeEnd('DISPATCH');
}, 1000);
Note that I changed connect() in ConnectedRepo to use a factory with initialOwnProps rather than ownProps. This lets React Redux skip all the prop re-evaluation.
I also removed the unnecessary shouldComponentUpdate() on the <Repo> because React Redux takes care of implementing it in connect().
This approach beats both previous approaches in my testing:
one-connect.js: 43.272ms
repo-connect.js before changes: 61.781ms
repo-connect.js after changes: 19.954ms
Finally, if you need to display such a ton of data, it can’t fit on the screen anyway. In this case a better solution is to use a virtualized table so you can render thousands of rows without the performance overhead of actually displaying them.
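As a rough sketch (reusing the ConnectedRepo and repoIds from the example above), a virtualized list with react-virtualized might look like this:

import { List } from 'react-virtualized';

const VirtualRepoList = ({ repoIds }) => (
  <List
    width={600}
    height={400}
    rowCount={repoIds.length}
    rowHeight={120}
    rowRenderer={({ index, key, style }) => (
      // Only the rows currently scrolled into view are actually rendered.
      <div key={key} style={style}>
        <ConnectedRepo repoId={repoIds[index]} />
      </div>
    )}
  />
);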
I got a solution by replacing every tags with an observable inside reducer.
If it has side effects, it’s not a Redux reducer. It may work, but I suggest putting code like this outside Redux to avoid confusion. Redux reducers must be pure functions, and they may not call onNext on subjects.
I've developed a smallish standalone web app with React and Redux which is hosted on its own web server. We now want to reuse/integrate most parts of this app into another React/Redux web app.
In theory this should work quite nicely because all my React components, reducers and most action creators are pure. But I have a few action creators which return thunks that depend on the app state. They may dispatch async or sync actions, but that's not the issue here.
Let's say my root reducer looks like this:
const myAppReducer = combineReducers({
  foo: fooReducer,
  bar: barReducer,
  baz: bazReducer
});
and my most complex action creators depend on many state slices (luckily there are only a few of those):
const someAction = function () {
  return (dispatch, getState) => {
    const state = getState();
    if (state.foo.someProp && !state.bar.anotherProp) {
      dispatch(fetchSomething(state.baz.currentId));
    } else {
      dispatch(doSomethingSynchronous());
    }
  };
}
Now the problem is that my action creators expect everything to be inside the root of the state object. But if we want to integrate this app into another redux app we'll have to mount my appReducer with its own key:
// The otherAppReducer that wants to integrate my appReducer
const otherAppReducer = combineReducers({
  // ...
  myApp: myAppReducer
});
This obviously breaks my action creators that return thunks and need to read app state, because now everything is contained in the "myApp" state slice.
I did a lot of research and thinking how to properly solve this the last few days, but it seems I'm the first one trying to integrate a Redux based app into another Redux based app.
A few hacks/ideas that came to mind so far:
Create my own thunk type so I can do instanceof checks in a custom thunk middleware and make it pass my thunks a custom getState function which will then return the correct state slice (rough sketch after this list).
Mount my root reducer with its own key and make my thunks depend on that key.
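A rough sketch of idea 1 (the custom thunk type) might look like this; ScopedThunk and the slice key are illustrative names, not an existing API:

// Action creators in my app would return new ScopedThunk((dispatch, getMyState) => { ... })
class ScopedThunk {
  constructor(fn) {
    this.fn = fn;
  }
}

// The host app mounts this with whatever key it gave myAppReducer,
// e.g. applyMiddleware(scopedThunkMiddleware('myApp'), ReduxThunk, ...)
const scopedThunkMiddleware = sliceKey => store => next => action => {
  if (action instanceof ScopedThunk) {
    // Hand the thunk a getState that only sees my app's slice.
    return action.fn(store.dispatch, () => store.getState()[sliceKey]);
  }
  return next(action);
};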
So far I think the best approach would be to create my own custom middleware, but I'm not really happy with the fact that other apps will now depend on my middleware and custom thunk type. I think there must be a more generic approach.
Any ideas/suggestions? How would you solve this kind of problem?
Have you considered not depending on store.getState()? I would decouple the actions from the application state altogether and take in the data you need from where the actions are called.
So for example:
const someGenericAction = function (someProp, anotherProp, currentId) {
  return dispatch => {
    if (someProp && !anotherProp) {
      dispatch(fetchSomething(currentId));
    } else {
      dispatch(doSomethingSynchronous());
    }
  };
}
This makes the actions totally reusable, with the downside that you now have to have that information elsewhere. Where else? If convenient, inside your component using this.context.store, or via props with connect, or maybe better, by having wrapper actions for your specific applications, like so:
const someApplicationAction = () => {
  return (dispatch, getState) => {
    const { foo, bar, baz } = getState();
    dispatch(someGenericAction(foo.someProp, bar.anotherProp, baz.currentId));
  };
}