I have a redux-saga setup where, for some reason, my channel blocks when I try to take from it, and I can't work out why.
I have a PubSub mechanism which subscribes to an event and when received calls this function:
const sendRoundEnd = (msg, data) => {
  console.log('putting round end')
  roundEndChannel.put({
    type: RUN_ENDED,
    data: data.payload
  })
}
I have a watcher for this channel defined like this:
function* watchRoundEndChannel() {
  while (true) {
    console.log('before take')
    const action = yield take(roundEndChannel)
    console.log('after take')
    yield put(action)
  }
}
And I have a reducer set up which listens for the RUN_ENDED put, like this:
case RUN_ENDED:
  console.log(action)
  return {
    ...state,
    isRunning: false,
    roundResult: action.data
  }
Finally, I have a roundEndChannel const within the file (but not within the functions), and I export the following as part of an array that is fed into yield all([]):
takeEvery(roundEndChannel, watchRoundEndChannel)
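For context, the roundEndChannel mentioned above would presumably be created once at module scope with redux-saga's channel factory; a minimal sketch of that assumed setup:
import { channel } from 'redux-saga'

// Module-scope channel shared by the pubsub handler and the saga watcher
const roundEndChannel = channel()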
So if my understanding is right, when I get the msg from my pubsub I should first hit sendRoundEnd which puts to roundEndChannel which should in turn put the RUN_ENDED action.
What's weird however is that when I run these functions and receive the message from the pubsub, the following is logged:
putting round end
before take
I never get to the 'after take' log, which suggests to me that the channel doesn't have anything in it, but I'm pretty sure that isn't the case, as it should have been put to in the pubsub's event handler immediately prior.
It feels like I'm missing something simple here. Does anyone have any ideas (or ways I can examine the channel at different points to see what's in there)?
Argh, managed to fix this. The problem was that I had exported watchRoundEndChannel wrapped in a takeEvery, which was snatching up my pushed events: takeEvery was itself taking each message off the channel before spawning the watcher, so the watcher's own take never received anything.
I exported the function as fork(watchRoundEndChannel) instead, and things work as I expected.
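A minimal sketch of the corrected root saga, assuming the usual all([...]) wrapper (names other than watchRoundEndChannel are placeholders):
import { all, fork } from 'redux-saga/effects'

function* rootSaga() {
  yield all([
    // fork starts the watcher without consuming the channel's messages
    // the way takeEvery did
    fork(watchRoundEndChannel),
    // ...other watchers
  ])
}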
I'm using redux-saga to fetch data from an API, and for that my code looks something like this:
useEffect(() => {
  getListFromAPI(); // dispatches an action that fetches data
  getDataByResponseFromList(list.userName); // I also need to call another API, depending on the response from the first
}, []);
getDataByResponseFromList(list.userName) fetches data from an API depending on the response from getListFromAPI().
This resulted in the error "list.userName not defined", which was obvious, because list is not defined yet.
To fix this I wrote another useEffect like the one below:
useEffect(() => {
  if (!Empty(list))
    getDataByResponseFromList(list.userName);
}, [list]);
This worked fine, but in another situation I also need to call this code when one more state changes, a general "connection" state. So my code becomes something like this:
useEffect(() => {
  if (!Empty(list) && connection)
    getDataByResponseFromList(list.userName);
}, [list, connection]);
But now on page load this code runs two times: once when list is populated and once when the connection is set up. I know exactly why the problem is occurring, but I'm not sure about the right way to fix it. What is the right way to fix such issues?
A solution I tried:
As a solution, I created a global variable to track whether the code has already executed once.
let firstTimeExecution = true; // for only one-time execution

export const MyComponent = ({ propsList }) => {
  ...
  useEffect(() => {
    if (!Empty(list) && connection && firstTimeExecution) {
      getDataByResponseFromList(list.userName);
      firstTimeExecution = false;
    }
  }, [list, connection]);
}
This worked perfectly, but I'm not sure if this is best practice and whether I should do it.
Since you are using sagas, it might be easier to do the orchestration there rather than in the component.
function* someSaga() {
  // wait for connection & list, in any order
  const [_, result] = yield all([
    take(CONNECTION), // action dispatched once you have a connection
    take(FETCH_LIST_SUCCESS)
  ])
  // call a saga with the userName from the fetch-list-success action
  yield call(getDataByResponseFromList, result.userName);
}
If you expect FETCH_LIST_SUCCESS to happen multiple times and want to call the getData saga every time:
function* someSaga() {
  // Do nothing until we have a connection
  yield take(CONNECTION)
  // call a saga with the userName from each fetch-list-success action
  yield takeEvery(FETCH_LIST_SUCCESS, function* (action) {
    yield call(getDataByResponseFromList, action.userName);
  })
}
If you need to get the data for every getListFromAPI, but it can be called multiple times before you get the connection (I am assuming here you don't need the connection for getListFromAPI itself), you can also buffer the actions and then process them once you have it.
function* someSaga() {
  const chan = yield actionChannel(FETCH_LIST_SUCCESS)
  yield take(CONNECTION)
  yield takeEvery(chan, function* (action) {
    yield call(getDataByResponseFromList, action.userName);
  })
}
TL;DR: Is there some well-known solution out there using React/Redux for being able to offer a snappy and immediately responsive UI, while keeping an API/database up to date with changes that can gracefully handle failed API requests?
I'm looking to implement an application with a "card view" using https://github.com/atlassian/react-beautiful-dnd where a user can drag and drop cards to create groups. As a user creates, modifies, or breaks up groups, I'd like to make sure the API is kept up to date with the user's actions.
HOWEVER, I don't want to have to wait for an API response to set the state before updating the UI.
I've searched far and wide, but keep coming upon things such as https://redux.js.org/tutorials/fundamentals/part-6-async-logic which suggests that the response from the API should update the state.
For example:
export default function todosReducer(state = initialState, action) {
  switch (action.type) {
    case 'todos/todoAdded': {
      // Return a new todos state array with the new todo item at the end
      return [...state, action.payload]
    }
    // omit other cases
    default:
      return state
  }
}
As a general concept, this has always seemed odd to me, since it's the local application telling the API what needs to change; we obviously already have the data before the server even responds. This may not always be the case, such as creating a new object and wanting the server to dictate a new "unique id" of some sort, but it seems like there might be a way to just "fill in the blanks" once the server does respond with any missing data. In the case of an UPDATE vs. a CREATE, there's nothing the server is telling us that we don't already know.
This may work fine for a small and lightweight application, but if I'm looking at API responses in the range of 500-750ms on average, the user experience is going to just be absolute garbage.
It's simple enough to create two actions, one that will handle updating the state and another to trigger the API call, but what happens if the API returns an error or a network request fails and we need to revert?
I tested how Trello implements this sort of thing by cutting my network connection and creating a new card. It eagerly creates the card immediately upon submission, and then removes the card once it realizes that it cannot update the server. This is the sort of behavior I'm looking for.
I looked into https://redux.js.org/recipes/implementing-undo-history, which offers a way to "rewind" state, but being able to implement this for my purposes would need to assume that subsequent API calls all resolve in the same order that they were called - which obviously may not be the case.
As of now, I'm resigning myself to the fact that I may need to just follow the established limited pattern, and lock the UI until the API request completes, but would love a better option if it exists within the world of React/Redux.
The approach you're talking about is called "optimistic" network handling -- assuming that the server will receive and accept what the client is doing. This works in cases where you don't need server-side validation to determine if you can, say, create or update an object. It's equally easy to implement using React and Redux.
Normally, with React and Redux, the update flow is as follows:
The component dispatches an async action creator
The async action creator runs its side-effect (calling the server), and waits for the response.
The async action creator, with the result of the side-effect, dispatches an action to call the reducer
The reducer updates the state, and the component is re-rendered.
Some example code to illustrate (I'm pretending we're using redux-thunk here):
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  });
  return (<div />);
};
// ... in actions.js
export const UpdateData = (data) => async (dispatch, getStore) => {
  const results = await myApi.postData(data);
  dispatch(UpdateMyStore(results));
};
However, you can easily flip the order your asynchronous code runs in by simply not waiting for your asynchronous side effect to resolve. In practical terms, this means you don't wait for your API response. For example:
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  });
  return (<div />);
};
// ... in actions.js
export const UpdateData = (data) => (dispatch, getStore) => {
  // we're not waiting for the api response anymore,
  // we just dispatch whatever data we want to our reducer
  dispatch(UpdateMyStore(data));
  myApi.postData(data);
};
One last thing, though: doing things this way, you will want to put some reconciliation mechanic in place, so the client knows when a server call fails and can retry or notify the user, etc.
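For illustration, a minimal sketch of one such reconciliation mechanic, building on the thunk above; the RevertMyStore action creator is an assumption, not part of the original pattern:
// ... in actions.js
export const UpdateData = (data) => async (dispatch, getStore) => {
  // optimistic: update the store first
  dispatch(UpdateMyStore(data));
  try {
    await myApi.postData(data);
  } catch (err) {
    // the server call failed: roll back (or retry / notify the user here)
    dispatch(RevertMyStore(data)); // assumed rollback action creator
  }
};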
The key phrase here is "optimistic updates", which is a general pattern for updating the "local" state on the client immediately with a given change under the assumption that any API request will succeed. This pattern can be implemented regardless of what actual tool you're using to manage state on the client side.
It's up to you to define and implement what appropriate changes would be if the network request fails.
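As a sketch of what such a change might look like, here is a case that could be added to the todos reducer shown earlier; the failure action type and the client-generated tempId are assumptions:
case 'todos/todoAddFailed': {
  // Roll back the optimistic add when the server rejects it;
  // tempId is assumed to be generated on the client at dispatch time
  return state.filter(todo => todo.tempId !== action.payload.tempId)
}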
I'm in the process of converting some of my code to use redux-saga (I previously had just redux-thunk but now run them both side-by-side) and have used this example as a basis.
The only problem I've run up against is using the following function as an argument to takeEvery sometimes works as expected, but other times breaks because it receives a function rather than an action object.
const requestAction = action => action.type.includes('REQUEST');

function* watchAuthenticationStatus() {
  yield takeEvery(requestAction, ensureAuthenticated);
}
Logging the action out from inside requestAction shows me that much of the time my actions come through as objects, as expected, e.g.
{ type: "data/FETCH_REQUEST", data: Array(1) }
However, I also see a lot of this type of thing:
ƒ (_x) {
return _ref.apply(this, arguments);
}
ƒ (_x2, _x3) {
return _ref3.apply(this, arguments);
}
I got around this for the time being by doing a check to see if action.type exists, but is anyone able to explain why I see these anonymous functions at times?
EDIT
As requested below, I checked the ordering of my middleware. I originally had:
const sagaMiddleware = createSagaMiddleware();

const store = createStore(
  rootReducer,
  composeEnhancers(applyMiddleware(sagaMiddleware, thunk))
);

sagaMiddleware.run(rootSaga);
Re-ordering saga and thunk to the following fixed my problem:
composeEnhancers(applyMiddleware(thunk, sagaMiddleware))
Re-ordering the middleware fixed this. Originally it looked like this:
composeEnhancers(applyMiddleware(sagaMiddleware, thunk))
Placing thunk first fixed the issue:
composeEnhancers(applyMiddleware(thunk, sagaMiddleware))
The reason: redux-saga's middleware passes each action down the middleware chain first and only afterwards forwards it to the running sagas. With sagaMiddleware ahead of thunk, a dispatched thunk (a plain function) was still forwarded to the sagas after thunk had run it, so takeEvery handed those functions to the requestAction predicate. With thunk first, function actions are intercepted and consumed before they ever reach the saga middleware.
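Even with the corrected order, hardening the pattern predicate (the action.type check the question mentions) is cheap insurance; a sketch:
// Match only plain action objects with a string type, so a stray
// function can never crash the predicate
const requestAction = action =>
  typeof action === 'object' &&
  action !== null &&
  typeof action.type === 'string' &&
  action.type.includes('REQUEST');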
Can anyone tell me the difference between using chain on a reducer function and doing the work in the main index reducer function in redux-auto?
I want to save an error.
A) store/chat/send.js
import actions from 'redux-auto'
//...
function rejected(chat, payload, error) {
  return chat;
}
onError.chain = (c, p, error) => actions.logger.save(error)
//...
or
B) store/logger/index.js
import actions from 'redux-auto'
import save from './save'
export default function (errorsLog = [], action) {
  if (action.type == actions.chat.send.rejected) {
    return save(errorsLog, action.payload)
  }
  return errorsLog
}
They both work.
My questions:
I don't know what would be better. What is the difference?
Why would I use one over the other?
Also, can't I just call the action logger.save(...) inside rejected? Why does this chain feature exist?
Thanks for any help :)
A) Using the chain (onError) will fire the action AFTER the source (rejected) reducer has completed, creating a new call across your store.
B) You are changing the state within the source reducer call.
Your questions:
1, 2) Using chaining will make your code more readable, as the next function is collocated with the source reducer, whereas having it in the index groups all the actions that can happen to that part of the store.
3) Calling an action function directly within a reducer function is an anti-pattern: it means dispatching an action in the middle of a dispatched action, so the reducer would be operating on inconsistent data.
One of Redux's main selling points is predictability: we should use pure functions as much as possible, and a reducer must not have any side-effects at all.
Recently I worked on the same feature: logging errors (user actions, etc.). I think all of these actions are side-effects; they have no direct benefit for the user and can't be part of the main business logic.
That's why I use custom middleware to capture all the actions I need to log. Each action that needs logging is marked with a meta-prop (e.g. { log: 'errorLog' }), and the middleware checks every action: if it has a log prop, the logger magic happens there.
In the end I got clearly understandable code where all the logging side-effects are encapsulated in the middleware.
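A minimal sketch of that middleware, assuming the { log: 'errorLog' } meta-prop from above and a hypothetical sendToLogger helper:
const errorLogMiddleware = store => next => action => {
  // The side-effect lives here, outside any reducer
  if (action.log === 'errorLog') {
    sendToLogger(action); // hypothetical transport for the log entry
  }
  // pass every action through unchanged
  return next(action);
};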
I am a beginner in Node js and was wondering if someone could help me out.
Winston allows you to pass in a callback which is executed when all transports have been logged - could someone explain what this means as I am slightly lost in the context of callbacks and Winston?
From https://www.npmjs.com/package/winston#events-and-callbacks-in-winston I am shown an example which looks like this:
logger.info('CHILL WINSTON!', { seriously: true }, function (err, level, msg, meta) {
// [msg] and [meta] have now been logged at [level] to **every** transport.
});
Great... however, I have several logger.info calls across my program, and I'm wondering what to put into the callback. Also, do I need to do this for every logger.info, or can I put all the logs into one function?
I was thinking of adding all of the log calls into an array and then using async.parallel so they all get logged at the same time. Good or bad idea?
The main aim is to log everything before my program continues with other tasks.
Explanation of the code above in callback and winston context would be greatly appreciated!
Winston allows you to pass in a callback which is executed when all transports have been logged
This means that if you have a logger that handles more than one transport (for instance, console and file), the callback will be executed only after the messages have been logged on all of them (in this case, on both the console and the file).
An I/O operation on a file will always take longer than just outputting a message on the console. Winston makes sure that the callback will be triggered, not at the end of the first transport logging, but at the end of the last one of them (that is, the one that takes longest).
You don't need to use a callback for every logger.info, but in this case it can help you make sure everything has been logged before continuing with the other tasks:
var winston = require('winston');
winston.add(winston.transports.File, { filename: './somefile.log' });
winston.level = 'debug';

const tasks = [
  x => { console.log('task1'); x(); },
  x => { console.log('task2'); x(); },
  x => { console.log('task3'); x(); }
];
let taskID = 0;
let complete = 0;

tasks.forEach(task => {
  task(() => winston.debug('CHILL WINSTON!', `logging task${++taskID}`, waitForIt));
});

function waitForIt() {
  // Executed every time a logger has logged all of its transports
  if (++complete === tasks.length) nowGo();
}

function nowGo() {
  // Now all loggers have logged all of their transports
  winston.log('debug', 'All tasks complete. Moving on!');
}
Sure... you probably won't define tasks that way, but it's just to show one way you could launch all the tasks in parallel and wait until everything has been logged before continuing with other tasks.
Just to explain the example code:
The const tasks is an array of functions, where each one accepts a function x as a parameter, first performs the task at hand (in this case a simple console.log('task1')), then executes the function received as a parameter, x();
The function passed as a parameter to each of those functions in the array is () => winston.debug('CHILL WINSTON!', `logging task${++taskID}`, waitForIt)
waitForIt, the third parameter in this winston.debug call, is the actual callback (the Winston callback you inquired about).
Now, taskID counts the tasks that have been launched, while complete counts the loggers that have finished logging.
Being async, one could launch them as 1, 2, 3, but their loggers could finish in a 1, 3, 2 sequence, for all we know. But since all of them will trigger the waitForIt callback once they're done, we just count how many have finished, then call the nowGo function once they all are done.
Compare it to
var winston = require('winston');
var logger = new winston.Logger({
  level: 'debug',
  transports: [
    new (winston.transports.Console)(),
    new (winston.transports.File)({ filename: './somefile.log' })
  ]
});

const tasks = [
  x => { console.log('task1'); x(); },
  x => { console.log('task2'); x(); },
  x => { console.log('task3'); x(); }
];
let taskID = 0;
let complete = 0;

tasks.forEach(task => {
  task(() => logger.debug('CHILL WINSTON!', `logging task${++taskID}`, (taskID === tasks.length) ? nowGo : null));
});
logger.on('logging', () => console.log(`# of complete loggers: ${++complete}`));

function nowGo() {
  // Stop listening to the logging event
  logger.removeAllListeners('logging');
  // Now all loggers have logged all of their transports
  logger.debug('All tasks complete. Moving on!');
}
In this case, nowGo would be the callback, and it is added only to the third logger.debug call. But if the second logger finished later than the third, the code would have continued without waiting for the second one to finish logging.
In such a simple example it won't make a difference, since all of them finish equally fast, but I hope it's enough to get the concept across.
While at it, let me recommend the book Node.js Design Patterns by Mario Casciaro for more advanced async flow sequencing patterns. It also has a great EventEmitter vs callback comparison.
Hope this helped ;)