I have a Redux action for a search in the application. When the user starts the search, it batches the queries, sending 30 queries per request, and queues the first 10 requests. Whenever any one of the requests succeeds, it adds the next request to the request queue. All of this happens inside a Redux action, and whenever a request succeeds it dispatches an action to append the result to the store. I would like input on how to handle this if the user clicks "cancel search" and enters a new search term. How can I cancel the existing requests and Redux actions so that the previous search's requests will not succeed and add to the result store?
Example code below:
function search(queries) {
  // split the queries into chunks of size 30
  const batches = _.chunk(queries, 30);
  let count = 1; // batch 0 is fetched manually below
  // get the first batch manually
  getBatch(batches[0]);

  function getBatch(batch) {
    axios.post('url', batch.join(',')).then((response) => {
      // recursively call getBatch to fetch the rest of the data
      if (count < batches.length) { getBatch(batches[count++]); }
      // dispatch an action to append the result to the store
      dispatch({ type: 'APPEND_RESULT', payload: response });
    });
  }
}
This is minimal code to share the concept.
I have read about cancellable promises, and axios supports request cancellation, but I am not sure how to control this recursive call when the same function is executed a second time.
E.g., the user input will be { ids: [1, 2, 3, ..., 1000] }. I am trying to create batches and send parallel requests: { ids: [1, 2, ..., 29, 30] }, { ids: [31, 32, ..., 59, 60] }, etc.
This looks like a candidate for redux-observable. You can create an observable, add a debounce (as mentioned in the comment above), and easily cancel or switchMap to the new request, so that any remaining items in the previous queue won't be considered.
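A minimal sketch of such an epic, assuming a hypothetical SEARCH_REQUESTED action that carries the ids and placeholder fetchBatches/searchResultsReceived helpers (only APPEND_RESULT comes from the question's code):

import axios from 'axios';
import { ofType } from 'redux-observable';
import { from } from 'rxjs';
import { debounceTime, switchMap, map } from 'rxjs/operators';

// placeholder: POST one request containing all ids (the real code would batch them)
const fetchBatches = (ids) => axios.post('url', ids.join(','));

// placeholder result action
const searchResultsReceived = (results) => ({ type: 'APPEND_RESULT', payload: results });

const searchEpic = (action$) =>
  action$.pipe(
    ofType('SEARCH_REQUESTED'), // hypothetical action type
    debounceTime(100),
    // switchMap unsubscribes from the previous inner observable, so a search
    // that is still running is dropped as soon as a new SEARCH_REQUESTED arrives
    switchMap((action) =>
      from(fetchBatches(action.payload.ids)).pipe(
        map((response) => searchResultsReceived(response.data))
      )
    )
  );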
This answer is entirely IMHO.
In my opinion, the whole approach to the problem is broken here. You'll bring your server down by constantly querying it on every input change. Just imagine how much traffic your website will consume on mobile!
The best workaround here is to:
Add _.debounce to your input with a small waiting time (100ms is good)
Keep only one query in memory. On each new event from the user input, cancel the in-flight request and replace it with the new one (see the sketch after this list).
Dispatch an action with data on each server response
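A rough sketch of that idea, assuming a version of axios that accepts an AbortController signal (≥ 0.22); searchApiUrl and resultsReceived are placeholders, not part of the asker's code:

import _ from 'lodash';
import axios from 'axios';

const searchApiUrl = '/api/search'; // placeholder endpoint
let controller = null;

const runSearch = _.debounce((query, dispatch) => {
  // cancel whatever request is still in flight and keep only the newest one
  if (controller) controller.abort();
  controller = new AbortController();

  axios
    .get(searchApiUrl, { params: { q: query }, signal: controller.signal })
    .then((response) => {
      // dispatch an action with the data on each server response
      dispatch(resultsReceived(response.data)); // placeholder action creator
    })
    .catch((err) => {
      // an aborted request ends up here; only surface real errors
      if (axios.isCancel(err)) return;
      console.error(err);
    });
}, 100);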
I am writing BDD tests with a Cucumber-Playwright suite. The page I am testing has buttons that trigger a PUT API request and update the page (note: the button does not navigate to a new address, it just triggers an API request).
I want to make sure that all network events have finished before moving on to the next step; otherwise it may act too soon, before the API request has returned, and the test will fail.
I want to avoid hard waits, so I read the documentation and found a step structure that uses Promise.all([]) to combine two or more steps. From what I understand, it waits for every step in the array to resolve before moving on.
So the steps look like this:
await Promise.all([inviteUserButton.click(), page.waitForLoadState('networkidle')])
await page.goto('https://example/examplepage')
This stage of the test is flaky, however; it works about 2 out of 3 times. From the trace files I read to debug the test, I see that the network request fails with net::ERR_ABORTED POST https://.....
I believe this is because the page.goto() step has interrupted the network request/response. This causes the following assertion to fail, since the request was never completed.
Is there a way to verify that all of the page's network events triggered by a click (or similar) have finished before moving on to the next step?
How about this:
The correct practice is to start waiting for the response before clicking the button, to avoid any race conditions; that way it works correctly every time, as long as the API(s) come back with an OK response.
In this pattern you can add any number of API calls, separated by commas, when responses from multiple APIs are expected.
Note: you may need to change the expected response status code from 200 to 204, depending on how the API is implemented.
const [resp] = await Promise.all([
  this.page.waitForResponse(resp => resp.url().includes('/api/odata/any unique sub string of API endpoint') && resp.status() === 200),
  // API2,
  // API3,
  // APIN,
  this.page.click('inviteUserButtonLocator'),
]);
const body = await resp.json(); // Next step of your scenario
If you don't wrap it in Promise.all, is it still flaky?
await inviteUserButton.click();
await page.waitForLoadState('networkidle');
await page.goto('https://example/examplepage');
How long do the network requests take? The default timeout is 30 seconds; did you try increasing it?
waitForLoadState(state?: "load"|"domcontentloaded"|"networkidle", options?: {
/**
* Maximum operation time in milliseconds, defaults to 30 seconds, pass `0` to disable timeout. The default value can be
* changed by using the
* [browserContext.setDefaultNavigationTimeout(timeout)](https://playwright.dev/docs/api/class-browsercontext#browser-context-set-default-navigation-timeout),
* [browserContext.setDefaultTimeout(timeout)](https://playwright.dev/docs/api/class-browsercontext#browser-context-set-default-timeout),
* [page.setDefaultNavigationTimeout(timeout)](https://playwright.dev/docs/api/class-page#page-set-default-navigation-timeout)
* or [page.setDefaultTimeout(timeout)](https://playwright.dev/docs/api/class-page#page-set-default-timeout) methods.
*/
timeout?: number;
}): Promise<void>;
You can try waiting longer by passing a larger timeout:
await Promise.all([inviteUserButton.click(), page.waitForLoadState('networkidle', { timeout:60_000 })])
await page.goto('https://example/examplepage')
In my Ionic/Angular app, I have a 60 second timer observable which just emits the current time synced with server time. Each minute I fetch permissions, settings, etc. I pass a token with each request. On logout I revoke the token. Here's a sample of what my logic looks like.
Side note: There's also a feature where a user can "change login type" where they can "become" an administrator, for example, and this process may also trigger a similar circumstance.
this.clientTimeSub = this.timeService.clientTime
.pipe(takeUntil(this.logoutService.isLoggingOut$))
.subscribe(async (latestClientTime) => {
this.clientTime = { ...latestClientTime };
// if client time just rolled over to a new minute, update settings
if (
this.clientTime?.time?.length === 7 &&
this.clientTime?.time?.slice(-1) === '0'
) {
await updateSettings();
await updatePermissions();
// etc
// These functions will:
// (1) make an api call (using the login token!)
// (2) update app state
// (3) save to app storage
}
});
When I am logging out of the app, there's a small time window where I could be in the middle of sending multiple api requests and the token is no longer valid, due to the timer rolling to a new minute just as I was logging out, or close to it. I am then presented with a 401: Unauthorized in the middle of logging out.
My naive solution was to tell this observable to stop propagating when a Subject or BehaviorSubject emits a value indicating that the app is logging out; you can see this above in .pipe(takeUntil(this.logoutService.isLoggingOut$)).
Then, in any of my logout methods, I would use:
logout() {
this.isLoggingOut.next(true);
...
// Logout logic here, token becomes invalidated somewhere here
// then token is deleted from state, etc, navigate back to login...
...
this.isLoggingOut.next(false);
}
In that small time window of logging out, the client timer should stop firing and checking if it's rolled to a new minute, preventing any further api calls that may be unauthenticated.
Is there a way I can easily prevent this issue from happening or is there a flaw in my logic that may be causing this issue?
I appreciate any help, thank you!
First of all, it is not ideal to mix async/await with RxJS. RxJS, being a reactive, functional style of programming, has its "pipeable" operators, so you can chain everything.
So instead of putting the time-calculation logic in your subscribe callback, you should use, for example, the filter() RxJS operator, and instead of async/await you can use the switchMap operator and, inside it, forkJoin or concat.
// import { forkJoin, from } from 'rxjs';
// import { filter, switchMap, takeUntil } from 'rxjs/operators';
this.timeService.clientTime
  .pipe(
    // Filter the stream (according to your calculation)
    filter((time) => {
      // here is your logic to calculate whether the minute has rolled over or whatever else you are doing
      // const isValid = ...
      return isValid;
    }),
    // Switch to another stream so you can make the API calls.
    // "from" converts the promises into observables so we can use the magic of RxJS
    switchMap(() => forkJoin([from(updateSettings()), from(updatePermissions())])),
    // Take until your logout
    takeUntil(this.logoutService.isLoggingOut$)
  )
  .subscribe(([settingsResult, permissionsResult]) => {
    // Your promises should just call the API services; the rest of the logic goes here:
    // (2) update app state
    // (3) save to app storage
  });
If you split the actions as in my example, your promises just make the API calls, and then, when they're done, in the subscribe callback you update the app state, save to app storage, etc. So you have two scenarios here:
The API calls from the promises are still in progress. If you trigger a logout in the meantime, takeUntil will do its job and you will not update the app state, etc.
Both API calls from the promises are done, so you are in the subscribe callback block; if it's just synchronous code (hopefully), it will finish. Only then can async code be executed (your timer can now emit its next value; it's all about the event loop in JavaScript).
TL;DR: Is there some well-known solution out there using React/Redux for being able to offer a snappy and immediately responsive UI, while keeping an API/database up to date with changes that can gracefully handle failed API requests?
I'm looking to implement an application with a "card view" using https://github.com/atlassian/react-beautiful-dnd where a user can drag and drop cards to create groups. As a user creates, modifies, or breaks up groups, I'd like to make sure the API is kept up to date with the user's actions.
HOWEVER, I don't want to have to wait for an API response to set the state before updating the UI.
I've searched far and wide, but keep coming upon things such as https://redux.js.org/tutorials/fundamentals/part-6-async-logic which suggests that the response from the API should update the state.
For example:
export default function todosReducer(state = initialState, action) {
switch (action.type) {
case 'todos/todoAdded': {
// Return a new todos state array with the new todo item at the end
return [...state, action.payload]
}
// omit other cases
default:
return state
}
}
As a general concept, this has always seemed odd to me, since it's the local application telling the API what needs to change; we obviously already have the data before the server even responds. This may not always be the case, such as creating a new object and wanting the server to dictate a new "unique id" of some sort, but it seems like there might be a way to just "fill in the blanks" once the server does respond with any missing data. In the case of an UPDATE vs CREATE, there's nothing the server is telling us that we don't already know.
This may work fine for a small and lightweight application, but if I'm looking at API responses in the range of 500-750ms on average, the user experience is going to just be absolute garbage.
It's simple enough to create two actions, one that will handle updating the state and another to trigger the API call, but what happens if the API returns an error or a network request fails and we need to revert?
I tested how Trello implements this sort of thing by cutting my network connection and creating a new card. It eagerly creates the card immediately upon submission, and then removes the card once it realizes that it cannot update the server. This is the sort of behavior I'm looking for.
I looked into https://redux.js.org/recipes/implementing-undo-history, which offers a way to "rewind" state, but being able to implement this for my purposes would need to assume that subsequent API calls all resolve in the same order that they were called - which obviously may not be the case.
As of now, I'm resigning myself to the fact that I may need to just follow the established limited pattern, and lock the UI until the API request completes, but would love a better option if it exists within the world of React/Redux.
The approach you're talking about is called "optimistic" network handling -- assuming that the server will receive and accept what the client is doing. This works in cases where you don't need server-side validation to determine if you can, say, create or update an object. It's also easy to implement using React and Redux.
Normally, with React and Redux, the update flow is as follows:
The component dispatches an async action creator
The async action creator runs its side-effect (calling the server), and waits for the response.
The async action creator, with the result of the side-effect, dispatches an action to call the reducer
The reducer updates the state, and the component is re-rendered.
Some example code to illustrate (I'm pretending we're using redux-thunk here):
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  }, [dispatch]);
  return (<div />);
};
// ... in actions.js
export const UpdateData = (data) => async (dispatch, getStore) => {
  const results = await myApi.postData(data);
  dispatch(UpdateMyStore(results));
};
However, you can easily flip the order your asynchronous code runs in by simply not waiting for your asynchronous side effect to resolve. In practical terms, this means you don't wait for your API response. For example:
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  }, [dispatch]);
  return (<div />);
};
// ... in actions.js
export const UpdateData = (data) => (dispatch, getStore) => {
  // we're not waiting for the api response anymore,
  // we just dispatch whatever data we want to our reducer
  dispatch(UpdateMyStore(data));
  myApi.postData(data);
};
One last thing, though: doing things this way, you will want to put some reconciliation mechanism in place, to make sure the client knows if the server calls fail, and that it retries or notifies the user, etc.
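For instance, a rough sketch of that reconciliation could catch the failure inside the same thunk and dispatch a compensating action (RevertMyStore and notifyUser are hypothetical names, not part of any library):

// ... in actions.js
export const UpdateData = (data) => async (dispatch, getStore) => {
  // optimistically update the store first
  dispatch(UpdateMyStore(data));
  try {
    await myApi.postData(data);
  } catch (err) {
    // the server rejected or the network failed: roll the store back
    dispatch(RevertMyStore(data));
    // and/or notify the user, schedule a retry, etc.
    dispatch(notifyUser('Saving failed, please try again'));
  }
};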
The key phrase here is "optimistic updates", which is a general pattern for updating the "local" state on the client immediately with a given change under the assumption that any API request will succeed. This pattern can be implemented regardless of what actual tool you're using to manage state on the client side.
It's up to you to define and implement what appropriate changes would be if the network request fails.
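One purely illustrative shape for that is to let the reducer itself know how to undo the optimistic change; the 'todos/todoAddFailed' action type and the tempId field below are hypothetical:

export default function todosReducer(state = [], action) {
  switch (action.type) {
    case 'todos/todoAdded':
      // optimistic insert, keyed by a client-generated temporary id
      return [...state, action.payload];
    case 'todos/todoAddFailed':
      // the API call failed: remove the optimistic entry again
      return state.filter((todo) => todo.tempId !== action.payload.tempId);
    default:
      return state;
  }
}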
I have an application which listens to a websocket endpoint and processes the data received from it and saves it to a database.
The problem of race condition arises when two callbacks are invoked concurrently (for example: one task may begin processing, then another task may begin processing and update the database, then the first task may update the database - so in the end the database updates are out of order).
The solution I thought of was to record the exact time a callback is called, process the data, then attach the time to the data passed to the database and in the database compare this time with the last update time and act accordingly.
One possible problem I thought of is that the time may be recorded out of order (for example: consider the scenario where the first callback is called, then the second callback is called and the time is recorded, then the time is recorded for the first callback).
How would you do it the right way? Are there solutions to this problem, or other ways to go about it?
EDIT: To be more specific: since I intend the program to be as real-time as possible, I'd like the most up-to-date callback to be processed without delay (without waiting for all previous callbacks to finish processing), but to ensure that the end result of the processing (as recorded in the database) adheres to the order in which the callbacks arrived (i.e., is not corrupted).
You can have the data handler callback return a promise that resolves when it's finished.
Each time you get new data from the socket, wait for that promise before handling it, then store the resulting promise for the next piece of data to wait for.
That would look like this:
let ready = Promise.resolve();
socket.on(..., data => {
  ready = ready.then(() => processData(data));
});
This will have no effect on any other code.
EDIT: To do expensive work outside the lock, you can write
socket.on(..., data => {
  const result = doExpensiveWork(data); // returns a promise
  ready = Promise.all([result, ready]).then(([result]) => insertData(result));
});
We have an action that fetches an object asynchronously, let's call it getPostDetails, which takes the id of the post to fetch as a parameter. The user is presented with a list of posts and can click on one to get some details.
If a user clicks on "Post #1", we dispatch a GET_POST action which might look something like this.
const getPostDetails = (id) => ({
type: c.GET_POST_DETAILS,
promise: (http) => http.get(`http://example.com/posts/#${id}`),
returnKey: 'facebookData'
})
This is picked up by a middleware, which adds a success handler to the promise that dispatches an action like GET_POST__OK with the deserialized JSON object. The reducer sees this object and applies it to the store. A typical __OK reducer looks like this:
[c.GET_POST__OK]: (state, response) => assign(state, {
  currentPost: response.postDetails
})
Later down the line we have a component that looks at currentPost and displays the details for the current post.
However, we have a race condition. If a user submits two GET_POST_DETAILS actions one right after the other, there is no guarantee what order we receive the __OK actions in; if the second HTTP request finishes before the first, the state will become incorrect.
Action => Result
---------------------------------------------------------------------------------
|T| User Clicks Post #1 => GET_POST for #1 dispatched => Http Request #1 pending
|i| User Clicks Post #2 => GET_POST for #2 dispatched => Http Request #2 pending
|m| Http Request #2 Resolves => Results for #2 added to state
|e| Http Request #1 Resolves => Results for #1 added to state
V
How can we make sure the last item the user clicked always will take priority?
The problem is due to suboptimal state organization.
In a Redux app, state keys like currentPost are usually an anti-pattern. If you have to “reset” the state every time you navigate to another page, you'll lose one of the main benefits of Redux (or Flux): caching. For example, you can no longer navigate back instantly if any navigation resets the state and refetches the data.
A better way to store this information would be to separate postsById and currentPostId:
{
currentPostId: 1,
postsById: {
1: { ... },
2: { ... },
3: { ... }
}
}
Now you can fetch as many posts at the same time as you like, and independently merge them into the postsById cache without worrying whether the fetched post is the current one.
Inside your component, you would always read state.postsById[state.currentPostId], or better, export getCurrentPost(state) selector from the reducer file so that the component doesn’t depend on specific state shape.
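A rough sketch of how that could look, reusing the handler-map reducer style and the constants module c from the question, and assuming the action and response carry the post's id (illustrative only):

import assign from 'lodash/assign';

const handlers = {
  // remember which post the user asked for most recently
  [c.GET_POST_DETAILS]: (state, { id }) => assign({}, state, {
    currentPostId: id
  }),
  // merge the fetched post into the cache, in whatever order responses arrive
  [c.GET_POST__OK]: (state, response) => assign({}, state, {
    postsById: assign({}, state.postsById, {
      [response.postDetails.id]: response.postDetails
    })
  })
};

// components read the current post through a selector instead of reaching into the state shape
export const getCurrentPost = (state) => state.postsById[state.currentPostId];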
Now there are no race conditions and you have a cache of posts so you don’t need to refetch when you go back. Later if you want the current post to be controlled from the URL bar, you can remove currentPostId from Redux state completely, and instead read it from your router—the rest of the logic would stay the same.
While this isn’t strictly the same, I happen to have another example with a similar problem. Check out the code before and the code after. It’s not quite the same as your question, but hopefully it shows how state organization can help avoid race conditions and inconsistent props.
I also recorded a free video series that explains these topics so you might want to check it out.
Dan's solution is probably a better one, but an alternative solution is to abort the first request when the second one begins.
You can do this by splitting your action creator into an async one which can read from the store and dispatch other actions, which redux-thunk allows you to do.
The first thing your async action creator should do is check the store for an existing promise, and abort it if there is one. If not, it can make the request, and dispatch a 'request begins' action, which contains the promise object, which is stored for next time.
That way, only the most recently created promise will resolve. When one does, you can dispatch a success action with the received data. You can also dispatch an error action if the promise is rejected for some reason.
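A rough sketch of that idea using redux-thunk and an AbortController (instead of storing the raw promise, we store a controller that can cancel it; the action types and the state.currentRequest field are hypothetical):

export const getPostDetails = (id) => async (dispatch, getState) => {
  // abort the previous request if one is still in flight
  const previous = getState().currentRequest;
  if (previous) previous.abort();

  const controller = new AbortController();
  // a reducer is assumed to save this controller as state.currentRequest
  dispatch({ type: 'GET_POST_DETAILS__BEGIN', controller });

  try {
    const response = await fetch(`http://example.com/posts/${id}`, { signal: controller.signal });
    const postDetails = await response.json();
    dispatch({ type: 'GET_POST_DETAILS__OK', postDetails });
  } catch (err) {
    if (err.name === 'AbortError') return; // superseded by a newer request
    dispatch({ type: 'GET_POST_DETAILS__ERROR', error: err });
  }
};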