RxJS share vs shareReplay differences - javascript

There seems to be an odd discrepancy in how share and shareReplay (with refCount: true) unsubscribe.
Consider the following (can paste into rxviz.com):
const { interval } = Rx;
const { take, shareReplay, share, timeoutWith, startWith, finalize } = RxOperators;

const shareReplay$ = interval(2000).pipe(
  finalize(() => console.log('[finalize] Called on shareReplay$')),
  take(1),
  shareReplay({ refCount: true, bufferSize: 0 })
);

shareReplay$.pipe(
  timeoutWith(1000, shareReplay$.pipe(startWith('X')))
);

const share$ = interval(2000).pipe(
  finalize(() => console.log('[finalize] Called on share$')),
  take(1),
  share()
);

share$.pipe(
  timeoutWith(1000, share$.pipe(startWith('X')))
);
The output from shareReplay$ will be -X-0- while the output from share$ will be -X--0. It appears as though share unsubscribes from the source before timeoutWith can re-subscribe, while shareReplay keeps the shared subscription around long enough for it to be re-used.
I want to use this to add a local timeout on an RPC (while keeping the call open), and any re-subscribe would be disastrous, so I want to avoid the risk that one of these behaviors is a mistake and gets changed in the future.
I could use race() and merge the rpc call with a delayed startWith (so they both subscribe at the same time), but it would mean more code and operators.
EDIT: One possible solution is to merge two subscriptions: one to the shared request, and one to a delayed stream that takes until the shared stream emits:
merge(share$, of('still working...').pipe(delay(1000), takeUntil(share$)));
This way the shared stream is subscribed to at the same time, so there is no "grey area" where one operator unsubscribes as a child subscribes. (I'll turn this into an answer unless someone comes up with a better suggestion, or can explain the intended differences between share and shareReplay.)

Quick Aside:
I suspect you're not going to get many answers until you describe the behavior you're after.
use this to add a local timeout on an RPC (while keeping the call open)
I'm not sure what "keeping an RPC call open" entails. It sounds like you want something that cancels the request within RxJS but doesn't cancel the in-flight RPC. That feels like a domain mismatch. Why not keep your observables aligned with the semantics of the calls they represent?
Toward a solution
Having seen your update, I suspect you don't need a timeout. It looks like the behavior you're after can be achieved without unsubscribing from the observable representing your RPC. I'd argue that doing so simplifies your logic and makes it easier to maintain and extend in the future.
It also sidesteps all the async interleaving concerns you had earlier.
Here I'll take an educated guess at the behaviour you're after. It appears you want the following:
Take one value from the source (which happens to be an RPC in this case).
If that value takes more than 1s to arrive, emit "still working..." and continue to wait for one value from the source.
If so, you don't need share at all. Normally a timeout means cancelling an in-flight request that takes too long; here you don't really want a timeout at all, you want to stay subscribed to the source.
If that's the case, here's one way to do it:
const source$ = rpc(arg1, arg2);

// Create a unique token (unique by object identity).
// We embed the message inside so it does double duty.
const token = { a: 'still working...' };

merge(
  source$,
  timer(1000).pipe(mapTo(token))
).pipe(
  // take(1), but not counting the token
  takeWhile(v => v === token, true),
  // Unwrap the token, emitting the contained message
  map(v => (v === token ? v.a : v))
);
On the other hand, if you know that observable wrapping your RPC will never emit "still working..." then you don't need a unique token and you can simplify this to check the value directly.
merge(
  rpc(arg1, arg2),
  timer(1000).pipe(mapTo('still working...'))
).pipe(
  takeWhile(v => v === 'still working...', true)
);

Timing Guarantees
JavaScript's runtime comes with no baked-in timing guarantees of any sort. In a single-threaded environment that makes sense (and you can start to do a bit better with web workers and such). If something compute-heavy happens, everything scheduled in that window just waits. Mostly, though, things happen in the expected order.
Regardless,
const hello = (name: string) => () => console.log(`Hello ${name}`);
setTimeout(hello("first"), 2000);
setTimeout(hello("second"), 2000);
In Node, V8, or SpiderMonkey you do preserve the order here. But what about this?
const hello = (name: string) => () => console.log(`Hello ${name}`);
setTimeout(hello("first"), 1);
setTimeout(hello("second"), 0);
Here you would assume "second" always comes first, because it's scheduled a millisecond earlier. But if you run this in SpiderMonkey, the order depends on how busy the event loop is: it buckets short timeouts together, since a 0 ms timeout takes about 8 ms on average anyway.
Always make asynchronous dependencies explicit
In JavaScript, it's best practice never to leave timing dependencies implicit.
In the code below, we can reasonably expect that data will not be undefined when we call data.value, but that expectation relies implicitly on asynchronous interleaving:
let data;
setTimeout(() => {data = {value: 5};}, 1000);
setTimeout(() => console.log(data.value), 2000);
We really should make this dependency explicit, either by checking whether data is undefined or by restructuring our calls:
setTimeout(() => {
  const data = { value: 5 };
  setTimeout(() => console.log(data.value), 1000);
}, 1000);
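Another way to make the same ordering explicit is a promise chain, so the dependency lives in the code structure rather than in two unrelated timer durations (a sketch; loadData is an illustrative name):

```javascript
// The dependency "log only after data exists" is now encoded in the
// promise chain instead of in carefully chosen timeout values.
const loadData = () =>
  new Promise(resolve => setTimeout(() => resolve({ value: 5 }), 1000));

loadData().then(data => console.log(data.value)); // runs only once data exists
```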
Share vs ShareReplay
want to avoid the risk one of these behaviors is a mistake and gets changed in the future.
The real risk here isn't just in how the library implements the difference; it's a language-level risk as well. You're depending on asynchronous interleaving either way, and you run the risk of a bug that appears only once in a blue moon and can't easily be reproduced or tested.
The share operator has a ShareConfig (source)
export interface ShareConfig<T> {
  connector?: () => SubjectLike<T>;
  resetOnError?: boolean | ((error: any) => Observable<any>);
  resetOnComplete?: boolean | (() => Observable<any>);
  resetOnRefCountZero?: boolean | (() => Observable<any>);
}
If you use vanilla shareReplay(1), or share({ resetOnRefCountZero: false }), then you aren't relying on how events are ordered in the JS event loop.
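To see why resetOnRefCountZero matters, here is a plain-JS caricature of a refcounted connection. This is not the RxJS implementation, just a sketch of the idea; makeShared and resetWhenIdle are illustrative names:

```javascript
// A refcounted shared "connection". With resetWhenIdle = true, the last
// release tears the connection down immediately, so a consumer that
// releases and then re-acquires depends on interleaving to reuse it.
// With resetWhenIdle = false, the connection survives idle periods.
function makeShared(connect, resetWhenIdle) {
  let connection = null;
  let refCount = 0;
  return {
    acquire() {
      if (!connection) connection = connect();
      refCount += 1;
      return connection;
    },
    release() {
      refCount -= 1;
      if (refCount === 0 && resetWhenIdle) connection = null;
    },
  };
}
```

With resetWhenIdle false, a release followed by an acquire returns the same connection, with no dependence on how the two calls interleave; with true, every idle period forces a reconnect.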

Related

Is there a well-established way to update local state immediately without waiting for an API response in React/Redux?

TL;DR: Is there some well-known solution out there using React/Redux for being able to offer a snappy and immediately responsive UI, while keeping an API/database up to date with changes that can gracefully handle failed API requests?
I'm looking to implement an application with a "card view" using https://github.com/atlassian/react-beautiful-dnd where a user can drag and drop cards to create groups. As a user creates, modifies, or breaks up groups, I'd like to make sure the API is kept up to date with the user's actions.
HOWEVER, I don't want to have to wait for an API response to set the state before updating the UI.
I've searched far and wide, but keep coming upon things such as https://redux.js.org/tutorials/fundamentals/part-6-async-logic which suggests that the response from the API should update the state.
For example:
export default function todosReducer(state = initialState, action) {
  switch (action.type) {
    case 'todos/todoAdded': {
      // Return a new todos state array with the new todo item at the end
      return [...state, action.payload]
    }
    // omit other cases
    default:
      return state
  }
}
As a general concept, this has always seemed odd to me, since it's the local application telling the API what needs to change; we already have the data before the server even responds. This may not always be the case, such as when creating a new object and wanting the server to dictate a new "unique id" of some sort, but it seems like there should be a way to just "fill in the blanks" once the server does respond with any missing data. In the case of an UPDATE vs. a CREATE, there's nothing the server tells us that we don't already know.
This may work fine for a small and lightweight application, but if I'm looking at API responses in the range of 500-750ms on average, the user experience is going to just be absolute garbage.
It's simple enough to create two actions, one that will handle updating the state and another to trigger the API call, but what happens if the API returns an error or a network request fails and we need to revert?
I tested how Trello implements this sort of thing by cutting my network connection and creating a new card. It eagerly creates the card immediately upon submission, and then removes the card once it realizes that it cannot update the server. This is the sort of behavior I'm looking for.
I looked into https://redux.js.org/recipes/implementing-undo-history, which offers a way to "rewind" state, but being able to implement this for my purposes would need to assume that subsequent API calls all resolve in the same order that they were called - which obviously may not be the case.
As of now, I'm resigning myself to the fact that I may need to just follow the established limited pattern, and lock the UI until the API request completes, but would love a better option if it exists within the world of React/Redux.
The approach you're talking about is called "optimistic" network handling -- assuming that the server will receive and accept what the client is doing. This works in cases where you don't need server-side validation to determine if you can, say, create or update an object. It's also equally easy to implement using React and Redux.
Normally, with React and Redux, the update flow is as follows:
The component dispatches an async action creator
The async action creator runs its side-effect (calling the server), and waits for the response.
The async action creator, with the result of the side-effect, dispatches an action to call the reducer
The reducer updates the state, and the component is re-rendered.
Some example code to illustrate (I'm pretending we're using redux-thunk here):
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  });
  return (<div />);
};
// ... in actions.js
export const UpdateData = (data) => async (dispatch, getStore) => {
  const results = await myApi.postData(data);
  dispatch(UpdateMyStore(results));
};
However, you can easily flip the order your asynchronous code runs in by simply not waiting for your asynchronous side effect to resolve. In practical terms, this means you don't wait for your API response. For example:
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  });
  return (<div />);
};
// ... in actions.js
export const UpdateData = (data) => async (dispatch, getStore) => {
  // we're not waiting for the api response anymore,
  // we just dispatch whatever data we want to our reducer
  dispatch(UpdateMyStore(data));
  myApi.postData(data);
};
One last thing, though: doing things this way, you will want to put some reconciliation mechanism in place to make sure the client knows when server calls fail, and that it retries or notifies the user, etc.
The key phrase here is "optimistic updates", which is a general pattern for updating the "local" state on the client immediately with a given change under the assumption that any API request will succeed. This pattern can be implemented regardless of what actual tool you're using to manage state on the client side.
It's up to you to define and implement what appropriate changes would be if the network request fails.
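One minimal sketch of the rollback side, using plain functions rather than any particular Redux setup (optimisticAdd, getState, setState, and the api shape are all illustrative names, not an established API):

```javascript
// Optimistic update: apply the change locally right away, and undo it
// if the (assumed) API call eventually fails.
const addTodo = (todos, todo) => [...todos, todo];
const removeTodo = (todos, id) => todos.filter(t => t.id !== id);

async function optimisticAdd(getState, setState, api, todo) {
  setState(addTodo(getState(), todo));         // 1. update UI immediately
  try {
    await api.createTodo(todo);                // 2. sync with the server
  } catch (err) {
    setState(removeTodo(getState(), todo.id)); // 3. roll back on failure
  }
}
```

This mirrors the Trello behaviour described above: the item appears immediately and is removed only if the server call fails.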

Why would I use RxJS interval() or timer() polling instead of window.setInterval()?

Use case: call a function every minute (60,000 ms) that dispatches a store action to fetch the lastUpdated status of items; upon response and filtering, the store is updated, and the updated store is read as an observable and displayed in the view. This needs to happen for as long as the web app is open (i.e., indefinitely).
Currently, I'm using this:
this.refreshDate = window.setInterval(
  () => this.store.dispatch(new FetchLastUpdate()),
  60000
);
And when the view is destroyed/dismounted, I clear the interval like so:
if (this.refreshDate) {
  clearInterval(this.refreshDate);
}
Is this efficient/effective, or is it troublesome?
Why would I want to use an RxJS polling strategy like:
interval(60000)
  .pipe(
    startWith(0),
    switchMap(() => this.store.dispatch(new FetchLastUpdate()))
  );
Or
timer(0, 60000)
  .pipe(
    switchMap(() => this.store.dispatch(new FetchLastUpdate()))
  );
TL;DR: window.setInterval() vs. RxJS timer()/interval()
Conclusion/answers (for ease of research):
There is great benefit to using RxJS functions to set an interval or perform polling, these benefits are explained in the selected answer but also in comments, but it is concluded (by discussions in the comments) that for the very simple requirement defined in the "Use case" section at the beginning of this post, it is unnecessary to use RxJS, and in fact if you are not using RxJS in any other part of your program, do not import it just for this, however in my case, I had already imported and used RxJS elsewhere.
Advantage of RxJS:
Laziness
You can create your Observables, and until you call subscribe nothing happens. Observable = pure function. This gives you more control, easier reasoning, and allows for the next point...
Composability
You can combine interval/timer with other operators, creating custom logic very easily and in a unified way: for example, you can map, repeat, retry, take, etc. (see all operators).
Error Handling
With plain timers, in case of an error you are responsible for calling clearTimeout/clearInterval; Observables handle this for you, resulting in cleaner code and fewer memory-leak bugs.
Of course anything you do with Observables you can also do without Observables - but that's not the point. Observables are here to make your life easier.
Also note that interval/timer are not good observable factories for polling, because they do not "wait" for your async action to finish (you can end up with multiple async calls running over each other). For that I tend to use defer and repeatWhen, like this:
defer(() => doAsyncAction())
  .pipe(
    repeatWhen(notifications => notifications.pipe(delay(1234)))
  );
window.setInterval doesn't care about your callback's state: it fires at the given interval regardless of how the previous callback's execution went, and the only way to stop it or skip a tick is to clear the interval or reinitialize it.
On the other hand, the RxJS Observable-based solutions (interval, timer) let you pipe in conditional operators (takeWhile or skipWhile, for example), so you can add a stop, or implement stop-start logic, just by flipping a boolean flag, instead of writing complicated logic to clear the interval and then recreate it.
And because they are observables, you can listen to them all across the application and attach any number of listeners.
Error handling is better too: you subscribe to all successes and handle everything in a catch callback.
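The "wait for the async action to finish before the next poll" behaviour of the defer/repeatWhen pattern above can also be sketched without RxJS, as an async loop (startPolling and poll are illustrative names, not a library API):

```javascript
// Sequential polling: each cycle waits for the async action to finish,
// then pauses `intervalMs` before the next one, so polls never overlap.
// Returns a stop() handle, analogous to unsubscribing.
function startPolling(poll, intervalMs) {
  let stopped = false;
  (async () => {
    while (!stopped) {
      await poll();                                  // never overlaps itself
      await new Promise(r => setTimeout(r, intervalMs));
    }
  })();
  return () => { stopped = true; };
}
```

Unlike setInterval, a slow poll here simply delays the next cycle instead of piling up concurrent requests.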

Functional Programming and async/promises

I'm refactoring some old node modules into a more functional style. I'm like a second year freshman when it comes to FP :) Where I keep getting hung up is handling large async flows. Here is an example where I'm making a request to a db and then caching the response:
// Some external xhr/promise lib
const fetchFromDb = make => {
  return new Promise(resolve => {
    console.log('Simulate async db request...'); // just simulating an async request/response here.
    setTimeout(() => {
      console.log('Simulate db response...');
      resolve({ make: 'toyota', data: 'stuff' });
    }, 100);
  });
};

// memoized fn
// this caches the response to getCarData(x) so that whenever it is invoked with 'x' again, the same response gets returned.
const getCarData = R.memoizeWith(R.identity, (carMake, response) => response.data);

// Is this function pure? Or is it setting something outside its scope (i.e., getCarData)?
const getCarDataFromDb = (carMake) => {
  return fetchFromDb(carMake).then(getCarData.bind(null, carMake));
  // Note: This return statement is essentially the same as:
  // return fetchFromDb(carMake).then(result => getCarData(carMake, result));
};

// Initialize the request for 'toyota' data
const toyota = getCarDataFromDb('toyota'); // must be called no matter what

// Approach #1 - Just rely on the thenable
console.log(`Value of toyota is: ${toyota.toString()}`);
toyota.then(d => console.log(`Value in thenable: ${d}`)); // -> Value in thenable: stuff

// Approach #2 - Just make sure you do not call this fn before the db response.
setTimeout(() => {
  const car = getCarData('toyota'); // so nice!
  console.log(`later, car is: ${car}`); // -> 'later, car is: stuff'
}, 200);

<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
I really like memoization for caching large JSON objects and other computed properties. But with a lot of asynchronous requests whose responses are dependent on each other for doing work, I'm having trouble keeping track of what information I have and when. I want to get away from using promises so heavily to manage flow. It's a node app, so making things synchronous to ensure availability was blocking the event loop and really affecting performance.
I prefer approach #2, where I can get the car data simply with getCarData('toyota'). But the downside is that I have to be sure that the response has already been returned. With approach #1 I'll always have to use a thenable, which alleviates the issue with approach #2 but introduces its own problems.
Questions:
Is getCarDataFromDb a pure function as it is written above? If not, how is that not a side-effect?
Is using memoization in this way an FP anti-pattern? That is, calling it from a thenable with the response so that future invocations of that same method return the cached value?
Question 1
It's almost a philosophical question whether there are side-effects here. Calling it does update the memoization cache, but that itself has no observable side-effects. So I would say that this is effectively pure.
Update: a comment pointed out that as this calls IO, it can never be pure. That is correct. But that's the essence of this behavior. It's not meaningful as a pure function. My answer above is only about side-effects, and not about purity.
Question 2
I can't speak for the whole FP community, but I can tell you that the Ramda team (disclaimer: I'm a Ramda author) prefers to avoid Promises, preferring more lawful types such as Futures or Tasks. But the same questions you have here would be in play with those types substituted for Promises. (More on these issues below.)
In General
There is a central point here: if you're doing asynchronous programming, it will spread to every bit of the application that touches it. There is nothing you can do that changes this basic fact. Using Promises/Tasks/Futures helps avoid some of the boilerplate of callback-based code, but it requires you to put the post-response/rejection code inside a then/map function. Using async/await helps you avoid some of the boilerplate of Promise-based code, but it requires you to put the post-response/rejection code inside async functions. And if one day we layer something else on top of async/await, it will likely have the same characteristics.
(While I would suggest that you look at Futures or Tasks instead of Promises, below I will only discuss Promises. The same ideas should apply regardless.)
My suggestion
If you're going to memoize anything, memoize the resulting Promises.
However you deal with your asynchrony, you will have to put the code that depends on the result of an asynchronous call into a function. I assume that the setTimeout of your second approach was just for demonstration purposes: using a timeout to wait for a DB result over the network is extremely error-prone. But even with setTimeout, the rest of your code is running from within the setTimeout callback.
So rather than trying to separate the cases for when your data has already been cached and when it hasn't, simply use the same technique everywhere: myPromise.then(... my code ... ). That could look something like this:
// getCarData :: String -> Promise AutoInfo
const getCarData = R.memoizeWith(R.identity, make => new Promise(resolve => {
  console.log('Simulate async db request...')
  setTimeout(() => {
    console.log('Simulate db response...')
    resolve({ make: 'toyota', data: 'stuff' });
  }, 100)
}))

getCarData('toyota').then(carData => {
  console.log('now we can go', carData)
  // any code which depends on carData
})

// later
getCarData('toyota').then(carData => {
  console.log('now it is cached', carData)
})

<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
In this approach, whenever you need car data, you call getCarData(make). Only the first time will it actually call the server. After that, the Promise is served out of the cache. But you use the same structures everywhere to deal with it.
I see only one reasonable alternative. I couldn't tell whether your discussion about having to wait for the data before making the remaining calls means that you could pre-fetch your data. If so, there is one additional possibility, one which would let you skip the memoization as well:
// getCarData :: String -> Promise AutoInfo
const getCarData = make => new Promise(resolve => {
  console.log('Simulate async db request...')
  setTimeout(() => {
    console.log('Simulate db response...')
    resolve({ make: 'toyota', data: 'stuff' });
  }, 100)
})

const makes = ['toyota', 'ford', 'audi']

Promise.all(makes.map(getCarData)).then(allAutoInfo => {
  const autos = R.zipObj(makes, allAutoInfo)
  console.log('cooking with gas', autos)
  // remainder of app that depends on auto data here
})

<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
But this one means that nothing will be available until all your data has been fetched. That may or may not be all right with you, depending on all sorts of factors. And for many situations, it's not even remotely possible or desirable. But it is possible that yours is one where it is helpful.
One technical point about your code:
const getCarDataFromDb = (carMake) => {
  return fetchFromDb(carMake).then(getCarData.bind(null, carMake));
};
Is there any reason to use getCarData.bind(null, carMake) instead of result => getCarData(carMake, result)? The latter seems much more readable.
Is getCarDataFromDb a pure function as it is written above?
No. Pretty much anything that uses I/O is impure. The data in the DB could change, the request could fail, so it doesn't give any reliable guarantee that it will return consistent values.
Is using memoization in this way an FP anti-pattern? That is, calling it from a thenable with the response so that future invocations of that same method return the cached value?
It's definitely an asynchrony antipattern. In your approach #2 you are creating a race condition: the operation will succeed if the DB query completes in less than 200 ms, and fail if it takes longer than that. You've labeled a line in your code "so nice!" because you're able to retrieve data synchronously. That suggests to me that you're looking for a way to skirt the issue of asynchrony rather than facing it head-on.
The way you're using bind and "tricking" memoizeWith into storing a value you pass in after the fact also looks very awkward and unnatural.
It is possible to take advantage of caching and still use asynchrony in a more reliable way.
For example:
// Some external xhr/promise lib
const fetchFromDb = make => {
  return new Promise(resolve => {
    console.log('Simulate async db request...')
    setTimeout(() => {
      console.log('Simulate db response...')
      resolve({ make: 'toyota', data: 'stuff' });
    }, 2000);
  });
};

const getCarDataFromDb = R.memoizeWith(R.identity, fetchFromDb);

// Initialize the request for 'toyota' data
const toyota = getCarDataFromDb('toyota'); // must be called no matter what

// Finishes after two seconds
toyota.then(d => console.log(`Value in thenable: ${d.data}`));

// Wait for 5 seconds before getting Toyota data again.
// This time, there is no 2-second wait before the data comes back.
setTimeout(() => {
  console.log('About to get Toyota data again');
  getCarDataFromDb('toyota').then(d => console.log(`Value in thenable: ${d.data}`));
}, 5000);

<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
The one potential pitfall here is that if a request should fail, you'll be stuck with a rejected promise in your cache. I'm not sure what would be the best way to address that, but you'd surely need some way of invalidating that part of the cache or implementing some sort of retry logic somewhere.
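One way to address that pitfall is to cache the promise but evict the cache entry when it rejects, so the next call retries. This is a hand-rolled sketch (memoizePromise is an illustrative name, not a Ramda function):

```javascript
// Memoize a promise-returning function by its first argument, but
// delete the cached entry if the promise rejects, so failures retry
// instead of being served out of the cache forever.
function memoizePromise(fn) {
  const cache = new Map();
  return key => {
    if (!cache.has(key)) {
      const p = fn(key);
      p.catch(() => cache.delete(key)); // evict failed lookups
      cache.set(key, p);
    }
    return cache.get(key);
  };
}
```

Callers still see the original rejection on the first call; only the cache entry is invalidated. A fuller version might add retry limits or delay before allowing a retry.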

in rx.js, make source.subscribe await its observer using async/await

I have an observer that goes like this.
var source = rx.Observable.fromEvent(eventAppeared.emitter, 'event')
  .filter(mAndF.isValidStreamType)
  .map(mAndF.transformEvent)
  .share();
I then share it with a number of subscribers. These subscribers all take the event and perform some async operations on them.
so my subscribers are like
source.subscribe(async function(x) {
  const func = handler[x.eventName];
  if (func) {
    await eventWorkflow(x, handler.handlerName, func.bind(handler));
  }
});
There's a bit of extra stuff in there but I think the intent is clear.
I need every "handler" that handles this particular event to process it and block until it finishes, then move on to the next event.
What I've found with the above code is that it just calls the handler without awaiting it, and my handlers are stepping on each other.
I've read a fair number of posts, but I can't really see how to do it. Most of them talk about making the observable awaitable, but that's not what I need, is it? It seems like what I need is to make the subscription await each observer callback. I can't find anything on that, which usually means it's either super easy or a super ridiculous thing to do. I'm hoping for the former.
Please let me know if you need any further clarification.
---update---
what I have realized is that what I need is a FIFO (first in, first out) queue or buffer, sometimes referred to as backpressure. I need all messages processed in order, and only when the preceding message is done processing.
---end update---
At first I thought it was because I was using rx 2.5.3, but I just upgraded to 4.1.0 and it's still not synchronous.
There's no way to tell a source observable to put events on hold from within a subscribe; it just lets us "observe" incoming events. Asynchronous things should be managed via Rx operators.
For example, to let your asynchronous handlers process events sequentially, you could try to use concatMap operator:
source
  .concatMap(x => {
    const func = handler[x.eventName];
    return func ?
      eventWorkflow(x, handler.handlerName, func.bind(handler)) :
      Rx.Observable.empty();
  })
  .subscribe();
Note that in the example above await is not needed, as concatMap knows how to deal with the promise that eventWorkflow returns: concatMap converts it to an observable and waits until that observable completes before proceeding to the next event.
So ultimately what I have found is that what I need is more accurately described as a FIFO queue or buffer: I need each message to wait until the previous message is done processing.
I'm also pretty certain that rxjs doesn't offer this (sometimes referred to as backpressure), so what I have done is to just import a FIFO queue and hook it to each subscriber.
I am using concurrent-queue which so far seems to be working pretty well.
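For reference, the FIFO behaviour itself is small enough to sketch without a library, by chaining each task onto the previous task's promise. This is an illustrative sketch, not the concurrent-queue API:

```javascript
// A minimal FIFO: each enqueued async task starts only after the
// previous one has settled, preserving arrival order.
function makeFifo() {
  let tail = Promise.resolve();
  return task => {
    const run = tail.then(task, task); // wait even if the prior task failed
    tail = run.catch(() => {});        // keep the chain alive after errors
    return run;
  };
}
```

This is essentially what concatMap does internally for promise-returning projections: serialize the work, in order, one at a time.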

Why is complete not invoked after performing a concatenation?

I need to query a device multiple times. Every query needs to be asynchronous and the device doesn't support simultaneous queries at a time.
Moreover, once it is queried, it can not be queried again immediately after. It needs at least a 1 second pause to work properly.
My two queries, performed by saveClock() and saveConfig(), return a Promise and both resolve by returning undefined as expected.
In the following code, why does removing take() prevent toArray() from being called?
What's happening here, is there a better way to achieve the same behavior?
export const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .map(action => {
      // access store and create object data
      // ...
      return data;
    })
    .mergeMap(data =>
      Rx.Observable.from([
        Rx.Observable.of(data).mergeMap(data => saveClock(data.id, data.clock)),
        Rx.Observable.timer(1000),
        Rx.Observable.of(data).mergeMap(data => saveConfig(data.id, data.config)),
        Rx.Observable.of(data.id)
      ])
    )
    .concatAll()
    .take(4)
    .toArray()
    // [undefined, 0, undefined, "id"]
    .map(x => { type: COMPLETED, id: x[3] });
There are a couple things I see:
Your final .map() is missing parentheses. In its current form it's a syntax error, and a subtle change could accidentally make it a labeled statement instead of returning an object. Because it's a syntax error, I imagine this is just a bug in the post, not in your code (which wouldn't even run), but double-check!
// before
.map(x => { type: COMPLETED, id: x[3] });
// after
.map(x => ({ type: COMPLETED, id: x[3] }));
With that fixed, the example runs with a simple redux-observable test case: http://jsbin.com/hunale/edit?js,output So if there's nothing notable I did differently from you, the problem appears to be in code not provided. Feel free to add additional insight or, even better, reproduce it in a JSBin/git repo for us.
One thing you didn't mention, but which is very noteworthy, is that in redux-observable your epics will typically be long-lived "process managers". This epic will actually only process one of these saves, then complete(), which is probably not what you want. Can the user only save something once per application boot? Seems unlikely.
Instead, you'll want to keep the top-level stream your epic returns alive and listening for future actions by encapsulating this logic inside the mergeMap. The take(4) and passing the data.id then become extraneous:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .mergeMap(data =>
      Rx.Observable.from([
        Rx.Observable.of(data).mergeMap(data => saveClock(data.id, data.clock)),
        Rx.Observable.timer(1000),
        Rx.Observable.of(data).mergeMap(data => saveConfig(data.id, data.config))
      ])
      .concatAll()
      .toArray()
      .map(() => ({ type: COMPLETED, id: data.id }))
    );
This separation of streams is described by Ben Lesh in his recent AngularConnect talks, in the context of errors but it's still applicable: https://youtu.be/3LKMwkuK0ZE?t=20m (don't worry, this isn't Angular specific!)
Next, I wanted to share some unsolicited refactoring advice that may make your life easier, but certainly this is opinionated so feel free to ignore:
I would refactor to more accurately reflect the order of events visually, and reduce the complexity:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .mergeMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
    );
Here we're consuming the Promise returned by saveClock, delaying its output by 1000 ms, then mergeMapping the result to a call to saveConfig(), which also returns a Promise that will be consumed, and finally mapping the result of that to our COMPLETED action.
Finally, keep in mind that if your epic does stay alive and is long-lived, there's nothing in it as-is to stop it from receiving multiple SAVE requests while others are still in flight or have not yet exhausted the required 1000 ms delay between requests. That is, if that 1000 ms space between requests is indeed required, your epic itself does not prevent your UI code from breaking that requirement. In that case, you may want to consider adding a more complex buffered backpressure mechanism, for example using the .zip() operator with a BehaviorSubject.
http://jsbin.com/waqipol/edit?js,output
const saveEpic = (action$, store) => {
  // used to control how many we want to take,
  // the rest will be buffered by .zip()
  const requestCount$ = new Rx.BehaviorSubject(1)
    .mergeMap(count => new Array(count));

  return action$.ofType(SAVE)
    .zip(requestCount$, action => action)
    .mergeMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
        // we're ready to take the next one, when available
        .do(() => requestCount$.next(1))
    );
};
This makes it so that save requests which come in while we're still processing an existing one are buffered, and we take only one at a time. Keep in mind, though, that this is an unbounded buffer: the queue of pending actions can grow without limit if actions arrive faster than the buffer is flushed. This is unavoidable unless you adopt a strategy for lossy backpressure, like dropping requests that overlap, etc.
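If lossy backpressure is acceptable, the buffer can be bounded by dropping new arrivals when it is full. Here is a plain-JS sketch of that idea, independent of RxJS (makeBoundedWorker and its parameters are illustrative names):

```javascript
// Bounded FIFO worker: queues up to `limit` pending items, processes
// them one at a time, and drops anything that arrives while the queue
// is full (lossy backpressure). Returns an offer(item) function that
// reports whether the item was accepted.
function makeBoundedWorker(handleItem, limit) {
  const queue = [];
  let busy = false;
  const drain = async () => {
    busy = true;
    while (queue.length) await handleItem(queue.shift());
    busy = false;
  };
  return item => {
    if (queue.length >= limit) return false; // drop overflow
    queue.push(item);
    if (!busy) drain();
    return true;
  };
}
```

The trade-off is explicit: memory use is bounded, but callers must handle a rejected offer (e.g. by disabling the save button or showing a "try again" notice).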
If you have other epics with overlapping requirements not to send requests more than once a second, you would need to create some sort of single supervisor that makes this guarantee across all the epics.
This may all seem very complex, but perhaps ironically this is much easier to do in RxJS than with traditional imperative code. The hardest part is actually knowing the patterns.
