Functional Programming and async/promises - javascript

I'm refactoring some old node modules into a more functional style. I'm like a second year freshman when it comes to FP :) Where I keep getting hung up is handling large async flows. Here is an example where I'm making a request to a db and then caching the response:
// Some external xhr/promise lib
const fetchFromDb = make => {
  return new Promise(resolve => {
    console.log('Simulate async db request...'); // just simulating an async request/response here
    setTimeout(() => {
      console.log('Simulate db response...');
      resolve({ make: 'toyota', data: 'stuff' });
    }, 100);
  });
};
// memoized fn
// this caches the response to getCarData(x) so that whenever it is invoked with 'x' again, the same response gets returned.
const getCarData = R.memoizeWith(R.identity, (carMake, response) => response.data);
// Is this function pure? Or is it setting something outside the scope (i.e., getCarData)?
const getCarDataFromDb = (carMake) => {
  return fetchFromDb(carMake).then(getCarData.bind(null, carMake));
  // Note: This return statement is essentially the same as:
  // return fetchFromDb(carMake).then(result => getCarData(carMake, result));
};
// Initialize the request for 'toyota' data
const toyota = getCarDataFromDb('toyota'); // must be called no matter what
// Approach #1 - Just rely on thenable
console.log(`Value of toyota is: ${toyota.toString()}`);
toyota.then(d => console.log(`Value in thenable: ${d}`)); // -> Value in thenable: stuff
// Approach #2 - Just make sure you do not call this fn before db response.
setTimeout(() => {
  const car = getCarData('toyota'); // so nice!
  console.log(`later, car is: ${car}`); // -> 'later, car is: stuff'
}, 200);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
I really like memoization for caching large JSON objects and other computed properties. But with a lot of asynchronous requests whose responses are dependent on each other for doing work, I'm having trouble keeping track of what information I have and when. I want to get away from using promises so heavily to manage flow. It's a node app, so making things synchronous to ensure availability was blocking the event loop and really affecting performance.
I prefer approach #2, where I can get the car data simply with getCarData('toyota'). But the downside is that I have to be sure that the response has already been returned. With approach #1 I'll always have to use a thenable which alleviates the issue with approach #2 but introduces its own problems.
Questions:
Is getCarDataFromDb a pure function as it is written above? If so, how is updating the memoization cache not a side-effect?
Is using memoization in this way an FP anti-pattern? That is, calling it from a thenable with the response so that future invocations of that same method return the cached value?

Question 1
It's almost a philosophical question whether there are side-effects here. Calling it does update the memoization cache, but that itself has no observable effect on the rest of the program. So I would say that this is effectively pure.
Update: a comment pointed out that since this performs IO, it can never be pure. That is correct, but IO is the essence of this function's behavior; it's not meaningful as a pure function. My answer above is only about side-effects, not about purity.
Question 2
I can't speak for the whole FP community, but I can tell you that the Ramda team (disclaimer: I'm a Ramda author) prefers to avoid Promises, preferring more lawful types such as Futures or Tasks. But the same questions you have here would be in play with those types substituted for Promises. (More on these issues below.)
In General
There is a central point here: if you're doing asynchronous programming, it will spread to every bit of the application that touches it. There is nothing you can do that changes this basic fact. Using Promises/Tasks/Futures helps avoid some of the boilerplate of callback-based code, but it requires you to put the post-response/rejection code inside a then/map function. Using async/await helps you avoid some of the boilerplate of Promise-based code, but it requires you to put the post-response/rejection code inside async functions. And if one day we layer something else on top of async/await, it will likely have the same characteristics.
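For instance, a minimal sketch of the async/await version of the same flow (reusing the question's fetchFromDb; the function name showCarData is mine): the code that depends on the response still has to live inside an async function.
// Sketch: async/await doesn't remove the asynchrony, it only restyles it.
async function showCarData(make) {
  const response = await fetchFromDb(make); // still asynchronous under the hood
  console.log(response.data);              // post-response code lives in an async function
}
showCarData('toyota'); // itself returns a Promise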
(While I would suggest that you look at Futures or Tasks instead of Promises, below I will only discuss Promises. The same ideas should apply regardless.)
My suggestion
If you're going to memoize anything, memoize the resulting Promises.
However you deal with your asynchrony, you will have to put the code that depends on the result of an asynchronous call into a function. I assume that the setTimeout of your second approach was just for demonstration purposes: using timeout to wait for a DB result over the network is extremely error-prone. But even with setTimeout, the rest of your code is running from within the setTimeout callback.
So rather than trying to separate the cases for when your data has already been cached and when it hasn't, simply use the same technique everywhere: myPromise.then(... my code ... ). That could look something like this:
// getCarData :: String -> Promise AutoInfo
const getCarData = R.memoizeWith(R.identity, make => new Promise(resolve => {
  console.log('Simulate async db request...')
  setTimeout(() => {
    console.log('Simulate db response...')
    resolve({ make, data: 'stuff' })
  }, 100)
}))
getCarData('toyota').then(carData => {
  console.log('now we can go', carData)
  // any code which depends on carData
})

// later
getCarData('toyota').then(carData => {
  console.log('now it is cached', carData)
})
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
In this approach, whenever you need car data, you call getCarData(make). Only the first time will it actually call the server. After that, the Promise is served out of the cache. But you use the same structures everywhere to deal with it.
I only see one reasonable alternative. I couldn't tell whether your discussion about having to wait for the data before making the remaining calls means that you would be able to pre-fetch your data. If that's the case, then there is one additional possibility, one which would allow you to skip the memoization as well:
// getCarData :: String -> Promise AutoInfo
const getCarData = make => new Promise(resolve => {
  console.log('Simulate async db request...')
  setTimeout(() => {
    console.log('Simulate db response...')
    resolve({ make, data: 'stuff' })
  }, 100)
})
const makes = ['toyota', 'ford', 'audi']

Promise.all(makes.map(getCarData)).then(allAutoInfo => {
  const autos = R.zipObj(makes, allAutoInfo)
  console.log('cooking with gas', autos)
  // remainder of app that depends on auto data here
})
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
But this one means that nothing will be available until all your data has been fetched. That may or may not be all right with you, depending on all sorts of factors. And for many situations, it's not even remotely possible or desirable. But it is possible that yours is one where it is helpful.
One technical point about your code:
const getCarDataFromDb = (carMake) => {
  return fetchFromDb(carMake).then(getCarData.bind(null, carMake));
};
Is there any reason to use getCarData.bind(null, carMake) instead of response => getCarData(carMake, response)? The latter seems much more readable.

Is getCarDataFromDb a pure function as it is written above?
No. Pretty much anything that uses I/O is impure. The data in the DB could change, the request could fail, so it doesn't give any reliable guarantee that it will return consistent values.
Is using memoization in this way an FP anti-pattern? That is, calling it from a thenable with the response so that future invocations of that same method return the cached value?
It's definitely an asynchrony antipattern. In your approach #2 you are creating a race condition where the operation will succeed if the DB query completes in less than 200 ms, and fail if it takes longer than that. You've labeled a line in your code "so nice!" because you're able to retrieve data synchronously. That suggests to me that you're looking for a way to skirt the issue of asynchrony rather than facing it head-on.
The way you're using bind and "tricking" memoizeWith into storing the value you're passing into it after the fact also looks very awkward and unnatural.
It is possible to take advantage of caching and still use asynchrony in a more reliable way.
For example:
// Some external xhr/promise lib
const fetchFromDb = make => {
  return new Promise(resolve => {
    console.log('Simulate async db request...')
    setTimeout(() => {
      console.log('Simulate db response...')
      resolve({ make: 'toyota', data: 'stuff' });
    }, 2000);
  });
};
const getCarDataFromDb = R.memoizeWith(R.identity, fetchFromDb);
// Initialize the request for 'toyota' data
const toyota = getCarDataFromDb('toyota'); // must be called no matter what
// Finishes after two seconds
toyota.then(d => console.log(`Value in thenable: ${d.data}`));
// Wait for 5 seconds before getting Toyota data again.
// This time, there is no 2-second wait before the data comes back.
setTimeout(() => {
  console.log('About to get Toyota data again');
  getCarDataFromDb('toyota').then(d => console.log(`Value in thenable: ${d.data}`));
}, 5000);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
The one potential pitfall here is that if a request should fail, you'll be stuck with a rejected promise in your cache. I'm not sure what would be the best way to address that, but you'd surely need some way of invalidating that part of the cache or implementing some sort of retry logic somewhere.
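One way to handle that (a sketch of my own, not from the answer above: a hand-rolled Map cache instead of R.memoizeWith, since memoizeWith doesn't expose its cache for invalidation):
// Sketch: memoize the Promise, but evict the cache entry on rejection
// so that a later call retries instead of reusing a rejected Promise.
const memoizePromise = fn => {
  const cache = new Map();
  return key => {
    if (!cache.has(key)) {
      const pending = fn(key).catch(err => {
        cache.delete(key); // invalidate the failed entry
        throw err;         // re-throw so callers still see the failure
      });
      cache.set(key, pending);
    }
    return cache.get(key);
  };
};

const getCarDataFromDb = memoizePromise(fetchFromDb);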

Related

RxJS share vs shareReplay differences

There seems to be an odd discrepancy in how share and shareReplay (with refCount: true) unsubscribe.
Consider the following (can paste into rxviz.com):
const { interval } = Rx;
const { take, shareReplay, share, timeoutWith, startWith, finalize } = RxOperators;

const shareReplay$ = interval(2000).pipe(
  finalize(() => console.log('[finalize] Called on shareReplay$')),
  take(1),
  shareReplay({ refCount: true, bufferSize: 0 }));

shareReplay$.pipe(
  timeoutWith(1000, shareReplay$.pipe(startWith('X'))),
)

const share$ = interval(2000).pipe(
  finalize(() => console.log('[finalize] Called on share$')),
  take(1),
  share());

share$.pipe(
  timeoutWith(1000, share$.pipe(startWith('X'))),
)
The output from shareReplay$ will be -X-0- while the output from share$ will be -X--0. It appears as though share unsubscribes from the source before timeoutWith can re-subscribe, while shareReplay keeps the shared subscription around long enough for it to be re-used.
I want to use this to add a local timeout to an RPC (while keeping the call open), and any re-subscribe would be disastrous, so I want to avoid the risk that one of these behaviors is a mistake and gets changed in the future.
I could use race(), merging the rpc call with a delayed startWith (so they both subscribe at the same time), but that would be more code and operators.
EDIT: One possible solution is to merge two subscriptions: one to the shared request and one to a delayed stream that takes until the shared stream emits:
merge(share$, of('still working...').pipe(delay(1000), takeUntil(share$)));
This way the shared stream is subscribed to at the same time, so there is no "grey area" where one operator unsubscribes as a child subscribes. (I'll turn this into an answer unless someone comes up with a better suggestion, or can explain the intended differences between share and shareReplay.)
Quick Aside:
I suspect you're not going to get many answers until you describe the behavior you're after.
use this to add a local timeout on an RPC (while keeping the call open)
I'm not sure what keeping an RPC call open entails. It sounds like you want something that cancels the request within RxJS but doesn't cancel the in-flight RPC. That feels like a domain mismatch. Why not keep your observables aligned with the semantics of the calls they represent?
Toward a solution
Having seen your update, I suspect you don't need a timeout. It looks like the behavior you're after can be achieved without unsubscribing from the observable representing your RPC. I'd argue that doing so simplifies your logic and makes it easier to maintain/extend in the future.
It also sidesteps all the async interleaving concerns you had earlier.
Here I'll take an educated guess at the behaviour you're after. It appears you want the following:
Take one value from the source (which happens to be an RPC in this case).
If that value takes more than 1s to arrive, emit "still working..." and continue to wait for one value from the source.
If that's the case, you don't need share at all. Normally a timeout means cancelling an in-flight request if it takes too long. Maybe you don't want a timeout at all; you want to stay subscribed to the source.
If that's the case, here's a way to do this:
const source$ = rpc(arg1, arg2);

// Create a unique token (unique by object identity).
// We'll embed a message inside so it's doing double duty.
const token = { a: "still working..." };

merge(
  source$,
  timer(1000).pipe(mapTo(token))
).pipe(
  // take(1), but not counting the token
  takeWhile(v => v === token, true),
  // Unwrap the token, emit the contained message
  map(v => v === token ? v.a : v)
);
On the other hand, if you know that the observable wrapping your RPC will never emit "still working...", then you don't need a unique token and you can simplify this to check the value directly:
merge(
  rpc(arg1, arg2),
  timer(1000).pipe(mapTo("still working..."))
).pipe(
  takeWhile(v => v === "still working...", true)
);
Timing Guarantees
JavaScript's runtime doesn't come with baked-in timing guarantees of any sort. In a single-threaded environment, that makes sense (you can start to do a bit better with web workers and such). If something compute-heavy happens, everything scheduled in that window just waits. Mostly, though, things happen in the expected order.
Regardless,
const hello = (name: string) => () => console.log(`Hello ${name}`);
setTimeout(hello("first"), 2000);
setTimeout(hello("second"), 2000);
In Node, V8, or SpiderMonkey you do preserve the order here. But what about this?
const hello = (name: string) => () => console.log(`Hello ${name}`);
setTimeout(hello("first"), 1);
setTimeout(hello("second"), 0);
Here you would assume "second" always comes first because it's scheduled a millisecond earlier. But if you run this in SpiderMonkey, the order depends on how busy the event loop is: it buckets short timeouts together, since a 0 ms timeout takes about 8 ms on average anyway.
Always make asynchronous dependencies explicit
In JavaScript, it's best practice never to make any timing dependencies implicit.
In the code below, we can reasonably expect that data will not be undefined when we call data.value, but that expectation relies implicitly on asynchronous interleaving:
let data;
setTimeout(() => {data = {value: 5};}, 1000);
setTimeout(() => console.log(data.value), 2000);
We really should make this dependency explicit, either by checking whether data is undefined or by restructuring our calls:
setTimeout(() => {
  const data = {value: 5};
  setTimeout(() => console.log(data.value), 1000);
}, 1000);
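Another way to make the dependency explicit (a sketch of my own, not from the answer above) is to have the producer return a promise, so the consumer visibly depends on it:
// Sketch: the dependency on `data` is now explicit in the chain.
const dataPromise = new Promise(resolve =>
  setTimeout(() => resolve({ value: 5 }), 1000)
);
dataPromise.then(data => console.log(data.value));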
Share vs ShareReplay
want to avoid the risk that one of these behaviors is a mistake and gets changed in the future.
The real risk here isn't even in how the library implements the difference. It's a language-level risk as well. You're depending on asynchronous interleaving either way. You run the risk of having a bug that only appears once in a blue moon and can't be easily re-created/tested etc.
The share operator has a ShareConfig (source)
export interface ShareConfig<T> {
  connector?: () => SubjectLike<T>;
  resetOnError?: boolean | ((error: any) => Observable<any>);
  resetOnComplete?: boolean | (() => Observable<any>);
  resetOnRefCountZero?: boolean | (() => Observable<any>);
}
If you use vanilla shareReplay(1) or share({resetOnRefCountZero: false}), then you aren't relying on how events are ordered in the JS event loop.
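For instance, a sketch of the RPC scenario above with a share that never resets (rpc, arg1, arg2 as in the earlier snippets; this assumes RxJS 7's share config):
// Sketch: the underlying subscription survives timeoutWith's
// unsubscribe/re-subscribe, so the RPC is never issued twice.
const share$ = rpc(arg1, arg2).pipe(
  share({ resetOnRefCountZero: false })
);

share$.pipe(
  timeoutWith(1000, share$.pipe(startWith('X')))
);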

How to control C# Task (async/await in same way as javascript Promise)?

I'm a js developer who jumped to C#. I'm learning async/await/Task in C# and can't understand how to control a C# Task.
In JavaScript I can save a Promise's resolve handler to call it later (and resolve the promise), for example to organize communication between some "badly connected" app parts (a request doesn't return a result; the result is sent separately and fires some "replyEvent"), and do something like this:
// javascript
const handlers = {}

// event firing on reply
const replyEventListener = (requestId, result) => {
  if (handlers[requestId]) {
    handlers[requestId](result)
    delete handlers[requestId]
  }
}

const requestFunction = (id, params) => new Promise(resolve => {
  // handler is saved to use later from "outside"
  handlers[id] = resolve;
  // do required request
  makeRequest(params)
})

// .. somewhere in code the request becomes simply awaitable
const resultOfRequest = await requestFunction(someUniqueId)
How can I do same with C# Task?
I think the philosophy behind Promise (in JS) and Task (in C#) is different.
We don't implement them exactly the same way.
Read the documentation on MSDN:
Task Class - Remarks
Task<TResult> Class
You must empty your cup to be able to fill it again.
A good article on MSDN is the following, which shows how Tasks can be used like Promises in the sense that you can start them and then await them later:
Asynchronous programming with async and await
If you want to await a Task in C# like you would a Promise in JS, you would do the following:
Create a method or function that returns a Task. Usually this will be a Task of some class object. In the article above, it references creating breakfast as an analogy, so you can define FryEggsAsync as a method that takes in an int number of how many eggs to fry and returns a Task<Egg>. The C# convention is to end any function or method name with Async to indicate that a Task is being returned.
Create your Task, which is similar to a Promise in JS, so you could do var eggsTask = FryEggsAsync(2), which would store a Task<Egg> in eggsTask, that you can then await or pass to other functions as needed.
To await your Task to resolve it and get the result, you would simply do await eggsTask.
You can also use Task.WhenAll to await multiple Tasks at once, similar to Promise.all in JS, by passing in an array of Tasks as an argument to that method call. See the Task.WhenAll documentation for some examples of this. One difference is that you can still read each result from its own task afterward, rather than only from an aggregated results array as with Promise.all, but if you need to await multiple Tasks then this is an easy way to do so.

How do Promises change the use of functions

I am having trouble finding a use for Promises. Wouldn't these 2 approaches below work the same exact way? Since the while loop in loopTest() is synchronous, the logStatement() function wouldn't run until it's complete anyway, so how would the 2nd approach be any different? Wouldn't it be pointless in waiting for it to resolve()?
1st approach:
let i = 0;

function loopTest() {
  while (i < 10000) {
    console.log(i)
    i++
  }
}

function logStatement() {
  console.log("Logging test")
}

loopTest();
logStatement();
2nd approach:
function loopTest() {
  return new Promise((resolve, reject) => {
    let i = 0;
    while (i < 10000) {
      console.log(i)
      i++
      if (i === 999) {
        resolve('I AM DONE')
      }
    }
  });
}

function logStatement() {
  console.log("Logging test")
}

loopTest().then(logStatement);
Promises don't make anything asynchronous,¹ so you're right, there's no point to using a promise in the code you've shown.
The purpose of promises is to provide a standard, composable means of observing the result of things that are already asynchronous (like ajax calls).
There are at least three massive benefits to having a standardized way to observe the results of asynchronous operations:
We can have standard semantics for consuming individual promises, rather than every API defining its own signature for callback functions. (Does it signal error with an initial parameter that's null on success, like Node.js? Does it call the callback with an object with a success flag? Or...)
We can have standard ways of composing/combining them, such as Promise.all, Promise.race, Promise.allSettled, etc.
We can have syntax to consume them with our usual control structures, which we have now in the form of async functions and await.
But again, throwing a promise at a synchronous process almost never does anything useful.²
¹ One very small caveat there: The handler functions to attach to a promise are always triggered asynchronously, whether the promise is already settled or not.
² Another small caveat: Sometimes, you have a synchronous result you want to include in a composition operation (Promise.all, etc.) with various asynchronous operations. In that case, wrapping the value in a promise that's instantly fulfilled is useful — and in fact, all the standard promise combinators (Promise.all, etc.) do that for you, as does await.
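For instance, Promise.all happily mixes plain values with promises (a small illustration of that caveat):
// A synchronous value included in a composition; Promise.all wraps it for us.
const cached = 42; // synchronous result we already have
const fetched = Promise.resolve('from the network'); // stand-in for a real ajax call
Promise.all([cached, fetched]).then(([a, b]) => console.log(a, b)); // 42 'from the network'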
There's no point in what you are doing, because your function body is just a blocking loop.
To get a benefit from Promises, use them with APIs that do something with IO, such as an HTTP request or reading a file from disk.
These APIs all traditionally used callbacks, and are now mostly Promise based.
Any function that uses a Promise-based function should itself also be Promise-based. This is why you see a lot of promises in modern code: a promise only has to be used at one level in a stack for the entire stack to be asynchronous in nature.
Is this a better example of how Promises are used? It's all I can think of that shows their use to me:
Version 1
function getData() {
  fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(data => data.json())
    .then(json => console.log(json))
}

function logInfo() {
  console.log("i am a logger")
}

getData()
logInfo()

// "i am a logger"
// {"test": "json"}
Version 2
function getData() {
  return fetch('https://jsonplaceholder.typicode.com/todos/1')
    .then(data => data.json())
    .then(json => console.log(json))
}

function logInfo() {
  console.log("i am a logger")
}

getData().then(logInfo);

// {"test": "json"}
// "i am a logger"
// waits for the API result to be logged, then logInfo runs and makes its log statement
There are definitely benefits to using Promises, but only in certain scenarios where their usage is viable.
Your example represents what would happen if you retrieved data from an external source synchronously: it would block the thread, preventing further code from executing until the loop terminates (I explain below why exactly that happens). Wrapping it in a promise gives no different output; the thread is still blocked, and the next message in the queue gets processed as normal right after the loop ends.
However, an implementation similar to the following could achieve a while loop that runs in a non-blocking manner; it's just an idea (I don't mean to derail this topic with setInterval's implementation details):
let f = () => {
  let tick = Date.now;
  let t = tick();
  let interval = setInterval(() => {
    if (tick() - t >= 3000) {
      console.log("stop");
      clearInterval(interval);
    }
  }, 0);
};

f()
console.log("start");
Basically the timing is checked/handled by the browser outside your script, and the callback is executed each time the specified delay elapses and the interval hasn't been cleared: after the call stack becomes empty (so UI work isn't affected) and the currently executing function has finished. I don't know the performance implications of doing something like this, but I feel it should only be used when necessary, since the callback has to be executed very frequently (with a 0 timeout, although it's not guaranteed to actually be 0).
Why it happens
I mainly want to clarify that while handler functions are scheduled to be executed asynchronously, every message in the queue has to be processed completely before the next one. For the duration your while loop executes, no new message can be processed in the event queue, so it would be pointless to involve Promises where the same thing would happen without them.
So basically the answer to:
wouldn't it be pointless in waiting for it to resolve() ?
is yes, it would be pointless in this case.

Implementing event synchronization primitives in JavaScript/TypeScript using async/await/Promises

I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions that, when it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished:
processComplete(); // Let the program know we're done processing
Right now, that function looks something like this:
let complete = false;

function processComplete() {
  complete = true;
}
In order to determine whether the process is complete, other code either uses timeouts or process.nextTick and checks the complete variable over and over again in loops. This is complicated and inefficient.
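For concreteness, that polling looks something like this (a sketch of the pattern described above, which is exactly what we want to replace):
// Anti-pattern: spin on the flag until it flips.
function waitForComplete(callback) {
  (function check() {
    if (complete) return callback();
    setTimeout(check, 0); // poll again on the next timer tick
  })();
}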
I'd instead like to let various async functions simply use await to wait and be awoken when the process is complete:
// This code will appear in many different places
await /* something regarding completion */;
console.log("We're done!");
If I were programming in Windows in C, I'd use an event synchronization primitive and the code would look something like this:
Event complete;
void processComplete() {
SetEvent(complete);
}
// Elsewhere, repeated in many different places
WaitForSingleObject(complete, INFINITE);
console.log("We're done!");
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await? In other words, how can I implement an event synchronization primitive using await and async or Promises?
This pattern is quite close to your code:
const processComplete = args => new Promise(resolve => {
  // ...
  // In the middle of a callback for an async function, etc.:
  resolve(); // instead of `complete = true;`
  // ...
});
// elsewhere
await processComplete(args);
console.log("We're done!");
More info: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Promise.
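If many places need to await the same one-shot completion, a variation on the above (a minimal sketch; completion and signalComplete are names I'm introducing) is to create the promise once and expose its resolve function:
let signalComplete;
const completion = new Promise(resolve => { signalComplete = resolve; });

function processComplete() {
  signalComplete(); // wakes up every `await completion`
}

// This code can appear in many different places:
await completion;
console.log("We're done!");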
It really depends on what you mean by "other code" in this scenario. It sounds like you want to use some variation of the delegation pattern or the observer pattern.
A simple approach is to take advantage of the fact that JavaScript allows you to store an array of functions. Your processComplete() method could do something like this:
function processComplete() {
  arrayOfFunctions.forEach(fn => fn());
}
Elsewhere, in your other code, you could create functions for what needs to be done when the process is complete, and add those functions to the arrayOfFunctions.
If you don't want these different parts of code to be so closely connected, you could set up a completely separate part of your code that functions as a notification center. Then, you would have your other code tell the notification center that it wants to be notified when the process is complete, and your processComplete() method would simply tell the notification center that the process is complete.
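A minimal sketch of that notification-center idea (notificationCenter, subscribe, and notify are names of my own choosing):
// Decouples the waiters from the process that completes.
const notificationCenter = (() => {
  const listeners = [];
  return {
    subscribe: fn => listeners.push(fn),         // other code registers interest
    notify: () => listeners.forEach(fn => fn()), // processComplete calls this
  };
})();

// Other code:
notificationCenter.subscribe(() => console.log("We're done!"));

// Inside processComplete():
notificationCenter.notify();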
Another approach is to use promises.
I have a long, complicated asynchronous process in TypeScript/JavaScript spread across many libraries and functions
Then make sure that every bit of the process that is asynchronous returns a promise for its partial result, so that you can chain onto them and compose them together or await them.
When it is finally finished receiving and processing all of its data, calls a function processComplete() to signal that it's finished
It shouldn't. The function that starts the process should return a promise, and when the process is finished it should fulfill that promise.
If you don't want to properly promisify every bit of the whole process because it's too cumbersome, you can just do
function startProcess(…) {
  … // do whatever you need to do
  return new Promise(resolve => {
    processComplete = resolve;
    // don't forget to reject when the process failed!
  });
}
In JavaScript or TypeScript, rather than setting a boolean complete value to true, what exactly could processComplete do to make wake up any number of functions that are waiting using await?
If they are already awaiting the result of the promise, there is nothing else that needs to be done. (The awaited promise internally has such a flag already). It's really just doing
// somewhere:
var resultPromise = startProcess(…);
// elsewhere:
await resultPromise;
… // the process is completed here
You don't even need to fulfill the promise with a useful result if all you need is to synchronise your tasks, but you really should. (If there's no data they are waiting for, what are they waiting for at all?)

Why complete is not invoked after performing a concatenation?

I need to query a device multiple times. Every query needs to be asynchronous and the device doesn't support simultaneous queries at a time.
Moreover, once it is queried, it can not be queried again immediately after. It needs at least a 1 second pause to work properly.
My two queries, performed by saveClock() and saveConfig(), return a Promise and both resolve by returning undefined as expected.
In the following code why removing take() prevents toArray() from being called?
What's happening here, is there a better way to achieve the same behavior?
export const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .map(action => {
      // access store and create object data
      // ...
      return data;
    })
    .mergeMap(data =>
      Rx.Observable.from([
        Rx.Observable.of(data).mergeMap(data => saveClock(data.id, data.clock)),
        Rx.Observable.timer(1000),
        Rx.Observable.of(data).mergeMap(data => saveConfig(data.id, data.config)),
        Rx.Observable.of(data.id)
      ])
    )
    .concatAll()
    .take(4)
    .toArray()
    // [undefined, 0, undefined, "id"]
    .map(x => { type: COMPLETED, id: x[3] });
There are a couple things I see:
Your final .map() is missing parentheses. In its current form it's a syntax error, but a subtle change could accidentally make it a labeled statement instead of returning an object. Since it's a syntax error, I imagine this is just a bug in this post, not in your code (which wouldn't even run), but double check!
// before
.map(x => { type: COMPLETED, id: x[3] });
// after
.map(x => ({ type: COMPLETED, id: x[3] }));
With that fixed, the example does run with a simple redux-observable test case: http://jsbin.com/hunale/edit?js,output So if there's nothing notable I did differently than you, problem appears to be in code not provided. Feel free to add additional insight or even better, reproduce it in a JSBin/git repo for us.
One thing you didn't mention, but which is very noteworthy, is that in redux-observable your epics will typically be long-lived "process managers". This epic will actually only process one of these saves, then complete(), which is probably not what you want. Can the user only save something one time per application boot? Seems unlikely.
Instead, you'll want to keep the top-level stream your epic returns alive and listening for future actions by encapsulating this logic inside the mergeMap. The take(4) and passing the data.id then become extraneous:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .mergeMap(data =>
      Rx.Observable.from([
        Rx.Observable.of(data).mergeMap(data => saveClock(data.id, data.clock)),
        Rx.Observable.timer(1000),
        Rx.Observable.of(data).mergeMap(data => saveConfig(data.id, data.config))
      ])
      .concatAll()
      .toArray()
      .map(() => ({ type: COMPLETED, id: data.id }))
    );
This separation of streams is described by Ben Lesh in his recent AngularConnect talks, in the context of errors but it's still applicable: https://youtu.be/3LKMwkuK0ZE?t=20m (don't worry, this isn't Angular specific!)
Next, I wanted to share some unsolicited refactoring advice that may make your life easier, but certainly this is opinionated so feel free to ignore:
I would refactor to more accurately reflect the order of events visually, and reduce the complexity:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .mergeMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
    );
Here we're consuming the Promise returned by saveClock, delaying its output for 1000 ms, then mergeMapping the result to a call to saveConfig(), which also returns a Promise that will be consumed. Finally, we map the result of that to our COMPLETED action.
Finally, keep in mind that if your Epic does stay alive and is long lived, there's nothing in this epic as-is to stop it from receiving multiple SAVE requests while other ones are still in-flight or have not yet exhausted the required 1000ms delay between requests. i.e. if that 1000ms space between any request is indeed required, your epic itself does not entirely prevent your UI code from breaking that. In that case, you may want to consider adding a more complex buffered backpressure mechanism, for example using the .zip() operator with a BehaviorSubject.
http://jsbin.com/waqipol/edit?js,output
const saveEpic = (action$, store) => {
  // used to control how many we want to take,
  // the rest will be buffered by .zip()
  const requestCount$ = new Rx.BehaviorSubject(1)
    .mergeMap(count => new Array(count));

  return action$.ofType(SAVE)
    .zip(requestCount$, action => action)
    .mergeMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
        // we're ready to take the next one, when available
        .do(() => requestCount$.next(1))
    );
};
This makes it so that save requests that come in while we're still processing an existing one are buffered, and we only take one of them at a time. Keep in mind, though, that this is an unbounded buffer: the queue of pending actions can grow without bound if actions arrive faster than they are flushed. That is unavoidable unless you adopt a strategy for lossy backpressure, like dropping requests that overlap, etc.
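For example, one lossy strategy (my own sketch, not from the original answer) is exhaustMap, which simply ignores SAVE actions that arrive while a save is still in flight:
// Sketch: overlapping SAVEs are dropped instead of buffered.
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .exhaustMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
    );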
If you have other epics which have overlapping requirements to not sending requests more than once a second, you would need to create some sort of single supervisor that makes this guarantee for all the epics.
This may all seem very complex, but perhaps ironically this is much easier to do in RxJS than with traditional imperative code. The hardest part is actually knowing the patterns.
