I need to query a device multiple times. Every query needs to be asynchronous, and the device doesn't support simultaneous queries.
Moreover, once it has been queried, it cannot be queried again immediately afterwards: it needs at least a 1 second pause to work properly.
My two queries, performed by saveClock() and saveConfig(), return a Promise and both resolve by returning undefined as expected.
In the following code, why does removing take() prevent toArray() from being called?
What's happening here, is there a better way to achieve the same behavior?
export const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .map(action => {
      // access store and create object data
      // ...
      return data;
    })
    .mergeMap(data =>
      Rx.Observable.from([
        Rx.Observable.of(data).mergeMap(data => saveClock(data.id, data.clock)),
        Rx.Observable.timer(1000),
        Rx.Observable.of(data).mergeMap(data => saveConfig(data.id, data.config)),
        Rx.Observable.of(data.id)
      ])
    )
    .concatAll()
    .take(4)
    .toArray()
    // [undefined, 0, undefined, "id"]
    .map(x => { type: COMPLETED, id: x[3] });
There are a couple things I see:
Your final .map() is missing parentheses; in its current form it's a syntax error, and a subtle change could instead make it accidentally a labeled statement rather than returning an object. Because it's a syntax error as written, I imagine this is just a bug in this post, not in your code (which wouldn't even run), but double check!
// before
.map(x => { type: COMPLETED, id: x[3] });
// after
.map(x => ({ type: COMPLETED, id: x[3] }));
With that fixed, the example does run with a simple redux-observable test case: http://jsbin.com/hunale/edit?js,output So if there's nothing notable I did differently than you, the problem appears to be in code not provided. Feel free to add additional insight, or even better, reproduce it in a JSBin/git repo for us.
One thing you didn't mention but is very very noteworthy is that in redux-observable, your epics will typically be long-lived "process managers". This epic will actually only process one of these saves, then complete(), which is probably not what you actually want? Can the user only save something one time per application boot? Seems unlikely.
Instead, you'll want to keep the top-level stream your epic returns alive and listening for future actions by encapsulating this logic inside the mergeMap. The take(4) and passing the data.id then become extraneous:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .mergeMap(data =>
      Rx.Observable.from([
        Rx.Observable.of(data).mergeMap(data => saveClock(data.id, data.clock)),
        Rx.Observable.timer(1000),
        Rx.Observable.of(data).mergeMap(data => saveConfig(data.id, data.config))
      ])
        .concatAll()
        .toArray()
        .map(() => ({ type: COMPLETED, id: data.id }))
    );
This separation of streams is described by Ben Lesh in his recent AngularConnect talks, in the context of errors but it's still applicable: https://youtu.be/3LKMwkuK0ZE?t=20m (don't worry, this isn't Angular specific!)
Next, I wanted to share some unsolicited refactoring advice that may make your life easier, but certainly this is opinionated so feel free to ignore:
I would refactor to more accurately reflect the order of events visually, and reduce the complexity:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    .mergeMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
    );
Here we're consuming the Promise returned by saveClock, delaying its output by 1000ms, then mergeMapping the result to a call to saveConfig(), which also returns a Promise that will be consumed. Finally, we map the result of that to our COMPLETED action.
Finally, keep in mind that if your epic does stay alive and is long-lived, there's nothing in it as-is to stop it from receiving more SAVE actions while others are still in flight or have not yet exhausted the required 1000ms delay between requests. That is, if that 1000ms gap between any two requests really is required, your epic by itself does not prevent your UI code from breaking that guarantee. In that case, you may want to consider adding a more complex buffered backpressure mechanism, for example using the .zip() operator with a BehaviorSubject.
http://jsbin.com/waqipol/edit?js,output
const saveEpic = (action$, store) => {
  // used to control how many we want to take,
  // the rest will be buffered by .zip()
  const requestCount$ = new Rx.BehaviorSubject(1)
    .mergeMap(count => new Array(count));

  return action$.ofType(SAVE)
    .zip(requestCount$, action => action)
    .mergeMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
        // we're ready to take the next one, when available
        .do(() => requestCount$.next(1))
    );
};
This makes it so that save requests which come in while we're still processing an existing one are buffered, and we only take one at a time. Keep in mind, though, that this is an unbounded buffer: the queue of pending actions can grow without limit if actions arrive faster than the buffer is flushed. That's unavoidable unless you adopt a strategy of lossy backpressure, like dropping requests that overlap; a sketch of that follows.
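For example, if dropping overlapping saves is acceptable for your UI, a lossy variant is fairly compact. This is only a sketch of mine, not part of the original answer: exhaustMap ignores new SAVE actions while the current inner observable, including its 1000ms delay, is still running:
const saveEpic = (action$, store) =>
  action$.ofType(SAVE)
    // exhaustMap drops SAVE actions that arrive while the
    // current save sequence is still in flight
    .exhaustMap(data =>
      Rx.Observable.from(saveClock(data.id, data.clock))
        .delay(1000)
        .mergeMap(() => saveConfig(data.id, data.config))
        .map(() => ({ type: COMPLETED, id: data.id }))
    );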
If you have other epics with overlapping requirements to not send requests more than once a second, you would need to create some sort of single supervisor that makes this guarantee for all the epics.
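One rough sketch of such a supervisor (purely illustrative; queue$ and the thunk convention are hypothetical, and id/clock/config stand in for whatever data each epic has): every epic pushes a function returning a Promise or Observable onto a shared Subject, and a single subscriber drains it one item at a time with a 1000ms gap after each:
const queue$ = new Rx.Subject();

// process one request at a time, then pause 1000ms before the next,
// no matter which epic enqueued it
const supervisor$ = queue$
  .concatMap(requestFn =>
    Rx.Observable.from(requestFn())
      .concat(Rx.Observable.timer(1000).ignoreElements())
  );

supervisor$.subscribe();

// any epic can then enqueue work:
queue$.next(() => saveClock(id, clock));
queue$.next(() => saveConfig(id, config));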
This may all seem very complex, but perhaps ironically this is much easier to do in RxJS than with traditional imperative code. The hardest part is actually knowing the patterns.
There seems to be an odd discrepancy in how share and shareReplay (with refCount: true) unsubscribe.
Consider the following (can paste into rxviz.com):
const { interval } = Rx;
const { take, shareReplay, share, timeoutWith, startWith, finalize } = RxOperators;

const shareReplay$ = interval(2000).pipe(
  finalize(() => console.log('[finalize] Called on shareReplay$')),
  take(1),
  shareReplay({ refCount: true, bufferSize: 0 })
);

shareReplay$.pipe(
  timeoutWith(1000, shareReplay$.pipe(startWith('X')))
);

const share$ = interval(2000).pipe(
  finalize(() => console.log('[finalize] Called on share$')),
  take(1),
  share()
);

share$.pipe(
  timeoutWith(1000, share$.pipe(startWith('X')))
);
The output from shareReplay$ will be -X-0- while the output from share$ will be -X--0. It appears as though share unsubscribes from the source before timeoutWith can re-subscribe, while shareReplay keeps the shared subscription around long enough for it to be re-used.
I want to use this to add a local timeout on an RPC (while keeping the call open), and any re-subscribe would be disastrous, so I want to avoid the risk that one of these behaviors is a mistake and gets changed in the future.
I could use race(), and merge the rpc call with a delayed startWith (so they both subscribe at the same time), but it would be more code and operators.
EDIT: One possible solution is to merge two subscriptions one to a shared request and one to a delayed stream that takes until the shared stream emits:
merge(share$, of('still working...').pipe(delay(1000), takeUntil(share$)));
This way the shared stream is subscribed to at the same time, so there is no "grey area" where one operator unsubscribes as a child subscribes. (Will turn this into an answer unless someone comes up with a better suggestion or can explain the intended difference between share and shareReplay.)
Quick Aside:
I suspect you're not going to get many answers until you describe the behavior you're after.
use this to add a local timeout on an RPC (while keeping the call open)
I'm not sure what keeping an RPC call open entails. It sounds like you want something that cancels the request within RxJS but doesn't cancel the in-flight RPC. I feel like that's a domain mismatch. Why not keep your observables aligned with the semantics of the calls they represent?
Toward a solution
Having seen your update, I suspect you don't need a timeout. It looks like the behavior you're after can be achieved without unsubscribing from the observable representing your RPC. I'd argue that doing so simplifies your logic and makes it easier to maintain/extend in the future.
It also sidesteps all the async interleaving concerns you had earlier.
Here I'll take an educated guess at the behaviour you're after. It appears you want the following:
Take one value from the source (happens to be an RPC in this case).
If that value takes more than 1s to arrive, emit "still working..." and continue to wait for one value from the source.
If that's the case, you don't need share at all. Normally a timeout means cancelling an in-flight request if it takes too long. Maybe you don't want a timeout at all; you want to stay subscribed to the source...
If that's the case, here's a way to do this:
const source$ = rpc(arg1, arg2);

// create a unique token (its memory address).
// We'll embed a message inside so it's doing double duty
const token = { a: "still working..." };

merge(
  source$,
  timer(1000).pipe(mapTo(token))
).pipe(
  // take(1), but ignoring the token
  takeWhile(v => v === token, true),
  // Unwrap the token, emit the contained message
  map(v => v === token ? v.a : v)
);
On the other hand, if you know the observable wrapping your RPC will never emit "still working...", then you don't need a unique token and you can simplify this to check the value directly.
merge(
  rpc(arg1, arg2),
  timer(1000).pipe(mapTo("still working..."))
).pipe(
  takeWhile(v => v === "still working...", true)
);
Timing Guarantees
JavaScript's runtime doesn't come with baked-in timing guarantees of any sort. In a single-threaded environment that makes sense (and you can start to do a bit better with web workers and such). If something compute-heavy happens, everything scheduled in that timing window just waits. Mostly things do happen in the expected order, though.
Regardless,
const hello = (name: string) => () => console.log(`Hello ${name}`);
setTimeout(hello("first"), 2000);
setTimeout(hello("second"), 2000);
In Node, V8, or SpiderMonkey you do preserve the order here. But what about this?
const hello = (name: string) => () => console.log(`Hello ${name}`);
setTimeout(hello("first"), 1);
setTimeout(hello("second"), 0);
Here you would assume second always comes first because it's supposed to happen a millisecond earlier. If you run this using SpiderMonkey, the order depends on how busy the event loop is. They bucket shorter timeouts since a 0ms timeout takes about 8ms on average anyway.
Always make asynchronous dependencies explicit
In JavaScript, it's best practice to never ever make any timing dependencies implicit.
In the code below, we can reasonably know that data will not be undefined when we call data.value. But this implicitly relies on asynchronous interleaving:
let data;
setTimeout(() => {data = {value: 5};}, 1000);
setTimeout(() => console.log(data.value), 2000);
We really should make this dependency explicit, either by checking whether data is undefined or by restructuring our calls:
setTimeout(() => {
  const data = { value: 5 };
  setTimeout(() => console.log(data.value), 1000);
}, 1000);
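For completeness, the guard-check variant mentioned above might look like this (same setTimeout simulation as before):
let data;
setTimeout(() => { data = { value: 5 }; }, 1000);
setTimeout(() => {
  // the dependency is now explicit: we handle the case where
  // data has not arrived yet instead of silently assuming it has
  if (data === undefined) {
    console.log('data not ready yet');
  } else {
    console.log(data.value);
  }
}, 2000);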
Share vs ShareReplay
want to avoid the risk one of these behaviors is a mistake and gets changed in the future.
The real risk here isn't even in how the library implements the difference. It's a language-level risk as well. You're depending on asynchronous interleaving either way. You run the risk of having a bug that only appears once in a blue moon and can't be easily re-created/tested etc.
The share operator has a ShareConfig (source)
export interface ShareConfig<T> {
  connector?: () => SubjectLike<T>;
  resetOnError?: boolean | ((error: any) => Observable<any>);
  resetOnComplete?: boolean | (() => Observable<any>);
  resetOnRefCountZero?: boolean | (() => Observable<any>);
}
If you use vanilla shareReplay(1) or share({ resetOnRefCountZero: false }), then you aren't relying on how events are ordered in the JS event loop.
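For instance, a minimal sketch assuming RxJS 7's share config (rpc$ is a hypothetical stand-in for the observable wrapping your call):
import { share } from 'rxjs/operators';

// keep the underlying subscription alive even if the refcount
// momentarily drops to zero while timeoutWith re-subscribes,
// so the RPC is never torn down and re-issued by accident
const shared$ = rpc$.pipe(
  share({ resetOnRefCountZero: false })
);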
I'm facing a problem, and I've been trying to find a solution using RxJS, but can't seem to find one that fits it...
I have 3 different REST requests that will be called sequentially, and each of them needs the response of the previous one as an argument.
I want to implement a progress bar, which increments as the requests succeed
Here is what I thought :
I am going to use pipes and concatMap() to avoid nested subscriptions and subscribe to each request when the previous one is done.
Consider this very simplified version. Assume that each of these represents a whole successful REST request (I will handle errors later), and that I will do unshown work with the n parameter...
const request1 = of('success 1').pipe(
  delay(500),
  tap(n => console.log('received ' + n))
);

const request2 = (n) => of('success 2').pipe(
  delay(1000),
  tap(n => console.log('received ' + n))
);

const request3 = (n) => of('success 3').pipe(
  delay(400),
  tap(n => console.log('received ' + n))
);

request1.pipe(
  concatMap(n => request2(n).pipe(
    concatMap(n => request3(n))
  ))
);
However, when I subscribe to the last piece of code, I will only get the response of the last request, which is expected as the pipe resolves to that.
So with concatMap(), I can chain my dependent REST calls correctly, but can't follow the progress.
I could follow the progress quite easily with nested subscriptions, but I am trying hard to avoid those and use the best-practice way.
How can I chain my dependent REST calls, but still be able to do stuff each time a call succeeds?
This is a generalized solution, though not as simple. But it does make progress observable while still avoiding the share operator, which can introduce unexpected statefulness if used incorrectly.
const chainRequests = (firstRequestFn, ...otherRequestFns) => (initialParams) => {
  return otherRequestFns.reduce(
    (chain, nextRequestFn) =>
      chain.pipe(op.concatMap((response) => nextRequestFn(response))),
    firstRequestFn(initialParams)
  );
};
chainRequests takes a variable number of functions and returns a function that accepts initial parameters and returns an observable that concatMaps the functions together as shown manually in the question. It does this by reducing each function into an accumulation value that happens to be an observable.
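For instance, applied to the request functions from the question (my own usage sketch; request1 is a plain observable, so it's wrapped to fit the (params) => observable shape):
const chained$ = chainRequests(
  () => request1, // wrap the plain observable into a function
  request2,
  request3
)();

chained$.subscribe(finalResponse => console.log('done:', finalResponse));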
Remember, RxJS leads us out of callback hell if we know the path.
const chainRequestsWithProgress = (...requestFns) => (initialParams) => {
  const progress$ = new Rx.BehaviorSubject(0);
  const wrappedFns = requestFns.map((fn, i) => (...args) =>
    fn(...args).pipe(op.tap(() => progress$.next((i + 1) / requestFns.length)))
  );
  const chain$ = Rx.defer(() => {
    progress$.next(0);
    return chainRequests(...wrappedFns)(initialParams);
  });
  return [chain$, progress$];
};
chainRequestsWithProgress returns two observables: one that eventually emits the last response, and one that emits progress values once the first is subscribed to. We do this by creating a BehaviorSubject to serve as our stream of progress values, and wrapping each of our request functions to return the same observable it normally would, piped to tap so it can push a new progress value to the BehaviorSubject.
The progress is zeroed out upon each subscription to the first observable.
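Usage might look like this (my own sketch); subscribing to progress$ before chain$ means no progress values are missed, since the BehaviorSubject replays its latest value to new subscribers:
const [chain$, progress$] = chainRequestsWithProgress(
  () => request1, // wrapped as above, since request1 is a plain observable
  request2,
  request3
)();

progress$.subscribe(p => console.log(`progress: ${Math.round(p * 100)}%`));
chain$.subscribe(finalResponse => console.log('done:', finalResponse));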
If you wanted to return a single observable that produced the progress state as well as the eventual result value, you could have chainRequestsWithProgress instead return:
chain$.pipe(
  op.startWith(null),
  op.combineLatest(progress$, (result, progress) => ({ result, progress }))
)
and you'll have an observable that emits an object representing the progress toward the eventual result, then that result itself. Food for thought - does progress$ have to emit just numbers?
Caveat
This assumes request observables emit exactly one value.
The simplest solution would be to have a progress counter variable that is updated from a tap when each response comes back.
let progressCounter = 0;

request1.pipe(
  tap(_ => progressCounter = 0.33),
  concatMap(n => request2(n).pipe(
    tap(_ => progressCounter = 0.66),
    concatMap(n => request3(n)
      .pipe(tap(_ => progressCounter = 1)))
  ))
);
If you want the progress itself to be observable, then you want to share the request observables (so as not to make duplicate requests) and then combine them to get the progress.
An example of how you may want to approach that can be found at: https://www.learnrxjs.io/recipes/progressbar.html
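For a rough sketch of that idea using the request1/request2/request3 definitions from the question (my own illustration, not the linked recipe):
import { merge } from 'rxjs';
import { concatMap, mapTo, share } from 'rxjs/operators';

// share each step so the progress stream and the result chain
// don't trigger duplicate requests
const step1$ = request1.pipe(share());
const step2$ = step1$.pipe(concatMap(n => request2(n)), share());
const step3$ = step2$.pipe(concatMap(n => request3(n)), share());

// each completed step bumps the observable progress
const progress$ = merge(
  step1$.pipe(mapTo(1 / 3)),
  step2$.pipe(mapTo(2 / 3)),
  step3$.pipe(mapTo(1))
);

progress$.subscribe(p => console.log('progress:', p));
step3$.subscribe(result => console.log('final:', result));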
I am currently struggling to wrap my head around angular (2+), the HttpClient and Observables.
I come from a promise async/await background, and what I would like to achieve in angular, is the equivalent of:
//(...) Some boilerplate to showcase how to avoid callback hell with promises and async/await
async function getDataFromRemoteServer() {
  this.result = await httpGet(`/api/point/id`);
  this.dependentKey = someComplexSyncTransformation(this.result);
  this.dependentResult = await httpGet(`/api/point/id/dependent/keys/${this.dependentKey}`);
  this.deeplyNestedResult = await httpGet(`/api/point/id/dependent/keys/${this.dependentResult.someValue}`);
}
The best I could come up with in Angular is:
import { HttpClient } from '@angular/common/http';
//(...) boilerplate to set component up.
constructor(private http: HttpClient) {}
// somewhere in a component.
getDataFromRemoteServer() {
  this.http.get(`/api/point/id`).subscribe(result => {
    this.result = result;
    this.dependentKey = someComplexSyncTransformation(this.result);
    this.http.get(`/api/point/id/dependent/keys/${this.dependentKey}`).subscribe(dependentResult => {
      this.dependentResult = dependentResult;
      this.http.get(`/api/point/id/dependent/keys/${this.dependentResult.someValue}`).subscribe(deeplyNestedResult => {
        this.deeplyNestedResult = deeplyNestedResult;
      });
    });
  });
}
//...
As you might have noticed, I am entering the Pyramid of Doom with this approach, which I would like to avoid.
So how could I write the angular snippet in a way as to avoid this?
Thx!
PS: I am aware of the fact that you can call .toPromise on the result of the .get call.
But let's just assume I want to go the total Observable way, for now.
When working with observables, you won't call subscribe very often. Instead, you'll use the various operators to combine observables together, forming a pipeline of operations.
To take the output of one observable and turn it into another, the basic operator is map. This is similar to how you can .map an array to produce another array. For a simple example, here's doubling all the values of an observable:
const myObservable = of(1, 2, 3).pipe(
  map(val => val * 2)
);
// myObservable is an observable which will emit 2, 4, 6
Mapping is also what you do to take an observable for one http request, and then make another http request. However, we will need one additional piece, so the following code is not quite right:
const myObservable = http.get('someUrl').pipe(
  map(result => http.get('someOtherUrl?id=' + result.id))
);
The problem with this code is that it creates an observable that spits out other observables. A 2-dimensional observable, if you like. We need to flatten this down so that we have an observable that spits out the results of the second http.get. There are a few different ways to do the flattening, depending on what order we want the results in if multiple observables are emitting multiple values. This is not much of an issue in your case since each of these http observables will only emit one item. But for reference, here are the options (a small sketch contrasting them follows the list):
mergeMap will let all the observables run in whatever order, and outputs values in whatever order they arrive. This has its uses, but can also result in race conditions.
switchMap will switch to the latest observable, and cancel old ones that may be in progress. This can eliminate race conditions and ensure you have only the latest data.
concatMap will finish the entirety of the first observable before moving on to the second. This can also eliminate race conditions, but won't cancel old work.
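Not part of the original explanation, but here's the promised sketch: fake "requests" with different delays make the difference between the three visible (standalone RxJS, no Angular needed):
import { of } from 'rxjs';
import { delay, mergeMap, switchMap, concatMap } from 'rxjs/operators';

// two fake "requests": the first is slower than the second
const source$ = of(300, 100);
const fakeRequest = ms => of(`took ${ms}ms`).pipe(delay(ms));

source$.pipe(mergeMap(fakeRequest)).subscribe(console.log);
// logs "took 100ms" then "took 300ms" -- arrival order, a potential race

source$.pipe(switchMap(fakeRequest)).subscribe(console.log);
// logs "took 100ms" only -- the older inner observable was cancelled

source$.pipe(concatMap(fakeRequest)).subscribe(console.log);
// logs "took 300ms" then "took 100ms" -- strict source order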
Like I said, it doesn't matter much in your case, but I'd recommend using switchMap. So my little example above would become:
const myObservable = http.get('someUrl').pipe(
  switchMap(result => http.get('someOtherUrl?id=' + result.id))
);
Now here's how I can use those tools with your code. In this code example, I'm not saving all the this.result, this.dependentKey, etc.:
getDataFromRemoteServer() {
  return this.http.get(`/api/point/id`).pipe(
    map(result => someComplexSyncTransformation(result)),
    switchMap(dependentKey => this.http.get(`/api/point/id/dependent/keys/${dependentKey}`)),
    switchMap(dependentResult => this.http.get(`/api/point/id/dependent/keys/${dependentResult.someValue}`))
  );
}
// to be used like:
getDataFromRemoteServer()
  .subscribe(deeplyNestedResult => {
    // do whatever with deeplyNestedResult
  });
If it's important to you to save those values, then I'd recommend using the tap operator to highlight the fact that you're generating side effects. tap will run some code whenever the observable emits a value, but will not mess with the value:
getDataFromRemoteServer() {
  return this.http.get(`/api/point/id`).pipe(
    tap(result => this.result = result),
    map(result => someComplexSyncTransformation(result)),
    tap(dependentKey => this.dependentKey = dependentKey),
    // ... etc
  );
}
I'm refactoring some old node modules into a more functional style. I'm like a second year freshman when it comes to FP :) Where I keep getting hung up is handling large async flows. Here is an example where I'm making a request to a db and then caching the response:
// Some external xhr/promise lib
const fetchFromDb = make => {
  return new Promise(resolve => {
    console.log('Simulate async db request...'); // just simulating an async request/response here.
    setTimeout(() => {
      console.log('Simulate db response...');
      resolve({ make: 'toyota', data: 'stuff' });
    }, 100);
  });
};
// memoized fn
// this caches the response to getCarData(x) so that whenever it is invoked with 'x' again, the same response gets returned.
const getCarData = R.memoizeWith(R.identity, (carMake, response) => response.data);
// Is this function pure? Or is it setting something outside the scope (i.e., getCarData)?
const getCarDataFromDb = (carMake) => {
  return fetchFromDb(carMake).then(getCarData.bind(null, carMake));
  // Note: This return statement is essentially the same as:
  // return fetchFromDb(carMake).then(result => getCarData(carMake, result));
};
// Initialize the request for 'toyota' data
const toyota = getCarDataFromDb('toyota'); // must be called no matter what
// Approach #1 - Just rely on thenable
console.log(`Value of toyota is: ${toyota.toString()}`);
toyota.then(d => console.log(`Value in thenable: ${d}`)); // -> Value in thenable: stuff
// Approach #2 - Just make sure you do not call this fn before db response.
setTimeout(() => {
  const car = getCarData('toyota'); // so nice!
  console.log(`later, car is: ${car}`); // -> 'later, car is: stuff'
}, 200);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
I really like memoization for caching large JSON objects and other computed properties. But with a lot of asynchronous requests whose responses are dependent on each other for doing work, I'm having trouble keeping track of what information I have and when. I want to get away from using promises so heavily to manage flow. It's a node app, so making things synchronous to ensure availability was blocking the event loop and really affecting performance.
I prefer approach #2, where I can get the car data simply with getCarData('toyota'). But the downside is that I have to be sure that the response has already been returned. With approach #1 I'll always have to use a thenable which alleviates the issue with approach #2 but introduces its own problems.
Questions:
Is getCarDataFromDb a pure function as it is written above? If not, how is that not a side-effect?
Is using memoization in this way an FP anti-pattern? That is, calling it from a thenable with the response so that future invocations of that same method return the cached value?
Question 1
It's almost a philosophical question whether there are side-effects here. Calling it does update the memoization cache, but that itself has no observable side-effects. So I would say that this is effectively pure.
Update: a comment pointed out that as this calls IO, it can never be pure. That is correct. But that's the essence of this behavior. It's not meaningful as a pure function. My answer above is only about side-effects, and not about purity.
Question 2
I can't speak for the whole FP community, but I can tell you that the Ramda team (disclaimer: I'm a Ramda author) prefers to avoid Promises, preferring more lawful types such as Futures or Tasks. But the same questions you have here would be in play with those types substituted for Promises. (More on these issues below.)
In General
There is a central point here: if you're doing asynchronous programming, it will spread to every bit of the application that touches it. Nothing you do will change this basic fact. Using Promises/Tasks/Futures helps avoid some of the boilerplate of callback-based code, but it requires you to put the post-response/rejection code inside a then/map function. Using async/await helps you avoid some of the boilerplate of Promise-based code, but it requires you to put the post-response/rejection code inside async functions. And if one day we layer something else on top of async/await, it will likely have the same characteristics.
(While I would suggest that you look at Futures or Tasks instead of Promises, below I will only discuss Promises. The same ideas should apply regardless.)
My suggestion
If you're going to memoize anything, memoize the resulting Promises.
However you deal with your asynchrony, you will have to put the code that depends on the result of an asynchronous call into a function. I assume that the setTimeout of your second approach was just for demonstration purposes: using timeout to wait for a DB result over the network is extremely error-prone. But even with setTimeout, the rest of your code is running from within the setTimeout callback.
So rather than trying to separate the cases for when your data has already been cached and when it hasn't, simply use the same technique everywhere: myPromise.then(... my code ... ). That could look something like this:
// getCarData :: String -> Promise AutoInfo
const getCarData = R.memoizeWith(R.identity, make => new Promise(resolve => {
  console.log('Simulate async db request...');
  setTimeout(() => {
    console.log('Simulate db response...');
    resolve({ make: 'toyota', data: 'stuff' });
  }, 100);
}));
getCarData('toyota').then(carData => {
  console.log('now we can go', carData);
  // any code which depends on carData
});

// later
getCarData('toyota').then(carData => {
  console.log('now it is cached', carData);
});
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
In this approach, whenever you need car data, you call getCarData(make). Only the first time will it actually call the server. After that, the Promise is served out of the cache. But you use the same structures everywhere to deal with it.
I only see one reasonable alternative. I couldn't tell whether your discussion about having to wait for the data before making the remaining calls means that you would be able to pre-fetch your data. If so, there is one additional possibility, one which would allow you to skip the memoization as well:
// getCarData :: String -> Promise AutoInfo
const getCarData = make => new Promise(resolve => {
  console.log('Simulate async db request...');
  setTimeout(() => {
    console.log('Simulate db response...');
    resolve({ make: 'toyota', data: 'stuff' });
  }, 100);
});

const makes = ['toyota', 'ford', 'audi'];

Promise.all(makes.map(getCarData)).then(allAutoInfo => {
  const autos = R.zipObj(makes, allAutoInfo);
  console.log('cooking with gas', autos);
  // remainder of app that depends on auto data here
});
<script src="//cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
But this one means that nothing will be available until all your data has been fetched. That may or may not be all right with you, depending on all sorts of factors. And for many situations, it's not even remotely possible or desirable. But it is possible that yours is one where it is helpful.
One technical point about your code:
const getCarDataFromDb = (carMake) => {
  return fetchFromDb(carMake).then(getCarData.bind(null, carMake));
};
Is there any reason to use getCarData.bind(null, carMake) instead of result => getCarData(carMake, result)? The latter seems much more readable.
Is getCarDataFromDb a pure function as it is written above?
No. Pretty much anything that uses I/O is impure. The data in the DB could change, the request could fail, so it doesn't give any reliable guarantee that it will return consistent values.
Is using memoization in this way an FP anti-pattern? That is, calling it from a thenable with the response so that future invocations of that same method return the cached value?
It's definitely an asynchrony antipattern. In your approach #2 you are creating a race condition where the operation will succeed if the DB query completes in less than 200 ms, and fail if it takes longer than that. You've labeled a line in your code "so nice!" because you're able to retrieve data synchronously. That suggests to me that you're looking for a way to skirt the issue of asynchrony rather than facing it head-on.
The way you're using bind and "tricking" memoizeWith into storing the value you're passing into it after the fact also looks very awkward and unnatural.
It is possible to take advantage of caching and still use asynchrony in a more reliable way.
For example:
// Some external xhr/promise lib
const fetchFromDb = make => {
  return new Promise(resolve => {
    console.log('Simulate async db request...');
    setTimeout(() => {
      console.log('Simulate db response...');
      resolve({ make: 'toyota', data: 'stuff' });
    }, 2000);
  });
};
const getCarDataFromDb = R.memoizeWith(R.identity, fetchFromDb);
// Initialize the request for 'toyota' data
const toyota = getCarDataFromDb('toyota'); // must be called no matter what
// Finishes after two seconds
toyota.then(d => console.log(`Value in thenable: ${d.data}`));
// Wait for 5 seconds before getting Toyota data again.
// This time, there is no 2-second wait before the data comes back.
setTimeout(() => {
  console.log('About to get Toyota data again');
  getCarDataFromDb('toyota').then(d => console.log(`Value in thenable: ${d.data}`));
}, 5000);
<script src="https://cdnjs.cloudflare.com/ajax/libs/ramda/0.25.0/ramda.min.js"></script>
The one potential pitfall here is that if a request should fail, you'll be stuck with a rejected promise in your cache. I'm not sure what would be the best way to address that, but you'd surely need some way of invalidating that part of the cache or implementing some sort of retry logic somewhere.
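One possible way to handle that, sketched with a plain Map instead of R.memoizeWith since we need to evict entries (my own suggestion, not part of the answer above): drop the memoized entry when the promise rejects, so the next call retries.
const cache = new Map();

const getCarDataFromDb = make => {
  if (!cache.has(make)) {
    const promise = fetchFromDb(make).catch(err => {
      // evict the rejected promise so a later call can retry
      cache.delete(make);
      throw err;
    });
    cache.set(make, promise);
  }
  return cache.get(make);
};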
I am new to RxJS. I found redux-observable's canceling of async requests using takeUntil very useful, but while testing it I found that the actual request still goes on even though we cancel it.
I have this JSBin code snippet to test:
https://jsbin.com/hujosafocu/1/edit?html,js,output
Here the actual request is not cancelled, even if you cancel it by clicking the cancel button (multiple times).
I am not sure this is how it should be. If yes, then what is meant by canceling an async request? I am a bit confused, please share some thoughts.
Any response will be greatly appreciated. Thanks!
The issue is very subtle, but obviously important. Given your code:
const fetchUserEpic = action$ =>
  action$.ofType(FETCH_USER)
    .delay(2000) // <-- while we're waiting, there is nothing to cancel!
    .mergeMap(action =>
      Observable.fromPromise(
        jQuery.getJSON('//api.github.com/users/redux-observable', data => {
          alert(JSON.stringify(data));
        })
      )
        .map(fetchUserFulfilled)
        .takeUntil(action$.ofType(FETCH_USER_CANCELLED))
    );
The kicker is the .delay(2000). What this is saying is, "don't emit the action to the rest of the chain until after 2000ms". Because your .takeUntil(action$.ofType(FETCH_USER_CANCELLED)) cancellation logic is inside the mergeMap's projection function, it is not yet listening for FETCH_USER_CANCELLED because there is nothing to cancel yet!
If you really want to introduce an arbitrary delay before you make the ajax call, but cancel both the delay OR the pending ajax (if it reaches there) you can use Observable.timer()
const fetchUserEpic = action$ =>
  action$.ofType(FETCH_USER)
    .mergeMap(action =>
      Observable.timer(2000)
        .mergeMap(() =>
          Observable.fromPromise(
            jQuery.getJSON('//api.github.com/users/redux-observable', data => {
              alert(JSON.stringify(data));
            })
          )
            .map(fetchUserFulfilled)
        )
        .takeUntil(action$.ofType(FETCH_USER_CANCELLED))
    );
I imagine you don't really want to introduce the arbitrary delay before your ajax calls in real-world apps, in which case this problem won't exist and the example in the docs is a good starting reference.
Another thing to note is that even without the delay or timer, cancelling the ajax request from your code doesn't cancel the real underlying XMLHttpRequest--it just ignores the response. This is because Promises are not cancellable.
Instead, I would highly recommend using RxJS's AjaxObservable, which is cancellable:
Observable.ajax.getJSON('//api.github.com/users/redux-observable')
This can be imported in several ways. If you're already importing all of RxJS a la import 'rxjs';, it's available as expected. Otherwise, there are several other ways:
import { ajax } from 'rxjs/observable/dom/ajax';
ajax.getJSON('/path/to/thing');
// or
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/dom/ajax';
Observable.ajax.getJSON('/path/to/thing');
It's important to remember that, like all the Observable factories, Observable.ajax is lazy, meaning it does not make the AJAX request until someone subscribes to it! Whereas jQuery.getJSON makes the request right away.
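That laziness is also what makes cancellation work end-to-end: unsubscribing from an in-flight AjaxObservable aborts the underlying XHR. A tiny illustration:
const request$ = Observable.ajax.getJSON('/path/to/thing'); // no request made yet
const subscription = request$.subscribe(data => console.log(data)); // request fires now
subscription.unsubscribe(); // aborts the in-flight XMLHttpRequest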
So you can put it together like this:
const fetchUserEpic = action$ =>
  action$.ofType(FETCH_USER)
    .mergeMap(action =>
      Observable.timer(2000)
        .mergeMap(() =>
          Observable.ajax.getJSON('//api.github.com/users/redux-observable')
            .do(data => alert(JSON.stringify(data)))
            .map(fetchUserFulfilled)
        )
        .takeUntil(action$.ofType(FETCH_USER_CANCELLED))
    );
A working demo of this can be found here: https://jsbin.com/podoke/edit?js,output
This may help someone in future.
Like jayphelps mentioned above, the better solution is using RxJS's AjaxObservable, because it cancels the actual XMLHttpRequest rather than just ignoring the response.
But currently there are some issues in RxJS v5 ("RxJS Observable.ajax cross domain issue").
A good solution I found is to bypass the default configuration, like below:
const fetchUserEpic = action$ =>
  action$.ofType(FETCH_USER)
    .mergeMap(action =>
      Observable.timer(2000)
        .mergeMap(() =>
          Observable.ajax({
            url: `//api.github.com/users/redux-observable`,
            crossDomain: true
          })
            .do(data => alert(JSON.stringify(data)))
            .map(fetchUserFulfilled)
        )
        .takeUntil(action$.ofType(FETCH_USER_CANCELLED))
    );
https://github.com/ReactiveX/rxjs/issues/1732