Managing state with FRP - JavaScript

Some say that FRP is about handling event streams without explicitly managing state. This person, for example:
http://www.slideshare.net/borgesleonardo/functional-reactive-programming-in-clojurescript
Others motivate FRP by pointing out the difficulties of programming entirely by side-effects, as one must do with asynchronous callbacks.
http://cs.brown.edu/~sk/Publications/Papers/Published/mgbcgbk-flapjax/
However, in experimenting with FRP (flapjax), I keep hitting the same problem: an inability to handle state except explicitly, via side-effects.
For example, an animation queue. Changes arrive on an event stream. When the first change arrives, I need to queue a draw to happen some time in the future (e.g. with window.requestAnimationFrame), and arrange to accumulate changes between now and the future draw event. When the draw event happens, I need to draw the accumulated changes.
This is about six lines of code using an imperative style with observer pattern, but I can't find a reasonable way to express this in FRP. The only thing that comes close involves closing the related event streams over a shared state, and explicitly managing the state and rendering events via side-effects. It's hardly an improvement on imperative callbacks.
How is this supposed to be handled in FRP?
Here's a flapjax utility for closing over state:
function worldE(init, handlers) {
  var r = fj.receiverE();
  fj.forEach(function(h) {
    h[0].mapE(function (ev) {
      r.sendEvent(init = h[1](init, ev));
    });
  }, handlers);
  return r;
}
And here it is used in an animation loop:
function initialize(opts) {
  var blitE = fj.receiverE();

  function accumulate(state, data) {
    if (!state.queued) {
      window.requestAnimationFrame(blitE.sendEvent);
    }
    return {queued: true, changes: _.extend({}, state.changes, data)};
  }

  function dodraw(state, _) {
    draw(state.changes);
    return {queued: false, changes: {}};
  }

  worldE({queued: false, changes: {}},
         [[opts.data_source, accumulate], [blitE, dodraw]]);
}
Things to note: it's larger, less readable and less maintainable than the equivalent callback code. It still requires explicitly managing state. And it operates via side effects.
Is there a better way of doing this in FRP? A different pattern, or a different library?

I'm not familiar with Flapjax, but looking at the docs, you can use collectE to create a stream accumulating the state.
Then, create a second stream of animationFrame events using receiverE and sendEvent.
Finally, emulate bacon.js's sampledBy (https://github.com/baconjs/bacon.js/wiki/Diagrams#sampledby) to create a third stream of [animationFrame, state] tuples.

Here's a bit more flapjax sugar: instead of using sampledBy, use snapshotE, which will snapshot the already-collected changes on each frame event. Check your Google Groups topic; I've written some code for it.
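For what it's worth, here is a rough, untested sketch of that direction, reusing the names from the question (opts.data_source, draw). It assumes the collectE/startsWith/snapshotE signatures from the flapjax docs and leaves out the frame coalescing and accumulator reset that the question's version handles:

function initialize(opts) {
  var frameE = fj.receiverE();

  // Fold each incoming change object into one accumulated object,
  // and hold the latest accumulation in a Behavior.
  var changesB = opts.data_source
    .collectE({}, function (data, acc) { return _.extend({}, acc, data); })
    .startsWith({});

  // Ask for an animation frame whenever data arrives (no coalescing here).
  opts.data_source.mapE(function () {
    window.requestAnimationFrame(frameE.sendEvent);
  });

  // On each frame, snapshot the accumulated changes and draw them.
  frameE.snapshotE(changesB).mapE(draw);
}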

FRP is entirely about managing state. But it's about managing state without time. To do this you must keep the state within the tree or graph; this prevents any old bit of code from mutating it at some random point in time.
In FRP, state is only updated when a parent node fires an update. When this happens, that event's value (and any other parent's value) is used to calculate the new node's value.
blandw's answer above is correct as collectE is what you want.
In collectE the previous value of this node is provided when its parent node fires. This allows you to store any state you wish, such as a collection of changes. But this collection is only ever updated when a parent event has fired.
So by restricting your mutation of state to when the parents update, you save a lot of hassle. This is hard to see with a basic example; it becomes more apparent once you have worked on a large project.
However, to update blandw's answer: do not use receiverE for any reason other than getting events out of non-FRP systems. It should never appear in the middle of an FRP graph, only as a top-level node with no parents.
Using it in any other case violates the whole point of FRP.

Related

How to log all events (user actions) on DOM elements that have event listeners?

I would like to record all events that are fired through user actions on DOM elements: a feature like recording user actions (a macro) on my website, so the app can later regenerate the current state by executing the user actions sequentially. How can I do it?
Is there any API or solution to find all events that are processed by event listeners?
Or should I gather events by myself? If so, what would be your approach/solution/design?
This answer says no: https://stackoverflow.com/a/63346192/5078847
It would be technically possible to record such a thing by
(1) using addEventListener exclusively in your site's code (if you don't, you'll have to also iterate through all on- properties and scan for inline handlers too, which is quite a pain)
(2) Overwrite the addEventListener prototype with a custom handler that, when fired, stores information uniquely identifying the event in an array (for example, save the name of the event fired, a full selector string to the element the event is dispatched to, and, if you need it, the amount of time since the page was loaded)
(3) When needed, save the array somewhere
(4) To emulate the user's prior actions, retrieve the array, then iterate through it. For each action, create and dispatch an event to the unique selector at the time required (a sketch follows below).
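A rough sketch of what step (4) could look like. The shape of each recorded entry ({ eventName, selector, delay }) is hypothetical and should match whatever your recorder stored in step (2):

function replayActions(actions) {
  // Chain the actions so each one fires after its recorded delay.
  actions.reduce(function (chain, action) {
    return chain.then(function () {
      return new Promise(function (resolve) {
        setTimeout(function () {
          var target = document.querySelector(action.selector);
          if (target) {
            target.dispatchEvent(new Event(action.eventName, { bubbles: true }));
          }
          resolve();
        }, action.delay);
      });
    });
  }, Promise.resolve());
}

Note that a synthetic dispatched event will trigger your listeners, but it won't reproduce browser default behaviour (focus, form submission, navigation), which is one more reason the state-serialization approach below is usually simpler.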
But this is really, really convoluted. It would make a lot more sense for there to be a single source of truth for what's being displayed on your page. To save a state, just serialize the object that holds the data. To resume a state, retrieve the object and render according to its contents.
There isn't anything built in, as the other post said. Chrome DevTools has a getEventListeners function that returns all the handlers for a given element (singular), but it only works inside the DevTools console.
You could (but shouldn't) hijack addEventListener from the prototype chain.
Based off of this old forum here
/** !! Foot-Gun !! **/
/** Proof of Concept/Probably Doesn't Work **/
HTMLElement.prototype._addEventListener = HTMLElement.prototype.addEventListener;
HTMLElement.prototype.addEventListener = function(eventName, handlerFunction, eventOptions) {
  // reportFunction you would have to write yourself
  this._addEventListener(eventName, reportFunction, eventOptions);
  this._addEventListener(eventName, handlerFunction, eventOptions);
  // Store an array of them on the element
  if ('currentListeners' in this === false) {
    this.currentListeners = [
      { eventName, handlerFunction, eventOptions }
    ];
  } else {
    this.currentListeners.push({ eventName, handlerFunction, eventOptions });
  }
}
Granted, I just handed out a loaded foot-gun for anyone who wants one. It's an anti-pattern at best; it doesn't control or track state, emulate user interactions, etc.
I wouldn't recommend this for "regenerating" or rerendering UI as it's gonna be more trouble than it's worth.
If you're trying to use this for debugging, there are a couple of SaaS products whose whole business model is based on this, like HotJar and Sentry.io. I'd recommend checking them out first if you're looking for a solution.

Block additional Firestore triggers from executing in a Firestore trigger

Is there any way to specify something like "FINAL" when you execute a Firestore trigger so an app with a lot of triggers doesn't create recursive updates?
For a simple example here is a trigger that adds the created_date field to every document that is created:
exports.setCreatedDate = functions.firestore
  .document('{collectionID}/{docID}')
  .onCreate((change, context) => {
    const patch = {created_at: Date.now()};
    return change.ref.set(patch, {merge: true});
  });
And another trigger that does the same when a document is updated:
exports.setUpdatedDate = functions.firestore
  .document('{collectionID}/{docID}')
  .onUpdate((change, context) => {
    const patch = {updated_at: Date.now()};
    return change.after.ref.set(patch, {merge: true});
  });
Is there any built in way (I have a couple hacky workarounds already) to block the update from firing after we set the created_at time?
We are moving as much of the application logic as possible into Firestore triggers for an enterprise app that has to scale to 100 million+ records, and cascading triggers are causing huge spikes in Firestore requests.
One obvious solution is to use single triggers, then have a stack of functions that are called, but I love the simplicity of using strictly core functionality and being able to write as many isolated triggers as we like.
Is there any way to specify something like "FINAL" when you execute a Firestore trigger so an app with a lot of triggers doesn't create recursive updates?
No.
Is there any built in way (I have a couple hacky workarounds already) to block the update from firing after we set the created_at time?
No.
You will have to write code in your function to determine whether the invocation should make any more changes, and if not, just return early. It's common to check whether the change has already been made by looking at existing fields in the document.
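As a rough sketch of that early-return guard, applied to the setUpdatedDate trigger from the question (it assumes updated_at is the only field this trigger writes; a real check would use a proper deep comparison rather than JSON.stringify, which is sensitive to key order):

exports.setUpdatedDate = functions.firestore
  .document('{collectionID}/{docID}')
  .onUpdate((change, context) => {
    const before = change.before.data();
    const after = change.after.data();

    // Ignore the field this trigger writes itself and compare the rest.
    // If nothing else changed, this invocation was caused by our own
    // earlier write, so return early instead of writing (and re-firing) again.
    const { updated_at: beforeStamp, ...restBefore } = before;
    const { updated_at: afterStamp, ...restAfter } = after;
    if (JSON.stringify(restBefore) === JSON.stringify(restAfter)) {
      return null;
    }

    return change.after.ref.set({ updated_at: Date.now() }, { merge: true });
  });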

RxJs 6 load more pagination stream

I'm trying to build one of those more modern paginations where there's not dedicated links for the individual pages but one where more results are loaded automatically when the user scrolls to the bottom. On the web page are multiple widgets that allow you to modify the search parameters. When the parameters change more results should be fetched via ajax beginning from the first page again.
I'm fairly new to RxJS and I'm having issues wrapping my head around how to identify the observables/subjects I need and how to compose them to achieve the described behavior.
Here's the specific flow I have in mind:
When the page is first loaded an initial set of parameters is taken and used to load the first page. When a "load more" event is fired the next page should be fetched and rendered to the page.
When the parameters change the page should be loaded starting from page 1 again.
When the server signals that there are no more results to load I should get notified about that via an observable. If further "load more" events are fired after no more pages are available the ajax request should not be made to save bandwidth on mobile devices.
Lastly, as long as a network request is open I want to be able to display a loader, so I need an observable that informs me about whether there are open requests or not.
As a bonus: currently I've implemented signaling "no more results" by returning a 404 from the backend when a page one beyond the last page is requested. I'd like to use catchError on the ajax observable in such a way that it gracefully stops the ajax request without breaking the subscription.
Here's what I was able to come up with so far, but it has multiple problems (described below):
import { BehaviorSubject, Subject, fromEvent } from 'rxjs';
import { map, mergeMap, switchMap, takeUntil, tap } from 'rxjs/operators';
import { ajax } from 'rxjs/ajax';
import { stringify } from 'qs';

const paramsEl = document.querySelector<HTMLTextAreaElement>('#params');
const paramsChangedBtn = document.querySelector<HTMLButtonElement>('#paramsSubmit');
const loadNextPageBtn = document.querySelector<HTMLButtonElement>('#loadNextPage');

const getParams = () => JSON.parse(paramsEl.value);

const params$ = new BehaviorSubject(getParams());
const page$ = new BehaviorSubject(1);
const noMoreResults$ = new Subject<void>(); // <- public
const connections$ = new BehaviorSubject(0);
const loading$ = new BehaviorSubject(false); // <- public

// for the sake of this example we're not using an IntersectionObserver etc. but a plain button to fire a "load more" event
const loadNextPage$ = fromEvent(loadNextPageBtn, 'click');

// same for params changed event. In my real app I've got a working stream fed from the widgets
fromEvent(paramsChangedBtn, 'click').subscribe(e => params$.next(getParams()));

// when the params change, reset page to 1
params$.subscribe(() => page$.next(1));

// update loading$ observable for displaying/hiding a loader
connections$.subscribe(connections => {
  if(connections > 0 && loading$.getValue() === false) loading$.next(true);
  if(connections <= 0 && loading$.getValue() === true) loading$.next(false);
});

// when we need to load the next page, increment the page observable
loadNextPage$.subscribe(e => page$.next(page$.getValue() + 1));

//////////////

// whenever a new page should be requested, get the current parameters and fetch data for this page
page$
  .pipe(
    takeUntil(noMoreResults$),
    tap(() => connections$.next(connections$.getValue() + 1)),
    mergeMap(page => {
      const qs = stringify({
        ...params$.getValue(),
        page,
      });
      return ajax.getJSON<any>(`https://httpbin.org/get?${qs}`)
      // this doesn't seem to do anything
      // furthermore the ajax request would already have been made at this point
      // .pipe(
      //   takeUntil(noMoreResults$)
      // );
    }),
    tap(() => connections$.next(connections$.getValue() - 1)),
  )
  .subscribe(data => {
    console.log(data.args);
    // for testing purposes pretend we have no more data at page 5
    if(data.args.page === "5") noMoreResults$.next();
  });

// for debugging purposes
loading$.subscribe(loading => console.log('loading: ', loading));
noMoreResults$.subscribe(() => console.warn('no more results'));
You can find the running version of this here on stackblitz.
Here's the issues with the code:
The current placement of takeUntil(noMoreResults$) breaks the subscription: once noMoreResults$ has fired and params$ then emits, no further pages are loaded. (See the comment in the code for the other candidate location, inside the ajax pipe.)
Using params$.getValue() when mergeMapping to the ajax observable feels wrong; however, I don't know how to properly pass both the page number and the parameters down in one stream.
In general I think I've overused Subjects / BehaviorSubjects quite a bit but I'm not sure. Can you either confirm or deny this?
The composition of the observables feels very messy and hard to follow. Is this inherent to what I'm trying to do, or is there room for improvement?
Can you please provide a working example as well as elaborate on the biggest mistakes I've made?
I have been keeping this question open in my tabs ever since you created this question, wanting to help, but also, wanting to learn enough of the RxJS so I could create a solution myself.
I'm really sorry, but I haven't looked at your example; instead, I created my own. I have to ask you to forgive me for the extremely large answer that I will provide here.
I was mainly driven by the excellent talk by Ben Lesh, one of the creators of modern RxJS which you can find here. I strongly suggest that you look at this video, even multiple times, to try to understand some of the stuff I used in my solution to this problem.
Like Ben, I also used the Angular framework as the basis for this project. You can find my solution on GitHub. Also, just as Ben explains some Angular-specific things a couple of times in his talk, I will try to do the same here.
What I've got in my app are a simple FeedComponent and a FeedService. The service is being injected using Angular Dependency Injection to the FeedComponent.
Now, I've got an HTML bound to FeedComponent that looks like this:
<div class="loading" *ngIf="loading$ | async">
  Loading...
</div>
<div class="filter">
  <form #form="ngForm">
    ...
  </form>
</div>
<div #articles class="articles">
  <article *ngFor="let article of feed$ | async">
    ...
  </article>
</div>
You can see that I've got three sections: a <div> responsible for displaying a message that the feed is being loaded; another <div> with filtering <form> that displays filtering options; and a third <div> responsible for displaying feed items as a list of <article>s.
Angular-specific things here include the *ngIf and *ngFor directives and the async pipe (|). In short: *ngIf renders a DOM element if the condition given in the attribute value is truthy; *ngFor loops through the provided array and renders a DOM element once per item; a pipe | transforms the items on its left into input for whatever is on its right. The async pipe in particular turns an Observable (you can tell it's an Observable by the $ suffix that I and many others use) into values the directive understands. The async pipe is also explained in Ben's talk.
Let's get started: you can see that I'm using two Observables in my HTML template, and that's all you need: one that gives you an array of FeedItems so you can display them on the page, and another that emits boolean values while the feed is being fetched from the server. If you were not using Angular, but rather some other framework or library, or nothing at all, you could still get by with only these two streams. You would have to manually subscribe to them (and unsubscribe later, when no longer needed), and when you got results, you would update the DOM accordingly. Angular and the async pipe do all of this here for me.
These two are feed$ and loading$ Observable streams, respectively. Both of them are defined in the FeedComponent that is bound to this HTML, very simply, like this:
feed$ = this.feedService.feed$;
loading$ = this.feedService.loading$.pipe(delay(10));
As I said, feedService is injected to FeedComponent through FeedComponent's constructor using Angular DI:
constructor(private feedService: FeedService) {}
You would just have to create a new FeedService object if you used your own JS framework/lib or no lib at all. I'm adding a delay of 10 ms to the feedService.loading$ stream because I'm getting some Angular error that I won't explain here. You may not need it at all if not using Angular.
Now, to be able to provide feed items (FeedItem[]) through the feed$ stream, you need to listen to two possible events: a scroll event that fires when the user has scrolled close enough to the bottom of the page, and an event that happens when the filter form input values change. These two events need to be combined into a single Observable that we will call filterSeed$; it will emit the values contained in the input elements of the filter form.
The first event stream can be formed out of these two Observables:
scrollPercent$: Observable<number> = fromEvent(document, 'scroll')
  .pipe(
    map(() => {
      const scrollTop = this.articles.nativeElement.getBoundingClientRect().top;
      const docHeight = this.articles.nativeElement.getBoundingClientRect().height;
      const winHeight = window.innerHeight;
      const scroll = scrollTop / (winHeight - docHeight);
      return Math.round(scroll * 100);
    })
  );

loadMore$: Observable<number> = this.scrollPercent$
  .pipe(
    filter(percent => percent >= 80),
    take(1),
    repeatWhen(() => this.feedLoadingStops$)
  );
scrollPercent$ is an Observable that emits numbers. They represent the scroll percentage when the document's scroll event fires (created using fromEvent). Whatever event it emits, I don't really care about it; I only care about when it emits, so I can map it to percentages using some simple math. this.articles.nativeElement is Angular specific, so if you need another example, please take a look at this Pen about how to achieve it with jQuery. The return value of the map function is the rounded scroll percentage.
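If you're not on Angular, a framework-free version of the same idea might look roughly like this (it assumes the list container can be reached with an ordinary selector, here #articles):

import { fromEvent } from 'rxjs';
import { map } from 'rxjs/operators';

const articles = document.querySelector('#articles');

// Map each document scroll event to a rounded scroll percentage.
const scrollPercent$ = fromEvent(document, 'scroll').pipe(
  map(() => {
    const rect = articles.getBoundingClientRect();
    const scroll = rect.top / (window.innerHeight - rect.height);
    return Math.round(scroll * 100);
  })
);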
loadMore$ is an Observable that fires only when a user has scrolled far enough that new feed items should be loaded. It emits the scroll percentage number, but we don't really care about that; you'll see that we're ignoring these numbers later. The threshold for when this should happen is at or after 80% of the scroll. So I'm using filter here to let through only values above the threshold (remember, I need loadMore$ to emit when this threshold is reached and passed). And I'm using take(1) here because I really only need one such item.
Just as Ben had a problem where the whole stream only worked once, I had it as well. Because take completes (and effectively unsubscribes from) the source when it takes that one item, I need to resubscribe to the same source, which is, all the way at the top, the stream created by fromEvent.
So, I need to start listening to the scroll events once again, but there's a catch here: I don't want to do it immediately, but rather when the loading completes. So I need to use repeatWhen instead of just repeat. repeatWhen takes a factory function that it calls when needed to get an Observable to subscribe to. It listens to the provided Observable (this.feedLoadingStops$) and resubscribes to the source when this.feedLoadingStops$ emits.
The this.feedLoadingStops$ looks like this:
feedLoadingStops$ = this.loading$.pipe(map(v => !v), filter(v => v));
It inverts the values (false becomes true, and vice versa), so that an emitted true value indicates that loading has stopped (remember, when loading$ emits false, it indicates that loading has stopped). It also filters out everything but true values, so that we only get emissions when loading stops.
But you may wonder why. Why did repeat work for Ben and not for me? Why did I have to resubscribe only when the loading stops? It's because we both used higher-order mapping operators after the events fired by fromEvent to flatten the HTTP requests later on. I will come to that later, but he used the exhaustMap operator, and I used switchMap, which always switches to the latest item emitted by the source and subscribes to it. If loadMore$ were resubscribed immediately (using just repeat), it would start listening to the scroll events again, and since the user would certainly keep scrolling, loadMore$ would start emitting once again and switchMap would keep resubscribing to the provided HTTP request until the user stopped scrolling. Which is really not what we want: we don't want to create multiple identical HTTP requests for a single resource just because the user is doing something we're responsible for handling. exhaustMap does exactly the opposite of switchMap: it waits for the current inner Observable to finish (basically, it exhausts it) before it will subscribe to the next one.
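A tiny standalone illustration (not taken from the project) of that difference, with a fake 500 ms request standing in for the real HTTP call:

import { fromEvent, of } from 'rxjs';
import { switchMap, exhaustMap, delay } from 'rxjs/operators';

const scroll$ = fromEvent(document, 'scroll');
const fakeRequest = () => of('response').pipe(delay(500));

// switchMap: every scroll emission starts a new "request" and cancels the previous one.
scroll$.pipe(switchMap(fakeRequest)).subscribe(console.log);

// exhaustMap: scroll emissions are ignored while a "request" is still in flight.
scroll$.pipe(exhaustMap(fakeRequest)).subscribe(console.log);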
That was the explanation of the first event stream that helps create the filterSeed$ Observable. The other one is rather simple. Angular provides such Observables on its own when it comes to forms: I used this.form.valueChanges and was automatically subscribed to any form input value changes. Since I only have a single <select>, which I use to choose whether to load the feed with all items, only items with text, or only items with images, I want an event to fire when the user selects a different option.
Since Angular provides this for me, you may want to create your own Observable that would emit changed form input values for you.
And finally, this is what filterSeed$ would look like:
filterSeed$: Observable<FeedFilter> = defer(() => merge(
  this.loadMore$.pipe(map(() => this.form.value)),
  this.form.valueChanges
));
Here, I want to merge two streams: one created by listening to the loadMore$ events, and another listening to the filter form value changes. And that is exactly what I want: I want to load new feed items only when the user has scrolled far enough to the bottom or when they have changed a filter. this.form.valueChanges already provides FeedFilter items, but this.loadMore$ does not. Remember, this.loadMore$ emits numbers which I said I don't really care about, so I'm mapping them to this.form.value. This is yet again Angular specific, so you'd have to implement your own reading of the whole form's input elements. The current this.form.value is always the same as the last item emitted by this.form.valueChanges.
The reason to use defer here is yet another Angular-specific one: by the time filterSeed$ is created, this.form is still undefined, so I need to wait until subscription happens to make sure this.form is available. And I will subscribe to this Observable in the ngAfterViewInit() lifecycle hook, which is called upon the component view's creation. You may not need defer here if not using Angular. Now, when everything is ready, it's time to subscribe to this.filterSeed$ and do some data loading. Don't forget to unsubscribe when leaving the component/page, so you don't leak memory through the event listeners created by fromEvent.
ngAfterViewInit(): void {
  this.subscription = this.filterSeed$.subscribe(this.feedService.filter$);
}

ngOnDestroy(): void {
  this.subscription.unsubscribe();
}
We now come to the second part of this answer which is FeedService. The FeedService is only a simple JS object that has some Observables and some state. I need this state in order to be able to work with multiple Observables - some might say this is not the true Rx way, but I found this to be easier for me to solve it this way. I'm also injecting Angular's HttpClient (as http variable) to the FeedService which is only an Angular wrapper to XHR. You could use RxJS ajax static creation method instead - both should behave the same.
The FeedService has three public Observables, and you may have seen all of them being used in FeedComponent: the filter$, loading$ and feed$ Observables. The latter two are used by FeedComponent to render stuff to the DOM, while filter$ is used to feed it with filterSeed$. Basically, filter$ is just a Subject:
filter$ = new Subject<FeedFilter>();
And since Subjects are both Observables and Observers, I could use it as an Observer, so I passed it to subscribe method when I subscribed to filterSeed$ in FeedComponent. What this means is that filter$ Observer will subscribe to filterSeed$ and any call to next method (basically, any emission) from filterSeed$ will pass through to the filter$ Subject. This means that anyone else using filter$ will get the value emitted by filterSeed$.
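A minimal illustration of that Subject-as-Observer trick, outside of the project:

import { Subject, of } from 'rxjs';

const filter$ = new Subject();
filter$.subscribe(value => console.log('filter$ received', value));

// Because a Subject is also an Observer, it can be passed straight to subscribe():
// every emission of the source is forwarded to filter$'s own subscribers.
of({ feedFilter: 'all' }).subscribe(filter$);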
And filter$ is used to create feed$ Observable. Here's what it looks like:
feed$: Observable<FeedItem[]> = this.filter$.pipe(
  switchMap(filter => {
    if (filter !== this.filter) {
      this.filter = filter;
      this.nextPage = 1;
      this.shouldReset = true;
    }
    return this.getFeed$;
  }),
  scan((acc, value) => {
    if (this.shouldReset) {
      this.shouldReset = false;
      return value;
    }
    return acc.concat(value);
  }, [])
)
I'm using two operators here: switchMap and scan. I also have some state that I keep in the class itself, in the variables this.filter, this.nextPage and this.shouldReset.
I already mentioned higher order mapping operators. I'm using switchMap here. And it is being used after events fired by filter$ Subject which is connected through subscribe method with the FeedComponent's filterSeed$. So, whenever a refresh event is fired (either by user scrolling enough or by user changing a filter in the filter form), I want to map it to getFeed$ Observable which is responsible for creating HTTP requests. The reason to choose switchMap over others (over exhaustMap which Ben used) is that I want to make sure that I always get the result from the latest filter used by the user. I.e. if the user sent a request with one filter and changed a filter in meantime while the first request is still loading, I want to cancel that request and switch to another HTTP request.
Since filter$ Subject is emitting FeedFilter objects, they are passed to switchMap's callback function as filter parameter. This is where I'm checking if filter is actually the same as the filter in FeedService (this.filter). If they are not the same, it means that the filter is changed, so I need to save the new filter to the FeedService's filter (this.filter = filter;). This also means that I have to reset page to page 1 (this.nextPage = 1;) and set this.shouldReset to true. Then I return getFeed$ to which switchMap internally subscribes.
All of these state-holding variables are later used either by the getFeed$ Observable or by the operator that comes after switchMap: scan. But what does getFeed$ look like? Here it is:
getFeed$: Observable<FeedItem[]> = defer(() => {
  if (this.nextPage) {
    this.loadingSubject$.next(true);
    const url = appendQuery('/feed', { page: this.nextPage, feedFilter: this.filter.feedFilter });
    return this.http.get<FakeFeedResponse>(url);
  } else {
    return NEVER;
  }
}).pipe(
  catchError(() => /* Potentially handle this.nextPage here */ EMPTY),
  tap(response => {
    this.nextPage = response.nextPage;
    this.loadingSubject$.next(false);
  }),
  map(response => response.items),
  share()
);
I'm again using defer here. This is because I'm saving some state outside of these streams, so I want values from these state variables to be read when a subscription to getFeed$ is made, not when getFeed$ object is created.
In the defer's callback function body I'm checking if this.nextPage exists. The server returns null if there are no more items to load, so in that case, I'm returning NEVER which is an Observable that never emits. However, if there are items to load (when nextPage is a valid, truthy number), I'm returning an HTTP get request. nextPage is set to 1 by default (or is being reset to 1 in switchMap if filter is changed). I'm constructing url by appending nextPage and feedFilter as query string to '/feed' route.
Also, I'm using loadingSubject$ to emit true indicating that the loading has started. loadingSubject$ looks like this:
private loadingSubject$ = new BehaviorSubject(false);
loading$: Observable<boolean> = this.loadingSubject$.asObservable();
It is a BehaviorSubject with the default value of false. Values emitted by this Subject are offered to FeedComponent through loading$ Observable. When FeedComponent first subscribes to loading$ Observable, it will get false immediately.
The this.http.get<FakeFeedResponse>(url) request returned to defer is using Angular XHR wrapper. I'm injecting http to FeedService, but you should be able to use RxJS's ajax as I already mentioned. It emits objects of FakeFeedResponse type.
After this Observable is constructed using defer, I want to do some more stuff when it emits. First thing is to handle errors using catchError. If an error happens, I want to return an EMPTY Observable which just completes without emitting any item. I added comment here so that you may add some more error handling or handle (re)setting of nextPage or something.
After that, I'm saving nextPage from the response in tap and also emitting false to loadingSubject$ indicating that the loading has stopped. After tap, I'm using map to extract items from the response.
And then I'm sharing it. This is actually not really needed in my case. Why? Because there is only one subscriber to getFeed$ (which is switchMap), so there's really no need to share it across multiple subscribers, but it can stay here in case another subscriber ever appears. I added it because Ben added it as well, but he has more than one subscriber to his getFeed$ Observable.
And that's all about getFeed$, which is returned to switchMap in its callback method. In the feed$ Observable, after switchMap, I'm using scan. Actually, scan is here just so that Angular's async pipe can receive the already loaded items, by concatenating new values onto the already loaded ones (acc.concat(value)). I'm using the this.shouldReset flag here so that I don't use concat when the filter has changed. If you were not using Angular, you would probably subscribe to the feed$ Observable yourself and handle this case there, so you probably wouldn't need scan here.
Lastly, I'm using the Angular Interceptor feature to fake all of the server responses. Please take a look at how that's done.
And that's it. I'm really sorry for the very long answer; if you have questions, please open an Issue on GitHub. I really hope this answer might help you shape your solution, which I didn't really look at, I'm sorry.
If you'd like to try this example, you can clone the project, run npm install and then ng serve which will compile the whole project and run a dev server so you can try this project on your own.

Understanding AMD: how to handle response flow with a one-way module relationship

Consider, if you will, an app with a few unique views/states - let's call it a game. You have an overworld screen, a battle screen, a multiplayer interface, and maybe a minigame or two.
For the sake of argument, there isn't a lot of code in common between each view, so it lends itself well to AMD - a central controller/dispatcher, and each game state split into a separate file/view.
dispatcher.core.js
> overworld.view.js
> battle.view.js
> tournament.view.js
> minigame.view.js
Input and key commands get routed to the dispatcher, and trickle down to the current active view, which in turn manipulates the DOM as needed. One-way AMD relationships, so far so good.
The thing I'm getting hung up on is the response flow. The API response data that goes through the system is diverse, often affecting multiple views at the same time. Consider this case:
User presses buttons to move
Key commands gets routed to map view for movement animation
Map sends AJAX request to server for movement result
AJAX returns "battle commence" response to dispatcher
Dispatcher tells map view to disable itself, then battle view to init
The dispatcher was designed for this - to receive instruction and distribute. It seems like the obvious choice, much more than letting views affect each other directly.
However, there's a fundamental flaw here - the one-way relationship between the dispatcher and the views is violated as soon as the AJAX result is sent from the view to the dispatcher. You can either use the dispatcher for your AJAX callback, or you can instruct the dispatcher to make the AJAX call for you - but either way the view requires a way to reference the dispatcher, which as I understand it, violates the core tenet of AMD. For the life of me, I can't figure out how this would be implemented correctly!
My question is this: how would one implement such a structure correctly? Is this a limitation of AMD, or am I misunderstanding its use on a deeper level?
This question is intended to be for more of the general case, but if it affects answers at all, I'm using Require and jQuery for AMD and AJAX, respectively.
Is this a limitation of AMD, or am I misunderstanding its use on a deeper level?
AMD does not by any means impose a one-way relationship between object instances in general. What it does strongly recommend avoiding (though even this is not an absolute requirement) is circular dependencies between modules. And the kind of dependencies that matter for AMD are loading dependencies.
You can certainly have a module named dispatcher that goes:
define(function () {
  function Dispatcher(views) {
    this.views = views;
    for (var ix = 0, view; (view = views[ix]); ++ix)
      view.init(this);
  }

  return Dispatcher;
});
And viewA, viewB, that are structured like this:
define(function () {
  function View() {
    // ...
  }

  View.prototype.init = function (dispatcher) {
    this.dispatcher = dispatcher;
  };

  // Etc...

  return View;
});
Your main module could do:
define(['dispatcher', 'viewA', 'viewB'], function (Dispatcher, ViewA, ViewB) {
  var viewA = new ViewA();
  var viewB = new ViewB();
  var dispatcher = new Dispatcher([viewA, viewB]);
});
The above is meant to be a schematic example of what is possible, not a prescription for good design. At any rate, the point is that it is perfectly feasible, as far as AMD is concerned, to have circular references between objects.
There's nothing about AMD that is limiting here; it's entirely about the design of your modules themselves.
A common way to handle this is with an event-emitter.
The dispatcher can call methods directly on a view, but the view emits events which the dispatcher can listen and respond to, removing the need for a circular reference (as the view doesn't care where the events go, so it doesn't require a reference to the dispatcher.)
Fitted to your example workflow, it might look like this:
overworld tracks keypress
overworld animates in response to keypress
overworld emits 'move' event for dispatcher

  // overworld.view
  this.emit('move', {data});

  // dispatcher
  overworld.on('move', getMoveResult) // getMoveResult fires AJAX request

response tells dispatcher it's time to battle
dispatcher updates views

  overworld.hide()
  battle.show()
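A minimal sketch of that wiring in AMD terms. The 'event-emitter' module is a placeholder for whichever emitter you use (a micro-library or a hand-rolled on/emit pair), and hide(), show() and the '/move' endpoint are likewise illustrative:

// overworld.view.js
define(['event-emitter'], function (EventEmitter) {
  function OverworldView() {
    EventEmitter.call(this);
  }
  OverworldView.prototype = Object.create(EventEmitter.prototype);

  OverworldView.prototype.move = function (direction) {
    // animate locally, then announce the move; no dispatcher reference needed
    this.emit('move', { direction: direction });
  };

  return OverworldView;
});

// dispatcher.core.js
define(['overworld.view', 'battle.view', 'jquery'], function (OverworldView, BattleView, $) {
  var overworld = new OverworldView();
  var battle = new BattleView();

  overworld.on('move', function (data) {
    $.getJSON('/move', data).then(function (result) {
      if (result.battle) {        // "battle commence" response
        overworld.hide();
        battle.show();
      }
    });
  });
});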

Sequelize.js afterUpdate hook pass changed values

I'm building a node.js app and I'm evaluating Sequelize.js for persistent objects. One thing I need to do is publish new values when objects are modified. The most sensible place to do this would seem to be using the afterUpdate hook.
It almost works perfectly, but when I save an object the hook is passed ALL the values of the saved object. Normally this is desirable, but to keep the publish/subscribe chatter down, I would rather not republish fields that weren't saved.
So for instance, running the following
tasks[0].updateAttributes({assignee: 10}, ['assignee']);
Would automagically publish the new value for the assignee for that task on the appropriate channel, but not republish any of the other fields, which didn't change.
The closest I've come is with an afterUpdate hook:
Task.hook('afterUpdate', function(task, fn) {
  Object.keys(task).forEach(function publishValue(key) {
    pubSub.publish('Task:'+task.id+'#'+key, task[key]);
  });
  return fn();
});
which is pretty straightforward, but since the 'task' object has all the fields, I'm being unnecessarily noisy. (The pubSub system is ignorant of previous values and I'd like to keep it that way.)
I could override the setters in the task object (and all my other objects), but I would prefer not to publish until the object is saved. The object to be saved doesn't seem to have the old values (that I can find), so I can't base my publish on that.
So far the best answer I've come up with from a design standpoint is to tweak one line of dao.js to add the saved values to the returned object, and use that in the hook:
// the tweaked line in dao.js:
self.__factory.runHooks('after' + hook, _.extend({}, result.values, {savedVals: args[2]} ), function(err, newValues) {

// the hook, using savedVals:
Task.hook('afterUpdate', function(task, fn) {
  Object.keys(task.savedVals).forEach(function publishValue(key) {
    pubSub.publish('Task:'+task.id+'#'+key, task[key]);
  });
  return fn();
});
Obviously changing the Sequelize library is not ideal from a maintenance standpoint.
So my question is twofold: is there a better way to get the needed information to my hook without modifying dao.js, or is there a better way to attack my fundamental requirement?
Thanks in advance!
There is not, currently. In implementing exactly what you describe, we simply had to write logic to compare the old and new values and treat any fields that differed as changed.
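To make that comparison concrete, here is a small standalone helper, not tied to any particular Sequelize API; how you capture the previous values (a pre-save snapshot, an earlier fetch, etc.) is up to you:

// Return only the keys whose values differ between the two snapshots.
function changedFields(oldValues, newValues) {
  return Object.keys(newValues).filter(function (key) {
    return oldValues[key] !== newValues[key];
  });
}

// Inside the afterUpdate hook, publish only what actually changed:
// changedFields(oldValues, task.values).forEach(function (key) {
//   pubSub.publish('Task:' + task.id + '#' + key, task.values[key]);
// });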
