I've been playing around with CQRS/event sourcing for a couple of months now. Currently I'm having trouble with another experiment, and I hope somebody could help, explain, or even hint at another approach than event sourcing.
I want to build a distributed application in which every user has governance over his/her own data. So my idea is that each user hosts his/her own event store while other users may have (conditional) access to it.
When user A performs some command, this may involve more than one event store. Two examples:
1) Deleting a shared task from a task list hosted in both event store A and event store B.
2) Adding a reference to a comment persisted in event store A to a post persisted in event store B.
My only solution currently seems to be a process manager attached to each event store, so that when an event is added to one event store, a saga takes care of applying it to the other related event stores as well.
I'm not sure what the purpose of your solution is, but if you want one system to react to events from another system: after events are saved to the store, a subscription (like the catch-up subscription provided by Greg Young's EventStore) publishes them on a message bus using pub-sub, and all interested parties can handle the event.
However, it would be wrong if they just "saved" this event to their own stores. Instead, they should have an event handler that produces a command inside the local service, and this command might (or might not) result in a local event, if all conditions are met. Only something that happens within the boundaries, under local control, should be saved to the local store.
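As a rough illustration of that last point, here is a minimal sketch (not from the original answer) of a subscriber in service B reacting to an event published by service A. All names here (busSubscribe, localReadModel, localCommandBus, the event and command shapes) are hypothetical:

```javascript
// Hypothetical subscriber in service B, listening to events published by
// service A on a shared message bus. Every identifier below is illustrative.
busSubscribe('sharedTaskDeleted', async (remoteEvent) => {
  // Do NOT copy the remote event into the local store. Instead, decide
  // locally whether anything should happen inside this boundary at all.
  const task = await localReadModel.findSharedTask(remoteEvent.taskId);
  if (!task) return; // nothing to do in this boundary

  // Issue a local command; it may or may not produce a local event,
  // depending on local invariants.
  await localCommandBus.send({
    type: 'removeTaskFromList',
    taskId: remoteEvent.taskId,
    listId: task.listId,
  });
});
```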
I was reading the Event Hub documentation and created a simple producer-consumer example.
Link -> https://learn.microsoft.com/en-us/javascript/api/overview/azure/event-hubs-readme?view=azure-node-latest
I was wondering how this would work in a production application, because in the current implementation the consumer listens for a specific amount of time and then the connection is closed.
Should we send the request to specific REST endpoints and activate the listeners after the producer finishes?
You are correct that in most production scenarios this does not work. It is best to keep the listener open for the lifetime of the application. In most cases, when a restart of the application is triggered, processing should resume from the last checkpoint on continuation. The example does not cover this.
From the docs:
For the majority of production scenarios, we recommend that you use the event processor client for reading and processing events. The processor client is intended to provide a robust experience for processing events across all partitions of an event hub in a performant and fault tolerant manner while providing a means to checkpoint its progress. Event processor clients can work cooperatively within the context of a consumer group for a given event hub. Clients will automatically manage distribution and balancing of work as instances become available or unavailable for the group.
Here is an example of processing events combined with checkpointing. For demo purposes the listener stops after a while. You will have to modify the code to run as long as the process is not stopped.
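A minimal sketch along those lines, assuming the v5 @azure/event-hubs client together with @azure/eventhubs-checkpointstore-blob and @azure/storage-blob; the connection strings, names, and container are placeholders:

```javascript
const { EventHubConsumerClient, earliestEventPosition } = require("@azure/event-hubs");
const { ContainerClient } = require("@azure/storage-blob");
const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob");

// Placeholder values: replace with your own settings.
const storageConnectionString = "<storage-connection-string>";
const containerName = "<blob-container-name>";
const eventHubConnectionString = "<event-hub-connection-string>";
const eventHubName = "<event-hub-name>";
const consumerGroup = "$Default";

const checkpointStore = new BlobCheckpointStore(
  new ContainerClient(storageConnectionString, containerName)
);
const consumerClient = new EventHubConsumerClient(
  consumerGroup, eventHubConnectionString, eventHubName, checkpointStore
);

// The subscription stays open for the lifetime of the process.
consumerClient.subscribe(
  {
    processEvents: async (events, context) => {
      for (const event of events) {
        console.log(`Partition ${context.partitionId}:`, event.body);
      }
      if (events.length > 0) {
        // Persist progress so a restarted process resumes from here.
        await context.updateCheckpoint(events[events.length - 1]);
      }
    },
    processError: async (err, context) => {
      console.error(`Error on partition ${context.partitionId}:`, err);
    },
  },
  { startPosition: earliestEventPosition } // only used when no checkpoint exists yet
);
```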
Checkpointing is important if you have a continuous flow of events being sent. If the listener is not available for some period, you do not want to resume processing from the very first event, nor only from new events. Instead, you will want to start from the last known processed event.
I'm using Azure Event Hub for a project via the npm package @azure/event-hubs.
I'm wondering if there's a way to make the event receiver only receive a new event when it is done processing a previously received event. The point is I want to process one event at a time.
My observation currently is that it sends events to the handler the moment they become available.
The API I'm using is client.receive(partitionId, onMessage, onError) from the docs.
I'm wondering if there's a way to achieve the mentioned behaviour with this API.
The client.receive() method returns a ReceiveHandler object that you could use to stop the stream via its stop() method. You would then start it again with a fresh client.receive().
Another option would be to use client.receiveBatch() with the max batch size set to 1.
Neither option is ideal. As Peter Bons mentioned, Event Hubs is not designed for a slow drip of data. The service assumes that you will be able to accept messages at the same rate they came in, and that you will have only one receiver per partition. Service Bus is indeed a good alternative to look into. You can choose how many messages to receive at a time and connect multiple receivers, each processing one message at a time, to scale your solution.
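As a rough sketch of the receiveBatch() idea, assuming the v2 @azure/event-hubs client (the connection string, hub name, and handleEvent are placeholders, and the exact EventPosition helpers may differ between package versions):

```javascript
const { EventHubClient, EventPosition } = require("@azure/event-hubs");

// Placeholder handler: replace with your own processing logic.
async function handleEvent(event) {
  console.log(event.body);
}

async function processOneAtATime(connectionString, eventHubName, partitionId) {
  const client = EventHubClient.createFromConnectionString(connectionString, eventHubName);
  let position = EventPosition.fromStart();

  while (true) {
    // Ask for at most one event, waiting up to 5 seconds for it.
    const events = await client.receiveBatch(partitionId, 1, 5, {
      eventPosition: position,
    });
    for (const event of events) {
      await handleEvent(event); // the next receiveBatch only starts after this resolves
      position = EventPosition.fromOffset(event.offset); // resume after this event
    }
  }
}
```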
What you are describing is the need for back pressure, and the latest version of the @azure/event-hubs package has indeed solved this problem. It ensures that you receive events only after the previously received events are processed. By default you will receive events in batches of size 10, and this size is configurable.
See the migration guide from v2 to v5 if you need to upgrade your current application to use the latest version of the package.
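For example, a minimal sketch (assuming the v5 EventHubConsumerClient; the connection string and names are placeholders) that asks for one event at a time:

```javascript
const { EventHubConsumerClient } = require("@azure/event-hubs");

const consumerClient = new EventHubConsumerClient(
  "$Default",                        // consumer group
  "<event-hub-connection-string>",   // placeholder
  "<event-hub-name>"                 // placeholder
);

consumerClient.subscribe(
  {
    processEvents: async (events, context) => {
      // With maxBatchSize: 1 this array holds at most one event, and the
      // next batch is only delivered after this handler resolves.
      for (const event of events) {
        console.log(`Partition ${context.partitionId}:`, event.body);
      }
    },
    processError: async (err) => {
      console.error(err);
    },
  },
  { maxBatchSize: 1, maxWaitTimeInSeconds: 60 }
);
```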
Following the Flux concepts, we get the following assertions, for which I couldn't find explanations.
Every store will receive every action.
Why? My suggestion: since a store contains some business logic, we have to provide it with all possible changes and data so the store can decide what to do with them on its own.
The data in a store must only be mutated by responding to an action.
Why? My suggestion: the reason is that not responding to an action would violate the unidirectional data flow.
Every time a store's data changes it must emit a "change" event.
Why? I don't get this point.
Flux is just a way of managing the data flow of your application, so it is up to the developer to make sure this actually happens. But I'll try to paint a picture of why these concepts are a part of Flux.
Every store will receive every action.
If you have only one dispatcher in your application, every store will listen to actions dispatched through that dispatcher. It is up to you whether or not the store should act on the action dispatched, but to be able to react to it the store has to know about it.
Not all actions should lead to changes in a store, though. But the dispatcher simply doesn't care, because it won't know anything about the store implementation. It's just telling all stores that this action happened: do what you want with it, or go on with your life without caring.
The data in a store must only be mutated by responding to an action.
You're right that doing it with a different approach could be a violation of the unidirectional data flow. Doing things this way makes sure all parts of your application have the correct state based on the actions that happen.
By not doing it this way, you would let go of one of Flux's strengths. Update your store based on dispatched actions, and other stores will also be aware that the action happened and can react to it if they want to. If you update the store directly, you will end up having no clear picture of which parts of your application are altering the state of your store.
Every time a store's data changes it must emit a "change" event.
People often describe the stores in a Flux application as the source of truth. When a store's data changes, the basis for the visualization of your data changes. You want to be confident that if your store holds a certain value, this is what your application uses as its data.
It's related to the first quote here. The store doesn't know if a listener depends on its data. By emitting a change, it lets all listeners know: "hey, I changed, make sure you have all my latest changes." If you don't emit a change, the listener could end up displaying something based on stale data.
All of these statements are related to the same thing: if an action happens in your application, don't make any assumptions about which parts of your application want to know the details of it. Make sure everyone can act on it if they want to.
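To tie the three points together, here is a minimal sketch of a store, assuming the flux npm package's Dispatcher and Node's EventEmitter; the store, action type, and listener are illustrative:

```javascript
const { Dispatcher } = require("flux");
const { EventEmitter } = require("events");

const dispatcher = new Dispatcher();

class TodoStore extends EventEmitter {
  constructor() {
    super();
    this.todos = [];
    // Every store registers with the single dispatcher and therefore
    // receives every action that is dispatched.
    dispatcher.register((action) => this.handleAction(action));
  }

  handleAction(action) {
    // The store's data is only mutated here, in response to an action.
    switch (action.type) {
      case "todo/added":
        this.todos.push(action.payload);
        this.emit("change"); // every mutation is followed by a change event
        break;
      default:
        // Actions the store doesn't care about are simply ignored.
        break;
    }
  }
}

const store = new TodoStore();
// A view (or any other listener) subscribes to "change" and re-reads state.
store.on("change", () => console.log("todos:", store.todos));
dispatcher.dispatch({ type: "todo/added", payload: "write answer" });
```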
I would like to trigger an event on a Meteor server when a document in my collection changes to a specific value, say some field changes from false to true.
I am familiar with binding events to the client; however, I want this event to only be called when the server state changes, specifically the value of a given document in my collection. I want to trigger an external HTTP call from the server when this happens, as I need to message external applications.
Seems like this is an old post; for the benefit of others: the PeerDB package seems to do some of the tasks you are looking for.
https://atmospherejs.com/peerlibrary/peerdb
Also a bit late, but the most classic solution to this type of problem is the very popular meteor-collection-hooks library. In particular, you'd probably want to use the .after.update hook (see its documentation), which allows you to hook into a particular collection after an update is made to a document and compare before and after by comparing doc (the doc after the update) to this.previous (the doc before the update).
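A minimal server-side sketch of that idea, assuming a Tasks collection with a boolean done field, the matb33:collection-hooks package, and Meteor's http package; the import path and webhook URL are placeholders:

```javascript
import { Meteor } from 'meteor/meteor';
import { HTTP } from 'meteor/http';
import { Tasks } from '/imports/api/tasks'; // placeholder: wherever your collection lives

if (Meteor.isServer) {
  Tasks.after.update(function (userId, doc, fieldNames, modifier, options) {
    const wasDone = this.previous.done; // document before the update
    const isDone = doc.done;            // document after the update

    // Only fire the external call when the field flips from false to true.
    if (!wasDone && isDone) {
      HTTP.post('https://example.com/webhook', {
        data: { taskId: doc._id, completedBy: userId },
      });
    }
  });
}
```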
I have a Backbone Collection that users are performing CRUD-type activities on. I want to postpone any changes from being propagated back to the server — the Collection.sync() should not happen until the user initiates it (like POSTing a form).
As it stands, I have been able to implement on-the-fly updates with no issue (by calling things like Model.destroy() on the models when they are deleted, or Collection.add() to add new models to the collection). As I understand it, I could pass the {silent: true} option to my models, preventing .sync() from being called during .add()/.destroy(), but from what I can tell, that could lead to some headaches later.
I have considered overriding Backbone.sync, but I am not sure if that is the best route — I feel like there is some way to hook into some events, but I am not sure. Of course I have read through the Backbone docs, annotated source, and relevant SO questions before posting this, but I have hit a wall trying to extrapolate for this particular situation.
Eventually I will need to implement this in many places in my application, which is why I am concerned about best-practices at this stage. I am looking for some guidance/suggestions/thoughts on how to proceed with preventing the default behavior of immediately syncing changes with the remote server. Any help is appreciated — thank you for your time!
EDIT:
I went with Alex P's suggestion of refactoring: in my collection I set up some attributes to track the models that have been edited, added, or deleted. Then, when the user triggers the save action, I iterate through the lists and do the appropriate actions.
The first step is to ensure that your collection is being synchronised when you suspect it is. Collection.add() shouldn't trigger a Collection.sync() by default (it's not mentioned in the method documentation or the list of events, and I couldn't see a trigger in the annotated source).
Model.destroy() does trigger a sync(), but that shouldn't be a surprise - it's explicitly defined as "destroying a model on the server", and that sync() is performed on the model, not the collection. Your destroyed models will be removed from any collections that contain them, but I wouldn't expect those collections to sync() unless explicitly asked.
If your collections really are sync()ing when you're not expecting them to, then the most likely culprit is an event listener somewhere. Have you added any event listeners that call sync() for you when they see add or remove events? If your collection should sync() only on user interaction, can you remove those event listeners?
If not, then passing {silent: true} into your methods might be a viable approach. But remember that this is just stopping events from being emitted - it's not stopping that code from running. If something other than an event listener is triggering your sync()s, then preventing those events from being emitted won't stop them.
It would also be worth considering a wider refactor of your app. Right now you modify the collection and models immediately, and try to delay all sync()s until after the user clicks a button. What if you cached a list of all models to destroy & items to add, and only performed the actions when the button is clicked? Storing the model IDs would be sufficient to destroy them, and storing the collection ID and model ID would let you add items. It also means you don't have to fetch() the collection again if the user decides not to save their changes after all.
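A rough sketch of that refactor (the collection name, the pendingAdds/pendingDestroys fields, the stage/commit methods, and the endpoint are illustrative, not part of Backbone's API):

```javascript
// Placeholder endpoint and names; adapt to your own app.
var Task = Backbone.Model.extend({ urlRoot: '/tasks' });

var TaskList = Backbone.Collection.extend({
  model: Task,
  url: '/tasks',

  initialize: function () {
    this.pendingAdds = [];      // new models to POST on save
    this.pendingDestroys = [];  // existing models to DELETE on save
  },

  stageAdd: function (attrs) {
    var model = this.add(attrs); // shows up in the UI, no server call yet
    this.pendingAdds.push(model);
    return model;
  },

  stageDestroy: function (model) {
    this.pendingDestroys.push(model);
    this.remove(model);          // disappears from the UI, no server call yet
  },

  // Call this from the "Save" button handler.
  commit: function () {
    _.each(this.pendingDestroys, function (model) {
      model.destroy();           // the DELETE request happens only now
    });
    _.each(this.pendingAdds, function (model) {
      model.save();              // the POST request happens only now
    });
    this.pendingAdds = [];
    this.pendingDestroys = [];
  }
});
```

If the user abandons their changes, simply clearing the two pending lists (and re-adding any staged removals) restores the collection without another fetch().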