Azure event hub, receive events only after processing previous events? - javascript

I'm using Azure Event Hubs for a project via the npm package @azure/event-hubs.
I'm wondering if there's a way to make the event receiver only receive a new event once it is done processing the previously received one. The point is I want to process one event at a time.
My observation currently is that it sends events to the handler the moment they become available.
The API I'm using is client.receive(partitionId, onMessage, onError) from the docs.
Is there a way to achieve this behaviour with this API?

The client.receive() method returns a ReceiveHandler object that you can use to stop the stream via its stop() method. You would then start it again with a fresh client.receive().
Another option would be to use client.receiveBatch() with the max batch size set to 1.
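As a rough sketch of that second option against the v2 API (the connection string, hub name and handleEvent function are placeholders, not from the question, and the exact signatures may differ slightly between v2 releases):

const { EventHubClient, EventPosition } = require("@azure/event-hubs");

async function processOneAtATime(connectionString, eventHubName, partitionId) {
  const client = EventHubClient.createFromConnectionString(connectionString, eventHubName);
  let position = EventPosition.fromStart();
  while (true) {
    // Ask for at most one event, waiting up to 5 seconds for it to arrive.
    const [event] = await client.receiveBatch(partitionId, 1, 5, { eventPosition: position });
    if (event) {
      // The next receiveBatch call is not made until this completes.
      await handleEvent(event);
      position = EventPosition.fromSequenceNumber(event.sequenceNumber);
    }
  }
}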
Neither option is ideal. As Peter Bons mentioned, Event Hubs is not designed for a slow drip of data. The service assumes that you will be able to accept messages at the same rate they come in, and that you will have only one receiver per partition. Service Bus is indeed a good alternative to look into: you can choose how many messages to receive at a time and connect multiple receivers, each processing one message at a time, to scale your solution.

What you are describing is the need for back pressure, and the latest version of the @azure/event-hubs package has indeed solved this problem. It ensures that you receive events only after the previously received events have been processed. By default, you will receive events in batches of size 10, and this size is configurable.
See the migration guide from v2 to v5 if you need to upgrade your current application to use the latest version of the package.
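As a minimal sketch with the v5 API (the connection string, event hub name and handleEvent function are placeholders), setting maxBatchSize to 1 on subscribe delivers one event at a time, and the next batch is only delivered after your processEvents handler resolves:

const { EventHubConsumerClient } = require("@azure/event-hubs");

const client = new EventHubConsumerClient("$Default", connectionString, eventHubName);

const subscription = client.subscribe(
  {
    processEvents: async (events, context) => {
      for (const event of events) {
        // The next batch is not delivered until this handler resolves.
        await handleEvent(event);
      }
    },
    processError: async (err, context) => {
      console.error(err);
    }
  },
  { maxBatchSize: 1 } // receive at most one event per batch
);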

Related

Detecting changes on database table column status

I have a project in Laravel. In the database I have a status column, which shows whether an exam has started or not. My idea was to have the waiting room check every second whether the status has changed to 1, i.e. the exam has started. But I am so new to Laravel and everything else that I don't even get the main idea of how I could do this. I'm not asking for any code, just for a lead to move on. Hope someone gets me. Thanks if someone answers me.
Check out Laravel cron jobs. You will need a class implementing the ShouldQueue interface and using Dispatchable, InteractsWithQueue, Queueable, SerializesModels.
With regards to storing the jobs, I recommend Redis or SQS.
To keep monitoring the queue in production, think about installing Supervisor.
Further information here: Queues
Your plan can work; it is called polling.
Basically, you will want to call
setInterval(function() {
//your code here
}, 1000);
setInterval is a function that receives two parameters. The first is a callback function that will be executed periodically, and the second is the length of the period in milliseconds (1000 milliseconds is one second).
Now, you will need to implement your callback function (in Javascript, of course) to send an AJAX request to a Laravel action. You will need to look into XMLHttpRequest and its usage, or you can use a library such as jQuery or Axios to simplify the task.
On Laravel's side you will need to implement an action and a route for it. (Read this: https://appdividend.com/2022/01/22/laravel-ajax/)
Your Laravel action will need to load data from your database (you can use Eloquent for this purpose, or raw queries) and then respond to the POST request with the result.
Now, in your Javascript, the AJAX request's code will need a callback function (yes, a callback inside a callback) which handles the response and applies the changes.
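Putting the client side together, here is a minimal sketch that uses fetch instead of a raw XMLHttpRequest; the /exam/{id}/status route and the JSON shape are assumptions for illustration, not something the answer prescribes:

const examId = 42; // hypothetical exam id

const intervalId = setInterval(async () => {
  try {
    const response = await fetch(`/exam/${examId}/status`);
    const data = await response.json();
    if (data.status === 1) {
      clearInterval(intervalId); // stop polling once the exam has started
      window.location.href = `/exam/${examId}`; // leave the waiting room
    }
  } catch (err) {
    console.error("Status check failed", err);
  }
}, 1000);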
What about leveraging observers? Also, instead of having a status boolean, you could take an approach similar to what Laravel does for soft deletes and set an exam_started_at column. This way you keep track of both the timestamp and the state in one column. Observers also fire immediately rather than pushing work onto a queue. You could then generate a websocket event to report back to your front end, if needed.
Check out the Laravel observer and soft delete documentation.
I know you specified "when the column on the db changes...", but if it's not a strict requirement you might want to consider an event-based approach. Laravel has support for model events, which essentially allows you to run certain assertions and controls when a model is created, updated, deleted, etc.
class Exam extends Model
{
    protected static function booted()
    {
        static::updated(function ($exam) {
            if ($exam->status == 'your-desired-status') {
                // your actions
            }

            // you can even incorporate change checks
            if ($exam->isDirty('status')) {
                // means the status column changed
            }
        });
    }
}
Of course, this solution applies only if the database in question is within Laravel's reach. If the data changes outside the Laravel application, these event listeners won't help at all.

Azure Event Hub Listening Events

I am reading the Event Hubs documentation and creating a simple producer-consumer example.
Link -> https://learn.microsoft.com/en-us/javascript/api/overview/azure/event-hubs-readme?view=azure-node-latest
I was wondering how this would work in a production application. The reason I ask is that the current implementation listens for a specific amount of time and then closes the connection.
Should we send the request to specific REST endpoints and activate the listeners after the producer finishes?
You are correct that in most production scenarios this does not work. It is best to keep the listener open during the lifetime of the application. In most cases, when a restart of the application is triggered, processing should resume from the last checkpoint on continuation. The example does not cover this.
From the docs:
For the majority of production scenarios, we recommend that you use the event processor client for reading and processing events. The processor client is intended to provide a robust experience for processing events across all partitions of an event hub in a performant and fault tolerant manner while providing a means to checkpoint its progress. Event processor clients can work cooperatively within the context of a consumer group for a given event hub. Clients will automatically manage distribution and balancing of work as instances become available or unavailable for the group.
Here is an example of processing events combined with checkpointing. For demo purposes the listener stops after a while; you will have to modify the code to run for as long as the process is not stopped.
Checkpointing is important if you have a continuous flow of events being sent. If the listener is unavailable for some period, you do not want to resume processing from the very first event, nor from new events only; instead, you want to start from the last known processed event.
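A rough sketch of that pattern with the v5 consumer client and the blob checkpoint store (the connection strings, names and container are placeholders; treat it as an outline rather than the official sample):

const { EventHubConsumerClient } = require("@azure/event-hubs");
const { ContainerClient } = require("@azure/storage-blob");
const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob");

async function main() {
  const containerClient = new ContainerClient(storageConnectionString, containerName);
  const checkpointStore = new BlobCheckpointStore(containerClient);

  const consumerClient = new EventHubConsumerClient(
    "$Default",
    eventHubConnectionString,
    eventHubName,
    checkpointStore
  );

  const subscription = consumerClient.subscribe({
    processEvents: async (events, context) => {
      for (const event of events) {
        console.log(`Received: ${event.body}`);
      }
      if (events.length > 0) {
        // Persist progress so a restart resumes from the last known processed event.
        await context.updateCheckpoint(events[events.length - 1]);
      }
    },
    processError: async (err, context) => {
      console.error(err);
    }
  });

  // Keep listening until the process is stopped.
  process.on("SIGINT", async () => {
    await subscription.close();
    await consumerClient.close();
  });
}

main();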

Multiple distributed event stores for data governance working together

I have been playing around with CQRS/event sourcing for a couple of months now. Currently, I'm having trouble with another experiment I'm trying, and I hope somebody could help, explain, or even hint at another approach than event sourcing.
I want to build a distributed application in which every user has governance of his/her data. So my idea is that each user hosts their own event store, while other users may have (conditional) access to it.
When user A performs some command, this may involve more than one event store. Two examples:
1) Deleting a shared task from a task list hosted by both event store A and event store B.
2) Adding a reference to a comment persisted in event store A to a post persisted in event store B.
My only solution currently seems to be a process manager attached to each event store, so that when an event is added to one event store, a saga takes care of applying the event to the other related event stores as well.
I'm not sure what the purpose of your solution is, but if you want one system to react to events from another system, then after events are saved to the store, a subscription (like the catch-up subscription provided by Greg Young's EventStore) publishes them on a message bus using pub-sub, and all interested parties can handle the event.
However, it would be wrong if they just "saved" this event to their own stores. Instead, they should have an event handler that produces a command inside the local service, and this command might (or might not) result in a local event, if all conditions are met. Only something that happens within the boundaries, under local control, should be saved to the local store.
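As an illustrative, entirely hypothetical sketch of that handler shape (the bus, command handler and store APIs are assumptions, not a real library):

// An event from another user's store arrives on the bus...
bus.subscribe("SharedTaskDeleted", async (externalEvent) => {
  // ...and is translated into a local command rather than copied into the local store.
  const command = {
    type: "RemoveSharedTaskReference",
    taskId: externalEvent.taskId,
  };

  // The local command handler decides whether any local event results.
  const localEvents = await localCommandHandler.handle(command);

  // Only events produced under local control are appended to the local store.
  for (const event of localEvents) {
    await localEventStore.append(event);
  }
});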

Meteor trigger event on server collection change

I would like to trigger an event on a Meteor server when a document in my collection changes to a specific value, say some field changes from false to true.
I am familiar with binding events on the client; however, I want this event to be called only when the server state changes, specifically the value of a given document in my collection. I want to trigger an external HTTP call from the server when this happens, as I need to message external applications.
It seems this is an old post; answering for the benefit of others.
The Peerdb package seems to do some of the tasks you are looking for.
https://atmospherejs.com/peerlibrary/peerdb
Also a bit late, but the most classic solution to this type of problem is the very popular meteor-collection-hooks library. In particular, you'd probably want to use the .after.update hook (see its documentation), which allows you to hook into a particular collection after an update is made to a document and compare before and after by comparing doc (the document after the update) to this.previous (the document before the update).
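A short sketch of what that could look like; the Tasks collection, the isReady field and the webhook URL are all invented for illustration:

import { Meteor } from "meteor/meteor";
import { HTTP } from "meteor/http";
import { Tasks } from "/imports/api/tasks"; // hypothetical collection

if (Meteor.isServer) {
  Tasks.after.update(function (userId, doc, fieldNames, modifier, options) {
    const before = this.previous;
    // Fire only when the flag flips from false to true.
    if (!before.isReady && doc.isReady) {
      HTTP.post("https://example.com/webhook", { data: { taskId: doc._id } });
    }
  }, { fetchPrevious: true });
}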

Should there be only one EventSource object per app?

When using Server-Sent Events, should the client establish multiple connections to receive the different events it is interested in, or should there be a single connection with the client indicating what it is interested in via a separate channel? IMO the latter seems preferable, although to some it might make the client code more complex. The spec supports named events (events that relate to a particular topic), which to me suggests that a Server-Sent Events connection should be used as a single channel for all events.
The following code illustrates the first scenario, where multiple Server-Sent Event connections are initiated:
var eventSource1 = new EventSource("events/topic1");
eventSource1.addEventListener('topic1', topic1Listener, false);
var eventSource2 = new EventSource("events/topic2");
eventSource2.addEventListener('topic2', topic2Listener, false);
eventSource1 would receive "topic1" events and eventSource2 would receive "topic2" events. Whilst this is pretty straightforward, it is also pretty inefficient, with a hanging GET occurring for each topic you are interested in.
The alternative is something like the following:
var eventSource3 = new EventSource("/events?id=1234");
eventSource3.addEventListener('topic3', topic3Listener, false);
eventSource3.addEventListener('topic4', topic4Listener, false);
var subscription = new XMLHttpRequest();
subscription.open("PUT", "/events/topic3?id=1234", true);
subscription.send();
In this example a single EventSource exists, and interest in a particular event is specified by a separate request, with the Server-Sent Event connection and the registration being correlated by the id param. topic3Listener would receive "topic3" events and topic4Listener would not. Whilst this requires slightly more code, the benefit is that only a single connection is made, yet events can still be identified and handled differently.
There are a number of examples on the web that show the use of named events, but it seems the event names (or topics) are known in advance, so there is no need for a client to register interest with the server (example). Whilst I have yet to see an example showing multiple EventSource objects, I also haven't seen an example of a client using a separate request to register interest in a particular topic, as I am doing above. My interpretation of the spec leads me to believe that indicating an interest in a certain topic (or event name) is entirely up to the developer, and that it can be done statically, with the client knowing the names of the events it is going to receive, or dynamically, with the client alerting the server that it is interested in receiving particular events.
I would be pretty interested in hearing other people's thoughts on the topic. NB: I am usually a Java dev so please forgive my mediocre JS code.. :)
I would highly recommend, IMHO, that you have one EventSource object per SSE-providing service, and then emit the messages using different types.
Ultimately, though, it depends on how similar the message types are. For example, if you have 5 different types of messages related to users, have a user EventSource and differentiate with event types.
If you have one event type about users, and another about sandwiches, I'd say keep them in different services, and thus EventSources.
It's a good idea to think of breaking up EventSources the same way you would a RESTful service. If you wouldn't get two things from the same service with AJAX, you probably shouldn't get them from the same EventSource.
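For illustration, a minimal Node/Express sketch of a single endpoint emitting two named event types over one connection (the route, topic names and payloads are invented for the example):

const express = require("express");
const app = express();

app.get("/events", (req, res) => {
  res.set({
    "Content-Type": "text/event-stream",
    "Cache-Control": "no-cache",
    "Connection": "keep-alive",
  });
  res.flushHeaders();

  // Two different named events share the same connection; the client
  // separates them with addEventListener('topic3', ...) and so on.
  const timer = setInterval(() => {
    res.write(`event: topic3\ndata: ${JSON.stringify({ at: Date.now() })}\n\n`);
    res.write(`event: topic4\ndata: ${JSON.stringify({ at: Date.now() })}\n\n`);
  }, 5000);

  req.on("close", () => clearInterval(timer));
});

app.listen(3000);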
In response to vague and permissive browser standard interpretation*, browser vendors have inconsistently implemented restrictions to the number of persistent connections allowed to a single domain/port. As each event receiver to an async context assumes a single persistent connection allocation for as long as that receiver is open, it is crucial that the number of the EventSource listeners be strictly limited in order to avoid exceeding the varying, vendor-specific limits. In practice this limits you to about 6 EventSource/async context pairs per application. Degradation is graceful (e.g. additional EventSource connection requests will merely wait until a slot is available), but keep in mind there must be connections available for retrieving page resources, responding to XHR, etc.
*The W3C has issued standards with respect to persistent connections that contain language “… SHOULD limit the number of simultaneous connections…” This language means the standard is not mandatory so vendor compliance is variable.
http://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.4
