Controlled event handling flow in qooxdoo - javascript

Preface
The problem described below is likely relevant to virtually any event-driven JS framework and to any application that processes an incoming data/event stream. For concreteness, let's imagine a web-based IM (Facebook chat-like) built with the qooxdoo framework.
The application receives an incoming event stream (via WebSocket, for example) and re-emits the events to its internal classes.
Events are processed, roughly speaking, in two stages. First-stage handlers are mostly alerts (play a sound on an incoming message, display a web notification, etc.); the second stage does the actual data processing and display. Handlers of both stages are invoked simultaneously as events arrive.
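A rough sketch of this wiring (names like app, playAlertSound, and chatView are invented for illustration; app stands for any emitter with on/emit, e.g. a qx.event.Emitter instance):

socket.onmessage = function (raw) {
  var msg = JSON.parse(raw.data);
  app.emit(msg.type, msg); // re-emit, e.g. as "incomingMessage"
};

// Stage 1: alerts
app.on("incomingMessage", function (msg) {
  playAlertSound();
  showWebNotification(msg);
});

// Stage 2: data processing and display
app.on("incomingMessage", function (msg) {
  chatModel.add(msg);
  chatView.render(msg);
});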
This model provides good decoupling between the application and the handlers, allowing them to be added/removed and enabled/disabled independently. However...
The Problem
...as the application evolves, it turns out there can be dependencies between the stages. Some stage-1 handlers should block those of stage 2 (e.g., an incoming voice recording should not autoplay until the sound alert has completed). Some might even ask for user confirmation and cancel the whole remaining chain if the confirmation is not given. Event handling in qooxdoo assumes that all handlers are invoked (nearly) simultaneously, and there is no control over the order and timing of handler invocations.
How do we introduce the required control while remaining within the event model and not sacrificing its benefits (low coupling, etc.)?
Solution
The candidate solution employs Promises. By default, qooxdoo event handlers do not return anything, so why not make them (optionally) return a Promise? In that case, a promise-aware event mediator has to be introduced.
The handlers are now subscribed to the mediator (this is omitted from the diagram for the sake of clarity). In addition to the standard on/off methods, the mediator should implement an after method with the following semantics:
after(String name, Function listener, var ctx?) - invoke the listener after all other handlers for this event
after(String name, Integer id, Function listener, var ctx?) - invoke the listener after another handler with a known ID
after(String name, (Class|String) id, Function listener, var ctx?) - invoke the listener after all handlers belonging to a known class (the class can be derived from the context argument of the corresponding on() call)
Thus, we extend the existing event semantics at two points:
event handlers may now return Promises;
an after method is introduced for an event emitter/mediator.
The emitter/mediator should resolve these dependencies and wire handler invocations into the then() chains of the corresponding Promises.
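A minimal sketch of such a mediator in plain JavaScript (not qooxdoo class syntax; PromiseMediator and everything inside it are invented for illustration, the class-based after() variant is omitted, and handlers are assumed to be registered before anything that depends on them):

function PromiseMediator() {
  this._listeners = {}; // event name -> [{id, fn, ctx, afterIds}]
  this._nextId = 1;
}

PromiseMediator.prototype.on = function (name, fn, ctx) {
  var id = this._nextId++;
  (this._listeners[name] || (this._listeners[name] = []))
    .push({ id: id, fn: fn, ctx: ctx, afterIds: null });
  return id;
};

// after(name, fn, ctx)     -> run after all handlers registered so far
// after(name, id, fn, ctx) -> run after the handler with the given id
PromiseMediator.prototype.after = function (name, idOrFn, fn, ctx) {
  var existing = this._listeners[name] || (this._listeners[name] = []);
  var afterIds;
  if (typeof idOrFn === "function") {
    ctx = fn;
    fn = idOrFn;
    afterIds = existing.map(function (l) { return l.id; });
  } else {
    afterIds = [idOrFn];
  }
  var id = this._nextId++;
  existing.push({ id: id, fn: fn, ctx: ctx, afterIds: afterIds });
  return id;
};

PromiseMediator.prototype.emit = function (name, data) {
  var settled = {}; // handler id -> Promise that settles when it is done
  (this._listeners[name] || []).forEach(function (l) {
    var deps = (l.afterIds || []).map(function (id) { return settled[id]; });
    // Wait for the dependencies first; a handler that returns a Promise
    // delays (or, by rejecting, cancels) everything chained after it.
    settled[l.id] = Promise.all(deps).then(function () {
      return l.fn.call(l.ctx, data);
    });
  });
};

For the voice-recording example above, the wiring could then look like this (playAlertSound and autoplayRecording are hypothetical):

var mediator = new PromiseMediator();
var alertId = mediator.on("voiceMessage", function (msg) {
  return playAlertSound(); // a Promise that resolves when the sound ends
});
mediator.after("voiceMessage", alertId, function (msg) {
  autoplayRecording(msg); // runs only after the alert has finished
});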
The proposed solution seems to satisfy both requirements: 1) it implements dependencies between event handlers, and 2) it stays within the event-handling paradigm. Are there any pitfalls? Can it be done better/cleaner? Any critique and advice is welcome.

Related

Azure Event Hub Listening Events

Reading Event Hub documentation and creating a simple producer-consumer example
Link -> https://learn.microsoft.com/en-us/javascript/api/overview/azure/event-hubs-readme?view=azure-node-latest
I was wondering how this would work in a production application, since in the current implementation the consumer listens for a specific amount of time and then the connection is closed.
Should we send requests to specific REST endpoints and activate the listeners after the producer finishes?
You are correct that in most production scenarios this does not work. It is best to keep the listener open for the lifetime of the application. In most cases, when a restart of the application is triggered, processing should resume from the last checkpoint on continuation. The example does not cover this.
From the docs:
For the majority of production scenarios, we recommend that you use the event processor client for reading and processing events. The processor client is intended to provide a robust experience for processing events across all partitions of an event hub in a performant and fault tolerant manner while providing a means to checkpoint its progress. Event processor clients can work cooperatively within the context of a consumer group for a given event hub. Clients will automatically manage distribution and balancing of work as instances become available or unavailable for the group.
Here is an example of processing events combined with checkpointing. For demo purposes the listener stops after a while; you will have to modify the code to run for as long as the process is not stopped.
Checkpointing is important if you have a continuous flow of events being sent. If the listener is unavailable for some period, you do not want to resume processing from the very first event, nor from new events only; instead, you want to start from the last known processed event.
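For reference, here is a sketch along the lines of the @azure/event-hubs v5 samples (the connection strings and names are placeholders you have to fill in):

const { EventHubConsumerClient } = require("@azure/event-hubs");
const { ContainerClient } = require("@azure/storage-blob");
const { BlobCheckpointStore } = require("@azure/eventhubs-checkpointstore-blob");

// Placeholders -- replace with your own values.
const ehConnectionString = "<EVENT_HUB_CONNECTION_STRING>";
const eventHubName = "<EVENT_HUB_NAME>";
const storageConnectionString = "<STORAGE_CONNECTION_STRING>";
const containerName = "<CONTAINER_NAME>";

async function main() {
  // Checkpoints are persisted to blob storage, so a restarted listener
  // resumes from the last known processed event.
  const containerClient = new ContainerClient(storageConnectionString, containerName);
  const checkpointStore = new BlobCheckpointStore(containerClient);

  const consumerClient = new EventHubConsumerClient(
    EventHubConsumerClient.defaultConsumerGroupName,
    ehConnectionString,
    eventHubName,
    checkpointStore
  );

  const subscription = consumerClient.subscribe({
    processEvents: async (events, context) => {
      for (const event of events) {
        console.log(`partition ${context.partitionId}:`, event.body);
      }
      if (events.length > 0) {
        // Record progress after handling a batch.
        await context.updateCheckpoint(events[events.length - 1]);
      }
    },
    processError: async (err, context) => {
      console.error(`error on partition ${context.partitionId}:`, err);
    },
  });

  // Unlike the demo, keep the listener open for the process lifetime;
  // close only on a shutdown signal.
  process.on("SIGINT", async () => {
    await subscription.close();
    await consumerClient.close();
  });
}

main().catch(console.error);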

What is the `emit` JavaScript function?

While looking through the sax Node.js module, I saw multiple emit function calls, but I can't find any information about it.
Is it some V8 native tool for emitting events? Why does sax-js not use EventEmitter for streams then?
In Node.js an event can be described simply as a string with a corresponding callback. An event can be "emitted" (in other words, the corresponding callback called) multiple times, or you can choose to only listen for the first time it is emitted.
The on or addListener method (basically the subscription method) allows you to choose which event to watch for and which callback to call. The emit method (the publish method), on the other hand, lets you "emit" an event, which causes all callbacks registered for that event to 'fire' (get called).
Reference: https://docs.nodejitsu.com/articles/getting-started/control-flow/what-are-event-emitters/ (this link is outdated and no longer works).
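A minimal example with Node's built-in EventEmitter shows both sides of the pattern:

const EventEmitter = require("events");
const emitter = new EventEmitter();

// on/addListener: subscribe a callback to a named event.
emitter.on("message", function (text) {
  console.log("got:", text);
});

// once: listen only for the first emission.
emitter.once("ready", function () {
  console.log("ready fired");
});

// emit: synchronously call every callback registered for the event.
emitter.emit("message", "hello"); // -> got: hello
emitter.emit("ready");            // -> ready fired
emitter.emit("ready");            // no output; the once() listener is gone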
Short: emit's job is to trigger named events, which in turn causes functions called listeners to be called.
Detailed: Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") periodically emit named events that cause Function objects ("listeners") to be called.
All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on() function that allows one or more functions to be attached to named events emitted by the object.
When the EventEmitter object emits an event, all of the functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and will be discarded.
Read More here
Please look at line number 624 of the same file.
// sax's internal emit helper: if a handler named after the event
// (e.g. parser.ontext) is set on the parser, call it with the data.
function emit (parser, event, data) {
  parser[event] && parser[event](data)
}

Emit/Subscribe pattern vs. Pipe in Famous

Both methods can be used so that one event handler listens for events fired by another. The documentation says they are the same thing, just different implementations. I'm wondering why the framework bothers to provide two different methods for the same task. Probably pipe() is better for chaining, but is there any other hidden advantage of pipe() over emit()/subscribe()?
If you do widgetA.pipe(widgetB), then all events from widgetA are sent to widgetB regardless of whether widgetB is listening to them. Pipe is like a firehose.
Subscribe, on the other hand, is more performant. widgetB.subscribe(widgetA) says "of the things you emit, I want to subscribe to a particular subset." Other events are then completely ignored.
This is especially important when interacting with the DOM, which outputs a lot of events (mousedown, mouseup, touchmove, resize, etc...), and it's preferred to use Subscribe when listening to a DOM element.
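A small sketch of the two styles, assuming famo.us's EventHandler API as used in the question:

var EventHandler = require('famous/core/EventHandler');

var source = new EventHandler();
var sinkA = new EventHandler();
var sinkB = new EventHandler();

// Firehose: every event source emits is forwarded to sinkA,
// whether or not sinkA has a listener for it.
source.pipe(sinkA);

// Selective: sinkB subscribes to source, and only the events it
// actually listens for are dispatched to it.
sinkB.subscribe(source);
sinkB.on('mousedown', function (event) {
  console.log('sinkB saw mousedown', event);
});

source.emit('mousedown', { x: 1, y: 2 }); // reaches sinkA and sinkB
source.emit('touchmove', { x: 3, y: 4 }); // forwarded to sinkA, ignored by sinkB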

How do event handlers in JavaScript get processed by the event loop?

I'm a little bit confused about how browsers handle JavaScript events.
Let's say I have two event handlers attached to buttons A and B, and both handlers take exactly the same time to complete. If I click button A first and button B next, is it true that the handler for button A is always executed first (because the event loop is a FIFO queue), but that when they finish is completely unpredictable? If so, what actually determines this order?
Yes, the order in which the event handlers execute is guaranteed, and in practice they will not overlap.
This is the beauty of the event loop as a concurrency model: you don't have to think about threading issues like deadlocks, livelocks, and race conditions most of the time (though not always). JavaScript in the browser is single-threaded for the most part, so in practice you rarely have to worry about the order in which things execute.
However, the fact that the order of mouse events is guaranteed has hardly anything to do with JavaScript. It is not part of the JavaScript language but of something called the DOM API; the DOM (Document Object Model) is how JavaScript interacts with your browser and the HTML you write.
Things called host objects are defined in the JavaScript specification as external objects that JS in the browser works with, and their behavior in this case is specified by the DOM API.
Whether or not the order in which DOM events are registered is guaranteed is not a part of JavaScript but of that API; more specifically, it is defined in the DOM events specification. So, to your question: yes, the order of event execution is certain, except for control keys (like Ctrl+Alt+Delete), which can mess the order of evaluation up.
The JavaScript engine is single-threaded. All of your event handlers run sequentially; the click handler for A will be called and will finish before the handler for B ever starts. You can see this by "sleeping" (busy-waiting) in one handler and verifying that the second handler does not start until the first has finished.
Note that setTimeout is not valid for this test, because it merely registers a function with the engine to be called back at a later time; setTimeout itself returns immediately.
This fiddle should demonstrate this behavior.
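For illustration, a minimal version of that test (the button ids are hypothetical, and a synchronous busy-wait stands in for sleep(), which JavaScript does not have):

// Blocks the single JS thread for the given number of milliseconds.
function busyWait(ms) {
  var end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

document.getElementById('a').addEventListener('click', function () {
  console.log('A start');
  busyWait(2000); // B's click handler cannot run during these 2 seconds
  console.log('A end');
});

document.getElementById('b').addEventListener('click', function () {
  console.log('B'); // always logs after "A end" if A was clicked first
});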
Well, the commands are indeed executed in FIFO order by JavaScript. However, the handlers may take different amounts of time to deliver their results; in that case the response from handler B may come back earlier and the response from handler A later.

Why use a callback function over triggering an event in jQuery?

I've seen the callback() concept used a lot in jQuery plugins and I'm starting to think triggering custom events could be a better alternative.
jQuery has a built-in mechanism to trigger('A_CUSTOM_EVENT'), so why don't plugin authors simply trigger a 'COMPLETE' event rather than insist we pass in a callback function to handle this 'complete' phase?
Nick
This depends on what you are trying to achieve - it's an architectural choice.
Basically, the event paradigm is open, non-private, and persistent. You have a public event that everybody can register for, and their handler functions are called as often as the event fires, until they unregister. This makes sense for recurring events.
Example: Registering to a hover event.
The callback paradigm is isolated, private, and disposable. Someone calls your code and hands over a private callback, which is disposed of once it has been executed. In most cases its usability is limited (to a single point in time) and/or should not necessarily be public.
Example: Handling an ajax response.
Both of these paradigms have advantages and drawbacks. Using one or the other is up to you and how you want your application to be used.
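To make the contrast concrete, here is a sketch for a hypothetical jQuery plugin called slideshow (the plugin and its options are invented):

// Event style: public and recurring -- anyone can listen, every time it fires.
$('#gallery').on('slideshow:complete', function (e, index) {
  console.log('finished slide', index);
});
// ...and somewhere inside the plugin:
$('#gallery').trigger('slideshow:complete', [currentIndex]);

// Callback style: private and handed over once, at call time.
$('#gallery').slideshow({
  complete: function (index) {
    console.log('finished slide', index);
  }
});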
