Emit/Subscribe pattern vs. Pipe in Famous

Both methods can be used so that one event handler can listen for events fired by another event handler. The documentation says they are the same thing, just different implementations. I'm wondering why the framework bothers to provide two different methods for the same task. Probably pipe() is better for chaining, but is there any other hidden advantage of using pipe() over emit()/subscribe()?

If you do widgetA.pipe(widgetB), then all events from widgetA are sent to widgetB regardless of whether widgetB is listening for them. Pipe is like a firehose.
Subscribe, on the other hand, is more performant. widgetB.subscribe(widgetA) says "of the things you emit, I want to subscribe to a particular subset." Other events are then completely ignored.
This is especially important when interacting with the DOM, which emits a lot of events (mousedown, mouseup, touchmove, resize, etc.), so subscribe is preferred when listening to a DOM element.
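To make the difference concrete, here is a rough sketch (widgetA and widgetB stand for any Famous-style event handlers; the 'click' listener is illustrative):

// pipe: the firehose -- every event widgetA emits is forwarded to widgetB,
// whether or not widgetB has a listener registered for it
widgetA.pipe(widgetB);

// subscribe: widgetB pulls from widgetA, and only the events it actually
// listens for ('click' here) are handled; everything else is ignored
widgetB.subscribe(widgetA);
widgetB.on('click', function (event) {
  console.log('widgetB saw a click');
});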

Insert event into browser event queue?

I don't have a particular use case for this, so I'm open to anyone proving that there is none. However, I've been wondering whether the event model of JavaScript in the browser allows code to insert an event into the event queue, so that it will be processed after all the other events already present have been handled.
I found EventTarget.dispatchEvent but that does not fit what I think I might want for two reasons:
It seems to be synchronous, so it will be processed before all other events already in the queue, which "feels wrong".
I have to have a specific EventTarget instance to send it to, which also feels like a limitation. If I want the loosest coupling, I should not have to know the target (indeed, I believe I might want more than one target to receive this event, and these targets need not necessarily be in a containment hierarchy in the way that DOM Elements/Nodes are).
Is there some part of the API that I've failed to find?
Is there perhaps provably no value in this?
Or is there perhaps a reliable platform-independent way to achieve everything this concept might achieve?
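To make the intent concrete, here is a sketch of the usual workaround I am aware of (postEvent and 'app:notify' are made-up names, and document is used here only as a convenient shared target):

// Defer the dispatch with a zero-delay timeout, so the event is processed in
// a later task, after the current task (and, roughly, already-queued work).
function postEvent(name, detail) {
  setTimeout(function () {
    document.dispatchEvent(new CustomEvent(name, { detail: detail }));
  }, 0);
}

// Any number of loosely coupled listeners can watch the shared target.
document.addEventListener('app:notify', function (e) {
  console.log('received', e.detail);
});

postEvent('app:notify', { msg: 'hello' });

This still dispatches synchronously to the listeners once the timeout fires, and it still needs an agreed-upon target, so it only approximates a true "insert into the event queue".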

Controlled event handling flow in qooxdoo

Preface
The problem described below can arise in virtually any event-driven JS framework and any application that processes an incoming data/event stream. For the sake of definiteness, let's imagine a web-based IM (similar to Facebook chat) built with the qooxdoo framework.
The application receives the incoming event stream (via WebSocket, for example) and re-emits events to its internal classes.
Events are processed, roughly speaking, in two stages. First-stage handlers are mostly alerts (play a sound on an incoming message, display a web notification, etc.). Second-stage handlers do the actual data processing and display. Handlers of both stages are invoked simultaneously as events arrive, as sketched below.
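In code, the current wiring looks roughly like this (qx.event.Emitter is used for illustration; playSound, chatView and socket are placeholders):

// the application re-emits incoming WebSocket data to internal classes
var bus = new qx.event.Emitter();
socket.onmessage = function (e) {
  bus.emit("message", JSON.parse(e.data));
};

// stage 1: alerts
bus.on("message", function (msg) { playSound(); });

// stage 2: data processing and display -- fires regardless of stage 1
bus.on("message", function (msg) { chatView.append(msg); });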
This model provides good code decoupling between the application and the handlers, making it possible to add/remove and enable/disable them independently. However...
The Problem
...as the application evolves, it turns out there can be dependencies between the stages. Some stage 1 handlers should block those of stage 2 (e.g., an incoming voice recording should not autoplay until the sound alert has completed). Some might even show a user confirmation and cancel the whole remaining chain if confirmation is not given. Event handling in qooxdoo assumes that all handlers are invoked (nearly) simultaneously, and there is no control over the order and timing of handler invocations.
How do we introduce the required control, while remaining within the event model and not sacrificing its benefits (low coupling, etc.)?
Solution
The candidate solution employs Promises. By default, qooxdoo event handlers do not return anything. Why not make them (optionally) return a Promise? In this case, a promise-aware event mediator should be introduced.
The handlers should now be subscribed to the mediator (this is omitted from the diagram for the sake of clarity). The mediator, in addition to the standard on/off methods, should implement an after method with the following semantics:
after(String name, Function listener, var ctx?) - invoke the handler after all other handlers for this event
after(String name, Integer id, Function listener, var ctx?) - invoke the handler after another handler with a known ID
after(String name, (Class|String) id, Function listener, var ctx?) - invoke the handler after all other handlers of some known class (which could be derived from the ctx argument of the corresponding call)
Thus, we extend the existing event semantics at two points:
event handlers may now return Promises;
an after method is introduced for an event emitter/mediator.
The emitter/mediator should resolve the dependencies and wire handler invocations to the then() blocks of the corresponding Promises.
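A minimal sketch of such a mediator, reduced to the first after() form above (plain JS, not actual qooxdoo API; playAlertSound, autoplay and incomingMessage are placeholders):

function Mediator() {
  this._handlers = {};  // event name -> ordinary handlers
  this._after = {};     // event name -> handlers to run after the others
}

Mediator.prototype.on = function (name, fn, ctx) {
  (this._handlers[name] = this._handlers[name] || []).push(fn.bind(ctx));
};

Mediator.prototype.after = function (name, fn, ctx) {
  (this._after[name] = this._after[name] || []).push(fn.bind(ctx));
};

Mediator.prototype.emit = function (name, data) {
  // collect return values; handlers may return a Promise or nothing
  var results = (this._handlers[name] || []).map(function (fn) {
    return fn(data);
  });
  var afterFns = this._after[name] || [];
  // Promise.all treats non-Promise results as already resolved; a rejected
  // Promise (e.g. a declined confirmation) cancels the remaining chain
  return Promise.all(results).then(function () {
    afterFns.forEach(function (fn) { fn(data); });
  });
};

// usage: autoplay only after the sound alert has finished
var bus = new Mediator();
bus.on("voice", function (msg) { return playAlertSound(); });
bus.after("voice", function (msg) { autoplay(msg); });
bus.emit("voice", incomingMessage);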
The proposed solution seems to satisfy both requirements: 1) it implements dependencies between event handlers, and 2) it stays within the event-handling paradigm. Are there any pitfalls? Can it be done better/cleaner? Any critique and advice is welcome.

In node.js, when to use events, when to use a straight callback function?

It seems to me that the "core" node.js callback syntax, i.e.
function foo(data, callback) {
  // error-first convention: pass an error (or false) first, then the result
  callback(false, data2);
}
is semantically superseded by events, except that
With events, you lose the last bit of static checking
Events are more flexible
Once you have more than 2 or 3 callback-y functions, callbacks get rather clunky
Events might add a very slight performance overhead (but worrying about that would be premature optimization in almost all cases)
(But then again, you have to memorize the events, too...)
So what would be a good policy for when to use what?
A good policy is to use whatever abstraction best models your use cases.
I think performance is a non-issue in this case.
If you are providing a function to a client that performs an asynchronous call, exposing it as a single function (like your example) is completely valid and pretty clean (this is the way most of the node.js DB clients work).
Callbacks quickly get out of hand when there are more than 2-3, as you mentioned. But would a function with 2-3 callbacks be better modeled as an event emitter? Maybe, and that's up to you.
IMO, 2-3+ callbacks would definitely be better modeled using promises, as the calling structure would be flatter.
IMO, event emitters are often used for longer-standing objects: objects that are alive for a "longer" duration, where you create an object and subscribe to events over some period of time. That seems to be a completely different use case from a single async function that exposes a callback.
Another option is to model your client as a stream.
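For example, a client could expose query results as a Readable stream (fetchNextRow is a made-up data source; only the stream wiring is real Node API):

var Readable = require('stream').Readable;

function queryStream() {
  var stream = new Readable({ objectMode: true });
  // _read is called whenever the consumer is ready for more data
  stream._read = function () {
    fetchNextRow(function (err, row) {
      if (err) return stream.emit('error', err);
      stream.push(row); // pushing null ends the stream
    });
  };
  return stream;
}

queryStream().on('data', function (row) { console.log(row); });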
I think a good rule of thumb is to see where the node standard library (and popular node libraries) exposes an event emitter to clients vs. where it provides a callback-based API.
Node models its TCP client/server as an event emitter.
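For instance, contrast the long-lived TCP server with a one-shot file read (both from the standard library):

var net = require('net');
var fs = require('fs');

// long-lived object: connections keep arriving over the server's lifetime,
// so it is modeled as an event emitter
var server = net.createServer();
server.on('connection', function (socket) {
  socket.end('hello\n');
});
server.listen(8124);

// one-shot operation: it completes exactly once, so a single callback fits
fs.readFile('/etc/hosts', 'utf8', function (err, contents) {
  if (err) throw err;
  console.log(contents.length);
});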

What is `emit` javascript function?

While looking through the sax nodejs module, I saw multiple emit function calls, but I can't find any information about them.
Is it some V8 native tool for emitting events? Why does sax-js not use EventEmitter for streams, then?
In node.js an event can be described simply as a string with a corresponding callback. An event can be "emitted" (or in other words, the corresponding callback be called) multiple times or you can choose to only listen for the first time it is emitted.
The on or addListener method (basically the subscription method) allows you to choose the event to watch for and the callback to be called. The emit method (the publish method), on the other hand, allows you to "emit" an event, which causes all callbacks registered to the event to 'fire', (get called).
reference: https://docs.nodejitsu.com/articles/getting-started/control-flow/what-are-event-emitters/ (this link is outdated and no longer works)
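A minimal example with Node's built-in EventEmitter:

var EventEmitter = require('events').EventEmitter;
var emitter = new EventEmitter();

// subscribe: register callbacks for a named event
emitter.on('greet', function (name) { console.log('hello ' + name); });
// or listen only for the first time the event is emitted
emitter.once('greet', function () { console.log('first greeting only'); });

// publish: fire the named event, calling every registered callback
emitter.emit('greet', 'world'); // -> hello world / first greeting only
emitter.emit('greet', 'again'); // -> hello again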
Short: emit's job is to trigger named events, which in turn cause functions (called listeners) to be called.
Detailed: Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") periodically emit named events that cause Function objects ("listeners") to be called.
All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on() function that allows one or more functions to be attached to named events emitted by the object.
When the EventEmitter object emits an event, all of the functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and will be discarded.
Read more in the Node.js events documentation.
Please look at line number 624 of the same file:
function emit (parser, event, data) {
  // call the parser's handler for this event (e.g. parser.ontext), but only if one is set
  parser[event] && parser[event](data)
}

Why use a callback function over triggering an event in jQuery?

I've seen the callback() concept used a lot in jQuery plugins and I'm starting to think triggering custom events could be a better alternative.
jQuery has a built-in mechanism to trigger('A_CUSTOM_EVENT'), so why don't plugin authors simply trigger a 'COMPLETE_EVENT' rather than insist we pass in a callback function to handle this 'complete' phase?
Nick
This depends on what you are trying to achieve - it's an architectural choice.
Basically, the event paradigm is open, non-private, and persistent. You have a public event that everybody can register for, and their handler functions are called on every occurrence until they unregister from the event. That makes sense for recurring events.
Example: Registering to a hover event.
The callback paradigm is isolated, private, and disposable. Someone calls your code and hands over a private callback, which is disposed of once it has executed. In most cases its usability is limited (to a single point in time) and/or should not necessarily be public.
Example: Handling an ajax response.
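A minimal sketch of both styles ('myplugin:complete' and the selector are made up; the ajax call is standard jQuery):

// event style: public and persistent -- anyone can listen, on every occurrence
$('#box').on('myplugin:complete', function (e, result) {
  console.log('finished with', result);
});
$('#box').trigger('myplugin:complete', [42]);

// callback style: private and disposable -- only the caller is notified, once
$.ajax('/api/data').done(function (result) {
  console.log('finished with', result);
});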
Both of these paradigms have advantages and drawbacks. Using one or the other is up to you and how you want your application to be used.
