While looking through the sax Node.js module, I saw multiple emit function calls, but I can't find any information about them.
Is it some V8 native tool for emitting events? Why does sax-js not use EventEmitter for its streams then?
In Node.js an event can be described simply as a string with a corresponding callback. An event can be "emitted" (in other words, the corresponding callback called) multiple times, or you can choose to listen only for the first time it is emitted.
The on or addListener method (basically the subscription method) allows you to choose the event to watch for and the callback to be called. The emit method (the publish method), on the other hand, lets you "emit" an event, which causes all callbacks registered to that event to fire (get called).
reference: https://docs.nodejitsu.com/articles/getting-started/control-flow/what-are-event-emitters/ (this link is outdated and no longer works)
Short: emit's job is to trigger a named event, which in turn causes the functions listening for it ("listeners") to be called.
Detailed: Node.js core API is built around an idiomatic asynchronous event-driven architecture in which certain kinds of objects (called "emitters") periodically emit named events that cause Function objects ("listeners") to be called.
All objects that emit events are instances of the EventEmitter class. These objects expose an eventEmitter.on() function that allows one or more functions to be attached to named events emitted by the object.
When the EventEmitter object emits an event, all of the functions attached to that specific event are called synchronously. Any values returned by the called listeners are ignored and will be discarded.
Read More here
Please look at line number 624 of the same file.
function emit (parser, event, data) {
  parser[event] && parser[event](data)
}
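So sax doesn't route its internal events through an EventEmitter at all: this helper simply checks whether a handler function is assigned as a property on the parser object and calls it if so. A sketch of how it behaves (the parser object here is a hypothetical stand-in, not sax's real parser):

```javascript
// The same helper as in sax (reproduced for illustration): it calls the
// handler only if one is assigned as a property on the parser object.
function emit(parser, event, data) {
  parser[event] && parser[event](data);
}

// Hypothetical parser: handlers are plain properties named after events.
const parser = {
  ontext: (text) => console.log("text:", text),
};

emit(parser, "ontext", "hello"); // calls parser.ontext("hello")
emit(parser, "onerror", "oops"); // no onerror handler assigned, so nothing happens
```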
I'm an absolute beginner in extension development trying to wrap my head around the browser runtime, which consists of the event loop + call stack + web APIs. As I understand it, if you run a script containing a function like this:
setTimeout(function msg() {
  console.log("Hey guys.");
}, 4000);
The call to setTimeout() will be pushed onto the call stack, executed, and handed off to the Web API, which starts the four-second timer; when it expires, the console.log("Hey guys."); callback is pushed onto the event queue. Only once the call stack is empty is this call popped off the queue and pushed onto the call stack. This callback mechanism is what gives JavaScript engines their asynchronous behavior.
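This queueing behavior can be observed directly; even with a 0 ms delay, the callback runs only after the currently executing script has finished:

```javascript
console.log("first");

setTimeout(function msg() {
  // Queued by the timer Web API; runs only once the call stack is empty.
  console.log("third");
}, 0);

console.log("second");
// Output order: first, second, third
```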
Does the same process apply to the following function call?
chrome.runtime.onInstalled.addListener(() => {
  console.log("Hello");
})
Here is where my confusion lies:
Assuming addListener() is pushed to the call stack, what happens after it's executed? Does the browser now "store" this listener function so that it knows to look out for the onInstalled event? And will it push the callback function onto the Event Loop queue once the event has been detected?
Why is the addListener method called on the event object chrome.runtime.onInstalled as opposed to taking it as a parameter?
Is the chrome.runtime.onInstalled object just a representation of the actual browser event, not the event itself? (i.e. an object provided by the Chrome API)
Does the same process apply to the following function call?
Kind of.
Assuming addListener() is pushed to the call stack, what happens after it's executed?
When the script starts, addListener adds the listener to the internal event registry without calling it. The registry is keyed on the function reference, so a listener is registered just once per event, i.e. subsequent addListener calls with the same function are effectively ignored.
Then the script ends, and all the listeners are registered.
Then, in case this script is a non-persistent background script (a service worker or an event page), the event that woke the background script is used to find its registered listeners in the registry, and they are called. This is why the documentation says that API listeners must be registered at the "top level", although that is an oversimplification. Technically, the listeners must be registered before the script ends, i.e. registration may happen inside a nested function as long as it runs synchronously, and even inside a synchronously imported ES module.
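The registry behavior described above can be sketched roughly like this (an illustration only, not Chrome's actual implementation):

```javascript
// Minimal sketch of an extension-style event object whose registry is
// keyed on the function reference, so the same listener registers once.
function createEvent() {
  const listeners = new Set(); // Set membership is by reference
  return {
    addListener(fn) { listeners.add(fn); },
    hasListener(fn) { return listeners.has(fn); },
    removeListener(fn) { listeners.delete(fn); },
    // Called by the platform when the browser detects the event.
    dispatch(...args) { for (const fn of listeners) fn(...args); },
  };
}

const onInstalled = createEvent();
const handler = (details) => console.log("installed:", details.reason);

onInstalled.addListener(handler);
onInstalled.addListener(handler); // same reference: effectively ignored

onInstalled.dispatch({ reason: "install" }); // handler runs exactly once
```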
Why is the addListener method called on the event object
This is how the extensions API is implemented internally; there's no universal reason why this exact shape was chosen. It's simply how the designers liked it, and it makes sense because the object also holds various useful methods such as hasListener, removeListener, and several others.
Is the chrome.runtime.onInstalled object just a representation of the actual browser event, not the event itself?
The word "event" here doesn't mean an instance of the event similar to DOM Event object that is dispatched to JS code. There's no such thing in the extensions API. It's just a static part of the API that is nested to represent the structure thereof: chrome is the common namespace, runtime is the specific API, onInstalled is the event name. This object is constructed by the internal JS layer of the extensions platform when the script starts.
I am a beginning JavaScript programmer. I have been trying to understand asynchronous JavaScript, but I wanted to clear up a couple of things.
I understand that JavaScript runs in a single thread and that you can use callback functions to make your code asynchronous, but I am confused about what makes a callback function asynchronous or not.
A lot of the async callbacks seem to follow a pattern where a function has as its parameters a certain action and then a callback function which is to execute when that action is complete:
jQuery.get('page.html', function (data) {
  console.log("second");
});
console.log('first');
What is it specifically that makes the callback in the parameter here execute at a later time? Is it that the get method is pre-defined as some sort of special method (because it fetches a file), such that if you pass a function as the second parameter, it behaves in an asynchronous manner?
How can you make functions that you would write yourself asynchronous?
Thanks
It could be one of several things that makes asynchronous code asynchronous:
Timer events (e.g. setTimeout() and setInterval(), which each accept a callback function as an argument to execute at a later time)
DOM events (e.g. attaching an event listener with a callback function to an HTML element or other DOM node, in which case your callback is executed when that event fires)
Other APIs provided by the browser (e.g. XMLHttpRequest, which emits events based on things the browser does natively under the hood)
In Node.js or a similar server-side environment, any I/O libraries that directly access resources such as disks or the network
Generally speaking, setTimeout() and setInterval() are the only general-purpose vehicles for asynchronous execution available to plain JS code (strictly speaking, even they are provided by the host environment rather than by the JavaScript language itself, like the DOM, browser, or other APIs of the specific runtime)
In the case of your example, jQuery's .get() method is simply a wrapper for the browser's XMLHttpRequest API: it creates a new XHR object, which in turn emits events based on the status of the HTTP request, and attaches listeners with callbacks to those events.
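As for the second question ("How can you make functions that you would write yourself asynchronous?"): merely accepting a callback is not enough. The function must defer the callback through one of those async primitives (or a Promise); otherwise everything runs synchronously. A minimal sketch:

```javascript
// Merely taking a callback does NOT make a function asynchronous:
function addSync(a, b, callback) {
  callback(a + b); // runs immediately, on the same call stack
}

// Deferring the callback through setTimeout makes it asynchronous:
function addAsync(a, b, callback) {
  setTimeout(() => callback(a + b), 0); // runs on a later event-loop turn
}

// Promise-based equivalent:
function addPromise(a, b) {
  return new Promise((resolve) => setTimeout(() => resolve(a + b), 0));
}

addAsync(1, 2, (sum) => console.log("async sum:", sum));
addPromise(3, 4).then((sum) => console.log("promise sum:", sum));
console.log("this line prints before either sum");
```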
Preface
The problem described below could apply to virtually any event-driven JS framework and any application that processes an incoming data/event stream. For the sake of definiteness, let's imagine a web-based IM (similar to Facebook chat) built with the qooxdoo framework.
The application receives incoming event stream (via WebSocket, for example) and re-emits events to its internal classes:
Events are processed, roughly speaking, in two stages. First-stage handlers are mostly alerts (play a sound on an incoming message, display a web notification, etc.); the second stage does the actual data processing and display. Handlers of both stages are invoked simultaneously as events arrive.
This model provides good code decoupling between the application and the handlers, allowing them to be added/removed and enabled/disabled independently. However...
The Problem
...as the application evolves, it turns out there can be dependencies between the stages. Some stage-1 handlers should block those of stage 2 (e.g., an incoming voice recording should not autoplay until the sound alert has completed). Some might even show a user confirmation and cancel the whole remaining chain if confirmation is not given. Event handling in qooxdoo assumes that all handlers are invoked (nearly) simultaneously, and there is no control over the order or timing of handler invocations.
How do we introduce the required control, while remaining within the event model and not sacrificing its benefits (low coupling, etc.)?
Solution
The candidate solution employs Promises. By default, qooxdoo event handlers do not return anything. Why not make them (optionally) return a Promise? In that case, a promise-aware event mediator should be introduced:
The handlers should now be subscribed to the mediator (this is omitted from the diagram for the sake of clarity). The mediator, in addition to the standard on/off methods, should implement an after method with the following semantics:
after(String name, Function listener, var ctx?) - invoke handler after all other handlers for this event
after(String name, Integer id, Function listener, var ctx?) - invoke handler after another handler with a known ID
after(String name, (Class|String) id, Function listener, var ctx?) - invoke handler after all other handlers of some known class (could be derived from this argument of the corresponding call)
Thus, we extend the existing event semantics at two points:
event handlers may now return Promises;
an after method is introduced for an event emitter/mediator.
The emitter/mediator should resolve dependencies and wire handler invocations to the corresponding then() blocks of Promises.
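A minimal sketch of how such a mediator's after() semantics could be wired with Promises (illustrative only; the class name and method signatures below are assumptions, not qooxdoo's API):

```javascript
// Promise-aware mediator sketch: after() chains a handler onto the
// Promise returned by a named earlier handler for the same event.
class Mediator {
  constructor() {
    this.handlers = new Map(); // event name -> [{ id, fn, afterId }]
  }
  on(event, id, fn) {
    this.register(event, { id, fn, afterId: null });
  }
  after(event, afterId, id, fn) {
    this.register(event, { id, fn, afterId });
  }
  register(event, entry) {
    if (!this.handlers.has(event)) this.handlers.set(event, []);
    this.handlers.get(event).push(entry);
  }
  async emit(event, data) {
    const done = new Map(); // id -> Promise that settles when handler finishes
    for (const { id, fn, afterId } of this.handlers.get(event) || []) {
      const prereq = afterId ? done.get(afterId) : Promise.resolve();
      done.set(id, prereq.then(() => fn(data)));
    }
    await Promise.all(done.values());
  }
}

const mediator = new Mediator();
mediator.on("message", "alert", async () => {
  /* play a sound; resolve when the alert has completed */
});
mediator.after("message", "alert", "autoplay", async () => {
  /* starts only once the "alert" handler's Promise has resolved */
});
```

A handler whose Promise rejects (e.g. when a user confirmation is denied) would automatically skip everything chained after it, which lines up with the cancellation requirement.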
The proposed solution seems to satisfy both requirements: 1) it implements dependencies between event handlers, and 2) it stays within the event-handling paradigm. Are there any pitfalls? Can it be done better or more cleanly? Any critique and advice is welcome.
Both methods can be used so that one event handler listens for events fired by another. The documentation says they do the same thing, just with different implementations. I'm wondering why the framework bothers to provide two different methods for the same task. Presumably pipe() is better for chaining, but is there any other hidden advantage of using pipe() over emit()/subscribe()?
If you do widgetA.pipe(widgetB) then all events from widgetA are sent to widgetB regardless whether widgetB is listening to them. Pipe is like a firehose.
Subscribe, on the other hand, is more performant. widgetB.subscribe(widgetA) says "of the things you emit, I want to subscribe to a particular subset." Other events are then completely ignored.
This is especially important when interacting with the DOM, which outputs a lot of events (mousedown, mouseup, touchmove, resize, etc...), and it's preferred to use Subscribe when listening to a DOM element.
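The difference can be sketched with a toy event handler (this illustrates the semantics only and is not famo.us's actual implementation):

```javascript
// Toy sketch: pipe() forwards every event downstream unconditionally,
// while subscribe() pulls only the events the subscriber listens to.
class Handler {
  constructor() {
    this.listeners = {};   // type -> [callback]
    this.pipes = [];       // downstream handlers that receive everything
    this.subscribers = []; // handlers that filter at the source
  }
  on(type, fn) {
    (this.listeners[type] ||= []).push(fn);
  }
  pipe(target) {
    this.pipes.push(target); // firehose
  }
  subscribe(source) {
    source.subscribers.push(this);
  }
  emit(type, data) {
    (this.listeners[type] || []).forEach((fn) => fn(data));
    // pipe: re-dispatch every event, whether or not anyone listens
    this.pipes.forEach((target) => target.emit(type, data));
    // subscribe: invoke only the callbacks the subscriber registered
    this.subscribers.forEach((sub) =>
      (sub.listeners[type] || []).forEach((fn) => fn(data)));
  }
}
```

With a pipe, every emit is re-dispatched downstream even for event types nobody handles; with subscribe, unhandled types are dropped at the source, which is why it is cheaper for chatty emitters like the DOM.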
I'm a little bit confused about how browsers handle JavaScript events.
Let's say I have two event handlers attached to buttons A and B. Both event handlers take exactly the same time to complete. If I click button A first and button B next, is it true that the event handler for button A is always executed first (because the event loop is a FIFO queue), but when they finish is completely unpredictable? If so, what actually determines this order?
Yes. The order in which event handlers execute is guaranteed, and in practice their executions will not overlap.
This is the beauty of the event loop as a concurrency model. You don't have to think about threading issues like deadlocks, livelocks and race conditions most of the time (though not always).
Order of execution is simple: JavaScript in the browser is single-threaded, and in practice you do not have to worry about the order in which things execute.
However, the fact that the order of mouse events is guaranteed has hardly anything to do with JavaScript. It is not part of the JavaScript language but part of something called the DOM API; the DOM (Document Object Model) is how JavaScript interacts with your browser and the HTML you write.
Host objects are defined in the JavaScript specification as external objects that JS in the browser works with, and their behavior in this case is specified in the DOM API.
Whether or not the order in which DOM events are registered is guaranteed is not part of JavaScript but part of that API. More specifically, it is defined right here. So, to your question: yes, the order of event execution is certain, except for control keys (like Ctrl+Alt+Delete), which can disturb the order of evaluation.
The JavaScript engine is single-threaded. All of your event handlers run sequentially; the click handler for A will be called and will finish before the handler for B ever starts. You can see this by busy-waiting in one handler (JavaScript has no real sleep()) and verifying that the second handler does not start until the first has finished.
Note that setTimeout is not valid for this test, because it merely registers a function with the engine to be called back at a later time; setTimeout itself returns immediately.
This fiddle should demonstrate this behavior.
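Since there is no sleep() in JavaScript, a busy-wait loop can stand in for it; queued setTimeout callbacks below stand in for two clicks in quick succession (a sketch of the behavior, not the original fiddle):

```javascript
// Busy-wait stand-in for sleep(): it blocks the single JS thread.
function block(ms) {
  const end = Date.now() + ms;
  while (Date.now() < end) { /* spin */ }
}

const order = [];
// Two callbacks queued back to back, like two clicks in quick succession.
setTimeout(() => { block(50); order.push("A done"); }, 0);
setTimeout(() => { order.push("B done"); }, 0);
setTimeout(() => console.log(order.join(", ")), 100);
// B's handler cannot start until A's handler has returned
```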
Well, the handlers are indeed executed in FIFO order by JavaScript. However, any asynchronous work the handlers kick off may take different amounts of time to deliver a result. In that case the response for handler B may come back earlier and the response for handler A later.