This has been on my mind for a few days now.
As per the docs, React has a synthetic event system, which is a cross-browser wrapper around the browser's native events. Going through the docs, is my understanding correct that the custom (synthetic) event system isn't about efficiency but rather cross-browser compatibility?
In other words, does React still attach the event listener to the element itself, rather than using the more efficient approach of event delegation on a parent element?
I also noticed this in the Firefox Inspector, which raised my initial curiosity.
The reason for asking is that I am working on an app where a user may be able to select a thousand elements and drag them around the screen, so eventually event delegation is going to come up.
Alright, you have perhaps already figured everything out on your own, but since I asked myself the same questions, I figured I'd leave this here in case someone else is curious, not only about using React but also about getting an idea of how it works.
So, I'm not entirely sure about your question (especially the part about attaching the event listener to the element) but:
React is all about the virtual DOM. As the name implies, it is therefore built on top of the "real" environment that is the DOM. Consequently, everything takes place in that abstracted layer, including event handling.
Events appear in their "natural" environment, i.e. the DOM or the native platform (depending on the flavor of React you are using).
Consequently, you first need to bring the events up to the virtual DOM, compute your changes there and dispatch them to the representations of your components in the virtual DOM, then bring the relevant changes back down to be reflected in the real DOM.
Carrying changes up to the virtual DOM is effectively done by top-level delegation: React itself listens to all events at the document level. Technically, this means all your events go through one capture-plus-bubbling pass before even entering React-specific code. I would not be able to say what that implies performance-wise, because you do "lose" the time associated with that first DOM traversal, but on the other hand you will do all your changes in the virtual DOM, which is faster than doing them in the real DOM...
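A rough sketch of that top-level delegation in plain JavaScript (the handler registry and the manual bubbling walk are my own simplification for illustration, not React's actual implementation):

var handlerRegistry = new Map(); // element -> { eventType: handler }

document.addEventListener('click', function (nativeEvent) {
    // walk up from the original target, simulating the bubbling phase in JS
    var node = nativeEvent.target;
    while (node) {
        var handlers = handlerRegistry.get(node);
        if (handlers && handlers.click) {
            handlers.click(nativeEvent);
        }
        node = node.parentNode;
    }
});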
Finally, SyntheticEvent is indeed a wrapper, which aims at reducing cross-browser compatibility issues. It also introduces pooling, which speeds things up by reducing garbage collection time. Besides, since one native event can generate several SyntheticEvents, it technically lets you create new ones easily (like a syntheticTap event that could be emitted if you receive a native touchStart and then a native touchEnd in quick succession).
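For instance, here is a hypothetical sketch of deriving a tap from two native touch events (the synthetictap name and the 300 ms threshold are made up for illustration):

var touchStartedAt = null;

document.addEventListener('touchstart', function () {
    touchStartedAt = Date.now();
});

document.addEventListener('touchend', function (e) {
    // a quick touchstart/touchend pair becomes one higher-level "tap"
    if (touchStartedAt !== null && Date.now() - touchStartedAt < 300) {
        e.target.dispatchEvent(new CustomEvent('synthetictap', { bubbles: true }));
    }
    touchStartedAt = null;
});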
I have written a post with more details here. It is far from perfect and there might be some imprecisions, but it can perhaps give you some more information on the topic.
Related
I've been developing a framework for web apps based upon an MVC-style methodology.
It's more of a general question to the JS gurus amongst you:
if you have lots of views, each with various event listeners, does this slow down the overall responsiveness? I'm toying with the idea of creating a global event manager which drills down to the active views/objects based on mouse position and focus and then calls methods, instead of creating lots of listeners all over the place for each and every view.
Would this improve the overall responsiveness of the app or is this largely pointless?
It's hard to create unit tests to check this, and I'm hoping for some insight from others.
You still need to hook those methods up to the events, don't you? Actually, I think and hope that the event-driven paradigm was designed and implemented as the best solution...
I don't know exactly how events are implemented in browsers, but I expect there is also some kind of global layer which captures all events, then searches for any listeners registered for them, goes through the DOM, and checks the selectors. When everything fits, it calls the provided method.
Actually, calling addEventListener once and invoking the functions in a callbacks array will be faster than using multiple addEventListener calls, but this only makes sense if you have more than about 20 event handlers.
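Something like this, if I understand the suggestion (the callbacks array and helper name are hypothetical):

var callbacks = [];

function addClickCallback(fn) {
    callbacks.push(fn);
}

// one real listener fans out to every registered callback
document.body.addEventListener('click', function (e) {
    for (var i = 0; i < callbacks.length; i++) {
        callbacks[i](e);
    }
});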
By now most folks on this site are probably aware that:
$("#someTable TD.foo").click(function(){
$(e.target).doSomething();
});
is going to perform much worse than:
$("#someTable").click(function(){
if (!$(e.target).is("TD.foo")) return;
$(e.target).doSomething();
});
Now how much worse will of course depend on how many TDs your table has, but this general principle should apply as long as you have at least a few TDs. (NOTE: Of course the smart thing would be to use jQuery delegate instead of the above, but I was just trying to make an example with an obvious differentiation).
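For completeness, a sketch of the delegate form alluded to above (reusing the same hypothetical doSomething placeholder):

$("#someTable").delegate("td.foo", "click", function (e) {
    $(e.target).doSomething();
});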
Anyhow, I explained this principle to a co-worker, and their response was "Well, for site-wide components (e.g. a date-picking INPUT) why stop there? Why not just bind one handler for each type of component to the BODY itself?" I didn't have a good answer.
Obviously using the delegation strategy means rethinking how you block events, so that's one downside. Also, you hypothetically could have a page where you have a "TD.foo" that shouldn't have an event hooked up to it. But, if you understand and are willing to work around the event bubbling change, and if you enforce a policy of "if you put .foo on a TD, it's ALWAYS going to get the event hooked up", neither of these seems like a big deal.
I feel like I must be missing something though, so my question is: is there any other downside to just delegating all events for all site-wide components to the BODY (as opposed to binding them directly to the HTML elements involved, or delegating them to a non-BODY parent element)?
What you're missing is that there are different aspects to the performance.
Your first example performs worse when setting up the click handler, but performs better when the actual event is triggered.
Your second example performs better when setting up the click handler, but performs significantly worse when the actual event is triggered.
If all events were put on a top-level object (like the document), then you'd have an enormous list of selectors to check on every event in order to find which handler function it goes with. This very issue is why jQuery deprecated the .live() method: it hooks all events on the document object, and when there were lots of .live() event handlers registered, the performance of each event was bad because every event had to be compared to lots and lots of selectors to find the appropriate handler.

For large-scale work, it's much, much more efficient to bind the event as close as possible to the actual object that triggered it. If the object isn't dynamic, then bind the event right to the object that will trigger it. This might cost a tiny bit more CPU when you first bind the event, but the actual event triggering will be fast and will scale.
jQuery's .on() and .delegate() can be used for this, but it is recommended that you bind to an ancestor object that is as close as possible to the triggering object. This prevents a buildup of lots of dynamic events on one top-level object and prevents the performance degradation in event handling.
In your example above, it's perfectly reasonable to do:
$("#someTable").on('click', "td.foo", function(e) {
$(e.target).doSomething();
});
That would give you one compact representation of a click handler for all rows and it would continue to work even as you added/removed rows.
But, this would not make as much sense:
$(document).on('click', "#someTable td.foo", function(e) {
    $(e.target).doSomething();
});
because this would be mixing the table events in with all other top level events in the page when there is no real need to do that. You are only asking for performance issues in the event handling without any benefit of handling the events there.
So, I think the short answer to your question is that handling all events in one top level place leads to performance issues when the event is triggered as the code has to sort out which handler should get the event when there are a lot of events being handled in the same place. Handling the events as close to the generating object as practical makes the event handling more efficient.
If you were doing it in plain JavaScript, the impact of random clicks anywhere on the page triggering events would be almost zero. However, in jQuery the cost could be much greater due to the number of raw JS operations it has to run to produce the same effect.
Personally, I find that a little delegation is good, but too much of it will start causing more problems than it solves.
If you remove a node, the corresponding listeners are not removed automatically.
Some events just don't bubble (see the sketch below).
Different libraries may break the system by stopping event propagation (I guess you mentioned that one).
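To illustrate the second point: focus and blur don't bubble, so delegating them on a parent silently fails, while their bubbling counterparts focusin and focusout can be delegated (the input selector here is hypothetical):

// document.body.addEventListener('focus', handler);
// ...never fires for descendants, because focus doesn't bubble

// the bubbling counterpart works for delegation
document.body.addEventListener('focusin', function (e) {
    if (e.target.matches('input.tracked')) {
        // react to any tracked input gaining focus
    }
});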
I'm working on a completely AJAX web project where section content is always generated through DOM manipulation or jQuery's load function. I had been using "live" but am very interested in moving away from it and using "on" for the performance benefits. When a new page loads, a whole new set of bindings required for that section also needs to be loaded. The HTML sections have some parent DOM elements (basically wrappers for different content areas of the web page) that never change, allowing me to bind on them for all future DOM elements that will be created on the page.
In terms of memory and performance trade off which is generally the better way to handle event bindings?
1. After a new section has finished loading its HTML, bind all the events needed for that specific page instance on DOM elements that will be removed when the page changes.
2. Bind every event on the very first page load to DOM elements (not to the document, though, like live does) that are known to always exist.
Memory issues with listeners can usually be dealt with fairly easily (don't hold large chunks of data in closures, don't create circular references, use delegation, etc.).
"Live" just uses delegation (as far as I know) - you can implement delegation quite simply without it using simple criteria, e.g. class or id, with listeners on the unchanging parent elements. Delegation is a good strategy where it replaces numerous other listeners, the content is constantly being changed and identifying elements that should call functions is simple.
If you follow a strategy of attaching numerous new listeners every time content changes, you also have to detach the old ones when they are replaced as a strategy to reduce the likelihood of memory leaks. Performance (in terms of time taken to attach and remove listeners as part of the DOM update) usually isn't that much of an issue unless you are doing hundreds of them.
With delegation, a parent element listens for events, checks if the event.target/srcElement is one it cares about for that event, then calls the appropriate function perhaps using call to set the value of this if required.
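In bare JavaScript that looks something like this (the container id, class name and handler name are all hypothetical):

var container = document.getElementById('content'); // an unchanging parent

container.addEventListener('click', function (e) {
    var target = e.target || e.srcElement;
    if (target.className.indexOf('action-button') !== -1) {
        handleAction.call(target, e); // set `this` to the matched element
    }
});

function handleAction(e) {
    // `this` is the clicked .action-button element
}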
Note that you can also simply include inline listeners in the inserted HTML, then you never need to worry about memory leaks, delegation or adding and removing listeners. Inline listeners using a simple function call are no more complex than adding any other attribute (class, id, whatever) and require zero extra programming on the client. I don't think inline listeners were ever an issue for memory leaks.
Of course the "unobtrusive javascript" mob will howl, but they are very practical, functional and robust, not to mention supported by every browser that ever supported javascript.
So I am just trying to add some functions on hover. The two snippets below do about the same thing, except that in the one with the for loop I have the targets stored in an array. I actually have a few questions.
Is this the correct use of stopPropagation?
What's the best practice for doing something like this?
Which one of the below methods is faster and uses fewer resources?
I know I can use hover(), but I used bind because I thought it was faster. Is my thinking correct?
Thank you.
for (var i in slides) {
    $(slides[i].el).bind({
        mouseenter: function (event) {
            event.stopPropagation();
            // do something
        },
        mouseleave: function (event) {
            event.stopPropagation();
            // do something
        }
    });
}
$("#vehicleSlides .vehicleAreas").bind( {
mouseenter: function (event) {
event.stopPropagation();
// do something
},
mouseleave: function (event) {
event.stopPropagation();
//do something
}
});
1 - Is this the correct use of stopPropagation?
If you wish to stop the event bubbling up the DOM tree, then yes.
2 - What's the best practice for doing something like this?
Personally, I prefer the jQuery selector followed by methods, but this is just a preference. The best practice is whatever style you and your team all agree upon and use consistently.
3 - Which one of the below methods is faster and uses fewer resources?
In practical terms, there will be next to no difference between the two.
4 - I know I can use hover(), but I used bind because I thought it was faster. Is my thinking correct?
The jQuery hover method is shorthand for binding to the mouseenter and mouseleave events, so there will be one extra function call when using hover; however, there will be almost no difference in performance.
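To make that concrete, these two forms do essentially the same thing (the handlers are placeholders):

function onEnter() { /* do something */ }
function onLeave() { /* do something */ }

// the shorthand...
$("#vehicleSlides .vehicleAreas").hover(onEnter, onLeave);

// ...and the roughly equivalent bind call
$("#vehicleSlides .vehicleAreas").bind({ mouseenter: onEnter, mouseleave: onLeave });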
Best practice would make consistently correct functionality the highest priority, so it depends on whether or not the event should be seen or heard by a parent node once processed.
That depends on your design. For example, in a "window"-like object that you want to be able to drag around, you could either A. attach a mouse handler to the entire window, or B. attach a listener to a child "background" object to detect the mouse down event to begin dragging.
If you choose design B., then you have to make sure labels and other objects you don't want to receive mouse events have mouse events disabled (in Flash [AS3] set mouseEnabled and mouseChildren to false; not sure about JavaScript). One con of this design is that it would prevent any object in the U.I. from passively processing or modifying event behavior in the bubbling phase, because any interception in the capture phase would prevent it from reaching the background in the first place. One pro of this design is that by allowing the event to bubble, you could have monitors and other global effects processing mouse clicks at higher levels.
On the other hand, if you choose design A., then you don't have to worry about making child objects transparent to the mouse (an event on a label would still bubble up to the window container itself), but instead you have to make sure that event propagation on child objects like buttons is stopped once the event is handled so that they don't reach the window handler at the top of the hierarchy. It really depends on how you want it to function, and a hybrid approach is probably best.
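As a sketch of design A in jQuery terms (all selectors here are hypothetical):

// the whole window container starts a drag on mousedown...
$(".window").on("mousedown", function (e) {
    // begin dragging the window
});

// ...but interactive children swallow the event so they don't trigger a drag
$(".window button").on("mousedown", function (e) {
    e.stopPropagation();
});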
You can have this down to a design science to the point where the "design" isn't a design at all, but a complex truth known to be true scientifically.
Any browser optimization of this system would involve keeping track, during the capture phase, of which parent nodes had bubble-phase event handlers attached for the event type. If for example it entered the target/bubbling phase knowing that no parent nodes had handlers, it could skip the entire bubbling phase, or jump directly to nodes known to have handlers. That's poor design however, IMO, because you might want to attach new handlers to parent nodes at any time during capture or bubbling, or you may want to move a node to another parent to try to cause the event to bubble up a different parent chain. Try it and see how it behaves in different browsers. There's bound to be huge inconsistency like anything else involving HTML rendering and event processing, in terms of both behavior and performance :P
There's probably not much difference between bind and hover. Work avoidance is always a good thing to consider but 1-3 more function calls to get to an event handler isn't going to put a dent in a modern JIT's performance.
You are not invoking stopPropagation incorrectly, but if you're doing it for no particular reason other than bubbling making you uncomfortable, or because you're afraid of triggering something else by accident, then yes, you are doing it wrong.
The first rule of UI work should always be:
DON'T DO ANYTHING YOU DON'T NEED TO DO
Examples:
Don't solve problems you don't have yet.
Don't do anything "just in case," because that means you don't know what's actually happening and you really need to understand what your stuff actually does and how it works before you call anything done in UI.
Don't stop people from using your UI differently than anticipated. e.g. validating HTML format and throwing errors when somebody tries to make something you wrote work a little differently. What are you, running a customer support line? It helps no one/serves nothing. What does it matter if they prefer a more semantically correct unordered list to a pile of divs?
But on stopProp specifically, if you must use it (and it can solve some problems very elegantly so never say never) try to only hit endpoint nodes with it so other things can be added to the same container without losing the benefits of bubbling. Yes, benefits I say. Don't fear bubbling. Events moving back up the ancestor line are only likely to trigger other UI events if your HTML is a complete disaster (it should be nothing but containers all the way back up to the body, right?).
Also if you can just verify that you have the right target element in the handler before taking action, do that instead of stopProp. But cripes it pisses me off when people add return false and e.stopPropagation to every single UI handler they write. Especially when they themselves pick up the event from a container that encompasses much more than the active element in question.
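For example, something like this checks the target instead of cutting off the bubble (the selector is hypothetical):

$(document).on("click", function (e) {
    // not one of ours? then just ignore it and let it bubble on
    if (!$(e.target).closest(".menu-item").length) return;
    // handle the menu item click
});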
So don't do that. We might work in the same office some day and I can be whiny and insufferable and I'll sabotage your cheesecake.
I want to use event delegation on a number of buttons in a page of HTML. These buttons are all over the page, and I was wondering how expensive it would be to listen to the entire document for on click events, and then just have those on click events trigger an event delegation handler. Would this be more expensive than having listeners on each of 20+ buttons (it can grow to be over 100 buttons, yes it is silly)?
"I don't see how it would be more expensive since it would be listening for clicks on the document object instead of 25 anchor objects."
The key idea here is depth. Your event has to traverse up the DOM before your handler is executed. If your elements are deep down the DOM tree, you may notice some performance degradation.
Couple of things to bear in mind:
the number of anchors doesn't matter for event delegation, that is true
generally speaking event delegation is a superior alternative in most cases, but it's not useful all the time
My suggestion is to analyze these kinds of problems, learn how things work, and make decisions with good old common sense.
I don't see how it would be more expensive since it would be listening for clicks on the document object instead of 25 anchor objects. With that said, just 25-30 buttons is not really resource-intensive so you probably don't need to worry about this.
This is the strategy used by, for example, the jQuery "live" method: listen on the whole document then test the sender against a condition (i.e., selector). Unless the selector is unbearably intensive, this technique is more efficient for large and growing sets of targets.
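Roughly what that live-style strategy does under the hood (the selector is hypothetical):

document.addEventListener("click", function (e) {
    var button = e.target.closest("button.action"); // test the sender against the selector
    if (button) {
        // handle the click for that button
    }
});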
Have you heard? Working code is not necessarily good code.
If you use 20~40 button listeners on a page instead of using delegate, it will work and you probably will not even see any performance issue. I assume you would use a for-in loop or $.each to bind all those listeners, so you would not need to write code for each listener.
So, my request is: please use delegate.
If in the future you need to change the logic, or you decide to test the application, then you will be in trouble.