I'm working on a small admin area for a web page.
Does it make sense to unbind events to improve client-side performance? Or does unbinding events and binding them again 30 seconds later cost more than it saves?
My questions:
Is the idea behind bind()/unbind() or on()/off() just to improve client-side performance, or should I use it for other scenarios? I ask because my JavaScript code keeps growing (by about 30%) from all the unbinding of events, and I worry that some things may break when the user doesn't interact the way I expect...
EDIT: Most of the time I'm binding/unbinding keypress events, because I need the arrow keys for different scenarios.
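For example, here's a sketch of the kind of swap I mean (the function names are made up):
// Scenario 1: arrow keys move the photo selection.
function galleryKeys(e) {
    if (e.which === 37) { selectPreviousPhoto(); } // left arrow
    if (e.which === 39) { selectNextPhoto(); }     // right arrow
}
// Scenario 2: arrow keys move through table rows.
function tableKeys(e) {
    if (e.which === 38) { rowUp(); }   // up arrow
    if (e.which === 40) { rowDown(); } // down arrow
}
// Switching scenarios means unbinding one handler and binding the other.
$(document).off("keydown", galleryKeys).on("keydown", tableKeys);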
Unbinding only to bind again for performance reasons is probably bug-prone and makes things overly complicated in most cases.
Instead of binding event listeners on many specific DOM elements, you could take a more "bird's-eye" approach and bind just a few listeners near the top of the DOM tree, then check what was actually clicked when the event is triggered.
That way you won't spend CPU on binding/unbinding lots of event listeners, but instead take a small CPU hit when an event is processed (which is usually not noticeable).
This is covered in detail here: event delegation vs direct binding when adding complex elements to a page
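A minimal sketch of that bird's-eye approach (the toolbar id, class names, and action functions are all made up):
$("#toolbar").on("click", function(e) {
    var $target = $(e.target);
    if ($target.hasClass("save-button")) {
        saveItem();    // hypothetical action
    } else if ($target.hasClass("delete-button")) {
        deleteItem();  // hypothetical action
    }
    // Clicks on anything else fall through and are ignored.
});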
If you repeatedly bind and unbind, you are creating extra work for the garbage collector, which has to come in and clean up after you each time. It is best to bind once and not have to bind again.
If your client side is expected to run for long periods of time (weeks, months) then you should look into memory management and memory leaks as more of a concern for performance.
Binding and unbinding (if not done correctly) may produce memory leaks, which are hard to find. If you are using a WebKit-based browser, take heap snapshots of both approaches (unbinding versus binding once) and then make the best decision.
Here's a link:
http://addyosmani.com/blog/taming-the-unicorn-easing-javascript-memory-profiling-in-devtools/
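One common way binding/unbinding goes wrong, as a sketch (handleArrows and the element id are made up): .off() can only remove a handler it can match, so passing a fresh anonymous function removes nothing and handlers quietly pile up.
// BROKEN: .off() gets a brand-new anonymous function, which matches
// nothing, so the old handler stays attached and handlers accumulate.
$("#slides").on("keydown", function(e) { handleArrows(e); });
$("#slides").off("keydown", function(e) { handleArrows(e); });
// SAFER: keep a reference so .off() can match the exact handler.
function onKeydown(e) { handleArrows(e); }
$("#slides").on("keydown", onKeydown);
$("#slides").off("keydown", onKeydown);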
One solution to avoid having to worry about this, especially if you deal with constantly changing elements or large quantities, is to register your event with the body and then specify a selector argument.
Like this:
$("body").on("click", ".my-actual-element", function(aEvent) {
// Event handler code goes here.
});
See the jQuery docs for .on() for more.
Related
By now most folks on this site are probably aware that:
$("#someTable TD.foo").click(function(){
$(e.target).doSomething();
});
is going to perform much worse than:
$("#someTable").click(function(){
if (!$(e.target).is("TD.foo")) return;
$(e.target).doSomething();
});
Now how much worse will of course depend on how many TDs your table has, but this general principle should apply as long as you have at least a few TDs. (NOTE: Of course the smart thing would be to use jQuery delegate instead of the above, but I was just trying to make an example with an obvious differentiation).
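(For reference, the delegate version would look roughly like this:)
$("#someTable").delegate("td.foo", "click", function(e) {
    $(e.target).doSomething();
});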
Anyhow, I explained this principle to a co-worker, and their response was "Well, for site-wide components (e.g. a date-picking INPUT) why stop there? Why not just bind one handler for each type of component to the BODY itself?" I didn't have a good answer.
Obviously using the delegation strategy means rethinking how you stop event propagation, so that's one downside. Also, you hypothetically could have a page where you have a "TD.foo" that shouldn't have an event hooked up to it. But, if you understand and are willing to work around the event bubbling change, and if you enforce a policy of "if you put .foo on a TD, it's ALWAYS going to get the event hooked up", neither of these seems like a big deal.
I feel like I must be missing something though, so my question is: is there any other downside to just delegating all events for all site-wide components to the BODY (as opposed to binding them directly to the HTML elements involved, or delegating them to a non-BODY parent element)?
What you're missing is that there are two different parts to the performance: the cost of setting up the handler and the cost of handling each event.
Your first example performs worse when setting up the click handler, but performs better when the actual event is triggered.
Your second example performs better when setting up the click handler, but performs significantly worse when the actual event is triggered.
If all events were put on a top-level object (like the document), then you'd have an enormous list of selectors to check on every event in order to find which handler function it goes with. This very issue is why jQuery deprecated the .live() method: it registers all events on the document object, and when lots of .live() event handlers were registered, the performance of each event was bad because every event had to be compared against lots and lots of selectors to find the appropriate handler.
For large-scale work, it's much, much more efficient to bind the event as close as possible to the object that actually triggers it. If the object isn't dynamic, then bind the event right to the object that will trigger it. This might cost a tiny bit more CPU when you first bind the events, but the actual event triggering will be fast and will scale.
jQuery's .on() and .delegate() can be used for this, but it is recommended that you bind to an ancestor object that is as close as possible to the triggering object. This prevents a buildup of lots of dynamic events on one top-level object and avoids degrading event-handling performance.
In your example above, it's perfectly reasonable to do:
$("#someTable").on('click', "td.foo", function(e) {
$(e.target).doSomething();
});
That would give you one compact representation of a click handler for all rows and it would continue to work even as you added/removed rows.
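For example, a row added later is covered with no extra wiring:
// The delegated handler above fires for this new cell too, even
// though it was added after .on() was called.
$("#someTable").append("<tr><td class='foo'>new cell</td></tr>");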
But, this would not make as much sense:
$(document).on('click', "#someTable td.foo", function(e) {
    $(e.target).doSomething();
});
because this would be mixing the table events in with all other top level events in the page when there is no real need to do that. You are only asking for performance issues in the event handling without any benefit of handling the events there.
So, I think the short answer to your question is that handling all events in one top level place leads to performance issues when the event is triggered as the code has to sort out which handler should get the event when there are a lot of events being handled in the same place. Handling the events as close to the generating object as practical makes the event handling more efficient.
If you were doing it in plain JavaScript, the impact of random clicks anywhere on the page triggering events is almost zero. In jQuery, however, the consequence can be much greater because of the number of raw JS operations it has to run to produce the same effect.
Personally, I find that a little delegation is good, but too much of it will start causing more problems than it solves.
To name a few other downsides:
If you remove a node, the corresponding listeners are not removed automatically.
Some events just don't bubble (see the sketch below).
Different libraries may break the system by stopping event propagation (I guess you mentioned that one).
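On the "don't bubble" point, focus and blur are the classic examples; a sketch of the usual workaround, using focusin (which does bubble) in a delegated handler (the form id and class name are made up):
// "focus" does not bubble, so a delegated handler has to listen for
// "focusin" (which does bubble) to see focus changes on the inputs.
$("#settings-form").on("focusin", "input", function() {
    $(this).addClass("active-field");
});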
I want to use event delegation on a number of buttons in an HTML page. These buttons are all over the page, and I was wondering how expensive it would be to listen on the entire document for click events and dispatch them through a single delegation handler. Would this be more expensive than having listeners on each of 20+ buttons (it can grow to be over 100 buttons, yes it is silly)?
The key idea here is depth. Your event has to traverse up the DOM before your handler executes. If your elements are deep down the DOM tree, you may notice some performance degradation.
Couple of things to bear in mind:
the number of anchors doesn't matter for event delegation, that is true.
generally speaking, event delegation is a superior alternative in most cases, but it's not useful all the time.
My suggestion is to analyze these kinds of problems, learn how things work, and make decisions with good old common sense.
I don't see how it would be more expensive since it would be listening for clicks on the document object instead of 25 anchor objects. With that said, just 25-30 buttons is not really resource-intensive so you probably don't need to worry about this.
This is the strategy used by, for example, the jQuery "live" method: listen on the whole document then test the sender against a condition (i.e., selector). Unless the selector is unbearably intensive, this technique is more efficient for large and growing sets of targets.
Have you heard? Working code is not necessarily good code.
If you use 20~40 button listeners on a page instead of using delegate, it will work, and you probably won't even see a performance issue. I assume you would use for-in/$.each to bind all those listeners, so you wouldn't have to write code for each listener by hand.
Even so, my request is: please use delegate.
If in the future you need to change the logic, or you decide to test the application, you will otherwise be in trouble.
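To make the contrast concrete (the button class and runAction are made up):
// One handler per button: works, but N handler objects for N buttons.
$(".action-button").each(function() {
    $(this).on("click", function() { runAction(this); });
});
// One delegated handler total, and buttons added later still work.
$(document).on("click", ".action-button", function() {
    runAction(this);
});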
Is it the process of doing the binding, or having many things bound, that's the primary issue with binding more events than necessary?
The answer's probably both, but to what extent?
Also, I would assume that mouseover events are more expensive than click events, since they have to be checked for more frequently. Right?
The binding of events does take time, so if you bind say, a hundred or more events, user interaction with the browser will be 'uneventful' during the time spent binding all of those events.
The more event handlers on the page, the longer the event queue, the slower the UI.
@Juan nicely summarises event delegation in a single sentence in his answer, as an alternative to binding events to many child elements.
As far as I have noticed, the more listeners you add, the slower the UI will be. Event delegation uses less memory; instead of a listener for each child node, you have a single, smarter handler at a parent element. Less memory, less attaching and detaching handlers.
Mouseover events are not necessarily more expensive; it's not extra memory, it's just that your handler runs very often, so you need to make sure it's light code.
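One way to keep a frequently fired handler light is to throttle the expensive part; a sketch (the element id and updateHoverPreview are made up):
var lastRun = 0;
$("#preview-area").on("mousemove", function(e) {
    var now = Date.now();
    if (now - lastRun < 100) { return; } // do the real work at most ~10x/sec
    lastRun = now;
    updateHoverPreview(e.pageX, e.pageY); // hypothetical expensive work
});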
I know it is better coding practice to avoid inline javascript like:
<img id="the_image" onclick="do_this(true);return false;"/>
I am thinking about switching this kind of stuff for bound jquery click events like:
$("#the_image").bind("click",function(){
do_this(true);
return false;
});
Will I lose any performance if I bind a ton of click events? I am not worried about the time it takes to initially bind the events, but the response times between clicking and it happening.
I bet if there is a difference, it is negligible, but I will have a ton of functions bound. I'm wondering if browsers treat the onclick attribute the same way as a bound event.
Thanks
Save yourself the worry and use .on():
$("#the_image").on("click",function(){
do_this(true);
return false;
});
One handler, with no performance hit from multiple items.
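To be clear, that benefit comes from the delegated form of .on(); a sketch, assuming the images share a class (the class name is made up):
// One delegated handler on the document covers every matching image,
// including images inserted later.
$(document).on("click", "img.clickable", function() {
    do_this(true);
    return false;
});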
In my work, it depended. I moved all of my events to jquery. Then I profiled the javascript using FireBug to see what was taking the longest. Then I optimized those taking the longest.
If it's just a few, you won't notice any degradation. If it's hundreds or thousands, then you might.
The difference is negligible. If you have to bind many items on the page, there can be a performance hit, and you may want to bind to a higher-level object, such as a containing DIV tag, and simply check for the target item (the image) when the click arrives. Other than that, it should be fine; it will depend on your specific use case.
Look into event bubbling in JavaScript for more specifics.
I'm making a big content-slideshow kind of page that is starting to use a lot of event triggers. Also, about half of them use the livequery plugin.
Will I see speed increases by unloading these events between slides so only the active slide has bound events?
Also, is the native livequery significantly faster than the livequery plugin? (It's certainly less functional.)
Also would something like this:
http://dev.jquery.com/attachment/ticket/2698/unload.js
unbind livequery events as well?
I really just need to know how long it takes to unload/load an event listener versus how many cycles the listeners really eat up if I leave them running. Also, any information on live events would be awesome.
I need more details to offer actual code, but you might want to look into Event Delegation:
Event delegation refers to the use of a single event listener on a parent object to listen for events happening on its children (or deeper descendants). Event delegation allows developers to be sparse in their application of event listeners while still reacting to events as they happen on highly specific targets. This proves to be a key strategy for maintaining high performance in event-rich web projects, where the creation of hundreds of event listeners can quickly degrade performance.
A quick, basic example:
Say you have a DIV with images, like this:
<div id="container">
<img src="happy.jpg">
<img src="sad.jpg">
<img src="laugh.jpg">
<img src="boring.jpg">
</div>
But instead of 4 images, you have 100, or 200. You want to bind a click event to images so that X action is performed when the user clicks on it. Most people's first code might look like this:
$('#container img').click(function() {
    performAction(this);
});
This is going to bind a crapload of event handlers that will bog down the performance of your page. With Event Delegation, you can do something like this:
$('#container').click(function(e) {
    if ($(e.target)[0].nodeName.toUpperCase() == 'IMG') {
        performAction(e.target);
    }
});
This will only bind 1 event to the actual container, you can then figure out what was clicked by using the event's target property and delegate accordingly. This is still kind of a pain, though, and you can actually get this significant performance improvement without doing all this by using jQuery's live function:
$('#container img').live('click', function() {
    performAction(this);
});
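Note that live() has since been deprecated in favor of the delegated form of .on(), which does the same job; a sketch:
// Modern equivalent of the live() call above.
$('#container').on('click', 'img', function() {
    performAction(this);
});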
Hope this helps.
If by "native liveQuery" you mean live(), then yes, live() is significantly faster than liveQuery(). The latter uses setInterval to periodically query the entire document tree for new elements while the former uses event delegation.
Event delegation wins hands down. In a nutshell, live() will have one handler on the document per event type registered (e.g., click), no matter how many selectors you call live() with.
As for your other question, it sounds like you are binding to each slide's elements and want to know whether unbinding and rebinding performs well. I would say that with respect to memory, yes; with respect to CPU cycles, no.
To be clear, with the liveQuery() approach the CPU will never sleep.
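In spirit, the polling approach amounts to something like this (a sketch, not the plugin's actual code; the selector and handler are made up):
// Re-run the selector on a timer, forever, whether or not anything
// on the page changed; the work never stops, so the CPU never sleeps.
setInterval(function() {
    $("img.slide").not(".wired").addClass("wired").on("click", handleClick);
}, 50);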
For what it's worth, we just ran some tests on this matter. We created a page with a div containing a number of divs, each of which needed an onclick handler to display an alert dialog showing its id.
In one case we used DOM Level 0 event registration and defined the event handler directly in the HTML for each: onclick="_do_click(this);". In the other case, we used DOM Level 2 event propagation and defined a single event handler on the containing div.
What we found was that, at 100,000 contained divs, there was a negligible difference in load time in Firefox; both took a long time. In Safari, we found that DOM Level 0 took twice as long as DOM Level 2, but was still four times faster than either Firefox case.
So, yes, it does result in better performance, but it seems like you really have to try to create a noticeable penalty.