Inline JavaScript performance

I know it is better coding practice to avoid inline javascript like:
<img id="the_image" onclick="do_this(true);return false;"/>
I am thinking about switching this kind of stuff for bound jquery click events like:
$("#the_image").bind("click",function(){
do_this(true);
return false;
});
Will I lose any performance if I bind a ton of click events? I am not worried about the time it takes to initially bind the events, but about the response time between clicking and the handler running.
I bet if there is a difference, it is negligible, but I will have a ton of functions bound. I'm wondering if browsers treat the onclick attribute the same way as a bound event.
Thanks

Save yourself the worry: use the .on() method.
$("#the_image").on("click",function(){
do_this(true);
return false;
});
One handler, with no performance hit even with multiple items.
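Strictly speaking, that claim applies to the delegated form of .on(), where a single handler on an ancestor covers any number of matching elements. A minimal sketch, assuming a hypothetical .slide-image class on the images:

$(document).on("click", ".slide-image", function() {
    do_this(true);
    return false;
});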

In my work, it depended. I moved all of my events to jQuery, then profiled the JavaScript using Firebug to see what was taking the longest, and optimized those parts.
If it's just a few, you won't notice any degradation. If it's hundreds or thousands, then you might.

The difference is negligible. If you have to bind many items on the page, however, there can be a performance hit, and you may want to bind to a higher-level object instead: attach the click handler to a containing DIV and inspect the event target to pick out the image you care about. Other than that, it should be fine, and it will depend on your specific use case.
Look into event bubbling in javascript for more specifics.
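For example, the containing-DIV approach might look like the following sketch; the container id is hypothetical:

$("#image_container").bind("click", function(event) {
    // The click bubbles up from the image to the container;
    // only react when it actually started on the image we care about.
    if (event.target.id === "the_image") {
        do_this(true);
        return false;
    }
});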

Related

jQuery bind() unbind() and on() and off()

I'm working on a small admin area for a web page.
Does it make sense to unbind events to increase client-side performance? Or does it cost more to unbind events and bind them again 30 seconds later?
My questions:
Is the idea behind bind()/unbind() or on()/off() just to increase client-side performance, or should I use it for other scenarios? I ask because my JavaScript code keeps growing (by about 30%) because of all the unbinding, and I worry that some things may not work when the user doesn't interact the way I expect...
EDIT: Most of the time I'm binding/unbinding keypress events, because I need the arrow keys for different scenarios.
Unbinding only to bind again for performance reasons is probably bug-prone and makes things overly complicated in most cases.
Instead of binding event listeners on many specific DOM elements, you could take a more bird's-eye approach and bind just a few listeners near the top of the DOM tree, and then check what was actually clicked when the event is triggered.
That way you won't spend CPU on binding/unbinding lots of event listeners, but instead take a small CPU hit when an event is processed (which is usually not noticeable).
This is covered in detail here: event delegation vs direct binding when adding complex elements to a page
If you repeatedly bind and unbind, you are just creating extra work for the garbage collector to come in and clean up after you. It is best to bind once and not have to bind again.
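For the keypress scenario in the question, one way to bind once is to branch on a state variable instead of swapping handlers. A rough sketch; the mode values are hypothetical, and keydown is used because arrow keys don't reliably fire keypress in all browsers:

var mode = "slideshow"; // updated elsewhere when the scenario changes

$(document).bind("keydown", function(event) {
    if (event.which === 37) {        // left arrow
        if (mode === "slideshow") {
            // previous slide
        }
    } else if (event.which === 39) { // right arrow
        if (mode === "slideshow") {
            // next slide
        }
    }
});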
If your client side is expected to run for long periods of time (weeks, months) then you should look into memory management and memory leaks as more of a concern for performance.
Binding and unbinding (if not done correctly) may produce memory leaks which are hard to find. If you are using a WebKit-based browser, take heap snapshots comparing unbinding versus binding once, and then you can make the best decision.
Here's a link:
http://addyosmani.com/blog/taming-the-unicorn-easing-javascript-memory-profiling-in-devtools/
One solution to avoid having to worry about this, especially if you deal with constantly changing elements or large quantities, is to register your event with the body and then specify a selector argument.
Like this:
$("body").on("click", ".my-actual-element", function(aEvent) {
// Event handler code goes here.
});
See more here: .on().

Why not take JavaScript event delegation to the extreme?

By now most folks on this site are probably aware that:
$("#someTable TD.foo").click(function(){
$(e.target).doSomething();
});
is going to perform much worse than:
$("#someTable").click(function(){
if (!$(e.target).is("TD.foo")) return;
$(e.target).doSomething();
});
Now how much worse will of course depend on how many TDs your table has, but this general principle should apply as long as you have at least a few TDs. (NOTE: Of course the smart thing would be to use jQuery delegate instead of the above, but I was just trying to make an example with an obvious differentiation).
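For reference, the delegate() form that note alludes to would look roughly like this, with doSomething being the placeholder from the examples above:

$("#someTable").delegate("td.foo", "click", function(e) {
    $(e.target).doSomething();
});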
Anyhow, I explained this principle to a co-worker, and their response was "Well, for site-wide components (e.g. a date-picking INPUT) why stop there? Why not just bind one handler for each type of component to the BODY itself?" I didn't have a good answer.
Obviously using the delegation strategy means rethinking how you block events, so that's one downside. Also, you hypothetically could have a page where you have a "TD.foo" that shouldn't have an event hooked up to it. But, if you understand and are willing to work around the event bubbling change, and if you enforce a policy of "if you put .foo on a TD, it's ALWAYS going to get the event hooked up", neither of these seems like a big deal.
I feel like I must be missing something though, so my question is: is there any other downside to just delegating all events for all site-wide components to the BODY (as opposed to binding them directly to the HTML elements involved, or delegating them to a non-BODY parent element)?
What you're missing is there are different elements of the performance.
Your first example performs worse when setting up the click handler, but performs better when the actual event is triggered.
Your second example performs better when setting up the click handler, but performs significantly worse when the actual event is triggered.
If all events were put on a top-level object (like the document), then you'd have an enormous list of selectors to check on every event in order to find which handler function it goes with. This very issue is why jQuery deprecated the .live() method: it attaches all its handlers to the document object, and when lots of .live() event handlers were registered, performance of each event was bad because every event had to be compared against lots and lots of selectors to find the appropriate handler.
For large-scale work, it's much, much more efficient to bind the event as close as possible to the actual object that triggers it. If the object isn't dynamic, then bind the event right to the object that will trigger it. This might cost a tiny bit more CPU when you first bind the events, but the actual event triggering will be fast and will scale.
jQuery's .on() and .delegate() can be used for this, but it is recommended that you bind to an ancestor object that is as close as possible to the triggering object. This prevents a buildup of lots of dynamic events on one top-level object and prevents performance degradation in event handling.
In your example above, it's perfectly reasonable to do:
$("#someTable").on('click', "td.foo", function(e) {
$(e.target).doSomething();
});
That would give you one compact representation of a click handler for all rows and it would continue to work even as you added/removed rows.
But, this would not make as much sense:
$(document).on('click', "#someTable td.foo", function(e) {
    $(e.target).doSomething();
});
because this would be mixing the table events in with all other top level events in the page when there is no real need to do that. You are only asking for performance issues in the event handling without any benefit of handling the events there.
So, I think the short answer to your question is that handling all events in one top level place leads to performance issues when the event is triggered as the code has to sort out which handler should get the event when there are a lot of events being handled in the same place. Handling the events as close to the generating object as practical makes the event handling more efficient.
If you were doing it in plain JavaScript, the impact of random clicks anywhere on the page triggering events is almost zero. However, in jQuery the cost could be much greater, due to the number of raw JS operations it has to run to produce the same effect.
Personally, I find that a little delegation is good, but too much of it will start causing more problems than it solves.
If you remove a node, the corresponding listeners are not removed automatically.
Some events just don't bubble (see the sketch below).
Different libraries may break the system by stopping event propagation (guess you mentioned that one).
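On the non-bubbling point: focus and blur, for instance, do not bubble, so delegation has to use their bubbling counterparts focusin/focusout instead. A minimal sketch with a hypothetical selector:

$(document).on("focusin", "input.tracked", function(e) {
    // runs for any matching input, even though a plain "focus"
    // handler on the document would never fire this way
});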

What is the best practice to do the following using jQuery/JavaScript?

So I am just trying to add some functions to hover. The two snippets below do about the same thing, except that in the one with the for loop I have the targets stored in an array. I actually have a few questions.
Is this the correct use of stopPropagation?
What's the best practice for doing something like this?
Which one of the methods below is faster and uses fewer resources?
I know I can use hover(), but I used bind because I thought it is faster; is my thinking correct?
Thank you
for (var i in slides) {
    $(slides[i].el).bind({
        mouseenter: function (event) {
            event.stopPropagation();
            // do something
        },
        mouseleave: function (event) {
            event.stopPropagation();
            // do something
        }
    });
}
$("#vehicleSlides .vehicleAreas").bind( {
mouseenter: function (event) {
event.stopPropagation();
// do something
},
mouseleave: function (event) {
event.stopPropagation();
//do something
}
});
1 - is this the correct use of stopPropagation
If you wish to stop the event bubbling up the DOM tree, then yes.
2 - what's the best practice for doing something like this
Personally, I prefer the jQuery selector followed by methods, but this is just a preference. The best practice is whatever style you and your team all agree upon and use consistently.
3 - which one of the below method is faster and uses less resources
In practical terms, there will be next to no difference between the two.
4 - I know I can use hover() but I used bind because I thought it is faster, is my thinking correct
The jQuery hover method is shorthand for binding to the mouseenter and mouseleave events, so there will be one extra function call when using hover; however, there will be almost no difference in performance.
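In other words, the two forms below should behave identically apart from hover()'s extra internal call. A minimal sketch:

function onEnter(event) { /* do something */ }
function onLeave(event) { /* do something */ }

$("#vehicleSlides .vehicleAreas").hover(onEnter, onLeave);

// ...is shorthand for:
$("#vehicleSlides .vehicleAreas").bind({
    mouseenter: onEnter,
    mouseleave: onLeave
});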
Best practice would make consistently correct functionality the highest priority, so it depends on whether or not the event should be seen or heard by a parent node once processed.
That depends on your design. For example, in a "window"-like object that you want to be able to drag around, you could either A. attach a mouse handler to the entire window, or B. attach a listener to a child "background" object to detect the mouse down event to begin dragging.
If you choose design B., then you have to make sure labels and other objects you don't want to receive mouse events have mouse events disabled (in Flash [AS3] set mouseEnabled and mouseChildren to false; not sure about JavaScript). One con of this design is that it would prevent any object in the U.I. from passively processing or modifying event behavior in the bubbling phase, because any interception in the capture phase would prevent it from reaching the background in the first place. One pro of this design is that by allowing the event to bubble, you could have monitors and other global effects processing mouse clicks at higher levels.
On the other hand, if you choose design A., then you don't have to worry about making child objects transparent to the mouse (an event on a label would still bubble up to the window container itself), but instead you have to make sure that event propagation on child objects like buttons is stopped once the event is handled so that they don't reach the window handler at the top of the hierarchy. It really depends on how you want it to function, and a hybrid approach is probably best.
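As a sketch of design A under these assumptions (class names hypothetical), the window-level handler would ignore presses that start on interactive children rather than requiring them to stop propagation:

$(".app-window").bind("mousedown", function(event) {
    if ($(event.target).is("button, input, a")) {
        return; // let interactive children handle the press themselves
    }
    // begin dragging this window...
});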
You can have this down to a design science to the point where the "design" isn't a design at all, but a complex truth known to be true scientifically.
Any browser optimization of this system would involve keeping track, during the capture phase, of which parent nodes had bubble-phase event handlers attached for the event type. If for example it entered the target/bubbling phase knowing that no parent nodes had handlers, it could skip the entire bubbling phase, or jump directly to nodes known to have handlers. That's poor design however, IMO, because you might want to attach new handlers to parent nodes at any time during capture or bubbling, or you may want to move a node to another parent to try to cause the event to bubble up a different parent chain. Try it and see how it behaves in different browsers. There's bound to be huge inconsistency like anything else involving HTML rendering and event processing, in terms of both behavior and performance :P
There's probably not much difference between bind and hover. Work avoidance is always a good thing to consider but 1-3 more function calls to get to an event handler isn't going to put a dent in a modern JIT's performance.
You are not invoking stopPropagation incorrectly but if you're doing it for no particular reason other than bubbling making you uncomfortable or because you're afraid of triggering something else by accident, then yes, you are doing it wrong.
The first rule of UI work should always be:
DON'T DO ANYTHING YOU DON'T NEED TO DO
Examples:
Don't solve problems you don't have yet.
Don't do anything "just in case," because that means you don't know what's actually happening and you really need to understand what your stuff actually does and how it works before you call anything done in UI.
Don't stop people from using your UI differently than anticipated. e.g. validating HTML format and throwing errors when somebody tries to make something you wrote work a little differently. What are you, running a customer support line? It helps no one/serves nothing. What does it matter if they prefer a more semantically correct unordered list to a pile of divs?
But on stopProp specifically, if you must use it (and it can solve some problems very elegantly so never say never) try to only hit endpoint nodes with it so other things can be added to the same container without losing the benefits of bubbling. Yes, benefits I say. Don't fear bubbling. Events moving back up the ancestor line are only likely to trigger other UI events if your HTML is a complete disaster (it should be nothing but containers all the way back up to the body, right?).
Also if you can just verify that you have the right target element in the handler before taking action, do that instead of stopProp. But cripes it pisses me off when people add return false and e.stopPropagation to every single UI handler they write. Especially when they themselves pick up the event from a container that encompasses much more than the active element in question.
So don't do that. We might work in the same office some day and I can be whiny and insufferable and I'll sabotage your cheesecake.
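To illustrate the target check suggested above, with a hypothetical toolbar selector:

$("#toolbar").on("click", function(e) {
    if (!$(e.target).is("button.save")) return; // not ours; let it bubble
    // handle the save click here, without stopPropagation
});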

How expensive is listening for events on the entire document?

I want to use event delegation on a number of buttons in a page of HTML. These buttons are all over the page, and I was wondering how expensive it would be to listen on the entire document for click events and have them trigger an event delegation handler. Would this be more expensive than having listeners on each of 20+ buttons (it can grow to be over 100 buttons, yes it is silly)?
"I don't see how it would be more expensive since it would be listening for clicks on the document object instead of 25 anchor objects."
The key idea here is depth. Your event has to traverse up the DOM tree before your handler is executed. If your elements are deep down the DOM tree, you may notice some performance degradation.
Couple of things to bear in mind:
the number of anchors doesn't matter for event delegation, that is true
generally speaking event delegation is a superior alternative in most cases, but it's not useful all the time
My suggestion is to analyze these kinds of problems, learn how things work, and make decisions with good old common sense.
I don't see how it would be more expensive since it would be listening for clicks on the document object instead of 25 anchor objects. With that said, just 25-30 buttons is not really resource-intensive so you probably don't need to worry about this.
This is the strategy used by, for example, the jQuery "live" method: listen on the whole document then test the sender against a condition (i.e., selector). Unless the selector is unbearably intensive, this technique is more efficient for large and growing sets of targets.
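That pattern, sketched with a hypothetical selector, is just one document-level listener with the selector test deferred to dispatch time:

$(document).on("click", "button.action", function() {
    // handles any current or future button.action on the page
});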
Have you heard? Working code is not necessarily good code.
If you use 20-40 button listeners on a page instead of delegation, it will work, and you probably won't even see a performance issue. I assume you would use a for-in loop or $.each to bind all those listeners, so you wouldn't need to write code for each listener individually.
Still, my request is: please use delegation.
If in the future you need to change the logic, or you decide to test the application, you will be in trouble otherwise.

jquery unbinding events speed increases

I have a big content slideshow kinda page that I'm making that is starting to use a lot of event triggers. Also about half of them use the livequery plugin.
Will I see speed increases by unloading these events between slides so only the active slide has bound events?
Also is the native livequery significantly faster than the livequery plugin? (cause it's certainly less functional)
Also would something like this:
http://dev.jquery.com/attachment/ticket/2698/unload.js
unbind livequery events as well?
I really just need to know how long it takes to unload/load an event listener vs how many cycles they are really eating up if I leave them running. Also any information on live events would be awesome.
I need more details to offer actual code, but you might want to look into Event Delegation:
Event delegation refers to the use of a single event listener on a parent object to listen for events happening on its children (or deeper descendants). Event delegation allows developers to be sparse in their application of event listeners while still reacting to events as they happen on highly specific targets. This proves to be a key strategy for maintaining high performance in event-rich web projects, where the creation of hundreds of event listeners can quickly degrade performance.
A quick, basic example:
Say you have a DIV with images, like this:
<div id="container">
<img src="happy.jpg">
<img src="sad.jpg">
<img src="laugh.jpg">
<img src="boring.jpg">
</div>
But instead of 4 images, you have 100, or 200. You want to bind a click event to images so that X action is performed when the user clicks on it. Most people's first code might look like this:
$('#container img').click(function() {
    performAction(this);
});
This is going to bind a crapload of event handlers that will bog down the performance of your page. With Event Delegation, you can do something like this:
$('#container').click(function(e) {
    if ($(e.target)[0].nodeName.toUpperCase() == 'IMG') {
        performAction(e.target);
    }
});
This will only bind 1 event to the actual container, you can then figure out what was clicked by using the event's target property and delegate accordingly. This is still kind of a pain, though, and you can actually get this significant performance improvement without doing all this by using jQuery's live function:
$('#container img').live('click', function() {
    performAction(this);
});
Hope this helps.
If by "native liveQuery" you mean live(), then yes, live() is significantly faster than liveQuery(). The latter uses setInterval to periodically query the entire document tree for new elements while the former uses event delegation.
Event delegation wins hands down. In a nutshell, live() will have one handler on the document per event type registered (e.g. click), no matter how many selectors you call live() with.
As for your other question, it sounds like you are binding to each slide's elements and want to know if unbinding and binding again is performant? I would say WRT memory, yes. WRT CPU cycles, no.
To be clear, with the liveQuery() approach CPU will never sleep.
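Roughly speaking, a liveQuery-style poller does something like the following simplified, hypothetical sketch (not the plugin's actual code), which is why the CPU never gets to rest:

function onSlideClick() { /* ... */ }

setInterval(function() {
    $("img.slide:not(.lq-bound)")   // re-scan the document for new matches
        .addClass("lq-bound")       // mark them as seen
        .bind("click", onSlideClick); // and bind any new ones
}, 100);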
For what it's worth, we just ran some tests on this matter. We created a page with a div containing a number of divs, each of which needed an onclick handler to display an alert dialog showing its id.
In one case we used DOM Level 0 event registration and defined the handler directly in the HTML for each div: onclick="_do_click(this);". In the other case, we used DOM Level 2 event propagation and defined a single event handler on the containing div.
What we found was that, at 100,000 contained divs, there was a negligible difference in load time in Firefox; either way it took a long time. In Safari, the DOM Level 0 version took twice as long as the DOM Level 2 version, but was still four times faster than either Firefox case.
So, yes, it does result in better performance, but it seems like you really have to try to create a noticeable penalty.
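For illustration, the two registration styles compared above would look roughly like this (ids hypothetical):

// DOM Level 0: a handler wired directly into each div's markup:
//   <div id="div1" onclick="_do_click(this);"></div>
function _do_click(el) {
    alert(el.id);
}

// DOM Level 2: one listener on the containing div:
document.getElementById("container").addEventListener("click", function(e) {
    alert(e.target.id);
}, false);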
