I strongly suspect that my abysmal scrolling performance on mobile devices is due to a multitude of events being fired by the elements being scrolled. Now - is there a handy way to prevent all of those events inside the DOM element being scrolled from firing until scrolling is done? No mouseenter, mouseleave, click, focus, active ... nothing ... until the user is done with scrolling?
Thanks for the help.
It's unlikely that you would see performance issues just from the events firing; otherwise there would be performance issues on every page. More likely the code in those handlers is taking too long.
You should probably debounce or throttle your event handlers as you bind them. Underscore.js provides debounce and throttle methods, and I believe there are jQuery plugins available that provide similar functionality; or you can review Underscore's code and extract just the methods you need if you don't want to include the whole library.
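For illustration, a minimal debounce sketch in the spirit of Underscore's _.debounce; the selector, delay, and updateAfterScroll are assumptions, not from the question:
// Minimal debounce: the wrapped function only runs once `wait` ms
// have passed with no further calls.
function debounce(fn, wait) {
    var timer = null;
    return function () {
        var self = this, args = arguments;
        clearTimeout(timer);
        timer = setTimeout(function () {
            fn.apply(self, args);
        }, wait);
    };
}
// Bind the debounced wrapper instead of the raw handler, so the
// expensive work only runs once scrolling has settled.
$('#scroller').on('scroll', debounce(function () {
    updateAfterScroll(); // hypothetical expensive handler
}, 100));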
I have a navigation menu in my webpage. The navigation is shown when a button is clicked. Then I close the navigation when the user clicks anywhere outside it.
$("#navbutton").click(function(){
if($("#navigation:visible"))
$('#navigation').hide();
else {
$('#navigation').show();
}
});
$(document).click(function(event) {
if(!$(event.target).closest('#navigation').length) {
$('#navigation').hide();
}
})
Now, even if the navigation is already hidden, the click handler on document will keep firing. I have many other click handlers on document too.
My question is: would removing the click handler on document when the navigation is already hidden be of any advantage? Would some browser memory be released? Would my webpage behave faster? I know the effect would be minor for one handler. But suppose I have 100s of similar navigations. Would removing 100s of those unnecessary handlers be beneficial?
Thanks
In "You Don't Know JS: Async & Performance", getify states very clearly that premature optimization is the root of all evil. You can jump directly to that chapter of his book.
If this is a simple web page with a moderate number of DOM objects, then keeping the event listeners in memory all the time will do no harm, and removing them will gain you next to nothing in performance.
Before you go into optimization, you have to run some benchmarks, and you have to understand what you are benchmarking and what those benchmark results mean compared to the real world.
If you remove the elements instead of hiding them (assuming you don't need them in the future), browsers nowadays clean up the associated event handlers themselves, which is one way to avoid memory leaks.
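As a hedged illustration of both points (closeNavigation is a hypothetical handler): jQuery's .remove() also drops jQuery-bound handlers on the removed elements, and a namespaced .off() lets you unbind a single document-level handler without touching the others.
// .remove() deletes the element and also clears jQuery's bound
// handlers and data; .detach() would keep them for re-insertion.
$('#navigation').remove();
// A namespace lets you unbind just this handler, leaving the
// page's other document-level click handlers untouched.
$(document).on('click.nav', closeNavigation); // bind
$(document).off('click.nav');                 // unbind only this one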
I recently went through this blog post, which is really helpful: http://javascript.crockford.com/memory/leak.html
Stopping an event or not won't gain you anything in performance; in fact, the difference would go entirely unnoticed by the user.
That would only be helpful if you wanted your page to run fast on IE6 or older browsers. On anything more modern than that, you won't gain anything.
You can check this very useful article: The Dangers of Stopping Event Propagation
I have a set of nested DOM elements with mouse event handlers (mouseover, mouseout). Side effects of the events update other views; these updates are potentially computationally expensive, and can create annoying visual flicker, so I would like to minimize them. My first thought was to build a throttling mechanism that delays the handling of a mouse-over event for some interval, giving the mouse a chance to exit the element in question. If no exit occurs within the specified interval, the event is fired; if an exit occurs, the event is canceled without being propagated.
My question is whether existing UI frameworks already support such mechanisms, and, if so, which ones do so? While I can certainly build this, it seems like a problem that others might have solved already.
You can use Underscore.js's throttle on your mouse event handlers. This was recently blogged about on the Toggl blog: http://blog.toggl.com/2013/02/increasing-perceived-performance-with-_throttle/. There was some monkey patching of jQuery involved, though, so it's not the cleanest method.
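If you'd rather build the delay-and-cancel mechanism described in the question yourself, here is a minimal sketch with plain timers; the selector, delay, and updateOtherViews are assumptions:
var hoverTimer = null;
var HOVER_DELAY_MS = 150; // assumed grace period before committing to the hover

$('#target').on('mouseover', function (e) {
    clearTimeout(hoverTimer); // restart the grace period on re-entry
    // Delay the expensive update, giving the mouse a chance to leave first
    hoverTimer = setTimeout(function () {
        updateOtherViews(e.target); // hypothetical expensive view update
    }, HOVER_DELAY_MS);
});

$('#target').on('mouseout', function () {
    // The mouse left before the interval elapsed: cancel the pending update
    clearTimeout(hoverTimer);
});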
By now most folks on this site are probably aware that:
$("#someTable TD.foo").click(function(){
$(e.target).doSomething();
});
is going to perform much worse than:
$("#someTable").click(function(){
if (!$(e.target).is("TD.foo")) return;
$(e.target).doSomething();
});
Now how much worse will of course depend on how many TDs your table has, but this general principle should apply as long as you have at least a few TDs. (NOTE: Of course the smart thing would be to use jQuery delegate instead of the above, but I was just trying to make an example with an obvious differentiation).
Anyhow, I explained this principle to a co-worker, and their response was "Well, for site-wide components (e.g. a date-picking INPUT) why stop there? Why not just bind one handler for each type of component to the BODY itself?" I didn't have a good answer.
Obviously using the delegation strategy means rethinking how you block events, so that's one downside. Also, you hypothetically could have a page where you have a "TD.foo" that shouldn't have an event hooked up to it. But, if you understand and are willing to work around the event bubbling change, and if you enforce a policy of "if you put .foo on a TD, it's ALWAYS going to get the event hooked up", neither of these seems like a big deal.
I feel like I must be missing something though, so my question is: is there any other downside to just delegating all events for all site-wide components to the BODY (as opposed to binding them directly to the HTML elements involved, or delegating them to a non-BODY parent element)?
What you're missing is that there are different aspects of the performance.
Your first example performs worse when setting up the click handler, but performs better when the actual event is triggered.
Your second example performs better when setting up the click handler, but performs significantly worse when the actual event is triggered.
If all events were put on a top-level object (like the document), then you'd have an enormous list of selectors to check on every event in order to find which handler function it goes with. This very issue is why jQuery deprecated the .live() method: it binds every handler on the document object, and when there were lots of .live() handlers registered, performance of each event was bad because every event had to be compared against lots and lots of selectors to find the appropriate handler.

For large scale work, it's much, much more efficient to bind the event as close to the actual object that triggered the event as possible. If the object isn't dynamic, then bind the event right to the object that will trigger it. This might cost a tiny bit more CPU when you first bind the events, but the actual event triggering will be fast and will scale.
jQuery's .on() and .delegate() can be used for this, but it is recommended that you bind to an ancestor object that is as close as possible to the triggering object. This prevents a buildup of lots of dynamic events on one top-level object and prevents the performance degradation of event handling.
In your example above, it's perfectly reasonable to do:
$("#someTable").on('click', "td.foo", function(e) {
$(e.target).doSomething();
});
That would give you one compact representation of a click handler for all rows and it would continue to work even as you added/removed rows.
But, this would not make as much sense:
$(document).on('click', "#someTable td.foo", function(e) {
    $(e.target).doSomething();
});
because this would be mixing the table events in with all other top level events in the page when there is no real need to do that. You are only asking for performance issues in the event handling without any benefit of handling the events there.
So, I think the short answer to your question is that handling all events in one top level place leads to performance issues when the event is triggered as the code has to sort out which handler should get the event when there are a lot of events being handled in the same place. Handling the events as close to the generating object as practical makes the event handling more efficient.
If you were doing it in plain JavaScript, the impact of random clicks anywhere on the page triggering events is almost zero. In jQuery, however, the cost could be much greater because of the amount of raw JS it has to run to produce the same effect.
Personally, I find that a little delegation is good, but too much of it will start causing more problems than it solves.
If you remove a node, the corresponding listeners are not removed automatically.
Some events just don't bubble, so delegating them requires their bubbling counterparts (see the sketch after this list)
Different libraries may break the system by stopping event propagation (I guess you mentioned that one)
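For instance, a hedged example of the bubbling point (the selector and handler are assumptions): focus and blur don't bubble, but focusin/focusout do.
// focus/blur don't bubble, so a delegated handler must listen for
// their bubbling equivalents, focusin/focusout.
$(document).on('focusin', 'input.date', function () {
    // lazily initialize the date picker here (hypothetical)
});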
Which events are the most resource intensive to have attached? Is a mouseover "worse" than a click? Are there any events that are known to be really harsh on the browser? I have my sights on IE7 mainly, as we are seeing performance issues there. We use event delegation where we can.
Or, how can I profile events which are actually running to determine which have the greatest impact on performance at runtime?
I'm interested in the events themselves, please don't tell me I need to go look into what my functions are doing in those events. Problems may exist there, but that's not my question.
To start with, events that fire more often can be more troublesome. A mouseover event, which fires "continuously" as the mouse moves over an element, can cause a performance impact more easily than a click event, which can only fire as fast as the user can click.
However, it's the code you put in your handler that will have the real performance impact.
If firing speed is an issue, check out the excellent jQuery throttle/debounce plugin: https://github.com/cowboy/jquery-throttle-debounce
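A hedged usage sketch with that plugin; the selector and handler body are assumptions:
// $.throttle(delay, fn) from the jQuery throttle/debounce plugin
// returns a wrapper that runs the handler at most once per 250 ms.
$('#target').on('mousemove', $.throttle(250, function (e) {
    updateUI(e); // hypothetical handler doing the real work
}));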
I'd imagine a callback's cost is proportional to how many times it's called.
Events like mouseover or deviceorientation are more demanding than a click or similar 'one-time' event.
The more an event has to check (and then fire), the more it consumes, i.e., ordered from most to least expensive:
mousemove fires an event on any movement
mouseover fires an event on each move if the pointer is over a relevant item
mouseenter has to watch where the cursor is before firing anything
a mouse click only fires an event when you click…
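As for the second question, profiling which events actually fire at runtime: a rough, hedged counter sketch (the event list and logging interval are assumptions):
// Tally how often each event type fires, to spot the chattiest ones.
var counts = {};
$(document).on('click mouseover mousemove focusin', function (e) {
    counts[e.type] = (counts[e.type] || 0) + 1;
});
// Dump the tallies once a second.
setInterval(function () { console.log(counts); }, 1000);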
I have a big content-slideshow kind of page that I'm making that is starting to use a lot of event triggers. Also, about half of them use the livequery plugin.
Will I see speed increases by unloading these events between slides so only the active slide has bound events?
Also, is the native livequery significantly faster than the livequery plugin? (cause it's certainly less functional)
Also would something like this:
http://dev.jquery.com/attachment/ticket/2698/unload.js
unbind livequery events as well?
I really just need to know how long it takes to unload/load an event listener vs how many cycles they are really eating up if I leave them running. Also any information on live events would be awesome.
I need more details to offer actual code, but you might want to look into Event Delegation:
Event delegation refers to the use of a single event listener on a parent object to listen for events happening on its children (or deeper descendants). Event delegation allows developers to be sparse in their application of event listeners while still reacting to events as they happen on highly specific targets. This proves to be a key strategy for maintaining high performance in event-rich web projects, where the creation of hundreds of event listeners can quickly degrade performance.
A quick, basic example:
Say you have a DIV with images, like this:
<div id="container">
<img src="happy.jpg">
<img src="sad.jpg">
<img src="laugh.jpg">
<img src="boring.jpg">
</div>
But instead of 4 images, you have 100, or 200. You want to bind a click event to images so that X action is performed when the user clicks on it. Most people's first code might look like this:
$('#container img').click(function() {
    performAction(this);
});
This is going to bind a crapload of event handlers that will bog down the performance of your page. With Event Delegation, you can do something like this:
$('#container').click(function(e) {
    if ($(e.target)[0].nodeName.toUpperCase() == 'IMG') {
        performAction(e.target);
    }
});
This will only bind one handler to the actual container; you can then figure out what was clicked by using the event's target property and delegate accordingly. This is still kind of a pain, though, and you can actually get this significant performance improvement without doing all of that by using jQuery's live function:
$('#container img').live('click', function() {
    performAction(this);
});
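(As an aside: later jQuery versions replaced live() with the delegated form of on(), which also lets you scope the delegation to the container instead of the document. A hedged equivalent of the block above:)
// One delegated handler on the container covers current and future imgs.
$('#container').on('click', 'img', function () {
    performAction(this);
});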
Hope this helps.
If by "native liveQuery" you mean live(), then yes, live() is significantly faster than liveQuery(). The latter uses setInterval to periodically query the entire document tree for new elements while the former uses event delegation.
Event delegation wins hands down. In a nutshell, live() will have one handler on the document per event type registered (e.g., click), no matter how many selectors you call live() with.
As for your other question: it sounds like you are binding to each slide's elements and want to know whether unbinding and rebinding is performant. With respect to memory, yes; with respect to CPU cycles, no.
To be clear, with the liveQuery() approach the CPU will never sleep.
For what it's worth, we just ran some tests on this matter. We created a page with a div containing a number of divs, each of which needed an onclick handler to display an alert dialog showing its id.
In one case we used DOM Level 0 event registration and defined the event handler directly in the HTML of each div: onclick="_do_click(this);". In the other case, we used DOM Level 2 event propagation and defined a single event handler on the containing div.
What we found was that, at 100,000 contained divs, there was negligible difference in the load time on Firefox: it took a long time either way. In Safari, we found that DOM Level 0 took twice the load time of DOM Level 2, but was still four times faster than either Firefox case.
So, yes, it does result in better performance, but it seems like you really have to try to create a noticeable penalty.
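For reference, a hedged sketch of the DOM Level 2 variant from that test; the container id and alert body are assumptions:
// One listener on the container handles clicks for every child div.
document.getElementById('container').addEventListener('click', function (e) {
    var target = e.target;
    // Only react to clicks on the contained divs, not the container itself
    if (target !== this && target.nodeName.toUpperCase() === 'DIV') {
        alert(target.id); // show the clicked div's id, as in the test
    }
}, false);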