I am a newbie to jQuery, and I am a bit confused: is the following fine, or may it cause a memory leak?
Here is the code. This method is called on certain date filters, for each new set of values:
function prepareTooltip(chart) {
    var tickLength = chart.xAxis[0].tickPositions.length,
        ticks = chart.xAxis[0].ticks,
        tickPositions = chart.xAxis[0].tickPositions;
    for (var iCntr = 0; iCntr < tickLength; iCntr++) {
        var tickVal = tickPositions[iCntr];
        //.label or .mark or both
        (function(tickVal) { // Is it good practice to call a function like this?
            ticks[tickVal].label
                .on('mouseover', function(event) { // Is it good practice to attach a handler like this?
                    var label = '', labelCnt = 0;
                    $(chart.series).each(function(nCntr, series) {
                        //business logic for each series
                    });
                    // calling method to show values in a popup
                });
            ticks[tickVal].label.on('mouseout', function(event) { // Is it good practice to attach a handler like this?
                try {
                    hideWrapper(); // hides popup
                } catch (e) {
                    // do nothing
                }
            });
        })(tickVal);
    }
}
Whilst there are browser-specific issues that need to be avoided when writing large pure JavaScript projects, when using a library such as jQuery you should assume that the library's design helps you avoid those problems. However, memory leaks are rather hard to track down, and each version of a particular browser can behave differently - so it is far better to know how to avoid memory leaks in general than to chase specifics:
If your code is being iterated many times, make sure the variables you are using can be discarded by garbage collection, and are not tied up in closure references.
If your code is dealing with large data structures, make sure you have a way of removing or nullifying the data.
If your code constructs many objects, functions and event listeners - it is always best to include some deconstructive code too.
Try to avoid attaching JavaScript objects or functions to elements directly as an attribute - i.e. element.onclick = function(){}.
If in doubt, always tidy up when your code is finished.
You seem to believe that it is the way of calling a function that will have an effect on leaking; however, it is much more likely to be the content of those functions that could cause a problem.
With your code above, my only suggestions would be:
Whenever using event listeners, try to find a way to reuse functions rather than creating one per element. This can be achieved by using event delegation (trapping the event on an ancestor/parent and delegating the reaction to the event.target), or by coding a single general function to deal with your elements in a relative way, most often relative to this or $(this).
When needing to create many event handlers, it is usually best to store those event listeners as named functions so you can remove them again when you are finished. This would mean avoiding using anonymous functions as you are doing. However, if you know that it is only your code dealing with the DOM, you can fallback to using $(elements).unbind('click') to remove all click handlers (anonymous or not) applied using jQuery to the selected elements. If you do use this latter method however, it is definitely better to use jQuery's event namespacing ability - so that you know you are only removing your events. i.e. $(elements).unbind('click.my_app');. This obviously means you do have to bind the events using $(elements).bind('click.my_app', function(){...});
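To make the delegation and namespacing points concrete, here is a minimal sketch against the tooltip code from the question (assuming jQuery 1.4.2+ for delegate; the '#chart-container' selector, the '.tick-label' class and the showWrapper/hideWrapper helpers are hypothetical names, not part of the original code):
// One namespaced, delegated pair of handlers on a stable ancestor
// replaces one pair of handlers per tick label.
$('#chart-container')
    .delegate('.tick-label', 'mouseover.my_app', function(event) {
        showWrapper($(this)); // react relative to the hovered label
    })
    .delegate('.tick-label', 'mouseout.my_app', function(event) {
        hideWrapper();        // hides popup, as in the question
    });
// Teardown: remove only this app's delegated handlers.
$('#chart-container')
    .undelegate('.tick-label', 'mouseover.my_app')
    .undelegate('.tick-label', 'mouseout.my_app');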
Being more specific:
Auto-calling an anonymous function:
(function(){
    /*
    Running an anonymous function this way will never cause a memory
    leak, because memory leaks (at least the ones we have control over)
    require a variable reference getting caught in memory with the
    JavaScript runtime still believing that the variable is in use
    when it isn't - meaning that it never gets garbage collected.
    This construction has nothing to reference it, and so will be
    forgotten the second it has been evaluated.
    */
})();
Adding an anonymous event listener with jQuery:
var really_large_variable = {/*Imagine lots of data here*/};
$(element).click(function(){
    /*
    Whilst I will admit not having investigated to see how jQuery
    handles its event listeners onunload, I doubt it is automatically
    unbinding them. This is because for most code they won't cause a
    problem, especially if only a few are in use. For larger projects
    though, it is a good idea to create some beforeunload or unload
    handlers that delete data and unbind any event handling.
    The reason for this is not to protect against the reference of the
    function itself, but to make sure the references the function keeps
    alive are removed. This is all down to how JS scope works; if you
    have never read up on JavaScript scope... I suggest you do so.
    As an example however, this anonymous function has access to the
    `really_large_variable` above - and will prevent any garbage collection
    system from deleting the data contained in `really_large_variable`,
    even if this function or any other code never makes use of it.
    When the page unloads you would hope that the browser would be able
    to know to clear the memory involved, but you can't be 100% certain
    it will *(especially the likes of IE6/7)* - so it is always best
    to either make sure you set the contents of `really_large_variable` to null
    or make sure you remove your references to your closures/event listeners.
    */
});
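To illustrate the unload cleanup the comment above describes, here is a minimal sketch (assuming the click handler was bound with a 'click.my_app' namespace as recommended earlier; `element` and `really_large_variable` are from the example above):
$(window).bind('beforeunload.my_app', function() {
    really_large_variable = null;      // release the closed-over data
    $(element).unbind('click.my_app'); // drop only our own handlers
});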
TearDowns and deconstruction
I've focused - with regard to my explanations - on when the page is no longer required and the user is navigating away. However the above becomes even more relevant in today's world of ajaxed content and highly dynamic interfaces; GUIs that are constantly creating and trashing elements.
If you are creating a dynamic JavaScript app, I cannot stress enough how important it is to have constructors with .tearDown or .deconstruct methods that are executed when the code is no longer required. These should step through large custom object constructs and nullify their content, as well as removing event listeners and elements that have been dynamically created and are no longer of use. You should also use jQuery's empty method before replacing an element's content; this is better explained in their words:
http://api.jquery.com/empty/
To avoid memory leaks, jQuery removes other constructs such as data and event handlers from the child elements before removing the elements themselves.
If you want to remove elements without destroying their data or event handlers (so they can be re-added later), use .detach() instead.
Not only does coding with tearDown methods force you to code more tidily (i.e. making sure you keep related code, events and elements namespaced together), it generally means you build code in a more modular fashion; which is obviously far better for future-proofing your app, for readability, and for anyone else who may take over your project at a later date.
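As a minimal sketch of the constructor-plus-tearDown pairing described above (all names here are hypothetical, not from any particular library):
function Widget(container) {
    this.container = $(container);
    this.data = { /* imagine a large structure here */ };
    this.container.bind('click.widget', $.proxy(this.onClick, this));
}
Widget.prototype.onClick = function(event) {
    // react to clicks using this.data
};
Widget.prototype.tearDown = function() {
    this.container.unbind('.widget'); // remove only our namespaced handlers
    this.container.empty();           // jQuery also clears child data/events
    this.data = null;                 // let the large structure be collected
    this.container = null;
};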
Here is an excellent article on detecting memory leaks using Chrome or Safari: http://javascript.crockford.com/memory/leak.html
It uses some not-so-well-known features of the Developer Tools panel.
Interesting and very useful!
EDIT
That was not the right link (but it is still useful). Here is the correct one: http://gent.ilcore.com/2011/08/finding-memory-leaks.html
Related
I am building a single-page webapp. This means that over a period of time I get new DOM elements and remove unneeded ones. For example, when I fetch a new form I just replace the contents of a specific div with that form's HTML and also set up listeners unique to this form's elements. After some period I replace the contents of this form with a new instance of the form (having different IDs).
I set up the event listeners again for this new form. Now the previous form is no longer part of the DOM, so the DOM elements should be garbage collected automatically. I am also expecting the listener functions pointing to the elements removed from the DOM to disappear.
However the following profile gathered from Chrome suggests that my listener count is increasing over time. Can you tell me why this is so? I tried clicking on the "Collect Garbage" button. But this is the profile I get. Is there something wrong with the way I am building my application? Is there a problem and if so how should I fix it?
In case it matters, I am using the JSP templating language with jQuery, jQuery UI and some other plugins.
This is what the dynamic fragments that I add/remove on my page look like.
<script>
$(document).ready(function() {
    $("#unique_id").find(".myFormButton").button().click(function() {
        $.ajax({
            url: "myurl.html",
            success: function(response) {
                console.log(response);
            }
        });
    });
});
</script>
<div id="unique_id">
    <form>
        <input name="myvar" />
        <button class="myFormButton">Submit</button>
    </form>
</div>
Update
If you want to have a look at the actual code, here is the relevant portion.
This link shows that when the clear button is pressed, the function clearFindForm is called, which effectively refetches content (an HTML fragment) using an ajax request and replaces the entire div in this JSP with the fetched content.
The refetchContent function works as below; here is the link to the code in case that helps in giving a better answer.
function refetchContent(url, replaceTarget) {
    $.ajax({
        url: url,
        data: {},
        type: "GET",
        success: function (response) {
            replaceTarget.replaceWith(response);
        },
        error: function (response) {
            showErrorMessage("Something went wrong. Please try again.");
        }
    });
}
While jQuery is very good at removing event listeners from DOM elements that are removed via its methods (including .html() - just read the API: http://api.jquery.com/html/), it won't remove event listeners from DOM elements that may still have a reference to them in a detached DOM tree.
For example, if you do something like this:
$.ajax({
    ....
})
.done(function(response, status, jqXHR) {
    //create a detached DOM tree
    form = $(response)
    //add an event listener to the detached tree
    form.find('#someIDInTheResponse').on('submit', function() {
    });
    //add the form to the html
    $('#someID').html(form);
});
//at some other point in the code
$('#someIDInTheResponse').remove();
Note that in the above example, despite the fact that you removed the element from the DOM, the listener will not be removed from memory. This is because the element still exists in memory in a detached DOM tree accessible via the global variable "form" (I didn't use "var" when creating the initial detached DOM tree, so it is not scoped to the done function). There are some nuances here; jQuery can't fix bad code, it can only do its best.
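For contrast, here is a sketch of the same snippet with the leak avoided (same structure as above; the only substantive change is that `var` scopes the detached tree to the done callback):
$.ajax({
    /* same options as above */
})
.done(function(response, status, jqXHR) {
    // `var` keeps the detached tree out of the global scope
    var form = $(response);
    form.find('#someIDInTheResponse').on('submit', function() {
        /* ... */
    });
    $('#someID').html(form);
});
// Later: with no lingering global reference, removing the element via
// jQuery also releases its handler.
$('#someIDInTheResponse').remove();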
Two other things:
Doing everything inside callbacks or event listeners (like "do this on a button click") turns into really bad spaghetti code fast and becomes unmanageable rather quickly. Try to separate application logic from UI interaction. For example, don't use callbacks to click events to perform a bunch of logic; use callbacks to click events to call functions that perform a bunch of logic.
Second, and somewhat less important (I welcome feedback on this perspective via comments): I would deem 30MB of memory to be a fairly high baseline for a web app. I've got a pretty intensive Google Maps web app that hits 30MB after an hour or so of intensive use, and you really start to notice its sluggishness when it does. Lord knows what it would act like if it ever hit 60MB. I'm thinking IE<9 would become virtually unusable at that point, although, like I said, I welcome other people's feedback on this idea.
I wonder if you are simply not unbinding/removing the previously bound event listeners when you replace fragments?
I briefly looked at the specific sections of code you linked to in your updated question, but didn't see any event listener binding other than what you are doing in document ready, so I'm guessing you are doing some additional binding when you replace the document fragments. I'm not a jQuery expert, but in general binding or assigning additional event listeners does not replace previously bound/assigned event listeners automatically.
My point is that you should look to see if you are doing binding via "click()" (or via some other approach) to existing elements without unbinding the existing event listener first.
You might take a look at moff's answer to this question, which provides an example for click, specifically.
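To sketch the unbind-before-rebind idea in the context of the fragment above (the 'formApp' namespace is a hypothetical choice, not something in the original code):
// When re-running setup after a fragment replacement, drop any
// previous handler before binding a fresh one so they never stack.
$("#unique_id").find(".myFormButton")
    .unbind("click.formApp")
    .bind("click.formApp", function() {
        /* submit via $.ajax as in the fragment above */
    });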
I can't add a comment because of reputation but to respond to what Adam is saying...
To summarise the case Adam presents: it is potentially nothing to do with jQuery, and the problem may lie in plain JavaScript. However, you don't present enough code for anyone to really get to the bottom of the problem. Your usage of scoped encapsulation may be perfectly fine and the problem may be elsewhere.
I would recommend that you search for tools for finding the cause of memory leaks (for example, visualising/traversing the entire object/scope/reference/function tree, etc).
One thing to watch out for with jQuery is plugins and global insertions into the DOM! I've seen many JS libs, not just jQuery plugins, fail to provide destroyers and cleanup methods. The worst offenders are often things with popups and popouts, such as date pickers and dialogs, that have a nasty habit of appending layer divs and the like into body without removing them afterwards.
Something to keep in mind is that a lot of people get as far as making things construct, but don't handle destruction, especially in JS, because even in this day and age they expect that you will be serving normal webpages. You should also check plugins for destroy methods, because not all of them will hook onto a remove event. Events are also used in a messy fashion in jQuery by others, so a rogue handler might be halting the execution of subsequent cleanup events.
In summary, jQuery is a nice robust library but be warned, just because someone depends on it does not mean it inherits jQuery's quality.
Out of curiosity... have you checked the listeners on document.ready? Maybe you need to manually GC those.
I'm working on a completely ajax-driven web project where section content is always generated through DOM manipulation or jQuery's load function. I had been using "live" but am very interested in moving away from "live" and using "on" for performance benefits. When a new page loads, a whole new set of bindings required for that section also needs to get loaded. The HTML sections have some parent DOMs (basically wrappers for different content areas of the web page) that never change, allowing me to do bindings on them for all future DOM elements that will be created on the page.
In terms of memory and performance trade off which is generally the better way to handle event bindings?
After a new section has finished loading its html, bind all the events needed for that specific page instance on DOM elements that will be removed when a page changes.
Bind every event on the very first page load to DOM elements (not to the document though like live does) that are known to always exist.
Memory issues with listeners can usually be dealt with fairly easily (don't hold large chunks of data in closures, don't create circular references, use delegation, etc.).
"Live" just uses delegation (as far as I know) - you can implement delegation quite simply without it using simple criteria, e.g. class or id, with listeners on the unchanging parent elements. Delegation is a good strategy where it replaces numerous other listeners, the content is constantly being changed and identifying elements that should call functions is simple.
If you follow a strategy of attaching numerous new listeners every time content changes, you also have to detach the old ones when they are replaced as a strategy to reduce the likelihood of memory leaks. Performance (in terms of time taken to attach and remove listeners as part of the DOM update) usually isn't that much of an issue unless you are doing hundreds of them.
With delegation, a parent element listens for events, checks if the event.target/srcElement is one it cares about for that event, then calls the appropriate function perhaps using call to set the value of this if required.
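A minimal sketch of that delegation pattern using "on" (jQuery 1.7+), with one listener on an unchanging wrapper serving every current and future matching element; the selector names and the handleAction function are hypothetical:
$('#content-wrapper').on('click', '.action', function(event) {
    // `this` is the matched .action element, not the wrapper
    handleAction($(this));
});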
Note that you can also simply include inline listeners in the inserted HTML, then you never need to worry about memory leaks, delegation or adding and removing listeners. Inline listeners using a simple function call are no more complex than adding any other attribute (class, id, whatever) and require zero extra programming on the client. I don't think inline listeners were ever an issue for memory leaks.
Of course the "unobtrusive JavaScript" mob will howl, but inline listeners are very practical, functional and robust, not to mention supported by every browser that ever supported JavaScript.
Is there a way to count the number of times that the DOM has been appended to?
If you're strictly after .append(), you can just patch it, like:
var _origAppend = $.fn.append;
$.appendCount = 0;
$.fn.append = function() {
    $.appendCount++;
    return _origAppend.apply(this, arguments);
};
Now you can access $.appendCount at any time to see how often it was called. However, be aware that there are lots of functions which can manipulate the DOM. It might be a cleverer idea to patch jQuery.fn.domManip instead; that method is called internally for basically any DOM manipulation (as you might have suspected from the name).
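The same patching idea applied to jQuery.fn.domManip might look like the following (a sketch only; domManip is an internal, undocumented method, so this assumes your jQuery version still exposes it on the prototype):
var _origDomManip = $.fn.domManip;
$.domManipCount = 0;
$.fn.domManip = function() {
    $.domManipCount++;
    return _origDomManip.apply(this, arguments);
};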
You can use the mutation events.
Be aware they have a huge performance impact!
The mutation event module is designed to allow notification of any changes to the structure of a document, including attr and text modifications. It may be noted that none of the mutation events listed are designated as cancelable. This stems from the fact that it is very difficult to make use of existing DOM interfaces which cause document modifications if any change to the document might or might not take place due to cancelation of the related event. Although this is still a desired capability, it was decided that it would be better left until the addition of transactions into the DOM.
Spec
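As a minimal sketch of using one of these events to count insertions (hedged: mutation events are deprecated and, as noted above, expensive):
var insertCount = 0;
document.addEventListener('DOMNodeInserted', function(event) {
    insertCount++; // event.target is the node that was just inserted
}, false);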
So I am just trying to add some functions to hover. The two snippets below do about the same thing, except that in the one with the for loop I have the targets stored in an array. I actually have a few questions:
Is this the correct use of stopPropagation?
What is the best practice for doing something like this?
Which one of the below methods is faster and uses fewer resources?
I know I can use hover(), but I used bind because I thought it was faster. Is my thinking correct?
Thank you.
for (var i in slides) {
    $(slides[i].el).bind({
        mouseenter: function (event) {
            event.stopPropagation();
            // do something
        },
        mouseleave: function (event) {
            event.stopPropagation();
            // do something
        }
    });
}
$("#vehicleSlides .vehicleAreas").bind({
    mouseenter: function (event) {
        event.stopPropagation();
        // do something
    },
    mouseleave: function (event) {
        event.stopPropagation();
        // do something
    }
});
1 - is this the correct use of stopPropagation
If you wish to stop the event bubbling up the DOM tree, then yes.
2 - what's the best practice for doing something like this
Personally, I prefer the jQuery selector followed by methods, but this is just a preference. The best practice is whatever style you and your team all agree upon and use consistently.
3 - which one of the below methods is faster and uses less resources
In practical terms, there will be next to no difference between the two.
4 - I know I can use hover() but I used bind because I thought it was faster; is my thinking correct
The jQuery hover method is shorthand for the bind to mouseenter and mouseleave events, so there will be one extra function call using hover, however there will be almost no difference in performance.
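To illustrate the equivalence, these two forms should behave the same (reusing the selector from the question):
// hover(f, g) is documented shorthand for binding mouseenter and mouseleave...
$("#vehicleSlides .vehicleAreas").hover(
    function (event) { /* mouseenter: do something */ },
    function (event) { /* mouseleave: do something */ }
);
// ...so it is interchangeable with the bind({mouseenter: ..., mouseleave: ...}) form above.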
Best practice would make consistently correct functionality the highest priority, so it depends on whether or not the event should be seen or heard by a parent node once processed.
That depends on your design. For example, in a "window"-like object that you want to be able to drag around, you could either A. attach a mouse handler to the entire window, or B. attach a listener to a child "background" object to detect the mouse down event to begin dragging.
If you choose design B., then you have to make sure labels and other objects you don't want to receive mouse events have mouse events disabled (in Flash [AS3] set mouseEnabled and mouseChildren to false; not sure about JavaScript). One con of this design is that it would prevent any object in the U.I. from passively processing or modifying event behavior in the bubbling phase, because any interception in the capture phase would prevent it from reaching the background in the first place. One pro of this design is that by allowing the event to bubble, you could have monitors and other global effects processing mouse clicks at higher levels.
On the other hand, if you choose design A., then you don't have to worry about making child objects transparent to the mouse (an event on a label would still bubble up to the window container itself), but instead you have to make sure that event propagation on child objects like buttons is stopped once the event is handled so that they don't reach the window handler at the top of the hierarchy. It really depends on how you want it to function, and a hybrid approach is probably best.
You can take this to the point where the "design" isn't really a design at all, but simply a set of behaviors that follow logically from how event propagation works.
Any browser optimization of this system would involve keeping track, during the capture phase, of which parent nodes had bubble-phase event handlers attached for the event type. If for example it entered the target/bubbling phase knowing that no parent nodes had handlers, it could skip the entire bubbling phase, or jump directly to nodes known to have handlers. That's poor design however, IMO, because you might want to attach new handlers to parent nodes at any time during capture or bubbling, or you may want to move a node to another parent to try to cause the event to bubble up a different parent chain. Try it and see how it behaves in different browsers. There's bound to be huge inconsistency like anything else involving HTML rendering and event processing, in terms of both behavior and performance :P
There's probably not much difference between bind and hover. Work avoidance is always a good thing to consider but 1-3 more function calls to get to an event handler isn't going to put a dent in a modern JIT's performance.
You are not invoking stopPropagation incorrectly but if you're doing it for no particular reason other than bubbling making you uncomfortable or because you're afraid of triggering something else by accident, then yes, you are doing it wrong.
The first rule of UI work should always be:
DON'T DO ANYTHING YOU DON'T NEED TO DO
Examples:
Don't solve problems you don't have yet.
Don't do anything "just in case," because that means you don't know what's actually happening and you really need to understand what your stuff actually does and how it works before you call anything done in UI.
Don't stop people from using your UI differently than anticipated. e.g. validating HTML format and throwing errors when somebody tries to make something you wrote work a little differently. What are you, running a customer support line? It helps no one/serves nothing. What does it matter if they prefer a more semantically correct unordered list to a pile of divs?
But on stopProp specifically, if you must use it (and it can solve some problems very elegantly so never say never) try to only hit endpoint nodes with it so other things can be added to the same container without losing the benefits of bubbling. Yes, benefits I say. Don't fear bubbling. Events moving back up the ancestor line are only likely to trigger other UI events if your HTML is a complete disaster (it should be nothing but containers all the way back up to the body, right?).
Also, if you can just verify that you have the right target element in the handler before taking action, do that instead of stopProp (see the sketch below). But cripes, it pisses me off when people add return false and e.stopPropagation to every single UI handler they write, especially when they themselves pick up the event from a container that encompasses much more than the active element in question.
So don't do that. We might work in the same office some day and I can be whiny and insufferable and I'll sabotage your cheesecake.
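To make the "verify the target" alternative concrete, a small sketch with hypothetical selectors:
// Let the event bubble; act only when it came from an element we
// care about, instead of stopping propagation everywhere.
$('#container').bind('click', function(event) {
    if (!$(event.target).is('.active-item')) {
        return; // not ours; the event can bubble on harmlessly
    }
    /* handle the click for .active-item */
});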
In the last couple of years I've been using the JavaScript namespaces espoused by YUI as my default way of formatting JavaScript code.
All in all it works well in more complex environments where many web widgets may be stuck together at different times.
Until recently I almost always added event handlers in the HTML, calling the handlers from the namespaced object itself.
I've been using jQuery lately and have been setting the handlers inside the ready function rather than in the HTML.
As I do more and more of that, it seems that the closure of the jQuery ready function is taking the place of the namespace object.
What I am ending up with now (at least for one-widget/tool type pages) is a namespace object that primarily holds data, plus a bunch of closed-over event handlers that access the namespace object for data-specific purposes.
My question is: what are some best practices for dealing with closures versus a namespace object, particularly as it relates to event handlers?
Or, similarly, what are some best practices for setting up event handlers? Should they be "heavy weight" and handle the processing themselves, or hand off to more library-like code?
Personally, I find it harder to keep track of the flow of many separate "heavy weight" event handlers.
This is a very subjective question. That being said, I'll do my best to give what I think are the pros/cons of each approach.
Namespace Objects:
yourNamespace.events.someClickHandler = function(event) {...}
$(function() {
    $("#someElement").click(yourNamespace.events.someClickHandler);
});
One of the first "cons" of this approach is the separation of your event handler from its hookup. You can avoid this by hooking up each event after it is defined (as I did in the example above), but then you wind up with a whole lot of "$(function() {" lines throughout your code.
Another "con" of this style is the long names for your event handlers (e.g. "yourNamespace.events.someClickHandler"). You can use aliases:
var events = yourNamespace.events;
$("#someElement").click(events.someClickHandler);
and shorter namespace names to mitigate this somewhat, but there's really no way to get around it.
So with all those cons, why would anyone use the namespace pattern? Well, one "pro" of it is that the events are accessible. Let's say (for debugging purposes) you want to log the event object that gets passed to your click handler. Using Firebug you could do something like:
var oldHandler = yourNamespace.events.someClickHandler;
var newHandler = function(e) { console.log(e); oldHandler(e); };
$("#someElement").unbind("click");
$("#someElement").click(newHandler);
and, without even refreshing the page, you could get your event logged.
Similarly, this style has another pro of re-usability. If you want to make a new click handler that is similar to the old one, but also does x, y, and z, you could do:
yourNamespace.events.advancedClickHandler = function(event) {
    yourNamespace.events.someClickHandler(event);
    x(); y(); z();
};
Now in contrast we have the "closure" style (I'd prefer to call it the "anonymous function" style).
$(function() {
    $("#someElement").click(function(event) {...});
});
It's clearly more concise, the handler's definition is about as close as it can get to the hook-up, and I think it even uses a teeny bit less memory because there's no variable reference. BUT, you won't be able to do any live debugging stuff, you won't be able to hook that handler up anywhere else, or extend it in some new handler, or anything like that, because your handler only exists for the moment when you hook it up.
So, hopefully that helps. In my experience the namespace object style is generally the preferred method, especially the more hardcore/serious the JS project is. But it definitely has some disadvantages; there really is no perfect (i.e. perfectly clean and re-usable) way of hooking up events in JS at this time.