I have a JavaScript closure which I keep recreating throughout the lifetime of my web application (a single full-AJAX page).
I would like to know if it's creating a memory leak.
Here is an example JSFiddle.
The code in question:
function CreateLinks() {
    var ul = $("<ul></ul>").appendTo('div#links');
    for (var i in myLinks) {
        var li = $('<li>' + myLinks[i].name + '</li>').appendTo(ul);
        // closure starts here
        (function (value) {
            li.click(function (e) {
                $('div#info').append('<label>' + value + '</label><br />');
                RecreateLinks();
            });
        })(myLinks[i].value);
    }
}
You should be okay IF you make sure that you avoid binding multiple click handlers in the RecreateLinks() function; this can be done by explicitly unbinding the existing ones, removing the DOM nodes, or otherwise making sure you won't be adding multiple click handlers.
Browsers are getting better at memory allocation strategies, but you shouldn't assume too much. If memory usage is a big concern, try to avoid creating lots of closures you're not sure will get garbage collected. One such approach is to use .data() to store your value and then use a generic click handler instead of a closure.
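For illustration, that .data() idea might look like this (genericLinkHandler is a made-up name; myLinks, the selectors, and RecreateLinks come from the question):

// One shared handler; the per-item value comes from .data(), not a closure
function genericLinkHandler() {
    $('div#info').append('<label>' + $(this).data('value') + '</label><br />');
    RecreateLinks();
}

function CreateLinks() {
    var ul = $('<ul></ul>').appendTo('div#links');
    for (var i in myLinks) {
        $('<li>' + myLinks[i].name + '</li>')
            .data('value', myLinks[i].value)
            .appendTo(ul)
            .click(genericLinkHandler);
    }
}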
Profiling JavaScript is not so simple; Chrome does have a profiler that can monitor CPU and memory usage, which can give you a pretty good gauge of the expected memory consumption. But Chrome is not all browsers; keep that in mind.
Depending on how smart the browser is, it may be better to store myLinks[i].value as an attribute on your <li> rather than pass it via closure. Certain dumb browsers have issues collecting garbage when an event handler references a variable from outside its own scope: JavaScript and the DOM use two different garbage collectors, and the JS one doesn't realize the DOM element/event handler is gone and that the variable is no longer in use. This issue may be cleared up by properly removing the event handler via JavaScript rather than just disposing of the element it is attached to.
Something akin to:
li.attr('lvalue',myLinks[i].value);
...
var value = $(this).attr('lvalue');
This setup would also allow you to use

$('#links > ul > li').live('click', function () { ... });

which would remove the need to add individual events each time. (Note that .live() was deprecated in jQuery 1.7 and later removed; the modern equivalent is a delegated handler such as $('#links').on('click', 'ul > li', ...).)
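Put together, a sketch of the attribute-plus-delegation version (the delegated .on() form is a modern substitute for .live(); everything else comes from the question):

function CreateLinks() {
    var ul = $('<ul></ul>').appendTo('div#links');
    for (var i in myLinks) {
        $('<li>' + myLinks[i].name + '</li>')
            .attr('lvalue', myLinks[i].value) // value lives on the element
            .appendTo(ul);
    }
}

// Bound once on the container, so recreating the list adds no new handlers
$('div#links').on('click', 'ul > li', function () {
    var value = $(this).attr('lvalue');
    $('div#info').append('<label>' + value + '</label><br />');
    RecreateLinks();
});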
Related
I have some trouble that comes from my JavaScript (JS) code, since I sometimes need to access the same DOM elements more than once in the same function. Some reasoning is also provided here.
From the point of view of performance, is it better to create a jQuery object once and then cache it, or is it better to create the same jQuery object at will?
Example:
function () {
    $('selector XXX').doSomething(); // first call
    $('selector XXX').doSomething(); // second call
    ...
    $('selector XXX').doSomething(); // n-th call
}
or
function () {
    var obj = $('selector XXX');
    obj.doSomething(); // first call
    obj.doSomething(); // second call
    ...
    obj.doSomething(); // n-th call
}
I suppose that the answer probably depends on the value of n, so assume that n is a "small" number (e.g. 3), then a medium number (e.g. 10), and finally a large one (e.g. 30, as if the object were used for comparisons in a for loop).
Thanks in advance.
It is always better to cache the element: if n is greater than 1, cache it, or chain the operations together (you can do $('#something').something().somethingelse(); for most jQuery operations, since they usually return the wrapped set itself). As an aside, it has become something of a standard to name cache variables with a leading dollar sign $, so that later in the code it is evident you are performing an operation on a jQuery set. So you will see a lot of people write var $content = $('#content'); and then $content.find('...'); later on.
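A quick sketch of both habits (the selector and method calls are placeholders):

// Chaining: one lookup, several operations
$('#content').addClass('active').fadeIn();

// $-prefix convention: the variable visibly holds a jQuery set
var $content = $('#content');
$content.find('.title').text('Hello');
$content.fadeIn();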
The second is superior. Most importantly, it is cleaner: if you later want to change your selector, you only need to change it in one place; otherwise you would need to change it in N places.
Secondly, it should perform better, although a user would only notice the difference on a particularly heavy DOM, or if you were invoking that function a lot.
If you look at this question from a different perspective, the correct answer is obvious.
In the first case, you're duplicating the selection logic in every place it appears. If you change the name of the element, you have to change each occurrence. This should be reason enough not to do it. Now you have two options: either you cache the element's selector or the element itself. Using the element as an object makes more sense than using its name.
Performance-wise, I think the effect is negligible; you will probably be able to find test results for this particular use case (caching jQuery objects vs. always re-selecting them). Performance might become an issue if you have a large DOM and do a lot of lookups, but you need to see for yourself whether that's the case.
If you want to see exactly how much memory your objects are taking up, you can use the Chrome heap profiler and check there. I don't know if similar tools are available for other browsers, and the implementations will probably vary wildly in performance, especially in IE's case, but it may satisfy your curiosity.
IMO, you should use the second variant, storing the result of the selection in a variable, not so much to improve performance as to have as little duplicate logic as possible.
As for caching $(this), I agree with Nick Craver's answer. As he said there, you should also use chaining where possible - cleans up your code and solves your problem.
You should take a look at
http://www.artzstudio.com/2009/04/jquery-performance-rules/
or
http://addyosmani.com/jqprovenperformance/
I almost always prefer to cache the jQuery object, but the benefit varies greatly based on exactly what you are using for your selector. If you are using IDs, the benefit is far smaller than if you are using other kinds of selectors. Also, not all selectors are created equal, so try to keep that in mind when you write your selectors.
For example:
$('table tr td') is a very poor selector. Try to use a context or .find() instead, and it will make a BIG difference.
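For instance (assuming a table with id myTable):

// Poor: the engine scans the whole document for every part of the selector
$('table tr td').addClass('cell');

// Better: scope the search with a context...
var $table = $('#myTable');
$('td', $table).addClass('cell');

// ...or, equivalently, with .find()
$table.find('td').addClass('cell');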
One thing I like to do is place timers in my code to see just how efficient it is.
var timer = new Date();
// code here
console.log('time to complete: ' + (new Date() - timer));
Most operations on cached objects complete in less than 2 milliseconds, whereas brand-new selectors take quite a bit longer, because you first have to find the element and only then perform the operation.
In JavaScript, functions are generally short-lived—especially when hosted by a browser. However, a function’s scope might outlive the function. This happens, for example, when you create a closure. If you want to prevent a jQuery object from being referenced for a long time, you can assign null to any variables that reference it when you are done with that variable or use indirection to create your closures. For example:
var createHandler = function (someClosedOverValue) {
    return function () {
        doSomethingWith(someClosedOverValue);
    };
};

var blah = function () {
    var myObject = jQuery('blah');
    // We want to enable the closure to access 'red' but not keep
    // myObject alive, so use a special createHandler for it:
    var myClosureWithoutAccessToMyObject = createHandler('red');
    doSomethingElseWith(myObject, myClosureWithoutAccessToMyObject);
    // After this function returns, and assuming doSomethingElseWith() does
    // not itself generate additional references to myObject, myObject
    // will no longer have any references and will be eligible for garbage
    // collection.
};
Because jQuery(selector) might end up having to run expensive algorithms, or even walk the DOM tree a bit for complex expressions the browser can't handle directly, it is better to cache the returned object. Also, as others have mentioned, caching the returned object avoids typing the selector multiple times and keeps the code clear. I.e., DRY code is often easier to maintain than WET code.
However, each jQuery object has some amount of overhead, so storing large arrays of jQuery objects in global variables is probably wasteful, unless you actually need to operate on large numbers of these objects and still treat them as distinct. In such a situation, you might save memory by caching an array of the DOM elements directly and wrapping them with the jQuery(DOMElement) constructor, which should be essentially free, as you iterate over them.
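A sketch of that idea (the #bigTable selector is a placeholder):

// Wasteful: one jQuery wrapper object held per row, indefinitely
var $rows = [];
$('#bigTable tr').each(function () {
    $rows.push($(this));
});

// Leaner: keep the raw DOM elements and wrap them on demand
var rows = $('#bigTable tr').get(); // plain array of DOM elements
for (var i = 0; i < rows.length; i++) {
    jQuery(rows[i]).toggleClass('odd', i % 2 === 0); // cheap wrap per iteration
}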
Though, as people say, you can only know the best approach for your particular case by benchmarking different approaches. It is hard to predict reality even when theory seems sound ;-).
From what I understand about memory leaks, referencing an out-of-scope var within a closure will cause a memory leak.
But it is also a common practice to create a "that" var in order to preserve the "this" reference and use it within a closure, especially for events.
So, what's the deal with doing stuff like this:
SomeObject.prototype.createImage = function () {
    var that = this,
        someImage = new Image();
    someImage.src = 'someImage.png';
    someImage.onload = function () {
        that.callbackImage(this);
    };
};
Wouldn't that add a little leakage to a project?
Yes, yes it does cause memory leaks, at least in some browsers (guess which). This is one of the more compelling reasons to put your trust in one of the various frameworks available and to set up all your event handlers through its mechanisms, instead of directly adding "DOM 0" event handlers like that.
Internet Explorer (at least prior to 9, and possibly including 9) has (at least) two internal memory allocation mechanisms: one for the DOM and one for JavaScript (well, JScript). They don't understand each other. Thus even if a DOM node is freed up, any closure memory, as in your example, will not be freed.
Edit: the reason I mention frameworks is that they generally include code to mitigate this problem. Avoiding attaching anything to DOM node properties is one of the safest approaches. All browsers worth worrying about (including ancient IE versions) have alternative ways of attaching event handlers to DOM nodes, for example.
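For instance, instead of assigning to onload directly, the handler can be attached and later detached through the standard listener APIs (which frameworks wrap for you). A sketch under that assumption, with the attachEvent branch covering IE < 9:

SomeObject.prototype.createImage = function () {
    var that = this,
        someImage = new Image();

    function onLoad() {
        that.callbackImage(someImage);
        // Detach and drop references so old IE can collect both sides
        if (someImage.removeEventListener) {
            someImage.removeEventListener('load', onLoad, false);
        } else {
            someImage.detachEvent('onload', onLoad);
        }
        someImage = null;
    }

    if (someImage.addEventListener) {
        someImage.addEventListener('load', onLoad, false);
    } else {
        someImage.attachEvent('onload', onLoad); // IE < 9
    }
    someImage.src = 'someImage.png';
};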
Shot in the dark here, but I reckon that:
someImage.onload = function () {
    that.callbackImage(this);
    someImage.onload = null;
};
would clean up the "memory leak" left by that.
There was a time with IE that circular references involving DOM elements (which were typically formed by closures when assigning a function to a listener) caused memory leaks, e.g.
function foo() {
    var someEl = document.getElementById('...');
    someEl.onclick = function () { ... };
}
However, I think these have been fixed, or at least patched sufficiently, that unless testing shows otherwise they can be ignored. There are also a number of ways to avoid such closures, so even if they are an issue, they can be worked around (e.g. don't create circular references involving DOM elements).
Edit
Using libraries or any other method of attaching listeners can still create circular references and memory leaks, e.g. in IE:
function foo() {
    var el = document.getElementById('d0');
    // Create circular reference through closure to el
    el.attachEvent('onclick', function () { bar(el); });
}

function bar(o) {
    alert(o == window.event.srcElement); // true
}

window.onload = foo;
The above uses attachEvent to add a listener (which pretty much all frameworks use for IE < 9, including jQuery), yet it still creates a circular reference involving a DOM element and so will leak in certain versions of IE. Just using a library will not fix the issue; you need to understand the causes and avoid them.
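A sketch of one common workaround (my illustration, under the same setup as above): break the loop by nulling the local variable once the listener is attached, and read the element back from the event instead:

function foo() {
    var el = document.getElementById('d0');
    el.attachEvent('onclick', function () {
        // Recover the element from the event rather than from the closure
        bar(window.event.srcElement);
    });
    el = null; // break the closure -> element -> handler -> closure cycle
}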
I have been using jQuery for a couple of months and have been reading up on JavaScript memory leaks for a few days.
I have two questions regarding memory leaks and jQuery:
When I bind handlers (using .bind(...)), do I have to unbind them (.unbind()) when I leave or refresh the page to avoid memory leaks, or does jQuery remove them for me?
Concerning closures, I read that they can lead to memory leaks if used incorrectly. If I do something such as:
function doStuff( objects ){ // objects is a jQuery object holding an array of DOM elements
    var textColor = "red";
    objects.each(function () {
        $(this).css("color", textColor);
    });
}

doStuff( $("*") );
I know that the above code is stupid (there are better/simpler ways of doing this), but I want to know whether it causes circular-reference/closure problems with .each and whether it would cause a memory leak. If it does cause a memory leak, how would I rewrite it (using a usually similar method) to avoid one?
Thanks in advance.
Edit: I have another case similar to question 2 (which I guess makes this part 3).
If I have something like this:
function doStuff( objects ){ // iframe objects
    var textColor = "red";
    function innerFunction() {
        $(this).contents().find('a').css("color", textColor);
    }
    objects.each(function () {
        // If all 3 cases run, we end up with 3 of the same events on each
        // object; they are listed together just to compare the methods.
        // Case 1
        $(this).load(innerFunction);
        // Case 2
        $(this).load(function () {
            $(this).contents().find('a').css("color", textColor);
        });
        // Case 3
        $(this).load(function () {
            innerFunction();
        });
    });
}

doStuff( $("iframe") );
There are 3 cases above, and I would like to know which of them (or all) would produce a memory leak. I would also like to know which is the preferred method (I usually use case 2) or better practice (or, if none of these is good, what would be better?).
Thanks again!
1) No. The browser clears everything between page loads.
2) In its current form there will be no memory leak. jQuery's .each() function doesn't bind anything, so once its execution is finished, the anonymous function passed to it is no longer reachable, and therefore the environment it closed over (i.e. the closure as a whole) is also not reachable. So the garbage collection engine can clean everything up, including the reference to objects.
However, if instead of .each() you had something harmless-looking, like $('div:eq(0)').bind() (I'm trying to emphasise that it needn't be a reference to the large objects variable; even a single unrelated element is enough), then since the anonymous function sent to .bind() closes over the objects variable, objects remains reachable and therefore is not garbage collected, allowing memory leaks.
A simple way to avoid this problem is to set objects = null; at the end of the executing function.
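Concretely, a sketch of the leaky shape and that fix (the selector and handler are illustrative):

function doStuff( objects ){
    var textColor = "red";
    // This handler closes over the whole function scope, which keeps
    // `objects` reachable for as long as the handler stays bound
    $('div:eq(0)').bind('click', function () {
        $(this).css("color", textColor);
    });
    objects = null; // release the large set; the handler never used it anyway
}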
I should note that I'm not familiar with the internals of JS garbage collection engines, so it is possible that there are reasonably intelligent optimisations; for example, the engine could check whether the anonymous function actually accesses any of the variables closed over with it, and if not, release them to the garbage collector, which would solve the problem.
For further reading, look for references to javascript's memory model, specifically the environment model and static binding.
There are some subtle leak patterns that you may not even recognize. Check out a question I asked some time ago about a similar issue:
jQuery 1.5 Memory leak in IE8
If you create a closure around an object that references the DOM node, a leaking reference loop is formed, and unfortunately it cannot be corrected by simply unbinding. You will have to set the object's reference to the DOM node to null.
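A sketch of the loop being described, and the extra step needed to break it (Widget and its methods are made up for illustration):

function Widget(node) {
    this.node = node; // object -> DOM node
    var self = this;
    $(node).bind('click', function () { // DOM node -> handler -> object
        self.refresh();
    });
}

Widget.prototype.destroy = function () {
    $(this.node).unbind('click');
    this.node = null; // unbinding alone is not enough; also drop the node reference
};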
In regards to 1: no, you definitely do not have to .unbind() when leaving a page. I am not entirely sure about 2.
About 1: sometimes you do have to free your resources, especially when you have circular references and are supporting Internet Explorer (because of its bugs; in theory you shouldn't have to) ;) Google Maps v2 had a function, GUnload, to prevent exactly that, and we had to call it on document.onunload.
About 2: you don't have circular references there. It does consume memory, since each callback must keep its own execution context alive in order to access textColor, among other things.
A circular reference arises when an object references itself or a closure calls itself. There may be other situations as well.
Hope this helps
My JavaScript code builds a list of LI elements. When I update the list, memory usage grows and never goes down. I tested in sIEve, and it shows that the browser keeps all the elements that were supposed to be deleted by the jQuery .remove() or .empty() commands.
What should I do to remove DOM nodes without the memory leak?
See my other question for the specific code.
The DOM preserves all DOM nodes, even if they have been removed from the DOM tree itself; the only way to get rid of those nodes is to do a page refresh (if you put the list into an iframe, the refresh won't be as noticeable).
Otherwise, you could wait for the problem to get bad enough that the browser's garbage collector is forced into action (we're talking hundreds of megabytes of unused nodes here).
Best practice is to reuse nodes.
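As a sketch of node reuse (updateList is a made-up helper; ul is assumed to be a jQuery-wrapped <ul> and items an array of strings):

function updateList(ul, items) {
    var lis = ul.children('li');
    for (var i = 0; i < items.length; i++) {
        if (i < lis.length) {
            $(lis[i]).text(items[i]); // reuse an existing node in place
        } else {
            $('<li></li>').text(items[i]).appendTo(ul); // grow only when needed
        }
    }
    lis.slice(items.length).remove(); // shrink if the new list is shorter
}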
EDIT: Try this:
var garbageBin;

window.onload = function () {
    if (typeof garbageBin === 'undefined') {
        // Here we are creating a 'garbage bin' object to temporarily
        // store elements that are to be discarded
        garbageBin = document.createElement('div');
        garbageBin.style.display = 'none'; // Make sure it is not displayed
        document.body.appendChild(garbageBin);
    }
};

function discardElement(element) {
    // This works because of the phenomenon whereby child nodes of an
    // object whose innerHTML is emptied are removed from memory.
    // Move the element to the garbage bin element:
    garbageBin.appendChild(element);
    // Empty the garbage bin:
    garbageBin.innerHTML = "";
}
To use it in your context, you would do it like this:
discardElement(this);
This is more of an FYI than an actual answer, but it is also quite interesting.
From the W3C DOM core specification (http://www.w3.org/TR/DOM-Level-2-Core/core.html):
The Core DOM APIs are designed to be compatible with a wide range of languages, including both general-user scripting languages and the more challenging languages used mostly by professional programmers. Thus, the DOM APIs need to operate across a variety of memory management philosophies, from language bindings that do not expose memory management to the user at all, through those (notably Java) that provide explicit constructors but provide an automatic garbage collection mechanism to automatically reclaim unused memory, to those (especially C/C++) that generally require the programmer to explicitly allocate object memory, track where it is used, and explicitly free it for re-use. To ensure a consistent API across these platforms, the DOM does not address memory management issues at all, but instead leaves these for the implementation. Neither of the explicit language bindings defined by the DOM API (for ECMAScript and Java) require any memory management methods, but DOM bindings for other languages (especially C or C++) may require such support. These extensions will be the responsibility of those adapting the DOM API to a specific language, not the DOM Working Group.
In other words: memory management is left to each implementation of the DOM specification. You would have to look into the documentation of the DOM implementation in a given JavaScript engine to find any non-hack method of removing a DOM object from memory. (There is, however, very little information on the MDC site on that topic.)
As a note on jQuery's .remove() and .empty(): from what I can tell, neither of these methods does anything other than remove objects from DOM nodes or remove DOM nodes from the document. That of course does not mean that there is no memory still allocated to these objects (even though they aren't in the document anymore).
Edit: the above passage was superfluous, since obviously jQuery cannot work wonders around the DOM implementation of the browser in use.
Have you removed the event listeners from those nodes? Listeners that are left attached can cause memory leaks.
The code below does not leak on my IE7 and other browsers:
<html>
<head></head>
<body>
    <a href="#" onclick="addRemove(this); return false;">add</a>
    <ul></ul>
    <script>
        function addRemove(a) {
            var ul = document.getElementsByTagName('UL')[0],
                li, i = 20000;
            if (a.innerHTML === 'add') {
                while (i--) {
                    li = document.createElement('LI');
                    ul.appendChild(li);
                    li.innerHTML = i;
                    li.onclick = function () {
                        alert(this.innerHTML);
                    };
                }
                a.innerHTML = 'remove';
            } else {
                while (ul.firstChild) {
                    ul.removeChild(ul.firstChild);
                }
                a.innerHTML = 'add';
            }
        }
    </script>
</body>
</html>
Maybe you can try to spot some differences with your code.
I know that IE leaks far less when you insert the node into the DOM first, before doing anything to it, e.g. attaching events to it or filling its innerHTML property.
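To make that ordering explicit, here is the same list-building step both ways (li, ul, and i refer to the example above; handler stands in for the click function):

// Leakier in old IE: mutate the detached node first, insert it last
var li = document.createElement('LI');
li.innerHTML = i;
li.onclick = handler;
ul.appendChild(li);

// Safer: insert first, then fill innerHTML and attach events
li = document.createElement('LI');
ul.appendChild(li);
li.innerHTML = i;
li.onclick = handler;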
If you have to "post-fix" leakage, and must do so without rewriting all your code to take closures, circular references etc in account, use Douglas Crockfords Purge-method prior to delete:
https://crockford.com/javascript/memory/leak.html
Or use this closure-fix workaround:
Leak Free Javascript Closures
Is it possible to do this from within a class?
$("#" + field).click(this.validate);
So basically I want to pass a function of the object that should be executed whenever something is clicked. Also, if there is more than one instance of this object, the correct instance (i.e. the one that runs this code for the given field) should be used.
I am not sure about an easy way, but you can always go the closure route:
var that = this;
$("#" + field).click(function () {
    that.validate();
});
Is it possible to do this from within a class?
$("#" + field).click(this.validate);
“this.validate” is problematic. JavaScript does not have bound methods, so when you pass that reference, it points only to a plain function; when it is called, ‘this’ will not be set correctly. See ALA for a fairly thorough discussion of the binding-loss problem.
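A tiny sketch of that binding loss (the Validator class and its field are made up for illustration):

function Validator(field) {
    this.field = field;
}
Validator.prototype.validate = function () {
    alert('validating ' + this.field);
};

var v = new Validator('email');
v.validate();        // called through the object: alerts "validating email"

var fn = v.validate; // a reference to the plain, unbound function
fn();                // 'this' is now window (or undefined): this.field is lost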
Some frameworks provide built-in method-binding functionality; jQuery does not, as it tends to concentrate more on closures than on JavaScript objects. In any case, creating a binding wrapper using a closure is pretty simple; Andrey's answer is the usual approach for jQuery users.
The one thing to look out for with closures as event handlers (whether you are using jQuery or not) is that they tend to cause memory leaks in IE. For simple, short-lived web pages you may not care.
Yes, it is possible. Ten minutes ago I wrote just such a snippet, because I had exactly that problem:
$("div.myalarms_delete").click(function(){
var mid = this.id;
$("li#"+mid).fadeOut("fast");
});
The div.myalarms_delete also has the id I needed.
From inside a class you're typically better off using the .find function.
var child = this.find('.someChildClass');
var child2 = this.find('#someChildId');