Do JavaScript closures with a this/self reference cause memory leaks?

From what I understand about memory leaks, referencing an out-of-scope var within a closure will cause a memory leak.
But it is also a common practice to create a "that" var in order to preserve the "this" reference and use it within a closure, especially for events.
So, what's the deal with doing stuff like this:
SomeObject.prototype.createImage = function(){
    var that = this,
        someImage = new Image();
    someImage.src = 'someImage.png';
    someImage.onload = function(){
        that.callbackImage(this);
    };
};
Wouldn't that add a little leakage to a project?

Yes, yes it does cause memory leaks, at least in some browsers (guess which). This is one of the more compelling reasons to put your trust in one of the various frameworks available and to set up all your event handlers via its mechanisms instead of directly adding "DOM 0" event handlers like that.
Internet Explorer (at least prior to 9, and possibly including 9) has two memory allocation mechanisms (at least) internally: one for the DOM, and one for JavaScript (well JScript). They don't understand each other. Thus even if a DOM node is freed up, any closure memory as in your example will not be freed.
edit — Oh, and the reason I mention frameworks is that they generally include code to try and mitigate this problem. The avoidance of attaching anything to DOM node properties is one of the safest approaches. All browsers worth worrying about (including ancient IE versions) have alternative ways of attaching event handlers to DOM nodes, for example.
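As a rough sketch of that alternative (not tied to any particular framework), the original example could attach the handler with addEventListener and detach it once it has fired, so neither the image nor the closure keeps the other alive:
SomeObject.prototype.createImage = function () {
    var that = this,
        someImage = new Image();

    function onLoad() {
        // Detach first so the image no longer references this closure.
        someImage.removeEventListener('load', onLoad);
        that.callbackImage(someImage);
    }

    someImage.addEventListener('load', onLoad);
    someImage.src = 'someImage.png';
};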

Shot in the dark here, but I reckon that:
someImage.onload = function(){
    that.callbackImage(this);
    someImage.onload = null;
};
would clean up the "memory leak" left by that

There was a time with IE that circular references involving DOM elements (which were typically formed by closures when assigning a function to a listener) caused memory leaks, e.g.
function foo() {
    var someEl = document.getElementById('...');
    someEl.onclick = function() {...};
}
However, I think they are fixed or patched sufficiently that unless testing shows otherwise, they can be ignored. There are also a number of ways to avoid such closures, so even if they are an issue, they can be worked around (e.g. don't create circular references involving DOM elements).
Edit
Using libraries or any other method of attaching listeners can still create circular references and memory leaks, e.g. in IE:
function foo() {
    var el = document.getElementById('d0');
    // Create circular reference through closure to el
    el.attachEvent('onclick', function(){ bar(el); });
}

function bar(o) {
    alert(o == window.event.srcElement); // true
}
window.onload = foo;
The above uses attachEvent to add a listener (which pretty much all frameworks will use for IE < 9, including jQuery) yet still creates a circular reference involving a DOM element, and so will leak in certain versions of IE. So just using a library will not fix the issue; you need to understand the causes and avoid them.
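For what it's worth, one way to avoid the circular reference in the example above (a sketch of the idea, not the only fix) is to read the element from the event object instead of from the enclosing scope, and to drop the local variable once the listener is attached:
function foo() {
    var el = document.getElementById('d0');
    el.attachEvent('onclick', function () {
        // The element comes from the event object, not from the closure.
        bar(window.event.srcElement);
    });
    el = null; // break the scope's reference to the DOM node
}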

Related

ES6 object dies [duplicate]

I know ECMAScript 6 has constructors but is there such a thing as destructors for ECMAScript 6?
For example if I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted.
One solution is to have a convention of creating a destructor method for every class that needs this kind of behaviour and manually call it. This will remove the references to the event handlers, hence my object will truly be ready for garbage collection. Otherwise it'll stay in memory because of those methods.
But I was hoping ECMAScript 6 has something native that will be called right before the object is garbage collected.
If there is no such mechanism, what is a pattern/convention for such problems?
Is there such a thing as destructors for ECMAScript 6?
No. EcmaScript 6 does not specify any garbage collection semantics at all[1], so there is nothing like a "destruction" either.
If I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted
A destructor wouldn't even help you here. It's the event listeners themselves that still reference your object, so it would not be able to get garbage-collected before they are unregistered.
What you are actually looking for is a method of registering listeners without marking them as live root objects. (Ask your local eventsource manufacturer for such a feature).
1): Well, there is a beginning with the specification of WeakMap and WeakSet objects. However, true weak references are still in the pipeline [1][2].
I just came across this question in a search about destructors and I thought there was an unanswered part of your question in your comments, so I thought I would address that.
thank you guys. But what would be a good convention if ECMAScript
doesn't have destructors? Should I create a method called destructor
and call it manually when I'm done with the object? Any other idea?
If you want to tell your object that you are now done with it and it should specifically release any event listeners it has, then you can just create an ordinary method for doing that. You can call the method something like release() or deregister() or unhook() or anything of that ilk. The idea is that you're telling the object to disconnect itself from anything else it is hooked up to (deregister event listeners, clear external object references, etc...). You will have to call it manually at the appropriate time.
If, at the same time you also make sure there are no other references to that object, then your object will become eligible for garbage collection at that point.
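As a minimal sketch of that convention (the class and method names here are only illustrative), the object wires itself up in its constructor and provides an explicit method that undoes the wiring:
class Widget {
    constructor(button) {
        this.button = button;
        this.onClick = this.onClick.bind(this);
        this.button.addEventListener('click', this.onClick);
    }
    onClick() {
        console.log('clicked');
    }
    release() {
        // Deregister the listener and clear external references so the
        // instance can be garbage collected once callers let go of it.
        this.button.removeEventListener('click', this.onClick);
        this.button = null;
    }
}
After calling someWidget.release() and dropping your own references to the instance, nothing keeps it alive any more.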
ES6 does have weakMap and weakSet which are ways of keeping track of a set of objects that are still alive without affecting when they can be garbage collected, but it does not provide any sort of notification when they are garbage collected. They just disappear from the weakMap or weakSet at some point (when they are GCed).
FYI, the issue with this type of destructor you ask for (and probably why there isn't much of a call for it) is that, because of garbage collection, an item is not eligible for collection while it has an open event handler against a live object. So even if there were such a destructor, it would never get called in your circumstance until you actually removed the event listeners. And, once you've removed the event listeners, there's no need for the destructor for this purpose.
I suppose there's a possible weakListener() that would not prevent garbage collection, but such a thing does not exist either.
FYI, here's another relevant question Why is the object destructor paradigm in garbage collected languages pervasively absent?. This discussion covers finalizer, destructor and disposer design patterns. I found it useful to see the distinction between the three.
Edit in 2020 - proposal for object finalizer
There is a Stage 3 ECMAScript proposal to add a user-defined finalizer function that runs after an object is garbage collected.
A canonical example of something that would benefit from a feature like this is an object that contains a handle to an open file. If the object is garbage collected (because no other code still has a reference to it), then this finalizer scheme allows one to at least put a message to the console that an external resource has just been leaked and code elsewhere should be fixed to prevent this leak.
If you read the proposal thoroughly, you will see that it's nothing like a full-blown destructor in a language like C++. This finalizer is called after the object has already been destroyed and you have to predetermine what part of the instance data needs to be passed to the finalizer for it to do its work. Further, this feature is not meant to be relied upon for normal operation, but rather as a debugging aid and as a backstop against certain types of bugs. You can read the full explanation for these limitations in the proposal.
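To make the file-handle example concrete, here is a hedged sketch of how such a finalizer might be used purely as a leak detector; FileHandle and openFile are hypothetical names, and the callback may run late or not at all:
const leakRegistry = new FinalizationRegistry(description => {
    // Debugging aid only: report that a handle was dropped without being closed.
    console.warn('Leaked resource:', description);
});

function openFile(path) {
    const handle = new FileHandle(path); // hypothetical wrapper around an OS handle
    leakRegistry.register(handle, 'file handle for ' + path);
    return handle;
}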
You have to manually "destruct" objects in JS. Creating a destroy function is common in JS. In other languages this might be called free, release, dispose, close, etc. In my experience though it tends to be destroy which will unhook internal references, events and possibly propagates destroy calls to child objects as well.
WeakMaps are largely useless as they cannot be iterated, and this probably won't be available until ECMA 7, if at all. All WeakMaps let you do is have invisible properties detached from the object itself except for lookup by the object reference and GC so that they don't disturb it. This can be useful for caching, extending and dealing with plurality, but it doesn't really help with memory management for observables and observers. WeakSet is a subset of WeakMap (like a WeakMap with a default value of boolean true).
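As a small illustration of the caching use just mentioned (the names are illustrative), per-object data lives in a WeakMap keyed by the object itself, so the cache never keeps the object alive:
const metaCache = new WeakMap();

function getMeta(obj) {
    if (!metaCache.has(obj)) {
        metaCache.set(obj, { seen: Date.now() }); // whatever per-object data you need
    }
    return metaCache.get(obj);
}
// When the last reference to obj goes away, its cache entry becomes collectable too.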
There are various arguments on whether to use various implementations of weak references for this or destructors. Both have potential problems and destructors are more limited.
Destructors are actually potentially useless for observers/listeners as well, because typically the listener will hold references to the observer either directly or indirectly. A destructor only really works in a proxy fashion without weak references. If your Observer is really just a proxy taking something else's Listeners and putting them on an observable then it can do something there, but this sort of thing is rarely useful. Destructors are more for IO-related things or doing things outside of the scope of containment (i.e., linking up two instances that it created).
The specific case that I started looking into this for is because I have a class A instance that takes class B in the constructor, then creates a class C instance which listens to B. I always keep the B instance around somewhere high above. A I sometimes throw away, create new ones, create many, etc. In this situation a destructor would actually work for me, but with a nasty side effect: if, in the parent, I passed the C instance around but removed all A references, then the C and B binding would be broken (C has the ground removed from beneath it).
In JS having no automatic solution is painful but I don't think it's easily solvable. Consider these classes (pseudo):
function Filter(stream) {
    var self = this;
    stream.on('data', function(data) {
        // Pretend chunks/multibyte are not a problem.
        self.emit('data', data.toString().replace('somenoise', ''));
    });
}
Filter.prototype.__proto__ = EventEmitter.prototype;
function View(df, stream) {
    df.on('data', function(data) {
        stream.write(data.toUpperCase()); // Shout.
    });
}
On a side note, it's hard to make things work without anonymous/unique functions which will be covered later.
In a normal case instantiation would be as so (pseudo):
var df = new Filter(stdin),
    v1 = new View(df, stdout),
    v2 = new View(df, stderr);
To GC these normally you would set them to null, but it won't work because they've created a tree with stdin at the root. This is basically what event systems do. You give a parent to a child, the child adds itself to the parent and then may or may not maintain a reference to the parent. A tree is a simple example, but in reality you may also find yourself with complex graphs, albeit rarely.
In this case, Filter adds a reference to itself to stdin in the form of an anonymous function which indirectly references Filter by scope. Scope references are something to be aware of and can be quite complex. A powerful GC can do some interesting things to carve away at items in scope variables, but that's another topic. What is critical to understand is that when you create an anonymous function and add it to something as a listener to an observable, the observable will maintain a reference to the function, and anything the function references in the scopes above it (that it was defined in) will also be maintained. The views do the same, but after the execution of their constructors the children do not maintain a reference to their parents.
If I set any or all of the vars declared above to null it isn't going to make a difference to anything (similarly when that "main" scope finishes). They will still be active and pipe data from stdin to stdout and stderr.
If I set them all to null it would be impossible to have them removed or GCed without clearing out the events on stdin or setting stdin to null (assuming it can be freed like this). You basically have a memory leak that way, with in effect orphaned objects, if the rest of the code needs stdin and has other important events on it prohibiting you from doing the aforementioned.
To get rid of df, v1 and v2 I need to call a destroy method on each of them. In terms of implementation this means that both the Filter and View methods need to keep the reference to the anonymous listener function they create as well as the observable and pass that to removeListener.
On a side note, alternatively you can have an observable that returns an index to keep track of listeners, so that you can add prototyped functions, which at least to my understanding should be much better on performance and memory. You still have to keep track of the returned identifier though, and pass your object to ensure that the listener is bound to it when called.
A destroy function adds several pains. First is that I would have to call it and free the reference:
df.destroy();
v1.destroy();
v2.destroy();
df = v1 = v2 = null;
This is a minor annoyance as it's a bit more code, but that is not the real problem. The real problem comes when I hand these references around to many objects: when exactly do you call destroy? You cannot simply hand these off to other objects. You'll end up with chains of destroys and manual implementation of tracking either through program flow or some other means. You can't fire and forget.
An example of this kind of problem is if I decide that View will also call destroy on df when it is destroyed. If v2 is still around, destroying df will break it, so destroy cannot simply be relayed to df. Instead, when v1 takes df to use it, it would need to tell df it is used, which would raise some counter or similar on df. df's destroy function would decrement that counter and only actually destroy when it reaches 0. This sort of thing adds a lot of complexity and a lot that can go wrong, the most obvious of which are destroying something while there is still a reference around somewhere that will be used, and circular references (at this point it's no longer a case of managing a counter but a map of referencing objects). When you're thinking of implementing your own reference counters, memory management and so on in JS then it's probably deficient.
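A rough sketch of the reference-counting destroy just described (illustrative only; retain, _refs, _stream and _listener are hypothetical additions to the pseudo Filter above, and this is exactly the kind of manual bookkeeping being argued against):
Filter.prototype.retain = function () {
    this._refs = (this._refs || 0) + 1;
    return this;
};

Filter.prototype.destroy = function () {
    if (--this._refs > 0) {
        return; // something else still uses this Filter
    }
    // Actually unhook; requires having kept the stream and listener around.
    this._stream.removeListener('data', this._listener);
};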
If WeakSets were iterable, this could be used:
function Observable() {
    this.events = {open: new WeakSet(), close: new WeakSet()};
}

Observable.prototype.on = function(type, f) {
    this.events[type].add(f);
};

Observable.prototype.emit = function(type, ...args) {
    this.events[type].forEach(f => f(...args));
};

Observable.prototype.off = function(type, f) {
    this.events[type].delete(f);
};
In this case the owning class must also keep a token reference to f otherwise it will go poof.
If Observable were used instead of EventListener then memory management would be automatic in regards to the event listeners.
Instead of calling destroy on each object this would be enough to fully remove them:
df = v1 = v2 = null;
If you didn't set df to null it would still exist but v1 and v2 would automatically be unhooked.
There are two problems with this approach however.
Problem one is that it adds a new complexity. Sometimes people do not actually want this behaviour. I could create a very large chain of objects linked to each other by events rather than containment (references in constructor scopes or object properties), eventually forming a tree, and I would only have to pass around the root and worry about that. Freeing the root would conveniently free the entire thing. Both behaviours are useful depending on coding style, etc., and when creating reusable objects it's going to be hard to know what people want, what they have done, what you have done, and a pain to work around what has been done. If I use Observable instead of EventListener then either df will need to reference v1 and v2, or I'll have to pass them all if I want to transfer ownership of the reference to something else out of scope. A weak-reference-like thing would mitigate the problem a little by transferring control from Observable to an observer, but would not solve it entirely (and it needs a check on every emit or event on itself). This problem could be fixed, I suppose, if the behaviour only applied to isolated graphs, but that would complicate the GC severely and would not apply to cases where there are references outside the graph that are in practice noops (they only consume CPU cycles and make no changes).
Problem two is that either it is unpredictable in certain cases, or it forces the JS engine to traverse the GC graph for those objects on demand, which can have a horrific performance impact (although if it is clever it can avoid doing it per member by doing it per WeakMap loop instead). The GC may never run if memory usage does not reach a certain threshold, and then the object with its events won't be removed. If I set v1 to null it may still relay to stdout forever. Even if it does get GCed this will be arbitrary; it may continue to relay to stdout for any amount of time (1 line, 10 lines, 2.5 lines, etc.).
The reason WeakMap gets away with not caring about the GC when non-iterable is that to access an object you have to have a reference to it anyway so either it hasn't been GCed or hasn't been added to the map.
I am not sure what I think about this kind of thing. You're sort of breaking memory management to fix it with the iterable WeakMap approach. Problem two can exist for destructors as well.
All of this invokes several levels of hell so I would suggest to try to work around it with good program design, good practices, avoiding certain things, etc. It can be frustrating in JS however because of how flexible it is in certain aspects and because it is more naturally asynchronous and event based with heavy inversion of control.
There is one other solution that is fairly elegant but again still has some potentially serious hangups. If you have a class that extends an observable class you can override the event functions. Add your events to other observables only when events are added to yourself. When all events are removed from you, then remove your events from children. You can also make a class to extend your observable class to do this for you. Such a class could provide hooks for empty and non-empty, so in a sense you would be observing yourself. This approach isn't bad but also has hangups. There is a complexity increase as well as a performance decrease. You'll have to keep a reference to the object you observe. Critically, it also will not work for leaves, but at least the intermediates will self-destruct if you destroy the leaf. It's like chaining destroy but hidden behind calls that you already have to chain. A large performance problem with this, however, is that you may have to reinitialise internal data from the Observable every time your class becomes active. If this process takes a very long time then you might be in trouble.
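A sketch of that self-observing approach in Node-style code (LazyFilter and the single 'data' event are illustrative; the leaf caveat above still applies): the class only subscribes to its source while it has subscribers of its own, and unhooks itself when the last one is removed.
const EventEmitter = require('events');

class LazyFilter extends EventEmitter {
    constructor(source) {
        super();
        this._source = source;
        this._onData = data => this.emit('data', data);
    }
    on(type, listener) {
        // Hook into the source only when the first 'data' listener arrives.
        if (type === 'data' && this.listenerCount('data') === 0) {
            this._source.on('data', this._onData);
        }
        return super.on(type, listener);
    }
    removeListener(type, listener) {
        super.removeListener(type, listener);
        // Unhook from the source once the last 'data' listener is gone,
        // so this instance no longer keeps itself alive through the source.
        if (type === 'data' && this.listenerCount('data') === 0) {
            this._source.removeListener('data', this._onData);
        }
        return this;
    }
}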
If you could iterate WeakMap then you could perhaps combine things (switch to Weak when no events, Strong when events) but all that is really doing is putting the performance problem on someone else.
There are also immediate annoyances with an iterable WeakMap when it comes to behaviour. I mentioned briefly before about functions having scope references and carving. If I instantiate a child that, in its constructor, hooks the listener 'console.log(param)' to the parent and fails to persist the parent, then when I remove all references to the child it could be freed entirely, as the anonymous function added to the parent references nothing from within the child. This leaves the question of what to do about parent.weakmap.set(child, (param) => console.log(param)). To my knowledge the key is weak but not the value, so weakmap.set(object, object) is persistent. This is something I need to reevaluate though. To me that looks like a memory leak if I dispose of all other object references, but I suspect in reality it manages that basically by seeing it as a circular reference. Either the anonymous function maintains an implicit reference to objects resulting from parent scopes for consistency, wasting a lot of memory, or you have behaviour varying based on circumstances, which is hard to predict or manage. I think the former is actually impossible. In the latter case, if I have a method on a class that simply takes an object and adds console.log, it would be freed when I clear the references to the class even if I returned the function and maintained a reference. To be fair this particular scenario is rarely needed legitimately, but eventually someone will find an angle and will be asking for a HalfWeakMap which is iterable (freed when key and value refs are released), but that is unpredictable as well (obj = null magically ending IO, f = null magically ending IO, both doable at incredible distances).
If there is no such mechanism, what is a pattern/convention for such problems?
The term 'cleanup' might be more appropriate, but I will use 'destructor' to match the OP.
Suppose you write some javascript entirely with 'function's and 'var's.
Then you can use the pattern of writing all the function's code within the framework of a try/catch/finally lattice. Within finally, perform the destruction code.
Instead of the C++ style of writing object classes with unspecified lifetimes, and then specifying the lifetime by arbitrary scopes and the implicit call to ~() at scope end (~() is the destructor in C++), in this javascript pattern the object is the function, the scope is exactly the function scope, and the destructor is the finally block.
If you are now thinking this pattern is inherently flawed because try/catch/finally doesn't encompass asynchronous execution, which is essential to javascript, then you are correct. Fortunately, since 2018 the asynchronous programming helper object Promise has had a prototype function finally added to the already existing then and catch prototype functions. That means that asynchronous scopes requiring destructors can be written with a Promise object, using finally as the destructor. Furthermore, you can use try/catch/finally in an async function calling Promises with or without await, but you must be aware that Promises called without await will execute asynchronously outside the scope, so their destructor code must be handled in a final then.
In the following code PromiseA and PromiseB are some legacy API level promises which don't have finally function arguments specified. PromiseC DOES have a finally argument defined.
async function afunc(a, b) {
    try {
        function resolveB(r) { ... }
        function catchB(e) { ... }
        function cleanupB() { ... }
        function resolveC(r) { ... }
        function catchC(e) { ... }
        function cleanupC() { ... }
        ...
        // PromiseA preceded by await, so it will finish before the finally block.
        // If no rush then it is safe to handle PromiseA cleanup in the finally block.
        var x = await PromiseA(a);
        // PromiseB, PromiseC not preceded by await - they will execute asynchronously
        // and might finish after the finally block, so we must provide
        // explicit cleanup (if necessary)
        PromiseB(b).then(resolveB, catchB).then(cleanupB, cleanupB);
        PromiseC(c).then(resolveC, catchC, cleanupC);
    }
    catch (e) { ... }
    finally { /* scope destructor/cleanup code here */ }
}
I am not advocating that every object in javascript be written as a function. Instead, consider the case where you have a scope identified which really 'wants' a destructor to be called at its end of life. Formulate that scope as a function object, using the pattern's finally block (or finally function in the case of an asynchronous scope) as the destructor. It is quite likely that formulating that function object obviates the need for a non-function class which would otherwise have been written - no extra code was required, and aligning scope and class might even be cleaner.
Note: As others have written, we should not confuse destructors and garbage collection. As it happens C++ destructors are often or mainly concerned with manual garbage collection, but not exclusively so. Javascript has no need for manual garbage collection, but asynchronous scope end-of-life is often a place for (de)registering event listeners, etc..
Here you go. The Subscribe/Publish object will unsubscribe a callback function automatically if it goes out of scope and gets garbage collected.
const createWeakPublisher = () => {
    const weakSet = new WeakSet();
    const subscriptions = new Set();
    return {
        subscribe(callback) {
            if (!weakSet.has(callback)) {
                weakSet.add(callback);
                subscriptions.add(new WeakRef(callback));
            }
            return callback;
        },
        publish() {
            for (const weakRef of subscriptions) {
                const callback = weakRef.deref();
                console.log(callback?.toString());
                if (callback) callback();
                else subscriptions.delete(weakRef);
            }
        },
    };
};
Although it might not happen immediately after the callback function goes out of scope, or it might not happen at all. See the WeakRef documentation for more details. But it works like a charm for my use case.
You might also want to check out the FinalizationRegistry API for a different approach.
"A destructor wouldn't even help you here. It's the event listeners
themselves that still reference your object, so it would not be able
to get garbage-collected before they are unregistered."
Not so. The purpose of a destructor is to allow the item that registered the listeners to unregister them. Once an object has no other references to it, it will be garbage collected.
For instance, in AngularJS, when a controller is destroyed, it can listen for a destroy event and respond to it. This isn't the same as having a destructor automatically called, but it's close, and gives us the opportunity to remove listeners that were set when the controller was initialized.
// Set event listeners, hanging onto the returned listener removal functions
function initialize() {
    $scope.listenerCleanup = [];
    $scope.listenerCleanup.push( $scope.$on( EVENTS.DESTROY, instance.onDestroy) );
    $scope.listenerCleanup.push( $scope.$on( AUTH_SERVICE_RESPONSES.CREATE_USER.SUCCESS, instance.onCreateUserResponse ) );
    $scope.listenerCleanup.push( $scope.$on( AUTH_SERVICE_RESPONSES.CREATE_USER.FAILURE, instance.onCreateUserResponse ) );
}

// Remove event listeners when the controller is destroyed
function onDestroy(){
    $scope.listenerCleanup.forEach( remove => remove() );
}
JavaScript does not have destructors the way C++ does. Instead, alternative design patterns should be used to manage resources. Here are a couple of examples:
You can restrict users to using the instance for the duration of a callback, after which it'll automatically be cleaned up. (This pattern is similar to the beloved "with" statement in Python)
connectToDatabase(async db => {
    const resource = await db.doSomeRequest()
    await useResource(resource)
}) // The db connection is closed once the callback ends
When the above example is too restrictive, another alternative is to just create explicit cleanup functions.
const db = makeDatabaseConnection()
const resource = await db.doSomeRequest()
updatePageWithResource(resource)

pageChangeEvent.addListener(() => {
    db.destroy()
})
The other answers already explained in detail that there is no destructor. But your actual goal seems to be event related. You have an object which is connected to some event and you want this connection to go away automatically when the object is garbage collected. But this won't happen because the event subscription itself references the listener function. Well, UNLESS you use this nifty new WeakRef stuff.
Here is an example:
<!DOCTYPE html>
<html>
<body>
  <button onclick="subscribe()">Subscribe</button>
  <button id="emitter">Emit</button>
  <button onclick="free()">Free</button>
  <script>
    const emitter = document.getElementById("emitter");
    let listener = null;

    function addWeakEventListener(element, event, callback) {
        // Weakrefs only can store objects, so we put the callback into an object
        const weakRef = new WeakRef({ callback });
        const listener = () => {
            const obj = weakRef.deref();
            if (obj == null) {
                console.log("Removing garbage collected event listener");
                element.removeEventListener(event, listener);
            } else {
                obj.callback();
            }
        };
        element.addEventListener(event, listener);
    }

    function subscribe() {
        listener = () => console.log("Event fired!");
        addWeakEventListener(emitter, "click", listener);
        console.log("Listener created and subscribed to emitter");
    }

    function free() {
        listener = null;
        console.log("Reference cleared. Now force garbage collection in dev console or wait some time before clicking Emit again.");
    }
  </script>
</body>
</html>
(JSFiddle)
Clicking the Subscribe button creates a new listener function and registers it at the click event of the Emit button. So clicking the Emit button after that prints a message to the console. Now click the Free button, which simply sets the listener variable to null so the garbage collector can remove the listener. Wait some time or force garbage collection in the developer console and then click the Emit button again. The wrapper listener function now sees that the actual listener (wrapped in a WeakRef) is no longer there and then unsubscribes itself from the button.
WeakRefs are quite powerful but note that there is no guarantee if and when your stuff is garbage collected.
The answer to the question as-stated in the title is FinalizationRegistry, available since Firefox 79 (June 2020), Chrome 84 and derivatives (July 2020), Safari 14.1 (April 2021), and Node 14.6.0 (July 2020)… however, a native JS destructor is probably not the right solution for your use-case.
function create_eval_worker(f) {
    let src_worker_blob = new Blob([f.toString()], {type: 'application/javascript'});
    let src_worker_url = URL.createObjectURL(src_worker_blob);

    async function g() {
        let w = new Worker(src_worker_url);
        …
    }

    // Run URL.revokeObjectURL(src_worker_url) as a destructor of g
    let registry = new FinalizationRegistry(u => URL.revokeObjectURL(u));
    registry.register(g, src_worker_url);
    return g;
}
Caveat:
Avoid where possible
Correct use of FinalizationRegistry takes careful thought, and it's best avoided if possible. When, how, and whether garbage collection occurs is down to the implementation of any given JavaScript engine. Any behavior you observe in one engine may be different in another engine, in another version of the same engine, or even in a slightly different situation with the same version of the same engine.
…
Developers shouldn't rely on cleanup callbacks for essential program logic. Cleanup callbacks may be useful for reducing memory usage across the course of a program, but are unlikely to be useful otherwise.
A conforming JavaScript implementation, even one that does garbage collection, is not required to call cleanup callbacks. When and whether it does so is entirely down to the implementation of the JavaScript engine. When a registered object is reclaimed, any cleanup callbacks for it may be called then, or some time later, or not at all.
–Mozilla Developer Network

Am I creating a memory leak?

I have a JavaScript closure which I keep recreating throughout the life time of my web application (single full-ajax page).
I would like to know if it's creating a memory leak.
Here is an example JSFIDDLE
The code in question:
function CreateLinks() {
    var ul = $("<ul></ul>").appendTo('div#links');
    for (var i in myLinks) {
        var li = $('<li>' + myLinks[i].name + '</li>').appendTo(ul);
        //closure starts here
        (function (value) {
            li.click(function (e) {
                $('div#info').append('<label>' + value + '</label><br />');
                RecreateLinks();
            });
        })(myLinks[i].value);
    }
}
You should be okay IF you make sure that you avoid binding multiple click handlers in the RecreateLinks() function; this can be done by explicitly unbinding existing ones, removing the DOM nodes or making sure that you won't be adding multiple click handlers.
Browsers are getting better at memory allocation strategies, but you shouldn't assume too much. If memory usage is a big concern, try to avoid creating too many closures of which you're not sure they will get garbage collected. One such approach is to use .data() to store your value object and then use a generic click handler instead of a closure.
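For example, the .data() idea could look roughly like this (a sketch using a single delegated handler; .on() needs jQuery 1.7+, with .delegate() as the older equivalent):
function CreateLinks() {
    var ul = $('<ul></ul>').appendTo('div#links');
    for (var i in myLinks) {
        $('<li>' + myLinks[i].name + '</li>')
            .data('value', myLinks[i].value)
            .appendTo(ul);
    }
}

// One handler for every li, past and future - no per-item closure needed.
$('div#links').on('click', 'li', function () {
    $('div#info').append('<label>' + $(this).data('value') + '</label><br />');
    RecreateLinks();
});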
Profiling JavaScript is not so simple; Chrome does have a Profile tool that can monitor CPU and data performance. This can give you a pretty good gauge on the expected memory consumption, but Chrome is not all browsers, keep that in mind.
Depending on how smart the browser is, it may be better to have "myLinks[i].value" as an attribute on your <li> rather than passed via closure. Certain dumb browsers have issues collecting garbage when an event handler references a variable from outside its scope. JavaScript and the DOM run two different GCs, and the JS one doesn't realize the DOM element/event handler is gone and that the variable is no longer in use. This issue may be cleared up by properly removing the event handler via JavaScript rather than just disposing of the element to which it is attached.
Something akin to:
li.attr('lvalue',myLinks[i].value);
...
var value = $(this).attr('lvalue');
This setup would also allow you to use
$('#links > ul > li').live('click',function(){...});
Which would remove the need for adding individual events each time.

IE8 compliant weakmap for DOM node references

I want to have literally a Dictionary<Node, Object>
This is basically an ES6 WeakMap but I need to work with IE8.
The main features I want are:
- minimize memory leaks
- O(1) lookup on Object given Node.
My implementation:
var uuid = 0,
    domShimString = "__domShim__";

var dataManager = {
    _stores: {},
    getStore: function _getStore(el) {
        var id = el[domShimString];
        if (id === undefined) {
            return this._createStore(el);
        }
        return this._stores[domShimString + id];
    },
    _createStore: function _createStore(el) {
        var store = {};
        this._stores[domShimString + uuid] = store;
        el[domShimString] = uuid;
        uuid++;
        return store;
    }
};
My implementation is O(1) but has memory leaks.
What's the correct way to implement this to minimize memory leaks?
In an article I wrote recently, ES 6 - A quick look at weak maps, I explained how jQuery is able to make data() leak free. It basically generates an expando property name, jQuery.expando. When you attach data to an element, the data is pushed to an internal cache array, and the element is given the expando property with a value of the index of the data in the cache. Something similar to this:
element[jQuery.expando] = elementId;
The way to prevent circular references is to not attach objects directly to elements as expandos. If a reference to the element remains in code, then that element cannot be garbage collected even if it is removed from the DOM. However, preventing circular references doesn't plug the leak entirely - there's still data left in the array if the element is removed from the DOM and garbage collected. So jQuery clears the array on page unload, as well as removing data from the array if elements are removed from the DOM using its own methods like remove(). It keeps the data alive for detach().
The reason jQuery does this, is because there is no weak map equivalent, it's kind of shimmable in ES 5 but not in ES 3. As explained in my article, WeakMap is made for exactly this kind of situation, but the only available implementation is in Firefox 6 and above and, with the spec not being finalized, even that shouldn't be used in production environments.
Another thing to take from my article is that certain elements will not allow you to attach expando properties — <object> and <embed> are the two culprits named and shamed in the jQuery source code. For these elements, you're pretty much screwed and jQuery just will not let you use data on them.
Basic circular reference memory leaks occur in reference counted implementations when two objects' properties hold direct references to each other. So DOMObject holds a reference to JSObject and vice versa. Assuming there are no other references to either object, they'd both have a permanent reference count of 1 and the GC would not mark them for collection.
Older browsers (IE6) wouldn't break these circular references, even on page unload, whilst newer browsers are able to break many of these circular references by recognizing the patterns that cause them. jQuery.cache and similar patterns partially avoid memory leaks because DOMObject never holds a reference to JSObject so, even when JSObject holds a reference to DOMObject, the GC can still mark JSObject for collection when there are no more references to it. Once the GC has collected JSObject, the reference count for DOMObject will be reduced, freeing that up for collection also.
Although IE 8+ and other reference counting browsers may be able to break many circular reference patterns (around 400 were fixed for IE 8), the likelihood of leaks is only reduced. For instance, I've seen a huge leak in one of my own apps in IE 8, when working with script elements and JSONP. The best solution is to plan for the worst and, without WeakMap(), the best you can do is use the jQuery data pattern. Sure, you might be risking having orphaned objects, but this is the lesser of two evils.
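For reference, a hedged sketch of that cache-plus-expando pattern applied to the implementation in the question; the key difference from the code above is the explicit removeStore, which is what keeps the cache from growing forever:
var expando = '__domShim__',
    cache = {},
    uuid = 0;

function getStore(el) {
    var id = el[expando];
    if (id === undefined) {
        id = ++uuid;
        el[expando] = id;
        cache[id] = {};
    }
    return cache[id];
}

function removeStore(el) {
    // Call this whenever you discard the node (and on unload), or the
    // cached objects stay reachable for the life of the page.
    var id = el[expando];
    if (id !== undefined) {
        delete cache[id];
        el[expando] = undefined; // delete can throw on DOM nodes in old IE, so just clear it
    }
}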

Using jQuery and Memory Leaks

I have been using jQuery for over a couple of months and read up on Javascript memory leaks for a few days.
I have two questions regarding memory leaks and jQuery:
When I bind (using .bind(...)) do I have to unbind them (.unbind()) if I leave the page/refresh to avoid memory leaks or does jQuery remove them for me?
Concerning closures, I read that they can lead to memory leaks if used incorrectly. If I do something such as:
function doStuff( objects ){ //objects is a jQuery object that holds an array of DOM objects
    var textColor = "red";
    objects.each(function(){
        $(this).css("color", textColor );
    });
}

doStuff( $( "*" ) );
I know that the above code is stupid (there are better/simpler ways of doing this) but I want to know if this causes circular references/closure problems with .each and if it would cause a memory leak. If it does cause a memory leak, how would I rewrite it (usually a similar method) to avoid a memory leak?
Thanks in advance.
Edit: I have another case similar to question 2 (which I guess makes this part 3).
If I have something like this:
function doStuff( objects ){ //iframe objects
    var textColor = "red";
    function innerFunction()
    {
        $(this).contents().find('a').css("color", textColor );
    }
    objects.each(function(){
        //If all 3 are running then we have 3 of the same events on each
        //object; this is just to see which method works/is preferred

        //Case 1
        $(this).load(innerFunction);

        //Case 2
        $(this).load(function(){
            $(this).contents().find('a').css("color", textColor );
        });

        //Case 3
        $(this).load(function(){
            innerFunction();
        });
    });
}

doStuff( $( "iframe" ) );
There are 3 cases above and I would like to know which method (or all) would produce a memory leak. Also I would like to know which is the preferred method (usually I use case 2) or better practice (or if these are not good what would be better?).
Thanks again!
1) No. The browser clears everything between page loads.
2) In its current form there will be no memory leak, since jQuery's .each() function doesn't bind anything, so once its execution is finished, the anonymous function it was passed is no longer reachable, and therefore the environment it closed over (i.e. the closure as a whole) is also not reachable. So the garbage collection engine can clean everything up - including the reference to objects.
However, if instead of .each() you had something harmless, like $('div:eq(0)').bind() (I'm trying to emphasise that it needn't be a reference to the large objects variable; even a single unrelated element is enough), then since the anonymous function sent to .bind() closes over the objects variable, it will remain reachable, and therefore not garbage collected, allowing memory leaks.
A simple way to avoid this problem is to set objects = null; at the end of the executing function.
I should note that I'm not familiar with the JS garbage collection engines, so it is possible that there are reasonably intelligent optimisations. For example it can be checked whether or not the anonymous function tries to access any variables closed in with it, and if not, it might pass them to the garbage collector, which would solve the problem.
For further reading, look for references to javascript's memory model, specifically the environment model and static binding.
There are some subtle leak patterns that you may not even recognize. Check out a question I asked some time ago about a similar issue
jQuery 1.5 Memory leak in IE8
If you create a closure around an object which references the dom node, a leaking reference loop is made, and unfortunately, it can not be corrected by simply unbinding. You will have to set the reference to the DOM node in the object to null.
In regards to 1., no, you definitely do not have to .unbind() when leaving a page. I am not entirely sure about two.
About 1, sometimes you have to free your resources, especially when you have circular references and are using Internet Explorer (because of some bugs; in theory you shouldn't have to do that) ;) Google Maps had a function in their v2 to prevent that, which we had to call on document.onunload (GUnload).
About 2, you don't have circular references. It consumes a lot of memory since each call must have its own execution context to access textColor, among other things.
A circular reference occurs when an object references itself or a closure calls itself. Maybe there are other situations.
Hope this helps

How to remove DOM elements without memory leaks?

My JavaScript code builds a list of LI elements. When I update the list, memory usage grows and never goes down. I tested in sIEve and it shows that the browser keeps all elements that were supposed to be deleted by the $.remove() or $.empty jQuery commands.
What should I do to remove DOM nodes without the memory leak?
See my other question for the specific code.
The DOM preserves all DOM nodes even if they have been removed from the DOM tree itself; the only way to remove these nodes is to do a page refresh (if you put the list into an iframe the refresh won't be as noticeable).
Otherwise, you could wait for the problem to get bad enough that the browser's garbage collector is forced into action (we're talking hundreds of megabytes of unused nodes here).
Best practice is to reuse nodes.
EDIT: Try this:
var garbageBin;
window.onload = function ()
{
    if (typeof(garbageBin) === 'undefined')
    {
        //Here we are creating a 'garbage bin' object to temporarily
        //store elements that are to be discarded
        garbageBin = document.createElement('div');
        garbageBin.style.display = 'none'; //Make sure it is not displayed
        document.body.appendChild(garbageBin);
    }

    function discardElement(element)
    {
        //The way this works is due to the phenomenon whereby child nodes
        //of an object with its innerHTML emptied are removed from memory

        //Move the element to the garbage bin element
        garbageBin.appendChild(element);
        //Empty the garbage bin
        garbageBin.innerHTML = "";
    }
}
To use it in your context, you would do it like this:
discardElement(this);
This is more of an FYI than an actual answer, but it is also quite interesting.
From the W3C DOM core specification (http://www.w3.org/TR/DOM-Level-2-Core/core.html):
The Core DOM APIs are designed to be compatible with a wide range of languages, including both general-user scripting languages and the more challenging languages used mostly by professional programmers. Thus, the DOM APIs need to operate across a variety of memory management philosophies, from language bindings that do not expose memory management to the user at all, through those (notably Java) that provide explicit constructors but provide an automatic garbage collection mechanism to automatically reclaim unused memory, to those (especially C/C++) that generally require the programmer to explicitly allocate object memory, track where it is used, and explicitly free it for re-use. To ensure a consistent API across these platforms, the DOM does not address memory management issues at all, but instead leaves these for the implementation. Neither of the explicit language bindings defined by the DOM API (for ECMAScript and Java) require any memory management methods, but DOM bindings for other languages (especially C or C++) may require such support. These extensions will be the responsibility of those adapting the DOM API to a specific language, not the DOM Working Group.
In other words: memory management is left to the implementation of the DOM specification in various languages. You would have to look into the documentation of the DOM implementation in javascript to find out any method to remove a DOM object from memory, which is not a hack. (There is however very little information on the MDC site on that topic.)
As a note on jQuery#remove and jQuery#empty: from what I can tell, neither of these methods does anything other than removing objects from DOM nodes or removing DOM nodes from the document. That of course does not mean that there is no memory allocated to these objects (even though they aren't in the document anymore).
Edit: The above passage was superfluous since obviously jQuery cannot do wonders and work around the DOM implementation of the used browser.
Have you removed any event listeners? That can cause memory leaks.
The code below does not leak on my IE7 and other browsers:
<html>
<head></head>
<body>
  <a href="#" onclick="addRemove(this); return false;">add</a>
  <ul></ul>
  <script>
    function addRemove(a) {
        var ul = document.getElementsByTagName('UL')[0],
            li, i = 20000;
        if (a.innerHTML === 'add') {
            while (i--) {
                li = document.createElement('LI');
                ul.appendChild(li);
                li.innerHTML = i;
                li.onclick = function() {
                    alert(this.innerHTML);
                };
            }
            a.innerHTML = 'remove';
        } else {
            while (ul.firstChild) {
                ul.removeChild(ul.firstChild);
            }
            a.innerHTML = 'add';
        }
    }
  </script>
</body>
</html>
Maybe you can try to spot some differences with your code.
I know that IE leaks far less when you first insert the node into the DOM before doing anything to it, e.g. attaching events to it or filling its innerHTML property.
If you have to "post-fix" leakage, and must do so without rewriting all your code to take closures, circular references etc. into account, use Douglas Crockford's purge method prior to deletion:
https://crockford.com/javascript/memory/leak.html
Or use this closure-fix workaround:
Leak Free Javascript Closures
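The idea behind that purge function is roughly the following (see the linked page for the original): walk the element and its children and null out every attribute whose value is a function, so the handlers, and the closures they hold, are released before the element is discarded.
function purge(d) {
    var a = d.attributes, i, l, n;
    if (a) {
        for (i = a.length - 1; i >= 0; i -= 1) {
            n = a[i].name;
            if (typeof d[n] === 'function') {
                d[n] = null; // drop onclick, onmouseover, etc.
            }
        }
    }
    a = d.childNodes;
    if (a) {
        l = a.length;
        for (i = 0; i < l; i += 1) {
            purge(d.childNodes[i]); // recurse into children before they are discarded
        }
    }
}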
