JavaScript: confusion about WeakSet and its effects on garbage collection

I have been grinding LeetCode and I encountered this question https://leetcode.com/problems/intersection-of-two-linked-lists/ which asks you to find the intersection of two linked lists. One solution (not the best one, I know) is to use a hash set to keep track of the first linked list's nodes while traversing it, and then to traverse the second list. When we find a node that is already in the set, that node is the intersection.
For example, E is the intersection
A -> B
      \
       E -> F
      /
C -> D
The way I solved it is to use a WeakSet as the hash set to store references to the first linked list's nodes.
Here is the code
var getIntersectionNode = function(headA, headB) {
    let hashSet = new WeakSet()
    while (headA) {
        hashSet.add(headA)
        headA = headA.next
    }
    while (headB) {
        if (hashSet.has(headB)) return headB
        headB = headB.next
    }
    return null
};
My question is: WeakSet has this nice feature - when no other references to an object stored in the WeakSet exist, that object can be garbage collected. If we go back to the example, while we are iterating through A -> B -> E -> F we add every node into the hash set, but we don't preserve a reference to every node, i.e. headA = headA.next. So that means after I add one node into the hash set and advance to the next node, my reference to the previous node is gone, and it should be garbage collected out of the hash set, right? Then how come the solution passes?
For example, when we are at A, we store A into the hash set and advance to B; now there is no way to reference back to A, so with a WeakSet it should have been garbage collected. But clearly if that were the case the solution wouldn't work. Can someone point out where my understanding is wrong here?

There are a couple of issues here:
The original objects passed into getIntersectionNode from the caller will still exist at least until the function finishes. If you do
someFn({ foo: 'bar' })
the object won't get garbage collected until synchronous JS processing has finished; the GC only runs once JS is idle, and even then it often takes a few seconds. If you added an element to a WeakSet and were somehow able to observe exactly when it gets removed due to there no longer being any references to it, you'd see that it takes some time.
Even then, even if unreferenceable objects were GC'd immediately, in this case, all that's needed is for the one intersection node to remain referenceable. If there's an intersection, that intersection node will be a child of headA somewhere, and that node will also exist somewhere nested inside headB; a reference still exists to it inside headB even after iterating through headA.
Unless your script carries out asynchronous tasks (like wait for user input, or a setTimeout), there's no benefit to using a WeakSet over a Set (or a WeakMap over a Map), since the garbage collector won't run in time for it to be of any use.
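To make the second point concrete, here is a minimal test harness (the node helper and list layout are my own illustration, reusing getIntersectionNode from the question):

function node(val) { return { val, next: null }; }

const e = node('E'), f = node('F');
e.next = f;                  // E -> F (the shared tail)

const a = node('A'), b = node('B');
a.next = b; b.next = e;      // A -> B -> E -> F

const c = node('C'), d = node('D');
c.next = d; d.next = e;      // C -> D -> E -> F

// E stays strongly reachable through both lists for the whole call, so its
// WeakSet entry cannot be cleared mid-run:
console.log(getIntersectionNode(a, c).val); // "E"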

Related

Is it possible to get notified just before an object is about to get garbage collected?

The Question
With FinalizationRegistry, it's possible to get notified after an object has been garbage collected. However, is it possible to get notified before, so I can still have access to the data and do something with it?
What I'm trying to achieve
I want to implement a CompressedMap<K, V> where data is internally stored either deflated in a Map<K, Buffer> or inflated in a Map<K, WeakRef<V>>. It's up to the user to define the deflate and inflate functions.
As with a classic Map<K, V>, if the user holds a reference to a value present in the map and updates it, it should also be automatically updated in the map (because it's the same object). That's why I need to keep the values in a Map<K, WeakRef<V>> and compress and move them to the Map<K, Buffer> only when they're about to get garbage collected.
What I've already considered
SO question: Can I get a callback when my object is about to get collected by GC in Node?
The accepted answer shows how to use FinalizationRegistry, which fires a callback AFTER the object has been garbage collected and is no longer available.
Moving the value to the deflated map after each modification
It would require wrapping each field of the object in a getter/setter, and it has a lot of implications:
It's more computationally expensive to update the deflated map after EACH modification.
Modifications to new fields (not wrapped in a getter/setter) would be ignored.
Wrapping each field of each object could have a big memory impact on a large map, which would defeat the purpose of a "compressed map".
It would modify the user's objects.
It raises the question of where the boundary of the object is. Maybe we should wrap all the fields, even deep ones, maybe not. It depends on the user's use case.
Writing a Node.js addon and using Node-API
I didn't dig deeply into it, but it would be a last-resort solution, because my implementation would only be compatible with Node.js. Even though I'm focused on Node.js, browser support would be nice to have. Also, I have never written a Node.js addon, and I'm not even sure it would allow me to implement a PreFinalizationRegistry.
References
FinalizationRegistry
Map
WeakRef
Developers shouldn't rely on cleanup callbacks for essential program logic. Cleanup callbacks may be useful for reducing memory usage across the course of a program, but are unlikely to be useful otherwise.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/FinalizationRegistry#notes_on_cleanup_callbacks
Instead of waiting until the object is finalized, I would recommend using setTimeout() to deflate it when it hasn't been used for a certain period of time.
To do so, you'll want to return an object that behaves like a Map<K, V> and wraps the internal maps, instead of exposing the actual Map<K, WeakRef<V>>. This way you can start throwing exceptions if an entry is used after the timeout.
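One way to flesh that idea out, as a sketch: the class name, the idleMs default, and the choice to transparently re-inflate on access (rather than throw) are all my own; deflate and inflate are the user-supplied functions from the question.

class TimedCompressedMap {
    constructor(deflate, inflate, idleMs = 30_000) {
        this.deflate = deflate;
        this.inflate = inflate;
        this.idleMs = idleMs;
        this.live = new Map();    // K -> V       (inflated, strong refs)
        this.frozen = new Map();  // K -> Buffer  (deflated)
        this.timers = new Map();  // K -> timeout handle
    }
    set(key, value) {
        this.frozen.delete(key);
        this.live.set(key, value);
        this._touch(key);
        return this;
    }
    get(key) {
        if (this.frozen.has(key)) {  // re-inflate on demand
            this.live.set(key, this.inflate(this.frozen.get(key)));
            this.frozen.delete(key);
        }
        this._touch(key);
        return this.live.get(key);
    }
    _touch(key) {  // restart the idle timer for this entry
        clearTimeout(this.timers.get(key));
        this.timers.set(key, setTimeout(() => {
            if (!this.live.has(key)) return;
            this.frozen.set(key, this.deflate(this.live.get(key)));
            this.live.delete(key);  // drop the strong reference
        }, this.idleMs));
    }
}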
Imagine an unreferenced object {a: 1, b: {c: 2}} for which there is still a reference to its b subobject. If the object was compressed and then garbage-collected, the holder of the reference to b might say ref_to_b.c = 3, and this change would not automatically be reflected in the compressed version. So when the compressed version is later re-inflated, it still has b.c = 2.
This means that you can only compress those members of an object that are not themselves objects, that is, the primitive-valued members. And this could be done with a setter whenever such a value is changed. The deflated values would be kept with strong references, so that an object can always be recreated from them if only its key is known, even if its earlier incarnation has been garbage-collected.
class DeflatableObject {
    static deflated = {
        primitive: new Map(),
        subobject: new Map()
    }
    static recreate(key) {
        var obj = new DeflatableObject();
        obj.key = key;
        obj._primitive = inflate(DeflatableObject.deflated.primitive.get(key));
        var subobj = DeflatableObject.deflated.subobject.get(key);
        if (subobj)
            obj._subobject = subobj.object.deref() || DeflatableSubobject.recreate(subobj.key);
        return obj;
    }
    set primitive(value) {
        this._primitive = value;
        DeflatableObject.deflated.primitive.set(this.key, deflate(value));
    }
    get primitive() {
        return this._primitive;
    }
    set subobject(value) {
        this._subobject = value;
        DeflatableObject.deflated.subobject.set(this.key, {
            object: new WeakRef(value),
            key: value.key
        });
    }
    get subobject() {
        return this._subobject;
    }
}
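Hypothetical usage of the sketch above, with trivial JSON-based stand-ins for the deflate/inflate pair it assumes:

const deflate = value => JSON.stringify(value);
const inflate = buf => buf === undefined ? undefined : JSON.parse(buf);

const obj = new DeflatableObject();
obj.key = 42;
obj.primitive = "hello";   // also stored deflated under key 42

// Later, even if the original instance has been garbage-collected:
const again = DeflatableObject.recreate(42);
console.log(again.primitive); // "hello", rebuilt from the deflated map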

What is Liveness in JavaScript?

Trying to examine the intricacies of JavaScript GC, I got deep into the weeds (that is, into the ECMAScript spec). I found that an object should not be collected as long as it is deemed "live". Liveness itself is defined as follows:
At any point during evaluation, a set of objects S is considered live if either of the following conditions is met:
Any element in S is included in any agent's [[KeptAlive]] List.
There exists a valid future hypothetical WeakRef-oblivious execution with respect to S that observes the Object value of any object in S.
An object is appended to the [[KeptAlive]] list once a WeakRef that (weakly) refers to it is created, and the list is emptied after the current synchronous job ceases.
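As I understand the spec, both constructing a WeakRef and successfully dereferencing one put the target on that list, which gives this guarantee (a sketch; the comments describe intent, not observable output):

const w = new WeakRef({ size: 1 });

// Within a single synchronous job, a successful deref() puts the target on
// the [[KeptAlive]] list, so it cannot be collected before the job finishes:
const target = w.deref();
if (target !== undefined) {
    console.log(target.size); // safe for the rest of this job
}

// In a later job (e.g. a setTimeout callback), deref() may return undefined
// if no strong references remain by then:
setTimeout(() => console.log(w.deref()), 1000);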
However, as for WeakRef-oblivious execution, I fail to get my head around what it is:
For some set of objects S, a hypothetical WeakRef-oblivious execution with respect to S is an execution whereby the abstract operation WeakRefDeref of a WeakRef whose referent is an element of S always returns undefined.
WeakRefDeref of a WeakRef returns undefined when its referent has already been collected. Am I getting it right that the implication here is that all objects that make up S should be collected? So the notion of a future hypothetical WeakRef-oblivious execution is that there is still an object, an element of S, which is not collected yet and is observed by some WeakRef.
It all still makes little sense to me. I would appreciate some examples.
Let's ignore the formalised, but incomplete, definitions. We find the actual meaning in the non-normative notes of that section.1
What is Liveness in JavaScript?
Liveness is the lower bound for guaranteeing which WeakRefs an engine must not empty (note 6). So live (sets of) objects are those that must not be garbage-collected because they still will be used by the program.
However, the liveness of a set of objects does not mean that all the objects in the set must be retained. It means that there are some objects in the set that still will be used by the program, and the live set (as a whole) must not be garbage-collected. This is because the definition is used in its negated form in the garbage collector Execution algorithm2: At any time, if a set of objects S is not live, an ECMAScript implementation may3 […] atomically [remove them]. In other words, if an implementation chooses a non-live set S in which to empty WeakRefs, it must empty WeakRefs for all objects in S simultaneously (note 2).
Looking at individual objects, we can say they are not live (garbage-collectable) if there is at least one non-live set containing them; and conversely we say that an individual object is live if every set of objects containing it is live (note 3). It's a bit weird as a "live set of objects" is basically defined as "a set of objects where any of them is live", however the individual liveness is always "with respect to the set S", i.e. whether these objects can be garbage-collected together.
1: This definitely appears to be the section with the highest notes-to-content ratio in the entire spec.
2: emphasis mine
3: From the first paragraph of the objectives: "This specification does not make any guarantees that any object will be garbage collected. Objects which are not live may be released after long periods of time, or never at all. For this reason, this specification uses the term "may" when describing behaviour triggered by garbage collection."
Now, let's try to understand the definition.
At any point during evaluation, a set of objects S is considered live if either of the following conditions is met:
Any element in S is included in any agent's [[KeptAlive]] List.
There exists a valid future hypothetical WeakRef-oblivious execution with respect to S that observes the Object value of any object in S.
The first condition is pretty clear. The [[KeptAlive]] list of an agent represents the list of objects to be kept alive until the end of the current Job. It is cleared after a synchronous run of execution ends, and the note on WeakRef.prototype.deref4 provides further insight into the intention: If [WeakRefDeref] returns a target Object that is not undefined, then this target object should not be garbage collected until the current execution of ECMAScript code has completed.
The second condition however, oh well. It is not well defined what "valid", "future execution" and "observing the Object value" mean. The intuition the second condition above intends to capture is that an object is live if its identity is observable via non-WeakRef means (note 2), aha. From my understanding, "an execution" is the execution of JavaScript code by an agent and the operations occurring during that. It is "valid" if it conforms to the ECMAScript specification. And it is "future" if it starts from the current state of the program.
An object's identity may be observed by observing a strict equality comparison between objects or observing the object being used as key in a Map (note 4), whereby I assume that the note only gives examples and "the Object value" means "identity". What seems to matter is whether the code does or does not care if the particular object is used, and all of that only if the result of the execution is observable (i.e. cannot be optimised away without altering the result/output of the program)5.
To determine liveness of objects by these means would require testing all possible future executions until the objects are no longer observable. Therefore, liveness as defined here is undecidable6. In practice, engines use conservative approximations such as reachability7 (note 6), but notice that research on more advanced garbage-collectors is under way.
Now for the interesting bit: what makes an execution "hypothetical WeakRef-oblivious with respect to a set of objects S"? It means an execution under the hypothesis that all WeakRefs to objects in S are already cleared8. We assume that during the future execution, the abstract operation WeakRefDeref of a WeakRef whose referent is an element of S always returns undefined (def), and then work back whether it still might observe an element of the set. If none of the objects in S can be observed after all weak references to them are cleared, they may be garbage-collected. Otherwise, S is considered live, the objects cannot be garbage-collected, and the weak references to them must not be cleared.
4: See the whole note for an example. Interestingly, also the new WeakRef(obj) constructor adds obj to the [[KeptAlive]] list.
5: Unfortunately, "the notion of what constitutes an "observation" is intentionally left vague" according to this very interesting es-discourse thread.
6: While it appears to be useless to specify undecidable properties, it actually isn't. Specifying a worse approximation, e.g. said reachability, would preclude some optimisations that are possible in practice, even if it is impossible to implement a generic 100% optimiser. The case is similar for dead code elimination.
7: Specifying the concept of reachability would actually be much more complicated than describing liveness. See Note 5, which gives examples of structures where objects are reachable through internal slots and specification type fields but should be garbage-collected nonetheless.
8: See also issue 179 in the proposal and the corresponding PR for why sets of objects were introduced.
Example time!
It is hard for me to recognize how the liveness of several objects may affect each other.
WeakRef-obliviousness, together with liveness, capture[s the notion] that a WeakRef itself does not keep an object alive (note 1). This is pretty much the purpose of a WeakRef, but let's see an example anyway:
{
    const o = {};
    const w = new WeakRef(o);
    t = setInterval(() => {
        console.log(`Weak reference was ${w.deref() ? "kept" : "cleared"}.`)
    }, 1000);
}
(You can run this in the console, then force garbage collection, then clearInterval(t);)
[The second notion is] that cycles in liveness does not imply that an object is live (note 1). This one is a bit tougher to show, but see this example:
{
    const o = {};
    const w = new WeakRef(o);
    setTimeout(() => {
        console.log(w.deref() && w.deref() === o ? "kept" : "cleared")
    }, 1000);
}
Here, we clearly do observe the identity of o. So it must be alive? Only if the w that holds o is not cleared, as otherwise … === o is not evaluated. So the liveness of (the set containing) o depends on itself, with circular reasoning, and a clever garbage collector is actually allowed to collect it regardless of the closure.
To be concrete, if determining obj's liveness depends on determining the liveness of another WeakRef referent, obj2, obj2's liveness cannot assume obj's liveness, which would be circular reasoning (note 1). Let's try to make an example with two objects that depend on each other:
{
    const a = {}, b = {};
    const wa = new WeakRef(a), wb = new WeakRef(b);
    const lookup = new WeakMap([[a, "b kept"], [b, "a kept"]]);
    setTimeout(() => {
        console.log(wa.deref() ? lookup.get(b) : "a cleared");
        console.log(wb.deref() ? lookup.get(a) : "b cleared");
    }, 1000);
}
The WeakMap primarily serves as something that would observe the identity of the two objects. Here, if a is kept so wa.deref() would return it, b is observed; and if b is kept so wb.deref() would return it, a is observed. Their liveness depends on each other, but we must not do circular reasoning. A garbage-collector may clear both wa and wb at the same time, but not only one of them.
Chrome currently does check for reachability through the closure, so the above snippet doesn't work there; but we can remove those references by introducing a circular dependency between the objects:
{
    const a = {}, b = {};
    a.b = b; b.a = a;
    const wa = new WeakRef(a), wb = new WeakRef(b);
    const lookup = new WeakMap([[a, "b kept"], [b, "a kept"]]);
    t = setInterval(() => {
        console.log(wa.deref() ? lookup.get(wa.deref().b) : "a cleared");
        console.log(wb.deref() ? lookup.get(wb.deref().a) : "b cleared");
    }, 1000);
}
To me, note 2 (WeakRef-obliviousness is defined on sets of objects instead of individual objects to account for cycles. If it were defined on individual objects, then an object in a cycle will be considered live even though its Object value is only observed via WeakRefs of other objects in the cycle.) seems to say the exact same thing. The note was introduced to fix the definition of liveness to handle cycles, that issue also includes some interesting examples.

ES6 object dies [duplicate]

I know ECMAScript 6 has constructors but is there such a thing as destructors for ECMAScript 6?
For example if I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted.
One solution is to have a convention of creating a destructor method for every class that needs this kind of behaviour and manually call it. This will remove the references to the event handlers, hence my object will truly be ready for garbage collection. Otherwise it'll stay in memory because of those methods.
But I was hoping if ECMAScript 6 has something native that will be called right before the object is garbage collected.
If there is no such mechanism, what is a pattern/convention for such problems?
Is there such a thing as destructors for ECMAScript 6?
No. EcmaScript 6 does not specify any garbage collection semantics at all[1], so there is nothing like a "destructor" either.
If I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted
A destructor wouldn't even help you here. It's the event listeners themselves that still reference your object, so it would not be able to get garbage-collected before they are unregistered.
What you are actually looking for is a method of registering listeners without marking them as live root objects. (Ask your local eventsource manufacturer for such a feature).
1): Well, there is a beginning with the specification of WeakMap and WeakSet objects. However, true weak references are still in the pipeline [1][2].
I just came across this question in a search about destructors and I thought there was an unanswered part of your question in your comments, so I thought I would address that.
thank you guys. But what would be a good convention if ECMAScript
doesn't have destructors? Should I create a method called destructor
and call it manually when I'm done with the object? Any other idea?
If you want to tell your object that you are now done with it and it should specifically release any event listeners it has, then you can just create an ordinary method for doing that. You can call the method something like release() or deregister() or unhook() or anything of that ilk. The idea is that you're telling the object to disconnect itself from anything else it is hooked up to (deregister event listeners, clear external object references, etc...). You will have to call it manually at the appropriate time.
If, at the same time you also make sure there are no other references to that object, then your object will become eligible for garbage collection at that point.
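For example, a minimal sketch of that convention; the Widget class, someEmitter, and its on/off API are all illustrative names, not anything from the question:

class Widget {
    constructor(emitter) {
        this.emitter = emitter;
        this.onData = data => this.render(data); // keep a handle so we can remove it
        emitter.on('data', this.onData);
    }
    render(data) { /* ... */ }
    release() {
        // Disconnect from everything we hooked up to.
        this.emitter.off('data', this.onData);
        this.emitter = null; // clear external object references
    }
}

let w = new Widget(someEmitter);
// ... use w ...
w.release(); // call manually at the appropriate time
w = null;    // now eligible for garbage collection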
ES6 does have WeakMap and WeakSet, which are ways of keeping track of a set of objects that are still alive without affecting when they can be garbage collected, but it does not provide any sort of notification when they are garbage collected. They just disappear from the WeakMap or WeakSet at some point (when they are GCed).
FYI, the issue with this type of destructor you ask for (and probably why there isn't much of a call for it) is that because of garbage collection, an item is not eligible for garbage collection when it has an open event handler against a live object so even if there was such a destructor, it would never get called in your circumstance until you actually removed the event listeners. And, once you've removed the event listeners, there's no need for the destructor for this purpose.
I suppose there's a possible weakListener() that would not prevent garbage collection, but such a thing does not exist either.
FYI, here's another relevant question Why is the object destructor paradigm in garbage collected languages pervasively absent?. This discussion covers finalizer, destructor and disposer design patterns. I found it useful to see the distinction between the three.
Edit in 2020 - proposal for object finalizer
There is a Stage 3 ECMAScript proposal to add a user-defined finalizer function that runs after an object is garbage collected.
A canonical example of something that would benefit from a feature like this is an object that contains a handle to an open file. If the object is garbage collected (because no other code still has a reference to it), then this finalizer scheme allows one to at least put a message to the console that an external resource has just been leaked and code elsewhere should be fixed to prevent this leak.
If you read the proposal thoroughly, you will see that it's nothing like a full-blown destructor in a language like C++. This finalizer is called after the object has already been destroyed and you have to predetermine what part of the instance data needs to be passed to the finalizer for it to do its work. Further, this feature is not meant to be relied upon for normal operation, but rather as a debugging aid and as a backstop against certain types of bugs. You can read the full explanation for these limitations in the proposal.
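A sketch of that file-handle example using the now-standard FinalizationRegistry (Node's fs module and the warning text are my own illustration):

const fs = require('fs');

// Fires some time AFTER a registered wrapper has been collected (if ever).
const leakRegistry = new FinalizationRegistry(({ fd, path }) => {
    console.warn(`File handle for ${path} leaked; some code forgot to close it.`);
    fs.closeSync(fd); // backstop: reclaim the external resource anyway
});

class TrackedFile {
    constructor(path) {
        this.fd = fs.openSync(path, 'r');
        // The finalizer cannot see `this`, so pre-register the data it needs.
        leakRegistry.register(this, { fd: this.fd, path }, this);
    }
    close() {
        fs.closeSync(this.fd);
        leakRegistry.unregister(this); // closed properly, no warning
    }
}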
You have to manually "destruct" objects in JS. Creating a destroy function is common in JS. In other languages this might be called free, release, dispose, close, etc. In my experience though it tends to be destroy, which will unhook internal references and events, and possibly propagate destroy calls to child objects as well.
WeakMaps are largely useless as they cannot be iterated, and this probably won't be available until ECMA 7, if at all. All WeakMaps let you do is have invisible properties detached from the object itself, except for lookup by the object reference, and GC'd so that they don't disturb it. This can be useful for caching, extending and dealing with plurality, but it doesn't really help with memory management for observables and observers. WeakSet is a subset of WeakMap (like a WeakMap with a default value of boolean true).
There are various arguments on whether to use various implementations of weak references for this or destructors. Both have potential problems and destructors are more limited.
Destructors are actually potentially useless for observers/listeners as well, because typically the listener will hold references to the observer, either directly or indirectly. A destructor only really works in a proxy fashion without weak references. If your Observer is really just a proxy taking something else's Listeners and putting them on an observable then it can do something there, but this sort of thing is rarely useful. Destructors are more for IO-related things or doing things outside of the scope of containment (i.e., linking up two instances that it created).
The specific case that I started looking into this for is where I have a class A instance that takes class B in the constructor, then creates a class C instance which listens to B. I always keep the B instance around somewhere high above. A I sometimes throw away, create new ones, create many, etc. In this situation a destructor would actually work for me, but with a nasty side effect: if, in the parent, I passed the C instance around but removed all A references, then the C and B binding would be broken (C has the ground removed from beneath it).
In JS having no automatic solution is painful but I don't think it's easily solvable. Consider these classes (pseudo):
function Filter(stream) {
    var self = this; // capture Filter's `this` for use inside the listener
    stream.on('data', function(data) {
        self.emit('data', data.toString().replace('somenoise', '')); // Pretend chunks/multibyte are not a problem.
    });
}
Filter.prototype.__proto__ = EventEmitter.prototype;

function View(df, stream) {
    df.on('data', function(data) {
        stream.write(data.toUpperCase()); // Shout.
    });
}
On a side note, it's hard to make things work without anonymous/unique functions which will be covered later.
In a normal case instantiation would be as so (pseudo):
var df = new Filter(stdin),
    v1 = new View(df, stdout),
    v2 = new View(df, stderr);
To GC these normally you would set them to null, but it won't work because they've created a tree with stdin at the root. This is basically what event systems do. You give a parent to a child, the child adds itself to the parent and then may or may not maintain a reference to the parent. A tree is a simple example but in reality you may also find yourself with complex graphs, albeit rarely.
In this case, Filter adds a reference to itself to stdin in the form of an anonymous function which indirectly references Filter by scope. Scope references are something to be aware of and they can be quite complex. A powerful GC can do some interesting things to carve away at items in scope variables but that's another topic. What is critical to understand is that when you create an anonymous function and add it to an observable as a listener, the observable will maintain a reference to the function, and anything the function references in the scopes above it (that it was defined in) will also be maintained. The views do the same but after the execution of their constructors the children do not maintain a reference to their parents.
If I set any or all of the vars declared above to null it isn't going to make a difference to anything (similarly when it finished that "main" scope). They will still be active and pipe data from stdin to stdout and stderr.
If I set them all to null it would be impossible to have them removed or GCed without clearing out the events on stdin or setting stdin to null (assuming it can be freed like this). You basically have a memory leak that way, with in-effect orphaned objects, if the rest of the code needs stdin and has other important events on it that prohibit you from doing the aforementioned.
To get rid of df, v1 and v2 I need to call a destroy method on each of them. In terms of implementation this means that both the Filter and View methods need to keep the reference to the anonymous listener function they create as well as the observable and pass that to removeListener.
On a side note, alternatively you can have an observable that returns an index to keep track of listeners, so that you can add prototyped functions, which at least to my understanding should be much better for performance and memory. You still have to keep track of the returned identifier though, and pass your object to ensure that the listener is bound to it when called. A sketch of this variant follows.
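Here is that sketch; the API shape is my guess at what's described:

function Observable() {
    this.listeners = new Map(); // id -> { fn, ctx }
    this.nextId = 0;
}
// Returns an identifier instead of requiring a unique function per listener,
// so shared prototype methods can be used as handlers.
Observable.prototype.on = function(fn, ctx) {
    const id = this.nextId++;
    this.listeners.set(id, { fn, ctx });
    return id; // caller must keep this to unsubscribe
};
Observable.prototype.emit = function(...args) {
    for (const { fn, ctx } of this.listeners.values()) fn.apply(ctx, args);
};
Observable.prototype.off = function(id) {
    this.listeners.delete(id);
};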
A destroy function adds several pains. First is that I would have to call it and free the reference:
df.destroy();
v1.destroy();
v2.destroy();
df = v1 = v2 = null;
This is a minor annoyance, as it's a bit more code, but that is not the real problem. The real problem comes when I hand these references around to many objects. In that case, when exactly do you call destroy? You cannot simply hand these off to other objects. You'll end up with chains of destroys and manual implementation of tracking, either through program flow or some other means. You can't fire and forget.
An example of this kind of problem: say I decide that View will also call destroy on df when it is destroyed. If v2 is still around, destroying df will break it, so destroy cannot simply be relayed to df. Instead, when v1 takes df to use it, it would need to tell df it is being used, which would raise some counter or similar on df. df's destroy function would decrease that counter and only actually destroy when it reaches 0. This sort of thing adds a lot of complexity and a lot that can go wrong, the most obvious of which is destroying something while there is still a reference around somewhere that will be used, and circular references (at which point it's no longer a case of managing a counter but a map of referencing objects). When you're thinking of implementing your own reference counters, memory management and so on in JS, then it's probably deficient.
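A bare-bones sketch of that counter approach (the retain/destroy names are mine):

function SharedResource() {
    this.refs = 0;
}
SharedResource.prototype.retain = function() {
    this.refs++; // a user announces it holds this object
    return this;
};
SharedResource.prototype.destroy = function() {
    if (--this.refs > 0) return; // someone else still uses this
    // actually unhook listeners / free things here
};

// Each user calls df.retain() when it takes df, and df.destroy() when done;
// only the last destroy really tears anything down.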
If WeakSets were iterable, this could be used:
function Observable() {
    this.events = {open: new WeakSet(), close: new WeakSet()};
}
Observable.prototype.on = function(type, f) {
    this.events[type].add(f);
};
Observable.prototype.emit = function(type, ...args) {
    this.events[type].forEach(f => f(...args));
};
Observable.prototype.off = function(type, f) {
    this.events[type].delete(f);
};
In this case the owning class must also keep a token reference to f otherwise it will go poof.
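Under that same iterable-WeakSet hypothesis, an owner would hold its own handler like this (sketch; Door is an illustrative name):

// Hypothetical: relies on the iterable WeakSet Observable sketched above.
function Door(observable) {
    this.handler = () => console.log('opened'); // the token reference
    observable.on('open', this.handler);
}

// const door = new Door(obs); obs.emit('open') fires until `door` is
// dropped; then the weakly-held handler can go away with it.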
If Observable were used instead of EventListener then memory management would be automatic in regards to the event listeners.
Instead of calling destroy on each object this would be enough to fully remove them:
df = v1 = v2 = null;
If you didn't set df to null it would still exist but v1 and v2 would automatically be unhooked.
There are two problems with this approach however.
Problem one is that it adds new complexity. Sometimes people do not actually want this behaviour. I could create a very large chain of objects linked to each other by events rather than containment (references in constructor scopes or object properties), eventually a tree, and I would only have to pass around the root and worry about that. Freeing the root would conveniently free the entire thing. Both behaviours, depending on coding style, etc., are useful, and when creating reusable objects it's going to be hard to know what people want, what they have done, what you have done, and a pain to work around what has been done. If I use Observable instead of EventListener then either df will need to reference v1 and v2, or I'll have to pass them all if I want to transfer ownership of the reference to something else out of scope. A weak-reference-like thing would mitigate the problem a little by transferring control from the Observable to an observer, but would not solve it entirely (and it needs a check on every emit or event on itself). This problem could be fixed, I suppose, if the behaviour only applied to isolated graphs, but that would complicate the GC severely and would not apply to cases where there are references outside the graph that are in practice no-ops (they only consume CPU cycles and make no changes).
Problem two is that it is either unpredictable in certain cases or forces the JS engine to traverse the GC graph for those objects on demand, which can have a horrific performance impact (although if it is clever it can avoid doing it per member by doing it per WeakMap loop instead). The GC may never run if memory usage does not reach a certain threshold, and the object with its events won't be removed. If I set v1 to null it may still relay to stdout forever. Even if it does get GCed, the timing will be arbitrary; it may continue to relay to stdout for any number of lines (1 line, 10 lines, etc.).
The reason WeakMap gets away with not caring about the GC when non-iterable is that to access an object you have to have a reference to it anyway so either it hasn't been GCed or hasn't been added to the map.
I am not sure what I think about this kind of thing. You're sort of breaking memory management to fix it with the iterable WeakMap approach. Problem two can also exist for destructors.
All of this invokes several levels of hell so I would suggest to try to work around it with good program design, good practices, avoiding certain things, etc. It can be frustrating in JS however because of how flexible it is in certain aspects and because it is more naturally asynchronous and event based with heavy inversion of control.
There is one other solution that is fairly elegant but again still has some potentially serious hang-ups. If you have a class that extends an observable class you can override the event functions. Add your events to other observables only when events are added to yourself. When all events are removed from you, remove your events from children. You can also make a class extending your observable class that does this for you. Such a class could provide hooks for empty and non-empty, so in a sense you would be observing yourself. This approach isn't bad but also has hang-ups. There is a complexity increase as well as a performance decrease. You'll have to keep a reference to the objects you observe. Critically, it also will not work for leaves, but at least the intermediates will self-destruct if you destroy a leaf. It's like chaining destroy but hidden behind calls that you already have to chain. A large performance problem with this, however, is that you may have to reinitialise internal data from the Observable every time your class becomes active. If this process takes a very long time then you might be in trouble.
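A sketch of that self-observing idea using Node's EventEmitter (the LazyRelay name and the listenerCount bookkeeping are my own):

const { EventEmitter } = require('events');

// Subscribe upstream only while someone is subscribed to us, and unhook
// again when our last listener leaves.
class LazyRelay extends EventEmitter {
    constructor(upstream) {
        super();
        this.upstream = upstream;
        this.relay = data => this.emit('data', data);
    }
    on(type, f) {
        if (type === 'data' && this.listenerCount('data') === 0)
            this.upstream.on('data', this.relay);
        return super.on(type, f);
    }
    off(type, f) {
        super.off(type, f);
        if (type === 'data' && this.listenerCount('data') === 0)
            this.upstream.off('data', this.relay);
    }
}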
If you could iterate WeakMap then you could perhaps combine things (switch to Weak when no events, Strong when events) but all that is really doing is putting the performance problem on someone else.
There are also immediate annoyances with an iterable WeakMap when it comes to behaviour. I mentioned briefly before that functions have scope references and can be carved away at. If I instantiate a child which, in its constructor, hooks the listener 'console.log(param)' to the parent and fails to persist the parent, then when I remove all references to the child it could be freed entirely, as the anonymous function added to the parent references nothing from within the child. This leaves the question of what to do about parent.weakmap.add(child, (param) => console.log(param)). To my knowledge the key is weak but not the value, so weakmap.add(object, object) is persistent. This is something I need to reevaluate though. To me that looks like a memory leak if I dispose of all other object references, but I suspect in reality it is managed by treating it basically as a circular reference. Either the anonymous function maintains an implicit reference to objects resulting from parent scopes for consistency, wasting a lot of memory, or you have behaviour varying based on circumstances, which is hard to predict or manage. I think the former is actually impossible. In the latter case, if I have a method on a class that simply takes an object and adds console.log, it would be freed when I clear the references to the class even if I returned the function and maintained a reference. To be fair, this particular scenario is rarely needed legitimately, but eventually someone will find an angle and will be asking for a HalfWeakMap which is iterable (freed when key and value refs are released), but that is unpredictable as well (obj = null magically ending IO, f = null magically ending IO, both doable at incredible distances).
If there is no such mechanism, what is a pattern/convention for such problems?
The term 'cleanup' might be more appropriate, but I will use 'destructor' to match the OP.
Suppose you write some javascript entirely with 'function's and 'var's.
Then you can use the pattern of writing all the function's code within the framework of a try/catch/finally lattice. Within finally, perform the destruction code.
Instead of the C++ style of writing object classes with unspecified lifetimes, and then specifying the lifetime by arbitrary scopes and the implicit call to ~() at scope end (~() is the destructor in C++), in this javascript pattern the object is the function, the scope is exactly the function scope, and the destructor is the finally block.
If you are now thinking this pattern is inherently flawed because try/catch/finally doesn't encompass asynchronous execution, which is essential to javascript, then you are correct. Fortunately, since 2018 the asynchronous programming helper object Promise has had a finally prototype function added to the already existing then and catch prototype functions. That means that asynchronous scopes requiring destructors can be written with a Promise object, using finally as the destructor. Furthermore, you can use try/catch/finally in an async function calling Promises with or without await, but you must be aware that Promises called without await will execute asynchronously outside the scope, and so must handle the destructor code in a final then.
In the following code PromiseA and PromiseB are some legacy API level promises which don't have finally function arguments specified. PromiseC DOES have a finally argument defined.
async function afunc(a, b) {
    try {
        function resolveB(r) { ... }
        function catchB(e) { ... }
        function cleanupB() { ... }
        function resolveC(r) { ... }
        function catchC(e) { ... }
        function cleanupC() { ... }
        ...
        // PromiseA preceded by await so will finish before the finally block.
        // If no rush then it's safe to handle PromiseA cleanup in the finally block.
        var x = await PromiseA(a);
        // PromiseB, PromiseC not preceded by await - they will execute
        // asynchronously and so might finish after the finally block, so we
        // must provide explicit cleanup (if necessary)
        PromiseB(b).then(resolveB, catchB).then(cleanupB, cleanupB);
        PromiseC(c).then(resolveC, catchC, cleanupC);
    }
    catch(e) { ... }
    finally { /* scope destructor/cleanup code here */ }
}
I am not advocating that every object in javascript be written as a function. Instead, consider the case where you have identified a scope which really 'wants' a destructor to be called at its end of life. Formulate that scope as a function object, using the pattern's finally block (or finally function in the case of an asynchronous scope) as the destructor. It is quite likely that formulating that function object obviated the need for a non-function class which would otherwise have been written - no extra code was required, and aligning scope and class might even be cleaner.
Note: As others have written, we should not confuse destructors and garbage collection. As it happens, C++ destructors are often or mainly concerned with manual garbage collection, but not exclusively so. Javascript has no need for manual garbage collection, but asynchronous scope end-of-life is often a place for (de)registering event listeners, etc.
Here you go. The Subscribe/Publish object will unsubscribe a callback function automatically if it goes out of scope and gets garbage collected.
const createWeakPublisher = () => {
    const weakSet = new WeakSet();
    const subscriptions = new Set();
    return {
        subscribe(callback) {
            if (!weakSet.has(callback)) {
                weakSet.add(callback);
                subscriptions.add(new WeakRef(callback));
            }
            return callback;
        },
        publish() {
            for (const weakRef of subscriptions) {
                const callback = weakRef.deref();
                console.log(callback?.toString());
                if (callback) callback();
                else subscriptions.delete(weakRef);
            }
        },
    };
};
Although it might not happen immediately after the callback function goes out of scope, or it might not happen at all (see the WeakRef documentation for more details), it works like a charm for my use case.
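Hypothetical usage of the publisher above:

const publisher = createWeakPublisher();
let callback = publisher.subscribe(() => console.log('tick'));

publisher.publish(); // logs 'tick'
callback = null;     // drop the only strong reference
// After the GC eventually collects the function, a later publish() prunes
// the dead WeakRef instead of calling it.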
You might also want to check out the FinalizationRegistry API for a different approach.
"A destructor wouldn't even help you here. It's the event listeners
themselves that still reference your object, so it would not be able
to get garbage-collected before they are unregistered."
Not so. The purpose of a destructor is to allow the item that registered the listeners to unregister them. Once an object has no other references to it, it will be garbage collected.
For instance, in AngularJS, when a controller is destroyed, it can listen for a destroy event and respond to it. This isn't the same as having a destructor automatically called, but it's close, and gives us the opportunity to remove listeners that were set when the controller was initialized.
// Set event listeners, hanging onto the returned listener removal functions
function initialize() {
    $scope.listenerCleanup = [];
    $scope.listenerCleanup.push( $scope.$on( EVENTS.DESTROY, instance.onDestroy) );
    $scope.listenerCleanup.push( $scope.$on( AUTH_SERVICE_RESPONSES.CREATE_USER.SUCCESS, instance.onCreateUserResponse ) );
    $scope.listenerCleanup.push( $scope.$on( AUTH_SERVICE_RESPONSES.CREATE_USER.FAILURE, instance.onCreateUserResponse ) );
}

// Remove event listeners when the controller is destroyed
function onDestroy(){
    $scope.listenerCleanup.forEach( remove => remove() );
}
Javascript does not have destructors the same way C++ does. Instead, alternative design patterns should be used to manage resources. Here are a couple of examples:
You can restrict users to using the instance for the duration of a callback, after which it'll automatically be cleaned up. (This pattern is similar to the beloved "with" statement in Python)
connectToDatabase(async db => {
    const resource = await db.doSomeRequest()
    await useResource(resource)
}) // The db connection is closed once the callback ends
When the above example is too restrictive, another alternative is to just create explicit cleanup functions.
const db = makeDatabaseConnection()
const resource = await db.doSomeRequest()
updatePageWithResource(resource)

pageChangeEvent.addListener(() => {
    db.destroy()
})
The other answers already explained in detail that there is no destructor. But your actual goal seems to be event related. You have an object which is connected to some event and you want this connection to go away automatically when the object is garbage collected. But this won't happen because the event subscription itself references the listener function. Well, UNLESS you use this nifty new WeakRef stuff.
Here is an example:
<!DOCTYPE html>
<html>
<body>
<button onclick="subscribe()">Subscribe</button>
<button id="emitter">Emit</button>
<button onclick="free()">Free</button>
<script>
const emitter = document.getElementById("emitter");
let listener = null;

function addWeakEventListener(element, event, callback) {
    // Functions are objects, so a WeakRef can hold the callback directly.
    // (Wrapping it in a fresh object would leave that wrapper with no strong
    // references, so it could be collected immediately.)
    const weakRef = new WeakRef(callback);
    const wrapper = () => {
        const callback = weakRef.deref();
        if (callback == null) {
            console.log("Removing garbage collected event listener");
            element.removeEventListener(event, wrapper);
        } else {
            callback();
        }
    };
    element.addEventListener(event, wrapper);
}

function subscribe() {
    listener = () => console.log("Event fired!");
    addWeakEventListener(emitter, "click", listener);
    console.log("Listener created and subscribed to emitter");
}

function free() {
    listener = null;
    console.log("Reference cleared. Now force garbage collection in dev console or wait some time before clicking Emit again.");
}
</script>
</body>
</html>
(JSFiddle)
Clicking the Subscribe button creates a new listener function and registers it for the click event of the Emit button. So clicking the Emit button after that prints a message to the console. Now click the Free button, which simply sets the listener variable to null so the garbage collector can remove the listener. Wait some time or force garbage collection in the developer console and then click the Emit button again. The wrapper function now sees that the actual listener (held via a WeakRef) is no longer there and unsubscribes itself from the button.
WeakRefs are quite powerful but note that there is no guarantee if and when your stuff is garbage collected.
The answer to the question as-stated in the title is FinalizationRegistry, available since Firefox 79 (June 2020), Chrome 84 and derivatives (July 2020), Safari 14.1 (April 2021), and Node 14.6.0 (July 2020)… however, a native JS destructor is probably not the right solution for your use-case.
function create_eval_worker(f) {
    let src_worker_blob = new Blob([f.toString()], {type: 'application/javascript'});
    let src_worker_url = URL.createObjectURL(src_worker_blob);

    async function g() {
        let w = new Worker(src_worker_url);
        …
    }

    // Run URL.revokeObjectURL(src_worker_url) as a destructor of g
    let registry = new FinalizationRegistry(u => URL.revokeObjectURL(u));
    registry.register(g, src_worker_url);
    return g;
}
Caveat:
Avoid where possible
Correct use of FinalizationRegistry takes careful thought, and it's best avoided if possible. When, how, and whether garbage collection occurs is down to the implementation of any given JavaScript engine. Any behavior you observe in one engine may be different in another engine, in another version of the same engine, or even in a slightly different situation with the same version of the same engine.
…
Developers shouldn't rely on cleanup callbacks for essential program logic. Cleanup callbacks may be useful for reducing memory usage across the course of a program, but are unlikely to be useful otherwise.
A conforming JavaScript implementation, even one that does garbage collection, is not required to call cleanup callbacks. When and whether it does so is entirely down to the implementation of the JavaScript engine. When a registered object is reclaimed, any cleanup callbacks for it may be called then, or some time later, or not at all.
–Mozilla Developer Network

ECMAScript 6 class destructor

I know ECMAScript 6 has constructors but is there such a thing as destructors for ECMAScript 6?
For example if I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted.
One solution is to have a convention of creating a destructor method for every class that needs this kind of behaviour and manually call it. This will remove the references to the event handlers, hence my object will truly be ready for garbage collection. Otherwise it'll stay in memory because of those methods.
But I was hoping if ECMAScript 6 has something native that will be called right before the object is garbage collected.
If there is no such mechanism, what is a pattern/convention for such problems?
Is there such a thing as destructors for ECMAScript 6?
No. EcmaScript 6 does not specify any garbage collection semantics at all[1], so there is nothing like a "destruction" either.
If I register some of my object's methods as event listeners in the constructor, I want to remove them when my object is deleted
A destructor wouldn't even help you here. It's the event listeners themselves that still reference your object, so it would not be able to get garbage-collected before they are unregistered.
What you are actually looking for is a method of registering listeners without marking them as live root objects. (Ask your local eventsource manufacturer for such a feature).
1): Well, there is a beginning with the specification of WeakMap and WeakSet objects. However, true weak references are still in the pipeline [1][2].
I just came across this question in a search about destructors and I thought there was an unanswered part of your question in your comments, so I thought I would address that.
thank you guys. But what would be a good convention if ECMAScript
doesn't have destructors? Should I create a method called destructor
and call it manually when I'm done with the object? Any other idea?
If you want to tell your object that you are now done with it and it should specifically release any event listeners it has, then you can just create an ordinary method for doing that. You can call the method something like release() or deregister() or unhook() or anything of that ilk. The idea is that you're telling the object to disconnect itself from anything else it is hooked up to (deregister event listeners, clear external object references, etc...). You will have to call it manually at the appropriate time.
If, at the same time you also make sure there are no other references to that object, then your object will become eligible for garbage collection at that point.
ES6 does have weakMap and weakSet which are ways of keeping track of a set of objects that are still alive without affecting when they can be garbage collected, but it does not provide any sort of notification when they are garbage collected. They just disappear from the weakMap or weakSet at some point (when they are GCed).
FYI, the issue with this type of destructor you ask for (and probably why there isn't much of a call for it) is that because of garbage collection, an item is not eligible for garbage collection when it has an open event handler against a live object so even if there was such a destructor, it would never get called in your circumstance until you actually removed the event listeners. And, once you've removed the event listeners, there's no need for the destructor for this purpose.
I suppose there's a possible weakListener() that would not prevent garbage collection, but such a thing does not exist either.
FYI, here's another relevant question Why is the object destructor paradigm in garbage collected languages pervasively absent?. This discussion covers finalizer, destructor and disposer design patterns. I found it useful to see the distinction between the three.
Edit in 2020 - proposal for object finalizer
There is a Stage 3 EMCAScript proposal to add a user-defined finalizer function after an object is garbage collected.
A canonical example of something that would benefit from a feature like this is an object that contains a handle to an open file. If the object is garbage collected (because no other code still has a reference to it), then this finalizer scheme allows one to at least put a message to the console that an external resource has just been leaked and code elsewhere should be fixed to prevent this leak.
If you read the proposal thoroughly, you will see that it's nothing like a full-blown destructor in a language like C++. This finalizer is called after the object has already been destroyed and you have to predetermine what part of the instance data needs to be passed to the finalizer for it to do its work. Further, this feature is not meant to be relied upon for normal operation, but rather as a debugging aid and as a backstop against certain types of bugs. You can read the full explanation for these limitations in the proposal.
You have to manually "destruct" objects in JS. Creating a destroy function is common in JS. In other languages this might be called free, release, dispose, close, etc. In my experience though it tends to be destroy which will unhook internal references, events and possibly propagates destroy calls to child objects as well.
WeakMaps are largely useless as they cannot be iterated and this probably wont be available until ECMA 7 if at all. All WeakMaps let you do is have invisible properties detached from the object itself except for lookup by the object reference and GC so that they don't disturb it. This can be useful for caching, extending and dealing with plurality but it doesn't really help with memory management for observables and observers. WeakSet is a subset of WeakMap (like a WeakMap with a default value of boolean true).
There are various arguments on whether to use various implementations of weak references for this or destructors. Both have potential problems and destructors are more limited.
Destructors are actually potentially useless for observers/listeners as well because typically the listener will hold references to the observer either directly or indirectly. A destructor only really works in a proxy fashion without weak references. If your Observer is really just a proxy taking something else's Listeners and putting them on an observable then it can do something there but this sort of thing is rarely useful. Destructors are more for IO related things or doing things outside of the scope of containment (IE, linking up two instances that it created).
The specific case that I started looking into this for is because I have class A instance that takes class B in the constructor, then creates class C instance which listens to B. I always keep the B instance around somewhere high above. A I sometimes throw away, create new ones, create many, etc. In this situation a Destructor would actually work for me but with a nasty side effect that in the parent if I passed the C instance around but removed all A references then the C and B binding would be broken (C has the ground removed from beneath it).
In JS having no automatic solution is painful but I don't think it's easily solvable. Consider these classes (pseudo):
function Filter(stream) {
stream.on('data', function() {
this.emit('data', data.toString().replace('somenoise', '')); // Pretend chunks/multibyte are not a problem.
});
}
Filter.prototype.__proto__ = EventEmitter.prototype;
function View(df, stream) {
df.on('data', function(data) {
stream.write(data.toUpper()); // Shout.
});
}
On a side note, it's hard to make things work without anonymous/unique functions which will be covered later.
In a normal case instantiation would be as so (pseudo):
var df = new Filter(stdin),
v1 = new View(df, stdout),
v2 = new View(df, stderr);
To GC these normally you would set them to null but it wont work because they've created a tree with stdin at the root. This is basically what event systems do. You give a parent to a child, the child adds itself to the parent and then may or may not maintain a reference to the parent. A tree is a simple example but in reality you may also find yourself with complex graphs albeit rarely.
In this case, Filter adds a reference to itself to stdin in the form of an anonymous function which indirectly references Filter by scope. Scope references are something to be aware of and that can be quite complex. A powerful GC can do some interesting things to carve away at items in scope variables but that's another topic. What is critical to understand is that when you create an anonymous function and add it to something as a listener to ab observable, the observable will maintain a reference to the function and anything the function references in the scopes above it (that it was defined in) will also be maintained. The views do the same but after the execution of their constructors the children do not maintain a reference to their parents.
If I set any or all of the vars declared above to null it isn't going to make a difference to anything (similarly when it finished that "main" scope). They will still be active and pipe data from stdin to stdout and stderr.
If I set them all to null it would be impossible to have them removed or GCed without clearing out the events on stdin or setting stdin to null (assuming it can be freed like this). You basically have a memory leak that way with in effect orphaned objects if the rest of the code needs stdin and has other important events on it prohibiting you from doing the aforementioned.
To get rid of df, v1 and v2 I need to call a destroy method on each of them. In terms of implementation this means that both the Filter and View methods need to keep the reference to the anonymous listener function they create as well as the observable and pass that to removeListener.
On a side note, alternatively you can have an obserable that returns an index to keep track of listeners so that you can add prototyped functions which at least to my understanding should be much better on performance and memory. You still have to keep track of the returned identifier though and pass your object to ensure that the listener is bound to it when called.
A destroy function adds several pains. First is that I would have to call it and free the reference:
df.destroy();
v1.destroy();
v2.destroy();
df = v1 = v2 = null;
This is a minor annoyance as it's a bit more code but that is not the real problem. When I hand these references around to many objects. In this case when exactly do you call destroy? You cannot simply hand these off to other objects. You'll end up with chains of destroys and manual implementation of tracking either through program flow or some other means. You can't fire and forget.
An example of this kind of problem: suppose I decide that View will also call destroy on df when it is destroyed. If v2 is still around, destroying df will break it, so destroy cannot simply be relayed to df. Instead, when v1 takes df to use it, it would need to tell df it is being used, which would raise some counter or similar on df. df's destroy function would decrease that counter and only actually destroy when it reaches 0 (see the sketch below). This sort of thing adds a lot of complexity and a lot that can go wrong. The most obvious risks are destroying something while a reference that will still be used exists somewhere, and circular references (at which point it's no longer a case of managing a counter but a map of referencing objects). When you're thinking of implementing your own reference counters, memory management and so on in JS, then the language is probably deficient there.
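A hedged sketch of that counter, building on the destroy method above (acquire/release are invented names):
Filter.prototype.acquire = function() {
    this._refs = (this._refs || 0) + 1; // one count per object using this one
    return this;
};
Filter.prototype.release = function() {
    if (--this._refs === 0) this.destroy(); // last user out actually destroys
};

// Each View would then call df.acquire() in its constructor
// and df.release() from its own destroy().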
If WeakSets were iterable, this could be used:
function Observable() {
this.events = {open: new WeakSet(), close: new WeakSet()};
}
Observable.prototype.on = function(type, f) {
this.events[type].add(f);
};
Observable.prototype.emit = function(type, ...args) {
this.events[type].forEach(f => f(...args));
};
Observable.prototype.off = function(type, f) {
this.events[type].delete(f);
};
In this case the owning class must also keep a token (strong) reference to f, otherwise the listener will go poof.
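For example, a sketch of View under that hypothetical Observable, where keeping _onData on the instance is the token reference:
function View(df, stream) {
    // Held on the instance so the weakly-referenced listener lives
    // exactly as long as the View itself does
    this._onData = data => stream.write(data.toUpperCase());
    df.on('data', this._onData);
}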
If Observable were used instead of EventEmitter, then memory management would be automatic as far as the event listeners are concerned.
Instead of calling destroy on each object this would be enough to fully remove them:
df = v1 = v2 = null;
If you didn't set df to null it would still exist but v1 and v2 would automatically be unhooked.
There are two problems with this approach however.
Problem one is that it adds a new complexity. Sometimes people do not actually want this behaviour. I could create a very large chain of objects linked to each other by events rather than containment (references in constructor scopes or object properties). Eventually that forms a tree, and I would only have to pass around the root and worry about that; freeing the root would conveniently free the entire thing. Both behaviours are useful depending on coding style and so on, and when creating reusable objects it's going to be hard to know what people want, what they have done, what you have done, and a pain to work around what has been done. If I use Observable instead of EventEmitter, then either df will need to reference v1 and v2, or I'll have to pass all of them if I want to transfer ownership of the reference to something out of scope. A weak-reference-like thing would mitigate the problem a little by transferring control from the Observable to an observer, but would not solve it entirely (and it needs a check on every emit or event on itself). This problem could be fixed, I suppose, if the behaviour only applied to isolated graphs, which would complicate the GC severely, and it would not apply to cases where there are references outside the graph that are in practice no-ops (they only consume CPU cycles and make no changes).
Problem two is that it is either unpredictable in certain cases or it forces the JS engine to traverse the GC graph for those objects on demand, which can have a horrific performance impact (although a clever engine can avoid doing it per member by doing it per WeakMap loop instead). The GC may never run if memory usage does not reach a certain threshold, and then the object with its events won't be removed. If I set v1 to null it may still relay to stdout forever. Even if it does get GCed, the timing is arbitrary; it may continue to relay to stdout for any amount of time (1 line, 10 lines, 2.5 lines, etc).
The reason WeakMap gets away with not caring about the GC when non-iterable is that to access an object you have to have a reference to it anyway so either it hasn't been GCed or hasn't been added to the map.
I am not sure what I think about this kind of thing. With the iterable WeakMap approach you're sort of breaking memory management in order to fix it. Problem two can also exist for destructors.
All of this invokes several levels of hell, so I would suggest trying to work around it with good program design, good practices, avoiding certain things, and so on. That can be frustrating in JS, however, because of how flexible it is in certain aspects, and because it is more naturally asynchronous and event-based, with heavy inversion of control.
There is one other solution that is fairly elegant but again still has some potentially serious hangups. If you have a class that extends an observable class, you can override the event functions: add your events to other observables only when events are added to yourself, and when all events are removed from you, remove your events from the children. You can also make a class that extends your observable class to do this for you. Such a class could provide hooks for empty and non-empty, so in a sense you would be observing yourself. This approach isn't bad but also has hangups. There is a complexity increase as well as a performance decrease, and you'll have to keep a reference to the objects you observe. Critically, it will not work for leaves, but at least the intermediates will self-destruct if you destroy a leaf. It's like chaining destroy, but hidden behind calls that you already have to chain. A large performance problem with this is that you may have to reinitialise internal data from the observable every time your class becomes active; if that process takes a very long time, then you might be in trouble. A sketch follows below.
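A hedged sketch of that self-observing pattern using Node's EventEmitter (SelfManagingFilter and its property names are invented; note that aliases like addListener and removeAllListeners would bypass these overrides):
const { EventEmitter } = require('events');

class SelfManagingFilter extends EventEmitter {
    constructor(source) {
        super();
        this._source = source;
        this._onData = data => this.emit('data', String(data).replace('somenoise', ''));
    }
    on(type, listener) {
        // Hook the upstream source only when we gain our first listener
        if (this.listenerCount(type) === 0) this._source.on('data', this._onData);
        return super.on(type, listener);
    }
    removeListener(type, listener) {
        super.removeListener(type, listener);
        // Unhook from the source once nobody is listening to us any more
        if (this.listenerCount(type) === 0) this._source.removeListener('data', this._onData);
        return this;
    }
}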
If you could iterate WeakMap then you could perhaps combine things (switch to Weak when no events, Strong when events) but all that is really doing is putting the performance problem on someone else.
There are also immediate annoyances with an iterable WeakMap when it comes to behaviour. I mentioned briefly before that functions carry scope references and that engines can carve away at them. If I instantiate a child that, in its constructor, hooks the listener 'console.log(param)' to a parent and fails to persist the parent, then when I remove all references to the child it could be freed entirely, as the anonymous function added to the parent references nothing from within the child. This leaves the question of what to do about parent.weakmap.set(child, (param) => console.log(param)). To my knowledge the key is weak but not the value, so weakmap.set(object, object) is persistent. This is something I need to reevaluate, though. To me that looks like a memory leak if I dispose of all other object references, but I suspect in reality it is managed by basically being treated as a circular reference. Either the anonymous function maintains an implicit reference to objects from its parent scopes for consistency, wasting a lot of memory, or you have behaviour that varies based on circumstances, which is hard to predict or manage. I think the former is actually impossible. In the latter case, if I have a method on a class that simply takes an object and adds console.log to it, the listener would be freed when I clear the references to the class, even if I returned the function and maintained a reference to it. To be fair, this particular scenario is rarely needed legitimately, but eventually someone will find an angle and will be asking for a HalfWeakMap which is iterable (freed when either the key or the value references are released), but that is unpredictable as well (obj = null magically ending IO, f = null magically ending IO, both doable at incredible distances).
If there is no such mechanism, what is a pattern/convention for such problems?
The term 'cleanup' might be more appropriate, but I will use 'destructor' to match the OP.
Suppose you write some javascript entirely with 'function's and 'var's.
Then you can use the pattern of writing each function's code within the framework of a try/catch/finally lattice, performing the destruction code within finally.
Instead of the C++ style of writing object classes with unspecified lifetimes, and then specifying the lifetime by arbitrary scopes and the implicit call to ~X() at scope end (~X() is the destructor in C++), in this JavaScript pattern the object is the function, the scope is exactly the function scope, and the destructor is the finally block.
If you are now thinking this pattern is inherently flawed because try/catch/finally doesn't encompass asynchronous execution, which is essential to JavaScript, then you are correct. Fortunately, since 2018 the asynchronous programming helper object Promise has had a finally function added to its prototype alongside the already existing then and catch. That means that asynchronous scopes requiring destructors can be written with a Promise object, using finally as the destructor. Furthermore, you can use try/catch/finally in an async function calling Promises with or without await, but you must be aware that Promises called without await will execute asynchronously outside the scope, so their destructor code must be handled at the end of their own chain.
In the following code, PromiseB's chain emulates cleanup in the legacy, pre-finally style with a final then(cleanupB, cleanupB), while PromiseC's chain uses the Promise.prototype.finally function described above.
async function afunc(a, b, c) {
    try {
        function resolveB(r){ ... }
        function catchB(e){ ... }
        function cleanupB(){ ... }
        function resolveC(r){ ... }
        function catchC(e){ ... }
        function cleanupC(){ ... }
        ...
        // PromiseA preceded by await, so it will finish before the finally block.
        // If there is no rush, it is safe to handle PromiseA cleanup in the finally block.
        var x = await PromiseA(a);
        // PromiseB and PromiseC are not preceded by await - they will execute
        // asynchronously and might finish after the finally block, so we must
        // provide explicit cleanup (if necessary)
        PromiseB(b).then(resolveB, catchB).then(cleanupB, cleanupB);
        PromiseC(c).then(resolveC, catchC).finally(cleanupC);
    }
    catch(e) { ... }
    finally { /* scope destructor/cleanup code here */ }
}
I am not advocating that every object in JavaScript be written as a function. Instead, consider the case where you have identified a scope which really 'wants' a destructor to be called at its end of life. Formulate that scope as a function object, using the pattern's finally block (or finally function in the case of an asynchronous scope) as the destructor. It is quite likely that formulating that function object obviated the need for a non-function class which would otherwise have been written - no extra code was required, and aligning scope and class might even be cleaner.
Note: as others have written, we should not confuse destructors and garbage collection. As it happens, C++ destructors are often or mainly concerned with manual memory cleanup, but not exclusively so. JavaScript has no need for manual garbage collection, but asynchronous scope end-of-life is often the place for (de)registering event listeners, etc.
Here you go. The Subscribe/Publish object will unsubscribe a callback function automatically if it goes out of scope and gets garbage collected.
const createWeakPublisher = () => {
const weakSet = new WeakSet();
const subscriptions = new Set();
return {
subscribe(callback) {
if (!weakSet.has(callback)) {
weakSet.add(callback);
subscriptions.add(new WeakRef(callback));
}
return callback;
},
publish() {
for (const weakRef of subscriptions) {
const callback = weakRef.deref();
console.log(callback?.toString());
if (callback) callback();
else subscriptions.delete(weakRef);
}
},
};
};
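For example (a sketch; when the dead WeakRef is actually pruned depends on the garbage collector):
const publisher = createWeakPublisher();
let callback = publisher.subscribe(() => console.log('notified'));

publisher.publish(); // logs 'notified'
callback = null;     // drop the last strong reference to the callback
// After the callback is eventually garbage collected,
// a later publish() silently prunes its dead WeakRef.
publisher.publish();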
Note that the unsubscription might not happen immediately after the callback function goes out of scope, or it might not happen at all; see the WeakRef documentation for more details. But it works like a charm for my use case.
You might also want to check out the FinalizationRegistry API for a different approach.
"A destructor wouldn't even help you here. It's the event listeners
themselves that still reference your object, so it would not be able
to get garbage-collected before they are unregistered."
Not so. The purpose of a destructor is to allow the item that registered the listeners to unregister them. Once an object has no other references to it, it will be garbage collected.
For instance, in AngularJS, when a controller is destroyed, it can listen for a destroy event and respond to it. This isn't the same as having a destructor automatically called, but it's close, and gives us the opportunity to remove listeners that were set when the controller was initialized.
// Set event listeners, hanging onto the returned listener removal functions
function initialize() {
$scope.listenerCleanup = [];
$scope.listenerCleanup.push( $scope.$on( EVENTS.DESTROY, instance.onDestroy) );
$scope.listenerCleanup.push( $scope.$on( AUTH_SERVICE_RESPONSES.CREATE_USER.SUCCESS, instance.onCreateUserResponse ) );
$scope.listenerCleanup.push( $scope.$on( AUTH_SERVICE_RESPONSES.CREATE_USER.FAILURE, instance.onCreateUserResponse ) );
}
// Remove event listeners when the controller is destroyed
function onDestroy(){
$scope.listenerCleanup.forEach( remove => remove() );
}
JavaScript does not have destructors the same way C++ does. Instead, alternative design patterns should be used to manage resources. Here are a couple of examples:
You can restrict users to using the instance for the duration of a callback, after which it'll automatically be cleaned up. (This pattern is similar to the beloved "with" statement in Python)
connectToDatabase(async db => {
const resource = await db.doSomeRequest()
await useResource(resource)
}) // The db connection is closed once the callback ends
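A minimal sketch of how such a scoped helper could be implemented (makeDatabaseConnection and db.destroy are assumed names):
async function connectToDatabase(callback) {
    const db = await makeDatabaseConnection();
    try {
        await callback(db);
    } finally {
        db.destroy(); // runs whether the callback resolved or threw
    }
}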
When the above example is too restrictive, another alternative is to just create explicit cleanup functions.
const db = makeDatabaseConnection()
const resource = await db.doSomeRequest()
updatePageWithResource(resource)
pageChangeEvent.addListener(() => {
db.destroy()
})
The other answers already explained in detail that there is no destructor. But your actual goal seems to be event related. You have an object which is connected to some event and you want this connection to go away automatically when the object is garbage collected. But this won't happen because the event subscription itself references the listener function. Well, UNLESS you use this nifty new WeakRef stuff.
Here is an example:
<!DOCTYPE html>
<html>
<body>
<button onclick="subscribe()">Subscribe</button>
<button id="emitter">Emit</button>
<button onclick="free()">Free</button>
<script>
const emitter = document.getElementById("emitter");
let listener = null;
function addWeakEventListener(element, event, callback) {
    // Functions are objects, so the callback can go into a WeakRef directly.
    // (Wrapping it in a fresh {callback} object would be a bug: nothing else
    // would reference the wrapper, so it could be collected immediately.)
    const weakRef = new WeakRef(callback);
    const listener = () => {
        const cb = weakRef.deref();
        if (cb == null) {
            console.log("Removing garbage collected event listener");
            element.removeEventListener(event, listener);
        } else {
            cb();
        }
    };
    element.addEventListener(event, listener);
}
function subscribe() {
listener = () => console.log("Event fired!");
addWeakEventListener(emitter, "click", listener);
console.log("Listener created and subscribed to emitter");
}
function free() {
listener = null;
console.log("Reference cleared. Now force garbage collection in dev console or wait some time before clicking Emit again.");
}
</script>
</body>
</html>
Clicking the Subscribe button creates a new listener function and registers it for the click event of the Emit button. So clicking the Emit button after that prints a message to the console. Now click the Free button, which simply sets the listener variable to null so the garbage collector can remove the listener. Wait some time or force garbage collection in the developer console, then click the Emit button again. The wrapper listener function now sees that the actual listener (held through a WeakRef) is no longer there and unsubscribes itself from the button.
WeakRefs are quite powerful but note that there is no guarantee if and when your stuff is garbage collected.
The answer to the question as-stated in the title is FinalizationRegistry, available since Firefox 79 (June 2020), Chrome 84 and derivatives (July 2020), Safari 14.1 (April 2021), and Node 14.6.0 (July 2020)… however, a native JS destructor is probably not the right solution for your use-case.
function create_eval_worker(f) {
    let src_worker_blob = new Blob([f.toString()], {type: 'application/javascript'});
    let src_worker_url = URL.createObjectURL(src_worker_blob);

    async function g() {
        let w = new Worker(src_worker_url);
        …
    }

    // Run URL.revokeObjectURL(src_worker_url) as a destructor of g
    let registry = new FinalizationRegistry(u => URL.revokeObjectURL(u));
    registry.register(g, src_worker_url);
    return g;
}
Caveat:
Avoid where possible
Correct use of FinalizationRegistry takes careful thought, and it's best avoided if possible. When, how, and whether garbage collection occurs is down to the implementation of any given JavaScript engine. Any behavior you observe in one engine may be different in another engine, in another version of the same engine, or even in a slightly different situation with the same version of the same engine.
…
Developers shouldn't rely on cleanup callbacks for essential program logic. Cleanup callbacks may be useful for reducing memory usage across the course of a program, but are unlikely to be useful otherwise.
A conforming JavaScript implementation, even one that does garbage collection, is not required to call cleanup callbacks. When and whether it does so is entirely down to the implementation of the JavaScript engine. When a registered object is reclaimed, any cleanup callbacks for it may be called then, or some time later, or not at all.
–Mozilla Developer Network

Deleting large Javascript objects when process is running out of memory

I'm a novice to this kind of javascript, so I'll give a brief explanation:
I have a web scraper built in Nodejs that gathers (quite a bit of) data, processes it with Cheerio (basically jQuery for Node) creates an object then uploads it to mongoDB.
It works just fine, except on larger sites. What appears to be happening is:
I give the scraper an online store's URL to scrape
Node goes to that URL and retrieves anywhere from 5,000 - 40,000 product urls to scrape
For each of these new URLs, Node's request module gets the page source then loads up the data to Cheerio.
Using Cheerio I create a JS object which represents the product.
I ship the object off to MongoDB where it's saved to my database.
As I say, this happens for thousands of URLs and once I get to, say, 10,000 urls loaded I get errors in node. The most common is:
Node: Fatal JS Error: Process out of memory
Ok, here's the actual question(s):
I think this is happening because Node's garbage cleanup isn't working properly. It's possible that, for example, the request data scraped from all 40,000 urls is still in memory, or at the very least the 40,000 created javascript objects may be. Perhaps it's also because the MongoDB connection is made at the start of the session and is never closed (I just close the script manually once all the products are done). This is to avoid opening/closing the connection every single time I log a new product.
To really ensure they're cleaned up properly (once the product goes to MongoDB I don't use it anymore and it can be deleted from memory), can/should I simply delete it from memory using delete product?
More so (I'm clearly not across how JS handles objects): if I delete one reference to the object, is it totally wiped from memory, or do I have to delete all of them?
For instance:
var saveToDB = require ('./mongoDBFunction.js');
function getData(link){
request(link, function(data){
var $ = cheerio.load(data);
createProduct($)
})
}
function createProduct($) {
var product = {
a: 'asadf',
b: 'asdfsd'
// there's about 50 lines of data in here in the real products but this is for brevity
}
product.name = $('.selector').dostuffwithitinjquery('etc');
saveToDB(product);
}
// In mongoDBFunction.js
exports.saveToDB = function(item){
db.products.save(item, function(err){
console.log("Item was successfully saved!");
delete item; // Will this completely delete the item from memory?
})
}
delete in javascript is NOT used to delete variables or free memory. It is ONLY used to remove a property from an object. You may find this article on the delete operator a good read.
You can remove a reference to the data held in a variable by setting the variable to something like null. If there are no other references to that data, then that will make it eligible for garbage collection. If there are other references to that object, then it will not be cleared from memory until there are no more references to it (e.g. no way for your code to get to it).
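For example (a sketch; buildProduct and saveToDB stand in for the question's code):
let product = buildProduct(); // hypothetical constructor for the scraped data
saveToDB(product);
product = null; // drop this reference; if it was the last one,
                // the object becomes eligible for garbage collection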
As for what is causing the memory accumulation, there are a number of possibilities and we can't really see enough of your code to know what references could be held onto that would keep the GC from freeing up things.
If this is a single, long running process with no breaks in execution, you might also need to manually run the garbage collector to make sure it gets a chance to clean up things you have released.
Here are a couple of articles on tracking down memory usage in node.js: http://dtrace.org/blogs/bmc/2012/05/05/debugging-node-js-memory-leaks/ and https://hacks.mozilla.org/2012/11/tracking-down-memory-leaks-in-node-js-a-node-js-holiday-season/.
JavaScript has a garbage collector that automatically tracks which variables are "reachable". If a variable is "reachable", then its value won't be released.
For example, if you have a global variable var g_hugeArray and you assign it a huge array, you actually have two JavaScript objects here: one is the huge block that holds the array data; the other is a property on the window object whose name is "g_hugeArray" and which points to that data. So the reference chain is: window -> g_hugeArray -> the actual array.
In order to release the actual array, you make it "unreachable"; you can break either link in the above chain to achieve this. If you set g_hugeArray to null, you break the link between g_hugeArray and the actual array. This makes the array data unreachable, and it will be released when the garbage collector runs. Alternatively, you can use "delete window.g_hugeArray" to remove the property "g_hugeArray" from the window object. This breaks the link between window and g_hugeArray and also makes the actual array unreachable.
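For instance (browser-side; note that delete only succeeds here because the property was created by assignment rather than declared with var):
window.g_hugeArray = new Array(1e7).fill(0); // window -> g_hugeArray -> array data

g_hugeArray = null;        // breaks the g_hugeArray -> array link
// or:
delete window.g_hugeArray; // breaks the window -> g_hugeArray link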
The situation gets more complicated when you have closures. A closure is created when you have a local function that references a local variable. For example:
function a()
{
var x = 10;
var y = 20;
setTimeout(function()
{
alert(x);
}, 100);
}
In this case, the local variable x is still reachable from the anonymous timeout function even after function a has returned. Without the timeout function, both local variables x and y would become unreachable as soon as function a returns, but the existence of the anonymous function changes this. Depending on how the JavaScript engine is implemented, it may choose to keep both variables x and y (because it doesn't know whether the function will need y until the function actually runs, which occurs after function a returns), or, if it is smart enough, it may keep only x. Imagine that both x and y point to big things; this can be a problem. So closures are very convenient, but at times they are more likely to cause memory issues and can make such issues more difficult to track down.
I faced the same problem in my application, which has similar functionality. I was looking for memory leaks or something like that. The memory consumed by my process reached 1.4 GB and depended on the number of links that had to be downloaded.
The first thing I noticed was that after manually running the garbage collector, almost all of the memory was freed. Each page that I downloaded took about 1 MB, was processed, and was stored in the database.
Then I installed heapdump and looked at a snapshot of the application. You can find more information about memory profiling on the WebStorm blog.
My guess was that while the application was running, the GC never started. To address this, I ran the application with the --expose-gc flag and began to trigger GC manually as the program executed.
const runGCIfNeeded = (() => {
let i = 0;
return function runGCIfNeeded() {
if (i++ > 200) {
i = 0;
if (global.gc) {
global.gc();
} else {
logger.warn('Garbage collection unavailable. Pass --expose-gc when launching node to enable forced garbage collection.');
}
}
};
})();
// run GC check after each iteration
checkProduct(product._id)
.then(/* ... */)
.finally(runGCIfNeeded)
Interestingly, if you do not use const, let, var, etc. when you define something in the global scope, it becomes a property of the global object, and delete on it returns true. This can allow it to be garbage collected. I tested it like this, and it seems to have the intended impact on my memory usage; please let me know if this is incorrect or if you get drastically different results:
x = [];
process.memoryUsage();

i = 0;
while (i < 1000000) {
    x.push(10.5);
    i++;
}
process.memoryUsage();

delete x;
process.memoryUsage();
