In the Mozilla docs it says:
initWithCallback(): Initialize a timer to fire after the given millisecond interval. This version takes a function to call and a closure to pass to that function.
In this code example:
setupTimer: function() {
    var waitPeriod = getNewWaitPeriod();
    myTimer.initWithCallback({
        notify: function(t) {
            foo();
            setupTimer();
        }
    },
    waitPeriod,
    Components.interfaces.nsITimer.TYPE_ONE_SHOT);
}
How much is actually included in the closure that's passed to the function? Does the closure keep a copy of the entire stack? Is this code sample at risk of stack overflow or ever-increasing memory usage?
In theory the closure keeps everything that's in scope for the closure (so in this case the local variables in setupTimer plus whatever variables setupTimer itself closes over). Note that this is different from the callstack: closure scope in JS is lexical, not dynamic, so it doesn't matter how you reached your function, only what the source of the function looks like.
In practice JS engines optimize closures heavily to speed up access to barewords in closures, so the set of things the closure actually keeps alive might be smaller than the theoretical set I describe above. But I wouldn't depend on that.
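For instance (a minimal sketch with made-up names, nothing Mozilla-specific): only kept is mentioned in the returned function's source, so an optimizing engine is free to drop dropped from the retained environment, while a naive implementation would keep both alive.

function makeClosure() {
    var kept = "small string";              // referenced below, so it must stay alive
    var dropped = new Array(1e6).join("x"); // never referenced below; engines may free it
    return function () {
        return kept; // lexical scope: only what this function's source mentions matters
    };
}

var getKept = makeClosure(); // in modern engines, `dropped` is eligible for GC here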
Related
I'm reading this article (http://javascript.info/tutorial/memory-leaks#memory-leak-size) about memory leaks which mentions this as a memory leak:
function f() {
    var data = "Large piece of data";
    function inner() {
        return "Foo";
    }
    return inner;
}
JavaScript interpreter has no idea which variables may be required by the inner function, so it keeps everything. In every outer LexicalEnvironment. I hope, newer interpreters try to optimize it, but not sure about their success.
The article suggests we need to manually set data = null before we return the inner function.
Does this hold true today? Or is this article outdated? (If it's outdated, can someone point me to a resource about current pitfalls)
Modern engines would not maintain unused variables in the outer scope.
Therefore, it doesn't matter if you set data = null before returning the inner function, because the inner function does not depend on ("close over") data.
If the inner function did depend on data (perhaps it returns it), then setting data = null is certainly not what you want to do, because then, well, it would be null instead of having its original value!
Assuming the inner function does depend on data, then yes, as long as inner is being pointed to (referred to by) something, then the value of data will have to be kept around. But, that's what you are saying you want! How can you have something available without having it be available?
Remember that at some point the variable which holds the return value of f() will itself go out of scope. At that point, at least until f() is called again, data will be garbage collected.
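For example (a sketch assuming a variant of the article's f whose inner function does return data):

function f() {
    var data = "Large piece of data";
    return function inner() {
        return data; // closes over `data`, so `data` must be kept alive
    };
}

var getData = f(); // `data` is retained: it is reachable through `getData`
getData = null;    // nothing references the closure any more; `data` can be collected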
The general rule is that you don't need to worry about memory and leaks with JavaScript. That's the whole point of GC. The garbage collector does an excellent job of identifying what is needed and what is not needed, and keeping the former and garbage collecting the latter.
You may want to consider the following example:
function foo() {
    var x = 1;
    return function() { debugger; return 1; };
}

function bar() {
    var x = 1;
    return function() { debugger; return x; };
}

foo()();
bar()();
Examine its execution in the Chrome devtools variables pane. When the debugger stops in the inner function of foo, note that x is not present as a local variable or in a closure. For all practical purposes, it does not exist.
When the debugger stops in the inner function of bar, we see the variable x, because it had to be preserved so it could be returned.
Does this hold true today? Or is this article outdated?
No, it doesn't, and yes, it is. The article is four years old, which is a lifetime in the web world. I have no way to know if jQuery still is subject to leaks, but I'd be surprised if it were, and if so, there's an easy enough way to avoid them--don't use jQuery. The leaks the article's author mentions related to DOM loops and event handlers are not present in modern browsers, by which I mean IE10 (more likely IE9) and above. I'd suggest finding a more up-to-date reference if you really want to understand about memory leaks. Actually, I'd suggest you mainly stop worrying about memory leaks. They occur only in very specialized situations. It's hard to find much on the topic on the web these days for that precise reason. Here's one article I found: http://point.davidglasser.net/2013/06/27/surprising-javascript-memory-leak.html.
Just in addition to #torazaburo's excellent answer, it is worth pointing out that the examples in that tutorial are not leaks. A leak is what happens when a program deletes a reference to something but does not release the memory it consumes.
The last time I remember JS developers having to really worry about genuine leaks was when Internet Explorer (6 and 7, I think) used separate memory management for the DOM and for JS. Because of this, it was possible to bind an onclick event to a button, destroy the button, and still have the event handler stranded in memory, forever (or until the browser crashed or was closed by the user). You couldn't trigger the handler or release it after the fact. It just sat in memory, taking up room. So if you had a long-lived webapp or a webpage that created and destroyed a lot of DOM elements, you had to be super diligent to always unbind events before destroying them.
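The classic pattern looked roughly like this (a from-memory reconstruction, with hypothetical element ids):

// Old IE (6/7): the DOM element and the JS closure reference each other
// across two separate memory managers, so neither side ever got collected.
function bindLeakyHandler() {
    var button = document.getElementById("myButton");
    button.onclick = function () {
        // the closure keeps `button` alive; `button.onclick` keeps the closure alive
        alert("clicked " + button.id);
    };
}

// The defensive fix of the era: break the cycle before destroying the element.
function destroyButton() {
    var button = document.getElementById("myButton");
    button.onclick = null;                  // unbind first
    button.parentNode.removeChild(button);  // then destroy
}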
I've also run into a few annoying leaks in iOS but these were all bugs and were (eventually) patched by Apple.
That said, a good developer needs to keep resource management in mind when writing code. Consider these two constructors:
function F() {
    var data = "One megabyte of data";
    this.inner = function () {
        return data;
    };
}

var G = function () {};
G.prototype.data = "One megabyte of data";
G.prototype.inner = function () {
    return this.data;
};
If you were to create a thousand instances of F, the browser would have to allocate an extra gigabyte of memory for all those copies of the huge string. And every time you deleted an instance, you might get some onscreen jankiness when the GC eventually recovered that RAM. On the other hand, if you made a thousand instances of G, the huge string would be created once and reused by every instance. That is a huge performance boost.
But the advantage of F is that the huge string is essentially private. No other code outside of the constructor would be able to access that string directly. Because of that, each instance of F could mutate that string as much as it wanted and you'd never have to worry about causing problems for other instances.
On the other hand, the huge string in G is out there for anyone to change. Other instances could change it, and any code that shares the same scope as G could too.
So in this case, there is a trade-off between resource use and security.
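To make that contrast concrete, here is a hypothetical usage sketch building on F and G above:

var f1 = new F();
var f2 = new F();
// Each F instance reaches its string through its own closure; no code outside
// the constructor can touch it, and f1 and f2 cannot affect each other.

var g1 = new G();
var g2 = new G();
G.prototype.data = "oops"; // one assignment is visible to every instance
console.log(g1.inner());   // "oops"
console.log(g2.inner());   // "oops"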
I am wondering if it is more or less efficient to encapsulate the main body of your JavaScript code in an object? This way the entire scope of your program would be separated from other scopes like window.
For example, if I had the following code in a JavaScript file:
/* My main code body. */
var somevar1 = undefined;
var somevar2 = undefined;
var somevarN = undefined;
function somefunction() {}
function initialize() { /* Initializes the program. */ }
/* End main code body. */
I could instead encapsulate the main code body in an object:
/* Encapsulating object. */
var application = {};
application.somevar1 = undefined;
application.somevar2 = undefined;
application.somevarN = undefined;
application.somefunction = function () {};
application.initialize = function () { /* Initializes the program. */ };
My logic is that since JavaScript searches through all variables in a scope until the right one is found, keeping application specific functions and variables in their own scope would increase efficiency, especially if there were a lot of functions and variables.
My only concern is that this is bad practice or that maybe this would increase the lookup time of variables and functions inside the new "application" scope. If this is bad practice or completely useless, please let me know! Thank you!
I’ve no idea what the performance implications are (I suspect they’re negligible either way, but if you’re really concerned, test it), but it’s very common practice in JavaScript to keep chunks of code in their own scope, rather than having everything in the global scope.
The main benefit is reducing the risk of accidentally overwriting variables in the global scope, and making naming easier (i.e. you have window.application.initialize instead of e.g. window.initialize_application).
Instead of your implementation above, you can use self-calling functions to create an area of scope just for one bit of code. This is known as the module pattern.
This has the added advantage of allowing you to create “private” variables that are only accessible within the module, and so aren’t accessible from other code running in the same global object:
/* Encapsulating object. */
var application = (function () {
    // These can't be accessed by code running outside of this function
    var someprivatevar = null,
        someprivatefunction = function () { someprivatevar = someprivatevar || new Date(); };

    return {
        somevar1: undefined,
        somevar2: undefined,
        somevarN: undefined,
        somefunction: function () {},
        getInitialized: function () { return someprivatevar; },
        initialize: function () { /* Initializes the program. */ someprivatefunction(); }
    };
})();
application.initialize();
application.getInitialized(); // Will always tell you when application was initialized
I realize you asked a yes or no question. However, my answer is don't worry about it, and to back that up, I want to post a quote from Eloquent Javascript, by Marijn Haverbeke.
The dilemma of speed versus elegance is an interesting one. You can see it as a kind of continuum between human-friendliness and machine-friendliness. Almost any program can be made faster by making it bigger and more convoluted. The programmer must decide on an appropriate balance....

The basic rule, which has been repeated by many programmers and with which I wholeheartedly agree, is to not worry about efficiency until you know for sure that the program is too slow. If it is, find out which parts are taking up the most time, and start exchanging elegance for efficiency in those parts.
Having easy to read code is key to my issue here, so I'm going to stick with the modular approach even if there is a slightly longer lookup chain for variables and functions. Either method seems to have pros and cons as it is.
On the one hand, you have the global scope which holds several variables from the start. If you put all of your code right in the global scope, your list of variables in that scope will include your own as well as variables like innerWidth and innerHeight. The list to search through when referencing variables will be longer, but I believe this is such a small amount of overhead it is ridiculous to worry about.
On the other hand, you can add just one object to the global scope which holds all of the variables you want to work with. From inside this scope, you can reference these variables easily and avoid searching through global variables like innerWidth and innerHeight. The con is that when accessing your encapsulated variables from the global scope you would have a longer lookup chain such as: globalscope.customscope.myvar instead of globalscope.myvar or just myvar.
When I look at it this way, it seems like a very trivial question. Perhaps the best way to go about it is to just encapsulate things that go together just for the sake of decluttering code and keep focus on readability rather than having a mess of unreadable code that is only slightly more efficient.
If you have a callback that is activated repeatedly, such as the following...
Template.foo.rendered = function() {
    $(this.firstNode).droppable({
        // other arguments
        drop: function() {
            // some really long function that doesn't access anything in the closure
        }
    });
};
Should I optimize it to the following?
var dropFunction = function() {
    // some really long function that doesn't access anything in the closure
};

Template.foo.rendered = function() {
    $(this.firstNode).droppable({
        // other arguments
        drop: dropFunction
    });
};
In this case, the rendered callback is a Meteor construct which runs asynchronously over a DOM node with template foo whenever they are constructed; there can be quite a few of them. Does it help to declare the function somewhere in a global closure, to save the JavaScript engine the hassle of keeping track of an extra local closure, or does it not matter?
I don't believe that in modern browsers there will even be a difference. When creating a closure object, V8 determines what variables your function uses, and if you don't use a variable it won't put it in the closure object (I read this somewhere online a while back, sorry I can't find the link *SEE BELOW*). I assume any browser that compiles JavaScript would do the same.
What will be optimized is the creation of the function. In the second example the drop function is created at the same time as the rendered function; in the first, it's re-created every time rendered is called. If this were in a long for loop, that might be worth optimizing.
I personally think the first example is a lot easier to read and maintain. And as I said in a comment on the question, you (or the person editing this code in the future) could accidentally overwrite the function variable (in an event handler or something) and you could have a hard-to-find bug on your hands. I would suggest staying with number 1, or wrapping everything in a self-executing function so dropFunction has a smaller scope and less chance of being overwritten, like so:
(function () {
    var dropFunction = function() {
        // some really long function that doesn't access anything in the closure
    };

    Template.foo.rendered = function() {
        $(this.firstNode).droppable({
            // other arguments
            drop: dropFunction
        });
    };
})();
EDIT: I found the link, it was actually in a blog post about a closure memory leak in Meteor!
http://point.davidglasser.net/2013/06/27/surprising-javascript-memory-leak.html
Here's the portion I mentioned:
JavaScript implementations (or at least current Chrome) are smart enough to notice that str isn’t used in logIt, so it’s not put into logIt’s lexical environment, and it’s OK to GC the big string once run finishes.
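The leak that post describes is worth sketching, because it is the one modern case where closures genuinely surprise people: closures created in the same call share a single lexical environment. Paraphrasing the post's example from memory:

var theThing = null;
var replaceThing = function () {
    var originalThing = theThing;
    var unused = function () {
        if (originalThing) console.log("hi"); // never called, but mentions originalThing
    };
    theThing = {
        longStr: new Array(1000000).join('*'),
        someMethod: function () { console.log("someMessage"); }
    };
    // someMethod and unused share one lexical environment; because `unused`
    // mentions originalThing, each new theThing keeps the previous one alive.
};
setInterval(replaceThing, 1000);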
I'll start this by saying I don't know anything about meteor :/ but if I'm understanding what happens correctly, basically:
Template.foo.rendered
gets called repeatedly. When that occurs, you construct an object and define a new function "drop" on it, which is eventually invoked. If that is the case, you are better off, from a performance perspective, defining drop once at parse time. But the benefit probably depends on how often rendered is called.
As for why - I posted something similar here:
Performance of function declaration vs function expressions in javascript in a loop
And I think #soktinpk's answer is a great summary.
If you look at the jsperf in the above link it may not initially seem directly applicable, but if my assumed statements above are true it in fact is, because the question at hand boils down to how expensive it is to define a function at runtime vs parse time, and whether you are defining this function at runtime over and over again.
If you look at the performance of the loops where a function is defined at runtime vs the loops where functions are defined at parse time (or defined at parse time and hoisted), the parse-time loops run significantly faster.
I believe that the equivalent of what meteor is doing in your example is probably something like this jsperf: http://jsperf.com/how-expensive-is-defining-a-function-at-runtime2
What I have done here is created two loops that simulate your scenario being invoked over and over again. The first invokes a function that creates an object and a function, and passes it to another function for invocation.
The second creates an object that references an existing function and passes it to another function for invocation. The difference in performance looks significant (58% in Chrome).
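For reference, the two cases look roughly like this (my reconstruction with hypothetical names; the actual jsperf may differ in details):

function invokeDrop(opts) { opts.drop(); }

// Case 1: a brand-new function object is created on every iteration.
function runtimeCase() {
    for (var i = 0; i < 10000; i++) {
        invokeDrop({
            drop: function () { /* long function body */ }
        });
    }
}

// Case 2: one function defined up front and referenced on every iteration.
function sharedDrop() { /* long function body */ }

function parseTimeCase() {
    for (var i = 0; i < 10000; i++) {
        invokeDrop({ drop: sharedDrop });
    }
}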
I am finding myself rather confused regarding JavaScript garbage collection and how best to encourage it.
What I would like to know is related to a particular pattern. I am not interested in whether the pattern itself is considered a good or bad idea; I am simply interested in how a browser's garbage collector would respond, i.e. would the references be freed and collected, or would it cause leaks.
Imagine this pattern:
TEST = {
    init: function() {
        this.cache = {
            element: $('#element')
        };
    },
    func1: function() {
        this.cache.element.show();
    },
    func2: function() {
        TEST.cache.element.show();
    },
    func3: function() {
        var self = this;
        self.cache.element.show();
    },
    func4: function() {
        var element = this.cache.element;
        element.show();
    },
    func5: function() {
        this.auxfunc1(this.cache.element);
    },
    auxfunc1: function(el) {
        el.show();
    },
    func6: function() {
        var el = this.getElement();
        el.show();
    },
    getElement: function() {
        return this.cache.element;
    }
};
Now imagine that on page load TEST.init() is called.
Then later, at various times, the various functions are called.
What I would like to know is whether caching elements or objects or anything else upon initialization, and referring to them throughout the lifetime of an application in the manner shown above, affects a browser's garbage collector positively or negatively.
Is there any difference? Which method best encourages garbage collection? Do they cause leaks? Are there any circular references? If so, where?
This code, in itself, shouldn't cause any memory leaks. Especially not in modern browsers. It's just an object, like you have tons of others in any script. It all depends on where, how, and how long you reference it.
The basic rule is that, whenever an object is no longer referenced anywhere in the code (directly, by variable, or indirectly, through a closure accessing a scope), the GC will flag and sweep it.
If you use the above code and then assign something else to TEST, the object literal it referenced could be GC'ed, provided no other variable references the original object.
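For instance (a minimal sketch):

TEST.init();  // TEST.cache.element stays reachable through the global TEST
TEST = null;  // assuming nothing else references the object, the whole
              // structure (cache, jQuery object and all) becomes collectible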
Of course, predicting memory leaks in JS is an inexact science. In my experience, they're not nearly as common as some would have you believe. Firebug, Chrome's console (profiler) and the IE debugger get you a long way.
Some time ago I did some more digging into this matter resulting in this question. Perhaps some links, and findings are helpful to you...
If not here's a couple of tips to avoid the obvious leaks:
Don't use global variables (they don't actually leak memory permanently, but they do hold on to it for as long as your script runs).
Don't attach event handlers to the global object (window.onload ==> leaks memory in IE <9, because the global object is never fully unloaded, hence the event handler isn't GC'ed).
Just wrap your script in a huge IIFE, and use strict mode whenever possible. That way, you create a scope that can be GC'ed in its entirety on unload (see the sketch after these tips).
Test, test and test again. Don't believe every blogpost you read on the subject! If something was an issue last year, that needn't be the case today. This answer might not be 100% accurate anymore by the time you read this, or because just this morning some miracle-patch for JS GC'ing was written by erm... Paris Hilton or some other alien life-form.
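Here's a minimal sketch of the IIFE tip above (hypothetical element id):

(function () {
    "use strict";

    var state = { counter: 0 }; // private to this scope, not a global

    document.getElementById("btn").onclick = function () { // hypothetical element id
        state.counter++;
    };
})(); // nothing is added to `window`; the whole scope can be reclaimed on unload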
Oh, and to answer your in-comment question: "But what concerns me is if each time i call this.cache.element, is it creating a new reference within that functions scope ON TOP of the original cache reference which will not be garbage collected?"
The answer is no. Because this will reference the TEST object, and the init function assigns the object a property cache, that in itself is another object literal with one property referencing a jQuery object. That property (cache) and all that is accessible through it will sit in memory right until you either delete TEST.cache or delete TEST. If you were to create var cache = {...}; that object would be GC'ed when the init function returns, because a variable cannot outlive its scope, except for when you're using closures and exposing certain variables indirectly.
I have seen JavaScript code that claims to reduce function call overhead, like this:
function foo() {
    // do something
}

function myFunc() {
    var fastFoo = foo; // this caches function foo locally for faster lookups
    for (var i = 0; i < 1000; i++) {
        fastFoo();
    }
}
I do not see how this can reduce JavaScript function call overhead, as it seems to me like it is just a memory lookup, either at the top of the current stack (for fastFoo) or somewhere else in the stack (I am not sure where the global context is stored... anyone?).
Is this a relic of ancient browsers, a complete myth, or a genuine performance enhancer?
It all depends on scope. Accessing the local scope is always faster than accessing a parent scope, and if the function is defined in a parent scope you will often see speedups if you make a local reference.
Whether this speedup is significant depends on many things, and only testing in your case will show if it is worth doing.
The difference in speed depends on the difference in scope.
Calling a.b.c.d.e.f.g.h(); from the scope of x.y.z is slower than calling a.b(); from the scope of a.b.c (not the prettiest or most correct example, but it should serve its purpose :)
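A quick sketch of the caching idea, reusing that hypothetical a.b.c... chain:

// Deep lookup on every iteration: the property chain is walked each time.
for (var i = 0; i < 1000; i++) {
    a.b.c.d.e.f.g.h();
}

// Cache the receiver once; `this` inside h() still points at the same object.
var g = a.b.c.d.e.f.g;
for (var j = 0; j < 1000; j++) {
    g.h();
}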
This will result in an infinitesimal performance gain.
Don't do it.