I have a JavaScript web application that abnormally consumes more than 4 GB of RAM. When I launch the application, initial memory consumption is 700 to 800 MB, but after performing actions in the application (sometimes after a while, sometimes immediately) memory consumption spikes.
What could be the root cause, and how could I get my application down to 400 to 500 MB of RAM?
It's impossible to answer your question as asked: it's broad, and we don't know your code (and we won't read all of it, for obvious reasons). But I think you should read this article on JavaScript optimisation on the Google Chrome developer website.
You should analyse which functions use the most memory and pinpoint the problem to find a possible memory leak and/or optimise the code.
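As a starting point, you can watch the JS heap while you use the app. Chrome exposes a non-standard performance.memory object; a rough sketch (the property names are Chrome-specific, and the interval is arbitrary):

if (window.performance && performance.memory) {
    setInterval(function() {
        // usedJSHeapSize is in bytes; log it in MB every 5 seconds
        console.log('JS heap used (MB): ' +
            (performance.memory.usedJSHeapSize / 1048576).toFixed(1));
    }, 5000);
}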
https://developers.google.com/speed/articles/optimizing-javascript
Optimizing JavaScript code
Authors: Gregory Baker, Software Engineer on GMail & Erik Arvidsson,
Software Engineer on Google Chrome
Recommended experience: Working knowledge of JavaScript
Client-side scripting can make your application dynamic and active,
but the browser's interpretation of this code can itself introduce
inefficiencies, and the performance of different constructs varies
from client to client. Here we discuss a few tips and best practices
to optimize your JavaScript code.
Defining class methods
The following is inefficient, as each time an instance of baz.Bar is constructed, a new function and closure is created for foo:

baz.Bar = function() {
    // constructor body
    this.foo = function() {
        // method body
    };
};

The preferred approach is:

baz.Bar = function() {
    // constructor body
};

baz.Bar.prototype.foo = function() {
    // method body
};

With this approach, no matter how many instances of baz.Bar are constructed, only a single function is ever created for foo, and no closures are created.
Initializing instance variables
Place instance variable declaration/initialization on the prototype
for instance variables with value type (rather than reference type)
initialization values (i.e. values of type number, Boolean, null,
undefined, or string). This avoids unnecessarily running the
initialization code each time the constructor is called. (This can't
be done for instance variables whose initial value is dependent on
arguments to the constructor, or some other state at time of
construction.)
For example, instead of:

foo.Bar = function() {
    this.prop1_ = 4;
    this.prop2_ = true;
    this.prop3_ = [];
    this.prop4_ = 'blah';
};

Use:

foo.Bar = function() {
    this.prop3_ = [];
};

foo.Bar.prototype.prop1_ = 4;
foo.Bar.prototype.prop2_ = true;
foo.Bar.prototype.prop4_ = 'blah';

Avoiding pitfalls with closures
Closures are a powerful and useful feature of JavaScript; however,
they have several drawbacks, including:
They are the most common source of memory leaks.
Creating a closure is significantly slower than creating an inner function without a closure, and much slower than reusing a static function. For example:

function setupAlertTimeout() {
    var msg = 'Message to alert';
    window.setTimeout(function() { alert(msg); }, 100);
}

is slower than:

function setupAlertTimeout() {
    window.setTimeout(function() {
        var msg = 'Message to alert';
        alert(msg);
    }, 100);
}

which is slower than:

function alertMsg() {
    var msg = 'Message to alert';
    alert(msg);
}

function setupAlertTimeout() {
    window.setTimeout(alertMsg, 100);
}
They add a level to the scope chain. When the browser resolves properties, each level of the scope chain must be checked. In the following example:

var a = 'a';

function createFunctionWithClosure() {
    var b = 'b';
    return function() {
        var c = 'c';
        a;
        b;
        c;
    };
}

var f = createFunctionWithClosure();
f();

when f is invoked, referencing a is slower than referencing b, which is slower than referencing c. See IE+JScript Performance Recommendations Part 3: JavaScript Code inefficiencies for information on when to use closures with IE.
Avoiding with
Avoid using with in your code. It has a negative impact on
performance, as it modifies the scope chain, making it more expensive
to look up variables in other scopes.
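To make the cost concrete, here is a minimal sketch (the object and property names are invented; note also that with is disallowed entirely in strict mode). Inside the with block, every identifier must be checked against settings before the normal scope chain is searched:

var settings = { timeout: 100 };
with (settings) {
    // 'timeout' resolves against settings first, then the outer scopes
    timeout = 200;
}
// The explicit form leaves the scope chain untouched:
settings.timeout = 200;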
Avoiding browser memory leaks
Memory leaks are an all too common problem with web applications, and
can result in huge performance hits. As the memory usage of the
browser grows, your web application, along with the rest of the user's
system, slows down. The most common memory leaks for web applications involve circular references between the JavaScript script engine and the browser's C++ objects implementing the DOM (e.g. between the JavaScript script engine and Internet Explorer's COM infrastructure, or between the JavaScript engine and Firefox's XPCOM infrastructure).
Here are some rules of thumb for avoiding memory leaks:
Use an event system for attaching event handlers
The most common circular reference pattern [ DOM element --> event handler --> closure scope --> DOM element ] is discussed in this MSDN blog post. To avoid this problem, use one of the well-tested event systems for attaching event handlers, such as those in Google doctype, Dojo, or jQuery.
In addition, using inline event handlers can lead to another kind of
leak in IE. This is not the common circular reference type leak, but
rather a leak of an internal temporary anonymous script object. For
details, see the section on "DOM Insertion Order Leak Model" in
Understanding and Solving Internet Explorer Leak Patterns and an example in this JavaScript Kit tutorial.
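As an illustration of the cycle (the element id and function names here are invented, and this is a sketch of the pattern rather than the library approach recommended above):

// Leaky in old IE: the closure captures el, and el references the
// closure through onclick, forming a JS <--> DOM cycle.
function attachLeaky() {
    var el = document.getElementById('button');
    el.onclick = function() {
        el.style.color = 'red'; // closure holds el alive
    };
}

// Safer: a standalone handler that captures nothing, reading the
// element from the event object instead.
function onClick(e) {
    e.currentTarget.style.color = 'red';
}
function attachSafer() {
    document.getElementById('button').addEventListener('click', onClick, false);
}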
Avoid expando properties
Expando properties are arbitrary JavaScript properties on DOM elements
and are a common source of circular references. You can use expando
properties without introducing memory leaks, but it is pretty easy to
introduce one by accident. The leak pattern here is [ DOM element --> via expando --> intermediary object --> DOM element ]. The best thing
to do is to just avoid using them. If you do use them, only use values
with primitive types. If you do use non-primitive values, nullify the
expando property when it is no longer needed. See the section on
"Circular References" in Understanding and Solving Internet Explorer
Leak Patterns.
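A short sketch of that pattern (the element id and property names are invented):

// Leak pattern: DOM element --> expando --> intermediary object --> DOM element
var el = document.getElementById('widget');
var intermediary = { host: el };  // intermediary object holds the element
el.expandoRef = intermediary;     // element holds it back: a cycle

// The rules of thumb in practice: break the cycle when done...
el.expandoRef = null;
// ...or store only primitive values in the first place.
el.expandoLabel = 'widget-1';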
Related
I'm trying to learn more about memory management in JS, and I have some questions about closures.
Case 1:
// Suppose that the object variable is capable of emitting events
var object = new EventEmitter();
object.addEventListener('custom-event', function callback(event) {
object.removeEventListener('custom-event', callback);
object = null;
//Do some heavy computation like opening a specific view or something similar
var heavy_window = new HeavyWindow();
heavy_window.open();
});
Case 2:
// Suppose that the object variable is capable of emitting events
var object = new EventEmitter();
object.addEventListener('custom-event', callback = function(event) {
object.removeEventListener('custom-event', callback);
object = null;
//Do some heavy computation like opening a specific view or something similar
});
My questions are:
Case 1:
Is it correct to think that object remains in memory until heavy_window is nulled?
Can nulling the object variable inside the closure help the GC?
Case 2:
Can naming a closure this way, ...addEventListener(callback = function() {}), instead of ...addEventListener(function callback() {}), cause a memory leak? Will declaring callback = function() {} create a hidden global variable?
Note: I don't need examples in jQuery or another framework; I'm interested in learning more about vanilla JavaScript.
Thank you in advance.
I don't think the details of garbage collection are specified in the ES specification. However, in most interpreters, an object will usually be garbage-collected once no references to it exist or it is no longer reachable, possibly at some scheduled later time.
In case 1, heavy_window might have access to the original new EventEmitter (event often has references to the event target) if you pass the event object into HeavyWindow, and thus your new EventEmitter() might not be garbage-collected until all references to your heavy_window are gone. If not, then your new EventEmitter will be garbage-collected at some point in the future.
In case 2, assigning the callback function to a global variable callback will not change anything, because the callback function does not have any reference to the actual new EventEmitter object you have created. (Again, unless event has a reference to the new EventEmitter object you have created.)
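As a side note on the hidden global: in sloppy mode, assigning to an undeclared identifier silently creates a property on the global object, while strict mode turns the same assignment into a ReferenceError. A minimal check you can run yourself:

'use strict';
try {
    cb = function() {}; // no var/let/const declaration
} catch (e) {
    console.log(e instanceof ReferenceError); // true in strict mode
}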
In reality things might be a little different, since garbage collectors in browsers are fairly complex. At what time the interpreter decides to collect the garbage is really up to it, but the bottom line is that it will only do so when all references to an object are gone.
Memory management usually isn't a real concern in JavaScript, and you shouldn't need to think about it most of the time. You will know if there is a memory leak, and you will be able to detect it using the developer tools provided by the browser.
A powerful feature of Javascript is how you can use closures and higher order functions to hide private data:
function outerFunction(closureArgument) {
var closureVariable = something;
return function innerFunction() {
// do something with the closed variables
}
}
However, this means that every time I call outerFunction, a new innerFunction is declared and returned, which means that outerFunction(arg) !== outerFunction(arg). I wonder to what extent, if any, this might impact performance. As an example, a typical alternative would be to create a constructor that holds the data, and put innerFunction on the prototype:
function Constructor(argument) {
this.argument = argument;
this.variable = something;
}
Constructor.prototype.method = function() {
// do something with this.argument and this.variable
}
In that case, there is a single function that is shared by a number of instances, e.g. new Constructor().method === new Constructor().method, but the data cannot easily be made private, and the function can only be called as a method, so it has access to its context variables. (This is just an example; there are other ways to solve the issue; my question is not actually about constructors and prototypes.)
Does this matter from a performance perspective? It seems to me that since a function is always a literal, and the function body itself is immutable, Javascript engines should be able to optimize the above code example so that there shouldn't be a big performance impact because we have "multiple" functions instead of a single, shared function.
In summary: Does running the same function declaration multiple times come with any significant performance penalty, or do Javascript engines optimize it?
Yes, engines do optimise closures. A lot. There is no significant performance or memory penalty.
However, the prototype approach can still be faster, because engines are better at detecting the potential to inline a method call. You will want to step through the details in this article, though it could easily be outdated by newer compiler optimisations.
But don't fear closures because of that; any differences will be negligible in most cases you encounter. If in doubt, and with a real need for speed, benchmark your code (and avoid premature optimisation until then). If closures make your code easier to read, write and maintain, go for them. They are a powerful tool that you don't want to miss.
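If you do reach the point of benchmarking, a rough sketch of such a comparison (a micro-benchmark only; results vary widely across engines and versions, and modern JITs may optimize away work with no observable effect):

function makeClosure(n) {
    return function() { return n; };
}

function Proto(n) { this.n = n; }
Proto.prototype.get = function() { return this.n; };

console.time('closure');
for (var i = 0; i < 1e6; i++) makeClosure(i)();
console.timeEnd('closure');

console.time('prototype');
for (var j = 0; j < 1e6; j++) new Proto(j).get();
console.timeEnd('prototype');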
I'm reading this article (http://javascript.info/tutorial/memory-leaks#memory-leak-size) about memory leaks which mentions this as a memory leak:
function f() {
var data = "Large piece of data";
function inner() {
return "Foo";
}
return inner;
}
The JavaScript interpreter has no idea which variables may be required by the inner function, so it keeps everything, in every outer LexicalEnvironment. I hope newer interpreters try to optimize this, but I am not sure about their success.
The article suggests we need to manually set data = null before we return the inner function.
Does this hold true today? Or is this article outdated? (If it's outdated, can someone point me to a resource about current pitfalls)
Modern engines would not maintain unused variables in the outer scope.
Therefore, it doesn't matter if you set data = null before returning the inner function, because the inner function does not depend on ("close over") data.
If the inner function did depend on data--perhaps it returns it--then setting data = null is certainly not what you want to do, because then, well, it would be null instead of having its original value!
Assuming the inner function does depend on data, then yes, as long as inner is being pointed to (referred to by) something, then the value of data will have to be kept around. But, that's what you are saying you want! How can you have something available without having it be available?
Remember that at some point the variable which holds the return value of f() will itself go out of scope. At that point, at least until f() is called again, data will be garbage collected.
The general rule is that you don't need to worry about memory and leaks with JavaScript. That's the whole point of GC. The garbage collector does an excellent job of identifying what is needed and what is not needed, and keeping the former and garbage collecting the latter.
You may want to consider the following example:
function foo() {
var x = 1;
return function() { debugger; return 1; };
}
function bar() {
var x = 1;
return function() { debugger; return x; };
}
foo()();
bar()();
And examine its execution in Chrome devtools variable window. When the debugger stops in the inner function of foo, note that x is not present as a local variable or as a closure. For all practical purposes, it does not exist.
When the debugger stops in the inner function of bar, we see the variable x, because it had to be preserved so as to be accessible in order to be returned.
Does this hold true today? Or is this article outdated?
No, it doesn't, and yes, it is. The article is four years old, which is a lifetime in the web world. I have no way to know whether jQuery is still subject to leaks, but I'd be surprised if it were, and if so, there's an easy enough way to avoid them: don't use jQuery. The leaks the article's author mentions, related to DOM loops and event handlers, are not present in modern browsers, by which I mean IE10 (more likely IE9) and above. I'd suggest finding a more up-to-date reference if you really want to understand memory leaks. Actually, I'd suggest you mostly stop worrying about memory leaks; they occur only in very specialized situations. It's hard to find much on the topic on the web these days for that precise reason. Here's one article I found: http://point.davidglasser.net/2013/06/27/surprising-javascript-memory-leak.html.
Just in addition to #torazaburo's excellent answer, it is worth pointing out that the examples in that tutorial are not leaks. A leak is what happens when a program drops a reference to something but does not release the memory it consumes.
The last time I remember that JS developers had to really worry about genuine leaks was when Internet Explorer (6 and 7 I think) used separate memory management for the DOM and for JS. Because of this, it was possible to bind an onclick event to a button, destroy the button, and still have the event handler stranded in memory -- forever (or until the browser crashed or was closed by the user). You couldn't trigger the handler or release it after the fact. It just sat on the stack, taking up room. So if you had a long-lived webapp or a webpage that created and destroyed a lot of DOM elements, you had to be super diligent to always unbind events before destroying them.
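The defensive pattern from that era looked roughly like this (a sketch only; detachEvent is the old IE API, removeEventListener the standard one, and the function name is invented):

function destroyButton(button, handler) {
    // Unbind first, so the handler isn't stranded in IE6/7's separately
    // managed DOM memory, then remove the element itself.
    if (button.detachEvent) {
        button.detachEvent('onclick', handler);
    } else {
        button.removeEventListener('click', handler);
    }
    button.parentNode.removeChild(button);
}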
I've also run into a few annoying leaks in iOS but these were all bugs and were (eventually) patched by Apple.
That said, a good developer needs to keep resource management in mind when writing code. Consider these two constructors:
function F() {
    var data = "One megabyte of data";
    this.inner = function () {
        // each F instance creates its own closure over data
        return data;
    };
}
var G = function () {};
G.prototype.data = "One megabyte of data";
G.prototype.inner = function () {
return this.data;
};
If you were to create a thousand instances of F, the browser would have to allocate an extra gigabyte of memory for all those copies of the huge string. And every time you deleted an instance, you might get some onscreen jankiness when the GC eventually recovered that RAM. On the other hand, if you made a thousand instances of G, the huge string would be created once and reused by every instance. That is a huge performance boost.
But the advantage of F is that the huge string is essentially private. No other code outside of the constructor would be able to access that string directly. Because of that, each instance of F could mutate that string as much as it wanted and you'd never have to worry about causing problems for other instances.
On the other hand, the huge string in G is out there for anyone to change. Other instances could change it, and any code that shares the same scope as G could too.
So in this case, there is a trade-off between resource use and security.
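To see the trade-off in action, a small sketch reusing G from above: the shared string is visible to every instance, and one assignment through the prototype changes what they all see.

var g1 = new G();
var g2 = new G();
console.log(g1.inner() === g2.inner()); // true: one shared string
G.prototype.data = "changed";           // one assignment...
console.log(g2.inner());                // "changed" -- ...affects every instance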
I am finding myself rather confused regarding javascript garbage collection and how to best encourage it.
What I would like to know is related to a particular pattern. I am not interested in whether the pattern itself is considered a good or bad idea, I am simply interested in how a browsers garbage collector would respond, i.e would the references be freed and collected or would it cause leaks.
Imagine this pattern:
TEST = {
    init : function(){
        this.cache = {
            element : $('#element')
        };
    },
    func1 : function(){
        this.cache.element.show();
    },
    func2 : function(){
        TEST.cache.element.show();
    },
    func3 : function(){
        var self = this;
        self.cache.element.show();
    },
    func4 : function(){
        var element = this.cache.element;
        element.show();
    },
    func5 : function(){
        this.auxfunc1(this.cache.element);
    },
    auxfunc1 : function(el){
        el.show();
    },
    func6 : function(){
        var el = this.getElement();
        el.show();
    },
    getElement : function(){
        return this.cache.element;
    }
};
Now imagine that on page load TEST.init() is called;
Then later at various times the various functions are called.
What I would like to know is whether caching elements, objects, or anything else upon initialization and referring to them throughout the lifetime of the application, in the manner shown above, affects a browser's garbage collector positively or negatively.
Is there any difference? Which method best encourages garbage collection? Do they cause leaks? Are there any circular references? if so where?
This code, in itself, shouldn't cause any memory leaks. Especially not in modern browsers. It's just an object, like you have tons of others in any script. It all depends on where, how, and how long you reference it.
The basic rule is that, whenever an object is no longer referenced anywhere in the code (directly, by variable, or indirectly, through a closure accessing a scope), the GC will flag and sweep it.
If you use the above code and then assign something else to TEST, the object literal it referenced can be GC'ed, provided no other variable references the original object.
Of course, predicting memory leaks in JS is an inexact science. In my experience, they're not nearly as common as some would have you believe. Firebug, Chrome's console (profiler) and IE debugger get you along a long way.
Some time ago I did some more digging into this matter resulting in this question. Perhaps some links, and findings are helpful to you...
If not here's a couple of tips to avoid the obvious leaks:
Don't use global variables (they don't actually leak memory permanently, but they do for as long as your script runs).
Don't attach event handlers to the global object (window.onload ==> leaks memory in IE < 9, because the global object is never fully unloaded, hence the event handler isn't GC'ed).
Just wrap your script in a huge IIFE, and use strict mode whenever possible; see the sketch after this list. That way, you create a scope that can be GC'ed in its entirety on unload.
Test, test and test again. Don't believe every blogpost you read on the subject! If something was an issue last year, that needn't be the case today. This answer might not be 100% accurate anymore by the time you read this, or because just this morning some miracle-patch for JS GC'ing was written by erm... Paris Hilton or some other alien life-form.
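The IIFE-plus-strict-mode wrapper mentioned above looks like this (a minimal sketch; the cache variable mirrors your example):

(function() {
    'use strict';
    // Everything declared in here is local to this function scope,
    // not a global, so it can be collected once it is unreachable.
    var cache = { element: document.getElementById('element') };
    // ... application code ...
}());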
Oh, and to answer your in-comment question: "But what concerns me is if each time I call this.cache.element, is it creating a new reference within that function's scope ON TOP of the original cache reference which will not be garbage collected?"
The answer is no. this will reference the TEST object, and the init function assigns the object a property cache, which is itself another object literal with one property referencing a jQuery object. That property (cache) and everything accessible through it will sit in memory right until you either delete TEST.cache or delete TEST. If you were to create var cache = {...}; instead, that object would be GC'ed when the init function returns, because a variable cannot outlive its scope, except when you're using closures and exposing certain variables indirectly.
// Case A
function Constructor() {
this.foo = function() {
...
};
...
}
// vs
// Case B
function Constructor() {
...
};
Constructor.prototype.foo = function() {
...
}
One of the main reasons people advise the use of prototypes is that .foo is created once in the case of the prototype, whereas this.foo is created multiple times when using the other approach.
However, one would expect interpreters to be able to optimize this, so that there is only one copy of the function foo in case A.
Of course you would still have a unique scope context for each object because of closures, but that has less overhead than a new function for each object.
Do modern JS interpreters optimise Case A so there is only one copy of the function foo ?
Yes, creating functions uses more memory.
... and, no, interpreters don't optimize Case A down to a single function.
The reason is that the JS scope chain requires each instance of a function to capture the variables available to it at the time it's created. That said, modern interpreters are better about Case A than they used to be, largely because the performance of closures was a known issue a couple of years ago.
Mozilla says to avoid unnecessary closures for this reason, but closures are one of the most powerful and often used tools in a JS developer's toolkit.
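To see why consolidation isn't generally possible, note that each per-instance function may close over different values, so the function objects cannot be merged. A minimal illustration (the constructor name is invented):

function C(n) {
    this.foo = function() { return n; };
}
var a = new C(1);
var b = new C(2);
console.log(a.foo === b.foo);  // false: two distinct function objects
console.log(a.foo(), b.foo()); // 1 2 -- each closed over its own n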
Update: Just ran this test that creates 1M 'instances' of Constructor, using node.js (which is V8, the JS interpreter in Chrome). With caseA = true I get this memory usage:
{
rss: 212291584, //212 MB
vsize: 3279040512, //3279 MB
heapTotal: 203424416, //203 MB
heapUsed: 180715856 //180 MB
}
And with caseA = false I get this memory usage:
{
rss: 73535488, //73 MB
vsize: 3149352960, //3149 MB
heapTotal: 74908960, //74 MB
heapUsed: 56308008 //56 MB
}
So the closure functions are definitely consuming significantly more memory, by almost 3X. But in the absolute sense, we're only talking about a difference of ~140-150 bytes per instance. (However that will likely increase depending on the number of in-scope variables you have when the function is created).
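The test script itself isn't shown; a minimal reconstruction of what it might have looked like (my sketch, not the author's code):

var caseA = true; // flip to false for the prototype variant

function Constructor() {
    if (caseA) {
        this.foo = function() { return 1; };
    }
}
if (!caseA) {
    Constructor.prototype.foo = function() { return 1; };
}

var instances = [];
for (var i = 0; i < 1000000; i++) {
    instances.push(new Constructor());
}
console.log(process.memoryUsage());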
I believe, after some brief testing in node, that in both Case A and B there is only one copy of the actual code for the function foo in memory.
Case A - there is a function object created for each execution of the Constructor() storing a reference to the functions code, and its current execution scope.
Case B - there is only one scope, one function object, shared via prototype.
The JavaScript interpreters aren't optimizing prototype objects either. It's merely a case of there only being one of them per type (that multiple instances reference). Constructors, on the other hand, create new instances along with the methods defined within them. So by definition this really isn't an issue of interpreter 'optimization', but of simply understanding what's taking place.
On a side note, if the interpreter were to try to consolidate instance methods, you would run into issues if you ever decided to change the value of one in a particular instance (I would prefer that headache not be added to the language) :)