Does creating functions consume more memory - javascript

// Case A
function Constructor() {
    this.foo = function() {
        // ...
    };
    // ...
}

// vs

// Case B
function Constructor() {
    // ...
}
Constructor.prototype.foo = function() {
    // ...
};
One of the main reasons people advise the use of prototypes is that .foo is created once in the prototype case, whereas this.foo is created anew for every instance with the other approach.
However, one would expect interpreters to be able to optimize this so that there is only one copy of the function foo in Case A.
Of course you would still have a unique scope context for each object because of closures, but that has less overhead than a new function for each object.
Do modern JS interpreters optimize Case A so that there is only one copy of the function foo?

Yes, creating functions uses more memory.
... and, no, interpreters don't optimize Case A down to a single function.
The reason is that the JS scope chain requires each instance of a function to capture the variables available to it at the time it's created. That said, modern interpreters handle Case A better than they used to, largely because the performance of closure functions was a known issue a couple of years ago.
Mozilla says to avoid unnecessary closures for this reason, but closures are one of the most powerful and often used tools in a JS developer's toolkit.
Update: Just ran this test that creates 1M 'instances' of Constructor, using node.js (which uses V8, the JS engine in Chrome). With caseA = true I get this memory usage:
{
    rss: 212291584,       // 212 MB
    vsize: 3279040512,    // 3279 MB
    heapTotal: 203424416, // 203 MB
    heapUsed: 180715856   // 180 MB
}
And with caseA = false I get this memory usage:
{
    rss: 73535488,        // 73 MB
    vsize: 3149352960,    // 3149 MB
    heapTotal: 74908960,  // 74 MB
    heapUsed: 56308008    // 56 MB
}
So the closure functions are definitely consuming significantly more memory, by almost 3X. But in absolute terms, we're only talking about a difference of ~140-150 bytes per instance. (However, that will likely increase with the number of in-scope variables you have when the function is created.)
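For reference, here is a minimal sketch of a comparable test (the original test script isn't shown; the caseA flag, the 1M instance count, and the use of process.memoryUsage() are assumptions on my part):

// Hypothetical reconstruction of the benchmark described above (run with node).
// Toggle caseA to compare per-instance closures vs. a shared prototype method.
var caseA = true;

function Constructor() {
    if (caseA) {
        this.foo = function () { return 42; }; // new function object per instance
    }
}
if (!caseA) {
    Constructor.prototype.foo = function () { return 42; }; // one shared function
}

var instances = [];
for (var i = 0; i < 1e6; i++) {
    instances.push(new Constructor());
}

console.log(process.memoryUsage()); // { rss, heapTotal, heapUsed, ... }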

I believe, after some brief testing in Node, that in both Case A and Case B there is only one copy of the actual compiled code for the function foo in memory.
Case A - a new function object is created for each execution of Constructor(), storing a reference to the function's code and its current execution scope.
Case B - there is only one scope and one function object, shared via the prototype.
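A quick way to see that difference from script code (a small sketch; the constructor and method names are just for illustration) is to compare the function objects across two instances:

function CaseA() { this.foo = function () {}; }
function CaseB() {}
CaseB.prototype.foo = function () {};

var a1 = new CaseA(), a2 = new CaseA();
var b1 = new CaseB(), b2 = new CaseB();

console.log(a1.foo === a2.foo); // false - each instance gets its own function object
console.log(b1.foo === b2.foo); // true  - both instances share the prototype's function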

JavaScript interpreters aren't optimizing prototype objects either. It's merely a case of there being only one of them per type (which multiple instances reference). Constructors, on the other hand, create new instances and the methods defined within them. So by definition, this really isn't an issue of interpreter 'optimization' but of simply understanding what's taking place.
On a side note, if the interpreter were to try to consolidate instance methods, you would run into issues if you ever decided to change the value of one in a particular instance (I would prefer that headache not be added to the language) :)
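To illustrate that last point (a small sketch of my own, not from the original answer): assigning a different function to one instance only affects that instance, which is exactly the behaviour consolidation would complicate.

function Widget() { this.render = function () { return "default"; }; }

var w1 = new Widget();
var w2 = new Widget();

w1.render = function () { return "custom"; }; // only w1 changes

console.log(w1.render()); // "custom"
console.log(w2.render()); // "default" - untouched, because each instance owns its method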

Related

How does the built-in Function object approach use substantially more memory than the function literal?

The snippet below defines a function Robot() that is used as an object constructor, a function that creates an object.
function Robot(robotName) {
    this.name = robotName;
    this.sayHi = function () { console.log("Hi my name is " + this.name); };
    this.sayBye = function () { console.log("Bye!"); };
    this.sayAnything = function (msg) { console.log(this.name + " says " + msg); };
}
The second approach is
function Robot(robotName) {
    this.name = robotName;
    this.sayHi = new Function("console.log('Hi my name is ' + this.name);");
    this.sayBye = new Function("console.log('Bye!');");
    this.sayAnything = new Function("msg", "console.log(this.name + ' says ' + msg);");
}
The book which I am reading says -
The only downside to 2nd approach is that it will use substantially
more memory, as new function objects are created every time you create
a new instance of the Robot object.
I see that when I do the following:
var wally = new Robot("Wally");
In both approaches, the wally robot has 3 function objects.
wally.sayHi(); //1
wally.sayAnything("I don't know what to say"); //2
wally.sayBye(); //3
How does the 2nd approach use substantially more memory, then?
When code gets parsed, it is turned into an internal representation that the engine can execute. For regular code that happens once, when the code is loaded. If you dynamically turn strings into code (as with Function), it happens when Function() is called, i.e. each time your constructor runs. With the second approach the engine therefore has to create and keep another internal representation of the code, so if you create 10,000 instances, there will be 10,000 code representations.

This will not only eat up memory, it will also degrade performance: optimizations are done on a per-function basis, and parsing the code also takes time, so the second approach will probably execute much, much slower (yes, the engine could optimize those differences away, but I guess it probably won't).
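A rough way to see the cost (a hedged sketch; the timing loop and instance count are my own, not from the answer) is to time constructing many instances with each variant:

function RobotLiteral(robotName) {
    this.name = robotName;
    this.sayHi = function () { console.log("Hi my name is " + this.name); };
}

function RobotNewFunction(robotName) {
    this.name = robotName;
    this.sayHi = new Function("console.log('Hi my name is ' + this.name);");
}

console.time("function literal");
for (var i = 0; i < 100000; i++) { new RobotLiteral("Wally"); }
console.timeEnd("function literal");

console.time("new Function");
for (var j = 0; j < 100000; j++) { new RobotNewFunction("Wally"); }
console.timeEnd("new Function"); // typically far slower, since the string is re-parsed per call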
There are a few other downsides:
1) No syntax highlighting from your IDE
2) unreadable code
So never, never, never ever use the second version. And get a new book, as:
1) It makes no sense to learn how to dynamically create functions from strings; you will never have to use it.
2) It says that the second is worse because "new function objects are created every time you create a new instance of the Robot object", which applies to both the first and the second snippet (so that's not really a reason).
It's more about how JavaScript works.
As you know, on first launch, JavaScript builds an AST and optimises the code. So in the first case the function will be "precompiled" once. In the second, the AST construction and "precompilation" will be done every time.
Most engines will create a new stack frame (more memory), as they have to invoke the interpreter for each new Function("") call, with an anonymous IIFE (harder to debug). It's similar to using eval(). Besides the memory issues, it allows for injection attacks and can also cause scope issues depending on strict mode.

How to optimise RAM consumption by a JavaScript web application

I have a JavaScript web application which abnormally consumes more than 4 GB of RAM. When I launch the application, the initial memory consumption is 700 to 800 MB; while doing any action on the application (sometimes for longer), RAM consumption immediately spikes up.
What could be the root cause of this, and how could I make my application consume 400 to 500 MB of RAM?
It's impossible to answer your question; it's broad and we don't know your code (and we won't read it all, for obvious reasons). But I think you should read this article on JavaScript optimisation on the Google Chrome developer website.
You should analyse which functions use the most memory and pinpoint the problem to find a possible memory leak and/or optimise the code.
https://developers.google.com/speed/articles/optimizing-javascript
Optimizing JavaScript code
Authors: Gregory Baker, Software Engineer on GMail & Erik Arvidsson,
Software Engineer on Google Chrome
Recommended experience: Working knowledge of JavaScript
Client-side scripting can make your application dynamic and active,
but the browser's interpretation of this code can itself introduce
inefficiencies, and the performance of different constructs varies
from client to client. Here we discuss a few tips and best practices
to optimize your JavaScript code.
Defining class methods
The following is inefficient, as each time an instance of baz.Bar is constructed, a new function and closure is created for foo:

baz.Bar = function() {
    // constructor body
    this.foo = function() {
        // method body
    };
};

The preferred approach is:

baz.Bar = function() {
    // constructor body
};
baz.Bar.prototype.foo = function() {
    // method body
};

With this approach, no matter how many instances of baz.Bar are constructed, only a single function is ever created for foo, and no closures are created.
Initializing instance variables
Place instance variable declaration/initialization on the prototype
for instance variables with value type (rather than reference type)
initialization values (i.e. values of type number, Boolean, null,
undefined, or string). This avoids unnecessarily running the
initialization code each time the constructor is called. (This can't
be done for instance variables whose initial value is dependent on
arguments to the constructor, or some other state at time of
construction.)
For example, instead of:
foo.Bar = function() {
    this.prop1_ = 4;
    this.prop2_ = true;
    this.prop3_ = [];
    this.prop4_ = 'blah';
};

Use:

foo.Bar = function() {
    this.prop3_ = [];
};
foo.Bar.prototype.prop1_ = 4;
foo.Bar.prototype.prop2_ = true;
foo.Bar.prototype.prop4_ = 'blah';

Avoiding pitfalls with closures
Closures are a powerful and useful feature of JavaScript; however,
they have several drawbacks, including:
They are the most common source of memory leaks. Creating a closure is
significantly slower than creating an inner function without a
closure, and much slower than reusing a static function. For example:
function setupAlertTimeout() {
    var msg = 'Message to alert';
    window.setTimeout(function() { alert(msg); }, 100);
}

is slower than:

function setupAlertTimeout() {
    window.setTimeout(function() {
        var msg = 'Message to alert';
        alert(msg);
    }, 100);
}

which is slower than:

function alertMsg() {
    var msg = 'Message to alert';
    alert(msg);
}
function setupAlertTimeout() {
    window.setTimeout(alertMsg, 100);
}
They add a level to the scope chain. When the browser resolves
properties, each level of the scope chain must be checked. In the
following example:
var a = 'a';

function createFunctionWithClosure() {
    var b = 'b';
    return function () {
        var c = 'c';
        a;
        b;
        c;
    };
}

var f = createFunctionWithClosure();
f();

When f is invoked, referencing a is slower than referencing b, which is slower than referencing c. See IE+JScript Performance Recommendations Part 3: JavaScript Code inefficiencies for information on when to use closures with IE.
Avoiding with
Avoid using with in your code. It has a negative impact on
performance, as it modifies the scope chain, making it more expensive
to look up variables in other scopes.
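For context, here is a tiny sketch of the pattern being discouraged (my own example, not part of the quoted article):

var settings = { width: 100, height: 50 };

// Discouraged: `with` injects the object into the scope chain,
// so every lookup inside the block has to check `settings` first.
// (It is also a syntax error in strict mode.)
with (settings) {
    console.log(width * height);
}

// Preferred: reference the object explicitly.
console.log(settings.width * settings.height);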
Avoiding browser memory leaks
Memory leaks are an all too common problem with web applications, and
can result in huge performance hits. As the memory usage of the
browser grows, your web application, along with the rest of the user's
system, slows down. The most common memory leaks for web applications
involve circular references between the JavaScript script engine and
the browser's C++ objects implementing the DOM (e.g. between the
JavaScript script engine and Internet Explorer's COM infrastructure,
or between the JavaScript engine and Firefox XPCOM infrastructure).
Here are some rules of thumb for avoiding memory leaks:
Use an event system for attaching event handlers
The most common circular reference pattern [ DOM element --> event handler --> closure scope --> DOM element ] is discussed in this MSDN
blog post. To avoid this problem, use one of the well-tested event
systems for attaching event handlers, such as those in Google doctype,
Dojo, or JQuery.
In addition, using inline event handlers can lead to another kind of
leak in IE. This is not the common circular reference type leak, but
rather a leak of an internal temporary anonymous script object. For
details, see the section on "DOM Insertion Order Leak Model" in
Understanding and Solving Internet Explorer Leak Patterns and an
example in this JavaScript Kit tutorial.
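As a rough illustration (a sketch of the general idea, not code from the article; the element id is hypothetical), attaching handlers through the event API rather than inline makes it easier to avoid and break the element/closure cycle:

// Leak-prone pattern (historically, in older IE): the closure captures `el`,
// and `el.onclick` references the closure, forming a cycle.
var el = document.getElementById('save-button');
el.onclick = function () { console.log(el.id); };

// Safer pattern: use the event API and avoid referencing the element from the handler.
function onSaveClick(event) { console.log(event.currentTarget.id); }
el.addEventListener('click', onSaveClick);

// Detach when the element goes away, so nothing keeps the pair alive.
el.removeEventListener('click', onSaveClick);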
Avoid expando properties
Expando properties are arbitrary JavaScript properties on DOM elements
and are a common source of circular references. You can use expando
properties without introducing memory leaks, but it is pretty easy to
introduce one by accident. The leak pattern here is [ DOM element -->
via expando --> intermediary object --> DOM element ]. The best thing
to do is to just avoid using them. If you do use them, only use values
with primitive types. If you do use non-primitive values, nullify the
expando property when it is no longer needed. See the section on
"Circular References" in Understanding and Solving Internet Explorer
Leak Patterns.
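A small sketch of that recommendation (my own example, not from the article; the element id is hypothetical): keep expando values primitive, or null them out when done.

var node = document.getElementById('row-42');

// Fine: primitive value, no object graph hanging off the DOM element.
node.rowIndexCache = 42;

// Risky: the object references the element again and forms a cycle.
node.meta = { element: node, selected: true };

// Mitigation: break the reference once it is no longer needed.
node.meta = null;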

Should I avoid executing the same function declaration multiple times for performance reasons?

A powerful feature of JavaScript is how you can use closures and higher-order functions to hide private data:
function outerFunction(closureArgument) {
    var closureVariable = something;
    return function innerFunction() {
        // do something with the closed variables
    };
}
However, this means that every time I call outerFunction, a new innerFunction is declared and returned, which means that outerFunction(arg) !== outerFunction(arg). I wonder to what extent, if any, this might impact performance. As an example, a typical alternative would be to create a constructor that holds the data, and put innerFunction on the prototype:
function Constructor(argument) {
    this.argument = argument;
    this.variable = something;
}
Constructor.prototype.method = function() {
    // do something with this.argument and this.variable
};
In that case, there is a single function that is shared by a number of instances, e.g. new Constructor().method === new Constructor().method, but the data cannot easily be made private, and the function can only be called as a method, so it has access to its context variables (this is just an example; there are other ways to solve the issue; my question is not actually about constructors and prototypes).
Does this matter from a performance perspective? It seems to me that since a function is always a literal, and the function body itself is immutable, Javascript engines should be able to optimize the above code example so that there shouldn't be a big performance impact because we have "multiple" functions instead of a single, shared function.
In summary: Does running the same function declaration multiple times come with any significant performance penalty, or do Javascript engines optimize it?
Yes, engines do optimise closures. A lot. There is no significant performance or memory penalty.
However, the prototype approach still can be faster, because engines are better at detecting the potential to inline a method call. You will want to step through the details in this article - though it could easily be outdated by newer compiler optimisations.
But don't fear closures because of that, any differences will be negligible for most cases you encounter. If in doubt, and with real need for speed, benchmark your code (and avoid premature optimisation till then). If closures make your code easier to read, write, maintain, go for them. They are a powerful tool that you don't want to miss.

What happens to a JavaScript function when it is closured in as a variable?

I've read many questions on closures and JavaScript on SO, but I can't find information about what happens to a function when it gets closured in. Usually the examples are string literals or simple objects. See the example below. When you closure in a function, the original function is preserved even if you change it later on.
What technically happens to the function preserved in a closure? How is it stored in memory? How is it preserved?
See the following code as an example:
var makeFunc = function () {
    var fn = document.getElementById; // Preserving this, I will change it later
    function getOriginal() {
        return fn; // Closure it in
    }
    return getOriginal;
};
var myFunc = makeFunc();
var oldGetElementById = myFunc(); // Get my preserved function
document.getElementById = function () { // Change the original function
    return "foo";
};
console.log(oldGetElementById.call(document, "myDiv")); // Calls the original!
console.log(document.getElementById("myDiv")); // Returns "foo"
Thanks for the comments and discussion. After your recommendations, I found the following.
What technically happens to the function preserved in a closure?
Functions as objects are treated no differently than any other simple object as closured variables (such as a string or object).
How is it stored in memory? How is it preserved?
To answer this, I had to dig through some texts on programming languages. John C. Mitchell's Concepts in Programming Languages explains that closured variables usually end up in the program heap.
Therefore, the variables defined in nesting subprograms may need lifetimes that are of the entire program, rather than just the time during which the subprogram in which they were defined is active. A variable whose lifetime is that of the whole program is said to have unlimited extent. This usually means they must be heap-dynamic, rather than stack-dynamic.
And more specific to JavaScript runtimes, Dmitry Soshnikov describes
As to implementations, for storing local variables after the context is destroyed, the stack-based implementation is not fit any more (because it contradicts the definition of stack-based structure). Therefore in this case closured data of the parent context are saved in the dynamic memory allocation (in the “heap”, i.e. heap-based implementations), with using a garbage collector (GC) and references counting. Such systems are less effective by speed than stack-based systems. However, implementations may always optimize it: at parsing stage to find out, whether free variables are used in function, and depending on this decide — to place the data in the stack or in the “heap”.
Further, Dmitry shows a varied implementation of the parent scope in function closures:
As we mentioned, for optimization purpose, when a function does not use free variables, implementations may not to save a parent scope chain. However, in ECMA-262-3 specification nothing is said about it; therefore, formally (and by the technical algorithm) — all functions save scope chain in the [[Scope]] property at creation moment.
Some implementations allow access to the closured scope directly. For example in Rhino, the [[Scope]] property of a function corresponds to a non-standard property __parent__.

What are the benefits of writing "functional" Javascript as opposed to OO?

I often come across Javascript code snippets that consist of many anonymous functions that are called where they are created, such as here:
var prealloc = (function() {
    // some definitions here
    return function prealloc_win(file, size, perms, sparseOk) {
        // function body
    };
})();
// can be called like this:
prealloc(...);
So this calls an anonymous function which returns another function prealloc_win. To me this seems equivalent to instantiating a class where the resulting object exposes the function prealloc_win:
function preallocObj() {
    // some definitions here
    this.prealloc_win = function(file, size, perms, sparseOk) {
        // function body
    };
}
prealloc = new preallocObj();
// can be called like this:
prealloc.prealloc_win(...);
Is this assumption correct? What are the benefits of using anonymous functions that are called directly? And why is this idiom so often seen in Javascript, but not often in other languages which could be written in the same way (C, C++, Python)?
The deal is that the preallocObj class says this is something that could be instantiated multiple times. I could just create more instances of it even though it wasn't really designed for that. You could do some hacks to prevent that, but it's easier just to use the immediately invoked anonymous function for this.
With the immediately created and invoked anonymous function, a "class" is created, instantly "instantiated", and assigned to prealloc, and there is no way to reference the original anonymous function that created the prealloc object after this. It was created, invoked, and lost.
You pretty much have the right idea. The benefits of this module pattern/function builder are that the resultant function can enclose its own internal definitions or state.
It's basically just a way to create a function with private variables or constants. Consider the less efficient alternative:
var prealloc = function() {
    // some definitions here
    // function body
};
Every time this function is called it would reassign/instantiate its variables, adding unnecessary performance overhead and overwriting any state data that resulted from previous calls.
This method is useful when there are some variables that are important to the workings of a function that you want only private access to or that you need to persist between invocations without contaminating the outer scope.
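A small sketch of that idea (the counter is my own illustrative example): the immediately invoked function keeps state private yet persistent across calls, without touching the outer scope.

var nextId = (function () {
    var counter = 0; // private: only the returned function can see it
    return function () {
        counter += 1;
        return counter;
    };
})();

console.log(nextId()); // 1
console.log(nextId()); // 2
console.log(typeof counter); // "undefined" - nothing leaked into the outer scope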
JavaScript is fundamentally very different from C++, Java, and Python, and should be written in different ways. At the risk of repetition, JavaScript is not a classical OOP language; it is a prototype-based language. Douglas Crockford (inventor of JSON) at Yahoo has some wonderful articles and particularly videos entitled 'JavaScript - The Good Parts'; you should watch them all.
