I have the below JS:
var z = function() {
    return "string";
};

var x = function() {
    var y = new z();
    var div = document.createElement('div');
    div.innerHTML = y;
    document.body.appendChild(div);
    /*
    my code…hundreds of other functions. Entire app is JS, all data comes
    through sockets and elements are created using JS.
    */
};
I have a couple of questions which might sound stupid but I am hoping not.
So inside 'x' there are 'y' and 'div'. Now, if these two variables are only used there, do they still 'live' inside the JS in the browser, or do they vanish?
Basically, do I need to set them to null to avoid extra memory being used on useless items?
Also, I wrote about 25k lines of JS and all the elements are created using JS. The app stays up for about 9 hours until they close it, and it starts all over again on another day. For those hours I am worried it will keep getting slower due to its size. Could this be true?
In terms of your application's memory usage, every time x() is called it creates a temporary instance of the local variable y. This is discarded once the function has run to completion.
x is an anonymous function expression, and each call to it creates a new scope. Variables declared with var inside it belong to that scope; when x() is called as a plain function, this refers to the global object, so you can still reach the global scope through window.
There will be differences in how the various browsers handle this kind of situation but the result is pretty much the same.
Browsers are always being optimised to handle memory more efficiently as well as to run faster. They are also optimising away scope-chain lookup costs, which should result in improved performance.
Due to the nature of your anonymous function x(), there may be moments where the browser runs its "garbage collection", which could briefly slow or pause script execution, but after that it should run without problems.
The JavaScript engines inside modern browsers can handle an incredible amount of processing; many libraries (such as jQuery) demand exactly that.
I would not worry too much about the JavaScript engine and your 25k lines; it comes down more to your code itself and what it is doing.
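To make the first point concrete, here is a minimal sketch (the function and variable names are invented for illustration): locals declared with var become unreachable as soon as the function returns, so there is nothing to null out.
function makeGreeting() {
    var text = "hello";                          // local string, just for illustration
    var div = document.createElement('div');
    div.innerHTML = text;
    document.body.appendChild(div);              // the node stays reachable via the DOM
    // No need for `text = null` or `div = null`: both bindings become
    // unreachable when the function returns, so the GC can reclaim them
    // on its next pass.
}
makeGreeting();
// After the call, only the appended <div> remains, reachable through document.body.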
Related
Question 1: Does it take significantly more time to initialize a JavaScript object if it has a large number of variables and functions?
Question 2: Is a large JavaScript (.js) file size a performance issue?
For instance:
I am creating JavaScript objects using prototypes; my sample code is below:
function SimpleObject(){
    // no variables and functions attached to it
}

function RootObject(){
    var one = "one";
    var two = "two";
    .
    .
    var one_thousand = "One Thousand";
}

function Child_1_Object(){
    // n number of variables
}
.
.
function Child_1000_Object(){
    // n number of variables
}

RootObject.prototype.get_Child_1_Object = function(){
    return new Child_1_Object();
};
.
.
RootObject.prototype.get_Child_1000_Object = function(){
    return new Child_1000_Object();
};
All the above code is in one .js file which has 10k lines of code (10 KLOC).
My question is: when I create an object of RootObject, will it take significantly more time compared to creating a SimpleObject?
Question one:
Making an object more complicated, with members, functions etc. will increase the amount of time needed to instantiate it. I wrote a simple jsPerf test here to prove the point: http://jsperf.com/simple-vs-complex-object
One thing to note, though: you're still creating hundreds of thousands of objects a second, so it's hardly slow, even for very complex objects.
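If you want to reproduce that comparison locally rather than on jsPerf, a rough sketch might look like the following (the constructor names are invented, and a single console.time run is easily skewed by JIT warm-up and GC pauses, so treat the numbers as indicative only):
function Simple() {}

function Complex() {
    this.a = 1;
    this.b = 2;
    this.c = 3;
    this.sum = function () { return this.a + this.b + this.c; };
}

console.time('simple');
for (var i = 0; i < 1000000; i++) { new Simple(); }
console.timeEnd('simple');

console.time('complex');
for (var j = 0; j < 1000000; j++) { new Complex(); }
console.timeEnd('complex');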
Question two:
Large files are problematic only because of their size in terms of a client having to download them. Minifying your code will help with this. e.g. http://javascript-minifier.com/
Definitely. In fast, modern browsers these are milliseconds (or less), but every variable must be initialised, so 10k variables are always worse than one.
It is, as the file needs to be downloaded by the browser (so the bigger, the slower) and then parsed by the JS engine (again, the bigger, the slower).
It's simple math, although, like I said before, if you're just initialising variables the delays are negligible, so you don't have to worry about that.
No. Most of the amount of time required to instantiate a new object will be determined by what is done in the constructor.
The main issue with having a lot of JavaScript (10k lines is not even remotely a large JavaScript file) is still a matter of what it is actually doing. Sure, some JavaScript VMs may run into performance issues if you have 10 MB of JavaScript, but I've never seen even Internet Explorer 7 choke on something like ExtJS 3.4, which has about 2.5 MB of uncompressed JavaScript.
Now, download speed may be an issue, but parsing JavaScript is not. All of that is written in C or C++ and runs very quickly. When you are just declaring object types in JavaScript, much of that is just code parsing and assigning the prototype as a known type within the JavaScript VM. It doesn't have to actually execute the constructor at that point so the code you have above will run quickly until, perhaps, you start initializing objects.
Another thing about parsing JavaScript is that parsing is only one of the steps that other languages take. Java, C#, C, C++, etc. also have at least a stage of converting the parse tree into some form of object code. Java and C# stop there because their runtime has to do additional JIT compilation and optimization on the fly; C/C++ and some others have to do linking and optimization before generating usable machine code. Not that "parsing is easy" by any means on a programming language, but it is not the most performance-intensive part.
I have been working on a web project for the past 4 months. To optimise code performance we have used a pattern, and my question is whether it actually boosts performance or not.
Whenever we have to use the this object, we assign it to a local variable and use that:
function someFunction()
{
    var thisObject = this;
    // use thisObject in all the following code.
}
The assumption here is that assigning this to a local stack variable will boost performance.
I have not seen this style of code anywhere else, so I am not sure whether it is of any use.
EDIT: I know that assigning this to a local variable is commonly done to preserve its value for callbacks, but that is not our case.
While this is a common practice in JavaScript, it's not done for performance reasons. Saving the this object in another named local is usually done to preserve the value of this across callbacks defined within the function.
function someFunction() {
    var thisObject = this;
    var someCallback = function() {
        console.log(thisObject === this); // Could print true or false
    };
    return someCallback;
}
Whether or not thisObject === this evaluates to true will depend on how it's called:
var o = {};
o.someFunction = someFunction;      // assign the function itself; don't call it yet
var callback = o.someFunction();    // inside someFunction, `this` is `o`
callback();        // prints false
callback.call(o);  // prints true
As with ALL performance questions, they should be examined by actually measuring the performance. In a rather simple test case (your actual code might vary some), I find mixed results across the browsers:
Chrome and Firefox don't differ very much between the two tests, but the slight difference is in opposite directions between the two. IE9 shows the test using a saved copy of this which I called self to be significantly slower.
Without a significant and consistent performance difference in Chrome and Firefox and IE9 showing the this test case to be substantially faster, I think you can conclude that the design pattern you asked about is not delivering a consistent performance boost across browsers.
In my code, I save a copy of this to another variable only when I need it for a consistent reference to the original object inside of inline event handlers, callbacks or methods where this has been set to something else. In other words, I only apply this design pattern when needed.
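For illustration, the only situation where I reach for it looks something like this sketch (the Counter constructor and the element are made up), where the event handler would otherwise see a different this:
function Counter(element) {
    var self = this;   // saved only because the click handler gets its own `this`
    this.count = 0;

    element.addEventListener('click', function () {
        // Inside this handler `this` is the element, not the Counter,
        // so we go through the saved reference instead.
        self.count += 1;
        element.textContent = self.count;
    }, false);
}

// Usage sketch (the element id is made up):
// new Counter(document.getElementById('my-button'));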
In a previous discussion of this design pattern here on SO, it was concluded that some libraries use it to allow additional minification: this cannot be minified below the four characters it takes up, whereas assigning it to a local variable lets it be shortened to a single-character variable name.
Even when this kind of optimization has a (positive) effect, it is very likely to depend on the interpreter.
Another release may reverse the results.
However, in the end you should measure, not guess.
So this is puzzling to me. Are variables expensive?
Let's take a look at this code:
var div1 = document.createElement('div');
var div2 = document.createElement('div');
div2.appendChild(div1);
Now this one
var div2 = document.createElement('div');
div2.appendChild(document.createElement('div'));
So, on the second example, I'm doing the same thing as on the first one. Is it more expensive to declare a variable than not? Do I save memory by just creating and using an element on the fly?
EDIT: this is sample code to illustrate the question. I know that in this specific case the memory saved (or not) is minimal.
Variables are extremely cheap in JavaScript, yet, no matter how cheap they are, everything has a cost.
Having said that, it's not a noticeable difference, and you should think more about making your code readable than about micro-optimizations like this.
You should be using variables to save repeated operations. In your example above, you most likely need the variable pointing to the newly created div, or you won't be able to do anything with it... unless you end up retrieving it from the DOM, which will prove many times more costly than the variable, as DOM manipulation is one of the slowest parts of JavaScript.
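As a sketch of the kind of repetition a variable should save (the element id is made up for the example):
// Repeated DOM lookups: the expensive part is the query, not the variable.
document.getElementById('status').textContent = 'Loading';
document.getElementById('status').className = 'busy';

// Cheaper and clearer: pay for the lookup once and keep the reference.
var statusEl = document.getElementById('status');
statusEl.textContent = 'Loading';
statusEl.className = 'busy';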
In theory, there is a tiny little amount of memory you save by not doing the variable declaration, because a variable declaration requires that the JavaScript engine creates a property on the variable binding object for the execution context in which you declare it. (See the specification, Section 10.5 and associated sections.)
In reality, you will never notice the difference, not even on a (relatively) slow engine like IE's. Feel free to use variables wherever they make the code clearer. Creating a property on a variable binding object (e.g., declaring a variable) is a very, very, very quick operation. (Avoid it at global scope, of course, but not because of memory use.)
Caveat: if the variable refers to a big memory structure that you only hold temporarily, and you create closures within that function context, the big memory structure may (in some engines) be kept in memory longer than necessary. Example:
function foo(element) {
    var a = /* ...create REALLY BIG structure here */;

    element.addEventListener("event", function() {
        // Do something **not** referencing `a` and not doing an `eval`
    }, false);
}
In an unsophisticated engine, since the event handler is a closure over the context of the call to foo, in theory a is available to the handler, so what it references remains reachable and cannot be garbage-collected until/unless the event handler function itself is no longer reachable. In any decent engine (like the V8 engine in Chrome), because you neither refer to a nor use eval within the closure (the event handler), it's fine. If you want to defend yourself against mediocre engines, set a to undefined before foo returns, so the engine (no matter how mediocre) knows the handler can't get at that structure even if it were to reference a.
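A defensive version of the same function, following that advice, might look like this sketch (the array join is just a stand-in for the big structure):
function foo(element) {
    var a = new Array(1000000).join('x');   // stand-in for the REALLY BIG structure

    element.addEventListener("event", function() {
        // Does not reference `a` and does not use `eval`.
    }, false);

    // Hint to less sophisticated engines that the handler will never need `a`,
    // so the big structure can be collected even though the closure keeps the
    // call context alive.
    a = undefined;
}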
Yes, you save memory; no, it is in no way an amount that matters.
I have researched this quite a bit, but mostly by piecing other questions together, which still leaves some doubt. In an app that never refreshes the browser page and may live for quite a while (hours) without closing (assuming that refreshing the page or navigating to another would restart the JS code), what's the best way to ensure objects are released and there's no memory leak?
These are the specific scenarios I'm concerned about:
All of the code below is within a revealing module pattern.
mycode = function(){}()
variables within functions, I'm sure this one is collected by the GC just fine
function(){ var h = "ss";}
variables within the module, should g = null when it's no longer needed?
var g;
function(){ g = "dd";}
And lastly the life of a jqXHR: is it cleaned up after it returns? Should it be set to null in all cases as a precaution whether kept inside a function or module?
If doing this, is x cleaned up by the GC after the function returns?:
function(){
    var x = $.get();
    x.done = ...;
    x.fail = ...;
}
How about when doing this, will it also be cleaned up after x returns?:
var x;
function(){
    x = $.get();
    x.done = ...;
    x.fail = ...;
}
Lastly, is there a way to cleanup all variables and restart a module without restarting the browser?
variables within functions, I'm sure this one is collected by the GC just fine
Yes.
variables within the module, should g = null when it's no longer needed?
Sure.
And lastly the life of a jqXHR: is it cleaned up after it returns? Should it be set to null in all cases as a precaution whether kept inside a function or module?
Various browsers have had bugs related to XHR that caused the onreadystatechange and anything it closed over to remain uncollectable unless the dev was careful to replace it with a dummy value (xhr.onreadystatechange = new Function('')) but I believe jQuery handles this for you.
Lastly, is there a way to cleanup all variables and restart a module without restarting the browser?
Global state associated with the page will take up browser memory until the page is evicted from the browser history stack. location.replace can help you here by letting you kill the current page and replace it with a new version of the same app without expanding the history stack.
Replace the current document with the one at the provided URL. The difference from the assign() method is that after using replace() the current page will not be saved in session history, meaning the user won't be able to use the Back button to navigate to it.
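In practice that can be a one-liner, as in this sketch triggered from some "restart" action in the app:
// Reload the same app "from scratch" without adding a history entry.
// Everything the old page held (closures, timers, detached DOM) is
// released when the document is torn down.
location.replace(location.href);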
When you use the word "module", that is not a term that has a well-defined meaning to the browser or its JavaScript interpreter so there is no way to evict a module and only a module from memory. There are several things that you have to worry about that might keep things in memory:
References to JavaScript objects that have been attached to DOM nodes and everything they close over -- event handlers are a very common example.
Live setInterval and setTimeout callbacks and everything they close over.
Properties of the global object and everything they close over.
As you noted, properties of certain host objects like XHR instances, web worker callbacks, etc. and (you guessed it) everything they close over.
Any scheme that is going to unload a module and only a module would need to deal with all of these and figure out which of them are part of the module and which are not. That's a lot of different kinds of cleanup.
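If you did want a module to clean up after itself, it would have to track and undo each of those kinds of references explicitly. A rough sketch of what that could look like (all names, including the element id, are invented):
var myModule = (function () {
    var intervalId = null;
    var button = null;

    function onClick() { /* handle the click */ }

    return {
        start: function () {
            button = document.getElementById('go');          // made-up element id
            button.addEventListener('click', onClick, false);
            intervalId = setInterval(function () { /* poll */ }, 1000);
        },
        stop: function () {
            // Undo every reference the module created so the GC can reclaim it.
            if (button) { button.removeEventListener('click', onClick, false); }
            if (intervalId !== null) { clearInterval(intervalId); }
            button = null;
            intervalId = null;
        }
    };
}());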
Javascript is a garbage-collected language. It relies on the garbage collector to clean up unused memory. So essentially, you have to trust that the GC will do its job.
The GC will (eventually, not necessarily immediately) collect objects that are unreachable to you. If you have a reference to an object, then it is potentially still in use, and so the GC won't touch it.
If you have no reference to the object, directly or indirectly, then the GC knows that the object cannot possibly be used, and the object can be collected. So all you have to do, really, is make sure you reset any references to the object.
However, the GC makes no guarantees about when the object will be collected. And you shouldn't need to worry about that.
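A minimal illustration of reachability (the variable name and the big string are just examples):
var cache = { data: new Array(100000).join('x') };   // reachable through `cache`

// ... use cache.data somewhere ...

cache = null;   // nothing references the old object any more, so the GC
                // is free to reclaim it whenever it next runs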
Really the only leaks you should worry about are closures.
function foo(a){
    var b = 10 + a;
    return function(c){
        return b + c;
    };
}
var bar = foo(20);
var baz = bar(5);
The GC has no way to collect b as long as the returned closure (bar) is still reachable, even though foo has finished. This was a big problem with IE, not so much with Mozilla, and much less with Chrome.
As a rule of thumb with any garbage-collected language (this applies to Java, .NET and JavaScript for example), what you want to do is make sure that there is no lingering reference to a block of memory that you want to have the GC clean up for you. When the GC looks at a block of memory and finds that there is still something in the program referencing it, then it will avoid releasing it.
With regard to the jqXHR, there's no point in setting them to null at the end of an AJAX call. All of the parameters of an AJAX success/error/complete callback will be released by the GC once the function returns, unless jQuery is doing something bizarre like keeping a reference to them.
Every variable you can no longer access can be collected by the GC. If you declare a variable inside a function, then once the function has finished the variable can be removed. It may actually be collected when the computer runs low on memory, or at any other time.
This becomes more complicated when you execute asynchronous operations like XHR. The done and fail closures can access all variables declared in the outer functions, so as long as done and fail can still be executed, all of those variables must remain in memory. Once the request is finished and the callbacks have run, the variables can be released.
Anyway, you should simply make sure that every variable is declared in the deepest (most local) scope possible, as in the sketch below.
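For example, in this sketch the large string held by payload has to stay in memory until the request settles, because the done callback closes over it (jQuery's $.get is assumed, as in the question; the URL is made up):
function send() {
    var payload = new Array(100000).join('p');   // stand-in for a large value

    $.get('/api/data')                           // made-up URL
        .done(function (response) {
            // `payload` is reachable from this closure, so it cannot be
            // collected until the request settles and jQuery drops its
            // reference to the callbacks.
            console.log(payload.length, response);
        })
        .fail(function () {
            console.log('request failed');
        });
}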
In the first example, with 'g', g should be set to null; it will retain the pointer to "dd" otherwise.
In the second example, the 'x' in the first case does not need to be set to null, since that variable will "go away" when the surrounding function exits. In the second case, with 'x' outside the function, 'x' will retain whatever is assigned to it, and that will not be GC'd until 'x' is set to null or something else.
In a JavaScript program that runs within WSH and creates objects, let's say Scripting.FileSystemObject or any arbitrary COM object, do I need to set the variable to null when I'm finished with it? E.g., am I recommended to do this:
var fso = new ActiveXObject("Scripting.FileSystemObject");
var fileStream = fso.openTextFile(filename);
fso = null; // recommended? necessary?
... use fileStream here ...
fileStream.Close();
fileStream = null; // recommended? necessary?
Is the effect different than just letting the vars go out of scope?
Assigning null to an object variable will decrement the reference counter so that the memory management system can discard the resource - as soon as it feels like it. The reference counter will be decremented automagically when the variable goes out of scope. So doing it manually is a waste of time in almost all cases.
In theory, a function using a big object A in its first part and another big object B in its second part could be more memory efficient if A is set to null in the middle. But as this does not force the memory management system to destroy A, the statement could still be a waste.
You may get circular references if you do some fancy class design. Then breaking the circle by hand may be necessary - but perhaps avoiding such loops in the first place would be better.
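A sketch of the kind of circular reference meant here (plain objects with invented names): under a purely reference-counting scheme, such a cycle is never freed unless it is broken by hand.
// Two objects that point at each other form a cycle.
var account = { name: "account" };
var profile = { name: "profile", account: account };
account.profile = profile;

// Breaking the cycle by hand before dropping the references:
account.profile = null;
profile.account = null;
account = null;
profile = null;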
There are rumours about ancient database access objects with bugs that could be avoided by zapping variables. I wouldn't base my programming rules on such voodoo.
(There is a ton of VBScript code on the internet that is full of "Set X = Nothing"; when asked, the authors tend to talk about 'habit' and other languages (C, C++).)
Building on what Ekkehard.Horner has said...
Scripts like VBScript, JScript, and ASP are executed within an environment that manages memory for you. As such, explicitly setting an object reference to Null or Empty does not necessarily remove it from memory... at least not right away. (In practice it's often nearly instantaneous, but in actuality the task is added to a queue within the environment that is executed at some later point in time.) In this regard, it's really much less useful than you might think.
In compiled code, it's important to clean up memory before a program (or section of code in some cases) ends so that any allocated memory is returned to the system. This prevents all kinds of problems. Outside of slowly running code, this is most important when a program exits. In scripting environments like ASP or WSH, memory management takes care of this cleanup automatically when a script exits. So all object references are set to null for you even if you don't do it explicitly yourself which makes the whole mess unnecessary in this instance.
As far as memory concerns during script execution, if you are building arrays or dictionary objects large enough to cause problems, you've either gone way beyond the scope of scripting or you've taken the wrong approach in your code. In other words, this should never happen in VBScript. In fact, the environment imposes limits to the sizes of arrays and dictionary objects in order to prevent these problems in the first place.
If you have long running scripts which use objects at the top/start, which are unneeded during the main process, setting these objects to null may free up memory sooner and won't do any harm. As mentioned by other posters, there may be little practical benefit.