Does JavaScript object initialization time depend on the number of variables and functions? - javascript

Ques 1: Does it take significantly more time to initialize a JavaScript Object if it has a large number of variables and functions?
Ques 2: Is a large JavaScript (.js) file a performance issue?
For Instance:
I am creating a JavaScript object using prototypes; my sample code is below:
function SimpleObject(){
    // no variables and functions attached to it
}
function RootObject(){
    var one = "one";
    var two = "two";
    .
    .
    var one_thousand = "One Thousand";
}
function Child_1_Object(){
    // n number of variables
}
.
.
function Child_1000_Object(){
    // n number of variables
}
RootObject.prototype.get_Child_1_Object = function(){
    return new Child_1_Object();
};
.
.
RootObject.prototype.get_Child_1000_Object = function(){
    return new Child_1000_Object();
};
All the above code is in one .js file with 10k lines of code (10 KLOC).
My question is: when I create a RootObject, will it take significantly more time than creating a SimpleObject?

Question one:
Making an object more complicated, with members, functions etc. will increase the amount of time needed to instantiate it. I wrote a simple jsPerf test here to prove the point: http://jsperf.com/simple-vs-complex-object
One thing to note though, you're still creating hundreds of thousands of objects a second - it's hardly slow, even for very complex objects.
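The shape of that jsPerf comparison can be sketched in a few lines of Node. `Simple` and `Complex` here are made-up stand-ins for the constructors in the linked test, and the absolute numbers will vary by machine and engine:

```javascript
// Rough timing sketch: a bare constructor vs. one that
// initialises many own properties per instance.
function Simple() {}
function Complex() {
  for (var i = 0; i < 100; i++) this["p" + i] = i;
}
function time(Ctor, n) {
  var start = Date.now();
  for (var i = 0; i < n; i++) new Ctor();
  return Date.now() - start;
}
var n = 100000;
console.log("simple :", time(Simple, n), "ms");
console.log("complex:", time(Complex, n), "ms");
```

The complex constructor is measurably slower per instance, but both still run at hundreds of thousands of instantiations per second, which is the point being made above.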
Question two:
Large files are problematic only because of their size in terms of a client having to download them. Minifying your code will help with this. e.g. http://javascript-minifier.com/

Definitely. In fast, modern browsers this takes milliseconds (or less), but every variable must be initialised, so 10k variables is always worse than one.
It is, as the file needs to be downloaded by the browser (the bigger, the slower) and then parsed by the JS engine (again, the bigger, the slower).
It's simple math, although, like I said before, if you're just initialising variables the delays are negligible, so you don't have to worry about that.

No. Most of the amount of time required to instantiate a new object will be determined by what is done in the constructor.
The main issue with having a lot of JavaScript (10k lines is not even remotely a large JavaScript file) is still a matter of what it is really doing. Sure, some JavaScript VMs may run into performance issues if you have 10 MB of JavaScript, but I've never seen even Internet Explorer 7 choke on something like ExtJS 3.4, which has about 2.5 MB of uncompressed JavaScript.
Now, download speed may be an issue, but parsing JavaScript is not. All of that is written in C or C++ and runs very quickly. When you are just declaring object types in JavaScript, much of that is just code parsing and assigning the prototype as a known type within the JavaScript VM. It doesn't have to actually execute the constructor at that point so the code you have above will run quickly until, perhaps, you start initializing objects.
Another thing about parsing JavaScript is that parsing is only one of the steps that other languages take. Java, C#, C, C++, etc. also have at least a stage of converting the parse tree into some form of object code. Java and C# stop there because their runtime has to do additional JIT compilation and optimization on the fly; C/C++ and some others have to do linking and optimization before generating usable machine code. Not that "parsing is easy" by any means on a programming language, but it is not the most performance-intensive part.

Related

When is a JavaScript function optimized in a V8 environment?

I'm trying to learn whether, at which point, and to what extent the following TypeScript function is optimized on its way from JavaScript to machine code in some V8 environment, for example:
function foo(): number {
    const a = 1;
    const b = a;
    const c = b + 1;
    return c;
}
The function operates with constants with no parameters so it's always equivalent to the following function:
function foo(): number {
    return 1 + 1;
}
And eventually, in whatever bytecode or machine code results, it should just assign the number 2 to some register, without intermediary assignments of values or pointers from one register to another.
Assuming optimizing such simple logic is trivial, I could imagine a few potential steps where it could happen:
Compiling TypeScript to JavaScript
Generating abstract syntax tree from the JavaScript
Generating bytecode from the AST
Generating machine code from the bytecode
Repeating step 4 as per just-in-time compilation
Does this optimization happen, or is it a bad practice from the performance point of view to assign expressions to constants for better readability?
(V8 developer here.)
Does this optimization happen
Yes, it happens if and when the function runs hot enough to get optimized. Optimized code will indeed just write 2 into the return register.
Parsers, interpreters, and baseline compilers typically don't apply any optimizations. The reason is that identifying opportunities for optimizations tends to take more time than simply executing a few operations, so doing that work only pays off when the function is executed a lot.
Also, if you were to set a breakpoint in the middle of that function and inspect the local state in a debugger, you would want to see all three local variables and how they're being assigned step by step, so engines have to account for that possibility as well.
is it a bad practice from the performance point of view to assign expressions to constants for better readability?
No, it's totally fine to do that. Even if it did cost you a machine instruction or two, having readable code is usually more important.
This is true in particular when a more readable implementation lets you realize that you could simplify the whole algorithm. CPUs can execute billions of instructions per second, so saving a handful of instructions won't change much; but if you have an opportunity to, say, replace a linear scan with a constant-time lookup, that can save enough computational work (once your data becomes big enough) to make a huge difference.
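Stripped of the TypeScript annotations, the two variants from the question are observably identical, which is why the readable form costs nothing once the function is hot:

```javascript
// The two variants from the question, minus type annotations.
function fooReadable() {
  const a = 1;
  const b = a;
  const c = b + 1;
  return c;
}
function fooTerse() {
  return 1 + 1;
}
console.log(fooReadable() === fooTerse()); // → true (both return 2)
// Once hot, V8's optimizing compiler folds fooReadable down to the
// same "write 2 into the return register" as fooTerse.
```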

speed of c++ addons for node.js

I made two similar codes on c++ and node.js, that just work with strings. I have this in .js file:
//some code, including f
console.time('c++');
console.log(addon.sum("123321", s));
console.timeEnd('c++');
console.time('js');
console.log(f("123321", s));
console.timeEnd('js');
and my c++ addon looks like that:
//some code, including f
void Sum(const v8::FunctionCallbackInfo<v8::Value>& args)
{
    v8::Isolate* isolate = args.GetIsolate();
    v8::String::Utf8Value str0(isolate, args[0]);
    v8::String::Utf8Value str1(isolate, args[1]);
    string a(*str0), b(*str1);
    string s2 = f(a, b);
    args.GetReturnValue().Set(v8::String::NewFromUtf8(isolate, s2.c_str()).ToLocalChecked());
}
but the problem is that the C++ version runs nearly 1.5 times slower than the JS one, even though the JS function has some parts that could be optimised (I did not write it very carefully).
In the console I get
#uTMTahdZ22d!a_ah(3(_Zd_]Zc(tJT[263mca!(jcT[20_]h0h_06q(0jJ(T]!&]qZM]d_30j&Tuj2hm[Z0d#!32ccT2(!dud#6]0MdJc]mta!3]j]_(hhJqha(([
c++: 7.970s
#uTMTahdZ22d!a_ah(3(_Zd_]Zc(tJT[263mca!(jcT[20_]h0h_06q(0jJ(T]!&]qZM]d_30j&Tuj2hm[Z0d#!32ccT2(!dud#6]0MdJc]mta!3]j]_(hhJqha(([
js: 5.062s
So, the results of the functions are similar, but the JS program ran a lot faster. How can that be? Shouldn't C++ be faster than JS (or at least not so much slower)? Maybe I did not take into account an important detail that slows the C++ down so much, or is working with strings really so slow in C++?
First off, the Javascript interpreter is pretty advanced in what types of optimizations it can do (actually compiling Javascript code to native code in some cases) which significantly reduces the differences between Javascript and C++ compared to what most people would think.
Second, crossing the C++/Javascript boundary has some overhead cost associated with it, as you marshal the function arguments between the Javascript world and the C++ world (creating copies, doing heap operations, etc...). So, if that overhead is significant relative to the execution of the operation, then it can negate the advantages of going to C++ in the first place.
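The "overhead relative to the operation" point can be illustrated without any addon at all. This is a pure-JS analogy (no real C++ boundary is crossed here): per-call cost plays the role of the marshalling cost, and it dominates when each call does only a tiny unit of work:

```javascript
// Pure-JS analogy for boundary overhead: many tiny calls
// vs. one batched call doing the same total work.
function sumOneChar(s, i) {        // "many small calls"
  return s.charCodeAt(i);
}
function sumAllChars(s) {          // "one batched call"
  var total = 0;
  for (var i = 0; i < s.length; i++) total += s.charCodeAt(i);
  return total;
}
var s = "abc".repeat(100000);
var a = 0;
for (var i = 0; i < s.length; i++) a += sumOneChar(s, i);
var b = sumAllChars(s);
console.log(a === b); // → true; same result, very different per-call overhead
```

The same principle applies to a real addon: doing more work per native call amortizes the fixed cost of crossing the boundary.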
For more detailed comments, we would need to see the actual implementation of f() in both Javascript and C++.

How does the "creation phase" know how much memory space to set up?

In JavaScript: Understanding the Weird Parts the instructor explains that memory for variables is set up during a so-called creation phase (and that undefined is assigned); then the execution phase happens. But why is this useful when we don't know what value(s) the variable will later point to?
Clearly variables can point to many different things -from e.g. a short string all the way to a deeply nested object structure -and I assume that they can vary wildly in the amount of memory they need.
If line-by-line execution -including variable assignment -happens only in the later, execution phase, how can the initial creation phase know how to set up memory? Or, is memory set aside only for the name in each variable name/value pair, with memory for the value being managed differently?
The instructor is referring to Google Chrome's V8 engine (as is evidenced by his use of it in the video).
The V8 engine uses several optimization approaches in order to facilitate memory management. At first, it will compile the JavaScript code and during compilation it will determine how many variables (hidden classes, more later) it needs to create. These will determine the amount of memory originally allocated.
V8 compiles JavaScript source code directly into machine code when it is first executed. There are no intermediate byte codes, no interpreter. Property access is handled by inline cache code that may be patched with other machine instructions as V8 executes. 1
The first set is created by navigating the JavaScript code to determine how many different object "shapes" there are. Anything without a prototype is considered a "transitioning object shape".
The main way objects are encoded is by separating the hidden class (description) from the object (content). When new objects are instantiated, they are created using the same initial hidden class as previous objects from the same constructor. As properties are added, objects transition from hidden class to hidden class, typically following previous transitions in the so-called “transition tree”. 2
Conversely, if the object does have a prototype then it will have its particular shape tracked separately.
Prototypes have 2 main phases: setup and use. Prototypes in the setup phase are encoded as dictionary objects. Any direct access to the prototype, or access through a prototype chain, will transition it to use state, making sure that all such accesses from now on are fast. 2
The compiler will essentially read all possible variables as being one of these two possible shapes and then allocate the amount of memory necessary to instantiate those shapes.
Once the first set of shapes is set up, V8 will then take advantage of what it calls "fast property access" to build on the first set of variables (hidden classes) created during the build.
To reduce the time required to access JavaScript properties V8 dynamically creates hidden classes behind the scenes 3
There are two advantages to using hidden classes: property access does not require a dictionary lookup, and they enable V8 to use the classic class-based optimization, inline caching 3
As a result, not all memory use is known during compilation, only how much to allocate for the core set of hidden classes. This allocation will grow as the code is executed, from things like assignment, inline cache misses, and conversion into dictionary mode (which happens when too many properties are assigned to an object, and several other nuanced factors).
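Hidden classes are not directly observable from JavaScript, but they are driven by property insertion order, which is why the standard advice is to initialise properties in a consistent order. A small sketch (the behaviour of both functions is identical; only the engine-internal shape chains differ):

```javascript
// Objects built with the same properties in the same order share
// one hidden-class transition chain (fast property access); a
// different insertion order forces V8 to track a separate chain.
function makeFast(x, y) {
  var p = {};
  p.x = x; // transition: {} -> {x}
  p.y = y; // transition: {x} -> {x, y}
  return p;
}
function makeSlow(x, y) {
  var p = {};
  p.y = y; // transition: {} -> {y}  (a different shape chain)
  p.x = x; // transition: {y} -> {y, x}
  return p;
}
console.log(makeFast(1, 2).x + makeSlow(1, 2).x); // → 2
```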
1. Dynamic machine code generation, https://github.com/v8/v8/wiki/Design%20Elements#dynamic-machine-code-generation
2. Setting up prototypes in V8, https://medium.com/@tverwaes/setting-up-prototypes-in-v8-ec9c9491dfe2
3. Fast Property Access, https://github.com/v8/v8/wiki/Design%20Elements#fast-property-access
JavaScript is a bit of a problem, you know, at least in my opinion.
In JavaScript there is the specification of the language, made by ECMAScript, and there are implementations made by developers.
You have to understand that what is usually taught about how JavaScript works "under the hood" is the specification of JavaScript.
You might have heard the terms Execution Context and Lexical Environment.
They are only spec concepts describing how the language should behave; they give developers an idea of how to build a JS engine so that it behaves like the spec.
An execution context is purely a specification mechanism and need not correspond to any particular artefact of an ECMAScript implementation. It is impossible for ECMAScript code to directly access or observe an execution context. ECMAScript
Every JavaScript engine is implemented differently, and they are all supposed to behave like the ECMAScript spec.
There is a concept that every time an execution context is created, it has two stages: a creation phase and an execution phase. Every time the creation phase begins, memory is allocated for the variables of that execution context.
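That spec-level "creation phase" does have an observable consequence in every engine: `var` declarations exist (with the value `undefined`) before execution reaches them. A minimal sketch:

```javascript
// Observable consequence of the spec's "creation phase":
// `var` declarations are registered (and set to undefined)
// before any line of the function body runs.
function demo() {
  var before = typeof x; // "undefined": x exists but has no value yet
  var x = 42;
  var after = typeof x;  // "number"
  return [before, after];
}
console.log(demo()); // → [ 'undefined', 'number' ]
```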
In reality it doesn't work like that at all. There is no real "creation phase", at least in the V8 engine (Google Chrome's JS engine), in the way that you might think.
I can give you a hint: every time you call a function that doesn't have another function inside it, the variables inside that function basically replace some "block" in memory.
A basic example: let's say the V8 engine uses some address in memory, say 0x61FF1C.
function setNum(){ var num = 5; }
Every time I call setNum, the value of num (5) gets stored at 0x61FF1C, overwriting whatever was stored there before.
That's just how the V8 engine works in that scenario; the example is only meant to convey the idea, and I know it sounds a little vague. There is much more to the V8 engine, which I'm not going to discuss because it's huge. By the way, I'm not a V8 developer, so I don't know everything about the engine; I'm not even close to that level, but I know some things.
Anyway, I think every JS developer should think in terms of the spec, but also remember that while many engines behave like the spec, that doesn't mean they work exactly like the spec.
When a program executes, we call that time its running time. During the running time, the processor executes code and communicates with memory: it fetches code from memory, processes it, writes results back, and fetches more code. Throughout the run, some memory regions grow, some shrink to nothing, some variables get new values, and some variables are deleted; the amount of working memory keeps changing the whole time.

Javascript application memory handling

I have the below JS:
var z = function(){
    return "string";
};
var x = function(){
    var y = new z();
    var div = document.createElement('div');
    div.innerHTML = y;
    document.body.appendChild(div);
    /*
    my code…hundreds of other functions. The entire app is JS; all data
    comes through sockets and elements are created using JS.
    */
};
I have a couple of questions which might sound stupid but I am hoping not.
So inside 'x' are 'y' and 'div'. Now, if these two variables are only used there, do they still 'live' inside the JS on the browser, or do they vanish?
Basically, do I need to set them to null to avoid extra memory being used on useless items?
Also, I wrote around 25k lines of JS, and all the elements are created using JS. The app stays up for around 9 hours until they close it, and it starts all over again on another day. For those hours I am worried it will be getting slower due to its size. Could this be true?
In terms of your applications memory usage, every time x() is called it creates a temporary instance of the local variable y. This is discarded once the function has been run to completion.
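A sketch of when a local actually stays alive: locals normally become garbage-collectable once the function returns, and survive only if something still references them, e.g. a closure:

```javascript
// Locals normally become collectable when the function returns;
// they survive only if something still references them.
function noCapture() {
  var y = "temporary";      // eligible for GC after return
  return y.length;
}
function capture() {
  var count = 0;            // kept alive by the returned closure
  return function () { return ++count; };
}
var counter = capture();
console.log(noCapture());           // → 9
console.log(counter(), counter());  // → 1 2
counter = null; // now `count` (and the closure) can be collected
```

So in the question's code, `y` and `div` need no explicit nulling: nothing captures them after `x()` returns (the DOM keeps the appended element alive, but that is a separate reference).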
x is a function expression, which gives it its own scope. New variables declared inside will be local to that scope, and you can still access the global scope via window.
There will be differences in how the various browsers handle this kind of situation but the result is pretty much the same.
Browsers are always being optimised to make them more efficient at handling memory as well as faster. They are optimising scope chain lookup costs away too which should result in improved performance.
Due to the nature of your anonymous function x(), there may be moments where the browser runs its garbage collection, which could slow or pause script execution, but after that it should run without problems.
The Javascript engines inside the modern browsers can handle incredible processing as many libraries (such as jquery) require large amounts of processing.
I would not worry too much about Javascript engines and your 25k lines, it is more down to your code itself and what it is doing.

in JavaScript, when finished with an object created via new ActiveXObject, do I need to set it to null?

In a Javascript program that runs within WSH and creates objects (let's say Scripting.FileSystemObject, or any arbitrary COM object), do I need to set the variable to null when I'm finished with it? E.g., is it recommended that I do this:
var fso = new ActiveXObject("Scripting.FileSystemObject");
var fileStream = fso.openTextFile(filename);
fso = null; // recommended? necessary?
... use fileStream here ...
fileStream.Close();
fileStream = null; // recommended? necessary?
Is the effect different than just letting the vars go out of scope?
Assigning null to an object variable will decrement the reference counter so that the memory management system can discard the resource - as soon as it feels like it. The reference counter will be decremented automagically when the variable goes out of scope. So doing it manually is a waste of time in almost all cases.
In theory a function using a big object A in its first and another big object B in its second part could be more memory efficient if A is set to null in the middle. But as this does not force the mms to destroy A, the statement could still be a waste.
You may get circular references if you do some fancy class design. Then breaking the circle by hand may be necessary - but perhaps avoiding such loops in the first place would be better.
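What "breaking the circle by hand" looks like, sketched in plain JS (the object names are illustrative; under a purely reference-counting collector, such as the one COM objects rely on, a cycle like this is never reclaimed on its own):

```javascript
// A circular reference: parent and child point at each other.
// A pure reference counter can never reclaim such a pair, so the
// workaround is to break the cycle before dropping the last reference.
function makePair() {
  var parent = { name: "parent" };
  var child = { name: "child" };
  parent.child = child;
  child.parent = parent; // cycle
  return parent;
}
var p = makePair();
p.child.parent = null; // break the cycle by hand
p = null;              // now both objects are unreachable
```

Modern JS engines use tracing garbage collectors that handle cycles fine; this matters mainly when COM objects (reference-counted) are on both ends of the cycle.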
There are rumours about ancient database access objects with bugs that could be avoided by zapping variables. I wouldn't base my programming rules on such voodoo.
(There are tons of VBscript code on the internet that is full of "Set X = Nothing"; when asked, the authors tend to talk about 'habit' and other languages (C, C++))
Building on what Ekkehard.Horner has said...
Scripts like VBScript, JScript, and ASP are executed within an environment that manages memory for you. As such, explicitly setting an object reference to Null or Empty does not necessarily remove it from memory...at least not right away. (In practice it's often nearly instantaneous, but in actuality the task is added to a queue within the environment that is executed at some later point in time.) In this regard, it's really much less useful than you might think.
In compiled code, it's important to clean up memory before a program (or section of code in some cases) ends so that any allocated memory is returned to the system. This prevents all kinds of problems. Outside of slowly running code, this is most important when a program exits. In scripting environments like ASP or WSH, memory management takes care of this cleanup automatically when a script exits. So all object references are set to null for you even if you don't do it explicitly yourself which makes the whole mess unnecessary in this instance.
As far as memory concerns during script execution, if you are building arrays or dictionary objects large enough to cause problems, you've either gone way beyond the scope of scripting or you've taken the wrong approach in your code. In other words, this should never happen in VBScript. In fact, the environment imposes limits to the sizes of arrays and dictionary objects in order to prevent these problems in the first place.
If you have long running scripts which use objects at the top/start, which are unneeded during the main process, setting these objects to null may free up memory sooner and won't do any harm. As mentioned by other posters, there may be little practical benefit.
