When is a JavaScript function optimized in a V8 environment?

I'm trying to learn whether, at which point, and to what extent the following TypeScript function is optimized on its way from JavaScript to machine code in some V8 environment, for example:
function foo(): number {
  const a = 1;
  const b = a;
  const c = b + 1;
  return c;
}
The function operates only on constants and takes no parameters, so it's always equivalent to the following function:
function foo(): number {
  return 1 + 1;
}
And eventually, in whatever bytecode or machine code is produced, it should just assign the number 2 to some register, without intermediate assignments of values or pointers from one register to another.
Assuming optimizing such simple logic is trivial, I could imagine a few potential steps where it could happen:
1. Compiling TypeScript to JavaScript
2. Generating an abstract syntax tree (AST) from the JavaScript
3. Generating bytecode from the AST
4. Generating machine code from the bytecode
5. Repeating step 4 as per just-in-time compilation
Does this optimization happen, or is it bad practice from a performance point of view to assign expressions to constants for better readability?

(V8 developer here.)
Does this optimization happen
Yes, it happens if and when the function runs hot enough to get optimized. Optimized code will indeed just write 2 into the return register.
Parsers, interpreters, and baseline compilers typically don't apply any optimizations. The reason is that identifying opportunities for optimizations tends to take more time than simply executing a few operations, so doing that work only pays off when the function is executed a lot.
Also, if you were to set a breakpoint in the middle of that function and inspect the local state in a debugger, you would want to see all three local variables and how they're being assigned step by step, so engines have to account for that possibility as well.
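If you want to see the optimizer kick in without waiting for V8's "hot enough" heuristics, here is a sketch (assuming Node.js; the %-prefixed calls are V8-internal test hooks enabled by a flag, not standard JavaScript, and the exact meaning of the status bits is an internal detail):
// Run with: node --allow-natives-syntax hot.js
function foo() {
  const a = 1;
  const b = a;
  const c = b + 1;
  return c;
}
%PrepareFunctionForOptimization(foo);
foo(); foo(); // a few unoptimized runs to collect feedback
%OptimizeFunctionOnNextCall(foo);
foo(); // compiled by the optimizing compiler on this call
console.log(%GetOptimizationStatus(foo)); // a bitfield; the "optimized" bit should now be set
Alternatively, running a script with --trace-opt logs when functions get optimized naturally, i.e. once they run hot.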
is it bad practice from a performance point of view to assign expressions to constants for better readability?
No, it's totally fine to do that. Even if it did cost you a machine instruction or two, having readable code is usually more important.
This is true in particular when a more readable implementation lets you realize that you could simplify the whole algorithm. CPUs can execute billions of instructions per second, so saving a handful of instructions won't change much; but if you have an opportunity to, say, replace a linear scan with a constant-time lookup, that can save enough computational work (once your data becomes big enough) to make a huge difference.
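For example (illustrative only; loadIds is a made-up placeholder for wherever the data comes from):
const ids = loadIds(); // hypothetical: some large array of ids
// Linear scan: every membership check walks the array, O(n).
const hasIdLinear = (id) => ids.includes(id);
// Constant-time lookup: build a Set once, then each check is O(1) on average.
const idSet = new Set(ids);
const hasIdFast = (id) => idSet.has(id);
Once the array is large and the checks are frequent, the second form saves far more than any handful of per-statement instructions ever could.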

Related

Function won't eager compile

I recently tried to eager-compile my JavaScript function following the guide at https://v8.dev/blog/code-caching-for-devs
But even after enclosing the function in the IIFE-style parentheses described there, the V8 System Analyzer shows it's lazy-compiled and unoptimized; it does not show up under the eager-compile section. The creation time, however, shifted earlier, but there is no noticeable performance change. Does this mean that eager-compiled functions don't perform faster? Then what exactly is the advantage of an eager-compiled function?
(Please note: the function below was created for learning purposes; I use it to learn about eager-compiling JS functions.)
script.js:
const a = 1000;
const b = 2000;
const add = function (a, b) {
  var sampleObject2 = {
    firstName: "John",
    lastName: "Doe",
    age: 70,
    eyeColor: "blue"
  };
  var c = a ** b;
  var d = b ** c;
  c = c + 21;
  d = d + c;
  d = d ** sampleObject2.age;
  console.log(c ** d);
  return c ** d;
};
window.addEventListener("click", () => {
  var start = performance.now();
  add(a, b);
  var end = performance.now();
  console.log(end - start);
}, false);
(V8 developer here.)
the V8 System Analyzer shows it's lazy-compiled
That's not what I'm seeing: with the code as written in your question, i.e. const add = function(a, b) { ... };, the System Analyzer says type: LazyCompile~.
When I add the parentheses that (currently!) get interpreted as an eager-compilation hint, i.e. const add = (function(a, b) { ... });, that changes to type: Function~ (as in your screenshot; so I guess you did actually add the parentheses before running the test that produced that screenshot).
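For reference, the hint is nothing more than the extra parentheses around the function expression (a sketch of the two variants, per the "possibly-invoked function expression" heuristic described in the v8.dev post linked above):
// Plain function expression: prepared for lazy compilation.
const addLazy = function (a, b) { return a + b; };
// Parenthesized function expression: currently treated as an eager-compile hint.
const addEager = (function (a, b) { return a + b; });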
and unoptimized
That is expected. Optimization has nothing to do with lazy compilation. There is no way to optimize a function right away, because attempting that would make no sense: the optimizer can't do anything useful if it doesn't have type feedback to work with, which can only be collected during unoptimized execution.
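To make "type feedback" concrete, here is an illustrative sketch (this describes typical engine behavior, not an API you can call):
function addAny(x, y) { return x + y; }
addAny(1, 2);     // unoptimized runs record "both arguments were small integers"
addAny(3, 4);     // enough such calls make the function hot
// The optimizer then compiles a version specialized for integer addition.
addAny('a', 'b'); // a new argument type breaks that assumption: deoptimize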
Does this mean that eager-compiled functions don't perform faster?
Correct, the function's own performance doesn't change at all.
Then what exactly is the advantage of an eager-compiled function?
Preparing a function for later lazy compilation has a small cost. In case the function will be needed soon, compiling it right away avoids this cost. In fact, I do see a tiny difference in the times that your test reports, although I haven't verified that this difference is actually due to eager vs. lazy compilation; it may well be caused by something else.
If eager-compiling everything were a good idea, then engines would do it -- that'd be much easier. In reality, lazy compilation is almost always worth the tiny amount of extra work it creates, because it makes startup/pageload faster, and it can avoid quite a bit of memory consumption when there are unused functions, e.g. in your included libraries. So JavaScript engines accept a lot of internal complexity to make lazy compilation possible. I wouldn't recommend attempting to side-step that.
(For completeness: a couple of years ago, V8's implementation of lazy compilation was not as optimized as it is today, and manually hinting eager compilation could achieve more significant improvements, especially for deeply nested functions. But those times are gone! This is now just one more example illustrating that, in general, you shouldn't go to great lengths to adapt your code to what engines are currently doing well or not-so-well.)

speed of c++ addons for node.js

I wrote two similar pieces of code in C++ and Node.js that just work with strings. I have this in the .js file:
//some code, including f
console.time('c++');
console.log(addon.sum("123321", s));
console.timeEnd('c++');
console.time('js');
console.log(f("123321", s));
console.timeEnd('js');
and my C++ addon looks like this:
//some code, including f
void Sum(const v8::FunctionCallbackInfo<v8::Value>& args)
{
  v8::Isolate* isolate = args.GetIsolate();
  v8::String::Utf8Value str0(isolate, args[0]);
  v8::String::Utf8Value str1(isolate, args[1]);
  string a(*str0), b(*str1);
  string s2 = f(a, b);
  args.GetReturnValue().Set(v8::String::NewFromUtf8(isolate, s2.c_str()).ToLocalChecked());
}
but the problem is that the C++ version runs nearly 1.5 times slower than the JS one, even though the JS function has some parts that could be optimized (I did not write it very carefully).
In the console I get
#uTMTahdZ22d!a_ah(3(_Zd_]Zc(tJT[263mca!(jcT[20_]h0h_06q(0jJ(T]!&]qZM]d_30j&Tuj2hm[Z0d#!32ccT2(!dud#6]0MdJc]mta!3]j]_(hhJqha(([
c++: 7.970s
#uTMTahdZ22d!a_ah(3(_Zd_]Zc(tJT[263mca!(jcT[20_]h0h_06q(0jJ(T]!&]qZM]d_30j&Tuj2hm[Z0d#!32ccT2(!dud#6]0MdJc]mta!3]j]_(hhJqha(([
js: 5.062s
So the results of the functions are similar, but the JS program ran a lot faster. How can that be? Shouldn't C++ be faster than JS (or at least not so much slower)? Maybe I did not take into account some important detail that slows the C++ down so much, or is working with strings really that slow in C++?
First off, the JavaScript engine is pretty advanced in the kinds of optimizations it can do (actually compiling JavaScript code to native code in some cases), which significantly reduces the difference between JavaScript and C++ compared to what most people would expect.
Second, crossing the C++/JavaScript boundary has some overhead cost associated with it, as you marshal the function arguments between the JavaScript world and the C++ world (creating copies, doing heap operations, etc.). So, if that overhead is significant relative to the cost of the operation itself, it can negate the advantage of going to C++ in the first place.
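One common mitigation is to cross the boundary less often by batching work. A sketch (addon.process and addon.processAll are hypothetical APIs, not something Node provides out of the box):
const addon = require('./build/Release/addon'); // hypothetical native addon
const items = ['a', 'b', 'c'];                  // some array of strings to process
// Chatty: one JS <-> C++ crossing (and one set of string copies) per item.
const slow = items.map((item) => addon.process(item));
// Batched: a single crossing; the marshalling cost is paid once for the whole array.
const fast = addon.processAll(items);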
For more detailed comments, we would need to see the actual implementation of f() in both JavaScript and C++.

Any linter to warn about side-effects in JavaScript?

With the flexibility of JavaScript, we can write code full of side effects, or just purely functional code.
I have been interested in functional JavaScript and want to start a project in this paradigm, and a linter for it could surely help me gather good practices. Is there any linter to enforce a purely functional, side-effect-free style?
Purity analysis is equivalent to solving the halting problem, so any kind of static analysis that can determine whether code is pure or impure is impossible in the general case. There will always be infinitely many programs for which purity is undecidable; some of those programs will be pure, some impure.
Now, you deliberately used the term "linter" instead of "static analyzer" (although of course a linter is just a static analyzer), which seems to imply that you are fine with an approximate, heuristic result. You can have a linter that will sometimes tell you that your code is pure, sometimes tell you that your code is impure, and most of the time tell you that it cannot decide. You can also have a whitelist of operations that are known to be pure (e.g. adding two Numbers using the + operator) and a blacklist of operations that are known to be impure (e.g. anything that can throw an exception, any sort of loop, if statements, Array.prototype.forEach) and do a heuristic scan for those.
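For illustration, such a blacklist heuristic could be packaged as a custom ESLint rule; here is a minimal sketch (the rule's messages and its choice of flagged constructs are made up for this example):
// A toy "possible side effect" rule module for ESLint.
module.exports = {
  meta: {
    type: 'suggestion',
    messages: { impure: 'Possible side effect: {{what}}' }
  },
  create(context) {
    return {
      // Flag a few constructs from a simple "known impure" blacklist.
      UpdateExpression(node) { // i++ / i-- mutate a binding
        context.report({ node, messageId: 'impure', data: { what: 'mutation' } });
      },
      ThrowStatement(node) {
        context.report({ node, messageId: 'impure', data: { what: 'exception' } });
      }
    };
  }
};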
But in the end, the results will be too unreliable to do anything serious with them.
I haven't used this myself, but I found this plugin for ESLint: https://github.com/jfmengels/eslint-plugin-fp
You cannot use JS completely without side effects. Every DOM access is a side effect, and we could have an argument about whether the whole global namespace also falls under that definition.
The best you can do is stay reasonable. I split this logically into two groups:
the work horses (utilities): their purpose is to take some data and process it somehow. These are (mostly) side-effect free. Mostly, because sometimes these functions need some state, like a counter or a cache, which could be argued to be a side effect; but since it is isolated/enclosed within these functions, I don't really care. Think of the functions that you pass to Array#map() or to a promise's then(), and similar places.
and the management: these functions rarely do data processing on their own; they mostly orchestrate the data flow, from wherever the data is created, through whatever processing (utilities) it has to go, up to where it ends, like modifying the DOM or mutating an object.
var theOnesINeed = compose(/* ... */);
var theOtherOnesINeed = compose(/* ... */);
var intoADifferentFormat = function (value) { return /* ... */; };
// one of the "management" functions, e.g. used as an event handler:
function handleUpdate(event) {
  var a = someList.filter(theOnesINeed).map(intoADifferentFormat);
  var b = someOtherList.filter(theOtherOnesINeed);
  var rows = a.concat(b).map(wrap('<li>', '</li>'));
  document.querySelector('#youYesYou').innerHTML = rows.join('\n');
}
so that all functions stay as short and simple as possible. And don't be afraid of descriptive names (not like these way too general ones :))

Why is the interaction between a c++ add-on and javascript expensive in node.js?

Edit: Please answer if you have knowledge of Node's inner workings.
I've been really interested lately in diving into C++ add-ons for Node. I've read a lot of articles on the subject that basically state you should reduce chatter between C++ and Node. I understand that as a principle. What I'm really looking for is the why.
Again: why is the interaction between a C++ add-on and JavaScript expensive in Node.js?
I can't claim any knowledge of the inner workings of Node, but really it boils down to this: the less stuff your code does, the faster (less expensive) it will be. Interacting with the Node API from the add-on is doing more stuff than not interacting with the API. For a (contrived) example, consider the difference between adding two integers in C++:
int sum = a + b;
...and evaluating an add expression in the JavaScript engine, which would involve creating a function variable, wrapping any arguments to this function, invoking the function, and unwrapping the return value.
The wrapping/unwrapping/value conversions required for C++/JS communication are by themselves justification enough to try to minimize the number of interactions between the two layers. Keep in mind that the C++ type system and the JS type system are really different: JS has no ints, strings are implemented differently, objects work differently, and so on.
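To get a rough feel for the cost, a micro-benchmark sketch (addon.add is a hypothetical native function that just adds its two arguments):
const addon = require('./build/Release/addon'); // hypothetical addon exposing add()
function jsAdd(a, b) { return a + b; }
console.time('js');
for (let i = 0; i < 1e7; i++) jsAdd(i, i);
console.timeEnd('js');
console.time('addon');
for (let i = 0; i < 1e7; i++) addon.add(i, i);
console.timeEnd('addon');
// Expect the addon loop to lose: every call pays for argument wrapping and
// unwrapping, while the pure-JS version gets inlined by the optimizing compiler.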

Does JavaScript object initialization time depend on the number of variables and functions?

Question 1: Does it take significantly more time to initialize a JavaScript object if it has a large number of variables and functions?
Question 2: Is a large JavaScript (.js) file a performance issue?
For Instance:
I am creating JavaScript objects using prototypes; my sample code is below:
function SimpleObject(){
  // no variables and functions attached to it
}
function RootObject(){
  var one = "one";
  var two = "two";
  // ...
  var one_thousand = "One Thousand";
}
function Child_1_Object(){
  // n number of variables
}
// ...
function Child_1000_Object(){
  // n number of variables
}
RootObject.prototype.get_Child_1_Object = function(){
  return new Child_1_Object();
};
// ...
RootObject.prototype.get_Child_1000_Object = function(){
  return new Child_1000_Object();
};
All of the above code is in one .js file, which has 10k lines of code (10 KLOC).
My question is: when I create an instance of RootObject, will it take significantly more time compared to the creation of a SimpleObject?
Question one:
Making an object more complicated, with members, functions, etc., will increase the amount of time needed to instantiate it. I wrote a simple jsPerf test to prove the point: http://jsperf.com/simple-vs-complex-object
One thing to note though, you're still creating hundreds of thousands of objects a second - it's hardly slow, even for very complex objects.
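In the same spirit as that jsPerf test, a minimal local version might look like this (a sketch; the absolute numbers are machine-dependent):
function Simple() {}
function Complex() {
  this.a = 1;
  this.b = "two";
  this.method = function () { return this.a; };
}
console.time('simple');
for (let i = 0; i < 1e6; i++) new Simple();
console.timeEnd('simple');
console.time('complex');
for (let i = 0; i < 1e6; i++) new Complex();
console.timeEnd('complex');
// Complex instances cost more per construction, but both still get created
// at a rate of hundreds of thousands (or millions) per second.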
Question two:
Large files are problematic only because of their size in terms of a client having to download them. Minifying your code will help with this. e.g. http://javascript-minifier.com/
Definitely. In fast, modern browsers these are milliseconds (or less), but every variable must be initialized, so 10k variables is always worse than 1.
It is, as it needs to be downloaded by the browser (the bigger, the slower) and then parsed by the JS engine (again: the bigger, the slower).
It's simple math, although, like I said before, if you're just initializing variables the delays are negligible, so you don't have to worry about that.
No. Most of the time required to instantiate a new object is determined by what the constructor does.
The main issue with having a lot of JavaScript (10k lines is not even remotely a large JavaScript file) is still a matter of what it is really doing. Sure, some JavaScript VMs may run into performance issues if you have 10 MB of JavaScript, but I've never seen even Internet Explorer 7 choke on something like ExtJS 3.4, which has about 2.5 MB of uncompressed JavaScript.
Now, download speed may be an issue, but parsing JavaScript is not. Parsers are written in C or C++ and run very quickly. When you are just declaring object types in JavaScript, much of that is just code parsing and assigning the prototype as a known type within the JavaScript VM. It doesn't have to actually execute the constructors at that point, so the code you have above will run quickly until, perhaps, you start instantiating objects.
Another thing about parsing JavaScript is that parsing is only one of the steps other languages take. Java, C#, C, C++, etc. also have at least a stage of converting the parse tree into some form of object code. Java and C# stop there because their runtimes do additional JIT compilation and optimization on the fly; C/C++ and some others have to do linking and optimization before generating usable machine code. Not that parsing is easy by any means for a programming language, but it is not the most performance-intensive part.
