Function won't eager-compile - JavaScript

I recently tried to eager-compile my JavaScript function following the guide in this link: https://v8.dev/blog/code-caching-for-devs
But even after enclosing the function in the IIFE-parenthesis heuristic described there, the V8 System Analyzer shows it as lazy-compiled and unoptimized; it does not show up under the eager-compile section. The creation time, however, shifted earlier, but there is no noticeable performance change. Does this mean that eager-compiled functions don't perform faster? Then what exactly is the advantage of an eager-compiled function?
(Please note: the function below was created for learning purposes. I use it to learn about eager-compiling JS functions.)
script.js:
const a = 1000;
const b = 2000;

const add = function (a, b) {
  var sampleObject2 = {
    firstName: "John",
    lastName: "Doe",
    age: 70,
    eyeColor: "blue"
  };
  var c = a ** b;
  var d = b ** c;
  c = c + 21;
  d = d + c;
  d = d ** sampleObject2.age;
  console.log(c ** d);
  return c ** d;
};

window.addEventListener("click", () => {
  var start = performance.now();
  add(a, b);
  var end = performance.now();
  console.log(end - start);
}, false);

(V8 developer here.)
the V8 System analyzer shows its lazy compiled
That's not what I'm seeing: with the code as written in your question, i.e. const add = function(a, b) { ... };, the System Analyzer says type: LazyCompile~.
When I add the parentheses that (currently!) get interpreted as an eager-compilation hint, i.e. const add = (function(a, b) { ... });, that changes to type: Function~ (as in your screenshot; so I guess you did actually add the parentheses before running the test that produced that screenshot).
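For reference, the difference is just the wrapping parentheses (a minimal sketch, not the asker's original function; V8 currently treats a parenthesized function expression as "probably invoked immediately" and compiles it eagerly):

```javascript
// Lazy-compiled: V8 pre-parses this, and fully compiles it
// only when it is first called.
const addLazy = function (a, b) {
  return a + b;
};

// Eager-compiled (currently!): the wrapping parentheses trigger the
// "probably an IIFE" heuristic, so V8 compiles the function right away.
const addEager = (function (a, b) {
  return a + b;
});

// Both behave identically; only the time of compilation differs.
console.log(addLazy(1, 2), addEager(1, 2));
```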
and unoptimized
That is expected. Optimization has nothing to do with lazy compilation. There is no way to optimize a function right away, because attempting that would make no sense: the optimizer can't do anything useful if it doesn't have type feedback to work with, which can only be collected during unoptimized execution.
Does this mean that eager compile functions don't perform faster?
Correct, the function's own performance doesn't change at all.
Then what exactly is the advantage of an eager-compiled function?
Preparing a function for later lazy compilation has a small cost. In case the function will be needed soon, compiling it right away avoids this cost. In fact, I do see a tiny difference in the times that your test reports, although I haven't verified that this difference is actually due to eager vs. lazy compilation; it may well be caused by something else.
If eager-compiling everything was a good idea, then engines would do it -- that'd be much easier. In reality, lazy-compiling is almost always worth the tiny amount of extra work it creates, because it makes startup/pageload faster, and can avoid quite a bit of memory consumption when there are unused functions, e.g. in your included libraries. So JavaScript engines accept a lot of internal complexity to make lazy compilation possible. I wouldn't recommend attempting to side-step that.
(For completeness: a couple of years ago V8's implementation of lazy compilation was not as optimized as it is today, and manually hinting eager compilation could achieve more significant improvements, especially for deeply nested functions. But these times are gone! And now it's just one more example that illustrates that in general you shouldn't go to great lengths to adapt your code to what engines are currently doing well or not-so-well.)
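For anyone repeating this experiment: a single performance.now() delta is very noisy. A sketch of a slightly more robust harness (medianTime is a hypothetical helper, not part of any API) looks like this:

```javascript
// Hypothetical helper: runs fn `n` times and returns the median
// duration in milliseconds. performance.now() is available in
// browsers and in modern Node.js as a global.
function medianTime(fn, n = 100) {
  const times = [];
  for (let i = 0; i < n; i++) {
    const start = performance.now();
    fn();
    times.push(performance.now() - start);
  }
  times.sort((x, y) => x - y);
  return times[Math.floor(n / 2)];
}

// Usage: the median of many runs is far less noisy than one sample.
const t = medianTime(() => Math.sqrt(12345));
console.log(`median: ${t} ms`);
```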

Related

TypeScript, why are subsequent function calls much faster than the original call? [duplicate]

I've got this problem I have been working on and found some interesting behavior. Basically, if I benchmark the same code multiple times in a row, the code execution gets significantly faster.
Here's the code:
http://codepen.io/kirkouimet/pen/xOXLPv?editors=0010
Here's a screenshot from Chrome:
Anybody know what's going on?
I'm checking performance with:
var benchmarkStartTimeInMilliseconds = performance.now();
...
var benchmarkEndTimeInMilliseconds = performance.now() - benchmarkStartTimeInMilliseconds;
Chrome's V8 engine initially compiles your code without optimizations. If a certain part of your code is executed very often (e.g. a function or a loop body), V8 will replace it with an optimized version (so-called "on-stack replacement").
According to https://wingolog.org/archives/2011/06/08/what-does-v8-do-with-that-loop:
V8 always compiles JavaScript to native code. The first time V8 sees a
piece of code, it compiles it quickly but without optimizing it. The
initial unoptimized code is fully general, handling all of the various
cases that one might see, and also includes some type-feedback code,
recording what types are being seen at various points in the
procedure.
At startup, V8 spawns off a profiling thread. If it notices that a
particular unoptimized procedure is hot, it collects the recorded type
feedback data for that procedure and uses it to compile an optimized
version of the procedure. The old unoptimized code is then replaced
with the new optimized code, and the process continues.
Other modern JS engines identify such hotspots and optimize them as well, in a similar fashion.
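The warm-up effect is easy to reproduce in any modern engine (a sketch; exact timings vary by machine and engine, so treat the numbers as illustrative only):

```javascript
// A small numeric kernel to benchmark repeatedly.
function sumOfSquares(n) {
  let total = 0;
  for (let i = 0; i < n; i++) total += i * i;
  return total;
}

for (let round = 0; round < 5; round++) {
  const start = performance.now();
  sumOfSquares(1e6);
  const elapsed = performance.now() - start;
  // Typically the first round is the slowest: it runs unoptimized code
  // and pays the cost of collecting type feedback; later rounds usually
  // hit the optimized version produced by the JIT.
  console.log(`round ${round}: ${elapsed.toFixed(2)} ms`);
}
```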

When is a JavaScript function optimized in a V8 environment?

I'm trying to learn whether, at which point, and to what extent the following TypeScript function is optimized on its way from JavaScript to machine code in some V8 environment, for example:
function foo(): number {
  const a = 1;
  const b = a;
  const c = b + 1;
  return c;
}
The function operates with constants with no parameters so it's always equivalent to the following function:
function foo(): number {
  return 1 + 1;
}
And eventually, in whatever bytecode or machine code results, it would just assign the number 2 to some register, without intermediary assignments of values or pointers from one register to another.
Assuming optimizing such simple logic is trivial, I could imagine a few potential steps where it could happen:
Compiling TypeScript to JavaScript
Generating abstract syntax tree from the JavaScript
Generating bytecode from the AST
Generating machine code from the bytecode
Repeating step 4 as per just-in-time compilation
Does this optimization happen, or is it a bad practice from the performance point of view to assign expressions to constants for better readability?
(V8 developer here.)
Does this optimization happen
Yes, it happens if and when the function runs hot enough to get optimized. Optimized code will indeed just write 2 into the return register.
Parsers, interpreters, and baseline compilers typically don't apply any optimizations. The reason is that identifying opportunities for optimizations tends to take more time than simply executing a few operations, so doing that work only pays off when the function is executed a lot.
Also, if you were to set a breakpoint in the middle of that function and inspect the local state in a debugger, you would want to see all three local variables and how they're being assigned step by step, so engines have to account for that possibility as well.
is it a bad practice from the performance point of view to assign expressions to constants for better readability?
No, it's totally fine to do that. Even if it did cost you a machine instruction or two, having readable code is usually more important.
This is true in particular when a more readable implementation lets you realize that you could simplify the whole algorithm. CPUs can execute billions of instructions per second, so saving a handful of instructions won't change much; but if you have an opportunity to, say, replace a linear scan with a constant-time lookup, that can save enough computational work (once your data becomes big enough) to make a huge difference.
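As a sketch of that last point (illustrative code, not from the answer): replacing a repeated linear scan with a Set lookup changes each query from O(n) to O(1):

```javascript
const ids = Array.from({ length: 100000 }, (_, i) => i);

// Linear scan: every lookup walks the array (O(n) per query).
function hasIdLinear(id) {
  return ids.includes(id);
}

// Constant-time lookup: build the Set once, then each query is O(1).
const idSet = new Set(ids);
function hasIdFast(id) {
  return idSet.has(id);
}

// Both return the same answers; only the algorithmic cost differs,
// which matters far more than any micro-optimization once the data grows.
console.log(hasIdLinear(99999), hasIdFast(99999));
```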

Why should I not not slice on the arguments object in javascript? [duplicate]

In the bluebird docs, they have this as an anti-pattern that stops optimization.. They call it argument leaking,
function leaksArguments2() {
  var args = [].slice.call(arguments);
}
I do this all the time in Node.js. Is this really a problem? And if so, why?
Assume only the latest version of Node.js.
Disclaimer: I am the author of the wiki page
It's a problem if the containing function is called a lot (i.e. is hot). Functions that leak arguments are not supported by the optimizing compiler (Crankshaft).
Normally when a function is hot, it will be optimized. However if the function contains unsupported features like leaking arguments, being a hot function doesn't help and it will continue running slow generic code.
The performance difference between an optimized function and an unoptimized one is huge. For example, consider a function that adds 3 doubles together: http://jsperf.com/213213213 (a 21x difference). What if it added 6 doubles together? A 29x difference. Generally, the more code the function has, the more severe the penalty for running in unoptimized mode.
For Node.js, stuff like this is actually a huge problem in general, because any CPU time completely blocks the server. Just by optimizing the URL parser included in Node core (my module is 30x faster in Node's own benchmarks), the requests per second of mysql-express improve from 70K rps to 100K rps in a benchmark that queries a database.
The good news is that Node core is aware of this.
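For completeness, the usual workarounds (the loop-copy idiom recommended at the time, and modern rest parameters) avoid leaking the arguments object entirely; this is a sketch, not bluebird's actual macro output:

```javascript
// Old workaround: copy `arguments` into a real array with a plain loop,
// instead of passing the arguments object to [].slice.call().
function copiesArguments() {
  var args = new Array(arguments.length);
  for (var i = 0; i < arguments.length; i++) {
    args[i] = arguments[i];
  }
  return args;
}

// Modern equivalent: rest parameters give you a real array directly
// and don't involve the arguments object at all.
function usesRest(...args) {
  return args;
}
```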
Is this really a problem
For application code, no. For almost any module/library code, no. For a library such as bluebird that is intended to be used pervasively throughout an entire codebase, yes. If you did this in a very hot function in your application, then maybe yes.
I don't know the details, but I trust the bluebird authors' claim that accessing arguments in the ways described in the docs causes V8 to refuse to optimize the function, which is why they consider it worth using a build-time macro to get the optimized version.
Just keep in mind the latency numbers that gave rise to node in the first place. If your application does useful things like talking to a database or the filesystem, then I/O will be your bottleneck and optimizing/caching/parallelizing those will pay vastly higher dividends than v8-level in-memory micro-optimizations such as above.

JavaScript - What Level of Code Optimization can one expect?

So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42)));
and secondly I know that the Java compiler's code optimization will take care of this anyway, i.e. the actual Java VM code will end up being inlined; so performance-wise, there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine, but as my code will end up running in many different browsers, let's just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point. There are easily features which are optimized quite efficiently; I was just kind of too lazy to write one down. Just imagine the above snippet inside a for loop which is called 100'000 times.
Don't expect much optimization; there won't be:
tail-call optimization,
loop unrolling,
function inlining,
etc.
Since client-side JavaScript is not designed for heavy CPU work, such optimization wouldn't make a huge difference anyway.
There are some guidelines for writing high-performance JavaScript code; most are minor technical points, like:
Avoid certain features like eval() and arguments.callee, which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones; e.g. don't write your own containers, JSON parser, etc.
Use local variables instead of global ones.
Never use a for-each loop over an array.
Use parseInt() rather than Math.floor().
AND stay away from jQuery.
All these techniques come more from experience, and may have reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf, to help you decide which approach is better.
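As a sketch of the "local variables instead of global ones" guideline (illustrative only; modern engines often hoist such lookups themselves, so measure before relying on it): cache a repeated property lookup in a local before a hot loop:

```javascript
const data = Array.from({ length: 1000 }, (_, i) => i);

// Resolving the property on every iteration...
function sumSlow() {
  let total = 0;
  for (let i = 0; i < data.length; i++) { // data.length read each pass
    total += data[i];
  }
  return total;
}

// ...versus caching the lookup in a local variable first.
function sumFast() {
  let total = 0;
  const len = data.length; // hoisted out of the loop
  for (let i = 0; i < len; i++) {
    total += data[i];
  }
  return total;
}
```

Both functions return the same result; the point is purely about where the property lookup happens.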
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; this will not boost your performance much, but it will make your code smaller.
Generally speaking, the final performance depends highly on how well your code is organized and how carefully your algorithms are designed, rather than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, and that Bar and FooFactory are stateless, so someStaticStuff() and getFoo() don't change anything):
for (int i = 0; i < 10000000; i++)
    blub.doSomethingImportantWithAFooObject(
        FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless or not. So this kind of code should be avoided in any language.
You are right: the level of optimization differs from one JS VM to another. But there is a way of working around that: several tools will optimize/minify your code for you. One of the most popular, by Google, is called the Closure Compiler. You can try out the web version, and there is a command-line version for build scripts etc. Besides that, there is not much I would try in terms of optimization, because after all JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, animating huge background images, or pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on Javascript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than to modify inline styles ad hoc), so separating your concerns (style from data) will be really important.
Making only the modifications you have to in order to get the job done (i.e. rather than the "HULK SMASH (DOM)!" method of replacing entire chunks of pages with AJAX calls to screen-scraping remote sources, calling for JSON data to update only the minimum number of elements needed), and other common-sense approaches, will get you a lot farther than hours of minor tweaking of a for-loop (though, again, common sense will get you pretty far there, too).
Good luck!

Does removing comments improve code performance? JavaScript

Does removing comments from JavaScript code improve performance?
I realize that this is not great programming practice, as comments form an intrinsic part of development. I am just interested to know whether they do in fact add some overhead during compilation.
Whether you're compiling or interpreting your JavaScript, the compiler/interpreter needs to look at each line, decide it's a comment, and move on (or look at a region of the line). For web applications, the comment lines also need to be downloaded.
So yes, there is some overhead.
However, I doubt you could find a real-world scenario where that difference matters.
If you are compiling your code, the overhead is only during the compile run, not during subsequent execution.
Removing comments will make the JavaScript file smaller and faster to download.
Other than that, it will not noticeably affect performance at all.
If you're concerned about bandwidth and want to make the file smaller, the best thing to do is to run the file through JSMin or a similar tool before deploying it to your production web site. (Make SURE to keep the original file, though.)
Update
As of September 24, 2016, the below answer is no longer accurate.
The source-length heuristic was removed in this commit:
https://github.com/v8/v8/commit/0702ea3000df8235c8bfcf1e99a948ba38964ee3#diff-64e6fce9a2a9948942eb00c7c1cb75f2
The surprising answer is possibly!
As Eric noted, there is overhead in terms of downloading and parsing, but this is likely to be so small as to be unnoticeable in most cases.
However, JavaScript performance is a treacherous beast to tame, and there is at least one other way that comments can affect performance.
Inlining
Modern (as of 2016) JavaScript engines such as V8 do a lot of fairly heavy work to ensure high performance. One of the things that these engines do is called JIT - "Just In Time" compilation. JIT compilation includes a number of complicated and occasionally unintuitive steps, one of which is inlining appropriate small functions into the call site.
Inlining means that given code like this:
function doIt(a, b) {
  return (a + b) * 2;
}

function loop() {
  var x = 1, y = 1;
  var i;
  for (i = 0; i < 100; ++i) {
    x = doIt(x, y);
  }
}
the compiler will do the equivalent of converting to this code:
function loop() {
  var x = 1, y = 1;
  var i;
  for (i = 0; i < 100; ++i) {
    // the doIt call is now gone, replaced with inlined code
    x = (x + y) * 2;
  }
}
The JIT compiler is able to determine that it's fine to replace the call to doIt with the body of the function. This can unlock big performance wins, as it removes the performance overhead of function calls completely.
Comments
However, how does the JavaScript engine choose which functions are suitable for inlining? There are a number of criteria, and one of them is the size of the function. Ideally, this would be the size of the compiled function, but V8's optimizer uses the length of the human-readable code in the function, including comments.
So, if you put too many comments in a function, it can potentially push it past V8's arbitrary inline function length threshold, and suddenly you are paying function call overhead again.
More info
Please take a look at Julien Crouzet's neat post for more details:
https://web.archive.org/web/20190206050838/https://top.fse.guru/nodejs-a-quick-optimization-advice-7353b820c92e?gi=90fffc6b0662
Note that Julien talks about Crankshaft; V8 has since introduced TurboFan, but the source-length criterion remains.
The full list of criteria is in the (remarkably readable) TurboFan source code here, with the source-length criterion highlighted:
https://github.com/v8/v8/blob/5ff7901e24c2c6029114567de5a08ed0f1494c81/src/compiler/js-inlining-heuristic.cc#L55
Performance lag while the browser interprets the code? No significant difference.
But comments do add to the byte size, which makes the file take longer to download.
But that's no reason to omit comments. Keep your development codebase commented. Use javascript compressors prior to release.
Also, for release, try to bundle your entire JavaScript codebase for a page into a single file, so as to minimize HTTP requests; HTTP requests carry a significant performance penalty.
It would make no noticeable difference to the execution of the JavaScript.
What it does make a difference to is the size of the JavaScript files being downloaded to client browsers. If you have lots of comments, it can significantly increase the size of the JavaScript files. This can be even more so with whitespace characters used for layout.
The usual approach is to "minify" .js files before deployment to remove comments and whitespace characters. Minifiers can also rename variables to shorter names to save extra space. They usually make the original JavaScript unreadable to the human eye though, so it is best to make sure you keep a copy of the un-minified file for development.
Another issue is that comments of the sort "This code is crap but we must meet the deadline" may not look as good in a customer's browser.
I'm not sure about runtime speed, but removing comments will decrease download size, which is just as important.
You should always have comments in the code you work on, and use a minifier to strip them out for deployment - YUI Compressor is a good one.
