Does removing comments improve code performance? (JavaScript)

Does removing comments from JavaScript code improve performance?
I realize that this is not great programming practice, as comments form an intrinsic part of development. I am just interested to know if they do in fact add some overhead during compilation.

Whether you're compiling or interpreting your JavaScript, the compiler/interpreter needs to look at the line, decide it's a comment, and move on (or look at a region of the line). For web applications, the comment lines also need to be downloaded.
So yes, there is some overhead.
However, I doubt you could find a real-world scenario where that difference matters.
If you are compiling your code, the overhead is only during the compile run, not during subsequent execution.

Removing comments will make the JavaScript file smaller and faster to download.
Other than that, it will not noticeably affect performance at all.
If you're concerned about bandwidth and want to make the file smaller, the best thing to do is to run the file through JSMin or a similar tool before deploying it to your production web site. (Make SURE to keep the original file, though.)
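For a sense of what such a tool does, here is a hand-written before/after sketch (hypothetical code, not actual JSMin output; JSMin only strips comments and whitespace, while minifiers like UglifyJS also rename locals):
// Before: commented development source.
// Computes the total price of a cart including tax.
function totalPrice(items, taxRate) {
  var subtotal = 0; // running sum of item prices
  for (var i = 0; i < items.length; i++) {
    subtotal += items[i].price;
  }
  return subtotal * (1 + taxRate);
}

// After: comments and whitespace stripped, locals renamed.
function totalPrice(t,r){var s=0;for(var i=0;i<t.length;i++){s+=t[i].price}return s*(1+r)}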

Update
As of September 24, 2016, the below answer is no longer accurate.
The source-length heuristic was removed in this commit:
https://github.com/v8/v8/commit/0702ea3000df8235c8bfcf1e99a948ba38964ee3#diff-64e6fce9a2a9948942eb00c7c1cb75f2
The surprising answer is possibly!
As Eric noted, there is overhead in terms of downloading and parsing, but this is likely to be so small as to be unnoticeable in most cases.
However, JavaScript performance is a treacherous beast to tame, and there is at least one other way that comments can affect performance.
Inlining
Modern (as of 2016) JavaScript engines such as V8 do a lot of fairly heavy work to ensure high performance. One of the things that these engines do is called JIT - "Just In Time" compilation. JIT compilation includes a number of complicated and occasionally unintuitive steps, one of which is inlining appropriate small functions into the call site.
Inlining means that given code like this:
function doIt(a, b) {
  return (a + b) * 2;
}

function loop() {
  var x = 1, y = 1;
  var i;
  for (i = 0; i < 100; ++i) {
    x = doIt(x, y);
  }
}
the compiler will do the equivalent of converting to this code:
function loop() {
  var x = 1, y = 1;
  var i;
  for (i = 0; i < 100; ++i) {
    // the doIt call is now gone, replaced with inlined code
    x = (x + y) * 2;
  }
}
The JIT compiler is able to determine that it's fine to replace the call to doIt with the body of the function. This can unlock big performance wins, as it removes the performance overhead of function calls completely.
Comments
However, how does the JavaScript engine choose which functions are suitable for inlining? There are a number of criteria, and one of them is the size of the function. Ideally, this would be the size of the compiled function, but V8's optimizer uses the length of the human-readable code in the function, including comments.
So, if you put too many comments in a function, it can potentially push it past V8's arbitrary inline function length threshold, and suddenly you are paying function call overhead again.
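A contrived sketch of the effect (the exact character threshold was an internal V8 detail, so treat this as hypothetical):
// Identical logic, but the old heuristic measured the raw source length
// of a function, comments included, when deciding whether to inline it.
function addSmall(a, b) {
  return a + b;
}

function addPadded(a, b) {
  // Imagine several hundred characters of documentation here: parameter
  // descriptions, change history, TODO notes... every character counted
  // toward the function's "size" and could push it past the inline limit.
  return a + b;
}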
More info
Please take a look at Julien Crouzet's neat post for more details:
https://web.archive.org/web/20190206050838/https://top.fse.guru/nodejs-a-quick-optimization-advice-7353b820c92e?gi=90fffc6b0662
Note that Julien talks about Crankshaft; V8 has since introduced TurboFan, but the source-length criterion remains.
The full list of criteria is in the (remarkably readable) TurboFan source code here, with the source-length criterion highlighted:
https://github.com/v8/v8/blob/5ff7901e24c2c6029114567de5a08ed0f1494c81/src/compiler/js-inlining-heuristic.cc#L55

Performance lag while the browser interprets the code? No significant difference.
But comments do add to the byte size, which makes the file take longer to download.
That's no reason to omit them, though. Keep your development codebase commented, and run JavaScript compressors over it prior to release.
Also, during the release, try to bundle your entire JavaScript codebase for a page into a single file so as to minimize HTTP requests, since each HTTP request bears a significant performance penalty.

It would make no noticeable difference to the execution of the JavaScript.
What it does make a difference to is the size of the JavaScript files being downloaded to client browsers. If you have lots of comments, it can significantly increase the size of the JavaScript files. This can be even more so with whitespace characters used for layout.
The usual approach is to "minify" .js files before deployment to remove comments and whitespace characters. Minifiers can also rename variables to shorter names to save extra space. They usually make the original JavaScript unreadable to the human eye though, so it is best to make sure you keep a copy of the un-minified file for development.

Another issue is that comments of the sort "This code is crap but we must meet the deadline" may not look as good in a customer's browser.

I'm not sure about runtime speed, but removing comments will decrease download size, which is just as important.
You should always have comments in the code you work on, and use a minifier to strip them out for deployment - YUI Compressor is a good one.

Related

Function won't eager compile

I recently tried to eager compile my JavaScript function following the guide at https://v8.dev/blog/code-caching-for-devs
But even after wrapping the function in parentheses, the IIFE heuristic described there, the V8 System Analyzer shows it is lazily compiled and unoptimized; it does not show up under the eager-compile section. The creation time did shift earlier, but there is no noticeable performance change. Does this mean that eager-compiled functions don't perform faster? Then what exactly is the advantage of an eager-compiled function?
(Please note: the function below was created for learning purposes; I use it to learn about eager compiling JS functions.)
script.js:
const a = 1000;
const b = 2000;

const add = function (a, b) {
  var sampleObject2 = {
    firstName: "John",
    lastName: "Doe",
    age: 70,
    eyeColor: "blue"
  };
  var c = a ** b;
  var d = b ** c;
  c = c + 21;
  d = d + c;
  d = d ** sampleObject2.age;
  console.log(c ** d);
  return c ** d;
};

window.addEventListener("click", () => {
  var start = performance.now();
  add(a, b);
  var end = performance.now();
  console.log(end - start);
}, false);
(V8 developer here.)
the V8 System Analyzer shows it is lazily compiled
That's not what I'm seeing: with the code as written in your question, i.e. const add = function(a, b) { ... };, the System Analyzer says type: LazyCompile~.
When I add the parentheses that (currently!) get interpreted as an eager-compilation hint, i.e. const add = (function(a, b) { ... });, that changes to type: Function~ (as in your screenshot; so I guess you did actually add the parentheses before running the test that produced that screenshot).
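In code, the difference is just the wrapping parentheses (a minimal sketch; the hint is an engine implementation detail that may change):
// Typical lazy compilation: parsed up front, fully compiled on first call.
const addLazy = function (a, b) { return a + b; };

// Eager-compilation hint: the parentheses make this look like an IIFE, so
// the parser assumes the function is about to be called and compiles it
// right away.
const addEager = (function (a, b) { return a + b; });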
and unoptimized
That is expected. Optimization has nothing to do with lazy compilation. There is no way to optimize a function right away, because attempting that would make no sense: the optimizer can't do anything useful if it doesn't have type feedback to work with, which can only be collected during unoptimized execution.
Does this mean that eager-compiled functions don't perform faster?
Correct, the function's own performance doesn't change at all.
Then what exactly is the advantage of an eager-compiled function?
Preparing a function for later lazy compilation has a small cost, so if the function will be needed soon anyway, compiling it right away avoids that cost. In fact, I do see a tiny difference in the times that your test reports, although I haven't verified that this difference is actually due to eager vs. lazy compilation; it may well be caused by something else.
If eager-compiling everything was a good idea, then engines would do it -- that'd be much easier. In reality, lazy-compiling is almost always worth the tiny amount of extra work it creates, because it makes startup/pageload faster, and can avoid quite a bit of memory consumption when there are unused functions, e.g. in your included libraries. So JavaScript engines accept a lot of internal complexity to make lazy compilation possible. I wouldn't recommend attempting to side-step that.
(For completeness: a couple of years ago V8's implementation of lazy compilation was not as optimized as it is today, and manually hinting eager compilation could achieve more significant improvements, especially for deeply nested functions. But these times are gone! And now it's just one more example that illustrates that in general you shouldn't go to great lengths to adapt your code to what engines are currently doing well or not-so-well.)

When is a JavaScript function optimized in a V8 environment?

I'm trying to learn whether, at which point, and to what extent the following TypeScript function is optimized on its way from JavaScript to machine code in a V8 environment, for example:
function foo(): number {
  const a = 1;
  const b = a;
  const c = b + 1;
  return c;
}
The function operates on constants and takes no parameters, so it's always equivalent to the following function:
function foo(): number {
  return 1 + 1;
}
And eventually, in whatever bytecode or machine code results, it would just assign the number 2 to some register, without intermediary assignments of values or pointers from one register to another.
Assuming optimizing such simple logic is trivial, I could imagine a few potential steps where it could happen:
1. Compiling TypeScript to JavaScript
2. Generating an abstract syntax tree from the JavaScript
3. Generating bytecode from the AST
4. Generating machine code from the bytecode
5. Repeating step 4 as per just-in-time compilation
Does this optimization happen, or is it a bad practice from the performance point of view to assign expressions to constants for better readability?
(V8 developer here.)
Does this optimization happen
Yes, it happens if and when the function runs hot enough to get optimized. Optimized code will indeed just write 2 into the return register.
Parsers, interpreters, and baseline compilers typically don't apply any optimizations. The reason is that identifying opportunities for optimizations tends to take more time than simply executing a few operations, so doing that work only pays off when the function is executed a lot.
Also, if you were to set a breakpoint in the middle of that function and inspect the local state in a debugger, you would want to see all three local variables and how they're being assigned step by step, so engines have to account for that possibility as well.
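If you want to watch this happen, V8 exposes internal intrinsics behind a flag. A sketch (the file name is made up, and these intrinsics plus the status bitfield are unstable debugging hooks that change between versions):
// Run with: node --allow-natives-syntax optimize-demo.js
function foo() {
  const a = 1;
  const b = a;
  const c = b + 1;
  return c;
}

%PrepareFunctionForOptimization(foo); // newer V8 versions expect this first
foo();                                // gather type feedback in unoptimized code
%OptimizeFunctionOnNextCall(foo);     // request optimization for the next call
foo();                                // this call goes through the optimizing compiler

// In V8's own test harness, bit 1 << 4 of the status means "optimized".
const status = %GetOptimizationStatus(foo);
console.log(status & (1 << 4) ? "optimized" : "not optimized");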
is it a bad practice from the performance point of view to assign expressions to constants for better readability?
No, it's totally fine to do that. Even if it did cost you a machine instruction or two, having readable code is usually more important.
This is true in particular when a more readable implementation lets you realize that you could simplify the whole algorithm. CPUs can execute billions of instructions per second, so saving a handful of instructions won't change much; but if you have an opportunity to, say, replace a linear scan with a constant-time lookup, that can save enough computational work (once your data becomes big enough) to make a huge difference.
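As a minimal sketch of that last point (hypothetical names), replacing a repeated linear scan with a Set lookup changes the complexity class, which dwarfs any instruction-level savings:
const ids = Array.from({ length: 100000 }, (_, i) => i * 2);

// Linear scan: O(n) work on every query.
function hasIdSlow(id) {
  return ids.includes(id);
}

// Constant-time lookup: O(1) per query after building the Set once.
const idSet = new Set(ids);
function hasIdFast(id) {
  return idSet.has(id);
}

console.log(hasIdSlow(123456), hasIdFast(123456)); // same answers, very different cost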

JavaScript - What Level of Code Optimization can one expect?

So, I am fairly new to JavaScript coding, though not new to coding in general. When writing source code I generally have in mind the environment my code will run in (e.g. a virtual machine of some sort) - and with it the level of code optimization one can expect. (1)
In Java for example, I might write something like this,
Foo foo = FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42));
blub.doSomethingImportantWithAFooObject(foo);
even if the foo object is only used at this very location (thus introducing a needless variable declaration). Firstly, it is my opinion that the code above is far more readable than the inlined version
blub.doSomethingImportantWithAFooObject(FooFactory.getFoo(Bar.someStaticStuff("qux","gak",42)));
and secondly I know that the Java compiler's code optimization will take care of this anyway, i.e. the actual Java VM code will end up being inlined, so performance-wise there is no difference between the two. (2)
Now to my actual Question:
What Level of Code Optimization can I expect in JavaScript in general?
I assume this depends on the JavaScript engine, but as my code will end up running in many different browsers, let's just assume the worst and look at the worst case. Can I expect a moderate level of code optimization? What are some cases I still have to worry about?
(1) I do realize that finding good/the best algorithms and writing well-organized code is more important and has a bigger impact on performance than a bit of code optimization. But that would be a different question.
(2) Now, I realize that the actual difference, were there no optimization, is small. But that is beside the point. There are plenty of constructs which are optimized quite effectively; I was just too lazy to write one down. Just imagine the above snippet inside a for loop which is called 100,000 times.
Don't expect much in the way of optimization; there won't be
tail-call optimization,
loop unrolling,
function inlining,
etc.
As JavaScript on the client is not designed to do heavy CPU work, such optimization wouldn't make a huge difference anyway.
There are some guidelines for writing high-performance JavaScript code. Most are minor techniques, like:
Don't use certain functions such as eval() or arguments.callee, which prevent the JS engine from generating high-performance code.
Use native features over hand-written ones; for example, don't write your own containers or JSON parser.
Use local variables instead of global ones.
Never use a for-each loop over an array.
Use parseInt() rather than Math.floor.
AND stay away from jQuery.
All these techniques come from experience and may have reasonable explanations behind them. So you will have to spend some time searching around, or try jsPerf, to help you decide which approach is better.
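For example, a crude way to check the local-vs-global claim from the list above yourself (console.time is coarse; a jsPerf-style harness is more reliable):
var globalCounter = 0;

function bumpGlobal() {
  // every access has to resolve a property on the global object
  for (var i = 0; i < 1e6; i++) globalCounter++;
}

function bumpLocal() {
  // the counter stays within the function's own scope
  var localCounter = 0;
  for (var i = 0; i < 1e6; i++) localCounter++;
  return localCounter;
}

console.time("global");
bumpGlobal();
console.timeEnd("global");

console.time("local");
bumpLocal();
console.timeEnd("local");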
When you release the code, use the Closure Compiler to take care of dead branches and unnecessary variables; it will not boost your performance a lot, but it will make your code smaller.
Generally speaking, the final performance depends far more on how well your code is organized and how carefully your algorithms are designed than on how the optimizer performs.
Take your example above (assuming FooFactory.getFoo() and Bar.someStaticStuff("qux","gak",42) always return the same result, and that Bar and FooFactory are stateless, so someStaticStuff() and getFoo() won't change anything):
for (int i = 0; i < 10000000; i++)
  blub.doSomethingImportantWithAFooObject(
    FooFactory.getFoo(Bar.someStaticStuff("qux", "gak", 42)));
Even g++ with the -O3 flag can't make that code faster, because the compiler can't tell whether Bar and FooFactory are stateless or not. So this kind of code should be avoided in any language.
You are right, the level of optimization differs from one JS VM to another. But! There is a way of working around that: several tools will optimize/minimize your code for you. One of the most popular, by Google, is the Closure Compiler. You can try out the web version, and there is a command-line version for build scripts etc. Beyond that, there is not much I would try in the way of optimization, because, after all, JavaScript is sort of fast enough.
In general, I would posit that unless you're playing really dirty with your code (leaving all your vars at global scope, creating a lot of DOM objects, making expensive AJAX calls to non-optimal datasources, etc.), the real trick with optimizing performance will be in managing all the other things you're loading in at run-time.
Loading dozens upon dozens of images, animating huge background images, or pulling in large numbers of scripts and CSS files can all have a much greater impact on performance than even moderately complex JavaScript that is written well.
That said, a quick Google search turns up several sources on JavaScript performance optimization:
http://www.developer.nokia.com/Community/Wiki/JavaScript_Performance_Best_Practices
http://www.nczonline.net/blog/2009/02/03/speed-up-your-javascript-part-4/
http://mir.aculo.us/2010/08/17/when-does-javascript-trigger-reflows-and-rendering/
As two of those links point out, the most expensive operations in a browser are reflows (where the browser has to redraw the interface due to DOM manipulation), so that's where you're going to want to be the most cautious in terms of performance. Some of that can be alleviated by being smart about what you're modifying on the fly (for example, it's less expensive to apply a class than modify inline styles ad hoc,) so separating your concerns (style from data) will be really important.
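A small sketch of that advice (assuming an element with id "panel" and a .highlighted rule already defined in the page's CSS):
const panel = document.querySelector("#panel");

// Cheaper: one class toggle lets the browser apply all the styles from a
// single CSS rule in one style recalculation.
panel.classList.add("highlighted");

// More expensive pattern: several ad-hoc inline style writes, which can
// trigger extra style/layout work, especially when interleaved with reads.
panel.style.background = "#ff0";
panel.style.border = "1px solid #c00";
panel.style.padding = "4px";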
Making only the modifications you have to in order to get the job done (i.e., rather than the "HULK SMASH (DOM)!" method of replacing entire chunks of pages via AJAX calls to screen-scraping remote sources, calling for JSON data to update only the minimum number of elements needed) and other common-sense approaches will get you a lot farther than hours of minor tweaking of a for loop (though, again, common sense will get you pretty far there, too).
Good luck!

Is writing your own parser worth the extra runtime it causes?

I am doing work involving a lot of DOM manipulation. I wrote a function to more efficiently change common styles on objects using "hotkeys". It simply interprets this:
styles = parseStyles({p:"absolute",l:"100px",t:"20px",bg:"#CC0000"});
as this:
styles = {position:"absolute",left:"100px",top:"20px",background:"#CC0000"};
This came about mainly to save me from having to read so much, and because I wanted to see if I could do it :) (as well as to save on file size). I find these hotkeys easier to look at; I am setting and resetting styles dozens of times for different custom DOM objects.
But, is having a bottleneck like this going to be a significant burden on performance and runtime if my page is using it up to 5,000 times in a session, about 10-25 executions at a time?
function parseStyles(toParse) {
  var stylesKey = {
        h: "height", p: "position", l: "left", t: "top",
        r: "right", b: "bottom", bg: "background"
      },
      parsedStyles = {};
  for (var entry in toParse) { // "var" keeps the loop variable out of global scope
    if (entry in stylesKey) {
      parsedStyles[stylesKey[entry]] = toParse[entry];
    } else {
      parsedStyles[entry] = toParse[entry];
    }
  }
  return parsedStyles;
}
I find that setting non-computed styles is rarely ever needed any more. If you know the styles ahead of time, define a class for that in your CSS and addClass or removeClass on the necessary objects. Your code is a lot more maintainable, and all style-specific info lives in your CSS files rather than your JavaScript files. Pretty much the only time I set formatting-style info directly on an object anymore is when I'm using computed positions with absolute positioning, and even then, if I rack my brain hard enough, the positioning problem can usually be solved with plain CSS/HTML.
The examples you cite in your question look like static styles which could all be done with a CSS rule and simply doing addClass to an object. Besides being cleaner, it should be a lot faster to execute too.
It looks like what you're doing is using run-time parsing in order to save development-time typing. That's fine if you don't notice the performance difference. Only you can know that for sure because only you can test your app in the environments that you need it to run (old browser, old CPU, stress usage). We can't answer that for you. It would certainly be faster not to be doing run-time conversion for something that is known at development time. Whether that speed difference is relevant depends upon a lot of details and app requirements you haven't disclosed (and may not have even seriously thought about) and could only be figured out with configuration testing.
If it were me and I had any thoughts that maybe I was calling this so much that it might impact performance, I'd find it a lot less work to do a little extra typing (or search and replace) and not have to test the potential performance issues.
Memoize your parsing function.
The simple fact is that, over some finite period of time, the number of distinct styles, or full style strings, that you will process is likely to be quite small and will likely contain a reasonable amount of duplication.
So, when you go to parse a style expression, you can do something simple like store the expression in a map, and check if you've seen it before. If you have, return the result that you got before.
This will save you quite a bit of time when reuse is involved, and likely not cost you much time when it's not.
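A minimal memoization sketch on top of the parseStyles function from the question (keyed with JSON.stringify, which is adequate for small, flat style objects written with a consistent key order):
var parseStylesCache = {};

function parseStylesMemo(toParse) {
  var key = JSON.stringify(toParse); // cheap cache key for small flat objects
  if (!(key in parseStylesCache)) {
    parseStylesCache[key] = parseStyles(toParse); // delegate to the real parser once
  }
  return parseStylesCache[key];
}

// Repeated calls with the same literal now hit the cache.
var styles = parseStylesMemo({p:"absolute", l:"100px", t:"20px", bg:"#CC0000"});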

JavaScript optimization: Caching math functions globally

I'm currently doing some "extreme" optimization on a JavaScript game engine I'm writing, and I have noticed that I use math functions a lot! Currently I'm only caching them locally, per function that uses them, so I was going to cache them at the global level on the window object using the code below.
var aMathFunctions = Object.getOwnPropertyNames(Math);
for (var i in aMathFunctions) {
  window[aMathFunctions[i]] = Math[aMathFunctions[i]];
}
Are there any major problems or side effects with this? Will I be overwriting existing functions in window, and will I be increasing my memory footprint dramatically? Or what else may go wrong?
EDIT: Below is an excerpt from reading I have done about JavaScript optimization that led me to try this.
Property Depth
Nesting objects in order to use dot notation is a great way to namespace and organize your code. Unfortunately, when it comes to performance, this can be a bit of a problem. Every time a value is accessed in this sort of scenario, the interpreter has to traverse the objects you've nested in order to get to that value. The deeper the value, the more traversal, the longer the wait. So even though namespacing is a great organizational tool, keeping things as shallow as possible is your best bet at faster performance. The latest incarnation of the YUI Library evolved to eliminate a whole layer of nesting from its namespacing. So, for example, YAHOO.util.Anim is now Y.Anim.
Reference: http://www.phpied.com/extreme-javascript-optimization/
Edit: Should not matter anymore in Chrome due to this revision; perhaps caching is now even faster.
Don't do it, it's much slower when using global functions.
http://jsperf.com/math-vs-global
On Chrome:
sqrt(2); - 12,453,198 ops/second
Math.sqrt(2); - 542,475,219 ops/second
As for memory usage, on the other hand, globalizing them wouldn't be bad at all. You just create another reference; the function itself will not be copied.
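A two-line check that globalizing creates only another reference, not a copy:
var floor = Math.floor;
console.log(floor === Math.floor); // true: both names point to the same function object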
I'm actually amazed that it is faster for me on Mac OS X and Firefox 5: we're talking a 5-8 ms difference over 50,000 iterations.
console.time("a");
for (var i=0;i<50000;i++) {
var x = Math.floor(12.56789);
}
console.timeEnd("a");
var floor = Math.floor;
console.time("b");
for (var i=0;i<50000;i++) {
var y = floor(12.56789);
}
console.timeEnd("b");
I see only one real bonus: it may reduce the footprint of the code. I have not tested any other browsers, so it may be a boost in one and slower in others.
Would it cause any problems? I don't see why it would unless you have things in global scope with those names. :)
