Does minified JavaScript code improve performance? - javascript

I'm making an AIR application (so the download time doesn't have a huge impact). Does combining and minifing all the JavaScript files affect the performance? How would obfuscating affect the performance?

Minifying improves performance for your page overall by decreasing the load time (even if only slightly).
Neither minifying nor obfuscating alter the execution time by any perceivable amount for the vast majority of JavaScript code out there.
I do recommend minifying for those reasons and more. Minifying multiple scripts together (like jQuery and its plugins) can yield even greater savings.
As pointed out, on constrained devices and/or with very large codebases minifying could yield a noticeable result.

Minification
Minification does improve performance for two reasons:
Reduced file size (comments and unnecessary whitespace are removed), so your script loads faster, even if it is embedded into the <head>.
It is parsed faster, since comments and whitespace don't have to be skipped over (they're simply not there).
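As a loose illustration of both points, here is a naive toy "minifier" (real tools like UglifyJS or Terser do much more, and this regex approach would break on strings containing `//`; it is only a sketch):

```javascript
// Toy "minifier": strips line comments and collapses whitespace.
// WARNING: regex-based stripping like this breaks on string literals
// containing "//"; real minifiers (UglifyJS, Terser) parse the code.
function naiveMinify(src) {
  return src
    .replace(/\/\/[^\n]*/g, '') // drop line comments
    .replace(/\s+/g, ' ')       // collapse runs of whitespace
    .trim();
}

const original = `
// add two numbers
function add(a, b) {
  return a + b; // sum
}
`;
const minified = naiveMinify(original);

// Smaller payload, identical behavior:
const addA = new Function(original + '; return add;')();
const addB = new Function(minified + '; return add;')();
console.log(original.length, '->', minified.length);
console.log(addA(2, 3) === addB(2, 3)); // true
```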
Combining
I've written quite a few HTML/JavaScript AIR applications, and from personal experience, combining files won't make a difference. In fact, it's good practice to separate the script based on certain criteria (classes, global functions, SQL functions, etc.). It helps keep them organised when the project becomes too big.
Obfuscation
Obfuscation is usually a combination of minification and variable renaming, and some obfuscators (packers) additionally encode the source and use eval to blow the code back up at runtime. That unpacking step reduces performance for obvious reasons, with the cost depending on the size of your code.
I'd suggest running tests to understand this best for your specific situation.

Everyone here already talked about minifying, but nobody talked about the second part of your question - combining. This will definitely improve performance, probably even more than minifying.
Multiple files require multiple HTTP requests, so when you put them all into one file, only one request is needed. This is important for two reasons:
each individual HTTP request carries its own latency and may take longer for various routing reasons, so a single slow file can potentially delay your whole application.
browsers and other clients have a maximum limit of files they are allowed to download concurrently from a single domain. Depending on the number of files in your application, this may mean the client queuing them up, thus making the load even longer.
Also, besides minifying and combining, you have to absolutely make sure you have some sort of server-side compression enabled. This can save you 90% or even more in the amount of bytes transferred, depending on the files.
You can read more about compression (gzip, deflate) in How to make your site lightning fast by compressing (deflate/gzip) your HTML, JavaScript, CSS, XML, etc. in Apache.

Minification does not improve the speed at which JavaScript executes at runtime, which I believe is what you're really interested in. In fact, if you use Packer's Base64 compression, it's actually slower on initial load.
Minification will reduce the size of the JavaScript though, making your application's download size smaller.

Minifying strips out all comments, superfluous white space and shortens variable names. It thus reduces download time for your JavaScript files as they are (usually) a lot smaller in filesize. So, yes it does improve performance.
The obfuscation shouldn't adversely affect performance.
The article Best Practices for Speeding Up Your Web Site talks about minifying.

I'd like to post this as a separate answer as it somewhat contrasts the accepted one:
Yes, it does make a performance difference, as it reduces parsing time - and that's often the critical thing. In my case the effect was roughly linear in the file size: minifying from 3 MB to 1 MB cut the parse time from 12 seconds to 4 seconds.
It's not a big application either. It just has a couple of reasonable dependencies.
So the moral of the story here is: Yes, minifying is important for performance - and not because of bandwidth, but because of parsing.

According to this page:
Minification in JavaScript is the process of removing all characters that are not necessary from the JavaScript source code. That is why it is called “minification” – because all of the data that is not necessary to the functioning of the JavaScript is removed from the source code, and therefore the JavaScript is “minimized”. Even though these characters are removed from the JavaScript source code, the functionality of the JavaScript code does not change at all.
So, your JavaScript code will behave exactly the same even after it goes through the minification process. Code that has gone through the minification process is also known as “minified” code.
What are the benefits and advantages of JavaScript minification
The main purpose of JavaScript minification is to speed up the downloading or transfer of the JavaScript code from the server hosting the website’s JavaScript. The reason that minification makes downloads go faster is because it reduces the amount of data (in the minified JavaScript file) that needs to be downloaded. Less data means that the user’s browser spends less time processing that data, which is why time is saved. So, we can say that minification is performed on JavaScript source code because it is essentially a performance enhancement – and it allows websites that use minified JavaScript to load faster.

Related

Performance of loading one large javascript file vs multiple smaller files with less overall size?

I have a question regarding the loading of javascript files. From what I've read so far, the general consensus appears to be "less script files is better", but I could not find an answer to my next question:
Say you have a javascript file with 4000 lines of code, but only 500 lines of that are used on one specific page. Would it make sense (performance-wise) to only load this part of the script when that specific page is opened (using something like if URL == X then load {})? Or would it be better if you load the entire script all at once?
(Please assume the script in question can be refactored into multiple files)
Realistically, breaking up the javascript loaded by functionality at a granular level (~500 lines as suggested in your question) will not give you much better performance. You'll get a lot more benefits from combining, minifying, and optimizing your javascript. Don't forget to minify your CSS as well.
This will generally speed up download times because it not only reduces the number of bytes transferred (what you're focusing on), but actually makes your javascript smaller (making variable names smaller & removing whitespace), and also reduces the number of HTTP requests (this actually has a large effect on page performance). Check out this article for a more in-depth demo of how this affects speed.
Additionally, by always using the same combined/minified file (instead of a different one for each page in your app), you take advantage of the browser caching your script on first load, meaning after the initial load, your page gets its scripts from cache.
(Note - I linked to Google PageSpeed for optimization tips above - there is a lot more than this that is useful to know for doing js optimizations)

Do comments affect performance?

Am I correct to say that JavaScript code isn't compiled, not even JIT? If so, does that mean that comments have an effect on performance, and that I should be very careful where I place my comments? Such as placing function comments above and outside the function definition when possible, and definitely avoiding placing comments inside loops, if I wanted to maximize performance? I know that in most cases (at least in non-loop cases), the change in performance would be negligible, but I think this is something that is good to know and be aware of, especially for front-end/JS developers. Also, a relevant question was asked on a JS assessment I recently took.
Am I correct to say that JavaScript code isn't compiled, not even JIT?
No. Although JavaScript is traditionally an "interpreted" language (although it needn't necessarily be), most JavaScript engines compile it on-the-fly whenever necessary. V8 (the engine in Chrome and NodeJS) used to compile immediately and quickly, then go back and aggressively optimize any code that was used a lot (the old FullCodegen+TurboFan stack); a while back, having done lots of real-world measurement, they switched to initially parsing to bytecode and interpreting, and then compiling if code is reused much at all (the new Ignition+TurboFan stack), gaining a significant memory savings by not compiling run-once setup code. Even engines that are less aggressive almost certainly at least parse the text into some form of bytecode, discarding comments early.
Remember that "interpreted" vs. "compiled" is usually more of an environmental thing than a language thing; there are C interpreters, and there are JavaScript compilers. Languages tend to be closely associated with environments (like how JavaScript tends to be associated with the web browser environment, even though it's always been used more widely than that, even back in 1995), but even then (as we've seen), there can be variation.
If so, does that mean that comments have an effect on performance...
A very, very, very minimal one, on the initial parsing stage. But comments are very easy to scan past, nothing to worry about.
If you're really worried about it, though, you can minify your script with tools like jsmin or the Closure Compiler (even with just simple optimizations). The former will just strip comments and unnecessary whitespace, stuff like that (still pretty effective); the latter does that and actually understands the code and does some inlining and such. So you can comment freely, and then use those tools to ensure that whatever minuscule impact those comments may have when the script is first loaded is bypassed by using minifying tools.
Of course, the thing about JavaScript performance is that it's hard to predict reliably cross-engine, because the engines vary so much. So experiments can be fun:
Here's an experiment which (in theory) reparses/recreates the function every time
Here's one that just parses/creates the function once and reuses it
Result? My take is that there's no discernible difference within the measurement error of the test.
The biggest effect that comments have is to bloat the file size and thereby slow down the download of the script. That is why all professional sites run a minifier on the production version to cut the JS down to as small as it gets.
It may have some effect, though a very minimal one (even IE6 handles comments correctly, though that remains to be confirmed...).
However, most people use a minifier that strips off comments. So it's okay.
Also:
V8 increases performance by compiling JavaScript to native machine code before executing it.
Source
It can prevent functions from being inlined, which affects performance, though this shouldn't really happen.
In some perhaps isolated circumstances, comments definitely somehow bog down the code execution. I am writing a lengthy userscript, using in the latest Firefox on Mac using TamperMonkey, and several days' increasingly frustrated troubleshooting came to an end when I stripped the lengthy comments from the script and suddenly the script execution stopped hanging completely on Facebook. Multiple back-and-forth comparisons running the same exact script on a fresh user account, the only difference being the comments, proved this to be the case.

When is a JavaScript small enough to be worth inlining (for best performance)?

Let's say I'm building a static 10-page website for a client, and there's only a few lines of JavaScript for the whole site (less than 1KB). In this situation I'd guess that it's best (for performance) to put the <1KB of JavaScript code inline between script tags on every page, rather than in an external .js file. The extra bandwidth consumption (when moving between pages) is probably worth it for removing a whole HTTP request.
At the other end of the spectrum, if I had 200KB of JavaScript on the same website, then I'd definitely put this in a separate file to reduce bandwidth when moving between pages on the site.
But I've no idea where the 'cut-off point' should be. If I have 5KB of JS, should I just put this inline in my HTML? What about 10KB? 20KB?
Obviously the 'cut-off point' is going to depend on the situation, e.g. it might be different for mobile sites. But if anyone has any general pointers that would help guide this decision, then I'd like to hear them.
(NB: I'm interested only in performance here, not maintainability etc. I can keep my code in separate files but use some kind of build process or serverside middleware to automatically inline it, so maintainability will not be an issue.)
(Bonus points: please let me know if all the considerations are exactly the same with inline vs. external CSS.)
I'm strictly talking about performance here, more as a thought experiment (and please excuse the lack of experimental rigor).
If you're inlining the javascript, yes, you do save time in that everything is done in one HTTP request. However, you should factor in the time it takes for the server to dynamically generate the webpage (SSL also adds time to this).
Best case
If you can create a build that generates minified javascript and injects it into an html page, combined with gzip compression, you will end up with much lower bandwidth usage. The more text you feed into the compression, the bigger your payoff compared to gzipping each file per request. Ultimately, if you can serve a single static html page with all the javascript/css inline, it will definitely be faster.
Normal case (with dynamic html and shared js libraries)
A small sample of some stackoverflow posts and how they affect my own bandwidth:
304ms 9.67KB
204ms 11.28KB
344ms 17.93KB
290ms 17.19KB
210ms 16.79KB
591ms 37.20KB
229ms 30.55KB
Judging from this, it seems like the overhead incurred (disregarding file size) is around 150ms for each HTTP connection - perhaps in the worst case. Now, the question is: how big does the text (javascript in your case) have to be before its transfer takes 150ms? According to this article (http://www.telecompetitor.com/akamai-average-u-s-broadband-connection-speed-is-now-5-3-mbps/), broadband users average 5.3 mbps (which is .6625 MB/s, or 678.4 KB/s, or .6784 KB/ms). This means that for the average broadband user, roughly 100KB of gzipped+minified javascript takes as long to transfer as the connection overhead itself.
Please adjust the parameters yourself to see what this may mean for your audience. This number, whatever you calculate it to be, is the point at which you may be better off serving it inline (through server-generated response) versus fetching it/caching it externally.
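That break-even arithmetic can be wrapped in a small helper so the parameters are easy to adjust (the 150 ms per-request overhead and 5.3 Mbps bandwidth are this answer's rough estimates, not constants):

```javascript
// How many KB of script take as long to transfer as one request's overhead?
// bandwidthMbps: megabits per second; requestOverheadMs: per-request latency.
function breakEvenKB(requestOverheadMs, bandwidthMbps) {
  const kbPerMs = bandwidthMbps / 8; // Mbps -> KB per ms (1 Mbps = 0.125 KB/ms)
  return requestOverheadMs * kbPerMs;
}

console.log(breakEvenKB(150, 5.3).toFixed(1), 'KB'); // about 99.4, i.e. roughly 100KB
```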
All in all, I don't think this is a performance bottleneck at all. The size of compressed + minified javascript is usually negligible, and it would have to be unmaintainably large before it mattered.
The cutoff point is 0KB -- the answer to "When is a JavaScript small enough to be worth inlining" is "never".
Aside from the maintainability issues, which you say you don't want to consider, it's better for performance anyway. Yes, you have one extra HTTP request the first time you reference the .js file, but after that it's cached so your overall traffic/bandwidth decreases over time.
The same is true for CSS files.
Making the page heavier with inline JS or CSS is more costly than the extra request for the initial fetches of the files (unless you're serving completely static .html where the whole page is cached for a long time)
Short answer: it depends!
Assuming:
You would not inline the script by actually copy-and-pasting it into every HTML page
A separate file would be cached properly
Do the numbers. Come up with a rough estimate for how much extra traffic would be generated by the extra HTML, in proportion to the rest of the traffic, and then decide if you think that's an acceptable compromise.
If it's 1% it's probably not worth worrying about. But if it's 50% then you can halve your traffic by putting the script in a separate file.

What's the impact of not compressing javascript and CSS on client side processing

Ignoring download times, what's the performance impact of making the browser interpret several separate small files as opposed to one big one. In particular, could it make a significant difference to page rendering speed in ie6 and 7?
Browsers typically limit themselves to a certain number of simultaneous requests. This number is dependent on how "server friendly" they are.
How many concurrent AJAX (XmlHttpRequest) requests are allowed in popular browsers?
So, depending on the number of artifacts the browser has to load, it may have to wait for others to complete first. Artifacts include everything the browser has to go back to the server for: images, javascript, css, flash, etc. Even the favicon if you have one.
That aside, rendering speed is normally going to boil down to how the pages are structured. ie. how many calculations you depend on the browser to make (% width vs fixed width).
It has to make more round-trip HTTP requests. It may or may not have significant consequences.
Apart from download times, if you have too many javascript and css files, each request is an extra HTTP call from the client to the server.
If page load time is one of your main criteria, you should definitely think about it.
Also read this doc:
http://developer.yahoo.com/performance/rules.html
I work for a gov't organization with a large scale enterprise intranet and when we had around 25+ JS files and 10+ CSS files loading on our intranet portal we did notice a dramatic lag in page load time in IE6 and 7. Newer browsers have faster routines for loading and executing JavaScript. I used YUI Compressor to minify everything including CSS.
If you include minification along with combining files, then dead code often gets removed (depending on the minifier) and some code can be optimized (see YUI Compressor: What are micro optimizations? and Which javascript minification library produces better results?).
I've asked this question a bunch of times when I first started out with web development.
If you have under 10 javascript files and 10 css files (css is not so important in my opinion), then I don't think there is much use in minifying and compressing. However, if you are dealing with a bunch of javascript files (greater than 10), then YES, it's gonna make a difference.
What you may experience is that even after compressing, minifying and combining your scripts, you may still experience slowness. That's when HTML caching plays a huge role in website optimization, at least that's what I experienced in my web application. Try looking into Memcached and use it to cache your html files. This technique speeds up your web application a WHOLE LOT!!!
I am assuming your question is related to web optimization and high performance websites.
Just my 2 cents.

Will combining javascript files help speed up the website in IE8?

PageSpeed and YSlow suggest combining javascript files to reduce HTTP requests. But this is because (I think) pre-IE8 browsers allowed no more than 2 connections per host.
But nowadays, browsers allow 6 connections per host, which means they can download javascript files in parallel. So let's say we have 1MB of javascript: should we break it down into 6 different files of similar size to obtain maximum download speed? Please let me know.
No, because each HTTP request involves overhead (less if pipelining is used)
The answer to your question is no. However, assuming you are able to serve your content in a completely isolated environment where only IE8 is used (like a company intranet), then the answer to your question becomes: maybe.
Since you aren't designing for IE6-7, I assume you are in an isolated environment (otherwise you are making a poor design decision). In this environment, yes, you might see small benefits from breaking down javascript files, but I recommend against it.
Why? Since you are optimizing for speed, I assume you are putting JavaScript at the bottom of the body tag in your HTML document, in order to prevent JS from blocking download of DOM. This is a fundamental practice to make the page appear to be loading faster. However, by placing the content in the bottom of the body, your question becomes moot. Since the DOM is no longer being blocked by the script tags, whatever speed benefits you could achieve by using parallel downloading would be lost on the user because they see the page load before the browser even requests the JavaScript files.
tl;dr: There is no practical speed advantage to break JS into multiple files for parallel downloading.
Splitting the files won't make too much of a difference really. If you want to see performance gains in terms of download times for your Production environment what I always do is use YUI Compressor (http://developer.yahoo.com/yui/compressor/) to get my JS file size down as small as possible then serve a gzipped version of the js to browsers that support it.
In a development environment you shouldn't be too worried about it though. Just split them logically based on their purpose so they're easier to work on then bring them all together into one file once you're ready and optimize it for production.
Most browsers will cache your JavaScript files anyway, so after the first page load, it won't matter.
But, you should split your JavaScript logically and in a way that would most help you in development, since browsers vary in the number of simultaneous connections they allow.
For speed, you can obfuscate your code (via a minimization routine) and serve it in a way no human would have the patience to read.
