Why do we need to use uncompressed files for development? - javascript

I have been wondering why we need uncompressed files for development and minified ones for production. What are the benefits of using uncompressed files over minified ones?
Do they give you better error messages, or is it just that if we want to look something up we can read through the uncompressed code?
This might be a dumb question to some of you, but I have never been in the habit of reading through the code of well-known big libraries, and if I am not wrong, very few people do.

The main reason for this is usability. When a JS file is minified and you get an error and try to find where it occurred, what would you find? Just a minified string like
(function(_){var window=this,document=this.document;var ba,ea,fa,ha,la,na,oa,pa,qa,sa,ra,ta,wa,xa,za,Aa,Da,Ea,Fa,Ga,Ia;ba=function(a){return function(){return _.aa[a].apply(this,arguments)}};ea=function(a){return ca(_.p.top,a)||ca(da(),a)};_.aa=[];fa="function"==typeof Object.create?Object.create:function(a){var b=function(){};...
and so on. Is that readable for you? I don't think so. It's not readable at all.
For a much better understanding of the code, you need to uncompress it. That restores the whitespace and formats the code in a much more readable way, so it would look like:
(function(){
var b = j(),
c = k (b);
})();
It allows you to move from one piece of code to another, explore the code, and search for your error.
Also, for production, you need to not only minify your code but compress it as well, so it would be nice to use the UglifyJS library for that.
It removes unnecessary spaces and renames variables, objects, and functions to much shorter names like a or b12. This reduces file size and increases download speed.
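For illustration, here is a hand-written sketch of what that renaming does (the names and the "minified" output below are made up for the example, not produced by any real minifier):

```javascript
// The readable version: descriptive names make errors easy to trace.
function computeTotal(prices) {
  return prices.reduce(function (sum, price) {
    return sum + price;
  }, 0);
}

// Roughly what a minifier might emit for the same function:
// identical behavior, one-letter names, no whitespace.
function a(b){return b.reduce(function(c,d){return c+d},0)}

console.log(computeTotal([1, 2, 3])); // 6
console.log(a([1, 2, 3]));            // 6 — same result, far harder to debug
```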
Hope it helps you.

There may be several reasons why one might prefer uncompressed [unminified] files during development. Some reasons I can think of:
Reduce time to view changes while coding by skipping the minification step. (If you use minification as a part of your build step during development, you may have to wait for the minification to complete each time you make a change to see it in the browser.)
If a code mangler is being used, variables are renamed, so it is not intuitively apparent what they are actually called in the codebase
If you are using webpack or some similar module loader, it may include extra code for hot module reloading and dependency injection/error tracking when not minified (which you wouldn't want in production)
It makes debugging easier, more readable, and more intuitive.
Minification and code mangling are done MAINLY to make the delivery of those assets more efficient from the server to an end user (client). This ensures that the client can download the script fast and also reduces the cost for the website/business to deliver that script to the user. So this can be considered an extra unnecessary step when running the code during development. (The assets are already available locally so the extra payload cost is negligible)
TLDR: Minification and code mangling can be a time consuming process (especially when we are generating map files etc) which can delay the time between changes and the time those changes are visible on the local instance. It can also actually hamper development by making it harder/less intuitive to debug

Related

Do comments in NodeJs affect performance?

I am in the process of having to refactor an entire NodeJs project which is quite large. One of the biggest problems I'm facing is that my predecessor included literally no documentation.
I'm used to clientside js, where comments can be stripped via uglify (or similar) before deploying to a production environment.
Is there something similar for node, or how do people handle this? Is the performance impact of comments negligible?
Comments do not affect code performance in any significant way, neither on the client nor on the server.
What happens on the client is that if you're including JavaScript with comments, those lines are still downloaded by the browser, with no extra benefit for the user.
In client-side code, comments add to the file size that needs to be sent to the browser, which is why tools are used to remove them. Comments in server-side code, on the other hand, don't make much difference.
Comments do not affect performance in any significant manner. As I understand it, the JavaScript program is loaded into memory, and in that process the comments are ignored and not loaded. This means that only during the initial loading of your application could you experience an extremely minor increase in loading time from having a lot of comments, and even that is negligible.
Using uglify would not be necessary, since users cannot read your Node.js code, and it would make the newly refactored code less readable for you (which would be counterproductive).
As Alberto and Konst point out, uglify can be used to reduce the file size downloaded by the client.
Note: I do not know if I am exactly correct; please do correct me if I am wrong.

Javascript compiler / compactor (Windows GUI)

I need to compile / compact an "ever-growing" JavaScript app (roughly 40k LoC, currently).
I don't have the time to integrate the "appropriate" tool for what I'm trying to do, something like require.js, as it requires modification of said code.
I was planning to just manually pile everything together to do a compact/compile routine, but at this point, I think that will make it quite difficult to maintain. (Ultimately, I'd like to keep the file structure broken down into its current hierarchy.)
That being said, I've been using a tool called WinLess for .less compilation, and I really like its functionality:
add a folder set
note the files you want to include
watch for changes in those files and recompile when that happens
I'm wondering if an analogous tool exists for js?
The Closure Compiler does a top notch job, and I hear that Uglifyjs is good too. I might end up using Closure's REST api to manually manage file sets, but I'm wondering if anything exists that abstracts the process a little bit - into a standalone windows gui?
I've been poking around for a couple weeks, but have not come up with anything solid.
Cheers -
Have a look at http://gruntjs.com/ - "The JavaScript Task Runner". It becomes dead easy to script tasks, which would be along the lines of
a) run JSHint (for spotting errors in your code)
b) Run any QUnit tests
c) and finally call UglifyJS to compress your source into a single, deployable file (I include a version number in mine to avoid caching issues).

Good Practice with External JavaScript files

I'm new to JavaScript.
How should I split my functions across external scripts? What is considered good practice? Should all my functions be crammed into one external .js file, or should I group related functions together?
I would guess more files mean more HTTP requests to obtain the scripts, which could hurt performance. However, more files keep things organized: for example, onload.js initializes things on load, data.js retrieves data from the server, ui.js holds the UI handlers...
What's the pros' advice on this?
Thanks!
As Pointy mentioned, you should try a tool. Try Grunt or Brunch; both are meant to help you with the build process, and you can configure them to combine all your files when you are ready for production (and also minify, etc.), while keeping separate files while you are developing.
When releasing a product, you generally want as little HTTP requests as possible to load a page (Hence image sprites, for example)
So I'd suggest concatenating your .js's for release, but keeping them separated in a way that works well for you, during development.
Keep in mind that if you "use strict", concatenating scripts might be a source of errors.
(more info on js strict mode here)
It depends on the size, count of your scripts and how many of them you use at any time.
Many performance best practices claim (and there's good logic in this) that it's good to inline your JavaScript if it's small enough. This leads to a lower count of HTTP requests, but it also prevents the browser from caching the JavaScript, so you should be very careful. That's why there are practices of inlining even images (using base64 encoding) in some special cases (for example, look at Bing.com: all their JavaScript is inline).
If you have a lot of JavaScript files and you're using just a small part of them at any time (not only by count but by size), you can load them asynchronously (for example using require.js). But this will require a lot of changes in your application's design if you haven't considered it from the beginning (and it will also increase your application's complexity).
There are even practices of caching your CSS/JavaScript in localStorage. For further information you can read the Web Performance Daybook.
So let's do a short recap.
If you have your JavaScript inline, this will reduce the first load of the page. Inline JavaScript won't be cached by the browser, so every subsequent load of your page will be slower than if you had used external files.
If you are using different external files, make sure that you're using all of them, or at least a big part of them, because otherwise you can have redundant HTTP requests for files that are loaded unnecessarily. This leads to better organization of your code but probably greater load time (still, don't forget the browser cache, which will help you).
Putting everything in a single file will reduce your HTTP requests, but you'll have one big file which will block your page loading (if you're loading the JS file synchronously) until the file has been loaded completely. In that case I can recommend putting this big file at the end of the body.
For performance tracking you can use tools like YSlow.
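As a minimal sketch of asynchronous loading without a loader library (the URL below is hypothetical), you can inject a script element yourself:

```javascript
// Hedged sketch: load a script asynchronously by injecting a <script> tag,
// so it doesn't block page rendering. Runs in a browser environment.
function loadScript(src, onLoad) {
  var s = document.createElement("script");
  s.src = src;
  s.async = true;       // don't block parsing while the script downloads
  s.onload = onLoad;    // fires once the script has executed
  document.head.appendChild(s);
}

// Example usage (hypothetical path):
// loadScript("/js/analytics.js", function () { console.log("analytics ready"); });
```

Loaders like require.js build on this same mechanism, adding dependency tracking on top.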
When I think about good practice, I think of MVC patterns. One might argue whether this is the way to go in development, but many people use it to structure what they want to achieve. Usually it is not advisable to use MVC at all if the project is just too small, just like creating a full C++ Windows app when you just needed a simple C program with a for loop.
In any case, MVC or MV* in JavaScript will help you structure your code to the extent that all the actions are part of the controllers, while object properties are stored in the model. The views are then just for presentation purposes and are rendered for the user via special requests or rendering engines. When I started using MV*, I started with BackboneJS and the guide "Developing Backbone.js Applications" by Addy Osmani. Of course there are a multitude of other frameworks that you can use to structure your code. They can be found on the TodoMVC website.
What you can also do is derive your own structure from their apps and then use the directory structure for your development (but without the MV* framework).
I do not share your concern that using such a structure leads to more files, and thus more HTTP requests. Of course this is true during development, BUT remember, the user should get a performance-enhanced (i.e., compiled) and minified version as a script. Therefore, even if you are developing in such an organized way, it makes more sense to minify/uglify and compile your scripts with Google's Closure Compiler.

RequireJS and legacy application

I have a legacy application and I have refactored parts of the application into separate backbone.marionette applications. I do not have the time or the budget to refactor the whole thing and I want my code to be easier to manage which made me think of requirejs.
Most of the files are minified and munged together.
Can I use requirejs for this type of hybrid solution where I can work on separate backbone modules and still access the existing javascript?
As someone who just recently started using Require.js on a legacy, Backbone-using codebase, I feel your pain :-) I used a combination of approaches, which I'll list here.
Let's say you have fileA.js and fileB.js, and you want to convert fileB.js to use Require, without changing fileA.js:
Abuse the global space
Require doesn't force you to import every variable through it; even in a Require-ified file, you can still access global variables the same way you would with non-Require-ified code. This means that if fileA creates all of its variables in the global/window namespace (which is very likely if you weren't using Require before), fileB can access them whether or not fileA uses Require.
This wound up being my solution for most of my legacy files; I just left them as is, and put all the new Require-ified stuff below them. That way every global they create is ready and waiting when the Require-ified files need them.
Now, this is great if fileB depends on fileA, but what if it's the reverse? Well, Require also doesn't prevent you from making new global variables, which means that fileB can share anything it wants with fileA, as long as it is willing to put it in the global space.
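A minimal sketch of that global-space handoff (all names here are hypothetical):

```javascript
// fileA.js — legacy script, loaded first, attaches everything it shares
// to the global object, just as pre-Require code typically does.
globalThis.Legacy = {
  formatPrice: function (cents) {
    return "$" + (cents / 100).toFixed(2);
  }
};

// fileB.js — even inside a Require-ified module, globals created by
// earlier scripts remain reachable without importing them through Require.
function renderPrice(cents) {
  return globalThis.Legacy.formatPrice(cents);
}

console.log(renderPrice(1999)); // "$19.99"
```

In a browser you would typically write `window` instead of `globalThis`; the point is the same: the global object is the bridge between the two worlds.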
Duplicate code
Don't get upset; I know how important "DRY" coding practices are. However, for just a few files what I wound up doing was making Require-ified duplicates. This wound up being necessary because I'm using a Handlebars plug-in for Require to do my template compiling, so if I wanted any file to use Handlebars I needed it to be Require-ified.
To combat the normal un-DRY issues, I added comments to the old files effectively saying "don't add anything to this file, the Require-ified version is the 'real' version". My plan is to slowly convert more of the site to Require over time, until I can finally eliminate the original, obsolete file. We have a small shop, so it works for us, but at a larger company this might not fly.
Refactoring
I know, you said you wanted to avoid this, but sometimes a little refactoring can give you a lot of bang for your buck. I personally barely refactored anything at all, but there were just a couple of places where a small tweak greatly simplified matters.
Overall I see refactoring as something you do after you switch to Require (to slowly over time bring your non-Require-ified code "in to the fold").
Shims
Chchrist is correct in saying that shims are a good way to solve the "half-way to Require" issues. However, I personally didn't use them at all, so I can't really say much about them except "look into them, they'll probably be helpful".
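For reference, a shim config might look roughly like this, assuming a hypothetical legacy.js that creates a window.Legacy global and silently depends on jQuery being loaded first:

```javascript
// Hedged sketch of RequireJS shim configuration for a non-AMD legacy script.
// The paths and the "Legacy" global are assumptions for illustration.
requirejs.config({
  paths: {
    jquery: "vendor/jquery",
    legacy: "js/legacy"
  },
  shim: {
    legacy: {
      deps: ["jquery"],   // the load order the legacy file silently relies on
      exports: "Legacy"   // the global it creates, exposed as the module value
    }
  }
});

// New Require-ified code can then depend on the legacy script like any module:
require(["legacy"], function (Legacy) {
  Legacy.init();
});
```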

What is the proper way to structure/organize javascript for a large application?

Let's say you are building a large application, and you expect to have tons of javascript on the site. Even if you separated the javascript into 1 file per page where javascript is used, you'd still have about 100 javascript files.
What is the best way to keep the file system organized, include these files on the page, and keep the code structure itself organized as well? All the while, having the option to keep things minified for production is important too.
Personally I prefer the module pattern for structuring the code, and I think this article gives a great introduction:
http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth
It keeps my global scope clean, enables namespacing, and provides a nice way to separate public and private methods and fields.
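A minimal sketch of the module pattern that article describes: an IIFE closes over private state and returns only the public API.

```javascript
// Module pattern: `count` is private; only the returned object is public.
var counter = (function () {
  var count = 0; // invisible outside the closure

  return {
    increment: function () {
      count += 1;
      return count;
    },
    reset: function () {
      count = 0;
    }
  };
}());

console.log(counter.increment()); // 1
console.log(counter.increment()); // 2
console.log(counter.count);       // undefined — the private state stays hidden
```

Assigning the result to a single name like `counter` (or nesting modules under one app-wide object) is what keeps the global scope clean.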
I'd structure the code in separate files, and seek to keep coupling low and cohesion high. Although multiple files are bad for client performance (you should minimize the number of requests), you'll easily get the best from both worlds using a compression utility.
I have some experience with YUI Compressor (for which there is a Maven plugin: http://alchim.sourceforge.net/yuicompressor-maven-plugin/compress-mojo.html - I haven't used it myself). If you need to order the JavaScript files in a certain way (this also applies to CSS), you could make a shell script that concatenates the files in a given order in advance (tip: YUI Compressor defaults to STDIN).
Besides, whichever way you decide to minify your files will probably do just fine. Your focus should instead be on how you structure your code, so that it performs well on the client (which should be the highest priority in order to ensure a good UX) and is maintainable in a way that works for you.
You mention namespaces, which is a good idea, but I'd be careful not to adopt too many concepts from the traditional object-oriented domain, and instead learn to utilize the many excellent features of the JavaScript language :)
If the overall size is not too big, I'd use a script to compile all files into a single file and minify it. Then I'd use aggressive caching plus the mtime in the URL, so people load that file only once but get the new one as soon as it is available.
Use:
jslint - to keep checks on code quality.
require.js - to get just the code you need for each page (or equivalent).
Organise your js as you would any other software project, break it up by roles, responsibilities and separating concerns.
