Should I version control the minified versions of my jQuery plugins?

Let's say I write a jQuery plugin and add it to my repository (Mercurial in my case). It's a single file, say jquery.plugin.js. I'm using BitBucket to manage this repository, and one of its features is a Downloads page. So, I add jquery.plugin.js as one of the downloads.
Now I want to make available a minified version of my plugin, but I'm not sure what the best practice is. I know that it should be available on the Downloads page as jquery.plugin.min.js, but should I also version control it each time I update it to reflect the unminified version?
The most obvious problem I see with version controlling the minified version is that I might forget to update it each time I make a change to the unminified version.
So, should I version control the minified file?

No, you should not need to keep generated minified versions under source control.
We have had problems when adding generated files into source control (TFS), because of the way TFS sets local files to be read-only. Tools that generate files as part of the build process then have write access problems (this is probably not a problem with other version control systems).
But importantly, all of the:
- tools
- scripts
- source code
- resources
- third-party libraries
and anything else you need to build, test and deploy your product should be under version control.
You should be able to check out a specific version from source control (by tag or revision number or the equivalent) and recreate the software exactly as it was at that point in time. Even on a 'fresh' machine.
The build should not be dependent on anything which is not under source control.
Scripts: build scripts - whether Ant, make, MSBuild command files, or whatever you are using - and any deployment scripts you may have need to be under version control, not just on the build machine.
Tools: this means the compilers, minifiers, test frameworks - everything you need for your build, test and deployment scripts to work - should be under source control. You need the exact versions of those tools to be available to recreate the build as it was at a point in time.
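For example, a minimal sketch of such a scripted minification step, using Node and the uglify-js package purely as a stand-in for whichever minifier you actually keep under version control (file names taken from the question above):

// build-minify.js: regenerate the minified plugin from the versioned source,
// so the generated .min.js never needs to live in the repository.
var fs = require("fs");
var UglifyJS = require("uglify-js"); // pin the exact version with the project

var source = fs.readFileSync("jquery.plugin.js", "utf8");
var result = UglifyJS.minify(source);
if (result.error) {
    throw result.error; // fail the build rather than publish a stale file
}
fs.writeFileSync("jquery.plugin.min.js", result.code);

Run as part of every build (node build-minify.js), this recreates the minified file on demand instead of trusting anyone to remember to commit it.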
The book 'Continuous Delivery' taught me this lesson - I highly recommend it.
Although I believe this is a great idea - and stick to it as best as possible - there are some areas where I am not 100% sure. For example: the operating system, the Java JDK, and the Continuous Integration tool (we are using Jenkins).
Do you practice Continuous Integration? It's a good way to test that you have all the above under control. If you have to do any manual installation on the Continuous Integration machine before it can build the software, something is probably wrong.

My simple rule of thumb:
Can this be automatically generated during a build process?
If yes, then it is a resource, not a source file. Do not check it in.
If no, then it is a source file. Check it in.
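Since the original question mentions Mercurial, the practical counterpart of this rule is telling the repository to ignore the generated file. A sketch of an .hgignore entry:

syntax: glob
*.min.js

That way a stale, hand-committed jquery.plugin.min.js can never sneak into the history by accident.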

Here are the Sensible Rules for Repositories™ that I use for myself:
If a blob needs to be distributed as part of the source package in order to build it, use it, or test it from within the source tree, it should be under version control.
If an asset can be regenerated on demand from versioned sources, do that instead. If you can (GNU) make it, (Ruby) rake it, or just plain fake it, don't commit it to your repository.
You can split the difference with versioned symlinks, maintenance scripts, submodules, externals definitions, and so forth, but the results are generally unsatisfactory and error prone. Use them when you have to, and avoid them when you can.
This is definitely a situation where your mileage may vary, but the three Sensible Rules work well for me.

Related

Should I include compressed files in my Subversion system?

I compress all my JavaScript files and CSS files with YUI Compressor
yuicompressor -o '.js$:.min.js' *.js -v
Should I keep the minified files in my Subversion system or not?
I know both are possible, but I'm looking for the "best practice" and the pros and cons of each.
My opinion is that it is better to make minification part of the build process.
Builds should not be kept in the version tracking system for the source code.
For that reason, minified JavaScript files shouldn't be there either.
I think it would be best to leave out any minified files.
The reason for this is that if a junior developer comes in to work on your site and sees the minified files but not the originals, they might reformat your minified file so it's more readable, make changes to it directly, and then, when you go back to edit the correct file and compress it, all their work is gone.
The worst case scenario when you don't include them is that the junior developer asks why their changes haven't shown up on the site, and you can explain the correct tools you are currently using.
Okay, this answer will depend purely on your project. If your project's build engine is capable of producing the minified files, then you may not need to do this.
But if the minified code is the exact code that has to be deployed to your web server, and all your HTML files depend on it, then you have to add it.
In a nutshell: when in doubt, keep the file!
SVN (or Git, or any versioning system) is there for keeping track of the code, not of the releases, thus (IMO) it does not make sense to keep a version of the releases.
It is better to let a dedicated tool handle that (like Artifactory).
Minified versions should not be in the source control system if you are able to generate them from the actual files in a controlled manner. This implies that you have a build system and tools in place that are also controlled. For example, let's say that someone changes your YUI Compressor version tomorrow. If you don't have the old version somewhere, your build could break.
With the same logic, it would make sense to keep both the minified and unminified versions of any 3rd party libraries.
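To illustrate the version-pinning point: in a Node-based toolchain (used here purely as a stand-in for the Java YUI Compressor), pinning an exact minifier version in package.json keeps the tool reproducible. The version number is illustrative:

{
  "devDependencies": {
    "uglify-js": "3.17.4"
  }
}

A fresh checkout plus npm install then restores the same minifier the original build used.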

Difference between d3.v3.js and the entire D3 repository

For d3, or any JavaScript package in general, what is the difference between the JS file which has the entire source code (say, d3.v3.js) and the GitHub repo for it (in the case of d3, it is https://github.com/mbostock/d3)?
What does the github repo contain that the entire source code does not?
I read in Scott Murray's tutorials that the D3 repository contains "all of the component source code". Can someone explain what is meant by 'component'?
Let's look at the Whatever library. It does whatever. The repo for it is located at https://github.com/someone/whatever.js (this is not a real repo).
The repo itself usually contains a variety of info, including documentation, style guides, and code organization. Whatever.js is actually made up of three files: lib/whatever.js, lib/whatever-tools.js, and lib/whatever-xml.js. These get concatenated for actual use, but for development of whatever.js itself it's easier to work with separate files.
Having every commit land on one single concatenated file would be absolutely horrible to deal with; pull requests would be even worse.
The distributed version, aka whatever.js and whatever.min.js, is a version of the repo code after it's been processed however it needs to be. In the case of most libraries the files just get concatenated, but for some libraries fancier things happen. The .min.js version is the same file after being run through a minification tool, these days usually UglifyJS2.
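As a minimal sketch of that concatenation step (in Node, using the hypothetical file names from the example above):

// concat.js: build the distributable whatever.js from its source modules.
var fs = require("fs");

var parts = [
    "lib/whatever.js",
    "lib/whatever-tools.js",
    "lib/whatever-xml.js"
];

var bundle = parts.map(function (path) {
    return fs.readFileSync(path, "utf8");
}).join("\n");

fs.writeFileSync("dist/whatever.js", bundle); // then minify to whatever.min.js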
Some libraries will not even have all of the code in the main generated file, usually for usage reasons. For example, Angular.js doesn't have the ngRoute module in angular(.min).js; you need to include angular-route(.min).js too. This is for sanity reasons, because quite a lot of Angular uses don't need or want the routing system, and it's a fairly big add-on.
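For instance, an app that does want routing has to pull in the extra file and declare the module dependency explicitly. A minimal sketch (paths are illustrative):

<script src="angular.min.js"></script>
<script src="angular-route.min.js"></script>
<script>
// ngRoute is packaged in angular-route.js, not in the core angular.js build
angular.module("myApp", ["ngRoute"]);
</script>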
It is the same as with any project that has a development environment and a deployment environment: the GitHub repo is the development environment for D3, and d3.v3.js is the compiled library that you need to use in your product.
Zeke Sonxx's answer is excellent. I'll just add that in the case of JavaScript, because the source code can be run directly, there might be less need for a GitHub repo. But even in the simplest cases, you get to add additional files when needed, keep track of problems and plans in the GitHub issue system, etc. Example: the gexf-parser repo only has one main source file, src/parser.js, but there is a collection of files for testing as well, and a few other useful files. JavaScript can also be "compiled", but it's not compilation in the sense of some languages (C, Java, Clojure, etc.). The application distributed will often be built from many different source files in the repo.

Some software for IIS/.NET similar to JAWR (consolidate and minify Javascript)

Is there any software package/library that will produce a consolidated, minified JavaScript file for a production environment, while leaving the original files/references as-is in a development environment (so developers can work independently)?
JAWR does this (and more) for a Java/Groovy environment, but I haven't seen anything like it for the Microsoft .NET/IIS7 stack. Any pointers would be helpful. Thanks!
If you're looking for a good way to automatically compress and combine css & js files here are some options:
Xpedite: not bad, but it has one big disadvantage: you can't combine files (JS/CSS) that are included in user controls with the files in your page.
Shinkansen: I don't have a lot of experience with it, but I know it has a lot of configuration options.
The ClientDependency Framework was originally written for Umbraco. Now there is a package available via NuGet for both WebForms and MVC. It works really well and this is my favorite.
We use YUICompressor to minify our Javascript (and CSS) and it works well.
However, we've had to write our own HttpHandler to decide whether to minify or not on the fly, depending on a config setting (but it could equally be on whether it was a DEBUG or RELEASE build).
In fact, we cache the file once minified (or not), so we don't have to do the same process on every request.

Are there ways to improve javascript (Dojo) loading?

I'm starting to use the Dojo toolkit and it has rich features like Dijits and themes which are useful but take forever to load.
I've got a good internet connection but those with slower connections would experience rather slow page loads.
This is also a question about heavy vs light frameworks. If you make heavy use of widgets, what are some techniques to keep page load times down?
Dojo has a build system that will drastically improve load times. Take a look at one of the dojo books or the online docs & look at layered builds. In order to do a build, you need to have the "source" (or "full") version of dojo, which has the build tool included -- you can tell if you have this by the presence of the 'util' directory (which is at the same level as dojo, dijit, dojox). If you don't have the full version, go back to the dojo site & delve down into the download area -- it's not completely obvious perhaps.
Anyway, if you have the right version, you basically just need to make a "build profile" file (or files ... aka a layered build), which is essentially the list of dojo.requires that you would normally have in your HTML. The build system will jam all the JavaScript code for all the dijits, dojox stuff, etc. together into a "layered build" (a file) and it will run ShrinkSafe on it, which minifies the code (removes whitespace, shortens names, etc.). It will also do some of this to the CSS files. Aside from making things much smaller, you get just a single file for all the JS code (or a few files if you do more than one layer, but most of the time a single layer suffices).
This will improve your load times at least ten-fold, if not more. It might take you a bit of reading to get down the format of the profiles and the build command itself, but it's not too hard really. Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast.
Here's a link to the older build docs, which mostly still hold true:
http://www.dojotoolkit.org/book/dojo-book-0-9/part-4-meta-dojo/package-system-and-custom-builds
Here are the updated docs (though perhaps a little incomplete):
docs.dojocampus.org/build/index
It reads as harder than it really is ... use the layer.profile file in the profiles directory as a starting point. Just put in a couple of things, then do a build and see if you get the release directory created (it should be at the same level as dojo, dijit, etc.); it will have the entire dojo system in it (all minified) as well as your built (layered) stuff. Much faster.
Dylan Tynan
It's not that big (28k gzipped). Nevertheless you can use Google's hosted version of Dojo. Many of your users will already have it cached.
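For example, a single script tag pointing at Google's copy (the version shown is illustrative; use whichever release you target):

<script type="text/javascript" src="https://ajax.googleapis.com/ajax/libs/dojo/1.10.4/dojo/dojo.js"></script>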
Once you create a build file, name it something obvious like "mystuff"
and then you can dojo.require the "mystuff" file (which will be in the
new build directory that is created when you build, then underneath
that & hanging out with the dojo.js file in the dojo directory).
Requiring in your built file will satisfy all the dojo.require's you
normally do (assuming you have them all listed in the profile to
build) and things will load very fast
Slight correction -- you don't dojo.require that file, you reference it in an ordinary script tag.
<script type="text/javascript" src="js/dojo/dojo/dojo.js" ></script>
<script type="text/javascript" src="js/dojo/mystuff/mystuff.js"></script>
For the directory layout I put the built file "mystuff.js" into the same directory as my package. So at the same level as dojo, dojox, and dijit, I would have a directory named "mystuff", and within that I have MyClass1.js and MyClass2.js. Then a fragment from the profile.js file for the build looks like:
layers: [
    {
        name: "../mystuff/mystuff.js",
        dependencies: [
            "mystuff.MyClass1",
            "mystuff.MyClass2"
        ]
    },
    ...
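For reference, a complete minimal profile in the old Dojo 1.x build format might look like the following sketch; the prefixes section tells the build where each namespace lives, and the paths and names here are illustrative:

dependencies = {
    layers: [
        {
            name: "../mystuff/mystuff.js",
            dependencies: [
                "mystuff.MyClass1",
                "mystuff.MyClass2"
            ]
        }
    ],
    prefixes: [
        [ "dijit", "../dijit" ],
        [ "dojox", "../dojox" ],
        [ "mystuff", "../mystuff" ]
    ]
};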
I know this is an old thread. But I'm posting this answer for the benefit of other users like me who may read this.
If you are serving from Apache, there are other factors too. These settings can make a huge difference: MaxClients and MaxRequestsPerChild. You will need to tweak them based on the resources available to your server/machine serving the files.
Changing this worked really well for me.
Using the Google CDN is also a good option, although it may not be practical in some situations.
A custom build also has an effect, as pointed out in other answers.

Script Minification and Continuous Integration with MSBuild

On a recent project I have been working on in C#/ASP.NET I have some fairly complicated JavaScript files and some nifty Style Sheets. As these script resources grow in size it is advisable to minify the resources and keep your web pages as light as possible, of course. I know many developers who hand-feed their JavaScript resources into compressors after debugging and then deploy their applications.
When it comes to source control and automated builds in the satisfying world of continuous integration (thank you, CruiseControl.NET), hand compression simply will not do. The only way to maintain source control and offer compressed resources is to keep the JS/CSS sources and their minified brethren in a separate directory structure, and then register only one set of resources or the other in code-behind. However, if a developer makes a change to the JS/CSS source and then fails to re-compress it and check in both versions, your code line is now out of sync. Not to mention inelegant.
I am thinking that it would be nice to write a custom executable (if one does not exist yet) for the CC.NET task block which would find and compress all JavaScript and CSS resources in the target directory after the build action but before the ASP.NET publish to the target. This way, developers would only work on the JS and CSS sources, and users would only get the minified resources.
Is there an application that already performs this task and if not, what kind of resource(s) should I look to install on the build server to have CC.NET execute?
(The closest question I could find here to this one required NAnt, which is not an option in my case.)
EDIT:
Dave Ward now has a great article on how to automatically minify in Visual Studio at his site.
The MSBuildCommunityTasks Project has a few MSBuild tasks that may do what you are looking for including Merge and JSCompress.
You could add these into your MSBuild project in the AfterBuild target to allow the project to perform this action every time the project is built and nothing would ever be out of sync. Your web application could then reference the compacted version for run but the developers would edit the full versions.
Nothing else would be needed on the server except the MSBuild community tasks assembly. You can put this assembly in your own source tree and reference from there and your CI build should get that assembly and everything it needs when it builds.
Another JS (and CSS!) compression library for MSBuild:
http://www.codeplex.com/YUICompressor
This is a .NET port of the java-based Yahoo! compressor.
Not a perfect answer, but if you're using MVC4, this is built in as a new feature. When running a Debug configuration, it outputs individual files with comments and such, but when you switch to Release, it will automatically bundle, minify, and update in-page references to the minified files. You can set up separate bundles for, say, jQuery and your own JS. This works with both CSS and JS files.
http://www.asp.net/mvc/tutorials/mvc-4/bundling-and-minification
If MVC4 doesn't work for you, you can also find packages on NuGet that can help, such as these:
https://www.nuget.org/packages?q=minify
