RequireJS and legacy application - javascript

I have a legacy application and I have refactored parts of the application into separate backbone.marionette applications. I do not have the time or the budget to refactor the whole thing and I want my code to be easier to manage which made me think of requirejs.
Most of the files are minified and munged together.
Can I use requirejs for this type of hybrid solution where I can work on separate backbone modules and still access the existing javascript?

As someone who just recently started using Require.js on a legacy, Backbone-using codebase, I feel your pain :-) I used a combination of approaches, which I'll list here.
Let's say you have fileA.js and fileB.js, and you want to convert fileB.js to use Require, without changing fileA.js:
Abuse the global space
Require doesn't force you to import every variable through it; even in a Require-ified file, you can still access global variables the same way you would with non-Require-ified code. This means that if fileA creates all of its variables in the global/window namespace (which is very likely if you weren't using Require before), fileB can access them whether or not fileA uses Require.
This wound up being my solution for most of my legacy files; I just left them as is, and put all the new Require-ified stuff below them. That way every global they create is ready and waiting when the Require-ified files need them.
Now, this is great if fileB depends on fileA, but what if it's the reverse? Well, Require also doesn't prevent you from making new global variables, which means that fileB can share anything it wants with fileA, as long as it is willing to put it in the global space.
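To make that concrete, here is a minimal sketch (the file and variable names are made up for illustration):

// fileA.js (legacy, not Require-ified) - creates globals as it always did
window.legacyApp = {
  formatDate: function (d) { return d.toISOString(); }
};

// fileB.js (Require-ified) - declares its AMD dependencies, but can still
// read globals created by fileA and publish new ones for fileA's benefit
define(['backbone'], function (Backbone) {
  var stamp = window.legacyApp.formatDate(new Date());

  // sharing in the other direction: put something in the global space
  window.newStuff = { startedAt: stamp };

  return Backbone.View.extend({});
});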
Duplicate code
Don't get upset; I know how important "DRY" coding practices are. However, for just a few files what I wound up doing was making Require-ified duplicates. This wound up being necessary because I'm using a Handlebars plug-in for Require to do my template compiling, so if I wanted any file to use Handlebars I needed it to be Require-ified.
To combat the normal un-DRY issues, I added comments to the old files effectively saying "don't add anything to this file, the Require-ified version is the 'real' version". My plan is to slowly convert more of the site to Require over time, until I can finally eliminate the original, obsolete file. We have a small shop, so it works for us, but at a larger company this might not fly.
Refactoring
I know, you said you wanted to avoid this, but sometimes a little refactoring can give you a lot of bang for your buck. I personally barely refactored anything at all, but there were just a couple of places where a small tweak greatly simplified matters.
Overall I see refactoring as something you do after you switch to Require (to slowly, over time, bring your non-Require-ified code into the fold).
Shims
Chchrist is correct in saying that shims are a good way to solve the "half-way to Require" issues. However, I personally didn't use them at all, so I can't really say much about them except "look into them, they'll probably be helpful".
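For completeness, a typical shim configuration looks roughly like this (the paths are placeholders); it tells RequireJS which global a non-AMD script creates and what it depends on:

requirejs.config({
  paths: {
    jquery: 'libs/jquery',
    underscore: 'libs/underscore',
    backbone: 'libs/backbone'
  },
  shim: {
    // non-AMD scripts: declare their dependencies and the global they export
    underscore: { exports: '_' },
    backbone: {
      deps: ['underscore', 'jquery'],
      exports: 'Backbone'
    }
  }
});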

Related

Why do we need to use uncompressed files for development?

I have been wondering why we need uncompressed files for development and minified files for production. I mean, what are the benefits of using uncompressed files over minified ones?
Do they give you better error messages, or is it just that if we want to look something up we can go through the code of the uncompressed files?
This might be a dumb question to some of you, but I never had the habit of going through the code of well-known big libraries, and if I am not wrong, very few people do.
The main reason for this is usability. When a JS file is minified and you get an error and try to find the place where it occurred, what do you find? Just a minified string like
(function(_){var window=this,document=this.document;var ba,ea,fa,ha,la,na,oa,pa,qa,sa,ra,ta,wa,xa,za,Aa,Da,Ea,Fa,Ga,Ia;ba=function(a){return function(){return _.aa[a].apply(this,arguments)}};ea=function(a){return ca(_.p.top,a)||ca(da(),a)};_.aa=[];fa="function"==typeof Object.create?Object.create:function(a){var b=function(){};...
and so on. Is it readable for you? I don't think so. It's not readable at all.
For a much better understanding of the code, you need to uncompress it. That adds back some whitespace and formats the code in a much more readable way, so it would look like:
(function () {
    var b = j(),
        c = k(b);
})();
It allows you to move from one piece of code to another, explore the code, and search for your error.
Also, for production you should not only minify your code but compress it as well, so it would be nice to use a library like UglifyJS for that.
It removes unnecessary spaces and renames variables, objects, and functions to much shorter names like a or b12, which makes the file smaller and faster to download.
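As a rough sketch of that step (assuming the uglify-js npm package with its 3.x API; the file names are placeholders):

// minify.js - run with Node after `npm install uglify-js`
var UglifyJS = require('uglify-js');
var fs = require('fs');

var source = fs.readFileSync('app.js', 'utf8');
var result = UglifyJS.minify(source, {
  compress: true, // drop dead code, collapse expressions
  mangle: true    // rename variables/functions to short names like a or b12
});

if (result.error) throw result.error;
fs.writeFileSync('app.min.js', result.code);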
Hope it helps you.
There may be several reasons why one might prefer uncompressed [unminified] files during development. Some reasons I can think of:
Reduce time to view changes while coding by skipping the minification step. (If you use minification as part of your build step during development, you may have to wait for it to complete each time you make a change before you can see it in the browser; a build-config sketch for skipping this in development follows at the end of this answer.)
If a code mangler is being used, variables may be renamed, so it is not intuitively apparent what they are actually called in the codebase.
If you are using webpack or some similar module loader, the unminified build may include extra code for hot module reloading and dependency injection/error tracking (which you wouldn't want in production).
It makes debugging easier and more intuitive.
Minification and code mangling are done MAINLY to make the delivery of those assets more efficient from the server to an end user (client). This ensures that the client can download the script fast and also reduces the cost for the website/business to deliver that script to the user. So this can be considered an extra unnecessary step when running the code during development. (The assets are already available locally so the extra payload cost is negligible)
TLDR: Minification and code mangling can be a time-consuming process (especially when we are generating map files, etc.), which can delay the time between making changes and seeing those changes on the local instance. It can also hamper development by making it harder and less intuitive to debug.
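To illustrate the point about skipping minification during development, here is a sketch of a build config that only minifies for production (assuming webpack 4 or later; the entry path is a placeholder):

// webpack.config.js - minify only for production builds
module.exports = function (env) {
  var production = env && env.production;
  return {
    entry: './src/index.js',
    mode: production ? 'production' : 'development',       // 'production' enables minification
    devtool: production ? 'source-map' : 'eval-source-map' // readable stack traces in dev
  };
};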

What is the proper way to structure/organize javascript for a large application?

Let's say you are building a large application, and you expect to have tons of javascript on the site. Even if you separated the javascript into 1 file per page where javascript is used, you'd still have about 100 javascript files.
What is the best way to keep the file system organized, include these files on the page, and keep the code structure itself organized as well? All the while, having the option to keep things minified for production is important too.
Personally I prefer the module pattern for structuring the code, and I think this article gives a great introduction:
http://www.adequatelygood.com/2010/3/JavaScript-Module-Pattern-In-Depth
It keeps my global scope clean, enables namespacing and provides a nice way to separate public and private methods and fields.
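For illustration, a minimal module in that style (the namespace and its members are made-up examples):

var MYAPP = MYAPP || {};

MYAPP.cart = (function () {
  // private state and helpers, invisible outside the closure
  var items = [];

  function total() {
    return items.reduce(function (sum, item) { return sum + item.price; }, 0);
  }

  // public API
  return {
    add: function (item) { items.push(item); },
    getTotal: total
  };
}());

MYAPP.cart.add({ name: 'book', price: 10 });
MYAPP.cart.getTotal(); // 10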
I'd structure the code in separate files, and seek to keep coupling low and cohesion high. Although multiple files are bad for client performance (you should minimize the number of requests), you'll easily get the best from both worlds using a compression utility.
I've some experience with YUI Compressor (for which there is a Maven plugin:
http://alchim.sourceforge.net/yuicompressor-maven-plugin/compress-mojo.html - I haven't used this myself). If you need to order the javascript files in a certain way (also applies for CSS), you could make a shell script, concatenating the files in a given order in advance (Tip: YUI Compressor defaults to STDIN).
Besides, either way you decide to minify your files will probably do just fine. Your focus should instead be on how you structure your code, so that it performs well on the client (should be highest priority in order to satisfy a good UX), and is maintainable in a way that works for you.
You mention namespaces, which is a good idea, but I'd be careful to adopt too many concepts from the traditional object oriented domain, and instead learn to utilize many of the excellent features of the javascript language :)
If the overall size is not too big, I'd use a script to compile all files into a single file and minify that file. Then I'd use aggressive caching plus the mtime in the URL, so people load that file only once but get the new one as soon as one is available.
Use:
jslint - to keep checks on code quality.
require.js - to get just the code you need for each page (or equivalent; see the sketch at the end of this answer).
Organise your js as you would any other software project: break it up by roles and responsibilities, and separate concerns.
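To sketch the require.js point above (the file and module names are hypothetical): each page gets a small entry module that declares only what that page needs, and the page's script tag points at it via data-main.

// js/pages/profile.js - per-page entry module, loaded with
// <script data-main="js/pages/profile" src="js/require.js"></script>
define(['jquery', 'app/user'], function ($, user) {
  $(function () {
    user.renderProfile($('#profile'));
  });
});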

Building CoffeeScript browser applications - namespace and scope issues

I am new to CoffeeScript and am trying to get a feel for the best way of managing and building a complex application that will run in the browser. So, I am trying to determine what is the best way to structure my code and build it; with consideration for scope, testing, extensibility, clarity and performance issues.
One simple solution suggested here (https://github.com/jashkenas/coffee-script/wiki/%5BHowTo%5D-Compiling-and-Setting-Up-Build-Tools) seems to be to maintain all your files/classes separately and then use a Cakefile to concatenate all your files into a single coffee file and compile that. Seems like this would work, in terms of making sure everything ends up in the same scope. It also seems like it makes deployment simple. And it can be automated, which is nice. But it doesn't feel like the most elegant or extensible solution.
Another approach seems to be this functional approach to generating namespaces (https://github.com/jashkenas/coffee-script/wiki/Easy-modules-with-CoffeeScript). This seems like a clever solution. I tested it and it works, but I wonder if there are performance or other drawbacks. It also seems like it could be combined with the above approach.
Another option seems to be assigning/exporting classes and functions to the window object. It sounds like that is a fairly standard approach, but I'm curious if this is really the best approach.
I tried using Spine, as it seems like it can address these issues, but was running into issues getting it up and running (couldn't install spine.app or hem), and I suspect it uses one or more of the above techniques anyways. I would be interested if javascriptMVC or backbone solves these issues - I would consider them as well.
Thanks for your thoughts.
Another option seems to be assigning/exporting classes and functions to the window object. It sounds like that is a fairly standard approach, but I'm curious if this is really the best approach.
I'd say it is. Looking at that wiki page's history, the guy advocating the concatenation of .coffee files before compilation was Stan Angeloff, way back in August 2010, before tools like Sprockets 2 (part of Rails 3.1) became available. It's definitely not the standard approach.
You don't want multiple .coffee files to share the same scope. That goes against the design of the language, which wraps each file in a scope wrapper for a reason. Having to attach globals to window to make them global saves you from making one of the most common mistakes JavaScripters run into.
Yes, helper duplication can cause some inefficiency, and there's an open discussion on an option to disable helper output during compilation. But the impact is minor, because helpers are unlikely to account for more than a small fraction of your code.
I would be interested if javascriptMVC or backbone solves these issues
JavaScript MVC and Backbone don't do anything with respect to scoping issues, except insofar as they cause you to store data in global objects rather than as local variables. I'm not sure what you mean when you say that Spine "seems like it can address these issues"; I'd appreciate it if you'd post another, more specific question.
If you would prefer the Node.js module system, this gives you the same thing in the browser: https://github.com/substack/node-browserify
File foo.coffee:
module.exports = class Foo
...
File bar.coffee:
Foo = require './foo'
# do stuff

Why doesn't javascript have a better way to include files?

I've seen a few ways that you can make a javascript file include other javascript files, but they all seem pretty hacky - mostly they involve tacking the javascript file onto the end of the current document and then loading it in some way.
Why doesn't javascript just include a simple "load this file and execute the script in it" include directive? It's not like this is a new concept. I know that everyone is excited about doing everything in HTML5 with javascript etc., but isn't it going to be hard if you have to hack around the omission of basic functionality like this?
I can't see how it would be a security concern, since a web page can include as many javascript files as it likes, and they all get executed anyway.
The main problems with the current inclusion system (ie, add additional script tags) involve latency. Since a script tag can insert code at the point of inclusion, as soon as a script tag is encountered, further parsing has to more-or-less stop until the JS downloads and is executed (although the browser can continue to fetch resources in parallel). If the JS decides to run an inclusion, you've just added more latency on top of this - now you can't even fetch your scripts in parallel.
Basically, it's trying to solve a problem that doesn't exist (since JS can already tack on additional script tags to do an inclusion), while making the latency problem worse. There are javascript minifiers out there that can merge JS files; you should look into using those instead, as they will help improve latency issues as well.
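For context, the "tack on additional script tags" technique this answer refers to usually looks something like this:

// load another script at runtime by appending a script element
function loadScript(url, callback) {
  var script = document.createElement('script');
  script.src = url;
  script.onload = callback; // fires after the file has downloaded and executed
  document.getElementsByTagName('head')[0].appendChild(script);
}

loadScript('other-file.js', function () {
  // safe to use whatever other-file.js defined
});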
Actually, YUI 3 solves this problem beautifully. Feel free to check out the documentation: http://developer.yahoo.com/yui/3/yui/#use (that's the specific Use function which does this magic). Basically it works like this:
You define modules
When you create the core YUI object with YUI(), you specify which modules your code needs
Behind the scenes, YUI checks if those modules are loaded. If not, it asynchronously loads them on the page.
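A minimal sketch of that use() pattern ('node' and 'event' are standard YUI 3 module names; the element IDs are made up):

// YUI().use() loads the listed modules on demand, then calls back
// with a sandboxed Y instance that has them attached
YUI().use('node', 'event', function (Y) {
  Y.one('#toggle').on('click', function () {
    Y.one('#panel').addClass('visible');
  });
});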
I've also read that the jQuery team's working on something similar (someone back me up here).
As to the philosophical argument that it'd be nice if this was built in, I think that may be a good feature. On the other hand, the simplicity of javascript is nice too. It allows a much lower point of entry for beginning programmers to do their thing. And for those of us that need it, great libraries like YUI are getting better every day.
The RequireJS project attempts to solve this problem; please see, for example,
http://requirejs.org/docs/why.html
(I don't use it yet, though)

The same script using two js files?

I admit, I don't know too much about javascript, mostly I just "steal and modify" from Javascript.com and Dynamic Drive.
I've run across a few scripts that call two .js files
<script type="text/javascript" src="lib/prototype.js"></script>
<script type="text/javascript" src="src/aptabs.js"></script>
and was wondering why, can I safely merge them both with my external javascript or is there a sort of incompatibility that prevents all the code from sharing the same file?
It's often good to separate code with different concerns. Those two files might come from different places. Say prototype is upgraded and you want the new goodness. Then you can just replace the prototype.js file on your server rather than editing your huge merged file and performing surgery on it.
EDIT: It's also "nicer" for the browser to be able to cache the files individually. If your question comes from a concern about duplicating that block of code in several html files, I suggest you make one snippet of it on the server side and include it in your html files through whatever means you have at hand/feel comfy with.
It's actually better performance-wise to have them both in the same file, depending on how your site is architected. The principle is to reduce the number of http requests, as each one carries some overhead.
That said, that's something best left to the very end of production. During development it's easier to have everything separate. If you're going to join them, it's best to have an automated build script to do the operations.
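As a trivial example of such a build step, a small Node sketch that concatenates the two files from the question before minifying:

// build.js - concatenate scripts in a fixed order
var fs = require('fs');

var files = ['lib/prototype.js', 'src/aptabs.js'];
var bundle = files.map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join(';\n'); // the semicolon guards against files missing a trailing one

fs.writeFileSync('all.js', bundle);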
I'm pretty sure the javascript has no idea which file it was loaded from, so there shouldn't be a problem merging them, however...
Personally, I'd keep them separate. It will make it simpler to co-ordinate versioning etc. For most repeat visits, the script will be cached by the browser anyway, so 2 vs 1 isn't really a huge issue. But when you do upgrade one (or the other), the client only needs to download half as much. But again, since it is generally cached it isn't a biggie!
So for simplicity - keep the scripts in their original forms. Given your opening comment "I don't know too much about javascript", this is by far the best approach; I don't mean that disparagingly - simply that if something goes wrong, you don't want to have to figure out whether you broke it or it was broken already.
Edit: it also makes it easy to re-order them, for example if you are using two scripts that compete for the same name, like $ in jQuery, which also supports a mode with explicit naming (noConflict).
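For example, with jQuery loaded after Prototype, noConflict() hands the $ shortcut back to Prototype:

// prototype.js is loaded first, then jquery.js, then:
var jq = jQuery.noConflict(); // $ now refers to Prototype's $ again
jq('#tabs').hide();           // use jQuery through the jq alias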
