I have two JavaScript single-page applications in two separate Git repositories. I want to keep them separated as much as possible (different teams working on them etc.), but they are still very closely related and even co-exist within the context of the same web page as part of one large SPA. Naturally, these two applications share large amounts of library code, and it is very wasteful to bundle each with its own copy of the libraries.
Is there any way I can reuse the library code? What would be a possible approach?
What I am describing seems to be achievable using the DLL plugin. Basically, I create a vendor.js file, which requires all of the dependencies. Then, from that file, webpack generates a bundle of all the libraries and a manifest.json file. Then, using DllReferencePlugin, it should be possible to tell webpack in each of my apps to take the dependencies from the vendor bundle. Both apps can be built independently. As a last step, simply load all three bundles on the page.
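A rough sketch of that setup, assuming webpack's built-in DllPlugin and DllReferencePlugin (the entry list, paths and file names below are placeholders):

// webpack.vendor.config.js - builds the shared vendor bundle and its manifest
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { vendor: ['react', 'lodash'] },      // whatever libraries both apps share
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]_lib'                      // global the DLL bundle exposes
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_lib',
      path: path.resolve(__dirname, 'dist', '[name].manifest.json')
    })
  ]
};

// webpack.config.js in each app - resolve shared libraries against the prebuilt DLL
module.exports = {
  entry: './src/index.js',
  output: { path: path.resolve(__dirname, 'dist'), filename: 'app.js' },
  plugins: [
    new webpack.DllReferencePlugin({
      context: __dirname,
      manifest: require('./dist/vendor.manifest.json')
    })
  ]
};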
Coming from a C# background where every class is (best practices) stored in its own individual file, it makes development quite clean. I've never written anything complex in Javascript in the past, but I am starting to learn HTML 5 and I want to write a complex game using the HTML 5 canvas.
Putting all of my functions and code into a single .js file seems very messy. Is there a way to split it up, or a tool/IDE that lets you develop using separate files and compile them into a single one for deployment?
I guess I am looking for some best practice advice. Questions like this generally seem to get closed, so here are my specific questions to adhere to the SO FAQ that demands practical, answerable questions:
Does complex JS development usually involve all the code being in a single JS file? Eg. you're writing space invaders, do you just have spaceinvaders.js or do you have ships.js, logic.js etc.
Is it possible to split up your JS (whether using multiple script tags or pre-compiling to a single JS file), or do you have to put it all in a single file?
What's the industry standard? Does the HTML 5 spec make any recommendations?
There are two possible ways.
Personally, I would use a build tool to simplify working with multiple files.
Using a build tool
Grunt
My favourite tool for keeping up with complex JS applications is Grunt. With Grunt you can develop in as many files as you want and use its watch and concat plugins to automatically concatenate them on save. You can do a lot more, but this is the basic use case, which may be helpful for you.
Grunt requires Node.js and takes some time to set up, but once your Gruntfile is ready it really speeds up your development process.
To make your project ready for production use, you can also minify your scripts with some configuration and a single command.
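As a rough sketch, a Gruntfile along those lines might look like this (assuming the grunt-contrib-concat, grunt-contrib-watch and grunt-contrib-uglify plugins; the src/ and dist/ paths are placeholders):

// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['src/**/*.js'],     // develop in as many files as you like
        dest: 'dist/app.js'
      }
    },
    uglify: {
      dist: {
        src: 'dist/app.js',
        dest: 'dist/app.min.js'   // minified build for production
      }
    },
    watch: {
      scripts: {
        files: ['src/**/*.js'],
        tasks: ['concat']         // re-concatenate on every save
      }
    }
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-contrib-uglify');
  grunt.loadNpmTasks('grunt-contrib-watch');

  grunt.registerTask('default', ['concat', 'watch']);
  grunt.registerTask('build', ['concat', 'uglify']);
};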
A lot of the major JavaScript libraries use Grunt, easily recognizable by the Gruntfile in their repositories: jQuery, AngularJS, Twitter Bootstrap, etc.
Grunt is also part of the Yeoman development toolset.
Brunch
Brunch is another build tool that lets you do similar things to what Grunt does.
Loading only the needed files
If you are developing a huge single-page application and are concerned about the startup time of your application, one single file may not be the best solution. In this case you can use a JavaScript module loader.
Require.js
For that, require.js is a good fit. It allows you to load only the files actually needed on the current page, though setting up require.js is a bit more work than setting up Grunt.
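A minimal sketch of how that might look (the module names and paths below are made up for illustration):

// js/main.js - referenced from the page via <script data-main="js/main" src="js/require.js">
require.config({
  baseUrl: 'js',
  paths: { jquery: 'lib/jquery' }   // placeholder library path
});

require(['game/ship'], function (Ship) {
  // only the modules this page actually needs get loaded
  var player = new Ship(0, 0);
});

// js/game/ship.js - one small file per "class", much like the C# habit
define([], function () {
  function Ship(x, y) { this.x = x; this.y = y; }
  Ship.prototype.move = function (dx, dy) { this.x += dx; this.y += dy; };
  return Ship;
});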
Of course you can use more than one javascript file. How else would libraries like jQuery or Knockout function?
One thing to keep in mind, though, is that one of the things you want to do to keep your pages feeling snappy is to reduce the total number of HTTP requests per page load. Adding a bunch of JavaScript files that are loaded separately causes an additional request for each extra file. Therefore, you might want to experiment with a build system that stitches your JavaScript files together into a single item that you can use at deployment. There are a number of solutions out there that will do this for you in an automated way.
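As a bare-bones illustration of that idea (not a recommendation for any particular tool), the build step could be as simple as a small Node script; the file names are placeholders:

// build.js - naive concatenation into a single deployable file
var fs = require('fs');

var files = ['js/ships.js', 'js/logic.js', 'js/main.js'];   // order matters
var bundle = files.map(function (f) {
  return fs.readFileSync(f, 'utf8');
}).join('\n;\n');   // the stray semicolon guards against files missing their own

fs.writeFileSync('dist/game.js', bundle);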
You could consider using RequireJS - a very nice library for splitting your JavaScript into modules.
It also provides a tool (the r.js optimizer) that can combine all modules into a single file.
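For the combining step, the RequireJS optimizer (r.js) is driven by a small build config; a sketch, with placeholder paths:

// build.js - run with: node r.js -o build.js
({
  baseUrl: 'js',
  name: 'main',               // the entry module
  out: 'dist/main.min.js'     // main and everything it depends on end up in this one file
})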
You can use as many javascript files as you want. Just add a link to them in your html code:
<body style="background-color: black" onload="main();">
    <!-- Your HTML body contents -->

    <!-- Your scripts (here, I used HTML5 Boilerplate to set up, and the links to jQuery are provided) -->
    <script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
    <script>window.jQuery || document.write('<script src="js/vendor/jquery-1.9.1.min.js"><\/script>')</script>
    <script src="js/main.js"></script>
</body>
Then you can hook up your main.js to handle the main() function call:
function main() {
    //here you can do your basic setup or delegate the control of the app to a different .js file.
}
or the jQuery document ready callback:
$(document).ready(function() {
    //here is a good spot to hook up other jQuery listeners
});
As an ASP.NET MVC developer, I am trying to wrap my head around JavaScript AMD modules and libraries like RequireJS.
What is the relationship between ASP.NET MVC ScriptBundles and RequireJS?
In a large site with lots of JavaScript, should I be using both? Or one of them?
Should I integrate RequireJS with Bundles using IBundleTransform?
I wouldn't use the two of these together. With Bundles you would have all your JavaScript loaded, ideally into just one or two bundles, in your layout page. In production it would be optimized (combined into one file, minified, cached, compressed, etc.).
RequireJS, the way I see it, is for when you want to be more granular about what JS is loaded; you can then use its terse syntax to ensure a certain file has loaded before invoking some of that file's JavaScript.
I would recommend using Bundles since you are working with asp.net-mvc. They are pretty easy to use and work very well. I used a similar pre-MVC 4 framework called Combres, and this approach works very well for apps, I think. It may be different for read-only web sites.
I'm building a fairly large-scale JavaScript Backbone.js app with this folder organization:
app
    index.html
    libs
        underscore
        jquery
        [...]
    src
        utils
        modules
        [...]
The index.html file basically loads up all the Backbone.js Routers etc. and instantiates AMD modules etc.
Often, however, I find the need to create small applications that basically share dependencies with the big app.
Suppose I need to create 3 small experiments (separate pages) that all load the same usual suspects (underscore, backbone and a couple of util libraries and modules I've written).
They may differ, though, in: 1) how they extend these JavaScript libraries, 2) what gets instantiated, and 3) markup and interaction.
How do I keep this experimentation DRY?
How do I set up this "extendable Template"?
In my opinion, this is where having a good build system comes in. The more complex your setup, the more useful it is to be able to set up configuration files that can keep your dependency management consolidated in one place. This becomes particularly important when:
You need to load the same sets of dependencies on multiple static pages, but your dependency lists change often during development.
You need to be able to easily create compressed versions of the dependencies for a production version. I find this is pretty important with Backbone, because the uncompressed versions are really big but quite useful during development.
I've generally used Apache Ant for this, but there are a lot of build systems out there. What I've done in the past is:
Set up an index.tmpl.html file with the core HTML markup and placeholders for JS scripts, CSS files, and underscore templates.
Make a build.properties file that defines my dependency lists. You can group them in different ways under different property names, e.g. lib.scripts.all or util.scripts.all.
In my build process, create a new index.html file, based on index.tmpl.html, with <script> and other tags to load my dependencies. I have different targets to load the raw files or to compress everything into a production-ready single script file.
You can see an example of this setup in this Github project.
If I understand your requirements, you could set up a similar build file with a few tweaks to allow you to set a) the HTML template to use (your default index or another with experiment-specific markup), b) the output file, c) the specific sets of dependencies to load, d) additional dependencies to load, e.g. experiment-specific modules or initialization scripts. You could set these properties up in a specific target (if you think you'll reuse them a few times) or just specify them on the command line when you invoke ant, via the -D flag.
This would allow you a great deal of flexibility to re-use different portions of your code, and has the added benefit of making it easier to move an "experiment" into your core production code, just by including it permanently in your build process.
Our file structure is pretty good, organizing functionality in separate folders. My question is how others work on applications that involve upwards of 500 JavaScript files.
We have written a Maven plugin to concatenate these files together (it also runs YUI Compressor). However, this involves 3-10 seconds of compiling for every change.
Is this step necessary for the organization of a large application? I feel like a well-structured HTML file pulling in all these resources would save me 45 minutes every day.
For my own framework projects, typically monitoring, testing, or in-page services to orchestrate other toolkits (though not at your file count), my approach has been to target the individual, dynamically loaded files during development. For testing, I'll run one build to compress and version the individual files, and test those individual files again, because, depending on the concatenation order, compression technique, and browser, I may wind up with a script error, and it's a pain to dig it out of one monster file. Third, I'll concatenate everything together and test once more.
In the HTML reference, I'll either target the uncompressed file, which loads specified dependencies, or the compound file. A separate bootstrap file names the dependencies, which are either included in the compound file, or loaded dynamically as needed.
This way I can add or change a file, and start developing and testing without rebuilding.
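A rough sketch of what such a bootstrap might look like (the dependency list and paths are made up for illustration):

// bootstrap.js - names the dependencies; in development they are loaded one by one
var DEPENDENCIES = ['js/util/events.js', 'js/monitor/core.js'];   // placeholder list

function loadNext(i) {
  if (i >= DEPENDENCIES.length) { return; }
  var script = document.createElement('script');
  script.src = DEPENDENCIES[i];
  script.onload = function () { loadNext(i + 1); };   // preserve load order
  document.head.appendChild(script);
}

loadNext(0);

The same list can drive the concatenation order for the compound build, so the bootstrap and the build never disagree about dependencies.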
The solution is likely to concatenate and compress for user testing and production only.
For development, it's probably best to simply import them all into the HTML file. It speeds up the dev process, and also simplifies debugging. It also allows the browser to cache some of those files.
When you can't rely on cached copies (which, with 500 files, I don't think will be very often), it will slow down load times.
You can likely save a lot of time by only running the compressor in production. The YUI Compressor is notoriously slow, because it uses the Java-based Rhino interpreter to actually parse and analyze the JavaScript.
Nowadays, we have tons of JavaScript libraries per page in addition to the JavaScript files we write ourselves. How do you manage them all? How do you minify them in an organized way?
Organization
All of my scripts are maintained in a directory structure that I follow whenever I work on a site. The directory structure normally goes something like this:
+--root
   |--javascript
      |--lib
      |  |--prototype.js
      |  |--scriptaculous
      |     |--scriptaculous.js
      |     |--effects.js
      |     |--..
      |--myOwnScript.js
      |--myOwnScript2.js
If, on the off chance, I'm working on a team that uses an inordinate number of scripts, then I'll normally create a custom directory in which we'll organize scripts by relationship. This doesn't happen terribly often, though.
Compression
Though there are a lot of different compressors and obfuscators out there, I always come back to YUI Compressor.
Inclusion
Unless a site is using some form of master page, CMS, or something beyond my control that dictates what can be included on a page, I only include the scripts necessary for the given page, just for the sake of the small performance gain. If a page doesn't require any script, there will be no script inclusions on that page.
First of all, YUI Compressor.
Keeping them organized is up to you, but most groups that I've seen have just come up with a convention that makes sense for their application.
It's generally optimal to package up your files in such a way that you have a small handful of packages which can be included on any given page for optimal caching.
You also might consider dividing your javascript up into segments that are easy to share across the team.
Cal Henderson (of Flickr fame) wrote Serving JavaScript Fast a while back. It covers asset delivery, not organization, but it might answer some of your questions.
Here are the bullet points:
Yes, you ought to concatenate JavaScript files in production to minimize the number of HTTP requests.
BUT you might not want to concatenate into one giant file; you might want to break it into logical pieces and spread the transfer cost over several pages.
gzip compression is good, but you shouldn't serve gzipped assets to IE <= 6, so you might also want to minify/compress your JavaScript.
I'll add a few bullet points of my own:
You ought to come up with a solution that works for both development and production. In development mode, it should pull in extra JavaScript files on demand; in production it should bundle everything ahead of time. Switching from one behavior to the other should be as easy as setting a flag (see the sketch after this list).
Rails 2.0 handles all this through an asset cache; other web app frameworks might offer similar solutions.
As another answer suggests, placing third-party libraries in a lib directory is a good start. You can also divide your own JS files into sub-directories if it makes sense. Ideally, you'll be able to arrange them in such a way that the files in a given sub-directory can be concatenated into one file.
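A rough sketch of the flag idea from the list above, assuming a Node-based server and a hypothetical scriptTags() template helper (both the helper name and the file list are made up):

// scriptTags.js - individual <script> tags in development, one bundle in production
var DEBUG = process.env.NODE_ENV !== 'production';

var scripts = ['lib/jquery.js', 'app/models.js', 'app/views.js'];   // placeholder list

function scriptTags() {
  if (DEBUG) {
    // development: pull the files in individually so edits show up immediately
    return scripts.map(function (src) {
      return '<script src="/js/' + src + '"></script>';
    }).join('\n');
  }
  // production: the build has already concatenated and minified everything
  return '<script src="/js/bundle.min.js"></script>';
}

module.exports = scriptTags;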
I will have a folder for all JavaScript, a subfolder of that for 3rd-party/shared libraries, and subfolders for each component of the site to keep everything organized.
For example:
/
+--/javascript/
   +-- lib/
   +-- admin/
   +-- component1/
   +-- component2/
Then run everything through a minifier/obfuscator during the build process.
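For example, a minification step wired into the build might look something like this (a sketch using the uglify-js npm package; any minifier/obfuscator would slot in the same way):

// minify.js - run after concatenation during the build
var fs = require('fs');
var UglifyJS = require('uglify-js');

var source = fs.readFileSync('dist/app.js', 'utf8');
var result = UglifyJS.minify(source);   // returns { code, error }

if (result.error) { throw result.error; }
fs.writeFileSync('dist/app.min.js', result.code);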
I've been using this lately:
http://code.google.com/apis/ajaxlibs/
And then have a "jscripts" folder where I keep my custom code.
In my last project, we had three kinds of JS files, all of them inside a JS folder.
Library code. A bunch of functions used on almost all of the pages, so they were put together in one or a few files.
Classes. These had their own files, organized in folders as needed, but not necessarily so.
Ad hoc JS. Code that was specific to that page. These were saved in files that had the same name as the JSP pages they were supposed to run in.
The biggest effort was in keeping most of the code in the first two kinds, with the ad hoc code only knowing what to call, and when.
This might be a different approach than what you're looking for, but I've been playing around with the idea of JavaScript templates in our blog engine. In a nutshell, you assign a JavaScript template to a page ID using the database, and it will dynamically include and minify all the JavaScript files associated with that template and create a file in a server-side cache with the template ID as the file name. When a page is loaded, it calls the template file, which first checks whether the file exists in the cache and loads it if it does. If it doesn't exist, it creates it on the fly and includes it. I also use the template file to gzip the conglomerate JavaScript file.
The template idea would work well for site-wide JavaScript (like a JavaScript library), but it doesn't cover page-specific JavaScript. However, you can still use the same approach for page specific JavaScript by including a second file that does the same as above.