I'm building a fairly large-scale JavaScript Backbone.js app with this folder organization:
app
    index.html
    libs
        underscore
        jquery
        [...]
    src
        utils
        modules
        [...]
The index.html file basically loads up all the Backbone.js routers, instantiates the AMD modules, and so on.
Often, however, I find the need to create small applications that share dependencies with the big app.
Suppose I need to create 3 small experiments (separate pages) that all load the same usual suspects (underscore, backbone and a couple of util libraries and modules I've written).
They may differ, though, in: 1) how they extend these JavaScript libraries, 2) what gets instantiated, and 3) markup and interaction.
How do I keep this experimentation DRY?
How do I set up this "extendable Template"?
In my opinion, this is where having a good build system comes in. The more complex your setup, the more useful it is to be able to set up configuration files that can keep your dependency management consolidated in one place. This becomes particularly important when:
You need to load the same sets of dependencies on multiple static pages, but your dependency lists change often during development.
You need to be able to easily create compressed versions of the dependencies for a production version. I find this is pretty important with Backbone, because the uncompressed versions are really big but quite useful during development.
I've generally used Apache Ant for this, but there are a lot of build systems out there. What I've done in the past is:
Set up an index.tmpl.html file with the core HTML markup and placeholders for JS scripts, CSS files, and underscore templates.
Make a build.properties file that defines my dependency lists. You can group them in different ways under different property names, e.g. lib.scripts.all or util.scripts.all.
In my build process, create a new index.html file, based on index.tmpl.html, with <script> and other tags to load my dependencies. I have different targets to load the raw files or to compress everything into a production-ready single script file.
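For instance, the build.properties file from the second step might contain something like this (the property names and paths are purely illustrative):
lib.scripts.all = libs/jquery/jquery.js, libs/underscore/underscore.js, libs/backbone/backbone.js
util.scripts.all = src/utils/log.js, src/utils/dom.js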
You can see an example of this setup in this Github project.
If I understand your requirements, you could set up a similar build file with a few tweaks to allow you to set a) the HTML template to use (your default index or another with experiment-specific markup), b) the output file, c) the specific sets of dependencies to load, d) additional dependencies to load, e.g. experiment-specific modules or initialization scripts. You could set these properties up in a specific target (if you think you'll reuse them a few times) or just specify them on the command line when you invoke ant, via the -D flag.
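A hypothetical invocation of such a target might look like this (the target and property names are made up):
ant build-experiment -Dtemplate=experiment1.tmpl.html -Doutput=experiment1.html -Dextra.scripts=src/experiments/exp1.js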
This would allow you a great deal of flexibility to re-use different portions of your code, and has the added benefit of making it easier to move an "experiment" into your core production code, just by including it permanently in your build process.
Related
I have two JavaScript single-page applications in two separate Git repositories. I want to keep them as separate as possible (different teams working on them, etc.), but they are still very closely related and even co-exist within the context of the same web page as part of one large SPA. Naturally, these two applications share large amounts of library code, and it is very wasteful to bundle each with its own copy of the libraries.
Is there any way I can reuse the library code? What would be a possible approach?
What I am describing seems to be achievable using the DLL plugin. Basically, I create a vendor.js file which requires all of the dependencies. Then, from that file, I generate a bundle of all the libraries and a manifest.json file. Then, using DllReferencePlugin, it should be possible to tell webpack in each of my apps to take the dependency from the vendor bundle. Both apps can be built independently. As a last step, I simply load all three bundles on the page.
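A minimal sketch of what that looks like, assuming webpack 2+ and made-up paths:

// webpack.vendor.config.js -- builds the shared library bundle once
const path = require('path');
const webpack = require('webpack');

module.exports = {
  entry: { vendor: ['jquery', 'underscore', 'backbone'] },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].dll.js',
    library: '[name]_lib' // global variable the vendor bundle exposes
  },
  plugins: [
    new webpack.DllPlugin({
      name: '[name]_lib', // must match output.library above
      path: path.resolve(__dirname, 'dist', '[name]-manifest.json')
    })
  ]
};

// in each app's own webpack config, reference the manifest instead of rebundling:
new webpack.DllReferencePlugin({
  context: __dirname,
  manifest: require('./dist/vendor-manifest.json')
})

Each app then just needs a plain script tag for vendor.dll.js ahead of its own bundle.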
I would like to use meteor with a bootstrap admin, i.e. a bundle including several bootstrap plugins, script and everything typically made as a kind of framework for developing a web application.
Usually those bundles come with a lot of dependencies, such as external links for fonts and IE hacks, as well as their own shipped copies of Bootstrap, jQuery, and other libraries. In a regular PHP-like framework, that would have been fine.
But in order to make such a template "native" to Meteor, I thought of refactoring it so that local dependencies (scripts and CSS, basically) are stored in folders rather than loaded via a <script src="…"></script> tag (otherwise the local path would not be found). I doubt that is really the best practice, though, which is why I am considering three options:
Use the project/public folder to store all the bundle's dependencies (as one would in PHP, for example).
Refactor the bundle's code by removing every script or style tag that imports JS or CSS into the page, and place the corresponding files alongside so that Meteor loads them dynamically at runtime.
Like option 2, but instead of using the bundle's jQuery source files, install the official jQuery package for Meteor (if one exists).
The first option should be the quickest way to get something running, but it would not be very Meteor-native. The advantage, however, would be keeping the code close to the original and being able to upgrade whenever a new version of the bundle is released.
The two other options would be much more elegant (especially the third one), but they would involve a lot of refactoring and carry the risk of introducing bugs I did not expect.
My preference for now is the first option, but I'm afraid of not seeing the drawbacks of this approach. Does anyone have experience importing CSS and JS files manually, the "old-fashioned way", in Meteor? What is the risk of such an approach compared to Meteor's "place in folder to include" convention?
I'm developing a modular framework in javascript and am looking for a way to automatically optimize/combine a set of javascripts as a precompile step.
I'm already using grunt so a grunt-task would probably make sense.
The framework consists of modules in their own files (as in the rectangular 'widgets' we're all used to) that in turn may require other javascripts.
All this is wired using Require.js which works great.
However, I came across the following constraint when trying to use r.js, which comes with require.js:
The optimizer will only combine modules that are specified in arrays of string literals that are passed to top-level require and define calls, or the require('name') string literal calls in a simplified CommonJS wrapping. So, it will not find modules that are loaded via a variable name.
The thing is: modules may inherit from each other, and even composition of other modules is possible through configuration (with the technical need to load the referenced modules sitting in their own JS files).
This doesn't work with the constraint mentioned above.
I'm sure I could cook up something myself given enough time, but perhaps someone has already done something like this (r.js, but more flexible).
A doable solution, imho, would be to:
let the precompile task run the page for which the JS needs to be optimized once (but on the server in Node instead of on the client; the framework is able to do this)
somehow track all the libraries loaded in by require.js
read them back out of require.js, and voilà, there's your list of scripts to load
hand this to r.js through the include option it provides, and let r.js handle it from there.
There are more page types, by the way. But in r.js it seems possible to define common libraries, so they don't get included in the per-page optimized file.
Does this sound plausible? Anyone ever tried something like this?
This seems overly complicated. The r.js build has an option, onBuildRead, where you can modify the source so that it becomes acceptable to the optimizer. You may also look into the internal API onResourceLoad, with which you can capture all loaded dependencies and then make a call to do a custom build.
To load your page you would have to use PhantomJS, so that it acts as a browser and executes the JS, and then signal Node to produce a custom build for that page. But then you would need to switch the resources on that page to use the custom build. I guess you could make it configurable and only do that in production.
It does sound like it is possible; I'm just not sure it is feasible.
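For reference, those two hooks look roughly like this; this is only a sketch, and the module names and paths are placeholders:

// build.js passed to r.js -- onBuildRead lets you rewrite a module's source
// before the optimizer parses it (e.g. replacing dynamic requires with literals)
({
  baseUrl: 'src',
  name: 'main',
  out: 'dist/main-built.js',
  onBuildRead: function (moduleName, path, contents) {
    return contents; // return the (possibly transformed) source
  }
})

// at runtime (under Node or PhantomJS), the internal API records every module loaded:
var loaded = [];
requirejs.onResourceLoad = function (context, map, depArray) {
  loaded.push(map.name);
};
// once the page has initialized, `loaded` can serve as the include list for a custom r.js build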
I'm starting to use the Dojo toolkit. It has rich features like Dijits and themes, which are useful but take forever to load.
I've got a good internet connection but those with slower connections would experience rather slow page loads.
This is also a question about heavy vs light frameworks. If you make heavy use of widgets, what are some techniques to keep page load times down?
Dojo has a build system that will drastically improve load times. Take a look at one of the Dojo books or the online docs and read up on layered builds. In order to do a build, you need to have the "source" (or "full") version of Dojo, which has the build tool included -- you can tell if you have this by the presence of the 'util' directory (which is at the same level as dojo, dijit, dojox). If you don't have the full version, go back to the Dojo site and delve down into the download area -- it's not completely obvious, perhaps.
Anyway, if you have the right version, you basically just need to make a "build profile" file (or files ... aka a layered build), which is essentially your list of dojo.requires that you would normally have in your html. The build system will jam all the javascript code for all the dijits, dojox stuff, etc. together into a "layered build" (a file) and it will run shrinksafe on it, which sort of minifies the code (removes whitespace, shortens names, etc). It will also do some of this to the css files. Aside from making things much smaller, you get just a single file for all the js code (or a few files if you do more than one layer, but most of the time a single layer suffices).
This will improve your load times at least ten-fold, if not more. It might take you a bit of reading to get down the format of the profiles and the build command itself, but it's not too hard really. Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast.
Here's a link to the older build docs, which mostly still hold true:
http://www.dojotoolkit.org/book/dojo-book-0-9/part-4-meta-dojo/package-system-and-custom-builds
Here's the updated docs (though perhaps a little incomplete):
docs.dojocampus.org/build/index
It reads harder than it really is ... use the layer.profile file in the profiles directory as a starting point. Just put a couple of things & then do a build & see if you get the release directory created (which should be at the same level as dojo, dijit, etc.) and it will have the entire dojo system in it (all minified) as well as your built (layered) stuff. Much faster.
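For what it's worth, running the build itself (with the 1.x-era build scripts) looks roughly like this; the profile name here is just an example:
cd util/buildscripts
./build.sh profile=mystuff action=release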
Dylan Tynan
It's not that big (28k gzipped). Nevertheless you can use Google's hosted version of Dojo. Many of your users will already have it cached.
Once you create a build file, name it something obvious like "mystuff" and then you can dojo.require the "mystuff" file (which will be in the new build directory that is created when you build, then underneath that & hanging out with the dojo.js file in the dojo directory). Requiring in your built file will satisfy all the dojo.require's you normally do (assuming you have them all listed in the profile to build) and things will load very fast
Slight correction -- you don't dojo.require that file, you reference it in an ordinary script tag.
<script type="text/javascript" src="js/dojo/dojo/dojo.js" ></script>
<script type="text/javascript" src="js/dojo/mystuff/mystuff.js"></script>
For the directory layout I put the built file "mystuff.js" into the same directory as my package. So at the same level as dojo, dojox, and dijit, I would have a directory named "mystuff", and within that I have MyClass1.js and MyClass2.js. Then a fragment from the profile.js file for the build looks like:
layers: [
    {
        name: "../mystuff/mystuff.js",
        dependencies: [
            "mystuff.MyClass1",
            "mystuff.MyClass2"
        ]
    },
    ...
]
I know this is an old thread. But I'm posting this answer for the benefit of other users like me who may read this.
If you are serving from Apache, there are other factors too. These settings can make a huge difference: MaxClients and MaxRequestsPerChild. You will need to tweak them based on the resources available to the server/machine serving the files.
Changing this worked really well for me.
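For reference, these live in httpd.conf (prefork MPM); the values below are purely illustrative and depend on your server's RAM and traffic:
MaxClients          150
MaxRequestsPerChild 1000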
Using the Google CDN is also a good option, although it may not be practical in some situations.
A custom build also has an effect, as pointed out in other answers.
Nowadays, we have tons of Javascript libraries per page in addition to the Javascript files we write ourselves. How do you manage them all? How do you minify them in an organized way?
Organization
All of my scripts are maintained in a directory structure that I follow whenever I work on a site. The directory structure normally goes something like this:
+--root
   |--javascript
      |--lib
      |  |--prototype.js
      |  |--scriptaculous
      |     |--scriptaculous.js
      |     |--effects.js
      |     |--..
      |--myOwnScript.js
      |--myOwnScript2.js
If, on the off chance, I'm working with a team that uses an inordinate number of scripts, then I'll normally create a custom directory in which we'll organize scripts by relationship. This doesn't happen terribly often, though.
Compression
Though there are a lot of different compressors and obfuscators out there, I always come back to YUI Compressor.
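Invoking it is a one-liner (the jar version here is just an example):
java -jar yuicompressor-2.4.8.jar myOwnScript.js -o myOwnScript.min.js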
Inclusion
Unless a site is using some form of master page, CMS, or something beyond my control that dictates what can be included on a page, I only include the scripts necessary for the given page, for the small performance gain. If a page doesn't require any script, there are no script inclusions on that page.
First of all, YUI Compressor.
Keeping them organized is up to you, but most groups that I've seen have just come up with a convention that makes sense for their application.
It's generally optimal to package up your files in such a way that you have a small handful of packages which can be included on any given page for optimal caching.
You also might consider dividing your javascript up into segments that are easy to share across the team.
Cal Henderson (of Flickr fame) wrote Serving JavaScript Fast a while back. It covers asset delivery, not organization, but it might answer some of your questions.
Here are the bullet points:
Yes, you ought to concatenate JavaScript files in production to minimize the number of HTTP requests.
BUT you might not want to concatenate into one giant file; you might want to break it into logical pieces and spread the transfer cost over several pages.
gzip compression is good, but you shouldn't serve gzipped assets to IE <= 6, so you might also want to minify/compress your JavaScript.
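(One common way to implement that IE exception, if you happen to be on Apache with mod_deflate, is a BrowserMatch rule like the following; the regex is illustrative:)
BrowserMatch "MSIE [1-6]\." no-gzip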
I'll add a few bullet points of my own:
You ought to come up with a solution that works for both development and production. In development mode, it should pull in extra JavaScript files on demand; in production it should bundle everything ahead of time. Switching from one behavior to the other should be as easy as setting a flag (see the sketch after this list).
Rails 2.0 handles all this through an asset cache; other web app frameworks might offer similar solutions.
As another answer suggests, placing third-party libraries in a lib directory is a good start. You can also divide your own JS files into sub-directories if it makes sense. Ideally, you'll be able to arrange them in such a way that the files in a given sub-directory can be concatenated into one file.
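Here is a minimal sketch of that dev/production switch, assuming a server-side helper that emits the script tags; all names are illustrative:

// emit one tag per file in development, a single pre-built bundle in production
function scriptTags(scripts, debug) {
  if (debug) {
    return scripts.map(function (src) {
      return '<script src="' + src + '"></script>';
    }).join('\n');
  }
  return '<script src="all.min.js"></script>';
}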
I will have a folder for all JavaScript, with a subfolder for third-party/shared libraries and subfolders for each component of the site, to keep everything organized.
For example:
/
+--/javascript/
    +-- lib/
    +-- admin/
    +-- component1/
    +-- component2/
Then run everything through a minifier/obfuscator during the build process.
I've been using this lately:
http://code.google.com/apis/ajaxlibs/
And then I have a "jscripts" folder where I keep my custom code.
In my last project, we had three kinds of JS files, all of them inside a JS folder.
Library code. A bunch of functions used on almost all of the pages, so they were put together in one or a few files.
Classes. These had their own files, organized in folders as needed, but not necessarily so.
Ad hoc JS. Code that was specific to that page. These were saved in files that had the same name as the JSP pages they were supposed to run in.
The biggest effort was in keeping most of the code in the first two kinds, with the custom code knowing only what to call, and when.
This might be a different approach than what you're looking for, but I've been playing around with the idea of JavaScript templates in our blog engine. In a nutshell, you assign a JavaScript template to a page id using the database, and it will dynamically include and minify all the JavaScript files associated with that template, creating a file in a server-side cache with the template id as the file name. When a page is loaded, it calls the template file, which first checks whether the file exists in the cache and loads it if it does. If it doesn't exist, it creates the file on the fly and includes it. I also use the template file to gzip the conglomerate JavaScript file.
The template idea would work well for site-wide JavaScript (like a JavaScript library), but it doesn't cover page-specific JavaScript. However, you can still use the same approach for page specific JavaScript by including a second file that does the same as above.
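A rough sketch of that cache-or-build check, written here as Node for illustration (the original could live in any server-side language; all names are placeholders):

const fs = require('fs');
const path = require('path');

// returns the path of the cached bundle for a template, building it on a cache miss
function templateBundle(templateId, files, cacheDir) {
  const cached = path.join(cacheDir, templateId + '.js');
  if (!fs.existsSync(cached)) {
    // concatenate (and ideally minify) every file assigned to this template
    const bundle = files.map(function (f) { return fs.readFileSync(f, 'utf8'); }).join(';\n');
    fs.writeFileSync(cached, bundle);
  }
  return cached; // serve this file gzipped, with long-lived cache headers
}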