Questions about Dojo modules from a Dojo newb

I'm new to Dojo so I don't quite understand all of Dojo's features. Here are some questions, but I'm sure some of them seem really stupid since I don't quite get how Dojo is structured yet:
1. How do you create multiple modules within a single JS file, and how do you access a module in the file where it's created? Also, how do you access specific modules from a file containing multiple modules?
2. What is the difference between require and define?
3. I successfully required a module from a file, but I can't figure out how to get the variables out of the file. How do you do this?
4. I was looking at how Dojo requires its modules and noticed that it makes an HTTP request for each file. Isn't that really inefficient when you're dealing with lots of modules and/or a large site, where you'd want to minimize the number of HTTP requests? What's the way around this?

Reading through The Dojo Loader will provide answers.
Basically module = file and very often (as a matter of best practice) module = file = class (more precisely public class defined via dojo/_base/declare).
Ad #4: You will need to employ The Dojo Build System, which will resolve all the dependencies and put all your modules into a single file (or more files, depending on your build profile). Have a look at the Dojo Boilerplate project; it may help with building your application.

How do you create multiple modules within a single js file?
While this is possible, and is done by the build system when it creates layer files, it is not recommended. Putting multiple modules in a single file means you have to use a different form of define that gives an explicit ID to each of them. This is less flexible than having the module IDs automatically derived from the file name and path (this, together with relative paths, makes it much easier to move and rename AMD modules compared to old-style modules).
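For example (a minimal sketch; the module IDs "app/util" and "app/logger" are hypothetical), each module sharing a file needs its ID spelled out:

// one file containing two explicitly-named modules
define("app/util", [], function () {
    return {
        trim: function (s) { return s.replace(/^\s+|\s+$/g, ""); }
    };
});

define("app/logger", ["app/util"], function (util) {
    return {
        log: function (msg) { console.log(util.trim(msg)); }
    };
});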
What is the difference between require and define?
define is a beefed-up version of require that registers its return value as a new module.
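As a rough sketch (reusing the hypothetical modules from above), require only consumes modules, while define also produces one:

// require() consumes modules and does not register anything:
require(["app/logger"], function (logger) {
    logger.log("  hello  "); // runs once the dependency has loaded
});

// define() does the same dependency resolution, but its return
// value becomes a new module that others can require:
define(["app/util"], function (util) {
    return { shout: function (s) { return util.trim(s).toUpperCase(); } };
});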
I successfully required a module from a file, but I can't figure out how to get the variables out of the file.
I am not sure what you mean by this (without a concrete, runnable example), but all you have to do is create an object to be the value of your module and return it from define. This is not very far off from how you would define modules the old way, with manual "namespaces":
//foo.js
define([], function(){
    var M = {}; // the exported stuff will go here

    var myVariable = 16; // function variables are private (like they are everywhere else)
                         // and cannot be seen from the outside

    M.anumber = myVariable + 1; // things on the returned object can be seen from the outside

    return M; // people who require this module will get this return value
});

//bar.js
define(['foo'], function(foo){
    console.log( foo.anumber );
});
I was looking at how Dojo requires its modules and noticed that it makes an HTTP request for each file...
As phusick pointed out, the build system can be used to bundle all the modules into a single file (giving you the best of both worlds: modularity during development and performance during deployment). It can also do other cool stuff, like bundling CSS, passing the JavaScript through a minifier or through the Closure Compiler, evaluating ifdef-style flags to create platform-dependent builds, etc.
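For illustration, here is a minimal sketch of a layer definition in a Dojo build profile (the paths and layer name are hypothetical; consult the build system docs for the full format):

// app.profile.js -- a hypothetical build profile
var profile = {
    basePath: "./src",
    releaseDir: "../release",
    layers: {
        // everything "app/main" depends on gets rolled into one file
        "app/main": {
            include: ["app/main"],
            boot: true
        }
    }
};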

Related

How can I access constants in the lib/constants.js file in Meteor?

I followed the documentation to put the constants in the lib/constants.js file.
Question:
How do I access these constants in my client-side HTML and JS files?
Variables in Meteor are file-scoped.
Normally, a var myVar would go in the global Node context; in Meteor, however, it stays enclosed in the file (which makes it really useful for writing more transparent code). What happens is that Meteor wraps every file in an IIFE, scoping the variables to that function and thus effectively to the file.
To define a global variable, simply remove the var/let/const keyword and Meteor will take care of exporting it. You have to create functions through the same mechanism (myFunc = function myFunc() {} or myFunc = () => {}). This export will be client-side if the code is in the client directory, server-side if it is in the server directory, or both if it is in some other not-so-special directory.
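A minimal sketch of the difference (the file names are hypothetical):

// lib/constants.js (hypothetical file)
var filePrivate = 16;        // var: stays scoped to this file's IIFE
THE_ANSWER = 42;             // no var: becomes an app-wide global
greet = function (name) {    // functions are exported the same way
    return "Hello, " + name;
};

// client/app.js (hypothetical file)
console.log(THE_ANSWER);                         // 42
console.log(greet("world"));                     // "Hello, world"
console.log(typeof filePrivate === "undefined"); // true: not visible here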
Don't forget to follow these rules:
HTML template files are always loaded before everything else
Files beginning with main. are loaded last
Files inside any lib/ directory are loaded next
Files with deeper paths are loaded next
Files are then loaded in alphabetical order of the entire path
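Applying those rules to a hypothetical layout, the files would load in this order:

client/templates.html    -- HTML templates load before everything else
lib/constants.js         -- lib/ directories come next
client/views/helpers.js  -- deeper paths before shallower ones
client/app.js            -- then alphabetical order of the entire path
client/main.js           -- main.* files load last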
Now you may run into an issue server-side if you try to access this global variable immediately, because Meteor hasn't yet instantiated it; it hasn't run over the file defining the variable yet. So you have to fight with file and folder names, or maybe try to trick Meteor.startup() (good luck with that). This means less readable, fragile, location-dependent code: one of your colleagues moves a file and your application breaks.
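For what it's worth, the Meteor.startup() workaround just defers the access until every file has loaded (a minimal sketch; the file name is hypothetical):

// server/useConstant.js (hypothetical file)
Meteor.startup(function () {
    // every file has been evaluated by now, so the global exists
    console.log("The answer is " + THE_ANSWER);
});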
Or maybe you just don't want to have to go back to the documentation each time you add a file, running a five-step process to figure out where to place the file and how to name it.
There are two solutions to this problem as of Meteor 1.3:
1. ES6 modules
Meteor 1.3 (currently in beta) allows you to use modules in your application by using the modules package (meteor add modules or api.use('modules')).
Modules go a long way; here is a simple example taken directly from the link above:
File: a.js (loaded first with traditional load order rules):
import {bThing} from './b.js';
// bThing is now usable here
File: b.js (loaded second with traditional load order rules):
export const bThing = 'my constant';
Meteor 1.3 will take care of loading the b.js file before a.js since it's been explicitly told so.
2. Packages
The last option to declare global variables is to create a package.
meteor create --package global_constants
Each variable declared without the var keyword is exported to the whole package. This means you can create your variables in their own files, fine-tune the load order with api.addFiles, and control whether they should go to the client, the server, or both. It also allows you to api.use these variables in other packages.
This means clear, reusable code. Want to add a constant? Either do it in one of the already-created files, or create a new one and api.addFiles it.
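A minimal sketch of such a package (the package and file names are hypothetical, but api.addFiles and api.export are the documented package API):

// packages/global_constants/package.js
Package.describe({
    name: 'global_constants',
    version: '0.0.1'
});

Package.onUse(function (api) {
    // control exactly where and in what order files load
    api.addFiles('constants.js', ['client', 'server']);
    // expose the package-level variable to apps using the package
    api.export('THE_ANSWER');
});

// packages/global_constants/constants.js
THE_ANSWER = 42; // no var: package-scoped until exported above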
You can read more about package management in the doc.
Here's a quote from "Structuring your application":
This [using packages] is the ultimate in code separation, modularity, and reusability. If you put the code for each feature in a separate package, the code for one feature won't be able to access the code for the other feature except through exports, making every dependency explicit. This also allows for the easiest independent testing of features. You can also publish the packages and use them in multiple apps with meteor add.
It's amazing to combine the two approaches with Meteor 1.3. Modules are far easier and lighter to write than packages, since using them takes one export line and as many imports as needed rather than the whole package-creation procedure, but they are not as error-proof as packages (it's easy to forget the import line at the top of a file).
A good bet would be to use modules first, then switch to a package as soon as they become tiring to write or an error happens because of them (a miswritten import, ...).
Just make sure to avoid relying on traditional load order if you're doing anything bigger than a POC.
You will need to make them global variables in order for other files to see them.
JavaScript
/lib/constants.js
THE_ANSWER = 42; // note the lack of var
/client/some-other-file.js
console.log(THE_ANSWER);
CoffeeScript
/lib/constants.coffee
@THE_ANSWER = 42 # note the @ prefix instead of a plain assignment
/client/some-other-file.coffee
console.log THE_ANSWER

Why does requirejs allow you to specify a string id?

In requirejs it is possible to define an anonymous module or to give it a string id. According to this article, you would normally not use the string id:
You would not normally use the id when you define your module. It is typically used by tools when optimizing a RequireJS application.
I currently define my modules anonymously and use require.config.paths for the mappings. What I don't understand is: why does requirejs allow you to specify string id's if they're not needed?
why does requirejs allow you to specify string id's if they're not needed?
They only aren't needed when require can figure out by itself which module just called define. This is the standard case, when require() loaded the script file containing the module, so it knows the module's name and path.
However, the optimizer will put multiple modules in a single file, and then there needs to be a different way to figure out which modules are define()d. From the docs:
These [names] are normally generated by the optimization tool. You can explicitly name modules yourself, but it makes the modules less portable -- if you move the file to another directory you will need to change the name. It is normally best to avoid coding in a name for the module and just let the optimization tool burn in the module names. The optimization tool needs to add the names so that more than one module can be bundled in a file, to allow for faster loading in the browser.
I can't answer as to James Burke's motivations, but I can point out instances of its usefulness.
Define your own single page "layer" for testing, using JSBin or JSFiddle. The following code can be easily executed without having to stand up endpoints for each module or to use r.js to create a layer.
define('A', [], function(){ console.log('A loaded'); });
define('B', [], function(){ console.log('B loaded'); });
define('C', ['A','B'], function(){ console.log('C loaded'); });
Define a "local override" for troubleshooting. Add a define right before your require to easily preempt a module definition add a new module so you don't have to touch several files while you're working
define('plugin/fancySelect', [], function(){ /* ... */ });
require([ /* ... */ ], function(){
    // your main application code
});

How can I get auto-completion in IntelliJ 12.1.4 to work with JavaScript CommonJS

In our current JavaScript project (HTML5 application) we are using a global namespace tree, i.e.
nsBase = nsBase || {};
nsBase.sub1 = nsBase.sub1 || {};
...
And then each class, in a dedicated file, hooks up into this namespace tree. Auto-completion can resolve them all across the project, together with parameter info and so forth.
We would like to get rid of the global namespace objects and introduce the CommonJS format (and concatenate with Browserify). More importantly, any result returned by the require() calls should still allow auto-completion to work.
I found the Node.js plugin (http://plugins.jetbrains.com/plugin/?id=6098), but, after installing and enabling it for the project, it would only work for Node.js modules (e.g. 'fs'). For our own files, auto-complete would only propose 'exports' (as in module.exports).
Furthermore, I'd rather not enable all the Node.js globals for the HTML5-targeted app; all we need is knowledge of module.exports, require(), and the corresponding logic behind them.
To the questions:
Did I do something wrong and it should work with the plugin?
Would there be a specific solution, just for the CommonJS format?

Getting closure-compiler and Node.js to play nice

Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together, but when I compile some simple Node.js code containing require("./MyLib.js"), that line is put directly into the output, where it doesn't make any sense.
I see a few options:
Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement an uncompiled debug mode.
I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it?
I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use plovr (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like goog.array).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8
Though ultimately, I think that this should be a more community driven thing because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Complier to make this less awkward (see below).
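To give a flavor of what such an externs file contains, here is a minimal hand-written sketch for a slice of the http namespace (the real externs linked above are far more complete):

// http externs sketch -- declarations only, no implementations
/** @const */
var http = {};

/**
 * @param {Function=} requestListener Optional request handler.
 * @return {!Object} The created server.
 */
http.createServer = function (requestListener) {};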
Say you have created the externs file for the Node namespace http. In my system, I have decided that anytime I need http, I will include it via:
var http = require('http');
Though I do not include that require() call in my code. Instead, I use the output-wrapper feature of the Closure Compiler to prepend all of the require()s at the start of the file. Declared in plovr, in my current project this looks like:
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
In this way, my library code never calls Node's require(), but the Compiler tolerates the uses of things like http in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
Ultimately, after talking about this on the discussion list, I think the better solution is to have a new type annotation for namespaces that would look something like:
goog.scope(function() {
    /** @type {~NodeHttpNamespace} */
    var http = require('http');
    // Use http throughout.
});
In this scenario, an externs file would define NodeHttpNamespace such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference is that you could name the return value of require() whatever you wanted, because the type of http would be this special namespace type. (Identifying a "jQuery namespace" for $ is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant output-wrapper in the plovr config.
But that was a digression...once I have things set up as described above, I have a shell script that:
Uses plovr to build everything in RAW mode.
Runs node on the file generated by plovr.
Using RAW mode results in one large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but it has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it.
The SVN HEAD of the Closure Compiler seems to have support for AMD.
Closure Library on Node.js in 60 seconds.
It's supported, check https://code.google.com/p/closure-library/wiki/NodeJS.
I replaced my old approach with a much simpler one:
New approach
No require() calls for my own app code, only for Node modules
I need to concatenate server code to a single file before I can run or compile it
Concatenating and compiling is done using a simple grunt script
Funny thing is that I didn't even have to add an extern for the require() calls; the Google Closure Compiler understands them automagically. I did have to add externs for the Node.js modules that I use.
Old approach
As requested by the OP, I will elaborate on my way of compiling Node.js code with the Google Closure Compiler.
I was inspired by the way bolinfest solved the problem and my solution uses the same principle. The difference is that I made one node.js script that does everything, including inlining modules (bolinfest's solution lets GCC take care of that). This makes it more automated, but also more fragile.
I just added code comments to every step I take to compile server code. See this commit: https://github.com/blaise-io/xssnake/commit/da52219567b3941f13b8d94e36f743b0cbef44a3
To summarize (a code sketch follows these steps):
I start with my main module, the JS file that I pass to Node when I want to run it.
In my case, this file is start.js.
In this file, using a regular expression, I detect all require() calls, including the assignment part.
In start.js, this matches one require call: var Server = require('./lib/server.js');
I retrieve the path where the file exists based on the file name, fetch its contents as a string, and remove module.exports assignments within the contents.
Then I replace the require call from step 2 with the contents from step 3, unless it is a core Node.js module, in which case I add it to a list of core modules that I save for later.
The contents from step 3 will probably contain more require() calls, so I repeat steps 3 and 4 recursively until all require() calls are gone and I'm left with one huge string containing all the code.
Once the recursion has completed, I compile the code using the REST API.
You could also use the offline compiler.
I have externs for every core node.js module. This tool is useful for generating externs.
I prepend the removed core Node.js module require calls to the compiled code.
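A stripped-down sketch of the idea (the regex and helper names here are illustrative; the linked commit contains the real, more careful implementation):

var fs = require('fs');
var path = require('path');

var coreRequires = []; // core-module require lines, prepended later

function inline(file) {
    var code = fs.readFileSync(file, 'utf8');
    // matches e.g.: var Server = require('./lib/server.js');
    return code.replace(
        /var\s+(\w+)\s*=\s*require\('([^']+)'\);/g,
        function (match, name, target) {
            if (target.charAt(0) !== '.') { // core module: keep for later
                coreRequires.push(match);
                return '';
            }
            var dep = path.resolve(path.dirname(file), target);
            // recurse first, then rebind the export to the local name
            return inline(dep).replace(
                /module\.exports\s*=/, 'var ' + name + ' =');
        });
}

var flattened = inline('start.js'); // one huge string, ready to compile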
Pre-compiled code (all require calls removed, all my code flattened): http://pastebin.com/eC2rVMiN
Post-compiled code (Node.js core require calls prepended manually): http://pastebin.com/uB8CaejN
Why you should not do it this way:
It uses regular expressions (not a parser or tokenizer) to detect require calls, inline modules, and remove module.exports. This is fragile, as it does not cover all syntax variations.
When inlining, all module code is added to the global namespace. This is against the principles of Node.js, where every file has its own namespace, and it will cause errors if two different modules use the same global variable names.
It does not improve the speed of your code that much, since V8 also performs a lot of code optimizations like inlining and dead code removal.
Why you should:
Because it does work when you have consistent code.
It will detect errors in your server code when you enable verbose warnings.
Option 4: Don't use closure compiler.
People in the Node community don't tend to use it. You don't need to minify Node.js source code; that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare.

Javascript object dependencies

In complex client-side projects, the number of JavaScript files can get very large. However, for performance reasons it's good to concatenate these files and compress the result for sending over the wire. I am having problems concatenating them, as in some cases the dependencies are included after they are needed.
For instance, there are 2 files:
/modules/Module.js <requires Core.js>
/modules/core/Core.js
The directories are recursively traversed, and Module.js gets included before Core.js, which causes errors. This is just a simple example; dependencies could span across directories, and there could be other more complex cases. There are no circular dependencies, though.
The JavaScript structure I follow is similar to Java packages, where each file defines a single object (I'm using MooTools, but that's irrelevant). The structure of each JavaScript file and its dependencies is always consistent:
Module.js
var Module = new Class({
    Implements: Core,
    ...
});
Core.js
var Core = new Class({
    ...
});
What practices do you usually follow to handle dependencies in projects where the number of Javascript files is huge, and there are inter-file dependencies?
Using directories is clever; however, I think you might run into problems when you have multiple dependencies. I found that I had to create my own solution to handle this, so I created a dependency management tool that is worth checking out. (Pyramid Dependency Manager documentation)
It does some important things other JavaScript dependency managers don't, mainly:
Handles other files (including inserting html for views...yes, you can separate your views during development)
Combines the files for you in javascript when you are ready for release (no need to install external tools)
Has a generic include for all html pages. You only have to update one file when a dependency gets added, removed, renamed, etc
Some sample code to show how it works during development.
File: dependencyLoader.js
//Set up file dependencies
Pyramid.newDependency({
    name: 'standard',
    files: [
        'standardResources/jquery.1.6.1.min.js'
    ]
});

Pyramid.newDependency({
    name: 'lookAndFeel',
    files: [
        'styles.css',
        'customStyles.css'
    ]
});

Pyramid.newDependency({
    name: 'main',
    files: [
        'createNamespace.js',
        'views/buttonView.view', //contains just html code for a jquery.tmpl template
        'models/person.js',
        'init.js'
    ],
    dependencies: ['standard','lookAndFeel']
});
Html Files
<head>
    <script src="standardResources/pyramid-1.0.1.js"></script>
    <script src="dependencyLoader.js"></script>
    <script type="text/javascript">
        Pyramid.load('main');
    </script>
</head>
This may be crude, but what I do is keep my separate script fragments in separate files. My project is such that I'm willing to have all my JavaScript available for every page (because, after all, it'll be cached, and I'm not noticing performance problems from the parse step). Therefore, at build time, my Ant script runs Freemarker via a little custom Ant task. That task roots around the source tree and gathers up all the separate JavaScript source files into a group of Maps. There are a few different kinds of sources (jQuery extensions, some page-load operations, some general utilities, and so on), so the task groups those different kinds together (getting its hints as to what's what from the script source directory structure).
Once it has built the Maps, it feeds them into Freemarker. There's a single global template, and via Freemarker all the script fragments are packed into that one file. Then that goes through the YUI Compressor, and bingo! Each page just grabs that one script, and once it's cached there's no more script fetchery over my entire site.
Dependencies, you ask? Well, that Ant task orders my source files by name as it builds those Maps, so where I need to ensure definition-use ordering I just prefix the files with numeric codes. (At some point I'm going to spiff it up so that the source files can keep their ordering info, or maybe even explicitly declared dependencies, inside the source in comment blocks or something. I'm not too motivated, because though it's a little ugly it really doesn't bother anybody that much.)
There is a very crude dependency finder that I've written, based on which I am doing the concatenation. It turns out the fact that it's using MooTools is not so irrelevant after all. The solution works great because it does not require maintaining dependency information separately; it's available within the JavaScript files themselves, meaning I can be super lazy.
Since the class and file naming was consistent, class Something will always have the filename Something.js. To find the external dependencies, I'm looking for three things:
does it Implement any other classes
does it Extend any other classes
does it instantiate other classes using the new keyword
A search for these three patterns in each JavaScript file gives its dependent classes. After finding the dependent classes, all JavaScript files residing in any folder are searched and matched against the class names to figure out where each class is defined. Once the dependencies are found, I build a dependency graph and use the topological sort algorithm to generate the order in which files should be included (see the sketch below).
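The sort itself is small; here is a minimal sketch of a depth-first topological sort over such a dependency graph (the names are illustrative):

// graph maps each class name to the classes it depends on
function topoSort(graph) {
    var sorted = [];
    var visited = {};

    function visit(name) {
        if (visited[name]) { return; }
        visited[name] = true;
        (graph[name] || []).forEach(visit); // dependencies first
        sorted.push(name);                  // then the class itself
    }

    Object.keys(graph).forEach(visit);
    return sorted;
}

topoSort({ Module: ['Core'], Core: [] }); // -> ['Core', 'Module'] include order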
I say just copy and paste the files into one file, in an ordered way. Each file's code gets a starting and ending comment to distinguish it.
Each time you update one of the source files, you'll need to update this combined file. So it should contain only finished libraries that aren't going to change in the near future.
Your directory structure is inverted...
Core dependencies should be in the root, and modules in subdirectories:
scripts/core.js
scripts/modules/module1.js
and your problem is solved.
Any further dependency issues will be indicative of defective 'class'/dependency design.
Similar to Mendy, but I create the combined files on the server side. The created files are also minified and get a unique name to avoid cache issues after an update.
Of course, this practice only makes sense for a whole application or a framework.
I think your best bet, if at all possible, would be to redesign so as not to have a huge number of JavaScript files with inter-file dependencies. JavaScript just wasn't intended to go there.
This is probably too obvious, but have you looked at the MooTools Core Depender: http://mootools.net/docs/more/Core/Depender
One way to break the parse-time or load-time dependencies is with Self-Defining Objects (a variation on Self-Defining Functions).
Let's say you have something like this:
var obj = new Obj();
Where this line is in someFile.js and Obj is defined in Obj.js. In order for this to parse successfully you must load or concatenate Obj.js before someFile.js.
But if you define obj like this:
var obj = {
    init: function() {
        obj = new Obj();
    }
};
Then at parse or load time it doesn't matter what order you load the two files in as long as Obj is visible at run-time. You will have to call obj.init() in order to get your object into the state you want it, but that's a small price to pay for breaking the dependency.
Just to make it clearer how this works here is some code you can cut and paste into a browser console:
var Obj = function() {
    this.func1 = function () {
        console.log("func1 in constructor function");
    };
    this.init = function () {
        console.log("init in constructor function");
    };
};

var obj = {
    init: function() {
        console.log("init in original object");
        obj = new Obj();
        obj.init();
    }
};

obj.init();
obj.func1();
And you could also try a module loader like RequireJS.
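With RequireJS, the same example might look like this (a minimal sketch; the loader resolves the load order for you):

// Obj.js -- the class becomes an AMD module
define([], function () {
    return function Obj() {
        this.func1 = function () { console.log("func1"); };
    };
});

// main.js -- dependencies are loaded before the callback runs
require(["Obj"], function (Obj) {
    var obj = new Obj();
    obj.func1();
});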
