Are there any projects that used node.js and closure-compiler (CC for short) together?
The official CC recommendation is to compile all code for an application together. But when I compile some simple Node.js code that contains a require("./MyLib.js"), that line is copied directly into the output, where it doesn't make any sense.
I see a few options:
Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement an un-compiled debug mode.
I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it?
I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use plovr (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like goog.array).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8
Though ultimately, I think that this should be a more community-driven thing because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Compiler to make this less awkward (see below).
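To give a feel for what these look like, here is a minimal, hypothetical externs fragment covering a small slice of Node's http module (the files linked above are far more complete):

// http-externs.js: an illustrative fragment, not a complete externs file
/** @const */
var http = {};

/** @constructor */
http.Server = function() {};

/**
 * @param {number} port
 * @param {string=} opt_hostname
 * @param {function()=} opt_callback
 * @return {http.Server}
 */
http.Server.prototype.listen = function(port, opt_hostname, opt_callback) {};

/**
 * @param {function(Object, Object)=} opt_requestListener
 * @return {http.Server}
 */
http.createServer = function(opt_requestListener) {};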
Say you have created the externs file for the Node namespace http. In my system, I have decided that anytime I need http, I will include it via:
var http = require('http');
Though I do not include that require() call in my code. Instead, I use the output-wrapper feature of the Closure Compiler to prepend all of the require()s at the start of the file. Declared in plovr, this looks like the following in my current project:
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
In this way, my library code never calls Node's require(), but the Compiler tolerates the uses of things like http in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
Ultimately, after talking about this on the discussion list, I think the better solution is to have a new type annotation for namespaces that would look something like:
goog.scope(function() {
/** @type {~NodeHttpNamespace} */
var http = require('http');
// Use http throughout.
});
In this scenario, an externs file would define the NodeHttpNamespace such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference here is that you could name the return value of require() whatever you wanted because the type of http would be this special namespace type. (Identifying a "jQuery namespace" for $ is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant output-wrapper in the plovr config.
But that was a digression...once I have things set up as described above, I have a shell script that:
Uses plovr to build everything in RAW mode.
Runs node on the file generated by plovr.
Using RAW mode results in a large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it.
The svn HEAD of the Closure Compiler seems to have support for AMD.
Closure Library on Node.js in 60 seconds.
It's supported, check https://code.google.com/p/closure-library/wiki/NodeJS.
I replaced my old approach with a way simpler approach:
New approach
No require() calls for my own app code, only for Node modules
I need to concatenate server code to a single file before I can run or compile it
Concatenating and compiling is done using a simple grunt script (a sketch follows this list)
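Here is a minimal sketch of what such a Gruntfile could look like, assuming the grunt-contrib-concat and grunt-closure-compiler plugins; it is not my exact script, so check those plugins' docs for the precise option names:

// Gruntfile.js (illustrative sketch)
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      server: {
        src: ['server/**/*.js'],       // app code only; Node modules stay require()'d
        dest: 'build/server.js'
      }
    },
    'closure-compiler': {
      server: {
        closurePath: 'vendor/closure', // path to a local compiler install
        js: 'build/server.js',
        jsOutputFile: 'build/server.min.js',
        options: {
          compilation_level: 'ADVANCED_OPTIMIZATIONS',
          externs: 'externs/*.js'      // externs for the Node.js modules used
        }
      }
    }
  });
  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.loadNpmTasks('grunt-closure-compiler');
  grunt.registerTask('default', ['concat', 'closure-compiler']);
};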
Funny thing is that I didn't even have to add an extern for the require() calls; the Google Closure Compiler understands those automagically. I did have to add externs for the Node.js modules that I use.
Old approach
As requested by OP, I will elaborate on my way of compiling Node.js code with the Google Closure Compiler.
I was inspired by the way bolinfest solved the problem and my solution uses the same principle. The difference is that I made one node.js script that does everything, including inlining modules (bolinfest's solution lets GCC take care of that). This makes it more automated, but also more fragile.
I just added code comments to every step I take to compile server code. See this commit: https://github.com/blaise-io/xssnake/commit/da52219567b3941f13b8d94e36f743b0cbef44a3
To summarize:
1. I start with my main module, the JS file that I pass to Node when I want to run it. In my case, this file is start.js.
2. In this file, using a regular expression, I detect all require() calls, including the assignment part. In start.js, this matches one require call: var Server = require('./lib/server.js');
3. I retrieve the path where the file exists based on the file name, fetch its contents as a string, and remove module.exports assignments within the contents.
4. Then I replace the require call from step 2 with the contents from step 3. Unless it is a core Node.js module, in which case I add it to a list of core modules that I save for later.
5. The contents from step 3 will probably contain more require() calls, so I repeat steps 3 and 4 recursively until all require() calls are gone and I'm left with one huge string containing all code (a rough sketch of this inliner follows these steps).
6. Once the recursion has completed, I compile the code using the REST API. You could also use the offline compiler.
7. I have externs for every core Node.js module. This tool is useful for generating externs.
8. I prepend the removed core module require() calls to the compiled code.
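As promised, a rough, hypothetical sketch of the inlining steps above. It is deliberately naive: a real implementation must handle more require() syntax variations, trickier relative paths, and name collisions (see the caveats below):

// inline.js: a naive sketch of steps 2-5, not the actual build script
var fs = require('fs');
var path = require('path');

var CORE_MODULES = ['crypto', 'fs', 'http', 'https', 'path', 'url'];
var coreRequires = []; // core require() lines, prepended again after compiling

function inlineCode(code, dir) {
    // Step 2: match e.g.  var Server = require('./lib/server.js');
    return code.replace(/var\s+(\w+)\s*=\s*require\('([^']+)'\);/g,
        function (match, name, id) {
            if (CORE_MODULES.indexOf(id) !== -1) {
                coreRequires.push(match); // saved for later (step 4)
                return '';
            }
            var depFile = path.resolve(dir, id);
            // Step 3: fetch the file and rewrite module.exports
            var dep = fs.readFileSync(depFile, 'utf8')
                .replace(/module\.exports\s*=/g, name + ' =');
            // Steps 4 and 5: inline, recursing into the dependency's own requires
            return 'var ' + name + ';\n' + inlineCode(dep, path.dirname(depFile));
        });
}

var flattened = inlineCode(fs.readFileSync('start.js', 'utf8'), __dirname);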
Pre-Compiled code.
All require calls are removed. All my code is flattened.
http://pastebin.com/eC2rVMiN
Post-Compiled code.
Node.js core require calls have been prepended manually.
http://pastebin.com/uB8CaejN
Why you should not do it this way:
It uses regular expressions (not a parser or tokenizer) for detecting require calls, inlining and removing module.exports. This is fragile, as it does not cover all syntax variations.
When inlining, all module code is added to the global namespace. This is against the principles of Node.js, where every file has its own namespace, and it will cause errors if two different modules use the same global variable names (see the example after this list).
It does not improve the speed of your code that much, since V8 also performs a lot of code optimizations like inlining and dead code removal.
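To illustrate the namespace point: two files that are perfectly fine in isolation break once flattened into one scope (hypothetical code):

// a.js contains:  var count = 0;
// b.js contains:  var count = 100;
// After naive inlining, both declarations land in the same scope:
var count = 0;
// ... code from a.js relying on count === 0 ...
var count = 100; // silently clobbers a.js's variable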
Why you should:
Because it does work when you have consistent code.
It will detect errors in your server code when you enable verbose warnings.
Option 4: Don't use closure compiler.
People in the Node community don't tend to use it. You don't need to minify Node.js source code; that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare.
Related
While trying to get started with Reason, in one JavaScript project, I've got an extremely light file that tries to be a Reason-typed interface to the existing, heavy, library:
/* TheLibrary.re */
type engine
external addEngine : string -> engine -> unit = "" [@@bs.val] [@@bs.module "../"]
However, when I try to use that library in a ReasonReact project (having added @org/the-library to the bsconfig.json bs-dependencies),
/* AComponent.re */
[@bs.val] [@bs.module "@org/game-engine/dist/game-engine.js"]
external gameEngine : TheLibrary.engine = "default";
/* Further down, a React lifecycle method, */
TheLibrary.addEngine("Game", gameEngine);
I get errors about ../ being not found, relative to that React component:
./src/components/main-menu/AComponent.re
Module not found: Can't resolve '../' in '/Users/ec/Work/reason-reacty/src/components/main-menu'
I've also tried, instead of ../ in TheLibrary.re's external declaration:
@bs.module "./index.js" (the direct, ES6 entry-point for the untyped-JavaScript side of the package in question),
#bs.module "#org/the-library", the entire name of said library (even though I'm typing inside that library???)
Please help! I'd love to be able to further adopt ML, but I'm having the hardest time wrapping my head around ReasonReact's dependency-resolution!
Additional context:
So, we're trying to build our first ReasonReact project, and we've successfully added baby's-first-opaque-types to one of our internal libraries and include that in the ReasonReact page with something like the following — which works, by the way:
/* Imports.re */
type engine;
[@bs.val] [@bs.module "@org/game-engine/dist/game-engine.js"]
external gameEngine : engine = "default";
[@bs.val] [@bs.module "@org/the-library"] [@bs.scope "default"]
external addEngine : (string, engine) => unit = "";
This yields, when we Imports.(addEngine("Game", gameEngine)), the global setup line we need: TheLibrary.addEngine("Game", GameEngine). I'm in the very first stages of trying to upstream that typing-information into the parent project, and publish that code to npm, so that all consuming projects can start to use Reason.
It sounds like you might be a bit confused about the different tools that make up your toolchain, so let's first go through them one by one to put them in their place:
ReasonReact is a library of opinionated, "thick" bindings to react.js, which despite the name isn't actually all that Reason-specific, except for its integration with Reason's JSX syntax. It would be more accurate to call it a BuckleScript library.
Reason is mostly just the syntax you use, but is often also used more broadly to refer to the ecosystem around it, and usually also imply that BuckleScript is being used.
OCaml is the underlying language. The "semantics" of Reason, if you will.
BuckleScript is the OCaml-to-JavaScript compiler. It compiles ONE source file, which is considered a module, into ONE JavaScript module, but also requires the type information of other OCaml modules as input.
Now, I suspect you already know most of that, but what you do not seem to know is that NONE of these actually do ANY dependency resolution. These next parts of your toolchain are what does that:
The BuckleScript Build System, or bsb, is what finds all the modules in your local project according to what you've specified in src, plus any BuckleScript libraries you've listed in bs-dependencies in bsconfig.json. It will figure out the dependency order of all these and feed them to the compiler in the correct order to produce one JavaScript module for each OCaml module (along with some other artefacts containing type information and such). But it will not resolve any JavaScript dependencies.
Lastly, webpack, or some other JavaScript bundler, is what you likely use to combine all the JavaScript modules into a single file, and which therefore needs to resolve any JavaScript dependencies. And this is likely where the error message comes from.
Using [@bs.module "some-module"] will make the BuckleScript compiler emit var ... = require('some-module') (or import ... from 'some-module' if es6 is used), but BuckleScript itself will not do anything more with it. The string you pass to @bs.module is the same string you would pass to require if it had been an ordinary CommonJS module (or whatever other module format you have configured).
Also note that the import is not emitted where the external is defined, but where it's used. You can work around this, or "ground" the external in a module, by re-exporting it as an ordinary definition, i.e. let addEngine = addEngine.
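For instance, the externals from the question compile to roughly the following in the consuming module (illustrative only; the exact output varies by BuckleScript version and module settings), which is exactly why the "../" path gets resolved relative to AComponent:

/* AComponent.bs.js (illustrative) */
var GameEngineJs = require("@org/game-engine/dist/game-engine.js");
var TheLibraryJs = require("../"); /* the external's module string, emitted here
                                      at the use site, not in TheLibrary.re */
TheLibraryJs.addEngine("Game", GameEngineJs.default);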
In order to precisely answer your question I would need to know which bundler you use, where you've configured BuckleScript to output its JavaScript artefacts, where the externals are used, not just defined, and where the external JavaScript module is located. But I hope all this underlying knowledge will make it easy for you and future readers to identify and resolve the problem yourself. If you're still a bit unsure, look at the compiled JavaScript artefacts and just treat them as ordinary JavaScript modules. At this point that's really all they are.
I followed the documentation to put the constants in the lib/constants.js file.
Question:
How to access these constants in my client side html and js files?
Variables in Meteor are file-scoped.
Normally a var myVar would go in the global Node context; in Meteor, however, it stays enclosed in the file (which makes it really useful for writing more transparent code). What happens is that Meteor wraps all files in an IIFE, scoping the variables in that function and thus effectively in the file.
To define a global variable, simply remove the var/let/const keyword and Meteor will take care to export it. You have to create functions through the same mechanism (myFunc = function myFunc() {} or myFunc = () => {}). This export will either be client-side if the code is in the client directory, or server-side if it is in the server directory, or both if it is in some other not-so-special directories.
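For instance, a file under lib/ could define the following (the names here are purely illustrative):

// lib/constants.js
APP_NAME = 'MyApp';            // no var: Meteor makes this a true global
buildTitle = function (page) { // functions are exported the same way
  return APP_NAME + ' - ' + page;
};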
Don't forget to follow these rules:
HTML template files are always loaded before everything else
Files beginning with main. are loaded last
Files inside any lib/ directory are loaded next
Files with deeper paths are loaded next
Files are then loaded in alphabetical order of the entire path
Now you may run into an issue server-side if you try to access this global variable immediately, but Meteor hasn't yet instantiated it because it hasn't run over the file defining the variable. So you have to fight with file and folder names, or maybe try to trick Meteor.startup() (good luck with that). This means less readable, fragile, location-dependent code. One of your colleagues moves a file and your application breaks.
Or maybe you just don't want to have to go back to the documentation each time you add a file to run a five-step process to know where to place this file and how to name it.
There are two solutions to this problem as of Meteor 1.3:
1. ES6 modules
Meteor 1.3 (currently in beta) allows you to use modules in your application by using the modules package (meteor add modules or api.use('modules')).
Modules go a long way; here is a simple example taken directly from the link above:
File: a.js (loaded first with traditional load order rules):
import {bThing} from './b.js';
// bThing is now usable here
File: b.js (loaded second with traditional load order rules):
export const bThing = 'my constant';
Meteor 1.3 will take care of loading the b.js file before a.js since it's been explicitly told so.
2. Packages
The last option to declare global variables is to create a package.
meteor create --package global_constants
Each variable declared without the var keyword is exported to the whole package. It means that you can create your variables in their own files, finely control the load order with api.addFiles, and decide whether they should go to the client, the server, or both. It also allows you to api.use these variables in other packages.
This means clear, reusable code. Do you want to add a constant? Either do it in one of the already created files, or create a new one and api.addFiles it.
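A minimal, hypothetical package.js for such a constants package could look like this:

// packages/global_constants/package.js (illustrative)
Package.describe({
  name: 'me:global-constants', // hypothetical package name
  version: '0.0.1'
});

Package.onUse(function (api) {
  // load on both client and server, in exactly this order
  api.addFiles('constants.js', ['client', 'server']);
  api.export('THE_ANSWER'); // expose the package variable to the app
});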
You can read more about package management in the doc.
Here's a quote from "Structuring your application":
This [using packages] is the ultimate in code separation, modularity, and reusability. If you put the code for each feature in a separate package, the code for one feature won't be able to access the code for the other feature except through exports, making every dependency explicit. This also allows for the easiest independent testing of features. You can also publish the packages and use them in multiple apps with meteor add.
It's amazing to combine the two approaches with Meteor 1.3. Modules are way easier and lighter to write than packages, since using them takes one export line and as many imports as needed rather than the whole package creation procedure, but they are not as error-proof (e.g. forgetting to write the import line at the top of a file) as packages.
A good bet would be to use modules first, then switch to a package as soon as they become tiring to write, or when an error happens because of them (a miswritten import, ...).
Just make sure to avoid relying on traditional load order if you're doing anything bigger than a POC.
You will need to make them global variables in order for other files to see them.
JavaScript
/lib/constants.js
THE_ANSWER = 42; // note the lack of var
/client/some-other-file.js
console.log(THE_ANSWER);
CoffeeScript
/lib/constants.coffee
@THE_ANSWER = 42
/client/some-other-file.coffee
console.log THE_ANSWER
In our current JavaScript project (HTML5 application) we are using a global namespace tree, i.e.
nsBase = nsBase || {};
nsBase.sub1 = nsBase.sub1 || {};
...
And then each class, in a dedicated file, hooks up into this namespace tree. Auto-completion can resolve them all across the project, together with parameter info and so forth.
We would like to get rid of the global namespace objects and introduce the CommonJS format (and concatenate with browserify). More importantly, any result returned by the require() calls should still allow auto-completion to work.
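For illustration, this is the kind of conversion we have in mind (the file names are just examples):

// sub1/MyClass.js: replaces nsBase.sub1.MyClass
function MyClass(param) {
    this.param = param;
}
MyClass.prototype.double = function () {
    return this.param * 2;
};
module.exports = MyClass;

// consumer.js
var MyClass = require('./sub1/MyClass.js');
var instance = new MyClass(21); // auto-completion should resolve .double() here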
I found the Node.js plugin (http://plugins.jetbrains.com/plugin/?id=6098), but after installing and enabling it for the project, it would only work for Node.js modules (e.g. 'fs'). For our own files, auto-complete would only propose 'exports' (as in module.exports).
Furthermore, I'd rather not enable all Node.js globals for the HTML5-targeted app; all we need is the knowledge of module.exports, require() and the corresponding logic behind them.
To the questions:
Did I do something wrong and it should work with the plugin?
Would there be a specific solution, just for the CommonJS format?
I'm struggling to get requireJS to work properly. Page is running fine, but I think I'm doing things in an oh-so wrong way.
For example, on page xzy I'm adding the following JavaScript at the end of the page (the JS must stay on the page for now, so no external js-files possible)
<script type="text/javascript" language="javascript">
//<![CDATA[
(function () {
require([
'async!http://maps.google.com/maps/api/js?v=3&sensor=false',
'maps/jquery.ui.map.full.min.js',
'maps/jquery.ui.map.extensions.min'
], function() {
// ... do stuff with Google Maps
}
);
}());
//]]>
</script>
Doing this makes google.map and the $.().gmap method globally available, which probably shouldn't be available globally.
Questions:
Should I convert this into a requireJS module? Why?
If so, will the module be available on other pages as well or do I just "re-define" on page 123 and the dependency files will already have been cached?
And finally - will I have to convert the code inside my require call into module.methods, which I then call via module_name.method_name(pass_some_parameters)?
Just looking at the JS:
http://maps.google.com/maps/api/js?v=3&sensor=false
You can see that window.google is a global. There's not much you can do about that without Google releasing an AMD version.
Your decision regarding whether to create a module should firstly be a question of readability/maintainability of the JS code. Modules are (or should be) readable, reusable chunks of code/reusable abstractions that the rest of your code can consume. You should also derive testing benefits from this: each module should be easier to test in isolation.
You can end up with many more JS files if you choose a modular approach, and you might think this leads to performance issues - i.e. multiple HTTP requests. But this is mitigated by using the RequireJS Optimiser to optimise your modules to a single file.
If you convert to a module, yes, you can require it from other pages, and if your HTTP caching headers are set up, then the browser may choose to use a cached version, thus saving you an HTTP request (the same caching heuristics apply if you've optimised every module into a single file).
If you re-define (I assume you mean copy and paste the code block), then those dependencies listed in the call to require should all be cached by the browser, and therefore instantly available (depending on your web server and its HTTP caching headers).
Finally, yes you may have to refactor the code a bit to expose the new module's API. If that means exposing a single object with methods, then that's what you should do. This process almost inevitably leads to better code though in my experience. As you've had to think more about what the module's purpose is, and this often leads to your breaking the coupling between pieces of code.
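As a minimal sketch of that refactoring (the module name and API here are hypothetical):

// maps/map-page.js: a module wrapping the inline code from the question
define([
    'async!http://maps.google.com/maps/api/js?v=3&sensor=false',
    'maps/jquery.ui.map.full.min',
    'maps/jquery.ui.map.extensions.min'
], function () {
    // return a small API instead of running everything at load time
    return {
        init: function (mapElement) {
            // ... do stuff with Google Maps on mapElement
        }
    };
});

// on page xzy:
require(['maps/map-page'], function (mapPage) {
    mapPage.init(document.getElementById('map'));
});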
I'm new to Dojo so I don't quite understand all of Dojo's features. Here are some questions, but I'm sure some of them seem really stupid since I don't quite get how Dojo is structured yet:
How do you create multiple modules within a single js file and access the module in the file it's created? Also, how do you access specific modules from a file containing multiple modules?
What is the difference between require and define?
I successfully required a module from a file, but I can't figure out how to get the variables out of the file, how do you do this?
I was looking at how Dojo requires its modules and noticed that it does an HTTP request for each file. But isn't that really inefficient when you're dealing with lots of modules and/or a large site, where you really would like to minimize the number of HTTP requests? What's the way around this?
Reading through The Dojo Loader will provide answers.
Basically module = file and very often (as a matter of best practice) module = file = class (more precisely public class defined via dojo/_base/declare).
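For example, a typical one-class-per-file module looks like this (a minimal sketch):

// my/Person.js: one file, one module, one class
define(["dojo/_base/declare"], function (declare) {
    return declare(null, {
        constructor: function (name) {
            this.name = name;
        },
        greet: function () {
            return "Hello, " + this.name;
        }
    });
});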
Ad #4: You will need to employ The Dojo Build System, which will resolve all the dependencies and put all your modules into a single file (or more files, depending on your build profile). Have a look at the Dojo Boilerplate project, it may help with building your application.
How do you create multiple modules within a single js file?
While this is possible to do, and is done by the build system when it creates layer files, it is not recommended. Putting multiple modules in a single file means you have to use a different form of define that gives an explicit ID to each of them. This is less flexible than having the module IDs automatically derived from the file name and path (this, together with relative paths, makes it much easier to move and rename AMD modules compared to old-style modules).
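For completeness, the explicit-ID form looks roughly like this (the module IDs are hypothetical):

// one file containing two modules, each with an explicit ID
define("my/moduleA", [], function () {
    return { name: "A" };
});
define("my/moduleB", ["my/moduleA"], function (moduleA) {
    return { dependsOn: moduleA.name };
});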
What is the difference between require and define?
define is a beefed-up version of require that defines a new module with its return value.
I successfully required a module from a file, but I can't figure out how to get the variables out of the file.
I am not sure what you mean by this (without a concrete runnable example), but all you have to do is create an object to be the value of your module and return it from define. This is not very far off from how you would define modules the old way, with manual "namespaces".
//foo.js
define([], function(){
var M = {}; //the exported stuff will go here
var myVariable = 16; //private function variables are private (like they are everywhere else)
//and cannot be seen from the outside
M.anumber = myVariable + 1; //things on the returned object can be seen from the outside
return M; //people who require this module will get this return value
});
//bar.js
define(['foo'], function(foo){
console.log( foo.anumber );
});
I was looking at how Dojo required its modules and notice that it does a http request for each file...
As phusick pointed out, the build system can be used to bundle all the modules into a single file (giving you the best of both worlds: modularity during development and performance during deployment). It can also do other cool stuff, like bundling CSS, passing the JavaScript through a minifier or through the Closure Compiler, checking ifdefs for creating platform-dependent builds, etc.