Is there a way to substitute variables in JavaScript files with a preprocessor during the build process? I use grunt, usemin, and uglifyjs (part of the yeoman stack).
I currently refer to URLs from a global JavaScript object. For example:
my.url = {
    book: {
        get: '/my/book/{id}',
        new: '/my/book'
    }
};
In my program, I refer to the URL as my.url.book.get, etc. The intention is twofold: I do not want the URL strings spread across the program, since any change during development makes them hard to refactor; and the URLs may be generated from the server API, so I don't want to duplicate them in the client.
Now, once I am happy with the development, I'd like to preprocess all the JavaScript files to substitute these references with the actual URL strings. The intention is to avoid loading an extra file with all the URLs (the user may only need a few of them).
Is there any tool, similar to an HTML templating package, that can process the JavaScript and replace all these variables? I'd prefer one that works with the grunt/yeoman stack.
You can do that with grunt-replace. It allows for all kinds of string substitutions in text files. I use it to sync version numbers in bower.json, package.json, etc., but obviously you can use it for source file value substitutions as well.
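For illustration, a minimal Gruntfile sketch (the match names, URL values, and paths here are hypothetical; by default grunt-replace substitutes @@-prefixed tokens such as @@bookGetUrl in the sources):

// Gruntfile.js
module.exports = function(grunt) {
    grunt.initConfig({
        replace: {
            dist: {
                options: {
                    patterns: [
                        // every @@bookGetUrl token in the sources becomes the literal URL
                        { match: 'bookGetUrl', replacement: '/my/book/{id}' },
                        { match: 'bookNewUrl', replacement: '/my/book' }
                    ]
                },
                files: [
                    { expand: true, src: ['app/scripts/**/*.js'], dest: 'dist/' }
                ]
            }
        }
    });

    grunt.loadNpmTasks('grunt-replace');
};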
That said, in your case I'd definitely opt for a more dynamic solution with env variables, using for instance grunt-env.
Try Builder: https://github.com/electricimp/Builder
A little example. In config.js:
@set apiEndpoint "https://somesite.com/api/v1"
then:
@include once "config.js"
let url = "@{apiEndpoint}"
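After Builder runs, the expression is substituted at build time, so the processed output would contain the literal value:

let url = "https://somesite.com/api/v1"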
Related
I use Node.js and Angular/TS to create my Single Page Application (therefore browser/client side), and there I need to modify some URL paths. I tried to use the path module, but it seems to work only on the server side (with Express).
I also found the url module, but it doesn't have functions like normalize. One specific use case is to convert a path from A to B:
let A = "http://www.example.com/a/b/c";
let B = normalize(join(A, ".."));
console.log(B); // http://www.example.com/a/b/
https://github.com/browserify/path-browserify
This module is part of the https://github.com/webpack/node-libs-browser collection, which is deprecated (I don't know why). However, it is reputable and stable, so feel free to use it.
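A minimal sketch using path-browserify (assuming a bundler resolves the require); note that applying join/normalize to the full URL would collapse the protocol's double slash, so this operates on the pathname only:

const path = require('path-browserify');

// manipulate only the pathname, so path.normalize does not
// collapse the "//" after the protocol
const a = new URL('http://www.example.com/a/b/c');
a.pathname = path.normalize(path.join(a.pathname, '..'));
console.log(a.href); // http://www.example.com/a/b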
I have a JavaScript build file that is unminified; it contains prototype class definitions. Is there any tool I can use to break the build file into separate files, assuming one file per prototype/class definition?
This build file looks like the following, with many objects defined in this way, all concatenated into the single file. I would like to break this file apart. I don't have access to the original source code; I was basically just given a dump in the form of this build file, but it's unmanageable in this form, as it's 10k+ lines of code.
MyClass = function(){
}
MyClass.prototype.foo = function(){
}
I would go with the 'do it manually' approach suggested in the comments, but if you insist on automating it somehow, you can always use JavaScript for the job.
One way would be to look for constructors through a regex like ([^\.=\s]+[^=]*=[\s]*function[\s]*[(]+[^)]*[)][\s]*[{]*), extract all 'class'.prototype.'something' definitions from the file, parse the entire file, and then write each group to a separate file after doing any ordering you prefer.
Another manner would be to use a JavaScript parser and group relevant function definitions through token examination (this one is overkill, but might be interesting for learning purposes).
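For instance, a rough Node.js sketch of the regex approach (the file names are hypothetical, and it assumes the simple formatting shown in the question, with closing braces at column 0; it will miss other syntax variations):

const fs = require('fs');

const source = fs.readFileSync('build.js', 'utf8');

// find top-level constructor assignments, e.g. `MyClass = function(`
const ctorRe = /^(\w+)\s*=\s*function\s*\(/gm;
const classNames = [...source.matchAll(ctorRe)].map((m) => m[1]);

for (const name of classNames) {
    // grab the constructor plus every `Name.prototype.*` assignment,
    // each chunk ending at the first line that starts with `}`
    const chunkRe = new RegExp(
        '^' + name + '(?:\\.prototype\\.\\w+)?\\s*=\\s*function[\\s\\S]*?^}.*$',
        'gm'
    );
    const chunks = source.match(chunkRe) || [];
    fs.writeFileSync(name + '.js', chunks.join('\n\n') + '\n');
}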
I need to execute node code/modules in a node app (in a sandbox) with vm.createScript / script.runInNewContext. The host node app runs on Heroku, so there is no local filesystem to speak of. I am able to download and run code that has no outside dependencies just fine; however, the requirement is to be able to include other node modules as well (a build/packaging step would be ideal).
There are many existing solutions (browserify is the one I've spent the most time with) which get close... but they inevitably generate a single blob of code (yeah!) meant to execute in a browser (boo!). Browserify, for example, generates dependencies on window, etc.
Does anyone know of a tool that will read a package.json's dependencies {} (or look at all the require()s in the source) and generate a single monolithic blob suitable for node's runInNewContext?
I don't think the solution you're looking for is the right one. Basically you want to grab a bunch of require('lib') results, mush them together into a single JavaScript context, serialize that context into source code, pass that serialized form into the runInNewContext function to deserialize and rebuild it into a JavaScript context, then deserialize your custom sandboxed code, and finally run the whole thing.
Wouldn't it make much more sense to just create a context object that includes the needed require('lib') results and pass that object directly into your VM? Based on code from the documentation:
var vm = require('vm'),
    initSandbox = {
        // the new context gets no globals of its own, so expose console explicitly
        console: console,
        async: require('async'),
        http: require('http')
    },
    context = vm.createContext(initSandbox);

vm.runInContext("async.forEach([0, 1, 2], function(element) { console.log(element); });", context);
Now you have the required libraries accessible via the context without going through a costly serialization/deserialization process.
Are there any projects that use node.js and the Closure Compiler (CC for short) together?
The official CC recommendation is to compile all the code for an application together, but when I compile some simple node.js code containing a require("./MyLib.js"), that line is put directly into the output, where it doesn't make any sense.
I see a few options:
Code the entire application as a single file. This solves the problem by avoiding it, but is bad for maintenance.
Assume that all files will be concatenated before execution. Again this avoids the problem, but makes it harder to implement an un-compiled debug mode.
I'd like to get CC to "understand" the node.js require() function, but that probably can't be done without editing the compiler itself, can it?
I have been using the Closure Compiler with Node for a project I haven't released yet. It has taken a bit of tooling, but it has helped catch many errors and has a pretty short edit-restart-test cycle.
First, I use plovr (which is a project that I created and maintain) in order to use the Closure Compiler, Library, and Templates together. I write my Node code in the style of the Closure Library, so each file defines its own class or collection of utilities (like goog.array).
The next step is to create a bunch of externs files for the Node functions you want to use. I published some of these publicly at:
https://github.com/bolinfest/node-google-closure-latitude-experiment/tree/master/externs/node/v0.4.8
Though ultimately, I think that this should be a more community-driven thing, because there are a lot of functions to document. (It's also annoying because some Node functions have optional middle arguments rather than last arguments, making the type annotations complicated.) I haven't started this movement myself because it's possible that we could do some work with the Closure Compiler to make this less awkward (see below).
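For illustration (these are not from the linked repo), an externs file is just a set of annotated declarations with empty bodies; a minimal, hypothetical extern for part of the http namespace might look like:

// externs/http.js: a minimal, incomplete extern for Node's http module

/** @const */
var http = {};

/**
 * @param {function(!Object, !Object)=} requestListener
 * @return {!Object}
 */
http.createServer = function(requestListener) {};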
Say you have created the externs file for the Node namespace http. In my system, I have decided that anytime I need http, I will include it via:
var http = require('http');
Though I do not include that require() call in my code. Instead, I use the output-wrapper feature of the Closure Compiler to prepend all of the require()s at the start of the file, which, when declared in plovr, looks like this in my current project:
"output-wrapper": [
// Because the server code depends on goog.net.Cookies, which references the
// global variable "document" when instantiating goog.net.cookies, we must
// supply a dummy global object for document.
"var document = {};\n",
"var bee = require('beeline');\n",
"var crypto = require('crypto');\n",
"var fs = require('fs');\n",
"var http = require('http');\n",
"var https = require('https');\n",
"var mongodb = require('mongodb');\n",
"var nodePath = require('path');\n",
"var nodeUrl = require('url');\n",
"var querystring = require('querystring');\n",
"var SocketIo = require('socket.io');\n",
"%output%"
],
In this way, my library code never calls Node's require(), but the Compiler tolerates the uses of things like http in my code because the Compiler recognizes them as externs. As they are not true externs, they have to be prepended as I described.
Ultimately, after talking about this on the discussion list, I think the better solution is to have a new type annotation for namespaces that would look something like:
goog.scope(function() {
    /** @type {~NodeHttpNamespace} */
    var http = require('http');
    // Use http throughout.
});
In this scenario, an externs file would define the NodeHttpNamespace such that the Closure Compiler would be able to typecheck properties on it using the externs file. The difference here is that you could name the return value of require() whatever you wanted because the type of http would be this special namespace type. (Identifying a "jQuery namespace" for $ is a similar issue.) This approach would eliminate the need to name your local variables for Node namespaces consistently, and would eliminate the need for that giant output-wrapper in the plovr config.
But that was a digression... Once I have things set up as described above, I have a shell script that:
Uses plovr to build everything in RAW mode.
Runs node on the file generated by plovr.
Using RAW mode results in a large concatenation of all the files (though it also takes care of translating Soy templates and even CoffeeScript to JavaScript). Admittedly, this makes debugging a pain because the line numbers are nonsense, but has been working well enough for me so far. All of the checks performed by the Closure Compiler have made it worth it.
The SVN HEAD of the Closure Compiler seems to have support for AMD.
Closure Library on Node.js in 60 seconds.
It's supported; see https://code.google.com/p/closure-library/wiki/NodeJS.
I replaced my old approach with a way simpler approach:
New approach
No require() calls for my own app code, only for Node modules
I need to concatenate server code to a single file before I can run or compile it
Concatenating and compiling is done using a simple grunt script
The funny thing is that I didn't even have to add an extern for the require() calls; the Google Closure Compiler understands those automagically. I did have to add externs for the Node.js modules that I use.
Old approach
As requested by the OP, I will elaborate on my way of compiling node.js code with the Google Closure Compiler.
I was inspired by the way bolinfest solved the problem and my solution uses the same principle. The difference is that I made one node.js script that does everything, including inlining modules (bolinfest's solution lets GCC take care of that). This makes it more automated, but also more fragile.
I just added code comments to every step I take to compile server code. See this commit: https://github.com/blaise-io/xssnake/commit/da52219567b3941f13b8d94e36f743b0cbef44a3
To summarize (a rough code sketch follows these steps):
I start with my main module, the JS file that I pass to Node when I want to run it.
In my case, this file is start.js.
In this file, using a regular expression, I detect all require() calls, including the assignment part.
In start.js, this matches one require call: var Server = require('./lib/server.js');
I retrieve the path where the file exists based on the file name, fetch its contents as a string, and remove module.exports assignments within the contents.
Then I replace the require call from step 2 with the contents from step 3, unless it is a core node.js module, in which case I add it to a list of core modules that I save for later.
The contents from step 3 will probably contain more require() calls, so I repeat steps 3 and 4 recursively until all require() calls are gone and I'm left with one huge string containing all the code.
When the recursion has completed, I compile the code using the REST API.
You could also use the offline compiler.
I have externs for every core node.js module. This tool is useful for generating externs.
I prepend the removed core node.js module require calls to the compiled code.
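A rough sketch of the inlining loop (hypothetical file names and a deliberately naive regex; the real code in the commit linked above does more):

const fs = require('fs');
const path = require('path');

// partial list; core module require calls are saved and prepended after compilation
const CORE_MODULES = ['http', 'fs', 'path', 'crypto', 'url', 'querystring'];
const coreRequires = [];

function inline(file) {
    const code = fs.readFileSync(file, 'utf8');
    // naive: matches e.g. `var Server = require('./lib/server.js');`
    const requireRe = /var\s+\w+\s*=\s*require\(['"]([^'"]+)['"]\);?/g;
    return code.replace(requireRe, function(match, id) {
        if (CORE_MODULES.indexOf(id) !== -1) {
            coreRequires.push(match); // saved for later, see the last step
            return '';
        }
        const childPath = path.resolve(path.dirname(file), id);
        // strip module.exports assignments, then recurse into the child module
        return inline(childPath).replace(/module\.exports\s*=\s*/g, '');
    });
}

const flattened = inline('start.js'); // one huge string, ready for the compiler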
Pre-Compiled code.
All require calls are removed. All my code is flattened.
http://pastebin.com/eC2rVMiN
Post-Compiled code.
Node.js core require calls have been prepended manually.
http://pastebin.com/uB8CaejN
Why you should not do it this way:
It uses regular expressions (not a parser or tokenizer) for detecting require calls, inlining and removing module.exports. This is fragile, as it does not cover all syntax variations.
When inlining, all module code is added to the global namespace. This is against the principles of Node.js, where every file has its own namespace, and this will cause errors if you have two different modules with the same global variables.
It does not improve the speed of your code that much, since V8 also performs a lot of code optimizations like inlining and dead code removal.
Why you should:
Because it does work when you have consistent code.
It will detect errors in your server code when you enable verbose warnings.
Option 4: Don't use closure compiler.
People in the node community tend not to use it. You don't need to minify node.js source code; that's silly.
There's simply no good use for minification.
As for the performance benefits of closure, I personally doubt it actually makes your programs faster.
And of course there's a cost: debugging compiled JavaScript is a nightmare.
In complex client-side projects, the number of JavaScript files can get very large. However, for performance reasons it's good to concatenate these files and compress the resulting file for sending over the wire. I am having problems concatenating them, as in some cases dependencies are included after they are needed.
For instance, there are 2 files:
/modules/Module.js <requires Core.js>
/modules/core/Core.js
The directories are recursively traversed, and Module.js gets included before Core.js, which causes errors. This is just a simple example; dependencies could span across directories, and there could be other, more complex cases. There are no circular dependencies, though.
The JavaScript structure I follow is similar to Java packages, where each file defines a single object (I'm using MooTools, but that's irrelevant). The structure of each JavaScript file and its dependencies is always consistent:
Module.js
var Module = new Class({
    Implements: Core,
    ...
});
Core.js
var Core = new Class({
    ...
});
What practices do you usually follow to handle dependencies in projects where the number of JavaScript files is huge and there are inter-file dependencies?
Using directories is clever; however, I think you might run into problems when you have multiple dependencies. I found that I had to create my own solution to handle this, so I created a dependency management tool that is worth checking out (Pyramid Dependency Manager documentation).
It does some important things other JavaScript dependency managers don't do, mainly:
Handles other files (including inserting HTML for views... yes, you can separate your views during development)
Combines the files for you in JavaScript when you are ready for release (no need to install external tools)
Has a generic include for all HTML pages. You only have to update one file when a dependency gets added, removed, renamed, etc.
Some sample code to show how it works during development:
File: dependencyLoader.js
//Set up file dependencies
Pyramid.newDependency({
    name: 'standard',
    files: [
        'standardResources/jquery.1.6.1.min.js'
    ]
});

Pyramid.newDependency({
    name: 'lookAndFeel',
    files: [
        'styles.css',
        'customStyles.css'
    ]
});

Pyramid.newDependency({
    name: 'main',
    files: [
        'createNamespace.js',
        'views/buttonView.view', // contains just html code for a jquery.tmpl template
        'models/person.js',
        'init.js'
    ],
    dependencies: ['standard', 'lookAndFeel']
});
Html Files
<head>
    <script src="standardResources/pyramid-1.0.1.js"></script>
    <script src="dependencyLoader.js"></script>
    <script type="text/javascript">
        Pyramid.load('main');
    </script>
</head>
This may be crude, but what I do is keep my separate script fragments in separate files. My project is such that I'm willing to have all my JavaScript available for every page (because, after all, it'll be cached, and I'm not noticing performance problems from the parse step). Therefore, at build time, my Ant script runs Freemarker via a little custom Ant task. That task roots around the source tree and gathers up all the separate JavaScript source files into a group of Maps. There are a few different kinds of sources (jQuery extensions, some page-load operations, some general utilities, and so on), so the task groups those different kinds together (getting its hints as to what's what from the script source directory structure).
Once it's built the Maps, it feeds those into Freemarker. There's a single global template, and via Freemarker all the script fragments are packed into that one file. Then that goes through YUI compressor, and bingo! each page just grabs that one script, and once it's cached there's no more script fetchery over my entire site.
Dependencies, you ask? Well, that Ant task orders my source files by name as it builds those maps, so where I need to ensure definition-use ordering I just prefix the files with numeric codes. (At some point I'm going to spiff it up so that the source files can keep their ordering info, or maybe even explicitly declared dependencies, inside the source in comment blocks or something. I'm not too motivated because though it's a little ugly it really doesn't bother anybody that much.)
There is a very crude dependency finder that I've written, based on which I am doing the concatenation. It turns out that the fact that it's using MooTools is not so irrelevant after all. The solution works great because it does not require maintaining dependency information separately; it's available within the JavaScript files themselves, meaning I can be super lazy.
Since the class and file naming was consistent, class Something will always have the filename Something.js. To find the external dependencies, I'm looking for three things:
does it Implement any other classes
does it Extend any other classes
does it instantiate other classes using the new keyword
A search for the above three patterns in each JavaScript file gives its dependent classes. After finding the dependent classes, all JavaScript files residing in any folder are searched and matched against these class names to figure out where each class is defined. Once the dependencies are found, I build a dependency graph and use a topological sort to generate the order in which files should be included.
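A minimal sketch of the idea (the regexes and helper names are illustrative, not the actual tool):

// scan one file's source for the three patterns described above
function findDependencies(source) {
    const deps = new Set();
    const patterns = [
        /Implements:\s*(\w+)/g,   // does it Implement any other classes
        /Extends:\s*(\w+)/g,      // does it Extend any other classes
        /new\s+([A-Z]\w*)\s*\(/g  // does it instantiate classes via `new`
    ];
    for (const re of patterns) {
        for (const m of source.matchAll(re)) deps.add(m[1]);
    }
    return [...deps];
}

// depth-first topological sort over { ClassName: [dependencies] };
// assumes no circular dependencies, as stated in the question
function sortTopologically(graph) {
    const order = [];
    const visited = new Set();
    function visit(name) {
        if (visited.has(name) || !(name in graph)) return;
        visited.add(name);
        graph[name].forEach(visit);
        order.push(name); // a class is emitted after its dependencies
    }
    Object.keys(graph).forEach(visit);
    return order; // e.g. ['Core', 'Module']: concatenate in this order
}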
I say just copy and paste the files into one file, in an ordered way. Each file would have a starting and ending comment to distinguish each particular piece of code.
Each time you update one of the files, you'll need to update this combined file as well. So this file should contain only finished libraries that are not going to change in the near term.
Your directory structure is inverted...
Core dependencies should be in the root, and modules in subdirs.
scripts/core.js
scripts/modules/module1.js
and your problem is solved.
Any further dependency issues will be indicative of defective 'class'/dependency design.
Similar to Mendy, but I create the combined files on the server side. The created files are also minified and have a unique name to avoid cache issues after an update.
Of course, this practice only makes sense in a whole application or in a framework.
I think your best bet, if at all possible, would be to redesign the code to not have a huge number of JavaScript files with inter-file dependencies. JavaScript just wasn't intended to go there.
This is probably too obvious, but have you looked at the MooTools Core Depender? http://mootools.net/docs/more/Core/Depender
One way to break the parse-time or load-time dependencies is with Self-Defining Objects (a variation on Self-Defining Functions).
Let's say you have something like this:
var obj = new Obj();
Where this line is in someFile.js and Obj is defined in Obj.js. In order for this to parse successfully you must load or concatenate Obj.js before someFile.js.
But if you define obj like this:
var obj = {
    init: function() {
        obj = new Obj();
    }
};
Then at parse or load time it doesn't matter what order you load the two files in as long as Obj is visible at run-time. You will have to call obj.init() in order to get your object into the state you want it, but that's a small price to pay for breaking the dependency.
Just to make it clearer how this works, here is some code you can cut and paste into a browser console:
var Obj = function() {
    this.func1 = function() {
        console.log("func1 in constructor function");
    };
    this.init = function() {
        console.log("init in constructor function");
    };
};

var obj = {
    init: function() {
        console.log("init in original object");
        obj = new Obj();
        obj.init();
    }
};

obj.init();
obj.func1();
And you could also try a module loader like RequireJS.
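For reference, a minimal RequireJS sketch of the same two-file setup (assuming RequireJS is configured so the module ID 'Obj' resolves to Obj.js):

// Obj.js: defines a module that returns the constructor
define([], function() {
    function Obj() {
        this.func1 = function() {
            console.log("func1 in constructor function");
        };
    }
    return Obj;
});

// someFile.js: declares its dependency, so RequireJS resolves load order
require(['Obj'], function(Obj) {
    var obj = new Obj();
    obj.func1();
});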