I have a logging API I want to expose to some internal JS code. I want to be able to use this API to log, but only when I am making a debug build. Right now, I have it partially working: it only logs on debug builds, but the calls to this API are still in the code in a regular build. I would like the Closure Compiler to remove this essentially dead code when I compile with goog.DEBUG = false.
Log definition:
goog.provide('com.foo.Log');
com.foo.Log.e = function(message){
    goog.DEBUG && AndroidLog.e(message);
};
goog.exportProperty(com.foo.Log, "e", com.foo.Log.e);
AndroidLog is a Java object injected into the WebView this code will run in, and it is properly externed like this:
var AndroidLog = {};
/**
* Log out to the error console
*
* @param {string} message The message to log
*/
AndroidLog.e = function(message) {};
Then, in my code, I can use:
com.foo.Log.e("Hello!"); // I want these stripped in production builds
My question is this: How can I provide this API, use it all over my code, but then have any calls to it removed when not compiling with goog.DEBUG = true? Right now, my code base is getting bloated with a bunch of calls to the Log API that do nothing in production builds. I want them removed.
Thanks!
The Closure Compiler provides four options in CompilerOptions.java to strip code: 1) stripTypes, 2) stripNameSuffixes, 3) stripNamePrefixes and 4) stripTypePrefixes. The Closure build tool plovr exposes stripNameSuffixes and stripTypePrefixes through its JSON configuration file options name-suffixes-to-strip and type-prefixes-to-strip.
There are excellent examples of how these options work in Closure: The Definitive Guide on pages 442 to 444. The following lines are provided as common use cases:
options.stripTypePrefixes = ImmutableSet.of("goog.debug", "goog.asserts");
options.stripNameSuffixes = ImmutableSet.of("logger", "logger_");
To understand the nuances of these options and avoid potential pitfalls, I highly recommend reading the complete examples in Closure: The Definitive Guide.
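To illustrate (this is my own example, not from the book), with the options above the compiler would strip calls like these from the output:
goog.asserts.assert(x != null, 'x must not be null'); // matches stripTypePrefixes "goog.asserts"
goog.debug.Logger.getLogger('app').info('ready');     // matches stripTypePrefixes "goog.debug"
this.logger_.fine('component loaded');                // matches stripNameSuffixes "logger_"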
Instead of running your own script as jfriend00 suggested, I would look at the @define API of the compiler (which is where goog.DEBUG comes from as well). You get DEBUG and COMPILED by default, but you can roll your own there.
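For example, a minimal sketch of a custom define (the name MY_LOGGING is made up) looks like this, and it can be flipped off at build time with --define='MY_LOGGING=false':
/** @define {boolean} */
var MY_LOGGING = true;

if (MY_LOGGING) {
    AndroidLog.e("this call disappears when MY_LOGGING is defined to false");
}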
OK, it turns out this is easy to do if I stop exporting com.foo.Log and its methods. If I really want to be able to log in some specific cases, but still strip out the log calls in my internal code, I can just declare two classes for this:
// This will get inlined and stripped, since it's not exported.
goog.provide('com.foo.Log');
com.foo.Log.e = function(message){
    goog.DEBUG && AndroidLog.e(message);
};
// Don't export.
// This will be available after the Closure Compiler runs, since it's exported.
goog.provide('com.foo.android.production.Log');
goog.exportSymbol("ProductionLog", com.foo.android.production.Log);
com.foo.android.production.Log.log = function(message){
    goog.DEBUG && AndroidLog.e(message);
};
// Export.
goog.exportProperty(com.foo.android.production.Log, "log", com.foo.android.production.Log.log);
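Usage then looks like this (my own illustration): internal code calls the unexported debug logger, which is stripped from release builds, while code outside the compiled bundle can still reach the exported production logger:
com.foo.Log.e("debug only");          // inlined and removed when goog.DEBUG is false
ProductionLog.log("always exported"); // survives compilation via goog.exportSymbol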
I have modified the compiler and packaged it as an npm package.
You can get it here: https://github.com/thanpolas/superstartup-closure-compiler#readme
It will strip all logging messages during compilation.
Related
I'm currently running a heavy computation (i.e. generating a Monte Carlo tree), which is an expensive operation. I only have a few seconds to build as big of a tree as I can, so I am using subprocesses in Node.js in order to build multiple trees, and then aggregate their data together to make a more informed decision.
I understand that subprocesses do not share information/memory, and I need to use modules within these subprocesses that are located in a file called "epilog.js" on my machine.
When I run functions that are in epilog.js from the main file, it works just fine. But all of my functions that are in my worker threads return absolutely nothing.
I have tested to make sure that the parameters of the functions I am trying to use in "epilog.js" aren't empty, and they're not. The problem isn't in the parameter.
I have also tested to see what happens if I simply don't import, and instead of just outputting an undefined array, I get an error saying that there is no function called "findroles".
//My main thread.
var fs = require('fs');
var { fork } = require('child_process'); // fork comes from the child_process module
eval(fs.readFileSync('epilog.js') + '');
var process = fork('./buildGraph.js');
process.send({library});
//My worker thread.
//buildGraph.js
var fs = require('fs');
eval(fs.readFileSync('epilog.js') + '');
// receive message from master process
process.on('message', async(message) => {
library = message["library"];
console.log(findroles(library));
// findroles(library) is a function that is defined in epilog.js,
//and this outputs an array of "roles" given a parameter,library.
// For some reason this function outputs [], rather than giving me
// all of the roles. If I run this exact line from my main thread,
// it doesn't give any errors and outputs the right array:
// e.g. ['red', 'white'].
});
I expect to get not the empty array, but ['red', 'white'], as I do when I run the same line in the main thread. Does anyone have an idea why the functions behave inconsistently? I'm very new to Node.js, and this isn't a class focused much on software engineering in JavaScript, so I'd appreciate it if someone could dumb down what is going on, as this is all very new to me.
If your script does not find the function called findroles then there is a problem with the importing method. Using the eval function for importing is not the normal way of importing modules. Try something like this:
// buildGraph.js
const epilog = require("./epilog.js");
......
console.log(epilog.findroles(library));
then epilog.js
exports.findroles = function (library) {
// function content
}
You can find more info here:
https://www.w3schools.com/nodejs/nodejs_modules.asp
Based on the documentation and the example here, everything seems correct, but I think the problem comes from this line:
var process = fork('./buildGraph.js');
You might be overriding the original process object.
Try changing it to:
const n = fork('./buildGraph.js');
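A minimal corrected sketch of the main thread might then look like this (keeping the rest of the original approach; library is assumed to be defined elsewhere):
// Main thread
var fs = require('fs');
var { fork } = require('child_process');
eval(fs.readFileSync('epilog.js') + '');

const n = fork('./buildGraph.js'); // don't shadow the global process object
n.send({ library });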
So I'm working with an enterprise tool where we have javascript scripts embedded throughout. These scripts have access to certain built-in objects.
Unfortunately, the tool doesn't give any good way to unit test these scripts. So my thinking was to maintain the scripts in a repo, mock the built-in objects, and then set up unit tests that run on my system.
I'm pretty ignorant to how JavaScript works in terms of building, class loading, etc. but I've been just trying things and seeing what works. I started by trying out Mocha by making it a node project (even though it's just a directory full of scripts, not a real node project). The default test works, but when I try and test functions from my code, I get compiler errors.
Here's what a sample script from my project looks like. I'm hoping to test the functions, not the entire script:
var thing = builtInObject.foo();
doStuff(thing);
doMoreStuff(thing);
function doStuff(thing) {
// Code
}
function doMoreStuff(thing) {
// More Code
}
Here's what a test file looks like:
var assert = require('assert');
var sampleScript = require('../scripts/sampleScript.js');
describe('SampleScript', function() {
    describe('#doStuff()', function() {
        it('should do stuff', function() {
            assert.equal(-1, sampleScript.doStuff("input"));
        });
    });
});
The problem happens when I import ("require") the script. I get compilation errors because it doesn't know about builtInObject. Is there any way I can "inject" those built-in objects as mocks, so that I define the variables and functions those objects contain and the interpreter knows what they are?
I'm open to alternative frameworks or ideas. Sorry for my ignorance, I'm not really a javascript guy. And I know this is a bit hacky, but it seems like the best option since I'm not getting out of the enterprise tool.
So if I get it right, you want to run unit tests for the frontend file in the Node.js environment.
There are some complications.
First, in Node.js each file has its own scope, so the variables defined inside the file won't be accessible even if you require the file. So you need to export the vars to use them.
module.exports.doStuff = doStuff; // at the end of the sample script
Second, if you start using things like require/module.exports on the frontend, they'll be undefined, so you'll get an error.
The easiest way to run your code would be the following. Inside the sample script:
var isNode = typeof module !== 'undefined' && module.exports;
if (isNode) {
    //So we are exporting only when we are running in Node env.
    //After this doStuff and doMoreStuff will be avail. in the test
    module.exports.doStuff = doStuff;
    module.exports.doMoreStuff = doMoreStuff;
}
As for the builtInObject, the easiest way to mock it would be to do the following inside the test, before the require:
global.builtInObject = {
foo: function () { return 'thing'; }
};
The test just passed for me. See the sources.
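Putting the pieces together, a complete test file could look roughly like this (a sketch; the paths and the shape of builtInObject are assumptions):
var assert = require('assert');

// Mock the built-in object *before* requiring the script under test.
global.builtInObject = {
    foo: function () { return 'thing'; }
};

var sampleScript = require('../scripts/sampleScript.js');

describe('SampleScript', function () {
    describe('#doStuff()', function () {
        it('should do stuff without throwing', function () {
            assert.doesNotThrow(function () {
                sampleScript.doStuff('thing');
            });
        });
    });
});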
Global variables are not good anyway, but in this case it seems you cannot avoid using them.
Or you can avoid using Node.js by configuring something like Karma. It physically launches a browser and runs the tests in it. :)
I've been creating a lot of tests for yeoman generators lately, and I'm having trouble understanding the purpose of the withGenerators() helper method.
The example given:
var angular = new RunContext('../../app');
angular.withGenerators([
    '../../common',
    '../../controller',
    '../../main',
    [helpers.createDummyGenerator(), 'testacular:app']
]);
The function itself:
RunContext.prototype.withGenerators = function (dependencies) {
    assert(_.isArray(dependencies), 'dependencies should be an array');
    this.dependencies = this.dependencies.concat(dependencies);
    return this;
};
I see that the function adds an array of dependencies to the Run Context object, with each item in the array being a path to a dependent generator.
What are these paths used for?
When and why would I need to use this method?
In the example given, is calling angular.withGenerators([...]) identical to calling yo angular, yo angular:common, yo angular:controller, and so on from the command line, or does it somehow simulate or modify calls to composeWith() in the actual generator?
What is the difference between running withGenerators() in the tests, and calling composeWith() from the generator itself?
This is used for composeWith() (or the legacy invoke() and hookFor() methods).
You can see the relevant code here: https://github.com/yeoman/yeoman-test/blob/master/lib/index.js#L173-L188
Basically when calling composeWith(), it'll use the dummy generator mock you passed instead of the one the environment would resolve.
A tricky part at the moment is that if you pass local path settings to composeWith, it'll ignore the stubs; you can see the bug filed here: https://github.com/yeoman/generator/issues/704 (the ticket suggests some manual workarounds).
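For illustration, stubbing a composed generator in a test looks roughly like this (a sketch using the yeoman-test helpers; the paths and generator names are placeholders):
var path = require('path');
var helpers = require('yeoman-test');

helpers.run(path.join(__dirname, '../generators/app'))
    .withGenerators([
        // composeWith('mygen:subgen') will now resolve to this dummy generator
        [helpers.createDummyGenerator(), 'mygen:subgen']
    ])
    .then(function () {
        // assert on the generated files here
    });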
I've created a Dojo module which depends on dojox/data/JsonRestStore like this:
define("my/MyRestStore",
["dojo/_base/declare", "dojox/data/JsonRestStore"],
function(declare, JsonRestStore) {
var x = new JsonRestStore({
target: '/items',
identifier: 'id'
});
...
which is fine. But now I want to have the uncompressed version of the JsonRestStore code loaded so that I can debug it. I can't find any documentation on how to do this, but since there is a file called 'JsonRestStore.js.uncompressed.js' I changed my code to:
define("my/MyRestStore",
["dojo/_base/declare", "dojox/data/JsonRestStore.js.uncompressed"],
function(declare, JsonRestStore) {
...
thinking that might work.
I can see the JsonRestStore.js.uncompressed.js file being loaded in FireBug, but I get an error when trying to do new JsonRestStore:
JsonRestStore is not a constructor
Should this work?
Is there a way of configuring Dojo to use uncompressed versions of all modules? That's what I really want, but will settle for doing it on a per dependency basis if that's the only way.
Update
I've found a way to achieve what I want to do: rename the JsonRestStore.js.uncompressed.js file to JsonRestStore.js.
However, this seems a bit like a hacky workaround so I'd still be keen to know if there is a better way (e.g. via configuration).
You have two options
1) Create a custom build. The custom build will output a single uncompressed file that you can use for debugging. Think dojo.js.uncompressed.js, but including all the extra modules that you use.
OR
2) For a development environment, use the dojo source code. This means downloading the Dojo Toolkit SDK and referencing dojo.js from that in the development environment.
For the projects I work on, I do both. I set up the Dojo configuration so that it can be dynamic and I can change which configuration I want using a query string parameter.
When I am debugging a problem, I will use the first option just to let me step through the code and see what is going on. I use the second option when I am writing some significant JS and don't want the overhead of the custom build to see my changes.
I describe this a bit more at
http://swingingcode.blogspot.com/2012/03/dojo-configurations.html
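As a rough sketch of that dynamic setup (the paths and parameter name here are made up), the page can pick which dojo.js to load from a query string parameter:
// Pick the Dojo build based on a query string parameter, e.g. ?dojoSrc=1
var useSrc = /[?&]dojoSrc=1/.test(window.location.search);
var dojoPath = useSrc ? '/js/dojo-sdk/dojo/dojo.js' : '/js/release/dojo/dojo.js';
document.write('<scr' + 'ipt src="' + dojoPath + '" data-dojo-config="async: true"><\/scr' + 'ipt>');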
I think the reason for this is that the loader resolves its module loads by the file conventions used. The 1.7 loader is not too robust just yet; I've had similar problems until realizing how to separate the '.' and '/' characters.
It's only a qualified guess, but I believe it has to do with the interpretation of the '.' character in the module name, which signifies a sub-namespace and not a module name.
The define(/* BLANK */ [ /* DEPENDENCIES */ ], ...) form, where no first string parameter is given, gets loaded by the filename (basename). The returned declare also has a say, though. So, for your example with JsonRestStore, it's split/parsed as such:
toplevel = dojox
mid = data
modulename = JsonRestStore.js.uncompressed
(Fail: the module resolves as dojox.data.JsonRestStore.js.uncompressed, not dojox.data.JsonRestStore as it should.)
So, three options:
Load uncompressed classes through <script src="{{dataUrl}}/dojox/data/JsonRestStore.js.uncompressed.js"></script> and work with them in dojo.ready
I think modifying the define([], function(){}) in uncompressed.js to define("JsonRestStore", [], function() {}) would do the trick (unconfirmed)
Use the dojo/text loader, see below
Using the dojo/text loader, it would look something like this:
define("my/MyRestStore",
["dojo/_base/declare", "dojo/text!dojox/data/JsonRestStore.js.uncompressed.js"],
function(declare, JsonRestStore) {
...
JsonRestStore = eval(JsonRestStore);
// not 100% sure 'define' returns reference to actual class,
// if above renders invalid, try access through global reference, such as
// dojox.dat...
I'm looking into different ways to minify my JavaScript code including the regular JSMin, Packer, and YUI solutions. I'm really interested in the new Google Closure Compiler, as it looks exceptionally powerful.
I noticed that Dean Edwards packer has a feature to exclude lines of code that start with three semicolons. This is handy to exclude debug code. For instance:
;;; console.log("Starting process");
I'm spending some time cleaning up my codebase and would like to add hints like this to easily exclude debug code. In preparation for this, I'd like to figure out if this is the best solution, or if there are other techniques.
Because I haven't chosen how to minify yet, I'd like to clean the code in a way that is compatible with whatever minifier I end up going with. So my questions are these:
Is using the semicolons a standard technique, or are there other ways to do it?
Is Packer the only solution that provides this feature?
Can the other solutions be adapted to work this way as well, or do they have alternative ways of accomplishing this?
I will probably start using Closure Compiler eventually. Is there anything I should do now that would prepare for it?
Here's the (ultimate) answer for the Closure Compiler:
/** @const */
var LOG = false;
...
LOG && log('hello world !'); // compiler will remove this line
...
This will even work with SIMPLE_OPTIMIZATIONS, and no --define= is necessary!
Here's what I use with Closure Compiler. First, you need to define a DEBUG variable like this:
/** @define {boolean} */
var DEBUG = true;
It's using the JS annotation for closure, which you can read about in the documentation.
Now, whenever you want some debug-only code, just wrap it in an if statement, like so:
if (DEBUG) {
console.log("Running in DEBUG mode");
}
When compiling your code for release, add --define='DEBUG=false' to your compilation command; any code within the DEBUG block will then be completely left out of the compiled file.
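For example, a release compile might look something like this (the file names are placeholders):
java -jar compiler.jar --js app.js --define='DEBUG=false' --compilation_level SIMPLE_OPTIMIZATIONS --js_output_file app.min.js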
A good solution in this case might be js-build-tools which supports 'conditional compilation'.
In short you can use comments such as
// #ifdef debug
var trace = debug.getTracer("easyXDM.Rpc");
trace("constructor");
// #endif
where you define a pragma such as debug.
Then, when building it (it has an Ant task):
//this file will not have the debug code
<preprocess infile="work/easyXDM.combined.js" outfile="work/easyXDM.js"/>
//this file will
<preprocess infile="work/easyXDM.combined.js" outfile="work/easyXDM.debug.js" defines="debug"/>
Adding logic to every place in your code where you are logging to the console makes it harder to debug and maintain.
If you are already going to add a build step for your production code, you could always add another file at the top that turns your console methods into no-ops.
Something like:
console.log = console.debug = console.info = function(){};
Ideally, you'd just strip out any console methods, but if you are keeping them in anyway but not using them, this is probably the easiest to work with.
If you use the Closure Compiler in Advanced mode, you can do something like:
if (DEBUG) console.log = function() {}
Then the compiler will remove all your console.log calls. Of course you need to --define the variable DEBUG in the command line.
However, this is only for Advanced mode. If you are using Simple mode, you'll need to run a preprocessor on your source file.
Why not consider the Dojo Toolkit? It has built-in comment-based pragma's to include/exclude sections of code based on a build. Plus, it is compatible with the Closure Compiler in Advanced mode (see link below)!
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
Even though it's an old question, I stumbled upon the same issue today and found that it can be achieved using CompilerOptions.
I followed this thread.
We run the compiler, from Java, on our server before sending the code to the client. This worked for us in Simple mode.
private String compressWithClosureCompiler(final String code) {
    final Compiler compiler = new Compiler();
    final CompilerOptions options = new CompilerOptions();
    Logger.getLogger("com.google.javascript.jscomp").setLevel(Level.OFF);
    if (compressRemovesLogging) {
        options.stripNamePrefixes = ImmutableSet.of("logger");
        options.stripNameSuffixes = ImmutableSet.of("debug", "dev", "info", "error",
                "warn", "startClock", "stopClock", "dir");
    }
    CompilationLevel.SIMPLE_OPTIMIZATIONS.setOptionsForCompilationLevel(options);
    final JSSourceFile extern = JSSourceFile.fromCode("externs.js", "");
    final JSSourceFile input = JSSourceFile.fromCode("input.js", code);
    compiler.compile(extern, input, options);
    return compiler.toSource();
}
It will remove all the calls to logger.debug, logger.dev, and so on.
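For illustration (my own example, not from the original answer), with those options set the compiled output loses lines like these:
logger.debug("starting up");   // stripped: matches the "debug" suffix
logger.info("user logged in"); // stripped: matches the "info" suffix
doRealWork();                  // kept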
If you're using UglifyJS2, you can use the drop_console argument to remove console.* functions.
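For example, something like this should work (the file names are arbitrary):
uglifyjs app.js --compress drop_console=true --output app.min.js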
I use this in my React apps:
if (process.env.REACT_APP_STAGE === 'PROD')
    console.log = function no_console() {};
In other words, console.log will do nothing in the prod environment.
I am with #marcel-korpel. Isn't perfect but works. Replace the debug instructions before minification. The regular expression works in many places. Watch out unenclosed lines.
/console\.[^;]*/gm
Works on:
;;; console.log("Starting process");
console.log("Starting process");
console.dir("Starting process");;;;;
console.log("Starting "+(1+2)+" processes"); iamok('good');
console.log('Message ' +
'with new line'
);
console.group("a");
console.groupEnd();
switch(input){
case 1 : alert('ok'); break;
default: console.warn("Fatal error"); break;
}
Doesn't work on:
console.log("instruction without semicolon")
console.log("semicolon in ; string");
I haven't looked into minification so far, but this behaviour could be accomplished using a simple regular expression:
s/;;;.*//g
This replaces everything in a line after (and including) three semicolons with nothing, so it's discarded before minifying. You can run sed (or a similar tool) before running your minification tool, like this:
sed 's/;;;.*//g' < infile.js > outfile.js
BTW, if you're wondering whether the packed version or the minified version will be 'better', read this comparison of JavaScript compression methods.
I've used the following self-made stuff:
// Uncomment to enable debug messages
// var debug = true;
function ShowDebugMessage(message) {
    if (debug) {
        alert(message);
    }
}
So when you've declared the variable debug and set it to true, all ShowDebugMessage() calls will call alert(). Just use it in your code and forget about in-place conditions like #ifdef or manually commenting out the debug output lines.
I was searching for a built-in option to do this. I have not found that yet, but my favorite answer also does not require any changes to existing source code. Here's an example with basic usage.
Assume HTML file test.html with:
<html>
<script src="hallo.js"></script>
</html>
And hallo.js with:
sayhi();
function sayhi()
{
console.log("hallo, world!");
}
We'll use a separate file, say noconsole.js, having this from the linked answer:
console.log = console.debug = console.info = function(){};
Then we can compile it as follows, bearing in mind that order matters: noconsole.js must be placed first in the arguments:
google-closure-compiler --js noconsole.js hallo.js --js_output_file hallo.out.js
If you cat hallo.out.js you'd see:
console.log=console.debug=console.info=function(){};sayhi();function sayhi(){console.log("hallo, world!")};
And if I test with mv hallo.out.js hallo.js and reload the page, I can see that the console is now empty.
Hope this clarifies it. Note that I have not yet tested this in the ideal mode of compiling all the source code with ADVANCED optimizations, but I'd expect it to also work.