UPDATE: Worked out a solution I'm comfortable with, the answer is below.
I have an app that uses manual bootstrapping, with a chunk of code that essentially looks like this:
(function() {
    'use strict';
    angular.element( document ).ready( function () {

        function fetchLabels( sLang, labelPath ) {
            // retrieves json files, builds an object and injects a constant into an angular module
        }

        function rMerge( oDestination, oSource ) {
            // recursive deep merge function
        }

        function bootstrapApplication() {
            angular.element( document ).ready( function () {
                angular.bootstrap( document, [ 'dms.webui' ] );
            });
        }

        fetchLabels( 'en_AU' ).then( bootstrapApplication );
    });
})();
It works great: it fetches two JSON files, combines them, injects the result as a constant, then bootstraps the app.
My question is: how do I unit test these functions? I want to write something to test the fetchLabels() and rMerge() methods, but I'm not sure how to go about it. My first thought was to separate the methods out into a service and use it that way, but I wasn't sure whether I could actually invoke my own service this way before I've even bootstrapped the application.
Otherwise, can anyone suggest a way to separate out these methods into something standalone that I can test more readily?
OK, after some fiddling, I found a solution I'm happy with.
I moved the methods into the Labels service, and modified the labels module file to preset the relevant constant with an empty object.
I added an exception to my grunt injection setup to ignore the Labels service files, and instead include them manually, earlier in the page, where the module file itself is loaded.
I took the direct injection code for $http and $q from the bootstrap script and put it inside the Labels service, as those services aren't available for normal injection prior to bootstrap.
Finally, I manually loaded the labels module, injected the Labels service, and then used it to execute the aforementioned methods.
So my bootstrap script now looks like this:
(function () {
    'use strict';

    var injector = angular.injector( [ 'labels' ] );
    var Labels = injector.get( 'Labels' );

    /**
     * Executes the actual bootstrapping
     */
    function bootstrapApplication() {
        angular.element( document ).ready( function () {
            angular.bootstrap( document, [ 'myAppModule' ] );
        });
    }

    // get the label files and build the labels constant, then load the app
    Labels.fetchLabels( 'en_AU', '/lang/' ).then( bootstrapApplication );
})();
And I can properly unit test the Labels service in isolation. The loading order, and the extra steps on top of the normal way we load modules, are an exception, but I'm comfortable that its specific purpose warrants the exception, and it leaves us in a much more flexible position for testing.
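For anyone curious, here is a rough sketch of what the Labels service could look like based on the steps above. The module name, JSON file names and merge details are illustrative, not the exact production code:

// labels module and Labels service - rough sketch, details are illustrative
angular.module( 'labels', [] )
    // preset an empty object so the constant always exists
    .constant( 'labels', {} )
    .factory( 'Labels', [ 'labels', function ( labels ) {
        // $http and $q can't be injected the normal way before bootstrap,
        // so pull them from a standalone injector built on the core ng module
        var coreInjector = angular.injector( [ 'ng' ] );
        var $http = coreInjector.get( '$http' );
        var $q = coreInjector.get( '$q' );

        function rMerge( oDestination, oSource ) {
            // recursive deep merge of oSource into oDestination
            angular.forEach( oSource, function ( value, key ) {
                if ( angular.isObject( value ) && angular.isObject( oDestination[ key ] ) ) {
                    rMerge( oDestination[ key ], value );
                } else {
                    oDestination[ key ] = value;
                }
            } );
            return oDestination;
        }

        function fetchLabels( sLang, labelPath ) {
            // fetch the base and language-specific files, merge them into the
            // preset constant, and resolve so bootstrapping can continue
            return $q.all( [
                $http.get( labelPath + 'default.json' ),
                $http.get( labelPath + sLang + '.json' )
            ] ).then( function ( responses ) {
                rMerge( labels, responses[ 0 ].data );
                rMerge( labels, responses[ 1 ].data );
                return labels;
            } );
        }

        return { fetchLabels: fetchLabels, rMerge: rMerge };
    } ] );

With the service loadable through a plain angular.injector( [ 'labels' ] ) call, a unit test can grab it the same way the bootstrap script does and exercise fetchLabels() and rMerge() directly.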
Related
I'm building a mid-sized app using Backbone and its friends jQuery and Underscore. My plan is to use QUnit to create unit tests.
I already have a proof of concept of the app, so I basically have a good grasp of what the code should look like without having to test it. It looks like this:
(function() {
    // prepare some model
    var Query = Backbone.Model.extend({});
    var query = new Query();

    // register some events
    $('body.something').click(function() {
        query.set('key', 'value');
        // ...
    });

    // create some backbone view
    var QueryView = Backbone.View.extend({...});

    // init backbone view
    query.view = new QueryView();

    // add some plumbing here ...

    // and so on...
})();
In other words:
I wrap the module inside a function to avoid polluting the global scope
class declaration and class use are interleaved
event registration is interleaved with the rest of the code
variables global to the module are reused inside the module
Now I need to test that. The problem is, I think, mainly about event registration and plumbing.
My plan is to wrap the code in functions and export every function and objects I want to test. The code will look like this:
var app = (function() {
    var app = {};

    // prepare some model
    var Query = Backbone.Model.extend({});
    app.query = new Query();

    // register some events
    app.registerEvent = function() {
        $('body.something').click(function() {
            app.query.set('key', 'value');
            // ...
        });
    };
    app.registerEvent(); // XXX: call immediately

    // create some backbone view
    app.QueryView = Backbone.View.extend({...});

    // init backbone view
    app.query.view = new app.QueryView();

    // add some plumbing here ...
    // wrapped in functions with the correct arguments and called immediately

    // and so on...
    // ...

    return app;
})();
This is the first time I've needed to write tests in JavaScript for this kind of application, so I'm wondering whether my approach to making the code testable is correct or could be improved. For instance, it seems silly to me to wrap the registration of events in functions that take no arguments and call them immediately.
Is there a JavaScript way to do this?
So I found an excellent way to test private functions while keeping my production code clean. I am not sure if you are using any build system like Grunt or Gulp, but if you are open to it you could do something like this:
// Assumed plugin requires for the task below (gulp-plumber, gulp-sourcemaps,
// gulp-strip-code, gulp-concat and gulp-uglify)
var gulp = require('gulp');
var plumber = require('gulp-plumber');
var sourcemaps = require('gulp-sourcemaps');
var stripCode = require('gulp-strip-code');
var concatenate = require('gulp-concat');
var uglify = require('gulp-uglify');

// has a dependency on 'test', causing it to run the testing suite first
gulp.task('js', ['test'], function() {
    return gulp.src(source)
        .pipe(plumber())
        // create sourcemaps so console errors point to the original file
        .pipe(sourcemaps.init())
        // strip out any code between the comments
        .pipe(stripCode({
            start_comment: 'start-test',
            end_comment: 'end-test'
        }))
        // combine all separate files into one
        .pipe(concatenate('mimic.min.js'))
        // minify and mangle
        .pipe(uglify())
        .pipe(sourcemaps.write('maps'))
        .pipe(gulp.dest('dist/js'));
});
And the file could look like:
var app = (function () {
    function somePrivateFunction () {}
    function someOtherPrivateFunction () {}

    var app = {
        publicFunction: function () {},
        publicVar: publicVar
    };

    /* start-test */
    app.test = {
        testableFunction: somePrivateFunction
    };
    /* end-test */

    return app;
}());
Everything between the test comments gets stripped out after the tests are run so the production code is clean. Grunt has a version of this and I assume any automated build system can do the same. You can even set a watch task so that it runs the tests on every save. Otherwise you'll have to manually remove the exported test object before deployment.
In the case of Backbone, just attach a test object to the module and reference that object in the tests. Or, if you really want to separate it, set the object in the global scope (window.testObject = { /* list of objects and methods to test... */ };) and strip that code out before deployment.
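For example, a QUnit test (the runner mentioned in the question) could then reach the private helpers through that exported object; the function name and assertion below are purely illustrative:

// tests.js - runs against the unstripped build, so app.test still exists
QUnit.module('private helpers');

QUnit.test('somePrivateFunction is reachable through app.test', function (assert) {
    var result = app.test.testableFunction('some input');
    assert.ok(result !== undefined, 'private function can be exercised directly');
});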
Basically what I do is avoid any calls in the library I want to test. So everything is wrapped in a function and exported.
Then I have two other files: main.js, where I do the plumbing (which should be tested with integration tests using Selenium), and tests.js, which does the unit testing.
There's a set of JS modules representing similar OOP Classes: think of e.g. different types of backend tasks (SendEmailTask, WriteToDbTask, WriteToDiskTask), or different actions on a drawing canvas (DrawArc, DrawLine, DrawBezier). Those are just examples.
Each of those is a single JS file, with its own define, and they're all located in a common directory. In a client module that depends on all of those, the dependency list and argument list have to include each one of those separately, e.g. something like:
define([
    'tasks/sendEmailTask', 'tasks/writeToDbTask', 'tasks/writeToDiskTask', ...
], function (SendEmailTask, WriteToDbTask, WriteToDiskTask, ...) {
    /* ... */
    /* ... new SendEmailTask(); */
    /* ... new WriteToDbTask(); */
    /* ... new WriteToDiskTask(); */
});
and both of them must be updated every time a new module is added to the set, e.g. MakeCoffeeTask, which seems to me like a Bad Thing™.
Is there a way to avoid those last issues? I was thinking of a couple of possible ways, but I don't know how to make them work:
Create a kind of namespace module. Each of those sub-modules depends on the namespace one and adds its definition to it. But then if the client, too, depends only on the namespace, how do you ensure that the client gets loaded after all the sub-modules?
Express both dependencies and arguments with some kind of wildcard, like 'tasks/*' for dependency and < no idea > for arguments.
You can make a separate module for each section, for example 'tasks/main'.
In tasks/main you can include all the files you need; all of the task functions get assigned there, and you call the high-level functions from it. Take a look:
define(['tasks/main'], function(task){
    task.add('Project Status meeting');
    task.SendEmailNotification('tAll');
});
In tasks/main.js:
define(['tasks/sendEmailTask', 'tasks/writeToDbTask', 'tasks/writeToDiskTask'], function(sendEmailTask, writeToDbTask, writeToDiskTask){
    var task = {
        add: function(taskObj){
            // write your code using writeToDbTask, writeToDiskTask
        },
        SendEmailNotification: function(whom){
            // write your code using sendEmailTask, writeToDbTask, writeToDiskTask
        }
    };
    return task;
});
Does that help?
I was not able to find any other solution. This is what I came up with in the meantime... it is just an improvement over the current situation, but I don't think it solves the posed issues.
// in tasks/taskNamespace.js or in any client code
define (function(require) {
    var _taskArray = [
        require('tasks/sendEmailTask'),
        require('tasks/writeToDbTask'),
        require('tasks/writeToDiskTask'),
        /* ... , */
    ];
    var TaskNamespace = {};
    _taskArray.forEach(function (taskClass) {
        TaskNamespace[taskClass.name] = taskClass;
    });
    return TaskNamespace;
});
It takes advantage of the RequireJS sugar syntax and of the JS Function.name feature proposed for ES6.
What's the end result? You still have to manually add each Class to a list, but it's only one list instead of two. WHOA
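Note that this relies on each task module returning a named constructor function; otherwise taskClass.name has nothing useful to report. A task module might look roughly like this (purely illustrative):

// tasks/sendEmailTask.js -- the *named* function declaration is what lets
// TaskNamespace['SendEmailTask'] be built from Function.name
define(function () {
    function SendEmailTask( recipient ) {
        this.recipient = recipient;
    }
    SendEmailTask.prototype.run = function () {
        // actually send the email...
    };
    return SendEmailTask;
});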
I am trying to use RequireJS so that I can use the require() function in the browser. For context, I am trying to work with the Node wrapper for the Lob API: https://github.com/hisankaran/lob-node.
Here's the relevant code:
define (function (require) {
    var LOB = require('lob');
    LOB = new LOB(API_KEY);
})
// var LOB = new (require('lob')) (API_KEY);
console.log('Success?')
It runs successfully, but when I try to actually call anything, for example LOB.bankaccounts.create, it says LOB is not defined.
The Lob docs suggested that I do simply this:
var LOB = new (require('lob')) (LOB_API_KEY);
but I kept getting the "module has not been loaded yet for context" error described here (http://requirejs.org/docs/errors.html#notloaded), so I tried the above syntax from the RequireJS website.
I'm super new to RequireJS (and coding in general), so I may just be doing something silly.
The define() function has to actually return the object it defines.
Furthermore, in the browser require() should be used asynchronously, as the synchronous call only works when the module has already been loaded.
That being said, I would restructure your code as follows:
define( ['lob'], function( LOB ){
    return new LOB( API_KEY );
});
Put that in some module definition and load it in your main module, e.g. like this:
require( [ 'myLob' ], function( myLob ){
    // do something with myLob
});
This is my second weekend playing with Node, so this is a bit newbie.
I have a js file full of common utilities that provide stuff that JavaScript doesn't. Severely clipped, it looks like this:
module.exports = {
    Round: function(num, dec) {
        return Math.round(num * Math.pow(10, dec)) / Math.pow(10, dec);
    }
};
Many other custom code modules - also included via require() statements - need to call the utility functions. They make calls like this:
module.exports = {
    Init: function(pie) {
        // does lots of other stuff, but now needs to round a number
        // using the custom rounding fn provided in the common util code
        console.log(util.Round(pie, 2)); // ReferenceError: util is not defined
    }
};
The node.js file that is actually run is very simple (well, for this example). It just require()'s in the code and kicks off the custom code's Init() fn, like this:
var util = require("./utilities.js");
var customCode = require("./programCode.js");
customCode.Init(Math.PI);
Well, this doesn't work; I get a "ReferenceError: util is not defined" coming from the customCode. I know everything in each required file is "private" and this is why the error is occurring, but I also know that the variable holding the utility code object has GOT to be stored somewhere, perhaps hanging off of global?
I searched through global but didn't see any reference to utils in there. I was thinking of using something like global.utils.Round in the custom code.
So the question is, given that the utility code could be referred to as anything really (var u, util, or utility), how in heck can I organize this so that other code modules can see these utilities?
There are at least two ways to solve this:
If you need something from another module in a file, just require it. That's the easy one.
Provide something which actually builds the module for you. I will explain this in a second.
However, your current approach won't work, as the Node.js module system doesn't provide globals the way you might expect from other languages. Except for the things exported with module.exports you get nothing from the required module, and the required module doesn't know anything about the requiring module's environment.
Just require it
To avoid the gap mentioned above, you need to require the other module beforehand:
// -- file: ./programCode.js
var util = require(...);

module.exports = {
    Init: function(pie) {
        console.log(util.Round(pie, 2));
    }
};
requires are cached, so don't think too much about performance at this point.
Keep it flexible
In this case you don't directly export the contents of your module. Instead, you provide a constructor that will create the actual content. This enables you to give some additional arguments, for example another version of your utility library:
// -- file: ./programCode.js
module.exports = {
    create: function(util){
        return {
            Init: function(pie) {
                console.log(util.Round(pie, 2));
            }
        };
    }
};

// --- other file
var util = require(...);
var myModule = require('./module').create(util);
As you can see, this will create a new object each time you call create, so it will consume more memory than the first approach. Thus I recommend just require()ing things.
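A nice side effect of the create() approach is that tests can hand in a stub instead of the real utilities. A quick illustration (the stub and its canned result are made up):

// -- test file (illustrative)
var fakeUtil = {
    Round: function (num, dec) { return 42; } // canned result for the test
};

var myModule = require('./programCode.js').create(fakeUtil);
myModule.Init(Math.PI); // logs 42, showing Init used the injected util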
First a bit of history: we have an engine which is made up of many JavaScript files which are essentially modules. These modules return a single class that is assigned to the global scope, although under a specified namespace.
The engine itself is used to display eLearning content, with each different eLearning course requiring slightly different needs, which is where we include javascript files into the page based on the necessary functionality. (There is only one entry page).
I've been trying to weigh up whether it's worth changing to AMD, require.js and r.js, or whether it's better to stay with our current system, which includes everything required on the page and minifies it into one script.
One of my biggest problems with going to AMD would be that it seems harder to extend a class easily. For example, sometimes we have to adjust the behaviour of the original class slightly. So we add another script include on the page that extends the original class by copying the original prototype, executing the original function that's being overridden with apply, and then doing whatever additional code is required.
Can you extend an AMD module without adapting the original file? Or am I missing the point and we're best staying with what we're doing at the moment?
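(For context, the override pattern described above looks roughly like this; the namespace, class and method names are made up for illustration.)

// extra script included on the page for one particular course
(function (ns) {
    var originalShow = ns.SlideView.prototype.show;

    ns.SlideView.prototype.show = function () {
        // run the engine's original behaviour first...
        originalShow.apply(this, arguments);
        // ...then layer on the course-specific behaviour
        this.someCourseSpecificTweak(); // hypothetical extra behaviour
    };
}(window.EngineNamespace)); // hypothetical global namespace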
I recently started a project using RequireJS, and the method I use to extend underscore boils down to something like this:
Relevant Directory Structure:
/scripts
/scripts/underscore.js
/scripts/base/underscore.js
The real underscore library goes to /scripts/base/underscore.js.
My extensions go in /scripts/underscore.js.
The code in /scripts/underscore.js looks like this:
define(['./base/underscore'], function (_) {
    'use strict';

    var exports = {};

    // add new underscore methods to exports

    _.mixin(exports); // underscore's method for adding methods to itself

    return _; // return the same object as returned from the underscore module
});
For a normal extension, it could look more like this:
define(['underscore', './base/SomeClass'], function (_, SomeClass) {
    'use strict';

    _.extend(SomeClass.prototype, {
        someMethod: function (someValue) {
            return this.somethingOrOther(someValue * 5);
        }
    });

    return SomeClass;
});
Note on underscore: elsewhere I used the RequireJS shim config to get underscore to load as an AMD module, but that should have no effect on this process for non-shimmed AMD modules.
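For completeness, the shim config mentioned there might look roughly like this (the baseUrl and module id depend on your layout):

require.config({
    baseUrl: 'scripts',
    shim: {
        // let a plain (non-AMD) underscore build be used as a module
        'base/underscore': { exports: '_' }
    }
});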
You can have modules that contain your constructor functions. When these modules get included, they are ready for use; then you can create objects from them afterwards.
Example in RequireJS:
// construction.js
define(function(){
    // expose a constructor function
    return function(){
        this....
    };
});

// then in foo.js
define(['construction'], function(Construction){
    var newObj = new Construction(); // one object using the constructor
});

// then in bar.js
define(['construction'], function(Construction){
    // play with Construction's prototype here, then use it
    var newObj = new Construction();
});