Preamble: I've read lots of SO and blog posts, but haven't seen anything that answers this particular question. Maybe I'm just looking for the wrong thing...
Suppose I'm developing a WidgetManager class that will operate on Widget objects.
How do I use sinon to test that WidgetManager is using the Widget API correctly without pulling in the whole Widget library?
Rationale: The tests for a WidgetManager should be decoupled from the Widget class. Perhaps I haven't written Widget yet, or perhaps Widget is an external library. Either way, I should be able to test that WidgetManager is using Widget's API correctly without creating real Widgets.
I know that sinon mocks can only work on existing classes, and as far as I can tell, sinon stubs also need the class to exist before they can be stubbed.
To make it concrete, how would I test that Widget.create() is getting called exactly once with a single argument 'name' in the following code?
Code under test:
// file: widget-manager.js
function WidgetManager() {
  this.widgets = [];
}

WidgetManager.prototype.addWidget = function(name) {
  this.widgets.push(Widget.create(name));
};
Testing code:
// file: widget-manager-test.js
var WidgetManager = require('../lib/widget-manager.js');
var sinon = require('sinon');

describe('WidgetManager', function() {
  describe('#addWidget', function() {
    it('should call Widget.create with the correct name', function() {
      var widget_manager = new WidgetManager();
      // what goes here?
    });

    it('should push one widget onto the widgets list', function() {
      var widget_manager = new WidgetManager();
      // what setup goes here?
      widget_manager.addWidget('fred');
      expect(widget_manager.widgets.length).to.equal(1);
    });
  });
});
Aside: Of course, I could define a MockWidget class for testing with the appropriate methods, but I'm more interested in really learning how to use sinon's spy / stub / mock facilities correctly.
The answer is really about dependency injection.
You want to test that WidgetManager is interacting with a dependency (Widget) in the expected way - and you want freedom to manipulate and interrogate that dependency. To do this, you need to inject a stub version of Widget at testing time.
Depending on how WidgetManager is created, there are several options for dependency injection.
A simple method is to allow the Widget dependency to be injected into the WidgetManager constructor:
// file: widget-manager.js
function WidgetManager(Widget) {
  this.Widget = Widget;
  this.widgets = [];
}

WidgetManager.prototype.addWidget = function(name) {
  this.widgets.push(this.Widget.create(name));
};
And then in your test you simply pass a stubbed Widget to the WidgetManager under test:
it('should call Widget.create with the correct name', function() {
  var stubbedWidget = {
    create: sinon.stub()
  };
  var widget_manager = new WidgetManager(stubbedWidget);
  widget_manager.addWidget('fred');
  // assert on the stub's recorded state; a bare expect(...) would assert nothing
  expect(stubbedWidget.create.calledOnce).to.be.true;
  expect(stubbedWidget.create.calledWith('fred')).to.be.true;
});
You can modify the behaviour of your stub depending on the needs of a particular test. For example, to test that the widget list length increments after widget creation, you can simply return an object from your stubbed create() method:
var stubbedWidget = {
  create: sinon.stub().returns({})
};
This allows you to have full control over the dependency, without having to mock or stub all methods, and lets you test the interaction with its API.
There are also options like proxyquire or rewire which give more powerful options for overriding dependencies at test time. The most suitable option is down to implementation and preference - but in all cases you are simply aiming to replace a given dependency at testing time.
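For illustration, here is a minimal proxyquire sketch. It assumes widget-manager.js obtains Widget via require('./widget') rather than as a global; that path is a placeholder:

var proxyquire = require('proxyquire');
var sinon = require('sinon');

var stubbedWidget = {
  create: sinon.stub().returns({})
};

// load widget-manager.js, substituting the stub for its './widget' dependency
var WidgetManager = proxyquire('../lib/widget-manager.js', {
  './widget': stubbedWidget
});

var widget_manager = new WidgetManager();
widget_manager.addWidget('fred');
// stubbedWidget.create now records calls just as in the constructor-injection example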
Your addWidget method does two things:

1. "converts" a string to a Widget instance;
2. adds that instance to internal storage.

I suggest changing the addWidget signature to accept an instance directly instead of a name, and moving creation somewhere else. That will make testing easier:
Manager.prototype.addWidget = function (widget) {
  this.widgets.push(widget);
};

// no stubs needed for testing:
const manager = new Manager();
const widget = {};
manager.addWidget(widget);
assert.deepStrictEqual(manager.widgets, [widget]);
After that, you'll need a way of creating widgets by name, which should be pretty straightforward to test as well:

// Maybe this belongs in some other place, not necessarily the Manager class…
Manager.createWidget = function (name) {
  return new Widget(name);
};

assert(Manager.createWidget('calendar') instanceof Widget);
Related
I am gradually improving a codebase that originally had some AngularJS in various versions, plus some code that was not in a framework at all, using various versions of a software API. (For some reason this API is available - to pages loaded through the application - on AngularJS's $window.external...go figure.)
In my pre-ES6, AngularJS 1.8 phase, I have three services that interact with the software's API (call them someAPIget, someAPIset, and someAPIforms). Something like this:
// someAPIget.service.js
;(function () {
  var someAPIget = function ($window, helperfunctions) {
    function someFunc (param) {
      // Do something with $window.external.someExternalFunc
      return doSomethingWith(param)
    }
    return {
      someFunc: someFunc
    }
  }
  angular.module('someAPIModule').factory('someAPIget', ['$window', 'helperfunctions', someAPIget])
})()
I then had a service and module a level up from this, with someAPIModule as a dependency, that aggregated these functions and passed them through under one name, like this:
// apiinterface.service.js
;(function () {
  // Change these lines to switch which API service all functions will use.
  var APIget = 'someAPIget'
  var APIset = 'someAPIset'
  var APIforms = 'someAPIforms'
  var APIInterface = function (APIget, APIset, APIforms) {
    return {
      someFunc: APIget.someFunc,
      someSettingFunc: APIset.someSettingFunc,
      someFormLoadingFunc: APIforms.someFormLoadingFunc
    }
  }
  angular.module('APIInterface').factory('APIInterface', [APIget, APIset, APIforms, APIInterface])
})()
I would then call these functions in various other controllers and services by using APIInterface.someFunc(etc). It worked fine, and if we switch to a different software provider, we can use our same pages without rewriting everything, just the interface logic.
However, I'm trying to upgrade to TypeScript and ES6 so I can use import and export and build some logic accessible via the command line, plus prepare for upgrading to Angular 11 or whatever the latest version is when I'm ready to do it. So I rebuilt someAPIget as a class:
// someAPIget.service.ts
export class someAPIget {
  private readonly $window
  private readonly helperfunctions
  static $inject = ['$window', 'helperfunctions']

  constructor ($window, helperfunctions) {
    this.$window = $window
    this.helperfunctions = helperfunctions
  }

  someFunc (param) {
    // Do something with this.$window.external.someExternalFunc
    return doSomethingWith(param)
  }
}

angular
  .module('someAPImodule')
  .service('someAPIget', ['$window', 'helperfunctions', someAPIget])
Initially it seemed to work (my tests still pass, at least after a bit of cleanup on the TypeScript compilation front), but when I load it into the live app... this.$window is not defined. If, however, I use a direct dependency and call someAPIget.someFunc(param) instead of going through APIInterface.someFunc(param), it works fine (but I really don't want to rewrite thousands of lines of code that call through APIInterface, and doing so would moot the whole point of wrapping the API in an interface to begin with).

I've tried making APIInterface into a class and assigning getters for every function that return the imported function, but $window still isn't defined. Using console.log statements I can see that this.$window is defined inside someFunc itself, and it's defined inside the getter in APIInterface; but from what I can tell, when I call it through APIInterface it is invoked without first running the constructor on someAPIget, even if I make sure to use $onInit() for the relevant calls.
I feel like I am missing something simple here. Is there some way to properly aggregate and rename these functions for use throughout my program? How do I alias them correctly to a post-constructed version?
Edit to add: I have tried with someAPIget as both a factory and a service, and APIInterface as both a factory and a service, and by calling APIInterface in the .run() of the overall app.module.ts file, none of which works. (The last one just changes the location of the undefined error.)
Edit again: I have also tried using static for such a case, which is somewhat obviously wrong, but then at least I get the helpful error highlight in VSCode: Property 'someProp' is used before its initialization. ts(2729).
How exactly are you supposed to use a property that is assigned in the constructor? How can I force AngularJS to execute the constructor before attempting to access the class's members?
I am not at all convinced that I found an optimal or "correct" solution, but I did find one that works, which I'll share here in case it helps anyone else.
I ended up calling each imported function in a class method of the same name on the APIInterface class, something like this:
// apiinterface.service.ts
// Change these lines to switch which API service all functions will use.
const APIget = 'someAPIget'
const APIset = 'someAPIset'
const APIforms = 'someAPIforms'

export class APIInterface {
  private readonly APIget
  private readonly APIset
  private readonly APIforms

  constructor (APIget, APIset, APIforms) {
    this.APIget = APIget
    this.APIset = APIset
    this.APIforms = APIforms
  }

  someFunc (param: string): string {
    return this.APIget.someFunc(param)
  }

  someSettingFunc (param: string): string {
    return this.APIset.someSettingFunc(param)
  }

  someFormLoadingFunc (param: string): string {
    return this.APIforms.someFormLoadingFunc(param)
  }
}

angular
  .module('APIInterface')
  .factory('APIInterface', [APIget, APIset, APIforms, APIInterface])
It feels hacky to me, but it does work.
Later Update:
I am now using Angular 12, not AngularJS, so some details may be a bit different. Lately I have been looking at using the public-api.ts file that Angular 12 generates to accomplish the same thing (i.e., export { someAPIget as APIget } from './filename'), but I have not yet experimented with this, since it would still require either consolidating my functions somehow or rewriting the code that consumes them to use one of three possible solutions. It would be nice not to have to duplicate function signatures and doc strings, however. It's still a question I'm trying to answer more effectively; I will update again if I find something that really works.
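For reference, the re-export idea I'm considering would look something like this (a sketch only; the file names are placeholders, and I haven't verified it in my build):

// public-api.ts (sketch; paths are placeholders)
export { someAPIget as APIget } from './someAPIget.service'
export { someAPIset as APIset } from './someAPIset.service'
export { someAPIforms as APIforms } from './someAPIforms.service'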
I'm new to Node.js development, coming from the .NET world.
I'm searching the web for best practices regarding DI / DIP in JavaScript.
In .NET I would declare my dependencies in the constructor, whereas in JavaScript I see a common pattern is to declare dependencies at the module level via a require statement.
To me it looks like when I use require I'm coupled to a specific file, while using a constructor to receive my dependency is more flexible.
What would you recommend as a best practice in JavaScript? (I'm looking for the architectural pattern, not an IoC technical solution.)
Searching the web, I came across this blog post (which has some very interesting discussion in the comments):
https://blog.risingstack.com/dependency-injection-in-node-js/
It summarizes my conflict pretty well.
Here's some code from the blog post to show what I'm talking about:
// team.js
var User = require('./user');

function getTeam(teamId) {
  return User.find({teamId: teamId});
}

module.exports.getTeam = getTeam;
A simple test would look something like this:
// team.spec.js
var Team = require('./team');
var User = require('./user');

describe('Team', function() {
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];
    this.sandbox.stub(User, 'find', function() {
      return Promise.resolve(users);
    });

    var team = yield Team.getTeam();
    expect(team).to.eql(users);
  });
});
Versus DI:
// team.js
function Team(options) {
  this.options = options;
}

Team.prototype.getTeam = function(teamId) {
  return this.options.User.find({teamId: teamId});
};

function create(options) {
  return new Team(options);
}

// expose the factory so callers (and tests) can inject dependencies
module.exports.create = create;
And the test:
// team.spec.js
var Team = require('./team');

describe('Team', function() {
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];
    var fakeUser = {
      find: function() {
        return Promise.resolve(users);
      }
    };

    var team = Team.create({
      User: fakeUser
    });

    var result = yield team.getTeam();
    expect(result).to.eql(users);
  });
});
Regarding your question: I don't think that there is a common practice in the JS community. I've seen both types in the wild: require modifications (like rewire or proxyquire) and constructor injection (often using a dedicated DI container). However, personally, I think not using a DI container is a better fit for JS, because JS is a dynamic language with functions as first-class citizens. Let me explain:
Using DI containers enforces constructor injection for everything. It creates a huge configuration overhead for two main reasons:
Providing mocks in unit tests
Creating abstract components that know nothing about their environment
Regarding the first argument: I would not adjust my code just for my unit tests. If it makes your code cleaner, simpler, more versatile, and less error-prone, then go for it. But if your only reason is your unit test, I would not take the trade-off. You can get pretty far with require modifications and monkey patching. And if you find yourself writing too many mocks, you should probably not write a unit test at all, but an integration test. Eric Elliott has written a great article about this problem.
Regarding the second argument: This is a valid argument. If you want to create a component that only cares about an interface, but not about the actual implementation, I would opt for a simple constructor injection. However, since JS does not force you to use classes for everything, why not just use functions?
In functional programming, separating stateful IO from the actual processing is a common paradigm. For instance, if you're writing code that is supposed to count file types in a folder, one could write this (especially when coming from a language that enforces classes everywhere):
const fs = require("fs");
class FileTypeCounter {
countFileTypes(dirname, callback) {
fs.readdir(dirname, function (err) {
if (err) return callback(err);
// recursively walk all folders and count file types
// ...
callback(null, fileTypes);
});
}
}
Now if you want to test that, you need to change your code in order to inject a fake fs module:
class FileTypeCounter {
  constructor(fs) {
    this.fs = fs;
  }
  countFileTypes(dirname, callback) {
    this.fs.readdir(dirname, function (err, files) {
      // ...
    });
  }
}
Now, everyone who is using your class needs to inject fs into the constructor. Since this is boring and makes your code more complicated once you have long dependency graphs, developers invented DI containers where they can just configure stuff and the DI container figures out the instantiation.
However, what about just writing pure functions?
function fileTypeCounter(allFiles) {
  // count file types
  return fileTypes;
}

function getAllFilesInDir(dirname, callback) {
  // recursively walk all folders and collect all files
  // ...
  callback(null, allFiles);
}

// now let's compose both functions
function getAllFileTypesInDir(dirname, callback) {
  getAllFilesInDir(dirname, (err, allFiles) => {
    callback(err, !err && fileTypeCounter(allFiles));
  });
}
Now you have two super-versatile functions out of the box: one does IO, the other processes the data. fileTypeCounter is a pure function and super easy to test. getAllFilesInDir is impure, but such a common task that you'll often find it already on npm, where other people have written integration tests for it. getAllFileTypesInDir just composes your functions with a little bit of control flow. This is a typical case for an integration test where you want to make sure that your whole application is working correctly.
By separating your code between IO and data processing, you won't find the need to inject anything at all. And if you don't need to inject anything, that's a good sign. Pure functions are the easiest thing to test and are still the easiest way to share code between projects.
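To illustrate, a test for the pure function needs no stubs at all. A sketch, assuming fileTypeCounter returns a plain object mapping extension to count (that return shape is my assumption):

const assert = require("assert");

// pure function: plain data in, plain data out, nothing to inject
const fileTypes = fileTypeCounter(["a.txt", "b.txt", "c.js"]);
assert.deepStrictEqual(fileTypes, { txt: 2, js: 1 });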
In the past, DI containers as we know them from Java and .NET did not exist in JavaScript. With Node 6 came ES6 Proxies, which opened up the possibility of such containers - Awilix, for example.
So let's rewrite your code to modern ES6.
class Team {
  constructor ({ User }) {
    this.User = User
  }

  getTeam (teamId) {
    return this.User.find({ teamId: teamId })
  }
}
And the test:
import Team from './Team'

describe('Team', function() {
  it('#getTeam', async function () {
    const users = [{id: 1}, {id: 2}]
    const fakeUser = {
      find: function() {
        return Promise.resolve(users)
      }
    }

    const team = new Team({
      User: fakeUser
    })

    const result = await team.getTeam()
    expect(result).to.eql(users)
  })
})
Now, using Awilix, let's write our composition root:
import { createContainer, asClass } from 'awilix'
import Team from './Team'
import User from './User'

const container = createContainer().register({
  Team: asClass(Team),
  User: asClass(User)
})

// Grab an instance of Team
const team = container.resolve('Team')
// Alternatively: const team = container.cradle.Team

// Use it
team.getTeam(123) // calls User.find()
That's as simple as it gets. Awilix can handle object lifetimes as well, just like the .NET / Java containers do. This lets you do cool stuff like injecting the current user into your services, or instantiating your services once per HTTP request.
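For instance, lifetimes are declared at registration time. A sketch using Awilix's lifetime API (the exact calls may differ between Awilix versions):

import { createContainer, asClass } from 'awilix'
import Team from './Team'
import User from './User'

const container = createContainer().register({
  // a fresh Team/User pair per scope, e.g. per HTTP request
  Team: asClass(Team).scoped(),
  User: asClass(User).scoped()
})

// one scope per incoming request; instances are shared within that scope only
const requestScope = container.createScope()
const team = requestScope.resolve('Team')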
I'm unit testing JavaScript with Jasmine and I am running into some problems.
I have a large file to test, and it has a lot of dependencies, and those dependencies have their own dependencies. Because of said dependencies I want to mock all I can. Therein lies the problem: how can I mock a constructor so that it includes the methods that belong to it?
Let's say I'm testing a method createMap of class Map.
That createMap method calls the Layers class constructor using
var layers = new Layers()
I'm spying on it using
spyOn(window, 'Layers').and.callThrough()
That works fine, but later the createMap method calls layers.addLayer(), where addLayer is a method of the Layers class. The problem is that because I mocked the Layers call, it doesn't recognize the addLayer method.
Is there a way to mock it so that it includes all the methods of the called class, or is my only option to stub the whole Layers class or not mock it at all?
Or what would be a good way to handle this? I've tried spyOn(Layers, 'addLayer'), but it says that no method addLayer is found.
Sorry if this is a bit confusing; I had trouble working out how to ask it.
IMO, it's unnecessary to spy on window, since you can easily shadow the variable in a local scope by creating a spy object with the same name:

describe('Map', function () {
  var Layers;
  beforeEach(function () {
    Layers = function () {
      // alternatively, you could move this to Layers.prototype
      this.addLayer = jasmine.createSpy('Layers#addLayer');
    };
  });
  /* ... */
});
If you want automatic mocking and use CommonJS modules, you may try the Jest framework, which is built on top of Jasmine.
Let's talk in terms of the example classes you have provided.
You're writing a test suite for Map. All its dependencies (in the example we have only Layers) MUST be mocked, because in a unit test you're supposed to test a single layer, as small a piece of functionality as possible. That means you should provide a mocked Layers constructor that exposes the interface used in Map. For example:
function Layers() {
  this.addLayer = sinon.spy();
}
In this test suite only the Map class should remain "real", i.e. its code must not be altered. With mocks like Layers you make sure that you don't trigger any interaction with real-code dependencies (your own dependencies should be tested in a separate test suite; also make sure you don't try to test framework functions, like $state.resolve, $inject, etc.). If the Map class is complicated and has multiple dependencies, investigate the sinon features that help automate this process, for example sinon.mock.
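For instance, a minimal sinon.mock sketch (the expected argument 'base' is made up for illustration):

// set the expectations up front...
var layers = { addLayer: function () {} };
var mock = sinon.mock(layers);
mock.expects('addLayer').once().withArgs('base');

// ...exercise the code under test, which should call layers.addLayer('base')
layers.addLayer('base');

// ...then verify; this throws if any expectation was not met
mock.verify();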
If you ever transpile class syntax to ES3 or another pre-2015 dialect, you will discover something interesting.
class a {
  constructor() {
    ...
  }
  index() {
    ...
  }
}
Becomes:
var a = /** @class */ (function () {
  function a() {
    ...
  }
  a.prototype.index = function () {
    ...
  };
  return a;
}());
This same implementation is used by later standards but masked by the 2015 class syntax. In other words, a.index doesn't exist; instead it's defined as a.prototype.index. Thus you need spyOn(a.prototype, 'index') to spy on it.
Change spyOn(Layers, 'addLayer') to spyOn(Layers.prototype, 'addLayer')
I've been hoping to use inheritance in Meteor, but I couldn't find anything about it in the documentation or on Stack Overflow.
Is it possible to have templates inheriting properties and methods from another abstract template, or class?
I think the short answer is no, but here's a longer answer:
One thing I've done to share functionality among templates is to define an object of helpers, and then assign it to multiple templates, like so:
var helpers = {
  displayName: function() {
    return Meteor.user().profile.name;
  },
};

Template.header.helpers(helpers);
Template.content.helpers(helpers);
var events = {
  'click #me': function(event, template) {
    // handle event
  },
  'click #you': function(event, template) {
    // handle event
  },
};

Template.header.events(events);
Template.content.events(events);
It's not inheritance, exactly, but it does enable you to share functionality between templates.
If you want all templates to have access to a helper, you can define a global helper like so (see https://github.com/meteor/meteor/wiki/Handlebars):
Handlebars.registerHelper('displayName',function(){return Meteor.user().profile.name;});
I've answered this question here. While the solution doesn't use inheritance, it allows you to share events and helpers across templates with ease.
In a nutshell, I define an extendTemplate function which takes in a template and an object with helpers and events as arguments:
extendTemplate = (template, mixin) ->
  helpers = ({name, method} for name, method of mixin when name isnt "events")
  template[obj.name] = obj.method for obj in helpers
  if mixin.events?
    template.events?.call(template, mixin.events)
  template
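For example, usage might look like this (a sketch in plain JavaScript; the helper and event names are hypothetical, reusing those from the earlier answer):

// share the same helpers and events across two templates
var mixin = {
  displayName: function () {
    return Meteor.user().profile.name;
  },
  events: {
    'click #me': function (event, template) { /* handle event */ }
  }
};

extendTemplate(Template.header, mixin);
extendTemplate(Template.content, mixin);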
For more details and an example see my other answer.
Recently, I needed the same functionality in my app, so I decided to create my own package that does this job out of the box. Although it's still a work in progress, you can give it a go.
Basically, the entire method is as follows:
// Defines a new method, copyAs
Template.prototype.copyAs = function (newTemplateName) {
  var self = this;

  // Create the new mirror template,
  // copying the old template's render method to keep its markup
  var newTemplate = Template.__define__(newTemplateName, self.__render);
  newTemplate.__initView = self.__initView;

  // Copy helpers
  for (var h in self) {
    if (self.hasOwnProperty(h) && (h.slice(0, 2) !== "__")) {
      newTemplate[h] = self[h];
    }
  }

  // Copy events
  newTemplate.__eventMaps = self.__eventMaps;

  // Assignment
  Template[newTemplateName] = newTemplate;
};
In your new template file (new_template.js), in which you want to extend your abstract one, write the following:
// this copies your abstract template to your new one
Template.<your_abstract_template_name>.copyAs('<your_new_template_name>');
Now, you can simply overwrite your helpers or events (in my case it's the photos helper) by doing the following:
Template.<your_new_template_name>.photos = function () {
  return [];
};
Your new template will refer to the overwritten helper methods, and to the abstract ones that are not overwritten.
Note that an HTML file for the new template is not necessary, as we refer to the abstract one's markup all the time.
The source code is available on GitHub here!
How much can I stretch RequireJS to provide dependency injection for my app? As an example, let's say I have a model that I want to be a singleton. Not a singleton in a self-enforcing getInstance()-type singleton, but a context-enforced singleton (one instance per "context"). I'd like to do something like...
require(['mymodel'], function(mymodel) {
  ...
});
And have mymodel be an instance of the MyModel class. If I were to do this in multiple modules, I would want mymodel to be the same, shared instance.
I have successfully made this work by making the mymodel module like this:
define(function() {
  var MyModel = function() {
    this.value = 10;
  }
  return new MyModel();
});
Is this type of usage expected and common or am I abusing RequireJS? Is there a more appropriate way I can perform dependency injection with RequireJS? Thanks for your help. Still trying to grasp this.
This is not actually dependency injection, but instead service location: your other modules request a "class" by a string "key," and get back an instance of it that the "service locator" (in this case RequireJS) has been wired to provide for them.
Dependency injection would involve returning the MyModel constructor, i.e. return MyModel, then in a central composition root injecting an instance of MyModel into other instances. I've put together a sample of how this works here: https://gist.github.com/1274607 (also quoted below)
This way the composition root determines whether to hand out a single instance of MyModel (i.e. make it singleton scoped) or new ones for each class that requires it (instance scoped), or something in between. That logic belongs neither in the definition of MyModel, nor in the classes that ask for an instance of it.
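A sketch of that choice in the composition root (ConsumerA and ConsumerB are hypothetical consumers of MyModel):

// composition root (sketch): scope is decided here, not in MyModel
define(function (require) {
  var MyModel = require("./MyModel");
  var ConsumerA = require("./ConsumerA"); // hypothetical
  var ConsumerB = require("./ConsumerB"); // hypothetical

  // singleton scope: both consumers share one instance
  var shared = new MyModel();
  var a = new ConsumerA(shared);
  var b = new ConsumerB(shared);

  // instance scope (alternative): each consumer gets its own
  // var a = new ConsumerA(new MyModel());
  // var b = new ConsumerB(new MyModel());
});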
(Side note: although I haven't used it, wire.js is a full-fledged dependency injection container for JavaScript that looks pretty cool.)
You are not necessarily abusing RequireJS by using it as you do, although what you are doing seems a bit roundabout, i.e. declaring a class then returning a new instance of it. Why not just do the following?
define(function () {
  var value = 10;
  return {
    doStuff: function () {
      alert(value);
    }
  };
});
The analogy you might be missing is that modules are equivalent to "namespaces" in most other languages, albeit namespaces you can attach functions and values to. (So more like Python than Java or C#.) They are not equivalent to classes, although as you have shown you can make a module's exports equal to those of a given class instance.
So you can create singletons by attaching functions and values directly to the module, but this is kind of like creating a singleton by using a static class: it is highly inflexible and generally not best practice. However, most people do treat their modules as "static classes," because properly architecting a system for dependency injection requires a lot of thought from the outset that is not really the norm in JavaScript.
Here's https://gist.github.com/1274607 inline:
// EntryPoint.js
define(function () {
  return function EntryPoint(model1, model2) {
    // stuff
  };
});

// Model1.js
define(function () {
  return function Model1() {
    // stuff
  };
});

// Model2.js
define(function () {
  return function Model2(helper) {
    // stuff
  };
});

// Helper.js
define(function () {
  return function Helper() {
    // stuff
  };
});

// composition root, probably your main module
define(function (require) {
  var EntryPoint = require("./EntryPoint");
  var Model1 = require("./Model1");
  var Model2 = require("./Model2");
  var Helper = require("./Helper");

  var entryPoint = new EntryPoint(new Model1(), new Model2(new Helper()));
  entryPoint.start();
});
If you're serious about DI / IOC, you might be interested in wire.js: https://github.com/cujojs/wire
We use a combination of service location (as Domenic describes, but using curl.js instead of RequireJS) and DI (using wire.js). Service location comes in very handy when using mock objects in test harnesses. DI seems the best choice for most other use cases.
Not a singleton in a self-enforcing getInstance()-type singleton, but
a context-enforced singleton (one instance per "context").
I would recommend it only for static objects. It's perfectly fine to have a static object as a module that you load in your require/define blocks. You then create a class with only static properties and functions, giving you the equivalent of the Math object, which has constants like PI, E, and SQRT2 and functions like round(), random(), max(), and min(). Great for creating utility classes that can be injected at any time.
Instead of this:
define(function() {
  var MyModel = function() {
    this.value = 10;
  }
  return new MyModel();
});
which creates an instance, use the pattern for a static object (one where the values are always the same, as the object never gets instantiated):
define(function() {
  return {
    value: 10
  };
});
or
define(function() {
  var CONSTANT = 10;
  return {
    value: CONSTANT
  };
});
If you want to pass an instance (the result of a module that returns new MyModel();), then, within an initialize function, pass a variable that captures the current state / context, or pass in an object containing the state / context information your module needs to know about.
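A sketch of that pattern (the initialize name and the context argument are placeholders):

define(function () {
  var MyModel = function () {
    this.value = 10;
  };

  return {
    // caller supplies whatever state / context this instance should capture
    initialize: function (context) {
      var model = new MyModel();
      model.context = context;
      return model;
    }
  };
});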