I am totally confused. There are many sources out there contradicting each other about the definitions of Dependency Injection and Inversion of Control. Here is the gist of my understanding, without the additional detail that in most cases made things more convoluted for me: Dependency Injection means that instead of my function conjuring up its required dependencies, it is given them as parameters. Inversion of Control means that, for instance, when you use a framework, it is the framework that calls your userland code, and the control is inverted because in the 'default' case your code would be calling specific implementations in a library.
Now as I understand it, somewhere along the way, because my function no longer conjures up its dependencies but receives them as arguments, Inversion of Control magically happens when I use dependency injection like below.
So here is a silly example I wrote for myself to wrap my head around the idea:
getTime.js
function getTime(hour) {
  return `${hour} UTC`
}

module.exports.getTime = getTime
welcome.js
function welcomeUser(name, hour) {
  const getTime = require('./getTime').getTime
  const time = getTime(`${hour} pm`)
  return `Hello ${name}, the time is ${time}!`
}

const result = welcomeUser('Joe', '11:00')
console.log(result)

module.exports.welcomeUser = welcomeUser
welcome.test.js
const expect = require('chai').expect
const welcomeUser = require('./welcome').welcomeUser

describe('Welcome', () => {
  it('Should welcome user', () => {
    // But we would want to test how welcomeUser calls the getTime function
    expect(welcomeUser('Joe', '10:00')).to.equal('Hello Joe, the time is 10:00 pm UTC!')
  })
})
The problem now is that the call to the getTime function is hardwired inside welcome.js, so it cannot be intercepted by a test. What we would like to do is test how the getTime function is called, and we can't do that this way.
The other problem is that the getTime function is pretty much hardcoded, so we can't mock it, and that would be useful because we only want to test the welcomeUser function in isolation, as that is the point of a unit test (the getTime function might still be under development at the same time, for instance).
So the main problem is that the code is tightly coupled, it's harder to test and it is just wreaking havoc all over the place. Now let's use dependency injection:
getTime.js
function getTime(hour) {
  return `${hour} UTC`
}

module.exports.getTime = getTime
welcome.js
const getTime = require('./getTime').getTime

function welcomeUser(name, hour, dependency) {
  const time = dependency(hour)
  return `Hello ${name}, the time is ${time}!`
}

const result = welcomeUser('Joe', '10:00', getTime)
console.log(result)

module.exports.welcomeUser = welcomeUser
welcome.test.js
const expect = require('chai').expect
const welcomeUser = require('./welcome').welcomeUser

describe('welcomeUser', () => {
  it('should call getTime with the right hour value', () => {
    const fakeGetTime = function(hour) {
      expect(hour).to.equal('10:00')
    }
    // 'Joe' as an argument isn't even necessary, but it's nice to leave it there
    welcomeUser('Joe', '10:00', fakeGetTime)
  })

  it('should log the current message to the user', () => {
    // Let's stub the getTime function
    const fakeGetTime = function(hour) {
      return `${hour} pm UTC`
    }
    expect(welcomeUser('Joe', '10:00', fakeGetTime)).to.equal('Hello Joe, the time is 10:00 pm UTC!')
  })
})
As far as I understand, what I did above was Dependency Injection. Multiple sources claim that Dependency Injection is not possible without Inversion of Control. But where does Inversion of Control come into the picture?
Also, what about the regular JavaScript workflow, where you just import the dependencies globally and use them later in your functions, instead of require-ing them inside the functions or passing them in as parameters?
Check Martin Fowler's article on IoC and DI. https://martinfowler.com/articles/injection.html
IoC: A very generic term. This inversion can happen in many ways.
DI: Can be viewed as one branch of this generic idea of IoC.
So when your code specifically implements DI, one would say your code has the general idea of IoC in the flavor of DI. What is really inverted is the default way of looking up behavior (the default way is writing it within the method; the inverted way is getting the behavior injected from outside).
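To make the inversion concrete with the question's own example, here is a minimal sketch contrasting the two directions of lookup:
// Default: the function fetches its own behavior from a specific module.
function welcomeUserDefault(name, hour) {
  const getTime = require('./getTime').getTime
  return `Hello ${name}, the time is ${getTime(hour)}!`
}

// Inverted (DI): the behavior is handed in from outside; the function
// no longer knows or cares where getTime comes from.
function welcomeUserInjected(name, hour, getTime) {
  return `Hello ${name}, the time is ${getTime(hour)}!`
}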
Related
I want to create a JavaScript GitHub Action and use Jest for testing purposes. Based on the docs I started parsing the input, given the following example code:
import { getInput } from '@actions/core';
const myActionInput = getInput('my-key', { required: true });
Running this code during development throws the following error
Input required and not supplied: my-key
as expected, because the code is not running inside a GitHub Actions environment. But is it possible to create tests for that? E.g.
describe('getMyKey', () => {
  it('throws if the input is not present.', () => {
    expect(() => getMyKey()).toThrow();
  });
});
How can I "fake" / mock such an environment with a context to ensure my code works as expected?
There are several approaches you can take.
Set Inputs Manually
Inputs are passed to actions as environment variables with the prefix INPUT_ and uppercased. Knowing this, you can just set the respective environment variable before running the test.
In your case, the input my-key would need to be present as the environment variable named INPUT_MY-KEY.
This should make your code work:
describe('getMyKey', () => {
  it('does not throw when the input is present.', () => {
    process.env['INPUT_MY-KEY'] = 'my-value';
    expect(() => getMyKey()).not.toThrow();
  });
});
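If you go this route, clean the variable up after each test so one test can't leak state into the next (a minimal sketch using Jest's afterEach hook):
afterEach(() => {
  // Remove the injected input so later tests start from a clean environment
  delete process.env['INPUT_MY-KEY'];
});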
Use Jest's Mocking
You could use jest.mock or jest.spyOn and thereby mock the behaviour of getInput.
Docs: ES6 Class Mocks
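For instance, a minimal sketch using jest.mock's automocking; the getMyKey module under test is a hypothetical name here:
// Automock @actions/core so core.getInput becomes a programmable jest.fn()
jest.mock('@actions/core');

const core = require('@actions/core');
const { getMyKey } = require('./getMyKey'); // hypothetical module under test

describe('getMyKey', () => {
  it('throws if the input is not present.', () => {
    core.getInput.mockImplementation(() => {
      throw new Error('Input required and not supplied: my-key');
    });
    expect(() => getMyKey()).toThrow();
  });
});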
Abstract Action
I don't like setting global environment variables, because one test might affect another depending on the order they are run in.
Also, I don't like mocking using jest.mock, because it feels like a lot of magic and I usually spend too much time getting it to do what I want. Issues are difficult to diagnose.
What seems to bring all the benefits with a tiny bit more code is to split off the action into a function that can be called by passing in the "global" objects like core.
// index.js
import * as core from '@actions/core';
import { action } from './action';

action(core);
// action.js
export function action(core) {
  const myActionInput = core.getInput('my-key', { required: true });
  // ...
}
This allows you to nicely test your action like so:
// action.test.js
import { action } from './action';

describe('action', () => {
  it('gets required key from input', () => {
    const core = {
      getInput: jest.fn().mockReturnValueOnce('my-value')
    };
    action(core);
    expect(core.getInput).toHaveBeenCalledWith('my-key', { required: true });
  });
});
Now you could say that we're no longer testing whether the action throws an error if the input is not present, but consider what you would really be testing there: whether @actions/core throws an error when an input is missing. That is not your own code and therefore, in my opinion, not worth testing. All you want to make sure is that you're calling the getInput function correctly according to the contract (i.e. the docs).
What are the consequences of exporting functions from a module like this:
const foo = () => {
  console.log('foo')
}

const bar = () => {
  console.log('bar')
}

const internalFunc = () => {
  console.log('internal function not exported')
}

export const FooBarService = {
  foo,
  bar
}
The only reason I've found is that it prevents module bundlers from performing tree-shaking of the exports.
However, exporting a module this way provides a few nice benefits like easy unit test mocking:
// No need for jest.mock('./module')
// Easy to mock single function from module
FooBarService.foo = jest.fn().mockReturnValue('mock')
Another benefit is that it gives context about where the module is used (simply "find all references" on FooBarService).
A slightly opinionated benefit is that when reading consumer code, you can instantly see where a function comes from, because it is prefixed with FooBarService..
You can get a similar effect by using import * as FooBarService from './module', but then the name of the service is not enforced and could differ among consumers.
So for the sake of argument, let's say I am not too concerned with the lack of tree-shaking. All code is used somewhere in the app anyway and we do not do any code-splitting. Why should I not export my modules this way?
Benefits of using individual named exports are (a short sketch contrasting the styles follows this list):
they're more concise to declare, and let you quickly discover right at their definition whether something is exported or not (in the form export const … = … or export function …(…) { … }).
they give the consumer of the module the choice to import them individually (for concise usage) or in a namespace. Enforcing the use of a namespace is rare (and could be solved with a linter rule).
they are immutable. You mention the benefit of mutable objects (easier mocking), but at the same time this makes it easier to accidentally (or deliberately) overwrite them, which causes hard-to-find bugs or at least makes reasoning harder.
they are proper aliases, with hoisting across module boundaries. This is useful in certain circular-dependency scenarios (which should be few, but still). Also it allows "find all references" on individual exports.
(you already mentioned tree shaking, which kinda relies on the above properties)
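A small sketch of the two consumption styles from the second bullet (file names are illustrative):
// module.js: individual named exports
export const foo = () => console.log('foo')
export const bar = () => console.log('bar')

// consumer.js: each consumer picks its preferred style
import { foo } from './module'             // concise individual import
import * as FooBarService from './module'  // namespace import

foo()
FooBarService.bar()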
Here are a couple of other benefits to multiple named exports (besides tree-shaking):
Circular dependencies are only possible when exports happen throughout the module. When all exporting happens at the end, you can't have circular dependencies (which should be avoided anyway).
Sometimes it's nice to only import specific values from a module, instead of the entire module at once. This is especially true when importing constants, exception classes, etc. Sometimes it can be nice to just extract a couple of functions too, especially when those functions get used often. Multiple named exports make this a little easier to do.
It's a more common way to do things, so if you're making an external API for others to consume, I would just stick with multiple named exports.
There may be other reasons - I can't think of anything else, but other answers might mention some.
With all of that being said, you'll notice that none of these arguments are very strong. So, if you find value in exporting an object literally everywhere, go for it!
One potential issue is that it'll expose everything, even if some functions are intended for use only inside the module, and nowhere else. For example, if you have
// service.js
const foo = () => {
  console.log('foo')
}

const bar = () => {
  console.log('bar')
}

const serviceInternalFn = () => {
  // do stuff
}

export const FooBarService = {
  foo,
  bar,
  serviceInternalFn,
}
then consumers will see that serviceInternalFn is exported, and may well try to use it, even if you aren't intending them to.
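The fix in the module world is simply to leave internals out of the exported object:
// service.js: serviceInternalFn stays private to this file
export const FooBarService = {
  foo,
  bar,
}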
For somewhat similar reasons, when writing a library without modules, you usually don't want to do something like
<script>
  const doSomething = () => {
    // ...
  };

  const doSomethingInternal = () => {
    // ...
  };

  window.myLibrary = {
    doSomething
  };
</script>
because then anything will be able to use doSomethingInternal, which may not be intended by the script-writer and might cause bugs or errors.
Rather, you'd want to deliberately expose only what is intended to be public:
<script>
  window.myLibrary = (() => {
    const doSomething = () => {
      // ...
    };

    const doSomethingInternal = () => {
      // ...
    };

    return {
      doSomething
    };
  })();
</script>
I have a network module to ping a legacy database with data in multiple formats, and I want to standardize it here in the network module before passing it into the application, so my application can expect a certain format of data (I don't want the old, poor formatting polluting my business logic). I'm struggling with how to pass mock data through this network module, specifically as it relates to the formatter. Here's what I mean:
// User API Network Module
// UserAPI.ts
export const getUser = (uid: string, callback: (response: GetUserResponse) => void): void => {
  // Do the network call here and format the data into a TypeScript type
  // matching the GetUserResponse structure my business logic expects
  callback(formattedData)
}
In my test file, I can mock this call easily with:
import { getUser } from "./UserAPI"

jest.mock("./UserAPI", () => ({
  getUser: (uid: string, callback: (response: GetUserResponse) => void) => {
    const mockedUserData = require("./mockUser.json")
    // But how do I format it here?
    return formattedMockedUserData
  },
}))
I can create a formatter function in my UserAPI.ts file, export it, and run it in the jest mock, but I'm wondering if that's a best practice because it technically leaks the UserAPI implementation details. And I point that out only because no other file cares about how UserAPI formats things except UserAPI. If I have to leak it for testing purposes, I'll do that. But is there a better way to mock the network call and run it through a formatter without exposing additional implementation details?
And please be gentle on my typescript - I come from both a JS and strongly typed background, but this is my first venture into using typescript :)
Even though it's not used in multiple places, extract it - following the Single Responsibility Principle - into its own construct. You then test all formatting logic in a Formatter test, not in the User API test. Additionally, you can test the integration of the Formatter with the User API in an integration test.
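A minimal sketch of that split, with illustrative field names (shown in plain JS; the real legacy fields will differ):
// userFormatter.js: owns all knowledge of the legacy format
const formatUser = (raw) => ({
  // hypothetical mapping from legacy fields to the shape the app expects
  id: raw.USER_ID,
  name: raw.USER_NAME,
})

module.exports.formatUser = formatUser
UserAPI then imports formatUser and stays a thin network wrapper; the Formatter test feeds mockUser.json straight into formatUser, so nothing about the formatting has to leak through the UserAPI surface.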
I'm working with Webpack modules to keep things organized in my projects, but something very annoying is having to explicitly specify all the variables that the module might need, instead of just having it search for them in the calling function's scope. Is there a better way to do it? Example:
main.js
import { logMonth } from "./helpers";

document.addEventListener("DOMContentLoaded", () => {
  let month = "September";
  logMonth();
});
helpers.js
let logMonth = () => {
  console.log(month)
}
This will produce an error since logMonth() doesn't have access to the month variable.
This is an extremely simplified example, but for functions that need many variables, it can get pretty ugly to pass all the required arguments that the function might need.
My question is: Is there a way to make modules have access to the variables of the calling scope instead of explicitly passing them?
You could, but why would you want to? Modules are designed to prevent this. Always prefer pure functions; they're way easier to debug once the app becomes complicated.
Otherwise you end up searching multiple nested scopes across multiple modules for a bug; optimally you only want to be looking in the module that threw the error, not in every scope it has access to.
So logMonth = month => console.log( month ); and logMonth( 'September' ); is preferred.
You can use an object if you need to send multiple parameters to a function.
That way you do not have to change the function call signature in all places, you just add another (optional) parameter to the object:
logMonths = ({ year, month, day }) => { /* ...do stuff... */ }
This will work both with logMonths({ month: 'september' }) and with logMonths({ month: 'september', year: '2019' }), so you never have to change a call like logMonths('september') into logMonths(null, 'september') or logMonths(2019, 'september') everywhere you used logMonths() before it had a year parameter.
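A small runnable sketch of that pattern:
// Destructure an options object; every field is effectively optional
const logMonths = ({ year, month, day } = {}) => {
  console.log(year, month, day)
}

logMonths({ month: 'september' })               // undefined september undefined
logMonths({ month: 'september', year: '2019' }) // 2019 september undefined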
I actually discovered a sneaky way to do this. I'll demonstrate with two JavaScript files. The first will be the top level file, and the second will be the module.
top-level.js
import { PipeLine } from './PipeLine.js';

const App = {
  var1: 'asdf',
  var2: 'dfgh',
  method1: function() {
    // do something...
  },
};

const pipeLine = new PipeLine(App);
PipeLine.js
export class PipeLine {
  constructor(app) {
    this.app = app;
    // Now do what you will with all the properties and methods of app.
  }
}
I'm new to Node.js development, coming from the .NET world, and I'm searching the web for best practices regarding DI / DIP in JavaScript.
In .NET I would declare my dependencies in the constructor, whereas in JavaScript I see that a common pattern is to declare dependencies at the module level via a require statement.
To me it looks like when I use require I'm coupled to a specific file, while using a constructor to receive my dependency is more flexible.
What would you recommend as a best practice in JavaScript? (I'm looking for the architectural pattern, not an IoC technical solution.)
Searching the web I came across this blog post (which has some very interesting discussion in the comments):
https://blog.risingstack.com/dependency-injection-in-node-js/
It summarizes my conflict pretty well.
Here's some code from the blog post so you understand what I'm talking about:
// team.js
var User = require('./user');

function getTeam(teamId) {
  return User.find({teamId: teamId});
}

module.exports.getTeam = getTeam;
A simple test would look something like this:
// team.spec.js
var Team = require('./team');
var User = require('./user');

describe('Team', function() {
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];

    this.sandbox.stub(User, 'find', function() {
      return Promise.resolve(users);
    });

    var team = yield Team.getTeam();

    expect(team).to.eql(users);
  });
});
VS DI:
// team.js
function Team(options) {
  this.options = options;
}

Team.prototype.getTeam = function(teamId) {
  return this.options.User.find({teamId: teamId});
};

function create(options) {
  return new Team(options);
}

module.exports.create = create;
test:
// team.spec.js
var Team = require('./team');

describe('Team', function() {
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];
    var fakeUser = {
      find: function() {
        return Promise.resolve(users);
      }
    };

    var team = Team.create({
      User: fakeUser
    });

    var result = yield team.getTeam();

    expect(result).to.eql(users);
  });
});
Regarding your question: I don't think that there is a common practice in the JS community. I've seen both types in the wild, require modifications (like rewire or proxyquire) and constructor injection (often using a dedicated DI container). However, personally, I think not using a DI container is a better fit with JS. And that's because JS is a dynamic language with functions as first-class citizens. Let me explain that:
Using DI containers enforces constructor injection for everything. It creates a huge configuration overhead, mainly for two reasons:
Providing mocks in unit tests
Creating abstract components that know nothing about their environment
Regarding the first argument: I would not adjust my code just for my unit tests. If it makes your code cleaner, simpler, more versatile and less error-prone, then go for it. But if your only reason is your unit test, I would not take the trade-off. You can get pretty far with require modifications and monkey patching. And if you find yourself writing too many mocks, you should probably not write a unit test at all, but an integration test. Eric Elliott has written a great article about this problem.
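For illustration, a minimal sketch of the require-modification approach using proxyquire, reusing the team.js example from above (the stub shape is an assumption):
// team.spec.js: swap the real ./user module for a stub at require time
var proxyquire = require('proxyquire');

var fakeUser = {
  find: function() {
    return Promise.resolve([{id: 1}, {id: 2}]);
  }
};

// team.js now sees fakeUser wherever it does require('./user')
var Team = proxyquire('./team', { './user': fakeUser });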
Regarding the second argument: This is a valid argument. If you want to create a component that only cares about an interface, but not about the actual implementation, I would opt for a simple constructor injection. However, since JS does not force you to use classes for everything, why not just use functions?
In functional programming, separating stateful IO from actual processing is a common paradigm. For instance, if you're writing code that is supposed to count file types in a folder, one could write this (especially when coming from a language that enforces classes everywhere):
const fs = require("fs");

class FileTypeCounter {
  countFileTypes(dirname, callback) {
    fs.readdir(dirname, function (err) {
      if (err) return callback(err);
      // recursively walk all folders and count file types
      // ...
      callback(null, fileTypes);
    });
  }
}
Now if you want to test that, you need to change your code in order to inject a fake fs module:
class FileTypeCounter {
  constructor(fs) {
    this.fs = fs;
  }
  countFileTypes(dirname, callback) {
    this.fs.readdir(dirname, function (err) {
      // ...
    });
  }
}
Now, everyone who is using your class needs to inject fs into the constructor. Since this is boring and makes your code more complicated once you have long dependency graphs, developers invented DI containers where they can just configure stuff and the DI container figures out the instantiation.
However, what about just writing pure functions?
function fileTypeCounter(allFiles) {
  // count file types
  return fileTypes;
}

function getAllFilesInDir(dirname, callback) {
  // recursively walk all folders and collect all files
  // ...
  callback(null, allFiles);
}

// now let's compose both functions
function getAllFileTypesInDir(dirname, callback) {
  getAllFilesInDir(dirname, (err, allFiles) => {
    callback(err, !err && fileTypeCounter(allFiles));
  });
}
Now you have two super-versatile functions out of the box, one which does IO and one which processes the data. fileTypeCounter is a pure function and super-easy to test. getAllFilesInDir is impure, but such a common task that you'll often find it already on npm, where other people have written integration tests for it. getAllFileTypesInDir just composes your functions with a little bit of control flow. This is a typical case for an integration test where you want to make sure that your whole application is working correctly.
By separating your code between IO and data processing, you won't find the need to inject anything at all. And if you don't need to inject anything, that's a good sign. Pure functions are the easiest thing to test and are still the easiest way to share code between projects.
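To make that concrete, here is a hypothetical fileTypeCounter implementation together with a plain assertion; the { extension: count } result shape is an assumption:
const assert = require('assert');

// Hypothetical implementation: count files per extension
function fileTypeCounter(allFiles) {
  return allFiles.reduce((counts, file) => {
    const ext = file.split('.').pop();
    counts[ext] = (counts[ext] || 0) + 1;
    return counts;
  }, {});
}

// No mocks, no setup: feed data in, assert on the result
assert.deepStrictEqual(
  fileTypeCounter(['a.js', 'b.js', 'c.css']),
  { js: 2, css: 1 }
);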
In the past, DI containers as we know them from Java and .NET did not exist. With Node 6 came ES6 Proxies which opened up the possibility of such containers - Awilix for example.
So let's rewrite your code to modern ES6.
export default class Team {
  constructor ({ User }) {
    this.User = User
  }

  getTeam (teamId) {
    return this.User.find({ teamId: teamId })
  }
}
And the test:
import Team from './Team'

describe('Team', function() {
  it('#getTeam', async function () {
    const users = [{id: 1}, {id: 2}]
    const fakeUser = {
      find: function() {
        return Promise.resolve(users)
      }
    }

    const team = new Team({
      User: fakeUser
    })

    const result = await team.getTeam()

    expect(result).to.eql(users)
  })
})
Now, using Awilix, let's write our composition root:
import { createContainer, asClass } from 'awilix'
import Team from './Team'
import User from './User'
const container = createContainer()
  .register({
    Team: asClass(Team),
    User: asClass(User)
  })

// Grab an instance of Team
const team = container.resolve('Team')
// Alternatively...
// const team = container.cradle.Team

// Use it
team.getTeam(123) // calls User.find()
That's as simple as it gets; Awilix can handle object lifetimes as well, just like the .NET / Java containers do. This lets you do cool stuff like injecting the current user into your services, instantiating your services once per HTTP request, etc.
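A hedged sketch of that per-request pattern, based on Awilix's createScope and lifetime helpers (double-check the exact API against the Awilix docs for your version):
import { createContainer, asClass, asValue } from 'awilix'
import Team from './Team'

const container = createContainer()
  .register({
    // one Team instance per scope instead of per resolution
    Team: asClass(Team).scoped()
  })

function handleRequest(req, res) {
  // each request gets its own scope, and therefore its own Team
  const scope = container.createScope()
  scope.register({
    currentUser: asValue(req.user)
  })
  const team = scope.resolve('Team')
  // ... use team for this request only
}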