In a Meteor app, I need to test some client code that has statements such as
Meteor.call('foo', param1, param2, (error, result) => { .... });
And in these methods I have security checks to make sure they can only be called by authenticated users. However, these tests all fail because no user is authenticated during the test run.
In each server method, I check the user like this:
if (!Roles.userIsInRole(this.userId, [ ...roles ], group)) {
  throw new Meteor.Error('restricted', 'Access denied');
}
I have read that we should export the server methods and test them directly, and I actually do this for server-side method tests, but that is not possible here, since I need to test client code that depends on Meteor.call.
I would also certainly not want to have if (Meteor.isTest || Meteor.isAppTest) { ... } all over the place.
I thought of perhaps wrapping my exported methods like this:
export default function methodsWrapper(methods) {
  Object.keys(methods).forEach(method => {
    const fn = methods[method];
    methods[method] = (...args) => {
      const user = Factory.create('user', { roles: { 'default': [ 'admin' ] } });
      return fn.call({ userId: user._id }, ...args);
    };
  });
}
But it only works when calling the methods directly.
I'm not sure how I can test my client code with correct security validations. How can I test my client code with authenticated users?
Part I: Making the function an exported function
You just need to register the exported function as a Meteor method as well.
imports/api/foo.js
export const foo = function(param1, param2) {
  if (!Roles.userIsInRole(this.userId, [ ...roles ], group)) {
    throw new Meteor.Error('restricted', 'Access denied');
  }
  // ...and other code
};
This method can then be imported in your server script:
imports/startup/methods.js
import {foo} from '../api/foo.js'
Meteor.methods({
  'foo': foo
});
It is then available to be called via Meteor.call('foo', ...). Note that the callback does not have to be declared in foo's function signature, since Meteor wraps it automatically.
imports/api/foo.tests.js
import {foo} from './foo.js'
if (Meteor.isServer) {
  // ... your test setup
  const result = foo(...) // call foo directly in your test.
}
This only covers the server. For testing on the client there is no way around calling it via Meteor.call and testing the callback result. So on the client you would still test like this:
imports/api/foo.tests.js
if (Meteor.isClient) {
  // ... your test setup
  Meteor.call('foo', ..., function(err, res) {
    // assert no err and res...
  });
}
Additional info:
I would advise you to use mdg:validated-method, which provides the same functionality as above PLUS more sophisticated control over method execution, schema validation, and flexibility. It is also documented well enough to let you implement the requirement described above.
See: https://github.com/meteor/validated-method
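For illustration, here is a rough sketch of what foo could look like as a validated method; the schema fields, role names and group are assumptions of mine, not taken from your code:
import { ValidatedMethod } from 'meteor/mdg:validated-method';
import SimpleSchema from 'simpl-schema';

export const foo = new ValidatedMethod({
  name: 'foo',
  validate: new SimpleSchema({
    param1: { type: String },
    param2: { type: String },
  }).validator(),
  run({ param1, param2 }) {
    if (!Roles.userIsInRole(this.userId, [ 'admin' ], 'default')) {
      throw new Meteor.Error('restricted', 'Access denied');
    }
    // ...and other code
  },
});
The client then calls it with foo.call({ param1, param2 }, (err, res) => { ... }) instead of Meteor.call('foo', ...).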
Part II: Running your integration tests with user auth
You have two options here to test your user authentication. Both have advantages and disadvantages, and there are debates about which is the better approach. No matter which one you choose, you need to write a server method that adds an existing user to a given set of roles.
Approach 1 - Mocking Meteor.user() and Meteor.userId()
This is basically described/discussed in the following resources:
A complete gist example
An example of using either mdg:validated-method or plain methods
Using a sinon spy; below it there is also an answer from myself that mocks it manually, but that may not apply to your case because it is client-only. Using sinon requires the following package: https://github.com/practicalmeteor/meteor-sinon
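A rough sketch of that mocking approach (assuming the practicalmeteor:sinon package and that the code under test reads Meteor.userId()/Meteor.user(); the factory and roles are taken from your question):
import { sinon } from 'meteor/practicalmeteor:sinon';

describe('foo', function () {
  let userId;

  beforeEach(function () {
    userId = Factory.create('user', { roles: { 'default': [ 'admin' ] } })._id;
    sinon.stub(Meteor, 'userId').returns(userId);
    sinon.stub(Meteor, 'user').returns({ _id: userId });
  });

  afterEach(function () {
    Meteor.userId.restore();
    Meteor.user.restore();
  });

  // ... your assertions that call foo here
});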
Approach 2 - Copying the "real" application behavior
In this case you completely test without mocking anything. You create real users and use their data in other tests as well.
In any case you need a server method that creates a new user with a given name and roles. Note that it should only live in a file with a .tests.js name, otherwise it would be a security risk.
/imports/api/accounts/accounts.tests.js
Meteor.methods({
  createTestUser(name, password, roles, group) {
    const userId = Accounts.createUser({ username: name, password: password });
    Roles.addUserToRoles(userId, roles, group);
    return userId;
  },
});
Note: I have often heard that this is bad testing, which I disagree with. Integration tests in particular should mimic the real behavior as closely as possible and should use less mocking/spying than unit tests do.
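A sketch of how a client-side app test could then use that method to run your Meteor.call as an authenticated user (the username, password, roles and params are placeholders):
// e.g. imports/api/foo.app-tests.js
if (Meteor.isClient) {
  describe('foo as an authenticated admin', function () {
    before(function (done) {
      Meteor.call('createTestUser', 'test-admin', 'secret', [ 'admin' ], 'default', (err) => {
        if (err) return done(err);
        Meteor.loginWithPassword('test-admin', 'secret', done);
      });
    });

    it('does not throw the restricted error', function (done) {
      Meteor.call('foo', 'param1', 'param2', (err, res) => {
        // assert no err and res...
        done(err);
      });
    });
  });
}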
Related
I have Express routing set up with multiple routes, each using a different Oracle connection. I have to call initOracleClient prior to getConnection; however, I get an error (Error: NJS-077: Oracle Client library has already been initialized) when I call initOracleClient in both routes. I've tried moving the initOracleClient call to different locations in the structure, both at the app level and the route level. Where in a REST MVC structure do you initialize the client?
A REST MVC application typically has some supporting infrastructure. That is to say, MVC is not a complete blueprint on how to structure the entirety of your program's code - only a general rule of thumb of how to assign certain responsibilities.
The library you're using needs initialization, and apparently this code should execute only once. There are several ways to go about it:
Initialize the client once before starting up the express server, and then pass in the ready-to-use client for use by route handlers. This may be the easiest to use, but must necessarily delay the .listen() call - so the time until your application starts responding to HTTP may be longer.
Use a pattern known as the Singleton to allow route handlers to initialize the client, but only execute the initialization once under the hood. Depending on how exactly the library is initialized (does it return a Promise? does it use a callback?), this may require some careful design - for example, you may need to store and return a Promise instance, so multiple consumers will be calling .then() on the same Promise.
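For the second option, a minimal sketch of the memoization idea (this assumes node-oracledb's synchronous initOracleClient; if your init returned a Promise you would store and reuse that Promise instead):
// oraClient.js
const oracledb = require('oracledb');

let initialized = false;

function ensureOracleClient() {
  // runs the one-time initialization on first use, then becomes a no-op
  if (!initialized) {
    oracledb.initOracleClient({ libDir: '/usr/local/lib/instantclient_19_8' });
    initialized = true;
  }
  return oracledb;
}

module.exports = { ensureOracleClient };
Each route handler can then call ensureOracleClient() before getConnection() without hitting NJS-077.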
I implemented the Singleton pattern as suggested:
import oracledb from 'oracledb';

class PrivateOraInitSingleton {
  constructor() {
    try {
      oracledb.initOracleClient({ libDir: '/usr/local/lib/instantclient_19_8' });
    } catch (err) {
      console.error(err);
      process.exit(1);
    }
  }
}

class OraInitSingleton {
  constructor() {
    throw new Error('Use OraInitSingleton.getInstance()');
  }

  static getInstance() {
    if (!OraInitSingleton.instance) {
      OraInitSingleton.instance = new PrivateOraInitSingleton();
    }
    return OraInitSingleton.instance;
  }
}

export default OraInitSingleton;
Usage:
const object = OraInitSingleton.getInstance();

let connectionPromise;
try {
  connectionPromise = oracledb.getConnection({
    user          : process.env.DB_USER,
    password      : process.env.DB_PASSWORD,
    connectString : process.env.CONNECT_STRING
  });
} catch (err) {
  console.error(err);
  process.exit(1);
}
I do not understand the difference between a resolver and a service in a nestJS application using graphQl and mongoDB.
I found examples like this, where the resolver just calls a service, so the resolver functions are always small as they just call a service function. But with this usage I don't understand the purpose of the resolver at all...
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    return this.taskService.deleteTask(id);
  }
}
@Injectable()
export class TasksService {
  async deleteTask(id: string) {
    // Define collection, get some data for any checking and then update dataset
    const Tasks = this.db.collection('tasks')
    const data = await Tasks.findOne({ _id: id })
    let res
    if (data.checkSomething) res = Tasks.updateOne({ _id: id }, { $set: { delete: true } })
    return res
  }
}
On the other hand, I could put all the service logic into the resolver and just leave the MongoDB part in the service, but then the services are tiny and just wrap a simple MongoDB call. So why shouldn't I put that into the resolver as well?
@Resolver('Tasks')
export class TasksResolver {
  constructor(
    private readonly taskService: TasksService
  ) {}

  @Mutation(type => WriteResult)
  async deleteTask(
    @Args('id') id: string,
  ) {
    const data = await this.taskService.findOne(id)
    let res
    if (data.checkSomething) {
      const update = { $set: { delete: true } }
      res = this.taskService.updateOne(id, update)
    }
    return res
  }
}
@Injectable()
export class TasksService {
  findOne(id: string) {
    const Tasks = this.db.collection('tasks')
    return Tasks.findOne({ _id: id })
  }

  updateOne(id: string, update) {
    const Tasks = this.db.collection('tasks')
    return Tasks.updateOne({ _id: id }, update)
  }
}
What is the correct usage of the resolver and service? In both cases one part keeps nearly a one liner for each function, so why should I split that at all?
You're right that it's a pretty linear call and there isn't much logic behind it, but the idea is to separate the concerns of each class. The resolver, much like a REST or RPC controller, should act as a gateway to your business logic, so that the logic can be easily re-used or re-called in other parts of the server. If you have a hybrid server with RPC or a REST + GQL combo, you could re-use the service to ensure both REST and GQL get the same return.
In the end, it comes down to what you want to do, but separating the resolver from the service (having thin gateways and fat logic classes) is Nest's opinion on the right design.
Your service fetches the data from the database. Your resolver delivers that data to the user. Sometimes the data you deliver to the user is not the same as the data in the database, so the resolver shapes it to the user's needs before sending it.
In terms of best practice, the resolver or controller should be thought of as the manager. In this case (as is stereotypical), the manager shouldn't be doing any of the actual work except for telling the workers what to do. The manager determines who (which worker/service) should do the work. Sometimes it might be two or more workers/services. They specialize in telling "who" should do "what".
The workers on the other hand, execute on the actual task. In your case, another option would be to have a database "repository" for database commands like findOne, findOneByX, updateOne; and also a service to handle the actual logic of the task. So the service worker takes the instructions from the manager(resolver) and only uses their logic to tell their database fetching repository buddies what to fetch.
In this way, the manager manages who should do the task. The service contains the logic and tells the other repository methods focused on database fetches what to fetch.
// So you would have...
task.resolver.ts
task.service.ts
task.repository.ts
The task.resolver will contain one line that calls the task.service method
The task.service will contain the logic to manage the task
The task.repository will contain methods like what you have in your suggested task.service - essentially database only methods
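A rough sketch of that split, reusing the deleteTask example (how the MongoDB Db handle gets injected into the repository is an assumption about your wiring, not something NestJS prescribes):
@Injectable()
export class TasksRepository {
  constructor(private readonly db: Db) {}

  findOne(id: string) {
    return this.db.collection('tasks').findOne({ _id: id })
  }

  updateOne(id: string, update) {
    return this.db.collection('tasks').updateOne({ _id: id }, update)
  }
}

@Injectable()
export class TasksService {
  constructor(private readonly tasksRepository: TasksRepository) {}

  // the business rule lives here, not in the resolver or the repository
  async deleteTask(id: string) {
    const data = await this.tasksRepository.findOne(id)
    if (data.checkSomething) {
      return this.tasksRepository.updateOne(id, { $set: { delete: true } })
    }
  }
}

@Resolver('Tasks')
export class TasksResolver {
  constructor(private readonly taskService: TasksService) {}

  @Mutation(type => WriteResult)
  async deleteTask(@Args('id') id: string) {
    // one line: delegate to the service
    return this.taskService.deleteTask(id)
  }
}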
I have a network module that pings a legacy database returning data in multiple formats, and I want to standardize it there in the network module before passing it into the application, so my application can expect a certain format of data (I don't want the old, poor formatting polluting my business logic). I'm struggling with how to pass mock data through this network module, specifically as it relates to the formatter. Here's what I mean:
// User API Network Module
// UserAPI.ts
export const getUser = (uid: string, callback: (res: GetUserResponse) => void): void => {
  // Do the network call here and format the data into a TypeScript type
  // matching the GetUserResponse structure my business logic expects
  callback(formattedData)
}
In my test file, I can mock this call easily with:
import { getUser } from "./UserAPI"
jest.mock("./UserAPI", () => ({
  getUser: (uid: string, callback: (res: GetUserResponse) => void) => {
    const mockedUserData = require("./mockUser.json")
    // But how do I format it here?
    return formattedMockedUserData
  },
}))
I can create a formatter function in my UserAPI.ts file, export it, and run it in the jest mock, but I'm wondering if that's a best practice because it technically leaks the UserAPI implementation details. And I point that out only because no other file cares about how UserAPI formats things except UserAPI. If I have to leak it for testing purposes, I'll do that. But is there a better way to mock the network call and run it through a formatter without exposing additional implementation details?
And please be gentle on my typescript - I come from both a JS and strongly typed background, but this is my first venture into using typescript :)
Even though it's not used in multiple places, extract it into its own construct, following the Single Responsibility Principle. You test all formatting logic in a Formatter test, not in the UserAPI test. Additionally, you can test the integration of the formatter with UserAPI in an integration test.
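A sketch of that extraction (the GetUserResponse shape, the legacy field names and the fetchLegacyUser helper are all placeholders of mine):
// UserFormatter.ts - a pure function, unit-tested on its own
export type GetUserResponse = { id: string; name: string }

export const formatUser = (raw: any): GetUserResponse => ({
  id: raw.USR_ID,      // assumed legacy field names
  name: raw.USR_NAME,
})

// UserAPI.ts - only wires the network call to the formatter
import { formatUser, GetUserResponse } from "./UserFormatter"

export const getUser = (uid: string, callback: (res: GetUserResponse) => void): void => {
  fetchLegacyUser(uid).then(raw => callback(formatUser(raw))) // fetchLegacyUser is hypothetical
}

// UserFormatter.test.ts - the formatting logic gets its own test
import { formatUser } from "./UserFormatter"

test("formats a legacy user", () => {
  expect(formatUser({ USR_ID: "1", USR_NAME: "Ada" })).toEqual({ id: "1", name: "Ada" })
})
The mock in your UserAPI test can then run mockUser.json through formatUser without leaking anything new, because the formatter is already a public module of its own.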
I embraced test-driven development in the past year while learning C# (those seem to go hand in hand). In JavaScript, however, I am struggling to find a good workflow for TDD. This is mainly due to the many frameworks which seemingly treat testing as a second-class citizen.
As an example consider a class worker. This class would have some functionality to act upon a database. So how would I write unit tests for the functionality of this class?
In C# (and the rest of the C/Java family) I'd write this class in such a way that the constructor takes a database-connection parameter. During test runs, the object is then constructed with a mock database-connection object instead of the real one. Thus no modification of the source.
In Python a similar approach can be used; apart from providing a mock object to the constructor to handle HAS_A dependencies, we can also use dependency injection to mock IS_A dependencies.
Now apply this to JavaScript, and Sails.js in particular (though a similar problem occurs with Sencha and other frameworks). It seems that the code is so tightly coupled to the library/framework that I can't create manual stubs/mocks - other than by actually using a pre-run task to modify the source/config.js?
In Sails an object (say a worker, a controller) has to reside in a specific folder to work, and it "connects" automatically to the database without me providing any notion of a database object (thus preventing me from actually supplying my own object).
Say I have a database with a table "Students"; then a controller would look something like this (with Students being a model defined in api/models):
const request = require('request');

module.exports = {
  updateData: function (req, res) {
    let idx = req.params.jobNumber;
    Students.find({ Nr: idx })
      .exec(function (err, result) {
        //....
      });
  },
};
So how would I load above function into a (mocha) test? And how would I decouple the database (used implicitly by sails) so that I can mock the functionality? - And what should I actually mock?
I of course don't wish to do integration tests, so I shouldn't build a "development database" as I don't wish to test the connection, I wish to test the controller functions.
In the documentation, they provide a nice quick example of how to set up testing using Mocha: https://sailsjs.com/documentation/concepts/testing
In the bootstrap.test.js file, all they're doing is lifting and lowering your application with Sails, just so your application has access to controllers/models/etc. within its test environment. They also show how to test individual controllers, which is essentially just making requests that hit the endpoints to fire off the controllers' actions. To avoid testing the full lifecycle of a request, you can just require the controller file within a *.test.js file and test any exported action. Remember, though, that Sails builds the request and response objects that get passed to the controllers. So, if you want all of the correct data and want those objects to be valid, it's best to just let Sails handle it and only make a request to the endpoint, unless you know exactly how to build the request and response objects. But that's the point of a framework: you use it as intended, and test against/with it. You don't use your own version of how it may work. TDD exists in all languages and frameworks; you just need to fit it within your technology.
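For example, a controller test that goes through the endpoint could look roughly like this (assuming supertest is installed and a route is bound to the updateData action; adjust the path to match your config/routes.js):
// test/integration/controllers/StudentsController.test.js
const supertest = require('supertest');

describe('StudentsController.updateData', function () {
  it('responds successfully for a known job number', function (done) {
    // sails.hooks.http.app is the underlying Express app lifted in bootstrap.test.js
    supertest(sails.hooks.http.app)
      .get('/students/updateData/123')
      .expect(200, done);
  });
});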
If you don't want to use a database for your test environment, you can tell it to use the sails-disk adapter by creating an environment file under config/env/ for the test environment and forcing that environment to use sails-disk.
For example...
config/env/test.js --> test environment file
module.exports = {
  models: {
    connection: 'localDiskDb',
    migrate: 'drop',
  },
  port: 1337,
  host: '127.0.0.1',
};
In config/connections.js, add the below to the connections object (if not already there)...
localDiskDb: {
  adapter: 'sails-disk'
},
And finally, we need to tell it to use that environment when running the tests. Modify test/bootstrap.test.js like the following...
var sails = require('sails');

before(function(done) {
  // Increase the Mocha timeout so that Sails has enough time to lift.
  this.timeout(10000);

  // Set environment to testing
  process.env.NODE_ENV = 'test';

  sails.lift({
    // configuration for testing purposes
  }, function(err) {
    if (err) {
      return done(err);
    }
    //...
    done(err, sails);
  });
});
after(function(done) {
  // here you can clear fixtures, etc.
  // This will "refresh" the memory store so you
  // have a clean test datastore every time you run tests
  sails.once('hook:orm:reloaded', () => {
    sails.lower((err) => {
      done();
      if (err) {
        process.exit(1);
      } else {
        process.exit(0);
      }
    });
  });
  sails.emit('hook:orm:reload');
});
Adding Jason's suggestion in an "answer" format, so that others may find it more easily.
sails-mock-models allows simple mocking for sails model queries, based on sinon
Mock any of the standard query methods (i.e. 'find', 'count', 'update'). They will be called with no side effects.
I haven't actually tried it yet (just found this question), but I'll edit this if/when I have any problems.
Sails unit testing is explained well in the following blog post:
https://www.packtpub.com/books/content/how-add-unit-tests-sails-framework-application
Please refer to it.
I'm new to Node.js development, coming from the .NET world.
I'm searching the web for best practices regarding DI / DIP in JavaScript.
In .NET I would declare my dependencies in the constructor, whereas in JavaScript I see a common pattern of declaring dependencies at the module level via a require statement.
To me it looks like when I use require I'm coupled to a specific file, while using a constructor to receive my dependency is more flexible.
What would you recommend as a best practice in JavaScript? (I'm looking for the architectural pattern and not an IoC technical solution.)
Searching the web I came across this blog post (which has some very interesting discussion in the comments):
https://blog.risingstack.com/dependency-injection-in-node-js/
It summarizes my conflict pretty well.
Here's some code from the blog post to illustrate what I'm talking about:
// team.js
var User = require('./user');
function getTeam(teamId) {
  return User.find({ teamId: teamId });
}
module.exports.getTeam = getTeam;
A simple test would look something like this:
// team.spec.js
var Team = require('./team');
var User = require('./user');

describe('Team', function() {
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];

    this.sandbox.stub(User, 'find', function() {
      return Promise.resolve(users);
    });

    var team = yield Team.getTeam();
    expect(team).to.eql(users);
  });
});
VS DI:
// team.js
function Team(options) {
  this.options = options;
}

Team.prototype.getTeam = function(teamId) {
  return this.options.User.find({ teamId: teamId });
};

function create(options) {
  return new Team(options);
}

module.exports.create = create;
test:
// team.spec.js
var Team = require('./team');

describe('Team', function() {
  it('#getTeam', function* () {
    var users = [{id: 1}, {id: 2}];

    var fakeUser = {
      find: function() {
        return Promise.resolve(users);
      }
    };

    var team = Team.create({
      User: fakeUser
    });

    var result = yield team.getTeam();
    expect(result).to.eql(users);
  });
});
Regarding your question: I don't think that there is a common practice in the JS community. I've seen both types in the wild, require modifications (like rewire or proxyquire) and constructor injection (often using a dedicated DI container). However, personally, I think not using a DI container is a better fit with JS. And that's because JS is a dynamic language with functions as first-class citizens. Let me explain that:
Using DI containers enforces constructor injection for everything. It creates a huge configuration overhead for two main reasons:
Providing mocks in unit tests
Creating abstract components that know nothing about their environment
Regarding the first argument: I would not adjust my code just for my unit tests. If it makes your code cleaner, simpler, more versatile and less error-prone, then go for it. But if your only reason is your unit test, I would not take the trade-off. You can get pretty far with require modifications and monkey patching. And if you find yourself writing too many mocks, you should probably not write a unit test at all, but an integration test. Eric Elliott has written a great article about this problem.
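For illustration, the "require modification" route could look roughly like this with proxyquire (assuming the CommonJS team.js/user.js from the question):
// team.spec.js
var proxyquire = require('proxyquire');

var users = [{id: 1}, {id: 2}];
var fakeUser = {
  find: function() {
    return Promise.resolve(users);
  }
};

// team.js receives fakeUser whenever it require()s './user'
var Team = proxyquire('./team', { './user': fakeUser });

// ...then assert against Team.getTeam() as before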
Regarding the second argument: This is a valid argument. If you want to create a component that only cares about an interface, but not about the actual implementation, I would opt for a simple constructor injection. However, since JS does not force you to use classes for everything, why not just use functions?
In functional programming, separating stateful IO from actual processing is a common paradigm. For instance, if you're writing code that is supposed to count file types in a folder, one could write this (especially when coming from a language that enforces classes everywhere):
const fs = require("fs");

class FileTypeCounter {
  countFileTypes(dirname, callback) {
    fs.readdir(dirname, function (err, files) {
      if (err) return callback(err);
      // recursively walk all folders and count file types
      // ...
      callback(null, fileTypes);
    });
  }
}
Now if you want to test that, you need to change your code in order to inject a fake fs module:
class FileTypeCounter {
  constructor(fs) {
    this.fs = fs;
  }

  countFileTypes(dirname, callback) {
    this.fs.readdir(dirname, function (err) {
      // ...
    });
  }
}
Now, everyone who is using your class needs to inject fs into the constructor. Since this is boring and makes your code more complicated once you have long dependency graphs, developers invented DI containers where they can just configure stuff and the DI container figures out the instantiation.
However, what about just writing pure functions?
function fileTypeCounter(allFiles) {
  // count file types
  return fileTypes;
}

function getAllFilesInDir(dirname, callback) {
  // recursively walk all folders and collect all files
  // ...
  callback(null, allFiles);
}

// now let's compose both functions
function getAllFileTypesInDir(dirname, callback) {
  getAllFilesInDir(dirname, (err, allFiles) => {
    callback(err, !err && fileTypeCounter(allFiles));
  });
}
Now you have two super-versatile functions out of the box, one which does IO and one which processes the data. fileTypeCounter is a pure function and super easy to test. getAllFilesInDir is impure but such a common task that you'll often find it already on npm, where other people have written integration tests for it. getAllFileTypesInDir just composes your functions with a little bit of control flow. This is a typical case for an integration test where you want to make sure that your whole application is working correctly.
By separating your code between IO and data processing, you won't find the need to inject anything at all. And if you don't need to inject anything, that's a good sign. Pure functions are the easiest thing to test and are still the easiest way to share code between projects.
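For instance, a unit test for the pure part could be as small as this (a sketch assuming mocha/chai and that fileTypeCounter returns a map of extension to count):
const { expect } = require("chai");

describe("fileTypeCounter", function () {
  it("counts file types by extension", function () {
    // no fs, no mocks - just data in, data out
    const fileTypes = fileTypeCounter(["a.js", "b.js", "c.css"]);
    expect(fileTypes).to.eql({ js: 2, css: 1 });
  });
});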
In the past, DI containers as we know them from Java and .NET did not exist. With Node 6 came ES6 Proxies which opened up the possibility of such containers - Awilix for example.
So let's rewrite your code to modern ES6.
class Team {
  constructor ({ User }) {
    this.User = User
  }

  getTeam (teamId) {
    return this.User.find({ teamId: teamId })
  }
}
And the test:
import Team from './Team'

describe('Team', function() {
  it('#getTeam', async function () {
    const users = [{id: 1}, {id: 2}]

    const fakeUser = {
      find: function() {
        return Promise.resolve(users)
      }
    }

    const team = new Team({
      User: fakeUser
    })

    const result = await team.getTeam()
    expect(result).to.eql(users)
  })
})
Now, using Awilix, let's write our composition root:
import { createContainer, asClass } from 'awilix'
import Team from './Team'
import User from './User'

const container = createContainer()
  .register({
    Team: asClass(Team),
    User: asClass(User)
  })

// Grab an instance of Team
const team = container.resolve('Team')
// Alternatively: const team = container.cradle.Team

// Use it
team.getTeam(123) // calls User.find()
That's as simple as it gets; Awilix can handle object lifetimes as well, just like the .NET / Java containers do. This lets you do cool stuff like injecting the current user into your services, instantiating your services once per HTTP request, etc.
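A rough sketch of the per-request idea (the Express wiring and registration names here are my assumptions; check the Awilix docs for the exact API of your version):
import { asValue } from 'awilix'

app.use((req, res, next) => {
  // one child scope per HTTP request
  req.scope = container.createScope()
  // anything resolved from this scope can now depend on currentUser
  req.scope.register({ currentUser: asValue(req.user) })
  next()
})

app.get('/teams/:id', async (req, res) => {
  const team = req.scope.resolve('Team')
  res.json(await team.getTeam(req.params.id))
})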