Using Jest to spy on a module that is initialized - javascript

Within an Express API I'm using the Postmark library to send an email. It is initialized like this:
var postmark = require("postmark");
var client = new postmark.Client("aaaa-bbbbb-cccc");
And then used to send a password reset mail later on with:
client.sendEmailWithTemplate(
// Options here
);
Now, I would like to test this function has been called, but I have difficulties finding out how to mock/spy on this.
I have tried the following (simplified):
const request = require("supertest");
const app = require("../server/app");
const postmark = require("postmark");

jest.mock("postmark");

describe("API Tests", () => {
  test("it should give a reset link when requesting with existing e-mail address", () => {
    return request(app)
      .post("/api/auth/passwordreset")
      .send({
        email: "user1@test.test"
      })
      .then(response => {
        expect(postmark.Client).toHaveBeenCalled();
      });
  });
});
This works, but it only tests that postmark has been used at all; I can't figure out how to actually test the client.sendEmailWithTemplate method.
Any suggestions on how to accomplish this?
EDIT: following up on @samanime's answer, I created a repo to illustrate the 'challenge':
https://github.com/Hyra/jest_test_example

You can specifically mock out the Client function exported by the mocked postmark so that it returns an object with mocked functions.
In Jest, you can provide specific mocking code for a node_modules package by creating a folder named __mocks__ at the same level as node_modules, i.e.,
/project-root
/node_modules
/__mocks__
Note, that is two underscores on each side.
In there, make a file named <package_name>.js (in your case, postmark.js). Jest will then load whatever that file exports when you use the mock.
In that file, you can mock it out as needed. Something like this would probably work:
// global jest
module.exports = {
  Client: jest.fn(() => ({
    sendEmailWithTemplate: jest.fn(() => {})
  }))
};
It doesn't have to be as compact as this, but basically it makes postmark have a function called Client which returns an object that has a function called sendEmailWithTemplate, both of which are mocks/spies.
Then you can check that postmark.Client was called, and that sendEmailWithTemplate was called on the instance it returned.
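For example, in a test you can pull that instance out of the mock's bookkeeping (a sketch; mock.results records the return value of each call):
const postmark = require("postmark");
jest.mock("postmark");

// After the code under test has constructed a Client:
const client = postmark.Client.mock.results[0].value;
expect(client.sendEmailWithTemplate).toHaveBeenCalled();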
The one gotcha is that you'll need to reset all of these between tests. You could do this manually in your beforeEach(), but if you are going to reuse the mock, I like to add an extra function named __reset() which clears the mocks, and just call that:
// global jest
// Hoist the inner mock so it can be cleared (and asserted on) directly;
// Client.sendEmailWithTemplate would not work, since the mock lives on the
// object Client returns, not on Client itself.
const sendEmailWithTemplate = jest.fn(() => {});

const mockedPostmark = {
  Client: jest.fn(() => ({
    sendEmailWithTemplate
  }))
};

mockedPostmark.__reset = () => {
  mockedPostmark.Client.mockClear();
  sendEmailWithTemplate.mockClear();
};

module.exports = mockedPostmark;
You can add additional functions as needed as well.
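For instance, in the test file (a minimal sketch, assuming the mock file above):
const postmark = require("postmark");
jest.mock("postmark");

beforeEach(() => {
  postmark.__reset();
});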

Related

How to test custom JavaScript GitHub actions?

I want to create a JavaScript GitHub action and use Jest for testing purposes. Based on the docs I started parsing the input, given the following example code:
import { getInput } from '@actions/core';
const myActionInput = getInput('my-key', { required: true });
Running this code during development throws the following error
Input required and not supplied: my-key
as expected, because the code is not running inside a GitHub Actions environment. But is it possible to create tests for that? E.g.
describe('getMyKey', () => {
  it('throws if the input is not present.', () => {
    expect(() => getMyKey()).toThrow();
  });
});
How can I "fake" / mock such an environment with a context to ensure my code works as expected?
There are several approaches you can take.
Set Inputs Manually
Inputs are passed to actions as environment variables with the prefix INPUT_ and uppercased. Knowing this, you can just set the respective environment variable before running the test.
In your case, the input my-key would need to be present as the environment variable named INPUT_MY-KEY.
This should make your code work:
describe('getMyKey', () => {
  it('throws if the input is not present.', () => {
    delete process.env['INPUT_MY-KEY'];
    expect(() => getMyKey()).toThrow();
  });

  it('does not throw if the input is present.', () => {
    process.env['INPUT_MY-KEY'] = 'my-value';
    expect(() => getMyKey()).not.toThrow();
  });
});
Use Jest's Mocking
You could use jest.mock or jest.spyOn and thereby mock the behaviour of getInput.
Docs: ES6 Class Mocks
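A minimal sketch of that approach (assuming automatic module mocking, so getInput becomes a jest.fn you can program per test):
jest.mock('@actions/core');
const core = require('@actions/core');

test('reads the required input', () => {
  core.getInput.mockReturnValue('my-value');
  // ... run the code under test that calls getInput here ...
  expect(core.getInput).toHaveBeenCalledWith('my-key', { required: true });
});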
Abstract Action
I don't like setting global environment variables, because one test might affect another depending on the order they are run in.
Also, I don't like mocking using jest.mock, because it feels like a lot of magic and I usually spend too much time getting it to do what I want. Issues are difficult to diagnose.
What seems to bring all the benefits with a tiny bit more code is to split off the action into a function that can be called by passing in the "global" objects like core.
// index.js
import * as core from '@actions/core';
import { action } from './action';

action(core);

// action.js
export function action(core) {
  const myActionInput = core.getInput('my-key', { required: true });
}
This allows you to nicely test your action like so:
// action.test.js
import { action } from './action';

describe('getMyKey', () => {
  it('gets required key from input', () => {
    const core = {
      getInput: jest.fn().mockReturnValueOnce('my-value')
    };

    action(core);

    expect(core.getInput).toHaveBeenCalledWith('my-key', { required: true });
  });
});
Now you could say that we're no longer testing whether the action throws an error if the input is not present. But consider what you're really testing there: you're testing whether @actions/core throws an error if the input is missing. In my opinion, that is not your own code and therefore not worth testing. All you want to make sure is that you're calling the getInput function correctly according to its contract (i.e. the docs).
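If you do want to exercise the error path anyway, you can make the injected stub throw (a sketch; the error message mirrors the one @actions/core produces):
it('propagates the error when the input is missing', () => {
  const core = {
    getInput: jest.fn(() => {
      throw new Error('Input required and not supplied: my-key');
    })
  };

  expect(() => action(core)).toThrow('Input required and not supplied');
});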

Is it okay to declare the db instance as a global variable so it is accessible at all times, without needing a "require" statement, in Node.js?

I am a beginner in Node.js.
I am developing a program and I want to use the database model everywhere in the application, for example something like WordPress's "wpdb".
Is the best way to create it as a global variable, or to use the require statement as needed?
Please help me get the best answer.
Thanks
No, this is not the best way to handle a db connection. You only want the db open and around for as long as necessary and no longer. Usually this means opening the db connection at the place in your code that knows how long it needs to stay open, and closing the connection after using it.
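One common pattern is a small helper that scopes the connection to a unit of work (a sketch; connect() and conn.close() stand in for whatever your driver provides):
// hypothetical driver API; substitute your own client library
async function withDb(dbConfig, work) {
  const conn = await connect(dbConfig);
  try {
    return await work(conn); // the connection lives only for this unit of work
  } finally {
    await conn.close(); // always released, even on error
  }
}

// usage, inside an async function:
const users = await withDb(dbConfig, (conn) => conn.query('SELECT * FROM users'));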
If you are simply referring to the configuration for opening a DB connection, then you could define that configuration object at application start and then pass it as a parameter to your db instantiation code.
import myUserDbCode from '../myUserDbFile';
import myUserMessagesCode from '../myUserMsgsDbFile';

const dbConfig = {
  dbname: 'myFancyDb',
  serverName: 'myDbServerName',
  connectionTimeout: 60
}; // the semicolon matters: without it the IIFE below is parsed as a function call

(async () => {
  const userList = await myUserDbCode(dbConfig)
    .then((dbResultSet) => {
      return dbResultSet.data;
    });

  const myUser = userList[42];

  const userMessages = await myUserMessagesCode(dbConfig, myUser)
    .then((dbResultSet) => {
      return dbResultSet.data;
    });

  console.log(userMessages);
})();
I recommend following some DB tutorials that show how to build a complete application to see some patterns about how to handle passing configuration options to code and how to manage DB connections.

Can't access function after name changed in node.js

There is a function in one of my project files whose name I changed, and now I can't access it by its new name! The old name is still available to call.
I tried deleting node_modules and installing it again using npm i.
Here is the code for both files I'm using:
soapCall.js
before:
function call(username, password){
...
}
module.exports = call
after:
function checkUser(username, password){
...
}
module.exports = checkUser
How I imported and used it:
app.js
const soap = require('../../models/soapCall');
...
soap.checkUser(username, password);
It's weird that I still can't access the new function name.
I was using the name call before that, and I can STILL use the call function in my app.js file.
call is already available as a method on the function prototype - Function.prototype.call. This means soap is a function, which is why call works, but checkUser doesn't.
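You can see this in a quick sketch:
const soap = function () {};
typeof soap.call;      // 'function' (inherited from Function.prototype)
typeof soap.checkUser; // 'undefined' (no such property on the function)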
soap is a function because you're exporting a function from your file, and simply renaming it in your main file. If you want to change the name, either change the import name:
const checkUser = require("../../models/soapCall");
Or export an object and use it as such:
module.exports = { checkUser };
// Main file
const soap = require("../../models/soapCall");
soap.checkUser(...);
The object method will also allow you to export multiple functions from the one file - you can get these into their own variables with destructuring:
module.exports = { checkUser, otherFunc };
// Main file
const { checkUser, otherFunc } = require("../../models/soapCall");
checkUser(...); // Calls checkUser function
otherFunc(...); // Calls otherFunc function
You are exporting a function, not an object, so you would need to call soap() directly.
As for being able to run call: it's part of the function prototype, so you confused it with Function.prototype.call().
Use it like this, as per the comment by @Wiktor Zychla:
soapCall.js
exports.checkUser = function (username, password) {
};
app.js
const soap = require('../../models/soapCall');
soap.checkUser(username, password);

How to structure lambda code for testability

I am trying to make a small REST API with API Gateway, Lambda, and DynamoDB, while following good development practices such as TDD. I'm used to being able to use a DI container to provision my objects, which lends itself perfectly to mocking and testing. In an MVC framework, there would be a single entry point, where I could define my container configuration, bootstrap the application, and invoke the controller to handle the event. I could test the controller independently of the rest of the application, and inject mocked dependencies.
I can't figure out how to decouple the dependencies a lambda function may have from the lambda function itself. For example:
const { DynamoDB } = require('aws-sdk')
const { UserRepo } = require('../lib/user-repo')

const client = new DynamoDB({ region: process.env.REGION }) // Should be resolved by DI container
const userRepo = new UserRepo(client) // Should be resolved by DI container

exports.handler = async (event) => {
  return userRepo.get(event.id)
}
Please can anyone lead me in the right direction for structuring lambda code so it can be unit tested properly?
One way we've approached this in the project I'm currently working on is splitting out the requirements, so the handler is responsible for:
Creating the clients;
Extracting any config from the environment; and
Getting the parameters from the event.
Then it calls another function that does most of the work, and which we can test in isolation. Think of the handler like a controller, and the other function like the service that does the work.
In your specific case, that might look like:
const { DynamoDB } = require('aws-sdk');
const { UserRepo } = require('../lib/user-repo');

const doTheWork = (repo, id) => repo.get(id);

exports.handler = async (event) => {
  const client = new DynamoDB({ region: process.env.REGION });
  const userRepo = new UserRepo(client);
  return doTheWork(userRepo, event.id);
};
doTheWork can now be exercised at the unit level using test doubles for the repo object and whatever inputs you want. The UserRepo is already decoupled by constructor injection of the Dynamo client, so that should be pretty testable too.
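For instance, a unit test might look like this (a sketch; it assumes doTheWork is also exported, e.g. exports.doTheWork = doTheWork):
const { doTheWork } = require('./handler');

test('fetches the user by id', async () => {
  const repo = { get: jest.fn().mockResolvedValue({ id: '42', name: 'Ada' }) };

  await expect(doTheWork(repo, '42')).resolves.toEqual({ id: '42', name: 'Ada' });
  expect(repo.get).toHaveBeenCalledWith('42');
});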
We also have tests at the integration level that only mock out the AWS SDK stuff (you could alternatively use transport layer mocking or something like aws-sdk-mock) plus E2E testing that ensures the whole system works together.

Node.js: create module with separation of actions

I have a Node application, and I took the following functionality and put it in a separate file in a new folder as a new module. In this file I need to handle some actions like save, delete, edit, etc. I have two questions:
Should I separate the functionality inside this file into actions and expose them differently?
In either case, how should I call this functionality with the parameters needed by all the actions, like req, res, and path?
I'm looking for concrete examples.
This is the code that I use:
module.exports = function () {
  const fs = require('fs')

  function fileAction (req, res, urlAction) {
    switch (urlAction) {
      case 'save':
        const writeStream = fs.createWriteStream('c://myfile.txt', { flags: 'w' })
        req.pipe(writeStream)
        req.on('end', function () {
          console.log('Finish to update data file')
        })
        res.end()
        break
      case 'delete':
      case 'update':
      default:
    }
  }
}
I like this approach more than implementing the functions inside the exported lexical scope.
With this approach I feel the concern of "exporting" is separated from the implementation of the functions. In addition, you can rename the functions you are going to export. Most importantly, you can better control what you do and do not want to export.
// `delete` is a reserved word, so the local function needs a different name
var remove = function () {
};

var update = function () {
};

var save = function () {
};

module.exports.update = update;
module.exports.delete = remove; // the exported property can still be called `delete`
module.exports.save = save;
Then you'll be able to call methods from your main file:
var file = require('./file.js');
file.save();
file.delete();
file.update();
You should do something more object-oriented:
module.exports = {
  save: function () {
  },
  delete: function () {
  },
  update: function () {
  }
}
Then you'll be able to call methods from your main file:
const FileLib = require('./fileLib.js')
FileLib.save()
If you plan to use this as logic inside an Express application, you do not really need to use req and res directly from inside your module except if you are writing an Express middleware or a router.
But what I would recommend you is to use your library from the router:
const FileLib = require('./fileLib.js')
router.put('/file/:id', function (req, res) {
  // Do your stuff with your library
  FileLib.save(req.param('fileToSave'))
  res.send()
})
Your library should not be too coupled to the express architecture unless it's a middleware.
Writing RESTful Express routing might also be a good idea: use HTTP verbs to specify the action to the API, as sketched below.
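A sketch of what that could look like with the same library (assuming body parsing, e.g. express.json(), is set up; the route names and library signatures are illustrative):
const FileLib = require('./fileLib.js')

router.post('/files', (req, res) => { // create maps to save
  FileLib.save(req.body)
  res.status(201).send()
})

router.put('/files/:id', (req, res) => { // replace maps to update
  FileLib.update(req.params.id, req.body)
  res.send()
})

router.delete('/files/:id', (req, res) => { // remove maps to delete
  FileLib.delete(req.params.id)
  res.status(204).send()
})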
