How can I unit test this code snippet? - javascript

I've just recently moved to a new project that deals mainly in Javascript (as a Node.js web application).
I'm a fairly TDD focused developer, and am trying to figure out the best approaches / patterns to ensure that what we end up building is unit-testable and maintainable.
I've been trying to wrap the following code snippet with unit tests, but am having trouble getting good code coverage over the anonymous function passed in as the request callback.
I have mocked the request object using the rewire.js library, and can successfully test that the logger was called and that request was called with the correct parameters, but how do I complete the test coverage for this?
function _makeRequest(apiName, options, payload, callback) {
    logger.info('DS API %s Request:\n %s %s\n %s', apiName, options.method, options.url, logger.look(payload));
    request(options, function(error, response, body) {
        var json = 'json' in options ? body : JSON.parse(body);
        if ('error' in json) {
            var msg = 'DS API ' + apiName + ' Error:\n ' + logger.look(json.error);
            logger.info(msg);
            callback(null);
        } else { // no error
            logger.info('DS API %s Response:\n %s', apiName, logger.look(json));
            callback(json);
        }
    });
}
Should I be refactoring for better testability? Is there a common approach for unit testing callbacks that I'm not aware of?

Carl pointed me in the right direction. I had set up my test parameters with a good range of input data (to ensure that every line of code would be executed in one test or another) but, in the end, I was failing to actually execute the callback after passing it to the rewire.js mock.
The callback was making it in, but I needed to invoke it from within the mock so that the callback code would actually be exercised.
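In outline, the passing test ended up looking roughly like this (a sketch only, assuming a mocha-style test, a made-up module path './dsApi', and that logger is already stubbed as described above):
var rewire = require('rewire');
var assert = require('assert');

// './dsApi' is a hypothetical module path used for illustration only.
var api = rewire('./dsApi');

it('passes the parsed body to the callback on success', function (done) {
    // Replace the real request with a fake that immediately invokes its callback
    // with a canned response body, so the anonymous callback inside _makeRequest runs.
    api.__set__('request', function (options, cb) {
        cb(null, { statusCode: 200 }, JSON.stringify({ id: 42 }));
    });

    var makeRequest = api.__get__('_makeRequest');
    makeRequest('listItems', { method: 'GET', url: '/items' }, {}, function (json) {
        assert.deepEqual(json, { id: 42 });
        done();
    });
});
A second test that feeds JSON.stringify({ error: 'boom' }) from the fake request covers the error branch the same way.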

Related

Make js wait for a promise before going to next line in Nodejs ideally using ES5

I know this has been asked many times, but I cannot make it work in my case.
I was writing code for an IBM tool which runs the JS file using the Rhino engine version 1.7R4 (JavaScript 1.7 by default, ES5 compliance, JavaScript 1.8 generator expressions). See compatibility here
The JS file works fine on the server, as the Rhino implementation there executes it in a synchronous way.
It calls some built-in functions which work synchronously, e.g. one of them fetches data from a database. If I try to reproduce the same functionality on my local machine, the JS executes asynchronously.
Sample code is below, which works synchronously on the server (changing this code to async/await might make it work on my machine, but it will not work on the server; hence I won't be able to run the same code on my machine and the server):
var events = new Object();

function getEvents() {
    var query = "select * from alerts.status";
    events = DirectSQL("AGG_BSS_Objectserver", query, false);
}

function processEvents() {
    // Do something with events
}

getEvents();
processEvents();
DirectSQL is a built-in server function which executes synchronously and fills events with data before the following lines of code run, i.e. processEvents() is called after the results from DirectSQL are obtained. The above is a simplified example; we call DirectSQL numerous times with complex logic.
For testing, I tried running this file on my local computer using Node.js. Since my local machine does not have the DirectSQL function, I created one in the same JS file, shown below. It works, but it is asynchronous, i.e. processEvents() is executed before the results are returned from DirectSQL(). Below is my test implementation of DirectSQL(); since it is only for testing, I can change it as much as I like:
function DirectSQL(dataSource, query, countOnly) {
    var Sybase = require('sybase'),
        db = new Sybase(server, port, dbname, user, pass);
    db.connect(function (err) {
        if (err) return console.log(err);
        db.query(query, function (err, data) {
            if (err) console.log(err);
            console.log(data);
            db.disconnect();
            return data;
        });
    });
}
Is there any way to make DirectSQL return its results into events before processEvents() executes?
I guess this is what you are looking for: How is async/await transpiled to ES5. At the end of the article, you have a reference to more explanations. Good luck!
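Another route, if the only goal is to make the local test harness behave like the server: a rough sketch using the deasync npm package (an assumption that a native add-on is acceptable for local testing only; the connection details are assumed to be defined elsewhere). It blocks inside the fake DirectSQL until the query callback has fired, so the calling code stays unchanged:
var deasync = require('deasync');
var Sybase = require('sybase');

function DirectSQL(dataSource, query, countOnly) {
    var result;
    var finished = false;
    var db = new Sybase(server, port, dbname, user, pass); // connection details assumed to exist
    db.connect(function (err) {
        if (err) { finished = true; return console.log(err); }
        db.query(query, function (err, data) {
            if (err) console.log(err);
            result = data;
            db.disconnect();
            finished = true;
        });
    });
    // Spin the event loop until the query callback has run, so the call appears synchronous.
    deasync.loopWhile(function () { return !finished; });
    return result;
}
That keeps getEvents(); processEvents(); running in order locally without touching the code that runs on the server.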

Test callback used in a route with supertest

I started using istanbul as a test coverage tool with mocha, and one of the great things is that it shows you the paths (branches) you have tested in your code logic.
There is a path that can only be taken if an error occurs in the database.
Screenshot of the part I am interested in testing
The [I] indicates that the first if was not tested.
The problem is that it uses a callback function(err, data), and the error is passed to this callback by the mongoose model method find(), so I don't have control over the flow of this part of the code.
In this specific case I am using supertest, a module for testing routes in node.js, and it makes requests to a route that calls the mongoose model method find().
What would be the best option to test this path? Create a stub to simulate the method? Or just remove the if?
EDIT: I noticed that I was using an anonymous function as the callback (err, data), and because of that I can't test it, since it's not exposed to the outer scope. One approach I had in mind was to create a function:
function handleDbFetchingResponse(res) {
    return function(err, data) {
        let response = {};
        if (err) {
            response = {error: true, message: 'Error fetching data'};
        } else {
            response = {error: false, message: data};
        }
        res.json(response);
    };
}
Now I can expose the function and test it, but this creates another problem. Since the other express routes have different logic when fetching data from the database, I will have to create a handler function for each one of them. Maybe there is a way to create a handlerBuilder function that returns a new handler, passing different arguments to deal with specific cases.
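One possible shape for that builder (a sketch only; the option names and the error message are made up, not taken from the real routes):
function makeDbHandler(options) {
    options = options || {};
    // Returns a handler factory: give it the express `res`, get back an (err, data) callback.
    return function handleDbFetchingResponse(res) {
        return function (err, data) {
            if (err) {
                return res.json({error: true, message: options.errorMessage || 'Error fetching data'});
            }
            res.json({error: false, message: options.transform ? options.transform(data) : data});
        };
    };
}

// Usage in a route (illustrative):
// var handleUserFetch = makeDbHandler({ errorMessage: 'Error fetching users' });
// User.find({}, handleUserFetch(res));
For the untested error branch itself, another option is to keep the route as it is and stub the model in the test, e.g. with sinon: sinon.stub(User, 'find').yields(new Error('db down')) makes find() call its callback with an error (User being whatever model the route uses), so the supertest request then exercises the if (err) path.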

Unit Testing - Mock Methods within Methods?

I am writing unit tests for a Node.js application, and I am wondering if I am mocking the correct parts of the code.
The example below is a hypothetical class that has two static methods.
The method isTokenValid calls another method, decodeToken which takes the token and a callback. The callback is defined inside of isTokenValid. Both these methods belong to the same class.
When unit testing isTokenValid my approach is to mock the decodeToken method.
It is clear to me that when unit testing, dependencies such as AJAX requests should be mocked. However, does that also hold true for this type of dependency, or am I being too granular?
Is mocking decodeToken the right approach to unit testing isTokenValid?
var TokenClass = {};

TokenClass.isTokenValid = function (token) {
    TokenClass.decodeToken(token, function (err, decoded) {
        if (err) {
            console.log('There was a validation error');
        }
        if (decoded) {
            return true;
        }
    });
};

TokenClass.decodeToken = function (token, callback) {
    // some logic here to decode the token
    if (err) {
        return callback(err);
    }
    // if the token is not valid
    if (!validToken) {
        return callback(null, undefined);
    }
    // if the token is valid
    return callback(null, decoded);
};
There are two approaches.
In classic unit tests you can mock everything that is external to your tested unit - in this case the isTokenValid method is your unit. But that approach isn't practical.
The better way is to mock only the things that don't let your tests run in isolation and in a deterministic way (same result every time).
If decodeToken does not call any external resource (URL, database, file system), then you don't have to mock it out. However, if it does call an external resource, then decodeToken should be implemented in another object, e.g. a TokenDecoder, and injected into a TokenValidator; then, for a unit test of TokenValidator, you can inject a mocked TokenDecoder that does not call any external resource.
TokenDecoder should then be tested using an integration test, but that is another topic.
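For illustration, the injection could look something like this (a sketch only; the constructor and callback shapes are made up, not part of the original code):
function TokenValidator(tokenDecoder) {
    this.decoder = tokenDecoder;
}

TokenValidator.prototype.isTokenValid = function (token, callback) {
    this.decoder.decodeToken(token, function (err, decoded) {
        if (err) {
            console.log('There was a validation error');
            return callback(err, false);
        }
        callback(null, !!decoded);
    });
};

// In a unit test, inject a fake decoder so no external resource is touched:
var fakeDecoder = {
    decodeToken: function (token, cb) { cb(null, { sub: 'user-1' }); }
};
var validator = new TokenValidator(fakeDecoder);
validator.isTokenValid('any-token', function (err, valid) {
    // assert that valid === true
});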

wrapping mongo update in function in node.js

What's the proper way to wrap a mongo query/insert in a function to make it reusable? I have an update operation that may take place in various places and want to write it only once and reuse it. I'm using MongoJS to interface with the MongoDB API.
When I take something like the following:
mongo.myt.update(
{'_id': req._id},
{
$addToSet: {
"aggregate.clientIds": req.myt.clientIds
},
$inc: {"aggregate.seenCount": 1},
$set: {
"headers": req.myt.headers,
"ip": req.myt.ip
},
$setOnInsert: {
'_id': req.myt._id,
'derived': req.myt.derived
}
},
{upsert: true},
function (err, savedId) {
if (err || !savedId) console.log("failed to save :" + req.myt + " because of " + err);
else console.log("successfully saved :" + req.myt);
});
And wrap it with a simple function like:
function mongoInsert(req) {
//same query as above
}
Then call it using:
mongoInsert(req);
I don't see any impact on speed when profiling. Should I be adding a callback to the wrapper function; is that needed? I was expecting it to have some performance impact and to need to be done differently.
So a few questions.
With the approach above, does calling mongoInsert() run synchronously and block until the async mongo update is done?
If it does become synchronous, I would expect a performance impact, which I didn't see. So is the approach I took OK to use?
And if not, what would be the correct way to do this?
mongoInsert() is still asynchronous because it's calling an asynchronous function (mongo.myt.update()). Even if you don't add a callback function, it won't magically become synchronous.
The way you wrote it now is "fire-and-forget": somewhere in your code you call mongoInsert(), your code continues to run while the update is taking place, and since you don't pass a callback, the calling code cannot be informed about the result of the update (right now, you're just logging the result to the console).
It's a design decision whether or not this is acceptable.
You could make the callback optional for situations where you do want to get informed about the update result:
function mongoInsert(req, callback) {
callback = callback || function() {}; // dummy callback when none is provided
mongo.myt.update(..., function(err, savedId) {
...log status here...
callback(err, savedId);
});
}
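For illustration, calling code could then use either style (the route handler and res object here are made up, not from the question):
// Fire-and-forget, as before:
mongoInsert(req);

// Or, when the caller cares about the outcome, pass a callback:
mongoInsert(req, function (err, savedId) {
    if (err) return res.status(500).send('update failed');
    res.send({id: savedId});
});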

Trouble understanding Node.js callbacks

Today is my first foray into nodejs, and I am particularly stumped trying to understand the way the following piece of logic flows. The logic is as follows:
request({ uri: db.createDbQuery('identifier:abcd1234') },
function(err, response, body) {
response.should.have.status(200);
var search = JSON.parse(body);
search.response.numFound.should.equal(1);
done();
});
});
At a higher level I do understand that an HTTP request is being made and that the function is being called at some point, taking the response and doing something with it. What I am trying to understand is the proper order of the calls and how the binding of variables takes place in the logic given above. How does the compiler know how to bind the return values from the request to the anonymous function? Basically, I want to gain an understanding of how things work under the hood for this snippet.
Thanks
Your question isn't specific to node.js; this is basically a feature of javascript.
Basically you are calling request(), which is defined like function request(obj, callback).
Internally, the http request is made, and once it's completed, it calls callback, which is actually a function pointer.
function request(obj, callback){
//http request logic...
var err = request_logic_internal_function();
var response = ...
var body = ...
callback(err, response, body)
}
Your code can actually be restructured as:
var options = { uri: db.createDbQuery('identifier:abcd1234') };
var request_callback = function(err, response, body) {
response.should.have.status(200);
var search = JSON.parse(body);
search.response.numFound.should.equal(1);
done();
};
request(options, request_callback);
What you're basically doing is sending in a function pointer as a variable.
I don't know what library(ies) you're using, and it looks like you may have anonymized them by assigning methods into your code's global scope like request, done, and db.
What I can say is this:
That indentation is horrible and initially misled me about what it was doing; please gg=G (vim syntax) your code so it's properly indented.
request takes two arguments, a configuration object and a callback.
db.createDbQuery must be a blocking method or the anonymous object you're creating won't have the proper value.
request uses that configuration value, makes a non-blocking I/O request of some kind, and later will call the callback function you provide. That means that the code immediately after that request call will execute before the callback you provide will execute.
Some time later the request data will come back, and Node.js's event loop will provide the data to the library's registered event handler (which may or may not be your callback directly -- it could do something to it and then call your event handler afterwards; you don't know or really care).
Then the function does some checks that will throw errors if they fail, and finally calls a done function in its scope (defined somewhere else) that will execute and continue the logical stream of execution.
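A tiny ordering sketch of that point (setTimeout stands in here for the non-blocking request; the numbers show the order in which the lines print):
console.log('1: before the request call');

setTimeout(function () {
    // Stands in for the request callback: runs only after the current code finishes
    // and the (simulated) I/O completes.
    console.log('3: callback runs last');
}, 0);

console.log('2: code immediately after the request call runs first');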
