Run Jest tests multiple times if they fail - javascript

I'm using Jest to test some API endpoints, but these endpoints can fail spuriously at times because of network issues rather than bugs in my program. So I want to retry a failing test multiple times (5 times, for example) with a delay, and only if all of those attempts fail should Jest report a failure.
What is the best way to achieve this? Does Jest or another library provide a solution? Or should I write my own program with something like setInterval?

Ideally you should not be making network calls in your tests, as they slow the tests down. Moreover, you might be communicating with an external API that ideally should not be part of your test code, and, as you have already observed, this can lead to false negatives. You can do something like the following:
const axios = require('axios');

async function getFirstMovieTitle() {
  const response = await axios.get('https://dummyMovieApi/movies');
  return response.data[0].title;
}
To test the above network call, you should mock your axios GET request in your test:
const axios = require('axios');
jest.mock('axios');

// getFirstMovieTitle is imported from the module under test
it('returns the title of the first movie', async () => {
  axios.get.mockResolvedValue({
    data: [
      { title: 'First Movie' },
      { title: 'Second Movie' }
    ]
  });

  const title = await getFirstMovieTitle();
  expect(title).toEqual('First Movie');
});
Try reading more about mocking in Jest to understand this better.
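If you genuinely need to hit the live endpoints and retry on flaky failures, Jest itself supports retries: jest.retryTimes() re-runs a failing test up to the given number of times before reporting a failure. It requires the jest-circus test runner (the default since Jest 27), and older versions don't support a delay between attempts, so check the docs for your version. A minimal sketch, reusing the hypothetical movie endpoint from above:

const axios = require('axios');

// Re-run each failing test in this file up to 5 times before
// Jest reports it as failed (requires the jest-circus runner).
jest.retryTimes(5);

it('fetches movies from the live endpoint', async () => {
  const response = await axios.get('https://dummyMovieApi/movies');
  expect(response.status).toEqual(200);
});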

Related

How to handle lots of `mockImplementationOnce` calls

I'm writing integration tests for a class that makes a lot of requests. The requests are done through an HttpClient singleton.
So, to avoid making real requests, I mock all calls to HttpClient. The problem is, I have too many requests:
HttpClient.get is called to fetch a token.
HttpClient.get is called to fetch a resource.
HttpClient.get is called to fetch all customers from this resource.
HttpClient.get is called to verify if a single customer exists in another API.
Conditional: HttpClient.post is called to add this one customer to the API, if it does not exist.
HttpClient.post is called to add the resource to another API.
It's actually a little more complicated than that, because some of these calls are done multiple times (inside a loop), but you get the picture.
I wrote a test case for every scenario: one test case to simulate a failed request to fetch the token, another to simulate a failed request to fetch a resource, and so on.
To do this, I wrote a "happy" scenario (where everything goes well) using mockImplementationOnce. My beforeEach looks a little like this:
tokenResponse = { body: { token: 'some-token' }, status: 200 }
HttpClient.get.mockImplementationOnce(() => tokenResponse)

tokenResource = { body: <some-fixture-with-resources>, status: 200 }
HttpClient.get.mockImplementationOnce(() => tokenResource)
(...)
To write the failure scenarios, I reassigned the variable returned by the mock:
it('fails to fetch the token', () => {
  tokenResponse = { status: 500 }
  // code that calls my class
  // code that asserts that an error was thrown
})
Anyway, I managed to write simple test cases for all scenarios, but my beforeEach carries a giant pile of boilerplate. Besides that, I now want to write more advanced test cases where a request is made multiple times (n of customers > 1). It's getting quite complicated to handle all the fixtures and keep track of the individual mocks.
Is this a common issue? Is there an easier way to handle mock implementations? I thought about something like mockImplementationNth but couldn't find anything.
P.S.: Changing the code itself is hard because it is legacy code and the APIs are a little clunky.
I thought about isolating the scenarios into a setupMocks function with default settings that could be overridden in the test cases by another function. It would look something like this:
describe('Integration test', () => {
  beforeEach(() => {
    setupMocks()
  })

  it('goes well', () => {
    return expect(myClass.execute()).resolves.toBe(true)
  })

  it('fails to fetch the token', () => {
    overrideMocks('token-failed')
    return expect(myClass.execute()).rejects.toEqual('some-error')
  })

  (...)
})
At least the test cases will look simpler.
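For what it's worth, a minimal sketch of what setupMocks/overrideMocks could look like (the function names, the 'token-failed' key, and the fixture shapes are assumptions, not an established pattern; the point is just to centralize the mockImplementationOnce chain in one place):

const defaults = {
  token: { body: { token: 'some-token' }, status: 200 },
  resource: { body: { /* resource fixture */ }, status: 200 },
}

function setupMocks(overrides = {}) {
  const fixtures = { ...defaults, ...overrides }
  // Re-register the whole chain in call order on every setup,
  // so each test starts from a fully described sequence.
  HttpClient.get.mockReset()
  HttpClient.get
    .mockImplementationOnce(() => fixtures.token)
    .mockImplementationOnce(() => fixtures.resource)
  // ...continue the chain for the remaining calls
}

function overrideMocks(scenario) {
  if (scenario === 'token-failed') {
    setupMocks({ token: { status: 500 } })
  }
}

This keeps each test case down to a single overrideMocks call, and a repeated request (n customers > 1) becomes a loop that pushes extra mockImplementationOnce entries onto the chain.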

How to really call fetch in Jest test

Is there a way to call fetch in a Jest test? I just want to call the live API to make sure it is still working. If there are 500 errors or the data is not what I expect, then the test should report that.
I noticed that using request from the http module doesn't work. Calling fetch, like I normally do in non-test code, gives an error: Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout. The API returns in less than a second when I call it in the browser. I use approximately the following to conduct the test, but I have also simply returned the fetch promise from within the test, without using done, with a similar lack of success:
import { JestEnvironment } from "@jest/environment";
import 'isomorphic-fetch';
import { request } from "http";

jest.mock('../MY-API');

describe('tests of score structuring and display', () => {
  test('call API - happy path', (done) => {
    fetch(API).then(
      res => res.json()
    ).then(res => {
      expect(Array.isArray(res)).toBe(true);
      console.log(`success: ${res}`);
      done();
    }).catch(reason => {
      console.log(`reason: ${reason}`);
      expect(reason).not.toBeTruthy();
      done();
    });
  });
});
Oddly, there is an error message I can see as a console message after the timeout is reached: reason: ReferenceError: XMLHttpRequest is not defined
How can I make an actual, not a mocked, call to a live API in a Jest test? Is that simply prohibited? I don't see why this would fail given the documentation, so I suspect there is something implicitly imported in React Native that must be explicitly imported in a Jest test to make the fetch or request function work.
Putting aside any discussion about whether making actual network calls in unit tests is best practice...
There's no reason why you couldn't do it.
Here is a simple working example that pulls data from JSONPlaceholder:
import 'isomorphic-fetch';

test('real fetch call', async () => {
  const res = await fetch('https://jsonplaceholder.typicode.com/users/1');
  const result = await res.json();
  expect(result.name).toBe('Leanne Graham'); // Success!
});
With all the work Jest does behind the scenes (defining globals like describe, beforeAll, and test, routing code files to transpilers, handling module caching and mocking, and so on), the actual tests are ultimately just JavaScript code, and Jest simply runs whatever JavaScript it finds. So there really aren't any limitations on what you can run within your unit tests.
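As for the timeout in the question: the default 5000ms limit is often too tight for live network calls. You can raise it for a whole file with jest.setTimeout, or for a single test by passing a timeout as the third argument. A quick sketch:

import 'isomorphic-fetch';

// Raise the default 5000ms limit for every test in this file.
jest.setTimeout(30000);

// Or raise it for just this one test via the third argument.
test('slow real fetch call', async () => {
  const res = await fetch('https://jsonplaceholder.typicode.com/users/1');
  expect(res.status).toBe(200);
}, 30000);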

Async Mocha: `done` called, but next test never runs

I have an unconventional setup I can't change. It looks something like this:
test POSTs to server
server POSTs to blockchain
blockchain updates
syncing script updates the database
It is absolutely imperative that I do not run the next test until the test before it has completed that entire workflow, which typically takes 2-3 seconds. Here is an example of a test written for that flow with supertest and chai:
it('should create a user', done => {
  request(server)
    .post(`${API}/signup`)
    .set('Content-Type', 'application/json')
    .send(`{
      "email": "${USER_EMAIL}",
      "password": "${USER_PASSWORD}"
    }`)
    .expect(200)
    .expect(res => {
      expect(res.body.role).to.equal('user');
      expect(res.body.id).to.match(ID_PATTERN);
    })
    .end(_wait(done));
});
That _wait function is the key issue here. If I write it very naively with a setTimeout, it will work:
const _wait = cb => () => setTimeout(cb, 5000);
However, this isn't a great solution, since the blockchain is very unpredictable and can sometimes take much more than 2-3 seconds. What would be much better is to watch the database for changes. Thankfully the database is RethinkDB, which provides cursor objects that update on a change. So that should be easy, and look something like this:
var _wait = cb => () => {
  connector.exec(db => db.table('chain_info').changes())
    .then(cursor => {
      cursor.each((err, change) => {
        cb(err);
        return false;
      });
    });
};
This setup breaks the tests. As near as I can tell, done does get called: any console logs in and around it fire, and the test itself is logged as completed, but the next test never starts, and eventually everything times out:
Manager API Workflow:
  Account Creation:
    ✓ should create a user (6335ms)
    1) should login an administrator

1 passing (1m)
1 failing

1) Manager API Workflow: Account Creation: should login an administrator:
   Error: timeout of 60000ms exceeded. Ensure the done() callback is being called in this test.
Any assistance would be greatly appreciated. I am using Mocha 3.1.2, Chai 3.5.0, Supertest 2.0.1, and Node 6.9.1.
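One thing worth ruling out, offered as an assumption rather than a confirmed fix: a RethinkDB changefeed cursor stays open until it is closed explicitly, so the feed opened inside _wait may keep the connection busy after done fires. A sketch of the same watcher that closes the cursor once the first change arrives:

var _wait = cb => () => {
  connector.exec(db => db.table('chain_info').changes())
    .then(cursor => {
      cursor.each((err, change) => {
        // Close the changefeed so it does not keep the
        // connection open after this test has finished.
        cursor.close();
        cb(err);
        return false; // stop iterating
      });
    });
};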

Asynchronous code in custom ESLint rules

The Story and Motivation:
We have a rather huge end-to-end Protractor test codebase. Sometimes a test waits for a specific fix to be implemented, usually as part of a TDD approach, to demonstrate how a problem is reproduced and what the intended behavior is. What we are currently doing is using Jasmine's pending() with a Jira issue number inside. Example:
pending("Missing functionality (AP-1234)", function () {
// some testing is done here
});
Now, we'd like to know when we can rename the pending() back to it() and run the test. Or, in other words, when the issue AP-1234 is resolved or sent to testing.
The Current Approach:
At the moment, I'm trying to solve it with a custom ESLint rule, the jira Node.js module, and Q. The custom ESLint rule searches for pending() calls with at least one argument, extracts the ticket numbers (in the format AP- followed by digits), and uses jira.findIssue() to check their status in Jira. If the status is Resolved, it reports an error.
Here is what I've got so far:
"use strict";
var JiraApi = require("jira").JiraApi,
Q = require('q');
var jira = new JiraApi("https",
"jira.url.com",
"443",
"user",
"password",
"2");
module.exports = function (context) {
var jiraTicketRegex = /AP\-\d+/g;
return {
CallExpression: function (node) {
if (node.callee.name === "pending" && node.arguments.length > 0) {
var match = node.arguments[0].value.match(jiraTicketRegex);
if (match) {
match.forEach(function(ticket) {
console.log(ticket); // I see the ticket numbers printed
getTicket(ticket).then(function (status) {
console.log(status); // I don't see statuses printed
if (status === "Resolved") {
context.report(node, 'Ticket {{ticket}} is already resolved.', {
ticket: ticket
})
}
});
});
}
}
}
}
};
Where getTicket() is defined as:
function getTicket(ticket) {
    var deferred = Q.defer();

    jira.findIssue(ticket, function (error, issue) {
        if (error) {
            deferred.reject(new Error(error));
        } else {
            deferred.resolve(issue.fields.status.name);
        }
    });

    return deferred.promise;
}
The problem is: currently, it successfully extracts the ticket numbers from the pending() calls, but doesn't print ticket statuses. No errors though.
The Question:
The general question, I guess, would be: can I use asynchronous code blocks, wait for callbacks, and resolve promises in custom ESLint rules? And, if not, what are my options?
A more specific question would be: what am I doing wrong and how can I use Node.js jira module with ESLint?
Would appreciate any insights or alternative approaches.
The short answer is no, you can't use asynchronous code inside rules. ESLint is synchronous and relies heavily on EventEmitter as it walks the AST. It would be very hard to modify the ESLint code to be async while still guaranteeing that events are emitted in the right order.
I think your only choice might be to write a sync rule that outputs enough information in the error message, use one of the parsable formatters like JSON or UNIX, and then create another application that you can pipe the ESLint output to and that does the async Jira lookup based on the error messages.
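To make that concrete, here is a rough sketch of the second half of such a pipeline; the script name, the message format it greps for, and the reuse of the jira module setup from the question are all assumptions:

// post-lint.js
// Usage: eslint --format json . | node post-lint.js
// Assumes the sync ESLint rule embeds the ticket id in its
// report message, e.g. "Pending ticket AP-1234 found".
"use strict";

var JiraApi = require("jira").JiraApi;

var jira = new JiraApi("https", "jira.url.com", "443", "user", "password", "2");

var input = "";
process.stdin.on("data", function (chunk) { input += chunk; });
process.stdin.on("end", function () {
    JSON.parse(input).forEach(function (file) {
        file.messages.forEach(function (message) {
            var match = message.message.match(/AP-\d+/);
            if (!match) {
                return;
            }
            jira.findIssue(match[0], function (error, issue) {
                if (!error && issue.fields.status.name === "Resolved") {
                    console.log(file.filePath + ": ticket " + match[0] + " is already resolved");
                    process.exitCode = 1; // fail the CI step
                }
            });
        });
    });
});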
These answers remain valid in 2018.
For some insights from the eslint devs, see this conversation we had on their mailing list.
For a working example, in my "pseudo eslint plugin" I opted to use expensive but synchronous APIs and warn users about how best to use the "plugin" in their CI process.
Note: this does not answer the original question about support for async code in custom ESLint rules, but it provides an alternative solution to the issue.
I personally would not use ESLint in this case: it is meant to check whether your code is written correctly and follows style guides, and from my point of view pending tests are not part of a code check; they belong to your team's internal processes. Also, this kind of request may slow down your ESLint runs significantly: if anyone runs it in real time in their editor, the calls will be made very often and will slow down the entire check. I would make this JIRA check a part of the Protractor flow, so that if the ticket is resolved, you get a failed Protractor spec. (Copied from the comment to make the answer complete.)
Jasmine allows you to mark specs as pending using xit(). I am not sure about pending(), though; it works strangely in Protractor. Jasmine also allows calling pending() inside a spec, so that it will be marked as pending, but this is not implemented for Protractor yet (see issue). Knowing that, I would use a custom helper to define "pending specs" which should be checked for JIRA issue status. I guess you could still use Q to work with promises; I'll just post an alternative using WebDriver promises without external dependencies. Here is a modified version of getTicket():
function getTicketStatus(ticket) {
    // Using WebDriver promises
    var deferred = protractor.promise.defer();

    jira.findIssue(ticket, function (error, issue) {
        if (error) {
            deferred.reject(new Error(error));
        } else {
            deferred.fulfill(issue.fields.status.name);
        }
    });

    return deferred.promise;
}
Then there is a custom helper function:
function jira(name) {
    // Display as pending in reporter results, remove when pending() is supported
    xit(name);

    // Using the Jasmine async API, because the Jira request is not a part of the Control Flow
    it(name, function (done) {
        // Extract the ticket id (e.g. AP-1234) from the spec name
        var ticket = name.match(/AP-\d+/)[0];

        getTicketStatus(ticket).then(function (status) {
            if (status === 'Resolved') {
                done.fail('Ticket "' + name + '" is already resolved.');
            } else {
                done();
                // pending() is not supported yet https://github.com/angular/protractor/issues/2454
                // pending();
            }
        }, function (error) {
            done.fail(error);
        });
    });
}
Usage example:
jira('Missing functionality (AP-1234)', function () {
//
});
jira('Missing functionality (AP-1235)');
If the request to JIRA fails, or the issue has a Resolved status, you will get a failed spec (using the Jasmine async API). In all cases you will still have this spec duplicated as pending in the reporter results. I hope this can be improved once pending() functionality inside a spec is implemented.

Using ng-describe for end-to-end testing with protractor

I've recently discovered an awesome ng-describe package that makes writing unit tests for AngularJS applications very transparent by abstracting away all of the boilerplate code you have to remember/look up and write in order to load, inject, mock or spy.
Has anybody tried to use ng-describe with Protractor? Does it make sense, and can we benefit from it?
One of the things that caught my eye is how easily you can mock HTTP responses:
ngDescribe({
  inject: '$http', // for making test calls
  http: {
    get: {
      '/my/url': 42, // status 200, data 42
      '/my/other/url': [202, 42], // status 202, data 42
      '/my/smart/url': function (method, url, data, headers) {
        return [500, 'something is wrong'];
      } // status 500, data "something is wrong"
    },
    post: {
      // same format as GET
    }
  },
  tests: function (deps) {
    it('responds', function (done) {
      deps.$http.get('/my/other/url')
        .then(function (response) {
          // response.status = 202
          // response.data = 42
          done();
        });
      deps.http.flush();
    });
  }
});
Mocking HTTP responses usually helps to achieve better e2e coverage and to test how the UI reacts to specific situations and how the error handling works. This is something we are currently doing with protractor-http-mock; there are also other options, which don't look as easy as ng-describe.
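For comparison, registering a mocked response with protractor-http-mock looks roughly like this; the URL, payload, and page selectors are made up for illustration:

var mock = require('protractor-http-mock');

describe('page with a mocked backend', function () {
  beforeEach(function () {
    // Register the mocked responses before the page is loaded.
    mock([{
      request: { path: '/my/other/url', method: 'GET' },
      response: { status: 202, data: 42 }
    }]);
  });

  afterEach(function () {
    mock.teardown();
  });

  it('renders the mocked value', function () {
    browser.get('/');
    expect(element(by.css('.value')).getText()).toBe('42');
  });
});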
Protractor is primarily intended for E2E testing (with Selenium WebDriver), which means you need an actual backend hooked up (it could also be a mock backend). As the creator of Protractor wrote here, your application code runs separately from the test code, and it isn't possible to gain easy access to the $http service.
By mocking the backend calls you are not doing E2E testing anymore, even if you are using a tool for E2E tests like Protractor. Why not return to unit testing then? The only difference will be that you will use jQuery instead of the Protractor API, and the tests will be run with Karma. Then you can easily use ng-describe and $httpBackend, which are primarily intended for unit tests.
However, if you'd like to continue with this approach, you can check the comments in this Protractor issue. Several people there propose solutions to this problem, and, as mentioned, you are already using one of them. But in this case ng-describe won't help you much.
I hope that this answers your question.
