I'm writing integration tests for a class that makes a lot of requests. The requests are done through an HttpClient singleton.
So, to avoid making real requests, I mock all calls to HttpClient. The problem is that there are a lot of them:
HttpClient.get is called to fetch a token.
HttpClient.get is called to fetch a resource.
HttpClient.get is called to fetch all customers from this resource.
HttpClient.get is called to verify if a single customer exists in another API.
Conditional: HttpClient.post is called to add this one customer to the API, if it does not exist.
HttpClient.post is called to add the resource to another API.
It's actually a little more complicated than that, because some of these calls are done multiple times (inside a loop), but you get the picture.
I wrote a test case for every scenario. One test case to simulate a failed request to fetch the token, another to simulate a failed request to fetch a resource and so on.
To do this, I first wrote a "happy" scenario (where everything goes well) using mockImplementationOnce. My beforeEach looks a little like this:
tokenResponse = { body: { token: 'some-token'}, status: 200 }
HttpClient.get.mockImplementationOnce(() => tokenResponse)
tokenResource = { body: <some-fixture-with-resources>, status: 200 }
HttpClient.get.mockImplementationOnce(() => tokenResource)
(...)
To write each scenario, I reassign the variable the mock returns:
it('fails to fetch the token', () => {
  tokenResponse = { status: 500 }
  // code that calls my class
  // code that asserts that an error was thrown
})
Anyway, I managed to write simple test cases for all scenarios, but my beforeEach is full of boilerplate. Besides that, I now want to write more advanced test cases where a request is made multiple times (number of customers > 1), and it's getting quite complicated to handle all the fixtures and keep track of the individual mocks.
Is this a common issue? Is there an easier way to handle mock implementations? I thought about something like mockImplementationNth but couldn't find anything.
P.S.: Changing the code itself is hard because it's legacy code and the APIs are a little clunky.
I thought about isolating the mock setup into a setupMocks function with default settings that could be overridden in the test cases by another function. It would look something like this:
describe('Integration test', () => {
  beforeEach(() => {
    setupMocks()
  })

  it('goes well', () => {
    return expect(myClass.execute()).resolves.toBe(true)
  })

  it('fails to fetch the token', () => {
    overrideMocks('token-failed')
    return expect(myClass.execute()).rejects.toEqual('some-error')
  })

  (...)
})
At least the test cases will look simpler.
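For reference, here is a rough sketch of what setupMocks and overrideMocks could look like. The fixture shapes, the scenario names, and the number of queued calls are placeholders, not the real code:
// Sketch only - fixtures and scenario names are placeholders.
const defaultResponses = {
  token: { body: { token: 'some-token' }, status: 200 },
  resource: { body: { /* resource fixture */ }, status: 200 },
  customers: { body: [ /* customer fixtures */ ], status: 200 },
}

let responses

function setupMocks() {
  responses = { ...defaultResponses }
  HttpClient.get.mockReset()
  // queue the happy-path responses in the order the class requests them
  HttpClient.get
    .mockImplementationOnce(() => responses.token)
    .mockImplementationOnce(() => responses.resource)
    .mockImplementationOnce(() => responses.customers)
}

function overrideMocks(scenario) {
  // each scenario only swaps out the fixture it cares about
  if (scenario === 'token-failed') {
    responses.token = { status: 500 }
  }
}
Because each queued implementation closes over responses, overriding a fixture before the class runs is enough to change the outcome of that one call.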
Related
I'm using Jest to test some API endpoints, but these endpoints might sometimes fail spuriously because of network issues rather than bugs in my program. So I want to retry the test multiple times (5 times, for example) with a delay, and only if all of those tries fail should Jest report a failure.
What is the best way to achieve this? Does Jest or other libraries provide a solution? Or should I write my own program with something like setInterval?
Ideally you should not be making real network calls in your tests, as they slow your tests down. Moreover, you might be communicating with an external API, which should not be part of your test code, and, as you have already observed, it can lead to false negatives. Say you have something like the function below:
const axios = require('axios');

async function getFirstMovieTitle() {
  const response = await axios.get('https://dummyMovieApi/movies');
  return response.data[0].title;
}
To test the above network call, you should mock the axios GET request in your test:
const axios = require('axios');

jest.mock('axios');

it('returns the title of the first movie', async () => {
  axios.get.mockResolvedValue({
    data: [
      { title: 'First Movie' },
      { title: 'Second Movie' }
    ]
  });

  const title = await getFirstMovieTitle();
  expect(title).toEqual('First Movie');
});
Read more about mocking in Jest to understand this better.
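If you also want to cover the failure cases mentioned in the question (for example the API returning a 500), the same mock can be made to reject. A small sketch, reusing the getFirstMovieTitle function from above:
it('propagates an error when the request fails', async () => {
  // Simulate a failed request by making the mocked GET reject.
  axios.get.mockRejectedValue(new Error('Request failed with status code 500'));

  await expect(getFirstMovieTitle()).rejects.toThrow('Request failed with status code 500');
});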
TL;DR: Is there some well-known solution out there using React/Redux for offering a snappy, immediately responsive UI while keeping an API/database up to date with the changes, one that can gracefully handle failed API requests?
I'm looking to implement an application with a "card view" using https://github.com/atlassian/react-beautiful-dnd where a user can drag and drop cards to create groups. As a user creates, modifies, or breaks up groups, I'd like to make sure the API is kept up to date with the user's actions.
HOWEVER, I don't want to have to wait for an API response to set the state before updating the UI.
I've searched far and wide, but keep coming upon things such as https://redux.js.org/tutorials/fundamentals/part-6-async-logic which suggests that the response from the API should update the state.
For example:
export default function todosReducer(state = initialState, action) {
  switch (action.type) {
    case 'todos/todoAdded': {
      // Return a new todos state array with the new todo item at the end
      return [...state, action.payload]
    }
    // omit other cases
    default:
      return state
  }
}
As a general concept, this has always seemed odd to me, since it's the local application telling the API what needs to change; we obviously already have the data before the server even responds. This may not always be the case, such as creating a new object and wanting the server to dictate a new "unique id" of some sort, but it seems like there might be a way to just "fill in the blanks" once the server does respond with any missing data. In the case of an UPDATE vs CREATE, there's nothing the server is telling us that we don't already know.
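For illustration, here is a minimal sketch of that "fill in the blanks" idea; the action types, the temporary id field, and the pending flag are hypothetical, not taken from the Redux docs:
// Hypothetical sketch: add the todo immediately with a client-generated
// temporary id, then fill in the server-assigned id once the response arrives.
export default function todosReducer(state = [], action) {
  switch (action.type) {
    case 'todos/todoAddedOptimistically': {
      // action.payload: { tempId, text } - dispatched before the API call resolves
      return [...state, { id: action.payload.tempId, text: action.payload.text, pending: true }]
    }
    case 'todos/todoConfirmed': {
      // action.payload: { tempId, serverId } - dispatched after the API responds
      return state.map(todo =>
        todo.id === action.payload.tempId
          ? { ...todo, id: action.payload.serverId, pending: false }
          : todo
      )
    }
    default:
      return state
  }
}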
This may work fine for a small and lightweight application, but if I'm looking at API responses in the range of 500-750ms on average, the user experience is going to just be absolute garbage.
It's simple enough to create two actions, one that will handle updating the state and another to trigger the API call, but what happens if the API returns an error or a network request fails and we need to revert?
I tested how Trello implements this sort of thing by cutting my network connection and creating a new card. It eagerly creates the card immediately upon submission, and then removes the card once it realizes that it cannot update the server. This is the sort of behavior I'm looking for.
I looked into https://redux.js.org/recipes/implementing-undo-history, which offers a way to "rewind" state, but being able to implement this for my purposes would need to assume that subsequent API calls all resolve in the same order that they were called - which obviously may not be the case.
As of now, I'm resigning myself to the fact that I may need to just follow the established limited pattern, and lock the UI until the API request completes, but would love a better option if it exists within the world of React/Redux.
The approach you're talking about is called "optimistic" network handling -- assuming that the server will receive and accept what the client is doing. This works in cases where you don't need server-side validation to determine if you can, say, create or update an object. It's also equally easy to implement using React and Redux.
Normally, with React and Redux, the update flow is as follows:
The component dispatches an async action creator
The async action creator runs its side-effect (calling the server), and waits for the response.
The async action creator, with the result of the side-effect, dispatches an action to call the reducer
The reducer updates the state, and the component is re-rendered.
Some example code to illustrate (I'm pretending we're using redux-thunk here):
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  });
  return (<div />);
};

// ... in actions.js
// note: the thunk itself (the inner function) must be async so it can await
export const UpdateData = (data) => async (dispatch, getState) => {
  const results = await myApi.postData(data);
  dispatch(UpdateMyStore(results));
};
However, you can easily flip the order your asynchronous code runs in by simply not waiting for your asynchronous side effect to resolve. In practical terms, this means you don't wait for your API response. For example:
// ... in my-component.js:
export default () => {
  const dispatch = useDispatch();
  useEffect(() => {
    dispatch(MyActions.UpdateData(someDataFromSomewhere));
  });
  return (<div />);
};

// ... in actions.js
export const UpdateData = (data) => async (dispatch, getState) => {
  // we're not waiting for the api response anymore,
  // we just dispatch whatever data we want to our reducer
  dispatch(UpdateMyStore(data));
  myApi.postData(data);
};
One last thing though -- doing things this way, you will want to put some reconciliation mechanism in place, to make sure the client knows if the server call fails, and that it retries or notifies the user, etc.
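A minimal sketch of what that reconciliation could look like with redux-thunk; RevertUpdate and NotifyError are hypothetical actions your reducer and UI would need to handle:
// ... in actions.js
export const UpdateData = (data) => async (dispatch, getState) => {
  // remember what we had, in case we need to roll back
  const previous = getState().myData; // "myData" is a placeholder slice name

  // optimistic update: reflect the change in the UI immediately
  dispatch(UpdateMyStore(data));

  try {
    await myApi.postData(data);
  } catch (err) {
    // the server rejected the change or the network failed:
    // restore the previous state and let the user know
    dispatch(RevertUpdate(previous));
    dispatch(NotifyError(err));
  }
};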
The key phrase here is "optimistic updates", which is a general pattern for updating the "local" state on the client immediately with a given change under the assumption that any API request will succeed. This pattern can be implemented regardless of what actual tool you're using to manage state on the client side.
It's up to you to define and implement what appropriate changes would be if the network request fails.
const functions = require('firebase-functions')
const axios = require('axios')

exports.getTown = functions.https.onCall((data, context) => {
  axios.get(`https://maps.googleapis.com/maps/api/geocode/json?latlng=${data.lat},${data.lng}&result_type=locality&key=**********`)
    .then(town => {
      return town
    }).catch(err => {
      return err
    })
})
When I call this in the front end I just get an error in the console:
POST https://europe-west2-heretic-hearts.cloudfunctions.net/getTown 500
Uncaught (in promise) Error: INTERNAL
I've tested to make sure the incoming data is being received properly and it is, so the problem must be in the function itself. But I can't see what could possibly be going wrong here...?
You can't invoke an onCall type function with a simple POST request. Callable functions have a specific protocol that they use on top of HTTP. If you can't reproduce that protocol, the function will fail every time.
If you want to write a simple HTTP function endpoint, then follow the instruction for writing an HTTP trigger instead using onRequest. It works very differently.
Also, I'm noticing that you're not handling promises correctly in your function. Please read the documentation thoroughly to understand what you need to do with promises in order to get your function to execute correctly, no matter what type of function you write.
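For illustration, here is a sketch of the callable version with the promise returned; the error handling is an assumption about how you might want to surface failures, not code from the question:
const functions = require('firebase-functions')
const axios = require('axios')

exports.getTown = functions.https.onCall((data, context) => {
  // Return the promise chain so the function waits for the request to finish.
  return axios
    .get(`https://maps.googleapis.com/maps/api/geocode/json?latlng=${data.lat},${data.lng}&result_type=locality&key=**********`)
    .then(response => response.data) // send back plain JSON, not the whole axios response object
    .catch(err => {
      // Throwing an HttpsError reports a proper error to the caller instead of returning it as data.
      throw new functions.https.HttpsError('internal', err.message)
    })
})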
Is there a way to call fetch in a Jest test? I just want to call the live API to make sure it is still working. If there are 500 errors or the data is not what I expect, then the test should report that.
I noticed that using request from the http module doesn't work. Calling fetch, as I normally do in non-test code, gives an error: Timeout - Async callback was not invoked within the 5000ms timeout specified by jest.setTimeout. The API responds in less than a second when I call it in the browser. I use approximately the following to conduct the test, but I have also simply returned the fetch promise from within the test, without using done, with a similar lack of success:
import { JestEnvironment } from "@jest/environment";
import 'isomorphic-fetch';
import { request } from "http";

jest.mock('../MY-API');

describe('tests of score structuring and display', () => {
  test('call API - happy path', (done) => {
    fetch(API).then(
      res => res.json()
    ).then(res => {
      expect(Array.isArray(res)).toBe(true);
      console.log('success:', res);
      done();
    }).catch(reason => {
      console.log(`reason: ${reason}`);
      expect(reason).not.toBeTruthy();
      done();
    });
  });
});
Oddly, there is an error message I can see as a console message after the timeout is reached: reason: ReferenceError: XMLHttpRequest is not defined
How can I make an actual, not a mocked, call to a live API in a Jest test? Is that simply prohibited? I don't see why this would fail given the documentation so I suspect there is something that is implicitly imported in React-Native that must be explicitly imported in a Jest test to make the fetch or request function work.
Putting aside any discussion about whether making actual network calls in unit tests is best practice...
There's no reason why you couldn't do it.
Here is a simple working example that pulls data from JSONPlaceholder:
import 'isomorphic-fetch';

test('real fetch call', async () => {
  const res = await fetch('https://jsonplaceholder.typicode.com/users/1');
  const result = await res.json();
  expect(result.name).toBe('Leanne Graham'); // Success!
});
With all the work Jest does behind the scenes (defining globals like describe, beforeAll, and test, routing code files to transpilers, handling module caching and mocking, etc.), the actual tests are ultimately just JavaScript code, and Jest simply runs whatever JavaScript it finds, so there really aren't any limitations on what you can run within your unit tests.
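One practical note on the timeout mentioned in the question: a slow live API can exceed Jest's default 5000ms limit, and the per-test timeout can be raised, for example by passing it as the third argument to test (the URL here is just a placeholder):
import 'isomorphic-fetch';

// The third argument is the per-test timeout in milliseconds, so a slow
// live endpoint doesn't trip the default 5000ms limit.
test('real fetch call against a slow endpoint', async () => {
  const res = await fetch('https://jsonplaceholder.typicode.com/users/1');
  expect(res.status).toBe(200);
}, 15000);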
I'm trying to mock a response to a JSONP GET request which is made by a function that returns an ES6 promise, which I've wrapped in $q.when(). The code itself works just fine; however, in the unit tests the request is not being caught by $httpBackend and goes right through to the actual URL. Thus, when flush() is called, I get an error stating Error: No pending request to flush !. The JSONP request is made via jQuery's $.getJSON() inside the ES6 promise, so I opted to try and catch all outgoing requests by providing a regex instead of a hard-coded URL.
I've been searching all over trying to figure this out for a while now and still have yet to understand what's causing the call to go through. I feel as if the HTTP request in the ES6 promise is being made "outside of Angular" so $httpBackend doesn't know about it / isn't able to catch it, although that may not be the case if the call was being made inside of a $q promise from the get-go. Can anyone possibly tell me why this call is going through and why a simple timeout will work just fine? I've tried all combinations of $scope.$apply, $scope.$digest, and $httpBackend.flush() here, but to no avail.
Maybe some code will explain it better...
Controller
function homeController() {
  // ...
  var self = this;

  self.getData = function getData() {
    $q.when(user.getUserInformation()).then(function() {
      self.username = user.username;
    });
  };
}
Unit Test
// ...
beforeEach(module('home'));

describe('Controller', function() {
  var $httpBackend, scope, ctrl;

  beforeEach(inject(function(_$httpBackend_, $rootScope, $componentController) {
    $httpBackend = _$httpBackend_;
    scope = $rootScope.$new(); // used to try and call $digest or $apply

    // have also tried whenGET, when('GET', ..), etc...
    $httpBackend.whenJSONP(/.*/)
      .respond([
        {
          "user_information": {
            "username": "TestUser"
          }
        }
      ]);

    ctrl = $componentController("home");
  }));

  it("should add the username to the controller", function() {
    ctrl.getData(); // make HTTP request
    $httpBackend.flush(); // Error: No pending request to flush !
    expect(ctrl.username).toBe("TestUser");
  });
});
// ...
For some reason this works, however:
it("should add the username to the controller", function() {
ctrl.getData(); // make HTTP request
setTimeout(() => {
// don't even need to call flush, $digest, or $apply...?
expect(ctrl.username).toBe("TestUser");
});
});
Thanks to Graham's comment, I was brought further down a different rabbit hole due to my lack of understanding of several things, which I will summarize here in case someone ends up in the same situation...
I didn't fully understand how JSONP works. It doesn't rely on XmlHttpRequest at all (see here). Rather than trying to fiddle with mocking responses to these requests through JSONP, I simply switched the "debug" flag on the code I was using, which disabled JSONP so the calls were then made via XHR objects (this would fail the same-origin policy if real responses were needed from this external API).
Instead of trying to use jasmine-ajax, I simply set a spy on jQuery's getJSON and returned a mock response. This finally sent the mocked response to the ES6 promise, but for some reason the then function of the $q promise object which resulted from wrapping the ES6 promise wasn't being called (nor any other error-handling functions, not even finally). I also tried calling $scope.$apply() pretty much anywhere on the off chance it would help, but to no avail.
Basic implementation (in unit test):
// ...
spyOn($, 'getJSON').and.callFake(function (url, success) {
  success({"username": "TestUser"}); // send mock data
});

ctrl.getData(); // make GET request
// ...
Problem (in controller's source):
// user.getUserInformation() returns an ES6 promise
$q.when(user.getUserInformation()).then(function() {
  // this was never being called / reached! (in the unit tests)
});
Ultimately, I used the implementation from point 2 to send the data and just wrapped the assertions in the unit test inside a timeout with no duration specified. I realize that's not optimal and hopefully isn't how it should be done, but after trying for many hours I've about reached my limit and given up. If anyone has any idea how to improve upon this, or why then isn't being called, I would honestly love to hear it.
Unit Test:
// ...
ctrl.getData(); // make GET request

setTimeout(() => {
  expect(ctrl.username).toBe("TestUser"); // works!
});