I am new to Jasmine testing and looking for a "best practice" for unit testing a stateful AngularJS service. Most tutorials I could find focus on test cases that run atomic calls to stateless services.
This matches the Jasmine syntax nicely:
it("should do something", function(){ expect(target.doSomething()).toBe... })
However, I found no obvious way to extend this pattern to a test case that involves multiple calls to one or several of the service's functions.
Let's imagine a service like this one:
angular.module("someModule").factory("someService", function(){
    return {
        enqueue: function(item){
            // Add item to some queue
        }
    };
});
For such a service, it makes sense to test that sequential calls to enqueue() process the items in the correct order. This involves writing a test case that calls enqueue() multiple times and checks the end result (which obviously cannot be achieved with a service as simple as the one above, but this is not the point...)
What doesn't work:
describe("Some service", function(){
// Initialization omitted for simplicity
it("should accept the first call", function() {
expect(someService.enqueue(one)).toBe... // whatever is correct
});
it("should accept the second call", function() {
expect(someService.enqueue(two)).toBe... // whatever is correct
});
it("should process items in the correct order", function() {
// whatever can be used to test this
});
});
The code above (which actually defines not one but three test cases) fails randomly, because the three test cases are executed... just as randomly.
A poster in this thread suggested that splitting the code into several describe blocks would cause them to be executed in the given order, but again this seems to differ from one Jasmine version to another (according to other posters in the same thread). Moreover, executing suites and tests in random order seems to be intended behaviour; even if it were possible, by configuration, to override it, that would probably NOT be the right way to go.
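(For reference, recent Jasmine versions do expose such a setting; a one-line sketch, with the caveat above that forcing the order is usually the wrong fix:)

jasmine.getEnv().configure({ random: false }); // Jasmine 3+: run specs in declaration order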
Thus it seems that the only correct way to test a multi-call scenario is to make it ONE test case, like this:
describe(("Some service", function(){
// Initialization omitted for simplicity
it("should work in my complex scenario", function(){
expect(someService.enqueue(one)).toBe... // whatever is correct
expect(someService.enqueue(two)).toBe... // whatever is correct
expect(/* whatever is necessary to ensure the order is correct */);
});
});
While technically this seems the logical way to go (after all, a complex scenario is one test case, not three), the Jasmine "description + code" pattern suffers in this implementation because:
There is no way to associate a message with each "substep" that can fail within the test case;
The description for the single "it" inevitably becomes bulky, as in the example above, if it is to say anything useful about a complex scenario.
This makes me wonder whether this really is the only correct solution (or is it?) to this kind of testing need. Again, I am especially interested in "doing it the right way" rather than using some kind of hack that would make it work... where it should not.
Sorry, no code for this... I'm not sure it needs any; I think you just need to adjust your expectations of your tests.
As a general rule of testing, you don't really care how external dependencies handle your service; you can't control that. You want to test what you think the expected results of your service are going to be.
For your example, you'll just want to set up your service's dependencies, call the function, and test the expected results of calling the enqueue function. If it returns a promise, check the success and error paths; if it calls an API, check that; and so on (see the sketch at the end of this answer).
If you want to see how external dependencies use your service, you'll test that in those dependencies' tests.
For example, say you have a controller that invokes enqueue. In that controller's test you'll have to inject your provider (the service) and handle the expectations there.
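To make the first point concrete, here is a minimal sketch of the multi-call scenario as a single spec. It assumes ngMock's module/inject helpers and a hypothetical dequeue() method, which the question's service doesn't define, purely to observe the end result:

describe("someService", function() {
    var someService;

    beforeEach(module("someModule"));
    beforeEach(inject(function(_someService_) {
        someService = _someService_;
    }));

    it("should process items in the order they were enqueued", function() {
        someService.enqueue("one");
        someService.enqueue("two");

        // dequeue() is hypothetical; use whatever the real API exposes
        expect(someService.dequeue()).toBe("one");
        expect(someService.dequeue()).toBe("two");
    });
});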
Related
What is the point of mocking the method of an external library when unit testing?
Let's say that I have a function that uses the XYZ library to get the current user using a given authorization token; if it finds the user, it returns a valid AWS policy:
export const getUser = async token => {
    const user = await XYZ.getUser(token)
    return {
        // valid policy
        context: user,
    }
}
If an invalid token is given, getUser should throw an error.
What is the point of testing this function if XYZ.getUser is already well tested?
Why mock the getUser instead of using the real one?
You wouldn't want to invoke the actual HTTP request behind XYZ.getUser(token), for the following reasons.
1) The objective of unit testing is to test a specific behaviour, and in your case it is to test that your getUser function actually works.
2) You wouldn't want to make an actual request to the backend, as that would usually mean writing/reading/deleting something in a database. It is not a good idea to overcomplicate your unit testing with such operations; you want to limit and simplify each unit test to a very specific behaviour.
3) Since the main objective is to test the above behaviour, you want to limit what you are testing. Unit testing of the network request made by XYZ.getUser(token) should be done on the server side, not the client side (front-end).
4) By mocking the response from XYZ.getUser(token), you can simulate multiple scenarios and responses, such as successful responses (200 OK) and various errors (business-logic errors from the backend, or network errors such as 400, 500, etc.). You can do so by hard-coding the JSON responses from XYZ.getUser(token) for the different scenarios you need to test.
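For example, a minimal sketch using Jasmine-style spies (the framework choice, and Jasmine 3.3+'s expectAsync, are assumptions; the question doesn't name a test runner):

describe('getUser', function() {
    it('returns a policy with the user as context for a valid token', async function() {
        var fakeUser = { id: 1 };
        // Hardcoded response instead of a real HTTP call
        spyOn(XYZ, 'getUser').and.returnValue(Promise.resolve(fakeUser));

        var policy = await getUser('valid-token');

        expect(XYZ.getUser).toHaveBeenCalledWith('valid-token');
        expect(policy.context).toEqual(fakeUser);
    });

    it('rejects when the library rejects an invalid token', async function() {
        spyOn(XYZ, 'getUser').and.returnValue(Promise.reject(new Error('invalid token')));

        await expectAsync(getUser('bad-token')).toBeRejectedWithError('invalid token');
    });
});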
This goes back to the definition and usage of unit tests.
Unit tests are supposed to test well-defined functionality. For example, you might write an algorithm whose functionality is very well defined; because you know how it's supposed to function, you can unit test it.
Unit tests, by definition, are not meant to touch any external systems, such as (but not limited to) files, APIs, and databases: anything that requires going outside the logical unit, which is the code you want to test.
There are multiple reasons for that, among which is execution speed. Calling an API or a third-party service takes a while. That's not necessarily a problem when you look at one test, but when you have 500 tests or more the delays become important. The main idea is that you are supposed to run your unit tests all the time, very quickly, and get immediate feedback: is anything broken, have I broken something in other parts of the system? In any mature system you will run these tests as part of your CI/CD process, and you cannot afford to wait long for them to finish; they need to run in seconds, which is why there are rules about what they should and should not touch.
You mock external things to improve execution speed and to eliminate third-party issues, as you really only want to test your own code.
What if you call a third-party API and it is down for maintenance? Does that mean you can't test your code because of them?
Now, on the topic of mocking: do not abuse it. If you code in a functional way you can eliminate the need for a lot of mocking: instead of passing or hard-coding dependencies, pass data. This makes it very easy to alter the input data without mocking much (see the sketch below). You then test the system as a whole using integration/e2e tests, which gives you the certainty that your dependencies are resolved correctly.
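A small illustration of that last point (the names are made up):

// Hard to test without a mock: the function fetches its own input.
async function totalPriceFromApi(api, cartId) {
    const items = await api.fetchCart(cartId);
    return items.reduce(function(sum, item) { return sum + item.price; }, 0);
}

// Easy to test with plain data: no mock needed.
function totalPrice(items) {
    return items.reduce(function(sum, item) { return sum + item.price; }, 0);
}

// In a test, just pass data:
expect(totalPrice([{ price: 2 }, { price: 3 }])).toBe(5);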
Lately a colleague and I have had some disagreements on the "right" way to implement Cucumber step definitions using Protractor and Chai as Promised. Our contention comes from a mutual lack of understanding of precisely what is going with promise resolution in the context of Cucumber.
We're testing against an AngularJS application, so resolving promises and asynchronous behavior is a necessary evil. The biggest problem we've had is forcing synchronous test behavior and getting Cucumber to wait on promises between step definitions. In some cases, we've observed Cucumber seemingly plowing straight through step definitions before WebDriver even executes them. Our solutions to this problem vary...
Consider the hypothetical scenario:
Scenario: When a user logs in, they should see search form
Given a user exists in the system
When the user logs into the application
Then the search form should be displayed
Most of the confusion originates in the Then step. In this example the definition should assert that all fields for the search form exist on the page, meaning multiple isPresent() checks.
From the documentation and examples I was able to find, I felt the assertion should look something like this:
this.Then(/the search form should be displayed/, function(next) {
    expect(element(by.model('searchTerms')).isPresent()).to.eventually.be.true;
    expect(element(by.model('optionsBox')).isPresent()).to.eventually.be.true;
    expect(element(by.button('Run Search')).isPresent()).to.eventually.be.true.and.notify(next);
});
However, my colleague contends that in order to satisfy promise resolution, you need to chain your expects with then() like this:
this.Then(/the search form should be displayed/, function(next) {
    element(by.model('searchTerms')).isPresent().then(function(result) {
        expect(result).to.be.true;
    }).then(function() {
        element(by.model('optionsBox')).isPresent().then(function(result) {
            expect(result).to.be.true;
        }).then(function() {
            element(by.button('Run Search')).isPresent().then(function(result) {
                expect(result).to.be.true;
                next();
            });
        });
    });
});
The latter feels really wrong to me, but I don't really know if the former is correct either. The way I understand eventually() is that it works similarly to then(), in that it waits for the promise to resolve before moving on. I would expect the former example to wait on each expect() call in order, and then call next() through notify() in the final expect() to signal to Cucumber to move on to the next step.
To add even more confusion, I've observed other colleagues write their expects like this:
expect(some_element).to.eventually.be.true.then(function() {
    expect(some_other_element).to.eventually.be.true.then(function() {
        expect(third_element).to.eventually.be.true.then(function() {
            next();
        });
    });
});
So the questions I think I'm alluding to are:
Is any of the above even kinda right?
What does eventually() really do? Does it force synchronous behavior like then()?
What does and.notify(next) really do? Is it different from calling next() inside of a then()?
Is there a best practices guide out there that we haven't found yet that gives more clarity on any of this?
Many thanks in advance.
Your feeling was correct, your colleague was wrong (though it was a reasonable mistake!). Protractor automatically waits for one WebDriver command to resolve before running a second. So in your second code block, element(by.button('Run Search')).isPresent() will not resolve until both element(by.model('optionsBox')).isPresent() and element(by.model('searchTerms')).isPresent() are done.
eventually resolves promises. An explanation is here: https://stackoverflow.com/a/30790425/1432449
I do not believe it's different from putting next() inside of then()
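In other words, assuming Chai as Promised, these two forms should behave the same (el stands in for any element):

// Signal Cucumber via notify:
expect(el.isPresent()).to.eventually.be.true.and.notify(next);

// Or resolve the assertion's promise and call next() yourself:
expect(el.isPresent()).to.eventually.be.true.then(function() { next(); });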
I do not believe there is a best practices guide. Cucumber is not a core focus of the Protractor team, and support for it is largely provided by the community on github. If you or someone you know would like to write a best practices guide, we (the protractor team) would welcome a PR!
What works for me is this: the function below waits for something that will always be true, in this case the existence of an html tag. I call this function at the end of each test, passing in the callback.
function callbackWhenDone(callback) {
    browser.wait(EC.presenceOf(element(by.css('html'))))
        .then(function () { callback(); });
}
This is the usage in a simple test:
this.Given(/^I go on "([^"]*)"$/, function (arg1, callback) {
    browser.get('/' + arg1);
    callbackWhenDone(callback);
});
A bit of a hack, I know, but it gets the job done and looks pretty clean when used everywhere.
With Jasmine it is possible to spyOn methods, but I'm not clear on when that would actually be useful. My understanding is that unit tests should not be concerned with implementation details, and testing whether a method is called would be an implementation detail.
One place I can think of is spying on scope.$broadcast (Angular) etc., but then again this would be an implementation detail, and I'm not sure unit tests should even bother with how the code works, as long as it gives the expected result.
Obviously there are good reasons to use spyOn, so where would be a good place to use it?
The spyOn you describe is more commonly known in testing as a MOCK, although to be more precise it allows for two operations:
Create a new implementation for a method via createSpy (this is the classical mock)
Instrument an existing method via spyOn (this allows you to see if the method was called and with what args, the return value etc.)
Mocking is probably the most used technique in unit testing. When you are testing a unit of code, you'll often find that it has dependencies on other units of code, and those dependencies have their own dependencies, etc. If you try to test everything you'll end up with a module/UI test, which is expensive and difficult to maintain (those are still valuable, but you want as few of them as possible).
This is where mocking comes in. Imagine your unit calls a REST service for some data. You don't want to take a dependency on that service in your unit test, so you mock the method that calls the service and provide your own implementation that simply returns some data. Want to check that your unit handles REST errors? Have your mock return an error, and so on.
It can sometimes be useful to know whether your code actually calls another unit of code. Imagine that you want to make sure your code correctly calls a logging module: just mock (spyOn) that logging module and assert that it was called X number of times with the proper parameters. A sketch of both uses follows.
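A sketch of both uses, with hypothetical restService, logger, and unitUnderTest objects:

// 1) Stub a dependency so the unit can be tested in isolation:
it('handles data from the REST layer', function() {
    spyOn(restService, 'get').and.returnValue(Promise.resolve([{ id: 1 }]));
    // ... exercise the unit and assert on the result ...
});

// 2) Verify an interaction with a collaborator:
it('logs an error when the REST call fails', async function() {
    spyOn(restService, 'get').and.returnValue(Promise.reject(new Error('boom')));
    spyOn(logger, 'error');

    await unitUnderTest.load();

    expect(logger.error).toHaveBeenCalledTimes(1);
});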
You can spy on functions, and then you will be able to assert a couple of things about them: you can check whether a function was called, what parameters it had, whether it returned something, or even how many times it was called!
Spies are highly useful when writing tests, so I am going to explain how to use the most common of them here.
// This is our SUT (Subject Under Test)
function Post(rest) {
    this.rest = rest;
    rest.init();
}
We have here our SUT, which is a Post constructor. It uses a RestService to fetch its stuff. Our Post delegates all the REST work to the RestService, which is initialized when we create a new Post object. Let's start testing it step by step:
describe('Posts', function() {
    var rest, post;

    beforeEach(function() {
        rest = new RestService();
        post = new Post(rest);
    });
});
Nothing new here. Since we are going to need both instances in every test, we put the initialization in a beforeEach so we will have a new instance every time.
Upon Post creation, we initialize the RestService. We want to test that. How can we do it?
it('will initialize the rest service upon creation', function() {
    spyOn(rest, 'init');
    post = new Post(rest);
    expect(rest.init).toHaveBeenCalled();
});
We want to make sure that init on rest is called when we create a new Post object. For that we use the Jasmine spyOn function. The first parameter is the object we want to put the spy on, and the second parameter is a string naming the function to spy on; in this case we want to spy on the init function of the rest object. Then we just need to create a new Post object, which will call that init function. The final part is to assert that rest.init has been called. Easy, right? Something important here: when you spy on a function, the real function is never called, so here rest.init doesn't actually run.
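If you do want the real implementation to run while still recording the call, Jasmine (2+ syntax) lets you chain callThrough on the spy:

spyOn(rest, 'init').and.callThrough(); // records the call AND runs the real init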
Recently I started unit testing a JavaScript app I'm working on. No matter whether I use Jasmine, QUnit, or another framework, I always write a set of tests. Now, I have my source code with, let's say:
function calc()
{
    // some code
    someOtherFunction();
    // more code
}
I also have a test (no matter what framework, with Jasmine spies or sinon.js or something similar) that confirms that someOtherFunction() is called when calc() is executed. The test passes. Now at some point I refactor the calc function so that the someOtherFunction() call no longer exists, e.g.:
function calc()
{
    // some code
    someVariable++;
    // more code
}
The previous test will now fail, yet the function still works as expected; simply, its code is different.
Now, I'm not sure I understand correctly how testing is done. It seems obvious that I will have to go back and rewrite the test, but if this happens, is there something wrong with my approach? Is it bad practice? If so, at which point did I go wrong?
The general rule is that you don't test implementation details. So, given that you decided it was okay to remove the call, that call was an implementation detail and therefore you should not have tested that it was made.
20/20 hindsight is a great thing, isn't it?
In general I wouldn't test that a 'public' method called a 'private' one. Testing delegation should be reserved for when one class calls another.
You have written a great unit test.
The unit tests should notice sneaky side-effects when you change the implementation.
And the test must fail when the expected values don't show up.
So look at your unit test where the expected values don't match and decide what the problem is:
1) The unit test tested things it shouldn't (change the test)
2) The code is broken (add the missing side-effect)
It's fine to rewrite this test. A lot of tests fail to be perfect on the first pass. The most common test smell is tight coupling with implementation details.
Your unit test should verify the behavior of the object, not how it achieved the result. If you're strict about doing this in a TDD style, maybe you should revert the code you changed and refactor the test first. But regardless of what technique you use, it's fine to change the test as long as you're decoupling it from the details of the system under test.
Let's say I have the following method:
function sendUserDataToServer(userData) {
    // Do some configuration stuff
    $.ajax('SomeEndpoint', userData);
}
When writing a unit test for this function, I could go one of two ways:
Create a spy around $.ajax and check that it was called with the expected parameters. An actual XHR request will not be sent.
Intercept the ajax request with a library like SinonJs and check the XHR object to make sure it was configured correctly.
Why I might go with option 1: It separates the functionality of $.ajax from my code. If $.ajax breaks due to a bug in jQuery, it won't create a false-negative in my unit test results.
Why I might go with option 2: If I decide I want to use another library besides jQuery to send XHRs, I won't have to change my unit tests to check for a different method. However, if there's a bug in that library, my unit tests will fail and I won't necessarily know right away that it's the library rather than my actual code.
Which approach is correct?
Your first option is correct. The purpose of your unit test is twofold:
to verify your code contract
to verify that it communicates with other things it uses.
Your question is about the interaction with $.ajax, so the purpose of this test is to confirm that your code talks correctly to its collaborators.
If you go with option 1, this is exactly what you are doing. If you go with option 2, you're testing the third-party library rather than whether you're executing the communication protocol correctly.
You should write both tests, though. The second is probably fast enough to live with your unit tests, even though some people might call it an integration test. That test tells you that you're using the library correctly to create the object you intended to create. Having both will let you differentiate between a bug in your code and one in the library, which will come in handy if a library bug causes something else in your system to fail.
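A sketch of option 1 with a Jasmine spy (the exact assertion depends on how your code builds the request):

it('sends user data to the endpoint', function() {
    spyOn($, 'ajax'); // no real XHR is sent

    sendUserDataToServer({ name: 'Ada' });

    expect($.ajax).toHaveBeenCalledWith('SomeEndpoint', { name: 'Ada' });
});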
My opinion: option 1 is more correct within unit tests.
A unit test should test the unit of code only. In your case this would be the JavaScript under your control (e.g. logic within the function, within a class or namespace depending on the context of your function).
Your unit tests should mock the jQuery call and trust that it works correctly. Doing real AJAX calls is fine, but these would be part of integration tests.
See this answer regarding what is and what isn't a unit test.
As an aside, a third way would be to do the AJAX calls via a custom proxy class, which can be mocked in tests but also allows you to switch the library you use; see the sketch below.
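A sketch of that proxy idea (the names are illustrative):

// A thin wrapper owns the transport, so tests mock the wrapper, not jQuery.
var httpProxy = {
    post: function(url, data) {
        return $.ajax({ url: url, method: 'POST', data: data });
    }
};

function sendUserDataToServer(userData) {
    return httpProxy.post('SomeEndpoint', userData);
}

// In a test, spy on the wrapper instead of $.ajax:
spyOn(httpProxy, 'post');
sendUserDataToServer({ name: 'Ada' });
expect(httpProxy.post).toHaveBeenCalledWith('SomeEndpoint', { name: 'Ada' });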
I would use option 1, but go with whichever makes the tests easiest to understand.
You are depending on jQuery in your code, so, as you point out, you don't need to worry about whether it actually works. I have found that this makes the tests cleaner: you specify your parameters and the response and have your code process them. You can also easily specify which callback should be used (success or error).
As for changing the library from jQuery: if you change the code, your test should fail. You could argue that your behavior is now different; yes, you are still making an Ajax call, but with a different dependency, and your test should be updated to reflect this change.
Both approaches are really correct. If you are creating your own wrapper for sending Ajax calls, it might make sense to intercept the XHR request or manually create and send the request yourself. Choose the option that makes your test easiest to understand and maintain.
The most important thing is to test your own code. Separate the I/O code from your business logic; after that you should have methods without any logic that do the actual I/O (exactly as in your example). And now: should you test those methods? It depends; tests are there to make you feel safer.
If you really worry that jQuery could fail and you can't afford such an unlikely scenario, then test it, but not with some mocking library; do full end-to-end integration tests.
If you're not sure whether you are using jQuery correctly, then write "learning tests": pass parameters to it and check what kind of request it produces. Mocking tests should be enough in this situation.
If you need documentation of the contract between frontend and backend, then you may choose whether you prefer to test that the backend meets the contract, the frontend, or both.
But if none of the above applies to you, then isn't it a waste of time to test an external, well-established framework? The jQuery folks have surely tested it. You will still probably need some kind of integration tests (automatic or manual) to check that your frontend interacts with the backend properly, and those tests will cover the network layer.