Recently I started unit testing a JavaScript app I'm working on. Whether I use Jasmine, QUnit, or another framework, I always write a set of tests. Now, I have my source code with, let's say:
function calc()
{
  // some code
  someOtherFunction();
  // more code
}
I also have a test (no matter what framework — with Jasmine spies, Sinon.js, or similar) that confirms that someOtherFunction() is called when calc() is executed. The test passes. Now, at some point I refactor the calc function so that the someOtherFunction() call no longer exists, e.g.:
function calc()
{
  // some code
  someVariable++;
  // more code
}
The previous test will fail, yet the function still works as expected; its code is simply different.
Now, I'm not sure if I understand correctly how testing is done. It seems obvious that I will have to go back and rewrite the test, but if this happens, is there something wrong with my approach? Is it bad practice? If so, at which point did I go wrong?
The general rule is that you don't test implementation details. So, given that you decided it was okay to remove the call, the method was an implementation detail, and therefore you should not have tested that it was called.
20/20 hindsight is a great thing, isn't it?
In general, I wouldn't test that a 'public' method called a 'private' one. Testing delegation should be reserved for when one class calls another.
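As a sketch of that distinction, asserting delegation between two objects with a Jasmine spy is fair game (OrderService and repository are hypothetical names, not from the question):

it("delegates saving to the repository", function () {
  // The collaborator is a separate object, so asserting the call is
  // a statement about behaviour at the boundary, not about internals.
  var repository = jasmine.createSpyObj("repository", ["save"]);
  var service = new OrderService(repository); // hypothetical class
  service.placeOrder({ id: 1 });
  expect(repository.save).toHaveBeenCalledWith({ id: 1 });
});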
You have written a great unit test.
The unit tests should notice sneaky side-effects when you change the implementation.
And the test must fail when the expected values don't show up.
So look at your unit test where the expected values don't match and decide what the problem was:
1) The unit test tested things it shouldn't (change the test)
2) The code is broken (add the missing side-effect)
It's fine to rewrite this test. A lot of tests fail to be perfect on the first pass. The most common test smell is tight-coupling with implementation details.
Your unit test should verify the behavior of the object, not how it achieved the result. If you're strict about doing this in a TDD style, maybe you should revert the code you changed and refactor the test first. But regardless of what technique you use, it's fine to change the test as long as you're decoupling it from the details of the system under test.
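For example, a behaviour-focused version of the original test would assert only on calc()'s observable result (the expected value here is hypothetical, since the question elides the body):

describe("calc", function () {
  it("produces the expected result", function () {
    // Assert on what calc() returns, not on which helpers it calls;
    // this test survives the refactoring described above.
    expect(calc()).toEqual(42); // hypothetical expected value
  });
});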
Related
I am writing unit test cases using Jasmine for an Angular 13 project. There is a test case which passes sometimes and fails sometimes. I presume this happens because of the order of test execution. Any idea how to deal with it?
An error was thrown in afterAll
By default, the tests run in a random order each time. There is a seed value so that you can recreate the order; you can read about how to approach that in this answer.
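As a sketch of what pinning the seed can look like with Karma (commonly used for Angular projects; the exact keys depend on your karma-jasmine version):

// karma.conf.js (excerpt): keep random ordering, but fix the seed
// to the value Jasmine printed on the failing run.
module.exports = function (config) {
  config.set({
    client: {
      jasmine: {
        random: true,
        seed: '4321' // hypothetical seed taken from a failing run
      }
    }
  });
};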
Once you have it executing where it fails each time, you will easily know whether any of the following has actually fixed your issue.
You can also check anywhere you are subscribing to something: every time you subscribe in a test, you need to make sure it gets unsubscribed at the end of the test. To do this you can use .pipe(take(1)) or capture the subscription object and call unsubscribe on it:
const sub = someService.callObservable().subscribe();
// verify what you need to
sub.unsubscribe();
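Alternatively, the take(1) approach mentioned above lets the subscription complete on its own (a sketch, assuming RxJS and the same hypothetical someService):

import { take } from 'rxjs/operators';

someService.callObservable()
  .pipe(take(1)) // completes after the first emission, so no manual unsubscribe
  .subscribe(value => {
    // verify what you need to
  });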
A third concept to look at: any variables you have defined above the beforeEach should be set to a new value inside the beforeEach. Otherwise you will have the same objects reused between tests, and that can lead to issues.
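A sketch of that pattern (the variable and suite names are hypothetical):

describe("SomeComponent", function () {
  // Declared here so every spec can see it...
  let queue;

  beforeEach(function () {
    // ...but re-assigned here so each spec starts with a fresh object
    // instead of inheriting state mutated by an earlier test.
    queue = [];
  });

  it("starts empty", function () {
    expect(queue.length).toBe(0);
  });
});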
So I have a new assignment at university with lots of people collaborating, and we want to use continuous integration (we're thinking of CircleCI) along with a TDD approach.
My biggest question is how to use TDD correctly. I might have the wrong idea, but from what I understand you write all your tests first and watch them fail, because you don't have any code yet. But how can I write all my tests if I don't even know yet which units I will have or need?
In this case, since we're using CircleCI, and assuming it won't let me merge code that doesn't pass the tests, how can this work? There will be tests written but no code for those tests yet.
Or am I wrong, and you write the tests as you go along with the development of the features?
This is a subject that I am really having a hard time grasping, but I would really love to understand it properly, as I believe it will really help in the future.
My biggest question is how to use TDD correctly. I might have the wrong idea, but from what I understand you write all your tests first and watch them fail, because you don't have any code yet. But how can I write all my tests if I don't even know yet which units I will have or need?
Not quite the right idea.
You might start by thinking about the problem, and creating a checklist of tests that you expect to implement before you are done.
But the actual implementation cycle is incremental. We work on one test at a time, starting from the first. We make that test pass, and clean up all of the code, before we introduce a second test.
The idea here being that we'll be learning as we go -- we may think of some more tests, which get added to the checklist, or we may decide the tests we thought would be important aren't after all, so they get crossed off the checklist.
At any given point in time, we expect that either (a) all of the implemented tests are passing, or (b) exactly one implemented test is failing, and it is the one we are currently working on. Any time we discover some other condition holds, then we back up, reverting to some previously well understood state, and then proceed forwards again.
We don't normally push/publish/share code when it has broken tests. Instead, the test and a working implementation are shared together. We don't share the broken intermediate stages, or known mistakes; instead, we share progress.
A review of the slides in the Bowling Game Kata may help to clarify what the rhythm of the work looks like.
It is completely normal to feel like the first test is hard -- you are writing a test against code that doesn't exist yet. We tend to employ imagination here: suppose the production code you need already exists; how would you invoke it? What data would you pass to it? What data would you get back? You write the test as though the perfect interface for what you want already exists. Then you create production code that matches that interface; then you give that production code the correct behavior; then you give the production code a design that will make the code easy to change later.
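As a sketch of what that first test can look like, following the Bowling Game Kata mentioned above (BowlingGame, roll(), and score() don't exist yet at this point; the test invents them):

describe("BowlingGame", function () {
  it("scores a gutter game as 0", function () {
    // Written against the interface we wish existed.
    const game = new BowlingGame();
    for (let i = 0; i < 20; i++) {
      game.roll(0); // twenty rolls, all misses
    }
    expect(game.score()).toBe(0);
  });
});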
And when you are happy with all of that, you introduce the second test, which usually looks like the first test with slightly different data, and a different expected result. So the second test fails, and then you go to the easy-to-change code you wrote before, and adapt it so that the second test also passes. And then you again clean up the design so that the code is easily changed.
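Continuing the kata sketch, the second test changes only the data and the expected result:

it("scores a game of all ones as 20", function () {
  const game = new BowlingGame();
  for (let i = 0; i < 20; i++) {
    game.roll(1); // twenty rolls, one pin each
  }
  expect(game.score()).toBe(20);
});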
And so it goes, until you reach the end of your checklist.
I am new to Jasmine testing and looking for a "best practice" for unit testing a stateful AngularJS service. Most tutorials I could find focus on test cases that run atomic calls to stateless services.
This matches the Jasmine syntax nicely:
it("should do something", function(){ expect(target.doSomething()).toBe... })
However, I found no obvious way to extend this pattern to a test case that involves multiple calls to one or several of the service's functions.
Let's imagine a service like this one:
angular.module("someModule").factory("someService", function(){
return {
enqueue: function(item){
// Add item to some queue
}
}
});
For such a service, it makes sense to test that sequential calls to enqueue() process the items in the correct order. This involves writing a test case that calls enqueue() multiple times and checks the end result (which obviously cannot be achieved with a service as simple as the one above, but this is not the point...)
What doesn't work:
describe("Some service", function(){
// Initialization omitted for simplicity
it("should accept the first call", function() {
expect(someService.enqueue(one)).toBe... // whatever is correct
});
it("should accept the second call", function() {
expect(someService.enqueue(two)).toBe... // whatever is correct
});
it("should process items in the correct order", function() {
// whatever can be used to test this
});
});
The code above (which actually defines not one but three test cases) fails randomly as the three test cases are executed... just as randomly.
A poster in this thread suggested that splitting the code into several describe blocks would cause them to be executed in the given order, but again this seems to differ from one Jasmine version to another (according to other posters in the same thread). Moreover, executing suites and tests in a random order seems to be the intended way; even if it were possible, by setup, to override this behaviour, that would probably NOT be the right way to go.
Thus it seems that the only correct way to test a multi-call scenario is to make it ONE test case, like this:
describe(("Some service", function(){
// Initialization omitted for simplicity
it("should work in my complex scenario", function(){
expect(someService.enqueue(one)).toBe... // whatever is correct
expect(someService.enqueue(two)).toBe... // whatever is correct
expect(/* whatever is necessary to ensure the order is correct */);
});
});
While technically this seems the logical way to go (after all, a complex scenario is one test case, not three), the Jasmine "description + code" pattern is disturbed in this implementation because:
There is no way to associate a message to each "substep" that can fail within the test case;
The description for the single "it" is inevitably bulky, like in the example above, if it is to really say something useful about a complex scenario.
This makes me wonder whether this is the only correct solution (or is it?) to this kind of testing need. Again, I am especially interested in "doing it the right way" rather than using some kind of hack that would make it work... where it should not.
Sorry, no code for this... I'm not sure it needs any; I think you just need to adjust your expectations of your tests.
As a general rule of testing, you don't really care how external dependencies handle your service; you can't control that. You want to test what you think the expected results of your service are going to be.
For your example, you'll just want to invoke the dependencies for your service, call the function, and test what the expected results are from calling the enqueue function. If it returns a promise, check success and error. If it calls an API, check that, and so on.
If you want to see how the external dependencies use your service, you'll test that in those dependencies' tests.
For example, say you have a controller that invokes enqueue. In that controller's test you'll have to inject your provider (the service) and handle the expectations there.
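On the question's concern that a single it() leaves no way to label each substep: newer Jasmine versions (3.3+) provide withContext() for exactly that, which keeps the one-scenario-one-test shape readable (dequeue() and the return values here are hypothetical):

describe("Some service", function(){
  // Initialization omitted for simplicity
  it("should process items in FIFO order", function(){
    expect(someService.enqueue(one)).withContext("first call accepted").toBe(true);
    expect(someService.enqueue(two)).withContext("second call accepted").toBe(true);
    expect(someService.dequeue()).withContext("insertion order preserved").toBe(one);
  });
});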
We're using Jasmine for our testing environment. When implementing a test, a common mistake seems to be to first set up all the preconditions of the test and then forget the actual call to expect. Unfortunately, such a test will always succeed; you don't see that the test is actually faulty.
In my opinion, a test that does not expect anything should always fail. Is there a way to tell Jasmine to adopt this behaviour? If not, is there another way to make sure all my tests actually expect something?
There is an ESLint plugin, eslint-plugin-jasmine, that can ensure tests have at least one expect() call, and that those calls have a matcher.
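A sketch of enabling the relevant rules (rule names as documented by eslint-plugin-jasmine; check the version you have installed):

// .eslintrc.js (excerpt)
module.exports = {
  plugins: ["jasmine"],
  env: { jasmine: true },
  rules: {
    "jasmine/missing-expect": "error", // specs must contain at least one expect()
    "jasmine/expect-matcher": "error"  // every expect() must end in a matcher
  }
};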
I am trying to write test-driven JavaScript. Testing each function, I know, is crucial. But I have come to a stumbling block: the plugin I am writing needs to have some private functions, and I cannot peek into how they behave. What would I need to do to keep my code well tested without changing its structure too much? (I am okay with exposing some API, though within limits.)
I am using sinon, QUnit, and Pavlov.
If you are doing test-driven development (as suggested by the tags), each line of production code is first justified by a failing test case.
In other words, the existence of each and every line of your production code is implicitly tested, because without it some test must have failed. That being said, you can safely assume that a private function/lambda/closure is already tested, by the definition of TDD.
If you have a private function and you are wondering how to test it, it means you weren't doing TDD in the first place - and now you have a problem.
To sum up - never write production code before the test. If you follow this rule, every line of code is tested, no matter how deep it is.
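As an illustration of how a private function ends up covered anyway (all names here are hypothetical, not from the question):

// plugin.js - a module with a private helper
var plugin = (function () {
  // Private: not exported, so not directly testable.
  function normalize(value) {
    return String(value).trim().toLowerCase();
  }

  return {
    // Public: the only way in; tests drive normalize() through here.
    register: function (name) {
      return "plugin:" + normalize(name);
    }
  };
})();

A test written first, expecting plugin.register("  Foo  ") to return "plugin:foo", forces normalize() into existence; the private function never needs a test of its own.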