How do you deal with a fluctuating unit test case with Jasmine?

I am writing unit test cases using Jasmine for an Angular 13 project. There is a test case which passes sometimes and fails sometimes. I presume this happens because of the order of test execution. Any idea how to deal with it?
An error was thrown in afterAll

By default, the tests run in a random order each time. There is a seed value so that you can recreate a particular order; you can read about how to approach that in this answer.
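If you use Karma (the default for Angular projects), a minimal sketch of pinning the seed in karma.conf.js, assuming the karma-jasmine adapter:

// karma.conf.js - merge into your existing configuration
module.exports = function (config) {
  config.set({
    client: {
      jasmine: {
        random: true, // keep random ordering (the default)
        seed: '4321'  // replace with the seed printed by a failing run
      }
    }
  });
};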
Once you can reproduce the failing order consistently, you will easily know whether any of the following has actually fixed your issue.
You can also check anywhere you are subscribing to something: every time you subscribe in a test, you need to make sure the subscription gets cleaned up at the end of the test. To do this you can add .pipe(take(1)) or capture the subscription object and call unsubscribe on it:
const sub = someService.callObservable().subscribe();
// verify what you need to
sub.unsubscribe();
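The take(1) variant would look something like this (a sketch assuming RxJS; the service and observable are the same placeholders as above):

import { take } from 'rxjs/operators';

// take(1) completes the subscription after the first emission,
// so nothing leaks into the next test
someService.callObservable()
  .pipe(take(1))
  .subscribe(value => {
    // verify what you need to
  });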
A third concept to look at: any variables you declare above the beforeEach should be assigned a new value inside the beforeEach. Otherwise the same objects are reused between tests, and that can lead to issues.
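A minimal sketch of that pattern (the names here are hypothetical):

describe('MyComponent', () => {
  // declared here so every spec can see it...
  let fixtureData;
  beforeEach(() => {
    // ...but re-created here so each spec gets a fresh object
    fixtureData = { items: [], selected: null };
  });
});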

Related

Mocha: bail on one specific describe only

I use mocha/chai to test my GraphQL endpoints for a nodejs/express server, which works fine. However, in the first test, I check whether the .env variables have been set correctly.
If not, all further tests will be affected anyway. So I would like to terminate all further tests when any test in this block fails (preferably finishing the block first, to capture all 'missing' variables).
So can I somehow terminate the entire run when a certain describe block fails?
Note: I found the bail flag/config option, but this terminates the whole run on any error, which is not what I'm looking for.
You can try adding an after hook with a condition in it:
after('Fail for some describe', function () {
  // bail only when the parent describe with the given title has failed
  if (this.test.parent.title.startsWith('Describe title that should fail') && this.test.parent.isFailed()) {
    this.test.parent._bail = true;
  }
});
This could be tricky since the this context may contain several layers of parents, so if you're not sure you can always log the keys of the current this.test object. Also bear in mind that the following describe might have a different this context with an overridden _bail value. This solution works well for tracking specific critical it failures.

What can I do if afterAll is not being executed when a scenario fails?

I have several scenarios to be executed. I introduce some test data in the database (using beforeAll) before executing these scenarios and remove such data after executing the scenarios.
The problem is that if a scenario fails, the code within afterAll is not executed. Therefore, the test data is not removed from the database. Is there any other way to accomplish this?
Thanks in advance.
First, you should mock the database connection; there are many libraries to do so. For instance, if you are using MongoDB, have a look at Mockgoose:
Mockgoose provides a test database by spinning up mongod in the background when the mongoose.connect call is made. By default it uses an in-memory store, which has no persistence.
As for the afterAll hook that never runs (which is the default behavior when a test fails):
I suggest you truncate everything in the beforeAll hook instead, so whenever you start running the tests you have an empty database, even if some data is left over from the last run (which will not be the case if you use Mockgoose or similar).
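A sketch of that idea, assuming Mongoose models (the model names and test data here are hypothetical):

beforeAll(async () => {
  // start from a clean slate even if a previous run failed
  // before its afterAll cleanup could execute
  await Users.deleteMany({});
  await Orders.deleteMany({});
  // then insert the data this suite needs
  await Users.insertMany(testUsers);
});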

Protractor - How to run test cases with failures and present them as passed items?

Is there an option to mark a test case with a known issue/limitation as passed?
Actually, I want the test case to run despite the bugs but to be presented as "passed" in the generated report, either until I fix it or leaving it with the known issue for good.
What we do in such cases is mark these tests as pending, referencing the Jira issue number in the test description:
pending("should do something (ISSUE-442)", function () {
// ...
});
Tests like these would not be failures (and they would not actually be executed) and would not change the exit code, but would be separately reported on the console (we are using jasmine-spec-reporter).
When an issue is resolved, we check whether we have a pending test with that issue number, and if so, we make the test executable again by renaming pending back to it. If the test passes, this usually serves, at least partially and assuming the test actually checks the functionality, as proof that the fix was made and the issue can be resolved.
This is probably not ideal since it involves a "human touch" in keeping track of pending specs (we tried to solve it statically, but failed), but it has proved to work for us.
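For reference, Jasmine's built-in xit does the same job of marking a spec pending, so it can serve as an alternative to the pending helper shown above:

// "x"-prefixed specs are skipped and reported as pending
xit("should do something (ISSUE-442)", function () {
  // ...
});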

When is spying on a method useful during unit testing?

With Jasmine it is possible to spyOn methods, but I'm not clear on when that would actually be useful. My understanding is that unit tests should not be concerned with implementation details, and testing whether a method is called would be an implementation detail.
One place I might think of is spying on scope.$broadcast (Angular) etc., but then again this would be an implementation detail, and I'm not sure unit tests should even bother with how the code works, as long as it gives the expected result.
Obviously there are good reasons to use spyOn, so what would be a good place to use it?
The spyOn you describe is more commonly known in testing as a mock, although to be more precise it allows for two operations:
1) Create a new implementation for a method via createSpy (this is the classical mock)
2) Instrument an existing method via spyOn (this lets you see whether the method was called, with what arguments, what it returned, etc.)
Mocking is probably the most used technique in unit testing. When you are testing a unit of code, you'll often find that it has dependencies on other units of code, and those dependencies have their own dependencies, and so on. If you try to test everything, you'll end up with a module/UI test, which is expensive and difficult to maintain (such tests are still valuable, but you want as few of them as possible).
This is where mocking comes in. Imagine your unit calls a REST service for some data. You don't want to take a dependency on that service in your unit test, so you mock the method that calls the service and provide your own implementation that simply returns some data. Want to check that your unit handles REST errors? Have your mock return an error, and so on.
It can sometimes be useful to know whether your code actually calls another unit of code. Imagine you want to make sure your code correctly calls a logging module: just mock (spyOn) that logging module and assert that it was called X number of times with the proper parameters.
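A sketch of both ideas with Jasmine's helpers (PostList, fetchPosts, and load are hypothetical names, not from the question):

it('handles a REST error and logs it', async function () {
  // mock: replace the REST call with a canned failure
  var rest = jasmine.createSpyObj('RestService', ['fetchPosts']);
  rest.fetchPosts.and.callFake(function () {
    return Promise.reject(new Error('HTTP 500'));
  });
  // spy: instrument the logger so we can assert on the calls
  var logger = jasmine.createSpyObj('Logger', ['error']);
  var list = new PostList(rest, logger); // hypothetical unit under test
  await list.load();
  expect(list.errorMessage).toBe('Could not load posts');
  expect(logger.error).toHaveBeenCalledTimes(1);
});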
You can spy on functions and then assert a couple of things about them: you can check whether a function was called, what parameters it had, whether it returned something, or even how many times it was called! Spies are highly useful when writing tests, so I am going to explain how to use the most common of them here.
// This is our SUT (Subject Under Test)
function Post(rest) {
  this.rest = rest;
  rest.init();
}
Here we have our SUT, a Post constructor. It uses a RestService to fetch its stuff. Our Post delegates all the REST work to the RestService, which is initialized when we create a new Post object. Let's start testing it step by step:
describe('Posts', function() {
  var rest, post;
  beforeEach(function() {
    rest = new RestService();
    post = new Post(rest);
  });
});
Nothing new here. Since we are going to need both instances in every test, we put the initialization in a beforeEach so we get a fresh instance every time.
Upon Post creation, we initialize the RestService. We want to test that. How can we do it?
it('will initialize the rest service upon creation', function() {
  spyOn(rest, 'init');
  post = new Post(rest);
  expect(rest.init).toHaveBeenCalled();
});
We want to make sure that init on rest is called when we create a new Post object. For that we use the Jasmine spyOn function. The first parameter is the object we want to put the spy on, and the second parameter is a string naming the function to spy on. In this case we want to spy on the init function of the rest object. Then we just need to create a new Post object, which will call that init function. The final part is to assert that rest.init has been called. Easy, right? Something important here: when you spy on a function, the real function is never called, so here rest.init doesn't actually run.
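If you do want the real function to run while still recording its calls, Jasmine lets you chain and.callThrough() onto the spy:

it('will initialize the rest service and run the real init', function() {
  // the spy records the call AND delegates to the real implementation
  spyOn(rest, 'init').and.callThrough();
  post = new Post(rest);
  expect(rest.init).toHaveBeenCalled();
});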

Is it ok to rewrite a unit test in JavaScript?

Recently I started unit testing the JavaScript app I'm working on. No matter if I use Jasmine, QUnit, or another framework, I always write a set of tests. Now, I have my source code with, let's say:
function calc()
{
  // some code
  someOtherFunction();
  // more code
}
I also have a test (no matter what framework; with Jasmine spies or sinon.js or something) that confirms that someOtherFunction() is called when calc() is executed. The test passes. Now at some point I refactor the calc function so that the someOtherFunction() call no longer exists, e.g.:
function calc()
{
  // some code
  someVariable++;
  // more code
}
The previous test will fail, yet the function will still work as expected; simply, its code is different.
Now, I'm not sure if I understand correctly how testing is done. It seems obvious that I will have to go back and rewrite the test, but if this happens, is there something wrong with my approach? Is it bad practice? If so, at which point did I go wrong?
The general rule is you don't test implementation details. So given you decided that it was okay to remove the call, the method was an implementation detail and therefore you should not have tested that it was called.
20/20 hindsight is a great thing, isn't it?
In general I wouldn't test that a 'public' method called a 'private' one. Testing delegation should be reserved for when one class calls another.
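A sketch of the behavior-focused alternative (assuming calc() eventually returns a value; 42 is a hypothetical expectation):

it('produces the expected result', function () {
  // assert on what calc() returns, not on which internal
  // helpers it happens to call along the way
  expect(calc()).toEqual(42);
});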
You have written a great unit test.
Unit tests should notice sneaky side-effects when you change the implementation, and the test must fail when the expected values don't show up.
So look at the place in your unit test where the expected value doesn't match and decide what the problem is:
1) The unit test tested things it shouldn't (change the test)
2) The code is broken (add the missing side-effect)
It's fine to rewrite this test. A lot of tests fail to be perfect on the first pass. The most common test smell is tight coupling with implementation details.
Your unit test should verify the behavior of the object, not how it achieved the result. If you're strict about doing this in a TDD style, maybe you should revert the code you changed and refactor the test first. But regardless of which technique you use, it's fine to change the test as long as you're decoupling it from the details of the system under test.
