I have a few functional tests in Jasmine that will periodically fail due to something in the DOM not having finished composing. I would like to be able to have these tests retry a couple of times rather than failing the test suite.
I'm looking for something similar to the way that Mocha behaves with https://mochajs.org/#retry-tests or, even better, the ability to specify a wait and then retry.
Per Jasmine's owner:
this is functionality we'd like to keep out of jasmine itself.
source: https://github.com/jasmine/jasmine/issues/960
protractor-flake has been recommended to solve this, assuming you are using Protractor.
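If you don't want to depend on protractor-flake, a small helper can approximate retry-with-wait at the spec level. This is a sketch under the assumption that your flaky steps are async; `withRetry` is a hypothetical name, not a Jasmine API:

```javascript
// Hypothetical helper (not part of Jasmine): run an async function up to
// `retries` times, waiting `waitMs` between failed attempts.
async function withRetry(fn, retries = 3, waitMs = 500) {
  let lastError;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        await new Promise((resolve) => setTimeout(resolve, waitMs));
      }
    }
  }
  throw lastError; // all attempts failed; let the spec fail normally
}
```

Inside a spec you would then wrap only the DOM-dependent assertions, e.g. `await withRetry(() => checkDom(), 3, 500)`, so a slow-composing DOM gets a few chances before the suite fails.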
For my end-to-end testing, I am using Puppeteer with Jest. My codebase is large and I have a lot of tests at the moment. There is one 'preparatory' test suite which checks if there actually is sufficient data on our page for the rest of the tests to proceed. I would like to force this test to run first and then terminate the Jest process if it fails, since there is no need for other tests to run after that.
The --runInBand flag not only incurs a significant performance hit due to the large number of tests, but also requires manually terminating the Jest process after the failure of the first test, which I have not been able to do.
What is the best way to achieve the above? Could I please get a minimal example of the solution? Thanks!
One of the places that precedes all tests is globalSetup. It runs in the parent process and doesn't have the Jest environment set up, so the test needs to be set up manually:
import expect from 'expect';

export default async () => {
  // globalSetup runs once, before any test file is loaded.
  // Jest's globals aren't available here, hence the manual expect import.
  // assertions go here
};
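The setup module then has to be registered in the Jest configuration; the file path below is an assumption about where you keep it:

```javascript
// jest.config.js — point Jest at the global setup module (path is illustrative)
module.exports = {
  globalSetup: './test/global-setup.js',
};
```

If the exported function throws, Jest aborts before running any test files, which is exactly the "terminate everything if the preparatory check fails" behaviour you want.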
I'm using jest for testing, and in a few test scenarios I get the jest message:
Jest did not exit one second after the test run has completed.
Following Jest's recommendation to run with --detectOpenHandles left me with a hanging test process that never ends, so I followed other online suggestions and added the --forceExit option as well. Now the test run ends and everything is ok.
It's worth mentioning that all the tests run properly and pass, with or without the --detectOpenHandles --forceExit options.
I wonder: is this considered best practice in such cases, or is it just serving me as "first aid"?
What are the side effects of doing so?
Cheers,
From the documentation, the detectOpenHandles option is for:
Attempt to collect and print open handles preventing Jest from exiting cleanly. Use this in cases where you need to use --forceExit in order for Jest to exit to potentially track down the reason. This implies --runInBand, making tests run serially. Implemented using async_hooks, so it only works in Node 8 and newer. This option has a significant performance penalty and should only be used for debugging.
The forceExit option should never be used as a best practice; the only time you have to use it is when something didn't finish or close before the end of the test, for example:
An async function did not finish
A promise never resolved
A WebSocket connection is still open
A database connection is still open
Anything with connect/disconnect methods did not disconnect before the end of the test
We're using jasmine for our testing environment. When implementing a test, a common mistake seems to be to first set up all the preconditions of the test and then to forget the actual call to expect. Unfortunately, such a test will always succeed; you don't see the test is actually faulty.
In my opinion, a test that does not expect anything should always fail. Is there a way to tell jasmine to adopt this behaviour? If not, is there another way to make sure all my tests actually expect something?
There is an ESLint plugin eslint-plugin-jasmine that can ensure that tests have at least one expect() call, and that those calls have a matcher.
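An .eslintrc wiring for that might look like the following; the two rule names are taken from the plugin's rule list, but check them against your installed version:

```javascript
// .eslintrc.js — flag specs without expectations (rules from eslint-plugin-jasmine)
module.exports = {
  env: { jasmine: true },
  plugins: ['jasmine'],
  rules: {
    'jasmine/missing-expect': 'error', // each spec must call expect() at least once
    'jasmine/expect-matcher': 'error', // each expect() must be followed by a matcher
  },
};
```

Being a lint rule, this catches the forgotten expect at review/CI time rather than changing how Jasmine reports the spec at runtime.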
A lot of the functions exposed by Protractor return promises.
Do I need to structure my Jasmine tests with Protractor using things like async tests (with the done parameter) and .then, or does Protractor provide some magic to do this for me?
WebDriverJS takes care of that with the control-flow. Protractor adds the modification of Jasmine's expect to keep the thens at bay. It's best explained here.
Yes, there is some magic that protractor performs in order to wait for each promise to resolve.
The best description of the process is in the protractor documentation: How It Works.
This means we don't have to structure the tests as async using a done. We can simply assert using expect (in Jasmine) and everything should work.
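Conceptually, the patched expect unwraps the promise before handing the value to the matcher. The sketch below is a plain-JavaScript illustration of that idea only, not Protractor's actual implementation (which relies on the WebDriverJS control flow):

```javascript
// Conceptual sketch: wait for the promise, then run the matcher against
// the resolved value — the effect Protractor's patched expect gives you.
async function resolvedExpect(promiseOrValue) {
  const actual = await Promise.resolve(promiseOrValue);
  return {
    toEqual(expected) {
      if (actual !== expected) {
        throw new Error(`Expected ${actual} to equal ${expected}`);
      }
    },
  };
}
```

This is why, in a Protractor spec, expect(el.getText()).toEqual('hi') works without an explicit .then on the element's promise.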
Let's say I have the following method:
function sendUserDataToServer(userData) {
// Do some configuration stuff
$.ajax('SomeEndpoint', userData);
}
When writing a unit test for this function, I could go one of two ways:
Create a spy around $.ajax and check that it was called with the expected parameters. An actual XHR request will not be sent.
Intercept the ajax request with a library like SinonJs and check the XHR object to make sure it was configured correctly.
Why I might go with option 1: It separates the functionality of $.ajax from my code. If $.ajax breaks due to a bug in jQuery, it won't create a false-negative in my unit test results.
Why I might go with option 2: If I decide I want to use another library besides jQuery to send XHRs, I won't have to change my unit tests to check for a different method. However, if there's a bug in that library, my unit tests will fail and I won't necessarily know it's the library and not my actual code right away.
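To make option 1 concrete without assuming a framework, here is a hand-rolled spy showing exactly what spyOn($, 'ajax') would verify; the helper names are illustrative:

```javascript
// Minimal hand-rolled spy: records every call's arguments.
function createSpy() {
  const spy = (...args) => { spy.calls.push(args); };
  spy.calls = [];
  return spy;
}

// Replace $.ajax with the spy, run the code under test, then restore it.
function runWithAjaxSpy($, fn) {
  const original = $.ajax;
  const spy = createSpy();
  $.ajax = spy;
  try {
    fn();
    return spy.calls; // inspect [endpoint, userData] per call
  } finally {
    $.ajax = original; // always restore the real implementation
  }
}
```

With Jasmine itself you would simply write spyOn($, 'ajax') and expect($.ajax).toHaveBeenCalledWith('SomeEndpoint', userData); in both forms no real XHR is sent.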
Which approach is correct?
Your first option is correct. The purpose of your unit test is twofold:
to verify your code contract
to verify that it communicates with other things it uses.
Your question is about the interaction with $.ajax, so the purpose of this test is to confirm it can talk correctly to its collaborators.
If you go with option 1, this is exactly what you are doing. If you go with option 2, you're testing the third party library and not that you're executing the communication protocol correctly.
You should write both tests, though. The second test is probably fast enough to live with your unit tests, even though some people might call it an integration test. It tells you that you're using the library correctly to create the object you intended to create. Having both lets you differentiate between a bug in your code and one in the library, which will come in handy if a library bug causes something else in your system to fail.
My opinion: option 1 is more correct within unit tests.
A unit test should test the unit of code only. In your case this would be the JavaScript under your control (e.g. logic within the function, within a class or namespace depending on the context of your function).
Your unit tests should mock the jQuery call and trust that it works correctly. Doing real AJAX calls is fine, but these would be part of integration tests.
See this answer regarding what is and what isn't a unit test.
As an aside, a third way would be to do the AJAX calls via a custom proxy class which can be mocked in tests but also allows you to switch the library you use.
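Such a proxy can be tiny. A sketch with illustrative names, where in production code the transport would be something like $.ajax:

```javascript
// Illustrative proxy: the application depends on this wrapper, not on
// jQuery directly, so the transport can be swapped or stubbed in tests.
function createHttpClient(transport) {
  return {
    post(endpoint, data) {
      return transport(endpoint, data); // in production: transport = $.ajax
    },
  };
}

function sendUserDataToServer(httpClient, userData) {
  // Do some configuration stuff
  return httpClient.post('SomeEndpoint', userData);
}
```

Tests pass in a stub transport and assert on what it received; switching from jQuery to fetch (or anything else) then touches only the one line inside the proxy.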
I would use option 1 but go with the one that makes the tests easiest to understand.
You are depending on jQuery in your code, so you don't need to worry about it actually working as you point out. I have found that it makes the tests cleaner as you specify your parameters and the response and have your code process it. You can also specify which callback should be used (success or error) easily.
As for changing the library from jQuery. If you change the code your test should fail. You could argue that your behavior is now different. Yes, you are making an Ajax call but with a different dependency. And your test should be updated to reflect this change.
Both approaches are really correct. If you are creating your own wrapper for sending Ajax calls, it might make sense to intercept the XHR request or to create the request manually and send it yourself. Choose the option that makes your test easiest to understand and maintain.
The most important thing is to test your own code: separate the business logic from the IO code. After that you should have methods without any logic that do the actual IO (exactly as in your example). And now: should you test those methods? It depends; tests are there to make you feel safer.
If you really worry that jQuery can fail and you can't afford even such an unlikely scenario, then test it, but not with a mocking library; do full end-to-end integration tests.
If you're not sure whether you are using jQuery correctly, then write "learning tests": pass parameters to it and check what kind of request it produces. Mocking tests should be enough in this situation.
If you need documentation of the contract between frontend and backend, then you may choose whether to test that the backend meets the contract, the frontend, or both.
But if none of the above applies to you, isn't it a waste of time to test an external, well-established framework? The jQuery developers have surely tested it themselves. Still, you will probably need some kind of integration tests (automatic or manual) to check that your frontend interacts with the backend properly, and those tests will cover the network layer.