I'm using jest for testing, and in a few test scenarios I get the jest message:
Jest did not exit one second after the test run has completed.
While following Jest's recommendation to run with --detectOpenHandles, I ended up with a hanging test process that never exits, so I tried other online suggestions and added the --forceExit option as well. Now the test run ends and everything is ok.
It's worth mentioning that all the tests run properly and pass, with or without the --detectOpenHandles --forceExit options.
I wonder: is that considered best practice in such cases, or is it just serving me as "first aid"?
What are the side effects of doing so?
Cheers,
From the documentation, the detectOpenHandles option is for:
Attempt to collect and print open handles preventing Jest from exiting cleanly. Use this in cases where you need to use --forceExit in order for Jest to exit to potentially track down the reason. This implies --runInBand, making tests run serially. Implemented using async_hooks, so it only works in Node 8 and newer. This option has a significant performance penalty and should only be used for debugging.
The forceExit option should never be used as a best practice; the only time you have to use it is when:
An async function did not finish
A promise function did not finish
A Websocket connection is still open
A Database connection is still open
Anything with connect/disconnect methods did not disconnect before the end of the test
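In most of these cases the real fix is to close whatever is still open in an afterAll hook rather than forcing the exit. A minimal sketch, assuming a hypothetical Express app and a MongoDB connection opened by the test file (all names and env vars here are placeholders):

const { MongoClient } = require('mongodb');
const app = require('../app'); // hypothetical Express app

let client;
let server;

beforeAll(async () => {
  client = await MongoClient.connect(process.env.MONGO_URL); // assumed env var
  server = app.listen(0);
});

afterAll(async () => {
  // Closing every open handle lets Jest exit cleanly without --forceExit.
  await client.close();
  await new Promise((resolve) => server.close(resolve));
});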
Related
For my end-to-end testing, I am using Puppeteer with Jest. My codebase is large and I have a lot of tests at the moment. There is one 'preparatory' test suite which checks if there actually is sufficient data on our page for the rest of the tests to proceed. I would like to force this test to run first and then terminate the Jest process if it fails, since there is no need for other tests to run after that.
The --runInBand flag not only has a hard performance hit due to the large number of tests, but also requires manual termination of the Jest process after the failure of the first test, which I have not been able to do.
What is the best way to achieve the above? Could I please get a minimal example of the solution? Thanks!
One of the places that run before all tests is globalSetup. It runs in the parent process and doesn't get the Jest environment, so the assertions need to be set up manually:
import expect from 'expect';

export default async () => {
  // assertions go here; throwing makes Jest abort before any test file runs
};
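A minimal sketch of wiring this up, assuming the module is saved as global-setup.js and that a hypothetical helper fetches the page data the later suites depend on:

// jest.config.js (the path to the setup module is an assumption)
module.exports = {
  globalSetup: './global-setup.js',
};

// global-setup.js: runs once in the parent process before any test file
import expect from 'expect';
import { fetchPreparatoryData } from './helpers/preparatory'; // hypothetical helper

export default async () => {
  const data = await fetchPreparatoryData();
  // A failing assertion throws here, so Jest stops before running the other suites.
  expect(data.items.length).toBeGreaterThan(0);
};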
Currently when a single test in it() block fails Cypress halts completely.
I want Cypress to continue running subsequent tests regardless if a previous test failed or not (but I still want to mark the failed tests so I know which one failed).
I tried to intercept the fail event in beforeEach:
beforeEach(() => {
  Cypress.on('fail', (error, runnable) => {
    cy.log('this single test failed, but continue other tests');
    // don't stop!
    // throw error; // marks test as failed but also makes Cypress stop
  });
});
But it appears I cannot use any cy commands inside this handler, because when I do it returns an error due to Cypress's weird internal promise logic:
CypressError: Cypress detected that you returned a promise from a
command while also invoking one or more cy commands in that promise.
The command that returned the promise was:
cy.wait()
The cy command you invoked inside the promise was:
cy.log()
Because Cypress commands are already promise-like, you don't need to
wrap them or return your own promise.
Cypress will resolve your command with whatever the final Cypress
command yields.
The reason this is an error instead of a warning is because Cypress
internally queues commands serially whereas Promises execute as soon
as they are invoked. Attempting to reconcile this would prevent
Cypress from ever resolving.
https://on.cypress.io/returning-promise-and-commands-in-another-command
If I leave the Cypress.on('fail') handler empty, all tests are marked as passed even if they fail.
If I uncomment throw error, Cypress halts completely on any failed test.
My way to ensure subsequent tests run, and that the failed test is marked as a failure, is to put every it() case in a different file; if they need grouping, I group them in a separate subfolder.
This has improved the readability of reports and the Cypress run time, since before that I sometimes had problems with Cypress not clearing its state between tests, and we had memory leaks.
If you throw an error in your test it will halt the script in the same way it does when there is an issue with the code. This is functioning as expected. You may need to revisit your test logic and consider adding some stubs/spies that wouldn't set the exit code to 1.
This feature is currently missing (see this issue: https://github.com/cypress-io/cypress/issues/518). If you read through it you will find some code snippets that mention throwing an error to stop the test runner; you are doing this with the opposite intent.
If you must throw the error and NOT have the test runner bail, you need to catch it.
Is there a reason why you don't want to place tests in different it() blocks?
If there is not, then definitely do it. It makes reports much more readable and ensures a clean state before each test.
If you need to persist the application state between it() blocks, consider using cy.session() (if your Cypress version is new enough); see the sketch after the links below. Otherwise, there are some methods for getting, saving, and later using cookies. There are different options, so if you need further help, please describe your issue in more detail.
Some useful links:
cy.session()
cy.getCookie()
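A minimal sketch of caching a login with cy.session(), assuming a hypothetical login form and data-cy selectors (all names here are placeholders):

beforeEach(() => {
  // cy.session caches and restores cookies/localStorage between it() blocks
  cy.session('user-session', () => {
    cy.visit('/login');                        // hypothetical login page
    cy.get('[data-cy=username]').type('user'); // hypothetical selectors and credentials
    cy.get('[data-cy=password]').type('pass');
    cy.get('[data-cy=submit]').click();
  });
});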
I ran into an issue while testing a fairly large refactor (in this case moving an old service from Node.js 0.12 to 10.x). We use grunt, so I got the following results out of grunt nodeunit:all:
...
verify-api.routes.test.js
test setValues (pass)
Fatal error: Cannot read property 'setUp' of undefined
Some googling leads to a couple of threads - this one is a good synopsis - that correctly show this error occurs when test.done is called multiple times.
Great! No problem. Armed with that you now dig into verify-api.routes.test.js where you see/assume that the problem is located based upon the output. Only - you're wrong. It turns out that the error (in my case) is located two test suites before verify-api.routes.test.js amongst the full suite of tests run. To be fair to nodeunit this is partly grunt's fault as the output is misleading us into identifying verify-api.routes.test.js... but as shown at the bottom the other ways simply make it more clear that nodeunit doesn't know where the problem lies - which is only marginally better.
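For reference, the double call usually looks something like the sketch below (all names are made up): test.done() is reachable from both the callback's error branch and the code after it, so on an error it fires twice.

// Hypothetical nodeunit test illustrating the double test.done() pattern
exports['test setValues'] = function (test) {
  service.setValues({ foo: 'bar' }, function (err) {
    if (err) {
      test.ok(false, 'setValues failed: ' + err.message);
      test.done();  // first call on the error path (a missing return)...
    }
    test.ok(true);
    test.done();    // ...and a second call falls through to here
  });
};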
I've found that I run into a problem like this maybe once in a while - but when it happens it's painful... Situations like this are particularly painful because they generally manifest only occasionally - e.g. at release time or after a seemingly benign merge.
Is there a fast trick out there that people are using to find these problems or make their code more resilient to these types of issues?
As mentioned, some nodeunit runners provide different results... more or less misleading depending upon the context:
I got the following output when running nodeunit directly using: nodeunit tests/**/*.test.js
OK: 162 assertions (2720ms)
FAILURES: Undone tests (or their setups/teardowns):
- test setValues
And this is through IntelliJ IDEA, which nicely gives us a bit more info:
./node_modules/nodeunit/lib/core.js:285
if (group.setUp) {
^
TypeError: Cannot read property 'setUp' of undefined
at wrapGroup (./node_modules/nodeunit/lib/core.js:285:15)
at Object.exports.runSuite (./node_modules/nodeunit/lib/core.js:93:13)
So - I'm looking for best practices that help for this class of issues. Out of self-defense and for those that follow I'll mention a few things that I think are important and then spend some time on the tactics that helped me today...
1. Continuous Integration
Running tests as frequently as possible shows you more precisely which change caused the issue. When you have a lot of tests or some long-running tests you may find you cannot run them all (as described above), but refactoring the important ones to make them faster helps. Running tests by area can also help.
2. Peer Review / Pair Coding
This is often good for catching issues as they're happening, or shortly thereafter, saving debug and maintenance time down the line in exchange for a little more up-front time.
3. Use of async
If you're doing async programming you really should look into this library to keep your code just a bit cleaner. Async also has almost magical components that can handle async dependency management, filtering and more. If you're developing node code, get async (https://github.com/caolan/async) now; see the sketch below.
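A small sketch of the style it enables (the task names are made up), using async.series to run dependent setup steps in order with a single error callback:

var async = require('async');

// Run dependent async steps in order; any error short-circuits to the final callback.
async.series([
  function (callback) { connectToDb(callback); },   // hypothetical helpers
  function (callback) { loadFixtures(callback); },
  function (callback) { warmCaches(callback); }
], function (err) {
  if (err) { return console.error('setup failed:', err); }
  console.log('setup complete');
});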
Last but not least...
4. Run tests in isolation using find
The thing that helped me most today was running tests in isolation using 'find'. Prior to this we might split tests into groups to try to narrow the search space - binary search style - until we got something that worked.
I created a rule in the package scripts to do this for me, with some echoes to make it clearer which outputs belong to which tests:
"find-run-all-tests": "time find . -not -path \"./node_modules/*\" -type f -name \"*.test.js\" -exec echo \\n----------- Testing : {} --------------- \\; -exec node_modules/nodeunit/bin/nodeunit {} \\; -exec echo ----------- Finished : {} --------------- \\; ",
This of course allows us to run npm run find-run-all-tests from the project. This has the benefits that a) it runs the version of nodeunit dictated by the project, b) it shows us how much time was spent running the whole suite, c) it creates output that clearly implicates which suite was the problem, and d) it runs each test in complete isolation, restarting node each time (big performance penalty here):
tokenoftrust-routes.test.js
✔ test login with basic privileges works.
✔ non-privileged access of privileged page. - when user is not logged in they should be directed to log-in
✖ non-privileged access of privileged page. - when user is logged in they should get an error page
FAILURES: Undone tests (or their setups/teardowns):
- non-privileged access of privileged page. - when a non-test user is logged in they should STILL be able to see Developer Home
To fix this, make sure all tests call test.done()
----------- Testing : ./website/tests/apiKeysInvite-routes.test.js ---------------
----------- Testing : ./tests/services/requestService.test.js ---------------
requestService.test.js
✔ request service - expire request works.
✔ request service basic CRUD operations on objects work.
✔ request service basic CRUD operations on simple types.
OK: 15 assertions (834ms)
----------- Finished : ./tests/services/requestService.test.js ---------------
Again - I don't see us running this all the time because of the performance cost. In our case it took tests 5x longer to run, BUT at the tiny cost of 4 extra minutes it helped us isolate a series of problems across a large number of test suites and skip some of the counterproductive sleuthing work we found ourselves doing.
This was an odd case where we were forced down this path after some very extensive changes, but I want to know if others have experienced this pain and, if so, what you're doing to ease the problem. Please do share if I'm missing something obvious or if there's a hack you're using that's saving you gobs of time.
I see this becoming more frequent and expensive as we have more tests so we need to get better and faster.
So, long story short: I'm developing a REST API that takes a movie title in a POST request to the /movies route, fetches info about that movie from an external API, and saves that object to the database. On POST /comments you add a comment to a different collection, but every comment has a 'movie_id' property linking it to its associated movie.
This is my first bigger project, so I'm trying to write integration tests.
Everything is great, at least in my opinion, except 3 weird test cases that fail out of nowhere. Tests can pass 10 times in a row and then suddenly that weird 'jest' timer shows up and 3 cases fail.
I'm using the native MongoDB driver, Express, and Jest with Supertest for testing, dropping the test database in beforeAll and afterEach. I have no idea what the reason is.
Timer thingy:
And after the timer this shows up - the failed tests:
Full source code is here GITHUB
Other failed cases:
Any ideas, tips?
I was in the same "jest parallel tests" hell and I found a solution. Maybe not the best, but now Jest runs tests in "queue mode", so when I delete data in beforeAll my next group of tests is ready to go with "fresh" newly inserted data.
--runInBand
Alias: -i. Run all tests serially in the current process, rather than creating a worker pool of child processes that run tests. This can be useful for debugging.
jest source
So in my package.json I have:
"scripts": {
"test": "set NODE_ENV=test&& jest ./tests --runInBand --detectOpenHandles --forceExit",
"server": "set NODE_ENV=development&& nodemon app.js"
}
The code seems to return one entry, while the test expects zero. This looks very much like an issue with test independence: Your tests seem to depend on each other (through the database).
I would guess that one test creates the movie and then clears it again. When everything works fine, the second test does not find the movie. But, in some unfortunate cases (bad timing, different execution order, ...), the database is not cleared fast enough and the second test finds the movie.
So, you should work hard on making your tests independent. This is not easy with integrated tests, especially when they involve a real database.
Maybe you can create smaller tests (unit tests, micro tests, ...) to achieve independence. If this is not possible, the test could check its precondition (database empty or whatever) and wait until it is fulfilled (or until a timeout happens).
Dropping the database in BeforeAll and AfterEach might not be enough, because Jest even runs your tests in parallel: Are tests inside one file run in parallel in Jest?
Also, dropping might not be a completely synchronous and atomic operation, especially when there is some caching in the DB driver. But I don't know MongoDB and its Node.js integration well enough to judge that.
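One way to make the precondition explicit is sketched below with the native driver; the db handle and collection name are assumptions. Awaiting the cleanup and then verifying the collection is empty keeps a slow drop from leaking into the next test:

// Hypothetical cleanup that both awaits the delete and verifies the precondition.
beforeEach(async () => {
  await db.collection('movies').deleteMany({});            // assumed collection name
  const remaining = await db.collection('movies').countDocuments();
  if (remaining !== 0) {
    throw new Error('precondition failed: movies collection is not empty');
  }
});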
I have a use case where I'm testing something on our network, and sometimes the network takes a little longer than usual and the test ends up checking too early. Ideally, I would be able to set the test to retry with this.retries(1). When I do that, it does indeed retry and it works; however, it breaks my logging. I'm running a lot of tests, and if I don't use the retry function the logging for each suite gets split appropriately. But if a test gets retried, it stops splitting up the logs and they all get located under one test suite. I have no idea why and haven't been able to find any similar reports. Any help would be appreciated.
Instead of retrying, you can also try timeouts, because they are made for this.
This may resolve your issue, as a longer timeout simply makes Mocha wait more time before marking the test as failed, so it doesn't change the logs.
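A minimal sketch of raising the timeout for a single slow network test (the value and the helper are made up); note that this.timeout() needs a regular function rather than an arrow function so that this refers to Mocha's test context:

it('responds over the slow network path', function () {
  this.timeout(10000);            // allow up to 10s for this test instead of retrying it
  return checkNetworkEndpoint();  // hypothetical helper returning a promise
});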