I'm using mocha for node.js functional testing.
There are several files in my test suite.
How can I run a piece of code just once, before all tests start?
For example, I may have to set up a docker container before all tests start.
Is it possible to do this with mocha?
The before hook runs once for every test file, which doesn't meet my needs.
You can have 'root'-level hooks if you put them outside of any describe block, so you can put the relevant code in any file inside the test folder.
before(function() {
  console.log('before any tests started');
});
See the docs: http://mochajs.org/#root-level-hooks
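For the docker-container example from the question, such a root-level hook file could look like this (a sketch: the image, container name, and port are placeholders, and it assumes the docker CLI is available):
// test/setup.js
const { execSync } = require('child_process');

before(function() {
  this.timeout(60000); // pulling/starting a container can be slow
  // start a throwaway container once, before any test in any file runs
  execSync('docker run -d --name mocha-test-db -p 27017:27017 mongo');
});

after(function() {
  // tear the container down once every test has finished
  execSync('docker rm -f mocha-test-db');
});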
Mocha 8 introduces the concept of root hook plugins. In your case, the relevant ones are beforeAll and afterAll, which will run once before/after all tests, as long as your tests run in serial.
You can write:
exports.mochaHooks = {
  beforeAll(done) {
    // do something before all tests run
    done();
  },
};
And you'll have to add this file using the --require flag.
See docs for more info.
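For example, assuming the snippet above is saved as hooks.js (the filename is up to you), the command would look something like:
mocha --require ./hooks.js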
There is a very clean solution for this: use --file as a parameter in your mocha command. It works like a global hook for your tests.
mocha -R spec --file <path-to-setup-file>
A setup file can look like this:
'use strict';
const mongoHelper = require('./mongoHelper.js');
console.log("[INFO]: Starting tests ...");
// connect to database
mongoHelper.connect()
  .then(function(connection) {
    console.log("[INFO]: Connected to database ...");
    console.log(connection);
  })
  .catch(function(err) {
    console.error("[WARN]: Connection to database failed ...");
    console.log(err);
  });
Unfortunately I have not found a way to use async/await in these setup files, so I think you may have to be content with using "old" promise and callback code.
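If you do want async/await, one alternative (a sketch, assuming mongoHelper.connect() returns a promise as in the snippet above) is to move the connection into a root-level before hook, which also guarantees the connection is established before any test runs:
// setup.js -- passed to mocha via --file or --require
'use strict';
const mongoHelper = require('./mongoHelper.js');

before(async function() {
  console.log("[INFO]: Starting tests ...");
  // mocha waits for the returned promise before running any test
  const connection = await mongoHelper.connect();
  console.log("[INFO]: Connected to database ...");
  console.log(connection);
});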
Related
I am writing extensive tests for a new API via Jest and supertest. Prior to running the tests, I am setting up a test database and populating it with users:
Test command
jest --forceExit --config src/utils/testing/jest.config.js
File jest.config.js
module.exports = {
  rootDir: process.cwd(),
  // Sets up testing database with users
  globalSetup: './src/utils/testing/jest.setup.js',
  // Ensures connection to database for all test suites
  setupTestFrameworkScriptFile: './src/utils/testing/jest.db.js',
}
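For reference, a globalSetup module is just a file that exports an async function; a rough sketch of the kind of seeding described above (the seedUsers helper and the connection string are hypothetical stand-ins):
// src/utils/testing/jest.setup.js -- runs once before all test suites
const mongoose = require('mongoose');
const { seedUsers } = require('./fixtures'); // hypothetical helper

module.exports = async () => {
  // connect to a dedicated test database and populate it with users
  await mongoose.connect('mongodb://localhost/api_test');
  await seedUsers();
  await mongoose.disconnect();
};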
So I am starting with a database of some users to test on. The problem is this:
Some of my tests rely upon the success of other tests. In this application, users can upload images, and group them into packs. So my grouping endpoint suite depends upon the success of my image upload suite, and so on.
I am well aware many people might say this is bad practice, and that tests should not rely upon other tests. That being said, I would really rather keep all my tests via supertest, and not get into dependency injection, etc. I don't want to have to meticulously set up testing conditions (for example, creating a bunch of user images artificially before running the tests), because: (1) this is just duplication of logic, and (2) it increases the possibility of something breaking.
Is there any way to group jest suites? For example, to run suites in order:
jest run creationSuite
jest run modificationSuite
This way, all my "creationSuite" tests could be run simultaneously, and success of all would then trigger the "modificationSuite" to run, etc., in a fail-fast manner.
Alternatively, specifying inside a test suite dependencies on other test suites would be great:
describe('Grouping endpoint', () => {
  // Somehow define dependencies
  this.dependsOn(uploadSuite)
  // ...
})
You can use jest-runner-groups to define and run tests in groups. Once it's installed and added to the Jest configuration, you can tag your tests using docblock notation, like here:
/**
 * Foo tests
 *
 * @group group1/subgroup1
 * @group unit/foo
 */
describe( 'Foo class', () => {
  ...
} );

/**
 * Bar tests
 *
 * @group group1/subgroup2
 * @group unit/bar
 */
describe( 'Bar class', () => {
  ...
} );
Update your jest configuration to specify a new runner:
// jest.config.js
module.exports = {
  ...
  runner: "groups"
};
Then, to run a specific group, you need to use the --group argument:
// Using the Jest executable
jest --group=mygroup
// Or npm
npm test -- --group=mygroup
You can also use multiple --group arguments to run multiple groups:
// Will execute tests in the unit/bar and unit/foo groups
npm test -- --group=unit/bar --group=unit/foo
// Will execute tests in the unit group (including unit/bar and unit/foo groups)
npm test -- --group=unit
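To get the fail-fast ordering from the question, the groups can then be chained in an npm script, so the second command only runs if the first one exits successfully (the group names are illustrative and assume the tests are tagged accordingly):
"scripts": {
  "test:ordered": "jest --group=creationSuite && jest --group=modificationSuite"
}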
I have done it with the --testNamePattern flag. Here is the procedure.
Let’s say that you have two groups of tests:
DevTest for testing in the development environment
ProdTest for testing in the production environment
If you want to test only those features required for the development environment, you have to add DevTest to the test description:
describe('(DevTest): Test in development environment', () => {
  // Your test
})

describe('(ProdTest): Test in production environment', () => {
  // Your test
})

describe('Test everywhere', () => {
  // Your test
})
After that, you can add commands to your package.json file:
"scripts": {
"test": "jest",
"test:prod": "jest --testNamePattern=ProdTest",
"test:dev": "jest --testNamePattern=DevTest",
"test:update": "jest --updateSnapshot"
}
The npm test command will run all of your tests, since it doesn't use the --testNamePattern flag. If you want to run only one group, use npm run test:dev, for example.
Be careful when naming your test groups, though. The pattern is matched as a regex against the test description, and you don't want it to accidentally match other tests.
Jest test suites are executed in parallel across multiple worker processes, and this is one of its main benefits. Test runs complete much faster this way, but the test order isn't preserved by design.
It's possible to disable this behavior with the runInBand option.
It's possible to pick tests and suites by name with the testNamePattern option, or by path with the testPathPattern option.
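For example, something along these lines would run the suites in order and in a single process (a sketch; the path patterns are placeholders for wherever the creation and modification tests live):
jest --runInBand --testPathPattern=creation && jest --runInBand --testPathPattern=modification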
Since one suite depends on another one, they possibly could be combined into a single suite in the order they are expected to run. They can still reside in different files (make sure they aren't matched by Jest), e.g.:
// foobar.test.js
describe(..., () => {
  require('./foo.partial-test.js');
  require('./bar.partial-test.js');
});
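A partial file then contains only plain describe/it blocks; a sketch (assuming the .partial-test.js suffix isn't matched by your Jest testMatch patterns, and with a made-up test body):
// foo.partial-test.js -- only loaded via require() from foobar.test.js
describe('image upload', () => {
  it('uploads an image', async () => {
    // ...real test body goes here...
  });
});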
The problem is this:
Some of my tests rely upon the success of other tests.
This is the real problem here. An approach that relies on a previous test's state is considered flawed in any kind of automated testing.
I don't want to have to meticulously set up testing conditions (for example, creating a bunch of user images artificially before running the tests), because: (1) this is just duplication of logic, and (2) it increases the possibility of something breaking.
There is no need to set up testing conditions (fixtures) artificially. Fixtures can be extracted from the existing environment, even from the results of your current tests if you're sure about their quality.
Redundancy and tautology naturally occur in automated tests, there's nothing wrong with them. Tests can be made DRYer with proper management of fixtures and shared code.
Quite the contrary: errors accumulate. A test that created faulty prerequisites may pass, but another test will fail, creating a debugging conundrum.
I'm trying to learn a test-driven approach with MongoDB. Here is the folder structure.
The user.js file to test, in the src folder:
const mongoose = require('mongoose');
mongoose.Promise = require('bluebird');
const Schema = mongoose.Schema;
const UserSchema = new Schema({
  name: String
});
const User = mongoose.model('user', UserSchema);
module.exports = User;
Content of test_helper.js
const mongoose = require('mongoose');

mongoose.connect('mongodb://localhost/users_test');
mongoose.connection
  .once('open', () => {
    console.log('Connected to Mongo!');
    done();
  })
  .on('error', (error) => {
    console.warn('Warning', error);
  });
create_test.js content
const assert = require('assert');
const User = require('../src/user');
describe('Creating records', () => {
  it('Saves a user', (done) => {
    const user = new User({ name: 'Ankur' });
    user.save()
      .then(() => {
        assert(!user.isNew);
        done();
      });
  });
});
Now when I run npm test, the tests pass.
Connected to Mongo!
Creating records
√ Saves a user (779ms)
But my question is: how does Mocha know to run the test_helper.js file first, every time? (Renaming the file to anything else doesn't change the behavior.)
Also, I'm not using any root-level hooks.
I know Mocha loads files recursively in each directory, starting with the root directory, and since everything here is in one directory, that shouldn't make any difference here.
Can someone please explain how Mocha knows that test_helper.js (or any file with the same content) should run first?
There is no default set order to how Mocha loads the test files.
When Mocha scans a directory to find the files in it, it uses fs.readdirSync. This call is a wrapper around readdir(3), which itself does not guarantee order. Due to an implementation quirk, the output of fs.readdir and fs.readdirSync is sorted on Linux (and probably POSIX systems in general) but not on Windows. Moreover, it is possible that the sorted behavior on Linux could eventually be removed, because the documentation says fs.readdir is just readdir(3) and the latter does not guarantee order. There's a good argument to be made that the sorted behavior observed on Linux is a bug.
Note that there is a --sort option that will sort files after Mocha finds them. But this is off by default.
The behavior you observe is explainable not merely by loading order but by execution order. Here is what happens:
Mocha loads the test files and executes them. So anything that is at the top level of your file executes right away. This means that the code in test_helper.js executes right away. Every call to describe immediately executes its callback. However, calls to it record the test for later execution. Mocha is discovering your tests while doing this but not executing them right away.
Once all files are executed, Mocha starts running the tests. By this time, the code in test_helper.js has already run and your test benefits from the connection it has created.
Major warning: connecting to a database is an asynchronous operation, and currently nothing guarantees that the asynchronous operation in test_helper.js will have completed before the tests start. That it works fine right now is just luck.
If this were me, I'd either put the connection creation in a global asynchronous before hook (a global before hook appearing in any test file is executed before any test whatsoever, even tests that appear in other files), or use --delay and explicitly call run() to start the suite once the connection is guaranteed to be made.
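A sketch of the first option, reusing the connection code from test_helper.js:
// test_helper.js
const mongoose = require('mongoose');

// a root-level (global) before hook: Mocha waits for done() before running any test
before((done) => {
  mongoose.connect('mongodb://localhost/users_test');
  mongoose.connection
    .once('open', () => {
      console.log('Connected to Mongo!');
      done();
    })
    .on('error', (error) => done(error));
});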
It doesn't
Tests should not have a specific order.
All test suites should work standalone, agnostic of other suites. Inside a suite you can use "before" and "beforeEach" (or "after", "afterEach") to create setup and teardown steps.
But if the order of the tests matters, something is broken in the design.
There is a very easy way to load tests sequentially.
Step 1: Set up a test script in package.json, e.g.:
"scripts": {
"test": "mocha ./tests.js"
}
Let us assume that tests.js is a file which defines the order to execute tests.
require("./test/general/test_login.js");
require("./test/Company/addCompany.js");
...
...
So here, test_login will run first, and then the others one by one.
Step 2: Then run tests with:
$ npm test
I have a node module I am working on, and I want to write unit tests for it; however, I am confused about how I would pass arguments (required for the CLI) to Node through a testing suite.
Let's assume (for brevity) the module name is j, so I would call it like:
$ j --file test.js --file test2.js
How do I recreate those --file calls when I am writing my testing suite?
You can use Node's child_process module to run additional command-line processes. The child_process documentation has more info on the syntax; I also recommend checking out the promisified version, child-process-promise.
var spawn = require('child-process-promise').spawn;

spawn('j', ['--file', 'test.js', '--file', 'test2.js'])
  .progress(function(childProcess) {
    // any logic you want to run here while the process is running
  })
  .then(function(result) {
    // command was executed
    // write tests here
  })
  .fail(function(err) {
    // maybe one last test to make sure there was no error
  });
As far as unit-testing suites go, I expect anything you're comfortable with would work (mocha/chai, etc.).
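For instance, with mocha and nothing but Node's built-in modules, a test could look something like this (a sketch: it assumes j is on the PATH, and the assertions are placeholders to be replaced with checks on the real output):
const { execFile } = require('child_process');
const { promisify } = require('util');
const assert = require('assert');

const execFileAsync = promisify(execFile);

describe('j CLI', () => {
  it('handles repeated --file flags', async () => {
    // recreates: j --file test.js --file test2.js
    const { stdout, stderr } = await execFileAsync('j', ['--file', 'test.js', '--file', 'test2.js']);
    assert.strictEqual(stderr, '');   // placeholder assertion
    assert.ok(stdout.length >= 0);    // replace with checks on the real output
  });
});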
I'm completely new to sails, node and js in general so I might be missing something obvious.
I'm using sails 0.10.5 and node 0.10.33.
In the sails.js documentation there's a page about tests http://sailsjs.org/#/documentation/concepts/Testing, but it doesn't tell me how to actually run them.
I've set up the directories according to that documentation, added a test called test/unit/controllers/RoomController.test.js and now I'd like it to run.
There's no 'sails test' command or anything similar. I also didn't find any hints on how to add a task so that tests are always run before 'sails lift'.
UPDATE-2: After struggling a bit with how long it takes to run unit tests this way, I decided to create a module that loads the models and turns them into globals, just as Sails does, but without taking as long. Even when you strip out every hook but the ORM loader, it can easily take a couple of seconds WITHOUT ANY TESTS (depending on the machine), and it gets slower as you add models. So I created a module called waterline-loader that loads just the basics (it's about 10x faster). The module is not stable and needs tests, but you are welcome to use it, modify it to suit your needs, or help me improve it here: https://github.com/Zaggen/waterline-loader
UPDATE-1:
I've added the info related to running your tests with mocha to the docs, under the Running tests section.
Just to expand on what others have said (especially what Alberto Souza said).
You need two steps to make mocha work with Sails the way you want. First, as stated in the Sails.js docs, you need to lift the server before running your tests. To do that, create a file called bootstrap.test.js (it can be called anything you like) in the root of your test folder (test/bootstrap.test.js); mocha will load it first, before your test files.
var Sails = require('sails'),
    sails;

before(function(done) {
  Sails.lift({
    // configuration for testing purposes
  }, function(err, server) {
    sails = server;
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function(done) {
  // here you can clear fixtures, etc.
  sails.lower(done);
});
Now, in your package.json, under the scripts key, add this line (ignore the comments):
// package.json ....
"scripts": {
  // Some config
  "test": "mocha test/bootstrap.test.js test/**/*.test.js"
},
// More config
This will load the bootstrap.test.js file, lift your Sails server, and then run all your tests that follow the 'testname.test.js' format; you can change it to '.spec.js' if you prefer.
Now you can use npm test to run your test.
Note that you could do the same thing without modifying your package.json by typing mocha test/bootstrap.test.js test/**/*.test.js on your command line.
PS: For a more detailed configuration of bootstrap.test.js, check Alberto Souza's answer or look at this file directly in his GitHub repo.
See my test structure in we.js: https://github.com/wejs/we-example/tree/master/test
You can copy and paste it into your sails.js app and remove the we.js plugin features in bootstrap.js.
Then change your package.json to set the correct mocha command for npm test: https://github.com/wejs/we-example/blob/master/package.json#L10
-- edit --
I created a simple sails.js 0.10.x test example, see in: https://github.com/albertosouza/sails-test-example
Given that they don't give special instructions and that they use Mocha, I'd expect that running mocha from the command line while you are in the parent directory of test would work.
Sails uses mocha as its default testing framework, but Sails does not handle test execution by itself, so you have to run it manually with the mocha command.
There is an article on how to include all the Sails machinery in your tests:
http://sailsjs.org/#/documentation/concepts/Testing
Let's say I have some tests that require jQuery. Well, we don't have to make believe, I actually have the tests. The test themselves are not important, but the fact they depend on jQuery is important.
Disclaimer: this is node.js so you cannot depend on global variables in your solution. Any dependency must be called into the file with require.
On the server we need this API (to mock the window object required by server-side jquery)
// somefile.js
var jsdom = require("jsdom").jsdom;
var window = jsdom().parentWindow();
var $ = require("jquery")(window);
// my tests that depend on $
// ...
On the client we need a slightly different API
// somefile.js
// jsdom is not required obviously
// window is not needed because we don't have to pass it to jquery explicitly
// assume `require` is available
// requiring jquery is different
var $ = require("jquery");
// my tests that depend on $
// ...
This is a huge problem!
The setup for each environment is different, but duplicating each test just to change the setup is completely stupid.
I feel like I'm overlooking something simple.
How can I write a single test file that requires jQuery and run it in multiple environments?
in the terminal via npm test
in the browser
Additional information
This information shouldn't be necessary to solve the fundamental problem here; a general solution is acceptable. However, the tools I'm using might have components that make it easier to solve this.
I'm using mocha for my tests
I'm using webpack
I'm not married to jsdom; if there's something better, let's use it!
I haven't used phantomjs, but if it makes my life easier, let's do it!
Additional thoughts:
Is this jQuery's fault for not adhering to an actual UMD? Why would there be different APIs available based on which env required it?
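One way to reconcile the two setups above is a small helper module that branches on the environment and is required by the tests instead of jquery itself (a sketch, mirroring the jsdom API from the question; the file name jquery-env.js is made up):
// jquery-env.js -- require this from the tests instead of requiring jquery directly
var $;

if (typeof window === 'undefined') {
  // node: build a window with jsdom and hand it to the jquery factory
  var jsdom = require("jsdom").jsdom;
  var win = jsdom().parentWindow();
  $ = require("jquery")(win);
} else {
  // browser (e.g. bundled by webpack): jquery attaches to the real window
  $ = require("jquery");
}

module.exports = $;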
I'm using karma to run my unit tests from the command line directly (CI too, with gulp).
Karma uses phantomjs to run the tests inside a headless browser; you can configure it to run in real browsers too.
Example of karma configuration inside of gulp:
// Run karma tests
var gulp = require("gulp");
var karma = require("karma");
var path = require("path");

gulp.task("unit", function (done) {
  var parseConfig = require("karma/lib/config").parseConfig,
      server = karma.server,
      karmaConfig = path.resolve("karma.conf.js"),
      config = parseConfig(karmaConfig, {
        singleRun: true,
        client: {
          specRegexp: ".spec.js$"
        }
      });

  server.start(config, done);
});
In the case of my tests, it takes approximately 10 seconds to run 750 tests, so it's quite fast.
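For completeness, a minimal karma.conf.js along those lines might look like this (a sketch, assuming the karma-mocha, karma-webpack, and karma-phantomjs-launcher plugins are installed; the spec glob and webpack config path are placeholders):
// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    files: ['test/**/*.spec.js'],
    preprocessors: {
      // bundle each spec with webpack so require() works in the browser
      'test/**/*.spec.js': ['webpack']
    },
    webpack: require('./webpack.config.js'),
    browsers: ['PhantomJS'],
    singleRun: true
  });
};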