Node CLI Unit Testing - javascript

I have a Node module I am working on, and I want to write unit tests for it; however, I am confused about how to pass the arguments required by the CLI to Node through a testing suite.
Let's assume (for brevity) the module name is J, so I would call it like...
$ j --file test.js --file test2.js
How do I recreate those --file calls when I am writing my testing suite?

You can use Node's child_process module to run additional command-line processes; I also recommend checking out the promised wrapper, child-process-promise.
// the promised wrapper exposes the same spawn() signature but returns a promise
var spawn = require('child-process-promise').spawn;

spawn('j', ['--file', 'test.js', '--file', 'test2.js'])
    .progress(function(childProcess){
        // any logic you want to run while the process is still running
    })
    .then(function(result){
        // command was executed successfully
        // write tests here
    })
    .fail(function(err){
        // maybe one last test to make sure there was no error
    });
As far as unit-testing suites go, I expect anything you're comfortable with would work (Mocha/Chai, etc.).
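For example, a minimal Mocha test built on that idea might look like this (a sketch; it assumes the j binary is on the PATH and that test.js and test2.js exist):
var execFile = require('child_process').execFile;
var assert = require('assert');

describe('j CLI', function () {
  it('accepts repeated --file flags', function (done) {
    execFile('j', ['--file', 'test.js', '--file', 'test2.js'], function (err, stdout, stderr) {
      // err is non-null when the process exits with a non-zero code
      assert.ifError(err);
      // add assertions on stdout/stderr here
      done();
    });
  });
});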

Related

How to combine mocha with inquirer.js

I am looking to make my automation tests a bit more flexible. I have a QA team that does not know much JavaScript, and I may have to design the tests for users with little or no programming skills.
I have a few scripts created with the Mocha test framework and Spectron.js (for an app built with Electron.js) that test a few features of the product. I don't want to run every single test every time I run the script. My temporary solution is to bundle the tests into a function as a "suite", like this:
function DiagnosticSuite(location, workstation, workflowName){
  CreateWorkflow(location, workflowName);
  SetWorkFlowToStation(location, workstation, workflowName);
  DiagnosticTestFlow();
  return;
}

function PowerflowSuite(imei, location, workstation, workflowName){
  SetWorkFlowToStation(location, workstation, workflowName);
  powerOffFlow(imei);
  return;
}
I was thinking of using Inquirer and using a conditional based on the input to run one of the suites above, like this:
inquirer.prompt([
  {
    type: 'list',
    name: 'workflow',
    message: 'Which workflow do you want to run?',
    choices: ['Power Off', 'Diagnostic']
  }
]).then((answers) => {
  if (answers.workflow === 'Power Off') {
    PowerflowSuite(imei, location, workstation, workflowName);
  }
});
However, when I test that, Mocha seems not to wait for the user input from Inquirer before running the tests, and I get output like this:
$ npm test
> metistests@1.0.0 test C:\Users\DPerez1\Desktop\metis-automation
> mocha
? Which workflow do you want to run?: (Use arrow keys)
> Power Off
Diagnostic
0 passing (0ms)
It seems like Mocha runs, doesn't see any tests, and finishes; when I select an answer, the program just closes.
I am wondering why Mocha does this and whether it's possible to run my existing Mocha scripts with a library like Inquirer.
I found a solution to my problem, in case anyone stumbles here and is wondering.
I separated the CLI portion and the Mocha portion into different scripts in the package.json file. That way I can use Node's child_process library to run the Mocha part and pass the information from the CLI part via the process.argv object.
So the CLI part asks me which test to run, which environment, and which user, and I build a command to pass to child_process's exec function (there are other functions that would work, but that's not important). When the Mocha tests run, I parse the argv values and pass them into the functions so that the tests run based on them.

Run Jest test suites in groups

I am writing extensive tests for a new API via Jest and supertest. Prior to running the tests, I am setting up a test database and populating it with users:
Test command
jest --forceExit --config src/utils/testing/jest.config.js
File jest.config.js
module.exports = {
  rootDir: process.cwd(),
  // Sets up testing database with users
  globalSetup: './src/utils/testing/jest.setup.js',
  // Ensures connection to database for all test suites
  setupTestFrameworkScriptFile: './src/utils/testing/jest.db.js',
}
So I am starting with a database of some users to test on. The problem is this:
Some of my tests rely upon the success of other tests. In this application, users can upload images, and group them into packs. So my grouping endpoint suite depends upon the success of my image upload suite, and so on.
I am well aware many people might say this is bad practice, and that tests should not rely upon other tests. That being said, I would really rather keep all my tests via supertest, and not get into dependency injection, etc. I don't want to have to meticulously set up testing conditions (for example, creating a bunch of user images artificially before running the tests), because: (1) this is just duplication of logic, and (2) it increases the possibility of something breaking.
Is there any way to group jest suites? For example, to run suites in order:
jest run creationSuite
jest run modificationSuite
This way, all my "creationSuite" tests could be run simultaneously, and success of all would then trigger the "modificationSuite" to run, etc., in a fail-fast manner.
Alternatively, specifying inside a test suite dependencies on other test suites would be great:
describe('Grouping endpoint', () => {
  // Somehow define dependencies
  this.dependsOn(uploadSuite)
  // ...
})
You can use jest-runner-groups to define and run tests in groups. Once it's installed and added to the Jest configuration, you can tag your tests using docblock notation, like here:
/**
 * Foo tests
 *
 * @group group1/subgroup1
 * @group unit/foo
 */
describe( 'Foo class', () => {
  ...
} );

/**
 * Bar tests
 *
 * @group group1/subgroup2
 * @group unit/bar
 */
describe( 'Bar class', () => {
  ...
} );
Update your jest configuration to specify a new runner:
// jest.config.js
module.exports = {
  ...
  runner: "groups"
};
Then, to run a specific group, use the --group argument:
// Using the Jest executable
jest --group=mygroup
// Or npm
npm test -- --group=mygroup
You can also use multiple --group arguments to run multiple groups:
// Will execute tests in the unit/bar and unit/foo groups
npm test -- --group=unit/bar --group=unit/foo
// Will execute tests in the unit group (including unit/bar and unit/foo groups)
npm test -- --group=unit
I have done it with the --testNamePattern flag. Here is the procedure.
Let’s say that you have two groups of tests:
DevTest for testing in development environment
ProdTest for testing in production environment
If you want to test only those features required for the development environment, you have to add DevTest to the test description:
describe('(DevTest): Test in development environment', () => {
// Your test
})
describe('(ProdTest): Test in production environment', () => {
// Your test
})
describe('Test everywhere', () => {
// Your test
})
After that, you can add commands to your package.json file:
"scripts": {
"test": "jest",
"test:prod": "jest --testNamePattern=ProdTest",
"test:dev": "jest --testNamePattern=DevTest",
"test:update": "jest --updateSnapshot"
}
The npm test command will run all of your tests, since it does not use the --testNamePattern flag. If you want to run one of the other sets, just use npm run test:dev, for example.
Be careful when naming your test groups, though: they are matched by a regex against the test description, and you don't want the pattern to match other words.
Jest test suites are executed in parallel worker processes, and this is one of its main benefits. Test runs can complete much faster this way, but the order in which suites run isn't guaranteed by design.
It's possible to disable this behaviour with the runInBand option.
It's possible to pick tests and suites based on their names with the testNamePattern option, or based on their paths with the testPathPattern option.
Since one suite depends on another, they could be combined into a single suite in the order they are expected to run. They can still reside in different files (make sure those files aren't matched by Jest's test pattern, so they don't also run on their own), e.g.:
// foobar.test.js
describe(..., () => {
  require('./foo.partial-test.js');
  require('./bar.partial-test.js');
});
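For illustration, a partial file would just contain ordinary describe/it blocks that end up nested under the wrapper when required (a sketch with made-up names):
// foo.partial-test.js — only pulled in by foobar.test.js, never matched by Jest directly
describe('creationSuite', () => {
  it('uploads an image', async () => {
    // supertest call against the upload endpoint goes here
  });
});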
The problem is this:
Some of my tests rely upon the success of other tests.
This is the real problem here. An approach that relies on the state left behind by a previous test is considered flawed in any kind of automated test.
I don't want to have to meticulously set up testing conditions (for example, creating a bunch of user images artificially before running the tests), because: (1) this is just duplication of logic, and (2) it increases the possibility of something breaking.
There is no need to set up testing conditions (fixtures) artificially. Fixtures can be extracted from the existing environment, even from the results of your current tests if you're sure about their quality.
Redundancy and tautology naturally occur in automated tests; there's nothing wrong with them. Tests can be made DRYer with proper management of fixtures and shared code.
Quite the contrary, errors accumulate: a test that created faulty prerequisites may pass, but another test will fail later, creating a debugging conundrum.
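As a sketch of that idea, shared fixture code can live in a helper that each suite calls from its own setup, rather than one suite depending on the leftovers of another. Everything below is hypothetical: the paths, the /images endpoint, and the helper name.
// testFixtures.js — hypothetical shared helper
const request = require('supertest');
const app = require('../app');

// Creates an uploaded image directly, so the grouping suite
// no longer depends on the upload suite having run first.
async function createTestImage(token) {
  const res = await request(app)
    .post('/images')
    .set('Authorization', `Bearer ${token}`)
    .attach('file', `${__dirname}/fixtures/test.png`);
  return res.body;
}

module.exports = { createTestImage };
A grouping test can then call createTestImage(token) from its own beforeAll and exercise the pack endpoint against a known image, without caring whether the upload suite ran.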

mocha global `before` that executes only 1 time

I'm using mocha for node.js functional testing.
There are several files in my test.
How can I run a piece of code for only one time before all tests start?
For example, I may have to set up a docker container before all tests start.
Is it possible to do this with mocha?
The before hook runs 1 time for every test file. This doesn't meet my needs.
You can have 'root'-level hooks if you put them outside of any describe block, so you can put the relevant code in any file inside the test folder.
before(function() {
  console.log('before any tests started');
});
See the docs: http://mochajs.org/#root-level-hooks
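Tying that back to the Docker example from the question, such a root-level hook could look like this (a sketch; the container name and commands are made up):
// test/globalSetup.js — any file that Mocha loads will do, as long as the hooks sit outside describe()
var execSync = require('child_process').execSync;

before(function () {
  this.timeout(60000); // starting a container can take a while
  execSync('docker start my-test-db');
});

after(function () {
  execSync('docker stop my-test-db');
});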
Mocha 8 introduces the concept of root hook plugins. In your case, the relevant ones are beforeAll and afterAll, which run once before/after all tests, as long as your tests run in serial.
You can write:
exports.mochaHooks = {
  beforeAll(done) {
    // do something before all tests run
    done();
  },
};
And you'll have to add this file using the --require flag.
See docs for more info.
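For example, if the hooks above were saved to test/hooks.js (a path picked just for illustration), the run could be started like this:
// register the root hook plugin, then run the specs
mocha --require test/hooks.js 'test/**/*.spec.js'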
There is a very clean solution for this. Use --file as parameter in your mocha command. It works like a global hook for your tests.
mocha -R spec --file <path-to-setup-file>
A setup file can look like this:
'use strict';
const mongoHelper = require('./mongoHelper.js');
console.log("[INFO]: Starting tests ...");
// connect to database
mongoHelper.connect()
  .then(function(connection){
    console.log("[INFO]: Connected to database ...");
    console.log(connection);
  })
  .catch(function(err){
    console.error("[WARN]: Connection to database failed ...");
    console.log(err);
  });
Unfortunately I have not found a way to use async/await in these setup files, so I think you may have to be content with "old" promise and callback code.

Sails.js: How to actually run tests

I'm completely new to sails, node and js in general so I might be missing something obvious.
I'm using sails 0.10.5 and node 0.10.33.
In the sails.js documentation there's a page about tests http://sailsjs.org/#/documentation/concepts/Testing, but it doesn't tell me how to actually run them.
I've set up the directories according to that documentation, added a test called test/unit/controllers/RoomController.test.js and now I'd like it to run.
There's no 'sails test' command or anything similar. I also didn't find any signs on how to add a task so tests are always run before a 'sails lift'.
UPDATE-2: After struggling a little with how long it takes to run unit tests this way, I decided to create a module to load the models and turn them into globals just as Sails does, but without taking so long. Even when you strip out every hook but the ORM loader, it can easily take a couple of seconds WITHOUT ANY TESTS (depending on the machine), and it gets slower as you add models. So I created a module called waterline-loader so you can load just the basics (it's about 10x faster). The module is not stable and needs tests, but you are welcome to use it, modify it to suit your needs, or help me improve it here -> https://github.com/Zaggen/waterline-loader
UPDATE-1:
I've added the info related to running your tests with Mocha to the docs, under the Running tests section.
Just to expand on what others have said (especially what Alberto Souza said).
You need two steps to make Mocha work with Sails the way you want. First, as stated in the Sails.js docs, you need to lift the server before running your tests. To do that, create a file called bootstrap.test.js (it can be called anything you like) at the root of your test path (test/bootstrap.test.js); Mocha will load it first, and it will then load your test files.
var Sails = require('sails'),
    sails;

before(function(done) {
  Sails.lift({
    // configuration for testing purposes
  }, function(err, server) {
    sails = server;
    if (err) return done(err);
    // here you can load fixtures, etc.
    done(err, sails);
  });
});

after(function(done) {
  // here you can clear fixtures, etc.
  sails.lower(done);
});
Now in your package.json, under the scripts key, add this line (ignore the comments):
// package.json ....
"scripts": {
  // Some config
  "test": "mocha test/bootstrap.test.js test/**/*.test.js"
},
// More config
This will load the bootstrap.test.js file, lift your Sails server, and then run all your tests that follow the 'testname.test.js' format (you can change it to '.spec.js' if you prefer).
Now you can use npm test to run your tests.
Note that you could do the same thing without modifying your package.json by typing mocha test/bootstrap.test.js test/**/*.test.js on your command line.
PS: For a more detailed configuration of bootstrap.test.js, check Alberto Souza's answer or check this file directly in his GitHub repo.
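For reference, an individual controller test can then hit the lifted app with supertest. A minimal sketch, assuming a GET /room route exists, that supertest is installed, and that Sails' default globals are enabled so sails is visible in the test:
// test/unit/controllers/RoomController.test.js
var request = require('supertest');

describe('RoomController', function () {
  describe('GET /room', function () {
    it('should respond with 200', function (done) {
      request(sails.hooks.http.app)
        .get('/room')
        .expect(200, done);
    });
  });
});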
See my test structure in we.js: https://github.com/wejs/we-example/tree/master/test
You can copy and paste it into your Sails.js app and remove the we.js plugin feature in bootstrap.js.
And change your package.json to set the correct mocha command for npm test: https://github.com/wejs/we-example/blob/master/package.json#L10
-- edit --
I created a simple sails.js 0.10.x test example, see in: https://github.com/albertosouza/sails-test-example
Given that they don't give special instructions and that they use Mocha, I'd expect that running mocha from the command line while you are in the parent directory of test would work.
Sails uses Mocha as its default testing framework.
But Sails does not handle test execution by itself.
So you have to run it manually using the mocha command.
But there is an article on how to get all the Sails machinery included in your tests:
http://sailsjs.org/#/documentation/concepts/Testing

How to run JavaScript test script in multiple environments (terminal and browser)?

Let's say I have some tests that require jQuery. Well, we don't have to make believe, I actually have the tests. The tests themselves are not important, but the fact that they depend on jQuery is important.
Disclaimer: this is node.js so you cannot depend on global variables in your solution. Any dependency must be called into the file with require.
On the server we need this API (to mock the window object required by server-side jquery)
// somefile.js
var jsdom = require("jsdom").jsdom;
var window = jsdom().parentWindow();
var $ = require("jquery")(window);
// my tests that depend on $
// ...
On the client we need a slightly different API
// somefile.js
// jsdom is not required obviously
// window is not needed because we don't have to pass it to jquery explicitly
// assume `require` is available
// requiring jquery is different
var $ = require("jquery");
// my tests that depend on $
// ...
This is a huge problem!
The setup for each environment is different, but duplicating each test just to change setup is completely stupid.
I feel like I'm overlooking something simple.
How can I write a single test file that requires jQuery and run it in multiple environments?
in the terminal via npm test
in the browser
Additional information
This information shouldn't be necessary to solve the fundamental problem here; a general solution is acceptable. However, the tools I'm using might have components that make this easier to solve.
I'm using mocha for my tests
I'm using webpack
I'm not married to jsdom; if there's something better, let's use it!
I haven't used PhantomJS, but if it makes my life easier, let's do it!
Additional thoughts:
Is this jQuery's fault for not adhering to an actual UMD? Why would there be different APIs available based on which env required it?
I'm using karma to run my unit tests from the command line directly (CI too, with gulp).
Karma uses PhantomJS to run the tests inside a headless browser; you can configure it to run in real browsers too.
Example of karma configuration inside of gulp:
// Run karma tests
var gulp = require("gulp");
var path = require("path");
var karma = require("karma");

gulp.task("unit", function (done) {
  var parseConfig = require("karma/lib/config").parseConfig,
      server = karma.server,
      karmaConfig = path.resolve("karma.conf.js"),
      config = parseConfig(karmaConfig, {
        singleRun: true,
        client: {
          specRegexp: ".spec.js$"
        }
      });

  server.start(config, done);
});
In the case of my tests, it takes approximately 10 seconds to run 750 tests, so it's quite fast.
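For completeness, a minimal karma.conf.js wired to webpack might look like the sketch below; it assumes karma-mocha, karma-webpack and karma-phantomjs-launcher are installed, and the file globs are placeholders:
// karma.conf.js — minimal sketch
module.exports = function (config) {
  config.set({
    frameworks: ['mocha'],
    files: ['test/**/*.spec.js'],
    preprocessors: {
      // let webpack resolve require('jquery') etc. before the tests reach the browser
      'test/**/*.spec.js': ['webpack']
    },
    webpack: require('./webpack.config.js'),
    browsers: ['PhantomJS'],
    reporters: ['progress'],
    singleRun: true
  });
};
With something like this in place, the same test files that run in Node via npm test can be bundled by webpack and executed in PhantomJS or a real browser.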
